
Global HE rankings: Why do they continue to be relevant?

universityworldnews.com 2 days ago

Every time global university rankings are released, debates flare up over their inherent value and usefulness (or lack thereof). Global rankings are particularly criticised for their reductionist approach to presenting a picture of what universities do, their role in shaping society and their impact on the economy.

At the same time, global rankings are widely used by institutions and governments for a range of purposes, including student recruitment, staff promotion, benchmarking and monitoring progress against agreed policy objectives.

Rankings season kicks off

This year’s global rankings season started in earnest when QS released its 2025 World University Rankings (WUR) on 4 June, followed by Times Higher Education (THE) releasing the sixth edition of its Impact Rankings on 12 June. Later this month, US News will publish its 2024-2025 Best Global Universities Rankings. The season continues with the release of the Academic Ranking of World Universities in August, THE’s World University Rankings in September and QS’s Sustainability Rankings in December.

Running out of steam?

There is increasing debate in academia about whether global rankings are running out of steam after 20 years and whether they will prevail over the long term.

On the one hand, the appetite for global rankings is waning in some national systems; in others, they continue to be a pathway for institutions looking to undertake national or regional benchmarking.

What’s more, the number of institutions participating in global rankings continues to rise. This year, QS evaluated 5,663 institutions but published results for 1,502 institutions spread across 106 countries. When QS and THE released their first joint global ranking in 2004, it included 200 universities from 29 countries.

Since the inaugural edition of THE Impact Rankings in 2019, when 462 institutions provided data for four or more sustainable development goals (SDGs), the number of participating institutions has increased by 1,501. In 2024, 1,963 institutions were ranked, up from 1,591 in 2023.

Although we may think we have enough global rankings, new ranking schemas keep appearing. Earlier in the year, the CWTS Leiden Ranking launched an open edition of its bibliometric-focused ranking. There is also a ranking of highly ranked scholars, ScholarGPS, and later in the year THE will launch its Interdisciplinary Science Rankings.

A changing global landscape

The world of higher education has changed significantly over the past two decades. Up until 2002, there were more students enrolled in higher education in North America & Western Europe than in any other world region. Since 2003, East Asia & the Pacific has had both the highest volume and the largest share of enrolments globally.

And by 2012, South & West Asia had overtaken North America & Western Europe as the world’s second largest region enrolment-wise. North America & Western Europe’s global share of enrolments has gone from 29.7% in 1999 to 15.6% in 2022, and it is projected to fall to just 13% by 2030 and 10.4% by 2040. Conversely, East Asia & the Pacific had a global share of 34.2% in 2022 and is projected to increase to 38.4% by 2030.

Another lens by which to view the changing landscape relates to how global science is changing across geographies. Over the 2020-2022 period, China produced 2.7 million publications compared to 2.3 million for the United States.

India is third globally and ahead of the United Kingdom, Germany and Italy, followed by Japan, Canada, France and Russia in the top 10 producers of scholarly outputs based on Elsevier’s Scopus database.

While the dominance of the United States and Western European countries in knowledge production is steadily waning, China and India have yet to see a corresponding rise in research impact.

Over the past 20 years, countries such as Egypt, Malaysia, Iran and Saudi Arabia have stood out for experiencing strong growth in publications and a strong uplift in impact.

Other countries, such as Brazil, South Korea, Taiwan and Turkey, have experienced a sustained uplift in publications and impact, although at a lower rate than the four listed above.

Global rankings are beginning to show these trends and also reflect that we have evolved from a bipolar world to a tripolar world. There is a shift away from North America & Western Europe and towards Asia & the Pacific. We are also witnessing the Global North adapting to the rise of the Global South.

A tripolar world

For the first edition of the joint QS and THE WUR in 2004, 78.5% (or 157) of the 200 ranked universities were from North America & Western Europe, 11.5% were from Asia, 8.5% were from Australia and New Zealand and 1.5% were from other world regions.

This year, 64% (or 128) of the world’s top 200 universities in QS WUR are from North America & Western Europe, compared to 16% (or 32) from Asia; 13% are from Australia & New Zealand, and 7% from all other world regions.

Can we therefore expect that the older, well-endowed and research-intensive universities from North America & Western Europe will retain top positions in global rankings? It may take some years before we see a different geographical composition among the universities ranked in the top 10, 20 and 50. However, it is only a matter of time.

The evidence suggests that universities from Asia and other world regions will continue to have increased visibility in global rankings. The proportion of Asian universities ranked by QS in the WUR has increased from 21.8% in 2018 to 26.6% in the 2025 edition, whilst the proportion of universities from North America & Western Europe has decreased from 51.5% to 38.5%.

At the same time, the proportion of universities from all other world regions has increased from 22.0% in 2018 to 31.8% this year. In the case of Australia & New Zealand, the number of participating institutions has remained relatively constant (between 43 and 46 institutions over the eight-year period), but its proportion has fallen from 4.6% to 3.1% of the global number of participating institutions.

The increased visibility of Asian universities is yet to be seen at the top. Over the past two years, the only Asian university ranked in the world’s top 10 is the National University of Singapore.

Four others are ranked in the 11-20 range. It may be a matter of time before we see one of Nanyang Technological University, Peking University, The University of Hong Kong or Tsinghua University among the world’s top 10.

There is also a group of 10 to 13 institutions from China, Hong Kong, Japan, Malaysia, South Korea and Taiwan that have ranked in the world’s top 50 over the past eight years and continue to make annual gains. But the leading universities from Argentina, Brazil, Chile and Mexico are also moving up the rankings ladder.

Last year, QS and THE introduced new methodologies to their respective world university rankings. I would imagine there will be more refinements in years to come. But the continued progress of universities in national systems outside the mature economies remains unabated.

Countries which continue to invest in strengthening their national educational systems, including improving participation and completion rates at tertiary education level and investment in research and research training, will ultimately prevail over others.

The improved performance of these rising universities, and many others, spans several metrics: a higher volume of scholarly outputs, a greater proportion of outputs in top-quartile journals and increased international co-authorship collaboration.

These efforts are contributing to improved scores in the reputation surveys which, to the discomfort of many critics of global rankings, present a picture of what is happening at the school or discipline level within universities.

Over the years, we have seen that, when there is structural change at the discipline, school or college level, staff share their sentiments with peers in other institutions. Often enough, this sentiment influences how people assess institutions when reputation surveys are conducted. QS conducts its academic and employer reputation surveys annually.

Early in the year, QS sends out emails inviting academics and employers to respond to these surveys. The responses are analysed and inform the results published in early June.

From the QS reputation survey results, one can see that perceptions of the quality of national educational systems are changing. Many countries that were previously importers of educational services are now home to thousands of international students, are rapidly developing strong quality assurance mechanisms and are therefore gaining in reputation.

One of the reasons universities elect to participate in regional and specialised rankings is the ability to benchmark against peers, but also to showcase progress against the Sustainable Development Goals, their institutional mission and government policy objectives.

By participating in the THE Impact Rankings or the QS Sustainability Rankings, universities gain a tool to develop roadmaps for improvement and evidence-based pathways for impact for government, civil society and the market.

Behind the overall ranking

Most people only see the published results in summarised form, devoid of context or the finer detail behind every indicator score and rank. Every year, QS provides participating institutions with a fact file that contains information on their performance on every metric and the underlying data used for the ranking’s calculations.

This is an invaluable resource because it equips analysts with insights into what drove their institution’s ranking up or down. This information arrives at least a week ahead of the public release.

There are also additional tools (such as QS reputation datasets and Elsevier’s SciVal) that allow users to derive further insights to determine which factors are influencing performance and to give them an idea of their progress against their institutional mission.

Also, on a subscription basis, institutions which participate in THE Impact Rankings can access the data and evidence submitted by other institutions.

Through this mechanism, there is a level of transparency to this ranking. In the absence of centralised data collection and timely reporting of institutional-level performance in many national systems, the major ranking schemas have become a source of institutional benchmarking.

Many national systems lack the resources and infrastructure to have centralised data collection systems to enable institutions to undertake comparisons or benchmark against peers. Here lies a critical point of failure for the global higher education ecosystem. If this failure is not addressed, we are likely to see increased reliance on a fee-for-service provision of data to support decision-making across institutions.
