European university rankings
On June 15, 2011, the European University Association (EUA) made public the results of the report ‘Global University rankings and their impact’. The report, led by Professor Andrejs Rauhvargers, provides a comparative analysis of the methodologies used in the most popular rankings. The presentation of the results was followed by a panel discussion with university leaders and higher education experts about the impact of rankings on universities. The report does not set out to rank the rankings themselves but to analyze their methodologies and to point out current efforts to develop alternative ways of measuring university quality and performance in all their dimensions and complexity.
The authors of the report recognize that rankings are here to stay, given their high level of acceptance by various stakeholders. The report acknowledges the positive aspects of rankings for universities: they draw the attention of governments to higher education and research; they improve accountability and management methods; and they demonstrate the importance of collecting reliable data. Regarding the robustness of output data, both Web of Science and Scopus were cited as reliable databases as far as the sciences and medicine are concerned.
Main findings and criticisms
In comparing the various methodologies, the report details what is actually measured, how the scores for individual indicators are obtained, and how the final scores are calculated, and therefore what the results actually mean.
The first criticism of university rankings is that they principally measure research activity rather than teaching. Moreover, the ‘unintended consequences’ of rankings are clear: more and more institutions are modifying their strategies to improve their position in the rankings instead of focusing on their core missions.
For some ranking systems, lack of transparency is a major concern; the QS World University Rankings in particular were criticized for not being sufficiently transparent.
The report also highlights the subjectivity in the proxies chosen and in the weight attached to each, which leads to composite scores that reflect the ranking provider’s own concept of quality: a given indicator may be made to count for 25% or 50% of the overall score, and that choice reflects a subjective judgment of what matters for a high-quality institution. In addition, indicator scores are relative rather than absolute measures, which complicates their interpretation. If the indicator is the number of students per faculty, what does a score of, say, 23 mean? That there are 23 students per faculty member, or that this institution has 23% of the students per faculty of the institution with the highest ratio? Moreover, the choice between simple counts and relative values is not neutral: the Academic Ranking of World Universities, for example, does not take the size of institutions into account.
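The effect of these normalization and weighting choices can be made concrete with a small sketch. Everything below is illustrative: the institutions, indicator values, and weights are invented, and the normalization used (scaling so the best performer gets 100) is one common convention, not the method of any particular ranking.

```python
def normalize(values):
    """Scale raw indicator values so the best-scoring institution gets 100."""
    best = max(values.values())
    return {name: 100 * v / best for name, v in values.items()}

# Hypothetical raw indicator values for two fictional institutions.
citations = {"A": 5000, "B": 3000}     # citation counts (higher is better)
staff_ratio = {"A": 25, "B": 10}       # students per faculty (lower is better)

# Invert students-per-faculty so a lower ratio yields a higher score;
# after normalization the raw value 25 becomes a relative score of 40,
# illustrating how a relative score differs from the raw indicator.
cit_score = normalize(citations)
ratio_score = normalize({k: 1 / v for k, v in staff_ratio.items()})

def composite(w_citations, w_ratio):
    """Weighted sum of the normalized indicator scores."""
    return {k: w_citations * cit_score[k] + w_ratio * ratio_score[k]
            for k in cit_score}

# Two equally defensible weighting schemes reverse the final ordering.
scheme_1 = composite(0.75, 0.25)   # research-heavy weighting
scheme_2 = composite(0.50, 0.50)   # balanced weighting
print(sorted(scheme_1, key=scheme_1.get, reverse=True))  # ['A', 'B']
print(sorted(scheme_2, key=scheme_2.get, reverse=True))  # ['B', 'A']
```

The point is not the particular numbers but that the ordering depends entirely on a weighting decision the ranking provider makes, which is exactly the subjectivity the report criticizes.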