Ranking education at business schools

By Emma Simmons

We explore rankings and the issues around trying to evaluate the case classroom.

The rankings landscape

More than a quarter of a century after business school rankings first appeared, the annual, or biennial, findings of Bloomberg Businessweek, The Economist, The Financial Times (FT), Forbes, Princeton Review, QS, US News and others have become essential reading. Business school applicants and their parents keep an interested eye on the latest results, together with alumni and employers whose opinions have often been surveyed for the results. Governments and organisations worldwide take note. Inevitably, schools’ faculty and staff monitor the relative ranking of their institution and its programmes, while colleagues are appointed specifically to coordinate and disseminate the multitude of data they are asked to supply. Rising up the rankings becomes a major strategic goal for business school leaders. Rankings are both eagerly and anxiously anticipated, and the ‘winners and losers’ make headlines.

Adding complexity to the picture, more recently emerging global university rankings produce a separate rank for the university to which a business school belongs. And with schools often achieving a different rank in every publication, ‘super rankings’ by media outlets such as Poets and Quants and Top Management Degrees combine and condense them to ostensibly identify the ‘overall top’ business schools, frequently with very surprising results.

What do rankings evaluate?

So, is it possible to measure the ultimate ‘best’ business school? Is it the student experience you will have or the added boost your career will get? Are the campus facilities or city nightlife the most important thing, or is it the participants you will study with and the lifelong friends you will make? Is it more important to be under the same roof as the world’s leading researchers, or is it the teaching that matters? To what extent can these things be measured? Every ranking unearths something different and each has evolved its own methodology. Understanding the remit of each ranking is key to their interpretation.

Introducing its 2014 methodology, The Economist stated: “Rankings are little more than an indication of the MBA market at a particular moment. They reflect the prevailing conditions such as salaries, jobs available and the situation at a school at the time the survey was carried out. Results of rankings can be volatile, so they should be treated with caution… None are definitive, so our advice to prospective students is to understand the ethos behind each one before deciding whether what it believes is important is also what is important for you.” According to Ben Sowter of the QS Intelligence Unit, the team behind the many QS rankings, “Far from being a high-resolution photograph, rankings are more akin to an impressionist painting, where different interpretations by different artists will emphasise different aesthetics of the subject, whilst none will reveal the whole.”

Schools’ view and teaching

Yet, because of the visibility of rankings, many business schools struggle with the feeling that they fail to accurately reflect their strengths, or that they portray them more in comparison with competitors than in their own light. It is sometimes hard for schools not to shift their strategic focus onto things that will have a positive rankings impact. According to Maria Gabriela Espeche Gil of IAE Business School, Universidad Austral: “Schools can face the risk of adapting their programmes to fit the rankings, thus losing their identity. Rankings should be used to improve products, but without putting a school’s identity at stake.” At the Darden School of Business, University of Virginia, Senior Associate Dean for Degree Programs, Peter Rodriguez also sees dangers: “It can be challenging for schools to uphold their institutional strategy, when rankings may seem to be compelling them to do something else.”

In particular, schools that put a strong emphasis on the learning experience often feel that pedagogy, teaching innovation and investment in the classroom are under-evaluated and under-represented in rankings. Schools for which the case method is a significant and differentiating aspect of their offering can find this a particular source of frustration. Knowing what they can expect in the classroom is certainly relevant to business school applicants, the prime target for rankings. Exploring this, QS recently published an inaugural report examining how students actually use rankings. The findings highlighted the dilemmas rankings face around adequately including teaching quality: the most cited indicator of interest for students was found to be teaching quality at 58%, followed by employer reputation, cited by 50%.

Challenges of measuring teaching

In fact, rankings organisations are well aware of the importance of adequately including teaching in their research, but equally of the problems of finding a viable methodology to assess its quality. Measuring, evaluating and weighting case development and use is particularly challenging. Della Bradshaw recalls that the very first FT Business School Ranking included a measurement of case output. “Though many schools produce some cases, there are certain institutions which produce disproportionately higher volumes, and this skewed the results and consequent balance between schools,” she reflects. It was not repeated. Like other ranking organisations, the FT seeks to extrapolate teaching quality through additional metrics, including graduate employability: “One theory is that if participants have been well taught, they will get good jobs afterwards,” says Della Bradshaw. But here, too, problems can lurk, such as finding a way to allow for lower employment rates in recession-hit economies.

The most common approach to measuring teaching quality is to ask those who have experienced it what they think, and a core component of many rankings is the student (satisfaction) survey (accounting for 45% of the Bloomberg Businessweek ranking total score, for example) and/or alumni survey (alumni responses have an impact on 59% of the FT’s Global MBA Ranking). Periodically, experiences of case teaching have been covered in such surveys. Though commonly employed, such questionnaires have drawbacks. Culture can affect response levels and, while some nationalities typically focus on the positive when evaluating teaching, others will be more inclined to highlight the negative. A contrasting point of view is that students and alumni will always ‘talk up’ their own school, presumably because they would like their alma mater to be highly ranked and want to help keep it there. Others see a self-fulfilling cycle in which a good rankings performance automatically boosts future student satisfaction ratings.
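Mechanically, the survey weightings mentioned above feed into a composite score: each ranking combines its component metrics as a weighted sum. A minimal sketch of that arithmetic follows; the 45% student-survey weight echoes the Bloomberg Businessweek figure quoted in the text, while the other components, weights and scores are purely hypothetical.

```python
# Illustrative sketch of a ranking composite score: a weighted sum of
# normalised component scores (each on a 0-100 scale). The 45% student
# survey weight follows the Bloomberg figure in the article; everything
# else here is hypothetical.

def composite_score(components: dict[str, float], weights: dict[str, float]) -> float:
    """Return the weighted sum of component scores."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[name] * score for name, score in components.items())

weights = {
    "student_survey": 0.45,   # from the article's Bloomberg example
    "employer_survey": 0.35,  # hypothetical
    "salary_outcomes": 0.20,  # hypothetical
}
school = {"student_survey": 82.0, "employer_survey": 74.0, "salary_outcomes": 68.0}

print(round(composite_score(school, weights), 2))  # 0.45*82 + 0.35*74 + 0.20*68 = 76.4
```

The sketch also shows why weightings matter so much: moving even a few percentage points between the student survey and employer components can reorder schools whose component scores are close.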

At INSEAD, Assistant Dean Graham Hastie reflects on the surveying of students and alumni: “Often the questions are asked in an absolute sense, e.g. ‘How good was your experience of the teaching?’ But the findings may then be used in a relative way, e.g. ‘Participants at school x say the teaching is better than at school y.’ So, the real question rankings seem to be asking is, ‘Compared to teaching at other schools, how good do you think the teaching was at your school?’ Because people generally only go to one school, they would not be able to answer this question,” he suggests.

But, even with surveys, the picture is not consistent. Peter Rodriguez comments: “Today, fewer questions than ever are asked about teaching quality in the MBA student surveys sent out by many rankings organisations. Whereas previously some went into depth, distinguishing between core and elective programmes, for example, the trend is now in the other direction, towards fewer and more general questions.” By contrast, Executive Education rankings do, as a general rule, incorporate more direct questioning about the classroom experience. Some see more questionnaire space being given to a multitude of issues such as diversity and sustainability, which, while clearly important and of topical interest to applicants, are probably generators of more attractive headlines than teaching would be.

Maria Gabriela Espeche Gil feels that most rankings do reflect “the basics of a business school, but there are many intangibles that are not easily measured, and are thus left out.” For example, “many questionnaires seek to identify faculty quality, but not teaching quality, and where cases are incorporated, the metric will be case quantity, without the opportunity to reflect their quality or innovations in teaching materials.”

The challenge is that quality in teaching and classroom materials is incredibly hard to measure, while rankings are necessarily based on metrics. The onus is always on those who put together rankings to demonstrate the credibility of their investigative and analytic processes and to be rigorous in generating and applying robust data. For example, the fact that research excellence is more frequently evaluated than teaching quality must be partly explained by the reality that peer-reviewed publications can be quantified and citations counted. Nevertheless, perhaps research and teaching are not as unrelated as often portrayed. “Good teaching needs to be informed by good research,” comments Della Bradshaw. There are no simple solutions, it would seem, and measurements made without sufficient rigour will not withstand the critical scrutiny a ranking attracts. Pragmatically, Ben Sowter observes: “In the end, all models are wrong – but some do remain useful.”

Seeking new metrics

In fact, the problem of identifying robust ways to measure educational performance is now recognised at the highest and widest international level. At the OECD, the Feasibility Study for the Assessment of Higher Education Learning Outcomes (AHELO) has been working for some years to distil universally useful ways of measuring and comparing student outcomes – the products of teaching – worldwide. They outline the challenge: “More than a ranking, the AHELO assessment aims to be a direct evaluation of student performance at the global level and valid across diverse cultures, languages and different types of institutions… designed to provide higher education institutions with feedback on the learning outcomes of their students and which they can use to foster improvement in student learning outcomes” – in other words, how to improve teaching.

But the research for this article reveals that ranking organisations, in spite of receiving regular criticism for a lack of consistency in methodology over time, remain open both to exploring ideas that could facilitate a more accurate measurement of teaching quality and innovation, and to creating methodologically robust ways in which particular teaching approaches, such as case activity, could be included. The challenge is to find these new ways to enrich their data and, if schools can help identify them, so much the better.

Rather than focusing primarily on the student or alumni perspective, could there perhaps be merit in considering whether faculty themselves would have useful and measurable things to say about teaching? Their scientific mindset has the potential to elicit factually based, straight-talking responses. All business schools have to balance the encouragement of research output to bolster their academic profiles with pursuing excellence in teaching and the development of innovative pedagogical materials and approaches. The pendulum swings between these two with variable force in different schools, and faculty know which way it moves and where. Faculty also often have experience of several institutions, plus contacts with colleagues elsewhere. Peer assessment is used in rankings, including US News, but rarely in relation to teaching.

Some suggest carrying out a separate teaching survey with multiple criteria reflecting its richness and diversity, and employing various evaluation methods. This approach would enable ‘soft’ indicators such as classroom innovation and the output of original materials like cases to be included. External organisations such as The Case Centre could conceivably provide additional information, applied in an algorithmic data framework, about the uptake of cases by third-party schools, for example, to indicate where schools are having a classroom impact far beyond their own walls.
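To make the idea concrete, a case-uptake indicator of the kind mooted above might credit a school only for adoptions of its cases by other institutions, dampened so that a handful of blockbuster cases does not dominate. The following is a hypothetical sketch, not any ranking's actual methodology; the function name, data shape and log dampening are all assumptions.

```python
# Hypothetical 'case uptake' indicator: count third-party adoptions of a
# school's cases, excluding its own internal use, and apply log1p so a
# few very popular cases do not swamp the metric. Purely illustrative.
import math

def case_uptake_score(adoptions_by_school: dict[str, int], home_school: str) -> float:
    """Log-dampened count of adoptions of home_school's cases elsewhere."""
    external = sum(n for school, n in adoptions_by_school.items()
                   if school != home_school)
    return math.log1p(external)

# Example: 'School A' wrote the cases; its own 120 internal uses are
# ignored, only the 40 + 10 external adoptions count.
adoptions = {"School A": 120, "School B": 40, "School C": 10}
print(case_uptake_score(adoptions, "School A"))  # log1p(50)
```

A score like this could then be normalised and fed into a weighted composite alongside survey-based metrics; the open question raised in the text, how to weight it against everything else, remains.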

Is there also potentially scope to ask employers, in the surveys they already complete, about the teaching methods used at the schools from which they recruit? Do these have an impact on where they recruit, and do particular teaching methods produce more useful recruits? Many employers use management case studies in their recruitment process to showcase the skills staff need, such as problem solving, flexibility and teamworking. Does it therefore follow, or not, that a graduate conversant in solving cases would be a preferred recruitment target?

Graham Hastie is cautious about rankings focusing too closely on specific teaching methods: “How a course is taught is of course an important piece of information for a prospective student, but rating one method more highly than another would be unwise unless there was empirical evidence of superior learning outcomes. The information about pedagogy is relevant,” he suggests, “but rather than rating it, why not let intelligent readers make their own judgement?”

Showcasing teaching

So, what can schools do to communicate a clearer picture of their pedagogical profile within a rankings framework? Our analysis suggests that there is currently limited scope to highlight teaching, although the metrics used by rankings compilers are constantly evolving, so it is certainly important that those responsible at schools are fully conversant with what is required from year to year. As an example, The Economist’s full-time MBA ranking has three criteria that relate to teaching: personal development and educational experience, faculty quality, and education experience. Fully integrating case activity into rankings remains some way off. Meanwhile, some rankings organisations are initiating separate awards specifically recognising teaching innovation, such as QS’ ‘Reimagine Education’.

But rankings are only one source of information, and perhaps not the best place even to try to showcase teaching excellence or a great case experience. Many schools already give applicants a taste of what might await them in the classroom and a chance to ‘try before you buy’. According to Graham Hastie, INSEAD runs popular applicant Master Classes around the world, and classes also take place monthly on campus. Like many other schools, INSEAD offers freely available sessions online, including case and taster classes.

Maria Gabriela Espeche Gil reflects: “Rankings certainly help international students hear about our school, but the essence of our admission process is based on personal contact. Our regular ‘IAE breakfasts’ are held both locally and across Latin America. They include a class to enable prospective students to sample the ‘IAE Experience’, and to have a first ‘taste’ of our constantly evolving case pedagogy.”

The concluding remarks go to Darden, where cases are the cornerstone of the school’s pedagogy: “If we are able to have direct contact with applicants at an early stage and offer them a trial case class, they invariably respond to its unpredictability, energy and fun,” says Peter Rodriguez. “Future students are smart; of course they consult rankings, but they also know to listen closely to their friends and alumni, and especially to their own, first hand, impressions when they seek the right business school for them.”

The Case Centre would like to thank all those who agreed to participate in this article.

Join the debate

Should cases feature in business school rankings? How can the quality of teaching be effectively measured? Have your say!
