Head of School, Professor Simon Lilley, and Director of Research, Professor Martin Parker, discuss the problems of comparing apples, pears and potatoes in the ranking of business and management research.
We live in a world of rankings nowadays. There are league tables for schools, washing machines and doctor’s surgeries. In a complicated world, it’s not surprising that customers, policy makers and citizens would want to turn to some sort of list which appears to give them the answers that they need. Rankings simplify the world for us and for this we are thankful.
There are, of course, lots of ways of criticising rankings. Head teachers might suggest that comparing a diverse school in an inner city with the leafy suburbs is unfair, just as washing machine manufacturers might debate the importance of spin speeds and temperature levels. But the most basic idea of a ranking is that you are comparing the same things, in the same ways, in a manner which all players may not like but will nevertheless respect, at least in principle.
And this is what makes the ranking of research so odd. The recent Research Excellence Framework (REF), a review of research capacity and results at UK universities, reported just before Christmas. You would have thought that research about research, of all things, would have a robust methodology, but this is very far from the case. In fact, each institution was allowed to exclude all those researchers whom it didn’t believe would gain it a high ranking. This meant that some institutions submitted very high numbers of staff, whilst others submitted very low numbers. Imagine if a school submitted data only about the pupils who got ‘A’s, a hospital chose not to report death rates, or a local council counted all the emails it had sent while ignoring all the bins it didn’t collect. Then you get the basic idea.
In the Business and Management panel, the submission rates varied from 95% of the staff at London Business School and Imperial College, down to less than 10% in many cases, and less than 5% in a few. Absurdly, this means that an institution which deems the vast majority of its staff to be non-credible researchers could, in principle, nevertheless perform remarkably well in a research assessment exercise, as indeed was the case. Cass Business School, for example, which submitted 50% of its staff, finished about twenty places above the Open University, which submitted 87%. The Said Business School at Oxford University excluded 50% of its staff, whilst the Northampton Business School excluded 5%: they finished at opposite ends of the league table. We, for our part, submitted 85% of our staff yet finished below our neighbours, De Montfort University, which submitted less than a third of theirs. Not only is this a crazy way to compare, it is also damaging to those academics who are left out of the REF in order to bump their institutions up the rankings.
This perverse situation has rightly generated a great deal of discussion about the problems with a straight comparison. In other tables, employing alternative methodologies which take into account the percentage of staff submitted, we finished 14th out of over a hundred schools. Over the coming weeks Leicester, like everyone else, will pick the list that flatters it most, and this is hardly the basis for objective comparison. The sheer scope for game-playing only goes to show the extent to which the exercise’s present methodology is deeply flawed. How can any prospective student or funder make meaningful decisions based on league tables, given how the rankings are constructed? We aren’t the first to suggest that such exercises must be replaced, as a matter of urgency, with more logical – or at least less insane – methodologies. Nor will we be the last.
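To make the arithmetic concrete, here is a minimal sketch, using purely hypothetical figures rather than actual REF data, of how such an intensity-weighted comparison works: the headline quality profile is calculated over submitted staff only, while the alternative tables scale it by the proportion of eligible staff actually entered.

```python
# Illustrative figures only: hypothetical institutions, not actual REF 2014 data.
institutions = {
    # name: (eligible staff, submitted staff, grade point average of submitted work)
    "Selective School": (100, 10, 3.6),   # enters only its strongest researchers
    "Inclusive School": (100, 85, 3.0),   # enters nearly everyone
}

for name, (eligible, submitted, gpa) in institutions.items():
    submission_rate = submitted / eligible
    # Headline REF-style figure: quality of the submitted work, excluded staff invisible.
    headline = gpa
    # Intensity-weighted figure: quality scaled by the share of eligible staff entered.
    intensity_weighted = gpa * submission_rate
    print(f"{name}: headline {headline:.2f}, "
          f"intensity-weighted {intensity_weighted:.2f} "
          f"({submission_rate:.0%} submitted)")
```

On the headline figure the selective institution looks much the stronger; once the excluded staff are restored to the denominator, the ordering reverses.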
You could argue that the distorting effect of excluding lower-rated research portfolios from REF submissions is rather greater than the arithmetic effect on the percentage of 3* and 4* work. Institutions which have gamed their submissions in this manner seem also to be those which have boosted their research portfolios by recruiting ‘star performers’ at inflated salaries and with nominal teaching commitments. This has only been made possible by loading the teaching onto more junior staff, whose consequent difficulties in finding time to develop their research have led to their exclusion from the REF. Where this is the case, these exclusions subsidise the research profiles of their departments in a manner which is additional to the reduction in the denominator of the fractions of 3* and 4* work, and which is invisible to the REF.
Particularly in the case of business and management studies, it could be argued that the work of non-teaching research stars should be excluded from the REF altogether. Very little research in this area is consequential in the sense that it directly influences practice (normally the vector of influence is the other way, from practice to research), and the little that does so is already allowed for in principle by measures of ‘impact’. This being the case, it could be maintained that the major influence of management and organisation studies (MOS) research occurs through teaching, probably on MBA programmes. On this argument, a proper measure of ‘research intensity’ would only count research-active staff when they also make a full contribution to teaching programmes.
Given the already-demonstrated willingness of certain university managements to game the system to its limits, one can’t be optimistic that this suggestion will ever be taken up. What Vice-Chancellor would regard an audit of teaching timetables as anything other than an unmitigated insult?
There’s a difference with your analogy though. It was not possible in the REF to submit only the students who got As – because you don’t know who got As.
The current rumour is that only 30% of papers in 4* ABS List journals were ranked at 4* by the REF panel for Business and Management, and that a higher proportion of ABS List 3* papers were eventually ranked at 4*. If that’s roughly true, then had you submitted only 4* researchers using the ABS List as your metric, chances are you’d be worse off!
It is also important to emphasise that the REF is not a ranking exercise. It’s meant to be a measurement to distribute funds. Those funds are affected by the amount of research submitted. So, again, gaming the system could be counterproductive.
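The funding point can be sketched in the same rough-and-ready way. The weights below are purely illustrative and not the actual post-REF funding formula; the assumption is simply that money follows the volume of highly rated work submitted, not just the headline profile.

```python
# Illustrative quality weights, not the actual post-REF funding weights:
# 4* work funded most heavily, 3* a little, 2* and below not at all.
WEIGHTS = {4: 4.0, 3: 1.0, 2: 0.0, 1: 0.0}

def funded_volume(staff_ratings):
    """Sum the quality weight over every submitted member of staff."""
    return sum(WEIGHTS[stars] for stars in staff_ratings)

# A hypothetical department of 50 staff: 10 rated 4*, 25 rated 3*, 15 rated 2*.
everyone = [4] * 10 + [3] * 25 + [2] * 15
stars_only = [4] * 10   # polish the profile by submitting the 4* staff alone

print(funded_volume(everyone))    # 65.0 -> lower headline GPA, larger funded volume
print(funded_volume(stars_only))  # 40.0 -> perfect headline profile, less money
```

Under these made-up weights the inclusive submission attracts more funding despite its weaker-looking profile, which is exactly the sense in which gaming the headline can be counterproductive.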
I’d argue that rather than employ expensive research leaders who won’t/can’t teach and can only offer 4×4* papers, a better strategy would have been to employ lots of innovative, energetic and cheap early career researchers, spread teaching thinly among them and aim for 3* outlets across the board. This would give individuals the time to create an excellent research ‘environment’ and to develop routes to ‘impact’.
The majority of academic papers are tomorrow’s chip paper, whether 4* or any other star. History is the only judge, really. But if we are having to measure, then at least let’s measure the same things. The exercise should be predicated on the assumption that all staff in certain HESA categories are included. Anything else compounds the stupidity of measurement with a stupid measurement.