A summary of these points, with a link back to this page, was
published online in Times Higher Education on 2 March 2008, as a
comment on Ian Marshall's article "Hefce's Hobson's choice",
published on 28 February 2008.
HEFCE (the Higher Education Funding Council for England) recently
announced proposals for replacing the RAE (Research Assessment
Exercise) with a new REF (Research Excellence Framework), in the
hope of saving effort and costs, and of improving the accuracy and
objectivity of the research ratings.
The proposals, available at
http://www.hefce.ac.uk/research/assessment/reform/
included much heavier use of metrics, including citation counts.
Many bodies have now produced comments, many of them highly critical
of proposals to make substantial use of metrics based on citation
counts, grant funding etc.
Some of the background and criticisms are summarised in a Times
Higher Education report by Zoe Corbyn, published on 28 February
2008, and in Ian Marshall's comment mentioned above.
Members of the UKCRC (http://www.ukcrc.org.uk/), like those of many
other interested groups, discussed the proposals, and I was one of
many who circulated comments.
My main criticism of the REF proposal was not that the proposed
evaluation would be done in the wrong way, but that its objectives
were misguided: the whole exercise should be replaced as part of a
better "joined-up" policy for funding and managing post-school
education as well as research.
The current leading universities would not favour my proposals
because they would expect to lose out to supposedly lesser
institutions, so there was no hope of getting my proposals forwarded
through any academic body. Consequently I am making them available
online here.
The need for better goals and assumptions
The whole exercise starts from a flawed set of assumptions. The
nation should decide how many properly resourced research+teaching
universities it can afford, on the basis of what it costs to support
such organisations and of the various long-term and short-term
benefits they produce -- cultural and educational as well as
economic. That decision should use an estimate of the number of
school leavers and late developers capable of benefiting from an
intellectually very demanding post-school education, and of the
staff-student ratios suitable for teaching them well while doing
world-class research.
These teaching+research institutions could coexist with separate
higher education teaching institutions (polytechnic universities?)
doing what the best of the old polys used to do, with good 'slipway'
mechanisms for transfer of students between them, and no penalties
for losing students via the slipways.
Both sets of institutions should be given adequate funding to do
their jobs, with the option to compete for additional funding
required for particularly expensive new projects (e.g. large
cross-disciplinary, cross-university research projects). Some departments
in old polys managed to get research grants and did excellent
research. That possibility should remain.
Research evaluation should primarily provide feedback:
Similar points were made in a letter sent in 2004 to my MP, Lynne Jones, about top-up fees, available here.
Regular monitoring/review mechanisms would check whether individual
university departments are spending their research money wisely and
provide advice on how to improve or redirect what they are doing.
A similar mechanism has been used for many years in schools and
is often far more useful to schools, children and teachers than
the current emphasis on numeric targets.
That kind of in-depth, on-site research review, giving detailed
feedback, including constructive criticism, could be done at least
once every three to five years for each department doing research.
It could be done by a specially tailored collection of researchers
from other institutions visiting the reviewed department and could
include researchers from other disciplines, and, where appropriate,
some people from industry.
There might also be a nationally chosen panel of approved reviewers
for each discipline, with members appointed by research councils
and professional bodies.
At least one member of a relevant panel should be on each review
board. The department to be assessed should be able to select half
the reviewers, the university management could appoint some, and the
remainder could be nominated by the appropriate national panel (or
panels) for the discipline (or disciplines) represented in the
department. Variants could easily be devised to meet the
requirements of highly interdisciplinary departments.
Departments could provide reasons for objecting to particular
nominated reviewers. [E.g. because of known animosities, or risks of
'revenge' evaluations, or objections to qualifications.]
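For concreteness, here is a minimal sketch, in Python, of how the
panel-composition rules just described might be encoded. Every name
in it (the function, its parameters, the sample members) is a
hypothetical illustration of the scheme, not part of any actual
proposal or system.

    def assemble_panel(size, dept_choices, univ_appointees,
                       national_pool, objections=()):
        # Compose a review board of `size` members: the department
        # selects up to half, the university management appoints
        # some, and the remainder are drawn from the national
        # subject panel, skipping anyone the department has
        # objected to (with reasons). At least one member must
        # come from the national panel.
        panel = list(dept_choices[:size // 2])       # department's half
        panel += [m for m in univ_appointees if m not in panel]
        eligible = [m for m in national_pool
                    if m not in objections and m not in panel]
        while len(panel) < size and eligible:
            panel.append(eligible.pop(0))            # fill from national panel
        if not any(m in national_pool for m in panel):
            raise ValueError("at least one national-panel member required")
        return panel[:size]

    # Hypothetical example: a six-member board, with one nominee
    # vetoed by the department (e.g. because of a known animosity).
    print(assemble_panel(6,
                         dept_choices=["A", "B", "C"],
                         univ_appointees=["D"],
                         national_pool=["E", "F", "G"],
                         objections={"F"}))

The variants mentioned above for interdisciplinary departments could
be handled by combining several national pools before the final
filling step.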
Between the external reviews there should be at least one 'internal'
research review run by the university's research management.
This process might be costly in time, but probably no more costly
overall than the RAE, and would have far more valuable effects than
reviews whose only effect is to shift money around.
Our department has nearly always found such external research
evaluations (based in part on a day of presentations and
discussions, as well as documentation), well worth the effort.
I suspect others have also, but I have not done research to
check this.
Acting on recommendations of reviewers should not be compulsory, but
full reasons for not following them should be made available both to
the central authorities of the university and perhaps also to the
national subject research panel.
(E.g. a department that has decided to focus heavily on theoretical
research might resist a recommendation to do more applied research,
or vice versa. A department with a high proportion of
interdisciplinary research might resist a recommendation to
develop a stronger 'core' research portfolio. A judgement that
some risky research is unlikely to succeed could be rejected as
premature.)
Sanctions:
Various sanctions could be available if a department receives
repeated poor research evaluations after reviews where
recommendations are not followed. University managements would be
highly motivated to take action to preserve the reputation of the
university.
Financial penalties should be the last resort: intelligent managers
do not try to fix something that's broken by removing the resources
needed to fix it.
Sacking, redeploying, or retiring individuals who are not up to the
job could come first! (Done as humanely as possible.)
Many departments have in the past used mechanisms like giving poor
researchers bigger teaching and admin loads, to help the better
researchers. That option should remain available. It can be done in
ways that make the less good researchers feel their teaching and
admin contributions are highly valued.
This recommendation to abandon the policy of using funding
re-distribution to optimise research quality will not be popular
with very successful departments who like the idea of continuing to
get a larger than average share of the research funding.
It's not clear that that's in the best interests of the nation, and
it can be very unfair to excellent students and excellent young
researchers who go to some of the less well funded departments, even
if equal opportunities legislation does not (yet) identify that
category of unequal treatment.
Perhaps it should, making funding councils liable if their funding
policies are the cause of the inequality!
Of course, research funds from industry, and competitively awarded
government funds for unusually large and expensive projects may
inevitably produce some inequality. That could be a tolerable
situation.
Why using the RAE (or REF) primarily to determine funding is daft
The growing use of simple numerical evaluations and rankings to
determine funding for major national service bodies, including
schools, universities, hospitals, police departments, etc., is daft.
It is like building a huge ocean platform, or a major bridge,
supported on pillars, doing regular inspections of the support
pillars, and then shifting resources from the pillars in greatest
danger of failing to do their job to the pillars that are doing
well.
Of course, apart from the automatic regular funding required to do
its core research job, each department, and appropriate groups of
researchers, should be able to apply to research councils and other
bodies for special funding for large projects requiring expensive
equipment, large amounts of additional manpower, etc.
These would need to be regularly monitored to decide whether the
funding should continue.
If we want to encourage young researchers to be creative, and to
open up new research territory, they need long-term funding that can
continue even while they are studying new possibilities, learning
about other disciplines, developing new kinds of expertise, and
exploring new techniques or new collaborations, even if they are
not regularly spewing out highly rated publications. They should,
however, be able to give good accounts of what they are doing to
visiting research panels, who may also be able to help with
constructive criticism and advice.
My first lectureship post started in 1962. In the first two decades
of my academic career, nobody hassled me about getting publications
or grants. I wrote papers because I had had a good idea and polished
it up, not because someone was nagging me to maximise research
ratings for the department. I also spent a great deal of time
attending seminars in other departments, learning about other
disciplines, and developing new teaching materials and curricula
reflecting what I had learnt about important future developments.
Although I would have had a low research rating by any of the
currently used or proposed criteria, I certainly did not waste my
time, and my cross-disciplinary explorations laid the foundations
for a great deal of productive research in the following decades,
including the development of an internationally known research
and teaching centre.
It is very sad that pressures to publish and get grants make that
kind of career development almost impossible for the majority of
young researchers nowadays. The harm it is doing to research and
teaching is incalculable.
Comments and criticisms welcome
Constructive comments and criticisms will be acknowledged, unless
you ask not to be named.
Maintained by
Aaron Sloman
School of Computer Science
The University of Birmingham