Updated: Jul 24, 2021
As anyone who has attended my trainings will know, I’m not the world’s biggest fan of the Research Excellence Framework (REF2021). I see it as part of my job to undermine the all-pervasive dominance of REF in UK academic life, by encouraging researchers to stop chasing their 4* impact case study and remember why they originally fell in love with research. Most people didn’t fall in love with impact – they fell in love with curiosity, and that’s what drives them to this day. My message: do impact if you want to, and do it for your own reasons, not because REF, your funders or your University want you to. If you’re motivated by curiosity, ask whether engaging with people outside the academy might enable you to ask more interesting questions, and pursue the impacts that fascinate you. Be proud of your identity as a researcher and your motives, and don’t let anyone tell you that curiosity-driven research is self-indulgent, or that their motivation to change the world is somehow “purer” than yours.
It therefore feels very strange to be defending REF in response to a comprehensive and well-researched full-frontal attack on the assessment by Richard Watermeyer. His new book, published this month by Edward Elgar, is a root and branch critique of REF impact that will resonate strongly with many. His argument is that “the confluence of market principles with public needs” forces academics to compete with each other for high-scoring impacts, “in a perpetual game of one-upmanship and struggle to prove who’s best or who can claim a greater portion of ‘excellence’”. This, he argues, “stimulates performance-based anxieties that are corruptive to academics’ self-concept as active contributors to the public sphere”, resulting in “inflated or otherwise counterfeit rationalisations of the public value of academic research”. The effect is a “dehumanising… deprofessionalisation of the academic profession”.
The problem, Richard argues, lies with the capitalist system that our Universities and Government are embedded within, which we are then co-opted into without our knowledge or consent:
“Too much internalised, too far assimilated and homogenized, academics have become estranged from a duty to dissent and to wilfully contest and/or conflict with authority, which has culminated in the obfuscation and unchecked distension of unequal and unjust regimes and, moreover, the kinds of capitalistic rapacity now endemic within Universities.”
In an article last year with Jenn Chubb in British Politics, I too argued that it is this politicisation of impact that lies at the root of most of the negative unintended consequences arising from the assessment of impact.
Two alternative responses to a dystopian future driven by the impact agenda
What has become clear to me from reading Richard’s new book is that there are two alternative responses to this problem. First, as Richard cogently argues, we can further politicise the issue: understand the neoliberal political roots of the impact agenda, raise awareness of those roots, and raise up an army of empowered academics who see it as their duty to resist the impact agenda.
Alternatively, however, we can understand these roots, accept that they are problematic (for some, not all, depending on our values and beliefs), and find our own reasons for generating impact on our own terms. In so doing, we transplant our impact into our own soil and enable it to produce very different fruits. Rather than accepting that impact is a process of “academics… leveraging positional goods into their universities… through a commercial sensibility”, I can leverage justice and give voice to disadvantaged communities by undermining the corporate interests that dispossessed them. I can challenge dehumanising policies and regimes, and give strength to those who challenge them through my research. I can choose the political roots of my own impact, rather than having to accept those given to me by REF or my University.
Few of the researchers I enable to achieve impact from their work would view themselves as “entrepreneurs who view the future as an investment opportunity and research as a strategic career move rather than a civic and collective effort to improve the public good”. I work across the natural and physical sciences, social sciences, arts and humanities with people whose impact is to inform policies that can protect the environment, often at the expense of big business, or whose impact is to awaken parents to the danger of sugar, protecting public health at the expense of corporate profits. While for-profit spin-out companies may be prevalent in some disciplines, it is a small minority of researchers who have the opportunity to become entrepreneurs, and an even smaller minority who want to.
This is a book that seeks to understand the political roots of the impact agenda, and the consequences of this political dimension. It achieves both of these goals very successfully, eloquently diagnosing both the problem and its cause. The impact agenda has led to unintended negative consequences that other countries embarking on the design of their own assessments of impact should pay heed to. Gemma Derrick calls this problem “grimpact”, starting with the research that linked the MMR vaccine to autism, but pointing out how REF increases the likelihood of grimpacts arising from a wider range of incentivised behaviours (not just research misconduct). Richard extends the critique based on extensive qualitative data from interviews with academics and members of the policy community.
Collecting testimonials for REF impact case studies is undermining trust in the academy
As we reach the 2021 census date and evidence is required to support impact claims, there are growing indications that the evidence-capture process is compromising relationships and trust between researchers, publics and stakeholders. One researcher interviewed for the book described discovering that their University had contacted their stakeholders to ask for testimonials to be re-written to its specification: “you can imagine how that went down with the organisations we were working with” (p56). Others felt embarrassed for other reasons (p57):
“I felt ashamed that the research-users may have thought that the interactions I had with them based on my research had a hidden egoistic objective to satisfy university business.”
Other interviewees explained the pressure they were under to collect evidence, and how this led to compromising situations:
“I was pushed at times to gather evidence in premature and improperly direct ways, which affected some of my relations adversely. I learned from this and now refuse to adversely affect and instrumentalise users.” (p59)
“Last time it was mainly people who were long-term engaged with impact – now it seems like all ambitious careerist types are trying to get in on the act (given it’s now even more important). I’m hearing some very negative stories from the policy/NGO community about the behaviour of some academics. This risks the relations that people like me have built up over decades, so while I agree with the impact weighting, I think this increases risks of having perverse consequences.” (p63)
Informally, over the last year I have heard an increasing number of stories from people I train about relationships inadvertently compromised by innocent emails asking for testimonials that backfired. In addition to the issues raised by Richard’s interviewees, there is a very real issue of contested claims, given that most impacts are co-generated with those we want to help, or with other organisations trying to help the same group. Publicly claiming that a policy is based on a single study, as a result of close engagement with civil servants, is highly unlikely to be an accurate representation of the policy-making process – and if it were, then the evidence analyst who was meant to synthesise across the evidence base may now be in trouble. If an NGO is funded to deliver an outcome and reaches out to a researcher for help, it does not expect the researcher to then claim credit for the ultimate impact, undermining the NGO’s attempt to claim that impact to its own funders. I heard of one case where just such a double-counting claim only came to light after the REF2014 case studies were published.
I have written about these and other issues on the Fast Track Impact blog, arguing that REF creates a conflict of interest for researchers submitting impact case studies. In my recent paper in British Politics with Jenn Chubb, who co-authored some of the work that underpins Richard’s book, we showed that many researchers fear the impact agenda is changing the questions researchers ask and compromising research quality.
But it isn’t all doom and gloom. As the recent Real-Time REF Review showed, researchers generally approve of the REF2021 reforms, even if 57.4% still hold negative attitudes towards the assessment. However, 29.1% perceive REF positively, and Richard offers some explanation for these views too. Applied researchers in particular, who had previously felt undervalued by their peers and institutions, are now valued more than ever before for their capacity to generate impact, correcting a bias against applied research and legitimising those whose research falls into this category. This left a bitter taste in the mouth for some, because they were onl