New book calls for civil disobedience to fight “dehumanising” impact agenda

As anyone who has attended my trainings will know, I’m not the world’s biggest fan of the Research Excellence Framework (REF2021). I see it as part of my job to undermine the all-pervasive dominance of REF in UK academic life, by encouraging researchers to stop chasing their 4* impact case study and remember why they originally fell in love with research. Most people didn’t fall in love with impact – they fell in love with curiosity, and that’s what drives them to this day. My message: do impact if you want to, and do it for your own reasons, not because REF, your funders or your University want you to. If you’re motivated by curiosity, ask whether engaging with people outside the academy might enable you to ask more interesting questions, and pursue the impacts that fascinate you. Be proud of your identity as a researcher and of your motives, and don’t let anyone tell you curiosity-driven research is self-indulgent, or that their motivation to change the world is somehow “purer” than yours.


It therefore feels very strange to be defending REF in response to a comprehensive and well-researched full-frontal attack on the assessment by Richard Watermeyer. His new book, published this month by Edward Elgar, is a root and branch critique of REF impact that will resonate strongly with many. His argument is that “the confluence of market principles with public needs” forces academics to compete with each other for high-scoring impacts, “in a perpetual game of one-upmanship and struggle to prove who's best or who can claim a greater portion of ‘excellence’”. This, he argues, “stimulates performance-based anxieties that are corruptive to academics' self-concept as active contributors to the public sphere”, resulting in “inflated or otherwise counterfeit rationalisations of the public value of academic research”. The effect is a “dehumanising… deprofessionalisation of the academic profession”.


The problem, Richard argues, lies with the capitalist system that our Universities and Government are embedded within, which we are then co-opted into without our knowledge or consent:


“Too much internalised, too far assimilated and homogenized, academics have become estranged from a duty to dissent and to wilfully contest and/or conflict with authority, which has culminated in the obfuscation and unchecked distension of unequal and unjust regimes and, moreover, the kinds of capitalistic rapacity now endemic within Universities.”


In an article last year with Jenn Chubb in British Politics, I too argued that it is this politicisation of impact that lies at the root of most of the negative unintended consequences arising from the assessment of impact.



Two alternative responses to a dystopian future driven by the impact agenda


What has become clear to me from reading Richard’s new book is that there are two alternative responses to this problem. First, as Richard cogently argues, we can further politicise the issue: understand the neoliberal political roots of the impact agenda, raise awareness of those roots, and raise an army of empowered academics who see it as their duty to resist the impact agenda.


Alternatively, we can understand these roots, accept that they are problematic (for some, not all, depending on our values and beliefs), and find our own reasons for generating impact on our own terms. In so doing, we transplant our impact into our own soil and enable it to produce very different fruits. Rather than accepting that impact is a process of “academics… leveraging positional goods into their universities… through a commercial sensibility”, I can leverage justice and give voice to disadvantaged communities by undermining the corporate interests that dispossessed them. I can challenge dehumanising policies and regimes, and give strength to those who challenge them through my research. I can choose the political roots of my own impact, rather than having to accept those given to me by REF or my University.


Few of the researchers I enable to achieve impact from their work would view themselves as “entrepreneurs who view the future as an investment opportunity and research as a strategic career move rather than a civic and collective effort to improve the public good”. I work across the natural and physical sciences, social sciences, arts and humanities with people whose impact is to inform policies that can protect the environment, often at the expense of big business, or whose impact is to awaken parents to the danger of sugar, protecting public health at the expense of corporate profits. While for-profit spin-out companies may be prevalent in some disciplines, it is a small minority of researchers who have the opportunity to become entrepreneurs, and an even smaller minority who want to.


This is a book that seeks to understand the political roots of the impact agenda, and the consequences of this political dimension. It achieves both of these goals very successfully, eloquently diagnosing both the problem and its cause. The impact agenda has led to unintended negative consequences that other countries embarking on the design of their own assessments of impact should pay heed to. Gemma Derrick calls this problem “grimpact”, starting with the research that linked the MMR vaccine to autism, but pointing out how REF increases the likelihood of grimpacts arising from a wider range of incentivised behaviours (not just research misconduct). Richard extends the critique based on extensive qualitative data from interviews with academics and members of the policy community.



Collecting testimonials for REF impact case studies is undermining trust in the academy


As we reach the 2021 census date and evidence is required to support impact claims, there is growing evidence that the evidence-capture process is compromising relationships and trust between researchers, publics and stakeholders. One researcher interviewed for the book described how they discovered that their University had contacted their stakeholders to ask for testimonials to be re-written to their specification: “you can imagine how that went down with the organisations we were working with” (p56). Others felt embarrassed for other reasons (p57):


“I felt ashamed that the research-users may have thought that the interactions I had with them based on my research had a hidden egoistic objective to satisfy university business.”


Other interviewees explained the pressure they were under to collect evidence, and how this led to compromising situations:


“I was pushed at times to gather evidence in premature and improperly direct ways, which affected some of my relations adversely. I learned from this and now refuse to adversely affect and instrumentalise users.” (p59)


“Last time it was mainly people who were long-term engaged with impact – now it seems like all ambitious careerist types are trying to get in on the act (given it’s now even more important). I’m hearing some very negative stories from the policy/NGO community about the behaviour of some academics. This risks the relations that people like me have built up over decades, so while I agree with the impact weighting, I think this increases risks of having perverse consequences.” (p63)


Informally, over the last year I have heard a growing number of stories from people I train about relationships that were inadvertently compromised by innocent emails asking for testimonials. In addition to the issues raised by Richard’s interviewees, there is a very real issue of contested claims, given that most impacts are co-generated with those we want to help, or with other organisations trying to help the same group as us. Publicly claiming that a policy is based on a single study, as a result of close engagement with civil servants, is highly unlikely to be an accurate representation of the policy-making process, and if it were, then the evidence analyst who was meant to synthesise across the evidence base may now be in trouble. If an NGO is funded to deliver an outcome and reaches out to a researcher for help, it does not expect the researcher to then claim credit for the ultimate impact, undermining its attempt to claim that impact to its own funders. I heard of one case where just such a double-counting claim only came to light after the REF2014 case studies were published.


I have written about these and other issues on the Fast Track Impact blog, arguing that REF creates a conflict of interest for researchers submitting impact case studies. In my recent paper in British Politics with Jenn Chubb, who co-authored some of the work that underpins Richard’s book, we showed that many researchers fear the impact agenda is changing the questions researchers ask and compromising research quality.



But it isn’t all doom and gloom. As the recent Real-Time REF Review showed, researchers generally approve of the REF2021 reforms, even if 57.4% still hold negative attitudes towards the assessment. However, 29.1% perceive REF positively, and Richard offers some explanation for these views too. Applied researchers in particular, who had previously felt undervalued by their peers and institutions, are now valued more than ever before for their capacity to generate impact, correcting a bias against applied research and legitimising those whose research falls into this category. This left a bitter taste in the mouth for some, because they were being valued only for the “monetary and positional value” of their impact, and not for the research underpinning it (p50). Richard suggests that these researchers were hence being “made complicit in a phony declaration of their research excellence and concurrently find their neoliberal identity – perhaps unwittingly – consummated” (p54). However, for most applied researchers, myself included, it would appear that this rebalancing is welcome, no matter how overdue it may be.



What should we do about all this?


As the evidence grows, we should all be very concerned about the negative unintended consequences of the impact agenda, and specifically the impact component of REF. However, Richard and I differ on the solution to these very real problems. Richard’s book further politicises impact in an attempt to waken researchers from their slumber to rise up against the corporate academy and its political masters (p11):


“Too much internalised, too far assimilated and homogenized, academics have become estranged from a duty to dissent and to wilfully contest and/or conflict with authority, which has culminated in the obfuscation and unchecked distension of unequal and unjust regimes and, moreover, the kinds of capitalistic rapacity now endemic within Universities.”


He goes on to argue that our unwitting collusion with our employers extends to collusion with policymakers, implying that some of us are now willing to alter research findings in line with political values to achieve policy impacts that can be claimed in REF:


“For social scientists especially, prominence in the public sphere is achieved most often via a close relationship with or cosying up to the policy making community and machinery of government.” (p85)


“Academics exceeding the parameters of their expertise or stepping outside of their comfort zone for the sake of being politically pragmatic. For some the co-mixing of academic facts with, or their subjugation to, political values may feel like a betrayal of their scientific code of practice.” (p89)


For me, however, there is a tension between the argument that we should voice our dissent and actively push back against the neo-liberal politics of REF, and the suggestion that we should avoid any kind of political engagement if that means talking to the neo-liberal elite. This simplifies the world of politics into something black and white, and suggests that having values is a betrayal of research ethics unless those values are opposed to the capitalist machine. Any scientific code that insists researchers are value-free, independent and objective is of limited use in the policy world, where conflicting lines of evidence have to be weighed against each other and against lines of argument, including moral argument.


What is important for me is that researchers interrogate their motives and understand why they do what they do. In so doing, they make their many identities and values explicit to themselves, and can see how these influence the questions they ask and the research they do. When we look deeply, we may find that our personal values sit on the left or the right of the political spectrum, and I think we need to respect these differences. I should not be judged if, as a mathematician, the world’s largest investment bank approaches me with a problem that fascinates me, and I fix it for them, enabling them to make even more money. But I should interrogate my values before embarking on any such journey if I want to ensure I don’t later regret what I got myself into. Most of the researchers I know are naturally more liberal and left-leaning in their politics, and they believe in the power of research to make the world a better place. If my data show that current Government policy is making the poorest in society more vulnerable, then I should not be judged if I want to use that evidence to challenge Government policy or empower those fighting for a different future. As someone who works with climate scientists, I see how the whole thrust of climate science challenges the political status quo and will require entire business sectors to cease operating if we are to avoid catastrophic climate change. Many of the researchers I know in this space would describe themselves as “researcher activists” and exemplify an altogether more hopeful vision of academic freedom and the value of knowledge.


But of course, the problem remains: how do you know if your research actually made a difference? For me, whether or not we use the answer to this question to create a REF impact case study, the fact that we are now asking the question is hugely important. There are many of us who would be content to continue engaging with publics and stakeholders with no evidence to tell us whether or not it is helping. But unless we know if it is helping, there is always the danger that we are wasting our time or worse, causing negative unintended consequences. For me, this should always be part of “responsible research and innovation”, and REF is normalising the monitoring and evaluation of engagement and impact in the UK research system in ways that became normal decades ago in international development.


Many of the researchers I meet when I am doing impact training, especially those from more applied disciplines, tell me that they always wanted to generate impact from their research but there was never enough time. The incentives created by REF have enabled many of these researchers to justify making the time, shifting impact up their "to do" list, and they are inspired by what they have achieved as a result. When I train researchers from pure sciences, arts and humanities disciplines around the world, I recommend taking a look at the impact case study database for inspiration, and I regularly see light-bulb moments as researchers realise that there are more relevant, creative and meaningful ways of generating impact from their work than they had previously thought possible.



Did REF panels ignore evidence?


These case studies are not the sum total of the impact of UK research, as some have erroneously suggested. They are a highly polished and skewed tip of an iceberg – selected and crafted by researchers who are assessing their own work for a specific purpose. While that may seem like an obvious conflict of interest or flaw in the assessment process, case studies have to be supported by evidence, and assessment panels have to be satisfied that there is sufficient evidence to support any given claim. Richard undermines the validity of the entire database of case studies by suggesting that these panels did not look at the evidence. While I think the assessment process was far from perfect, it is important to challenge this assertion, given its wide-ranging ramifications if correct.


Here’s what Richard says:


“…evidence is treated as an encumbrance on the evaluation process - the scrutiny of which would be time and labour costly - and analogously therefore, expendable, superfluous and ideally circumvented, certainly where trying to adjudicate the veracity and quality of the impact promulgated in case studies in any efficient manner.” (p79)


“Where evidence was consulted, its incorporation was also seen to prejudice evaluation decisions, where certain forms of research produced impact that was easier to evidence or produced more obvious or striking forms of evidence of impact, which was further reasoning why evidence was best left alone.” (p80)


“The apparent eschewal of corroborating evidence by panellists is, in the context of a process of scientific decision-making or rather decision-making by scientists, altogether peculiar and perplexing, and self-evidently contrary to the scientific method. It also reveals the significant effort made, and no doubt expense incurred, by universities in reconnoitring impact evidence was all for nothing.” (p80)


I think a very simple misunderstanding may explain this interpretation of the quotes from interviewees. It is true that very few sub-panels requested the full evidence held by Universities, but in well-written case studies they didn’t need to, because the evidence was laid out before them in the narrative. While it can be argued that panellists should not have taken as much on trust as they did, it cannot be argued that they ignored evidence. As the next quote below makes clear, testimonials were almost always quoted in the case study, and all manner of other evidence, including numbers, statistics and graphs, was presented as part of the case study. What the panels failed to do systematically was check that the evidence presented was robust. So it may be the case that, had they checked the original testimonials, they would have found quotes had been massaged or negative comments left out. They may have discovered that some of the statistics were made up. Instead, in a time-poor assessment process, they only requested the full, original evidence where something looked suspicious or didn’t make sense. Such an insight might mean we question the robustness of the process, but it does not provide evidence that the process was so deeply flawed that the resulting scores were entirely subjective, as Richard suggests.


Finally, Richard claims that testimonial based evidence is inherently untrustworthy:


“The impact of academic research is largely confined to statements of support or testimonials from parliamentarians recruited by academics (as favours) for the purpose of corroborating their ICS claims - collected "evidence", ironically it would seem, that was ultimately largely ignored by REF panellists”. (p92)


“Perhaps only the most infirm or unfit of researcher-research user relationships will produce an account that is anything other than complimentary.” (p56)


First, it is important to point out that testimonials were only one form of evidence, and while they may have been pervasive, they were rarely the only form of evidence presented. My own experience of gathering testimonials is that most people give me disappointingly modest assessments of my role in the generation of impact, if they respond at all, and in some cases (such as evidence analysts who are meant to synthesise across the evidence base, or NGOs and politicians who want to claim as much impact as possible for themselves rather than give credit to others, as described above) it is very difficult to get anyone to go on record to say your work played a role in the impact. While some researchers may be able to “call in favours”, I do not think the evidence presented here supports such a dim view of human nature. It is also worth bearing in mind that the subjective nature of testimonials was recognised by Research England in its guidance for REF2021, which asks us to seek evidence-based, rather than opinion-based, testimonials.





Richard and I share a dislike and distrust of the impact agenda, and of its incarnation in REF. Richard’s root and branch critique is well researched and insightful, and we would all do well to heed some of the dark warnings it provides. I hope many researchers and managers read this book and take a more critical approach to impact, one that reduces the chances of things going wrong for them and for the people they seek to help. But I hope that in this review I have offered some alternative interpretations and an alternative response. I would characterise Richard’s approach as a call to uproot the impact tree because it is rotten to the core: in this view, we cannot fix the ills of capitalism with structures so deeply rooted in the system that caused the problems we seek to solve. Richard’s post-impact world harks back to a time when we were not required to account for our time or taxpayers’ money to the same extent that we are today.