In this blog we discuss reasons why the impact agenda is often negatively perceived by some academics. We do so by drawing on new research into sources of resistance to impact, based on interviews with 51 academics in the UK and Australia between 2011 and 2013.
Following the 2006 Warry report on the economic return on investment in research, anyone working in and around higher education and research would be hard pressed not to have heard mumblings and/or grumblings about the impact agenda. Considered by many as symptomatic of broader challenges in HE, impact was, at least at first, seen to present dangers to blue skies thinking, potentially narrowing the ways we fundamentally understand science and its applications.
But since its inception, we have seen huge investment in impact support across the sector, and a growing acceptance, indeed an active welcoming by many, of this development. Impact support teams have been recruited in universities alongside growing incentives for impact, bringing with them new currency and esteem.
But how far have academic attitudes towards impact shifted, if at all?
In a paper we published last week, we provide baseline information on the reasons why the impact agenda presented challenges for so many academics at the time of its inception. Specifically focusing on the sources of this resistance, we discuss academics’ fears that the impact agenda could compromise the quality of some research and potentially undermine many researchers’ intrinsic motivation to generate benefits from their work.
Fears about dwindling quality
Although some of these may be isolated examples, our research identified a number of ways in which academics feel the impact agenda may compromise research quality.
1. Researchers claimed they were changing the way they write grant proposals to prioritise research questions they believed would generate impact, but they felt that these questions rarely pushed the boundaries of their discipline. An archaeology researcher from the UK explained that, “choosing research questions on the basis of real-world need can lead to tedious work that does not push disciplinary boundaries and ends up unpublishable”. One UK psychology researcher alarmingly stated: “I’m doing shit research because I thought that’s what they wanted”. This may be indicative of a misconception around the weighting of impact versus research excellence (excellence has always been weighted higher than impact).
2. Researchers perceived that there was an increased risk of conflicts of interest emerging as they worked more closely with beneficiaries who co-fund or support their work. It is vital that we reflect on the relationship between impact and integrity. For instance, a 1998 study showed that a third of published journal articles in one discipline disclosed financial interests, and a 1999 study showed that two-thirds of academic institutions held equity in ‘start-up’ businesses that sponsored their research. There is evidence that industry-sponsored research is associated with publication delays and data withholding, and there are some reports of industry altering, obstructing, or even stopping the publication of negative studies. Despite these concerns, Australia’s Engagement and Impact Assessment, launched in December 2017, asks institutions to report “cash support from research end-users” as an indicator of engagement that is designed to “provide quantitative evidence of links between researchers and research end-users”. It is therefore important to reflect on how impact pathways might affect others external to the academic world, applying principles of ethical conduct just as one would in research itself. Embedding impact throughout the research life cycle, rather than divorcing it from research, will make this possible.
3. Researchers felt that they were forced to broaden rather than deepen their disciplinary expertise to address real-world problems, leading to “shallow research”. There were concerns that the impact agenda is spreading researcher capacity too thinly, taking time away from research. As researchers are forced to look beyond their core expertise, there is a danger that they prioritise breadth over depth. One UK English literature scholar explained that “impact...has to grow out of research, yet it pulls us away from research.” Similarly, a mathematician admitted that, “it does force me a bit in the direction of more shallow and more directly applicable research”.
Undermining self-determination and intrinsic motivations for impact
In addition to these immediate concerns around research quality, the interviews revealed deeper concerns about how impact might be changing the motivations of researchers. There is evidence that intrinsic and altruistic motives for engaging with impact (e.g. a desire to benefit others) are being increasingly crowded out by extrinsic motivations for impact (e.g. to get research funding, promotion or improve institutional rankings or reputation). As one UK philosophy researcher said, “it’s unwelcome because it’s measuring and distorting things that people were happy to do”.
This is known as “motivational crowding”. Motivation crowding theory posits that extrinsic motivators such as monetary incentives or punishments may undermine intrinsic motivation. When someone is rewarded for a behaviour they had previously performed out of intrinsic motivation (e.g. paid for giving blood), the extrinsic motivation can replace or ‘crowd out’ the intrinsic motivation, leading to non-performance of the behaviour when the extrinsic reward is no longer available. Researchers who were intrinsically motivated to pursue impacts prior to the introduction of a research assessment’s rewards and punishments may stop pursuing impact when they are told that their work will not be submitted, or has been reviewed as low quality.
Moreover, there were concerns that the impact agenda may affect perceptions of self-determination and self-efficacy, which have been shown to significantly affect levels of intrinsic motivation. A UK Theatre and Television scholar explained: “The danger is it’s like a Tsunami. It’s crashing over everything and will knock stuff out that is a precious part of what has kept universities going. For many academics it is keeping alive that sense that they have control over what they’re doing that enables them to be valuable and to have impact. You run the risk of taking away a raison d’être”.
The politicisation of impact
The politicisation of the impact agenda is a major ideological barrier for many researchers. The political roots of impact policies in both the UK and Australia continue to fuel suspicion that this agenda is an extension of neoliberal attempts to marketise the academy, valuing knowledge only in narrow, instrumental terms as a return on public investment, despite attempts from government to demystify these messages and involve the sector. The evolution of research impact in both settings, away from simplistic metrics with an emphasis on economic value and towards more holistic conceptions of impact assessed both qualitatively and quantitatively with expert peer review, had, according to these accounts (at the time of interviewing), failed to convince many researchers that the objectives of impact assessment were benign. The shift towards the use of responsible metrics and the implementation of the San Francisco Declaration on Research Assessment (DORA) in many institutions in 2018, however, marks a step change in these developments in the UK.
The timing of the inception of an impact agenda alongside economic austerity in both the UK and Australia partly explains the political roots of an agenda designed to protect and justify spending on research in an increasingly competitive public spending environment. This has been further compounded by the top-down approach to the introduction of impact assessment in each country, driven by national research institutions in collaboration with senior management teams in HEIs, who have been among the first to recognise the political imperatives of the impact agenda.
How concerned should we be?
It is important not to take all of these findings at face value, not least since the interviews took place in a pre-REF context. Some respondents suggested that the impact agenda had deeper roots that pre-dated its neoliberal hijacking by politicians keen to show a return on their investment in research.
Rather than seeing impact as compromising research, many felt they had a deep responsibility to make a difference with their research. Some felt that it could enrich the research process and enhance research quality by encouraging more joined up thinking across disciplines. Indeed for many of these researchers, who often co-produced their research, impact was an indivisible part of the research process.
These academics pointed out that researchers are still able to choose whether to pursue their own research agenda or allow their work to be skewed by their own perceptions of what funders would support. Indeed, those with experience of reviewing were even clearer on this point, revealing perhaps a persistent lack of coherent understanding of impact policies and definitions across the community. There is in fact little evidence (beyond the anecdotal) that impact significantly skews funding decisions, as these are still weighted (heavily, in most schemes) towards research excellence. There is even less evidence that impact is actually driving down the quality of research being proposed or conducted.
However, it is clear from this research that negative perceptions of the impact agenda were strong at the time of interviewing. The mechanisms through which impact is purported to be compromising research quality are credible, if not proven. It is possible that these mechanisms are already at work and leading to more widespread negative unintended consequences, especially if impact policies continue to be misconceived.
What can we do?
Research assessments are inevitably externalising and instrumentalising the motives of researchers around the world. This is a complex issue, driven inevitably from the top down, but it cannot be solved solely by governments or university managers. Instead