Has the bar been raised for REF2021?

November 9, 2018

Research England have consistently stated that an impact case study worth 4* in REF2014 will be worth the same score in REF2021. However, as institutions compete for top scores, there are increasing fears over grade inflation and rumours about an informal cap on the number of high-scoring case studies. I thought it was time to catch up with Steven Hill, Director of Research Policy at Research England, and put these questions directly to him.

I caught up with Steven at an event at King's College London last week, where we were both speaking, and asked him whether he agreed with the received wisdom that it will be harder to get a 4* impact case study in REF2021 than in REF2014. In response, he confirmed that there will be no quota system, whether formal or informal. I put it to him that this may lead to a significantly higher proportion of 4* case studies being awarded in REF2021, due to a greater understanding of what it took to reach top scores. Steven suggested that any such increase may be counter-balanced by a reduction in 4* awards at the 3-4* boundary. He cited Gemma Derrick's recent book analysing the REF2014 process (see her Fast Track Impact blog) to point out that a number of sub-panels appeared to have given case studies at the 3-4* boundary the benefit of the doubt in REF2014, and he questioned whether they would do this again in REF2021.

 

Competition at the 3-4 star boundary

 

Of course, Steven cannot predict panel behaviours, and if there's one lesson I took from Gemma's book, it is that panels will socially construct their own interpretations of the REF rules as necessary to guide their work. Steven's admission that competition may indeed be more fierce at the 3-4 star boundary confirms the suspicions of many researchers, and it will reinforce the strategy being taken by most institutions of aiming for the top end of the 4* category.

 

Whether or not panels in REF2021 do in fact raise the bar, it is clear to me as I train across the sector that the bar has already been raised by submitting institutions. I have had the privilege of training all of the Russell Group universities and the majority of other universities across the UK during the current REF cycle, and I have seen draft case studies from most of these institutions. The key difference I see between the top-end case studies being prepared for REF2021 and the best case studies from REF2014 is the quality of evidence they provide for their claims.

 

How is the bar being raised?

 

I'm seeing this in two ways. First, case studies are relentless in their pursuit of evidence to demonstrate cause and effect between research and impact. It doesn't matter how long the causal chain between research and impact is: the strength of your impact claim will only ever be as strong as the weakest link in that chain. The best case studies I am seeing make the causal chain clear and evidence each link in the chain beyond reasonable doubt. This is one of the key findings from research being led by one of my PhD students, Bella Reichard, based on a linguistic analysis of high- versus low-scoring case studies from REF2014. You can see the preliminary findings we presented at a conference in the summer here.

 

Second, I'm increasingly seeing longitudinal evidence of impact. Although I'm seeing this across the board, it is particularly noticeable in public engagement and policy case studies. Rather than focussing on feedback from specific events (which often boils down to little more than enjoyment), I'm seeing people use methods like "a postcard to your future self" to deepen relationships and impact whilst creating GDPR-compliant opportunities to collect longitudinal evaluation evidence. In our sample of 175 of the highest and lowest scoring case studies from across all Main Panels in REF2014, 42% of high-scoring policy case studies (11 out of 26) described both changes in policy and the benefits arising from the implementation of those policies, compared to 17% of low-scoring case studies (5 out of 29). Low-scoring cases were more likely to report only changes in policy with no evidence of implementation (83%) than high-scoring case studies (58% of which were policy only). Whether continuation case studies or new work, the majority of the REF2021 policy case studies I am seeing contain both policy change and evidence that those policies are being implemented and delivering benefits.

 

At this point in the REF cycle it is impossible to predict how panels will operate in REF2021, but it is clear that the fear of tougher competition is raising standards across the sector, and there is credible evidence that this competition is likely to be most fierce at the 3-4 star boundary. If you want to be sure of getting top scores next time round, my advice is to aim for 4+.
