How to design a whole-institution REF impact internal review: lessons from Northumbria University


What would you do if you were asked to organise an internal review of REF impact case studies for your whole university? Run away? Hide? Look at what everyone else is doing?

As the two Impact Managers for Northumbria University given this task, running away and hiding were not options. Learning from the experience of other institutions was also not an option, since it is difficult to find publicly available information about internal processes for assessing impact.

In this blog, we want to lift the shroud of secrecy surrounding internal REF processes and share the approach we developed. Our hope is that others will share their experiences too, so that we can all learn from best practice across the sector.

The challenge

We were tasked with designing an internal process for reviewing draft impact case studies for REF2021: one that could track progress towards submission in 2020, shortlist the case studies most likely to score well, and prioritise support for those with as-yet-unrealised potential. Where were our strengths and weaknesses? Where were resources and extra effort needed? How could we manage risks to delivery and make the case for extra support and investment? We needed a replicable rating mechanism that could be applied to every potential impact case study. The process had to work for all disciplines, the rating had to be applied consistently across all four faculties, seventeen departments and fifteen units of assessment… and the results had to achieve institutional buy-in.

Our approach

Our approach consists of five steps, using a scoring matrix that grades case studies by their relative impact (in terms of significance and reach) and their likelihood of occurrence by 2020. The planned approach and associated guidance were shared with the authors of potential case studies, together with a submission template and the promise of individual feedback following panel discussion.

Step 1 – Score by likelihood of occurrence by 2020

At this stage in the REF cycle we felt it was premature to make judgements on star ratings for impacts that were typically still in progress. To use an Olympic analogy, we wanted this to be a selection process for our elite (REF) athletes: identifying those that would probably form our Olympic REF squad in 2021 (green), those that could possibly be included with further support (amber), and those that needed significant support or might be future stars for REF2028 (red). Figure 1 shows our Red-Amber-Green (RAG) rating. To achieve consistency, we developed a scoring matrix for reviewers, supported by guidance on how to apply the scores based on the only information available at the time: the REF2014 panel guidance.

Figure 1. Red-Amber-Green (RAG) rating definition for filtering case studies by likelihood of occurrence in the 2017 internal review at Northumbria University (Alisha Peart, 2017)
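For readers who like to see a process expressed as logic, the sketch below shows one way a Step 1 RAG filter of this kind could work. It is a minimal illustration only: the 1–9 likelihood scale and the band thresholds are our assumptions for the example, not the actual matrix we used.

```python
# Minimal illustrative sketch of a Step 1 RAG filter.
# The 1-9 likelihood scale and the band thresholds are assumptions for
# this example, not the actual Northumbria scoring matrix.

def rag_rating(likelihood_score: int) -> str:
    """Map a reviewer's likelihood-of-occurrence-by-2020 score to a band."""
    if not 1 <= likelihood_score <= 9:
        raise ValueError("expected a score between 1 and 9")
    if likelihood_score >= 7:
        return "green"  # likely to make the REF2021 'squad'
    if likelihood_score >= 4:
        return "amber"  # possible with further support
    return "red"        # needs significant support, or one for REF2028

print(rag_rating(8))  # -> "green"
```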

Step 2 – Score by impact potential based on eligibility, reach, significance, strength of evidence and probability of improvement

The next step was to break case studies down into their component elements to assess the potential grade of the impacts deemed most likely to occur (see Figure 2, 'Element a'). These included (illustrated in the sketch after the list):

  • the reach and significance of the impact;

  • the quality of the supporting evidence collected to date; and

  • the eligibility of the case study (including whether the underpinning research was conducted within the relevant time window, was conducted at this institution, was of at least 2* quality, and was clearly linked to the impact).
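As a rough illustration of how such elements might combine into a single potential grade, here is a small sketch. The element names, the 1–4 scale, and the "weakest link" combining rule are all assumptions made for the example, not our actual review matrix.

```python
# Illustrative sketch of combining Step 2 element scores.
# Element names, the 1-4 scale, and the "weakest link" rule are
# assumptions for this example, not the actual review matrix.
from dataclasses import dataclass

@dataclass
class ElementScores:
    reach: int          # assumed 1-4 per element
    significance: int
    evidence: int       # quality of supporting evidence collected to date
    eligible: bool      # underpinning research meets the eligibility tests

def impact_potential(scores: ElementScores) -> int:
    """Return a single potential grade for one element of a case study.
    An ineligible case study scores zero, whatever its impact."""
    if not scores.eligible:
        return 0
    # Assume the weakest component caps the overall potential.
    return min(scores.reach, scores.significance, scores.evidence)

print(impact_potential(ElementScores(4, 3, 2, True)))  # -> 2
```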