
Latest REF2021 intelligence

Updated: May 7, 2020


We’re getting to a point in the REF cycle where details really matter, so in this blog I'm collating relevant new insights on how institutions are interpreting the REF guidance on impact, along with responses from Research England to questions that I and others have posed.

How many testimonials can you include in a case study?

There has been some confusion over the number of testimonials that can be included in case studies, with some people (incorrectly) interpreting the guidance below as limiting this to five:

  • REF guidance (p98, Guidance on Submissions): “Where the sources are individuals who could be contacted *or have provided factual statements* to the HEI, the submitted case study should state only the organisation (and, if appropriate, the position) of the individuals concerned, and which claim(s) they can corroborate…Details of a maximum of five individuals may be entered for each case study; these data will not be published as part of the submission.”

New FAQs published by Research England on 29th August clarify that you can include up to 10 testimonials (though I think it would be better to draw on more diverse sources of evidence where possible, rather than relying solely on testimonials), and that you can then enter contact details for up to 5 of these individuals, who may be contacted to check claims:

  • "How many testimonials can be included as corroborating sources for impact? A maximum of 10 references to sources that can corroborate the impact may be included in each case study. This may include any number, within the maximum of 10, of factual statements already provided to the HEI. These must be submitted to the REF team by the deadline of 29 January 2021. The details of a maximum of five individuals relating to these 10 sources may be entered for each case study, and these are to be submitted through the submission system. These five individuals may be contacted directly by the REF team to corroborate the information provided as part of the audit process. We do not envisage contacting more than five individuals for any particular case study, which is why we have set this limit. If a larger number of individuals could potentially provide such corroboration, then five should be selected that best represent this larger group. The corroborating sources listed should focus on the key claims made within the case study. For further guidance on corroborating evidence, including the use of testimonials, please refer to paragraphs 310 and 311 of the ‘Panel criteria and working methods’.”

If you want to use more testimonials, though, one sneaky way to get around the limit is to conduct interviews as part of a research project and publish the quotes in a peer-reviewed article. Then you can quote as many people as you want, all linked to a single piece of corroborating evidence (the article). A simpler alternative, based on an FAQ published by Research England on 12th December 2019, is to integrate multiple testimonials (and other sources of evidence, e.g. lists of media engagement or different evaluations of public engagement) into a single source:

"HEIs may group multiple items of evidence into a single source to corroborate the impact where this is an appropriate way of presenting related evidence items. In these cases, the HEI must clearly identify and describe each item in the grouped source in section 5 of the impact case study template."

Should we write about the quality of our underpinning research, and if so where should we put this?

Section 3 of the REF impact case study template (Guidance on Submissions, p96) states that “evidence of the quality of the research must also be provided in this section” and refers to the panel criteria. The Panel Criteria and Working Methods (p57) explains that “sub-panels do not expect to read the underpinning research output(s) as a matter of course to establish that the threshold has been met. The submitting institution should aim, where possible, to provide evidence of this quality level”.

Option 1: use quality indicators. Panels C and D then list a range of indicators that could be used to evidence quality, including: evidence of a rigorous peer-review process for outputs; peer-reviewed funding; reviews of outputs from authoritative sources; prizes or awards made to individual research outputs; and evidence that an output is an important reference point for further research beyond the original institution. Main Panels A and B list no such indicators, and the REF team have confirmed that “Main Panels A and B do specify evidence is required, but do not give examples of evidence they expect to see”.

Suggestions for additional indicators that I’ve heard or come up with so far include:

  • For prestigious conference proceedings (e.g. IEEE), acceptance rates are often published and could be used to show that an output was among, say, the 25% of submitted papers that were accepted

  • When listing prestigious funding, state if the proposal was ranked in the top quartile of submissions to the call or state the percentage of proposals that were funded (assuming this is a small proportion)

  • For projects that have been completed, provide the assessment grade from the end-of-award report referees (and perhaps some nice quotes)

  • For a project funded as part of a programme (from a directed call), state whether it was the largest project (by award) in the programme or, based on ResearchFish records, whether it published more outputs than any other project in the programme

  • Citation of the research in prestigious or influential reports, e.g. from science-policy interfaces such as the IPCC (sciences), or published reviews of the research, assuming these are by reputable people in prestigious outlets (arts and humanities)

  • A letter of support from named leading experts in the field testifying to the quality of the research (reserve this for key outputs that were not formally published, e.g. reports, and that are particularly important for underpinning the impact, as the letter would take up one of your slots for corroborating sources)

I am personally reluctant to consider citation data and journal impact factors as indicators, given that they do not appear in the panels’ lists of indicators, and we are clearly told not to include this information for outputs (though certain panels will be given access to citation data for outputs).

Option 2: narrative justification of research quality. Other than the fact that research outputs are peer-reviewed (which will be a given for the vast majority of work submitted in this section), many case studies will not have any indicators they can draw upon as evidence of quality. Given that panels do not want to read the outputs unless they have to, it would seem prudent to include a narrative justification of the rigour, originality and academic significance of the underpinning research.

One of my colleagues asked the REF team what they thought about justifying quality in this way, and they said “we are not expecting or requiring a narrative statement on the quality of the underpinning research for impact case studies”. However, when I asked what else case study authors could do to evidence quality if they didn’t have indicators, they explained that “beyond the criteria we wouldn’t be prescriptive and it is up to the HEI to determine what is appropriate evidence. What we mean by not expecting a narrative is that it doesn’t need to be lengthy prose, but could just be a short factual statement”. So we can provide a short narrative justification of underpinning research quality, and my advice would be to do this, especially if you don't have other indicators of quality you can refer to. But remember that this section is just an eligibility criterion and won't contribute directly to high scores, so do this as concisely as possible.

Option 3: ignore it entirely. I asked colleagues from the Association of Research Managers and Administrators (ARMA) Impact Special Interest Group what they were advising their case study authors, and all three respondents said they were ignoring this part of the guidance. The rationale was that the risk of a case study being downgraded or deemed ineligible on the basis of weak underpinning research is very low (there are few identifiable examples of this happening in REF2014).

Finally, there is some confusion about whether evidence of quality should be provided in Section 3 (as the template implies) or integrated into Section 2. So far, the REF team have not answered either the original question or my follow-up seeking clarification. The answer is probably that it doesn’t matter where you put it, but for my money I’ll be putting it where we’re asked to, so that the panels can tick the quality box as quickly and easily as possible.

Some clarifications on making evidence available to panels

  • It is acceptable to compile a document containing multiple sources (e.g. media reports or evaluations of work with schools) and submit this document as a single source of corroborating evidence with your case study; a minimal scripting sketch for doing this with PDFs follows after this list. In response to my question on this, Research England said they would be updating their FAQs shortly, but in the meantime, "HEIs may group multiple items of evidence into a single source to corroborate an impact case study where this is appropriate. Each item within the group should be clearly identified and described in section 5 of the case study" (source: email from REF team, 31st October 2019)

  • Despite the focus on “external sources” of evidence in the guidance (paras 94 and 319c, pp. 23 and 73, and the impact case study template, p98, Guidance on Submissions), the REF team have said that "as it’s verifiable by some means, it’s verifiable. In other words, I don’t think public availability of corroborating evidence should be a priority route to verifiability". This again implies that the submission of internally collected evidence along with case studies should, in theory, be acceptable (source: email from REF team, 14th May 2019)

  • How should you capture web-based evidence to make sure it isn’t lost if sites go down? Screenshots are one option, but not practical for long pages or entire sites. Printing and scanning as a PDF is a bit of a nightmare, so an alternative is to make sure the web-based evidence you need is available on a web archive like the Wayback Machine; a short scripting sketch for this also follows below (source: ARMA Impact Special Interest Group email list)
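On compiling multiple evidence items into one document: if the items are PDFs, this can be scripted rather than assembled by hand. The sketch below is my own illustration, not anything from the REF guidance; it assumes Python 3 with the third-party pypdf library installed, and the filenames are hypothetical placeholders.

```python
# Combine several PDF evidence items into a single corroborating source.
# Assumes: pip install pypdf  (all filenames below are placeholders)
from pypdf import PdfWriter

evidence_items = [
    "media_coverage.pdf",         # e.g. compiled press clippings
    "school_evaluation.pdf",      # e.g. evaluation of work with schools
    "testimonial_statements.pdf", # e.g. collected factual statements
]

writer = PdfWriter()
for path in evidence_items:
    writer.append(path)  # appends every page of each item, in order

writer.write("combined_evidence.pdf")
```

Remember that each item in the grouped source still needs to be clearly identified and described in section 5 of the case study template.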
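On archiving web evidence: you can trigger Wayback Machine captures by hand at web.archive.org, but if you have many URLs to preserve it is easy to script requests to the Internet Archive's public "Save Page Now" endpoint. A minimal sketch, assuming Python 3 with the requests library; the example URL is a placeholder.

```python
# Ask the Wayback Machine's "Save Page Now" endpoint to snapshot a URL,
# then print the address of the archived copy for your evidence records.
# Assumes: pip install requests
import requests

def archive_url(url: str) -> str:
    """Request a snapshot of `url` and return the archived copy's address."""
    response = requests.get(f"https://web.archive.org/save/{url}", timeout=120)
    response.raise_for_status()
    # Save Page Now redirects to the new snapshot, so the final response
    # URL (https://web.archive.org/web/<timestamp>/<url>) is the archive link.
    return response.url

if __name__ == "__main__":
    print(archive_url("https://example.com/impact-evidence"))  # placeholder URL
```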

You may also be interested...

Sign up for my newsletter to get the latest research impact news, evidence and resources in your inbox every month.


Evidencing impact:

Find out how to write a winning impact summary and pathway to impact and explore my best practice library of pathways to impact.

About the author

Mark is a recognised international expert in research impact with >150 publications that have been cited over 14,000 times. He holds a Research England and N8 funded Chair in Socio-Technical Innovation at Newcastle University, and has won awards for the impact of his research. His work has been funded by ESRC, STFC, NERC, AHRC and BBSRC, and he regularly collaborates and publishes with scholars ranging from the arts and humanities to the physical sciences. He has reviewed research and sat on funding panels for the UK Research Councils (including MRC, BBSRC, ESRC and NERC), EU Horizon 2020 and many national and international research funders. He has been commissioned by the United Nations to write reports and speak at international policy conferences, has acted as a science advisor to the BBC, and is research lead for an international charity.

Mark regularly advises research funders, helping them write funding calls, evaluate the impact of funding programmes and train their staff (his interdisciplinary approach to impact has been featured by UKRI, the largest UK research funder, as an example of good practice). He also regularly advises policy makers (e.g. evaluating the impact of research funded by the Scottish Government and Forest Research), research projects (e.g. via the advisory panel for the EU Horizon 2020 SciShops project) and agencies (e.g. the Australian Research Data Commons) on impact.

