Latest REF2021 intelligence

August 29, 2019

 

We’re getting to a point in the REF cycle where details really matter, and sadly there remains confusion over some small but potentially important details on impact. These insights come from a combination of my own queries to REF teams (to make sure I am providing accurate advice to the universities I'm working with on impact) and emails forwarded from the REF team to email lists I'm on. In some cases, we are still waiting for clarity from Research England, so I will update this blog when I get more answers...

 

How many testimonials can you include in a case study? 

 

There has been some confusion over the number of testimonials that can be included in case studies, with some people (incorrectly) interpreting the guidance below as limiting this to five:

  • REF guidance (p98, Guidance on Submissions): “Where the sources are individuals who could be contacted *or have provided factual statements* to the HEI, the submitted case study should state only the organisation (and, if appropriate, the position) of the individuals concerned, and which claim(s) they can corroborate…Details of a maximum of five individuals may be entered for each case study; these data will not be published as part of the submission.”

 

New FAQs published by Research England on 29th August clarify that you can include up to 10 testimonials (though I think it would be better to have more diverse sources of evidence if possible, rather than relying solely on testimonials), and you can then enter contact details for up to five of these individuals, who may be contacted by the REF team to check claims:

  • "How many testimonials can be included as corroborating sources for impact? A maximum of 10 references to sources that can corroborate the impact may be included in each case study. This may include any number, within the maximum of 10, of factual statements already provided to the HEI. These must be submitted to the REF team by the deadline of 29 January 2021. The details of a maximum of five individuals relating to these 10 sources may be entered for each case study, and these are to be submitted through the submission system. These five individuals may be contacted directly by the REF team to corroborate the information provided as part of the audit process. We do not envisage contacting more than five individuals for any particular case study, which is why we have set this limit. If a larger number of individuals could potentially provide such corroboration, then five should be selected that best represent this larger group. The corroborating sources listed should focus on the key claims made within the case study. For further guidance on corroborating evidence, including the use of testimonials, please refer to paragraphs 310 and 311 of the ‘Panel criteria and working methods’.”

 

If you want to use more testimonials though, one sneaky way to get around the limit is to conduct interviews as part of a research project and publish the quotes in a peer-reviewed article. Then you can quote as many people as you want, all linked to a single piece of corroborating evidence (the article). 

 

 

Should we write about the quality of our underpinning research, and if so where should we put this? 

 

In section 3 of the REF impact case study template (Guidance on Submissions, p96), it states that “evidence of the quality of the research must also be provided in this section” and refers to the panel criteria. In the Panel Criteria and Working Methods (p57) it explains that “sub-panels do not expect to read the underpinning research output(s) as a matter of course to establish that the threshold has been met. The submitting institution should aim, where possible, to provide evidence of this quality level”.

 

Panels C and D then list a range of indicators that could be used to evidence quality (Panels A and B list no such indicators), including: evidence of a rigorous peer-review process for outputs; peer-reviewed funding; reviews of outputs from authoritative sources; prizes or awards made to individual research outputs; and evidence that an output is an important reference point for further research beyond the original institution. The REF team have confirmed that “Main Panels A and B do specify evidence is required, but do not give examples of evidence they expect to see”. However, other than the fact that research outputs are peer-reviewed (which will be a given for the vast majority of work submitted in this section), many case studies will not have any indicators they can draw upon as evidence of quality. Given that panels do not want to read the outputs unless they have to, it would seem prudent to include a narrative justification of the rigour, originality and academic significance of the underpinning research. 

 

One of my colleagues asked the REF team what they thought about justifying quality in this way, and they said “we are not expecting or requiring a narrative statement on the quality of the underpinning research for impact case studies”. However, when I asked what else case study authors could do to evidence quality if they didn’t have indicators, they explained that “beyond the criteria we wouldn’t be prescriptive and it is up to the HEI to determine what is appropriate evidence. What we mean by not expecting a narrative is that it doesn’t need to be lengthy prose, but could just be a short factual statement”. So we can provide a short narrative justification of underpinning research quality, and my advice would be to do this, especially if you don't have other indicators of quality you can refer to. But remember that this section is just an eligibility criterion and won't contribute directly to high scores, so do this as concisely as possible. 

 

There is also confusion about whether any evidence of quality should be provided in Section 3 (as the template implies) or integrated into Section 2. So far, the REF team have answered neither the original question nor my follow-up on this point. The answer is probably that it doesn’t matter where you put it, but for my money I’ll be putting it where we’re asked to, so that the panels can tick the quality box as quickly and easily as possible.

 

I am personally reluctant to consider citation data and journal impact factors as indicators, given that they are not in the list given in the Guidance on Submissions, and we are clearly told not to include this information for outputs (though certain panels will be given access to citation data for outputs).

 

Suggestions for additional indicators I’ve heard or come up with so far include:

  • For prestigious conference proceedings (like IEEE), acceptance rates are often published and could be used to show that an output was one of just 25% (for example) of papers accepted

  • When listing prestigious funding, state if the proposal was ranked in the top quartile of submissions to the call

  • For projects that have been completed, provide the assessment grade given by the end-of-award report referees (and perhaps some nice quotes)

  • For projects funded as part of a programme (from directed calls), state if it was the largest project (by award value) in the programme or, based on Researchfish records, if it published more outputs than any other project in the programme

  • Citation of research in prestigious/influential reports e.g. science-policy interfaces like IPCC.

 

 

Some clarifications on making evidence available to panels

  • It is acceptable to compile a document containing multiple sources e.g. media reports or evaluations of schools work and submit this document as a single source of corroborating evidence with your case study. In response to my question on this Research England say they will be updating their FAQs shortly, but in the meantime, "HEIs may group multiple items of evidence into a single source to corroborate an impact case study where this is appropriate. Each item within the group should be clearly identified and described in section 5 of the case study" (source: email from REF team, 31st October 2019)

  • Despite the focus on “external sources” of evidence in the guidance (paras 94 and 319c, p23 and 73, and impact case study template, p98, Guidance on Submissions), the REF team have said that "as its verifiable by some means, it’s verifiable.  In other words, I don’t think public availability of corroborating evidence should be a priority route to verifiability". This again implies that the submission of internally collected evidence along with case studies should in theory be acceptable (source: email from REF team, 14th May 2019)  

  • How should you capture web-based evidence to make sure evidence isn’t lost if sites go down? Screenshots are one option, but not practical for long pages or entire sites. Printing and scanning as a PDF is a bit of a nightmare, so an alternative is to make sure the web-based evidence you need is available on a web archive like the Wayback Machine (source: ARMA Impact Special Interest Group email list)
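If you have many URLs to preserve, the archiving step above can be scripted. Below is a minimal sketch in Python, assuming the Wayback Machine's public "Save Page Now" URL scheme (https://web.archive.org/save/&lt;url&gt;) and its snapshot-availability API; both endpoint patterns are assumptions that should be verified against archive.org's current documentation before relying on them for your evidence audit trail:

```python
# Sketch: helpers for archiving web-based evidence on the Wayback Machine.
# The endpoint patterns below are assumptions based on the public
# "Save Page Now" and availability-check URLs; verify before use.
from urllib.parse import urlencode

SAVE_ENDPOINT = "https://web.archive.org/save/"
AVAILABILITY_ENDPOINT = "https://archive.org/wayback/available"

def save_url(evidence_url: str) -> str:
    """Return the Save Page Now URL that, when requested, asks the
    Wayback Machine to capture a fresh snapshot of evidence_url."""
    return SAVE_ENDPOINT + evidence_url

def availability_url(evidence_url: str) -> str:
    """Return the API URL for checking whether a snapshot of
    evidence_url already exists in the archive."""
    return AVAILABILITY_ENDPOINT + "?" + urlencode({"url": evidence_url})

# To actually trigger a capture, issue an HTTP GET, e.g.:
#   import urllib.request
#   urllib.request.urlopen(save_url("https://example.org/impact-report"))
```

Keeping a simple list of the archived snapshot URLs alongside each case study's Section 5 sources would then give panels (and auditors) a stable record even if the original pages disappear.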


You may also be interested...

 

Sign up for my newsletter to get the latest research impact news, evidence and resources in your inbox every month. 

 

Evidencing impact:

 

Find out how to write a winning impact summary and pathway to impact, explore my best practice library of pathways to impact and try out my Pathway to Impact Builder.
