Module 4: Evidencing the significance of impacts from media engagement

While measuring reach is fairly straightforward, there are no standard metrics that can tell you the significance of the benefits that arose for the audience that was reached. This is a problem if you want to claim impact from media engagement, as reach without significance means very little. Only once it has been possible to show that (at least a proportion of) those who engaged benefited significantly (e.g. they learned something new, gained a skill or changed a behaviour) does evidence of reach have value, because you are then able to infer the scale of the benefits that have occurred.

Types of evidence

Most survey respondents agreed that evidencing the significance of impacts arising from media engagement was a major challenge, and suggested that this should primarily be the responsibility of academics or professional services staff, rather than press offices, who currently mainly provide evidence of reach. Press office respondents focussed on getting reports of impact from academics at a later date (typically a month after engagement, using feedback forms and surveys). One respondent suggested sending information about the reach of an academic’s media engagement as a way of incentivising responses to such surveys. The Conversation’s follow-up survey was also mentioned (this goes out to academics who write articles for The Conversation, asking what impacts arose after the piece went out). Another suggested contacting professional services teams to see what impacts had been identified as part of the process of developing REF impact case studies. Professional services staff and academics focussed more on specific methods that they could use to evidence the significance of their impact, for example: 

  • Citations in policy literature

  • Contacting stakeholder organisations linked to the media coverage to ask if they have seen benefits e.g. an increase in enquiries or donations

  • Interviews with representatives from these organisations to get testimonials of wider benefits

  • Collection of positive media reviews to show the impact of an event or book on cultural life


Reach was a clear benefit of media engagement, and where significance can be demonstrated, the kinds of metrics provided by press offices could play an important role in the construction of REF impact case studies. Reach metrics included: 

  • Media monitoring and clipping services e.g. Signal

  • Tracking media follow up calls/interview requests

  • Viewer/readership numbers e.g. National Readership Survey

  • Circulation/web hits of publication/outlet e.g. via Google Analytics

  • Metrics within individual social media platforms and third-party social media analysis e.g. Pulsar or Meltwater 

Methods for evidencing the impact of media engagement

Methods for evidencing impact from media engagement range from highly accurate but costly approaches to methods you can use for free if you plan ahead and have enough time. All the methods I’m going to suggest have three things in common:

  1. They demonstrate the significance of impact by identifying and describing the benefits that arise from media engagement

  2. They use a sample of the population who engaged with the media. It isn’t possible to survey every person who listened to, watched or read about the research, but it is possible to find out whether a sample of those people benefited in any way. The more expensive methods work with statistically robust samples (see the sample-size sketch after this list), but if you are evaluating impact on a tight budget it is possible to triangulate your findings using other methods, so that you can still say something with confidence despite a biased sample

  3. They demonstrate cause and effect between the research and the benefits, so any impacts claimed are clearly attributable to the research. There are two main types of causation you might want to try to prove. Necessary causation shows that the research was necessary to generate the impacts that arose via the media. Sufficient causation shows that the research could in theory have generated the impact, and you then need further evidence to build an argument that, despite other factors also being plausible (or even probable), the research did indeed play a significant role. A third, weaker form is contributory causation, where the research may have been one of many contributing factors, and it is not possible to disentangle these competing/confounding factors to show that the research played a significant role.
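
To give a feel for what “statistically robust” means in practice, here is a minimal sketch of the standard sample-size calculation for estimating a proportion (e.g. the proportion of an audience that learned something new). The margin of error and confidence level shown are illustrative assumptions, not recommendations:

```python
import math

def sample_size_for_proportion(margin_of_error=0.05, z=1.96, p=0.5):
    """Minimum sample needed to estimate a population proportion.

    Standard formula n = z^2 * p * (1 - p) / e^2, with p = 0.5 as the
    most conservative assumption about the unknown true proportion.
    """
    return math.ceil(z**2 * p * (1 - p) / margin_of_error**2)

# Roughly 385 respondents gives a +/-5% margin of error at 95% confidence,
# regardless of audience size; tightening to +/-2.5% roughly quadruples it.
print(sample_size_for_proportion())       # 385
print(sample_size_for_proportion(0.025))  # 1537
```

The useful intuition here is that a robust sample is driven by the margin of error you need, not by the size of the audience, which is why professional polls of around 1,000 people can speak for audiences of millions.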


Broadly speaking, there are three ways of evaluating impacts arising from media engagement, though many alternatives are available: 1) social media analysis; 2) a funnel approach, in which you direct a proportion of an audience to a website or other mechanism where you can follow up with them; and 3) before/after polling of media audiences.

1. Social media analysis


The first approach is inexpensive and may not take too much time (depending on the sort of analysis you do), but has some important limitations. Social media analysis might sound like a daunting prospect if you have not done it before, but it can be surprisingly accessible:

  • Quantitative content analysis can be used to count the frequency with which particular words or phrases appear in media coverage (and trends can be tracked over time, looking for peaks that might correspond to public engagement activities). Similarly, this technique can be used to characterise a body of text that is known to relate to the public engagement that is being evaluated (e.g. newspaper cuttings or tweets on an event hashtag), based on the frequency of words within that body of text (see the frequency-counting sketch after this list). 

  • Although more time consuming, qualitative analysis of text that has been aggregated using a hashtag or keyword search can offer more nuanced insights into the nature of debate stimulated by public engagement. Changes in the amount and nature of discourse may be tracked over time. 

  • Evidence of reach may be gathered for particular messages (e.g. number of retweets for particular tweets, and where possible the reach and impressions for that tweet), to evaluate which messages gained most traction. 

  • It may also be possible to study the diversity of people discussing (or liking or retweeting etc.) media stories linked to research, or discussing the research directly, where profile information is available e.g. based on gender and interests. Similar information may be sought from comments under mass media articles, but these are typically less frequent and less likely to be linked to profile information.

  • Alternatively, more nuanced findings can be gained from a qualitative analysis of social media comments, identifying key themes and using these to build rich descriptions of different responses to the research, illustrated by quotes.

  • Finally, it is possible to reach out directly to social media users who made comments to ask additional questions, collecting further qualitative data on the platform, or inviting them to complete an online survey or take part in a telephone interview.
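
To make the quantitative content analysis described above concrete, here is a minimal sketch that counts how often keywords linked to the research appear in a set of posts or cuttings per day, so that peaks can be compared against the timing of engagement activity. It assumes posts have been exported to a CSV file; the filename, column names and keywords are all hypothetical:

```python
import csv
from collections import Counter

# Hypothetical export: one row per post or cutting, with a 'date' column
# (YYYY-MM-DD) and a 'text' column. Most social listening tools and media
# monitoring services can produce a CSV along these lines.
KEYWORDS = {"maggot", "therapy", "wound"}  # terms linked to your research

daily_counts = Counter()
with open("media_mentions.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        words = set(row["text"].lower().split())
        if words & KEYWORDS:
            daily_counts[row["date"]] += 1

# A simple time series: peaks may correspond to broadcasts or articles
for date in sorted(daily_counts):
    print(date, daily_counts[date])
```

For small datasets a spreadsheet will do the same job; the advantage of a script is that it can be re-run unchanged each time you export new data.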

It is important to note that social media analysis comes with a number of limitations and ethical challenges. Social media users are unlikely to be representative of the overall audience engaging with the media; for example, different platforms have distinct demographic and geographical biases. It is also important to get ethical approval for this sort of evaluation, and different universities often have quite different norms and rules around social media research. While some argue that those posting social media comments understand that the material is publicly available for anyone to view, it is very difficult to prove that you have obtained informed consent to analyse those comments.


2. The funnel approach


The second approach is still inexpensive, but requires a more significant investment of time, whilst still suffering from the same biases as social media analysis. Using this approach, the media opportunity is used to funnel a proportion of those engaging with the media to a website, social media account and/or survey. In the funnel analogy, the media opportunity is the wide top of the funnel, where many people engage; those who are interested are then directed down towards the narrow bottom of the funnel, a targeted place where you have the opportunity to engage with them directly.

There are three steps:

  1. Identify or create a target location (the bottom of the funnel) to which you will direct those engaging with the media, for example a website, landing page (of a website) or a social media account. 

  2. Direct those engaging with the media to the target location. This is easier to do in live interviews than in pre-recorded interviews or newspaper articles, where the journalist may edit out the reference to your funnel. In some cases, it is possible to work with production teams to link to free resources at the end of a programme, or to work with their social media team to get links to your funnel put out alongside messages about a broadcast. Where this is not possible, you may be able to work with your press office to co-ordinate social media around the media activity. With some forward planning, you may be able to line up a few social media influencers with large and relevant followings to amplify these messages at the relevant time, to get further engagement. In this way, it may be possible to capture the interest of a proportion of those engaging with the media, taking them to the funnel (see the link-tagging sketch after this list). It may be possible to get more engagement if there is a clear benefit of visiting your website, for example exclusive unseen footage, a toolkit or guide, fun quizzes or games, a free e-book or some other benefit you think those engaging with the media would appreciate. 

  3. Engage with them once they arrive at your website or social media account. The goal of your engagement is: 1) to deliver further benefits, deepening the impact; and 2) to get their permission to follow up with them in future to find out more about how they have benefited (from the media engagement and/or their further engagement via your own materials). For example, you might include a quiz to test people’s knowledge based on what they learned from the media they engaged with. Although this does not provide rigorous before/after data, you can ask whether they were aware of the findings before, or had only become aware after they engaged with your work via the media. You can also double this up as a Twitter poll to get a larger sample. However, to get more rigorous data, you ideally want to persuade a proportion of those visiting your site to give you their email address (in line with GDPR rules), and those visiting your social media account to follow you. One way to do this is to offer them a resource, with the “payment” being that they provide you with their email address on the understanding that you will follow up to ask how they are using what they have downloaded. While this may reduce the number of people who download your resource, you will know the identity of those who do, and you will be able to ask them questions that can help you improve the resource and gather evidence of impact (if they have benefited from its use). As long as they remain on your mailing list, you will also have the opportunity to deepen their interest and generate more impact by providing them with additional resources or opportunities, such as attending public lectures or other events linked to the research and their interests. 
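
One way to see how many visitors each route into your funnel actually delivered is to tag every link you share with UTM parameters, which analytics tools such as Google Analytics (mentioned above among the reach metrics) report against each campaign. A minimal sketch; the landing page URL and campaign names are hypothetical:

```python
from urllib.parse import urlencode

def tagged_link(base_url: str, source: str, medium: str, campaign: str) -> str:
    """Append standard UTM parameters so analytics tools can attribute visits."""
    params = urlencode({
        "utm_source": source,      # where the link was shared
        "utm_medium": medium,      # the type of channel
        "utm_campaign": campaign,  # one name per media opportunity
    })
    return f"{base_url}?{params}"

# Hypothetical landing page: a different source per channel lets you compare
# how many visitors arrived via your press office, influencers, etc.
print(tagged_link("https://example.ac.uk/project", "twitter", "social", "bbc_documentary"))
print(tagged_link("https://example.ac.uk/project", "press_office", "social", "bbc_documentary"))
```

Used consistently across every channel, this costs nothing beyond a little planning, and gives you a breakdown of which parts of your media activity actually drove people into the funnel.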

Funnel approach case study

A researcher studying media and everyday life under communism drew on contacts from her press office and academic colleagues in her University’s School of English to build relationships with a range of TV and radio producers with related interests. This led to a BBC documentary focussed on her work. She couldn’t afford to run a large before/after survey, and the producers weren’t willing to link to her University project website from the programme itself. She therefore worked with her press office to design a social media strategy to amplify the messages from the documentary and capture evidence that people’s understanding and/or attitudes had changed.


On the evening it aired, her press office advertised the documentary across multiple social media channels with a link to her project webpage, where viewers could access exclusive additional content. The press office social media received significantly more engagement than the BBC’s own official social media channels, thanks to a co-ordinated campaign to reach key social media influencers in advance so that they shared the press office message with their networks on the evening in question. A Twitter poll, which also appeared on the project website, showed that the majority of people who watched the programme had changed their perceptions of cultural life under communism and said they would be interested in attending an exhibition based on the research. This was supported by a qualitative analysis of social media comments made by viewers during and after the programme.


Those who visited the project website were encouraged to sign up for a newsletter in return for a free exhibition guide, which previewed previously unseen images. Those who signed up for the newsletter were given more information about the exhibition and sent a follow-up survey three months later to see how much learning people had retained and whether they had acted on what they had learned in any way. As a result, the documentary helped generate interest in the subsequent exhibition, which attracted audiences of Eastern European heritage who were previously under-represented among exhibition visitors. A survey at the museum, combined with data on visitor numbers and takings, provided robust evidence that the research had indeed changed how substantial numbers of people viewed media and everyday life under communism.

3. Before/after polling data


The third approach is more rigorous and relatively time-efficient, but it is expensive. You can commission polling companies to survey a sample before and after the media featuring your research goes out. Polling data is typically used to assess changes in understanding and attitudes, but can also be used to assess whether audiences formed intentions to act, or performed specific actions suggested to them in the content they engaged with. 


There are specialist polling companies that can reach the viewers, listeners and readers of specific media outlets or programmes in a highly targeted way. For example, Prof Yamni Nigam from Swansea University had her research on maggot therapy featured in four episodes of the TV soap Casualty, which has over 4 million weekly viewers. She commissioned a specialist TV polling company to find out what proportion of Casualty viewers were aware of maggot therapy and its benefits for treating wounds that were resistant to antibiotics, and the extent to which they viewed maggot therapy as acceptable or disgusting. After the episodes aired, she had evidence of an increase in awareness and understanding of maggot therapy and a reduction in what she called the “yuk factor”. This was important for her because she had already convinced clinicians to offer maggot therapy on the NHS, but uptake by patients was low due to the disgust they felt towards the treatment. 

Alternatively, you might commission a generalist polling company to obtain a representative sample of the UK population or a particular target audience (e.g. a demographic group), to see what proportion engaged with the media you put out. For those who engaged with the media in question, you can ask how they responded to it; for those who did not, you can get the polling company to present them with the key messages and then probe for their response. For example, Newcastle University designed a before/after poll targeting representative samples of the UK and German populations after media coverage of their research on the health benefits of organic milk coincided with a spike in organic milk sales across Europe. The polls were designed to be conducted a week after planned media work around research on the benefits of a range of organic foods. Respondents were asked whether they had bought organic food in the previous week, whether they had seen media reports about the health benefits of organic food, and if so, whether they thought these reports had influenced their purchasing decisions. Those who had not seen the media reports were given the key findings in a short summary based on the University press release, and were asked if they would be more likely to buy organic food on the basis of what they had heard. At the same time, sales of organic food were tracked to see if there was a sales spike. This time, unlike with the earlier spike in organic milk sales, it would be possible to infer that media coverage of the research was a major factor contributing to any increase in sales. 
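
If you do commission before/after polls, the headline analysis is usually a comparison of two proportions, for example the share of respondents aware of the research findings before versus after the coverage. A polling company will normally report this for you, but as a minimal sketch of the underlying calculation, here is a standard two-proportion z-test with hypothetical numbers:

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-proportion z-test: did the proportion change between two polls?"""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical polls: 18% of 1,000 respondents aware before the coverage,
# 31% of 1,000 aware afterwards.
z = two_proportion_z(180, 1000, 310, 1000)
print(f"z = {z:.2f}")  # |z| > 1.96 indicates a change significant at the 5% level
```

A result like this tells you the change is unlikely to be sampling noise; attributing it to your media coverage still depends on the design of the questions, as in the examples above.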


While polling data might seem like an expensive approach, it may be worth the investment if there is an important enough impact claim that could be proven with the data. However, in addition to the cash, you need to have the foresight to plan your polls well in advance of the media coverage, and this is not always possible.