
How to avoid unintended negative impacts: what can we learn from the Cambridge Analytica controversy


The current controversy over Cambridge Analytica’s role in the 2016 US Presidential election and Brexit referendum raises serious questions about the role of research ethics in the pursuit of impact.

Is there anything we can do to prevent our research being used in ways that make us uncomfortable?

What can we learn from the researchers at the centre of the controversy?

The researchers who developed the techniques that Cambridge Analytica adapted for its own purposes had already made it clear in their peer-reviewed articles that their work could be subverted:

“Commercial companies, governmental institutions, or even one’s Facebook friends could use software to infer attributes such as intelligence, sexual orientation, or political views that an individual may not have intended to share. One can imagine situations in which such predictions, even if incorrect, could pose a threat to an individual’s well-being, freedom, or even life. Importantly, given the ever-increasing amount of digital traces people leave behind, it becomes difficult for individuals to control which of their attributes are being revealed.” (Kosinski et al., 2013)

The researcher who collected the original data, via a popular personality-testing app on Facebook, was right to be suspicious. In his account of events, he described a number of warning signs that other researchers may wish to heed:

  • Although he was approached by a faculty member of his own department, he asked why the colleague was interested in his work, rather than simply trusting the authenticity of the request and the motives behind it

  • He became suspicious when his questions were not answered adequately – in this case, he discovered that his colleague was asking on behalf of a company but was not allowed to reveal its name

  • Eventually, he persuaded his colleague to reveal the name of the company and researched its activities. Although his research did not uncover many of the company’s more questionable activities, he found enough to ring alarm bells when he learned that it described itself as an “election management agency”

  • He continued to dig and found out other things that made him further doubt his colleague’s integrity: for example, he says he discovered that his colleague had secretly registered a company to deal with the election management agency

  • Based on this, he broke off contact and informed the director of his research institute

How can we control what happens to our research once it is published?

Exactly what happened next remains the subject of speculation, and will be covered by the mainstream media as the story develops. However, this story raises an important question in relation to research impact: how can we control what happens to our research once it is published?

As this story suggests, once our work is published in the public domain, anyone can replicate and adapt it without crediting us, delivering impacts that may be highly controversial. This is a common argument against engaging with the impact agenda. However, in the case of Cambridge Analytica, the one step these researchers took that exposed them to risk was simply to publish their work. Because publication is an unavoidable part of research, stories like these are an unavoidable risk and cannot be used as an argument against engaging with impact.

However, there are things that we can do to increase our control over what happens next to our research:

  • Systematically identify potential impact risks (such as unintended negative consequences for different groups in different places or times) and consider ways of mitigating those risks. The researchers in this case had clearly identified potential abuses of their work but, other than warning people of the dangers, appeared to have done little to prevent it being subverted

  • Lead the pursuit of impact pro-actively: you are more likely to retain control of how your work is used than if you respond reactively to requests from others to use it. For example, had the researchers founded their own spin-out company underpinned by a different set of values, they might have been able to legally protect their techniques from being copied by others and used in more questionable ways

  • Take a relational approach to impact, so you are able to assess the trustworthiness of those you work with as you get to know them and build trust on your pathway to impact. By taking your time in this way, you are less likely to enter into relationships with individuals or organisations that have ulterior motives without spotting the warning signs. The researcher behind the original method clearly took this approach, and it prevented him from being directly implicated in the controversy

It is easy to imagine that negative impacts, such as those featuring in the headlines this week, will only happen to other researchers who work on more controversial topics. However, controversy can find almost any researcher and is rarely predictable. Rather than burying our heads in the sand, we need to ask what lessons we can learn from colleagues who are currently in the eye of this media storm, so that the same fate does not befall us.

