In news-that-ought-to-be-satire-but-isn’t, the AV Club reports, via New Scientist, that Facebook has been manipulating users’ feeds in order to test whether it can manipulate their emotions. 689,003 users, to be precise.
The full paper is here, and makes for interesting reading. The researchers found that, yes, emotional states are contagious across networks, even when you only see what someone has typed rather than interacting with them face-to-face. They also found that people who see fewer emotional words become less expressive themselves – a “withdrawal effect”.
Where things get rather concerning is the part where Facebook didn’t bother telling any of its test subjects that they were being tested. US regulations governing human-subjects research make clear that informed consent must be obtained from participants. Informed consent requires subjects to know that research is occurring, to be given a description of the risks involved, and to have the option to refuse to participate without being penalised. None of these things were available to the anonymous people involved in the study.
As it happens, I have to use Facebook for work. I also happen to have a chronic depressive disorder.
It would be interesting to know whether Facebook picked me for its experiment. It’d certainly be interesting to know whether the researchers screened for mental health issues, and how they justified the lack of informed consent about the risks involved, given they had no way to exclude people with psychiatric or psychological disorders that might be exacerbated by emotional manipulation, however tangential or small.
The researchers chose to manipulate the news feed to remove or amplify emotional content, rather than observing the effect of that content after the fact. There’s an argument here that Facebook manipulates the news feed all the time anyway, therefore this is justifiable – but unless Facebook is routinely A/B testing its users’ happiness and emotional wellbeing, the two things are not equivalent. Testing where you click is different to testing what you feel. A 0.02% increase in video watch rates is not the same as a 0.02% increase in emotionally negative statements. One of these things has the potential for harm.
The effect the researchers found, in the end, was very small. That goes some way towards explaining their huge sample size: the actual contagion effect of negativity or positivity on any one individual is so tiny that it’s statistically significant only across a massive pool of people.
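To make that arithmetic concrete, here is a rough back-of-the-envelope sketch in Python. The effect sizes and the power-analysis parameters below are illustrative assumptions, not figures taken from the paper; the point is simply how quickly the sample needed to detect an effect balloons as that effect shrinks.

```python
# Back-of-the-envelope only: roughly how many users per group a two-sample
# comparison needs before a given standardized effect size (Cohen's d) becomes
# statistically detectable. The effect sizes below are illustrative
# assumptions, not the study's reported values.
from statistics import NormalDist

def required_n_per_group(effect_size, alpha=0.05, power=0.8):
    """Approximate per-group sample size for a two-sided, two-sample test of means."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # significance threshold
    z_power = NormalDist().inv_cdf(power)          # desired statistical power
    return 2 * ((z_alpha + z_power) / effect_size) ** 2

for d in (0.2, 0.02, 0.002):  # a conventionally "small" effect, then two far tinier ones
    print(f"effect size {d}: ~{required_n_per_group(d):,.0f} users per group")
```

A conventionally “small” effect shows up with a few hundred people per group; knock a zero or two off it and you need tens of thousands or millions – the sort of territory where a sample of 689,003 users starts to look necessary rather than gratuitous.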
But we know that only because they did the research. What if the effect had been larger? What if the effect on the general population was small, but individuals with certain characteristics – perhaps, say, those with chronic depressive disorders – experienced much larger effects? At what point would the researchers have decided it would be a good idea to tell people, after the fact, that they had been deliberately harmed?
The research was conducted in collaboration with researchers at Cornell and UCSF. Both universities have institutional review boards that are supposed to prevent this kind of misconduct. If you are upset, you can contact them:
Cornell: http://www.irb.cornell.edu
UCSF: http://compliance.ucsf.edu
The researchers also only took into account how many people posted negative (by their software’s definition) Facebook statuses. They have no way of knowing how many people they actually affected, because not everyone posts their more negative feelings online.