Facebook, let’s talk about harm

In news-that-ought-to-be-satire-but-isn’t, the AV Club reports, via New Scientist, that Facebook has been manipulating users’ feeds in order to test whether they can manipulate their emotions. 689,003 users, to be precise.

The full paper is here, and makes for interesting reading. The researchers found that, yes, emotional states are contagious across networks, even when the only contact is text on a screen rather than face-to-face interaction. They also found that people who see fewer emotional words become less expressive themselves – a “withdrawal effect”.

Where things get rather concerning is the part where Facebook didn’t bother telling any of its test subjects that they were being tested. The US has regulations governing research on human subjects that make clear that informed consent must be given. Informed consent requires that subjects know research is occurring, are given a description of the risks involved, and have the option to refuse to participate without being penalised. None of these things was available to the anonymous people involved in the study.

As it happens, I have to use Facebook for work. I also happen to have a chronic depressive disorder.

It would be interesting to know whether Facebook picked me for their experiment. It’d certainly be interesting to know whether they screened for mental health issues, and how they justified the lack of informed consent about the risks involved, given they had no way to screen out people with psychiatric or psychological disorders that might be exacerbated by emotional manipulation, however tangential or small.

The researchers chose to manipulate the news feed in order to remove or amplify emotional content, rather than observing the effect of that content after the fact. There’s an argument here that Facebook manipulates the news feed all the time anyway, therefore this is justifiable – but unless Facebook is routinely A/B testing its users’ happiness and emotional wellbeing, the two things are not equivalent. Testing where you click is different to testing what you feel. A 0.02% increase in video watch rates is not the same as a 0.02% increase in emotionally negative statements. Only one of these things has the potential for harm.

The effect the researchers found, in the end, was very small. That goes some way towards explaining their huge sample size: the actual contagion effect of negativity or positivity on any one individual is so tiny that it’s statistically significant only across a massive pool of people.

But we know that only because they did the research. What if the effect had been larger? What if the effect on the general population was small, but individuals with certain characteristics – perhaps, say, those with chronic depressive disorders – experienced much larger effects? At what point would the researchers have decided it would be a good idea to tell people, after the fact, that they had been deliberately harmed?

Play requires consent

For any game to be a game, to work as play, it requires consent. Everyone has to agree to play, as individuals, and then agree – collectively or individually – the rules by which they’ll play, and the boundaries of the experience: the things that aren’t in the game, as well as the things that are.

You learn this, running live games or even tabletop ones. Playing with other people requires consent from all the participants, in the same way that sex does, and if it’s withdrawn then play with that person has to end. At live events we even set up safe words, ways to stop the fantasy and reassert the real world – we’ve always used “STOP THE GAME” shouted as loud as you can, for the avoidance of doubt – and that’s not just a safety call for injuries. It’s also a “get me out of here”, an “I’m not OK with this”, a withdrawal of consent.

In tabletop games, or at least ones with a good group that might touch on dark themes, it’s pretty common to have a quick discussion of hard limits up front. Some people are fine with body horror in their tabletop play, other people just don’t want to go there during pretendy fun time. Some people are terrified of spiders. Some people don’t want in-character relationships. It’s all fine, as long as you negotiate your boundaries up front and don’t make assumptions. (Sometimes you only find out where your boundaries are in the middle of a game, and that’s OK too. That’s when you step out.)

A fair few videogames forget that consent can be withdrawn, or assume that the act of picking up a controller is consent to anything that happens while playing. They forget to set out their boundaries in advance; they don’t signal strongly enough that this or that theme will come up in play, and that if that’s a problem you might not want to play on. I’ve yet to see a non-text-based videogame that acknowledges there are scenes players might not want to participate in, warns them ahead of time, and lets them skip those scenes specifically without having to stop playing altogether.

There are interesting variations on the rule-setting elements of consent in things like permadeath playthroughs, speed runs, cheats and exploits. Some are players adding extra levels of rules for themselves, defining the experience more tightly than the game does; others are players implicitly trying to break the game’s own defined experience – effectively trying to do things the game itself doesn’t consent to. (Except that by virtue of not being sentient, games can’t consent.)

And there are interesting game spaces springing up in which consent is a serious issue. DayZ and Rust are games in which you can not just die but be taken prisoner, have your avatar’s actions dictated by other players, and be put in situations to which you have not consented. The tale of a player imprisoned in Rust is funny, sure, but it’s also something they haven’t consented to. It’s only fun as long as you’re happy to go along with it, within the experience you want to have. It stops being fun, it stops being play, the minute you as a human being want out.

A few videogames played in group settings or party spaces run into similar problems; I’ve witnessed sessions of Johann Sebastian Joust, for example, in which people who weren’t playing were used as obstacles, or otherwise drawn into the game. The boundary between player and not-player isn’t always as clear as who’s holding the controller, and a player assuming consent to play from a not-player who hasn’t given it can get tricky. It’s irritating at best.

But the worst culprits for failing to understand that play requires consent are not really game creators at all. Gamification in the workplace, which is still around and still annoying me, takes the idea of playful activity and participation and makes it compulsory. By removing the ability to refuse consent, it removes a player’s ability to play. Meta-game mechanics like points, scoreboards and achievements (none of which are actual game mechanics) rely on a playable game underneath in order to function. Without play, an achievement is no more a game than an exam certificate is. It’s all just work, which you must now do while you’re smiling.