Trigger warnings: a broken system with good intentions

This is an interesting thing: a New Review post that looks at the history and present of trigger warnings, and how they’ve moved out of communities online and into public life and spaces. If you don’t know what a trigger warning is, it’s essentially a note indicating that you might be about to encounter something upsetting, something that could negatively affect your psychological wellbeing; they’ve grown out of supportive communities in which people needed to carefully negotiate conversations about subjects that need to be spoken about, but that also could prove detrimental to readers’ health. The roots, however, aren’t quite as simple as the New Review piece paints them:

Initially, trigger warnings were used in self-help and feminist forums to help readers who might have post traumatic stress disorder to avoid graphic content that might cause painful memories, flashbacks, or panic attacks. Some websites, like Bodies Under Siege, a self-injury support message board, developed systems of adding abbreviated topic tags—from SI (self injury) to ED (eating disorders)—to particularly explicit posts. As the Internet grew, warnings became more popular, and critics began to question their use.

It’s rare to see an article on trigger warnings mentioning Bodies Under Siege, despite its early adoption of warnings as a way for its users to safeguard themselves. It’s a shame, then, that the piece skips over the ways trigger warnings were used there in the late 90s, when I was an active user. They were not a way for users with PTSD specifically to avoid harm; they were for all users – including those without mental health issues – to avoid subjects that could trigger them into unsafe behaviour, or that they didn’t have the mental energy to tackle. They were carefully considered and carefully enforced alongside a list of verboten things that mods would delete on sight: discussions of weights, calorie counts, numbers of self-inflicted wounds, images. Those things were not done lightly. Bodies Under Siege was a community of vulnerable people struggling with mental illnesses of various degrees, and it was built entirely around recovery and support. Trigger warnings and removal of things that could prompt ‘competitive’ behaviour were not courtesies. They were absolutely integral to the community’s existence.

I used a couple of other forums for people who self-harmed in my teens. BUS was the one that did not make me worse. The first was directly analogous to pro-anorexia communities: at its worst it provided encouragement to hurt yourself, and at best it simply reinforced the behaviour, a reassurance that self-injury was an OK thing to do. It was not a healthy space. The second tried to be about recovery, but allowed images and discussions of self-injury particulars. It was a deeply conflicted space as a result: if you were feeling OK, you could quite easily end up feeling worse after a visit. If you were already feeling bad, you went there knowing it would most likely spiral downwards, playing Russian roulette with your feelings. You would, almost without doubt, stumble across something that could tip you from ‘maybe I could hurt myself’ into the act.

Trigger warnings on BUS made it safe from that concern. It was a place you could go while feeling awful to try to be strong. It had thread after thread of distraction games, little time-wasting things you could do to stave off the need to self-injure. It had questionnaires to fill in before you did it, drawn up by users and psych professionals, and questionnaires to fill in afterwards. It had resources for asking for treatment, for dealing with emergency care, for supporting others. It had safe spaces for parents, partners, carers to socialise. It had diary threads you could post in and read, if you were well enough, and those diaries came by convention with warnings about the content. If you didn’t want to engage with the illnesses of others, for fear of worsening your own, you did not have to.

Words cannot express how valuable trigger warnings were to me, or to many of the other users on BUS. Not just those with PTSD, or anxiety disorders, or specific trauma-related illnesses; not even just those who self-harmed or those with eating disorders; all of us who used that space benefitted from its policies on keeping us safe.

Trigger warnings on the web were born in communities trying to balance the need to speak with the need not to hear. Those communities were closed, or at least only partially open; LiveJournal communities where membership rules could be enforced, forums and BBs where mods had control over members’ posts. Trigger warnings do not translate well to public spaces – Tumblr tags, Twitter, even Facebook groups, or some of the real-life scenarios mentioned in the New Review article – because those needs are different for the wider community. Interestingly, some Tumblr tags do take content warnings well – conventions have grown up around those tags, and those who transgress those conventions are essentially moderated out by the existing users. But there’s no system to support that, nothing to stop a sustained invasion, no way to organise that space to support that use.

But just as it is inadvisable to add trigger warnings to everything based on the possibility of harm, it is inadvisable to remove them from everything based on disbelief in their effectiveness. In communities focussed on mental health and recovery, trigger warnings are absolutely necessary for users. Whether college classes, campuses or the Huffington Post need the same level of consideration is a valid question, certainly, and one worth asking. If you want people with disabilities to be able to participate fully in your spaces, you’d better be thinking about accessibility in terms of triggers and mental wellbeing as well as wheelchair ramps and sign language. And that doesn’t always need to be in formal language: sometimes it’s as simple as editing a tweeted headline to include the word ‘distressing’, to give your followers the choice about what they click on.

The New Review piece concludes:

Trigger warnings are presented as a gesture of empathy, but the irony is they lead only to more solipsism, an over-preoccupation with one’s own feelings—much to the detriment of society as a whole. Structuring public life around the most fragile personal sensitivities will only restrict all of our horizons. Engaging with ideas involves risk, and slapping warnings on them only undermines the principle of intellectual exploration. We cannot anticipate every potential trigger—the world, like the Internet, is too large and unwieldy. But even if we could, why would we want to? Bending the world to accommodate our personal frailties does not help us overcome them.

There is no way to stop every vulnerable person from coming across things that will make them more vulnerable. There is, however, courtesy and consideration, and a need for equal access for those with mental health issues. Those are not small things. There is a valuable, important baby being thrown out with this bathwater.

UsVsTh3m turns comments on

UsVsTh3m has decided to give Th3m a direct voice on site, and turned its comments on.

That’s perhaps not a huge surprise, given Rob Manuel’s involvement – he’s talked in the past about the class issues involved in online commenting, as well as presiding over one of the most interesting hotbeds of user activity on the internet. But it runs counter to a long-term trend of sites shutting down comments, deliberately deciding that they’re too much work, too unruly, too problematic, or even counter to the entire purpose of what the site’s trying to do.

It’s a nice start, opening with a joke and a clear prompt to participate, and a potential reward for excellence in the form of inclusion in the daily newsletter – a promise of internet bragging rights that acts as an incentive to be awesome, rather than merely guidelines that tell you how not to be bad. Worth noting that Rob’s participating there too.

It’ll be an interesting experiment to watch, and if a creative community of jokers is what UsVsTh3m is after, they seem to have started out pretty well.

11 quick thoughts on the new Steam reviews

Steam reviews are a thing now, apparently.

Now it’s easy to see what other Steam users think about a product before you buy. With Steam Reviews, you can browse for reviews that others have found helpful, or write your own reviews for titles you’ve played on Steam.

A few quick thoughts in no particular order:

  1. Valve is displaying the time you’ve spent on a particular game next to your review. That’s interesting: it suggests they might also use it as a ranking factor for your review. It certainly means people will judge your review as less helpful if you’ve spent less time in the game than others. For positive reviews maybe that makes some sense; for negative reviews maybe it doesn’t, so much: I don’t need to play 20 hours of Duke Nukem Forever to know it’s awful, or more than 5 minutes of the PC port of Fez to know it’s unplayably crashy on my setup.
  2. They’re also flagging up the number of things you’ve bought on Steam, even ahead of your Steam level (which is to some extent a proxy for money spent). That’s an even more interesting choice, because it is almost certainly going to affect how people see the review on a subconscious level.
  3. You have to launch the game via Steam in order to review it. So I can’t review some of the games I’ve played most, because I didn’t buy them on Steam. Platform lock-in. But I also can’t review games just for the sake of hating on them from a distance, which deals with some of the Metacritic & Amazon swarming problems.
  4. But what I can do, if I want to game this system, is launch the game once, leave it on overnight to gather Steam cards & game cred, and then review it. Whether anyone will care enough to actually do that is an open question at this point.
  5. The only ranking factor they specifically mention is time – ie more recent reviews will be visible on game pages – and that’s framed as a good thing for the devs. But there will be others – game time and helpfulness are the obvious ones, but Valve would be daft not to include things like friendship data, similarity of game libraries etc in personalising reviews for individual readers.
  6. They’re defaulting to post-moderation, removing or hiding things when flagged, and not giving devs the ability to hide things directly without moderator input. That makes some sense (hide all negative reviews won’t be a valid strategy) but is also potentially concerning (we don’t yet know how much moderator support they have, or the moderation guidelines by which they’re operating, or the speed with which they’ll respond, or… etc).
  7. This could be a serious Metacritic competitor, because of Steam’s metadata about who’s played what games for how long, which could tie into an authority system using upvotes and activity more generally…
  8. …but (at the moment) they’re not including a scoring system, just recommend vs not recommend. Thankfully. Any numerical system would be exactly as open to abuse as the current Metacritic system is, with all the existing issues about people only looking at the score when purchasing or devs’ pay/bonuses being dependent on numerical scores that are, let’s be honest here, based on spit and whimsy and nothing more.
  9. The language stuff – allowing users to review games in their own languages and search for reviews in particular languages – is great for users especially in areas underserved by games press. And potentially a nightmare for devs, if they can’t translate.
  10. Helpful vs non-helpful is a nice way to harness the middle bit of the 1:9:90 rule.
  11. Mutualisation is interesting. I wonder how many devs and users were clamouring for this feature.
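To make point 5 concrete, here’s a purely speculative sketch of how such a ranking might combine signals. Nothing here is confirmed by Valve – the weights, the half-life, and the function itself are all invented for illustration – but it shows how recency, relative playtime and helpfulness votes could fold into a single score:

```python
import math
from dataclasses import dataclass

@dataclass
class Review:
    age_days: float        # days since the review was posted
    playtime_hours: float  # reviewer's time in the game
    helpful_votes: int
    total_votes: int

def rank_score(r: Review, median_playtime: float) -> float:
    """Hypothetical ranking: recency * playtime weight * helpfulness."""
    # Recency: exponential decay with an arbitrary 30-day half-life.
    recency = math.exp(-r.age_days * math.log(2) / 30.0)
    # Playtime relative to the game's median, capped at 2x so
    # thousand-hour players don't dominate outright.
    playtime = min(r.playtime_hours / max(median_playtime, 0.1), 2.0)
    # Helpfulness: Laplace-smoothed fraction of helpful votes, so
    # unvoted reviews start at 0.5 rather than 0.
    helpfulness = (r.helpful_votes + 1) / (r.total_votes + 2)
    return recency * (0.5 + 0.25 * playtime) * helpfulness
```

Under this sketch a fresh, well-voted review from a committed player comfortably outranks a stale, unvoted one from someone who barely launched the game – which is roughly the behaviour the on-page signals seem designed to encourage.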

Scientifically accurate comments

PopSci has turned off its comments, citing research demonstrating that rude comments beneath an article polarise a reader’s opinion of its content, and tend to make people doubt the science involved.

Given their stated aims, it seems like a reasonable move – if they’re not going to be able to give the conversation the attention it needs, and especially if they’re facing coordinated astroturfing to undermine their science. They don’t owe anyone a platform.

A politically motivated, decades-long war on expertise has eroded the popular consensus on a wide variety of scientifically validated topics. Everything, from evolution to the origins of climate change, is mistakenly up for grabs again. Scientific certainty is just another thing for two people to “debate” on television. And because comments sections tend to be a grotesque reflection of the media culture surrounding them, the cynical work of undermining bedrock scientific doctrine is now being done beneath our own stories, within a website devoted to championing science.

Has anyone yet tried a pre-mod commenting policy requiring scientific accuracy and cited sources in comments? That could be an interesting community – labour-intensive for the moderators and maintainers, but a fascinating place for expert discussion.

Social places, not networks

In the light of recent events, this post from earlier this month seems timely:

Some years ago, the tech industry set out to redefine our perception of the web. Facebook (and other similar sites) grew at amazing rates and their reasonable focus on the “social network” and the “social graph”, made “social networks” the new kid on the block.

But even though the connections of each individual user are his social network, these sites are not social networks. They are social networking places.

This is an important distinction. They are places, not networks. Much like your office, school, university, the place where you usually spend your summer vacation, the pub where your buddies hang out or your hometown.

And, much like your office, school, university, etc, they all have their own behavioural expectations and norms. When those spaces get big and full of people jostling for room, if they aren’t broken up into their own smaller spaces – or if the partitions are porous – those differing expectations rub up against each other in all sorts of interesting and problematic ways.

The Twitter I have is not the Twitter you have, because we follow different folks and interact with them in our own ways. There are pretty regular examples of this disparity: when people write posts about how Twitter’s changed, it’s no fun any more, but the reality is that it’s just the folks they follow and talk to that have changed how they use it. My Twitter experience doesn’t reflect that – I’m in a different space with different people.

Part of the abuse problem all online spaces face is working out their own norms of behaviour and how to deal with incidents that contravene them. One of the particular problems faced by Twitter and a few others is how to deal with incidents that turn up because of many different, overlapping, interconnected spaces and the different expectations of each one.

And on practical ways to handle those problems, go read this excellent post by an experienced moderator. It’s too good to quote chunks here.

Is online abuse increasing, or are we just less tolerant of it?

A thought that follows on from yesterday’s post about Twitter and freedom of speech: it’s easy, I think, to see all the anger and distress caused by online abuse and come to the conclusion that it’s a growing problem. That social spaces online are increasingly hostile to women and other minorities, and that such incidents are increasing in both frequency and severity. In short, it’s easy to think that things are getting worse.

But I don’t believe that’s true. Social spaces online have historically always been fairly unpleasant places to be a visible minority, with notable exceptions. Usenet wasn’t a fun place to be openly female. Neither were early IRC channels (a/s/l and all). Parts of 4chan and Reddit still aren’t. But as online space has become easier to enter, easier to use, more important and less socially obscure, a broader section of society has colonised it. I learned when I was about 12 that you don’t admit your gender online, if you’re female; it’s less than three months since I first felt comfortable using a real picture of my face as my avatar, knowing what that can open you up to.

The evolution over the last couple of years has been that more women and other minorities feel safe enough online to be visible at all, rather than hiding behind the default masculine assumption that comes with anonymity and some pseudonymity. The target pool for abuse is larger, because more people are unafraid to simply be in public.

At the same time, the backlash to such behaviour is more visible and more outspoken. Abuse and threats are increasingly seen as unacceptable. That means more visibility for particularly reprehensible abuse, where a decade ago it would have been more hidden and harder to speak out against. The availability heuristic means people are more likely to overestimate the frequency of abuse now as opposed to abuse years ago, because they can think of more recent visible examples – not necessarily because it’s more frequent, but because it’s more frequently spoken of. It also means that social norms are changing for the better.

Maybe this is too optimistic a take. But I’d like to believe so.

Twitter’s freedom of speech

Caroline Criado-Perez, the journalist who successfully campaigned for Jane Austen to appear on British banknotes, has been subjected to a horrendous barrage of threats and abuse on Twitter, and has called for Twitter to improve the way it deals with abuse. Her supporters kicked off a petition asking Twitter for a better system, and they’ve had some success. The whole saga as it unfolded has been Storified by @kegill.

Twitter’s now said it will step up work on a ‘report abuse’ button for individual tweets. That’s a good step, but a button with nothing behind it is just a placebo: it won’t work unless reports lead to action. Xbox Live’s community is enough to prove that abuse reports without enforcement are pointless, and that placebo buttons aren’t enough to deter campaigns of abuse or unpleasant individuals. And Facebook’s trigger-happy abuse policies are enough to prove that automated responses based on volumes of reports aren’t nuanced enough to be appropriate here either.

The problem is a human one, and it may be impossible to automate. That doesn’t mean it shouldn’t be tried, nor that the work is unimportant. Watching an abuse queue might not be the best way to solve the problem, nor a sustainable or scalable one. But I would love to see Twitter innovate around this issue. Moderation tools that understand the patterns of abuse on Twitter don’t yet exist, as far as I’m aware – and if they do exist, they clearly don’t work. I wonder what would happen if the same effort went in to understanding and predicting organised campaigns of abuse as spam campaigns.
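As a thought experiment on that spam analogy: one of the signatures of an organised pile-on is a burst of @-mentions from many distinct, mostly brand-new accounts in a short window – much like a spam run. The heuristic below is entirely hypothetical (it is not anything Twitter does, and every threshold is an invented assumption), but it sketches how spam-style burst detection might transfer to abuse campaigns:

```python
from dataclasses import dataclass

@dataclass
class Mention:
    sender: str
    sender_age_days: int  # account age of the sending account
    timestamp: float      # seconds since some epoch

def looks_coordinated(mentions, window=3600.0,
                      min_senders=20, max_age_days=30):
    """Flag a burst of mentions resembling a coordinated campaign:
    many distinct senders, mostly young accounts, inside one window."""
    if not mentions:
        return False
    mentions = sorted(mentions, key=lambda m: m.timestamp)
    start = 0
    for end in range(len(mentions)):
        # Slide the window so it spans at most `window` seconds.
        while mentions[end].timestamp - mentions[start].timestamp > window:
            start += 1
        window_slice = mentions[start:end + 1]
        senders = {m.sender for m in window_slice}
        young = sum(1 for m in window_slice
                    if m.sender_age_days <= max_age_days)
        if len(senders) >= min_senders and young / len(window_slice) > 0.5:
            return True
    return False
```

A real system would need far more than this – sockpuppet detection, text features, appeal routes, and humans in the loop – but even a crude signal like this could prioritise a moderation queue the way spam scores prioritise a mail filter.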

I do not believe a solution is impossible. I do doubt whether Twitter thinks it’s important enough to devote significant resources to, for now, and I suspect it will continue to use freedom of speech as a convenient baffle.

If freedom of speech on Twitter means freedom to abuse, freedom to harass and to threaten, then speech on Twitter is not free. Freedom of speech for abusers means curtailed speech for victims. What critics of moderation tend not to understand is that both options force people to be silent. What supporters tend to believe is that it is better for the community as a whole to silence abusers than to allow victims to be silenced.

IGN’s commitment to changing its comment culture

Some of the comments on the IGN announcement of their new moderation policy. As they say, there’s a long way to go and a lot of work to be done before the change takes hold.

IGN, one of the largest gaming sites in the world, has recently announced changes to its commenting policy explicitly aimed at tackling the culture of abuse in its threads. In a blog post announcing the change, editor-in-chief Steve Butts says:

Will that mean we won’t tolerate disagreement or fiery debates? Not at all. We’re an audience of advocates who come to IGN because we feel passionately about certain platforms, products, and philosophies. Being able to express and defend those tastes is part of why we’re here. Articulate disagreements about those tastes are a healthy and necessary part of those interactions. The comment guidelines aren’t meant to stop that.

The problem comes when a disagreement stops being about the merits of the argument and starts being about the people making it. It’s okay for us to disagree with each other, but we won’t tolerate abuse and threats disguised as disagreement. We also won’t tolerate ad hominem attacks, where you insult a person’s character or identity merely because you don’t like that they’re not the same person as you. None of us are perfect, and we all have bad days, of course, but we can’t let a difference of opinion devolve into being nasty to each other.

The context to this change, on top of years of growing hostility in the comment threads at IGN and elsewhere, is an open letter posted on Reaction last month by Samantha Allen, calling games media generally and IGN among others specifically to account over the toxic discussions they host below articles. It is worth reading in full, repeatedly; it’s a measured, articulate, passionate piece that firmly places responsibility for debates in comment threads with the sites that host those debates, and gives three clear calls to action for those in a position to change those debates. Addressing site editors by name, it says:

We have a problem and you can do something about it.

Our medium and the culture surrounding it is still in its adolescence and we’ve been experiencing a lot of growing pains lately. Those of us in the games community who are a part of marginalized groups have been going through hell lately. You can help us. You can do more than just express sympathy.

“The arc of the moral universe is long, but it bends toward justice.” You have a chance, right now, to shorten that arc. You are in positions of power and privilege. You have the luxury of being able to effect change at a level that we can only dream about.

Framing commenting and community policy and moderation as a moral issue is not new, but locating responsibility squarely with sites and publishers, rather than the commenters who frequent them, is a quietly revolutionary attitude. And a right one: much as people who run social spaces in the real world take on responsibility for enforcing behaviour norms within those spaces, people who open up social spaces online have to enforce the behaviour they want to see within them too. Simply opening a door then washing your hands of the damage caused is not enough.

IGN’s new policy is interesting not least because of its relative mildness. It bans personal attacks and discrimination, while encouraging debate and disagreement; it bans trolling, flaming and spam while permitting sensible pseudonymity. There’s also a section on questionable content, to act as a sort of catch-all:

Since we can’t have a rule to cover everything, this is the rule to, well, cover everything. These are public discussions, so act like you would if you were in a public place (a nice place). These issues are left to the discretion of individual moderators and staff, but may include any material that is knowingly false and/or defamatory, misleading, spammy, inaccurate, abusive, vulgar, hateful, harassing, sexist, obscene, racist, profane, sexually oriented, threatening, invasive of a person’s privacy, that otherwise violates any law, or that encourages conduct constituting a criminal offense. Asking for or offering any of the material listed above is also not permitted.

It’s a sensible policy and it’s excellent to see IGN taking responsibility for the comments on their site and committing to improving the discussion. They’re being careful not to throw the baby out with the bathwater, keeping what’s good about their community and reinforcing the positive behaviours they want to see – rather than turfing over the comment section, closing it or outsourcing it. I hope it comes with increased mod resource and support, and the buy-in of their writers too. It’s a strong commitment, and I hope their actions speak as loudly as their words on this – and that more sites follow their lead.

People are still people, even when typing

Adam Tinworth, in a piece from 2007 that he tweeted earlier today, gives 10 things he’s learned about online community that still hold true:

  1. Whatever you do, don’t listen to the loudest voices in preference to the rest
  2. You can’t avoid conflict in the community, and even splits, no matter how hard you try to control who joins
  3. Calming voices are invaluable
  4. Controlling voices are deadly
  5. Conversations that drift off topic and into running jokes are the sign of a good community developing – but if it goes too far, it alienates newcomers…

Read them all – they’re short, well-phrased and insightful, and every single one also applies equally well to communities offline. People are people all over, whether they’re communicating in text or in person, and the same dramas, difficulties, successes and failures play out online as they do in meatspace.

If you don’t want to talk to people, turn your comments off

Advance warning: long post is long, and opinionated. Please, if you disagree, help me improve my thinking on this subject. And if you have more good examples or resources to share, please do.

News websites have a problem.

Well, OK, they have a lot of problems. The one I want to talk about is the comments. Generally, the standard of discourse on news websites is pretty low. It’s become almost an industry standard to have all manner of unpleasantness below the line on news stories.

Really, this isn’t limited to news comments. All over the web, people are discovering a new ability to speak without constraints, with far fewer consequences than speech acts offline, and to explore and colonise new spaces in which to converse.
