Reddit meltdown: how not to build a community

Reddit is having a bit of a meltdown. Volunteer moderators have set many of the site’s most popular and heavily trafficked communities to private, making them impossible to read or participate in. Many others are staying open because of their purpose (to inform or to educate), but are making clear statements of support for the issues raised.

The shutdown began in protest at the sudden dismissal of Victoria Taylor, Reddit’s director of communications, who coordinated the site’s Ask Me Anything feature. But it’s more than that: according to many prominent subreddit mods, communities beyond r/IAmA are going dark over longstanding issues with the treatment of moderators, poor communication and inadequate moderation tools.

Really good community management matters. Communication matters. Being heard matters enormously to users, and the more work an individual is doing for the site, the more it matters to them personally.

Relying solely on volunteer moderators and community self-organisation limits what’s possible, because without the company’s support – both negative, in terms of banning and sanctioning, and positive, in terms of tools, recognition and organisation – its users can’t effect significant change. What’s possible with buy-in from Reddit staff is far more interesting than what’s possible without it – the AMAs Victoria supported are the prime example. It should concern Reddit that there are so few others.

Communities grow and evolve through positive reinforcement, not just punishment when they contravene the rules. If the only time users get attention is when they push the boundaries, they will keep pushing boundaries rather than creating constructively: they act out. Encouraging positive behaviour is vitally important if you want to shape a community around certain positive activities – say, asking questions – rather than focussing on its negatives.

That encouragement extends to offering the community leaders the tools they need to lead. The majority of moderators of Reddit’s default communities – the most popular ones on the site – use third-party tools because the site’s own architecture makes their work impossible otherwise. That should not be the case.

And evolving communities need consistent procedures and policies, and those have to be implemented by someone with power as well as the trust and respect of the community. Power is relatively easy: any Reddit admin or employee has power in the eyes of the community. Trust and respect are incredibly difficult. They have to be earned, piece by piece, often from individuals disinclined to trust or respect because of the power differential. That work doesn’t scale easily and can’t be mechanised; it’s about relationships.

Today’s meltdown isn’t just about u/chooter, though what’s happened to her is clearly the catalyst. It’s about the fact that she’s (rightly or wrongly) perceived to be the only Reddit admin to have both power and trust. She was seen as the sole company representative who listened, who worked with the community rather than above or around them. She was well-known and, crucially, well-liked.

Reddit needs more Victorias on its staff, not fewer. It needs more admins who are personally known within the community, more people who respond to messages and get involved on an individual level with the mods it relies on to do the hard work of maintaining its communities. It needs internal procedures to pass community issues up the chain and get work done for its super users and those who enable its communities to exist. It needs more positive reinforcement from those in power, especially in the light of increasing (and, I’d say, much-needed) negative reinforcement for certain behaviours; the community needs to see what ‘good’ looks like as well as ‘bad’. Not just spotlighting subreddits and blog posts about gift exchanges – actual, human engagement with the humans using the site.

Firing the figurehead for Reddit-done-right is not a good way to start.

Reddit thinks it’s a government, but doesn’t want to govern

In non-spoof news, today Reddit’s CEO posted a blog post about why it wasn’t going to take down a community specifically devoted to sharing naked photos of celebrities, acquired by hackers and very much not endorsed by those pictured. Then, having drawn a line in the sand, it promptly banned the community.

Unsurprisingly, a lot of users reacted with confusion and not a little anger. They pointed out, among other things, that the ban was more than a little hypocritical if Reddit was going to continue not to police other problematic communities (pro-anorexia and self-harm communities, for instance), and suggested that Reddit responded only because of the status, profile and power of the victims in this instance (the site doesn’t take down revenge porn, for example).

There has since been another round of explanation, which boils down to: Reddit got overwhelmed and therefore had to take action. That actually bolsters some of the arguments made by users – that only the high-profile nature of this incident forced action – but if the first post is to be believed, Reddit doesn’t see that as a problem. It wants the community to choose to be “virtuous” rather than being compelled to do so – it wants its users to govern themselves. But it also thinks it’s a government. Yishan says:

… we consider ourselves not just a company running a website where one can post links and discuss them, but the government of a new type of community. The role and responsibility of a government differs from that of a private corporation, in that it exercises restraint in the usage of its powers.

Yishan simultaneously argues that Reddit users must arrive at their own self-policing anarchic nirvana in which no bad actors exist, and that Reddit is not a corporation but a governing force with both the right to police and, strangely, the responsibility not to do so. Of course Reddit is a corporation, subject to US and international laws. Of course its community is not a state, and its users are not citizens. Yishan is dressing up a slavish devotion to freedom of speech, regardless of consequence, as a lofty ideal rather than as the most convenient way to cope with a community rife with unpleasant, unethical and often unlawful behaviour.

Doxxing, revenge porn, copyright infringement so rampant it’s a running joke, r/PicsOfDeadKids: none of these things are dealt with according to the social norms and laws of the societies of which Reddit is, in reality, a part. Only when admins become overwhelmed is action taken to police the community, and at the same time the CEO declares the site to be, effectively, the creator of its own laws. This would be nothing but self-serving nonsense if it weren’t for the way it is being used to justify ignoring harmful community behaviours.

Reddit’s users are right to point out that the company acts only on high-profile issues; that Reddit’s lack of moral standards for its users allows these situations to develop, and makes it much harder for the company to police them when they do; and that the site’s users suffer as a result of its haphazard approach:

This is just what happens when your stance is that anything goes. If you allow subreddits devoted to sex with dogs, of course people will be outraged when you take down something else. If you allow subreddits like /r/niggers, of course they’re going to be assholes who gang up to brigade. The fine users of /r/jailbait are sharing kiddy porn? What a shocking revelation. The point is, you can’t let the inmates run the asylum and then get shocked when someone smears shit on the wall. Stand up for standards for a change. Actually make a stance for what you want reddit to be. You’ll piss off some people but who cares? They’re the shitty people you don’t want anyway. Instead you just alienate good users who are sick of all of the shit on the walls.

If Reddit thinks it’s a government, it should be considering how to govern well, not how to absolve itself of the responsibility to govern at all.

Trigger warnings: a broken system with good intentions

This is an interesting thing: a New Review post that looks at the history and present of trigger warnings, and at how they’ve moved out of online communities and into public life and spaces. If you don’t know what a trigger warning is, it’s essentially a note indicating that you might be about to encounter something upsetting – something that could negatively affect your psychological wellbeing. Trigger warnings grew out of supportive communities in which people needed to carefully negotiate conversations about subjects that had to be spoken about, but that could also prove detrimental to readers’ health. The roots, however, aren’t quite as simple as the New Review piece paints them:

Initially, trigger warnings were used in self-help and feminist forums to help readers who might have post traumatic stress disorder to avoid graphic content that might cause painful memories, flashbacks, or panic attacks. Some websites, like Bodies Under Siege, a self-injury support message board, developed systems of adding abbreviated topic tags—from SI (self injury) to ED (eating disorders)—to particularly explicit posts. As the Internet grew, warnings became more popular, and critics began to question their use.

It’s rare to see an article on trigger warnings mentioning Bodies Under Siege, despite its early adoption of warnings as a way for its users to safeguard themselves. It’s a shame, then, that the piece skips over the ways trigger warnings were used there in the late 90s, when I was an active user. They were not a way for users with PTSD specifically to avoid harm; they were for all users – including those without mental health issues – to avoid subjects that could trigger them into unsafe behaviour, or that they didn’t have the mental energy to tackle. They were carefully considered and carefully enforced alongside a list of verboten things that mods would delete on sight: discussions of weights, calorie counts, numbers of self-inflicted wounds, images. Those things were not done lightly. Bodies Under Siege was a community of vulnerable people struggling with mental illnesses of various degrees, and it was built entirely around recovery and support. Trigger warnings and removal of things that could prompt ‘competitive’ behaviour were not courtesies. They were absolutely integral to the community’s existence.

In my teens, I used a couple of other forums for people who self-harmed. BUS was the one that did not make me worse. There’s a direct analogy between one of those forums and pro-anorexia communities: at its worst, it provided encouragement to hurt yourself, and at best it simply reinforced the behaviour, a reassurance that self-injury was an OK thing to do. It was not a healthy space. The second, though, tried to be about recovery but allowed images and discussions of the particulars of self-injury. It was a deeply conflicted space as a result: if you were feeling OK, you could quite easily end up feeling worse after a visit. If you were already feeling bad, you went there knowing it would most likely spiral downwards – playing Russian roulette with your feelings. You would, almost without doubt, stumble across something that could tip you from ‘maybe I could hurt myself’ into the act.

Trigger warnings on BUS removed that danger. It was a place you could go while feeling awful to try to be strong. It had thread after thread of distraction games, little time-wasting things you could do to stave off the need to self-injure. It had questionnaires, drawn up by users and psych professionals, to fill in before you hurt yourself, and questionnaires to fill in afterwards. It had resources for asking for treatment, for dealing with emergency care, for supporting others. It had safe spaces for parents, partners and carers to socialise. It had diary threads you could post in and read, if you were well enough, and those diaries came by convention with warnings about their content. If you didn’t want to engage with the illnesses of others, for fear of worsening your own, you did not have to.

Words cannot express how valuable trigger warnings were to me, or to many of the other users on BUS. Not just those with PTSD, or anxiety disorders, or specific trauma-related illnesses; not even just those who self-harmed or those with eating disorders; all of us who used that space benefitted from its policies on keeping us safe.

Trigger warnings on the web were born in communities trying to balance the need to speak with the need not to hear. Those communities were closed, or at least only partially open: LiveJournal communities where membership rules could be enforced, forums and BBs where mods had control over members’ posts. Trigger warnings do not translate well to public spaces – Tumblr tags, Twitter, even Facebook groups, or some of the real-life scenarios mentioned in the New Review article – because the needs of the wider community are different. Interestingly, some Tumblr tags do take content warnings well – conventions have grown up around those tags, and those who transgress them are essentially moderated out by the existing users. But there’s no system to support that, nothing to stop a sustained invasion, no way to organise that space to support that use.

But just as it is inadvisable to add trigger warnings to everything on the off-chance of harm, it is equally inadvisable to remove them from everything out of disbelief in their effectiveness. In communities focussed on mental health and recovery, trigger warnings are absolutely necessary for users. Whether college classes, campuses or the Huffington Post need the same level of consideration is a valid question, and one worth asking. If you want people with disabilities to be able to participate fully in your spaces, you’d better be thinking about accessibility in terms of triggers and mental wellbeing as well as wheelchair ramps and sign language. And that doesn’t always need formal language: sometimes it’s as simple as editing a tweeted headline to include the word ‘distressing’, to give your followers a choice about what they click on.

The New Review piece concludes:

Trigger warnings are presented as a gesture of empathy, but the irony is they lead only to more solipsism, an over-preoccupation with one’s own feelings—much to the detriment of society as a whole. Structuring public life around the most fragile personal sensitivities will only restrict all of our horizons. Engaging with ideas involves risk, and slapping warnings on them only undermines the principle of intellectual exploration. We cannot anticipate every potential trigger—the world, like the Internet, is too large and unwieldy. But even if we could, why would we want to? Bending the world to accommodate our personal frailties does not help us overcome them.

There is no way to stop every vulnerable person from coming across things that will make them more vulnerable. There is, however, courtesy and consideration, and a need for equal access for those with mental health issues. Those are not small things. There is a valuable, important baby being thrown out with this bathwater.

UsVsTh3m turns comments on

UsVsTh3m has decided to give Th3m a direct voice on site, and turned its comments on.

That’s perhaps not a huge surprise, given Rob Manuel’s involvement – he has talked in the past about the class issues involved in online commenting, and has presided over one of the most interesting hotbeds of user activity on the internet. But it runs counter to a long-term trend of sites shutting down comments, deliberately deciding that they’re too much work, too unruly, too problematic, or even counter to the entire purpose of what the site is trying to do.

It’s a nice start, opening with a joke, a clear prompt to participate, and a potential reward for excellence in the form of inclusion in the daily newsletter – a promise of internet bragging rights that acts as an incentive to be awesome, rather than mere guidelines telling you how not to be bad. Worth noting that Rob’s participating there too.

It’ll be an interesting experiment to watch, and if a creative community of jokers is what UsVsTh3m is after, they seem to have started out pretty well.

IGN’s commitment to changing its comment culture

Some of the comments on the IGN announcement of their new moderation policy. As they say, there’s a long way to go before the change takes hold.

IGN, one of the largest gaming sites in the world, has recently announced changes to its commenting policy explicitly aimed at tackling the culture of abuse in its threads. In a blog post announcing the change, editor-in-chief Steve Butts says:

Will that mean we won’t tolerate disagreement or fiery debates? Not at all. We’re an audience of advocates who come to IGN because we feel passionately about certain platforms, products, and philosophies. Being able to express and defend those tastes is part of why we’re here. Articulate disagreements about those tastes are a healthy and necessary part of those interactions. The comment guidelines aren’t meant to stop that.

The problem comes when a disagreement stops being about the merits of the argument and starts being about the people making it. It’s okay for us to disagree with each other, but we won’t tolerate abuse and threats disguised as disagreement. We also won’t tolerate ad hominem attacks, where you insult a person’s character or identity merely because you don’t like that they’re not the same person as you. None of us are perfect, and we all have bad days, of course, but we can’t let a difference of opinion devolve into being nasty to each other.

The context to this change, on top of years of growing hostility in the comment threads at IGN and elsewhere, is an open letter posted on Reaction last month by Samantha Allen, calling games media generally, and IGN among others specifically, to account over the toxic discussions they host below articles. It is worth reading in full, repeatedly: it’s a measured, articulate, passionate piece that firmly places responsibility for debates in comment threads with the sites that host them, and gives three clear calls to action for those in a position to change those debates. Addressing site editors by name, it says:

We have a problem and you can do something about it.

Our medium and the culture surrounding it is still in its adolescence and we’ve been experiencing a lot of growing pains lately. Those of us in the games community who are a part of marginalized groups have been going through hell lately. You can help us. You can do more than just express sympathy.

“The arc of the moral universe is long, but it bends toward justice.” You have a chance, right now, to shorten that arc. You are in positions of power and privilege. You have the luxury of being able to effect change at a level that we can only dream about.

Framing commenting, community policy and moderation as a moral issue is not new, but locating responsibility squarely with sites and publishers, rather than with the commenters who frequent them, is a quietly revolutionary attitude. And a right one: just as people who run social spaces in the real world take on responsibility for enforcing behaviour norms within those spaces, people who open up social spaces online have to enforce the behaviour they want to see within them too. Simply opening a door and then washing your hands of the damage caused is not enough.

IGN’s new policy is interesting not least because of its relative mildness. It bans personal attacks and discrimination, while encouraging debate and disagreement; it bans trolling, flaming and spam while permitting sensible pseudonymity. There’s also a section on questionable content, to act as a sort of catch-all:

Since we can’t have a rule to cover everything, this is the rule to, well, cover everything. These are public discussions, so act like you would if you were in a public place (a nice place). These issues are left to the discretion of individual moderators and staff, but may include any material that is knowingly false and/or defamatory, misleading, spammy, inaccurate, abusive, vulgar, hateful, harassing, sexist, obscene, racist, profane, sexually oriented, threatening, invasive of a person’s privacy, that otherwise violates any law, or that encourages conduct constituting a criminal offense. Asking for or offering any of the material listed above is also not permitted.

It’s a sensible policy and it’s excellent to see IGN taking responsibility for the comments on their site and committing to improving the discussion. They’re being careful not to throw the baby out with the bathwater, keeping what’s good about their community and reinforcing the positive behaviours they want to see – rather than turfing over the comment section, closing it or outsourcing it. I hope it comes with increased mod resource and support, and the buy-in of their writers too. It’s a strong commitment, and I hope their actions speak as loudly as their words on this – and that more sites follow their lead.

People are still people, even when typing

Adam Tinworth, in a piece from 2007 that he tweeted earlier today, gives 10 things he’s learned about online community that still hold true:

  1. Whatever you do, don’t listen to the loudest voices in preference to the rest
  2. You can’t avoid conflict in the community, or even splits, no matter how hard you try to control who joins
  3. Calming voices are invaluable
  4. Controlling voices are deadly
  5. Conversations that drift off topic and into running jokes are the sign of a good community developing – but if it goes too far, it alienates newcomers…

Read them all – they’re short, well-phrased and insightful, and every single one also applies equally well to communities offline. People are people all over, whether they’re communicating in text or in person, and the same dramas, difficulties, successes and failures play out online as they do in meatspace.

If you don’t want to talk to people, turn your comments off

Advance warning: long post is long, and opinionated. Please, if you disagree, help me improve my thinking on this subject. And if you have more good examples or resources to share, please do.

News websites have a problem.

Well, OK, they have a lot of problems. The one I want to talk about is the comments. Generally, the standard of discourse on news websites is pretty low. It’s become almost an industry standard to have all manner of unpleasantness below the line on news stories.

Really, this isn’t limited to news comments. All over the web, people are discovering a new ability to speak without constraints, with far fewer consequences than speech acts offline, and to explore and colonise new spaces in which to converse.


Braindump: just add points

Interesting presentation by Sebastian Deterding looking at what user experience designers can learn from game design.

Although news orgs face very different challenges from UX designers, the basic messages – about shallow versus deep engagement, using multiple interacting points and currencies, and measuring achievement, effort and attainment in a meaningful way – are very relevant. Take a look.

It’s interesting to look at the Huffington Post’s community moderation badges in terms of this presentation. My gut instinct is that they fall, along with Foursquare, into the category of overly simplistic game-like systems (“Just Add Points”) that don’t actually tap into the power and fun of learning – one of the fundamental building blocks of good game design.
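To make that distinction concrete, here’s a minimal Python sketch – entirely hypothetical, modelling neither HuffPost’s nor Foursquare’s actual systems – contrasting a flat “just add points” counter with rewards gated on demonstrating new skills:

```python
# Hypothetical sketch: flat points vs. skill-gated rewards.
# Neither models any real site's system.

# 1. "Just Add Points": every action earns the same flat reward, so
#    grinding one trivial action for ever is the optimal strategy.
def flat_points(actions):
    return 10 * len(actions)

# 2. Skill-gated rewards: points are granted only the first time a user
#    demonstrates each new skill, so progress requires learning.
SKILL_REWARDS = {
    "flag_comment": 10,     # easy: report a bad comment
    "write_review": 25,     # harder: assess another user's work
    "resolve_dispute": 50,  # hardest: mediate between users
}

def skill_points(actions):
    mastered, total = set(), 0
    for action in actions:
        if action in SKILL_REWARDS and action not in mastered:
            total += SKILL_REWARDS[action]
            mastered.add(action)  # repeat performances earn nothing new
    return total

grind = ["flag_comment"] * 100
print(flat_points(grind))   # 1000 - grinding one easy action wins
print(skill_points(grind))  # 10   - grinding is worthless
print(skill_points(["flag_comment", "write_review", "resolve_dispute"]))  # 85
```

The second scheme still uses points; the difference is that the only way to earn more of them is to learn to do something new.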

It’s also worth checking out this post on rescuing princesses at the Lost Garden. If you click through to the slides (PDF) there’s a thoughtful discussion of the differences between app and game design, and a very useful breakdown of STARS atoms – essentially, small chunks that introduce players/users to new skills, let them discover how to use them, and ensure they have mastered them.

Between them, these two posts and the thoughts behind them make a mockery of the idea of game mechanics as simple point systems you can pop atop pre-designed apps or comment systems or whatever it is you’re already doing. You have to design with exploratory learning in mind, with a learning curve that doesn’t flatten out horizontally or vertically and with end goals and nested goals to maintain engagement.

I wonder how the Guardian’s crowdsourced investigation into MPs’ expenses would have gone if they’d added this sort of rich game-led design. As well as giving long-term and short-term goals and rewards (like Twitter translator levels, perhaps) with status bars to show progress, they could have rewarded people who found something of real import with a status bump, or added exploratory learning elements by advancing users towards the goal of signing off on things other people had flagged as interesting. Or teaching basic maths, or collating data into a wiki-style “what does my MP spend” database, or encouraging users to learn to create their own visualisations of the data. It’s hard to say whether, or how well, that would have worked, but it’s easy to see wider possibilities in projects like that.
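By way of a sketch – every level name, point threshold and unlocked action below is invented, not anything the Guardian actually built – that sort of progression might look something like this in Python:

```python
# Hypothetical progression for a crowdsourced review project: the level
# names, point thresholds and unlocked actions are all invented.

LEVELS = [  # sorted by points threshold
    (0,   "Novice",   {"review_page"}),
    (50,  "Reviewer", {"review_page", "flag_page"}),
    (200, "Editor",   {"review_page", "flag_page", "sign_off_flagged"}),
]

def level_for(points):
    """Return (level name, unlocked actions, progress towards next level)."""
    threshold, name, actions = [l for l in LEVELS if l[0] <= points][-1]
    higher = [t for t, _, _ in LEVELS if t > points]
    progress = (points - threshold) / (higher[0] - threshold) if higher else 1.0
    return name, actions, progress

name, actions, progress = level_for(120)
print(name, sorted(actions), f"{progress:.0%} towards the next level")
# -> Reviewer ['flag_page', 'review_page'] 47% towards the next level
```

Status bars fall straight out of the progress fraction, and unlocking the ability to sign off on flagged pages only at higher levels is exactly the kind of nested goal the Lost Garden slides argue for.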

/end braindump

Online games as training tools

Porcupine Autism Memorial in Second Life

Today via @jayrosen_nyu I came across a post by @brad_king arguing that journalism has a lot to learn from the history of online games when it comes to online community management.

He makes some great points about hands-off community modding, and I’m a particular fan of the idea that online news communities could benefit from something like Richard Bartle’s taxonomy of gamer types (which splits gamers into four rough categories and helps game designers cater for all types).*
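The four types themselves are Bartle’s (Achievers, Explorers, Socialisers and Killers); the news-community features mapped to each in this toy Python sketch are my own speculation, just to make the idea of catering for all types concrete:

```python
# Bartle's four player types are real; the community features suggested
# for each are speculative examples, not from Bartle or King.

BARTLE_TYPES = {
    "Achiever":   {"badges", "leaderboards", "progress bars"},
    "Explorer":   {"deep archives", "open data", "hidden features"},
    "Socialiser": {"profiles", "comment threads", "community events"},
    "Killer":     {"debate", "competitions", "visible influence"},
}

def neglected_types(features_built):
    """Report which player types a feature set leaves uncatered for."""
    built = set(features_built)
    return [t for t, wants in BARTLE_TYPES.items() if not (wants & built)]

# A comments-only news site serves Socialisers and, arguably, Killers:
print(neglected_types({"comment threads", "debate"}))
# -> ['Achiever', 'Explorer']
```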

But I do have to disagree with this paragraph:

MMORPGs don’t have much to offer in terms of developing the traditional journalism skills. These games can’t teach students how to vet sources, how to interview, how to copy edit, how to hit the streets and find stories.

Wait a minute. Why not?

These communities aren’t just there to be managed – they don’t just have histories that can be dissected as useful examples. They’re living and breathing today. They are audiences, readers, participants, and they could be a great training tool for new journalists.

I mentioned that Second Life – one of the biggest and most influential online environments ever created – has three online newspapers with hundreds of thousands of readers.

They cover topics ranging from issues in the real world which affect the game – server outages, technology changes, ToS issues – to in-game gossip and affairs. This sort of information is valuable, and you can get it by employing all those traditional journalism skills that King mentions.

Sure, the rules of these communities are different. They present unique and diverse challenges to reporters trying to hit the street cold and generate stories. But they’re no more unfamiliar or hard to learn than Afghanistan is to a Geordie, or a Norfolk seaside town is to a young woman from inner-city Birmingham. And surely part of the point of j-school is to train us in how to learn the community rules and structures, how to work it out for ourselves, and how to engage.

So why not include a bit of MMO training for budding reporters? Lessons in:

  • interview technique via in-game chat and email
  • fact checking and how to spot a scam or a rumour online
  • vetting sources for legitimacy
  • editing copy – perhaps by crowdsourcing feedback on what they got wrong
  • engaging with readers as equals
  • learning a patch – getting to know the movers and shakers and the big issues, who to talk to, where to get quotes

All that and community management too. Bargain.

* Incidentally, I’m 67% Explorer, 67% Achiever, 40% Socialiser and 27% Killer.