Reddit thinks it’s a government, but doesn’t want to govern

In non-spoof news, today Reddit’s CEO posted a blog post about why the site wasn’t going to take down a community specifically devoted to sharing naked photos of celebrities acquired by hackers and very much not endorsed by those pictured. Then, having drawn a line in the sand, Reddit promptly banned the community. Unsurprisingly, that caused a lot of users to react with confusion and not a little anger, pointing out – among other things – that the ban was more than a little hypocritical if Reddit was going to continue not to police other problematic communities (pro-anorexia and self-harm communities, for instance), and suggesting that Reddit acted only because of the status, profile and power of the victims in this instance (the site doesn’t take down revenge porn, for example). There’s been another round of explanation, which boils down to: Reddit got overwhelmed and therefore had to take action. That actually bolsters some of the arguments made by users – that it’s only the high-profile nature of this incident that forced action – but if the first post is to be believed, Reddit doesn’t see that as a problem. It wants the community to choose to be “virtuous” rather than being compelled to do so – it wants its users to govern themselves. But it also thinks it’s a government. Yishan says:

… we consider ourselves not just a company running a website where one can post links and discuss them, but the government of a new type of community. The role and responsibility of a government differs from that of a private corporation, in that it exercises restraint in the usage of its powers.

Yishan simultaneously argues that Reddit users must arrive at their own self-policing anarchic nirvana in which no bad actors exist, and that Reddit is not a corporation but a governing force which has both the right to police and, strangely, the responsibility not to do so. Of course Reddit is a corporation, subject to US and international laws. Of course its community is not a state, and its users are not citizens. Yishan is dressing up a slavish devotion to freedom of speech regardless of consequence as a lofty ideal rather than the most convenient way to cope with a community rife with unpleasant, unethical and often unlawful behaviour. Doxxing, revenge porn, copyright infringement so rampant it’s a running joke, r/PicsOfDeadKids: none of these things are dealt with according to the social norms and laws of the societies of which Reddit is, in reality, a part. Only when admins become overwhelmed is action taken to police its community, and at the same time the CEO declares the site to be, effectively, the creator of its own laws. This would be nothing but self-serving nonsense if it weren’t for the way it’s being used to justify ignoring harmful community behaviours. Reddit’s users are right to point out that the company only acts on high-profile issues, that Reddit’s lack of moral standards for its users allows these situations to develop and makes it much harder for the company to police them when they do, and that the site’s users suffer as a result of its haphazard approach:

This is just what happens when your stance is that anything goes. If you allow subreddits devoted to sex with dogs, of course people will be outraged when you take down something else. If you allow subreddits like /r/niggers, of course they’re going to be assholes who gang up to brigade. The fine users of /r/jailbait are sharing kiddy porn? What a shocking revelation. The point is, you can’t let the inmates run the asylum and then get shocked when someone smears shit on the wall. Stand up for standards for a change. Actually make a stance for what you want reddit to be. You’ll piss off some people but who cares? They’re the shitty people you don’t want anyway. Instead you just alienate good users who are sick of all of the shit on the walls.

If Reddit thinks it’s a government, it should be considering how to govern well, not how to absolve itself of the responsibility to govern at all.

Do real names really make people nicer online?

Mathew Ingram at GigaOm has an interesting look at some Livefyre research suggesting that if you force people to use their real names to comment on your site, the vast majority will just stop commenting.

Most of those surveyed said that they responded anonymously (or pseudonymously) because they didn’t want their opinions to impact their work or professional life by being attached to their real names, or when they wanted the point of their comment to be the focus rather than their identity or background. And close to 80 percent of those surveyed said that if a site forced them to login with their offline identity, they would choose not to comment at all.

The bottom line is that by requiring real names, sites may decrease the potential for bad behavior, but they also significantly decrease the likelihood that many of their readers will comment.

That led me to an interesting question this morning: do real names really reduce the potential for bad behaviour in comments? It’s a popularly held belief, but there doesn’t seem to be a great deal of evidence out there to support the idea that meatspace identities are any more useful than persistent pseudonyms when it comes to holding people accountable for their actions online. On Twitter, Martin Belam points out:

…but there’s a big gap between “named staff members early in comment threads” and “real names for everyone”.

I can find some evidence that persistent pseudonymity is a positive thing. Disqus did a very large study in 2012 on their comments database; though their methodology is opaque, their results showed pseudonymous commenters posted both the largest number and the highest quality comments across their network. Pseudonyms make people more collaborative and more talkative in learning environments; they positively influence information sharing; they encourage people to share more about themselves in the relative safety of an identity disconnected from meatspace.

There’s also evidence that anonymity – a complete lack of identifiers and nothing to chain your interactions together to form a persona, as distinct from pseudonymity, where you pick your own identity then stick with it – is a negative influence on the civility of debate, and that it engenders more adversarial conversations in which fewer people’s minds are changed.

There’s a Czech study into the differences between anonymity and pseudonymity and how to design for reduced aggression, apparently, but the actual link 404s and the abstract doesn’t go into detail on the difference. There’s interesting research into the social cost of cheap, easily replaceable pseudonyms, which allow effective anonymity by letting users evade reputational consequences; Reddit and Twitter are great examples of communities where you can see this behaviour in action.

Investigating this issue isn’t helped by the fact that researchers have in the past conflated anonymous and pseudonymous behaviours, though there’s increasing awareness now that the two produce very different community dynamics; it’s also skewed by places like 4chan and Reddit being prominently discussed while less adversarial spaces, like Tumblr and fandom communities, are less often scrutinised. (Male-dominated communities are covered more than female-dominated ones, and widely seen as more typical: so it goes.)

I’ve found one study that suggests anonymity makes for less civil comments, but I can’t access the full study to find out what it says about pseudonymity. There are suggestions, like Martin’s, that moderation helps keep things civil; there’s evidence that genuine consequences for poor behaviour helps too. And there’s this:

“While evidence from South Korea, China, and Facebook is insufficient to draw conclusions about the long-term impacts of real name registration, the cases do provide insight into the formidable difficulties of implementing a real name system.”

But where’s the proof Facebook is right when it claims its real name policy is vital for civility on the site? Where’s the evidence that Facebook comments are more civil than a news site’s because of the identity attached, rather than because the news site goes largely unmoderated and Facebook’s comment plugin broadcasts your words to everyone you know on there? Or because most people just don’t comment any more, so arguments die faster? I am struggling to find one study that demonstrates a causal link.

So this is an open request: if you know of more studies that indicate that denying pseudonymity improves comment quality, let me know on Twitter @newsmary or share them in the comments here. If I’m wrong, and it makes a big difference, it’d be great to be better informed. And if I’m right, and it’s community norms, moderation and reputational consequences that matter, it’d be great to put the idea that real names are a magic bullet for community issues to bed once and for all.

Trigger warnings: a broken system with good intentions

This is an interesting thing: a New Republic post that looks at the history and present of trigger warnings, and how they’ve moved out of communities online and into public life and spaces. If you don’t know what a trigger warning is, it’s essentially a note indicating that you might be about to encounter something upsetting, something that could negatively affect your psychological wellbeing; trigger warnings grew out of supportive communities in which people needed to carefully negotiate conversations about subjects that had to be spoken about, but that could also prove detrimental to readers’ health. The roots, however, aren’t quite as simple as the New Republic piece paints them:

Initially, trigger warnings were used in self-help and feminist forums to help readers who might have post traumatic stress disorder to avoid graphic content that might cause painful memories, flashbacks, or panic attacks. Some websites, like Bodies Under Siege, a self-injury support message board, developed systems of adding abbreviated topic tags—from SI (self injury) to ED (eating disorders)—to particularly explicit posts. As the Internet grew, warnings became more popular, and critics began to question their use.

It’s rare to see an article on trigger warnings mentioning Bodies Under Siege, despite its early adoption of warnings as a way for its users to safeguard themselves. It’s a shame, then, that the piece skips over the ways trigger warnings were used there in the late 90s, when I was an active user. They were not a way for users with PTSD specifically to avoid harm; they were for all users – including those without mental health issues – to avoid subjects that could trigger them into unsafe behaviour, or that they didn’t have the mental energy to tackle. They were carefully considered and carefully enforced alongside a list of verboten things that mods would delete on sight: discussions of weights, calorie counts, numbers of self-inflicted wounds, images. Those things were not done lightly. Bodies Under Siege was a community of vulnerable people struggling with mental illnesses of various degrees, and it was built entirely around recovery and support. Trigger warnings and removal of things that could prompt ‘competitive’ behaviour were not courtesies. They were absolutely integral to the community’s existence.

I used a couple of other forums for people who self-harmed, in my teens. BUS was the one that did not make me worse. One of those forums has a direct analogy in pro-anorexia communities: at its worst it provided encouragement to hurt yourself, and at best it simply reinforced the behaviour, a reassurance that self-injury was an OK thing to do. It was not a healthy space. The second tried to be about recovery, but allowed images and discussions of the particulars of self-injury. As a result it was a deeply conflicted space: if you were feeling OK, you could quite easily end up feeling worse after a visit. If you were already feeling bad, you went there knowing things would most likely spiral downwards, playing Russian roulette with your feelings. You would, almost without doubt, stumble across something that could tip you from ‘maybe I could hurt myself’ into the act.

Trigger warnings on BUS made it safe from that concern. It was a place you could go while feeling awful to try to be strong. It had thread after thread of distraction games, little time-wasting things you could do to stave off the need to self-injure. It had questionnaires to fill in before you did it, drawn up by users and psych professionals, and questionnaires to fill in afterwards. It had resources for asking for treatment, for dealing with emergency care, for supporting others. It had safe spaces for parents, partners, carers to socialise. It had diary threads you could post in and read, if you were well enough, and those diaries came by convention with warnings about the content. If you didn’t want to engage with the illnesses of others, for fear of worsening your own, you did not have to.

Words cannot express how valuable trigger warnings were to me, or to many of the other users on BUS. Not just those with PTSD, or anxiety disorders, or specific trauma-related illnesses; not even just those who self-harmed or those with eating disorders; all of us who used that space benefitted from its policies on keeping us safe.

Trigger warnings on the web were born in communities trying to balance the need to speak with the need not to hear. Those communities were closed, or at least only partially open: LiveJournal communities where membership rules could be enforced, forums and BBs where mods had control over members’ posts. Trigger warnings do not translate well to public spaces – Tumblr tags, Twitter, even Facebook groups, or some of the real-life scenarios mentioned in the New Republic article – because the needs of a wider, more open community are different. Interestingly, some Tumblr tags do take content warnings well – conventions have grown up around those tags, and those who transgress those conventions are essentially moderated out by the existing users. But there’s no system to support that, nothing to stop a sustained invasion, no way to organise that space to support that use.

But just as it’s inadvisable to add trigger warnings to everything on the off-chance of harm, it’s inadvisable to strip them from everything out of disbelief in their effectiveness. In communities focussed on mental health and recovery, trigger warnings are absolutely necessary for users. Whether college classes, campuses or the Huffington Post need the same level of consideration is a valid question, for sure, and one worth asking. If you want people with disabilities to be able to participate fully in your spaces, you’d better be thinking about accessibility in terms of triggers and mental wellbeing as well as wheelchair ramps and sign language. And that doesn’t always need to be in formal language: sometimes it’s as simple as editing a tweeted headline to include the word ‘distressing’, to give your followers the choice about what they click on.

The New Republic piece concludes:

Trigger warnings are presented as a gesture of empathy, but the irony is they lead only to more solipsism, an over-preoccupation with one’s own feelings—much to the detriment of society as a whole. Structuring public life around the most fragile personal sensitivities will only restrict all of our horizons. Engaging with ideas involves risk, and slapping warnings on them only undermines the principle of intellectual exploration. We cannot anticipate every potential trigger—the world, like the Internet, is too large and unwieldy. But even if we could, why would we want to? Bending the world to accommodate our personal frailties does not help us overcome them.

There is no way to stop every vulnerable person from coming across things that will make them more vulnerable. There is, however, courtesy and consideration, and a need for equal access for those with mental health issues. Those are not small things. There is a valuable, important baby being thrown out with this bathwater.

UsVsTh3m turns comments on

UsVsTh3m has decided to give Th3m a direct voice on site, and turned its comments on.

That’s perhaps not a huge surprise, given Rob Manuel’s involvement – he’s talked in the past about the class issues involved in online commenting, and he’s presided over one of the most interesting hotbeds of user activity on the internet. But it runs counter to a long-term trend of sites shutting down comments, deliberately deciding that they’re too much work, too unruly, too problematic, or even counter to the entire purpose of what the site’s trying to do.

It’s a nice start, opening with a joke, a clear prompt to participate, and a potential reward for excellence in the form of inclusion in the daily newsletter – a promise of internet bragging rights that acts as an incentive to be awesome, rather than merely guidelines that tell you how not to be bad. It’s worth noting that Rob’s participating there too.

It’ll be an interesting experiment to watch, and if a creative community of jokers is what UsVsTh3m is after, they seem to have started out pretty well.

11 quick thoughts on the new Steam reviews

Steam reviews are a thing now, apparently.

Now it’s easy to see what other Steam users think about a product before you buy. With Steam Reviews, you can browse for reviews that others have found helpful, or write your own reviews for titles you’ve played on Steam

A few quick thoughts in no particular order:

  1. Valve is displaying the time you’ve spent in a particular game next to your review. That’s interesting: it suggests they might also use playtime as a ranking factor for your review, and it certainly means people will judge your review as less helpful if you’ve spent less time in the game than others. For positive reviews that makes some sense; for negative reviews, less so – I don’t need to play 20 hours of Duke Nukem Forever to know it’s awful, or more than 5 minutes of the PC port of Fez to know it’s unplayably crashy on my setup.
  2. They’re also flagging up the number of things you’ve bought on Steam, even ahead of your Steam level (which is to some extent a proxy for money spent). That’s an even more interesting choice, because it is almost certainly going to affect how people see the review on a subconscious level.
  3. You have to launch the game via Steam in order to review it. So I can’t review some of the games I’ve played most, because I didn’t buy them on Steam. Platform lock-in. But I also can’t review games just for the sake of hating on them from a distance, which deals with some of the Metacritic & Amazon swarming problems.
  4. But what I can do, if I want to game this system, is launch the game once, leave it on overnight to gather Steam cards & game cred, and then review it. Whether anyone will care enough to actually do that is an open question at this point.
  5. The only ranking factor they specifically mention is time – i.e. more recent reviews will be more visible on game pages – and that’s framed as a good thing for the devs. But there will be others: game time and helpfulness are the obvious ones, and Valve would be daft not to include things like friendship data and similarity of game libraries in personalising reviews for individual readers (there’s a speculative sketch of how those signals might combine after this list).
  6. They’re defaulting to post-moderation, removing or hiding things when flagged, and not giving devs the ability to hide things directly without moderator input. That makes some sense (‘hide all the negative reviews’ won’t be a valid strategy) but is also potentially concerning: we don’t yet know how much moderator support they have, the guidelines by which moderators will operate, or the speed with which they’ll respond.
  7. This could be a serious Metacritic competitor, because of Steam’s metadata about who’s played what games for how long, which could tie into an authority system using upvotes and activity more generally…
  8. …but (at the moment) they’re not including a scoring system, just recommend vs not recommend. Thankfully. Any numerical system would be exactly as open to abuse as the current Metacritic system is, with all the existing issues about people only looking at the score when purchasing or devs’ pay/bonuses being dependent on numerical scores that are, let’s be honest here, based on spit and whimsy and nothing more.
  9. The language stuff – allowing users to review games in their own languages and search for reviews in particular languages – is great for users especially in areas underserved by games press. And potentially a nightmare for devs, if they can’t translate.
  10. Helpful vs non-helpful is a nice way to harness the middle bit of the 1:9:90 rule.
  11. Mutualisation is interesting. I wonder how many devs and users were clamouring for this feature.
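
Purely to make the point in item 5 concrete, here’s a toy sketch of how signals like helpfulness votes, recency, playtime and library overlap might combine into a single review ranking score. Every field name, weight and factor below is my own guess for illustration – none of it is based on anything Valve has published about how Steam Reviews actually rank.

```python
import math
from datetime import datetime, timezone

def review_score(review, reader=None):
    """Toy ranking score for a review; all fields and weights are invented."""
    # Helpfulness: simple ratio of helpful votes (a Wilson lower bound would be
    # fairer to reviews with few votes, but a ratio keeps the sketch short).
    votes = review["helpful_votes"] + review["unhelpful_votes"]
    helpfulness = review["helpful_votes"] / votes if votes else 0.5

    # Recency: newer reviews score higher, decaying over roughly a year.
    age_days = (datetime.now(timezone.utc) - review["posted_at"]).days
    recency = math.exp(-age_days / 365)

    # Playtime: diminishing returns, so 200 hours beats 2 hours only slightly.
    playtime = min(1.0, math.log1p(review["playtime_hours"]) / math.log1p(500))

    score = 0.5 * helpfulness + 0.3 * recency + 0.2 * playtime

    # Optional personalisation: boost reviewers whose game libraries overlap
    # the reader's (Jaccard similarity between two sets of owned games).
    if reader is not None:
        shared = len(review["reviewer_library"] & reader["library"])
        union = len(review["reviewer_library"] | reader["library"]) or 1
        score *= 1 + shared / union

    return score
```

Even a toy like this shows why surfacing playtime and purchase counts matters: once those numbers feed a ranking, there’s an incentive to game them, exactly as item 4 suggests.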

Scientifically accurate comments

PopSci has turned off its comments, citing research demonstrating that rude comments beneath an article polarise a reader’s opinion of its content, and tend to make people doubt the science involved.

Given their stated aims, it seems like a reasonable move – if they’re not going to be able to give the conversation the attention it needs, and especially if they’re facing coordinated astroturfing to undermine their science. They don’t owe anyone a platform.

A politically motivated, decades-long war on expertise has eroded the popular consensus on a wide variety of scientifically validated topics. Everything, from evolution to the origins of climate change, is mistakenly up for grabs again. Scientific certainty is just another thing for two people to “debate” on television. And because comments sections tend to be a grotesque reflection of the media culture surrounding them, the cynical work of undermining bedrock scientific doctrine is now being done beneath our own stories, within a website devoted to championing science.

Has anyone yet tried a pre-mod commenting policy requiring scientific accuracy and cited sources in comments? That could be an interesting community – labour-intensive for the moderators and maintainers, but a fascinating place for expert discussion.

Social places, not networks

In the light of recent events, this post from earlier this month seems timely:

Some years ago, the tech industry set out to redefine our perception of the web. Facebook (and other similar sites) grew at amazing rates and their reasonable focus on the “social network” and the “social graph”, made “social networks” the new kid on the block.

But even though the connections of each individual user are his social network, these sites are not social networks. They are social networking places.

This is an important distinction. They are places, not networks. Much like your office, school, university, the place where you usually spend your summer vacation, the pub where your buddies hang out or your hometown.

And, much like your office, school, university, etc, they all have their own behavioural expectations and norms. When those spaces get big and full of people jostling for room, if they aren’t broken up into their own smaller spaces – or if the partitions are porous – those differing expectations rub up against each other in all sorts of interesting and problematic ways.

The Twitter I have is not the Twitter you have, because we follow different folks and interact with them in our own ways. There are pretty regular examples of this disparity: when people write posts about how Twitter’s changed and is no fun any more, the reality is usually just that the folks they follow and talk to have changed how they use it. My Twitter experience doesn’t reflect that – I’m in a different space with different people.

Part of the abuse problem all online spaces face is working out their own norms of behaviour and how to deal with incidents that contravene them. One of the particular problems faced by Twitter and a few others is how to deal with incidents that turn up because of many different, overlapping, interconnected spaces and the different expectations of each one.

And on practical ways to handle those problems, go read this excellent post by an experienced moderator. It’s too good to quote chunks here.

Is online abuse increasing, or are we just less tolerant of it?

A thought that follows on from yesterday’s post about Twitter and freedom of speech: it’s easy, I think, to see all the anger and distress caused by online abuse and come to the conclusion that it’s a growing problem. That social spaces online are increasingly hostile to women and other minorities, and that such incidents are increasing in both frequency and severity. In short, it’s easy to think that things are getting worse.

But I don’t believe that’s true. Social spaces online have historically always been fairly unpleasant places to be a visible minority, with notable exceptions. Usenet wasn’t a fun place to be openly female. Neither were early IRC channels (a/s/l and all). Parts of 4chan and Reddit still aren’t. But as online space has become easier to enter, easier to use, more important and less socially obscure, a broader section of society has colonised it. I learned when I was about 12 that you don’t admit your gender online, if you’re female; it’s less than three months since I first felt comfortable using a real picture of my face as my avatar, knowing what that can open you up to.

The evolution over the last couple of years has been that more women and other minorities feel safe enough online to be visible at all, rather than hiding behind the default masculine assumption that comes with anonymity and some pseudonymity. The target pool for abuse is larger, because more people are unafraid to simply be in public.

At the same time, the backlash to such behaviour is more visible and more outspoken. Abuse and threats are increasingly seen as unacceptable. That means more visibility for particularly reprehensible abuse, where a decade ago it would have been more hidden and harder to speak out against. The availability heuristic means people are more likely to overestimate the frequency of abuse now as opposed to abuse years ago, because they can think of more recent visible examples – not necessarily because it’s more frequent, but because it’s more frequently spoken of. It also means that social norms are changing for the better.

Maybe this is too optimistic a take. But I’d like to believe it.

Twitter’s freedom of speech

Caroline Criado-Perez, the journalist who successfully campaigned for Jane Austen to appear on British banknotes, has been subjected to a horrendous barrage of threats and abuse on Twitter, and has called for Twitter to improve the way it deals with abuse. Her supporters kicked off a petition asking Twitter for a better system, and they’ve had some success. The whole saga as it unfolded has been Storified by @kegill.

Twitter’s now said it will step up work on a ‘report abuse’ button for individual tweets. That’s a good step, but a button without something connected to it is just a placebo, and in this situation it won’t work unless it links to an action. Xbox Live’s community is enough to prove that abuse reports without enforcement are pointless, and that placebo buttons aren’t enough to deter campaigns of abuse or unpleasant individuals. And Facebook’s trigger-happy abuse policies are enough to prove that automated responses based on volumes of reports aren’t nuanced enough to be appropriate here either.

The problem is a human one, and it may be impossible to automate. That doesn’t mean it shouldn’t be tried, nor that the work is unimportant. Watching an abuse queue might not be the best way to solve the problem, nor a sustainable or scalable one. But I would love to see Twitter innovate around this issue. Moderation tools that understand the patterns of abuse on Twitter don’t yet exist, as far as I’m aware – and if they do exist, they clearly don’t work. I wonder what would happen if the same effort went into understanding and predicting organised campaigns of abuse as currently goes into understanding and predicting spam campaigns.
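
To give a sense of what ‘treating abuse campaigns like spam campaigns’ might mean in practice, here’s a deliberately crude sketch of one possible signal: a dense burst of mentions of a single target from newly created accounts. The data structures and thresholds are invented for illustration; a real system would layer many weak signals (text similarity, shared follow graphs, prior reports) on top of each other, exactly as spam filtering does.

```python
from collections import defaultdict
from datetime import timedelta

def flag_mention_bursts(mentions, window=timedelta(hours=1),
                        min_mentions=20, max_account_age_days=30):
    """Flag targets hit by a dense burst of mentions from young accounts.

    `mentions` is an iterable of dicts with invented fields:
    'target', 'sender_age_days' and 'sent_at' (a datetime).
    """
    by_target = defaultdict(list)
    for m in mentions:
        if m["sender_age_days"] <= max_account_age_days:
            by_target[m["target"]].append(m["sent_at"])

    flagged = []
    for target, times in by_target.items():
        times.sort()
        start = 0
        # Slide a time window over the timestamps, looking for a dense cluster.
        for end in range(len(times)):
            while times[end] - times[start] > window:
                start += 1
            if end - start + 1 >= min_mentions:
                flagged.append(target)
                break
    return flagged
```

Anything a heuristic like this surfaces would feed a human review queue rather than trigger automatic action – the Facebook example above is reason enough not to act on report volume alone.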

I do not believe a solution is impossible. I do doubt whether Twitter thinks it’s important enough to devote significant resources to, for now, and I suspect it will continue to use freedom of speech as a convenient baffle.

If freedom of speech on Twitter means freedom to abuse, freedom to harass and to threaten, then speech on Twitter is not free. Freedom of speech for abusers means curtailed speech for victims. What critics of moderation tend not to understand is that both options force people to be silent. What supporters tend to believe is that it is better for the community as a whole to silence abusers than to allow victims to be silenced.

IGN’s commitment to changing its comment culture

Some of the comments on the IGN announcement of their new moderation policy. As they say, there’s a long way to go and a lot of work to be done before the change takes hold.

IGN, one of the largest gaming sites in the world, has recently announced changes to its commenting policy explicitly aimed at tackling the culture of abuse in its threads. In a blog post announcing the change, editor-in-chief Steve Butts says:

Will that mean we won’t tolerate disagreement or fiery debates? Not at all. We’re an audience of advocates who come to IGN because we feel passionately about certain platforms, products, and philosophies. Being able to express and defend those tastes is part of why we’re here. Articulate disagreements about those tastes are a healthy and necessary part of those interactions. The comment guidelines aren’t meant to stop that.

The problem comes when a disagreement stops being about the merits of the argument and starts being about the people making it. It’s okay for us to disagree with each other, but we won’t tolerate abuse and threats disguised as disagreement. We also won’t tolerate ad hominem attacks, where you insult a person’s character or identity merely because you don’t like that they’re not the same person as you. None of us are perfect, and we all have bad days, of course, but we can’t let a difference of opinion devolve into being nasty to each other.

The context to this change, on top of years of growing hostility in the comment threads at IGN and elsewhere, is an open letter posted on Reaction last month by Samantha Allen, calling games media generally and IGN among others specifically to account over the toxic discussions they host below articles. It is worth reading in full, repeatedly; it’s a measured, articulate, passionate piece that firmly places responsibility for debates in comment threads with the sites that host those debates, and gives three clear calls to action for those in a position to change those debates. Addressing site editors by name, it says:

We have a problem and you can do something about it.

Our medium and the culture surrounding it is still in its adolescence and we’ve been experiencing a lot of growing pains lately. Those of us in the games community who are a part of marginalized groups have been going through hell lately. You can help us. You can do more than just express sympathy.

“The arc of the moral universe is long, but it bends toward justice.” You have a chance, right now, to shorten that arc. You are in positions of power and privilege. You have the luxury of being able to effect change at a level that we can only dream about.

Framing comment policy, community policy and moderation as a moral issue is not new, but locating responsibility squarely with sites and publishers, rather than with the commenters who frequent them, is a quietly revolutionary attitude. And it’s the right one: much as people who run social spaces in the real world take on responsibility for enforcing behavioural norms within those spaces, people who open up social spaces online have to enforce the behaviour they want to see within them too. Simply opening a door and then washing your hands of the damage caused is not enough.

IGN’s new policy is interesting not least because of its relative mildness. It bans personal attacks and discrimination, while encouraging debate and disagreement; it bans trolling, flaming and spam while permitting sensible pseudonymity. There’s also a section on questionable content, to act as a sort of catch-all:

Since we can’t have a rule to cover everything, this is the rule to, well, cover everything. These are public discussions, so act like you would if you were in a public place (a nice place). These issues are left to the discretion of individual moderators and staff, but may include any material that is knowingly false and/or defamatory, misleading, spammy, inaccurate, abusive, vulgar, hateful, harassing, sexist, obscene, racist, profane, sexually oriented, threatening, invasive of a person’s privacy, that otherwise violates any law, or that encourages conduct constituting a criminal offense. Asking for or offering any of the material listed above is also not permitted.

It’s a sensible policy and it’s excellent to see IGN taking responsibility for the comments on their site and committing to improving the discussion. They’re being careful not to throw the baby out with the bathwater, keeping what’s good about their community and reinforcing the positive behaviours they want to see – rather than turfing over the comment section, closing it or outsourcing it. I hope it comes with increased mod resource and support, and the buy-in of their writers too. It’s a strong commitment, and I hope their actions speak as loudly as their words on this – and that more sites follow their lead.