No, good content is not enough for Facebook success

At the recent ONA conference, Liz Heron, who oversees Facebook’s news partnerships, came in for some questioning about how news organisations can do well on the platform – something that’s a cause of some consternation for many, as it becomes increasingly clear how important it is as a mass distribution service. This is one of her responses:

This is a familiar line from Facebook – I’ve been on panels with other employees who’ve said exactly the same thing. But while I have the greatest respect for Heron and understand that she has to present Facebook’s best side in public – and that a tweet may be cutting context away from a larger argument – this statement is demonstrably false. Even skirting the rather fraught question of what exactly “good” means in this context, it’s questionable whether quizzes and lists such as those that have brought Playbuzz its current success are in any meaningful way replicable for most news organisations.

It’s not that Playbuzz is “gaming the algorithm” necessarily, though it may be. It’s that the algorithm is not designed to promote news content. Facebook’s recent efforts to change that are, quite literally, an admission of that fact. Facebook itself knows that good – as in newsworthy, important, relevant, breaking, impactful, timely – is not sufficient for success on its platform; it sees that as a problem, now, and is moving to fix it.

In the meantime, creating “good” content will certainly help, but it won’t be sufficient. You can bypass that process completely by getting your community to create mediocre content that directly taps into questions of identity, like Playbuzz’s personality quizzes, and giving every piece absolutely superbly optimised headlines and sharing tools. You can cheerfully bury excellent work by putting it under headlines that don’t explain what on earth the story’s about, or are too long to parse, or are simply on subjects that people will happily read for hours but don’t want to associate themselves with publicly.

Time and attention are under huge pressure online. Facebook are split testing everything you create against everything else someone might want to see, from family photos to random links posted by people they’ve not met since high school, and first impressions matter enormously. “Good” isn’t enough for the algorithm, or for people who come to your site via their Facebook news feed. It never has been. Facebook should stop pretending that it is.

Further reading: Mathew Ingram has context and a longer discussion.

Do real names really make people nicer online?

Mathew Ingram at GigaOm has an interesting look at some Livefyre research suggesting that if you force people to use their real names to comment on your site, the vast majority will just stop commenting.

Most of those surveyed said that they responded anonymously (or pseudonymously) because they didn’t want their opinions to impact their work or professional life by being attached to their real names, or when they wanted the point of their comment to be the focus rather than their identity or background. And close to 80 percent of those surveyed said that if a site forced them to login with their offline identity, they would choose not to comment at all.

The bottom line is that by requiring real names, sites may decrease the potential for bad behavior, but they also significantly decrease the likelihood that many of their readers will comment.

That led me to an interesting question this morning: do real names really reduce the potential for bad behaviour in comments? It’s a popularly held belief, but there doesn’t seem to be a great deal of evidence out there to support the idea that meatspace identities are any more useful than persistent pseudonyms when it comes to holding people accountable for their actions online. On Twitter, Martin Belam points out:

…but there’s a big gap between “named staff members early in comment threads” and “real names for everyone”.

I can find some evidence that persistent pseudonymity is a positive thing. Disqus did a very large study in 2012 on their comments database; though their methodology is opaque, their results showed pseudonymous commenters posted both the largest number and the highest quality comments across their network. Pseudonyms make people more collaborative and more talkative in learning environments; they positively influence information sharing; they encourage people to share more about themselves in the relative safety of an identity disconnected from meatspace.

There’s also evidence that anonymity – a complete lack of identifiers and nothing to chain your interactions together to form a persona, as distinct from pseudonymity, where you pick your own identity then stick with it – is a negative influence on the civility of debate, and that it engenders more adversarial conversations in which fewer people’s minds are changed.

There’s a Czech study into the differences between anonymity and pseudonymity and how to design for reduced aggression, apparently, but the actual link 404s and the abstract isn’t detailed on the difference. There’s interesting research into the social cost of cheap, easily replaceable pseudonyms, which allow effective anonymity through the evasion of reputation consequences; Reddit and Twitter are great examples of communities where you can see this behaviour in action.

Investigating this issue isn’t helped by the fact that researchers have in the past conflated anonymous and pseudonymous behaviours, but there’s increasing awareness now that the two engender big community differences; it’s also skewed by places like 4chan and Reddit being prominently discussed while less adversarial communities like Tumblr or fandom communities are less often scrutinised. (Male-dominated communities are covered more than female-dominated communities, and widely seen as more typical: so it goes.)

I’ve found one study that suggests anonymity makes for less civil comments, but I can’t access the full study to find out what it says about pseudonymity. There are suggestions, like Martin’s, that moderation helps keep things civil; there’s evidence that genuine consequences for poor behaviour help too. And there’s this:

“While evidence from South Korea, China, and Facebook is insufficient to draw conclusions about the long-term impacts of real name registration, the cases do provide insight into the formidable difficulties of implementing a real name system.”

But where’s the proof Facebook is right when it claims its real name policy is vital for civility on the site? Where’s the evidence that Facebook comments are more civil than a news site’s because of the identity attached, rather than because the news site goes largely unmoderated and Facebook’s comment plugin broadcasts your words to everyone you know on there? Or because most people just don’t comment any more, so arguments die faster? I am struggling to find one study that demonstrates a causal link.

So this is an open request: if you know of more studies that indicate that denying pseudonymity improves comment quality, let me know on Twitter @newsmary or share them in the comments here. If I’m wrong, and it makes a big difference, it’d be great to be better informed. And if I’m right, and it’s community norms, moderation and reputational consequences that matter, it’d be great to put the idea that real names are a magic bullet for community issues to bed once and for all.

Libraries, games and books

There’s no need for physical media any more, not really, not unless it is a beautiful and delightful object that requires physical existence in order to truly accomplish what it sets out to do.

I am thousands of miles away from my McSweeney’s quarterlies, my copies of the Codex Seraphinianus and House of Leaves, but I kept them, when we moved; they live in boxes in my parents’ spare wardrobe along with the textbooks and miscellany I couldn’t bear to get rid of. Since we landed I’ve bought three books: The Norton Anthology of Poetry, a Bible-sized chunk of literature that I pick up maybe every week or so for a hit; a field guide to Australian birds, because it helped me feel less like an alien if I could identify the stuff in the sky here; and S., a gorgeous full-colour library book full of fake marginalia and individually-produced inserts. A formal experiment of the sort I can’t devour enough of.

I can’t remember the last time I bought a physical copy of a game for the PC. Digital downloads have supplanted physical games for the PC, and in doing so they’ve freed a vast multitude of new, small, interesting games from the strange tyranny of the physical product. (Except possibly in Australia, where you can actually buy things like The Basement Collection on disk, presumably because the internet here runs about the same speed as a smoke signal.)

Now Steam sales and Kickstarters have turned my PC gaming library into the same sort of collection as the bookshelves I tore up before we moved to Australia. It’s loosely organised by genre and by ‘feel’, in a way that’s intuitive to me but makes little to no sense otherwise. Its construction and contents reflect a lot about me; the things I’ve chosen to dedicate time to, the games I want close at hand for replaying.

It’s also full of games I probably won’t play to completion, in much the same way as the Shelf of Shame I used to keep my unread books on. For most of those games it doesn’t matter – the concept of ‘completion’ is pretty fuzzy on games without linear narrative – but there are more that I haven’t started than I feel entirely comfortable with.

That never stops me from buying more. It reminds me in some ways of the glory days of the PS2, when publishers produced the most astonishing array of strange and wonderful (and often utterly awful) games, and you could pick them up relatively cheaply knowing you would get a flawed but often interesting experience. (The collection of interesting PS2 games is also in London; the bad ones we traded in, so some other poor sucker has the joy of playing Air Rescue Rangers and America’s Top Ten Most Wanted now.)

I’m also now part of the friends and family sharing system, which means I tend towards buying games that I might have been on the fence about, so I can share them with others who will probably get as much from them as I will. But it also means my Steam library has an extra 200 or so games in it that I didn’t put there, that don’t fit the system. Like merging books with housemates or lovers whose tastes overlap but don’t entirely cohere. I had to make a new category for games I don’t want to play – not the same as games I haven’t played yet but will, one day. Games I just don’t want.

But that sharing is a joy, and not just because we don’t need to pay twice for two people who share the same computer to play the same game. It’s joyous because I get to explore and discover games I’d never have thought to try, and because I also get to explore someone else’s library, the way I used to wander through bookshelves when I visited friends. It’s joyous because that library even in its barest form – as a list of names without categorisation – is a sort of access to someone’s identity, a carefully chosen stack of media that says, at the very least: this is how I like to spend my time.

Media consumption, especially conspicuously, is a way of constructing identity; it follows then that Steam sales are cheap ways of being people.

Tabloid vs broadsheet, Facebook edition

There’s a lot of chatter around about Facebook at the moment in the light of the high levels of traffic it’s driving to publishers, and the way it’s trying to define itself as a news destination as well as a social one. There’s a particularly interesting post on this topic at AllThingsD today, which talks about the not-entirely-successful news feed redesign, and the dichotomy between what Facebook seems to want for itself and what its users seem to want from it.

Most people think of Facebook in a similar way: It’s a place to share photos of your kids. It’s a way to keep up with friends and family members. It’s a place to share a funny, viral story or LOLcat picture you’ve stumbled upon on the Web.

This is not how Facebook thinks of Facebook. In Mark Zuckerberg’s mind, Facebook should be “the best personalized newspaper in the world.” He wants a design-and-content mix that plays up a wide array of “high-quality” stories and photos.

The gap between these two Facebooks — the one its managers want to see, and the one its users like using today — is starting to become visible.

I’m not a fan of the constant return to the print metaphor whenever we talk about new ways of depicting news online – the newspaper idea – because it tends so badly to limit the scope of what’s possible to what’s already been done. It’s an appeal to authority, the old authority of print pages, the idea not just of a curated experience delivered as a package but also a powerful force in the political world. An authoritative voice. And it’s likely that Facebook would not be upset if, as a side effect of becoming a more newspaper-ish experience, it also gained more power.

But what we’re talking about here isn’t just a newspaper-Facebook vs a not-a-newspaper-Facebook. It’s the tension between tabloid and broadsheet style, played out in microcosm in the news feed, just as it’s being played out in a lot of news organisations that used to be newspapers. It’s the question of whether you can really wield power and authority, whether you can be trusted, if you’re posting hard news alongside cat gifs. It’s the Buzzfeed questions played out without any content to publish, an editor’s dilemma without editorial control.

It’s also an identity question, because it always is with social media. We’re not one person universally across all our services; we don’t behave the same way on Twitter as we do on Facebook. What Zuckerberg wants isn’t just a news feed change, it’s also a shift in the way we express and construct our Facebook selves – a shift more towards the Twitter self, perhaps. A more serious, more worthy consumption experience and sharing motive, a more informational and less conversational self.

Maybe that’s a really difficult problem to solve, adjusting the way identity works within an online service. Or maybe tweaking people is easy to do, if you just find the right algorithm and design tweaks.

In defence of ‘gamer’

Simon Parkin in the New Statesman has an excellent take on the ways gamer culture strikes out at those outside it, and the way homogenous stereotypes reinforce that behaviour – it’s a great piece, and you should definitely read it, but the headline is wrong. It says “If you love games, you should refuse to be called a gamer.” But I love games. I’m a gamer. I’m a player too. And the good guys don’t get to do boundary policing and gatekeeping any more than the bad guys do.

(To be clear I don’t think Simon’s advocating this position – his point is that this is not a homogenous community, that people who play games aren’t just one thing, and I am 100% with him on that score.)

A friend of mine did some research looking at women who play games, their experiences of games and game culture, and found that a large proportion of the people who responded to her survey would not define themselves as gamers, in part because of the stereotype and the hostility they felt from the community. I don’t look like the stereotype, so I can’t be one – a similar issue to the one facing feminism, where the strawfeminist is assumed to be the definition of feminism. Except that in gaming the stereotype is celebrated, rather than criticised on all sides.

Gamer as an identity isn’t going to disappear. It’s not limited to videogames (though lots of videogamers seem to think it is). It’s not limited to those who play vs those who don’t play. It’s a useful label, something that people bond over and around – and that’s not limited to dudebros playing CoD. It applies to me playing PC games, and tabletop RPGs, and board games, and live games, and finding commonality with all those gamer communities. It implies a shared vocabulary and a shared set of interests, but it’s also big enough these days to accommodate a huge number of overlapping sub-communities. And one of those – in fact, several of those – are mine.

Gaming has a huge identity problem. Many gamers see gaming as an integral part of their identity, and one of the messier results of that is that many people still perceive criticism of the games they like as criticism of them as people. That leads to all sorts of awfulness – backlash against those who are discriminated against in games and who dare to speak out, critics being attacked for doing valuable work. Some groups of gamers behave more like fandom than most of fandom does – ingroup/outgroup policing, jostling for status, assuming an outsider position, banding together against perceived adversaries. None of that is healthy or particularly sensible given the spread of the hobby.

But that doesn’t mean that’s all the label is. That headline falls into the trap that the article laments: assuming gamers are homogenous, and that the identity itself holds no value. It holds value for me: it’s been important in fostering a sense of togetherness, in creating shared spaces where I feel like I belong, diverse spaces that include other gamer women and other queer gamers. And many of us fought to be called gamers, used that label in public in spite of hostility, and we would not have done that or continue to do that if it wasn’t a valuable and useful thing.

I can criticise the actions of others who identify as gamers while also calling myself a gamer. I can be proud to be part of a community that makes Journey and Gone Home and Dys4ia and all those other games. I can be proud of being part of a community that’s – slowly but surely – getting broader, more accepting and more diverse, and I can fight against – not disown – the backlash against that process in my small corner of this culture.

Owning this identity helped me find friends on the other side of the world. It would be a shame to lose it.

Picturesque selves

This is brilliant. Identity online is multifaceted, and the explosion in popularity of Instagram and Pinterest is in part about performing single facets of identity, mythologising ourselves through imagery.

Instead of thinking of social media as a clear window into the selves and lives of its users, perhaps we should view the Web as being more like a painting.

This is why Facebook’s desire to own our identities online is fundamentally flawed; our Facebook identities are not who we are, and they are too large and cumbersome and singular to represent us all the time. Google+ has the same problem, of course. Frictionless sharing introduces an uncomfortable authenticity – Facebook identities thus far have been carefully and deliberately constructed, and allowing automatically shared content to accrete into an identity is a different process, a more honest and haphazard one, that for many may spoil their work.

As we do offline, our self-presentations online are always creative, playful, and thoroughly mediated by the logic of social-media documentation.

Pinterest and Instagram are built around these playful, creative impulses to invent ourselves. Twitter remains abstract enough to encourage it too, though in textual rather than visual form. Facebook and Google identities are such large constructions that they become restrictive – you can’t experiment in the way you can with other platforms because of the weight of associations and of history – and they’re not constructed in a vacuum. They rely on interactions with friends for legitimacy – but you can’t jointly create one the way you can a Tumblr or a Pinterest board. Group identities don’t quite work. Individual identities are too heavy to play with properly. But Pinterest and Instagram and Tumblr are online scrapbooks – visual, associative, picturesque – and are just the right formats for liminal experimentation with self-construction. Creative and lightweight.

People are all made of stories

The Story program in chocolate, by Liz Henry

I promised myself I wouldn’t eat The Story until I was done digesting it.

I’m not sure that’s happened yet, but I’m getting there, and I think it’s time to start eating Meg Pickard. Maybe by the time I get to Danny O’Brien I’ll be finished putting all the pieces into place in my head. Maybe not. But I will at least be full of chocolate.

Last year I didn’t have the sort of perspective on The Story that I do this year. For one thing, I was speaking at it, which made it harder to think sensibly about the day, and brought me too close to one bit of it.

This time I got to relax and enjoy one of the best events I’ve ever been to. I tweeted – a lot – and I’ve pulled together a chronological run-through of the day in tweets on Storify. I suspect it may not mean enough for people who weren’t there to be able to decode the day; it was a busy day with a lot of astonishing ideas and people in it.

There are stories we tell ourselves, and stories we tell other people about ourselves. Often, it seems, they’re the same story. Facebook’s model of frictionless sharing lets people build identity by doing stuff – the way we would before the internet, before fast fashion and the Kindle, with clothes, class and consumption habits the most available elements of our outward-facing selves.

On the other hand, Ellie Harrison‘s early work quantifying her habits and activities seems to almost reverse that process – aiming to learn more about precisely who you are by meticulously chronicling everything you do. (Though she did also build a vending machine that vends crisps every time the BBC website mentions news about the recession. I’m not sure that quite fits this particular thesis. But the Bring Back British Rail T-shirt definitely does.) The End‘s series of philosophical questions about death also lets you build up an identity around your actions – crystallising things you might not otherwise think about, then plotting you on a grid that includes your friends and major thinkers.

Tom Watson and Emily Bell discussing phone hacking was illuminating, and my most anticipated talk of the day (for obvious reasons). Another big theme that ran through many of the talks was the collision of reality and story – a junction where everyone in news media works, and where the phone hacking discussion and Liz Henry’s talk about fake lesbians provided strong, cautionary tales about what happens when the story takes over. Henry made an incredibly strong point that when someone’s fake identity takes over, people’s real struggles get lost; by attempting to speak for others, we drown their voices.

But Scott Burnham provided a strong counterpoint, with a glorious tale about an art project in which dozens of people laid out hundreds of thousands of pennies to spell ‘Obsessions make my life worse and my work better’ on an Amsterdam pavement. As time passed people began to play with it, making new words out of the pennies, turning them over. And then the police cleared it up to stop it being stolen. His final point was that the things we do will always disappear, but the stories we create will always remain.

The more I think on it, the more I come back to Karen‘s talk as being the heart of the event, though I didn’t see it at the time. She talked about making something she was interested in, a story just for her – a whole magazine of it, in fact. But the magazine is also an extension of her self, a story she’s telling the world about who she is and how she operates. An externally constructed identity as well as a document of interest – like Matt Sheret‘s playlists, or (on a group level) Scott Burnham’s penny art, or The End’s philosophical mindmaps, or Amina‘s blog. Jeremy Deller tried to heal the wounds of a whole community by recreating events that changed its identity forever, by putting on costumes and playing with being something we’re not, something we used to be. Fiona Raby told stories about a collective future where not just our identities but our bodies were changed. Danny O’Brien talked about – well, about everything, frankly, very fast and with huge energy and expansiveness, but also about delusion and identity and what happens when group identities collide.

And Matthew Herbert made an album out of a pig, in an act which says something about the artist as well as the pig. He talked about the process of art, the investigation and discovery involved in making sound this way, finding out that pig labour is quiet and that tractors are natural bass tones. He talked about recording the sound of towers falling on 9/11, and being sent a recording of someone in Palestine being shot against a wall, and the ethics of making those things, those lives and deaths, into stories in sound.

We are all made of stories. Some of them are our own creations, some we own, some we tell inadvertently through action and through accretion, and some belong to other people, a long way outside our control.

What do we do instead of reading the paper?

For news organisations, especially ones rooted in print, stories have totally changed since the advent of the internet. I don’t just mean our stories, I mean the ones our readers put together internally without noticing it, about what they do and see, constructing the assorted stuff and fluff of the day into a nice neat narrative which contains a sensible answer to the question: What did you do today?

It used to be that “reading the paper” was a single activity, physically and mentally, bounded by the single physical experience of picking up a newspaper and then, well, reading it. Not all of it, probably. Not even necessarily very much of it. Not everyone starts in the same place or cares about the same articles. But even if you read completely different bits of completely different newspapers to everyone else in your office, or even if you just looked at page 3 and the punny headlines and then called it a day, you still called it “reading the paper”. And that’s how it turns up in the story of your day. (What have you done at work so far? Not much, just read the paper and answered some calls.)

It also used to be bounded by the covers of the paper, not by the subjects you pick within it. Which paper do you read? Your identity is to some extent bound up in that brand choice, in the UK at least – people have made good satire about this, and there’s a wider point. Your newspaper said something about you. It featured in the story you told yourself about yourself, as well as the one you told other people. Reading the paper isn’t just learning about the news or the sport or the arts coverage; it’s also an element of your identity, a piece of your personal puzzle. A Guardian reader is not the same thing as a Daily Mail reader. Most people only get one.

Except that’s all gone out the window, now. The Mail Online has god-knows-how-many million readers; the Guardian has a smaller but still reasonably mind-bending number. Both numbers are too big to imagine and you have to resort to comparisons like the population of London. And of course those audiences overlap. They’re both much bigger online than in print, and they both require much smaller commitments in terms of reading – a single article, not a whole paper (whatever a whole paper used to mean, anyway). But also, and this is important: reading one or two or twenty articles from a single news source doesn’t make me a “reader” in the way that it would if I “read” the paper. Not in the story I tell myself about myself, and not in the story I tell other people.

Which wouldn’t be so hard to manage, if it wasn’t for the first problem. Because actually it’s really easy to miss that you read an article from a newspaper, if what you’re doing is browsing the net or chatting on Facebook or catching up on Twitter. You click a link from the thing you’re doing, you read the link, you click “back”, you carry on. You can do that dozens of times, clicking all over the place, and still it doesn’t turn up in your story of the day as “reading the news”. What are you doing? Just checking Facebook. Or wherever.

Apps take you back to that activity of reading the paper, reading the news, within the nice neat cozy boundaries of a virtual cover even if not a real one. They require certain physical activity, too. It took a while for that to click with me, but I think I get now why print people are comfortable in app space.

But people who actually go to the front pages of news sites online are pretty few and far between, compared to the numbers that just turn up on article pages when they’re in the middle of doing other stuff. So obviously that raises huge issues about making sure that every article page is a good front page, a good gateway into your site, good enough to maybe persuade a couple of those people not to click “back” but to stick around and change what they’re doing. But also it raises issues about the visibility of what news organisations are doing. Because if your readers don’t consciously realise they’re your readers, that has to change the way your brand works.

If you don’t want to talk to people, turn your comments off

Advance warning: long post is long, and opinionated. Please, if you disagree, help me improve my thinking on this subject. And if you have more good examples or resources to share, please do.

News websites have a problem.

Well, OK, they have a lot of problems. The one I want to talk about is the comments. Generally, the standard of discourse on news websites is pretty low. It’s become almost an industry standard to have all manner of unpleasantness below the line on news stories.

Really, this isn’t limited to news comments. All over the web, people are discovering a new ability to speak without constraints, with far fewer consequences than speech acts offline, and to explore and colonise new spaces in which to converse.


Facebook: Sim Social

Facebook is a simulation game.

Hear me out. This is the culmination of quite a long period of mashing obscure concepts into my brain and seeing what sticks. If it doesn’t make sense, please rip it apart in the comments.

Sim Social is a massive multi-user dungeon (MUD) about building an identity, which you do by making “friends”, “sharing” digital artefacts (photos, videos, links, text), and “liking” things – objects, concepts, individuals, brands, the aforementioned digital artefacts. It’s played in real time with real people, and the level to which you decide to play yourself or a character is entirely up to you.

It functions, in a way, like old-school text adventure games. At a basic level, text games let the player use verb noun combinations – “get sword”, “kill snake”, “drink potion” – to act on the game world and progress the game. The verbs involved tend to be very limited and to have strictly defined fields of action. So for instance “get” is a one-time-only action which only works on a particular class of object. It changes the status of that object from being in the game world to being in the player’s inventory, and it opens up the possibility of further actions – “get sword” leads to “use sword”, or in slightly more sophisticated games, “kill snake with sword”.

“Get sword” and “friend Mary” function in fascinatingly similar ways. From your perspective, “Mary” is lying around in the game space – you might come across her by interacting with certain things (like being in the same room of the MUD at the same time), or you might go into the game specifically looking for “Mary” because you know that she’s there and you want her to be part of your experience on Sim Social. So you find her, and you friend her, and now she’s in your inventory and you can do other things with her, like tag her in photos or get access to her status updates.
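The parallel can be sketched in a few lines of Python. This is purely illustrative – every name here (World, Player, acquire) is invented for the sketch, not drawn from any actual game or Facebook code – but it shows the shared shape: “get sword” and “friend Mary” are both one-time state changes that move an entity from the shared world into the player’s inventory, which in turn unlocks follow-up verbs on that entity.

```python
class World:
    def __init__(self):
        # Entities lying around in the game space: objects and people alike.
        self.entities = {"sword", "snake", "Mary"}

class Player:
    def __init__(self, world):
        self.world = world
        self.inventory = set()

    def acquire(self, name):
        """The shared mechanic behind 'get' and 'friend': a one-time
        state change from world to inventory."""
        if name not in self.world.entities:
            return f"There is no {name} here."
        self.world.entities.discard(name)
        self.inventory.add(name)
        return f"{name} is now in your inventory."

    def act(self, verb, name):
        # Follow-up verbs only work on entities already acquired.
        if name not in self.inventory:
            return f"You must get or friend {name} first."
        return f"You {verb} {name}."

world = World()
player = Player(world)
print(player.acquire("sword"))    # "get sword"
print(player.act("use", "sword")) # unlocked by getting
print(player.acquire("Mary"))     # "friend Mary" – same mechanic
print(player.act("tag", "Mary"))  # tagging unlocked by friending
```

The point of the sketch is that the code makes no distinction between the sword and Mary: both are entities subject to the same acquire-then-act grammar, which is exactly the implicit argument the interface design makes.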

This is not to imply, of course, that people are things. But the way Facebook’s interaction is set up – the rules it imposes on the simulation – does imply certain things about the game world.

That’s not a new thought. Ian Bogost talks about the procedural rhetoric of video games – the explicit or implicit arguments that games make about how something works, simply by modelling processes. And George Lakoff, in his work on conceptual metaphor, argued that the metaphors we use define the potential field of action. The language used to discuss something defines how we think and talk about it.

So Facebook (as a text) argues, increasingly with the Like button takeover of Share functions, that if I “like” or “recommend” something (one-directional relationships, indistinguishable from each other, in which ambiguity cannot be expressed) then I must also want to “share” it. And, with the new comment plugin, it gives site owners the opportunity to argue that if I comment on their work I must also “share” it with all my “friends”; that I must be non-anonymous; that I must want to be notified of responses.

By casting a certain interaction in the metaphorical field of “friendship”, and by modelling the processes of “being friends” in a certain way, Facebook (as a game, as a text) makes an argument about socialisation and about relationships in the real world. So does Twitter. So do most social apps.

Facebook, in particular, lays claim to metaphors of relationship, interest and appreciation through the verbs it uses to describe and interact within the game world; it makes wider arguments about identity and privacy too. It simulates building relationships on a deeper level than SimCity simulates city-building, sure, but both exist on a continuum where complex social processes are modelled with certain assumptions built in.

Mark Sample talks about close-reading SimCity, looking at the rhetoric of its models, and unpacking the underlying assumptions behind the simplistic assertion that tax increases cause crime. I’d like to do that with Facebook, if the code was more open, but there are plenty of open assumptions to unpack – Is “liking” something the same thing as “recommending” it? What’s a “friend”? Can identities fluctuate? Facebook has an opinion on these things.

And a closing, background thought is something half-remembered from Sherry Turkle’s Simulation and Its Discontents, which is referred to by Play the Past here:

Sherry Turkle tells us about a 13 year old SimCity player who told her about the “Top Ten Rules of SimCity.” One of those rules was that “raising taxes leads to riots.” Now, if the adolescent had simply understood this as a rule in the model, it would be fine, but Turkle insists that the adolescent did not understand that the simulation was a simplification. Turkle claims that this adolescent had uncritically extrapolated a set of rules she used to understand society from SimCity. The claim is that the 13 year old did not understand the game as a model or a toy but instead saw it as a kind of direct representation of the world. In a world increasingly dependent on simulation as basis of knowledge it is important for us to begin to become literate.