Of BuzzFeed’s 76.7 million multiplatform unique visitors in April (comScore), 17 percent were coming for news. The publisher historically hasn’t broken out its content by vertical to comScore, like other top news sites including CNN, Yahoo and The Huffington Post do. But it started to on a limited basis as of last month, when it began breaking out its Entertainment and Life coverage (43.7 million and 20.1 million uniques, respectively) to comScore. Stripping out those verticals leaves 13 million uniques for the rest, including hard news.
Ignoring analysis for the moment, let’s just look at the reasoning here. Unique browsers can visit more than one section of a site, so it’s possible that there’s overlap, and that simply subtracting the known verticals from the known total traffic isn’t a useful way to start. (I’m not certain of comScore’s methodology for vertical breakouts, but would be surprised if it doesn’t let sites count a user twice in different verticals, given that audiences overlap.)
So that 17% could actually be higher than it first appears. On the other hand, that 17% bucket includes hard news plus everything that doesn't fit into Entertainment or Life, so the audience for BuzzFeed's news specifically could be smaller than it first appears.
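A toy sketch of the overlap problem, with invented numbers (nothing here reflects comScore's actual figures or methodology): if a visitor can be counted in more than one vertical, naively subtracting vertical uniques from total uniques undercounts the remainder.

```python
# Invented numbers: unique visitors modelled as sets of visitor IDs.
# A visitor who reads both Entertainment and Life is counted once in
# the site total, but once in EACH vertical breakout.

total = set(range(100))            # 100 unique visitors site-wide
entertainment = set(range(0, 60))  # 60 uniques
life = set(range(40, 80))          # 40 uniques; 20 also read Entertainment

# Naive subtraction assumes the verticals are disjoint:
naive_rest = len(total) - len(entertainment) - len(life)  # 100 - 60 - 40 = 0

# Set arithmetic counts each overlapping visitor once:
actual_rest = len(total - entertainment - life)           # 20 visitors

print(naive_rest, actual_rest)  # 0 20
```

With 20 overlapping visitors, the naive method erases the "everything else" audience entirely, while the real remainder is 20 people. The same effect, in reverse, is why BuzzFeed's 13 million figure could be an undercount.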
The other element here is that unique browsers are the broadest possible metric, and likely to show news in the brightest light. In March, again according to comScore, Buzzfeed averaged 4.9 visits per visitor and 2 views per visit in the US, for roughly 10 views per visitor. If, among the 17% who visited news at all, five of those views are to news, then news is very, very well read with an exceptionally loyal audience. If just one of those views is to news, then news is much less well read than “17% of traffic” might suggest.
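The arithmetic in that paragraph, made explicit. The visits and views figures are the comScore numbers quoted above; the per-visitor news-view counts (five views vs one) are the same hypothetical extremes, not real data.

```python
# comScore figures quoted in the text (March, US):
visits_per_visitor = 4.9
views_per_visit = 2.0
views_per_visitor = visits_per_visitor * views_per_visit  # ~9.8, "roughly 10"

news_share_of_uniques = 0.17  # share of visitors who touched news at all

# Two hypothetical extremes for how many of a news reader's ~10 views
# actually went to news pages:
for news_views in (5, 1):
    share_of_all_views = news_share_of_uniques * news_views / views_per_visitor
    print(f"{news_views} news view(s)/visitor -> "
          f"{share_of_all_views:.1%} of all pageviews")
```

Under these made-up splits, "17% of uniques" translates to anywhere from roughly 9% of pageviews down to under 2% — which is the whole point: uniques alone can't tell you how well read news actually is.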
This post has been brought to you by early morning web analytics pedantry.
Update: Digiday has now altered the headline of its post and the text of the paragraph posted above.
I’ve been thinking about this Tow study for a while now. It looks at how stats are used at the New York Times, Gawker and Chartbeat; in Chartbeat’s case, it examines how the company builds its real-time product, and in the newsrooms’ case, how that product feeds in (or fails to feed in) to a culture around analytics. There’s lots to mull over if part of your work, like mine, includes communication and cultural work about numbers. The most interesting parts are all about feelings.
Metrics inspire a range of strong feelings in journalists, such as excitement, anxiety, self-doubt, triumph, competition, and demoralization. When devising internal policies for the use of metrics, newsroom managers should consider the potential effects of traffic data not only on editorial content, but also on editorial workers.
Which seems obvious, but probably isn’t. Any good manager has to think about the effects of making some performance data – the quantifiable stuff – public, easily accessible and updated minute by minute. The fears about numbers in newsrooms aren’t all about the data coming to dictate decisions – the “race to the bottom”-style rhetoric that used to be a common knee-jerk reaction against audience data. Some of the fears are certainly about how analytics will be used, and whether it will drive editorial decision-making in unhelpful ways, but the majority of newsroom fear these days seems far simpler and more personal. If I write an incredible, important story and only a few people read it, is it worth writing?
Even BuzzFeed, who seem to have their metrics and their purpose mostly aligned, are having issues with this. Dao Nguyen has spoken publicly about their need to measure impact in terms of real-life effects, and how big news is by those standards. The idea of quantifying the usefulness of a piece of news, or its capacity to engender real change, is seductive but tricky: how do you build a scale that runs from leaving one person better informed about the world to, say, changing the world’s approach to international surveillance? How do you measure a person losing their job, a shift in the voting intentions of a small proportion of readers, a decision to make an arrest? For all practical purposes it’s impossible.
But it matters. For the longest time, journalism has been measured by its impact as much as its ability to sell papers. Journalists have measured themselves by the waves they make within the communities upon which they report. So qualitative statements are important to take into account alongside quantitative measurements.
The numbers we report are expressions of value systems. Petre’s report warns newsrooms against unquestioningly accepting the values of an analytics company when picking a vendor – the affordances of a dashboard like Chartbeat can have a huge impact on the emotional attitude towards the metrics used. Something as simple as how many users a tool allows affects how it is perceived. Something as complex as which numbers are reported, to whom, and how has a similarly complex effect on culture. Fearing the numbers isn’t the answer; understanding that journalists are humans and react in human ways to metrics and measurement can help a great deal. Making the numbers actionable – giving everyone ways to affect things, and helping them understand how they can use them – helps even more.
Part of the solution – there are only partial solutions – to the problem of reach vs impact is to consider the two together, and to look at the types of audiences each piece of work is reaching. If only 500 people read a review of a small art show, but 400 of those either have visited the show or are using that review to decide what to see, that piece of work is absolutely valuable to its audience. If a story about violent California surfing subcultures reaches 20,000 people, mostly young people on the west coast of the US, then it is reaching an audience who are more likely to have a personal stake in its content than, say, older men in Austria might.
Shortly after I arrived in New York I was on a panel which discussed the problems of reach as a metric. One person claimed cheerfully that reach was a vanity metric, to some agreement. A few minutes later we were discussing how important it was to reach snake people (sorry, millennials) – and to measure that reach.
Reach is only a vanity metric if you fail to segment it. Thinking about which audiences need your work and measuring whether it’s reaching them – that’s useful. And much less frightening for journalists, too.
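A minimal sketch of what segmenting reach might look like in practice. The numbers echo the art-review example above; the segment labels and the helper function are invented for illustration, not any real analytics tool's API.

```python
# Hypothetical sketch: raw reach vs segmented reach.
def segment_reach(readers, target_segment):
    """Count how many of a story's readers fall in the audience it was for."""
    hits = sum(1 for r in readers if r in target_segment)
    return hits, hits / len(readers)

# 500 total readers of the art review; 400 are in the audience
# the piece was actually written for (invented labels).
art_review_readers = ["show_goer"] * 400 + ["passerby"] * 100
hits, share = segment_reach(art_review_readers, {"show_goer"})

print(f"raw reach: {len(art_review_readers)}")
print(f"segmented reach: {hits} ({share:.0%} of readers)")
```

Raw reach says "only 500 people"; segmented reach says "80% of readers were exactly who this was for" – a far more useful statement, and a far less frightening one.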
Digital audience editor Chris Moran, my former boss at Guardian UK and an all round top bloke, has explained Ophan to journalism.co.uk, and if you’re interested in knowing what I do or understanding how I do it, it’s an excellent primer on how we’re building analytics into the newsroom:
“We know everything about print, pretty much, there’s not many tricks left in the bag, we’ve done it for 200 years and we’re used to it. But the internet’s changing all the time, as much as anything else.”
An idea central to Ophan, said Moran, was for it to be useful to everyone working at the outlet, something he referred to as the “democratisation of data”.
This is at the absolute heart of what’s worked for us out here in Australia. We couldn’t have had the success we have without this feedback loop – not just the data, but also editors, subs and reporters all working with and caring about the data. Ophan’s transformed how we work, and will continue to do so as it adapts to the changing internet. No analytics tool on the market does what it does, and building it into the heart of the newsroom is a crucial part of making it successful.
In other news, it’s been a little quiet round here as I gear up for leaving Australia; lots of small projects are on hiatus while I pack up life into boxes again, including Pocket Lint, my ongoing game design work on BOPTUB, and my standard curmudgeonly blogging approach. Normal service will be resumed as soon as we are sure what is normal anyway.