The homepage, and other undead creatures

One of the more interesting sidelines to come out of the remarkable leaked NYT innovation report in the last few days has been the fact that traffic to the NYT homepage has halved in two years. It’s an intriguing statistic, and more than one media outlet has taken it and run with it, creating a beguiling narrative about how the homepage is dead, or at the very least dying, why that might be, and what it means for news organisations.

But what’s true for the NYT is certainly not true for the rest of the industry. Other pages – articles and tag pages – are becoming more important for news organisations, but that doesn’t mean the homepage no longer matters – or that losing traffic to it is a normal, accepted shift in this new digital age. Losing a share of traffic proportionately makes sense as those other pages grow; a real-terms fall in homepage traffic looks rather more unusual.

Audience stats like this are usually closely guarded secrets because of their commercial sensitivity, but it’s fair to suggest that homepage traffic (at least, to traditionally organised news homepages) is a reasonable indicator of brand loyalty, of interest in what that organisation has to say, and of trust that the organisation can provide an interesting take on the day. Bookmarking the homepage or setting it as the start point for an internet journey is an even bigger mark of faith – a suggestion that one site will tell you what’s most important at any given moment when you log in. But it’s very hard even for sites themselves to measure bookmarking, never mind to get the sort of broad competitor data that would shed light on whether that behaviour is declining.

It’s plausible, therefore, that brand search would be a rough indicator of brand loyalty and therefore of homepage interest; the New York Times is declining there, while the Daily Mail, for example, has been rocketing to new highs recently. I would be incredibly surprised if the Mail shares this pessimism about the health of the homepage, based on its own numbers. (That’s harder to measure for The Atlantic, whose marine namesake muddies the search comparison somewhat.)

The death of the homepage, much like the death of SEO and of pageviews as a metric, has been greatly exaggerated. What’s happening here, as Martin Belam points out, is more complicated than that. As the internet ages, the older, standard ways of doing business and distributing content are changing, and are being joined by newer models and methods. Joined, not supplanted – unless, of course, you’ve created your shiny new thing purely to focus on the new stuff rather than the old, the way BuzzFeed focuses on social and Quartz doesn’t have a real homepage at all.

You need to be thinking about SEO and social, pageviews and engagement metrics, the homepage and the article page. Older techniques don’t die just because we’ve all spotted something newer and sexier, unless the older thing stopped serving a genuine need; the resurgence of email is proof enough of that. Diversify your approach. Beware of zombies.

The rise of ‘social headlines’ is not the end of search

At the launch of BuzzFeed Australia on Friday, Scott Lamb gave an interesting keynote aimed at puncturing some commonly held myths about the internet and social sharing. It was a good speech, well written up here, but at one point he put forward the view that social is essentially an evolution of the net. His idea – at least as I understood it – was that the internet had gone from portals, through search, and had now arrived at social; that search is a thing of the past.

Perhaps it’s not possible to say this clearly enough. Search and social as they’re currently used are two sides of the same coin – two strategies for discovering information that serve two very different purposes. Search is where you go to find information you already know exists; social is where you go to be surprised with something you didn’t know you wanted. If you know something’s happened very recently, these days, you might go to Twitter rather than Google, but once you’re there, you search. And if a clever headline crafted for Twitter doesn’t contain the keywords someone’s going to search for, then it’s going to be as impossible to find it on Twitter as it is in Google. It’s easy to forget that a hashtag is just a link to a Twitter search.
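To make that last point concrete: a hashtag resolves to an ordinary Twitter search URL (the exact format has varied over the years), so a tweet is only as findable as the words it actually contains. A minimal sketch, using a hypothetical tag:

```python
# A hashtag is nothing more than a canned search: clicking it just loads
# Twitter's results page for that term. The tag here is hypothetical.
from urllib.parse import quote

def hashtag_link(tag: str) -> str:
    # %23 is simply a URL-encoded '#' inside the search query.
    return "https://twitter.com/search?q=" + quote("#" + tag)

print(hashtag_link("breakingnews"))
# https://twitter.com/search?q=%23breakingnews
```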

But Twitter isn’t what we’re really talking about here. “Social” when it comes to traffic, at the moment, is a code word that means Facebook – in much the same way that “social” for news journalists is a code word that means Twitter. And optimising headlines exclusively for Facebook gives you about as much leeway to be creative and clever as optimising exclusively for Google. You can do whatever you want as long as you follow the rules for what works, and those rules are surprisingly restrictive.

Lamb, to give him credit, pointed out the problem with the current over-reliance on Facebook: it burns its partners, it has full control over its feeds and what appears in them, and it has shown no hesitation in the past in shifting traffic away from publishers when that serves it or its users. These are all the same problems a lot of sites have with Google.

David Higgerson has an interesting post that feeds into this issue, asking whether the growth of social and mobile has “saved the clever headline”. He writes that instead of straight keyword optimisation, social headlines require a reaction from the reader, and says:

This should be great news for publishers steeped in writing great headlines. Just as having a website isn’t quite like having multiple editions throughout the day, the need to force a smile or an emotion in a headline doesn’t mean the days of punderful headlines can return, but there are similarities we can draw on.

Lamb also said that optimising for search is all about optimising for machines, while social is all about optimising for people. Like Higgerson, he expressed a hope that social headlines mean a more creative approach – the idea that, now we’re moving past machine-led algorithms, news can be more human.

But search, like social, is people; social, like search, is machines. Online we are all people mediated by machines, and we find content through the algorithms that drive our news feeds and search results. Optimising purely for Facebook’s algorithm produces different results to optimising purely for Google’s, but it’s no less risky a strategy – and no more or less human.

8 tips for writing good web headlines

A very basic guide for people who write for the web and find themselves trying to build an audience.

ONE. Give people a reason to click

Why is your work worth anyone’s attention? That’s not a mean question: you must think it’s worth people’s time, otherwise why publish at all? So your headline has to explain in some way why they should click on you, why they should care about your thing ahead of the seventy billion other things people are trying to make them care about right now. If you can’t work out a value proposition and express it clearly in a headline, it might be worth editing your piece.

TWO. It has to work out of context

In print, you have lots of elements to work with that can tell a reader what something’s about – intros, pull quotes, images and heads all work together. On the web, even if your site uses all those things as part of its design, your headline is going to appear in many places you can’t control, all on its own. Twitter, Facebook, Google and any number of other sites are going to strip it from its context and force it to perform. If it doesn’t make sense when you look at it on its own, it won’t work as a web head.

THREE. It should probably mention what the piece is about

That might sound obvious, but it’s worth stating – it’s surprising how many fascinating pieces have incredibly obscure headlines. You will almost certainly lose anyone searching for the thing you’re talking about if you don’t mention it in the headline.

FOUR. People like lists

That doesn’t mean you should write a list if your piece isn’t already a list. But if you’re writing a list and you don’t take the opportunity to use a number in the headline, you’re probably missing a trick.

FIVE. People like useful

This ought to be self-evident. Are you giving people instructions, a helpful way to do things, or information they might find useful? Then make sure your headline says so.

SIX. Don’t make promises you can’t keep

Make sure people know they can trust what they’re clicking on. Don’t pretend what you’ve written is better or more comprehensive or more emotional than it is. No one likes feeling foolish or disappointed, and people aren’t going to share things that create those feelings.

SEVEN. Keep it snappy

Too long, and it’s going to end up truncated in most of the places that count – Twitter has a character limit, Google has a display limit – and look ugly on your site on mobile, unless you’re specifically designing for it. You’re going to lose attention. Simple tends to be better; shorter tends to be better; if you can make it elegant, alliterative or amusing at the same time, that’s icing on the cake.

EIGHT. Work out what your audience responds to

This is the golden rule. It’s one reason why Upworthy is so good at the sharing game: Upworthy’s headlines are designed around two clauses, one with an emotional pull, because that’s what its core audience of mothers shares most. If you’re making things aimed at a certain audience and you know they respond to a certain type of sell, then you can cheerfully ignore the rest of this list, safe in the knowledge that your readers won’t care.

Social, search, serendipity and sharing

Search vs social discovery is a debate that’s been going on since Twitter’s ascendancy as a link discovery machine. TheMediaBriefing has an interesting piece that suggests hybrid discovery is the eventual goal – a blended approach that ignores neither option. It’s a sensible conclusion, though I don’t share the belief that search traffic is necessarily disloyal – or that social media traffic is necessarily loyal. Both are used too broadly by too many readers to be so easily characterised.

Search is private, while social is public (at least to some degree, depending on your privacy settings). People will search very honestly for what they want to see, and will express ignorance, voyeurism or an interest in the salacious in the secure knowledge – or at least the reasonable belief – that no one but Google will ever see that information about them. Google autocomplete suggestions are full of quiet questions asked by millions in private.

But through social media, people will share what they think makes them look more like the idealised version of themselves. We use social media to construct our identities for other people to consume, and in so doing we share what we think will make us look good to others. For the most part we’ll ask stupid questions, or difficult ones, to illuminate a facet of ourselves or to invite interaction with others – not necessarily to gain information. We’ll share what outrages us in order to comment on it, but read what interests us without sharing if we can’t fit it into our constructed identity.

This is one reason why frictionless sharing is a problem: what we read and what we want to tell others we read are two vastly different things. It’s also one reason why social and search end up positioned as adversaries, when in fact they are complementary allies. Search discovery for publishers is not serendipitous; it relies on information-seeking queries, on individuals being interested enough in something specific to type words into a page and select from what appears there. It isn’t about teasing headlines or making someone wonder about what comes next; it’s about being as relevant as possible right there and then. Often, that includes personalisation, or simply being a reader’s preferred source for a story; loyal readers come through search as well.

Social discovery, by contrast, is about stumbling upon something potentially interesting because it’s been passed on by friends or by individuals you trust. It’s about not knowing you wanted to read something until it’s in front of your face. And a successful social piece works because you enjoy reading it, and you want to pass it on, and so do dozens of others. But social discovery happens as an interruption to the flow of doing something else; you move seamlessly from browsing Twitter/Facebook/Reddit/wherever to a different site for a link, then hit the back button and return to your browsing. It’s a diversion, not a journey in its own right.

Because of the commercial sensitivity around reach and discovery for publishers, an awful lot of inaccuracies get cheerfully spread online. For some time there’s been a popular conception that search and social are fundamentally at odds, when in fact they’re often fundamentally intertwined. Plenty of news organisations reach the same people with both, at different times, with different articles. And plenty of pieces work perfectly for both, because they illuminate a relevant issue for those directly interested and make for interesting reading for those who didn’t yet know they cared. As Jackson says in that TheMediaBriefing piece, what matters most is making content people want to consume. Making sure they can find it is the second step.

Requesting politely to stay in the dark will not serve journalism

At Salon, Richard constantly analyzed revenue per thousand page views vs. cost per thousand page views, unit by unit, story by story, author by author, and section by section. People didn’t want to look at this data because they were afraid that unprofitable pieces would be cut. It was the same pushback years before with basic traffic data. People in the newsroom didn’t want to consult it because they assumed you’d end up writing entirely for SEO. But this argument assumes that when we get data, we dispense with our wisdom. It doesn’t work that way. You can continue producing the important but unprofitable pieces, but as a business, you need to know what’s happening out there. Requesting politely to stay in the dark will not serve journalism.

– from Matt Stempeck’s liveblog of Richard Gingras’s Nieman Foundation speech

Stop blaming the internet for rubbish news content

Newspapers and newsrooms have always striven to publish stories that are important, interesting, informative and entertaining. Not every newsroom puts those in the same order or gives them the same weight. But the internet hasn’t changed that much.

The unbundling effects of the net mean that instead of relying on the front page to sell the whole bundle, each piece has to sell itself. That can be hard: suddenly the relative market sizes for different sorts of content are much starker, and for people who care more about important/interesting/informative than entertaining, that’s been a depressing flood of data. But the internet didn’t create that demand – it just made it more obvious. Whether we should feed it is an editorial question. Personally, I think it’s fine to give people a little of what they want – as long as a newsroom is putting out informative and important stories, a few interesting and entertaining ones are good too, so long as they’re not lies, unethically acquired or vicious.

If you spend a lot of time online you will see a filter bubble effect, where stories from certain news organisations are not often shared by your friends and don’t often turn up in your sphere unless you actively go looking for them. That means the ones that break through will be those that outrage, titillate or carry such explosive revelations that they cannot be ignored. That does not mean those stories are the sum total output of a newsroom – any more than the 3AM Girls are the sum total of the Mirror in print – but those pieces attract a new audience and serve to put that wider smorgasbord of content in front of them (assuming the article pages are well designed).

Of course, some news organisations publish poor stories – false, misleading, purposefully aggravating or just badly written – in the name of chasing the trend. That’s also far from an internet-only phenomenon. The Express puts pictures of Diana on the front, and routinely lies for impact in its headlines. The Star splashes on Big Brother 10 weeks running. The editorial judgement about the biggest story for the front is about sales as much as it is about newsworthiness. Sometimes those goals align. Sometimes they don’t, and editors make a choice.

It is ridiculous to blame the internet for the publishing of crap stories to chase search traffic or trend-based clicks – just as it’s ridiculous to blame the printing press for the existence of phone hacking. In both cases it’s the values and choices of the newsroom that should be questioned.

News SEO: optimising for robots is all about the people

Some people in the news business get very wary of SEO in general. There seems to be a perception that content farming and low-quality stories are a sort of natural consequence of making sure your stories can be found via Google. But in fact there is a wide spectrum of approaches here, and news organisations make editorial judgements over whether to cover something that’s interesting to the public just because the public is interested. No Google robot forces a newsroom to make that choice, just as no print-sales-bot forces the Daily Star to splash on scantily-clad women and celebrity gossip.

If your editorial strategy is to chase search terms, then you’re not optimising for robots – you’re optimising for the millions of people online who search for certain sorts of stories. Websites like Gawker and the Mail Online create content to attract the potential millions who read celebrity gossip or who want the light relief of weird Chinese goats – and many of those people also care about the budget or the war in Afghanistan, because people are multi-faceted and have many, many interests at the same time.

If your production strategy includes making sure your headlines accurately describe your content, make sense out of context and use words people would actually use in real life, then you are optimising your content for search. Not for robots, again, but for people – potential and actual readers or viewers – some of whom happen to use search engines to find out about the news.

For example, search-optimised headlines may well have the keywords for the story right at the beginning. Google lends greater weight to words at the start of a headline than at the end. But it does so because people do the same. If you’re scanning a Google search results page, you tend to read in an F shape, taking account of the first few words of an item before either engaging further or moving on. [Edit: via @badams on Twitter, a more recent study backing up the F-shape reading pattern.] Google’s algorithm mimics how people work, because it wants to give people what they’re going to find most relevant. Optimising for the robot is the same thing as optimising for human behaviour – just as we do in print, taking time to design pages attractively and taking account of how people scan pages and linger on images and headlines.
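To make the front-loading point concrete, here is a toy sketch – not any real ranking code, just an illustration of the behaviour described above – of how a keyword’s position in a headline might be weighted by a reader scanning in an F shape:

```python
# A toy illustration, not real ranking code: an earlier keyword position
# gets a higher weight, mirroring an F-shaped scan of a results page.
def scan_weight(headline: str, keyword: str) -> float:
    """Crude proxy: weight decays with the keyword's word position."""
    words = headline.lower().split()
    try:
        position = words.index(keyword.lower())
    except ValueError:
        return 0.0  # keyword missing: effectively invisible to that search
    return 1.0 / (1 + position)

print(scan_weight("Budget 2014: what it means for you", "budget"))           # 1.0
print(scan_weight("What this year's red box means: Budget 2014", "budget"))  # ~0.14
```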

News SEO is a very different beast from, say, e-commerce SEO or SEO for a small business that wants to pick up some leads online. Once you get beyond the basics it does not follow the same rules or require the same strategies. Link building for breaking news articles is worse than pointless, for example; your news piece has a half-life of an hour, or a day, or perhaps a whole week if you’re lucky and it really hits a nerve. Social sharing has a completely different impact for news organisations that want their content read than for, say, a company that wants to sell shoes online. For retailers, optimising for the algorithm might start to make some sense – if the only difference between you and your competitors is your website, then jostling for position in the search results on particular pages gets competitive in a way that news doesn’t. For news, though, optimising for robots always means optimising for humans. It’s just a matter of choosing which ones.

URL manipulation, libel, and Kate Middleton jelly beans

Regular readers here (all 6 of you) will probably already know about Jellybeangate. Yesterday, a URL from the Independent was rewritten to say something rather uncomplimentary about a PR-churned story on their site revealing that Kate Middleton’s face had been discovered in a jelly bean. The link went viral on Twitter after several fairly well-respected sources assumed it was the work of a disgruntled sub rather than a prank. Then the corrections went viral, along with several other versions of the link. This sort of URL behaviour is remarkably common.

According to the Nieman Lab, there are vast numbers of other news organisations whose URLs can be manipulated in this way (Citywire, my employer, is one of them) – and third parties with agendas could easily make it seem, at a casual glance, as though their URLs are libellous or offensive. But most URLs – if not all – can be manipulated very simply, using query parameters. I can add ?this=utter-rubbish to the end of almost any link and it will still resolve, leaving my additions intact.
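To illustrate both tricks – a minimal sketch using a hypothetical example.com address rather than the Independent’s real URL structure – note that many news CMSes route on the trailing numeric article ID and ignore both the headline slug and any unrecognised query parameters:

```python
# A sketch of why doctored links still resolve. The URLs are hypothetical;
# nothing here touches a real server.
from urllib.parse import urlsplit, parse_qs

original = "https://www.example.com/news/royal-jelly-bean-2268573.html"

# Trick one: bolt junk parameters onto the end. Servers ignore query
# keys they don't recognise, so the page loads exactly as before.
tampered_query = original + "?this=utter-rubbish"
parts = urlsplit(tampered_query)
print(parts.path)             # unchanged: /news/royal-jelly-bean-2268573.html
print(parse_qs(parts.query))  # {'this': ['utter-rubbish']}

# Trick two (the Jellybeangate one): rewrite the headline slug. A site
# that routes purely on the numeric ID will serve the same article
# under any slug text at all.
tampered_slug = "https://www.example.com/news/utter-rubbish-2268573.html"
print(tampered_slug)          # different words, same article ID
```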

There shouldn’t be any fear of being found liable for this sort of manipulation, any more than there is in someone copying a newspaper masthead and pasting their own words underneath. For a statement to be libellous it must have been published, and in this case the individual who wrote, manipulated and then distributed the URL is the publisher. That seems clear for query parameters added after a “?”, and I have a hard time believing anyone would find otherwise for a rewritten slug within the URL path itself.

If I were the Indie’s SEO team right now, I’d be more worried that the doctored URL is able to rank above their original. Might just be a good idea to get some rel=canonical tags on their article pages.
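For what that fix might look like – a sketch with hypothetical markup and URLs, not the Independent’s actual pages – a rel=canonical tag in the page head tells crawlers which single address every doctored variant should be consolidated under, so the junk URLs pass their signals back to the original instead of competing with it:

```python
# A sketch of the rel=canonical fix, with hypothetical markup: every
# doctored variant of the page still declares one canonical URL in its
# <head>, and crawlers consolidate ranking signals onto that address.
from html.parser import HTMLParser

PAGE = """<html><head>
<link rel="canonical"
      href="https://www.example.com/news/royal-jelly-bean-2268573.html">
</head><body>article text</body></html>"""

class CanonicalFinder(HTMLParser):
    """Pull the canonical URL out of a page, roughly as a crawler would."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "canonical":
            self.canonical = attrs.get("href")

finder = CanonicalFinder()
finder.feed(PAGE)
# Whatever slug or query junk the request URL carried, this is the
# address search engines are told to index and rank.
print(finder.canonical)
```

Search engines treat the tag as a strong hint rather than a command, but it is exactly the mechanism designed for this situation: lots of URL variants, one real article.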