Archive for 2009

Local Bookstores, Social Hubs, and Mutualization

November 17, 2009

Last month, the American Booksellers Association published an open letter to the Justice Department, asking Justice to investigate Wal-Mart, Target, and Amazon after they lowered prices of best-selling books to under $10. The threat, the ABA says, is dire: “If left unchecked, these predatory pricing policies will devastate not only the book industry, but our collective ability to maintain a society where the widest range of ideas are always made available to the public, and will allow the few remaining mega booksellers to raise prices to consumers unchecked.”

Got that? Lower prices will lead to higher prices, and cheap books threaten to reduce the range of ideas in circulation. And don’t just take the ABA’s word for it. They also quote John Grisham’s agent and the owner of a book store, who both agree that cheap books are a horrible no-good very bad thing. So bad, in fact, that the Department of Justice must get involved, to shield the public from the scourge of affordable reading. (Just for the record, the ABA is also foursquare against ebooks being sold more cheaply than paper books, and thinks maybe Justice should look into that too.)

There may have been some Golden Age of Lobbying, where this kind of hysteria would have led to public alarm. By now, though, the form is so debauched there’s probably a Word macro for describing competition as a Looming Threat To The Republic. (Or The Children, or Civilization Itself. Depends on your audience.)

It’s not surprising that the ABA would write stuff like this — it’s their job to make self-interested claims. What is surprising is that there are members of the urban cognoscenti who still believe these arguments, arguments that made some sense twenty years ago, but have long since stopped doing so.

* * *

Twenty years ago, when we had Barnes and Noble but no Amazon, there was all kinds of literature, from 2600 to Love & Rockets, from Heather Has Two Mommies to Duplex Planet, that survived mainly in the independent ecosystem, but whose host bookstores also needed to sell enough Stephen King or M. Scott Peck to stay open. Fifteen years ago, when use of the web was still a minority pursuit, online bookselling changed this game, but hadn’t yet ended it. Even ten years ago, when more than half of U.S. adults had already become internet users, there were still many book lovers not online. Though the value of bookstores in supporting variety had shrunk, it was still there.

Those days are over. Internet use is as widespread as cable TV, and an internet user in rural Utah has access to more books than a citizen of Greenwich Village had before the web. Millions more books. Like record stores and video rental places, physical bookstores simply can’t compete for breadth of offering and, also like the social changes around music and moving images, the internet is strengthening rather than weakening the ability of niches and sub-cultures to see themselves reflected in long-form writing.

The internet also moderates the competitive threat, because the competition is only a click away. Amazon lists millions of books, but so does eBay, and publishers like O’Reilly or McGraw-Hill or Alyson can sell directly to the reader. If you had to choose between buying books only offline or only online, the choice that maximizes the number of ideas in circulation is unambiguously clear. Even if all but a dozen online booksellers were to vanish, there would still be more places to buy books on the web than there are bookstores in the average American city today.

* * *

Despite the spectacular breadth of available books created by online book sellers, many lovers of bookstores echo the ABA’s “Access to literature is at stake!” argument. In my experience, people make this argument for one of three reasons.

The first is that some people simply dislike change. For this group, the conviction that the world is getting worse merely attaches to whatever seems to be changing. These people will be complaining about kids today and their baggy pants and their online bookstores ’til the day they die.

A second group genuinely believes it’s still the 1990s somewhere. They imagine that the only outlets for books between Midtown and the Mission are Wal-Mart and Barnes and Noble, that few people in Nebraska have ever heard of Amazon, that countless avid readers have money for books but don’t own a computer. This group believes, in other words, that book buying is a widespread activity while internet access is for elites, the opposite of the actual case.

A third group, though, is making the ‘access to literature’ argument without much real commitment to its truth or falsehood, because they aren’t actually worried about access to literature, they are worried about bookstores in and of themselves. This is a form of Burkean conservatism, in which the value built up over centuries in the existence of bookstores should be preserved, even though their previous function as the principal link between writers and readers is being displaced.

This sort of commitment to bookstores is a normative argument, an argument about how things ought to be. It is also an argument that might succeed, as long as it re-imagines what bookstores are for and how they are supported, rather than merely hoping that if enough nice people seem really concerned, the flow of time will reverse.

* * *

The local bookstore creates all kinds of value for its community, whether it’s providing community bulletin boards, putting rocking chairs in the kids’ section, hosting book readings, or putting benches out in front of the store. Local writers, harried parents, couples on dates, all get value from a store’s existence as an inviting physical location, value separate from its existence as a transactional warehouse for books.

The store doesn’t get paid for this value. It gets paid for selling books. That ecosystem works — when it works — as long as the people sitting in those rocking chairs buy enough books, on average, to cover the added cost of having the chairs in the first place. The blows to that model have been coming for some time, from big box retailers stocking best sellers to online sales (especially second-hand sales) to the spread of ebooks to, now, price wars.

Online bookselling improves on many of the core functions of a bookstore, not just price and breadth of available books, but ways of searching for books, and of getting recommendations and context. On the other hand, the functions least readily replicated on the internet — providing real space in a physical location occupied by living, breathing people — have always been treated as side effects, value created by the stores and captured by the community, but not priced directly into the transactions.

If the money from selling books falls below a certain threshold, the stores will cut back on something — hours, staff, rocking chairs — and their overall value will fall, meaning marginally fewer patrons and sales, threatening still more cutbacks. There may be a future in which they offer less value and make less money in some new and stable equilibrium, but beneath a certain threshold, the only remaining equilibrium is Everything Must Go. Given the margins, many local bookstores are near that threshold today.

All of this makes it clear what those bookstores will have to do if the profits or revenues of the core transaction fall too far: collect revenue for the side-effects.

The most famous version of this is bookstore-as-coffeeshop, where the revenues from coffee subsidize the lingering over books and vice-versa, but other ways of generating revenue are possible. Reservable space for book clubs, writers rooms, or study carrels; membership with buy-back options for a second-hand book market run out of the same space; certain shopping hours reserved for members or donors; use of volunteer labor, like a food coop; sponsorships from the people or businesses in the neighborhood most interested in the social value of the store and most interested in being known as local machers.

The core idea is to appeal to that small subset of customers who think of bookstores as their “third place”, alongside home and work. These people care about the store’s existence in physical (and therefore social) space; the goal would be to generate enough revenue from them to make the difference between red and black ink, and to make the new bargain not just acceptable but desirable for all parties. A small collection of patron saints who helped keep a local bookstore open could be cheaply smothered in appreciation by the culture they help support.

* * *

Treating the old side-effects as the new core value would in many cases require non-profit status. This would push small stores who tried it towards the NPR model, with a mix of endowment, sponsorship, and donations, a choice that might be anathema to the current owners. However, the history of businesses that traffic in physical delivery of media has been grim these last few years. (This is the story of your local record store, RIP.)

Any change from a commercial to a cooperative model of support would also probably have to be accompanied by a renegotiation of commercial leases. Street level commerce seems to be undergoing some of the same changes urban warehouses and lofts went through in the 1960s and waterfront property went through in the 1990s, where the muscular old jobs of making, storing, and transporting goods receded, leaving those spaces open for colonization as dwellings.

In the current case, the spread of electronic commerce for everything from music to groceries is part of the increase in empty store fronts on shopping streets, leaving a series of Citi branches, AT&T outlets, and Starbucks that repeat at regular intervals, like scenery in a Hanna-Barbera cartoon. Even when the current recession ends, it’s hard to imagine vibrant re-population of most of the empty commercial spaces, and it’s easy to imagine scenarios in which commercial districts suffer more: consolidation among pharmacy chains, an uptick in electronic banking, the end of our love affair with frozen yogurt, any of these could keep many street level spaces empty, whatever happens to the larger economy.

If commercial space does follow the warehouse-and-loft pattern, then we’ll need to find ways to re-purpose those spaces. Unlike lofts, however, street level living has never been a big draw, but turning those spaces into mixed commercial-and-communal use may offer a viable alternative.

This also comes with the standard disclaimer that it may not work. The money needed to stave off foreclosure and the money available from local beneficiaries may not match up in any configuration. Vehement declarations of support for local bookstores may turn out to be mere snobbishness masquerading as commitment. The transition of revenue from “transactional warehouse” to “social hub” may be too fitful to create the needed continuity. Landlords may prefer to hold empty spaces at nominally high rents than re-price. And so on.

All of which is to say that trying to save local bookstores from otherwise predictably fatal competition by turning some customers into members, patrons, or donors is an observably crazy idea. However, if the sober-minded alternative is waiting for the Justice Department to anoint the American Booksellers Association as a kind of OPEC for ink, even crazy ideas may be worth a try.

A Speculative Post on the Idea of Algorithmic Authority

November 15, 2009

Jack Balkin invited me to be on a panel yesterday at Yale’s Information Society Project conference, Journalism & The New Media Ecology, and I used my remarks to observe that one of the things up for grabs in the current news environment is the nature of authority. In particular, I noted that people trust new classes of aggregators and filters, whether Google or Twitter or Wikipedia (in its ‘breaking news’ mode.)

I called this tendency algorithmic authority. I hadn’t used that phrase before yesterday, so it’s not well worked out (and I didn’t coin it — as Jeff Jarvis noted at the time, Google lists a hundred or so previous occurrences.) There’s a lot to be said on the subject, but as a placeholder for a well-worked-out post, I wanted to offer a rough and ready definition here.

As this is the first time I’ve written about this idea, this is a bit of a ramble. I’ll take on authority briefly, then add the importance of algorithms.

Khotyn is a small town in Moldova. That is a piece of information about Eastern European geography, and one that could be right or could be wrong. You’ve probably never heard of Khotyn, so you have to decide if you’re going to take my word for it. (The “it” you’d be taking my word for is your belief that Khotyn is a town in Moldova.)

Do you trust me? You don’t have much to go on, and you’d probably fall back on social judgment — do other people vouch for my knowledge of European geography and my likelihood to tell the truth? Some of these social judgments might be informal — do other people seem to trust me? — while others might be formal — do I have certification from an institution that will vouch for my knowledge of Eastern Europe? These groups would in turn have to seem trustworthy for you to accept their judgment of me. (It’s turtles all the way down.)

The social characteristic of deciding who to trust is a key feature of authority — were you to say “I have it on good authority that Khotyn is a town in Moldova”, you’d be saying that you trust me to know and disclose that information accurately, not just because you trust me, but because some other group has vouched, formally or informally, for my trustworthiness.

This is a compressed telling, and swerves around many epistemological potholes, such as information that can’t be evaluated independently (“I love you”), information that is correct by definition (“The American Psychiatric Association says there is a mental disorder called psychosis”), or authorities making untestable propositions (“God hates it when you eat shrimp.”) Even accepting those limits, though, the assertion that Khotyn is in Moldova provides enough of an illustration here, because it’s false. Khotyn is in Ukraine.

And this is where authority begins to work its magic. If you told someone who knew better about the Moldovan town of Khotyn, and they asked where you got that incorrect bit of information, you’d have to say “Some guy on the internet said so.” See how silly you’d feel?

Now imagine answering that question “Well, Encyclopedia Britannica said so!” You wouldn’t be any less wrong, but you’d feel less silly. (Britannica did indeed wrongly assert, for years, that Khotyn was in Moldova, one of a collection of mistakes discovered in 2005 by a boy in London.) Why would you feel less silly getting the same wrong information from Britannica than from me? Because Britannica is an authoritative source.

Authority thus performs a dual function; looking to authorities is a way of increasing the likelihood of being right, and of reducing the penalty for being wrong. An authoritative source isn’t just a source you trust; it’s a source you and other members of your reference group trust together. This is the non-lawyer’s version of “due diligence”; it’s impossible to be right all the time, but it’s much better to be wrong on good authority than otherwise, because if you’re wrong on good authority, it’s not your fault.

(As an aside, the existence of sources everyone accepts can be quite pernicious — in the US, the ratings agencies Moodys, Standard &amp; Poor’s, and Fitch did more than any other group of institutions to bring the global financial system to the brink of ruin, by debauching their assertions to investors about the riskiness of synthetic assets. Those investors accepted the judgement of the ratings agencies in part because everyone else did too. Like everything social, this is not a problem with a solution, just a dilemma with various equilibrium states, each of which in turn has characteristic disadvantages.)

Algorithmic authority is the decision to regard as authoritative an unmanaged process of extracting value from diverse, untrustworthy sources, without any human standing beside the result saying “Trust this because you trust me.” This model of authority differs from personal or institutional authority, and has, I think, three critical characteristics.

First, it takes in material from multiple sources, which sources themselves are not universally vetted for their trustworthiness, and it combines those sources in a way that doesn’t rely on any human manager to sign off on the results before they are published. This is how Google’s PageRank algorithm works, it’s how Twitscoop’s zeitgeist measurement works, it’s how Wikipedia’s post hoc peer review works. At this point, it’s just an information tool.

Second, it produces good results, and as a consequence people come to trust it. At this point, it’s become a valuable information tool, but not yet anything more.

The third characteristic is when people become aware not just of their own trust but of the trust of others: “I use Wikipedia all the time, and other members of my group do as well.” Once everyone in the group has this realization, checking Wikipedia is tantamount to answering the kinds of questions Wikipedia purports to answer, for that group. This is the transition to algorithmic authority.

As the philosopher John Searle describes social facts, they rely on the formulation X counts as Y in C — in this case, Wikipedia comes to count as an acceptable source of answers for a particular group.

There’s a spectrum of authority from “Good enough to settle a bar bet” to “Evidence to include in a dissertation defense”, and most uses of algorithmic authority right now cluster around the inebriated end of that spectrum, but the important thing is that it is a spectrum, that algorithmic authority is on it, and that current forces seem set to push it further up the spectrum to an increasing number and variety of groups that regard these kinds of sources as authoritative.

There are people horrified by this prospect, but the criticism that Wikipedia, say, is not an “authoritative source” is an attempt to end the debate by hiding the fact that authority is a social agreement, not a culturally independent fact. Authority is as authority does.

It’s also worth noting that algorithmic authority isn’t tied to digital data or even late-model information tools. The designs of Wikileaks, Citizendium, and Apache all rely on human vetting by actors prized for their expertise as a key part of the process. What seems important is that the decision to trust Google search, say, can’t be explained as a simple extension of previous models. (Whereas the old Yahoo directory model was, specifically, an institutional model, and one that failed at scale.)

As more people come to realize that not only do they look to unsupervised processes for answers to certain questions, but that their friends do as well, those groups will come to treat those resources as authoritative. Which means that, for those groups, they will be authoritative, since there’s no root authority to construct from. (I lied before. It’s not turtles all the way down; it’s a network of inter-referential turtles.)

Now there are boundary problems with this definition, of course; we trust spreadsheet tools to handle large data sets we can’t inspect by eye, and we trust scientific results in part because of the scientific method. Also, although Wikipedia doesn’t ask you to trust particular contributors, it is not algorithmic in the same way PageRank is. As a result, the name may be better replaced by something else.

But the core of the idea is this: algorithmic authority handles the “Garbage In, Garbage Out” problem by accepting the garbage as an input, rather than trying to clean the data first; it provides the output to the end user without any human supervisor checking it at the penultimate step; and these processes are eroding the previous institutional monopoly on the kind of authority we are used to in a number of public spheres, including the sphere of news.

Rescuing The Reporters

October 2, 2009

Last week I gave a talk on newspapers at the Shorenstein Center. (They did an amazing job with the transcript, including annotating the talk with a remarkable amount of linking.) During the talk, I ran through various strategies for funding local reporting, including an idea I first saw articulated by Steve Coll that reporters should become employees of non-profit entities.

After the talk, I decided to do a “news biopsy,” as a way of thinking about Coll’s idea. I wanted to see how much newspaper content was what Alex Jones calls the iron core of news — reporters going after facts — and how much was “other stuff” — opinion columns, sports, astrology, weather, comics, everything that was neither a hard news story nor an ad.

The paper I used was my old hometown paper, the Columbia Daily Tribune. It’s a classic metro daily and a pretty good paper for a town of 100,000, because The Missourian, the rival paper produced by the local journalism school, provides an unusual degree of competition for a town that size. I had several copies of the Trib lying around, having used it in a media class I teach at ITP; I took two copies of the August 27 edition, slit them down the spine, and made two piles, one with odd-numbered pages facing up, and the other with even-numbered pages facing up. (There was an insert about the upcoming football season, clearly a one-off, which I ignored.)

I then cut up each page, labeling every piece in two separate ways. The first label was about content: News, Ads, and Other (opinion columns, sports, crosswords, and the rest.) The only judgement call was an article in the sports section about a judge’s ruling in the Major League Baseball steroids case; I put that in the News pile; the rest of sports went in Other.

The second pair of labels was about source: Created or Acquired. Created content was whatever was written (or taken, in the case of photos) by Tribune staff, while acquired content was material from a wire service or database — news from the Associated Press, but also weather, comics, and so on.

Then I weighed the piles (in grams.) Once I had the weights, I ignored the ads — they are about half the paper, but not the half I care about — and did comparisons of the remaining content:

  • Created vs. Acquired: The content created by Tribune staff made up less than a third of the total; over two-thirds was acquired from other sources, including especially the AP.
  • News vs. Other: The paper was about one-third news and about two-thirds “Other” (and this is after ignoring the all-sports insert, tipping the balance in favor of news.)
  • Created News vs. everything else: News reported by the paper’s staff was less than a sixth of the total content of the paper (again, ignoring the insert, which tips the balance in favor of news.)

In other words, most of the substantive part of that day’s Trib wasn’t locally created, and most of it wasn’t news.
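The three comparisons above are simple arithmetic on the pile weights. The post doesn’t report the raw gram figures, so the numbers below are purely illustrative — a sketch of the calculation, not the actual data:

```python
# Hypothetical pile weights in grams, labeled (content, source).
# The real weights from the biopsy are not given in the post.
weights = {
    ("news", "created"): 50,
    ("news", "acquired"): 60,
    ("other", "created"): 55,
    ("other", "acquired"): 165,
}

# Ads were weighed but ignored, so the total is just News + Other.
total = sum(weights.values())

created = sum(w for (content, source), w in weights.items() if source == "created")
news = sum(w for (content, source), w in weights.items() if content == "news")
created_news = weights[("news", "created")]

print(f"Created share:      {created / total:.0%}")       # under a third
print(f"News share:         {news / total:.0%}")          # about a third
print(f"Created-news share: {created_news / total:.0%}")  # under a sixth
```

With these made-up figures the shares come out to roughly 32%, 33%, and 15% — consistent with the “less than a third,” “about one-third,” and “less than a sixth” pattern described above, though the real ratios would depend on the actual weights.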

I don’t want to make specific claims for these numbers; I wouldn’t be surprised to see variations in the 2:1 ratios of Created to Acquired content, or of News to Other, either from day to day or paper to paper. However, I would be astonished if those ratios were to reverse — for a medium-sized metro daily to publish twice as much News as Other, or to create twice as much as it acquired — because the economics are tilted so strongly towards material other than news, and towards buying content vs. making it. (The AP provided most of that day’s news, and the cost of running a wire story is tiny compared to employing a beat reporter.)

More surprising to me, though, was the number of local reporters who had a byline for hard news in that day’s paper: Six. (Given that number, we can name them: Janese Heavin, T.J. Greaney, Brennan David, Terry Ganey, Jonathan Braden, and Jodie Jackson Jr.)

Now one can imagine all kinds of reasons why only six of the Tribune’s reporters filed news stories that day — August vacations, slow news day, all the other reporters were working on bigger stories. I guessed at all those reasons and more, and as it turns out, all those reasons were wrong — the most parsimonious explanation is the correct one. Only six reporters filed news stories that day because the Tribune only has six news reporters, out of a staff list of 59. Every one of them appeared in that day’s paper, with three (Ganey, Braden, and Jackson) filing two stories each.

The Trib seems to realize the importance of local reporting to their readers. The outside of the paper (front and back page of section A) was all local bylines and no wire service news, while the inside had not one local news byline. (Local opinion, yes. Local sports, doubly yes. Locally reported news? No.) The local reporters were (expensive) lures, put on the outside of a product that included none of their work, and lots of the AP’s, on the inside pages.

And the other 53 masthead staff? There’s the publisher, of course, and the managing editor, as well as the copy chief, the librarian, a pair of city editors, and so on. Then there are columnists, lots and lots of columnists, writing columns like Granny’s Notes and Smile Awhile, Let’s Talk Antiques and Cookin’ with Hoss (Chicken wings end dinner plan bickering.) There are also eleven people covering sports, including one assigned just to cover the area high schools.

Now the half-dozen reporters covering the City Council and local crime instead of antiques and sports don’t do their work in a vacuum. The city desk editors and the copy chief make the work of Janese Heavin et al. more valuable than it would otherwise be. But you can pick any multiplier you like for necessary editorial and support staff and that number, times six reporters, won’t be a big number. In particular, it won’t be 59, or anywhere near it.

This is, I want to emphasize, the staff for a pretty good paper, in a competitive market. (Ann Arbor, another midwestern college town and just a bit larger than Columbia, doesn’t have a metro daily at all. [UPDATE: changed “newspaper” to “metro daily,” as the town’s remaining paper publishes in print only on Thursdays and Sundays.]) And there’s nothing wrong with reading your horoscope or being reminded by Granny that May really is one of the nicest months of the year. Anyone who wants to read that stuff should be able to.

But it’s not news, and it’s not hard to do, and it’s not hard to replace. No one surveying the changes the internet is bringing to the newspaper business is saying “My God, who will tell me about Big 12 football! Where will I find a recipe for spicy chicken wings!” What matters in the Tribune, and what’s at risk, is Terry Ganey’s work on a state coverup of elevated levels of E. Coli in Ozark lakes, Jonathan Braden on anti-gay protesters from Kansas picketing in Columbia, Jodie Jackson’s reporting on a child molestation case against a local politician.

For people who see newspapers as whole institutions that need to be saved, their size (and not just the dozens and dozens of people on the masthead, but everyone in business and operations as well) makes ideas like Coll’s seem like non-starters — we’re talking about a total workforce in the hundreds, so non-profit conversion seems crazy.

All that changes, though, if you start not from total head count but from a list of the people necessary for the production of Jones’ “iron core of news,” a list that, in the Columbia Daily Tribune’s case, would be something like a dozen. (To put this in perspective, KBIA, Columbia’s NPR affiliate, lists a staff of 20.)

Seen in that light, what’s needed for a non-profit news plan to work isn’t an institutional conversion, it’s a rescue operation. There are a dozen or so reporters and editors in Columbia, Missouri, whose daily and public work is critical to the orderly functioning of that town, and those people are trapped inside a burning business model. With that framing of the problem, the question is how to get them out safely, and if that’s the question, Coll’s idea starts to look awfully good.

Guest Post of sorts: Nicholas Lemann at Columbia Journalism School Graduation

May 31, 2009

(My friend Nick gave a great graduation speech on Wednesday, at Columbia Journalism School. As Columbia hasn’t posted the speech themselves, I’m putting it up here. -clay)

Commencement 2009

Welcome, everyone, and warmest congratulations and good wishes to all our graduates and our families.

Columbia Journalism School is rapidly approaching a very important milestone, our centennial, but if I am doing the math correctly, today is a significant anniversary too: this is our seventy-fifth conferring of graduate degrees (before that we were an undergraduate school). As our older alumni often remind us, back in the early days of the Columbia Graduate School of Journalism, the room on the third floor of our building that you know as the Lecture Hall was outfitted as a newspaper city room, complete with desks in rows and a small printing press set off in the corner.

Today we offer three different graduate degrees. We teach journalism in many different media and on many different subjects. The Lecture Hall is no longer a newsroom. But we still operate on the principle that if a course can possibly take the form of a guided exercise in doing journalism, it will. The number of individual journalistic Web sites launched every academic year from within our building is now up in the dozens.

One of the great things about a university setting is that it permits us, while we are doing our work, also to think about its larger purpose, more than most of us will ever have the chance to do while working at a news organization. (And as you saw a couple of hours ago during President Bollinger’s commencement speech, we are not the only people thinking about our larger purpose.) That is what I would like to spend a few minutes on today, as my parting words to you.

My generation was raised to think that journalism worked this way: owning a newspaper, especially a big-city newspaper, was a “public trust.” So was owning a local television station, and for that matter a television network. (Alex Jones and Susan Tifft’s 1999 biography of The New York Times is simply and grandly called “The Trust,” for example.) We assumed that news organizations were naturally very profitable; the idea of them as a public trust meant that their owners had an obligation to operate them at a handsome, but less than maximal, profit, so that they could fund newsrooms filled with dogged, independent journalists who would report on public affairs, at home and abroad.

Although the word “public” in “public trust” implies a quasi-governmental function, we did not mean that government should have anything to do with news organizations. They should be insulated from the state and from politics. We preferred that family dynasties own newspapers—this was an oddly feudal vision of the good, which we probably wouldn’t have had much patience for if someone had proposed it in other domains. Our job, as journalists, was to speak out, loudly, for the value and independence of journalism. We were to keep the government and powerful private interests away, and keep the family dynasties mindful of their public, but not public-sector, obligations.

The obvious problem with this vision today is that many big news organizations are no longer making the profits that were supposed to fund great journalism. This sudden and dramatic change has generated a big, urgent conversation about the need for a “new business model” for news production. That conversation is important, but it isn’t all-important. There is a subtler but no less pressing need for a different kind of conversation, which will take place in a wider realm, about our purpose—what we do and how we interact with other elements in society.

For most of my career, journalists, like most other professionals, had a robust, vigorous, tough-minded ongoing internal conversation about what the standards and norms governing our work should be. We felt that our own judgments about what good journalism was, achieved after a lot of argument, should be accepted by the rest of the world. So, like members of an extended family, we should be internally disputatious and externally unified. We should defend, stand up for, journalism.

But this is no longer a good way of defining how we should conduct public conversation, when, so to speak, we are outside the family circle. First, we have a palette of standard journalistic forms—many of which you have just learned how to execute while you were here. They grew up over the years, in response to commercial imperatives, technology, and the judgments of our profession. They have always been in a process of evolution, but it seems certain that they are going to evolve more rapidly over these next few years.

It’s amazing to think about how many new journalistic forms have been developed over these last few years, because of the Internet: blogs, wikis, interactive graphics, animations, audio slide shows, and so on. If you keep constant our basic mission of gathering, assessing, and presenting information, the specific ways in which we do this are changing more rapidly than at any time I can remember. And we don’t get to decide on our own how they change—that depends on what the technology permits us to do, what provides an economic basis for our work, and what our audiences respond to. This is not a time for journalists to say, “We have decided that the traditional news story is the best basic form of news delivery, so we’re doggedly sticking with it.” This is, instead, and more interestingly, a time for experimentation, which also means it’s a time for listening.

Second, and more broadly, we have been in the habit of assuming that whatever appears in a newspaper or a magazine or on a broadcast or a news organization’s Web site is available there uniquely, and represents a distinctive and irreplaceable contribution to public life. I spend a lot of my time these days talking to non-journalists about journalism, and I can tell you that we all have to learn to make a more sophisticated argument for ourselves.

Much of the public that we believe we are serving needs to be persuaded that it cannot find out what’s going on in the world simply by looking at non-journalistic Web sites and blogs—that there is a special value to the work that news organizations do. Conversely, we need to be more precise in our thinking about exactly how we are serving that oft-mentioned cause, the public’s right to know, at a time when, thanks to the Internet, the public has more free unmediated access to information than at any time in the history of the world. It may be that the particulars of how we execute our general mission will have to change quite a lot for us to be able to make the strongest possible case for the value of our profession. We have to be willing to explore all that undefensively, with energy and enthusiasm.

The kind of journalism that we have trained you to do—reporting—requires economic support. American journalism’s traditional systems of support have eroded in recent years. We have to find new ones. Some of these will be commercial, some will be philanthropic, and some will be public. That is the case for all professions. It is the case for ours already, but in these next few years our reliance on a mix of support systems, many of which will be new, will become more obvious than it has been.

So this is your charge. You will not only have to reinvent journalism, you will also have to reinvent the conversation about journalism, making it less internal to the profession, and more interactive with the rest of society. That’s an enormous job; I wonder whether any generation of journalists has had a more momentous mission than yours. But, to me, and I hope to you too, it sounds like fun. Good luck. We’ll miss you.

The Failure of #amazonfail

April 15, 2009

In 1987, a teenage girl in suburban New York was discovered dazed and wrapped in a garbage bag, smeared with feces, with racial epithets scrawled on her torso. She had been attacked by half a dozen white men, then left in that state on the grounds of an apartment building. As the court case against her accused assailants proceeded, it became clear that she’d actually faked the attack, in order not to be punished for running away from home. Though the event initially triggered enormous moral outrage, evidence that it didn’t actually happen didn’t quell that outrage. Moral judgment is harder to reverse than other, less emotional forms; when an event precipitates the cleansing anger of righteousness, admitting you were mistaken feels dirty. As a result, there can be an enormous premium put on finding rationales for continuing to feel aggrieved, should the initial rationale disappear. Call it ‘conservation of outrage.’

A lot of us behaved like that this week, in our fury at Amazon. After an enormous number of books relating to lesbian, gay, bi-sexual, and transgendered (LGBT) themes lost their Amazon sales rank, and therefore their visibility in certain Amazon list and search functions, we participated in a public campaign, largely coordinated via the Twitter keyword #amazonfail (a form of labeling called a hashtag) because of a perceived injustice at the hands of that company, an injustice that didn’t actually occur.

Though the #amazonfail event is important for several reasons, I can’t write about it dispassionately, because I was an enthusiastic participant in its use on Sunday. I was wrong, because I believed things that weren’t true. As bad as that was, though, far worse is the retrofitting of alternate rationales to continue to view Amazon with suspicion, rationales that would not have provoked the outrage we felt had they been all we were asked to react to in the first place.

When trying to explain one’s actions, hindsight is always 20/400. With that caveat, I will say that the emotional pleasure of using the #amazonfail hashtag was intoxicating. There is no civil rights struggle in the US that matters more to me than the extension of equal rights without regard for sexual orientation. Here was a chance to strike a public blow for that cause, and I didn’t even have to write a check or get up from my chair to do it! I went so far as to publicly suggest a link between the Amazon de-listing and the anti-gay backlash following the legalization of gay marriage in Iowa and Vermont. My friend Nelson Minar called bullshit on my completely worthless speculation, which was the beginning of my realizing how much I’d been seduced by righteousness, and how stupid it had made me.

I was easily seduced in part because the actual, undisputed event — the change in status of LGBT-themed work on Amazon, while heterosexual material and anti-gay tracts kept their metadata intact — fit a template I know well, that of the factional use of a system open to public access. Examples are legion; one recent one was the top positions enjoyed by issues related to the legalization of marijuana on the site. (Though I am in favor of the legalization of marijuana, I also recognize that the results were an outcome no representative poll of the American people would have returned.) Seeing the change in status of LGBT books, I believed, vaguely, that Amazon was hosting and therefore complicit in a systemic attempt to remove such material from public discussion.

Here’s how stupid that belief made me. I have been thinking about the internet as hard as I can for the better part of two decades, and for the latter half of that time, I’ve been thinking about the problems of categorization systems, and it never occurred to me that the possible explanation for systemic bias might be something having to do with a technological system instead of a human one, that a changed classification in the Amazon database could trigger the change in status of tens of thousands of books.

I assumed (again, vaguely) that Amazon themselves had not adopted an anti-gay posture, and I recognized the possibility that this might be a trolling attack, but the idea that this was an event of mainly technological propagation, rather than a coordinated bit of anti-gay bias, simply escaped me. This isn’t because I am a generally stupid person; it was because I was, on Sunday, a specifically stupid person. When a lifetime of intellectual labor and study came up against a moment of emotional engagement, emotion won, in a rout.
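To make that mechanism of technological propagation concrete, here is a toy sketch (the schema and titles are invented for illustration; Amazon’s actual systems are surely nothing this simple) of how a single edit to category-level metadata can instantly change the status of every book filed under that category:

```python
# Hypothetical catalog: each book lists the categories it belongs to.
books = {
    "Annie on My Mind":        {"Fiction", "Gay & Lesbian"},
    "Heather Has Two Mommies": {"Children", "Gay & Lesbian"},
    "A Time to Kill":          {"Fiction", "Thrillers"},
}

# Category-level flags, editable by one person in one place.
excluded_categories = set()

def is_ranked(title):
    """A book keeps its sales rank only if none of its categories are excluded."""
    return not (books[title] & excluded_categories)

# Before the change: everything is ranked.
assert all(is_ranked(t) for t in books)

# One edit to one piece of category metadata...
excluded_categories.add("Gay & Lesbian")

# ...and every book in that category is de-ranked at once.
deranked = [t for t in books if not is_ranked(t)]
print(deranked)  # the two LGBT-themed titles, not the thriller
```

The point of the sketch is that no per-book decision, and no per-book bias, is required: when visibility is computed from shared metadata, one change fans out to tens of thousands of titles as a side effect.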

Many people I love and respect disagree with me on this point; Mary Hodder in particular has written a very thoughtful case for why we should still regard Amazon as culpable and as a target for outrage. I don’t disagree with her interpretations of what Amazon did wrong (and I am using her as a particularly eloquent spokeswoman for a whole class of post-#amazonfail arguments) but I do disagree with her conclusion.

If we wanted to deny Amazon all benefit of the doubt, and to construct the maximum case against them, it would go something like this: it was stupid to have a categorization system that would allow LGBT-themed books to be de-ranked en masse; it was stupid to have a technological system that would allow that to happen easily and globally; it was stupid to remove sales rank from sexually explicit works, rather than adding “Safe Search” options; it was stupid to speak in PR-ese to the public about something that really matters; it was stupid to take as long as they did to dribble an explanation out.

Stupid stupid stupid stupid, yes, all true. If it had been a critique of those stupidities that circulated over the weekend, without the allegation of intentional mass de-listing, it would have kicked off a long, thoughtful conversation about metadata, system design, and public relations. Those are good conversations to have, and we need to have them, but they are not conversations that would enrage thousands of people in the space of a few hours and kick off calls for boycotts and worse.

Intention is what we were reacting to, and the perception of intention matters, a lot. If you hit me with your car and kill me, the effect on you could be anything from grief counseling to being convicted of murder, and that range of outcomes would rest on a judgment about your intentions, even given the same actual event.

So it is here. Whatever stupidities Amazon is guilty of, none of them are hanging offenses. The problem they have with labeling and handling contested categories is one that has faced every categorization system since the world began. Metadata is worldview; sorting is a political act. Amazon would love to avoid those problems if they could — who needs the tsouris? — but they can’t. No one gets cataloging “right” in any perfect sense, and no algorithm returns the “correct” results. We know that, because we see it every day, in every large-scale system we use. No set of labels or algorithms solves anything once and for all; any working system for showing data to the user is a bag of optimizations and tradeoffs that are a lot worse than some Platonic ideal, but a lot better than nothing.
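As a toy illustration of that bag of tradeoffs (the titles and tags here are invented, not Amazon’s data): any single “adult” flag both over-blocks and under-blocks, because the filter can only ever be as good as the tagging behind it.

```python
catalog = [
    {"title": "Explicit Novel",       "tags": {"erotica"}},
    {"title": "Teen Sex-Ed Handbook", "tags": {"erotica"}},  # mis-filed upstream
    {"title": "Untagged Pulp",        "tags": set()},        # explicit, but untagged
]

def safe_search(items, blocked_tags):
    """Hide anything carrying a blocked tag -- only as good as the tagging."""
    return [b["title"] for b in items if not (b["tags"] & blocked_tags)]

visible = safe_search(catalog, blocked_tags={"erotica"})
print(visible)  # the mis-filed sex-ed book vanishes; the untagged pulp survives
```

Fixing either error means changing the tags, and deciding what the tags should be is exactly the contested, political part.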

We know all that, but we’re no longer willing to cut Amazon any slack, because we don’t trust them, and we don’t trust them because we feel like they did something bad, even though we now know, intellectually, that they didn’t actually do the bad thing we’ve come to hate them for. They didn’t intend to silence gay-themed work, and they didn’t provide the means for groups of anti-gay bigots to do so either. Even if the employee currently blamed for the change in the database turned out to be a virulent homophobe, the problem is in not having checks and balances for making changes to the database, not widespread bias.

We’re used to the future turning out differently than we expected; it happens all the time. When the past turns out differently, though, it can get really upsetting, and because people don’t like that kind of upset, we’re at risk of finding new reasons to believe false things, rather than revising our sense of what actually happened.

We shouldn’t let that happen here; conservation of outrage is the wrong answer. We can apologize to Amazon while not losing sight of the fact that homophobic bias is wrong and we have to fight it everywhere it exists. What we can’t do, can’t afford to do if we want to think of ourselves as people who care about injustice, is to fight it in places it doesn’t exist.

Newspapers and Thinking the Unthinkable

March 13, 2009

Back in 1993, the Knight-Ridder newspaper chain began investigating piracy of Dave Barry’s popular column, which was published by the Miami Herald and syndicated widely. In the course of tracking down the sources of unlicensed distribution, they found many things, including the copying of his column to usenet; a 2000-person strong mailing list also reading pirated versions; and a teenager in the Midwest who was doing some of the copying himself, because he loved Barry’s work so much he wanted everybody to be able to read it.

One of the people I was hanging around with online back then was Gordy Thompson, who managed internet services at the New York Times. I remember Thompson saying something to the effect of “When a 14 year old kid can blow up your business in his spare time, not because he hates you but because he loves you, then you got a problem.” I think about that conversation a lot these days.

The problem newspapers face isn’t that they didn’t see the internet coming. They not only saw it miles off, they figured out early on that they needed a plan to deal with it, and during the early 90s they came up with not just one plan but several. One was to partner with companies like America Online, a fast-growing subscription service that was less chaotic than the open internet. Another plan was to educate the public about the behaviors required of them by copyright law. New payment models such as micropayments were proposed. Alternatively, they could pursue the profit margins enjoyed by radio and TV, if they became purely ad-supported. Still another plan was to convince tech firms to make their hardware and software less capable of sharing, or to partner with the businesses running data networks to achieve the same goal. Then there was the nuclear option: sue copyright infringers directly, making an example of them.

As these ideas were articulated, there was intense debate about the merits of various scenarios. Would DRM or walled gardens work better? Shouldn’t we try a carrot-and-stick approach, with education and prosecution? And so on. In all this conversation, there was one scenario that was widely regarded as unthinkable, a scenario that didn’t get much discussion in the nation’s newsrooms, for the obvious reason.

The unthinkable scenario unfolded something like this: The ability to share content wouldn’t shrink, it would grow. Walled gardens would prove unpopular. Digital advertising would reduce inefficiencies, and therefore profits. Dislike of micropayments would prevent widespread use. People would resist being educated to act against their own desires. Old habits of advertisers and readers would not transfer online. Even ferocious litigation would be inadequate to constrain massive, sustained law-breaking. (Prohibition redux.) Hardware and software vendors would not regard copyright holders as allies, nor would they regard customers as enemies. DRM’s requirement that the attacker be allowed to decode the content would be an insuperable flaw. And, per Thompson, suing people who love something so much they want to share it would piss them off.

Revolutions create a curious inversion of perception. In ordinary times, people who do no more than describe the world around them are seen as pragmatists, while those who imagine fabulous alternative futures are viewed as radicals. The last couple of decades haven’t been ordinary, however. Inside the papers, the pragmatists were the ones simply looking out the window and noticing that the real world increasingly resembled the unthinkable scenario. These people were treated as if they were barking mad. Meanwhile the people spinning visions of popular walled gardens and enthusiastic micropayment adoption, visions unsupported by reality, were regarded not as charlatans but saviors.

When reality is labeled unthinkable, it creates a kind of sickness in an industry. Leadership becomes faith-based, while employees who have the temerity to suggest that what seems to be happening is in fact happening are herded into Innovation Departments, where they can be ignored en bloc. This shunting aside of the realists in favor of the fabulists has different effects on different industries at different times. One of the effects on the newspapers is that many of their most passionate defenders are unable, even now, to plan for a world in which the industry they knew is visibly going away.

* * *

The curious thing about the various plans hatched in the ’90s is that they were, at base, all the same plan: “Here’s how we’re going to preserve the old forms of organization in a world of cheap perfect copies!” The details differed, but the core assumption behind all imagined outcomes (save the unthinkable one) was that the organizational form of the newspaper, as a general-purpose vehicle for publishing a variety of news and opinion, was basically sound, and only needed a digital facelift. As a result, the conversation has degenerated into the enthusiastic grasping at straws, pursued by skeptical responses.

“The Wall Street Journal has a paywall, so we can too!” (Financial information is one of the few kinds of information whose recipients don’t want to share.) “Micropayments work for iTunes, so they will work for us!” (Micropayments work only where the provider can avoid competitive business models.) “The New York Times should charge for content!” (They’ve tried, with QPass and later TimesSelect.) “Cook’s Illustrated and Consumer Reports are doing fine on subscriptions!” (Those publications forgo ad revenues; users are paying not just for content but for unimpeachability.) “We’ll form a cartel!” (…and hand a competitive advantage to every ad-supported media firm in the world.)

Round and round this goes, with the people committed to saving newspapers demanding to know “If the old model is broken, what will work in its place?” To which the answer is: Nothing. Nothing will work. There is no general model for newspapers to replace the one the internet just broke.

With the old economics destroyed, organizational forms perfected for industrial production have to be replaced with structures optimized for digital data. It makes increasingly less sense even to talk about a publishing industry, because the core problem publishing solves — the incredible difficulty, complexity, and expense of making something available to the public — has stopped being a problem.

* * *

Elizabeth Eisenstein’s magisterial treatment of Gutenberg’s invention, The Printing Press as an Agent of Change, opens with a recounting of her research into the early history of the printing press. She was able to find many descriptions of life in the early 1400s, the era before movable type. Literacy was limited, the Catholic Church was the pan-European political force, Mass was in Latin, and the average book was the Bible. She was also able to find endless descriptions of life in the late 1500s, after Gutenberg’s invention had started to spread. Literacy was on the rise, as were books written in contemporary languages, Copernicus had published his epochal work on astronomy, and Martin Luther’s use of the press to reform the Church was upending both religious and political stability.

What Eisenstein focused on, though, was how many historians ignored the transition from one era to the other. To describe the world before or after the spread of print was child’s play; those dates were safely distanced from upheaval. But what was happening in 1500? The hard question Eisenstein’s book asks is “How did we get from the world before the printing press to the world after it? What was the revolution itself like?”

Chaotic, as it turns out. The Bible was translated into local languages; was this an educational boon or the work of the devil? Erotic novels appeared, prompting the same set of questions. Copies of Aristotle and Galen circulated widely, but direct encounter with the relevant texts revealed that the two sources clashed, tarnishing faith in the Ancients. As novelty spread, old institutions seemed exhausted while new ones seemed untrustworthy; as a result, people almost literally didn’t know what to think. If you can’t trust Aristotle, who can you trust?

During the wrenching transition to print, experiments were only revealed in retrospect to be turning points. Aldus Manutius, the Venetian printer and publisher, invented the smaller octavo volume along with italic type. What seemed like a minor change — take a book and shrink it — was in retrospect a key innovation in the democratization of the printed word. As books became cheaper, more portable, and therefore more desirable, they expanded the market for all publishers, heightening the value of literacy still further.

That is what real revolutions are like. The old stuff gets broken faster than the new stuff is put in its place. The importance of any given experiment isn’t apparent at the moment it appears; big changes stall, small changes spread. Even the revolutionaries can’t predict what will happen. Agreements on all sides that core institutions must be protected are rendered meaningless by the very people doing the agreeing. (Luther and the Church both insisted, for years, that whatever else happened, no one was talking about a schism.) Ancient social bargains, once disrupted, can neither be mended nor quickly replaced, since any such bargain takes decades to solidify.

And so it is today. When someone demands to know how we are going to replace newspapers, they are really demanding to be told that we are not living through a revolution. They are demanding to be told that old systems won’t break before new systems are in place. They are demanding to be told that ancient social bargains aren’t in peril, that core institutions will be spared, that new methods of spreading information will improve previous practice rather than upending it. They are demanding to be lied to.

There are fewer and fewer people who can convincingly tell such a lie.

* * *

If you want to know why newspapers are in such trouble, the most salient fact is this: Printing presses are terrifically expensive to set up and to run. This bit of economics, normal since Gutenberg, limits competition while creating positive returns to scale for the press owner, a happy pair of economic effects that feed on each other. In a notional town with two perfectly balanced newspapers, one paper would eventually generate some small advantage — a breaking story, a key interview — at which point both advertisers and readers would come to prefer it, however slightly. That paper would in turn find it easier to capture the next dollar of advertising, at lower expense, than the competition. This would increase its dominance, which would further deepen those preferences, repeat chorus. The end result is either geographic or demographic segmentation among papers, or one paper holding a monopoly on the local mainstream audience.
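That feedback loop can be sketched with a back-of-the-envelope simulation (the numbers here are invented; only the shape of the dynamic matters): give one of two otherwise identical papers a two-point edge, let defections of readers and advertisers scale with the size of the gap, and watch the edge compound toward local monopoly.

```python
share_a, share_b = 0.51, 0.49  # one breaking story's worth of advantage
tilt = 0.3                     # how strongly readers and advertisers prefer the leader

for year in range(20):
    # Each year, defections from the trailing paper scale with the gap.
    drift = tilt * (share_a - share_b) * share_b
    share_a += drift
    share_b -= drift

print(round(share_a, 2), round(share_b, 2))  # the leader ends up dominant
```

The particular constants are arbitrary; the qualitative result is not. Any positive coupling between current share and future share turns a small initial asymmetry into segmentation or monopoly, which is why balanced two-paper towns were unstable.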

For a long time, longer than anyone in the newspaper business has been alive in fact, print journalism has been intertwined with these economics. The expense of printing created an environment where Wal-Mart was willing to subsidize the Baghdad bureau. This wasn’t because of any deep link between advertising and reporting, nor was it about any real desire on the part of Wal-Mart to have their marketing budget go to international correspondents. It was just an accident. Advertisers had little choice other than to have their money used that way, since they didn’t really have any other vehicle for display ads.

The old difficulties and costs of printing forced everyone doing it into a similar set of organizational models; it was this similarity that made us regard Daily Racing Form and L’Osservatore Romano as being in the same business. That the relationship between advertisers, publishers, and journalists has been ratified by a century of cultural practice doesn’t make it any less accidental.

The competition-deflecting effects of printing cost got destroyed by the internet, where everyone pays for the infrastructure, and then everyone gets to use it. And when Wal-Mart, and the local Maytag dealer, and the law firm hiring a secretary, and that kid down the block selling his bike, were all able to use that infrastructure to get out of their old relationship with the publisher, they did. They’d never really signed up to fund the Baghdad bureau anyway.

* * *

Print media does much of society’s heavy journalistic lifting, from flooding the zone — covering every angle of a huge story — to the daily grind of attending the City Council meeting, just in case. This coverage creates benefits even for people who aren’t newspaper readers, because the work of print journalists is used by everyone from politicians to district attorneys to talk radio hosts to bloggers. The newspaper people often note that newspapers benefit society as a whole. This is true, but irrelevant to the problem at hand; “You’re gonna miss us when we’re gone!” has never been much of a business model. So who covers all that news if some significant fraction of the currently employed newspaper people lose their jobs?

I don’t know. Nobody knows. We’re collectively living through 1500, when it’s easier to see what’s broken than what will replace it. The internet turns 40 this fall. Access by the general public is less than half that age. Web use, as a normal part of life for a majority of the developed world, is less than half that age. We just got here. Even the revolutionaries can’t predict what will happen.

Imagine, in 1996, asking some net-savvy soul to expound on the potential of craigslist, then a year old and not yet incorporated. The answer you’d almost certainly have gotten would be extrapolation: “Mailing lists can be powerful tools”, “Social effects are intertwining with digital networks”, blah blah blah. What no one would have told you, could have told you, was what actually happened: craigslist became a critical piece of infrastructure. Not the idea of craigslist, or the business model, or even the software driving it. Craigslist itself spread to cover hundreds of cities and has become a part of public consciousness about what is now possible. Experiments are only revealed in retrospect to be turning points.

In craigslist’s gradual shift from ‘interesting if minor’ to ‘essential and transformative’, there is one possible answer to the question “If the old model is broken, what will work in its place?” The answer is: Nothing will work, but everything might. Now is the time for experiments, lots and lots of experiments, each of which will seem as minor at launch as craigslist did, as Wikipedia did, as octavo volumes did.

Journalism has always been subsidized. Sometimes it’s been Wal-Mart and the kid with the bike. Sometimes it’s been Richard Mellon Scaife. Increasingly, it’s you and me, donating our time. The list of models that are obviously working today, like Consumer Reports and NPR, like ProPublica and WikiLeaks, can’t be expanded to cover any general case, but then nothing is going to cover the general case.

Society doesn’t need newspapers. What we need is journalism. For a century, the imperatives to strengthen journalism and to strengthen newspapers have been so tightly wound as to be indistinguishable. That’s been a fine accident to have, but when that accident stops, as it is stopping before our eyes, we’re going to need lots of other ways to strengthen journalism instead.

When we shift our attention from ‘save newspapers’ to ‘save society’, the imperative changes from ‘preserve the current institutions’ to ‘do whatever works.’ And what works today isn’t the same as what used to work.

We don’t know who the Aldus Manutius of the current age is. It could be Craig Newmark, or Caterina Fake. It could be Martin Nisenholtz, or Emily Bell. It could be some 19 year old kid few of us have heard of, working on something we won’t recognize as vital until a decade hence. Any experiment, though, designed to provide new models for journalism is going to be an improvement over hiding from the real, especially in a year when, for many papers, the unthinkable future is already in the past.

For the next few decades, journalism will be made up of overlapping special cases. Many of these models will rely on amateurs as researchers and writers. Many of these models will rely on sponsorship or grants or endowments instead of revenues. Many of these models will rely on excitable 14 year olds distributing the results. Many of these models will fail. No one experiment is going to replace what we are now losing with the demise of news on paper, but over time, the collection of new experiments that do work might give us the journalism we need.

Why iTunes is not a workable model for the newspaper business

March 3, 2009

(Note: This is more backstory than post. I’m putting this here so I can refer to it later, because I’m tired of reconstituting these arguments one conversation at a time.)

As the internet destroys the local advertising monopoly previously enjoyed by newspapers, newspaper people are talking about radically altering their current digital business models. One idea getting a lot of attention is that the iTunes Music Store points to a generalizable model for selling digital content. You can find endless examples of this belief by googling ‘iTunes newspapers’; as a reference, take David Lazarus’s piece iTunes proves newspapers can and should charge for online access, whose title neatly encapsulates the idea.

This belief is wrong, because iTunes relies on several unusual characteristics for its success, characteristics that are not general, and are in particular not applicable to news.

The first characteristic concerns music itself: people like to hear the same song more than once. There are dozens of songs you’d be happy to hear hundreds of times and hundreds of songs you’d be happy to hear dozens of times, but there are not many newspaper articles you’d read twice. This in turn means that music operates outside the classic intellectual property valuation problem: if I let you read something I write, and then try to charge you for it, I will fail, even if you liked it, because you don’t want to read it again. If I let you listen to a song I recorded, and then try to charge you for it, I may succeed, especially if you liked it, because you do want to listen to it again.

This desire to hear the same song more than once helps keep music from being interchangeable, or fungible, as Nick Carr points out. If I want to hear Weezer’s El Scorcho, then I’m not in the mood to accept substitutes.

The second characteristic is about music licensing. In particular, the two most obvious business models other than pay-per-track have been voided. Most forms of superdistribution have been rendered illegal, starting with Judge Patel’s 2001 injunction against Napster. The next year, the US Register of Copyrights instituted fee-per-user-per-song pricing on music streamed over the internet, killing broadcast radio’s “play songs, run ads, pay ASCAP” arrangement as an option for internet distribution. With these two models rendered illegal, the labels have been able to insist on per-song fees, free of competition from alternatives.

Third, most popular music, and especially the back catalog, is owned by just four companies — Sony, EMI, Universal, and Warner. Like the major airlines, the business models and pricing of these companies move in synch. (This is not the same as formal collusion; with so few firms and their interests so similar, they can align their interests without acting in concert.) Because the music they control is not fungible, because they prefer to charge direct user fees, and because the alternative models have been limited or forbidden by legal or regulatory means, they have significant control of the music market as a whole.

People who want to hold iTunes up as a generalizable success often do some hand-waving about other competitive models from indie sites like eMusic, but the reality of the online music world is both simpler and more limited than this hand-waving would suggest: Mainstream music is tied to fee-per-track. iTunes Music Store competes with other fee-per-track services like Amazon, but faces little significant competition from other kinds of services.

Put another way, experimental business models exist, but aren’t being applied to popular music, and popular music isn’t being made available via experimental business models. eMusic has less than half the catalog iTunes has, and much of it from independent labels (partly from preference, partly because the four major labels won’t contract with them). Fans of the iTunes model are right to point out that people use it because they find it more convenient, but they overlook the legal and regulatory hurdles put in place precisely to make other models less convenient (especially for law-abiding citizens).

The resulting debate around IP law and the music business has principally focused on legitimacy, with “Thieves destroying legitimate businesses!” and “Luddites stifling innovation!” representing the poles of the conversation. In the context of business models for journalism, however, the question of legitimacy is irrelevant. Whether you like or loathe the major music labels, they are in a situation where they can sharply restrict distribution models they don’t like. Whatever you think of that in normative terms, it doesn’t describe any strategy available to newspapers, because, unlike iTunes, news can’t escape competition for alternate models of distribution.

Newspapers, even if every single one of them acted in collusion, cannot establish a monopoly on news. The main source of value for newspapers is reporting on events in the real world, and since those events can’t be copyrighted, and can be reported on by radio stations and television programs and non-profits and webloggers and twitterers and and and, news online will always be a competitive business in a way music is not.

Why Small Payments Won’t Save Publishers

February 9, 2009

With continued turmoil in the advertising market, people who work at newspapers and magazines are wondering if micropayments will save them, with recent speculation in this direction by David Sarno of the LA Times, David Carr of the NY Times, and Walter Isaacson in Time magazine. Unfortunately for the optimists, micropayments — small payments made by readers for individual articles or other pieces of a la carte content — won’t work for online journalism.

To understand why not, there are two key pieces of background.

First, the label micropayments no longer makes any sense. Some of the early proposals for small payment systems did indeed imagine digital bookkeeping and billing for amounts as small as a thousandth of a cent; this was what made such payments “micro”. Current proposals, however, imagine pricing digital content in the range of a dime to a dollar. These aren’t micro-anything, they are just ordinary but small payments, no magic anywhere.

The essential thing to understand about small payments is that users don’t like being nickel-and-dimed. We have the phrase ‘nickel-and-dimed’ because this dislike is both general and strong. The result is that small payment systems don’t survive contact with online markets, because we express our hatred of small payments by switching to alternatives, whether supported by subscription or subsidy.

The other key piece of background isn’t about small payments themselves, but about the conversation. Such systems solve no problem the user has, and offer no service we want. As a result, conversations about small payments take place entirely among content providers, never involving us, the people who will ostensibly be funding these transactions. The conversation about small payments is also not a normal part of the conversation among publishers. Instead, the word ‘micropayment’ is a trope for desperation, entering the vernacular of a given media market only after threats to older models become visibly dire (as with the failed attempts to adopt small payments for webzines in the late ’90s, or for solo content like web comics and blogs earlier in this decade.)

The invocation of micropayments involves a displaced fantasy that the publishers of digital content can re-assert control over us unruly users in a media environment with low barriers to entry for competition. News that this has been tried many times in the past and has not worked is unwelcome precisely because if small payment systems won’t save existing publishers in their current form, there might not be a way to save existing publishers in their current form (an outcome generally regarded as unthinkable by existing publishers).

Faith in salvation from small payments all but requires the adherent to ignore the past, whether existing critiques (e.g. Szabo 1996; Shirky 2000, 2003; Odlyzko 2003) or previous failures. Isaacson’s recent Time magazine cover story on micropayments, How to Save Your Newspaper, a classic of the form, recapitulates the argument put forward by Scott McCloud in his 2003 Misunderstanding Micropayments. That McCloud advanced the same argument that Isaacson does, and that the small payment system McCloud was proselytizing for failed exactly as predicted, seems not to have troubled Isaacson much, even though he offers no argument different from McCloud’s.

Another strategy among the faithful is to extrapolate from systems that do rely on small payments: iTunes, ringtone sales, or sales of digital goods in environments such as Cyworld. (This is the idea explored by David Carr in Let’s Invent an iTunes for News.) The lesson of iTunes et al (indeed, the only real lesson of small payment systems generally) is that if you want something that doesn’t survive contact with the market, you can’t let it have contact with the market.

Cyworld, a wildly popular online forum in Korea, is able to collect small payments for digital items, denominated in a currency called Dotori (“acorn”), because once a user is in Cyworld, SK Telecom, the corporate parent, controls all the distribution options. A Cyworld user who wants a certain kind of digital decoration for their online presence has to buy it through Cyworld if they want it; the monopoly within the environment is enough to prevent competition for pricing of digital goods. Similarly, mobile phone carriers go to great lengths to prevent the ringtone distribution network from becoming general-purpose, lest freely circulating mp3s drive the price to zero. In these cases, control over the users’ environment is essential to preventing competition from destroying the payment model.

Apple’s ITMS (iTunes Music Store) is perhaps the most interesting example. People are not paying for music on ITMS because we have decided that fee-per-track is the model we prefer, but because there is no market in which commercial alternatives can be explored. Everything from Napster to online radio has been crippled or killed by fiat; small payments survive in the absence of a market for other legal options. What’s interesting about ITMS, though, is that it contains other content that illustrates the journalists’ dilemma most sharply: podcasts. Apple has the machinery in place to charge for podcasts. Why don’t they?

Because they can’t afford to. Were they to start charging, their users would start looking around for other sources, as podcasts are offered free elsewhere. Losing user attention would be anathema to a company that wants as tight a relationship between ITMS and the iPod as it can get; the potential revenues are not worth the erosion of audience.

Without the RIAA et al, Apple is unable to corner the market on podcasts, and thus unable to charge. Unless Apple could get the world’s unruly podcasters to behave as a cartel, and convince all new entrants to forgo filling the resulting vacuum of attention, podcasts will continue to circulate without individual payments. With every single tool in place to have a functioning small payment system, even Apple can’t defy the users if there is any way for us to express our preferences.

Which brings us to us.

Because small payment systems are always discussed in conversations by and for publishers, readers are assigned no independent role. In every micropayments fantasy, there is a sentence or section asserting that what the publishers want will be just fine with us, and, critically, that we will be possessed of no desires of our own that would interfere with that fantasy.

Meanwhile, back in the real world, the media business is being turned upside down by our new freedoms and our new roles. We’re not just readers anymore, or listeners or viewers. We’re not customers and we’re certainly not consumers. We’re users. We don’t consume content, we use it, and mostly what we use it for is to support our conversations with one another, because we’re media outlets now too. When I am talking about some event that just happened, whether it’s an earthquake or a basketball game, whether the conversation is in email or Facebook or Twitter, I want to link to what I’m talking about, and I want my friends to be able to read it easily, and to share it with their friends.

This is superdistribution — content moving from friend to friend through the social network, far from the original source of the story. Superdistribution, despite its unwieldy name, matters to users. It matters a lot. It matters so much, in fact, that we will routinely prefer a shareable amateur source to a professional source that requires us to keep the content a secret on pain of lawsuit. (Wikipedia’s historical advantage over Britannica in one sentence.)

Nickel-and-diming us for access to content made less useful by those very restrictions simply isn’t appealing. Newspapers can’t entice us into small payment systems, because we care too much about our conversation with one another, and they can’t force us into such systems, because Off the Bus and Pro Publica and Gawker and Global Voices and Ohmynews and the Smoking Gun all understand that not only is a news cartel unworkable, but that if one existed, their competitive advantage would be in attacking it rather than defending it.

The threat from micropayments isn’t that they will come to pass. The threat is that talking about them will waste our time, and now is not the time to be wasting time. The internet really is a revolution for the media ecology, and the changes it is forcing on existing models are large. What matters at newspapers and magazines isn’t publishing, it’s reporting. We should be talking about new models for employing reporters rather than resuscitating old models for employing publishers; the more time we waste fantasizing about magic solutions for the latter problem, the less time we have to figure out real solutions to the former one.