
Pulling Punches While Covering Print: A Reply to Ryan Chittum

June 18, 2014

Background: Yesterday, I wrote something about how nostalgia blunted skeptical coverage of Aaron Kushner’s dumb — and now failing — plans to build a newspaper empire in southern California. I particularly singled out Ryan Chittum of Columbia Journalism Review and Ken Doctor of Nieman Lab as analysts who suspected that Kushner would fail, but waffled when it came time to tell their readers.

This morning brought this from Chittum.

Here’s my reply.


Ryan,

Before you pen that response, let me be very clear what I’m accusing you of: you knew that Kushner’s plan was terrible, and you knew why it was terrible, but you pulled your punches, because you didn’t like the implications of the things you knew, and because your readers would like them even less.

Here are the opening three grafs of a story you could have, and should have, written in 2013:

Aaron Kushner, a 40-year-old former greeting-card executive with zero experience in newspapers, believes that there’s money to be made in print. He’s determined to remake the OC Register with this strategy.

The odds are against his plan. The Register doesn’t have the benefit of an international audience or a financial-industry focus. It has installed a hard paywall, which has been unsuccessful most places it’s been tried. Meanwhile, readers have more sources for news and entertainment than ever, and print advertising is in an inexorable tailspin.

Despite this, Kushner, enamored with the idea of readers cutting out pictures from the paper and sticking them on the fridge, thinks he can get smartphone-obsessed teenagers to pick up an old-fashioned newspaper. “You can’t put an iPad on the refrigerator,” he says. “You can’t put it in a scrapbook. You can’t tape it to your locker.” But print-loving stalwarts are aging rapidly, and print isn’t picking up readers under 30.

Bracing, no? No doubt about where a piece like that is going.

You could have written that intro because you wrote every sentiment included here. You just spread them out and caveated them so much that only a careful reader could tell you actually assumed that Kushner’s plan was doomed.

And to write those grafs I just went through your piece and rescued those sentiments from their hiding places. (You didn’t tell your readers the odds favored failure ’til the 33rd of 34 paragraphs. You didn’t just bury the lede, you poured cement over the gravesite.)

And you should have written that piece because if you’d opened with those three grafs a year ago, you could have taken a victory lap today. You’d need no more than a tweet to deal with the recent implosion — “As I said a year ago…” Instead, you’re stuck explaining why a piece larded with skeptical asides nevertheless presented Kushner as someone conducting “the most interesting—and important—experiment in journalism right now.” Oops.

This is the key commonality between you and Ken Doctor, which is that you couldn’t stomach praising Freedom’s actual strategy, but you couldn’t bring yourself to criticize nostalgia as a business model either. (Reader uproar!) To get out of this bind you invented an alternate reality in which Kushner was using print as a kind of short-term revenue stream, while committed to a longer-term transition to digital. (Here Doctor was worse than you, fabricating a bunch of “virtuous circles” to make up for the fact that Step 1 of Kushner’s master plan was “Invest in the decaying parts of my business.”)

Even your skepticism about print was hedged. When it came time to say, straight out, that print advertising is in an inexorable tailspin, you couldn’t do it. Do you remember what you wrote instead?

Print advertising is—barring a miracle—in an inexorable tailspin.

“Barring a miracle”? What? All forward-looking statements are “barring a miracle”; you’d only use a construction like that to make the plain truth of print’s tailspin more palatable to your weepy old readers.

And now, when Dan Kennedy says that you “hailed their print-centric approach”, you have to distance yourself from what was actually going on:

The bet on better journalism was always the key to success, not the emphasis on print itself.

Betting on journalism while de-emphasizing print sure sounds like an interesting plan. It does not, however, sound like Kushner’s plan. It’s not like there was sooper-seekrit Enron/Madoff stuff going on either; Kushner was stopping people on the street and telling them he was doubling down on print. This was a guy whose idea of sharing content involved fridge magnets.

And you knew. You knew a year ago. And you couldn’t bear to tell your readers without so much hemming and hawing that you ended up shilling for Kushner instead of warning people away.

-clay

Nostalgia and Newspapers

June 17, 2014

Aaron Kushner, CEO of Freedom Communications and the architect of a contrarian plan to expand southern California newspapers, began erecting hard paywalls for his digital properties while increasing newsroom and print outlay in the summer of 2012. That strategy imploded earlier this month, with layoffs, buy-outs, furloughs and the merger of two Freedom papers, essentially reversing the previous two years of investment.

There’s no nice way to say this, so I might just as well get to it: Kushner’s plan was always dumb and we should celebrate its demise, not because it failed (never much in doubt) but because it distracted people with the fantasy of an easy out for dealing with the gradual end of profits from print.

The most important fight in journalism today isn’t between short vs. long-form publications, or fast vs. thorough newsrooms, or even incumbents vs. start-ups. The most important fight is between realists and nostalgists. Kushner was running a revival meeting for nostalgists: “The internet’s not such a big deal! Digital readers will pay rather than leave! Investing in print is just plain good business!”

That was some old-time religion right there. It was fun while it lasted, for people who miss the good old days. For people who do not miss the good old days, it was not fun.

A year or so ago, I was a guest lecturer in NYU’s Intro to Journalism class, 200 or so sophomores interested in adding journalism as a second major. (We don’t allow students to major in journalism alone, for the obvious reason.) One of the students had been dispatched to interview me in front of the class, and two or three questions in, she asked “So how do we save print?”

I was speechless for a moment, then exploded, telling her that print was in terminal decline and that everyone in the class needed to understand this if they were thinking of journalism as a major or a profession.

The students were shocked — for many of them, it was the first time anyone had talked to them that way. Even a prompt from me to predict the date of Time magazine’s demise elicited a small gasp. This was a room full of people who would rather lick asphalt than subscribe to a paper publication; what on earth would make them think print was anything other than a wasting asset?

And the answer is “Adults lying to them.” Our students were persuaded to discount their own experience in favor of what the grownups who cover the media industry were saying, and those grownups were saying that strategies like Kushner’s might just work.

People who ought to have known better, like Ryan Chittum at Columbia Journalism Review and Ken Doctor at Nieman, wrote puff pieces for Kushner, because they couldn’t bear to treat him like the snake-oil salesman he is.

Last year, Chittum said:

Kushner, a 40-year-old former greeting-card executive with zero experience in newspapers, is running the most interesting—and important—experiment in journalism right now.

The bit of that sentence before the comma now looks prescient; the bit after somewhat less so. Doctor was even worse, penning little “Maybe this thing still has a chance!” mash notes about Freedom a month before the layoffs hit.

The really terrible thing is that both Chittum and Doctor understood from the beginning what made Kushner’s plan a disaster. They just couldn’t bring themselves to give it to their readers straight. In the same piece where he lauds Kushner, Chittum waits ’til two-thirds of the way through to point out that the core of Freedom’s strategy “has been unsuccessful most places it’s been tried”, and buries his most important observation — it will probably fail — at the very end of the piece.

What happened to Chittum and Doctor is endemic to media reporting generally — an industry that prides itself on pitiless public scrutiny of politics and business has largely lost the will to cover itself with any more skepticism than sports reporters rooting for the home team. (Here’s Doctor, writing during the implosion of Freedom’s strategy: “The enthusiasm of Kushner and [partner] Spitz is hard to dislike.” What’s this, a Pharrell profile?)

When you have an audience mostly made up of nostalgists, there’s not much market demand for unvarnished truth. This kind of boosterism wouldn’t matter so much if it were only reaching weepy journos whose careers started in the Reagan administration. But the toxic runoff from CJR and Nieman’s form of unpaid PR is poisoning the minds of 19-year-olds.

We don’t have much time left to manage the transition away from print. We are statistically closer to the next recession than to the last one, and another year or two of double-digit ad declines will push many papers into 3-day printing schedules, or bankruptcy, or both. If you want to cry in your beer about the good old days, go ahead. Just stay the hell away from the kids while you’re reminiscing; pretending that dumb business models might suddenly start working has crossed over from sentimentality to child abuse.

The End of Higher Education’s Golden Age

January 29, 2014

Interest in using the internet to slash the price of higher education is being driven in part by hope for new methods of teaching, but also by frustration with the existing system. The biggest threat those of us working in colleges and universities face isn’t video lectures or online tests. It’s the fact that we live in institutions perfectly adapted to an environment that no longer exists.

In the first half of the 20th century, higher education was a luxury and a rarity in the U.S. Only 5% or so of adults, overwhelmingly drawn from well-off families, had attended college. That changed with the end of WWII. Waves of discharged soldiers subsidized by the GI Bill, joined by the children of the expanding middle class, wanted or needed a college degree. From 1945 to 1975, the number of undergraduates increased five-fold, and graduate students nine-fold. PhDs graduating one year got jobs teaching the ever-larger cohort of freshmen arriving the next.

This growth was enthusiastically subsidized. Between 1960 and 1975, states more than doubled their rate of appropriations for higher education, from four dollars per thousand in state revenue to ten. Post-secondary education extended its previous mission—liberal arts education for elites—to include both more basic research from faculty and more job-specific training for students. Federal research grants quadrupled; at the same time, a Bachelor’s degree became an entry-level certificate for an increasing number of jobs.

This expansion created tensions among the goals of open-ended exploration, training for the workplace, and research, but these tensions were masked by new income. Decades of rising revenue meant we could simultaneously become the research arm of government and industry, the training ground for a rapidly professionalizing workforce, and the preservers of the liberal arts tradition. Even better, we could do all of this while increasing faculty ranks and reducing the time senior professors spent in the classroom. This was the Golden Age of American academia.

As long as the income was incoming, we were happy to trade funding our institutions with our money (tuition and endowment) for funding it with other people’s money (loans and grants.) And so long as college remained a source of cheap and effective job credentials, our new sources of support—students with loans, governments with research agendas—were happy to let us regard ourselves as priests instead of service workers.

Then the 1970s happened. The Vietnam war ended, removing “not getting shot at” as a reason to enroll. The draft ended too, reducing the ranks of future GIs, while the GI bill was altered to shift new costs onto former soldiers. During the oil shock and subsequent recession, demand for education shrank for the first time since 1945, and states began persistently reducing the proportion of tax dollars going to higher education, eventually cutting the previous increase in half. Rising costs and falling subsidies have driven average tuition up over 1000% since the 1970s.

Golden Age economics ended. Golden Age assumptions did not. For 30 wonderful years, we had been unusually flush, and we got used to it, re-designing our institutions to assume unending increases in subsidized demand. This did not happen. The year it started not happening was 1975. Every year since, we tweaked our finances, hiking tuition a bit, taking in a few more students, making large lectures a little larger, hiring a few more adjuncts.

Each of these changes looked small and reversible at the time. Over the decades, though, we’ve behaved like an embezzler who starts by taking only what he means to replace, but ends up extracting so much that embezzlement becomes the system. There is no longer enough income to support a full-time faculty and provide students a reasonably priced education of acceptable quality at most colleges or universities in this country.

Our current difficulties are not the result of current problems. They are the bill coming due for 40 years of trying to preserve a set of practices that have outlived the economics that made them possible.

* * *

Part of the reason this change is so disorienting is that the public conversation focuses, obsessively, on a few elite institutions. The persistent identification of higher education with institutions like Swarthmore and Stanford creates a collective delusion about the realities of education after high school; the collapse of Antioch College in 2008 was more widely reported than the threatened loss of accreditation for the Community College of San Francisco last year, even though CCSF has 85,000 students, and Antioch had fewer than 400 when it lost accreditation. Those 400, though, were attractive and well-off young people living together, which made for the better story. Life in the college dorm and on the grassy quad is a rarity discussed as a norm.

The students enrolled in places like CCSF (or Houston Community College, or Miami Dade) are sometimes called non-traditional, but this label is itself a holdover from another era, when residential colleges for teenage learners were still the norm. After the massive expansion of higher education into job training, the promising 18-year-old who goes straight to a residential college is now the odd one out.

Of the twenty million or so students in the US, only about one in ten lives on a campus. The remaining eighteen million—the ones who don’t have the grades for Swarthmore, or tens of thousands of dollars in free cash flow, or four years free of adult responsibility—are relying on education after high school not as a voyage of self-discovery but as a way to acquire training and a certificate of hireability.

Though the landscape of higher education in the U.S., spread across forty-six hundred institutions, hosts considerable variation, a few commonalities emerge: the bulk of students today are in their mid-20s or older, enrolled at a community or commuter school, and working towards a degree they will take too long to complete. One in three won’t complete, ever. Of the rest, two in three will leave in debt. The median member of this new student majority is just keeping her head above water financially. The bottom quintile is drowning.

One obvious way to improve life for the new student majority is to raise the quality of the education without raising the price. This is clearly the ideal, whose principal obstacle is not conceptual but practical: no one knows how. The value of our core product—the Bachelor’s degree—has fallen in every year since 2000, while tuition continues to increase faster than inflation.

The other way to help these students would be to dramatically reduce the price or time required to get an education of acceptable quality (and for acceptable read “enabling the student to get a better job”, their commonest goal.) This is a worse option in every respect except one, which is that it may be possible.

* * *

Running parallel to the obsession with elite institutions and students is the hollowing out of the academic job market. When the economic support from the Golden Age began to crack, we tenured faculty couldn’t be forced to share much of the pain. Our jobs were secure, so rather than forgo raises or return to our old teaching loads, we either allowed or encouraged those short-term fixes—rising tuition, larger student bodies, huge introductory lectures.

All that was minor, though, compared to our willingness to rely on contingent hires, including our own graduate students, ideal cheap labor. The proportion of part-time and non-tenure track teachers went from less than half of total faculty, before 1975, to over two-thirds now. In the same period, the proportion of jobs that might someday lead to tenure collapsed, from one in five to one in ten. The result is the bifurcation we have today: People who have tenure can’t lose it. People who don’t mostly can’t get it. The faculty has stopped being a guild, divided into junior and senior members, and become a caste system, divided into haves and have-nots.

Caste systems are notoriously hard to change. Though tenured professors often imagine we could somehow pay our non-tenured colleagues more, charge our students less, and keep our own salaries and benefits the same, the economics of our institutions remain as they have always been: our major expense is compensation (much of it for healthcare and pensions) distributed unequally between tenured and contingent faculty, and our major income is tuition.

I recently saw this pattern in my home institution. Last fall, NYU’s chapter of the American Association of University Professors proposed reducing senior administrative salaries by 25%, alongside a ‘steady conversion’ of non-tenure-track jobs to tenure-track ones ‘at every NYU location’. The former move would save us about $5 million a year. The latter would cost us $250 million.

Now NYU is relatively well off, but we do not have a spare quarter of a billion dollars per annum, not even for a good cause, not even if we sold the mineral rights under Greenwich Village. As at most institutions, even savage cuts in administrative compensation would not allow for hiring contingent faculty full time while also preserving tenured faculty’s benefits. (After these two proposals, the AAUP also advocated reducing ‘the student debt burden by expanding needs‐based financial aid’. No new sources of revenue were suggested.)

* * *

Many of my colleagues believe that if we just explain our plight clearly enough, legislators will come to their senses and give us enough money to save us from painful restructuring. I’ve never seen anyone explain why this argument will be persuasive, and we are nearing the 40th year in which similar pleas have failed, but “Someday the government will give us lots of money” remains in circulation, largely because contemplating our future without that faith is so bleak. If we can’t keep raising costs for students (we can’t) and if no one is coming to save us (they aren’t), then the only remaining way to help these students is to make a cheaper version of higher education for the new student majority.

The number of high-school graduates underserved or unserved by higher education today dwarfs the number of people for whom that system works well. The reason to bet on the spread of large-scale low-cost education isn’t the increased supply of new technologies. It’s the massive demand for education, which our existing institutions are increasingly unable to handle. That demand will go somewhere.

Those of us in the traditional academy could have a hand in shaping that future, but doing so will require us to relax our obsessive focus on elite students, institutions, and faculty. It will require us to stop regarding ourselves as irreplaceable occupiers of sacred roles, and start regarding ourselves as people who do several jobs society needs done, only one of which is creating new knowledge.

It will also require us to abandon any hope of restoring the Golden Age. It was a nice time, but it wasn’t stable, and it didn’t last, and it’s not coming back. It’s been gone ten years more than it lasted, in fact, and in the time since it ended, we’ve done more damage to our institutions, and our students, and our junior colleagues, by trying to preserve it than we would have by trying to adapt. Arguing that we need to keep the current system going just long enough to get the subsidy the world owes us is really just a way of preserving an arrangement that works well for elites—tenured professors, rich students, endowed institutions—but increasingly badly for everyone else.

Healthcare.gov and the Gulf Between Planning and Reality

November 19, 2013

Back in the mid-1990s, I did a lot of web work for traditional media. That often meant figuring out what the client was already doing on the web, and how it was going, so I’d find the techies in the company, and ask them what they were doing, and how it was going. Then I’d tell management what I’d learned. This always struck me as a waste of my time and their money; I was like an overpaid bike messenger, moving information from one part of the firm to another. I didn’t understand the job I was doing until one meeting at a magazine company.

The thing that made this meeting unusual was that one of their programmers had been invited to attend, so management could outline their web strategy to him. After the executives thanked me for explaining what I’d learned from log files given me by their own employees just days before, the programmer leaned forward and said “You know, we have all that information downstairs, but nobody’s ever asked us for it.”

I remember thinking “Oh, finally!” I figured the executives would be relieved this information was in-house, delighted that their own people were on it, maybe even mad at me for charging an exorbitant markup on local knowledge. Then I saw the look on their faces as they considered the programmer’s offer. The look wasn’t delight, or even relief, but contempt. The situation suddenly came clear: I was getting paid to save management from the distasteful act of listening to their own employees.

In the early days of print, you had to understand the tech to run the organization. (Ben Franklin, the man who made America a media hothouse, called himself Printer.) But in the 19th century, the printing press became domesticated. Printers were no longer senior figures — they became blue-collar workers. And the executive suite no longer interacted with them much, except during contract negotiations.

This might have been nothing more than a previously hard job becoming easier, Hallelujah. But most print companies took it further. Talking to the people who understood the technology became demeaning, something to be avoided. Information was to move from management to workers, not vice-versa (a pattern that later came to other kinds of media businesses as well.) By the time the web came around and understanding the technology mattered again, many media executives hadn’t just lost the habit of talking with their own technically adept employees, they’d actively suppressed it.

I’d long forgotten about that meeting and those looks of contempt (I stopped building websites before most people started) until the launch of Healthcare.gov.

* * *

For the first couple of weeks after the launch, I assumed any difficulties in the Federal insurance market were caused by unexpected early interest, and that once the initial crush ebbed, all would be well. The sinking feeling that all would not be well started with this disillusioning paragraph about what had happened when a staff member at the Centers for Medicare & Medicaid Services, the department responsible for Healthcare.gov, warned about difficulties with the site back in March. In response, his superiors told him…

[...] in effect, that failure was not an option, according to people who have spoken with him. Nor was rolling out the system in stages or on a smaller scale, as companies like Google typically do so that problems can more easily and quietly be fixed. Former government officials say the White House, which was calling the shots, feared that any backtracking would further embolden Republican critics who were trying to repeal the health care law.

The idea that “failure is not an option” is a fantasy version of how non-engineers should motivate engineers. That sentiment was invented by a screenwriter, riffing on an after-the-fact observation about Apollo 13; no one said it at the time. (If you ever say it, wash your mouth out with soap. If anyone ever says it to you, run.) Even NASA’s vaunted moonshot, so often referred to as the best of government innovation, was preceded by dozens of unmanned test missions, several of which failed outright.

Failure is always an option. Engineers work as hard as they do because they understand the risk of failure. And for anything it might have meant in its screenplay version, here that sentiment means the opposite; the unnamed executives were saying “Addressing the possibility of failure is not an option.”

* * *

The management question, when trying anything new, is “When does reality trump planning?” For the officials overseeing Healthcare.gov, the preferred answer was “Never.” Every time there was a chance to create some sort of public experimentation, or even just some clarity about its methods and goals, the imperative was to avoid giving the opposition anything to criticize.

At the time, this probably seemed like a way of avoiding early failures. But the project’s managers weren’t avoiding those failures. They were saving them up. The actual site is worse—far worse—for not having early and aggressive testing. Even accepting the crassest possible political rationale for denying opponents a target, avoiding all public review before launch has given those opponents more to complain about than any amount of ongoing trial and error would have.

In his most recent press conference about the problems with the site, the President ruefully compared his campaigns’ use of technology with Healthcare.gov:

And I think it’s fair to say that we have a pretty good track record of working with folks on technology and IT from our campaign, where, both in 2008 and 2012, we did a pretty darn good job on that. [...] If you’re doing it at the federal government level, you know, you’re going through, you know, 40 pages of specs and this and that and the other and there’s all kinds of law involved. And it makes it more difficult — it’s part of the reason why chronically federal IT programs are over budget, behind schedule.

It’s certainly true that Federal IT is chronically challenged by its own processes. But the biggest problem with Healthcare.gov was not timeline or budget. The biggest problem was that the site did not work, and the administration decided to launch it anyway.

This is not just a hiring problem, or a procurement problem. This is a management problem, and a cultural problem. The preferred method for implementing large technology projects in Washington is to write the plans up front, break them into increasingly detailed specifications, then build what the specifications call for. It’s often called the waterfall method, because on a timeline the project cascades from planning, at the top left of the chart, down to implementation, on the bottom right.

Like all organizational models, waterfall is mainly a theory of collaboration. By putting the most serious planning at the beginning, with subsequent work derived from the plan, the waterfall method amounts to a pledge by all parties not to learn anything while doing the actual work. Instead, waterfall insists that the participants will understand best how things should work before accumulating any real-world experience, and that planners will always know more than workers.

This is a perfect fit for a culture that communicates in the deontic language of legislation. It is also a dreadful way to make new technology. If there is no room for learning by doing, early mistakes will resist correction. If the people with real technical knowledge can’t deliver bad news up the chain, potential failures get embedded rather than uprooted as the work goes on.

At the same press conference, the President also noted the degree to which he had been kept in the dark:

OK. On the website, I was not informed directly that the website would not be working the way it was supposed to. Had I been informed, I wouldn’t be going out saying “Boy, this is going to be great.” You know, I’m accused of a lot of things, but I don’t think I’m stupid enough to go around saying, this is going to be like shopping on Amazon or Travelocity, a week before the website opens, if I thought that it wasn’t going to work.

Healthcare.gov is a half-billion dollar site that was unable to complete even a thousand enrollments a day at launch, and for weeks afterwards. As we now know, programmers, stakeholders, and testers all expressed reservations about Healthcare.gov’s ability to do what it was supposed to do. Yet no one who understood the problems was able to tell the President. Worse, every senior political figure—every one—who could have bridged the gap between knowledgeable employees and the President decided not to.

And so it was that, even on launch day, the President was allowed to make things worse for himself and his signature program by bragging about the already-failing site and inviting people to log in and use something that mostly wouldn’t work. Whatever happens to government procurement or hiring (and we should all hope those things get better) a culture that prefers deluding the boss over delivering bad news isn’t well equipped to try new things.

* * *

With a site this complex, things were never going to work perfectly the first day, whatever management thought they were procuring. Yet none of the engineers with a grasp of this particular reality could successfully convince the political appointees to adopt the obvious response: “Since the site won’t work for everyone anyway, let’s decide what tests to run on the initial uses we can support, and use what we learn to improve.”

In this context, testing does not just mean “Checking to see what works and what doesn’t.” Even the Healthcare.gov team did some testing; it was late and desultory, but at least it was there. (The testers recommended delaying launch until the problems were fixed. This did not happen.) Testing means seeing what works and what doesn’t, and acting on that knowledge, even if that means contradicting management’s deeply held assumptions or goals. In well-run organizations, information runs from the top down and from the bottom up.

One of the great descriptions of what real testing looks like comes from Valve software, in a piece detailing the making of its game Half-Life. After designing a game that was only sort of good, the team at Valve revamped its process, including constant testing:

This [testing] was also a sure way to settle any design arguments. It became obvious that any personal opinion you had given really didn’t mean anything, at least not until the next test. Just because you were sure something was going to be fun didn’t make it so; the testers could still show up and demonstrate just how wrong you really were.

“Any personal opinion you had given really didn’t mean anything.” So it is in the government; any insistence that something must work is worthless if it actually doesn’t.

An effective test is an exercise in humility; it’s only useful in a culture where desirability is not confused with likelihood. For a test to change things, everyone has to understand that their opinion, and their boss’s opinion, matters less than what actually works and what doesn’t. (An organization that isn’t learning from its users has decided it doesn’t want to learn from its users.)

Given comparisons with technological success from private organizations, a common response is that the government has special constraints, and thus cannot develop projects piecemeal, test with citizens, or learn from its mistakes in public. I was up at the Kennedy School a month after the launch, talking about technical leadership and Healthcare.gov, when one of the audience members made just this point, proposing that the difficult launch was unavoidable, because the government simply couldn’t have tested bits of the project over time.

That observation illustrates the gulf between planning and reality in political circles. It is hard for policy people to imagine that Healthcare.gov could have had a phased rollout, even while it is having one.

At launch, on October 1, only a tiny fraction of potential users could actually try the service. They generated concrete errors. Those errors were handed to a team whose job was to improve the site, already public but only partially working. The resulting improvements are incremental, and put in place over a period of months. That is a phased rollout, just one conducted in the worst possible way.
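Elsewhere, a deliberate phased rollout is often nothing more exotic than a percentage gate in front of the new system. Here is a minimal sketch of such a gate; the names are hypothetical, and this is emphatically not how Healthcare.gov was built:

```python
import hashlib

def in_rollout(user_id: str, percent: int) -> bool:
    """Deterministically place a stable fraction of users in the new system.

    Hashing the user ID (rather than rolling dice per request) means the
    same user always sees the same version across visits.
    """
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < percent

# Launch to 1% of users, collect the errors they hit, fix, then ramp:
# 1% -> 5% -> 25% -> 100%, with the old system handling everyone else.
```

The point of the hash is humility in code form: you assume the new system will break, so you limit who it breaks for while you learn.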

The vision of “technology” as something you can buy according to a plan, then have delivered as if it were coming off a truck, flatters and relieves managers who have no idea of, and no interest in, how this stuff works, but it’s also a breeding ground for disaster. The mismatch between technical competence and executive authority is at least as bad in government now as it was in media companies in the 1990s, but with much more at stake.

* * *

Tom Steinberg, in his remembrance of his brilliant colleague Chris Lightfoot, said this about Lightfoot’s view of government and technology:

[W]hat he fundamentally had right was the understanding that you could no longer run a country properly if the elites don’t understand technology in the same way they grasp economics or ideology or propaganda. His analysis and predictions about what would happen if elites couldn’t learn were savage and depressingly accurate.

Now, and from now on, government will interact with its citizens via the internet, in increasingly important ways. This is a non-partisan issue; whichever party is in the White House will build and launch new forms of public service online. Unfortunately for us, our senior political figures have little habit of talking to their own technically adept employees.

If I had to design a litmus test for whether our political class grasps the internet, I would look for just one signal: Can anyone with authority over a new project articulate the tradeoff between features, quality, and time?

When a project cannot meet all three goals—a situation Healthcare.gov was clearly in by March—something will give. If you want certain features at a certain level of quality, you’d better be able to move the deadline. If you want overall quality by a certain deadline, you’d better be able to simplify, delay, or drop features. And if you have a fixed feature list and deadline, quality will suffer.

Intoning “Failure is not an option” will be at best useless, and at worst harmful. There is no “Suddenly Go Faster” button, no way you can throw in money or additional developers as a late-stage accelerant; money is not directly tradable for either quality or speed, and adding more programmers to a late project makes it later. You can slip deadlines, reduce features, or, as a last resort, just launch and see what breaks.
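The “adding more programmers to a late project makes it later” claim is Brooks’s Law, and part of its arithmetic is easy to make concrete: pairwise communication paths grow quadratically with team size, so every late-stage hire adds coordination cost to everyone already working. A toy illustration (the function name is mine):

```python
def communication_paths(n: int) -> int:
    """Pairwise communication channels among n people: n * (n - 1) / 2."""
    return n * (n - 1) // 2

# Doubling a late project's team from 10 to 20 people more than
# quadruples the coordination overhead, before any new hire has
# even ramped up on the codebase.
print(communication_paths(10))  # 45
print(communication_paths(20))  # 190
```

That quadratic growth is why money is not directly tradable for speed: the new spending buys overhead along with labor.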

Denying this tradeoff doesn’t prevent it from happening. If no one with authority over the project understands that, the tradeoff is likely to mean sacrificing quality by default. That just happened to this administration’s signature policy goal. It will happen again, as long as politicians are allowed to imagine that if you just plan hard enough, you can ignore reality. It will happen again, as long as department heads imagine that complex technology can be procured like pencils. It will happen again, as long as management regards listening to the people who understand the technology as a distasteful act.

Remembering Aaron by taking care of each other

January 23, 2013

My friend Will Morrell, brilliant and sardonic, was the first person I ever knew to make his living close to the machine. A few years after we got out of college, he got a job in New York designing DSP chips for pinball machines, and crashed with me for a couple of months. During his stay, he convinced me I could dump my theater career in favor of finding a way to make my living on the internet. That turned out to be one of the most important conversations of my life, but I’ll never be able to thank him properly. He killed himself a few years ago.

I teach at NYU, where a quartet of students recently decided the world needed a privacy-respecting alternative to Facebook. The result, Diaspora, was the longest of long shots, a project that shouldn’t have had a chance in hell of working, but it’s turned into an interesting experiment, largely because of Ilya Zhitomirskiy, whose Wikipedia page calls him “the most idealistic and privacy-conscious member of the group.” Ilya killed himself a little over a year ago.

Then there’s Aaron Swartz.

Aaron’s suicide has stirred the kind of political anger he cared about — as Taren Stinebrickner-Kauffman said in her heart-wrenching and beautiful memorial, Aaron would have loved to be here — and those of us who care about the things Aaron cared about have to work harder to support open culture and the free flow of information, now that he’s not with us.

But there’s something else we need to do. We need to take care of the people in our community who are depressed.

Suicide is not hard to understand, not intellectually anyway. It is, as Jeff Atwood says, the ultimate in ragequitting. But for most of us, it is hard to understand emotionally.

For a variety of reasons, I’ve spent a lot of time with people at risk of suicide, and so have become an amateur scholar of that choice. When I first started reading about it, I thought of it as the last stop on a road of stress and upset — when things get bad, people suffer, and when they get really bad, they take their own lives.

And what I learned was that this view is wrong. Suicide is no more a heightened reaction to the slings and arrows of outrageous fortune than depression is just being extra sad. Most of us won’t kill ourselves, no matter how bad things get. The common thread among people who commit suicide is that they are suicidal.

It’s tempting to narrow our focus to the proximate causes. Ilya killed himself because of the stresses of running a startup, Aaron because of out-of-control prosecutors. And these are proximate causes — without Stephen Heymann and Carmen Ortiz gunning for Aaron, he wouldn’t have hanged himself two weeks ago. He had people near and far who loved him, but given what was happening to him, that wasn’t enough.

But suicide is not only about proximate causes. Bernie Madoff destroyed his friends and his family, turned his own name into a curse in every community of which he was a member, and there he sits, in the jail cell where he will almost certainly die, writing missives to the outside world about the state of the financial system. Madoff hasn’t killed himself because he isn’t the kind of person who kills himself.

The reasons someone commits suicide at a particular moment aren’t all the reasons they commit suicide. Often those aren’t even the most important reasons. No one likes this part of the explanation. It makes an event that’s already as awful as it can be more awful, because it renders it inexplicable. Most of us, even with our occasional desires for the ground to swallow us up, can sympathize but never really empathize.

Among the /b/tards of 4chan, there is a culture of celebrating people who ‘an hero’ (their preferred synonym for suicide), but there’s also a message that frequently gets copied and pasted in those threads, whose core paragraph is:

so instead of killing yourself, why don’t you just get the fuck out? leave the basement, leave your house, leave the mother fucking country. go on an adventure. spend your time doing something awesome, like tracking down some terrorists. go be james bond. go fuck up a shark with a harpoon. danger? fuck that, you were going up against 100% death rate before, you’re being safe now? fuck EVERYTHING man the world is your oyster.

This message is both energetic and clueless, like most of /b/, an adolescent version of “Freedom’s just another word for nothing left to lose”, where not caring is a prelude to excellent adventures.

But not caring doesn’t mean giving up on the things holding you back. Not caring — real despair — means giving up. Period.

The warning signs are well known. Persistent withdrawal. Mood swings. Previous attempts or family history. Talking about it. Self-erasure. The American Association of Suicidology has a good overview. There’s no perfect checklist, but we are better at knowing the signs in general than we are at acting on them in specific cases. Ask yourself “Whose suicide would sadden but not surprise me?”

The useful responses are well-known too. Reach out. Ask. Listen. Take casual mentions of suicide seriously. Be persistent about checking on someone. Don’t try to cure or fix anyone; that’s out of your league. Just tell them you care, and point them to professional resources. Wikipedia has a list of English-language suicide prevention hotlines. Help Guide has a good overview of what we know about prevention generally, and how to help the potentially suicidal.

We need to remember Aaron by supporting free culture, and by limiting prosecutorial abuse. But we also need to remember Aaron by taking care of each other. Our community is unusually welcoming of people disproportionately at risk, but we are also unusually capable of working together without always building close social ties. GitHub is great for distributing participation, but it is lousy for seeing how everyone is doing.

We need to remember Aaron by thinking of those among us at risk of dying as he did. Most of them won’t be martyrs — most of them will be people like Ilya and Will — but their deaths will be just as awful. And, as with every cause Aaron stood for, we know how to take on this problem. What we need is the will to act.

Napster, Udacity, and the Academy

November 12, 2012

Fifteen years ago, a research group called The Fraunhofer Institute announced a new digital format for compressing movie files. This wasn’t a terribly momentous invention, but it did have one interesting side effect: Fraunhofer also had to figure out how to compress the soundtrack. The result was the Moving Picture Experts Group Format 1, Audio Layer III, a format you know and love, though only by its acronym, MP3.

The recording industry concluded this new audio format would be no threat, because quality mattered most. Who would listen to an MP3 when they could buy a better-sounding CD at the record store? Then Napster launched, and quickly became the fastest-growing piece of software in history. The industry sued Napster and won, and Napster collapsed even more suddenly than it had arisen.

If Napster had only been about free access, control of legal distribution of music would then have returned to the record labels. That’s not what happened. Instead, Pandora happened. Last.fm happened. Spotify happened. iTunes happened. Amazon began selling songs in the hated MP3 format.

How did the recording industry win the battle but lose the war? How did they achieve such a decisive victory over Napster, then fail to regain control of even legal distribution channels? They crushed Napster’s organization. They poisoned Napster’s brand. They outlawed Napster’s tools. The one thing they couldn’t kill was the story Napster told.

The story the recording industry used to tell us went something like this: “Hey kids, Alanis Morissette just recorded three kickin’ songs! You can have them, so long as you pay for the ten mediocrities she recorded at the same time.” Napster told us a different story. Napster said “You want just the three songs? Fine. Just ‘You Oughta Know’? No problem. Every cover of ‘Blue Suede Shoes’ ever made? Help yourself. You’re in charge.”

The people in the music industry weren’t stupid, of course. They had access to the same internet the rest of us did. They just couldn’t imagine—and I mean this in the most ordinarily descriptive way possible—could not imagine that the old way of doing things might fail. Yet things did fail, in large part because, after Napster, the industry’s insistence that digital distribution be as expensive and inconvenient as a trip to the record store suddenly struck millions of people as a completely terrible idea.

Once you see this pattern—a new story rearranging people’s sense of the possible, with the incumbents the last to know—you see it everywhere. First, the people running the old system don’t notice the change. When they do, they assume it’s minor. Then that it’s a niche. Then a fad. And by the time they understand that the world has actually changed, they’ve squandered most of the time they had to adapt.

It’s been interesting watching this unfold in music, books, newspapers, TV, but nothing has ever been as interesting to me as watching it happen in my own backyard. Higher education is now being disrupted; our MP3 is the massive open online course (or MOOC), and our Napster is Udacity, the education startup.

We have several advantages over the recording industry, of course. We are decentralized and mostly non-profit. We employ lots of smart people. We have previous examples to learn from, and our core competence is learning from the past. And armed with these advantages, we’re probably going to screw this up as badly as the music people did.

* * *

A massive open online class is usually a series of video lectures with associated written materials and self-scoring tests, open to anyone. That’s what makes them OOCs. The M part, though, comes from the world. As we learned from Wikipedia, demand for knowledge is so enormous that good, free online materials can attract extraordinary numbers of people from all over the world.

Last year, Introduction to Artificial Intelligence, an online course from Stanford taught by Peter Norvig and Sebastian Thrun, attracted 160,000 potential students, of whom 23,000 completed it, a scale that dwarfs anything possible on a physical campus. As Thrun put it, “Peter and I taught more students AI, than all AI professors in the world combined.” Seeing this, he quit and founded Udacity, an educational institution designed to offer MOOCs.

The size of Thrun and Norvig’s course, and the attention attracted by Udacity (and similar organizations like Coursera, P2PU, and University of the People), have many academics worrying about the effect on higher education. The loudest such worrying so far has been The Trouble With Online Education, a New York Times OpEd by Mark Edmundson of the University of Virginia. As most critics do, Edmundson focused on the issue of quality, asking and answering his own question: “[C]an online education ever be education of the very best sort?”

Now you and I know what he means by “the very best sort”—the intimate college seminar, preferably conducted by tenured faculty. He’s telling the story of the liberal arts education in a selective residential college and asking “Why would anyone take an online class when they can buy a better education at UVA?”

But who faces that choice? Are we to imagine an 18-year-old who can set aside $250K and 4 years, but who would have a hard time choosing between a residential college and a series of MOOCs? Elite high school students will not be abandoning elite colleges any time soon; the issue isn’t what education of “the very best sort” looks like, but what the whole system looks like.

Edmundson isn’t crazy enough to argue that all college experiences are good, so he hedges. He tells us “Every memorable class is a bit like a jazz composition”, without providing an analogy for the non-memorable ones. He assures us that “large lectures can also create genuine intellectual community”, which of course means they can also not do that. (He doesn’t say how many large lectures fail his test.) He says “real courses create intellectual joy,” a statement that can be accurate only as a tautology. (The MOOC Criticism Drinking Game: take a swig whenever someone says “real”, “true”, or “genuine” to hide the fact that they are only talking about elite schools instead of the median college experience.)

I was fortunate enough to get the kind of undergraduate education Edmundson praises: four years at Yale, in an incredible intellectual community, where even big lecture classes were taught by seriously brilliant people. Decades later, I can still remember my art history professor’s description of the Arnolfini Wedding, and the survey of modern poetry didn’t just expose me to Ezra Pound and HD, it changed how I thought about the 20th century.

But you know what? Those classes weren’t like jazz compositions. They didn’t create genuine intellectual community. They didn’t even create ersatz intellectual community. They were just great lectures: we showed up, we listened, we took notes, and we left, ready to discuss what we’d heard in smaller sections.

And did the professors teach our sections too? No, of course not; those were taught by graduate students. Heaven knows what they were being paid to teach us, but it wasn’t a big fraction of a professor’s salary. The large lecture isn’t a tool for producing intellectual joy; it’s a tool for reducing the expense of introductory classes.

* * *

Higher education has a bad case of cost disease (sometimes called Baumol’s cost disease, after one of its theorizers). The classic example is the string quartet; performing a 15-minute quartet took a cumulative hour of musician time in 1850, and takes that same hour today. This is not true of the production of food, or clothing, or transportation, all of which have seen massive increases in value created per hour of labor. Unfortunately, the obvious ways to make production more efficient—fewer musicians playing faster—wouldn’t work as well for the production of music as for the production of cars.
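The arithmetic behind cost disease is worth making explicit: if wages everywhere track economy-wide productivity growth, but the quartet still needs the same hour of labor, the relative price of that hour compounds upward. A toy model of this, where the 2% growth rate is my illustrative assumption, not a figure from the text:

```python
def relative_cost(years: int, annual_productivity_growth: float = 0.02) -> float:
    """Price of a fixed hour of labor, relative to goods whose production
    becomes `annual_productivity_growth` more efficient each year.

    Wages rise with economy-wide productivity, but the performance still
    requires the same unchanging hour of musician time.
    """
    return (1 + annual_productivity_growth) ** years

# Over the ~160 years since 1850, at 2% annual growth elsewhere, that
# same hour of performance costs roughly 24x as much relative to
# manufactured goods.
```

Nothing about the music got worse; everything else just got cheaper around it. That is the squeeze colleges, which also sell skilled labor by the hour, are caught in.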

An organization with cost disease can use lower paid workers, increase the number of consumers per worker, subsidize production, or increase price. For live music, this means hiring less-talented musicians, selling more tickets per performance, writing grant applications, or, of course, raising ticket prices. For colleges, this means more graduate and adjunct instructors, increased enrollments and class size, fundraising, or, of course, raising tuition.

The great work on college and cost-disease is Robert Archibald and David Feldman’s Why Does College Cost So Much? Archibald and Feldman conclude that institution-specific explanations—spoiled students expecting a climbing wall; management self-aggrandizement at the expense of educational mission—hold up less well than the generic observation: colleges need a lot of highly skilled people, people whose wages, benefits, and support costs have risen faster than inflation for the last thirty years.

Cheap graduate students let a college lower the cost of teaching the sections while continuing to produce lectures as an artisanal product, from scratch, on site, real time. The minute you try to explain exactly why we do it this way, though, the setup starts to seem a little bizarre. What would it be like to teach at a university where you could only assign books you yourself had written? Where you could only ask your students to read journal articles written by your fellow faculty members? Ridiculous. Unimaginable.

Every college provides access to a huge collection of potential readings, and to a tiny collection of potential lectures. We ask students to read the best works we can find, whoever produced them and where, but we only ask them to listen to the best lecture a local employee can produce that morning. Sometimes you’re at a place where the best lecture your professor can give is the best in the world. But mostly not. And the only thing that kept this system from seeming strange was that we’d never had a good way of publishing lectures.

This is the huge difference between music and education. Starting with Edison’s wax cylinders, and continuing through to Pandora and the iPod, the biggest change in musical consumption has come not from production but playback. Hearing an excellent string quartet play live in an intimate venue has indeed become a very expensive proposition, as cost disease would suggest, but at the same time, the vast majority of music listened to on any given day is no longer recreated live.

* * *

Harvard, where I was fortunate enough to have a visiting lectureship a couple of years ago, is our agreed-upon Best Institution, and it is indeed an extraordinary place. But this very transcendence should make us suspicious. Harvard’s endowment, 31 billion dollars, is over three hundred times the median, and only one college in five has an endowment in the first place. Harvard also educates only about a tenth of a percent of the 18 million or so students enrolled in higher education in any given year. Any sentence that begins “Let’s take Harvard as an example…” should immediately be followed up with “No, let’s not do that.”

This atypical bent of our elite institutions covers more than just Harvard. The top 50 colleges on the US News and World Report list (which includes most of the ones you’ve heard of) only educate something like 3% of the current student population. The entire list, about 250 colleges, educates fewer than 25%.

The upper reaches of the US college system work like a potlatch, those festivals of ostentatious giving. The very things the US News list of top colleges prizes—low average class size, a high ratio of staff to students—mean that any institution that tries to create a cost-effective education will move down the list. This is why most of the early work on MOOCs is coming out of Stanford and Harvard and MIT. As Ian Bogost says, MOOCs are marketing for elite schools.

Outside the elite institutions, though, the other 75% of students—over 13 million of them—are enrolled in the four thousand institutions you haven’t heard of: Abraham Baldwin Agricultural College. Bridgerland Applied Technology College. The Laboratory Institute of Merchandising. When we talk about college education in the US, these institutions are usually left out of the conversation, but Clayton State educates as many undergraduates as Harvard. Saint Leo educates twice as many. City College of San Francisco enrolls as many as the entire Ivy League combined. These are where most students are, and their experience is what college education is mostly like.

* * *

The fight over MOOCs isn’t about the value of college; a good chunk of the four thousand institutions you haven’t heard of provide an expensive but mediocre education. For-profit schools like Kaplan and the University of Phoenix enroll around one student in eight, but account for nearly half of all loan defaults, and the vast majority of their enrollees fail to get a degree even after six years. Reading the academic press, you wouldn’t think that these statistics represented a more serious defection from our mission than helping people learn something about Artificial Intelligence for free.

The fight over MOOCs isn’t even about the value of online education. Hundreds of institutions already offer online classes for credit, and half a million students are already enrolled in them. If critics of online education were consistent, they would believe that the University of Virginia’s Bachelor of Interdisciplinary Studies or Rutgers’ MLIS degree are abominations, or else they would have to believe that there is a credit-worthy way to do online education, one MOOCs could emulate. Neither argument is much in evidence.

That’s because the fight over MOOCs is really about the story we tell ourselves about higher education: what it is, who it’s for, how it’s delivered, who delivers it. The most widely told story about college focuses obsessively on elite schools and answers a crazy mix of questions: How will we teach complex thinking and skills? How will we turn adolescents into well-rounded members of the middle class? Who will certify that education is taking place? How will we instill reverence for Virgil? Who will subsidize the professor’s work?

MOOCs simply ignore a lot of those questions. The possibility MOOCs hold out isn’t replacement; anything that could replace the traditional college experience would have to work like one, and the institutions best at working like a college are already colleges. The possibility MOOCs hold out is that the educational parts of education can be unbundled. MOOCs expand the audience for education to people ill-served or completely shut out from the current system, in the same way phonographs expanded the audience for symphonies to people who couldn’t get to a concert hall, and PCs expanded the users of computing power to people who didn’t work in big companies.

Those earlier systems started out markedly inferior to the high-cost alternative: records were scratchy, PCs were crashy. But first they got better, then they got better than that, and finally, they got so good, for so cheap, that they changed people’s sense of what was possible.

In the US, an undergraduate education used to be an option, one way to get into the middle class. Now it’s a hostage situation, required to avoid falling out of it. And if some of the hostages having trouble coming up with the ransom conclude that our current system is a completely terrible idea, then learning will come unbundled from the pursuit of a degree just as songs came unbundled from CDs.

If this happens, Harvard will be fine. Yale will be fine, and Stanford, and Swarthmore, and Duke. But Bridgerland Applied Technology College? Maybe not fine. University of Arkansas at Little Rock? Maybe not fine. And Kaplan College, a more reliable producer of debt than education? Definitely not fine.

* * *

Udacity and its peers don’t even pretend to tell the story of an 18-year-old earning a Bachelor’s degree in four years from a selective college, a story that only applies to a small minority of students in the US, much less the world. Meanwhile, they try to answer some new questions, questions that the traditional academy—me and my people—often don’t even recognize as legitimate, like “How do we spin up 10,000 competent programmers a year, all over the world, at a cost too cheap to meter?”

Udacity may or may not survive, but as with Napster, there’s no containing the story it tells: “It’s possible to educate a thousand people at a time, in a single class, all around the world, for free.” To a traditional academic, this sounds like crazy talk. Earlier this fall, a math instructor writing under the pen name Delta enrolled in Thrun’s Statistics 101 class, and, after experiencing it first-hand, concluded that the course was

…amazingly, shockingly awful. It is poorly structured; it evidences an almost complete lack of planning for the lectures; it routinely fails to properly define or use standard terms or notation; it necessitates occasional massive gaps where “magic” happens; and it results in nonstandard computations that would not be accepted in normal statistical work.

Delta posted ten specific criticisms of the content (Normal Curve Calculations), teaching methods (Quiz Regime) and the MOOC itself (Lack of Updates). About this last one, Delta said:

So in theory, any of the problems that I’ve noted above could be revisited and fixed on future pass-throughs of the course. But will that happen at Udacity, or any other massive online academic program?

The very next day, Thrun answered that question. Conceding that Delta “points out a number of shortcomings that warrant improvements”, Thrun detailed how they were going to update the class. Delta, to his credit, then noted that Thrun had answered several of his criticisms, and went on to tell a depressing story of a fellow instructor at his own institution who had failed to define the mathematical terms he was using despite student requests.

Tellingly, when Delta was criticizing his peer, he didn’t name the professor, the course, or even his institution. He could observe every aspect of Udacity’s Statistics 101 (as can you) and discuss them in public, but when criticizing his own institution, he pulled his punches.

Open systems are open. For people used to dealing with institutions that go out of their way to hide their flaws, this makes these systems look terrible at first. But anyone who has watched a piece of open source software improve, or remembers the Britannica people throwing tantrums about Wikipedia, has seen how blistering public criticism makes open systems better. And once you imagine educating a thousand people in a single class, it becomes clear that open courses, even in their nascent state, will be able to raise quality and improve certification faster than traditional institutions can lower cost or increase enrollment.

College mottos run the gamut from Bryn Mawr’s Veritatem Dilexi (I Delight In The Truth) to the Laboratory Institute of Merchandising’s Where Business Meets Fashion, but there’s a new one that now hangs over many of them: Quae Non Possunt Non Manent. Things That Can’t Last Don’t. The cost of attending college is rising above inflation every year, while the premium for doing so shrinks. This obviously can’t last, but no one on the inside has any clear idea about how to change the way our institutions work while leaving our benefits and privileges intact.

In the academy, we lecture other people every day about learning from history. Now it’s our turn, and the risk is that we’ll be the last to know that the world has changed, because we can’t imagine—really cannot imagine—that the story we tell ourselves about ourselves could start to fail. Even when it’s true. Especially when it’s true.

Save Homicide Watch

September 4, 2012

Homicide Watch, one of the most important experiments in improving journalism in the era of the internet, will die in a week, unless we save them. They need our help. Please donate $50 on Kickstarter to help them keep working. If you can’t do $50, do $25, or $5. (For the record, I’m in for $500.)

Please donate. Everything helps.

If you stop reading here and just give them a little money and a little public love, you’ll be making the country a better place. If you want more, read on.

Homicide Watch is a two-year old journalism startup that reports on every murder in Washington, DC. Every one. It is the only institution, in one of the most murderous cities in the country, that does. The Washington Post doesn’t, City Paper doesn’t, news radio doesn’t, local TV doesn’t. Just Homicide Watch.

Homicide Watch matters because they are more than just thorough, they’re innovative. They’ve designed the site like a set of feeds and a wiki rather than like the crime section of a newspaper. The home page shows the most recent updates on all pending cases. Each victim gets their own page, where those updates are aggregated. Every murder is mapped. Every page has the tip line for the detective assigned to the case. Every page hosts a place for remembrance of the victim.

This way of working isn’t just technologically innovative, it’s socially innovative, in a way journalism desperately needs. The home page of Homicide Watch shows photos of the most recent seven victims; as I write this, all seven are, as usual, African-American. Like a lot of white people, I knew, vaguely, that crime was worse in black neighborhoods than in white ones, but actually seeing the faces, too often of kids not much older than my own, makes it clear how disproportionately this crime is visited on African-Americans.

This is one of their most remarkable innovations: murder coverage has always been racially biased in this country. The old saying for New York papers was not to bother covering murders north of 96th street, where the victims were almost certainly black. The casual exclusion of most citizens from most DC crime coverage is a continuation of that legacy; news organizations aren’t generally in the business of introducing their readers to the realities of life elsewhere in their town. Simon Anderson, father of 5, was gunned down in northwest DC. Terrance Robinson was killed in southeast DC the day before. Antwan Boseman was shot to death two miles south and three hours earlier. And so on, and on, and on.

With a newspaper or a 30-minute broadcast, scarcity of space or time is enough of an excuse to keep ignoring crimes like these. Homicide Watch reverses that logic. Inclusion is the default; one victim equals one new page. Unlike at a traditional paper, racial bias would take extra work. Their motto, unique in metro crime reporting, is this: “Mark every death. Remember every victim. Follow every case.” It’s hard to describe how radical such a sensible idea is.

And the kicker on all this technological and social innovation? They do this with two employees, one of whom works part-time. Laura Amico is the full-time reporter, editor, and publisher; Chris Amico built and manages the platform. They demonstrate, daily and decisively, what crime coverage could and should look like in the 21st century.

It will all go away in a week if we don’t save it.

The threat that Homicide Watch will close comes from one failure and one success. The failure is simple: the Amicos assumed that if they could show news organizations how to do better work on important news with a smaller budget, those organizations would license the platform. The Amicos have done a couple of these deals, but many fewer than they’d hoped for, and not enough to keep the lights on. (Please join me in being astonished that legacy news organizations talk innovation but walk “Minimize change.”)

Even with this difficulty, they’ve been relentless about keeping the site open, but then came the success, the other thing that threatens the site: Harvard offered Laura Amico a Nieman Fellowship, richly deserved, in recognition of her work, and she can’t go to court every day in DC while she’s in Cambridge.

She needs to replace herself while she’s gone, and whatever virtues startups have, organizational slack and a deep bench of talent aren’t among them. Which is where we come in. They’ve structured the new hire as a student reporting lab, and now they need the money to make that work, to keep the site running.

American journalism is having two crises of institutionalization. The first, public and obvious, is the difficulty existing institutions have in adapting to the internet. (If the Washington Post walked their talk, they’d have acquired Homicide Watch outright by now.) But the second crisis, less widely understood, is the lack of institutional stability for startups. Even news organizations that are, by internet standards, august and ancient, like Talking Points Memo, still struggle, and early startups like Homicide Watch live moment to moment, however good and important their work.

Kickstarter assumes that the logical supporters for projects are the people who benefit most, but Homicide Watch’s natural audience — Antwan Boseman’s friends and Terrance Robinson’s neighbors and Simon Anderson’s children — are already suffering from a crime that we should all regard as a shared injustice. They shouldn’t have to pony up just so someone will take the murder of their loved ones seriously, just so someone will mark every death and remember every victim and follow every case.

We are the only people who can save Homicide Watch. If they can raise another $20K in the next week to cover the cost of one reporter for a year, the site goes on, DC keeps an irreplaceable service, and there’s more time to figure out how to get the model adopted in other cities. If they can’t cover the cost, it goes away.

And if it goes away…you’ll be fine. I’ll be fine. People like us, we’re always fine; if you’re reading this, you probably live in a place with low crime rates and good coverage. Homicide Watch isn’t for us, but the people it is for can’t support it, and without us, they won’t have it anymore.

I’ve spent the last year looking at journalism startups, and the one that has most impressed me is Homicide Watch — innovative high-quality work on a civically critical issue that increases coverage and reduces cost. Laura and Chris are amazing. I don’t work for them, but today I do — if you can help in any way, with a donation or by publicizing the Kickstarter campaign or both, I’d be grateful, and so would they, and so would the residents of DC.

Please donate. Even $5. They need you.

And if you give them a donation, tweet it out and put it on Facebook. This will only work if we get our friends to get their friends to help.

Warren Buffett’s Newspaper Purchase

May 29, 2012

Last week, Warren Buffett, the CEO of Berkshire Hathaway, purchased two dozen small newspapers and their related online properties from Media General, a conglomerate with holdings mainly concentrated in the southeastern United States. After finalizing the deal, Buffett issued a memo on his view of the acquisition. (The text of the memo is here.)

Buffett is famously the greatest investor alive, and almost as famous for plain-spoken observations about the market, so you’d assume his first public memo about Media General would offer insight into the current state of the newspaper business. The actual text, however, merely makes it clear that Buffett doesn’t understand that business.

He makes much of drops in print readership, but circulation has not been strongly correlated with revenue for two decades now. Print circulation began its decline during the Reagan administration, while newspaper profits increased through the middle of the last decade, reaching their highest point just before the current collapse.

He alludes to the relationship between readers and newspapers half a dozen times in a thousand-word memo; in that same space, he never once uses the words ‘advertising’ or ‘advertisers’. Reading the letter, you’d never know that papers make most of their money from companies, not citizens, and have done for the better part of two centuries. It is disruptive competition for ad dollars, not changing reader engagement, that has sent the industry into a tailspin.

Without understanding what’s in it for advertisers, an exhortation to “reign supreme in matters of local importance” has no more strategic value than a halftime cheer; if all it took to run a profitable paper was good local coverage, newspapers would not be in this bind in the first place. But good local coverage isn’t enough, because ordinary citizens don’t pay for news. What we paid for, when we used to buy the paper, was a bundle of news and sports and coupons and job listings, printed together and delivered to our doorstep.

People are still happy to pay for reproduction and delivery, of course. We just pay our ISPs now. And we still care about news and sports and coupons and job listings — we just get them from different places, and, critically, money that goes to Groupon or Hot Jobs [correction] no longer subsidizes the newsroom. Ad dollars lost to competing content creators can be fought for; ad dollars that no longer subsidize content at all are never coming back.

Buffett asks his new employees to provide “your best thinking as we work out the blend of digital and print,” but the eventual blend of digital and print is going to be digital. Small town residents of the sort Media General serves tend to adopt technology late, but the future eventually arrives, even in Opelika, Alabama.

These mistakes don’t mean Berkshire Hathaway will lose money on the deal, of course; given the fire-sale price, every one of those papers could close in the next ten years and Buffett’s firm would still make money on interest paid and the underlying real estate. These mistakes do mean that Buffett’s sepia-toned view of the newspaper business, with its references to linotype machines and newspaper-throwing contests, is badly off the mark. For the readers, old habits are not the same as current loyalty. For the advertisers, previous convenience does not translate into planned commitment. For the papers, historical longevity does not imply future resilience.

So here’s a prediction: long before the Berkshire Hathaway warrants expire, many of the papers Buffett has invested in will have reduced both print days and their newsroom staff, and journalists will be writing the “What went wrong with the Media General deal?” story.

The answer to that question is already apparent: Buffett wants to talk like a philanthropist and an investor at the same time, not understanding that the public good and the bottom line have diverged. A newspaper used to be both a profitable business and a public service, but this was just an accident of the competitive (or rather uncompetitive) media landscape. His commonsense approach to saving papers won’t work, because there is no longer any commonsense business model for a former monopoly that is still seeing its revenues erode faster than its costs.

* * *

Correction: In an earlier version, I had used Career Builder as an example, but as Ben Welsh points out in the comments, CB is jointly owned by newspaper companies. I substituted Hot Jobs as an example of a service that removes revenues from content subsidy entirely.

Pick up the pitchforks:
David Pogue underestimates Hollywood

January 20, 2012

Writing in his blog on the New York Times yesterday, David Pogue, one of the Times’ tech columnists, advises toning down the alarmist rhetoric over SOPA, suggesting that opponents of the bill (and its Senate cousin PIPA) should Put Down the Pitchforks. He takes particular issue with people who have criticized SOPA without actually understanding the text of the bill. Then, after this preamble, Pogue proceeds to offer an explanation of SOPA that makes it clear that he does not understand the text of the bill.

Here’s his description of what’s at stake:

If the entertainment industry’s legal arm gets out of control, [opponents] say, they could deem almost anything to be a piracy site. YouTube could be one, because lots of videos include bits of TV shows and copyrighted music. Facebook could be one, because people often link to copyrighted videos and songs. Google and Bing would be responsible for removing every link to a questionable Web site. Just a gigantic headache.

That’s Pogue’s perspective: Letting Hollywood decide whether any given site with user contributions facilitates piracy would amount to nothing more than “a gigantic headache.” (Me, I’d have gone with “a violation of the First Amendment.”) To come to a conclusion like this, you’d have to believe that traditional media companies are committed to balancing their desire for control with a respect for citizen rights, and indeed, Pogue does seem to believe this (hence the observation that bad things would happen only if the entertainment industry’s legal arm gets out of control.)

If their legal arm gets out of control? This is an industry that demands payment from summer camps if the kids sing Happy Birthday or God Bless America, an industry that issues takedown notices for a 29-second home movie of a toddler dancing to Prince. Traditional American media firms are implacably opposed to any increase in citizens’ ability to create, copy, save, alter, or share media on our own. They fought against cassette audio tapes, and photocopiers. They swore the VCR would destroy Hollywood. They tried to kill Tivo. They tried to kill MiniDisc. They tried to kill player pianos. They do this whenever a technology increases user freedom over media. Every time. Every single time.

And they don’t just want control — they want it at low cost, and high speed. Pogue talks about the bills’ allowing the Government to sue. What he doesn’t mention is that the bills were also written to allow a “market-based” system letting media firms get injunctions against sites they don’t like, or that they were written so that firms who host user conversations would have incentives to censor their own users in advance, rather than waiting for notification from a copyright holder, as happens now.

I know David Pogue, and he’s a smart guy. I don’t think he’s intentionally trying to obscure the way the bill imagines letting media firms escape due process and impose “market-based” censorship. I think he simply cannot imagine that the bills are as bad as they actually are.

This is a general problem — there is a reasonable conversation to be had about sites set up for large, commercial operations that are designed to violate copyright. And because there’s a reasonable conversation to be had, Pogue (and many others) simply imagine that the core of SOPA must therefore be reasonable. Surely Hollywood wouldn’t try to suspend due process, would they? Or create a parallel enforcement system? Or take away citizen recourse if they were unfairly silenced? They wouldn’t imagine the possibility of a longer jail term for streaming a Michael Jackson video than Jackson’s own doctor got for killing actual Michael Jackson? Would they?

Hollywood wants to take the law into their own hands — they had our representatives add a vigilante clause, for God’s sake, to protect overzealous censors from legal challenge by users — and like a Scooby Doo™ episode, they would have gotten away with it too, if it hadn’t been for us meddlesome kids.

Chris Dodd, lobbyist-in-chief for the MPAA, who is watching the thick end of a hundred million bucks of paid-for legislation swirl around the drain, has been reduced to bizarrely indirect defensiveness, touting the First Amendment credentials of the bill’s co-sponsors, as if that meant these bills must therefore be clean as well. Yet the very first substantive section of SOPA, Section 2.a.1., gives the game away, by being a little too touchy about its constitutional implications: “Nothing in this Act shall be construed to impose a prior restraint on free speech.” Got that? This bill is not about prior restraint. Totally not! What would make you even think such a thing!?

And arguments like Pogue’s are dangerous not because they are pro-SOPA — Pogue himself is glad it is in trouble — but because they obscure the core historical fact: The American media industry tries to stifle user freedom. Every time. Every single time.

We should delight in the stand we’ve taken in favor of things like, say, notifications, and trials, and proof before censoring someone, but we should get ready to do it again next year, and the year after that. The risk now is not that SOPA will pass. The risk is that we’ll think we’ve won. We haven’t; they’ll be back. Get ready to have this fight again.

Newspapers, Paywalls, and Core Users

January 4, 2012

This may be the year where newspapers finally drop the idea of treating all news as a product, and all readers as customers.

One early sign of this shift was the 2010 launch of paywalls for the London Times and Sunday Times. These involved no new strategy; however, the newspaper world was finally willing to regard them as a real test of whether general-interest papers could induce a critical mass of readers to pay. (Nope.) Then, in March, the New York Times introduced a charge for readers who crossed a certain threshold of article views (a pattern copied from the financial press, and especially the Financial Times), which is generating substantial revenue. Finally, and most recently, were a pair of announcements last month: the Chicago Sun-Times was adopting a new threshold charge, and the Minneapolis Star-Tribune said that their existing one was also working well. Taken together, these events are a blow to the idea that online news can be treated as a simple product for sale, as the physical newspaper was.

For some time now, newspaper people have been insisting, sometimes angrily, that we readers will soon have to pay for content (an assertion that had already appeared, in just that form, by 1996.) During that same period, freely available content grew ten-thousand-fold, while buyers didn’t. In fact, as Paul Graham has pointed out, “Consumers never really were paying for content, and publishers weren’t really selling it either…Almost every form of publishing has been organized as if the medium was what they were selling, and the content was irrelevant.”

Commercial radio is ad-supported because no one could figure out a way to restrict access to radio waves; cable TV collects revenues because someone figured out a way to restrict access to co-axial cables. The logic of the internet is that everyone pays for the infrastructure, then everyone gets to use it. This is obviously incompatible with print economics, but oddly, the industry’s faith in ‘every reader a customer’ has been largely unshaken by newspapers’ own lived experience of the move to the web.

A printed paper was a bundle. A reader who wanted only sports and stock tables bought the same paper as a reader who wanted local and national politics, or recipes and horoscopes. Online, though, that bundle is torn apart, every day, by users who forward each other individual URLs, without regard to front pages or named sections or intended navigation. This unbundling leads to the odd math of web readership — if you rank readers by pages viewed in a month, the largest group by far, between a third and half of them, will visit only a single page. A smaller group will read two pages in a month, a still smaller group will read three, and so on, up to the most active reader, in a group by herself, who will read dozens of pages a day, hundreds in a month.

Against this hugely variable audience behavior, a paywall was all-or-nothing: “If you won’t give us any money, we won’t show you any ads!” Offered this all-or-nothing choice, most readers opted for ‘nothing’; the day they launched their paywall, the Times of London shrank its digital audience from a large multiple of its print circulation to a small fraction of it. This isn’t a problem with general-interest paywalls — it is the problem, widely understood before the turn of the century, and one to which there has never been a convincing answer. The easy part of treating digital news as a product is getting money from 2% of your audience. The hard part is losing 98% of your advertising base.

* * *

To understand newspapers’ 15-year attachment to paywalls, you have to understand “Everyone must pay!” not just as an economic assertion, but as a cultural one. Though the journalists all knew readership would plummet if their paper dropped imported content like Dear Abby or the funny pages, they never really had to know just how few people were reading about the City Council or the water main break. Part of the appeal of paywalls, even in the face of their economic ineffectiveness, was preserving this sense that a coupon-clipper and a news junkie were both just customers, people whose motivations the paper could serve in general, without having to understand in particular.

The article threshold has often been discussed as if it was simply a new method of getting readers to pay, to which the reply has to be “Yes, except for most of them.” Calling article thresholds a “leaky” or “porous” paywall understates the enormity of the change; the metaphor of a leak suggests a mostly intact container that lets out a minority of its contents, but a paper that shares even two pages a month frees a majority of users from any fee at all. By the time the threshold is at 20 pages (a number fast becoming customary) a paper has given up on even trying to charge between 85% and 95% of its readers, and it will only convince a minority of that minority to pay.
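That arithmetic is easy to sanity-check with a toy simulation. This is a sketch under my own assumptions (the power-law exponent of 1.5 and the 300-page cap are illustrative, not drawn from any paper’s actual traffic data) of the heavy-tailed readership described above, showing how a 20-page threshold exempts the vast majority of readers from any charge:

```python
import random

random.seed(42)

# Toy model of monthly page views per reader: a truncated power law,
# in which a large fraction of readers view a single page and a tiny
# group reads hundreds. The exponent (1.5) and cap (300 pages) are
# illustrative assumptions, not measured newspaper data.
def simulate_readers(n_readers, alpha=1.5, max_pages=300):
    pages = range(1, max_pages + 1)
    weights = [p ** -alpha for p in pages]
    return random.choices(pages, weights=weights, k=n_readers)

views = simulate_readers(100_000)

# Fraction who visit only a single page in a month
single_page = sum(1 for v in views if v == 1) / len(views)

# Fraction a 20-page threshold never asks to pay
below_threshold = sum(1 for v in views if v <= 20) / len(views)

print(f"single-page readers: {single_page:.0%}")
print(f"readers a 20-page threshold never charges: {below_threshold:.0%}")
```

With these made-up parameters, roughly 40% of simulated readers view exactly one page, and between 85% and 90% never hit the threshold at all, consistent with the range cited above.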

Newspapers have two principal sources of revenue, readers and advertisers, and they can operate at mass or niche scale for each of those groups. A metro-area daily paper is a mass product for customers (many readers buy the paper) and for advertisers (many readers see their ads.) Newsletters and small-circulation magazines, by contrast, serve niche readers, and therefore niche advertisers — Fire Chief, Mother Earth News. (Some newsletters get by with no advertising at all, as with Cook’s Illustrated, where part of what the user pays for is freedom from ads, or rather freedom from a publisher beholden to advertisers.)

Paywalls were an attempt to preserve the old mass+mass model after a transition to digital distribution. With so few readers willing to pay, and therefore so few readers to advertise to, paywalls instead turned newspapers into a niche+niche business. What the article threshold creates is an odd hybrid — a mass market for advertising, but a niche market for users. As David Cohn has pointed out, this is the commercial equivalent of the National Public Radio model, where sponsors reach all listeners, but direct support comes only from donors. (Lest NPR seem like small ball, it’s worth noting that the Times has convinced something like one out of every hundred of its online readers to pay, while NPR affiliates’ success rate is something like one in twelve. Newspapers with thresholds now aspire to NPR’s persuasiveness.) Paywalls encourage a paper to focus on the value of their content. Thresholds encourage them to focus on the value of their users.

* * *

Threshold charges subject the logic of the print bundle — a bit of everything for everybody, slathered with ads — to two new questions: What do our most committed users want? And what will turn our most frequent readers into committed users? Here are some things that won’t: More ads. More gossip. More syndicated copy. This is new territory for mainstream papers, who have always had head count rather than engagement as their principal business metric.

Celebrities behaving badly always drive page-views through the roof, but those readers will be anything but committed. Meanwhile, the people who hit the threshold and then hand over money are, almost by definition, people who regard the paper not just as an occasional source of interesting articles, but as an essential institution, one whose continued existence is vital no matter what today’s offerings are.

In discussing why the most loyal subset of readers would pay for access to the Times, Felix Salmon described some of the motivations reported by users: “I like the product, understand the incentives involved, and want its production to continue” and “I feel that maintaining a quality NYT is immensely important to the country as a whole.” Now, and presumably from now on, the readers that matter most are disproportionately likely to score high on the God Forbid index (as in “God forbid the Sun-Times not be around to keep an eye on the politicians!”)

The people who feel this way have always been a minority of the readership, a fact obscured by print bundles, but made painfully visible by paywalls. When a paper abandons the standard paywall strategy, it gives up on selling news as a simple transaction. Instead, it must also appeal to its readers’ non-financial and non-transactional motivations: loyalty, gratitude, dedication to the mission, a sense of identification with the paper, an urge to preserve it as an institution rather than a business.

* * *

Thresholds are now mostly being tried at big-city papers — New York, Chicago, Minneapolis. Most papers, however, are not the Minneapolis Star-Tribune. Most papers are the Springfield Reporter, papers with a circulation of 20,000 or less, and mostly made up of content bought from the Associated Press and United Media. These papers may not do well on the God Forbid index, because they produce so little original content, and they may not find thresholds financially viable, because the most engaged hundredth of their audience will number in the dozens, not the thousands.

On the other hand, local reporting is almost the only form of content for which the local paper is the sole source, so it’s also possible to imagine a virtuous circle for at least some small papers, where a civically-minded core of citizens step in to fund the paper in return for an increase in local coverage, both of politics and community matters. (It’s hard to overstate how vital community coverage is for small-town papers, which have typically been as much village well as town crier.)

It’s too early to know what behaviors the newly core users will reward or demand from their papers. They may start asking to see fewer or less intrusive ads than non-paying readers do. They may reward papers that make their comments section more conversational (as the Times has just done.) The most dramatic change, though, is that the paying users are almost certain to be more politically engaged than the median reader.

There has never been a mass market for good journalism in this country. What there used to be was a mass market for print ads, coupled with a mass market for a physical bundle of entertainment, opinion, and information; these were tied to an institutional agreement to subsidize a modicum of real journalism. In that mass market, the opinions of the politically engaged readers didn’t matter much, outnumbered as they were by people checking their horoscopes. This suited advertisers fine; they have always preferred a centrist and distanced political outlook, the better not to alienate potential customers. When the politically engaged readers are also the only paying readers, however, their opinion will come to matter more, and in ways that will sometimes contradict the advertisers’ desires for anodyne coverage.

It will take time for the economic weight of those users to affect the organizational form of the paper, but slowly, slowly, form follows funding. For the moment at least, the most promising experiment in user support means forgoing mass in favor of passion; this may be the year where we see how papers figure out how to reward the people most committed to their long-term survival.