Manufacturers of mainframe computers made good decisions about making and selling mainframe computers and devising important refinements to them in their R. & D. departments—“sustaining innovations,” Christensen called them—but, busy pleasing their mainframe customers, one tinker at a time, they missed what an entirely untapped class of customers wanted, personal computers, a market created by what Christensen called “disruptive innovation”: the selling of a cheaper, poorer-quality product that initially reaches less profitable customers but eventually takes over and devours an entire industry.
Things you own or use that are now considered to be the product of disruptive innovation include your smartphone and many of its apps, which have disrupted businesses from travel agencies and record stores to mapmaking and taxi dispatch. Much more disruption, we are told, lies ahead. Christensen has co-written books urging disruptive innovation in higher education (“The Innovative University”), public schools (“Disrupting Class”), and health care (“The Innovator’s Prescription”). His acolytes and imitators, including no small number of hucksters, have called for the disruption of more or less everything else. If the company you work for has a chief innovation officer, it’s because of the long arm of “The Innovator’s Dilemma.” If your city’s public-school district has adopted an Innovation Agenda, which has disrupted the education of every kid in the city, you live in the shadow of “The Innovator’s Dilemma.” If you saw the episode of the HBO sitcom “Silicon Valley” in which the characters attend a conference called TechCrunch Disrupt 2014 (which is a real thing), and a guy from the stage, a Paul Rudd look-alike, shouts, “Let me hear it, DISSS-RUPPTTT!,” you have heard the voice of Clay Christensen, echoing across the valley.
Every age has a theory of rising and falling, of growth and decay, of bloom and wilt: a theory of nature. Every age also has a theory about the past and the present, of what was and what is, a notion of time: a theory of history. Theories of history used to be supernatural: the divine ruled time; the hand of God, a special providence, lay behind the fall of each sparrow. If the present differed from the past, it was usually worse: supernatural theories of history tend to involve decline, a fall from grace, the loss of God’s favor, corruption. Beginning in the eighteenth century, as the intellectual historian Dorothy Ross once pointed out, theories of history became secular; then they started something new—historicism, the idea “that all events in historical time can be explained by prior events in historical time.” Things began looking up. First, there was that, then there was this, and this is better than that. The eighteenth century embraced the idea of progress; the nineteenth century had evolution; the twentieth century had growth and then innovation. Our era has disruption, which, despite its futurism, is atavistic. It’s a theory of history founded on a profound anxiety about financial collapse, an apocalyptic fear of global devastation, and shaky evidence.
The idea of progress—the notion that human history is the history of human betterment—dominated the world view of the West between the Enlightenment and the First World War. It had critics from the start, and, in the last century, even people who cherish the idea of progress, and point to improvements like the eradication of contagious diseases and the education of girls, have been hard-pressed to hold on to it while reckoning with two World Wars, the Holocaust and Hiroshima, genocide and global warming. Replacing “progress” with “innovation” skirts the question of whether a novelty is an improvement: the world may not be getting better and better but our devices are getting newer and newer.
The word “innovate”—to make new—used to have chiefly negative connotations: it signified excessive novelty, without purpose or end. Edmund Burke called the French Revolution a “revolt of innovation”; Federalists declared themselves to be “enemies to innovation.” George Washington, on his deathbed, was said to have uttered these words: “Beware of innovation in politics.” Noah Webster warned in his dictionary, in 1828, “It is often dangerous to innovate on the customs of a nation.”
The redemption of innovation began in 1939, when the economist Joseph Schumpeter, in his landmark study of business cycles, used the word to mean bringing new products to market, a usage that spread slowly, and only in the specialized literatures of economics and business. (In 1942, Schumpeter theorized about “creative destruction”; Christensen, retrofitting, believes that Schumpeter was really describing disruptive innovation.) “Innovation” began to seep beyond specialized literatures in the nineteen-nineties, and gained ubiquity only after 9/11. One measure: between 2011 and 2014, Time, the Times Magazine, The New Yorker, Forbes, and even Better Homes and Gardens published special “innovation” issues—the modern equivalents of what, a century ago, were known as “sketches of men of progress.”
The idea of innovation is the idea of progress stripped of the aspirations of the Enlightenment, scrubbed clean of the horrors of the twentieth century, and relieved of its critics. Disruptive innovation goes further, holding out the hope of salvation against the very damnation it describes: disrupt, and you will be saved.
When the financial-services industry disruptively innovated, it led to a global financial crisis. Like the bursting of the dot-com bubble, the meltdown didn’t dim the fervor for disruption; instead, it fuelled it, because these products of disruption contributed to the panic on which the theory of disruption thrives.
The logic of disruptive innovation is the logic of the startup: establish a team of innovators, set a whiteboard under a blue sky, and never ask them to make a profit, because there needs to be a wall of separation between the people whose job is to come up with the best, smartest, and most creative and important ideas and the people whose job is to make money by selling stuff. Interestingly, a similar principle has existed, for more than a century, in the press. The “heavyweight innovation team”? That’s what journalists used to call the “newsroom.”
It’s readily apparent that, in a democracy, the important business interests of institutions like the press might at times conflict with what became known as the “public interest.” That’s why, a very long time ago, newspapers like the Times and magazines like this one established a wall of separation between their editorial and business sides. (The metaphor is to the Jeffersonian wall between church and state.) “The wall dividing the newsroom and business side has served The Times well for decades,” according to the Times’ Innovation Report, “allowing one side to focus on readers and the other to focus on advertisers,” as if this had been, all along, simply a matter of office efficiency. But the notion of a wall should be abandoned, according to the report, because it has “hidden costs” that thwart innovation.
Disruptive innovation is a theory about why businesses fail. It’s not more than that. It doesn’t explain change. It’s not a law of nature. It’s an artifact of history, an idea, forged in time; it’s the manufacture of a moment of upsetting and edgy uncertainty. Transfixed by change, it’s blind to continuity. It makes a very poor prophet.
The upstarts who work at startups don’t often stay at any one place for very long. (Three out of four startups fail. More than nine out of ten never earn a return.) They work a year here, a few months there—zany hours everywhere. They wear jeans and sneakers and ride scooters and share offices and sprawl on couches like Great Danes. Their coffee machines look like dollhouse-size factories.