Tag Archives: ethnography

The Fasinatng … Frustrating … Fascinating History of Autocorrect | Gadget Lab | WIRED


It’s not too much of an exaggeration to call autocorrect the overlooked underwriter of our era of mobile prolixity. Without it, we wouldn’t be able to compose windy love letters from stadium bleachers, write novels on subway commutes, or dash off breakup texts while in line at the post office. Without it, we probably couldn’t even have phones that look anything like the ingots we tickle—the whole notion of touchscreen typing, where our podgy physical fingers are expected to land with precision on tiny virtual keys, is viable only when we have some serious software to tidy up after us. Because we know autocorrect is there as brace and cushion, we’re free to write with increased abandon, at times and in places where writing would otherwise be impossible. Thanks to autocorrect, the gap between whim and word is narrower than it’s ever been, and our world is awash in easily rendered thought.

[…]

I find him in a drably pastel conference room at Microsoft headquarters in Redmond, Washington. Dean Hachamovitch—inventor on the patent for autocorrect and the closest thing it has to an individual creator—reaches across the table to introduce himself.

[…]

Hachamovitch, now a vice president at Microsoft and head of data science for the entire corporation, is a likable and modest man. He freely concedes that he types teh as much as anyone. (Almost certainly he does not often type hte. As researchers have discovered, initial-letter transposition is a much rarer error.)

[…]

The notion of autocorrect was born when Hachamovitch began thinking about a functionality that already existed in Word. Thanks to Charles Simonyi, the longtime Microsoft executive widely recognized as the father of graphical word processing, Word had a “glossary” that could be used as a sort of auto-expander. You could set up a string of words—like insert logo—which, when typed and followed by a press of the F3 button, would get replaced by a JPEG of your company’s logo. Hachamovitch realized that this glossary could be used far more aggressively to correct common mistakes. He wrote a little code that would allow you to press the left arrow and F3 at any time and immediately replace teh with the. His aha moment came when he realized that, because English words are space-delimited, the space bar itself could trigger the replacement, to make correction … automatic! Hachamovitch drew up a list of common errors, and over the following years he and his team went on to solve many of the thorniest. Seperate would automatically change to separate. Accidental cap locks would adjust immediately (making dEAR grEG into Dear Greg). One Microsoft manager dubbed them the Department of Stupid PC Tricks.
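
The trick is easy to mimic. Here is a minimal sketch in Python (the three-entry correction table is a hypothetical stand-in, not Word's actual list): a word is finalized, and silently swapped if it is a known error, the moment a space ends it.

```python
# Minimal sketch of space-triggered autocorrect. When a space arrives, the
# word just completed is looked up in a table of known errors and silently
# replaced. Table entries are illustrative only.
CORRECTIONS = {
    "teh": "the",
    "hte": "the",
    "seperate": "separate",
}

def autocorrect_stream(keystrokes: str) -> str:
    """Replay keystrokes; every space finalizes (and may correct) a word."""
    out, word = [], []
    for ch in keystrokes:
        if ch == " ":
            token = "".join(word)
            out.append(CORRECTIONS.get(token, token))  # the space-bar trigger
            out.append(" ")
            word = []
        else:
            word.append(ch)
    return "".join(out) + "".join(word)  # trailing word: no space yet, untouched

print(autocorrect_stream("teh cat sat on teh mat "))  # -> "the cat sat on the mat "
```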

[…]

One day Hachamovitch went into his boss’s machine and changed the autocorrect dictionary so that any time he typed Dean it was automatically changed to the name of his coworker Mike, and vice versa. (His boss kept both his computer and office locked after that.) Children were even quicker to grasp the comedic ramifications of the new tool. After Hachamovitch went to speak to his daughter’s third-grade class, he got emails from parents that read along the lines of “Thank you for coming to talk to my daughter’s class, but whenever I try to type her name I find it automatically transforms itself into ‘The pretty princess.’”

[…]

On idiom, some of its calls seemed fairly clear-cut: gorilla warfare became guerrilla warfare, for example, even though a wildlife biologist might find that an inconvenient assumption. But some of the calls were quite tricky, and one of the trickiest involved the issue of obscenity. On one hand, Word didn’t want to seem priggish; on the other, it couldn’t very well go around recommending the correct spelling of mothrefukcer. Microsoft was sensitive to these issues. The solution lay in expanding one of spell-check’s most special lists, bearing the understated title: “Words which should neither be flagged nor suggested.”

[…]

One day Vignola sent Bill Gates an email. (Thorpe couldn’t recall who Bill Vignola was or what he did.) Whenever Bill Vignola typed his own name in MS Word, the email to Gates explained, it was automatically changed to Bill Vaginal. Presumably Vignola caught this sometimes, but not always, and no doubt this serious man was sad to come across like a character in a Thomas Pynchon novel. His email made it down the chain of command to Thorpe. And Bill Vaginal wasn’t the only complainant: As Thorpe recalls, Goldman Sachs was mad that Word was always turning it into Goddamn Sachs.

Thorpe went through the dictionary and took out all the words marked as “vulgar.” Then he threw in a few anatomical terms for good measure. The resulting list ran to hundreds of entries:

anally, asshole, battle-axe, battleaxe, bimbo, booger, boogers, butthead, Butthead …

With these sorts of master lists in place—the corrections, the exceptions, and the to-be-primly-ignored—the joists of autocorrect, then still a subdomain of spell-check, were in place for the early releases of Word. Microsoft’s dominance at the time ensured that autocorrect became globally ubiquitous, along with some of its idiosyncrasies. By the early 2000s, European bureaucrats would begin to notice what came to be called the Cupertino effect, whereby the word cooperation (bizarrely included only in hyphenated form in the standard Word dictionary) would be marked wrong, with a suggested change to Cupertino. There are thus many instances where one parliamentary back-bencher or another longs for increased Cupertino between nations. Since then, linguists have adopted the word cupertino as a term of art for such trapdoors that have been assimilated into the language.
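The three master lists imply a simple architecture, and the Cupertino effect falls out of it naturally. Below is a sketch under stated assumptions: toy word lists, with difflib string similarity standing in for Word's real suggestion ranking.

```python
import difflib

# Toy versions of the three master lists (entries illustrative only).
DICTIONARY = {"co-operation", "Cupertino", "dear", "separate"}  # no "cooperation"
AUTOCORRECT = {"seperate": "separate", "teh": "the"}            # silently fixed
SUPPRESSED = {"asshole", "bimbo"}            # "neither flagged nor suggested"

def check(word):
    if word in AUTOCORRECT:
        return ("corrected", AUTOCORRECT[word])
    if word in DICTIONARY or word in SUPPRESSED:
        return ("ok", word)  # suppressed words pass without a red squiggle
    # Unknown word: flag it. Suggestions are drawn only from DICTIONARY, so a
    # suppressed word can never be proposed as a correction.
    return ("flagged", difflib.get_close_matches(word, DICTIONARY, n=2, cutoff=0.5))

print(check("seperate"))     # ('corrected', 'separate')
print(check("cooperation"))  # flagged; the candidates include 'Cupertino'
```

Because the dictionary holds only the hyphenated co-operation, the unhyphenated word is flagged as unknown, and the place name is close enough to surface as a candidate: the trapdoor in miniature.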

[…]

Autocorrection is no longer an overqualified intern drawing up lists of directives; it’s now a vast statistical affair in which petabytes of public words are examined to decide when a usage is popular enough to become a probabilistically savvy replacement. The work of the autocorrect team has been made algorithmic and outsourced to the cloud.

A handful of factors are taken into account to weight the variables: keyboard proximity, phonetic similarity, linguistic context. But it’s essentially a big popularity contest. A Microsoft engineer showed me a slide where somebody was trying to search for the long-named Austrian action star who became governor of California. Schwarzenegger, he explained, “is about 10,000 times more popular in the world than its variants”—Shwaranegar or Scuzzynectar or what have you. Autocorrect has become an index of the most popular way to spell and order certain words.
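
That popularity contest is easy to caricature in code. A sketch, assuming an invented frequency table (the 10,000-to-1 ratio is the engineer's figure; candidate generation here uses plain string similarity rather than the real keyboard and phonetic models):

```python
import difflib

# Invented corpus counts for illustration; real systems also weight
# keyboard proximity, phonetic similarity, and surrounding context.
FREQ = {"schwarzenegger": 10_000, "shwaranegar": 1, "scuzzynectar": 1}

def correct(word):
    """Generate nearby candidates, then let raw popularity pick the winner."""
    candidates = difflib.get_close_matches(word.lower(), FREQ, n=5, cutoff=0.5)
    return max(candidates, key=FREQ.get, default=word)

print(correct("Shwarzeneger"))  # -> 'schwarzenegger'
```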

When English spelling was first standardized, it was by the effective fiat of those who controlled the communicative means of production. Dictionaries and usage guides have always represented compromises between top-down prescriptivists—those who believe language ought to be used a certain way—and bottom-up descriptivists—those who believe, instead, that there’s no ought about it.

The emerging consensus on usage will be a matter of statistical arbitration, between the way “most” people spell something and the way “some” people do. If it proceeds as it has, it’s likely to be a winner-take-all affair, as alternatives drop out. (Though Apple’s recent introduction of personalized, “contextual” autocorrect—which can distinguish between the language you use with your friends and the language you use with your boss—might complicate that process of standardization and allow us the favor of our characteristic errors.)

[…]

The possibility of linguistic communication is grounded in the fact of what some philosophers of language have called the principle of charity: The first step in a successful interpretation of an utterance is the belief that it somehow accords with the universe as we understand it. This means that we have a propensity to take a sort of ownership over even our errors, hoping for the possibility of meaning in even the most perverse string of letters. We feel honored to have a companion like autocorrect who trusts that, despite surface clumsiness or nonsense, inside us always smiles an articulate truth.

[…]

Today the influence of autocorrect is everywhere: A commenter on the Language Log blog recently mentioned hearing of an entire dialect in Asia based on phone cupertinos, where teens used the first suggestion from autocomplete instead of their chosen word, thus creating a slang that others couldn’t decode. (It’s similar to the Anglophone teenagers who, in a previous texting era, claimed to have replaced the term of approval cool with that of book because of happenstance T9 input priority.) Surrealists once encouraged the practice of écriture automatique, or automatic writing, in order to reveal the peculiar longings of the unconscious. The crackpot suggestions of autocorrect have become our own form of automatic writing—but what they reveal are the peculiar statistics of a world id.
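
The book-for-cool story checks out arithmetically: on a T9 keypad the two words are typed with the same key sequence, so whichever word the phone's dictionary ranked first won. A quick sketch:

```python
# Standard phone keypad: map each letter to the digit key that carries it.
T9 = {c: d for d, letters in {
    "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
    "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
}.items() for c in letters}

def keypresses(word):
    return "".join(T9[c] for c in word.lower())

print(keypresses("cool"), keypresses("book"))  # both '2665': a T9 collision
```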


Tracks of My Tears: Design Observer



Tears of ending and beginning, Rose-Lynn Fisher ©2013

Tears of grief, Rose-Lynn Fisher ©2013

Onion Tears, Rose-Lynn Fisher ©2013

Tears of possibility and hope, Rose-Lynn Fisher ©2013

1.
You can’t be impersonal when it comes to tears. They are by their nature intimate, as unique as the patterns of a snowflake or the swirl of the skin on your thumb. As Rose-Lynn Fisher’s photographs make clear, your tears are yours alone and each one is different.

2.
Fisher used a standard light Zeiss microscope and a digital microscopy camera to make these images. She photographed over one hundred tears in her quest to discover their distinctive formations. She worked like a surveyor mapping the topography of a new land. But rather than surveying the mountains and valleys of an external landscape, her explorations are of the proteins, hormones and minerals of an inner world.

[…]

4.
Medieval theologians grouped tears into four different types:

Tears of contrition
Tears of sorrow
Tears of gladness
Tears of grace

Twenty-first-century scientists have identified three different types of tears:

Basal tears, which moisten the eye.
Reflex tears, caused by an outside irritant like a stray eyelash, a chopped onion, or a smoky wind.
Emotional tears, triggered by sadness, grief, frustration, ecstasy, mourning, or loss.

5.
Emotional tears are packed full of hormones, up to 25 percent more than reflex tears. In Fisher’s photographs a tear from chopping an onion looks very different from a tear of possibility and hope.

Emotional tears contain adrenocorticotropic hormone, which signals high levels of stress; leucine-enkephalin, an endorphin that reduces pain; and prolactin, a hormone that triggers breast-milk production (and is found in higher levels in women’s tears).

William Frey, of the St. Paul Ramsey Center in Minnesota, discovered that tears contain thirty times more manganese than blood, and manganese is a mineral that affects mood; it’s linked to depression. All of these elements build up in the body during times of stress, and crying is a way for the body to release them. A good cry slows your heart rate; it helps you return to an emotional equilibrium.

In other words, you can cry yourself back to mental health.

[…]

7.
Samuel Beckett once said “my words are my tears.” But the opposite is also true: tears are your words. Tears are a language, a means of communication. Overwhelmed by emotion, babies cry out in need, having no other way to express their feelings. A lover, not getting the response that she craves, cries in frustration: tears of distress as a plea for emotional connection. Tears flow when mere words don’t.

8.
Rose-Lynn Fisher writes that “the topography of tears is a momentary landscape.” Isn’t it strange that a tear, which is transitory and fragile, can look just like the topography of an actual landscape: the solid stuff of soil, water, stone and vegetation, in formation for thousands of years? How is it that the microcosm of the tear mirrors the macrocosm of the earth?

9.
In Lewis Carroll’s Alice in Wonderland, Alice cries when she grows to be nine feet tall, and she can’t get into the garden. She reprimands herself just like a parent scolding a child: “You ought to be ashamed of yourself, a great girl like you to go on crying in this way! Stop it this moment, I tell you!” But she can’t stop and she cries gallons of tears that form a large pool around her that is four inches deep.

And then Alice loses her sense of self. “Who in the world am I?” she asks, like a person experiencing a breakdown. She becomes more and more confused, imagines herself as someone else, and yearns to be told who she really is (“If I like that person I’ll come up”). She then bursts into tears again when she realizes how lonely she feels.

And then Alice shrinks and she finds herself swimming in the pool of her own tears, the same tears that she shed when she was nine feet tall. She meets a mouse, and later a Duck, a Dodo, a Lory and an Eaglet, and they all fall into the pool as well, and then they all climb ashore, and are saved.

Alice’s tears of distress become her means of salvation.

The Moral Hazards and Legal Conundrums of Our Robot-Filled Future | Science | WIRED



Whether you find it exhilarating or terrifying (or both), progress in robotics and related fields like AI is raising new ethical quandaries and challenging legal codes that were created for a world in which a sharp line separates man from machine. Last week, roboticists, legal scholars, and other experts met at the University of California, Berkeley law school to talk through some of the social, moral, and legal hazards that are likely to arise as that line starts to blur.

[…]

We May Have Feelings for Robots

Darling studies the attachments people form with robots. “There’s evidence that people respond very strongly to robots that are designed to be lifelike,” she said. “We tend to project onto them and anthropomorphize them.”

Most of the evidence for this so far is anecdotal. Darling’s ex-boyfriend, for example, named his Roomba and would feel bad for it when it got stuck under the couch. She’s trying to study human empathy for robots in a more systematic way. In one ongoing study she’s investigating how people react when they’re asked to “hurt” or “kill” a robot by hitting it with various objects. Preliminary evidence suggests they don’t like it one bit.

Another study by Julie Carpenter, a University of Washington graduate student, found that soldiers develop attachments to the robots they use to detect and defuse roadside bombs and other weapons. In interviews with service members, Carpenter found that in some cases they named their robots, ascribed personality traits to them, and felt angry or even sad when their robot got blown up in the line of duty.

This emerging field of research has implications for robot design, Darling says. If you’re building a robot to help take care of elderly people, for example, you might want to foster a deep sense of engagement. But if you’re building a robot for military use, you wouldn’t want the humans to get so attached that they risk their own lives.

There might also be more profound implications. In a 2012 paper, Darling considers the possibility of robot rights. She admits it’s a provocative proposition, but notes that some arguments for animal rights focus not on the animals’ ability to experience pain and anguish but on the effect that cruelty to animals has on humans. If research supports the idea that abusing robots makes people more abusive towards people, it might be a good idea to have legal protections for social robots, Darling says.

Robots Will Have Sex With Us

Robotics is taking sex toys to a new level, and that raises some interesting issues, ranging from the appropriateness of human-robot marriages to using robots to replace prostitutes or spice up the sex lives of the elderly. Some of the most provocative questions involve child-like sex robots. Arkin, the Georgia Tech roboticist, thinks it’s worth investigating whether they could be used to rehabilitate sex offenders.

“We have a problem with pedophilia in society,” Arkin said. “What do we do with these people after they get out of prison? There are very high recidivism rates.” If convicted sex offenders were “prescribed” a child-like sex robot, much like heroin addicts are prescribed methadone as part of a program to kick the habit, it might be possible to reduce recidivism, Arkin suggests. A government agency would probably never fund such a project, Arkin says, and he doesn’t know of anyone else who would either. “But nonetheless I do believe there is a possibility that we may be able to better protect society through this kind of research, rather than having the sex robot cottage industry develop in seedy back rooms, which indeed it is already,” he said.

Even if—and it’s a big if—such a project could win funding and ethical approval, it would be difficult to carry out, Sharkey cautions. “How do you actually do the research until these things are out there in the wild and used for a while? How do you know you’re not creating pedophiles?” he said.

How the legal system would deal with child-like sex robots isn’t entirely clear, according to Ryan Calo, a law professor at the University of Washington. In 2002, the Supreme Court ruled that simulated child pornography (in which young adults or computer generated characters play the parts of children) is protected by the First Amendment and can’t be criminalized. “I could see that extending to embodied [robotic] children, but I can also see courts and regulators getting really upset about that,” Calo said.

Our Laws Aren’t Made for Robots

Child-like sex robots are just one of the many ways in which robots are likely to challenge the legal system in the future, Calo said. “The law assumes, by and large, a dichotomy between a person and a thing. Yet robotics is a place where that gets conflated,” he said.

For example, the concept of mens rea (Latin for “guilty mind”) is central to criminal law: For an act to be considered a crime, there has to be intent. Artificial intelligence could throw a wrench into that thinking, Calo said. “The prospect of robotics behaving in the wild, displaying emergent or learned behavior creates the possibility there will be crimes that no one really intended.”

To illustrate the point, Calo used the example of Darius Kazemi, a programmer who created a bot that buys random stuff for him on Amazon. “He comes home and he’s delighted to find some box that his bot purchased,” Calo said. But what if Kazemi’s bot bought some alcoholic candy, which is illegal in his home state of Massachusetts? Could he be held accountable? So far the bot hasn’t stumbled on Amazon’s chocolate liqueur candy offerings—it’s just hypothetical. But Calo thinks we’ll soon start seeing cases that raise these kinds of questions.

And it won’t stop there. The apparently imminent arrival of autonomous vehicles will raise new questions in liability law. Social robots inside the home will raise 4th Amendment issues. “Could the FBI get a warrant to plant a question in a robot you talk to, ‘So, where’d you go this weekend?’” Calo asked. Then there are issues of how to establish the limits that society deems appropriate. Should robots or the roboticists who make them be the target of our laws and regulations?

Jihad vs. McWorld – Benjamin R. Barber – The Atlantic


Just beyond the horizon of current events lie two possible political futures—both bleak, neither democratic. The first is a retribalization of large swaths of humankind by war and bloodshed: a threatened Lebanonization of national states in which culture is pitted against culture, people against people, tribe against tribe—a Jihad in the name of a hundred narrowly conceived faiths against every kind of interdependence, every kind of artificial social cooperation and civic mutuality. The second is being borne in on us by the onrush of economic and ecological forces that demand integration and uniformity and that mesmerize the world with fast music, fast computers, and fast food—with MTV, Macintosh, and McDonald’s, pressing nations into one commercially homogenous global network: one McWorld tied together by technology, ecology, communications, and commerce. The planet is falling precipitantly apart and coming reluctantly together at the very same moment.

[…]

The tendencies of what I am here calling the forces of Jihad and the forces of McWorld operate with equal strength in opposite directions, the one driven by parochial hatreds, the other by universalizing markets, the one re-creating ancient subnational and ethnic borders from within, the other making national borders porous from without. They have one thing in common: neither offers much hope to citizens looking for practical ways to govern themselves democratically. If the global future is to pit Jihad’s centrifugal whirlwind against McWorld’s centripetal black hole, the outcome is unlikely to be democratic—or so I will argue.

[…]

Four imperatives make up the dynamic of McWorld: a market imperative, a resource imperative, an information-technology imperative, and an ecological imperative. By shrinking the world and diminishing the salience of national borders, these imperatives have in combination achieved a considerable victory over factiousness and particularism, and not least of all over their most virulent traditional form—nationalism. It is the realists who are now Europeans, the utopians who dream nostalgically of a resurgent England or Germany, perhaps even a resurgent Wales or Saxony. Yesterday’s wishful cry for one world has yielded to the reality of McWorld.

THE MARKET IMPERATIVE. Marxist and Leninist theories of imperialism assumed that the quest for ever-expanding markets would in time compel nation-based capitalist economies to push against national boundaries in search of an international economic imperium. Whatever else has happened to the scientistic predictions of Marxism, in this domain they have proved farsighted. All national economies are now vulnerable to the inroads of larger, transnational markets within which trade is free, currencies are convertible, access to banking is open, and contracts are enforceable under law. In Europe, Asia, Africa, the South Pacific, and the Americas such markets are eroding national sovereignty and giving rise to entities—international banks, trade associations, transnational lobbies like OPEC and Greenpeace, world news services like CNN and the BBC, and multinational corporations that increasingly lack a meaningful national identity—that neither reflect nor respect nationhood as an organizing or regulative principle.

The market imperative has also reinforced the quest for international peace and stability, requisites of an efficient international economy. Markets are enemies of parochialism, isolation, fractiousness, war. Market psychology attenuates the psychology of ideological and religious cleavages and assumes a concord among producers and consumers—categories that ill fit narrowly conceived national or religious cultures. Shopping has little tolerance for blue laws, whether dictated by pub-closing British paternalism, Sabbath-observing Jewish Orthodox fundamentalism, or no-Sunday-liquor-sales Massachusetts puritanism. In the context of common markets, international law ceases to be a vision of justice and becomes a workaday framework for getting things done—enforcing contracts, ensuring that governments abide by deals, regulating trade and currency relations, and so forth.

Common markets demand a common language, as well as a common currency, and they produce common behaviors of the kind bred by cosmopolitan city life everywhere. Commercial pilots, computer programmers, international bankers, media specialists, oil riggers, entertainment celebrities, ecology experts, demographers, accountants, professors, athletes—these compose a new breed of men and women for whom religion, culture, and nationality can seem only marginal elements in a working identity. Although sociologists of everyday life will no doubt continue to distinguish a Japanese from an American mode, shopping has a common signature throughout the world. Cynics might even say that some of the recent revolutions in Eastern Europe have had as their true goal not liberty and the right to vote but well-paying jobs and the right to shop (although the vote is proving easier to acquire than consumer goods). The market imperative is, then, plenty powerful; but, notwithstanding some of the claims made for “democratic capitalism,” it is not identical with the democratic imperative.

THE RESOURCE IMPERATIVE. Democrats once dreamed of societies whose political autonomy rested firmly on economic independence. The Athenians idealized what they called autarky, and tried for a while to create a way of life simple and austere enough to make the polis genuinely self-sufficient. To be free meant to be independent of any other community or polis. Not even the Athenians were able to achieve autarky, however: human nature, it turns out, is dependency. By the time of Pericles, Athenian politics was inextricably bound up with a flowering empire held together by naval power and commerce—an empire that, even as it appeared to enhance Athenian might, ate away at Athenian independence and autarky. Master and slave, it turned out, were bound together by mutual insufficiency.

The dream of autarky briefly engrossed nineteenth-century America as well, for the underpopulated, endlessly bountiful land, the cornucopia of natural resources, and the natural barriers of a continent walled in by two great seas led many to believe that America could be a world unto itself. Given this past, it has been harder for Americans than for most to accept the inevitability of interdependence. But the rapid depletion of resources even in a country like ours, where they once seemed inexhaustible, and the maldistribution of arable soil and mineral resources on the planet, leave even the wealthiest societies ever more resource-dependent and many other nations in permanently desperate straits.

Every nation, it turns out, needs something another nation has; some nations have almost nothing they need.

THE INFORMATION-TECHNOLOGY IMPERATIVE. Enlightenment science and the technologies derived from it are inherently universalizing. They entail a quest for descriptive principles of general application, a search for universal solutions to particular problems, and an unswerving embrace of objectivity and impartiality.

Scientific progress embodies and depends on open communication, a common discourse rooted in rationality, collaboration, and an easy and regular flow and exchange of information. Such ideals can be hypocritical covers for power-mongering by elites, and they may be shown to be wanting in many other ways, but they are entailed by the very idea of science and they make science and globalization practical allies.

Business, banking, and commerce all depend on information flow and are facilitated by new communication technologies. The hardware of these technologies tends to be systemic and integrated—computer, television, cable, satellite, laser, fiber-optic, and microchip technologies combining to create a vast interactive communications and information network that can potentially give every person on earth access to every other person, and make every datum, every byte, available to every set of eyes. If the automobile was, as George Ball once said (when he gave his blessing to a Fiat factory in the Soviet Union during the Cold War), “an ideology on four wheels,” then electronic telecommunication and information systems are an ideology at 186,000 miles per second—which makes for a very small planet in a very big hurry. Individual cultures speak particular languages; commerce and science increasingly speak English; the whole world speaks logarithms and binary mathematics.

Moreover, the pursuit of science and technology asks for, even compels, open societies. Satellite footprints do not respect national borders; telephone wires penetrate the most closed societies. With photocopying and then fax machines having infiltrated Soviet universities and samizdat literary circles in the eighties, and computer modems having multiplied like rabbits in communism’s bureaucratic warrens thereafter, glasnost could not be far behind. In their social requisites, secrecy and science are enemies.

The new technology’s software is perhaps even more globalizing than its hardware. The information arm of international commerce’s sprawling body reaches out and touches distinct nations and parochial cultures, and gives them a common face chiseled in Hollywood, on Madison Avenue, and in Silicon Valley. Throughout the 1980s one of the most-watched television programs in South Africa was The Cosby Show. The demise of apartheid was already in production. Exhibitors at the 1991 Cannes film festival expressed growing anxiety over the “homogenization” and “Americanization” of the global film industry when, for the third year running, American films dominated the awards ceremonies. America has dominated the world’s popular culture for much longer, and much more decisively.

[…]

This kind of software supremacy may in the long term be far more important than hardware superiority, because culture has become more potent than armaments. What is the power of the Pentagon compared with Disneyland? Can the Sixth Fleet keep up with CNN? McDonald’s in Moscow and Coke in China will do more to create a global culture than military colonization ever could. It is less the goods than the brand names that do the work, for they convey life-style images that alter perception and challenge behavior. They make up the seductive software of McWorld’s common (at times much too common) soul.

Yet in all this high-tech commercial world there is nothing that looks particularly democratic. It lends itself to surveillance as well as liberty, to new forms of manipulation and covert control as well as new kinds of participation, to skewed, unjust market outcomes as well as greater productivity. The consumer society and the open society are not quite synonymous. Capitalism and democracy have a relationship, but it is something less than a marriage. An efficient free market after all requires that consumers be free to vote their dollars on competing goods, not that citizens be free to vote their values and beliefs on competing political candidates and programs. The free market flourished in junta-run Chile, in military-governed Taiwan and Korea, and, earlier, in a variety of autocratic European empires as well as their colonial possessions.

THE ECOLOGICAL IMPERATIVE. The impact of globalization on ecology is a cliche even to world leaders who ignore it. We know well enough that the German forests can be destroyed by Swiss and Italians driving gas-guzzlers fueled by leaded gas. We also know that the planet can be asphyxiated by greenhouse gases because Brazilian farmers want to be part of the twentieth century and are burning down tropical rain forests to clear a little land to plough, and because Indonesians make a living out of converting their lush jungle into toothpicks for fastidious Japanese diners, upsetting the delicate oxygen balance and in effect puncturing our global lungs. Yet this ecological consciousness has meant not only greater awareness but also greater inequality, as modernized nations try to slam the door behind them, saying to developing nations, “The world cannot afford your modernization; ours has wrung it dry!”

Each of the four imperatives just cited is transnational, transideological, and transcultural. Each applies impartially to Catholics, Jews, Muslims, Hindus, and Buddhists; to democrats and totalitarians; to capitalists and socialists. The Enlightenment dream of a universal rational society has to a remarkable degree been realized—but in a form that is commercialized, homogenized, depoliticized, bureaucratized, and, of course, radically incomplete, for the movement toward McWorld is in competition with forces of global breakdown, national dissolution, and centrifugal corruption. These forces, working in the opposite direction, are the essence of what I call Jihad.

Jihad, or the Lebanonization of the World

OPEC, the World Bank, the United Nations, the International Red Cross, the multinational corporation…there are scores of institutions that reflect globalization. But they often appear as ineffective reactors to the world’s real actors: national states and, to an ever greater degree, subnational factions in permanent rebellion against uniformity and integration—even the kind represented by universal law and justice. The headlines feature these players regularly: they are cultures, not countries; parts, not wholes; sects, not religions; rebellious factions and dissenting minorities at war not just with globalism but with the traditional nation-state. Kurds, Basques, Puerto Ricans, Ossetians, East Timoreans, Quebecois, the Catholics of Northern Ireland, Abkhasians, Kurile Islander Japanese, the Zulus of Inkatha, Catalonians, Tamils, and, of course, Palestinians—people without countries, inhabiting nations not their own, seeking smaller worlds within borders that will seal them off from modernity.

A powerful irony is at work here. Nationalism was once a force of integration and unification, a movement aimed at bringing together disparate clans, tribes, and cultural fragments under new, assimilationist flags. But as Ortega y Gasset noted more than sixty years ago, having won its victories, nationalism changed its strategy. In the 1920s, and again today, it is more often a reactionary and divisive force, pulverizing the very nations it once helped cement together.

[…]

The aim of many of these small-scale wars is to redraw boundaries, to implode states and resecure parochial identities: to escape McWorld’s dully insistent imperatives. The mood is that of Jihad: war not as an instrument of policy but as an emblem of identity, an expression of community, an end in itself. Even where there is no shooting war, there is fractiousness, secession, and the quest for ever smaller communities.

[…]

Among the tribes, religion is also a battlefield. (“Jihad” is a rich word whose generic meaning is “struggle”—usually the struggle of the soul to avert evil. Strictly applied to religious war, it is used only in reference to battles where the faith is under assault, or battles against a government that denies the practice of Islam. My use here is rhetorical, but does follow both journalistic practice and history.) Remember the Thirty Years War? Whatever forms of Enlightenment universalism might once have come to grace such historically related forms of monotheism as Judaism, Christianity, and Islam, in many of their modern incarnations they are parochial rather than cosmopolitan, angry rather than loving, proselytizing rather than ecumenical, zealous rather than rationalist, sectarian rather than deistic, ethnocentric rather than universalizing. As a result, like the new forms of hypernationalism, the new expressions of religious fundamentalism are fractious and pulverizing, never integrating. This is religion as the Crusaders knew it: a battle to the death for souls that if not saved will be forever lost.

The atmospherics of Jihad have resulted in a breakdown of civility in the name of identity, of comity in the name of community. International relations have sometimes taken on the aspect of gang war—cultural turf battles featuring tribal factions that were supposed to be sublimated as integral parts of large national, economic, postcolonial, and constitutional entities.

[…]

Neither McWorld nor Jihad is remotely democratic in impulse. Neither needs democracy; neither promotes democracy.

McWorld does manage to look pretty seductive in a world obsessed with Jihad. It delivers peace, prosperity, and relative unity—if at the cost of independence, community, and identity (which is generally based on difference). The primary political values required by the global market are order and tranquillity, and freedom—as in the phrases “free trade,” “free press,” and “free love.” Human rights are needed to a degree, but not citizenship or participation—and no more social justice and equality than are necessary to promote efficient economic production and consumption. Multinational corporations sometimes seem to prefer doing business with local oligarchs, inasmuch as they can take confidence from dealing with the boss on all crucial matters. Despots who slaughter their own populations are no problem, so long as they leave markets in place and refrain from making war on their neighbors (Saddam Hussein’s fatal mistake). In trading partners, predictability is of more value than justice.

[…]

Jihad delivers a different set of virtues: a vibrant local identity, a sense of community, solidarity among kinsmen, neighbors, and countrymen, narrowly conceived. But it also guarantees parochialism and is grounded in exclusion. Solidarity is secured through war against outsiders. And solidarity often means obedience to a hierarchy in governance, fanaticism in beliefs, and the obliteration of individual selves in the name of the group. Deference to leaders and intolerance toward outsiders (and toward “enemies within”) are hallmarks of tribalism—hardly the attitudes required for the cultivation of new democratic women and men capable of governing themselves. Where new democratic experiments have been conducted in retribalizing societies, in both Europe and the Third World, the result has often been anarchy, repression, persecution, and the coming of new, noncommunist forms of very old kinds of despotism.

[…]

To the extent that either McWorld or Jihad has a natural politics, it has turned out to be more of an antipolitics. For McWorld, it is the antipolitics of globalism: bureaucratic, technocratic, and meritocratic, focused (as Marx predicted it would be) on the administration of things—with people, however, among the chief things to be administered. In its politico-economic imperatives McWorld has been guided by laissez-faire market principles that privilege efficiency, productivity, and beneficence at the expense of civic liberty and self-government.

For Jihad, the antipolitics of tribalization has been explicitly antidemocratic: one-party dictatorship, government by military junta, theocratic fundamentalism—often associated with a version of the Führerprinzip that empowers an individual to rule on behalf of a people.

[…]

How can democracy be secured and spread in a world whose primary tendencies are at best indifferent to it (McWorld) and at worst deeply antithetical to it (Jihad)? My guess is that globalization will eventually vanquish retribalization. The ethos of material “civilization” has not yet encountered an obstacle it has been unable to thrust aside.

[…]

…democracy is how we remonstrate with reality, the rebuke our aspirations offer to history. And if retribalization is inhospitable to democracy, there is nonetheless a form of democratic government that can accommodate parochialism and communitarianism, one that can even save them from their defects and make them more tolerant and participatory: decentralized participatory democracy. And if McWorld is indifferent to democracy, there is nonetheless a form of democratic government that suits global markets passably well—representative government in its federal or, better still, confederal variation.

[…]

It certainly seems possible that the most attractive democratic ideal in the face of the brutal realities of Jihad and the dull realities of McWorld will be a confederal union of semi-autonomous communities smaller than nation-states, tied together into regional economic associations and markets larger than nation-states—participatory and self-determining in local matters at the bottom, representative and accountable at the top. The nation-state would play a diminished role, and sovereignty would lose some of its political potency. The Green movement adage “Think globally, act locally” would actually come to describe the conduct of politics.

This vision reflects only an ideal, however—one that is not terribly likely to be realized. Freedom, Jean-Jacques Rousseau once wrote, is a food easy to eat but hard to digest. Still, democracy has always played itself out against the odds. And democracy remains both a form of coherence as binding as McWorld and a secular faith potentially as inspiriting as Jihad.

David Trotter reviews ‘Lifted’ by Andreas Bernard, translated by David Dollenmayer · LRB 3 July 2014


According to elevator legend, it all began with a stunt. In the summer of 1854, at the Exhibition of the Industry of All Nations in New York, an engineer called Elisha Graves Otis gave regular demonstrations of his new safety device. Otis had himself hoisted into the air on a platform secured on either side by guide-rails and – at a suitably dramatic height – cut the cable. Instead of plummeting to the ground fifty feet below, the platform stopped dead after a couple of inches. ‘All safe, gentlemen, all safe,’ Otis would bellow at the expectant crowd. The device was simple enough: a flat-leaf cart spring above the platform splayed out to its full extent as soon as the cable was cut, engaging notches in the guide-rails. Has any mode of transport ever been safer? After 1854, malfunctioning (or non-existent) doors were the only direct risk still attached to travelling by lift. Safety first was not so much a motto as a premise. No wonder that the closest high-end TV drama has come to Sartrean nausea is the moment in Mad Men when a pair of elevator doors mysteriously parts in front of troubled genius Don Draper, who is left peering in astonishment down into a mechanical abyss. The cables coiling and uncoiling in the shaft stand in for the root of Roquentin’s chestnut tree.

Andreas Bernard is properly sceptical of myths of origin. It didn’t all begin in 1854, in fact. From Archimedes and Vitruvius onwards, descriptions survive of devices for the vertical transport of goods, primarily, but also of people. The English diplomat Charles Greville, writing in 1830, recalled with admiration a lift in the Genoese palace of the Sardinian royal couple: ‘For the comfort of their bodies he has a machine made like a car, which is drawn up by a chain from the bottom to the top of the house; it holds about six people, who can be at pleasure elevated to any storey, and at each landing place there is a contrivance to let them in and out.’ In June 1853, Harper’s New Monthly Magazine reported the imminent introduction of steam-powered elevators into private homes in New York, by means of which an ‘indolent, or fatigued, or aristocratic person’ could reach the upper floors. Confusingly, there was another engineering Otis around, Otis Tufts, who in 1859 patented an apparatus known as the Vertical Railway or Vertical Screw Elevator. The Vertical Railway, driven by a twenty-inch-wide iron screw running through its centre, was the first such device to boast an enclosed cab. It proved extremely reliable, but slow and costly.

How, then, did Otis’s stunt achieve the status of a myth of origin? It was theatrical, for a start. More important, it exploited what Bernard calls the 19th-century ‘trauma of the cable’. From the late Middle Ages, when mineshafts in Europe first reached depths greater than a few yards, some means had to be developed to bring the ore up to the surface. For centuries, cable winches powered in various ways allowed the vertical transport of raw materials and freight. By 1850, when elevators first began to appear in buildings, the depth of the mineshafts in the upper Harz and Ruhr regions had reached more than two thousand feet. So high was the risk of an accident caused by a cable breaking that until 1859 German mining regulations forbade the transport of miners in the rail-guided baskets that brought the ore up to the surface (they had to use ladders). Bernard’s emphasis on the history of mining usefully embeds the history of the elevator in the history not just of transport in general, but of the transport accident: itself about to give rise, courtesy of rail-guided transport of the horizontal kind, to trauma as a diagnostic category.

[…]

His main interest lies in the ways in which the advent of the elevator transformed the design, construction and experience of high-rise buildings, and thus of modern urban life in general (the focus remains on Germany and the United States throughout). From the 1870s onwards, all new multi-storey buildings in major American cities were constructed around an elevator shaft. The ‘perfection of elevator work’, as one commentator put it in 1891, had become the skyscraper’s ‘fundamental condition’. That, and steel frame construction. Bernard seems reluctant to get into a dispute as to which came first, or mattered more, but he maintains that the elevator was a ‘prerequisite’ for vertical growth. In the 1890s, the highest building in the world was the twenty-storey Masonic Temple in Chicago; the Woolworth Building in New York, completed in 1913, stood at 55 storeys. In Europe, the pace of change was a good deal slower, since the emphasis remained as much on adaptation as on innovative design.

[…]

He argues that the lasting symbolic consequence of the perfection of elevator work was the ‘recodification of verticality’ it brought about. During the final decade of the 19th century (an ‘epochal watershed’), the best rooms in the largest buildings ‘migrated’ from low to high in a decisive reversal of ‘hierarchic order’, while the worst went in the opposite direction. In Europe’s grand hotels, for example, the worst rooms had traditionally been at the top, since only poor people and hotel staff could be expected to climb all those flights of stairs. Lifts, however, ‘freed the upper storeys from the stigma of inaccessibility and lent them an unheard-of glamour’. A roughly comparable migration occurred at the other end of the social scale. Statistics for rental prices in Berlin in the period from the founding of the Reich in 1871 to the outbreak of the First World War demonstrate that the most expensive apartments were invariably on the first floor (the bel étage), the less expensive on the ground, second and third floors, and the cheapest at attic or basement level. The last two levels consistently attracted the stigma of ‘abnormality’. It was here, at the top and bottom of the building, that the urban underclass festered. By the end of the 19th century, sanitary reform had pretty much done for the basement as a dwelling-place. It took a while longer, as Bernard shows, for the elevator to domesticate the upper floors of the standard tenement block by rendering them easily accessible.

The bel étage wasn’t just on the way up. It entered, or rather had built for it, a separate symbolic dimension. Rich people realised that the stuff they’d always enjoyed doing at ground level was even more enjoyable when done on the top floor; and that being able to do it there at all was a useful display of the power wealth brings. In 1930s New York, the twin towers of the new Waldorf-Astoria hotel, which rose from the 29th to the 43rd storey, constituted its unique appeal. ‘Below the demarcation line of the 29th storey, the Waldorf-Astoria, although expensive, was accessible to everyone; above the line began an exclusive region of suites of as many as twelve rooms with private butler service.’ The upper floors of tall buildings, once given over to staff dormitories, had become what Bernard calls an ‘enclave of the elite’. The Waldorf-Astoria’s express elevators, travelling direct to the 29th floor, were as much barrier as conduit. Such discrimination between elevators, or between elevator speeds, played a significant part in the design of those ultimate enclaves of the managerial elite, the penthouse apartment and the executive suite. In 1965, the penthouse still had enough ‘unheard-of glamour’ to lend its name to a new men’s magazine.

[…]

Seen through the lens of canonical urban theory, a ride in a lift looks like the perfect opportunity for those jarring random encounters with people you don’t know that are said to characterise life in the big city. As Bernard puts it, ‘the elevator cab – in the days of Poe and Baudelaire just beginning to be installed in the grand hotels, by the time of Simmel and Benjamin a permanent part of urban architecture – is the contingent locale par excellence.’ For Bernard, the elevator is a Benjaminian street brought indoors and rotated on its axis: during the few seconds of ascent or descent, the perpetual ‘anaesthetising of attention’ allegedly required of the city-dweller becomes an acute anxiety. Bernard invokes Erving Goffman’s ethnomethodological analysis of the positions passengers customarily take up on entering a lift: the first beside the controls, the second in the corner diagonally opposite, the third somewhere along the rear wall, the fourth in the empty centre and so on; all of them at once turning to face the front, as though on parade. He terms the resulting intricate array of mutual aversions a ‘sociogram’. He’s right, of course. There is something about the way people behave in lifts which requires explanation. But does urban theory hold the key to that behaviour? Crossing the road is not at all the same as riding between floors.

The invention of the elevator belongs as securely to the history of mechanised transport as it does to the history of urban planning. After all, the trains which first obliged passengers to sit or stand in close proximity to one another for hours on end without exchanging a word ran between rather than across the great conurbations. Considered as a people-mover, the elevator ranks with those other epochal fin-de-siècle inventions, the motor car and the aeroplane. Like them, it combines high speed with a high degree of insulation from the outside world. It’s a vertical bullet train, a space rocket forever stuck in its silo – at least until the moment in Tim Burton’s Charlie and the Chocolate Factory when Willy Wonka presses the button marked ‘Up and Out’. An elevator exceeds a car or a plane in the claustrophobic extremity of its insulation from the outside world. It’s the collective endurance of protracted viewlessness, rather than urban ennui, that activates Bernard’s sociogram.

The clue to the elevator’s significance lies in the buttons that adorn its interior and exterior. Its automation, at the beginning of the 20th century, created a system of electronic signalling which brought the entire operation under the control of the individual user. In no other mode of transport could a vehicle be hailed, directed and dismissed entirely without assistance, and by a touch so slight it barely amounts to an expenditure of energy. The machine appears to work by information alone. Elevators, Bernard says, reprogrammed the high-rise building. It might be truer to say that they reprogrammed the people who made use of them, in buildings of any kind. Approaching the elevator bank, we alert the system to where we are and the direction we want to travel in. Pressing the button in the lift, we signal our precise destination and our confidence that the apparatus will come to a halt and the doors open when we get there. The closer we come to sending ourselves as a message, in competition or alliance with the messages sent by others, the more likely we are to arrive speedily, and intact.

[…]

You can only send yourself as a message successfully if you remain intact – that is, fully encrypted – during transmission. That’s what elevator protocol is for. Or so we might gather from the very large number of scenes set in lifts in movies from the 1930s onwards. The vast majority of these scenes involve breaches of protocol in which the breach is of far greater interest than the protocol. Desire erupts, or violence, shattering the sociogram’s frigid array. Or the lift, stopped in its tracks, ceases to be a lift. It becomes something else altogether: a prison cell to squeeze your way out of, or (Bernard suggests) a confessional. The eruptions are sometimes entertaining, sometimes not. But since they pay little or no attention to the protocols which have consistently defined the ‘atmosphere in the cab’, they often date badly. The student of elevator scenes in James Bond movies, for example, will discover only that while Daniel Craig in Quantum of Solace (2008) instantly unleashes a crisply definitive, neoliberal backwards head-butt, Sean Connery in Diamonds Are Forever (1971) has to absorb a good deal of heavy punishment before he’s able to apply the unarmed combat manoeuvre du jour: an Edward Heath of a flailing, two-handed downwards chop at the kidneys.

Rarer, and far more illuminating, are scenes in which the lift remains a lift, and the protocols, consequently, of greater interest than their potential or actual breach. These scenes are a gift to the cultural historian, and it’s unfortunate that Bernard’s allegiances to urban theory and to literature (especially to the literature of an earlier period) should have persuaded him to ignore them. The shrewdest representations are those which understand that the elevator is a place where messages meet, rather than people. In white-collar epics from King Vidor’s seminal The Crowd through Robert Wise’s highly inventive Executive Suite and the exuberant Jerry Lewis vehicle The Errand Boy to The Hudsucker Proxy, the Coen brothers’ screwball version of Frank Capra, what separates the upper floors from the lower is access to information. The express elevator, bypassing those floors on which actual business is done, constitutes a prototypical information superhighway ripe for abuse by finance capitalism. The Hudsucker Proxy, in particular, would have been grist to Bernard’s mill. It features a sweaty basement mailroom as well as cool expanses of executive suite. Its miniature New York set included a model of the Woolworth Building. But the film is about information rather than urban contingency. It’s only when gormless errand boy Tim Robbins, ordered to deliver a top-secret ‘Blue Letter’ (the year is 1959) to the top floor via express elevator, himself becomes in effect the message, that evil capitalist Paul Newman can see his way to the ingenious stock scam which drives the plot on towards last-minute angelic intervention.

The arrangement by phalanx required by lift protocol has the great virtue of precluding conversation. Cinema’s best elevator scenes delight in maintaining that such rules should not be broken, whether by head-butt or injudicious self-revelation. When two thugs intent on kidnap at the very least follow advertising executive Roger Thornhill into a packed lift in Hitchcock’s North by Northwest, his mother, who knows what he’s afraid of, but considers him a fantasist, asks them if they’re really trying to kill her son. Cary Grant does an excellent job of seeming more put out by the laughter which greets her sally than by the threat of kidnap. His disgust draws attention to the necessity, in a form of transport directed as much by the flow of data as by the flow of energy, of codes of conduct. It is a kind of meta-commentary. Something comparable happens in another of the many elevator scenes in Mad Men. Don Draper occupies one corner, a couple of insurance salesmen another. The one with his hat on is not to be deflected from his rancid sexual boasting by the entrance at the next floor of a woman whose only option is to stand directly in front of him. Draper tells the man to take his hat off; and when he doesn’t, removes it from his head and shoves it gently into his chest. That’s it. No head-butts, no expressions of feeling. If one code of conduct is to apply, in the earnest business of being parcelled up for delivery, they must all apply, all the time. Perhaps Draper has been to see Billy Wilder’s The Apartment, in which Jack Lemmon shows Shirley MacLaine he’s a true gent by remembering to take his hat off in the lift. These scenes comment not so much on specific codes as on codedness in general, in a world increasingly subsumed into information. For such a staid apparatus, the elevator has generated some pretty compelling stories.

In Praise of Idleness By Bertrand Russell

In Praise of Idleness By Bertrand Russell.

I think that there is far too much work done in the world, that immense harm is caused by the belief that work is virtuous, and that what needs to be preached in modern industrial countries is quite different from what always has been preached. Everyone knows the story of the traveler in Naples who saw twelve beggars lying in the sun (it was before the days of Mussolini), and offered a lira to the laziest of them. Eleven of them jumped up to claim it, so he gave it to the twelfth. This traveler was on the right lines.

[…]

Whenever a person who already has enough to live on proposes to engage in some everyday kind of job, such as school-teaching or typing, he or she is told that such conduct takes the bread out of other people’s mouths, and is therefore wicked. If this argument were valid, it would only be necessary for us all to be idle in order that we should all have our mouths full of bread. What people who say such things forget is that what a man earns he usually spends, and in spending he gives employment. As long as a man spends his income, he puts just as much bread into people’s mouths in spending as he takes out of other people’s mouths in earning. The real villain, from this point of view, is the man who saves. If he merely puts his savings in a stocking, like the proverbial French peasant, it is obvious that they do not give employment.

[…]

In view of the fact that the bulk of the public expenditure of most civilized Governments consists in payment for past wars or preparation for future wars, the man who lends his money to a Government is in the same position as the bad men in Shakespeare who hire murderers. The net result of the man’s economical habits is to increase the armed forces of the State to which he lends his savings. Obviously it would be better if he spent the money, even if he spent it in drink or gambling.

But, I shall be told, the case is quite different when savings are invested in industrial enterprises. When such enterprises succeed, and produce something useful, this may be conceded. In these days, however, no one will deny that most enterprises fail. That means that a large amount of human labor, which might have been devoted to producing something that could be enjoyed, was expended on producing machines which, when produced, lay idle and did no good to anyone. The man who invests his savings in a concern that goes bankrupt is therefore injuring others as well as himself. If he spent his money, say, in giving parties for his friends, they (we may hope) would get pleasure, and so would all those upon whom he spent money, such as the butcher, the baker, and the bootlegger. But if he spends it (let us say) upon laying down rails for surface cars in some place where surface cars turn out not to be wanted, he has diverted a mass of labor into channels where it gives pleasure to no one. Nevertheless, when he becomes poor through failure of his investment he will be regarded as a victim of undeserved misfortune, whereas the gay spendthrift, who has spent his money philanthropically, will be despised as a fool and a frivolous person.

[…]

I want to say, in all seriousness, that a great deal of harm is being done in the modern world by belief in the virtuousness of work, and that the road to happiness and prosperity lies in an organized diminution of work.

First of all: what is work? Work is of two kinds: first, altering the position of matter at or near the earth’s surface relatively to other such matter; second, telling other people to do so. The first kind is unpleasant and ill paid; the second is pleasant and highly paid. The second kind is capable of indefinite extension: there are not only those who give orders, but those who give advice as to what orders should be given. Usually two opposite kinds of advice are given simultaneously by two organized bodies of men; this is called politics. The skill required for this kind of work is not knowledge of the subjects as to which advice is given, but knowledge of the art of persuasive speaking and writing, i.e. of advertising.

[…]

Modern technique has made it possible for leisure, within limits, to be not the prerogative of small privileged classes, but a right evenly distributed throughout the community. The morality of work is the morality of slaves, and the modern world has no need of slavery.

[…]

To this day, 99 per cent of British wage-earners would be genuinely shocked if it were proposed that the King should not have a larger income than a working man. The conception of duty, speaking historically, has been a means used by the holders of power to induce others to live for the interests of their masters rather than for their own. Of course the holders of power conceal this fact from themselves by managing to believe that their interests are identical with the larger interests of humanity. Sometimes this is true; Athenian slave-owners, for instance, employed part of their leisure in making a permanent contribution to civilization which would have been impossible under a just economic system. Leisure is essential to civilization, and in former times leisure for the few was only rendered possible by the labors of the many. But their labors were valuable, not because work is good, but because leisure is good. And with modern technique it would be possible to distribute leisure justly without injury to civilization.

[…]

The war showed conclusively that, by the scientific organization of production, it is possible to keep modern populations in fair comfort on a small part of the working capacity of the modern world. If, at the end of the war, the scientific organization, which had been created in order to liberate men for fighting and munition work, had been preserved, and the hours of work had been cut down to four, all would have been well. Instead of that the old chaos was restored, those whose work was demanded were made to work long hours, and the rest were left to starve as unemployed. Why? Because work is a duty, and a man should not receive wages in proportion to what he has produced, but in proportion to his virtue as exemplified by his industry.

This is the morality of the Slave State, applied in circumstances totally unlike those in which it arose. No wonder the result has been disastrous. Let us take an illustration. Suppose that, at a given moment, a certain number of people are engaged in the manufacture of pins. They make as many pins as the world needs, working (say) eight hours a day. Someone makes an invention by which the same number of men can make twice as many pins: pins are already so cheap that hardly any more will be bought at a lower price. In a sensible world, everybody concerned in the manufacturing of pins would take to working four hours instead of eight, and everything else would go on as before. But in the actual world this would be thought demoralizing. The men still work eight hours, there are too many pins, some employers go bankrupt, and half the men previously concerned in making pins are thrown out of work. There is, in the end, just as much leisure as on the other plan, but half the men are totally idle while half are still overworked. In this way, it is insured that the unavoidable leisure shall cause misery all round instead of being a universal source of happiness. Can anything more insane be imagined?

The idea that the poor should have leisure has always been shocking to the rich. In England, in the early nineteenth century, fifteen hours was the ordinary day’s work for a man; children sometimes did as much, and very commonly did twelve hours a day. When meddlesome busybodies suggested that perhaps these hours were rather long, they were told that work kept adults from drink and children from mischief. When I was a child, shortly after urban working men had acquired the vote, certain public holidays were established by law, to the great indignation of the upper classes. I remember hearing an old Duchess say: ‘What do the poor want with holidays? They ought to work.’ People nowadays are less frank, but the sentiment persists, and is the source of much of our economic confusion.

[…]

If the ordinary wage-earner worked four hours a day, there would be enough for everybody and no unemployment — assuming a certain very moderate amount of sensible organization. This idea shocks the well-to-do, because they are convinced that the poor would not know how to use so much leisure. In America men often work long hours even when they are well off; such men, naturally, are indignant at the idea of leisure for wage-earners, except as the grim punishment of unemployment; in fact, they dislike leisure even for their sons. Oddly enough, while they wish their sons to work so hard as to have no time to be civilized, they do not mind their wives and daughters having no work at all. The snobbish admiration of uselessness, which, in an aristocratic society, extends to both sexes, is, under a plutocracy, confined to women; this, however, does not make it any more in agreement with common sense.

[…]

Industry, sobriety, willingness to work long hours for distant advantages, even submissiveness to authority, all these reappear; moreover authority still represents the will of the Ruler of the Universe, Who, however, is now called by a new name, Dialectical Materialism.

[…]

For ages, men had conceded the superior saintliness of women, and had consoled women for their inferiority by maintaining that saintliness is more desirable than power. At last the feminists decided that they would have both, since the pioneers among them believed all that the men had told them about the desirability of virtue, but not what they had told them about the worthlessness of political power. A similar thing has happened in Russia as regards manual work. For ages, the rich and their sycophants have written in praise of ‘honest toil’, have praised the simple life, have professed a religion which teaches that the poor are much more likely to go to heaven than the rich, and in general have tried to make manual workers believe that there is some special nobility about altering the position of matter in space, just as men tried to make women believe that they derived some special nobility from their sexual enslavement.

[…]

A large country, full of natural resources, awaits development, and has to be developed with very little use of credit. In these circumstances, hard work is necessary, and is likely to bring a great reward. But what will happen when the point has been reached where everybody could be comfortable without working long hours?

In the West, we have various ways of dealing with this problem. We have no attempt at economic justice, so that a large proportion of the total produce goes to a small minority of the population, many of whom do no work at all. Owing to the absence of any central control over production, we produce hosts of things that are not wanted. We keep a large percentage of the working population idle, because we can dispense with their labor by making the others overwork. When all these methods prove inadequate, we have a war: we cause a number of people to manufacture high explosives, and a number of others to explode them, as if we were children who had just discovered fireworks. By a combination of all these devices we manage, though with difficulty, to keep alive the notion that a great deal of severe manual work must be the lot of the average man.

[…]

The fact is that moving matter about, while a certain amount of it is necessary to our existence, is emphatically not one of the ends of human life. If it were, we should have to consider every navvy superior to Shakespeare. We have been misled in this matter by two causes. One is the necessity of keeping the poor contented, which has led the rich, for thousands of years, to preach the dignity of labor, while taking care themselves to remain undignified in this respect. The other is the new pleasure in mechanism, which makes us delight in the astonishingly clever changes that we can produce on the earth’s surface. Neither of these motives makes any great appeal to the actual worker. If you ask him what he thinks the best part of his life, he is not likely to say: ‘I enjoy manual work because it makes me feel that I am fulfilling man’s noblest task, and because I like to think how much man can transform his planet. It is true that my body demands periods of rest, which I have to fill in as best I may, but I am never so happy as when the morning comes and I can return to the toil from which my contentment springs.’ I have never heard working men say this sort of thing. They consider work, as it should be considered, a necessary means to a livelihood, and it is from their leisure that they derive whatever happiness they may enjoy.

It will be said that, while a little leisure is pleasant, men would not know how to fill their days if they had only four hours of work out of the twenty-four. In so far as this is true in the modern world, it is a condemnation of our civilization; it would not have been true at any earlier period. There was formerly a capacity for light-heartedness and play which has been to some extent inhibited by the cult of efficiency. The modern man thinks that everything ought to be done for the sake of something else, and never for its own sake. Serious-minded persons, for example, are continually condemning the habit of going to the cinema, and telling us that it leads the young into crime.

[…]

The butcher who provides you with meat and the baker who provides you with bread are praiseworthy, because they are making money; but when you enjoy the food they have provided, you are merely frivolous, unless you eat only to get strength for your work. Broadly speaking, it is held that getting money is good and spending money is bad. Seeing that they are two sides of one transaction, this is absurd; one might as well maintain that keys are good, but keyholes are bad. Whatever merit there may be in the production of goods must be entirely derivative from the advantage to be obtained by consuming them. The individual, in our society, works for profit; but the social purpose of his work lies in the consumption of what he produces. It is this divorce between the individual and the social purpose of production that makes it so difficult for men to think clearly in a world in which profit-making is the incentive to industry. We think too much of production, and too little of consumption. One result is that we attach too little importance to enjoyment and simple happiness, and that we do not judge production by the pleasure that it gives to the consumer.

When I suggest that working hours should be reduced to four, I am not meaning to imply that all the remaining time should necessarily be spent in pure frivolity. I mean that four hours’ work a day should entitle a man to the necessities and elementary comforts of life, and that the rest of his time should be his to use as he might see fit. It is an essential part of any such social system that education should be carried further than it usually is at present, and should aim, in part, at providing tastes which would enable a man to use leisure intelligently. I am not thinking mainly of the sort of things that would be considered ‘highbrow’.

[…]

The pleasures of urban populations have become mainly passive: seeing cinemas, watching football matches, listening to the radio, and so on. This results from the fact that their active energies are fully taken up with work; if they had more leisure, they would again enjoy pleasures in which they took an active part.

In the past, there was a small leisure class and a larger working class. The leisure class enjoyed advantages for which there was no basis in social justice; this necessarily made it oppressive, limited its sympathies, and caused it to invent theories by which to justify its privileges. These facts greatly diminished its excellence, but in spite of this drawback it contributed nearly the whole of what we call civilization. It cultivated the arts and discovered the sciences; it wrote the books, invented the philosophies, and refined social relations. Even the liberation of the oppressed has usually been inaugurated from above. Without the leisure class, mankind would never have emerged from barbarism.

The method of a leisure class without duties was, however, extraordinarily wasteful. None of the members of the class had to be taught to be industrious, and the class as a whole was not exceptionally intelligent. The class might produce one Darwin, but against him had to be set tens of thousands of country gentlemen who never thought of anything more intelligent than fox-hunting and punishing poachers. At present, the universities are supposed to provide, in a more systematic way, what the leisure class provided accidentally and as a by-product. This is a great improvement, but it has certain drawbacks. University life is so different from life in the world at large that men who live in an academic milieu tend to be unaware of the preoccupations and problems of ordinary men and women; moreover their ways of expressing themselves are usually such as to rob their opinions of the influence that they ought to have upon the general public. Another disadvantage is that in universities studies are organized, and the man who thinks of some original line of research is likely to be discouraged. Academic institutions, therefore, useful as they are, are not adequate guardians of the interests of civilization in a world where everyone outside their walls is too busy for unutilitarian pursuits.

In a world where no one is compelled to work more than four hours a day, every person possessed of scientific curiosity will be able to indulge it, and every painter will be able to paint without starving, however excellent his pictures may be. Young writers will not be obliged to draw attention to themselves by sensational pot-boilers, with a view to acquiring the economic independence needed for monumental works, for which, when the time at last comes, they will have lost the taste and capacity. Men who, in their professional work, have become interested in some phase of economics or government, will be able to develop their ideas without the academic detachment that makes the work of university economists often seem lacking in reality. Medical men will have the time to learn about the progress of medicine, teachers will not be exasperatedly struggling to teach by routine methods things which they learnt in their youth, which may, in the interval, have been proved to be untrue.

[…]

Good nature is, of all moral qualities, the one that the world needs most, and good nature is the result of ease and security, not of a life of arduous struggle. Modern methods of production have given us the possibility of ease and security for all; we have chosen, instead, to have overwork for some and starvation for others. Hitherto we have continued to be as energetic as we were before there were machines; in this we have been foolish, but there is no reason to go on being foolish forever.

All You Have Eaten: On Keeping a Perfect Record | Longreads

All You Have Eaten: On Keeping a Perfect Record | Longreads.

Over the course of his or her lifetime, the average person will eat 60,000 pounds of food, the weight of six elephants.

The average American will drink over 3,000 gallons of soda. He will eat about 28 pigs, 2,000 chickens, 5,070 apples, and 2,340 pounds of lettuce. How much of that will he remember, and for how long, and how well?

[…]

The human memory is famously faulty; the brain remains mostly a mystery. We know that comfort foods make the pleasure centers in our brains light up the way drugs do. We know, because of a study conducted by Northwestern University and published in the Journal of Neuroscience, that by recalling a moment, you’re altering it slightly, like a mental game of Telephone—the more you conjure a memory, the less accurate it will be down the line. Scientists have implanted false memories in mice and grown memories in pieces of brain in test tubes. But we haven’t made many noteworthy strides in the thing that seems most relevant: how not to forget.

Unless committed to memory or written down, what we eat vanishes as soon as it’s consumed. That’s the point, after all. But because the famous diarist Samuel Pepys wrote, in his first entry, “Dined at home in the garret, where my wife dressed the remains of a turkey, and in the doing of it she burned her hand,” we know that Samuel Pepys, in the 1600s, ate turkey. We know that, hundreds of years ago, Samuel Pepys’s wife burned her hand. We know, because she wrote it in her diary, that Anne Frank at one point ate fried potatoes for breakfast. She once ate porridge and “a hash made from kale that came out of the barrel.”

For breakfast on January 2, 2008, I ate oatmeal with pumpkin seeds and brown sugar and drank a cup of green tea.

I know because it’s the first entry in a food log I still keep today. I began it as an experiment in food as a mnemonic device. The idea was this: I’d write something objective every day that would cue my memories into the future—they’d serve as compasses by which to remember moments.

Andy Warhol kept what he called a “smell collection,” switching perfumes every three months so he could reminisce more lucidly on those months whenever he smelled that period’s particular scent. Food, I figured, took this even further. It involves multiple senses, and that’s why memories that surround food can come on so strong.

What I’d like to have is a perfect record of every day. I’ve long been obsessed with this impossibility, that every day be perfectly productive and perfectly remembered. What I remember from January 2, 2008 is that after eating the oatmeal I went to the post office, where an old woman was arguing with a postal worker about postage—she thought what she’d affixed to her envelope was enough and he didn’t.

I’m terrified of forgetting. My grandmother has battled Alzheimer’s for years now, and to watch someone battle Alzheimer’s—we say “battle,” as though there’s some way of winning—is terrifying. If I’m always thinking about dementia, my unscientific logic goes, it can’t happen to me (the way an earthquake comes when you don’t expect it, and so the best course of action is always to expect it). “Really, one might almost live one’s life over, if only one could make a sufficient effort of recollection” is a sentence I once underlined in John Banville’s The Sea (a book that I can’t remember much else about). But effort alone is not enough and isn’t particularly reasonable, anyway. A man named Robert Shields kept the world’s longest diary: he chronicled every five minutes of his life until a stroke in 1997 rendered him unable to. He wrote about microwaving foods, washing dishes, bathroom visits, writing itself. When he died in 2007, he left 37.5 million words behind—ninety-one boxes of paper. Reading his obituary, I wondered if Robert Shields ever managed to watch a movie straight through.

Last spring, as part of a NASA-funded study, a crew of three men and three women with “astronaut-like” characteristics spent four months in a geodesic dome in an abandoned quarry on the northern slope of Hawaii’s Mauna Loa volcano.

For those four months, they lived and ate as though they were on Mars, only venturing outside to the surrounding Mars-like, volcanic terrain, in simulated space suits.[1] Hawaii Space Exploration Analog and Simulation (HI-SEAS) is a four-year project: a series of missions meant to simulate and study the challenges of long-term space travel, in anticipation of mankind’s eventual trip to Mars. This first mission’s focus was food.

Getting to Mars will take roughly six to nine months each way, depending on trajectory; the mission itself will likely span years. So the question becomes: How do you feed astronauts for so long? On “Mars,” the HI-SEAS crew alternated between two days of pre-prepared meals and two days of dome-cooked meals of shelf-stable ingredients. Researchers were interested in the answers to a number of behavioral issues: among them, the well-documented phenomenon of menu fatigue (when International Space Station astronauts grow weary of their packeted meals, they tend to lose weight). They wanted to see what patterns would evolve over time if a crew’s members were allowed dietary autonomy, and given the opportunity to cook for themselves (“an alternative approach to feeding crews of long term planetary outposts,” read the open call).

Everything was hyper-documented. Everything eaten was logged in painstaking detail: weighed, filmed, and evaluated. The crew filled in surveys before and after meals: queries into how hungry they were, their first impressions, their moods, how the food smelled, what its texture was, how it tasted. They documented their time spent cooking; their water usage; the quantity of leftovers, if any. The goal was to measure the effect of what they ate on their health and morale, along with other basic questions concerning resource use. How much water will it take to cook on Mars? How much water will it take to wash dishes? How much time is required; how much energy? How will everybody feel about it all?

[…]

The main food study had a big odor identification component to it: the crew took scratch-n-sniff tests, which Kate said she felt confident about at the mission’s start, and less certain about near the end. “The second-to-last test,” she said, “I would smell grass and feel really wistful.” Their noses were mapped by sonogram because, in space, the shape of your nose changes. And there were, on top of this, studies unrelated to food. They exercised in anti-microbial shirts (laundry doesn’t happen in space), evaluated their experiences hanging out with robot pets, and documented their sleep habits.

[…]

“We all had relationships outside that we were trying to maintain in some way,” Kate said. “Some were kind of new, some were tenuous, some were old and established, but they were all very difficult to maintain. A few things that could come off wrong in an e-mail could really bum you out for a long time.”

She told me about another crew member whose boyfriend didn’t email her at his usual time. This was roughly halfway through the mission. She started to get obsessed with the idea that maybe he got into a car accident. “Like seriously obsessed,” Kate said. “I was like, ‘I think your brain is telling you things that aren’t actually happening. Let’s just be calm about this,’ and she was like, ‘Okay, okay.’ But she couldn’t sleep that night. In the end he was just like, ‘Hey, what’s up?’ I knew he would be fine, but I could see how she could think something serious had happened.”

“My wife sent me poems every day but for a couple days she didn’t,” Kate said. “Something was missing from those days, and I don’t think she could have realized how important they were. It was weird. Everything was bigger inside your head because you were living inside your head.”

[…]

When I look back on my meals from the past year, the food log does the job I intended more or less effectively.

I can remember, with some clarity, the particulars of given days: who I was with, how I was feeling, the subjects discussed. There was the night in October I stress-scarfed a head of romaine and peanut butter packed onto old, hard bread; the somehow not-sobering bratwurst and fries I ate on day two of a two-day hangover, while trying to keep things light with somebody to whom, the two nights before, I had aired more than I meant to. There was the night in January I cooked “rice, chicken stirfry with bell pepper and mushrooms, tomato-y Chinese broccoli, 1 bottle IPA” with my oldest, best friend, and we ate the stirfry and drank our beers slowly while commiserating about the most recent conversations we’d had with our mothers.

But reading the entries from 2008, that first year, does something else to me: it suffuses me with the same mortification as if I’d written down my most private thoughts (that reaction is what keeps me from maintaining a more conventional journal). There’s nothing especially incriminating about my diet, except maybe that I ate tortilla chips with unusual frequency, but the fact that it’s just food doesn’t spare me from the horror and head-shaking that comes with reading old diaries. Mentions of certain meals conjure specific memories, but mostly what I’m left with are the general feelings from that year. They weren’t happy ones. I was living in San Francisco at the time. A relationship was dissolving.

It seems to me that the success of a relationship depends on a shared trove of memories. Or not shared, necessarily, but not incompatible. That’s the trouble, I think, with parents and children: parents retain memories of their children that the children themselves don’t share. My father’s favorite meal is breakfast and his favorite breakfast restaurant is McDonald’s, and I remember—having just read Michael Pollan or watched Super Size Me—self-righteously not ordering my regular Egg McMuffin one morning, and how that actually hurt him.

When a relationship goes south, it’s hard to pinpoint just where or how—especially after a prolonged period of it heading that direction. I was at a loss with this one. Going forward, I didn’t want not to be able to account for myself. If I could remember everything, I thought, I’d be better equipped; I’d be better able to make proper, comprehensive assessments—informed decisions. But my memory had proved itself unreliable, and I needed something better. Writing down food was a way to turn my life into facts: if I had all the facts, I could keep them straight. So the next time this happened I’d know exactly why—I’d have all the data at hand.

In the wake of that breakup there were stretches of days and weeks of identical breakfasts and identical dinners. Those days and weeks blend into one another, become indistinguishable, and who knows whether I was too sad to be imaginative or all the unimaginative food made me sadder.

[…]

“I’m always really curious about who you are in a different context. Who am I completely removed from Earth—or pretending to be removed from Earth? When you’re going further and further from this planet, with all its rules and everything you’ve ever known, what happens? Do you invent new rules? What matters to you when you don’t have constructs? Do you take the constructs with you? On an individual level it was an exploration of who I am in a different context, and on a larger scale, going to another planet is an exploration about what humanity is in a different context.”

[…]

What I remember is early that evening, drinking sparkling wine and spreading cream cheese on slices of a soft baguette from the fancy Key Biscayne Publix, then spooning grocery-store caviar onto it (“Lumpfish caviar and Prosecco, definitely, on the balcony”). I remember cooking dinner unhurriedly (“You were comparing prices for the seafood and I was impatient”)—the thinnest pasta I could find, shrimp and squid cooked in wine and lots of garlic—and eating it late (“You cooked something good, but I can’t remember what”) and then drinking a café Cubano even later (“It was so sweet it made our teeth hurt and then, for me at least, immediately precipitated a metabolic crisis”) and how, afterward, we all went to the empty beach and got in the water which was, on that warm summer day, not even cold (“It was just so beautiful after the rain”).

“And this wasn’t the same trip,” wrote that wrong-for-me then-boyfriend, “but remember when you and I walked all the way to that restaurant in Bill Baggs park, at the southern tip of the island, and we had that painfully sweet white sangria, and ceviche, and walked back and got tons of mosquito bites, but we didn’t care, and then we were on the beach somehow and we looked at the red lights on top of all the buildings, and across the channel at Miami Beach, and went in the hot Miami ocean, and most importantly it was National Fish Day?”

And it’s heartening to me that I do remember all that—had remembered without his prompting, or consulting the record (I have written down: “D: ceviche; awful sangria; fried plantains; shrimp paella.” “It is National fish day,” I wrote. “There was lightning all night!”). It’s heartening that my memory isn’t as unreliable as I worry it is. I remember it exactly as he describes: the too-sweet sangria at that restaurant on the water, how the two of us had giggled so hard over nothing and declared that day “National Fish Day,” finding him in the kitchen at four in the morning, dipping a sausage into mustard—me taking that other half of the sausage, dipping it into mustard—the two of us deciding to drive the six hours back to Gainesville, right then.

“That is a really happy memory,” he wrote to me. “That is my nicest memory from that year and from that whole period. I wish we could live it again, in some extra-dimensional parallel life.”

Three years ago I moved back to San Francisco, which was, for me, a new-old city.

I’d lived there twice before. The first time I lived there was a cold summer in 2006, during which I met that man I’d be broken up about a couple years later. And though that summer was before I started writing down the food, and before I truly learned how to cook for myself, I can still remember flashes: a dimly lit party and drinks with limes in them and how, ill-versed in flirting, I took the limes from his drink and put them into mine. I remember a night he cooked circular ravioli he’d bought from an expensive Italian grocery store, and zucchini he’d sliced into thin coins. I remember him splashing Colt 45—leftover from a party—into the zucchini as it was cooking, and all of that charming me: the Colt 45, the expensive ravioli, this dinner of circles.

The second time I lived in San Francisco was the time our thing fell apart. This was where my terror had originated: where I remembered the limes and the ravioli, he remembered or felt the immediacy of something else, and neither of us was right or wrong to remember what we did—all memories, of course, are valid—but still, it sucked. And now I have a record reminding me of the nights I came home drunk and sad and, with nothing else in the house, sautéed kale; blanks on the days I ran hungry to Kezar Stadium from the Lower Haight, running lap after lap after lap to turn my brain off, stopping to read short stories at the bookstore on the way home, all to turn off the inevitable thinking, and at home, of course, the inevitable thinking.

[…]

I’m not sure what to make of this data—what conclusions, if any, to draw. What I know is that it accumulates and disappears and accumulates again. No matter how vigilantly we keep track—even if we spend four months in a geodesic dome on a remote volcano with nothing to do but keep track—we experience more than we have the capacity to remember; we eat more than we can retain; we feel more than we can possibly carry with us. And maybe forgetting isn’t so bad. I know there is the “small green apple” from the time we went to a moving sale and he bought bricks, and it was raining lightly, and as we were gathering the bricks we noticed an apple tree at the edge of the property with its branches overhanging into the yard, and we picked two small green apples that’d been washed by the rain, and wiped them off on our shirts. They surprised us by being sweet and tart and good. We put the cores in his car’s cup holders. There was the time he brought chocolate chips and two eggs and a Tupperware of milk to my apartment, and we baked cookies. There are the times he puts candy in my jacket’s small pockets—usually peppermints so ancient they’ve melted and re-hardened inside their wrappers—which I eat anyway, and then are gone, but not gone.