Faster horses – Analog Senses

There’s a great quote that is often attributed to Henry Ford, the man who revolutionized the automobile industry with the introduction of the Model T in 1908. You’ve probably heard it before:

If I had asked my customers what they wanted they would have said a faster horse.

Whether or not Ford ever actually said it, those are wise words, and they apply to a great many things beyond cars. The gist of it is that consumers largely judge new products by comparing them to their existing competitors. That’s how we instinctively know whether something is better. However, what happens when an entirely new product comes along? What happens when there are no real competitors?

When there’s no reference, there’s no objective way to quantify how good —or bad— a product is. As a last resort, people will still try to compare it to the closest thing they can think of, even if the comparison doesn’t really work. That can be a dangerous thing, but it can also be an opportunity.

The main lesson behind Ford’s words is that, if you aim to create a revolution, you must be willing to part with the existing preconceptions that are holding your competitors back. Only then will you be able to take a meaningful leap forward. That will surely attract some criticism in the beginning, but once the product manages to stand on its own, people will see it for what it really is.

The tech world is largely governed by that rule. It’s what we now call disruption. Apple, in particular, is famous for anticipating what people need before they even know it, disrupting entire markets. That’s arguably the main reason behind their massive success during the past decade.

In retrospect, Apple products are often seen as revolutionary, but only after they’ve gained a foothold in the market and, more importantly, in our collective consciousness. Only then do people start seeing them for the revolutionary devices they always were. At the time of their announcement, though, they tend to face strong criticism from people who don’t really understand them. Apple products are usually not terribly concerned with conforming to the status quo; in fact, more often than not they’re actively trying to disrupt it. And that drives some people nuts.

It happened with the iPod:

No wireless. Less space than a nomad. Lame.

It happened with the iPhone:

That is the most expensive phone in the world and it doesn’t appeal to business customers because it doesn’t have a keyboard, which makes it not a very good email machine…

It also happened with the iPad:

It’s just a big iPod touch.

There’s another example that’s particularly telling. During the last episode of The Talk Show, John Gruber and Ben Thompson reminded me of the public criticism that the original iPhone faced when Apple announced it. Much of that criticism was focused on its non-removable battery, a first in the mobile phone industry at the time. Back then, many people were used to carrying a spare battery in case their phone happened to die mid-day. Once the iPhone arrived and people couldn’t swap batteries anymore, they became angry. The iPhone didn’t conform to what they already knew, and they didn’t like it.

But the iPhone was never a horse.

Seven years later, swappable batteries are no longer a thing, and hardly anybody remembers them. Some people may think of them as a nice-to-have, and others prefer to carry an extra battery pack, but for the most part, battery swappability is not a factor driving smartphone sales.

Was it ever really a big deal?

Of course not. Swappable batteries were never a feature; they were merely a way to deal with the technological shortcomings of the time. Apple knew that if they managed to get a full day’s worth of use out of the iPhone’s battery, there wouldn’t be a need for it to be removable anymore, and they trusted people to eventually understand and accept that. It was a gamble, but history has shown that they were right.

The same thing happened with MacBooks a few years ago, but by then, Apple’s solution had already proven to be the right one. Indeed, it seems a bit silly to complain about a non-removable battery when your laptop gets 12 hours of battery life.

And yet, no matter how many times Apple has been right in the past, people keep finding reasons to complain about their new products. The Apple Watch, of course, is no different:

Apple Watch is ugly and boring (and Steve Jobs would have agreed).

It’s not even a finished product, and some people are already slamming it. And it’s only going to get worse.

People don’t like what they don’t understand, and so far nobody understands the Apple Watch. I’m not even sure anybody can; we just don’t know enough about it at this point. In the absence of a valid reference, many are sure to dismiss it as either irrelevant or flawed, simply because it doesn’t conform to their own existing preconceptions. Because, like the iPhone, the Apple Watch is not a horse either.

That’s a very human response, deeply rooted in our nature. It’s actually uncontrollable, to a degree. We’ve been evolutionarily conditioned to be wary of the unknown, because there was a time, not so long ago, when our very survival depended on it. However, given that we’re not fighting smilodons for food anymore, perhaps we should at least try to keep an open mind about things. Especially shiny things that cost hundreds —or thousands— of dollars and have the potential to disrupt our entire lives and redefine the way we communicate with each other.

I’m not saying that you should like the Apple Watch. I’m certainly not saying you should buy one. I’m just saying, it can’t hurt to give it the benefit of the doubt. There’s so much to gain and so little to lose.

The Apple Watch is not a faster horse, but who knows? It may just end up being your favorite thing.

“the moment seizes us” – Boyhood

Boyhood is a 2014 American coming-of-age drama film written and directed by Richard Linklater and starring Patricia Arquette, Ellar Coltrane, Lorelei Linklater and Ethan Hawke. The film was shot intermittently over an eleven-year period from May 2002 to October 2013, showing the growth of the young boy and his sister to adulthood.

http://en.wikipedia.org/wiki/Boyhood_(film)

What’s Up With That: Why It’s So Hard to Catch Your Own Typos | Science | WIRED

Typos suck. They are saboteurs, undermining your intent, causing your resume to land in the “pass” pile, or providing sustenance for an army of pedantic critics. Frustratingly, they are usually words you know how to spell, but somehow skimmed over in your rounds of editing. If we are our own harshest critics, why do we miss those annoying little details?

[…]

“When you’re writing, you’re trying to convey meaning. It’s a very high level task,” he said.

As with all high-level tasks, your brain generalizes simple, component parts (like turning letters into words and words into sentences) so it can focus on more complex tasks (like combining sentences into complex ideas). “We don’t catch every detail, we’re not like computers or NSA databases,” said Stafford. “Rather, we take in sensory information and combine it with what we expect, and we extract meaning.” When we’re reading other people’s work, this helps us arrive at meaning faster by using less brain power. When we’re proofreading our own work, we know the meaning we want to convey. Because we expect that meaning to be there, it’s easier for us to miss when parts (or all) of it are absent. The reason we don’t see our own typos is because what we see on the screen is competing with the version that exists in our heads.

[…]

Generalization is the hallmark of all higher-level brain functions. It’s similar to how our brains build maps of familiar places, compiling the sights, smells, and feel of a route. That mental map frees your brain up to think about other things. Sometimes this works against you, like when you accidentally drive to work on your way to a barbecue, because the route to your friend’s house includes a section of your daily commute. We can become blind to details because our brain is operating on instinct. By the time you proofread your own work, your brain already knows the destination.

This explains why your readers are more likely to pick up on your errors. Even if you are using words and concepts that they are also familiar with, their brains are on this journey for the first time, so they are paying more attention to the details along the way and not anticipating the final destination.

But even if familiarization handicaps your ability to pick out mistakes in the long run, we’re actually pretty awesome at catching ourselves in the act. (According to Microsoft, backspace is the third-most used button on the keyboard.) In fact, touch typists—people who can type without looking at their fingers—know they’ve made a mistake even before it shows up on the screen. Their brain is so used to turning thoughts into letters that it alerts them when they make even minor mistakes, like hitting the wrong key or transposing two characters. In a study published earlier this year, Stafford and a colleague covered both the screen and keyboard of typists and monitored their word rate. These “blind” typists slowed down their word rate just before they made a mistake.

Touch typists are working off a subconscious map of the keyboard. As they type, their brains are instinctually preparing for their next move. “But, there’s a lag between the signal to hit the key and the actual hitting of the key,” Stafford said. In that split second, your brain has time to run the signal it sent your finger through a simulation telling it what the correct response will feel like. When it senses an error, it sends a signal to the fingers, slowing them down so they have more time to adjust.

As any typist knows, hitting keys happens too fast to divert a finger when it’s in the process of making a mistake. But, Stafford says this evolved from the same mental mechanism that helped our ancestors’ brains make micro adjustments when they were throwing spears.

Unfortunately, that kind of instinctual feedback doesn’t exist in the editing process. When you’re proofreading, you are trying to trick your brain into pretending that it’s reading the thing for the first time. Stafford suggests that if you want to catch your own errors, you should try to make your work as unfamiliar as possible. Change the font or background color, or print it out and edit by hand. “Once you’ve learned something in a particular way, it’s hard to see the details without changing the visual form,” he said.

What’s Up With That: Your Best Thinking Seems to Happen in the Shower | Science | WIRED

Long drives, short walks, even something like pulling weeds, all seem to have the right mix of monotony and engagement to trigger a revelation. They also happen to be activities where it’s difficult to take notes. It turns out that aimless engagement in an activity is a great catalyst for free association, but introducing a pen and paper can sterilize the effort.

[…]

The common thread in these activities is that they are physically or mentally active, but only mildly so. They also need to be familiar or comfortable enough that you stay engaged but not bored, and to last long enough to allow an uninterrupted stream of thought.

Kounios explains that our brains typically catalog things by their context: Windows are parts of buildings, and the stars belong in the night sky. Ideas will always mingle to some degree, but when we’re focused on a specific task our thinking tends to be linear.

Kounios likes to use the example of a stack of bricks in your backyard. You walk by them every day with hardly a second thought, and if asked you’d describe them as a building material (maybe for that pizza oven you keep meaning to put together). But one day in the shower, you start thinking about your neighbor’s walnut tree. Those nuts sure look tasty, and they’ve been falling in your yard. You suddenly realize that you can smash those nuts open using the bricks in your backyard!

[…]

As ideas become untethered, they are free to bump up against other ideas they’ve never had the chance to encounter, increasing the likelihood of a useful connection.

[…]

Like Archimedes, when you are working on a problem your brain tends to fixate on one or a few different strategies. Kounios says these are like ruts that your mental wheels get stuck in. “If you take a break however, those thought patterns no longer dominate your thinking,” he said. The problem gets removed from the mental ruts and mingles with other ideas you’re carrying in your head. Eventually, it finds one—or several—that click together and rise up like Voltron into a solution. This is called fixation forgetting.

[…]

It’s not clear how your brain decides which are the right connections, but it’s obvious that the farther your brain can roam, the better. Research has shown that your brain builds bigger creative webs when you’re in a positive mood. This makes sense, because when you’re anxious you’re less likely to take a chance on creativity. Even when resting or taking a break, anxious brains tend to obsess on linear solutions. This may be part of the reason that when you bring a way to record your thoughts into the equation—such as a notebook, voice recorder or word processor—the thoughts worth recording become scarce.

[…]

“Not having an explicit task is the main ingredient for random insights,” Kounios said. “Once you have a pen and paper there, it’s not really your mind wandering.”

Me, Myself, and I by Stephen Greenblatt | The New York Review of Books

Shunga woman reading

Laqueur’s most recent book, Solitary Sex: A Cultural History of Masturbation, shares with Making Sex the same startling initial premise: that something we take for granted, something that goes without saying, something that simply seems part of being human has in fact a history, and a fascinating, conflicted, momentous history at that.

[…]

Masturbation is virtually unique, in the array of more or less universal human behaviors, in arousing a peculiar and peculiarly intense current of anxiety.

This anxiety, Laqueur observes, is not found in all cultures and is not part of our own culture’s distant origins. In ancient Greece and Rome, masturbation could be the object of transitory embarrassment or mockery, but it had little or no medical or, as far as we can tell, cultural significance. More surprisingly, Laqueur argues, it is almost impossible to find in ancient Jewish thought. This claim at first seems dubious because in Genesis 38 we read that Onan “spilled his seed upon the ground,” an act that so displeased the Lord that He struck him dead. Onanism indeed became a synonym for masturbation, but not for the rabbis who produced the Talmuds and midrashim. For them the sin of Onan was not masturbation but a willful refusal to procreate. Their conceptual categories—procreation, idolatry, pollution—evidently did not include a significant place for the sinful indulgence in gratuitous, self-generated sexual pleasure. Some commentators on a pronouncement by Rabbi Eliezer—“Anyone who holds his penis when he urinates is as though he brought the flood into the world”—seem close to condemning such pleasure, but on closer inspection these commentators too are concerned with the wasting of semen.

Medieval Christian theologians, by contrast, did have a clear concept of masturbation as a sin, but it was not, Laqueur claims, a sin in which they had particularly intense interest. With the exception of the fifth-century abbot John Cassian, they were far more concerned with what Laqueur calls the ethics of social sexuality than they were with the ethics of solitary sex. What mattered most were “perversions of sexuality as perversions of social life, not as a withdrawal into asocial autarky.” Within the monastery anxiety focused far more on sodomy than on masturbation, while in the world at large it focused more on incest, bestiality, fornication, and adultery.

[…]

Church fathers could not share in particularly intense form the Jewish anxiety about Onan, precisely because the Church most honored those whose piety led them to escape from the whole cycle of sexual intercourse and generation. Theologians did not permit masturbation, but they did not focus sharply upon it, for sexuality itself, and not only nonreproductive sexuality, was to be overcome. A very severe moralist, Raymond of Peñafort, did warn married men against touching themselves, but only because arousal might make them want to copulate more often with their wives.

[…]

Reformation theologians did not fundamentally alter the traditional conception of masturbation or significantly intensify the level of interest in it. To be sure, Protestants vehemently castigated Catholics for creating institutions—monasteries and convents—that in their view denigrated marriage and inevitably fostered masturbation. Marriage, the Reformers preached, was not a disappointing second choice made by those who could not embrace the higher goal of chastity; it was the fulfillment of human and divine love. Sexual pleasure in marriage, provided that it was not excessive or pursued for its own sake, was not inherently sinful, or rather any taint of sinfulness was expunged by the divinely sanctioned goal of procreation. In the wake of Luther and Calvin masturbation remained what it had been for the rabbis: an act whose sinfulness lay in the refusal of procreation, the prodigal wasting of seed.

In one of his early sonnets, Shakespeare wittily turns such “unthrifty” wasting into economic malpractice:

Unthrifty loveliness, why dost thou spend
Upon thyself thy beauty’s legacy?

In bequeathing the young man such loveliness, nature expected him to pass it along to the next generation; instead the “beauteous niggard” is holding on to it for himself and refusing to create the child who should rightly bear his image into the future. Masturbation, in the sonnet, is the perverse misuse of an inheritance. The young man merely spends upon himself, and thereby throws away, wealth that should rightly generate more wealth:

For having traffic with thyself alone,
Thou of thyself thy sweet self dost deceive.
Then how when nature calls thee to be gone:
What acceptable audit canst thou leave?

  Thy unused beauty must be tombed with thee,

  Which usèd, lives th’executor to be.

The young man, as the sonnet characterizes him, is a “profitless usurer,” and when his final reckoning is made, he will be found in arrears. The economic metaphors here have the odd effect of praising usury, still at the time regarded both as a sin and as a crime. There may be an autobiographical element here—the author of The Merchant of Venice was himself on occasion a usurer, as was his father—but Shakespeare was also anticipating a recurrent theme in the history of “modern masturbation” that concerns Laqueur: from the eighteenth century onward, masturbation is assailed as an abuse of biological and social economy. Still, a poem like Shakespeare’s only shows that masturbation in the full modern sense did not yet exist: by “having traffic” with himself alone, the young man is wasting his seed, but the act itself is not destroying his health or infecting the whole social order.

The Renaissance provides a few glimpses of masturbation that focus on pleasure rather than the avoidance of procreation. In the 1590s Shakespeare’s contemporary Thomas Nashe wrote a poem about a young man who went to visit his girlfriend who was lodging—just for the sake of convenience, she assured him—in a whorehouse. The man was so aroused by the very sight of her that he had the misfortune of prematurely ejaculating, but the obliging lady managed to awaken him again. Not, however, long enough for her own satisfaction: to his chagrin, the lady only managed to achieve her “solace” by means of a dildo which, she declared, was far more reliable than any man. This piece of social comedy is closer to what Laqueur would consider authentic “modern” masturbation, for Nashe’s focus is the pursuit of pleasure rather than the wasting of seed, but it is still not quite there.

Laqueur’s point is not that men and women did not masturbate throughout antiquity, the Middle Ages, and the Renaissance—the brief confessional manual attributed to Gerson assumes that the practice is ubiquitous, and the historian finds no reason to doubt it—but rather that it was not regarded as a deeply significant event. It is simply too infrequently mentioned to have counted for a great deal, and the few mentions that surface tend to confirm its relative unimportance. Thus in his diary, alongside the many occasions on which he had a partner in pleasure, Samuel Pepys jotted down moments in which he enjoyed solitary sex, but these latter did not provoke in him any particular shame or self-reproach. On the contrary, he felt a sense of personal triumph when he managed, while being ferried in a boat up the Thames, to bring himself to an orgasm—to have “had it complete,” as he put it—by the strength of his imagination alone. Without using his hands, he noted proudly, he had managed just by thinking about a girl he had seen that day to pass a “trial of my strength of fancy…. So to my office and wrote letters.” Only on such solemn occasions as High Mass on Christmas Eve in 1666, when the sight of the queen and her ladies led him to masturbate in church, did Pepys’s conscience speak out, and only in a very still, small voice.

The seismic shift came about some half-century later, and then not because masturbation was finally understood as a horrible sin or an economic crime but rather because it was classified for the first time as a serious disease. “Modern masturbation,” Solitary Sex begins, “can be dated with a precision rare in cultural history.” It came into being “in or around 1712” with the publication in London of a short tract with a very long title: Onania; or, The Heinous Sin of Self Pollution, and all its Frightful Consequences, in both SEXES Considered, with Spiritual and Physical Advice to those who have already injured themselves by this abominable practice. And seasonable Admonition to the Youth of the nation of Both SEXES….The anonymous author—Laqueur identifies him as John Marten, a quack surgeon who had published other works of soft-core medical pornography—announced that he had providentially met a pious physician who had found remedies for this hitherto incurable disease. The remedies are expensive, but given the seriousness of the condition, they are worth every penny. Readers are advised to ask for them by name: the “Strengthening Tincture” and the “Prolific Powder.”

[…]

But marketing alone cannot explain why “onanism” and related terms began to show up in the great eighteenth-century encyclopedias or why one of the most influential physicians in France, the celebrated Samuel Auguste David Tissot, took up the idea of masturbation as a dangerous illness or why Tissot’s 1760 work, L’Onanisme, became an instant European literary sensation.

[…]

Tissot “definitively launched masturbation,” as Laqueur puts it, “into the mainstream of Western culture.” It was not long before almost the entire medical profession attributed an inexhaustible list of woes to solitary sex, a list that included spinal tuberculosis, epilepsy, pimples, madness, general wasting, and an early death.

[…]

Modern masturbation—and this is Laqueur’s brilliant point—was the creature of the Enlightenment. It was the age of reason, triumph over superstition, and the tolerant, even enthusiastic acceptance of human sexuality that conjured up the monster of self-abuse. Prior to Tissot and his learned medical colleagues, it was possible for most ordinary people to masturbate, as Pepys had done, without more than a twinge of guilt. After Tissot, anyone who indulged in this secret pleasure did so in the full, abject knowledge of the horrible consequences. Masturbation was an assault on health, on reason, on marriage, and even on pleasure itself. For Enlightenment doctors and their allies did not concede that masturbation was a species of pleasure, however minor or embarrassing; it was at best a false pleasure, a perversion of the real. As such it was dangerous and had at all costs to be prevented.

[…]

There were, Laqueur suggests, three reasons why the Enlightenment concluded that masturbation was perverse and unnatural. First, while all other forms of sexuality were reassuringly social, masturbation—even when it was done in a group or taught by wicked servants to children—seemed in its climactic moments deeply, irremediably private. Second, the masturbatory sexual encounter was not with a real, flesh-and-blood person but with a phantasm. And third, unlike other appetites, the addictive urge to masturbate could not be sated or moderated. “Every man, woman, and child suddenly seemed to have access to the boundless excesses of gratification that had once been the privilege of Roman emperors.”

Privacy, fantasy, insatiability: each of these constitutive features of the act that the Enlightenment taught itself to fear and loathe is, Laqueur argues, a constitutive feature of the Enlightenment itself. Tissot and his colleagues had identified the shadow side of their own world: its interest in the private life of the individual, its cherishing of the imagination, its embrace of a seemingly limitless economy of production and consumption. Hammering away at the social, political, and religious structures that had traditionally defined human existence, the eighteenth century proudly brought forth a shining model of moral autonomy and market economy—only to discover that this model was subject to a destructive aberration. The aberration—the physical act of masturbating—was not in itself so obviously dreadful. When Diderot and his circle of sophisticated encyclopédistes offered their considered view of the subject, they acknowledged that moderate masturbation as a relief for urgent sexual desires that lacked a more satisfying outlet seemed natural enough. But the problem was that “moderate masturbation” was a contradiction in terms: the voluptuous, fiery imagination could never be so easily restrained.

Masturbation then became a sexual bugbear, Laqueur argues, because it epitomized all of the fears that lay just on the other side of the new sense of social, psychological, and moral independence. A dramatic increase in individual autonomy was bound up, as he convincingly documents, with an intensified anxiety about unsocialized, unreproductive pleasure, pleasure fueled by seductive chimeras ceaselessly generated by the vagrant mind:

The Enlightenment project of liberation—the coming into adulthood of humanity—made the most secret, private, seemingly harmless, and most difficult to detect of sexual acts the centerpiece of a program for policing the imagination, desire, and the self that modernity itself had unleashed.

The dangers of solitary sex were linked to one of the most telling modern innovations. “It was not an accident,” Laqueur writes, in the careful phrase of a historian eager at once to establish a link and to sidestep the issue of causality, that Onania was published in the age of the first stock market crashes, the foundation of the Bank of England, and the eruption of tulip-mania. Masturbation is the vice of civil society, the culture of the marketplace, the world in which traditional barriers against luxury give way to philosophical justifications of excess. Adam Smith, David Hume, and Bernard Mandeville all found ways to celebrate the marvelous self-regulating quality of the market, by which individual acts of self-indulgence and greed were transformed into the general good. Masturbation might at first glance seem to be the logical emblem of the market: after all, the potentially limitless impulse to gratify desire is the motor that fuels the whole enormous enterprise. But in fact it was the only form of pleasure-seeking that escaped the self-regulating mechanism: it was, Mandeville saw with a shudder, unstoppable, unconstrained, unproductive, and absolutely free of charge. Far better, Mandeville wrote in his Defense of Public Stews (1724), that boys visit brothels than that they commit “rapes upon their own bodies.”

The revealing contrast here is with an earlier cultural innovation, the public theaters, which were vigorously attacked in Shakespeare’s time for their alleged erotic power. The theaters, moralists claimed, were “temples to Venus.” Aroused audiences would allegedly rush off at the play’s end to make love in nearby inns or in secret rooms hidden within the playhouses themselves.

[…]

In the late seventeenth century John Dunton—the author of The Night-walker, or Evening Rambles in Search After Lewd Women (1696)—picked up a whore in the theater, went to her room, and then tried to give her a sermon on chastity. She vehemently objected, saying that the men with whom she usually went home were far more agreeable: they would pretend, she said, that they were Antony and she would pretend that she was Cleopatra. The desires that theaters awakened were evidently understood to be fundamentally social: irate Puritans never charged that audiences were lured into an addiction to solitary sex. But that is precisely the accusation leveled at the experience of reading imaginative fiction.

It was not only the solitude in which novels could be read that contributed to the difference between the two attacks; the absence of the bodies of the actors and hence the entire reliance on imagination seemed to make novels more suitable for solitary than social sex. Eighteenth-century doctors, tapping into ancient fears of the imagination, were convinced that when sexual excitement was caused by something unreal, something not actually present in the flesh, that excitement was at once unnatural and dangerous. The danger was greatly intensified by its addictive potential: the masturbator, like the novel reader—or rather, precisely as novel reader—could willfully mobilize the imagination, engaging in an endless creation and renewing of fictive desire. And shockingly, with the spread of literacy, this was a democratic, equal opportunity vice. The destructive pleasure was just as available to servants as to masters and, still worse, just as available to women as to men. Women, with their hyperactive imaginations and ready sympathies, their proneness to tears, blushes, and fainting fits, their irrationality and emotional vagrancy, were thought particularly subject to the dangerous excitements of the novel.

[…]

at the beginning of the twentieth century, the whole preoccupation—the anxiety, the culture of surveillance, the threat of death and insanity—began to wane. The shift was by no means sudden or decisive, and traces of the older attitudes obviously persist not only in schoolboy legends and many zany, often painful family dramas but also in the nervous laughter that attends the whole topic. Still, the full nightmare world of medicalized fear and punishment came to an end. Laqueur tells this second part of the story far more briskly: he attributes the change largely to the work of Freud and liberal sexology, though he also acknowledges how complex and ambivalent many of the key figures actually were. Freud came to abandon his conventional early views about the ill effects of masturbation and posited instead the radical idea of the universality of infant masturbation. What had been an aberration became a constitutive part of the human condition. Nevertheless the founder of psychoanalysis constructed his whole theory of civilization around the suppression of what he called the “perverse elements of sexual excitement,” beginning with autoeroticism. In this highly influential account, masturbation, as Laqueur puts it, “became a part of ontogenesis: we pass through masturbation, we build on it, as we become sexual adults.”

[…]

Solitary Sex ends with a brief account of modern challenges to this theory of repression, from the championing of women’s masturbation in the 1971 feminist best seller Our Bodies, Ourselves to the formation of groups with names like the SF Jacks—“a fellowship of men who like to jack-off in the company of like-minded men,” as its Web site announces—and the Melbourne Wankers. A series of grotesque photographs illustrates the transgressive fascination that masturbation has for such contemporary artists as Lynda Benglis, Annie Sprinkle, and Vito Acconci. The latter made a name for himself by masturbating for three weeks while reclining in a box under a white ramp on the floor of the Sonnabend Gallery in New York City: “so, art making,” Laqueur observes, “is literally masturbating.”

[…]

Conjuring up his childhood in Combray, Proust’s narrator recalls that at the top of his house, “in the little room that smelt of orris-root,” he looked out through the half-opened window and

with the heroic misgivings of a traveller setting out on a voyage of exploration or of a desperate wretch hesitating on the verge of self-destruction, faint with emotion, I explored, across the bounds of my own experience, an untrodden path which for all I knew was deadly—until the moment when a natural trail like that left by a snail smeared the leaves of the flowering currant that drooped around me.

For this brief moment in Swann’s Way (1913), it is as if we had reentered the cultural world that Laqueur chronicles so richly, the world in which solitary sex was a rash voyage away beyond the frontiers of the natural order, a headlong plunge into a realm of danger and self-destruction. Then, with the glimpse of the snail’s trail, the landscape resumes its ordinary, everyday form, and the seemingly untrodden path is disclosed—as so often in Proust—to be exceedingly familiar.

[…]

Proust does not encourage us to exaggerate the significance of masturbation—it is only one small, adolescent step in the slow fashioning of the writer’s vocation. Still, Laqueur’s courageous cultural history (and it took courage, even now, to write this book) makes it abundantly clear why for Proust—and for ourselves—the celebration of the imagination has to include a place for solitary sex.

The Hi-Tech Mess of Higher Education by David Bromwich | The New York Review of Books

Students at Deep Springs College in the California desert, near the Nevada border, where education involves ranching, farming, and self-governance in addition to academics – Jodi Cobb/National Geographic/Getty Images

The financial crush has come just when colleges are starting to think of Internet learning as a substitute for the classroom. And the coincidence has engendered a new variant of the reflection theory. We are living (the digital entrepreneurs and their handlers like to say) in a technological society, or a society in which new technology is rapidly altering people’s ways of thinking, believing, behaving, and learning. It follows that education itself ought to reflect the change. Mastery of computer technology is the major competence schools should be asked to impart. But what if you can get the skills more cheaply without the help of a school?

A troubled awareness of this possibility has prompted universities, in their brochures, bulletins, and advertisements, to heighten the one clear advantage that they maintain over the Internet. Universities are physical places; and physical existence is still felt to be preferable in some ways to virtual existence. Schools have been driven to present as assets, in a way they never did before, nonacademic programs and facilities that provide students with the “quality of life” that makes a college worth the outlay. Auburn University in Alabama recently spent $72 million on a Recreation and Wellness Center. Stanford built Escondido Village Highrise Apartments. Must a college that wants to compete now have a student union with a food court and plasma screens in every room?

[…]

The model seems to be the elite club—in this instance, a club whose leading function is to house in comfort thousands of young people while they complete some serious educational tasks and form connections that may help them in later life.

[…]

A hidden danger both of intramural systems and of public forums like “Rate My Professors” is that they discourage eccentricity. Samuel Johnson defined a classic of literature as a work that has pleased many and pleased long. Evaluations may foster courses that please many and please fast.

At the utopian edge of the technocratic faith, a rising digital remedy for higher education goes by the acronym MOOCs (massive open online courses). The MOOC movement is represented in Ivory Tower by the Silicon Valley outfit Udacity. “Does it really make sense,” asks a Udacity adept, “to have five hundred professors in five hundred different universities each teach students in a similar way?” What you really want, he thinks, is the academic equivalent of a “rock star” to project knowledge onto the screens and into the brains of students without the impediment of fellow students or a teacher’s intrusive presence in the room. “Maybe,” he adds, “that rock star could do a little bit better job” than the nameless small-time academics whose fame and luster the video lecturer will rightly displace.

That the academic star will do a better job of teaching than the local pedagogue who exactly resembles 499 others of his kind—this, in itself, is an interesting assumption at Udacity and a revealing one. Why suppose that five hundred teachers of, say, the English novel from Defoe to Joyce will all tend to teach the materials in the same way, while the MOOC lecturer will stand out because he teaches the most advanced version of the same way? Here, as in other aspects of the movement, under all the talk of variety there lurks a passion for uniformity.

[…]

The pillars of education at Deep Springs are self-governance, academics, and physical labor. The students number scarcely more than the scholar-hackers on Thiel Fellowships—a total of twenty-six—but they are responsible for all the duties of ranching and farming on the campus in Big Pine, California, along with helping to set the curriculum and keep their quarters. Two minutes of a Deep Springs seminar on citizen and state in the philosophy of Hegel give a more vivid impression of what college education can be than all the comments by college administrators in the rest of Ivory Tower.

[…]

Teaching at a university, he says, involves a commitment to the preservation of “cultural memory”; it is therefore in some sense “an effort to cheat death.”

Jenny Diski reviews ‘Cubed’ by Nikil Saval · LRB 31 July 2014

The story of the office begins in counting houses, where scribes kept their heads down accounting for the transformation of goods into wealth and vice versa. You might go as far back as ancient Egypt or stay sensible and look to mercantile Europe for the beginnings of bureaucracy, and the need to keep written accounts of business in one place. Saval gives a nod to the medieval guilds but settles on the 19th century as the start of the office proper, still in Europe, although this is an overwhelmingly American account of the American office. The closer you get to modernity in Cubed, the more the emphasis is on buildings and the more diminished the figure of the worker inside the buildings (until you get to the end and the buildings begin to disappear, although so too do the workers). It’s not a mystery. The design and construction of entire purpose-built structures for office work is a modern phenomenon. Scribes, to stretch the notion of office work, wrote in scriptoria, rooms in monasteries which were built for the more general purpose of worshipping God and housing those devoted to the various tasks (among which the reproduction of scripture) involved in doing so. Clerks are more likely to be what we think of when we want to look at the early days of office work. They emerged from their religious duties to assist commerce in keeping track of business, where we recognise them as dark-suited, substantially present characters in Trollope, Thackeray and Dickens. The ready-made spaces these clerks worked in became ‘offices’, rather than special buildings defining the work they pursued. They kept their books and scratched out their invoices in regular private houses given over to business, and sat or stood at desks in rooms they shared with their bosses for both convenience and oversight – this too disappears and then returns in postmodernity when hierarchy is spatially, if not actually, flattened.

Proximity has always been an important issue for office workers, so much so that it eventually precluded any form of unionisation. Rather than organise to improve their pay and conditions, office workers chose to keep close to their superiors in the hope, not always forlorn, that they would rise in prominence thanks to patronage. Physical closeness applied in the Dickensian office, but there are other ways to achieve it. In The Apartment (perfectly depicting the apex of the American way of office life in 1960 as North by Northwest perfectly depicts the fantasised alternative), Jack Lemmon gets close to his boss, which gets him ever closer to a key to the executive washroom, by lending his apartment to executives for their extra-marital assignations.

[…]

The pre-20th-century office worker saw himself as a cut above the unsalaried labouring masses, and was as ambivalent about his superiors, who were his only means of rising, as the rest of the working world was about him. Dandyish clerks prided themselves on not being workers, on the cleanness of their job (thus the whiteness of the collars), and on being a step above hoi polloi. They became a massed workforce in the United States, where the attitude towards the scribe and record-keeper changed, so that they came to be seen both as effete and untrustworthy, like Dickens’s Heep, and as ominous and unknowable, like Bartleby, but without receiving the amazed respect of Melville’s narrator. By 1855 in New York they were the third largest occupational group. Their self-esteem as their numbers grew was not shared: ‘Nothing about clerical labour was congenial to the way most Americans thought of work … At best, it seemed to reproduce things … the bodies of real workers were sinewy, tanned by the relentless sun, or blackened by smokestack soot; the bodies of clerks were slim, almost feminine in their untested delicacy.’ In Vanity Fair, the clerks are ‘“vain, mean, selfish, greedy, sensual and sly, talkative and cowardly”, and spent all their minimal strength attempting to dress better than “real men who did real work”.’

By the mid-20th century sex had created a new division within clerical labour. The secretary was almost invariably a woman and so was the typist, who worked in massed serried ranks, although (again to be seen in The Apartment) there was also a pool of anonymous desks for mute men with accounting machines, like Lemmon as C.C. Baxter. The secretaries lived inside a bubble of closeness to power, looking to burst through it into management or marriage, most likely the latter, geishas at work whose most realistic hope was to become domestic geishas, while the typists (originally called typewriters) and number-crunchers clattering on their machines on their own floor merely received dictated or longhand work to type or add up, distributed by runners, and so were not likely to catch the eye of an executive to give them a hand up unless they were prepared to wait outside their own apartment in the rain.

The pools of workers as well as the interior design of offices were under the spell of Taylorism, the 1950s fetish for a time and motion efficiency that tried to replicate the rhythm enforced in the factories to which office workers felt so superior. The idea that things that need doing and the people doing them could be so organised that they operated together as smoothly as cogs in a machine is everlastingly seductive. Anyone who spends half a day reorganising their home office, rejigging their filing system, arranging their work space ‘ergonomically’ knows this. It isn’t just a drive for cost efficiency, but some human tic that has us convinced that the way we organise ourselves in relation to our work holds a magic key to an almost effortless success. Entire online magazines like Lifehacker and Zen Habits are devoted to time-and-money-saving tweaks for work and home (‘An Easy Way to Find the Perfect Height for Your Chair or Standing Desk’; ‘Five Ways to Spend a Saved Hour at Work’; ‘Ten Tips to Work Smarter, Not Harder’; ‘What to Think about While You Exercise’). At a corporate level, this meant erecting buildings and designing their interiors and work systems to achieve office nirvana. No time, no motion wasted. The utopian dream of architects, designers and managers comes together in the form-follows-function mantra, beginning with Adler and Sullivan’s Wainwright Building in St Louis in 1891, although, as Saval points out, from the start it was really all about form follows finance:

The point was not to make an office building per specification of a given company … but rather to build for an economy in which an organisation could move in and out of a space without any difficulty. The space had to be eminently rentable … The skylines of American cities, more than human ingenuity and entrepreneurial prowess, came simply to represent dollars per square foot.

The skyscraper, the apotheosis of form following finance and function, appears once the manufacture of elevators allowed buildings of more than the five floors that people are prepared to walk up. It was a perfect structure philosophically and speculatively to house the now millions of workers whose job it was to keep track of manufacturing, buying and selling – ‘the synthesis of naked commerce and organic architecture’ as foreseen by Louis Sullivan, mentor to Frank Lloyd Wright. The basic unit of the skyscraper is the ‘cell’: ‘We take our cue from the individual cell, which requires a window with its separating pier, its sill and lintel, and we, without more ado, make them look all alike because they are all alike.’ The International Style reached its glory period with the vertical cities designed by Sullivan, Mies van der Rohe, Philip Johnson, Henry-Russell Hitchcock. The Philadelphia Savings Fund Society Building, the Rockefeller Center, the UN Secretariat Building, Lever House and the Seagram Building were visually stunning statements of corporate power and prevailed by making the perceived virtues of repetition and monotony in design synonymous with economy and order. Even the need for a window in each cell was obviated with the invention of an efficient air-conditioning system and electric lighting, allowing more rational ways to provide light and air. However beautiful or banal the exterior, curtained in glass or blank with concrete, the buildings served as hives for the masses who performed their varied tasks to produce the evidence of profit. They were Taylorist cathedrals, and new techniques of ergonomics and personality-testing for employees compounded the organisational religious zeal, so that individuals more than ever before became bodies operating within physical space, whose ‘personalities’ were tested for the lack of them in the search for compliance and conformity. Business jargon added mind-conditioning on a par with air-conditioning, keeping everyone functioning optimally within the purposes of the mini-city.

The popular sociology books that began to appear in the 1960s criticising this uniformity were read avidly by the office workers who started to see themselves as victims. The Lonely Crowd, The Organisation Man, The Man in the Grey Flannel Suit, the movie The Apartment itself, described a dystopian conformity that mid-century business America had produced in entire lives, not just in the working day. An alternative was proposed by office designers such as Robert Propst at Herman Miller, who were still working on behalf of the corporations, but who saw Taylorism as deadening the creative forces that were beginning to be seen as useful to business, perhaps as a result of the rise of advertising. Open plan became the solution. The cell opened out to the entire floor space of the building and it became a matter of how to subdivide that space to suit the varied tasks each individual needed to do, while retaining openness; to create office interiors in which workers needed to move around to achieve their goals, ideally bumping into one another on the way to permit the fortuitous cross-pollination of ideas. Cubes arrived, boxes without lids for people, but humane, alterable and adaptable to their needs (or the needs of the business for which they worked). Lots of little adjustable cells inside the main cell. Walls became flexible and low enough to be chatted over. Herman Miller’s Action Office and the concept of Bürolandschaft, the landscaped office, replaced the fundamental lonely cell and created its own kind of hell: ‘unpleasant temperature variations, draughts, low humidity, unacceptable noise levels, poor natural lighting, lack of visual contact with the outside and lack of natural ventilation’. And in addition there was a felt loss of privacy that had people bringing in all manner of knick-knacks to their cubes as self-identifiers and status symbols.

Another kind of office work came along with the arrival of the dotcom revolution. Not paper work but screen work. Like advertising but growing crazily, not humdrum invoice-stamping and letter-writing, but innovative programming that required intense brainwork from young, ill-disciplined talent who needed to be kept at their screens as much as possible while being nurtured and refuelled on the job. Being young and not having any connection with the office work of the past, the new workforce was offered on-site playgrounds that kept obsessive minds refreshed but still focused. Hierarchies were loosened, or more accurately given the appearance of being loosened. Jeans and T-shirts replaced suits, all youthful needs (except sleep-inducing sex) were catered for: pizzas and carbonated drinks, basketball and brightly coloured nursery furniture for the young geniuses to lounge or nap on when they were exhausted with programming. The open-plan office moved towards ‘main streets’ with side offices for particular purposes, often themed like Disneyland with lots of communal meeting and playing places, scooters to get around, and built-in time for workers to develop their own pet projects. The Herman Miller Aeron chair, still so desirable, was a design response to the need to sit for long periods working at a screen. It’s advertised as being ergonomically created for people to sit comfortably on stretchy mesh for up to 12 hours at a time.

In advertising, Jay Chiat decided that office politics were a bar to inspirational thinking. He hired Frank Gehry to design his ‘deterritorialised’ agency offices in Venice, California in 1986. ‘Everyone would be given a cellular phone and a laptop computer when they came in. And they would work wherever they wanted.’ Personal items, pictures or plants had to be put in lockers. There were no other private spaces. There were ‘Tilt-A-Whirl domed cars … taken from a defunct amusement park ride, for two people to have private conferences. They became the only place where people could take private phone calls.’ One employee pulled a toy wagon around to keep her stuff together. It rapidly turned into a disaster. People got to work and had no idea where they were to go. There were too many people and not enough chairs. People just stopped going to work. In more formal work situations too, the idea of the individual workstation, an office or a personal desk, began to disappear and designers created fluid spaces where people wandered to settle here and there in specialised spaces. For some reason homelessness was deemed to be the answer to a smooth operation.

The great days of office buildings dictating where and how individuals work within them may have gone. There are new architects and designers who collaborate with the workers themselves to produce interiors that suit their needs and desires. ‘Co-design’ – allowing the users of a space to have an equal say in how it is organised – is a first sign that buildings, sponsored by and monuments to corporate power, might have lost their primacy over the individuals engaged to work in them. But if the time of grand structures is over, it’s probably an indication that corporate power has seen a better way to sustain itself. The shift away from monolithic vertical cities of work and order might be seen as the stage immediately preceding the disappearance of the office altogether and the start of the home-working revolution we’ve been told has been on its way ever since futurology programmes in the 1950s assured us we’d never get out of our pyjamas within the year.

Fantasies of home-working, as people began to see round the corner into a computerised future, were forever being promised but never really came to anything. The idea made management nervous. How to keep tabs on people? How were managers to manage? And it alarmed office workers. It wasn’t perhaps such a luxury after all not having to face the nightmare of commuting or those noisy open-plan dystopias, when confronted instead by the discipline needed to get down to and keep at work at home, operating around the domestic needs of the family, and having no one to chat to around the water cooler that wasn’t there. Even now, when the beneficial economics of freelancing and outsourcing has finally got a grip on corporate accountants, there is something baffling and forlorn about the sight, as you walk past café after café window, of rows of people tapping on their MacBook Air. There for company in the communal space, but wearing isolating headphones to keep out the chatter, rather than sitting in their own time in quiet, ideally organised, or lonely, noisy, cramped home offices. Cafés with free wifi charge by the coffee to replicate a working atmosphere in what was once a place for daydreaming and chat. The freedom of home-working is also the freedom from employment benefits such as paid holidays, sick pay, pensions; and the freedom of permatemp contracts or none at all and the radical uncertainty about maintaining a steady income. These workers are a serious new class, known as the precariat: insecure, unorganised, taking on too much work for fear of famine, or frighteningly underemployed. The old rules of employment have been turned upside down. These new non-employees, apparently, need to develop a new ‘self-employed mindset’, in which they treat their employers as ‘customers’ of their services, and do their best to satisfy them, in order to retain their ‘business’. The ‘co-working’ rental is the most recent arrival. Space in a building with office equipment and technical facilities is hired out to freelancers, who work together but separately in flexible spaces on their own projects, in a bid ‘to get out of their apartments and be sociable in an office setting’. Office space has returned to what it really was, dollars per square foot, which those who were once employees now pay to use, without the need for rentiers to provide more than a minimum of infrastructure. The US Bureau of Labor Statistics projects that ‘by 2020 freelancers, temps, day labourers and independent contractors will constitute 40 per cent of the workforce.’ Some think up to 50 per cent. Any freelancer will tell you about the time and effort required to drum up business and keep it coming (networking, if you like) which cuts down on how much work you can actually do if you get it. When they do get the work, they no longer get the annual salaries that old-time clerks were so proud to receive. Getting paid is itself time-consuming and difficult. It’s estimated that more than 77 per cent of freelancers have had trouble collecting payment, because contractors try to retain fees for as long as possible. Flexibility sounds seductive, as if it allows individuals to live their lives sanely, fitting work and leisure together in whatever way suits them and their families best. But returning the focus to the individual worker rather than the great corporate edifice simply adds the burdens of management to the working person’s day while creating permanent anxiety and ensuring employee compliance. 
As to what freelancers actually do in their home offices, in steamy cafés, in co-working spaces, I still have no idea, but I suspect that the sumptuous stationery cupboard is getting to be as rare as a monthly salary cheque.

The Fasinatng … Frustrating … Fascinating History of Autocorrect | Gadget Lab | WIRED

The Fasinatng … Frustrating … Fascinating History of Autocorrect | Gadget Lab | WIRED.

It’s not too much of an exaggeration to call autocorrect the overlooked underwriter of our era of mobile prolixity. Without it, we wouldn’t be able to compose windy love letters from stadium bleachers, write novels on subway commutes, or dash off breakup texts while in line at the post office. Without it, we probably couldn’t even have phones that look anything like the ingots we tickle—the whole notion of touchscreen typing, where our podgy physical fingers are expected to land with precision on tiny virtual keys, is viable only when we have some serious software to tidy up after us. Because we know autocorrect is there as brace and cushion, we’re free to write with increased abandon, at times and in places where writing would otherwise be impossible. Thanks to autocorrect, the gap between whim and word is narrower than it’s ever been, and our world is awash in easily rendered thought.

[…]

I find him in a drably pastel conference room at Microsoft headquarters in Redmond, Washington. Dean Hachamovitch—inventor on the patent for autocorrect and the closest thing it has to an individual creator—reaches across the table to introduce himself.

[…]

Hachamovitch, now a vice president at Microsoft and head of data science for the entire corporation, is a likable and modest man. He freely concedes that he types teh as much as anyone. (Almost certainly he does not often type hte. As researchers have discovered, initial-letter transposition is a much rarer error.)

[…]

The notion of autocorrect was born when Hachamovitch began thinking about a functionality that already existed in Word. Thanks to Charles Simonyi, the longtime Microsoft executive widely recognized as the father of graphical word processing, Word had a “glossary” that could be used as a sort of auto-expander. You could set up a string of words—like insert logo—which, when typed and followed by a press of the F3 button, would get replaced by a JPEG of your company’s logo. Hachamovitch realized that this glossary could be used far more aggressively to correct common mistakes. He drew up a little code that would allow you to press the left arrow and F3 at any time and immediately replace teh with the. His aha moment came when he realized that, because English words are space-delimited, the space bar itself could trigger the replacement, to make correction … automatic! Hachamovitch drew up a list of common errors, and over the next years he and his team went on to solve many of the thorniest. Seperate would automatically change to separate. Accidental cap locks would adjust immediately (making dEAR grEG into Dear Greg). One Microsoft manager dubbed them the Department of Stupid PC Tricks.
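
The space-bar trigger is easy to picture in code. Here is a minimal Python sketch of the idea described above, with an invented corrections table and a toy keystroke loop; it illustrates the general mechanism only, not Word’s actual implementation.

```python
# Toy sketch of space-triggered autocorrection: when the space bar "commits"
# a word, look it up in a corrections table and undo accidental caps lock.
# The table and the loop are illustrative, not Microsoft's code.

CORRECTIONS = {
    "teh": "the",
    "seperate": "separate",
    "adn": "and",
}

def fix_caps_lock(word: str) -> str:
    """Undo accidental caps lock, e.g. 'dEAR' -> 'Dear'."""
    if len(word) > 1 and word[0].islower() and word[1:].isupper():
        return word[0].upper() + word[1:].lower()
    return word

def autocorrect_keystrokes(keystrokes: str) -> str:
    """Apply corrections each time a space delimits a word."""
    out, word = [], []
    for ch in keystrokes:
        if ch == " ":                      # the space bar is the trigger
            committed = "".join(word)
            # case preservation is ignored here for brevity
            committed = CORRECTIONS.get(committed.lower(), committed)
            out.append(fix_caps_lock(committed))
            out.append(" ")
            word = []
        else:
            word.append(ch)
    out.append("".join(word))              # trailing word not yet committed
    return "".join(out)

print(autocorrect_keystrokes("teh dEAR seperate report "))
# -> "the Dear separate report "
```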

[…]

One day Hachamovitch went into his boss’s machine and changed the autocorrect dictionary so that any time he typed Dean it was automatically changed to the name of his coworker Mike, and vice versa. (His boss kept both his computer and office locked after that.) Children were even quicker to grasp the comedic ramifications of the new tool. After Hachamovitch went to speak to his daughter’s third-grade class, he got emails from parents that read along the lines of “Thank you for coming to talk to my daughter’s class, but whenever I try to type her name I find it automatically transforms itself into ‘The pretty princess.’”

[…]

On idiom, some of its calls seemed fairly clear-cut: gorilla warfare became guerrilla warfare, for example, even though a wildlife biologist might find that an inconvenient assumption. But some of the calls were quite tricky, and one of the trickiest involved the issue of obscenity. On one hand, Word didn’t want to seem priggish; on the other, it couldn’t very well go around recommending the correct spelling of mothrefukcer. Microsoft was sensitive to these issues. The solution lay in expanding one of spell-check’s most special lists, bearing the understated title: “Words which should neither be flagged nor suggested.”

[…]

One day Vignola sent Bill Gates an email. (Thorpe couldn’t recall who Bill Vignola was or what he did.) Whenever Bill Vignola typed his own name in MS Word, the email to Gates explained, it was automatically changed to Bill Vaginal. Presumably Vignola caught this sometimes, but not always, and no doubt this serious man was sad to come across like a character in a Thomas Pynchon novel. His email made it down the chain of command to Thorpe. And Bill Vaginal wasn’t the only complainant: As Thorpe recalls, Goldman Sachs was mad that Word was always turning it into Goddamn Sachs.

Thorpe went through the dictionary and took out all the words marked as “vulgar.” Then he threw in a few anatomical terms for good measure. The resulting list ran to hundreds of entries:

anally, asshole, battle-axe, battleaxe, bimbo, booger, boogers, butthead, Butthead …
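
That list is in effect a third word class, sitting between the dictionary and the unknown. A hedged Python sketch of how such a list might behave follows; the word lists and the matching strategy are invented for illustration, and only difflib’s standard fuzzy matching is real.

```python
import difflib

# Invented stand-in word lists, for illustration only.
DICTIONARY = {"the", "dear", "separate", "report", "cooperation"}
NEITHER_FLAG_NOR_SUGGEST = {"asshole", "bimbo", "booger"}

def check_word(word: str) -> list[str]:
    """Return suggested replacements; an empty list means 'leave it alone'."""
    w = word.lower()
    if w in DICTIONARY or w in NEITHER_FLAG_NOR_SUGGEST:
        return []                                    # ...neither flagged...
    # ...nor suggested: candidates come only from the public dictionary.
    return difflib.get_close_matches(w, DICTIONARY, n=3)

print(check_word("asshole"))       # [] -- typed on purpose, so not flagged
print(check_word("cooperattion"))  # ['cooperation'] -- never an excluded word
```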

With these sorts of master lists in place—the corrections, the exceptions, and the to-be-primly-ignored—the joists of autocorrect, then still a subdomain of spell-check, were in place for the early releases of Word. Microsoft’s dominance at the time ensured that autocorrect became globally ubiquitous, along with some of its idiosyncrasies. By the early 2000s, European bureaucrats would begin to notice what came to be called the Cupertino effect, whereby the word cooperation (bizarrely included only in hyphenated form in the standard Word dictionary) would be marked wrong, with a suggested change to Cupertino. There are thus many instances where one parliamentary back-bencher or another longs for increased Cupertino between nations. Since then, linguists have adopted the word cupertino as a term of art for such trapdoors that have been assimilated into the language.

[…]

Autocorrection is no longer an overqualified intern drawing up lists of directives; it’s now a vast statistical affair in which petabytes of public words are examined to decide when a usage is popular enough to become a probabilistically savvy replacement. The work of the autocorrect team has been made algorithmic and outsourced to the cloud.

A handful of factors are taken into account to weight the variables: keyboard proximity, phonetic similarity, linguistic context. But it’s essentially a big popularity contest. A Microsoft engineer showed me a slide where somebody was trying to search for the long-named Austrian action star who became governor of California. Schwarzenegger, he explained, “is about 10,000 times more popular in the world than its variants”—Shwaranegar or Scuzzynectar or what have you. Autocorrect has become an index of the most popular way to spell and order certain words.
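
In other words, the ranking boils down to a weighted score in which corpus frequency dominates and the other signals only nudge. A hedged sketch of that idea in Python, with invented frequency counts, a tiny keyboard-adjacency map and arbitrary weights; nothing here reflects any vendor’s real model.

```python
import difflib
import math

# Imaginary corpus counts and a tiny excerpt of a QWERTY adjacency map.
CORPUS_COUNTS = {"schwarzenegger": 10_000_000, "shwaranegar": 1_000, "scuzzynectar": 50}
ADJACENT_KEYS = {"a": "qwsz", "s": "awedxz", "e": "wsdr"}

def keyboard_proximity(typed: str, candidate: str) -> float:
    """Fraction of position-wise letter differences that are keyboard neighbours."""
    diffs = [(t, c) for t, c in zip(typed, candidate) if t != c]
    if not diffs:
        return 1.0
    near = sum(1 for t, c in diffs if c in ADJACENT_KEYS.get(t, ""))
    return near / len(diffs)

def score(typed: str, candidate: str) -> float:
    popularity = math.log10(CORPUS_COUNTS.get(candidate, 1))              # the big signal
    similarity = difflib.SequenceMatcher(None, typed, candidate).ratio()  # spelling closeness
    proximity = keyboard_proximity(typed, candidate)                      # fat-finger plausibility
    return 1.0 * popularity + 2.0 * similarity + 0.5 * proximity          # arbitrary weights

typed = "shwaranegar"
print(max(CORPUS_COUNTS, key=lambda c: score(typed, c)))
# -> 'schwarzenegger', carried mostly by sheer popularity
```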

When English spelling was first standardized, it was by the effective fiat of those who controlled the communicative means of production. Dictionaries and usage guides have always represented compromises between top-down prescriptivists—those who believe language ought to be used a certain way—and bottom-up descriptivists—those who believe, instead, that there’s no ought about it.

The emerging consensus on usage will be a matter of statistical arbitration, between the way “most” people spell something and the way “some” people do. If it proceeds as it has, it’s likely to be a winner-take-all affair, as alternatives drop out. (Though Apple’s recent introduction of personalized, “contextual” autocorrect—which can distinguish between the language you use with your friends and the language you use with your boss—might complicate that process of standardization and allow us the favor of our characteristic errors.)

[…]

The possibility of linguistic communication is grounded in the fact of what some philosophers of language have called the principle of charity: The first step in a successful interpretation of an utterance is the belief that it somehow accords with the universe as we understand it. This means that we have a propensity to take a sort of ownership over even our errors, hoping for the possibility of meaning in even the most perverse string of letters. We feel honored to have a companion like autocorrect who trusts that, despite surface clumsiness or nonsense, inside us always smiles an articulate truth.

[…]

Today the influence of autocorrect is everywhere: A commenter on the Language Log blog recently mentioned hearing of an entire dialect in Asia based on phone cupertinos, where teens used the first suggestion from autocomplete instead of their chosen word, thus creating a slang that others couldn’t decode. (It’s similar to the Anglophone teenagers who, in a previous texting era, claimed to have replaced the term of approval cool with that of book because of happenstance T9 input priority.) Surrealists once encouraged the practice of écriture automatique, or automatic writing, in order to reveal the peculiar longings of the unconscious. The crackpot suggestions of autocorrect have become our own form of automatic writing—but what they reveal are the peculiar statistics of a world id.

Tracks of My Tears: Design Observer

Tracks of My Tears: Design Observer.


Tears of ending and beginning, Rose-Lynn Fisher ©2013

Tears of grief, Rose-Lynn Fisher ©2013

Onion Tears, Rose-Lynn Fisher ©2013

Tears of possibility and hope, Rose-Lynn Fisher ©2013

1.
You can’t be impersonal when it comes to tears. They are by their nature intimate, as unique as the patterns of a snowflake or the swirl of the skin on your thumb. As Rose-Lynn Fisher’s photographs make clear, your tears are yours alone and each one is different.

2.
Fisher used a standard Zeiss light microscope and a digital microscopy camera to make these images. She photographed over one hundred tears in her quest to discover their distinctive formations. She worked like a surveyor mapping the topography of a new land. But rather than surveying the mountains and valleys of an external landscape her explorations are of the proteins, hormones and minerals of an inner world.

[…]

4.
Medieval theologians grouped tears into four different types:

Tears of contrition
Tears of sorrow
Tears of gladness
Tears of grace

Twenty-first-century scientists have identified three different types of tears:

Basal tears, which moisten the eye.
Reflex tears, caused by an outside irritant like a stray eyelash, chopping an onion or a smoky wind.
Emotional tears, triggered by sadness, grief, frustration, ecstasy, mourning, or loss.

5.
Emotional tears are packed full of hormones, up to 25 percent more than reflex tears. In Fisher’s photographs, a tear from chopping an onion looks very different from a tear of possibility and hope.

Emotional tears contain adrenocorticotropic hormone, which signals high levels of stress; leucine-enkephalin, an endorphin that reduces pain; and prolactin, a hormone that triggers breast-milk production (and is found in higher levels in women’s tears).

William Frey, of the St. Paul Ramsey Center in Minnesota, discovered that tears contain thirty times more manganese than blood; manganese is a mineral that affects mood and is linked to depression. All of these elements build up in the body during times of stress, and crying is a way for the body to release them. A good cry slows your heart rate; it helps you to return to an emotional equilibrium.

In other words, you can cry yourself back to mental health.

[…]

7.
Samuel Beckett once said “my words are my tears.” But the opposite is also true: tears are your words. Tears are a language, a means of communication. Overwhelmed by emotion, babies cry out in need, having no other way to express their feelings. A lover, not getting the response that she craves, cries in frustration: tears of distress as a plea for emotional connection. Tears flow when mere words don’t.

8.
Rose-Lynn Fisher writes that “the topography of tears is a momentary landscape.” Isn’t it strange that a tear, which is transitory and fragile, can look just like the topography of an actual landscape: the solid stuff of soil, water, stone and vegetation that has been in formation for thousands of years? How is it that the microcosm of the tear mirrors the macrocosm of the earth?

9.
In Lewis Carroll’s Alice in Wonderland, Alice cries when she grows to be nine feet tall, and she can’t get into the garden. She reprimands herself just like a parent scolding a child: “You ought to be ashamed of yourself, a great girl like you to go on crying in this way! Stop it this moment, I tell you!” But she can’t stop and she cries gallons of tears that form a large pool around her that is four inches deep.

And then Alice loses her sense of self. “Who in the world am I?” she asks, like a person experiencing a breakdown. She becomes more and more confused, imagines herself as someone else, and yearns to be told who she really is (“If I like that person I’ll come up”). She then bursts into tears again when she realizes how lonely she feels.

And then Alice shrinks and she finds herself swimming in the pool of her own tears, the same tears that she shed when she was nine feet tall. She meets a mouse, and later a Duck, a Dodo, a Lory and an Eaglet, and they all fall into the pool as well, and then they all climb ashore, and are saved.

Alice’s tears of distress become her means of salvation.

The Moral Hazards and Legal Conundrums of Our Robot-Filled Future | Science | WIRED

The Moral Hazards and Legal Conundrums of Our Robot-Filled Future | Science | WIRED.

Whether you find it exhilarating or terrifying (or both), progress in robotics and related fields like AI is raising new ethical quandaries and challenging legal codes that were created for a world in which a sharp line separates man from machine. Last week, roboticists, legal scholars, and other experts met at the University of California, Berkeley law school to talk through some of the social, moral, and legal hazards that are likely to arise as that line starts to blur.

[…]

We May Have Feelings for Robots

Darling studies the attachments people form with robots. “There’s evidence that people respond very strongly to robots that are designed to be lifelike,” she said. “We tend to project onto them and anthropomorphize them.”

Most of the evidence for this so far is anecdotal. Darling’s ex-boyfriend, for example, named his Roomba and would feel bad for it when it got stuck under the couch. She’s trying to study human empathy for robots in a more systematic way. In one ongoing study she’s investigating how people react when they’re asked to “hurt” or “kill” a robot by hitting it with various objects. Preliminary evidence suggests they don’t like it one bit.

Another study by Julie Carpenter, a University of Washington graduate student, found that soldiers develop attachments to the robots they use to detect and defuse roadside bombs and other weapons. In interviews with service members, Carpenter found that in some cases they named their robots, ascribed personality traits to them, and felt angry or even sad when their robot got blown up in the line of duty.

This emerging field of research has implications for robot design, Darling says. If you’re building a robot to help take care of elderly people, for example, you might want to foster a deep sense of engagement. But if you’re building a robot for military use, you wouldn’t want the humans to get so attached that they risk their own lives.

There might also be more profound implications. In a 2012 paper, Darling considers the possibility of robot rights. She admits it’s a provocative proposition, but notes that some arguments for animal rights focus not on the animals’ ability to experience pain and anguish but on the effect that cruelty to animals has on humans. If research supports the idea that abusing robots makes people more abusive towards people, it might be a good idea to have legal protections for social robots, Darling says.

Robots Will Have Sex With Us

Robotics is taking sex toys to a new level, and that raises some interesting issues, ranging from the appropriateness of human-robot marriages to using robots to replace prostitutes or spice up the sex lives of the elderly. Some of the most provocative questions involve child-like sex robots. Arkin, the Georgia Tech roboticist, thinks it’s worth investigating whether they could be used to rehabilitate sex offenders.

“We have a problem with pedophilia in society,” Arkin said. “What do we do with these people after they get out of prison? There are very high recidivism rates.” If convicted sex offenders were “prescribed” a child-like sex robot, much like heroin addicts are prescribed methadone as part of a program to kick the habit, it might be possible to reduce recidivism, Arkin suggests. A government agency would probably never fund such a project, Arkin says, and he doesn’t know of anyone else who would either. “But nonetheless I do believe there is a possibility that we may be able to better protect society through this kind of research, rather than having the sex robot cottage industry develop in seedy back rooms, which indeed it is already,” he said.

Even if—and it’s a big if—such a project could win funding and ethical approval, it would be difficult to carry out, Sharkey cautions. “How do you actually do the research until these things are out there in the wild and used for a while? How do you know you’re not creating pedophiles?” he said.

How the legal system would deal with child-like sex robots isn’t entirely clear, according to Ryan Calo, a law professor at the University of Washington. In 2002, the Supreme Court ruled that simulated child pornography (in which young adults or computer generated characters play the parts of children) is protected by the First Amendment and can’t be criminalized. “I could see that extending to embodied [robotic] children, but I can also see courts and regulators getting really upset about that,” Calo said.

Our Laws Aren’t Made for Robots

Child-like sex robots are just one of the many ways in which robots are likely to challenge the legal system in the future, Calo said. “The law assumes, by and large, a dichotomy between a person and a thing. Yet robotics is a place where that gets conflated,” he said.

For example, the concept of mens rea (Latin for “guilty mind”) is central to criminal law: For an act to be considered a crime, there has to be intent. Artificial intelligence could throw a wrench into that thinking, Calo said. “The prospect of robotics behaving in the wild, displaying emergent or learned behavior creates the possibility there will be crimes that no one really intended.”

To illustrate the point, Calo used the example of Darius Kazemi, a programmer who created a bot that buys random stuff for him on Amazon. “He comes home and he’s delighted to find some box that his bot purchased,” Calo said. But what if Kazemi’s bot bought some alcoholic candy, which is illegal in his home state of Massachusetts? Could he be held accountable? So far the bot hasn’t stumbled on Amazon’s chocolate liqueur candy offerings—it’s just hypothetical. But Calo thinks we’ll soon start seeing cases that raise these kinds of questions.

And it won’t stop there. The apparently imminent arrival of autonomous vehicles will raise new questions in liability law. Social robots inside the home will raise 4th Amendment issues. “Could the FBI get a warrant to plant a question in a robot you talk to, ‘So, where’d you go this weekend?’” Calo asked. Then there are issues of how to establish the limits that society deems appropriate. Should robots or the roboticists who make them be the target of our laws and regulations?

Jihad vs. McWorld – Benjamin R. Barber – The Atlantic

Jihad vs. McWorld – Benjamin R. Barber – The Atlantic.

Just beyond the horizon of current events lie two possible political futures—both bleak, neither democratic. The first is a retribalization of large swaths of humankind by war and bloodshed: a threatened Lebanonization of national states in which culture is pitted against culture, people against people, tribe against tribe—a Jihad in the name of a hundred narrowly conceived faiths against every kind of interdependence, every kind of artificial social cooperation and civic mutuality. The second is being borne in on us by the onrush of economic and ecological forces that demand integration and uniformity and that mesmerize the world with fast music, fast computers, and fast food—with MTV, Macintosh, and McDonald’s, pressing nations into one commercially homogenous global network: one McWorld tied together by technology, ecology, communications, and commerce. The planet is falling precipitantly apart AND coming reluctantly together at the very same moment.

[…]

The tendencies of what I am here calling the forces of Jihad and the forces of McWorld operate with equal strength in opposite directions, the one driven by parochial hatreds, the other by universalizing markets, the one re-creating ancient subnational and ethnic borders from within, the other making national borders porous from without. They have one thing in common: neither offers much hope to citizens looking for practical ways to govern themselves democratically. If the global future is to pit Jihad’s centrifugal whirlwind against McWorld’s centripetal black hole, the outcome is unlikely to be democratic—or so I will argue.

[…]

Four imperatives make up the dynamic of McWorld: a market imperative, a resource imperative, an information-technology imperative, and an ecological imperative. By shrinking the world and diminishing the salience of national borders, these imperatives have in combination achieved a considerable victory over factiousness and particularism, and not least of all over their most virulent traditional form—nationalism. It is the realists who are now Europeans, the utopians who dream nostalgically of a resurgent England or Germany, perhaps even a resurgent Wales or Saxony. Yesterday’s wishful cry for one world has yielded to the reality of McWorld.

THE MARKET IMPERATIVE. Marxist and Leninist theories of imperialism assumed that the quest for ever-expanding markets would in time compel nation-based capitalist economies to push against national boundaries in search of an international economic imperium. Whatever else has happened to the scientistic predictions of Marxism, in this domain they have proved farsighted. All national economies are now vulnerable to the inroads of larger, transnational markets within which trade is free, currencies are convertible, access to banking is open, and contracts are enforceable under law. In Europe, Asia, Africa, the South Pacific, and the Americas such markets are eroding national sovereignty and giving rise to entities—international banks, trade associations, transnational lobbies like OPEC and Greenpeace, world news services like CNN and the BBC, and multinational corporations that increasingly lack a meaningful national identity—that neither reflect nor respect nationhood as an organizing or regulative principle.

The market imperative has also reinforced the quest for international peace and stability, requisites of an efficient international economy. Markets are enemies of parochialism, isolation, fractiousness, war. Market psychology attenuates the psychology of ideological and religious cleavages and assumes a concord among producers and consumers—categories that ill fit narrowly conceived national or religious cultures. Shopping has little tolerance for blue laws, whether dictated by pub-closing British paternalism, Sabbath-observing Jewish Orthodox fundamentalism, or no-Sunday-liquor-sales Massachusetts puritanism. In the context of common markets, international law ceases to be a vision of justice and becomes a workaday framework for getting things done—enforcing contracts, ensuring that governments abide by deals, regulating trade and currency relations, and so forth.

Common markets demand a common language, as well as a common currency, and they produce common behaviors of the kind bred by cosmopolitan city life everywhere. Commercial pilots, computer programmers, international bankers, media specialists, oil riggers, entertainment celebrities, ecology experts, demographers, accountants, professors, athletes—these compose a new breed of men and women for whom religion, culture, and nationality can seem only marginal elements in a working identity. Although sociologists of everyday life will no doubt continue to distinguish a Japanese from an American mode, shopping has a common signature throughout the world. Cynics might even say that some of the recent revolutions in Eastern Europe have had as their true goal not liberty and the right to vote but well-paying jobs and the right to shop (although the vote is proving easier to acquire than consumer goods). The market imperative is, then, plenty powerful; but, notwithstanding some of the claims made for “democratic capitalism,” it is not identical with the democratic imperative.

THE RESOURCE IMPERATIVE. Democrats once dreamed of societies whose political autonomy rested firmly on economic independence. The Athenians idealized what they called autarky, and tried for a while to create a way of life simple and austere enough to make the polis genuinely self-sufficient. To be free meant to be independent of any other community or polis. Not even the Athenians were able to achieve autarky, however: human nature, it turns out, is dependency. By the time of Pericles, Athenian politics was inextricably bound up with a flowering empire held together by naval power and commerce—an empire that, even as it appeared to enhance Athenian might, ate away at Athenian independence and autarky. Master and slave, it turned out, were bound together by mutual insufficiency.

The dream of autarky briefly engrossed nineteenth-century America as well, for the underpopulated, endlessly bountiful land, the cornucopia of natural resources, and the natural barriers of a continent walled in by two great seas led many to believe that America could be a world unto itself. Given this past, it has been harder for Americans than for most to accept the inevitability of interdependence. But the rapid depletion of resources even in a country like ours, where they once seemed inexhaustible, and the maldistribution of arable soil and mineral resources on the planet, leave even the wealthiest societies ever more resource-dependent and many other nations in permanently desperate straits.

Every nation, it turns out, needs something another nation has; some nations have almost nothing they need.

THE INFORMATION-TECHNOLOGY IMPERATIVE. Enlightenment science and the technologies derived from it are inherently universalizing. They entail a quest for descriptive principles of general application, a search for universal solutions to particular problems, and an unswerving embrace of objectivity and impartiality.

Scientific progress embodies and depends on open communication, a common discourse rooted in rationality, collaboration, and an easy and regular flow and exchange of information. Such ideals can be hypocritical covers for power-mongering by elites, and they may be shown to be wanting in many other ways, but they are entailed by the very idea of science and they make science and globalization practical allies.

Business, banking, and commerce all depend on information flow and are facilitated by new communication technologies. The hardware of these technologies tends to be systemic and integrated—computer, television, cable, satellite, laser, fiber-optic, and microchip technologies combining to create a vast interactive communications and information network that can potentially give every person on earth access to every other person, and make every datum, every byte, available to every set of eyes. If the automobile was, as George Ball once said (when he gave his blessing to a Fiat factory in the Soviet Union during the Cold War), “an ideology on four wheels,” then electronic telecommunication and information systems are an ideology at 186,000 miles per second—which makes for a very small planet in a very big hurry. Individual cultures speak particular languages; commerce and science increasingly speak English; the whole world speaks logarithms and binary mathematics.

Moreover, the pursuit of science and technology asks for, even compels, open societies. Satellite footprints do not respect national borders; telephone wires penetrate the most closed societies. With photocopying and then fax machines having infiltrated Soviet universities and samizdat literary circles in the eighties, and computer modems having multiplied like rabbits in communism’s bureaucratic warrens thereafter, glasnost could not be far behind. In their social requisites, secrecy and science are enemies.

The new technology’s software is perhaps even more globalizing than its hardware. The information arm of international commerce’s sprawling body reaches out and touches distinct nations and parochial cultures, and gives them a common face chiseled in Hollywood, on Madison Avenue, and in Silicon Valley. Throughout the 1980s one of the most-watched television programs in South Africa was The Cosby Show. The demise of apartheid was already in production. Exhibitors at the 1991 Cannes film festival expressed growing anxiety over the “homogenization” and “Americanization” of the global film industry when, for the third year running, American films dominated the awards ceremonies. America has dominated the world’s popular culture for much longer, and much more decisively.

[…]

This kind of software supremacy may in the long term be far more important than hardware superiority, because culture has become more potent than armaments. What is the power of the Pentagon compared with Disneyland? Can the Sixth Fleet keep up with CNN? McDonald’s in Moscow and Coke in China will do more to create a global culture than military colonization ever could. It is less the goods than the brand names that do the work, for they convey life-style images that alter perception and challenge behavior. They make up the seductive software of McWorld’s common (at times much too common) soul.

Yet in all this high-tech commercial world there is nothing that looks particularly democratic. It lends itself to surveillance as well as liberty, to new forms of manipulation and covert control as well as new kinds of participation, to skewed, unjust market outcomes as well as greater productivity. The consumer society and the open society are not quite synonymous. Capitalism and democracy have a relationship, but it is something less than a marriage. An efficient free market after all requires that consumers be free to vote their dollars on competing goods, not that citizens be free to vote their values and beliefs on competing political candidates and programs. The free market flourished in junta-run Chile, in military-governed Taiwan and Korea, and, earlier, in a variety of autocratic European empires as well as their colonial possessions.

THE ECOLOGICAL IMPERATIVE. The impact of globalization on ecology is a cliche even to world leaders who ignore it. We know well enough that the German forests can be destroyed by Swiss and Italians driving gas-guzzlers fueled by leaded gas. We also know that the planet can be asphyxiated by greenhouse gases because Brazilian farmers want to be part of the twentieth century and are burning down tropical rain forests to clear a little land to plough, and because Indonesians make a living out of converting their lush jungle into toothpicks for fastidious Japanese diners, upsetting the delicate oxygen balance and in effect puncturing our global lungs. Yet this ecological consciousness has meant not only greater awareness but also greater inequality, as modernized nations try to slam the door behind them, saying to developing nations, “The world cannot afford your modernization; ours has wrung it dry!”

Each of the four imperatives just cited is transnational, transideological, and transcultural. Each applies impartially to Catholics, Jews, Muslims, Hindus, and Buddhists; to democrats and totalitarians; to capitalists and socialists. The Enlightenment dream of a universal rational society has to a remarkable degree been realized—but in a form that is commercialized, homogenized, depoliticized, bureaucratized, and, of course, radically incomplete, for the movement toward McWorld is in competition with forces of global breakdown, national dissolution, and centrifugal corruption. These forces, working in the opposite direction, are the essence of what I call Jihad.

Jihad, or the Lebanonization of the World

OPEC, the World Bank, the United Nations, the International Red Cross, the multinational corporation…there are scores of institutions that reflect globalization. But they often appear as ineffective reactors to the world’s real actors: national states and, to an ever greater degree, subnational factions in permanent rebellion against uniformity and integration—even the kind represented by universal law and justice. The headlines feature these players regularly: they are cultures, not countries; parts, not wholes; sects, not religions; rebellious factions and dissenting minorities at war not just with globalism but with the traditional nation-state. Kurds, Basques, Puerto Ricans, Ossetians, East Timoreans, Quebecois, the Catholics of Northern Ireland, Abkhasians, Kurile Islander Japanese, the Zulus of Inkatha, Catalonians, Tamils, and, of course, Palestinians—people without countries, inhabiting nations not their own, seeking smaller worlds within borders that will seal them off from modernity.

A powerful irony is at work here. Nationalism was once a force of integration and unification, a movement aimed at bringing together disparate clans, tribes, and cultural fragments under new, assimilationist flags. But as Ortega y Gasset noted more than sixty years ago, having won its victories, nationalism changed its strategy. In the 1920s, and again today, it is more often a reactionary and divisive force, pulverizing the very nations it once helped cement together.

[…]

The aim of many of these small-scale wars is to redraw boundaries, to implode states and resecure parochial identities: to escape McWorld’s dully insistent imperatives. The mood is that of Jihad: war not as an instrument of policy but as an emblem of identity, an expression of community, an end in itself. Even where there is no shooting war, there is fractiousness, secession, and the quest for ever smaller communities.

[…]

Among the tribes, religion is also a battlefield. (“Jihad” is a rich word whose generic meaning is “struggle”—usually the struggle of the soul to avert evil. Strictly applied to religious war, it is used only in reference to battles where the faith is under assault, or battles against a government that denies the practice of Islam. My use here is rhetorical, but does follow both journalistic practice and history.) Remember the Thirty Years War? Whatever forms of Enlightenment universalism might once have come to grace such historically related forms of monotheism as Judaism, Christianity, and Islam, in many of their modern incarnations they are parochial rather than cosmopolitan, angry rather than loving, proselytizing rather than ecumenical, zealous rather than rationalist, sectarian rather than deistic, ethnocentric rather than universalizing. As a result, like the new forms of hypernationalism, the new expressions of religious fundamentalism are fractious and pulverizing, never integrating. This is religion as the Crusaders knew it: a battle to the death for souls that if not saved will be forever lost.

The atmospherics of Jihad have resulted in a breakdown of civility in the name of identity, of comity in the name of community. International relations have sometimes taken on the aspect of gang war—cultural turf battles featuring tribal factions that were supposed to be sublimated as integral parts of large national, economic, postcolonial, and constitutional entities.

[…]

Neither McWorld nor Jihad is remotely democratic in impulse. Neither needs democracy; neither promotes democracy.

McWorld does manage to look pretty seductive in a world obsessed with Jihad. It delivers peace, prosperity, and relative unity—if at the cost of independence, community, and identity (which is generally based on difference). The primary political values required by the global market are order and tranquillity, and freedom—as in the phrases “free trade,” “free press,” and “free love.” Human rights are needed to a degree, but not citizenship or participation—and no more social justice and equality than are necessary to promote efficient economic production and consumption. Multinational corporations sometimes seem to prefer doing business with local oligarchs, inasmuch as they can take confidence from dealing with the boss on all crucial matters. Despots who slaughter their own populations are no problem, so long as they leave markets in place and refrain from making war on their neighbors (Saddam Hussein’s fatal mistake). In trading partners, predictability is of more value than justice.

[…]

Jihad delivers a different set of virtues: a vibrant local identity, a sense of community, solidarity among kinsmen, neighbors, and countrymen, narrowly conceived. But it also guarantees parochialism and is grounded in exclusion. Solidarity is secured through war against outsiders. And solidarity often means obedience to a hierarchy in governance, fanaticism in beliefs, and the obliteration of individual selves in the name of the group. Deference to leaders and intolerance toward outsiders (and toward “enemies within”) are hallmarks of tribalism—hardly the attitudes required for the cultivation of new democratic women and men capable of governing themselves. Where new democratic experiments have been conducted in retribalizing societies, in both Europe and the Third World, the result has often been anarchy, repression, persecution, and the coming of new, noncommunist forms of very old kinds of despotism.

[…]

To the extent that either McWorld or Jihad has a NATURAL politics, it has turned out to be more of an antipolitics. For McWorld, it is the antipolitics of globalism: bureaucratic, technocratic, and meritocratic, focused (as Marx predicted it would be) on the administration of things—with people, however, among the chief things to be administered. In its politico-economic imperatives McWorld has been guided by laissez-faire market principles that privilege efficiency, productivity, and beneficence at the expense of civic liberty and self-government.

For Jihad, the antipolitics of tribalization has been explicitly antidemocratic: one-party dictatorship, government by military junta, theocratic fundamentalism—often associated with a version of the Führerprinzip that empowers an individual to rule on behalf of a people.

[…]

How can democracy be secured and spread in a world whose primary tendencies are at best indifferent to it (McWorld) and at worst deeply antithetical to it (Jihad)? My guess is that globalization will eventually vanquish retribalization. The ethos of material “civilization” has not yet encountered an obstacle it has been unable to thrust aside.

[…]

…democracy is how we remonstrate with reality, the rebuke our aspirations offer to history. And if retribalization is inhospitable to democracy, there is nonetheless a form of democratic government that can accommodate parochialism and communitarianism, one that can even save them from their defects and make them more tolerant and participatory: decentralized participatory democracy. And if McWorld is indifferent to democracy, there is nonetheless a form of democratic government that suits global markets passably well—representative government in its federal or, better still, confederal variation.

[…]

It certainly seems possible that the most attractive democratic ideal in the face of the brutal realities of Jihad and the dull realities of McWorld will be a confederal union of semi-autonomous communities smaller than nation-states, tied together into regional economic associations and markets larger than nation-states—participatory and self-determining in local matters at the bottom, representative and accountable at the top. The nation-state would play a diminished role, and sovereignty would lose some of its political potency. The Green movement adage “Think globally, act locally” would actually come to describe the conduct of politics.

This vision reflects only an ideal, however—one that is not terribly likely to be realized. Freedom, Jean-Jacques Rousseau once wrote, is a food easy to eat but hard to digest. Still, democracy has always played itself out against the odds. And democracy remains both a form of coherence as binding as McWorld and a secular faith potentially as inspiriting as Jihad.

David Trotter reviews ‘Lifted’ by Andreas Bernard, translated by David Dollenmayer · LRB 3 July 2014

David Trotter reviews ‘Lifted’ by Andreas Bernard, translated by David Dollenmayer · LRB 3 July 2014.

According to elevator legend, it all began with a stunt. In the summer of 1854, at the Exhibition of the Industry of All Nations in New York, an engineer called Elisha Graves Otis gave regular demonstrations of his new safety device. Otis had himself hoisted into the air on a platform secured on either side by guide-rails and – at a suitably dramatic height – cut the cable. Instead of plummeting to the ground fifty feet below, the platform stopped dead after a couple of inches. ‘All safe, gentlemen, all safe,’ Otis would bellow at the expectant crowd. The device was simple enough: a flat-leaf cart spring above the platform splayed out to its full extent as soon as the cable was cut, engaging notches in the guide-rails. Has any mode of transport ever been safer? After 1854, malfunctioning (or non-existent) doors were the only direct risk still attached to travelling by lift. Safety first was not so much a motto as a premise. No wonder that the closest high-end TV drama has come to Sartrean nausea is the moment in Mad Men when a pair of elevator doors mysteriously parts in front of troubled genius Don Draper, who is left peering in astonishment down into a mechanical abyss. The cables coiling and uncoiling in the shaft stand in for the root of Roquentin’s chestnut tree.

Andreas Bernard is properly sceptical of myths of origin. It didn’t all begin in 1854, in fact. From Archimedes and Vitruvius onwards, descriptions survive of devices for the vertical transport of goods, primarily, but also of people. The English diplomat Charles Greville, writing in 1830, recalled with admiration a lift in the Genoese palace of the Sardinian royal couple: ‘For the comfort of their bodies he has a machine made like a car, which is drawn up by a chain from the bottom to the top of the house; it holds about six people, who can be at pleasure elevated to any storey, and at each landing place there is a contrivance to let them in and out.’ In June 1853, Harper’s New Monthly Magazine reported the imminent introduction of steam-powered elevators into private homes in New York, by means of which an ‘indolent, or fatigued, or aristocratic person’ could reach the upper floors. Confusingly, there was another engineering Otis around, Otis Tufts, who in 1859 patented an apparatus known as the Vertical Railway or Vertical Screw Elevator. The Vertical Railway, driven by a twenty-inch-wide iron screw running through its centre, was the first such device to boast an enclosed cab. It proved extremely reliable, but slow and costly.

How, then, did Otis’s stunt achieve the status of a myth of origin? It was theatrical, for a start. More important, it exploited what Bernard calls the 19th-century ‘trauma of the cable’. From the late Middle Ages, when mineshafts in Europe first reached depths greater than a few yards, some means had to be developed to bring the ore up to the surface. For centuries, cable winches powered in various ways allowed the vertical transport of raw materials and freight. By 1850, when elevators first began to appear in buildings, the depth of the mineshafts in the upper Harz and Ruhr regions had reached more than two thousand feet. So high was the risk of an accident caused by a cable breaking that until 1859 German mining regulations forbade the transport of miners in the rail-guided baskets that brought the ore up to the surface (they had to use ladders). Bernard’s emphasis on the history of mining usefully embeds the history of the elevator in the history not just of transport in general, but of the transport accident: itself about to give rise, courtesy of rail-guided transport of the horizontal kind, to trauma as a diagnostic category.

[…]

His main interest lies in the ways in which the advent of the elevator transformed the design, construction and experience of high-rise buildings, and thus of modern urban life in general (the focus remains on Germany and the United States throughout). From the 1870s onwards, all new multi-storey buildings in major American cities were constructed around an elevator shaft. The ‘perfection of elevator work’, as one commentator put it in 1891, had become the skyscraper’s ‘fundamental condition’. That, and steel frame construction. Bernard seems reluctant to get into a dispute as to which came first, or mattered more, but he maintains that the elevator was a ‘prerequisite’ for vertical growth. In the 1890s, the highest building in the world was the twenty-storey Masonic Temple in Chicago; the Woolworth Building in New York, completed in 1913, stood at 55 storeys. In Europe, the pace of change was a good deal slower, since the emphasis remained as much on adaptation as on innovative design.

[…]

He argues that the lasting symbolic consequence of the perfection of elevator work was the ‘recodification of verticality’ it brought about. During the final decade of the 19th century (an ‘epochal watershed’), the best rooms in the largest buildings ‘migrated’ from low to high in a decisive reversal of ‘hierarchic order’, while the worst went in the opposite direction. In Europe’s grand hotels, for example, the worst rooms had traditionally been at the top, since only poor people and hotel staff could be expected to climb all those flights of stairs. Lifts, however, ‘freed the upper storeys from the stigma of inaccessibility and lent them an unheard-of glamour’. A roughly comparable migration occurred at the other end of the social scale. Statistics for rental prices in Berlin in the period from the founding of the Reich in 1871 to the outbreak of the First World War demonstrate that the most expensive apartments were invariably on the first floor (the bel étage), the less expensive on the ground, second and third floors, and the cheapest at attic or basement level. The last two levels consistently attracted the stigma of ‘abnormality’. It was here, at the top and bottom of the building, that the urban underclass festered. By the end of the 19th century, sanitary reform had pretty much done for the basement as a dwelling-place. It took a while longer, as Bernard shows, for the elevator to domesticate the upper floors of the standard tenement block by rendering them easily accessible.

The bel étage wasn’t just on the way up. It entered, or rather had built for it, a separate symbolic dimension. Rich people realised that the stuff they’d always enjoyed doing at ground level was even more enjoyable when done on the top floor; and that being able to do it there at all was a useful display of the power wealth brings. In 1930s New York, the twin towers of the new Waldorf-Astoria hotel, which rose from the 29th to the 43rd storey, constituted its unique appeal. ‘Below the demarcation line of the 29th storey, the Waldorf-Astoria, although expensive, was accessible to everyone; above the line began an exclusive region of suites of as many as twelve rooms with private butler service.’ The upper floors of tall buildings, once given over to staff dormitories, had become what Bernard calls an ‘enclave of the elite’. The Waldorf-Astoria’s express elevators, travelling direct to the 29th floor, were as much barrier as conduit. Such discrimination between elevators, or between elevator speeds, played a significant part in the design of those ultimate enclaves of the managerial elite, the penthouse apartment and the executive suite. In 1965, the penthouse still had enough ‘unheard-of glamour’ to lend its name to a new men’s magazine.

[…]

Seen through the lens of canonical urban theory, a ride in a lift looks like the perfect opportunity for those jarring random encounters with people you don’t know that are said to characterise life in the big city. As Bernard puts it, ‘the elevator cab – in the days of Poe and Baudelaire just beginning to be installed in the grand hotels, by the time of Simmel and Benjamin a permanent part of urban architecture – is the contingent locale par excellence.’ For Bernard, the elevator is a Benjaminian street brought indoors and rotated on its axis: during the few seconds of ascent or descent, the perpetual ‘anaesthetising of attention’ allegedly required of the city-dweller becomes an acute anxiety. Bernard invokes Erving Goffman’s ethnomethodological analysis of the positions passengers customarily take up on entering a lift: the first beside the controls, the second in the corner diagonally opposite, the third somewhere along the rear wall, the fourth in the empty centre and so on; all of them at once turning to face the front, as though on parade. He terms the resulting intricate array of mutual aversions a ‘sociogram’. He’s right, of course. There is something about the way people behave in lifts which requires explanation. But does urban theory hold the key to that behaviour? Crossing the road is not at all the same as riding between floors.

The invention of the elevator belongs as securely to the history of mechanised transport as it does to the history of urban planning. After all, the trains which first obliged passengers to sit or stand in close proximity to one another for hours on end without exchanging a word ran between rather than across the great conurbations. Considered as a people-mover, the elevator ranks with those other epochal fin-de-siècle inventions, the motor car and the aeroplane. Like them, it combines high speed with a high degree of insulation from the outside world. It’s a vertical bullet train, a space rocket forever stuck in its silo – at least until the moment in Tim Burton’s Charlie and the Chocolate Factory when Willy Wonka presses the button marked ‘Up and Out’. An elevator exceeds a car or a plane in the claustrophobic extremity of its insulation from the outside world. It’s the collective endurance of protracted viewlessness, rather than urban ennui, that activates Bernard’s sociogram.

The clue to the elevator’s significance lies in the buttons that adorn its interior and exterior. Its automation, at the beginning of the 20th century, created a system of electronic signalling which brought the entire operation under the control of the individual user. In no other mode of transport could a vehicle be hailed, directed and dismissed entirely without assistance, and by a touch so slight it barely amounts to an expenditure of energy. The machine appears to work by information alone. Elevators, Bernard says, reprogrammed the high-rise building. It might be truer to say that they reprogrammed the people who made use of them, in buildings of any kind. Approaching the elevator bank, we alert the system to where we are and the direction we want to travel in. Pressing the button in the lift, we signal our precise destination and our confidence that the apparatus will come to a halt and the doors open when we get there. The closer we come to sending ourselves as a message, in competition or alliance with the messages sent by others, the more likely we are to arrive speedily, and intact.

[…]

You can only send yourself as a message successfully if you remain intact – that is, fully encrypted – during transmission. That’s what elevator protocol is for. Or so we might gather from the very large number of scenes set in lifts in movies from the 1930s onwards. The vast majority of these scenes involve breaches of protocol in which the breach is of far greater interest than the protocol. Desire erupts, or violence, shattering the sociogram’s frigid array. Or the lift, stopped in its tracks, ceases to be a lift. It becomes something else altogether: a prison cell to squeeze your way out of, or (Bernard suggests) a confessional. The eruptions are sometimes entertaining, sometimes not. But since they pay little or no attention to the protocols which have consistently defined the ‘atmosphere in the cab’, they often date badly. The student of elevator scenes in James Bond movies, for example, will discover only that while Daniel Craig in Quantum of Solace (2008) instantly unleashes a crisply definitive, neoliberal backwards head-butt, Sean Connery in Diamonds Are Forever (1971) has to absorb a good deal of heavy punishment before he’s able to apply the unarmed combat manoeuvre du jour: an Edward Heath of a flailing, two-handed downwards chop at the kidneys.

Rarer, and far more illuminating, are scenes in which the lift remains a lift, and the protocols, consequently, of greater interest than their potential or actual breach. These scenes are a gift to the cultural historian, and it’s unfortunate that Bernard’s allegiances to urban theory and to literature (especially to the literature of an earlier period) should have persuaded him to ignore them. The shrewdest representations are those which understand that the elevator is a place where messages meet, rather than people. In white-collar epics from King Vidor’s seminal The Crowd through Robert Wise’s highly inventive Executive Suite and the exuberant Jerry Lewis vehicle The Errand Boy to The Hudsucker Proxy, the Coen brothers’ screwball version of Frank Capra, what separates the upper floors from the lower is access to information. The express elevator, bypassing those floors on which actual business is done, constitutes a prototypical information superhighway ripe for abuse by finance capitalism. The Hudsucker Proxy, in particular, would have been grist to Bernard’s mill. It features a sweaty basement mailroom as well as cool expanses of executive suite. Its miniature New York set included a model of the Woolworth Building. But the film is about information rather than urban contingency. It’s only when gormless errand boy Tim Robbins, ordered to deliver a top-secret ‘Blue Letter’ (the year is 1959) to the top floor via express elevator, himself becomes in effect the message, that evil capitalist Paul Newman can see his way to the ingenious stock scam which drives the plot on towards last-minute angelic intervention.

The arrangement by phalanx required by lift protocol has the great virtue of precluding conversation. Cinema’s best elevator scenes delight in maintaining that such rules should not be broken, whether by head-butt or injudicious self-revelation. When two thugs intent on kidnap at the very least follow advertising executive Roger Thornhill into a packed lift in Hitchcock’s North by Northwest, his mother, who knows what he’s afraid of, but considers him a fantasist, asks them if they’re really trying to kill her son. Cary Grant does an excellent job of seeming more put out by the laughter which greets her sally than by the threat of kidnap. His disgust draws attention to the necessity, in a form of transport directed as much by the flow of data as by the flow of energy, of codes of conduct. It is a kind of meta-commentary. Something comparable happens in another of the many elevator scenes in Mad Men. Don Draper occupies one corner, a couple of insurance salesmen another. The one with his hat on is not to be deflected from his rancid sexual boasting by the entrance at the next floor of a woman whose only option is to stand directly in front of him. Draper tells the man to take his hat off; and when he doesn’t, removes it from his head and shoves it gently into his chest. That’s it. No head-butts, no expressions of feeling. If one code of conduct is to apply, in the earnest business of being parcelled up for delivery, they must all apply, all the time. Perhaps Draper has been to see Billy Wilder’s The Apartment, in which Jack Lemmon shows Shirley MacLaine he’s a true gent by remembering to take his hat off in the lift. These scenes comment not so much on specific codes as on codedness in general, in a world increasingly subsumed into information. For such a staid apparatus, the elevator has generated some pretty compelling stories.

In Praise of Idleness By Bertrand Russell

In Praise of Idleness By Bertrand Russell.

I think that there is far too much work done in the world, that immense harm is caused by the belief that work is virtuous, and that what needs to be preached in modern industrial countries is quite different from what always has been preached. Everyone knows the story of the traveler in Naples who saw twelve beggars lying in the sun (it was before the days of Mussolini), and offered a lira to the laziest of them. Eleven of them jumped up to claim it, so he gave it to the twelfth. This traveler was on the right lines.

[…]

Whenever a person who already has enough to live on proposes to engage in some everyday kind of job, such as school-teaching or typing, he or she is told that such conduct takes the bread out of other people’s mouths, and is therefore wicked. If this argument were valid, it would only be necessary for us all to be idle in order that we should all have our mouths full of bread. What people who say such things forget is that what a man earns he usually spends, and in spending he gives employment. As long as a man spends his income, he puts just as much bread into people’s mouths in spending as he takes out of other people’s mouths in earning. The real villain, from this point of view, is the man who saves. If he merely puts his savings in a stocking, like the proverbial French peasant, it is obvious that they do not give employment.

[…]

In view of the fact that the bulk of the public expenditure of most civilized Governments consists in payment for past wars or preparation for future wars, the man who lends his money to a Government is in the same position as the bad men in Shakespeare who hire murderers. The net result of the man’s economical habits is to increase the armed forces of the State to which he lends his savings. Obviously it would be better if he spent the money, even if he spent it in drink or gambling.

But, I shall be told, the case is quite different when savings are invested in industrial enterprises. When such enterprises succeed, and produce something useful, this may be conceded. In these days, however, no one will deny that most enterprises fail. That means that a large amount of human labor, which might have been devoted to producing something that could be enjoyed, was expended on producing machines which, when produced, lay idle and did no good to anyone. The man who invests his savings in a concern that goes bankrupt is therefore injuring others as well as himself. If he spent his money, say, in giving parties for his friends, they (we may hope) would get pleasure, and so would all those upon whom he spent money, such as the butcher, the baker, and the bootlegger. But if he spends it (let us say) upon laying down rails for surface cars in some place where surface cars turn out not to be wanted, he has diverted a mass of labor into channels where it gives pleasure to no one. Nevertheless, when he becomes poor through failure of his investment he will be regarded as a victim of undeserved misfortune, whereas the gay spendthrift, who has spent his money philanthropically, will be despised as a fool and a frivolous person.

[…]

I want to say, in all seriousness, that a great deal of harm is being done in the modern world by belief in the virtuousness of work, and that the road to happiness and prosperity lies in an organized diminution of work.

First of all: what is work? Work is of two kinds: first, altering the position of matter at or near the earth’s surface relatively to other such matter; second, telling other people to do so. The first kind is unpleasant and ill paid; the second is pleasant and highly paid. The second kind is capable of indefinite extension: there are not only those who give orders, but those who give advice as to what orders should be given. Usually two opposite kinds of advice are given simultaneously by two organized bodies of men; this is called politics. The skill required for this kind of work is not knowledge of the subjects as to which advice is given, but knowledge of the art of persuasive speaking and writing, i.e. of advertising.

[…]

Modern technique has made it possible for leisure, within limits, to be not the prerogative of small privileged classes, but a right evenly distributed throughout the community. The morality of work is the morality of slaves, and the modern world has no need of slavery.

[…]

To this day, 99 per cent of British wage-earners would be genuinely shocked if it were proposed that the King should not have a larger income than a working man. The conception of duty, speaking historically, has been a means used by the holders of power to induce others to live for the interests of their masters rather than for their own. Of course the holders of power conceal this fact from themselves by managing to believe that their interests are identical with the larger interests of humanity. Sometimes this is true; Athenian slave-owners, for instance, employed part of their leisure in making a permanent contribution to civilization which would have been impossible under a just economic system. Leisure is essential to civilization, and in former times leisure for the few was only rendered possible by the labors of the many. But their labors were valuable, not because work is good, but because leisure is good. And with modern technique it would be possible to distribute leisure justly without injury to civilization.

[…]

The war showed conclusively that, by the scientific organization of production, it is possible to keep modern populations in fair comfort on a small part of the working capacity of the modern world. If, at the end of the war, the scientific organization, which had been created in order to liberate men for fighting and munition work, had been preserved, and the hours of work had been cut down to four, all would have been well. Instead of that the old chaos was restored, those whose work was demanded were made to work long hours, and the rest were left to starve as unemployed. Why? Because work is a duty, and a man should not receive wages in proportion to what he has produced, but in proportion to his virtue as exemplified by his industry.

This is the morality of the Slave State, applied in circumstances totally unlike those in which it arose. No wonder the result has been disastrous. Let us take an illustration. Suppose that, at a given moment, a certain number of people are engaged in the manufacture of pins. They make as many pins as the world needs, working (say) eight hours a day. Someone makes an invention by which the same number of men can make twice as many pins: pins are already so cheap that hardly any more will be bought at a lower price. In a sensible world, everybody concerned in the manufacturing of pins would take to working four hours instead of eight, and everything else would go on as before. But in the actual world this would be thought demoralizing. The men still work eight hours, there are too many pins, some employers go bankrupt, and half the men previously concerned in making pins are thrown out of work. There is, in the end, just as much leisure as on the other plan, but half the men are totally idle while half are still overworked. In this way, it is insured that the unavoidable leisure shall cause misery all round instead of being a universal source of happiness. Can anything more insane be imagined?

The idea that the poor should have leisure has always been shocking to the rich. In England, in the early nineteenth century, fifteen hours was the ordinary day’s work for a man; children sometimes did as much, and very commonly did twelve hours a day. When meddlesome busybodies suggested that perhaps these hours were rather long, they were told that work kept adults from drink and children from mischief. When I was a child, shortly after urban working men had acquired the vote, certain public holidays were established by law, to the great indignation of the upper classes. I remember hearing an old Duchess say: ‘What do the poor want with holidays? They ought to work.’ People nowadays are less frank, but the sentiment persists, and is the source of much of our economic confusion.

[…]

If the ordinary wage-earner worked four hours a day, there would be enough for everybody and no unemployment — assuming a certain very moderate amount of sensible organization. This idea shocks the well-to-do, because they are convinced that the poor would not know how to use so much leisure. In America men often work long hours even when they are well off; such men, naturally, are indignant at the idea of leisure for wage-earners, except as the grim punishment of unemployment; in fact, they dislike leisure even for their sons. Oddly enough, while they wish their sons to work so hard as to have no time to be civilized, they do not mind their wives and daughters having no work at all. The snobbish admiration of uselessness, which, in an aristocratic society, extends to both sexes, is, under a plutocracy, confined to women; this, however, does not make it any more in agreement with common sense.

[…]

Industry, sobriety, willingness to work long hours for distant advantages, even submissiveness to authority, all these reappear; moreover authority still represents the will of the Ruler of the Universe, Who, however, is now called by a new name, Dialectical Materialism.

[…]

For ages, men had conceded the superior saintliness of women, and had consoled women for their inferiority by maintaining that saintliness is more desirable than power. At last the feminists decided that they would have both, since the pioneers among them believed all that the men had told them about the desirability of virtue, but not what they had told them about the worthlessness of political power. A similar thing has happened in Russia as regards manual work. For ages, the rich and their sycophants have written in praise of ‘honest toil’, have praised the simple life, have professed a religion which teaches that the poor are much more likely to go to heaven than the rich, and in general have tried to make manual workers believe that there is some special nobility about altering the position of matter in space, just as men tried to make women believe that they derived some special nobility from their sexual enslavement.

[…]

A large country, full of natural resources, awaits development, and has to be developed with very little use of credit. In these circumstances, hard work is necessary, and is likely to bring a great reward. But what will happen when the point has been reached where everybody could be comfortable without working long hours?

In the West, we have various ways of dealing with this problem. We have no attempt at economic justice, so that a large proportion of the total produce goes to a small minority of the population, many of whom do no work at all. Owing to the absence of any central control over production, we produce hosts of things that are not wanted. We keep a large percentage of the working population idle, because we can dispense with their labor by making the others overwork. When all these methods prove inadequate, we have a war: we cause a number of people to manufacture high explosives, and a number of others to explode them, as if we were children who had just discovered fireworks. By a combination of all these devices we manage, though with difficulty, to keep alive the notion that a great deal of severe manual work must be the lot of the average man.

[…]

The fact is that moving matter about, while a certain amount of it is necessary to our existence, is emphatically not one of the ends of human life. If it were, we should have to consider every navvy superior to Shakespeare. We have been misled in this matter by two causes. One is the necessity of keeping the poor contented, which has led the rich, for thousands of years, to preach the dignity of labor, while taking care themselves to remain undignified in this respect. The other is the new pleasure in mechanism, which makes us delight in the astonishingly clever changes that we can produce on the earth’s surface. Neither of these motives makes any great appeal to the actual worker. If you ask him what he thinks the best part of his life, he is not likely to say: ‘I enjoy manual work because it makes me feel that I am fulfilling man’s noblest task, and because I like to think how much man can transform his planet. It is true that my body demands periods of rest, which I have to fill in as best I may, but I am never so happy as when the morning comes and I can return to the toil from which my contentment springs.’ I have never heard working men say this sort of thing. They consider work, as it should be considered, a necessary means to a livelihood, and it is from their leisure that they derive whatever happiness they may enjoy.

It will be said that, while a little leisure is pleasant, men would not know how to fill their days if they had only four hours of work out of the twenty-four. In so far as this is true in the modern world, it is a condemnation of our civilization; it would not have been true at any earlier period. There was formerly a capacity for light-heartedness and play which has been to some extent inhibited by the cult of efficiency. The modern man thinks that everything ought to be done for the sake of something else, and never for its own sake. Serious-minded persons, for example, are continually condemning the habit of going to the cinema, and telling us that it leads the young into crime.

[…]

The butcher who provides you with meat and the baker who provides you with bread are praiseworthy, because they are making money; but when you enjoy the food they have provided, you are merely frivolous, unless you eat only to get strength for your work. Broadly speaking, it is held that getting money is good and spending money is bad. Seeing that they are two sides of one transaction, this is absurd; one might as well maintain that keys are good, but keyholes are bad. Whatever merit there may be in the production of goods must be entirely derivative from the advantage to be obtained by consuming them. The individual, in our society, works for profit; but the social purpose of his work lies in the consumption of what he produces. It is this divorce between the individual and the social purpose of production that makes it so difficult for men to think clearly in a world in which profit-making is the incentive to industry. We think too much of production, and too little of consumption. One result is that we attach too little importance to enjoyment and simple happiness, and that we do not judge production by the pleasure that it gives to the consumer.

When I suggest that working hours should be reduced to four, I am not meaning to imply that all the remaining time should necessarily be spent in pure frivolity. I mean that four hours’ work a day should entitle a man to the necessities and elementary comforts of life, and that the rest of his time should be his to use as he might see fit. It is an essential part of any such social system that education should be carried further than it usually is at present, and should aim, in part, at providing tastes which would enable a man to use leisure intelligently. I am not thinking mainly of the sort of things that would be considered ‘highbrow’.

[…]

The pleasures of urban populations have become mainly passive: seeing cinemas, watching football matches, listening to the radio, and so on. This results from the fact that their active energies are fully taken up with work; if they had more leisure, they would again enjoy pleasures in which they took an active part.

In the past, there was a small leisure class and a larger working class. The leisure class enjoyed advantages for which there was no basis in social justice; this necessarily made it oppressive, limited its sympathies, and caused it to invent theories by which to justify its privileges. These facts greatly diminished its excellence, but in spite of this drawback it contributed nearly the whole of what we call civilization. It cultivated the arts and discovered the sciences; it wrote the books, invented the philosophies, and refined social relations. Even the liberation of the oppressed has usually been inaugurated from above. Without the leisure class, mankind would never have emerged from barbarism.

The method of a leisure class without duties was, however, extraordinarily wasteful. None of the members of the class had to be taught to be industrious, and the class as a whole was not exceptionally intelligent. The class might produce one Darwin, but against him had to be set tens of thousands of country gentlemen who never thought of anything more intelligent than fox-hunting and punishing poachers. At present, the universities are supposed to provide, in a more systematic way, what the leisure class provided accidentally and as a by-product. This is a great improvement, but it has certain drawbacks. University life is so different from life in the world at large that men who live in an academic milieu tend to be unaware of the preoccupations and problems of ordinary men and women; moreover their ways of expressing themselves are usually such as to rob their opinions of the influence that they ought to have upon the general public. Another disadvantage is that in universities studies are organized, and the man who thinks of some original line of research is likely to be discouraged. Academic institutions, therefore, useful as they are, are not adequate guardians of the interests of civilization in a world where everyone outside their walls is too busy for unutilitarian pursuits.

In a world where no one is compelled to work more than four hours a day, every person possessed of scientific curiosity will be able to indulge it, and every painter will be able to paint without starving, however excellent his pictures may be. Young writers will not be obliged to draw attention to themselves by sensational pot-boilers, with a view to acquiring the economic independence needed for monumental works, for which, when the time at last comes, they will have lost the taste and capacity. Men who, in their professional work, have become interested in some phase of economics or government, will be able to develop their ideas without the academic detachment that makes the work of university economists often seem lacking in reality. Medical men will have the time to learn about the progress of medicine, teachers will not be exasperatedly struggling to teach by routine methods things which they learnt in their youth, which may, in the interval, have been proved to be untrue.

[…]

Good nature is, of all moral qualities, the one that the world needs most, and good nature is the result of ease and security, not of a life of arduous struggle. Modern methods of production have given us the possibility of ease and security for all; we have chosen, instead, to have overwork for some and starvation for others. Hitherto we have continued to be as energetic as we were before there were machines; in this we have been foolish, but there is no reason to go on being foolish forever.

All You Have Eaten: On Keeping a Perfect Record | Longreads

All You Have Eaten: On Keeping a Perfect Record | Longreads.

Over the course of his or her lifetime, the average person will eat 60,000 pounds of food, the weight of six elephants.

The average American will drink over 3,000 gallons of soda. He will eat about 28 pigs, 2,000 chickens, 5,070 apples, and 2,340 pounds of lettuce. How much of that will he remember, and for how long, and how well?

[…]

The human memory is famously faulty; the brain remains mostly a mystery. We know that comfort foods make the pleasure centers in our brains light up the way drugs do. We know, because of a study conducted by Northwestern University and published in the Journal of Neuroscience, that by recalling a moment, you’re altering it slightly, like a mental game of Telephone—the more you conjure a memory, the less accurate it will be down the line. Scientists have implanted false memories in mice and grown memories in pieces of brain in test tubes. But we haven’t made many noteworthy strides in the thing that seems most relevant: how not to forget.

Unless committed to memory or written down, what we eat vanishes as soon as it’s consumed. That’s the point, after all. But because the famous diarist Samuel Pepys wrote, in his first entry, “Dined at home in the garret, where my wife dressed the remains of a turkey, and in the doing of it she burned her hand,” we know that Samuel Pepys, in the 1600s, ate turkey. We know that, hundreds of years ago, Samuel Pepys’s wife burned her hand. We know, because she wrote it in her diary, that Anne Frank at one point ate fried potatoes for breakfast. She once ate porridge and “a hash made from kale that came out of the barrel.”

For breakfast on January 2, 2008, I ate oatmeal with pumpkin seeds and brown sugar and drank a cup of green tea.

I know because it’s the first entry in a food log I still keep today. I began it as an experiment in food as a mnemonic device. The idea was this: I’d write something objective every day that would cue my memories into the future—they’d serve as compasses by which to remember moments.

Andy Warhol kept what he called a “smell collection,” switching perfumes every three months so he could reminisce more lucidly on those months whenever he smelled that period’s particular scent. Food, I figured, took this even further. It involves multiple senses, and that’s why memories that surround food can come on so strong.

What I’d like to have is a perfect record of every day. I’ve long been obsessed with this impossibility, that every day be perfectly productive and perfectly remembered. What I remember from January 2, 2008 is that after eating the oatmeal I went to the post office, where an old woman was arguing with a postal worker about postage—she thought what she’d affixed to her envelope was enough and he didn’t.

I’m terrified of forgetting. My grandmother has battled Alzheimer’s for years now, and to watch someone battle Alzheimer’s—we say “battle,” as though there’s some way of winning—is terrifying. If I’m always thinking about dementia, my unscientific logic goes, it can’t happen to me (the way an earthquake comes when you don’t expect it, and so the best course of action is always to expect it). “Really, one might almost live one’s life over, if only one could make a sufficient effort of recollection” is a sentence I once underlined in John Banville’s The Sea (a book that I can’t remember much else about). But effort alone is not enough and isn’t particularly reasonable, anyway. A man named Robert Shields kept the world’s longest diary: he chronicled every five minutes of his life until a stroke in 2006 rendered him unable to. He wrote about microwaving foods, washing dishes, bathroom visits, writing itself. When he died in 2007, he left 37.5 million words behind—ninety-one boxes of paper. Reading his obituary, I wondered if Robert Shields ever managed to watch a movie straight through.

Last spring, as part of a NASA-funded study, a crew of three men and three women with “astronaut-like” characteristics spent four months in a geodesic dome in an abandoned quarry on the northern slope of Hawaii’s Mauna Loa volcano.

For those four months, they lived and ate as though they were on Mars, only venturing outside to the surrounding Mars-like, volcanic terrain, in simulated space suits.[1] Hawaii Space Exploration Analog and Simulation (HI-SEAS) is a four-year project: a series of missions meant to simulate and study the challenges of long-term space travel, in anticipation of mankind’s eventual trip to Mars. This first mission’s focus was food.

Getting to Mars will take roughly six to nine months each way, depending on trajectory; the mission itself will likely span years. So the question becomes: How do you feed astronauts for so long? On “Mars,” the HI-SEAS crew alternated between two days of pre-prepared meals and two days of dome-cooked meals of shelf-stable ingredients. Researchers were interested in the answers to a number of behavioral issues: among them, the well-documented phenomenon of menu fatigue (when International Space Station astronauts grow weary of their packeted meals, they tend to lose weight). They wanted to see what patterns would evolve over time if a crew’s members were allowed dietary autonomy, and given the opportunity to cook for themselves (“an alternative approach to feeding crews of long term planetary outposts,” read the open call).

Everything was hyper-documented. Everything eaten was logged in painstaking detail: weighed, filmed, and evaluated. The crew filled in surveys before and after meals: queries into how hungry they were, their first impressions, their moods, how the food smelled, what its texture was, how it tasted. They documented their time spent cooking; their water usage; the quantity of leftovers, if any. The goal was to measure the effect of what they ate on their health and morale, along with other basic questions concerning resource use. How much water will it take to cook on Mars? How much water will it take to wash dishes? How much time is required; how much energy? How will everybody feel about it all?

[…]

The main food study had a big odor identification component to it: the crew took scratch-n-sniff tests, which Kate said she felt confident about at the mission’s start, and less certain about near the end. “The second-to-last test,” she said, “I would smell grass and feel really wistful.” Their noses were mapped with sonogram because, in space, the shape of your nose changes. And there were, on top of this, studies unrelated to food. They exercised in anti-microbial shirts (laundry doesn’t happen in space), evaluated their experiences hanging out with robot pets, and documented their sleep habits.

[…]

“We all had relationships outside that we were trying to maintain in some way,” Kate said. “Some were kind of new, some were tenuous, some were old and established, but they were all very difficult to maintain. A few things that could come off wrong in an e-mail could really bum you out for a long time.”

She told me about another crew member whose boyfriend didn’t email her at his usual time. This was roughly halfway through the mission. She started to get obsessed with the idea that maybe he got into a car accident. “Like seriously obsessed,” Kate said. “I was like, ‘I think your brain is telling you things that aren’t actually happening. Let’s just be calm about this,’ and she was like, ‘Okay, okay.’ But she couldn’t sleep that night. In the end he was just like, ‘Hey, what’s up?’ I knew he would be fine, but I could see how she could think something serious had happened.”

“My wife sent me poems every day but for a couple days she didn’t,” Kate said. “Something was missing from those days, and I don’t think she could have realized how important they were. It was weird. Everything was bigger inside your head because you were living inside your head.”

[…]

When I look back on my meals from the past year, the food log does the job I intended more or less effectively.

I can remember, with some clarity, the particulars of given days: who I was with, how I was feeling, the subjects discussed. There was the night in October I stress-scarfed a head of romaine and peanut butter packed onto old, hard bread; the somehow not-sobering bratwurst and fries I ate on day two of a two-day hangover, while trying to keep things light with somebody to whom, the two nights before, I had aired more than I meant to. There was the night in January I cooked “rice, chicken stirfry with bell pepper and mushrooms, tomato-y Chinese broccoli, 1 bottle IPA” with my oldest, best friend, and we ate the stirfry and drank our beers slowly while commiserating about the most recent conversations we’d had with our mothers.

But reading the entries from 2008, that first year, does something else to me: it suffuses me with the same mortification as if I’d written down my most private thoughts (that reaction is what keeps me from maintaining a more conventional journal). There’s nothing especially incriminating about my diet, except maybe that I ate tortilla chips with unusual frequency, but the fact that it’s just food doesn’t spare me from the horror and head-shaking that comes with reading old diaries. Mentions of certain meals conjure specific memories, but mostly what I’m left with are the general feelings from that year. They weren’t happy ones. I was living in San Francisco at the time. A relationship was dissolving.

It seems to me that the success of a relationship depends on a shared trove of memories. Or not shared, necessarily, but not incompatible. That’s the trouble, I think, with parents and children: parents retain memories of their children that the children themselves don’t share. My father’s favorite meal is breakfast and his favorite breakfast restaurant is McDonald’s, and I remember—having just read Michael Pollan or watched Super Size Me—self-righteously not ordering my regular egg McMuffin one morning, and how that actually hurt him.

When a relationship goes south, it’s hard to pinpoint just where or how—especially after a prolonged period of it heading that direction. I was at a loss with this one. Going forward, I didn’t want not to be able to account for myself. If I could remember everything, I thought, I’d be better equipped; I’d be better able to make proper, comprehensive assessments—informed decisions. But my memory had proved itself unreliable, and I needed something better. Writing down food was a way to turn my life into facts: if I had all the facts, I could keep them straight. So the next time this happened I’d know exactly why—I’d have all the data at hand.

In the wake of that breakup there were stretches of days and weeks of identical breakfasts and identical dinners. Those days and weeks blend into one another, become indistinguishable, and who knows whether I was too sad to be imaginative or all the unimaginative food made me sadder.

[…]

“I’m always really curious about who you are in a different context. Who am I completely removed from Earth—or pretending to be removed from Earth? When you’re going further and further from this planet, with all its rules and everything you’ve ever known, what happens? Do you invent new rules? What matters to you when you don’t have constructs? Do you take the constructs with you? On an individual level it was an exploration of who I am in a different context, and on a larger scale, going to another planet is an exploration about what humanity is in a different context.”

[…]

What I remember is early that evening, drinking sparkling wine and spreading cream cheese on slices of a soft baguette from the fancy Key Biscayne Publix, then spooning grocery-store caviar onto it (“Lumpfish caviar and Prosecco, definitely, on the balcony”). I remember cooking dinner unhurriedly (“You were comparing prices for the seafood and I was impatient”)—the thinnest pasta I could find, shrimp and squid cooked in wine and lots of garlic—and eating it late (“You cooked something good, but I can’t remember what”) and then drinking a café Cubano even later (“It was so sweet it made our teeth hurt and then, for me at least, immediately precipitated a metabolic crisis”) and how, afterward, we all went to the empty beach and got in the water which was, on that warm summer day, not even cold (“It was just so beautiful after the rain”).

“And this wasn’t the same trip,” wrote that wrong-for-me then-boyfriend, “but remember when you and I walked all the way to that restaurant in Bill Baggs park, at the southern tip of the island, and we had that painfully sweet white sangria, and ceviche, and walked back and got tons of mosquito bites, but we didn’t care, and then we were on the beach somehow and we looked at the red lights on top of all the buildings, and across the channel at Miami Beach, and went in the hot Miami ocean, and most importantly it was National Fish Day?”

And it’s heartening to me that I do remember all that—had remembered without his prompting, or consulting the record (I have written down: “D: ceviche; awful sangria; fried plantains; shrimp paella.” “It is National fish day,” I wrote. “There was lightning all night!”). It’s heartening that my memory isn’t as unreliable as I worry it is. I remember it exactly as he describes: the too-sweet sangria at that restaurant on the water, how the two of us had giggled so hard over nothing and declared that day “National Fish Day,” finding him in the kitchen at four in the morning, dipping a sausage into mustard—me taking that other half of the sausage, dipping it into mustard—the two of us deciding to drive the six hours back to Gainesville, right then.

“That is a really happy memory,” he wrote to me. “That is my nicest memory from that year and from that whole period. I wish we could live it again, in some extra-dimensional parallel life.”

Three years ago I moved back to San Francisco, which was, for me, a new-old city.

I’d lived there twice before. The first time I lived there was a cold summer in 2006, during which I met that man I’d be broken up about a couple years later. And though that summer was before I started writing down the food, and before I truly learned how to cook for myself, I can still remember flashes: a dimly lit party and drinks with limes in them and how, ill-versed in flirting, I took the limes from his drink and put them into mine. I remember a night he cooked circular ravioli he’d bought from an expensive Italian grocery store, and zucchini he’d sliced into thin coins. I remember him splashing Colt 45—leftover from a party—into the zucchini as it was cooking, and all of that charming me: the Colt 45, the expensive ravioli, this dinner of circles.

The second time I lived in San Francisco was the time our thing fell apart. This was where my terror had originated: where I remembered the limes and the ravioli, he remembered or felt the immediacy of something else, and neither of us was right or wrong to remember what we did—all memories, of course, are valid—but still, it sucked. And now I have a record reminding me of the nights I came home drunk and sad and, with nothing else in the house, sautéed kale; blanks on the days I ran hungry to Kezar Stadium from the Lower Haight, running lap after lap after lap to turn my brain off, stopping to read short stories at the bookstore on the way home, all to turn off the inevitable thinking, and at home, of course, the inevitable thinking.

[…]

I’m not sure what to make of this data—what conclusions, if any, to draw. What I know is that it accumulates and disappears and accumulates again. No matter how vigilantly we keep track—even if we spend four months in a geodesic dome on a remote volcano with nothing to do but keep track—we experience more than we have the capacity to remember; we eat more than we can retain; we feel more than we can possibly carry with us. And maybe forgetting isn’t so bad. I know there is the “small green apple” from the time we went to a moving sale and he bought bricks, and it was raining lightly, and as we were gathering the bricks we noticed an apple tree at the edge of the property with its branches overhanging into the yard, and we picked two small green apples that’d been washed by the rain, and wiped them off on our shirts. They surprised us by being sweet and tart and good. We put the cores in his car’s cup holders. There was the time he brought chocolate chips and two eggs and a Tupperware of milk to my apartment, and we baked cookies. There are the times he puts candy in my jacket’s small pockets—usually peppermints so ancient they’ve melted and re-hardened inside their wrappers—which I eat anyway, and then are gone, but not gone.

Why You Secretly Hate Cool Bars | Wait But Why

Why You Secretly Hate Cool Bars | Wait But Why.

The word “bar” can refer to a variety of places—a handy rule is, the cooler the bar, the more horrible the life experience it will provide. And on a weekend night, the quintessential cool, super-popular, loud, dark city bar becomes a place of genuine hardship.

The problem begins because you have this idea in your head that a cool bar is a fun place to be. You think to yourself, “It’s time for a big weekend. Excited to hit the bars!” without what should be the follow-up thought, “Oh wait no, I remember now that weekend bars are terrible places to go to.”

[…]

If you want to understand how a cool bar thinks, just take the way every other business thinks—”please the customer and they’ll come back”—and do the opposite.

I call it the You’re A Little Bitch strategy. Being forced to stand in line like a tamed snail—often when it’s cold and even sometimes when the bar is empty—is your first taste of the You’re a Little Bitch strategy.

While you wait, you’ll watch several all-girl groups walk to the front of the line without waiting, where the bouncer opens the rope and lets them in. Ahead of you. Because you’re a little bitch.

When you finally get to the front, you’ll notice there’s no sign with the bar’s name anywhere, because the bar likes to watch its little bitch customers go through extra trouble to find them.

You’re then asked for your ID by someone who may not have been the biggest dick in your high school—but he was the biggest dick in someone’s high school.

He then shuffles your little bitch ass along to the next stage, where they show everyone how desperately you want to be their customer by charging you $10 just to come in. They cap things off by stamping their logo on your undignified hand, just because they can.

An uninformed observer would only assume, after seeing everything you just went through, that the place you were about to enter would be some coveted utopia of pleasures. They’d be pretty surprised to see you walk into this:

The first moment you walk into a scene like this brings a distinct mix of dread and hopelessness. It’s an unbearably loud, dark, crowded cauldron of hell, and nothing fun can possibly occur here.

I’m not sure when it happened or why it happened, but at some point along the line we decided that the heinous combination of Loud/Dark/Crowded was the optimal nightlife atmosphere.[1] Maybe it started because clubs were trying to imitate the vibe at concerts, and then bars started imitating clubs to seem hipper—I’m not sure. But where it’s all left us is a place that disregards the concept of a human, and there you are in the middle of it.

[…]

Activity 1) Getting a Drink

After wedging your coat into a nook in the wall and saying goodbye to it for the last time, it’s time to go get your first drink. You were the first one of your friends to walk in the door, so you’re in the lead as your group works its way through the crowd, which means you’re the one who’s gonna drop the worst $54 a human can spend on a round of drinks no one will remember. But that’s the end goal—first, you need to figure out how to get through the three layers of people also trying to order drinks:

It’s a sickening undertaking. And depending on your level of aggressiveness and luck, making the worst purchase of your life could take anywhere from 3 to 20 stressful minutes. I’ve spent at least a cumulative week protruding my face forward, vigorously locking my eyes on the bartender’s face and still not being able to make eye contact.

You finally get back to your friends with drinks, just in time to start the primary bar activity—

Activity 2) Standing there talking to no one

Standing there talking to no one is a centerpiece of any night at the bar. If you don’t look carefully, Loud/Dark/Crowded will give you the impression that everyone in the bar is having fun and being social. But next time you’re at one, take a good look around the room, and you’ll see a surprising percentage of the people looking like this guy:

Just 30 minutes earlier, this guy was at dinner with his friends—talking, laughing, and sitting comfortably. Now, luckily, the real fun has begun.

Activity 3) Holding something

Almost as ubiquitous as Activity 2, holding something—usually a cold drink—is popular in bars around the world. The thing we all ignore is that holding a cold drink is shitty. A) Holding anything up for an extended period of time is shitty, B) A cold, wet drink is especially unpleasant to hold, and C) because bars are insanely crowded and people are constantly moving, your elbow will be bumped about once every 30 seconds, continuously spilling the drink on your hand and wrist. If you were in a restaurant and someone told you you had to hold your drink a few inches off the table while you sat there, you’d leave.

Unfortunately, putting it down isn’t really an option, because holding nothing at a bar frees up your hands, which has the side-effect of making you suddenly aware that you’re just standing there talking to no one, and you might panic. The solution is to quickly hold something else, usually your phone, which whisks you back into hiding.

Activity 4) Yelling out randomly to let people know you’re having a good time

Desperate to maintain the “This is fun!” narrative we’ve all been sworn to, you’ll sometimes hear a person yell out to no one in particular. They won’t yell an actual word—just something unendearing like “Woooh!” or “Ohhhh!” Relative to other activities, this is one of the most fun moments you’ll have in the bar.

Activity 5) Screaming words toward a person’s head

At some point you’ll decide to try to interact with your friends, since you’re in theory having a night out together. There’s no chance of presenting information in a nuanced way, so the conversation stays crude and basic—I’d estimate that 20 minutes of bar conversation accomplishes what roughly 1 minute of restaurant conversation does.

You might even get ambitious and decide to start accosting strangers. This tends to be an upsetting experience for both sides of the interaction, and almost never leads to anything fruitful. The irony of all this is that the Loud/Dark/Crowded cauldron of hell vibe is there in the first place for single people who want to meet single people, and bars don’t even do a good job with their prime purpose. Bars are a terrible place to meet someone if you’re single. You can barely see what people look like, let alone any subtle facial expressions that convey personality. And because it’s so crowded and hard to hear anything, mingling doesn’t really happen, which leaves aggressive conversation-starting (i.e. accosting strangers) as the only real way to get things off the ground.

Once you’re in a conversation with someone new, you’ll spend 6 minutes getting through the first nine lines of small talk and still have no idea if the stranger has a sense of humor—not a good environment to build chemistry.

[…]

Activity 7) Crying

A lot of people cry in bars.

[…]

Activity 10) Engaging with filth

Bars are a great place to really soak up the collective sludge of humanity. From the sticky floors to the vomit to the strangers making out to everyone breathing on everyone else to the bartender handling money and then shoving a lime into your drink, it’s a quality of living only drunk people could create and only drunk people could endure. The most disgusting exhibit is the men’s bathroom, where 120 drunk men have each sloppily peed 1/4th on the floor—which makes it a similar place to a bathroom where 30 men have peed only on the floor, and a place you’ll have to visit at least twice.

[…]

Suddenly remembering that food exists, you’re reenergized and work your way to the exit through the closing-time crowd of ultra-horny guys making furious last-second attempts at meeting someone. You’re pleasantly surprised to actually find your coat and then you head out the door, making sure to forget your credit card behind the bar.

And you’re done. Almost…

There’s one more critical step—the moment that propagates the bar species onto the next night: You need to convince yourself that the night was super fun.

Of course, loud, dark, crowded bars are not fun. But drunk usually is fun—no matter where it is. Go to the grocery store drunk with a bunch of friends, and you’ll have fun. Go ride the bus around town—if you’re drunk, you’ll probably have fun. If you had a good time at the bar, what actually happened is you were drunk and the bar was not quite able to ruin it for you. If something is truly fun, it should still be at least a little fun sober, and bars are not even a little fun sober.

Some people aren’t even conscious of the fact that they hate these bars and for them, the self-convincing is an automated process that takes place all through the night. For others, the delusion is a bit more forced and takes a week or two to take hold. A few people won’t ever twist the memory, but enough of their friends will that they’ll need bars for another purpose—avoiding FOMO—and they’ll be back before long.

We have a problem here, with no foreseeable solution—and until something changes, the weekend streets will be lined with little bitches, patiently waiting.

If everyone’s an idiot, guess who’s a jerk? – Eric Schwitzgebel – Aeon

If everyone’s an idiot, guess who’s a jerk? – Eric Schwitzgebel – Aeon.

Picture the world through the eyes of the jerk. The line of people in the post office is a mass of unimportant fools; it’s a felt injustice that you must wait while they bumble with their requests. The flight attendant is not a potentially interesting person with her own cares and struggles but instead the most available face of a corporation that stupidly insists you shut your phone. Custodians and secretaries are lazy complainers who rightly get the scut work. The person who disagrees with you at the staff meeting is an idiot to be shot down. Entering a subway is an exercise in nudging past the dumb schmoes.

We need a theory of jerks. We need such a theory because, first, it can help us achieve a calm, clinical understanding when confronting such a creature in the wild. Imagine the nature-documentary voice-over: ‘Here we see the jerk in his natural environment. Notice how he subtly adjusts his dominance display to the Italian restaurant situation…’ And second – well, I don’t want to say what the second reason is quite yet.

As it happens, I do have such a theory. But before we get into it, I should clarify some terminology. The word ‘jerk’ can refer to two different types of person (I set aside sexual uses of the term, as well as more purely physical senses). The older use of ‘jerk’ designates a kind of chump or an ignorant fool, though not a morally odious one.

[…]

The jerk-as-fool usage seems to have begun as a derisive reference to the unsophisticated people of a ‘jerkwater town’: that is, a town not rating a full-scale train station, requiring the boiler man to pull on a chain to water his engine. The term expresses the travelling troupe’s disdain. Over time, however, ‘jerk’ shifted from being primarily a class-based insult to its second, now dominant, sense as a term of moral condemnation. Such linguistic drift from class-based contempt to moral deprecation is a common pattern across languages, as observed by Friedrich Nietzsche in On the Genealogy of Morality (1887). (In English, consider ‘rude’, ‘villain’, ‘ignoble’.) And it is the immoral jerk who concerns me here.

Why, you might be wondering, should a philosopher make it his business to analyse colloquial terms of abuse? Doesn’t Urban Dictionary cover that kind of thing quite adequately? Shouldn’t I confine myself to truth, or beauty, or knowledge, or why there is something rather than nothing (to which the Columbia philosopher Sidney Morgenbesser answered: ‘If there was nothing you’d still be complaining’)? I am, in fact, interested in all those topics. And yet I suspect there’s a folk wisdom in the term ‘jerk’ that points toward something morally important. I want to extract that morally important thing, to isolate the core phenomenon towards which I think the word is groping. Precedents for this type of work include the Princeton philosopher Harry Frankfurt’s essay ‘On Bullshit’ (2005) and, closer to my target, the Irvine philosopher Aaron James’s book Assholes (2012). Our taste in vulgarity reveals our values.

I submit that the unifying core, the essence of jerkitude in the moral sense, is this: the jerk culpably fails to appreciate the perspectives of others around him, treating them as tools to be manipulated or idiots to be dealt with rather than as moral and epistemic peers. This failure has both an intellectual dimension and an emotional dimension, and it has these two dimensions on both sides of the relationship. The jerk himself is both intellectually and emotionally defective, and what he defectively fails to appreciate is both the intellectual and emotional perspectives of the people around him. He can’t appreciate how he might be wrong and others right about some matter of fact; and what other people want or value doesn’t register as of interest to him, except derivatively upon his own interests. The bumpkin ignorance captured in the earlier use of ‘jerk’ has changed into a type of moral ignorance.

Some related traits are already well-known in psychology and philosophy – the ‘dark triad’ of Machiavellianism, narcissism, and psychopathy, and James’s conception of the asshole, already mentioned. But my conception of the jerk differs from all of these. The asshole, James says, is someone who allows himself to enjoy special advantages out of an entrenched sense of entitlement. That is one important dimension of jerkitude, but not the whole story. The callous psychopath, though cousin to the jerk, has an impulsivity and love of risk-taking that need be no part of the jerk’s character. Neither does the jerk have to be as thoroughly self-involved as the narcissist or as self-consciously cynical as the Machiavellian, though narcissism and Machiavellianism are common enough jerkish attributes.

[…]

The opposite of the jerk is the sweetheart. The sweetheart sees others around him, even strangers, as individually distinctive people with valuable perspectives, whose desires and opinions, interests and goals are worthy of attention and respect. The sweetheart yields his place in line to the hurried shopper, stops to help the person who dropped her papers, calls an acquaintance with an embarrassed apology after having been unintentionally rude. In a debate, the sweetheart sees how he might be wrong and the other person right.

The moral and emotional failure of the jerk is obvious. The intellectual failure is obvious, too: no one is as right about everything as the jerk thinks he is. He would learn by listening. And one of the things he might learn is the true scope of his jerkitude – a fact about which, as I will explain shortly, the all-out jerk is inevitably ignorant. Which brings me to the other great benefit of a theory of jerks: it might help you figure out if you yourself are one.

[…]

With Vazire’s model of self-knowledge in mind, I conjecture a correlation of approximately zero between how one would rate oneself in relative jerkitude and one’s actual true jerkitude. The term is morally loaded, and rationalisation is so tempting and easy! Why did you just treat that cashier so harshly? Well, she deserved it – and anyway, I’ve been having a rough day. Why did you just cut into that line of cars at the last minute, not waiting your turn to exit? Well, that’s just good tactical driving – and anyway, I’m in a hurry! Why did you seem to relish failing that student for submitting her essay an hour late? Well, the rules were clearly stated; it’s only fair to the students who worked hard to submit their essays on time – and that was a grimace not a smile.

Since the most effective way to learn about defects in one’s character is to listen to frank feedback from people whose opinions you respect, the jerk faces special obstacles on the road to self-knowledge, beyond even what Vazire’s model would lead us to expect.

[…]

Still, it’s entirely possible for a picture-perfect jerk to acknowledge, in a superficial way, that he is a jerk. ‘So what, yeah, I’m a jerk,’ he might say. Provided this label carries no real sting of self-disapprobation, the jerk’s moral self-ignorance remains. Part of what it is to fail to appreciate the perspectives of others is to fail to see your jerkishly dismissive attitude toward their ideas and concerns as inappropriate.

[…]

All normal jerks distribute their jerkishness mostly down the social hierarchy, and to anonymous strangers. Waitresses, students, clerks, strangers on the road – these are the unfortunates who bear the brunt of it. With a modicum of self-control, the jerk, though he implicitly or explicitly regards himself as more important than most of the people around him, recognises that the perspectives of those above him in the hierarchy also deserve some consideration. Often, indeed, he feels sincere respect for his higher-ups. Perhaps respectful feelings are too deeply written in our natures to disappear entirely. Perhaps the jerk retains a vestigial kind of concern specifically for those whom it would benefit him, directly or indirectly, to win over. He is at least concerned enough about their opinion of him to display tactical respect while in their field of view. However it comes about, the classic jerk kisses up and kicks down. The company CEO rarely knows who the jerks are, though it’s no great mystery among the secretaries.

Because the jerk tends to disregard the perspectives of those below him in the hierarchy, he often has little idea how he appears to them. This leads to hypocrisies. He might rage against the smallest typo in a student’s or secretary’s document, while producing a torrent of errors himself; it just wouldn’t occur to him to apply the same standards to himself. He might insist on promptness, while always running late. He might freely reprimand other people, expecting them to take it with good grace, while any complaints directed against him earn his eternal enmity. Such failures of parity typify the jerk’s moral short-sightedness, flowing naturally from his disregard of others’ perspectives. These hypocrisies are immediately obvious if one genuinely imagines oneself in a subordinate’s shoes for anything other than selfish and self-rationalising ends, but this is exactly what the jerk habitually fails to do.

Embarrassment, too, becomes practically impossible for the jerk, at least in front of his underlings. Embarrassment requires us to imagine being viewed negatively by people whose perspectives we care about. As the circle of people whom the jerk is willing to regard as true peers and superiors shrinks, so does his capacity for shame – and with it a crucial entry point for moral self-knowledge.

As one climbs the social hierarchy it is also easier to become a jerk. Here’s a characteristically jerkish thought: ‘I’m important, and I’m surrounded by idiots!’ Both halves of this proposition serve to conceal the jerk’s jerkitude from himself.

[…]

As you ascend the hierarchy, you will find it easier to discover evidence of your relative importance (your big salary, your first-class seat) and of the relative idiocy of others (who have failed to ascend as high as you). Also, flatterers will tend to squeeze out frank, authentic critics.

This isn’t the only possible explanation for the prevalence of powerful jerks, of course. Maybe jerks are actually more likely to rise in business and academia than non-jerks – the truest sweethearts often suffer from an inability to advance their own projects over the projects of others. But I suspect the causal path runs at least as much in the other direction. Success might or might not favour the existing jerks, but I’m pretty sure it nurtures new ones.

[…]

In failing to appreciate others’ perspectives, the jerk almost inevitably fails to appreciate the full range of human goods – the value of dancing, say, or of sports, nature, pets, local cultural rituals, and indeed anything that he doesn’t care for himself. Think of the aggressively rumpled scholar who can’t bear the thought that someone would waste her time getting a manicure. Or think of the manicured socialite who can’t see the value of dedicating one’s life to dusty Latin manuscripts. Whatever he’s into, the moralising jerk exudes a continuous aura of disdain for everything else.

Furthermore, mercy is near the heart of practical, lived morality. Virtually everything that everyone does falls short of perfection: one’s turn of phrase is less than perfect, one arrives a bit late, one’s clothes are tacky, one’s gesture irritable, one’s choice somewhat selfish, one’s coffee less than frugal, one’s melody trite. Practical mercy involves letting these imperfections pass forgiven or, better yet, entirely unnoticed. In contrast, the jerk appreciates neither others’ difficulties in attaining all the perfections that he attributes to himself, nor the possibility that some portion of what he regards as flawed is in fact blameless. Hard moralising principle therefore comes naturally to him. (Sympathetic mercy is natural to the sweetheart.) And on the rare occasions when the jerk is merciful, his indulgence is usually ill-tuned: the flaws he forgives are exactly the ones he recognises in himself or has ulterior reasons to let slide.

[…]

He needn’t care only about money and prestige. Indeed, sometimes an abstract and general concern for moral or political principles serves as a kind of substitute for genuine concern about the people in his immediate field of view, possibly leading to substantial self-sacrifice. And in social battles, the sweetheart will always have some disadvantages: the sweetheart’s talent for seeing things from his opponent’s perspective deprives him of bold self-certainty, and he is less willing to trample others for his ends. Social movements sometimes do well when led by a moralising jerk.

[…]

Instead of introspection, try listening. Ideally, you will have a few people in your life who know you intimately, have integrity, and are concerned about your character. They can frankly and lovingly hold your flaws up to the light and insist that you look at them. Give them the space to do this, and prepare to be disappointed in yourself.

[…]

To discover one’s degree of jerkitude, the best approach might be neither (first-person) direct reflection upon yourself nor (second-person) conversation with intimate critics, but rather something more third-person: looking in general at other people. Everywhere you turn, are you surrounded by fools, by boring nonentities, by faceless masses and foes and suckers and, indeed, jerks? Are you the only competent, reasonable person to be found? In other words, how familiar was the vision of the world I described at the beginning of this essay?

If your self-rationalising defences are low enough to feel a little pang of shame at the familiarity of that vision of the world, then you probably aren’t pure diamond-grade jerk. But who is? We’re all somewhere in the middle. That’s what makes the jerk’s vision of the world so instantly recognisable. It’s our own vision. But, thankfully, only sometimes.

Why odd numbers are dodgy, evens are good, and 7 is everyone’s favourite | Science | The Observer

Why odd numbers are dodgy, evens are good, and 7 is everyone’s favourite | Science | The Observer.

We can explain the popularity of 7 as a favourite number by looking at a classic psychology experiment. When asked to think of a random number between 1 and 10, most people will think of 7. Our response is determined by arithmetic. The numbers 1 and 10 don’t feel random enough, neither does 2, nor the other even numbers, nor 5, which is right in the middle … So we quickly eliminate all the numbers, leaving us with 7, since 7 is the only number that cannot be divided or multiplied within the first 10. Seven “feels” more random. It feels different from the others, more special, because – arithmetically speaking – it is.
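A quick way to sanity-check that arithmetic (this sketch is not from the article, just an illustration in Python): hunt for the numbers between 1 and 10 that are neither a multiple of a smaller number nor have a multiple of their own within 10.

```python
# Which numbers from 1 to 10 are neither divisible by a smaller number
# (other than 1) nor have one of their own multiples within 10?
special = []
for n in range(1, 11):
    has_divisor = any(n % d == 0 for d in range(2, n))        # e.g. 6 = 2 x 3
    has_multiple = any(m % n == 0 for m in range(n + 1, 11))  # e.g. 3 -> 6, 9
    if not has_divisor and not has_multiple:
        special.append(n)

print(special)  # [7] -- seven really is the odd one out
```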

[…]

We learn at school that numbers are tools for counting, but our relationship with numbers is quite clearly a deep and complex one, dependent on many cultural and psychological factors.

In the far east, superstitions about numbers are more noticeable than in the west. For example, 4 is unlucky for speakers of Mandarin, Cantonese, Japanese and Korean because the word for “4” sounds the same as that for death. Brands avoid product lines with a 4 in them, hotels don’t have fourth floors and aircraft don’t have fourth rows. (This is more disruptive than western fear of 13, primarily since, being smaller, 4 occurs more often than 13).

Eight is a lucky number in east Asia, however, because it sounds like the word for prosperity. A study of newspaper adverts in China, Taiwan and Hong Kong showed that 8 is by far the most popular non-zero digit in a price (for example in ¥6,800, ¥280). If you put an 8 in your price you make the product seem much more alluring.

These superstitions are not lightly held. Indeed, the association of 4 with death has become a self-fulfilling prophecy. US health records show that, for Chinese and Japanese Americans, the chance of suffering a fatal heart attack is 7% higher on the 4th of the month than would be expected.

East Asians hold deep superstitions about numbers, yet outperform western nations in the international league tables of mathematical performance, which suggests that strong mystical beliefs about numbers are not an impediment to learning arithmetic skills.

Glancing ETech 2004

Glancing ETech 2004: slide 1.

1 – Eye contact is a polite way to start conversations

Erving Goffman in his book “Behavior in Public Places” studied the way people interacted in twos and threes and small groups and looked at how people move from unfocused interactions, where they’re in the same place but not together, to encounters, where they’re actually talking to each other.

He saw that people didn’t just start talking but used ambiguous expressive communication to ask if it was okay to start talking first.

Hang on, expressive communication? Right, he made a division into two kinds of messages:

Linguistic messages are your spoken ones. You speak about whatever you want, and deliberately communicate the meaning you want to communicate. Like me giving this talk.

Expressive messages are the ones you – you’re the message receiver – glean about me. The fact I chose to use this particular word rather than another. My body language. The fact I’m here at all! A nervous laugh.

Expressive messages are usually involuntary, but you can pretend if you want: that’s like a poker face.

The great thing about expressive messages is that your intention of sending them is usually unclear — or at least unprovable! The reason I’m talking (and talking is linguistic communication) is to give you information; that much is obvious. But if I look in your direction, am I trying to get your attention, or just staring into space?

So Goffman found that a person would try to start a conversation with a glance that is…

“sufficiently tentative and ambiguous to allow him to act as if no initiation has been intended, if it appears that his overture is not desired.”

Which makes sense. It’s a good way of saving face. Rather than being a person other people ignore, you can just say their thoughts were on other things. Letting people save face is really important if you want to keep them happy.

Howard Rheingold in his book Smart Mobs gives a good example of text messaging being used for this. He talked about kids in Sweden after a party. Say you’ve seen someone you quite liked and you’d like to see them again, but don’t know if the feeling’s shared. You’d send them a blank text message, or maybe just a really bland one like “hey, good party”. If they reply, ask for a date. The first message is almost entirely expressive communication: tentative, deniable.

So what usually happens in cyberspace, if I want to approach someone? I could send them an email to see if it’s okay to start emailing… it’s all quite blunt, and although I can be tentative in what I write in that email it’d be better if it was built into the software itself.

[…]

2 – Healthier small groups

So the way eye contact works as a tentative conversation opener is you look at someone, and they give you a clearance sign for that conversation by meeting your eyes. The reason this works, says Goffman, is that the very fact we’re using a sense can itself be noticed. And the way we notice it is by using those very same senses!

If two people look at each other, they can see each other and simultaneously see that the other person has seen them. It’s really efficient.

This visibility is used in small groups. Whenever you have more than two people together, there’s the chance that a pair of them might be carrying on with their own secret interaction, just between the two of them. They’re being disloyal to the gathering.

This is no problem in the real world because if it gets too bad then everyone else in the group can see what’s going on. That visibility moderates the behaviour and keeps everyone concentrated on the main activity.

No such luck in cyberspace. If there’s a bunch of us chatting, it’s usually really easy for a couple of people to start a direct connection, to start talking without anyone else noticing, even about the same subject. It doesn’t feel impolite, as it would in the physical world, because nobody’s going to notice, even though it still shifts their attention from the main event.

In the real world, people generally opt to stick with the group and feel uncomfortable about not doing so.

In other words, they’re polite. I’m quite up for this idea of politeness. Number one, people want to be polite. Number two, people don’t want to put other people in the position of having to be rude.

You can see this in software.

There’s an example here in a piece of software called Montage, which a research group developed to help a team of people work together even though they were in geographically distributed offices. Montage simulated popping your head into someone’s office to see whether they were busy; if they were free, you could ask them a question.

The way it did this was to have a button on your computer that brought up the video from a webcam on somebody else’s machine. Looking through this webcam is what they called a glance. Glances were reciprocal, so if you looked into someone’s office with the webcam, a video of you faded up on their computer.

It worked pretty well as it happens, but people did say they felt more obliged to let those video glances turn into encounters than if someone looked through the door.

Why? I’d say it’s because there’s no plausible way to pretend you didn’t notice the video approach. You’re working on an Excel spreadsheet when bang, a video pops up on your screen. There’s no way you’re not going to notice that. In fact, it’s so obvious that you can’t not notice it that the person glancing in must have a really important request! So either you ignore them, and implicitly accuse them of frivolously wasting your time, or you take the message. People take the message.
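To make that reciprocity concrete, here’s a toy sketch (hypothetical, not the actual Montage system, and the names are invented): the moment one person glances, the other unavoidably sees them looking in, which is exactly where the sense of obligation comes from.

```python
# A toy model of a Montage-style reciprocal glance. Purely illustrative:
# the real tool faded webcam video up on screen; printing is enough to
# show the symmetry that makes the glance impossible to miss.
class Desk:
    def __init__(self, name: str):
        self.name = name

    def show_video_of(self, other: "Desk") -> None:
        print(f"{self.name}'s screen: video of {other.name} fades up")

def glance(a: Desk, b: Desk) -> None:
    a.show_video_of(b)   # A looks into B's office...
    b.show_video_of(a)   # ...and B simultaneously sees A looking in.

alice, bob = Desk("Alice"), Desk("Bob")
glance(alice, bob)
```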

So. People want to be polite, in general. In a group situation they’ll moderate disloyal activity and join in with the whole group instead of carrying on with a side-interaction. That’s why, in Glancing, you glance not at individual people but at the whole group. Because in real life, politeness would encourage you to look at the whole group. The software default is to assume you want to be polite.

This isn’t true, for example, with email. It’s all too easy to reply only to the sender on a cc’d email. Even if this doesn’t happen to you, you’re not sure whether anyone else is doing it. There’s a lack of visibility.

Incidentally, I’ll come back to the question of why software doesn’t generally give you visibility of sense use in a bit. But for the moment I’m talking about why eye contact is good, so,

3 – Recognition

When you look at someone, you’re recognising they’re there.

Recognition is important because it helps with human bonding.

Why is bonding important in this context? Well, it’s because in small groups we’re dealing with the people who are closest to us, and these are the people we need to bond with the most.

Here’s a tool to help think about this kind of thing. Transactional Analysis is a psychological tool from the 1950s. It models communication between people in terms of transactions, a request and response. The smallest unit of a transaction, the basic unit of human recognition, TA calls a stroke.

It’s a nice way of thinking about it: recognising someone, making eye contact with someone, is a stroke: think of protohumans on the African savannah grooming one another, swapping strokes.

Now, Robin Dunbar, an anthropologist, talked about grooming in his paper on neocortex size and social group size in primates. He said we have a maximum cohesive social group of about 150. That’s the maximum stable size of your community in a given context — so, we find that scientific research specialities have a size of about 150 people. My mum has about 150 people on her Christmas card list. It was the size of early villages across the world 8000 years ago, and in comparable cultures now. It’s been the size of army units through the ages. It’s the maximum number of buddies the AOL instant messenger server allows you to have.

Actually, 150 is the number of people the social computing centres of your brain can work with. You know, if you’re keeping track of who you owe favours, who nicked your berries last time you climbed a tree, that kind of thing. 150.

But actually that number is dictated by how much time you spend grooming your primary network. Primary network? This large social group is made out of many smaller networks.

Dunbar found that the primary network, the small group, they’re cohorts. They protect each other, stand up for each other, against the big group as a whole. Individuals in too large a social group get stressed; it’s important to have your supportive primary network around, and you maintain that by expending effort on them.

Grooming, for chimps, is picking fleas and lice, but we have a way which is more efficient: conversation. Whereas you can only pick fleas from one other person at a time, you can talk to several at once. One of the key characteristics of this kind of grooming, however, is that it’s public.

This can be seen in the exchange of text messages in Alex Taylor’s paper looking at 16-19 year olds in an English school. They send each other quite mundane messages with their mobile phones, but what’s important is the reciprocity. They establish their peer networks and social status, inside their community, by who sent what to whom, and who replied. Taylor said it resembled descriptions of gift-giving cultures in Polynesia.

It’s important you can see who’s grooming who because it’s like a public assertion of “don’t mess with my friends”. It’s meaningful when you publicly put your neck on the line for someone.

The kids simulate visibility of the grooming, the strokes of recognition, by showing each other the text messages. They treat them as things of value to show off.

So SMS is brilliant. It handles the two most important things about being human: figuring out the pecking order in your community, and getting dates.

Anyway.

These three together tell us what attributes of eye contact we need to support for small groups. We need use of eye contact to be:

  • unconscious or involuntary (but deliberate if you want)

  • visible to other people in the group

Out of that, as we can see with SMS and mobiles, you can grow the tentative requests for encounters and social grooming.

Two other aspects. You need to feel the presence of people around so you can decide to make eye contact.

And I’d prefer it to be polite.

That’s the why. That’s very roughly what we’re aiming for. Now for the How.

Done in two ways:

1 – Presence

Telepresence is a huge topic I wish I had time to go into more here. As it is, I’ll just point you towards At The Heart Of It All which summarises different kinds of presence and why presence is good.

In a nutshell, we’re interested in the subjective feeling that there are other people nearby, and that you all feel like the same people are there. That’s social copresence. And presence is good because it does things like improve social judgement, learning ability, and memory.

All you really need for presence is to be able to detect the actions of another person on your computer. It can be anything above seeing whether the other person has turned the application on or not. Realism, little avatars or faces, isn’t important.
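As a rough illustration of just how little that needs to be (this is a sketch, not how Glancing was actually implemented), presence can amount to nothing more than a recent heartbeat per person:

```python
# Minimal presence: each running client records a heartbeat now and
# then; anyone whose last heartbeat is recent enough counts as present.
# The ten-minute timeout is an arbitrary choice for the sketch.
import time

HEARTBEAT_TIMEOUT = 10 * 60  # seconds

last_seen: dict[str, float] = {}

def heartbeat(user: str) -> None:
    """Called periodically by each client while the application is open."""
    last_seen[user] = time.time()

def present_users() -> list[str]:
    """Everyone whose client has checked in recently."""
    now = time.time()
    return [u for u, t in last_seen.items() if now - t < HEARTBEAT_TIMEOUT]

heartbeat("anna")
heartbeat("ben")
print(present_users())  # ['anna', 'ben'] -- and that's all presence needs to be
```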

[…]

2 – Interface needs to be close to unconscious, visible, and tentative

To make the interface to Glancing almost backgrounded, and to encourage perhaps unconscious use, there are a number of tricks we can use:

  • it’s small, both physically and in how much it stands out among other applications. It’s got a tiny icon and it operates in a very Mac-like way, sitting where these sorts of applications usually sit. Looking at the icon and opening the menu is a familiar gesture, so there’s a low cognitive overhead in looking at who’s online, made even lower by the fact you don’t actually choose to glance — it’s a side-effect of seeing the list of who’s in your group. And seeing that list is only a single click away from whatever you’re doing, because that menu is always available.

  • it’s slow. The icons are deliberately very similar so that when the glancing activity changes it doesn’t immediately catch your attention. If it did, that might mean each person in the group would decide to reciprocate, and suddenly you’re all in an encounter situation you didn’t want. So the icons are different enough to tell you the activity level, but not different enough to be distracting. Given the fact people might not notice the level for a while, Glancing is a slow application. A glance persists for 2 hours — that is, two hours after you’ve done a glance, the eye will still be open a little bit.

  • ambiguous. These two contribute to the feeling that you don’t know whether people have deliberately opened the menu or not, or whether they’ve even noticed you’ve been sending glances. It brings in that ‘tentative’ aspect I was talking about earlier, and hopefully addresses that problem we saw in Montage. Something that adds to this is that you don’t glance at a specific person, you glance at the whole group. Just to restate this politeness thing: if you were sitting round a pub table with your mates, you wouldn’t just keep on looking at a single person — that’s a subactivity and frowned upon. Besides, everyone else would see you doing it and think you were weird. So to be polite you’d distribute your glances, your little strokes of recognition, around the entire table. What Glancing, the application, does by glancing at the entire group is assume in the first instance that you want to be polite and just do that instead. (There’s a rough sketch of the two-hour decay and the group-level glance just after this list.)
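Here’s that sketch. It’s a reading of the behaviour described above rather than Glancing’s real source, and the scaling is invented, but it shows the two ideas together: a glance is addressed to the whole group, and its effect fades out over two hours.

```python
# Glance decay and group aggregation, sketched. A glance is just a
# timestamp against the whole group (no target person); the activity
# level that drives the eye icon is how many glances are still inside
# their two-hour window, weighted by how fresh they are.
import time

GLANCE_LIFETIME = 2 * 60 * 60  # two hours, as described in the talk

group_glances: list[float] = []  # timestamps only; the group is the target

def glance() -> None:
    group_glances.append(time.time())

def activity_level() -> float:
    """0.0 means the eye is fully closed, 1.0 means lots of recent glancing."""
    now = time.time()
    freshness = [1 - (now - t) / GLANCE_LIFETIME
                 for t in group_glances
                 if now - t < GLANCE_LIFETIME]
    # The divide-by-three is an arbitrary scale; the cap keeps a burst of
    # glances from shouting, so the icon only ever changes slowly.
    return min(1.0, sum(freshness) / 3)

glance()
print(round(activity_level(), 2))  # an eye that has just opened a little
```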

[…]

We can place ourselves in the middle of two long-term trends.

The first trend is the mixing of cyberspace and the real world, which has tides in two directions.

Coming from cyberspace we expect to be able to manipulate objects and automate that manipulation. That requires giving things handles and names. Coming into the physical world, we find it’s not like that: it’s a continuous world, we can’t get handles on it. So we end up creating handles for things: MP3s for Music, GeoURL for locations, email addresses for people. Look at how much effort the social software community is spending talking about Identity, which is just moot, not important, when we socialise face to face. Not only do we create handles in the real world, but we get upset when we can’t make full use of them. Why we get upset about that I’ll come back to at the end because I think it’s important.

This isn’t unique to cyberspace. We do the same thing with scientific models, or any way of talking about the world. We externalise our mental models. This process is what’s called constructivist: we have to partition and name the world around us in order to interact with it. Now, I’m not saying this is new: it’s the industrial mindset (the conduit metaphor). The ability to break down a process into discrete steps is

(a) what gave us the ability to make production lines, to commoditise goods, and to complete the second half of the industrial revolution. That was Fordism, early twentieth century

(b) and this is the same as being able to program. That is, to decide that you can represent a process using only numbers and simple manipulations. You break it up into performable steps.

The problem being that to name and identify things is contentious. In cyberspace we’ve limited the number of people who can name things (have ontic powers), who can bring things into existence by contributing to the naming. So we can’t all create webpages, create new email protocols, or whatever. I’ll come back to this, because I want to talk about the other direction of this tide.

Coming into cyberspace we’re bombarded with data. In the physical world we’re used to handling this with our senses, peripheral vision. So we demand to not just read the data about the stockmarket or our social network, but to convert it into a format where it can be gleaned, experienced.

This is what Mark Rantzer has called supersenses: new communication senses to understand the huge mass of information that confronts us.

The idea is that by compressing complex data and presenting it in a way that minimises cognitive overhead, we can have a kind of background awareness of otherwise difficult to understand qualities.

This is the idea behind the Ambient Orb, which glows different colours depending on different variables. So it could glow red if the stockmarket was falling. Once you’d gotten used to the device, you wouldn’t even notice it was there, it would just be sitting there quietly white or green in the corner of your eye. Then one day it glows red and suddenly you become really aware of it: you’re losing money!
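In code terms the Orb’s trick is tiny, which is rather the point. A hypothetical sketch (nothing to do with the device’s actual firmware), mapping one noisy variable down to a colour you can take in from the corner of your eye:

```python
# Squash a day's stockmarket change into a single glanceable colour.
# The thresholds are invented for the example.
def orb_colour(change_pct: float) -> str:
    if change_pct <= -1.0:
        return "red"    # losing money: suddenly you notice the thing
    if change_pct >= 1.0:
        return "green"  # doing nicely
    return "white"      # nothing to see; it stays in your peripheral vision

print(orb_colour(0.2))   # white
print(orb_colour(-2.5))  # red
```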

What I really love about the Ambient Orb is that it takes advantage of its presence in a physical world to do things I’ve been complaining are hard online. Other people nearby can tell if you’re looking at it, there’s a visibility of use. You can catch it in your peripheral vision, take it for granted, and never really focus on it until you see it’s red.

I think what’s missing from it is an aspect of how we process complex data normally. It doesn’t have an aspect of “look closer”, you know: you don’t examine it harder to get a better representation of the stockmarket.

It’s the same with the Dangling String, which is a device that hangs in your peripheral vision, a piece of string hanging from the ceiling, and it jiggles about the more network traffic there is on your local network. It’s a terrific example of what Mark Weiser, the father of ubiquitous computing, calls “calm technology”. In fact, I think this kind of calm technology is the future of public computing in general. But let’s say it’s jiggling really badly one day and you want to see what’s going on — so you look really close, but what do you see? Just more string!

That ‘look closer’ bit is missing. What we’re finding with these new supersenses – the Ambient Orb, Dangling String and Montage – is that we can’t use our normal computer-world metaphors of objects-and-messages to approximate how human beings really work. How we actually use our senses, not just looking and hearing but our social senses too.

That is, before now we could think about the email and the email client as being separate things. We didn’t have to consider what it really means for one person to send an email to another person, not in the social sense. It’s all abstraction layers, after all. An email client receives an email: why should the program care who it’s from, whether it was expected or not? They’re orthogonal issues, surely?

Well, what we’re finding is that with small groups the abstraction layers break down. From a design perspective we can’t just think about discrete events, we have to enable [garden] the dynamic processes of ongoing communication too. And that’s part of the second big trend.

The second big trend is the gradual improvement of our models for understanding dynamic processes.

A very brief history.

The computing world comes out of first-order cybernetics. This way of looking at the world came from the 1950s and was all about controlling systems with loops and feedback. From that came the idea of sending messages, of systems responding to messages and sending more messages out. If we could structure the world into objects and information, all in messages, all nicely abstracted, that’s all we’d need to do, we’d be sorted.

That’s the worldview that produced the computer chip, programming, and cyberspace. It’s all request and response, messages being sent between boxes.

We’re now confronting issues already identified by the more mature second-order cybernetics, which arose in the 1970s but was pretty vague and so not as influential. It’s all about human processes: instead of looking at individual objects and messages, it talked about systems which self-create and change. For this we need to allow fuzzier edges. There should be visibility of the messages being sent around so that nearby objects can alter their behaviour and adapt. Systems should be able to complexify and simplify.
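To put the contrast in miniature (a toy framing, not anything from the cybernetics literature itself): the first function below is the point-to-point, first-order picture, a message sent to one recipient and nobody else any the wiser; the shared channel is a nod towards the second, where the act of sending is itself visible to everyone nearby, so they can notice and adapt.

```python
from typing import Callable

# First-order style: a private, point-to-point message.
def send_direct(inbox: list[str], msg: str) -> None:
    inbox.append(msg)  # nobody else ever knows this happened

# Second-order-ish style: the message goes through a shared channel
# that notifies every member, so communicating is a visible act.
class SharedChannel:
    def __init__(self) -> None:
        self.observers: list[Callable[[str, str], None]] = []

    def join(self, observer: Callable[[str, str], None]) -> None:
        self.observers.append(observer)

    def say(self, sender: str, msg: str) -> None:
        for notify in self.observers:
            notify(sender, msg)

channel = SharedChannel()
channel.join(lambda who, msg: print(f"everyone sees: {who} said {msg!r}"))
channel.say("anna", "shall we all get lunch?")
```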

Now the reason this is so important, this second trend, is the constructivist nature of cyberspace I mentioned earlier. We use our mental models both to understand the world, and there’s feedback too: we use our mental models to create it.

If we understand the world through the lens of first-order cybernetics, that means we model the world in terms of people being objects sending messages to one another. That’s the world in which all we care about is that person A can send an email to person B.

On the other hand, if we understand the world in terms of dynamic processes, then we’re more interested in how people band together into small groups. We’re more interested in making email work better for sending to people you’re really close to. To help defuse arguments, help people save face.

And that’s the world we’re gradually moving into.

[…]

What Floridi points out is that cyberspace is still relatively simple. The actions of a single individual can disproportionately affect the composition or evolution of the society that exists online. What’s more, the composition of the environment quite directly affects the kinds of actions people can perform: the existence of the email protocol allows a new form of interpersonal communication.

This combination – of being powerful and having clear consequences – puts us in a similar situation to what’s happening in the real world with the environment. When humans became powerful enough to affect the environment on a global scale, a new kind of ethics emerged, one that gave value to things which might inadvertently be damaged: the atmosphere, rainforests, rocks. We give these things intrinsic value. Actually it happens even on a small scale. Geologists have a code too, where the rocks have an intrinsic worth: you don’t bore holes into them in obvious places, you don’t leave paint splashed around.

In the context of cyberspace, Floridi calls this cyberethics.

Information objects themselves, he says, have moral worth. The more able we are to manipulate and use an object, that is, the more handles it has, the more valuable it is, the more worthy it is. If you improve the information, you’re doing a good deed. That’s wiki gardening, the concept of idly improving a website just as you wander by. If you leave the object open to be used in as many ways as possible, to be more manipulable, you’re doing a good deed. Well, that’s the free software movement.

Floridi underpins, with a simple, graspable concept, what those of us who have lived with the internet instinctively feel is good and bad.

So from this perspective, concepts like adaptable design, and designing for hackability and unintended consequences aren’t just design rules of thumb, they’re aspects of how to be a good person and create a just society.

From Floridi’s environmental cyberethics, wiki gardening and free software are the cyberspace equivalents of respecting rainforests and biodiversity.