ON FEBRUARY 10TH, 1982, in a room full of designers and engineers drinking champagne and eating cake, Steve Jobs called out the names of Apple’s Macintosh team. And one by one, beginning with motherboard engineer Burrell Smith, they signed their names to a large sheet of paper.
These 47 signatures—some in perfect script, others loopy and illegible, a few just hastily printed—would soon be inscribed on the inside of every Macintosh, etched into the hard plastic case. According to former engineer Andy Hertzfeld, whose signature is on that paper and whose business card during his time at Apple read “Software Wizard,” this was a natural course of events. “Since the Macintosh team were artists,” he wrote on his blog Folklore.org, “it was only appropriate that we sign our work.”
Yet what Dye seems most fascinated by is one of the Apple Watch’s faces, called Motion, which you can set to show a flower blooming. Each time you raise your wrist, you’ll see a different color, a different flower. This is not CGI. It’s photography.
“We shot all this stuff,” Dye says, “the butterflies and the jellyfish and the flowers for the motion face, it’s all in-camera. And so the flowers were shot blooming over time. I think the longest one took us 285 hours, and over 24,000 shots.”
He flips a few pages further into the making-of book, onto the first of several full-page spreads with gorgeous photos of jellyfish. There’s no obvious reason to have a jellyfish watch face. Dye just loves the way they look. “We thought that there was something beautiful about jellyfish, in this sort of space-y, alien, abstract sort of way,” he says. But they didn’t just visit the Monterey Bay Aquarium with an underwater camera. They built a tank in their studio, and shot a variety of species at 300 frames per second on incredibly high-end slow-motion Phantom cameras. Then they shrank the resulting 4096 x 2304 images to fit the Watch’s screen, which is less than a tenth the size. Now, “when you look at the Motion face of the jellyfish, no reasonable person can see that level of detail,” Dye says. “And yet to us it’s really important to get those details right.”
The Watch’s faces are littered with such details. The Mickey Mouse face, an explicit update of the 1933 Mickey Mouse Watch from Ingersoll, was particularly complex. Select this face and watch Mickey’s toe tap once per second, in perfect time. Line up a bunch of watches, Dye says, and they’ll all tap at exactly the same time. Never mind that almost no one will ever fact-check this claim—he doesn’t care. He did it for the same reason Jony Ive has taken to personally designing the internals of the Mac. Details matter.
The Astronomy watch face is another of Dye’s favorites: it gives you a view of the Earth as if you were floating peacefully above it. Spin the Digital Crown and you see moon phases, the Earth’s rotation, and even the solar system. It’s a riff on the oldest method of telling time, just with digital stars and planets instead of the faraway real ones.
Dye points out the subtlety of this face. “When you tap on the Earth and fly over the moon: We worked really hard with our engineering team to make sure the path you take from your actual position on the Earth to where the moon is and seeing its phase, is true to the actual position of the Earth relative to the moon.”
Apple employees often use the word “inevitable” to describe their work. When Dye uses it, it’s self-deprecating, as if to say: ‘this was always the right answer, but it took us a while to figure that out.’ It’s true of even seemingly simple things, he says, like the concentric circles the Watch uses to display your fitness goals.
“I couldn’t tell you from a design perspective the number of iterations we did on those three rings.” The human interface team wanted to make it easy to see progress and activity for the day, but also to make you want to hit your goals. “We spent a year, and did far more studies… enough studies to kind of fill this wall, probably,” he says, gesturing to the giant glass walls of Apple’s Caffe Macs cafeteria. “Different ways that, at a glance, someone could understand that information, and easily assess where they’re at in their day, and hopefully in a really simple and visceral way feel like they accomplished something when they fill them up.” They arrived at three circles because there’s just something about a not-quite-complete circle that drives you just crazy enough to take those last 400 steps.
They wanted to pay tribute to the original architecture of the galleries by using it as a raw material for their work.
“As the space is a provocation to artists and curators, so the installation is a provocation to the building,” Diller told Dezeen.
“One of the obvious attributes is this transparency and how it creates a provocation to everyone using it. So our first instinct was to create a problem for that transparency and to flirt with it in a different way.”
The glass walls of the larger gallery space to the left of the main entrance are coated with a liquid crystal film that fades in and out of transparency as an electric current passes through it.
“Liquid crystal film has been around probably for about twenty years or more. Generally it goes off and on. What makes this film unique is that you can control it,” explained Scofidio. “You can actually dial it down so it gradually changes to transparent, to translucent.”
“We tried to make it as invisible as possible,” added Diller.
A red plastic bucket on wheels appears to be the only occupant of the room. Inside the bucket is a camera and sensors that guide its movements around the space to collect drops of water that fall from the ceiling, as if there is a leak. As each drop falls, a loud noise sounds.
“We came up with this kind of mischievous thing, this leak. Just a leak, but it’s a very smart leak with a very smart bucket that captures it,” said Diller. “The [idea of this] empty space with just one very kind of banal object that is actually doing something very smart – it grew out of that. And then we thought: okay what do we do with the sound of that drop? How do we relate it to the next space?”
The smaller gallery to the right of the main entrance is occupied by a large screen that hangs parallel to the floor like a suspended ceiling, but just one metre above ground level.
To view the images being shown, visitors are invited to lie down on black loungers supported on wheels and propel themselves underneath the screen or use curved mirrors controlled using long black metal handles.
Once underneath, the moving image they see is a blown-up version of the video footage captured by the camera in the bucket moving around the space opposite. As each drop falls into the bucket, the surface of the water ripples, an effect that becomes amplified on the screen.
The sounds initially generated to accompany the drops of water also become distorted in the second room and choral voices are added to the acoustic arrangement, which was devised by American composer David Lang.
“The notion of, in one space – in the big space – doing something very tiny, almost invisible, almost nothing, and then taking that to the other space, makes it into the comic here and the sublime over there,” said Diller.
“It’s doing something that’s very ethereal in a way, but also grotesque, with that very large image and that drop becoming very forceful and the compression of watching with that very low floor-to-ceiling height.”
“We started by doing installations in galleries and it’s only now that we are on the other side of the wall,” said Scofidio.
“We never said ‘one day we’ll be doing this’ or ‘one day we’ll have a big office’. It was never our intention. We were simply doing things that interested us and using the way that architects conceive the world to investigate conditions which we generally don’t pay a lot of attention to.”
The sanctum sanctorum of Abbey Road is Studio Two, the room where the majority of The Beatles’ recordings were made.
Standing at the threshold of Studio Two, it doesn’t look all that different from a small school gymnasium: a big rectangular box with white walls, 24-foot-high ceilings, and a parquet floor. But as soon as we entered, any thoughts of dribbling basketballs fell away, as I began to remember images of John Lennon and Paul McCartney standing around a microphone at the far end of the room, working out their harmonies.
When each of the tools in that display was first introduced, many music experts were totally wrong about the impact they would have on creative culture. “Records will kill live music,” they said as the phonograph gained popularity. Tape recording was initially viewed with suspicion by recordists accustomed to using disc-cutting lathes.
As digital technology arrived, many people thought it would surely relegate analog recording equipment to the scrap heap. In what seems like a stunning example of shortsightedness, some of Abbey Road’s most noteworthy gear was sold off as “memorabilia” in a 1980 sale at bargain-basement prices. One example: a 4-track recorder used on “Sgt. Pepper’s” went for just $800 (about $2,300 in today’s money).
For melodic pop music, Studio Two has physical, tonal qualities which transcend its humble appearance. “It emphasizes the midrange,” Kehew says, “and has a warm, short reverb unusual for a room its size.” These reverberant qualities are so well known that Abbey Road’s rental contract actually prohibits any sampling of its distinctive acoustic signature. As I stood in the room, I could hear the echoes of the vocals and kick drums on some of my favorite recordings of all time.
Kehew agrees that every tool can have a place as part of an artistic palette. “Old is not good or bad,” he said. “Question it. Try it. Listen. Buy weird bad gear and great quality gear—see what it does for you. I love Jon Brion’s quote—‘I don’t want to be Lo-Fi or Hi-Fi, I want to be ALL-Fi!’”
Scott touched on this in the lecture too, recounting that this was the approach that caused Beatles producer George Martin to turn down Abbey Road’s first 8-track recorder for use on the White Album. The 4-track recorders used for years by The Beatles had been specially modified to help create some of their signature sounds. Because the new 8-track recorder lacked those modifications, Martin declined to bring it into the session. His thinking, Scott said, was that it would be better for the process to maintain continuity.
In an ironic twist, Scott mentioned that The Beatles themselves had a different idea. They decided to use the 8-track without Martin’s permission, which got Scott and another engineer into a fair amount of trouble. The fact that the device was used to track parts of “While My Guitar Gently Weeps” probably helped accelerate the forgiveness. Even though new technologies can kill off old ways of working, it’s ultimately up to humans to decide the hour that they should.
“It was the 60s,” Scott said of the incident. “Rules were meant to be broken.”
At the beginning of the Beatles era, technicians had to complete what amounted to an extended apprenticeship program—and were even required to wear white lab coats (Winston Churchill once quipped that Abbey Road made him feel like he was visiting a hospital). Prospective engineers were brought up through the ranks slowly and instructed on the “rules of the process” at each stage.
But as the 60s went on, culture—specifically counter-culture—began seeping into the studio and changing that dynamic relationship between the engineers and their tools. Over time, the room became filled with incredibly skilled people who were willing to break any rule if it helped their artists create new and interesting sounds.
It was this combination of playfulness, openness to risk-taking, and deep professionalism which enabled Abbey Road’s technicians to respond to seemingly off-the-wall requests from The Beatles. Engineers began to record amps inside cupboards to get unique sounds. The studio’s tape recorders were rewired to automatically double-track performances. The tapes themselves were sped-up, slowed-down, sliced, and looped—to great effect. Even a joke, Scott says, was turned into an engineering puzzle that he had to solve when John Lennon took him up on his “suggestion” to fit the entire band in a small utility closet for the recording of “Yer Blues.”
A sort of positive feedback loop was happening: Culture was driving the development of technologies which, in turn, emboldened that creative culture to go even farther to create new tools and techniques. This embrace of the unorthodox didn’t mean that the Abbey Road staff abandoned everything they had been taught in the “white coat days,” though. In fact, Scott says it was that training which gave engineers the necessary skills to successfully and intelligently break the rules and develop all those new sounds and techniques.
When you listen to recordings from a generation or two ago, though, you often hear all sorts of rough edges: large dynamic transitions between loud and quiet, the sounds of oversaturated tape and tubes, instruments bleeding together. Chunked notes. Vocals that are out of pitch. Drums that drift in and out of time. Mistakes. Lots of mistakes.
Today’s creative paradox is that this human element, which often makes a song distinct or artistically interesting, is the thing which is almost always erased from modern productions.
“Do mistakes make music better?” I asked Kehew. Not really, he responded. It’s just that, when it comes to what people like about music, there was actually only one thing worse than these imperfections: perfection.
“I’ve done it and seen it many times,” he said. “Take something flawed, work on it ’til every part is ‘improved’ then listen. It’s worse. How could that be? Every piece is now better. But it’s a worse final product.”
This tendency towards incessant improvement has been encouraged by the power of modern tools. These days, sounds are almost always passed through a computer at some point in the recording process. These computers have their own working paradigms—things like cutting-and-pasting, the automated repetition of tasks, and “infinite undo”—which give them incredible power to alter performances. They also add more potential for overpolishing, and for something recording engineers refer to as “option paralysis,” a state in which the sheer number of choices available prevents decisions from being made. Almost any element of a recording can be changed, right up until the moment that a song is released to the public.
The limitations of Beatles-era technology were substantial by comparison, and they forced a commitment to creative choices at earlier stages of the recording process. If, for example, an engineer wanted to exceed the number of recorded tracks that their tape machine allowed, two or more tracks had to be mixed together and “bounced” to an open track elsewhere. Cuts were physical, done with razor blades and tape. Mixes were performed by engineers in real time. Big mistakes at any point in the process could force an entire recording to be scrapped.
It was because artists were often stuck with the mistakes they made that they sometimes decided to embrace them. Once while recording a Beatles song called “Glass Onion” Scott accidentally erased a large number of drum parts that had been painstakingly overdubbed. Certain that he’d be fired, he played the tape to John Lennon. To Scott’s surprise, Lennon said that he liked the unexpected effect created by the glitch—and both the track and Scott stayed.
Scott was clear in his opinion: It isn’t so much the use of these new tools as it is their overuse that serves to undermine musicality.
“The trick,” Kehew says, “is a savvy or talented producer or engineer knows when to be bold and stop. To let character and roughness and lack of polish exist. I can bet most people spend more time polishing something than writing or creating the substance of it. The only cure is to work faster, more often, so you don’t treat every damn thing as being so precious that ‘It Must Be Perfect For All Time.’”
I asked Kevin Ryan if he was able to heed Scott’s warning in his own work. He laughed and acknowledged that knowing the risks of overusing digital tools didn’t make it any easier for him to resist that temptation. Kehew’s final word on the subject was, I thought, an especially Beatle-like principle for not overworking something: “Let it be what it was,” he says. “If it’s not that good, you shouldn’t be recording it.”
Today, Abbey Road straddles a line between modern culture and English Heritage. It has become Pop Music’s Westminster Abbey: partly a tourist attraction, partly a working cathedral where all the traditional rites and rituals are still observed.
Abbey Road is still producing hits, though—even as tighter budgets and rising costs have caused many other recording facilities to close. An almost unbelievable number of influential artists and projects have worked (and continue to work) at the studio. Even if you eliminated the entire Beatles oeuvre, the list is impressive. Pink Floyd’s “Dark Side of the Moon” was tracked there. Acts like Kate Bush, Elton John, Oasis, Nick Cave and the Bad Seeds, Green Day, U2, Radiohead, and Kanye West have all recorded there. Countless film scores, too—Star Wars, Raiders of the Lost Ark, Lord of the Rings.
The first thing you notice about the IBM Model M keyboard, when you finally get your hands on it, is its size. After years of tapping chiclet keys and glass screens on two- and three-pound devices, hefting five pounds of plastic and metal (including a thick steel plate) is slightly intimidating. The second thing is the sound – the solid click that’s turned a standard-issue beige peripheral into one of the computer world’s most prized and useful antiques.
Next year, the Model M turns 30. But to many people, it’s still the only keyboard worth using.
Looking at a Model M for the first time in years, I was struck by how unremarkable it looks. The Model M might be a relic of the past, but its DNA remains in almost every keyboard we use today.
The QWERTY keyboard layout was designed for typewriters in the late 19th century and quickly became universal. But by the time IBM released its first PC in 1981, layout was no longer a simple matter of spaces and capital letters — users now needed special keys to communicate with word processors, terminals, and “microcomputers.” In hindsight, keyboards from the ’70s and ’80s range from familiar to counterintuitive to utterly foreign: in the IBM PC’s original 83-key keyboard — known as the PC / XT — the all-important Shift and Return keys were undersized and pushed to the side, their labels replaced by enigmatic arrows. The entire thing looks like a mess of tiny buttons and inexplicable gaps. In August of 1984, IBM announced the far more palatable PC / AT keyboard. Compared to the previous model, “the AT keyboard is unassailable,” said PC Magazine. The AT couldn’t pass for a present-day keyboard: the function keys are arranged in two rows on the far left instead of along the top, Escape is nestled in the numeric keypad, and Ctrl and Caps Lock have been switched. Even so, it’s cleaner and far more comprehensible than its predecessor to modern eyes.
But IBM wanted something more than merely acceptable. In the early ’80s the company had assembled a 10-person task force to build a better keyboard, informed by experts and users. The design for the previous iteration was done “quickly, expeditiously — not the product of a lot of focus group activity,” says David Bradley, a member of the task force who also happens to be the creator of the now-universal Ctrl+Alt+Delete function. The new group brought in novice computer users to test a friendlier keyboard, making important controls bigger and duplicating commonly used keys like Ctrl and Alt so they could be reached by either hand. Many of the keys were detachable from their bases, letting users swap them around as needed. And the Model M was born.
Introduced in 1985 as part of the IBM 3161 terminal, the Model M was initially called the “IBM Enhanced Keyboard.” A PC-compatible version appeared the following spring, and it officially became standard with the IBM Personal System / 2 in 1987.
The layout of the Model M has been around so long that today it’s simply taken for granted. But the keyboard’s descendants have jettisoned one of the Model M’s most iconic features — “buckling springs,” a key system introduced in the PC / XT. Unlike mechanical switches that are depressed straight down like plungers, the Model M has springs under each key that contract, snap flat, or “buckle,” and then spring back into place when released. They demand attention in a way that the soft, silent rubber domes in most modern keyboards don’t. This isn’t always a good thing; Model M owners sometimes ruefully post stories of spouses and coworkers who can’t stand the incessant chatter. But fans say the springs’ resistance and their audible “click” make it clear when a keypress is registered, reducing errors. Maybe more importantly, typing on the Model M is a special, tangible experience. Much like on a typewriter, the sharp click gives every letter a physical presence.
“This is like oil. One day oil will run out. It’ll be a big crash,” says Ermita. For now, though, that crash seems far away. The oldest Model Ms have already lasted 30 years, and Ermita hopes they’ll make it for another 10 or 20 — long enough for at least one more generation to use a piece of computing history.
The Model M is an artifact from a time when high-end computing was still the province of industry, not pleasure. The computer that standardized it, the PS / 2, sold for a minimum of $2,295 (or nearly $5,000 today) and was far less powerful and versatile than any modern smartphone. In the decades since, computers have become exponentially more capable, and drastically cheaper. But in that shift, manufacturers have abandoned the concept of durability and longevity: in an environment where countless third-party companies are ready to sell customers specialty mice and keyboards at bargain-basement prices, it’s hard to justify investing more than the bare minimum.
That disposability has made us keenly aware of what we’ve lost, and inspired a passion for hardware that can, well, take a licking and keep on clicking. As one Reddit user recently commented, “Those bastards are the ORIGINAL gaming keyboards. No matter how much you abuse it, you’ll die before it does.”
1981 IBM PC/XT
1984 IBM PC/AT
1985 IBM Model M
2014 Unicomp Ultra Classic
Tesla’s newly launched Model SD electric car could be “summoned” by owners to pick them up autonomously using the car company’s new Autopilot function, potentially eliminating the need for taxi services.
By integrating a number of safety technologies, Tesla‘s Autopilot system could eventually enable its electric cars to drive and collect passengers without anyone at the wheel, according to Tesla CEO Elon Musk.
Drivers could command their cars to pick them up using their phones, or by pre-programming a calendar.
“You’ll be able to summon the car and it will come to wherever you are,” explained Musk. “It can even go a step beyond that… if you have your calendar turned on, it’ll meet you there”.
Under existing regulations, drivers will be able to use the Autopilot mode on private land for a number of functions including self-parking.
“When you get home, you’ll actually be able to just step out of the car and have it park itself in your garage,” said Musk.
The car will be able to steer itself to stay within a lane and change lanes as well as manage its own speed by “reading” road signs. According to Tesla, it will take “several months” for all Autopilot features to be completed and uploaded to the cars.
“Tesla’s Autopilot is a way to relieve drivers of the most boring and potentially dangerous aspects of road travel – but the driver is still responsible for, and ultimately in control of, the car,” explained a statement released by Tesla.
The vehicle’s safety features, which have enabled its Autopilot functionality, include a forward-looking radar system that can detect potential collision risks even in poor weather conditions.
A camera located at the front has been programmed to distinguish road features such as traffic lights and safety barriers, as well as pedestrians and cyclists.
Twelve sensors have also been positioned around the vehicle to form a “safety cocoon”, which detects hazards in blind spots.
The system can activate a digitally controlled electric braking system and give tactile feedback through the steering wheel, alerting the driver to perceived risks.
In addition to enhanced safety features and Autopilot, the Model SD has managed to match the acceleration performance of the iconic McLaren F1 sports car, reaching 60 miles per hour from a standstill in just 3.2 seconds.
The power is generated by two electric motors, located on the front and rear axles respectively. Each motor digitally and independently controls torque to its wheels, making minor adjustments that translate power to the road without loss of traction or wheel spin.
“We’re going to have an option in the settings whereby you’ll actually be able to choose from three settings,” explained Musk. “Normal, sport and insane.”
There’s a great quote that is often attributed to Henry Ford, the man who revolutionized the automobile industry with the introduction of the Model T in 1908. You’ve probably heard it before:
If I had asked my customers what they wanted they would have said a faster horse.
Whether Ford actually ever said them or not, those are wise words, and they apply to a great many things beyond cars. The gist of it is that consumers largely judge new products by comparing them to their existing competitors. That’s how we instinctively know if something is better. However, what happens when an entirely new product comes along? What happens when there are no real competitors?
When there’s no reference, there’s no objective way to quantify how good —or bad— a product is. As a last resort, people will still try to compare it to the closest thing they can think of, even if the comparison doesn’t really work. That can be a dangerous thing, but it can also be an opportunity.
The main lesson behind Ford’s words is that, if you aim to create a revolution, you must be willing to part with the existing preconceptions that are holding your competitors back. Only then will you be able to take a meaningful leap forward. That will surely attract some criticism in the beginning, but once the product manages to stand on its own, people will see it for what it really is.
The tech world is largely governed by that rule. It’s what we now call disruption. Apple, in particular, is famous for anticipating what people need before they even know it, disrupting entire markets. That’s arguably the main reason behind their massive success during the past decade.
In retrospect, Apple products are often seen as revolutionary, but only after they’ve gained a foothold in the market and, more importantly, in our collective consciousness. Only then do people start seeing them for the revolutionary devices they always were. At the time of their announcement, though, they tend to face strong criticism from people who don’t really understand them. Apple products are usually not terribly concerned with conforming to the status quo; in fact, more often than not they’re actively trying to disrupt it. And that drives some people nuts.
It happened with the iPod:
No wireless. Less space than a nomad. Lame.
It happened with the iPhone:
That is the most expensive phone in the world and it doesn’t appeal to business customers because it doesn’t have a keyboard, which makes it not a very good email machine…
It also happened with the iPad:
It’s just a big iPod touch.
There’s another example that’s particularly telling. During the last episode of The Talk Show, John Gruber and Ben Thompson reminded me of the public criticism that the original iPhone faced when Apple announced it. Much of that criticism was focused on its non-removable battery, a first in the mobile phone industry at the time. Back then, many people were used to carrying a spare battery in case their phone happened to die mid-day. Once the iPhone arrived and people couldn’t swap batteries anymore, they became angry. The iPhone didn’t conform to what they already knew, and they didn’t like it.
But the iPhone was never a horse.
Seven years later, swappable batteries are no longer a thing, and nobody remembers them anymore. Some people may think of them as a nice-to-have, and some others prefer to carry an extra battery pack, but for the most part, battery swappability is not a factor driving smartphone sales.
Was it ever really a big deal?
Of course not. Swappable batteries were never a feature, they were merely a way to deal with the technological shortcomings of the time. Apple knew that if they managed to get a full day’s worth of use out of the iPhone’s battery, there wouldn’t be a need for it to be removable anymore, and they trusted people to eventually understand and accept that. It was a gamble, but history has shown that they were right.
The same thing happened with MacBooks a few years ago, but by then, Apple’s solution had already proven to be the right one. Indeed, it seems a bit silly to complain about a non-removable battery when your laptop gets 12 hours of battery life.
And yet, no matter how many times Apple has been right in the past, people keep finding reasons to complain about their new products. The Apple Watch, of course, is no different:
Apple Watch is ugly and boring (and Steve Jobs would have agreed).
It’s not even a finished product, and some people are already slamming it. And it’s only going to get worse.
People don’t like what they don’t understand, and so far, nobody understands the Apple Watch. I’m not even sure anybody can; we just don’t know enough about it at this point. In the absence of a valid reference, many are sure to dismiss it as either irrelevant or flawed, simply because it doesn’t conform to their own existing preconceptions. Because, like the iPhone, the Apple Watch is not a horse either.
That’s a very human response, deeply rooted in our nature. It’s actually uncontrollable, to a degree. We’ve been evolutionarily conditioned to be wary of the unknown, because there was a time, not so long ago, when our very survival depended on it. However, given that we’re not fighting smilodons for food anymore, perhaps we should at least try to keep an open mind about things. Especially shiny things that cost hundreds —or thousands— of dollars and have the potential to disrupt our entire lives and redefine the way we communicate with each other.
I’m not saying that you should like the Apple Watch. I’m certainly not saying you should buy one. I’m just saying, it can’t hurt to give it the benefit of the doubt. There’s so much to gain and so little to lose.
The Apple Watch is not a faster horse, but who knows? It just may end up being your favorite thing.
ELECTRONIC DISPLAY | UNKNOWN | c. 1950s
For the first eight thousand years of writing, letterforms were free of restriction. Add serifs to your letters, manipulate shape in a million ways, design intricate ligatures, make the ascender on your lower case “h” stretch to the heavens, create letters made from stacked drawings of clown shoes: the world of type design was a world without physical limits. Then came electronics.
Getting early electro-mechanical systems to dynamically display changeable text was a pain in the ass. One technique consisted of preset messages painted on a series of flaps attached to a central shaft. Rotate the shaft, and different messages flop into view. Mount together several hundred single-character flap devices and you’ve got yourself a versatile and easy-to-read “split-flap display” message board, the kind you can still see — and hear, as they make their wonderful clack-clack noise each time the sign updates — in train stations and airports.
Flap signs allow freedom of font choice (Helvetica is particularly popular), but they are big, clunky, complex devices, filled with gears, motors, and relays. What if you want to eliminate all of that?
One of the most widely used mechanics-free electronic displays consists of a matrix of light bulbs that can be illuminated in different patterns to produce different characters. And here — for perhaps the first time in the history of writing — designers found themselves in a situation where the complexity of the font they used had a direct effect on the cost of their display. The more subtle and intricate the letterforms, the more pixel-lightbulbs are needed to render them. This forced the adoption of a typeface stripped down to the minimum — a typeface so simple that each letter can be rendered with only 35 lights, arranged in a 7×5 matrix. If you only need to display the numbers 0–9, you can reduce things even further, and use a display of only 15 lights, arranged in a 5×3 grid.
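The economics of that trade-off are easy to see in miniature. Here is a small Python sketch — the glyph below is invented for illustration, not taken from any historical sign’s font — showing how a single capital letter fits the 35-bulb budget of a 7×5 matrix:

```python
# A minimal illustrative glyph: each "1" is a lit bulb, each "0" is dark.
# Seven rows of five columns = the 35-bulb budget described above.
LETTER_A = [
    "01110",
    "10001",
    "10001",
    "11111",
    "10001",
    "10001",
    "10001",
]

def render(glyph):
    """Draw the glyph, showing '#' for on-bulbs and '.' for off-bulbs."""
    return "\n".join(row.replace("1", "#").replace("0", ".") for row in glyph)

# The entire display cost of this letter: 7 rows x 5 columns = 35 bulbs.
assert sum(len(row) for row in LETTER_A) == 35
print(render(LETTER_A))
```

Every extra flourish — a serif, a curve — would demand more bulbs per cell, which is exactly why the stripped-down matrix typeface won.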
A similar quandary popped up when the first electronic calculators were coming to market. Those devices used displays consisting of tiny light-emitting diodes (LEDs), each one capable of displaying a tiny line segment. As with the lightbulb displays, turning on the LED lines in different patterns could create different characters. Designers realized that if all you needed to display was numbers, just seven segments arranged in the pattern of the number “8” could do the job (14 segments if you needed to display letters).
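To make the seven-segment trick concrete, here is a hedged Python sketch. The segment labels a–g follow the standard convention (top, upper-right, lower-right, bottom, lower-left, upper-left, middle); the lookup table shows the usual digit shapes, not any particular calculator’s wiring:

```python
# Which of the seven segments (a-g) light up for each decimal digit.
SEGMENTS = {
    "0": "abcdef",  "1": "bc",     "2": "abdeg",  "3": "abcdg",
    "4": "bcfg",    "5": "acdfg",  "6": "acdefg", "7": "abc",
    "8": "abcdefg", "9": "abcdfg",
}

def lit_segments(number):
    """Return, per digit of the number, which segments must switch on."""
    return [SEGMENTS[d] for d in str(number)]

# "8" is the only digit that needs all seven segments lit at once,
# which is why the segments are laid out in the shape of an "8".
assert SEGMENTS["8"] == "abcdefg"
```

Ten characters, seven on/off decisions each — the entire numeric alphabet reduced to a 70-entry truth table.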
Both of these techniques produce letterforms created by necessity, composed of strokes and dots of pure information. Products — and visual symbols — of the electronic age.
How might the promise of what at the time was called an “internet of things” play out in the near future? What would the future look like in a world blanketed by advances in protection and surveillance technologies? If Autonomous Vehicle innovations continued their passionate race forward, what would it be like to pick up the groceries, take a commercial airline flight, commute to work, have mail and parcels delivered, drop off the dry cleaning, meet friends at a bar across town, go on cross-country family vacations, or take the kids to sports practice or school? Would food sciences offer us new forms of ingestible energy such as coconut-based and other high-caloric energy sources, or caloric burners that would help us avoid exercise-based diets? In what ways would live, streaming, recorded and crowd-authored music and filmed entertainment evolve? How might advances in portable spring power hold up against traditional chemical battery power? How would emerging forms of family and kinship be reflected in social networks? How would Chinese migration to Africa shape that continent’s entry into the world of manufacturing, and how would that inevitability shape distribution and production economies? What is to become of open-source education and the over-supply of capable yet unemployed engineers? Would personal privacy and data-hiding protocols be developed to help protect our families and businesses from profile pirates and data heists? What would happen to our sense of social relations as today’s algorithmic interpersonal relationship matchers get too good and effectively “couple us off” pre-pubescently, before we have a chance to experience the peculiarities of dating life? Would cryptocurrency disrupt today’s national currencies? What would become of coffee and plant-based protein products?
Ultimately though, our task was to decant even the most preposterous idea through a series of design procedures that would make it as normal, ordinary, and everyday blasé as, for one retrospective example, the billions of 140-character messages sent into the ether each day – a form of personal individual communication that must have, at its inception, seemed to most of the world to be the most ridiculous idea ever. The point being that the most extraordinary, preposterous social rituals have often made their way into our lives to become normal and even taken for granted.
A report (or catalog, such as TBD) offers a way to normalize those extraordinary ideas and represent them as entirely ordinary. We imagined it to be a catalog of some sort, as might appear in a street vending box in any neighborhood, or in a pile next to the neighborhood real estate guides or advertising-based classified newspapers near the entrance to your local convenience store.
Rather than the staid, old-fashioned, bland, unadventurous “strategy consultant’s” report or “futurist’s” white paper (or, even worse – bullet-pointed PowerPoint conclusion to a project), we wanted to present the results of our workshop in a form that had the potential to feel as immersive as an engaging, well-told story. We wanted our insights to exist as if they were an object or an experience that might be found in the world we were describing for our client. We wanted our client to receive our insights with the shift in perspective that comes when one is able to suspend their disbelief as to what is possible.
During our workshop, we used a little-known design-engineering concept generation and development protocol called Design Fiction. Through a series of rigorous design procedures, selection protocols, and proprietary generative work kits, Design Fiction creates diegetic, engineered prototypes that suspend disbelief in their possibility. It is a way of moving an idea into existence through design tools and fictional contexts, producing a suspension of disbelief that allows one to overcome one’s skeptical nature and see possibility where there was once only skepticism or doubt.
There were a variety of tools and instruments we could put in service to construct these normal ordinary everyday things. For example, several canonical graphs used to represent trajectories of ideas towards their materialization would come in handy. These are simple and familiar graphs. Their representations embody specific epistemological systems of belief about how ideas, technologies, markets, societies evolve. These are typically positivist up-and-to-the-right tendencies. With graphs such as these, one can place an idea in the present and trace it towards its evolved near future form to see where its promise might end up.
We also had the Design Fiction Product Design Work Kit, a work kit useful for parceling ideas into their atomic elements, re-arranging them into something that, for the present, would be quite extra-ordinary. But, in the near future everyday, would be quite ordinary.
No. Not prediction. Rather we were providing thought provocations. We were creating a catalog of things to think with and think about. We were creating a catalog full of creative inspiration for one possible near future – a near future that would be an extrapolation from today’s state of things. Our objective was to create a context in which possible-probables as well as unexpected-unlikelies were all made comprehensible. Were one to do a subsequent catalog as a reflection on another year, it would almost certainly be concerned with very different topics and, as such, materialize in a rather different set of products.
There were no touch-interaction fetish things like e-paper magazines, no iPhones with bigger screens, no Space Marine Exo-Skeletons, no time-traveling devices, not as many computational screen devices in bathroom medicine cabinets as one may have hoped or feared. There was no over-emphasis on reality goggles, no naive wrist-based ‘wearables’, a bare minimum of 3D printer accessories. Where those naive futures appeared we debased them – we represented them with as much reverence as one might a cheap mass-produced lager, an off-brand laundry soap, or an electric toothbrush replacement head. We focused on the practicalities of the ordinary and everyday and, where we felt necessary, commoditized, bargainized, three-for-a-dollarized and normalized.
What was most interesting was that the deliverable – a catalog of the near future’s normal ordinary everyday – led us in a curious way to a state that felt rather like the ontological present. I mean, the products and services and “ways of being” were extrapolated, but people still worried about finding a playmate for their kid and getting out of debt. As prevalent as ever were the shady promises of a better, fitter, sexier body and new tinctures to prevent the resilient common cold. People in our near future were looking for ways to avoid boredom, to be told a story, find the sport scores or place a bet, get from here to there, avoid unpleasantries, protect their loved ones and buy a pair of trousers. Tomorrow ended up very much the same as today, only the 19 of us were less “there” than the generations destined to inherit the world designed by the TBD Catalog. Those inheritors, the cast of characters we imagined browsing and purchasing from this catalog in the near future, seemed to take things in stride when it came to biomonitoring toilets, surveillance mitigation services, luxurious ice cubes, the need for data mangling, living a parametric-algorithmic lifestyle, goofy laser pointer toys, data sanctuaries, and the inevitable boredom of commuting to work (even with “self-drivers” or other forms of AVs).
The near future comes pre-built with the expectation that, being the future, it must be quite different from the vantage point of the present. This is an assumption we were trying to alter for a moment – the assumption that the future is either better or worse than the present. Far less often is the future represented as the same as now, only with a slightly different cast of characters. Were we to take this approach, which we did, it would be required that the cast of characters from the future be no more and no less awestruck by their present than we are today awestruck by the fact that we have on-demand satellite maps in our palms, that the vapor trail above us is a craft with hundreds of souls whipping through the stratosphere at breakneck speeds, and that when we sit down at a restaurant fresh water (with ice) is offered in several varieties from countries far away, with or without bubbles.
It was important that the concepts be carefully represented as normal, rather than spectacular. Were things to have a tinge of unexpected social or technical complexity as suggested, for example, by regulatory warnings, a hint of their possible mishaps, an indication that it may induce a coronary or require a signed waiver — all the better as these are indications of something in the normal ordinary everyday.
The near future will probably be quite like the present, only with a new cast of social actors and algorithms who will, like today, suffer under the banal, colorful, oftentimes infuriating characteristics of any socialized instrument and its services. I am referring to the bureaucracies that are introduced, the jargon, the new kinds of job titles, the mishaps, the hopes, the error messages, the dashed dreams, the family arguments, the accidental data-leak embarrassments, the evolved social norms, the humiliated politicians, the revised expectations of manner and decorum, the inevitable reactionary designed things that reverse current norms, the battalions of accessories. Etcetera.
Also, concepts often started as abstract speculations requiring deciphering and explication. These would need to be designed properly as products or services that felt as though they were well-lived in the world. Predictive design and speculative design live well in these zones of abstraction. To move a concept from speculative to design fictional requires work. To materialize an idea requires that one push it forward through the gauntlet any design concept must endure to become the product of the mass-manufacturer’s process of thing-making. To make an idea become a cataloged, consumable product in the world requires that it be manufacturable, desirable and profitable. Each of these dimensions in turn requires that, for example, the thing be imagined to have endured regulatory approvals, be protected as much as possible from intellectual property theft, be manufactured somewhere, suffer the inevitable tension between business drivers, marketing objectives, sales goals and design dreams while also withstanding transcontinental shipping, piracy of all kinds, and the color-choice whims of the CEO’s wife (perhaps of multiple CEOs over the course of a single product’s development), and have a price that is as cheap as necessary in many cases but perhaps reassuringly expensive in others. Things need to be imagined for their potential defects, their inevitable flaws and world-damaging properties. A product feels real if it has problems it mitigates as well as new, unexpected problems it introduces. Things need names that read as considered for certain categories of product, and as naive or imbecilic for others. Things need to be imagined in the hand, in use in “real world” contexts – in the home, office, data center, one’s AV, amongst children or co-workers. They should be forced to live through their springtime with fanfare, and their arthritic decline in the tangled, cracked and chipped 3/99¢ bin.
To do this requires that they live, not just as flat perfect things for board room PowerPoint and advertisements, but as mangled things co-existing with all of the dynamic tensions and forces in the world.
Ultimately, things are an embodiment of our own lived existence — our desires and aspirations; our vanities and conceits; our servility and humility. A Design Fiction catalog of things becomes an epistemic reflection of the times. One might read such a catalog as one might read a statement titled “The Year In Review” – a meditation on the highlights of a year recently concluded. This would not be prediction. It would be a narrative device, a form of storytelling that transcends naive fiction to become an object extracted from a near future world and brought back to us to consider, argue over and discuss. And, possibly, do again as an alternative to the old journalistic “The Year In Review” trope. Is there a better name or form for the thing that looks forward with modesty from today and captures what is seen there? What do we call the thing that stretches into the near future the nascent, barely embryonic hopes, speculations, hypotheses, forces, political tendencies – even the predictions from those still into such things? Is it Design Fiction? An evolved genre that splices together naive fiction, science-fiction, image-and-graphic mood boards and the now ridiculously useless ‘futurist’ predictions and reports? Something in between crowd-funding as a way to prototype a DIY idea and multiform, transmedia shenanigans?
We started receiving inquiries from individuals around the world who wanted to order items and provide crowd-funding style financial backing for product concepts. Some entities demanded licensing fees because a product the “catalog” purported to “sell” was something they had already developed and were selling themselves or, in some cases, they had even patented and so were notifying us that they would pursue legal remedies to address our malfeasance.
We found that products and entire service ecosystems we implied through advertisements actually existed in an obscure corner of the business world. Of course, there were items in the catalog that we knew existed already. In those cases, our task was not to re-predict them, but to continue them along their trajectory using one or a combination of our graphs of the future (see following pages). In these cases, it can be expected that an unwitting reader of TBD Catalog would naturally make contact with us to find out why they had not been made aware of the new version of the product, how they could get a discounted upgrade, or how they could download a firmware update of which they were simply unaware.
One could write quite didactically about innovation of such-and-so, or make a prediction of some sort, or commission a trend analyst’s report or a clever name-brand futurist’s speculation. Or, one could start with the names of some things and fill out their descriptions at their “consumer face” and let the things themselves come to life, defining the sensibilities of those humans (or algorithms?) that might use them. How would those things be sold – what materials? what cost? what consumer segment? Three-for-one? Party colors? Or one could do a very modern form of combined prototyping-funding such as the ‘Kickstarter model’ of presenting an idea before it is much more than a collection of pretty visual aids and then seeing what people might pay for an imaginary thing. Design Fiction is the modern form of imagining, innovating and making when we live in a world where the future may already have been here before.
It was square, squat, and inherently cute. It was friendly. It was easy to use. I’m talking about the beige box with the blue grinning face that came to live with us in 1985. But I’m also talking about the font that came with it. It was the typeface Chicago that spelled out “Welcome to Macintosh,” ushering us into a new age of personal computing. But it was also a new age for digital type. Here was a typeface created explicitly for the Macintosh, part of designer Susan Kare’s strategy to customize everything from the characters to the icons — that happy computer, the wristwatch, an actual trashcan — to make it feel more human and less machine.
Most of us couldn’t quite put our finger on what made these letters so different. But the secret was in the spaces between the letters. Chicago was one of the first proportional fonts, which meant that instead of each character straining to fill up pixels in a specified rectangle, the letters were allowed to take up as much or little space as they needed. It was more like a book than a screen. For the thousands of families who brought this radical new technology into our homes, Chicago helped us feel like the Mac was already speaking our language.
Maybe it was because I was so used to it greeting me, guiding me through every decision — chirping up in a dialogue box confirming that I did, indeed, want to shut down. But when I began writing on the computer, at age eight, Chicago was the typeface I used. I was mostly writing poems about trees during this period, and I’d bring the text into MacPaint where I could illustrate them using Kare’s paintbrush icon, sweeping the page with basket-weave patterns and single-pixel polka dots. Sometimes I’d click over and scroll through the available typefaces — New York, Geneva, Monaco — and reject each one not only on looks, but on principle. As a kid growing up in suburban St. Louis, Chicago was the only place on the list I had been.
Eventually, Apple retired Chicago, commissioning a new typeface, Charcoal, as a kind of homage to Chicago’s functionality. But I would be reunited with Chicago one last time: Due to its excellent readability at low resolution, Chicago was the font used on the black-and-white screen of the very first iPod. Once again, it was the typeface to welcome early adopters. Those of us who’d learned to type on a Mac were greeted by a familiar font on our first portable music players, cheerily guiding us as we spun the clickwheels in wonderment.
For its newest operating system, released this year, Apple chose Helvetica Neue. It is a choice meant to graduate us into yet another new world, of Retina displays in glassy tablets, where style wins out over substance. This world is also generic and cold. I type thousands of words into my screen every day, rarely pausing to specifically mourn the loss of Chicago. But I often wonder what happened to that smiling computer who used to greet me from the other side of the screen.
Not long ago, viewers of CBS’s 60 Minutes were treated to an intriguing bit of political theater when, in a story called “The Pentagon’s Ray Gun,” a crowd of what seemed to be angry protesters confronted a Humvee with a sinister-looking dish antenna on its roof. Waving placards that read world peace, love for all, peace not war, and, oddly, hug me, the crowd, in reality, was made up of U.S. soldiers playacting for the camera at a military base in Georgia. Shouting “Go home!” they threw what looked like tennis balls at uniformed comrades, “creating a scenario soldiers might encounter in Iraq,” explained correspondent David Martin: “angry protesters advancing on American troops, who have to choose between backing down or opening fire.” Fortunately — and this was the point of the story — there is now another option, demonstrated when the camera cut to the Humvee, where the “ray gun” operator was lining up the “protesters” in his crosshairs. Martin narrated: “He squeezes off a blast. The first shot hits them like an invisible punch. The protesters regroup, and he fires again, and again. Finally they’ve had enough. The ray gun drives them away with no harm done.” World peace would have to wait.
The story was in essence a twelve-minute Pentagon infomercial. What the “protesters” had come up against was the Active Denial System, a weapon, we were told, that “could change the rules of war and save huge numbers of lives in Iraq.” Active denial works like a giant, open-air microwave oven, using a beam of electromagnetic radiation to heat the skin of its targets to 130 degrees and force anyone in its path to flee in pain — but without injury, officials insist, making it one of the few weapons in military history to be promoted as harmless to its targets. The Pentagon claims that 11,000 tests on humans have resulted in but two cases of second-degree burns, a “safety” record that has put active denial at the forefront of an international arms-development effort involving an astonishing range of technologies: electrical weapons that shock and stun; laser weapons that cause dizziness or temporary blindness; acoustic weapons that deafen and nauseate; chemical weapons that irritate, incapacitate, or sedate; projectile weapons that knock down, bruise, and disable; and an assortment of nets, foams, and sprays that obstruct or immobilize. “Non-lethal” is the Pentagon’s approved term for these weapons, but their manufacturers also use the terms “soft kill,” “less-lethal,” “limited effects,” “low collateral damage,” and “compliance.” The weapons are intended primarily for use against unarmed or primitively armed civilians; they are designed to control crowds, clear buildings and streets, subdue and restrain individuals, and secure borders. The result is what appears to be the first arms race in which the opponent is the general population.1
That race began in the Sixties, when the rise of television introduced a new political dynamic to the exercise of state violence best encapsulated by the popular slogan “The whole world is watching.” As communications advances in the years since have increasingly exposed such violence, governments have realized that the public’s perception of injury and bloodshed must be carefully managed. “Even the lawful application of force can be misrepresented to or misunderstood by the public,” warns a 1997 joint report from the Pentagon and the Justice Department. “More than ever, the police and the military must be highly discreet when applying force.”
In this new era of triage, as democratic institutions and social safety nets are increasingly considered dispensable luxuries, the task of governance will be to lower the political and economic expectations of the masses without inciting full-fledged revolt. Non-lethal weapons promise to enhance what military theorists call “the political utility of force,” allowing dissent to be suppressed inconspicuously.
As the leveling power of mass communications has increased the ability of protesters to achieve concrete political gains, the Pentagon and federal law-enforcement agencies have responded by developing more media-friendly systems of control. Now, under cover of the “war on terror,” the deployment of these systems on the home front has dramatically escalated, an omen of a new phase in the ongoing class conflict.
The commission recognized that in riot control, the dilemma facing police was “too much force or too little.” Warning that excessive force “will incite the mob to further violence, as well as kindle seeds of resentment for police that, in turn, could cause a riot to recur,” the commission identified the problem as the lack of a “middle range of physical force.” It saw the solution in “nonlethal control equipment,” and called for an urgent program of research, noting some of the possibilities:
Distinctive marking dyes or odors and the filming of rioters have been recommended both to deter and positively identify persons guilty of illegal acts. Sticky tapes, adhesive blobs, and liquid foam are advocated to immobilize or block rioters. Intensely bright lights and loud distressing sounds capable of creating temporary disability may prove to be useful. Technology will provide still other options.
The ultimate goal, it seems, is to fight “Military Operations on Urban Terrain” (MOUT), using weapons with a rheostatic capability that, like Star Trek’s “phasers,” will allow military commanders to fine-tune the amount and type of force used in a given situation, and thereby to control opponents’ behavior with the scientific precision of a well-managed global production system.
The first significant use of these new weapons, appropriately, was against the fierce anti-globalization demonstrations that began at the World Trade Organization conference in Seattle in 1999. The largest upsurge of the left since the Sixties, the anti-globalization movement mobilized thousands of separate groups in a campaign against the human and environmental costs of corporate imperialism. Protesters had a new technology of their own to exploit — the Internet, which provided an unprecedented means of organizing and sharing information. More than 40,000 protesters converged on Seattle that November with the widely announced intention of “shutting down the WTO” in order to highlight its predatory “free trade” policies. With mass civil disobedience coordinated by cell phones and laptops, teams trained in nonviolence formed human blockades at strategic locations, snarling traffic, trapping trade delegates in hotels, and barricading conference sites; many thousands more swarmed streets in a “Festival of Resistance,” paralyzing the city’s business district.
Police attacked demonstrators with nearly every non-lethal weapon available to civilian authorities: MK-46 pepper-spray “Riot Extinguishers,” CS and CN grenades, pepper-spray grenades, pepperball launchers, “stinger” rubber-ball grenades, flash-bang concussion grenades, and a variety of blunt-trauma projectiles. But the protesters held their positions, forcing WTO officials to cancel that day’s events.
Galvanized by their victory, protesters targeted economic summits in rapid succession, swarming meetings of the World Economic Forum, the G8, and other gatherings in a dozen major cities. But without Seattle’s advantage of surprise, they faced increasingly elaborate MOUT tactics.
With the launch of the Global War on Terror, “the gloves were off,” as the White House put it: authorities had free rein to target protesters as potential terrorists.
The Rand Corporation, for its part, had already anticipated the power of what it called “netwar,” in which networks of “nonstate actors” use “swarming tactics” to overwhelm police and military. As Rand analysts wrote in a 2001 study, Networks, Netwars, and the Fight for the Future, the practitioners of such tactics “are proving very hard to deal with; some are winning. What all have in common is that they operate in small dispersed units that can deploy nimbly” and “know how to swarm and disperse, penetrate and disrupt, as well as elude and evade,” all aided by the quick exchange of information over the Internet.11
Now new tactics were at the ready, and the antiwar movement stalled as protesters found themselves faced with fenced-off “free speech zones”; stockyard-gated “containment pens”; the denial of march permits; mass detentions; media disinformation operations; harassment and detention of legal observers and independent media; police and FBI surveillance; pre-emptive raids on lodgings and meeting places; and growing deployments of non-lethal weapons. Among the more foreboding of these was the presence at the 2004 Republican National Convention in New York City of two Long Range Acoustic Devices, or LRADs, which use highly focused beams of ear-splitting sound to, as the manufacturer says, “influence behavior.”
The next hurdle for non-lethality, as Colonel Hymes’s comments suggest, will be the introduction of so-called second-generation non-lethal weapons into everyday policing and crowd control. Although “first-generation” weapons like rubber bullets and pepper spray have gained a certain acceptance, despite their many drawbacks, exotic technologies like the Active Denial System invariably cause public alarm.13 Nevertheless, the trend is now away from chemical and “kinetic” weapons that rely on physical trauma and toward post-kinetic weapons that, as researchers put it, “induce behavioral modification” more discreetly.14 One indication that the public may come to accept these new weapons has been the successful introduction of the Taser — apparently, even the taboo on electroshock can be overcome given the proper political climate.
Originally sold as an alternative to firearms, the Taser today has become an all-purpose tool for what police call “pain compliance.” Mounting evidence shows that the weapon is routinely used on people who pose little threat: those in handcuffs, in jail cells, in wheelchairs and hospital beds; schoolchildren, pregnant women, the mentally disturbed, the elderly; irate shoppers, obnoxious lawyers, argumentative drivers, nonviolent protesters — in fact, YouTube now has an entire category of videos in which people are Tasered for dubious reasons. In late 2007, public outrage flared briefly over the two most famous such videos — those of college student Andrew Meyer “drive-stunned” at a John Kerry speech, and of a distraught Polish immigrant, Robert Dziekanski, dying after repeated Taser jolts at Vancouver airport — but police and weapon were found blameless in both incidents.15 Strangely, YouTube’s videos may be promoting wider acceptance of the Taser; it appears that many viewers watch them for entertainment.
Flush with success, Taser International is now moving more directly into crowd control. Among its new offerings are a “Shockwave Area-Denial System,” which blankets the area in question with electrified darts, and a wireless Taser projectile with a 100-meter range, helpful for picking off “ringleaders” in unruly crowds. In line with the Pentagon’s growing interest in robotics, the company has also started a joint venture with the iRobot Corporation, maker of the Roomba vacuum cleaner, to develop Taser-armed robots; and in France, Taser’s distributor has announced plans for a flying drone that fires stun darts at criminal suspects or rioters.
Second-generation non-lethal weapons already appear to have been tested in the field. In a first in U.S. crowd control, protesters at last September’s G20 summit in Pittsburgh found themselves clutching their ears in pain as a vehicle mounted with an LRAD circled streets emitting a piercing “deterrent tone.” First seen (but not used) at the 2004 Republican Convention, the LRAD has since been used on Iraqi protesters and on pirates off the Somali coast; the Israeli Army has used a similar device against Palestinian protesters that it calls “the Scream,” which reportedly causes overwhelming dizziness and nausea.
It may be “tactical pharmacology,” finally, that holds the most promise for quelling the unrest stirred by capitalist meltdowns, imperialist wars, and environmental collapse. As JNLWD research director Susan Levine told a reporter in 1999, “We need something besides tear gas, like calmatives, anesthetic agents, that would put people to sleep or in a good mood.” Pentagon interest in “advanced riot-control agents” has long been an open secret.
Researchers at Penn State’s College of Medicine concluded, contrary to accepted principles of medical ethics, that “the development and use of non-lethal calmative techniques is both achievable and desirable,” and identified a large number of promising drug candidates, including benzodiazepines like Valium, serotonin-reuptake inhibitors like Prozac, and opiate derivatives like morphine, fentanyl, and carfentanil, the last commonly used by veterinarians to sedate large animals. The only problems they saw were in developing effective delivery vehicles and regulating dosages, and these, they recommended, could readily be solved through strategic partnerships with the pharmaceutical industry.16
Such research, however, is prohibited by the 1993 Chemical Weapons Convention, signed by more than 180 nations and ratified by the U.S. Senate in 1997. Little more was heard about the Pentagon’s “advanced riot-control agent” program until July 2008, when the Army announced that production was scheduled for its XM1063 “non-lethal personal suppression projectile,” an artillery shell that bursts in midair over its target, scattering 152 canisters over a 100,000-square-foot area, each dispersing a chemical agent as it parachutes down. There are many indications that a calmative, such as fentanyl, is the intended payload — a literal opiate of the masses.
Former defense secretary James Schlesinger, who served under Richard Nixon, repeated a familiar argument. If riot-control agents were to be banned, “whether in peace or war,” he said, “we may wind up placing ourselves in the position of the Chinese government in dealing with the Tiananmen Square uprising in 1989. The failure to use tear gas meant that the government only had recourse to the massive use of firepower to disperse the crowd.”17
The formulators of our policy of pain compliance feel so limited in their options that, confronted by citizens calling for change, their only response is to seek control or death. There are many other possible responses, most of them far better attuned to the democratic ideals they espouse in other contexts. That pain compliance seems to them the best alternative to justice is an indictment not of the dreams of the protesters but of the nightmares of those who would control them.
The message arrives on my “clean machine,” a MacBook Air loaded only with a sophisticated encryption package. “Change in plans,” my contact says. “Be in the lobby of the Hotel ______ by 1 pm. Bring a book and wait for ES to find you.”
He is a uniquely postmodern breed of whistle-blower. Physically, very few people have seen him since he disappeared into Moscow’s airport complex last June. But he has nevertheless maintained a presence on the world stage—not only as a man without a country but as a man without a body. When being interviewed at the South by Southwest conference or receiving humanitarian awards, his disembodied image smiles down from jumbotron screens. For an interview at the TED conference in March, he went a step further—a small screen bearing a live image of his face was placed on two leg-like poles attached vertically to remotely controlled wheels, giving him the ability to “walk” around the event, talk to people, and even pose for selfies with them. The spectacle suggests a sort of Big Brother in reverse: Orwell’s Winston Smith, the low-ranking party functionary, suddenly dominating telescreens throughout Oceania with messages promoting encryption and denouncing encroachments on privacy.
I read a recent Washington Post report. The story, by Greg Miller, recounts daily meetings with senior officials from the FBI, CIA, and State Department, all desperately trying to come up with ways to capture Snowden. One official told Miller: “We were hoping he was going to be stupid enough to get on some kind of airplane, and then have an ally say: ‘You’re in our airspace. Land.’ ” He wasn’t. And since he disappeared into Russia, the US seems to have lost all trace of him.
I do my best to avoid being followed as I head to the designated hotel for the interview, one that is a bit out of the way and attracts few Western visitors. I take a seat in the lobby facing the front door and open the book I was instructed to bring. Just past one, Snowden walks by, dressed in dark jeans and a brown sport coat and carrying a large black backpack over his right shoulder. He doesn’t see me until I stand up and walk beside him. “Where were you?” he asks. “I missed you.” I point to my seat. “And you were with the CIA?” I tease. He laughs.
He has been in Russia for more than a year now. He shops at a local grocery store where no one recognizes him, and he has picked up some of the language. He has learned to live modestly in an expensive city that is cleaner than New York and more sophisticated than Washington. In August, Snowden’s temporary asylum was set to expire. (On August 7, the government announced that he’d been granted a permit allowing him to stay three more years.)
Snowden is careful about what’s known in the intelligence world as operational security. As we sit down, he removes the battery from his cell phone. I left my iPhone back at my hotel. Snowden’s handlers repeatedly warned me that, even switched off, a cell phone can easily be turned into an NSA microphone. Knowledge of the agency’s tricks is one of the ways that Snowden has managed to stay free. Another is by avoiding areas frequented by Americans and other Westerners. Nevertheless, when he’s out in public at, say, a computer store, Russians occasionally recognize him. “Shh,” Snowden tells them, smiling, putting a finger to his lips.
Snowden still holds out hope that he will someday be allowed to return to the US. “I told the government I’d volunteer for prison, as long as it served the right purpose,” he says. “I care more about the country than what happens to me. But we can’t allow the law to become a political weapon or agree to scare people away from standing up for their rights, no matter how good the deal. I’m not going to be part of that.”
Meanwhile, Snowden will continue to haunt the US, the unpredictable impact of his actions resonating at home and around the world. The documents themselves, however, are out of his control. Snowden no longer has access to them; he says he didn’t bring them with him to Russia. Copies are now in the hands of three groups: First Look Media, the news organization backed by eBay founder Pierre Omidyar and home to journalist Glenn Greenwald and American documentary filmmaker Laura Poitras, the two original recipients of the documents; The Guardian newspaper, which also received copies before the British government pressured it into transferring physical custody (but not ownership) to The New York Times; and Barton Gellman, a writer for The Washington Post. It’s highly unlikely that the current custodians will ever return the documents to the NSA.
That has left US officials in something like a state of impotent expectation, waiting for the next round of revelations, the next diplomatic upheaval, a fresh dose of humiliation. Snowden tells me it doesn’t have to be like this. He says that he actually intended the government to have a good idea about what exactly he stole. Before he made off with the documents, he tried to leave a trail of digital bread crumbs so investigators could determine which documents he copied and took and which he just “touched.” That way, he hoped, the agency would see that his motive was whistle-blowing and not spying for a foreign government. It would also give the government time to prepare for leaks in the future, allowing it to change code words, revise operational plans, and take other steps to mitigate damage. But he believes the NSA’s audit missed those clues and simply reported the total number of documents he touched—1.7 million. (Snowden says he actually took far fewer.) “I figured they would have a hard time,” he says. “I didn’t figure they would be completely incapable.”
Snowden speculates that the government fears that the documents contain material that’s deeply damaging—secrets the custodians have yet to find. “I think they think there’s a smoking gun in there that would be the death of them all politically,” Snowden says. “The fact that the government’s investigation failed—that they don’t know what was taken and that they keep throwing out these ridiculous huge numbers—implies to me that somewhere in their damage assessment they must have seen something that was like, ‘Holy shit.’ And they think it’s still out there.”
Yet it is very likely that no one knows precisely what is in the mammoth haul of documents—not the NSA, not the custodians, not even Snowden himself. He would not say exactly how he gathered them, but others in the intelligence community have speculated that he simply used a web crawler, a program that can search for and copy all documents containing particular keywords or combinations of keywords. This could account for many of the documents that simply list highly technical and nearly unintelligible signal parameters and other statistics.
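The kind of tool the intelligence community speculates about can be sketched in a few lines. This is purely illustrative — the keywords, paths, and behavior below are invented, not drawn from any knowledge of what Snowden actually ran:

```python
import os
import shutil
import tempfile

# Hypothetical keywords for illustration only.
KEYWORDS = {"intercept", "collection"}

def crawl(root: str, dest: str) -> list:
    """Walk a document tree and copy out any file containing a keyword.

    Returns the sorted paths of matching documents; each match is also
    copied into dest.
    """
    matched = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, errors="ignore") as fh:
                    text = fh.read().lower()
            except OSError:
                continue  # unreadable file: skip rather than crash
            if any(keyword in text for keyword in KEYWORDS):
                matched.append(path)
                shutil.copy(path, dest)
    return sorted(matched)
```

A real collection tool would also need to handle binary formats, deduplicate, and follow access-controlled shares, but the core loop — match a keyword, copy the file — is this simple, which is part of what makes the crawler theory plausible.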
And there’s another prospect that further complicates matters: Some of the revelations attributed to Snowden may not in fact have come from him but from another leaker spilling secrets under Snowden’s name. Snowden himself adamantly refuses to address this possibility on the record. But independent of my visit to Snowden, I was given unrestricted access to his cache of documents in various locations. And going through this archive using a sophisticated digital search tool, I could not find some of the documents that have made their way into public view, leading me to conclude that there must be a second leaker somewhere. I’m not alone in reaching that conclusion. Both Greenwald and security expert Bruce Schneier—who have had extensive access to the cache—have publicly stated that they believe another whistle-blower is releasing secret documents to the media.
Some have even raised doubts about whether the infamous revelation that the NSA was tapping German chancellor Angela Merkel’s cell phone, long attributed to Snowden, came from his trove. At the time of that revelation, Der Spiegel simply attributed the information to Snowden and other unnamed sources. If other leakers exist within the NSA, it would be more than another nightmare for the agency—it would underscore its inability to control its own information and might indicate that Snowden’s rogue protest of government overreach has inspired others within the intelligence community. “They still haven’t fixed their problems,” Snowden says. “They still have negligent auditing, they still have things going for a walk, and they have no idea where they’re coming from and they have no idea where they’re going. And if that’s the case, how can we as the public trust the NSA with all of our information, with all of our private records, the permanent record of our lives?”
Snowden keeps close tabs on his evolving public profile, but he has been resistant to talking about himself. In part, this is because of his natural shyness and his reluctance about “dragging family into it and getting a biography.” He says he worries that sharing personal details will make him look narcissistic and arrogant. But mostly he’s concerned that he may inadvertently detract from the cause he has risked his life to promote. “I’m an engineer, not a politician,” he says. “I don’t want the stage. I’m terrified of giving these talking heads some distraction, some excuse to jeopardize, smear, and delegitimize a very important movement.”
While in Geneva, Snowden says, he met many spies who were deeply opposed to the war in Iraq and US policies in the Middle East. “The CIA case officers were all going, what the hell are we doing?” Because of his job maintaining computer systems and network operations, he had more access than ever to information about the conduct of the war. What he learned troubled him deeply. “This was the Bush period, when the war on terror had gotten really dark,” he says. “We were torturing people; we had warrantless wiretapping.”
He began to consider becoming a whistle-blower, but with Obama about to be elected, he held off. “I think even Obama’s critics were impressed and optimistic about the values that he represented,” he says. “He said that we’re not going to sacrifice our rights. We’re not going to change who we are just to catch some small percentage more terrorists.” But Snowden grew disappointed as, in his view, Obama didn’t follow through on his lofty rhetoric. “Not only did they not fulfill those promises, but they entirely repudiated them,” he says. “They went in the other direction. What does that mean for a society, for a democracy, when the people that you elect on the basis of promises can basically suborn the will of the electorate?”
Snowden’s disenchantment would only grow. It was bad enough when spies were getting bankers drunk to recruit them; now he was learning about targeted killings and mass surveillance, all piped into monitors at the NSA facilities around the world. Snowden would watch as military and CIA drones silently turned people into body parts. And he would also begin to appreciate the enormous scope of the NSA’s surveillance capabilities, an ability to map the movement of everyone in a city by monitoring their MAC address, a unique identifier emitted by every cell phone, computer, and other electronic device.
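Why a stable hardware identifier enables city-scale tracking is easy to see in miniature. The sketch below is a toy, with invented sensor data; real systems involve vastly more infrastructure, but the correlation step is essentially this:

```python
from collections import defaultdict

# Toy illustration: sightings of a MAC address reported by different
# sensors around a city. All data here is invented.
sightings = defaultdict(list)

def record(mac: str, hour: int, place: str) -> None:
    """Log one sensor's observation of a device."""
    sightings[mac].append((hour, place))

def track(mac: str) -> list:
    """Sightings sorted by time: the device's path through the city."""
    return [place for _hour, place in sorted(sightings[mac])]

# A hypothetical device seen by three sensors during one morning:
record("aa:bb:cc:dd:ee:ff", 9, "cafe")
record("aa:bb:cc:dd:ee:ff", 11, "station")
record("aa:bb:cc:dd:ee:ff", 10, "office")
```

Because the identifier never changes, no single sensor needs to know who the owner is; linking the sightings reconstructs the movement on its own.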
Snowden adjusts his glasses; one of the nose pads is missing, making them slip occasionally. He seems lost in thought, looking back to the moment of decision, the point of no return. The time when, thumb drive in hand, aware of the enormous potential consequences, he secretly went to work. “If the government will not represent our interests,” he says, his face serious, his words slow, “then the public will champion its own interests. And whistle-blowing provides a traditional means to do so.”
Snowden landed a job as an infrastructure analyst with another giant NSA contractor, Booz Allen. The role gave him rare dual-hat authority covering both domestic and foreign intercept capabilities—allowing him to trace domestic cyberattacks back to their country of origin. In his new job, Snowden became immersed in the highly secret world of planting malware into systems around the world and stealing gigabytes of foreign secrets. At the same time, he was also able to confirm, he says, that vast amounts of US communications “were being intercepted and stored without a warrant, without any requirement for criminal suspicion, probable cause, or individual designation.” He gathered that evidence and secreted it safely away.
One day an intelligence officer told him that TAO—a division of NSA hackers—had attempted in 2012 to remotely install an exploit in one of the core routers at a major Internet service provider in Syria, which was in the midst of a prolonged civil war. This would have given the NSA access to email and other Internet traffic from much of the country. But something went wrong, and the router was bricked instead—rendered totally inoperable. The failure of this router caused Syria to suddenly lose all connection to the Internet—although the public didn’t know that the US government was responsible. (This is the first time the claim has been revealed.)
“It’s no secret that we hack China very aggressively,” he says. “But we’ve crossed lines. We’re hacking universities and hospitals and wholly civilian infrastructure rather than actual government targets and military targets. And that’s a real concern.”
The last straw for Snowden was a secret program he discovered while getting up to speed on the capabilities of the NSA’s enormous and highly secret data storage facility in Bluffdale, Utah. Potentially capable of holding upwards of a yottabyte of data, some 500 quintillion pages of text, the 1 million-square-foot building is known within the NSA as the Mission Data Repository. (According to Snowden, the original name was Massive Data Repository, but it was changed after some staffers thought it sounded too creepy—and accurate.) Billions of phone calls, faxes, emails, computer-to-computer data transfers, and text messages from around the world flow through the MDR every hour. Some flow right through, some are kept briefly, and some are held forever.
The massive surveillance effort was bad enough, but Snowden was even more disturbed to discover a new, Strangelovian cyberwarfare program in the works, codenamed MonsterMind. The program, disclosed here for the first time, would automate the process of hunting for the beginnings of a foreign cyberattack. Software would constantly be on the lookout for traffic patterns indicating known or suspected attacks. When it detected an attack, MonsterMind would automatically block it from entering the country—a “kill” in cyber terminology.
Programs like this had existed for decades, but MonsterMind software would add a unique new capability: Instead of simply detecting and killing the malware at the point of entry, MonsterMind would automatically fire back, with no human involvement. That’s a problem, Snowden says, because the initial attacks are often routed through computers in innocent third countries. “These attacks can be spoofed,” he says. “You could have someone sitting in China, for example, making it appear that one of these attacks is originating in Russia. And then we end up shooting back at a Russian hospital. What happens next?”
In addition to the possibility of accidentally starting a war, Snowden views MonsterMind as the ultimate threat to privacy because, in order for the system to work, the NSA first would have to secretly get access to virtually all private communications coming in from overseas to people in the US. “The argument is that the only way we can identify these malicious traffic flows and respond to them is if we’re analyzing all traffic flows,” he says. “And if we’re analyzing all traffic flows, that means we have to be intercepting all traffic flows. That means violating the Fourth Amendment, seizing private communications without a warrant, without probable cause or even a suspicion of wrongdoing. For everyone, all the time.”
Given the NSA’s new data storage mausoleum in Bluffdale, its potential to start an accidental war, and the charge to conduct surveillance on all incoming communications, Snowden believed he had no choice but to take his thumb drives and tell the world what he knew. The only question was when.
On March 13, 2013, sitting at his desk in the “tunnel” surrounded by computer screens, Snowden read a news story that convinced him that the time had come to act. It was an account of director of national intelligence James Clapper telling a Senate committee that the NSA does “not wittingly” collect information on millions of Americans. “I think I was reading it in the paper the next day, talking to coworkers, saying, can you believe this shit?”
Snowden and his colleagues had discussed the routine deception around the breadth of the NSA’s spying many times, so it wasn’t surprising to him when they had little reaction to Clapper’s testimony. “It was more of just acceptance,” he says, calling it “the banality of evil”—a reference to Hannah Arendt’s study of bureaucrats in Nazi Germany.
“It’s like the boiling frog,” Snowden tells me. “You get exposed to a little bit of evil, a little bit of rule-breaking, a little bit of dishonesty, a little bit of deceptiveness, a little bit of disservice to the public interest, and you can brush it off, you can come to justify it. But if you do that, it creates a slippery slope that just increases over time, and by the time you’ve been in 15 years, 20 years, 25 years, you’ve seen it all and it doesn’t shock you. And so you see it as normal. And that’s the problem, that’s what the Clapper event was all about. He saw deceiving the American people as what he does, as his job, as something completely ordinary. And he was right that he wouldn’t be punished for it, because he was revealed as having lied under oath and he didn’t even get a slap on the wrist for it. It says a lot about the system and a lot about our leaders.” Snowden decided it was time to hop out of the water before he too was boiled alive.
At the same time, he knew there would be dire consequences. “It’s really hard to take that step—not only do I believe in something, I believe in it enough that I’m willing to set my own life on fire and burn it to the ground.”
But he felt that he had no choice. Two months later he boarded a flight to Hong Kong with a pocket full of thumb drives.
Rather than the Russian secret police, it’s his old employers, the CIA and the NSA, that Snowden most fears. “If somebody’s really watching me, they’ve got a team of guys whose job is just to hack me,” he says. “I don’t think they’ve geolocated me, but they almost certainly monitor who I’m talking to online. Even if they don’t know what you’re saying, because it’s encrypted, they can still get a lot from who you’re talking to and when you’re talking to them.”
More than anything, Snowden fears a blunder that will destroy all the progress toward reforms for which he has sacrificed so much. “I’m not self-destructive. I don’t want to self-immolate and erase myself from the pages of history. But if we don’t take chances, we can’t win,” he says. And so he takes great pains to stay one step ahead of his presumed pursuers—he switches computers and email accounts constantly. Nevertheless, he knows he’s liable to be compromised eventually: “I’m going to slip up and they’re going to hack me. It’s going to happen.”
Indeed, some of his fellow travelers have already committed some egregious mistakes. Last year, Greenwald found himself unable to open the encryption on a large trove of secrets from GCHQ—the British counterpart of the NSA—that Snowden had passed to him. So he sent his longtime partner, David Miranda, from their home in Rio to Berlin to get another set from Poitras. But in making the arrangements, The Guardian booked a transfer through London. Tipped off, probably as a result of GCHQ surveillance, British authorities detained Miranda as soon as he arrived and questioned him for nine hours. In addition, an external hard drive containing 60 gigabytes of data—about 58,000 pages of documents—was seized. Although the documents had been encrypted using a sophisticated program known as TrueCrypt, the British authorities discovered a piece of paper Miranda was carrying with the password for one of the files, and they were able to decrypt about 75 pages. (Greenwald has still not gained access to the complete GCHQ documents.)
Another concern for Snowden is what he calls NSA fatigue—the public becoming numb to disclosures of mass surveillance, just as it becomes inured to news of battle deaths during a war. “One death is a tragedy, and a million is a statistic,” he says, mordantly quoting Stalin. “Just as the violation of Angela Merkel’s rights is a massive scandal and the violation of 80 million Germans is a nonstory.”
Nor is he optimistic that the next election will bring any meaningful reform. In the end, Snowden thinks we should put our faith in technology—not politicians. “We have the means and we have the technology to end mass surveillance without any legislative action at all, without any policy changes.” The answer, he says, is robust encryption. “By basically adopting changes like making encryption a universal standard—where all communications are encrypted by default—we can end mass surveillance not just in the United States but around the world.”
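A small, concrete instance of "encrypted by default" already exists in everyday tooling. Python's ssl module, for example, ships a client context whose defaults require TLS certificate validation and hostname checking, so opting out of secure behavior takes deliberate extra work — the design stance Snowden is arguing for, applied at one small layer:

```python
import ssl

# The stdlib's recommended client context: secure settings are the
# defaults, not options the programmer must remember to enable.
ctx = ssl.create_default_context()

# Certificate validation and hostname checking are both on by default.
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True
```

Universal default encryption, in Snowden's telling, is this same principle extended from one library to all communication channels.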
“The question for us is not what new story will come out next. The question is, what are we going to do about it?”
Carrying two iPhones that beep out assignments throughout the day, Lyons works for four different app-enabled bike-courier services: WunWun, UberRush, Zipments and Petal by Pedal. He does about 25 to 30 deliveries per day, which adds up to about 50 miles, including the commute.
When he first got started last year, Lyons tried working for traditional bike-courier services where he would make $3 per delivery. “It was outrageous,” he says. “They treat you like an animal.”
Some of the newer services Lyons works for are subsidized. When it first started, Uber was giving away free courier service for its UberRush local delivery trial. Lyons says that demand has dropped a bit since the initial promos wore off.
WunWun — which has the insane premise of deliveries from any store or restaurant in Manhattan within an hour, for free — keeps Lyons the busiest.
Lyons claims WunWun’s system of working for tips, which are suggested within the app at 30 percent, somehow actually works. “You never really get snubbed out on a tip,” he says.
By literally working his butt off, Lyons thinks he will make between $45,000 and $60,000 this year.
“If people wanted it so badly, why did it not exist?” he says. “It was too darned expensive, and it was not sustainable. Even in 2010, a business like ours would be incredibly difficult to start because not enough sections of the population had smartphones.”
Still, Xu will admit that Palo Alto might not be the most representative test market in the world. As we drive to pick up the delivery, we pass three Teslas parked in a row in the shopping-center parking lot. “Only in Palo Alto,” he says.
But it’s bigger than Palo Alto. It’s bigger than San Francisco or New York. Take all these stories together and the larger point is: The business of bringing people what they want, when they want it, is booming.
A decade ago, we got iTunes and with it the ability to have a song bought and delivered with the push of a button. Then Facebook helped us stay in touch with our spread-out friends and family from the comfort of our couch. Then Netflix movies started coming over the air instead of as DVDs in our mailboxes. Now it’s not just Web pages that we can load up instantly; it’s the physical world.
Not to neglect the important historical contributions of pizza joints and Chinese restaurants, but the groundwork for what you might call the instant gratification economy was laid by Amazon, which spent years building up its inventory, fulfillment infrastructure and, most importantly, customer expectations for getting whatever they want delivered to their doors two days later.
Then Uber came along and established the precedent of a large-scale marketplace powered by independent workers and smartphones. After that started to work, every pitch deck in Silicon Valley seemed to morph overnight into an “Uber for X” startup.
On the one hand, this is a positive development. As startups merge online expectations with offline reality, the Internet is becoming more than a glowing screen drawing us away from the real world. On the other hand, instant gratification tempts us to be profoundly lazy and perhaps unreasonably impatient.
As for whether there’s demand, forces are converging to fulfill the notion of what some pundits label “IWWIWWIWI.” That is, “I want what I want when I want it.” It’s not the easiest acronym to get your tongue around — but it’s pretty to look at, and it’s right on the money.
Yarrow thinks we’ve become conditioned for impatience by technology like Internet search and smartphones. “Today, we have almost no tolerance for boredom,” she told me. “Our brains are malleable, and I think they have shifted to accommodate much more stimulation. We’re fascinated by newness, and we desire to get the new thing right away. We want what we want when we want it.”
Someone had told me the day before that one way to think about all this instant gratification stuff is that it basically brings rich-people benefits to the average person.
In his view, the magic of Uber and services modeled on Uber is that they help you value your time the way a rich person would, without spending your money the way a rich person would.
For decades, books and TV shows planted seeds of desire for instant gratification in impressionable minds. But across many of these stories about suburban genies and witches, magic wands and technology of the future, there’s a shadow side to getting what you want when you want it. The princesses always seem to run out of wishes before they get what they really need. Their greed is their doom.
“Don’t care how, I want it nooow,” sings greedy little Veruca Salt, right up until she falls into Willy Wonka’s garbage chute, never to be seen again.
In Pixar’s wistful animated sci-fi story “Wall-E,” the people of the future zoom around in hovering chairs in a climate-controlled dome, with robots refilling their sodas. Their bodies are so flabby they can’t even stand. It’s the ultimate incarnation of the couch potato.
The most important reason that this is happening now is that workers have smartphones. After a briefer-than-brief application process, companies like Uber hand out phones to workers — or just give them an app to download onto their personal devices — and suddenly, for better or worse, they’ve got a branded on-demand service.
Over and over again, startups in the instant gratification space tell me that the most crucial part of their arsenal is an app to help remote workers receive assignments, schedule jobs and map where they are going.
In large part because they are powered by a mobile workforce, instant gratification startups avoid much of the hassle and expense of building physical infrastructure.
“Remote controls for real life” is how venture capitalist Matt Cohler described mobile apps like Uber and the food-delivery service GrubHub two years ago — because their simple interfaces summon things to happen in the physical world.
Today, that real-life remote control feels even more like a magic wand. At a lunch meeting, investor Shervin Pishevar pulls out his phone, opens the Uber app and sets his location to Japan. “If I push this button right now,” he marvels, “I’m going to move metal in Tokyo.”
He describes this as a boomerang back to a village economy. After years of trends toward suburbs, big-box stores and car ownership, smartphones could be helping us get back to where we came from. The combined forces of urbanization, online commerce and trust mean that people can efficiently share goods and services on a local level, more than ever before.
Caviar, which was founded on the premise that “no good restaurants in San Francisco deliver,” became profitable within three months of launching. It has a much snazzier list of restaurants than GrubHub, including Momofuku in New York and Delfina in San Francisco.
Caviar CEO Jason Wang says his startup plans to soon drop delivery fees to $4.99 from $9.99. It pays drivers $15 per delivery and takes a cut of up to 25 percent of each order, depending on the restaurant. Even after the price cut, “We’ll still make money, because our margins are very good,” Wang says.
Uber is a company that owns nothing. It connects available drivers and their cars to people who want to be their passengers. By juicing supply with surge pricing and demand with discounts, Uber is able to create — out of thin air — a reliable service that exists in 140 cities around the world.
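The balancing act described above can be sketched as a simple feedback rule. The thresholds and cap below are invented for illustration, not Uber's actual pricing model:

```python
# Toy surge model: when open requests outpace available drivers, a
# rising price multiplier suppresses demand and pulls more drivers
# online until the two sides balance. All numbers are hypothetical.
def surge_multiplier(open_requests: int, available_drivers: int) -> float:
    if available_drivers == 0:
        return 3.0  # assumed cap when no supply exists at all
    ratio = open_requests / available_drivers
    # Clamp between no surge (1.0) and the assumed cap (3.0).
    return min(3.0, max(1.0, round(ratio, 1)))
```

The point of the mechanism is that the multiplier works on both sides at once: riders see a higher price and some drop out, while drivers see a higher payout and more log on.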
Without fail, instant gratification startups say they will win because they are smart at logistics.
Describing his business, Instacart founder and CEO Apoorva Mehta says, “It really is a data-science problem masked into a consumer product.”
DoorDash’s Xu describes his purpose as a machine-learning problem: Discovering “the variance of the variance” so his algorithm can reliably estimate prep and delivery time based on factors like how long a type of food stays warm, what a restaurant’s error rate is (the norm is 25 percent) and how fast a particular driver has been in the past.
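A stripped-down version of that estimation problem might look like the following. The numbers and weighting are invented; this is a sketch of the idea, not DoorDash's algorithm:

```python
import statistics

def promise_window(prep_minutes, travel_minutes, error_rate=0.25):
    """Estimate an (optimistic, promised) delivery window in minutes.

    Combines per-restaurant prep history with per-driver travel history,
    pads for orders that must be re-made, and widens the promise by the
    spread of past observations -- the "variance of the variance" idea.
    """
    base = statistics.mean(prep_minutes) + statistics.mean(travel_minutes)
    base *= 1 + error_rate * 0.5  # pad for the restaurant's error rate
    spread = statistics.pstdev(prep_minutes) + statistics.pstdev(travel_minutes)
    return base, base + 2 * spread
```

The key design point is that an unreliable restaurant doesn't just shift the estimate later — it widens the window, because variance, not just the mean, is what breaks a delivery promise.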
Uber aims to match up a driver and passenger as quickly as possible. Food delivery is more complicated, according to Xu.
“It’s almost never the driver that’s closest to the restaurant when the order is placed,” Xu says.
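Why the nearest driver loses can be shown with a toy dispatch rule. The cost weights here are invented; the sketch just illustrates the matching logic Xu describes:

```python
def pick_driver(drivers, ready_in_minutes):
    """Pick the driver whose arrival best matches when the food is ready.

    drivers: list of (name, minutes_to_restaurant) pairs.
    Lateness (cold food) is penalized at full weight; arriving early
    (an idle driver) at half weight -- both weights are hypothetical.
    """
    def cost(driver):
        _name, eta = driver
        slack = eta - ready_in_minutes
        return slack if slack >= 0 else -slack * 0.5
    return min(drivers, key=cost)[0]
```

With an order that will be ready in ten minutes, a driver three minutes away would wait seven minutes at the counter, so a driver twelve minutes away is the better match — exactly the "almost never the closest driver" effect.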
A mobile medical-marijuana delivery startup called Eaze launched in San Francisco. Not only was Eaze open for business, it was open for business 24 hours a day.
It can be too easy to forget that people make “instant” happen. And, generally, these people are not a traditionally stable workforce. They are instead a flexible and scalable network of workers — “fractional employees” — who tap in and tap out as needed, and as suits them.
The smartphone is at the center of the sharing economy. Every company mentioned in this series on the instant gratification economy runs on worker smartphones. GPS, texting and mobile-app notifications are the ways to make flexible work actually work.
It’s very common for people to pick up gigs from multiple services — in the morning, grab some grocery orders on Instacart; then when you get tired of lifting large bags, run a shift during Sprig’s prime lunch hours; then when you get lonely from ferrying around inanimate objects, sign into Lyft to interact with an actual person.
NYU business school professor Arun Sundararajan’s summer research project is counting the number of jobs created by the sharing economy. He doesn’t have an estimate yet, but he points out that the U.S. workforce is already 20 percent to 25 percent freelance.
Sundararajan says he sees a lot of good in the sharing economy. “It will lead people to entrepreneurship without the extreme risks.” He thinks of platforms like Uber as gateways. “It’s even easier than finding a full-time job, which is easier than freelance.”
Redefining delivery for a new era of customers who want everything right away requires rethinking operations. By focusing attention on creating a powerful logistical system, and tying into the “sharing economy,” many of the new crop of startups in the on-demand space are trying to offer faster service at a much lower operational cost.
And so the young players in the instant gratification economy are ferrying cargo across town via crowdsourced workers.
Usually, these are independent contractors, who decide when they want to work, drive their own vehicles, receive directions about where they need to be via smartphone — and cover the cost of their own parking tickets. The new buzzword for this is “fractional employment.”
Deliv is trying to deliver almost anything and everything later the same day, for as little as $5.
Crowdsourced drivers pick up batches of orders, and then take them out to people’s homes.
“I don’t own trucks, I don’t pay for drivers I don’t use, I don’t pay for hubs,” Carmeli says. “The malls are my hubs.”
Amazon said last year that more than 20 million members had signed up for its two-day delivery service, Prime, which now costs $99 per year. While that’s a small number in the grand scheme of things, the group’s high-spending habits — members are estimated to spend more than twice as much as regular Amazon customers — are having a magnetic effect on the rest of the industry.
A skunkworks team at Google developed what became Google Shopping Express last year, by putting the Amazon Prime model under a microscope. According to a source familiar with the project, the biggest lesson was that it’s worth investing ahead of where the market might be today.
Which is to say, many people still don’t know they want same-day delivery, because today they think same-day delivery means fuss, friction and expense. But if you make something fast and easy, consumers will come to appreciate it — and maybe even pay for it. So the upfront investment is worth it.
“It’s better to build volume first, than to launch with a ‘gotcha,’” the source says.
That’s the hypothesis, anyway.
And Google isn’t testing the last part of that hypothesis — charging people money — yet.
It is currently subsidizing six-month trials of unlimited free delivery. In fact, the company is throwing something like $500 million at Google Shopping Express.
Competing with that kind of budget is a scary prospect for startups.
The scrum now includes two Ubers for home cleaning, a few Ubers for handypeople, at least three Ubers for massages, five Ubers for valet parking, a couple of Ubers for laundries, an emerging group of Ubers for hair and makeup, and so very many Ubers for food.
Could you actually make a business out of offering same-day delivery — for free? Permanently, not as a promotion.
WunWun promises to buy anything from any store or any food from any restaurant in Manhattan, parts of Brooklyn and the Hamptons, and deliver it to any place in that same zone. It’s free.
Hnetinka was inspired by an April 2013 investment memo from Jefferies called “Same-Day: The Next Killer App,” which made two big points: 1) Free shipping has become a “must-have” in e-commerce. Half of consumers abandon online shopping carts without it; and 2) there’s the opportunity to improve on that service by making it same-day.
For today, WunWun is making money by taking a slice of tips, and by getting discounts from retailers it spends a lot of money with that it doesn’t pass along to customers.
Tomorrow, WunWun will try to create the offline equivalent of search advertising, Hnetinka says.
Stores will be able to bid to be the supplier for WunWun orders, whether tennis balls, ChapStick or Yankees hats.
“That’s when WunWun really starts to make a lot of money,” Hnetinka says. “We have created the largest demand funnel. We’ve brought together convenience of ordering online with immediacy of offline. So we’re not talking about profitability margins, we’re talking about marketing budgets.”
At the time, it seemed like all you had to do was pick a noun, add “.com,” and you were in business.
As a sign of the times, one company called Computer.com spent half its $5.8 million in venture capital airing Super Bowl ads on the day it launched a site purporting to teach people about using computers.
And there were parties, legendary parties, where the likes of Elvis Costello and Beck and the B-52s played, sponsor banners bedecked the walls, and many of the revelers collected their mountains of swag while having no idea which company was even throwing that night’s bash.
Even if Kozmo and its cohort had a chance at a business model that worked, they were all spending more money than they could possibly earn on advertising and parties and weird promotional tie-ups to return movies at Starbucks.
As we all know, that boom went bust in 2000. The period’s most famous flameouts — Pets.com, Urbanfetch, Kozmo, Webvan, even Computer.com, somehow — were all gone by 2001. What’s left — a cautionary tale and some mascot dolls for sale on eBay.
Same-day service is the single-biggest wave in e-commerce, Wainwright says. The single best experience she had shopping online was when she forgot to pack a certain special black cashmere sweater before flying to New York for a business trip.
Wainwright says she realized the sweater was missing at 11 pm, when she unpacked her bag at the hotel. But it was still posted on the online retailer Net-A-Porter, where she originally bought it, so she placed another order and it was delivered to her office at 10:30 the next morning by a deliveryman in a bellboy suit bearing an iPad for her signature.
“It was absolutely the most amazing thing,” Wainwright says. “It was like $25, it was nothing. Now, the sweater wasn’t cheap — but it was the exact same sweater I had left on my bed.”
Jennings has set up a virtual Google Voice number attached to his doorbell so he can let people into his entryway from his phone when he’s not home.
“Say you run out of toothpaste in the morning, you can order it, and then it’s ready for when you brush your teeth at night,” he says.
“The majority of the time, there’s no interaction,” Jennings says, meaning he doesn’t have to say hello to a delivery person or sign for a package.
And in the future, people may be taken out of the delivery equation altogether.
That future is coming sooner than you think. Two years ago, the geek world went wild for an idea called Tacocopter. “Flying robots deliver tacos to your location,” said its website. “Easy ordering on your smartphone.”
“It wouldn’t surprise me to see that the regulations that now limit such uses of drone technology will almost certainly remain in effect much longer than the technological limitations remain a hurdle,” wrote Mike Masnick.
Eight months ago, Amazon upped the Tacocopter stakes with a promo video for Amazon Prime Air, showing a hovering robotic aircraft depositing a package on a suburban patio. It was a marketing stunt designed to jumpstart the holiday shopping season.
Or was it?
In July, Amazon wrote to the FAA asking for permission to test flying commercial drones outside at speeds of up to 50 miles per hour. The company said it hopes to deliver packages weighing five pounds within 30 minutes of orders being placed.
“A lot of things fundamentally change,” says investor Shervin Pishevar. “Does the architecture of homes change because there’s more space when you don’t need garages and kitchens? Do you really need a grocery store? You shouldn’t use all that real estate in a city for giant parking lots, you should push a button and be able to get what you want delivered, like Instacart.”
He continues. “And then you argue, is there a world where you have Munchery [another San Francisco food creation and distribution service] delivered to a restaurant that’s not really a restaurant, but it’s a … it’s a front-end. It’s a beautiful spot with a beautiful view, and it doesn’t need a kitchen, just have a few tables for a sit-down dinner.”
This train of thought has taken him to a new place. “You know, I hadn’t thought about that,” Pishevar says. “It’s just a … a distributed table. And then someone would come serve you.”
A popular justification for all this food-startup fundraising is frequency: Most people eat three times a day, at least.
No, really, that’s what every venture capitalist will remind you. This market is an opportunity because it ties into existing daily habits. People eat more often than they need to Uber across town. And so, the biggest opportunity in “instant” is food.
Sure, making food is not novel. The innovation here is making food that ties into smart logistics systems that match supply and demand, and coordinating crowdsourced workers so that meals arrive so fast it seems like magic.
“We’re mass-producing the same meal for all these people,” Tsui says. “We get economies of scale that no restaurant will ever have because of the physical location. Whereas, we can serve the whole Bay Area with the same supply.”
This is not just a restaurant, in other words. Combining the core mobile functions of location and real time, Tsui says, makes for a fundamental shift beyond what other mobile apps — besides Uber — are doing.
Especially for those who live in the cities well served by these services, it’s probably time to start thinking about what deserves to be slowed down, and what things we’d prefer to wait for and savor. Either that, or the inexorable march toward convenience will bring us ever closer to fulfilling the prophecy of those shapeless “Wall-E” couch potatoes, who have trouble standing up after sitting on the couch for so long.
But beyond instant — what comes next?
It’s probably making those brilliant on-demand logistics systems even more brilliant, anticipating our wants and needs before we even have them, and starting to send things our way before we push the button.
And maybe then Veruca Salt would just calm down.
In October 2002, Peter Ho, the permanent secretary of defense for the tiny island city-state of Singapore, paid a visit to the offices of the Defense Advanced Research Projects Agency (DARPA), the U.S. Defense Department’s R&D outfit best known for developing the M16 rifle, stealth aircraft technology, and the Internet. Ho didn’t want to talk about military hardware. Rather, he had made the daylong plane trip to meet with retired Navy Rear Adm. John Poindexter, one of DARPA’s then-senior program directors and a former national security advisor to President Ronald Reagan. Ho had heard that Poindexter was running a novel experiment to harness enormous amounts of electronic information and analyze it for patterns of suspicious activity — mainly potential terrorist attacks.
The two men met in Poindexter’s small office in Virginia, and on a whiteboard, Poindexter sketched out for Ho the core concepts of his imagined system, which Poindexter called Total Information Awareness (TIA). It would gather up all manner of electronic records — emails, phone logs, Internet searches, airline reservations, hotel bookings, credit card transactions, medical reports — and then, based on predetermined scenarios of possible terrorist plots, look for the digital “signatures” or footprints that would-be attackers might have left in the data space. The idea was to spot the bad guys in the planning stages and to alert law enforcement and intelligence officials to intervene.
Ho returned home inspired that Singapore could put a TIA-like system to good use. Four months later he got his chance, when an outbreak of severe acute respiratory syndrome (SARS) swept through the country, killing 33, dramatically slowing the economy, and shaking the tiny island nation to its core. Using Poindexter’s design, the government soon established the Risk Assessment and Horizon Scanning program (RAHS, pronounced “roz”) inside a Defense Ministry agency responsible for preventing terrorist attacks and “nonconventional” strikes, such as those using chemical or biological weapons — an effort to see how Singapore could avoid or better manage “future shocks.” Singaporean officials gave speeches and interviews about how they were deploying big data in the service of national defense — a pitch that jibed perfectly with the country’s technophilic culture.
Many current and former U.S. officials have come to see Singapore as a model for how they’d build an intelligence apparatus if privacy laws and a long tradition of civil liberties weren’t standing in the way.
They are drawn not just to Singapore’s embrace of mass surveillance but also to the country’s curious mix of democracy and authoritarianism, in which a paternalistic government ensures people’s basic needs — housing, education, security — in return for almost reverential deference. It is a law-and-order society, and the definition of “order” is all-encompassing.
Ten years after its founding, the RAHS program has evolved beyond anything Poindexter could have imagined. Across Singapore’s national ministries and departments today, armies of civil servants use scenario-based planning and big-data analysis from RAHS for a host of applications beyond fending off bombs and bugs. They use it to plan procurement cycles and budgets, make economic forecasts, inform immigration policy, study housing markets, and develop education plans for Singaporean schoolchildren — and they are looking to analyze Facebook posts, Twitter messages, and other social media in an attempt to “gauge the nation’s mood” about everything from government social programs to the potential for civil unrest.
In other words, Singapore has become a laboratory not only for testing how mass surveillance and big-data analysis might prevent terrorism, but for determining whether technology can be used to engineer a more harmonious society.
In a country run by engineers and technocrats, it’s an article of faith among the governing elite, and seemingly among most of the public, that Singapore’s 3.8 million citizens and permanent residents — a mix of ethnic Chinese, Indians, and Malays who live crammed into 716 square kilometers along with another 1.5 million nonresident immigrants and foreign workers — are perpetually on a knife’s edge between harmony and chaos.
“Singapore is a small island,” residents are quick to tell visitors, reciting the mantra to explain both their young country’s inherent fragility and its obsessive vigilance. Since Singapore gained independence from its union with Malaysia in 1965, the nation has been fixated on the forces aligned against it, from the military superiority of potentially aggressive and much larger neighbors, to its lack of indigenous energy resources, to the country’s longtime dependence on Malaysia for fresh water. “Singapore shouldn’t exist. It’s an invented country,” one top-ranking government official told me on a recent visit, trying to capture the existential peril that seems to inform so many of the country’s decisions.
But in less than 50 years, Singapore has achieved extraordinary success. Despite the government’s quasi-socialistic cradle-to-grave care, the city-state is enthusiastically pro-business, and a 2012 report ranked it as the world’s wealthiest country, based on GDP per capita. Singapore’s port handles 20 percent of the world’s shipping containers and nearly half of the world’s crude oil shipments; its airport is the principal air-cargo hub for all of Southeast Asia; and thousands of corporations have placed their Asian regional headquarters there. This economic rise might be unprecedented in the modern era, yet the more Singapore has grown, the more Singaporeans fear loss. The colloquial word kiasu, which stems from a vernacular Chinese word that means “fear of losing,” is a shorthand by which natives concisely convey the sense of vulnerability that seems coded into their social DNA (as well as their anxiety about missing out — on the best schools, the best jobs, the best new consumer products). Singaporeans’ boundless ambition is matched only by their extreme aversion to risk.
That is one reason the SARS outbreak flung the door wide open for RAHS. From late February to July of 2003, the virus flamed through the country. It turned out that three women who were hospitalized and treated for pneumonia in Singapore had contracted SARS while traveling in Hong Kong. Although two of the women recovered without infecting anyone, the third patient sparked an outbreak when she passed the virus to 22 people, including a nurse who went on to infect dozens of others. Officials identified a network of three more so-called “superspreaders” — together, five people caused more than half the country’s 238 infections. If Singaporean officials had detected any of these cases sooner, they might have halted the spread of the virus.
Health officials formed a task force two weeks after the virus was first spotted and took extraordinary measures to contain it, but they knew little about how it was spreading. They distributed thermometers to more than 1 million households, along with descriptions of SARS’s symptoms. Officials checked for fevers at schools and businesses, and they even used infrared thermal imagers to scan travelers at the airport. The government invoked Singapore’s Infectious Diseases Act and ordered in-home quarantines for more than 850 people who showed signs of infection, enforcing the rule with surveillance devices and electronic monitoring equipment. Investigators tracked down all people with whom the victims had been in contact. The government closed all schools at the pre-university level, affecting 600,000 students.
The SARS outbreak reminded Singaporeans that their national prosperity could be imperiled in just a few months by a microscopic invader that might wipe out a significant portion of the densely packed island’s population.
Months after the virus abated, Ho and his colleagues ran a simulation using Poindexter’s TIA ideas to see whether they could have detected the outbreak. Ho will not reveal what forms of information he and his colleagues used — by U.S. standards, Singapore’s privacy laws are virtually nonexistent, and it’s possible that the government collected private communications, financial data, public transportation records, and medical information without any court approval or private consent — but Ho claims that the experiment was very encouraging. It showed that if Singapore had previously installed a big-data analysis system, it could have spotted the signs of a potential outbreak two months before the virus hit the country’s shores. Prior to the SARS outbreak, for example, there were reports of strange, unexplained lung infections in China. Threads of information like that, if woven together, could in theory warn analysts of pending crises.
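Neither Ho nor the article says how such threads would actually be combined. One textbook way to formalize "weaving together" weak signals is to multiply their likelihood ratios, Bayes-style; the sketch below uses entirely invented numbers and makes no claim about how RAHS really works.

```python
import math

# Toy illustration of weaving weak threads together: each report alone is
# unconvincing, but multiplying their likelihood ratios (assuming the
# reports are independent) shifts the odds sharply. Numbers are invented.

def combined_odds(likelihood_ratios, prior_odds=1e-4):
    """Each ratio is P(report | outbreak brewing) / P(report | normal)."""
    log_odds = math.log(prior_odds) + sum(math.log(r) for r in likelihood_ratios)
    return math.exp(log_odds)

# e.g. unexplained lung infections abroad, unusual clinic visits, absenteeism
threads = [3.0, 2.5, 4.0, 3.5]
# Together these multiply the prior odds 105-fold: not proof of anything,
# but enough to tell analysts where to dig further.
alert = combined_odds(threads)
```

The point of a system like this is the one the program's designers themselves make: not to predict the outbreak, but to flag when the accumulating evidence warrants a closer look.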
The system uses a mixture of proprietary and commercial technology and is based on a “cognitive model” designed to mimic the human thought process — a key design feature influenced by Poindexter’s TIA system. RAHS itself doesn’t think. It’s a tool that helps human beings sift huge stores of data for clues on just about everything. It is designed to analyze information from practically any source — the input is almost incidental — and to create models that can be used to forecast potential events. Those scenarios can then be shared across the Singaporean government and be picked up by whatever ministry or department might find them useful. Using a repository of information called an ideas database, RAHS and its teams of analysts create “narratives” about how various threats or strategic opportunities might play out. The point is not so much to predict the future as to envision a number of potential futures that can tell the government what to watch and when to dig further.
The officials running RAHS today are tight-lipped about exactly what data they monitor, though they acknowledge that a significant portion of “articles” in their databases come from publicly available information, including news reports, blog posts, Facebook updates, and Twitter messages. (“These articles have been trawled in by robots or uploaded manually” by analysts, says one program document.) But RAHS doesn’t need to rely only on open-source material or even the sorts of intelligence that most governments routinely collect: In Singapore, electronic surveillance of residents and visitors is pervasive and widely accepted.
Surveillance starts in the home, where all Internet traffic in Singapore is filtered, a senior Defense Ministry official told me (commercial and business traffic is not screened, the official said). Traffic is monitored primarily for two sources of prohibited content: porn and racist invective. About 100 websites featuring sexual content are officially blocked. The list is a state secret, but it’s generally believed to include Playboy and Hustler magazine’s websites and others with sexually laden words in the title. (One Singaporean told me it’s easy to find porn — just look for the web addresses without any obviously sexual words in them.) All other sites, including foreign media, social networks, and blogs, are open to Singaporeans. But post a comment or an article that the law deems racially offensive or inflammatory, and the police may come to your door.
Singaporeans have been charged under the Sedition Act for making racist statements online, but officials are quick to point out that they don’t consider this censorship. Hateful speech threatens to tear the nation’s multiethnic social fabric and is therefore a national security threat, they say. After the 2012 arrest of two Chinese teenage boys, who police alleged had made racist comments on Facebook and Twitter about ethnic Malays, a senior police official explained to reporters: “The right to free speech does not extend to making remarks that incite racial and religious friction and conflict. The Internet may be a convenient medium to express one’s views, but members of the public should bear in mind that they are no less accountable for their actions online.”
Singaporean officials stress that citizens are free to criticize the government, and they do.
Commentary that impugns an individual’s character or motives, however, is off-limits because, like racial invective, it is seen as a threat to the nation’s delicate balance. Journalists, including foreign news organizations, have frequently been charged under the country’s strict libel laws.
Not only does the government keep a close eye on what its citizens write and say publicly, but it also has the legal authority to monitor all manner of electronic communications, including phone calls, under several domestic security laws aimed at preventing terrorism, prosecuting drug dealing, and blocking the printing of “undesirable” material.
The surveillance extends to visitors as well. Mobile-phone SIM cards are an easy way for tourists to make cheap calls and are available at nearly any store — as ubiquitous as chewing gum in the United States. (Incidentally, the Singaporean government banned commercial sales of gum because chewers were depositing their used wads on subway doors, among other places.) Criminals like disposable SIM cards because they can be hard to trace to an individual user. But to purchase a card in Singapore, a customer has to provide a passport number, which is linked to the card, meaning the phone company — and, presumably, by extension the government — has a record of every call made on a supposedly disposable, anonymous device.
Privacy International reported that Singaporeans who want to obtain an Internet account must also show identification — in the form of the national ID card that every citizen carries — and Internet service providers “reportedly provide, on a regular basis, information on users to government officials.” The Ministry of Home Affairs also has the authority to compel businesses in Singapore to hand over information about threats against their computer networks in order to defend the country’s computer systems from malicious software and hackers.
Perhaps no form of surveillance is as pervasive in Singapore as its network of security cameras, which police have installed in more than 150 “zones” across the country. Even though they adorn the corners of buildings, are fastened to elevator ceilings, and protrude from the walls of hotels, stores, and apartment lobbies, I had little sense of being surrounded by digital hawk eyes while walking around Singapore, any more than while surfing the web I could detect the digital filters of government speech-minders. Most Singaporeans I met hardly cared that they live in a surveillance bubble and were acutely aware that they’re not unique in some respects. “Don’t you have cameras everywhere in London and New York?” many of the people I talked to asked. (In fact, according to city officials, “London has one of the highest number of CCTV cameras of any city in the world.”) Singaporeans presumed that the cameras deterred criminals and accepted that in a densely populated country, there are simply things you shouldn’t say. “In Singapore, people generally feel that if you’re not a criminal or an opponent of the government, you don’t have anything to worry about,” one senior government official told me.
This year, the World Justice Project, a U.S.-based advocacy group that studies adherence to the rule of law, ranked Singapore as the world’s second-safest country. Prized by Singaporeans, this distinction has earned the country a reputation as one of the most stable places to do business in Asia. Interpol is also building a massive new center in Singapore to police cybercrime. It’s only the third major Interpol site outside Lyon, France, and Argentina, and it reflects both the international law enforcement group’s desire to crack down on cybercrime and its confidence that Singapore is the best place in Asia to lead that fight.
But it’s hard to know whether the low crime rates and adherence to the rule of law are more a result of pervasive surveillance or Singaporeans’ unspoken agreement that they mustn’t turn on one another, lest the tiny island come apart at the seams. If it’s the latter, then the Singapore experiment suggests that governments can install cameras on every block in their cities and mine every piece of online data and all that still wouldn’t be enough to dramatically curb crime, prevent terrorism, or halt an epidemic. A national unity of purpose, a sense that we all sink or swim together, has to be instilled in the population. So Singapore is using technology to do that too.
The provision of affordable, equitable housing is a fundamental promise that the government makes to its citizens, and keeping them happy in their neighborhoods has been deemed essential to national harmony. Eighty percent of Singapore’s citizens live in public housing — fashionable, multiroom apartments in high-rise buildings, some of which would sell for around U.S. $1 million on the open market. The government, which also owns about 80 percent of the city’s land, sells apartments at interest rates below 3 percent and allows buyers to repay their mortgages out of a forced retirement savings account, to which employers also make a contribution. The effect is that nearly all Singaporean citizens own their own home, and it doesn’t take much of a bite out of their income.
The Singapore Tourism Board used the methodology to examine trends about who will be visiting the country over the next decade. Officials have tried to forecast whether “alternative foods” derived from experiments and laboratories could reduce Singapore’s near-total dependence on food imports.
Singapore is now undertaking a multiyear initiative to study how people in lower-level service or manufacturing jobs could be replaced by automated systems like computers or robots, or be outsourced. Officials want to understand where the jobs of the future will come from so that they can retrain current workers and adjust education curricula. But turning lower-end jobs into more highly skilled ones — which native Singaporeans can do — is a step toward pushing lower-skilled immigrants out of the country.
Singaporeans speak, often reverently, of the “social contract” between the people and their government. They have consciously chosen to surrender certain civil liberties and individual freedoms in exchange for fundamental guarantees: security, education, affordable housing, health care.
One future study that examined “surveillance from below” concluded that the proliferation of smartphones and social media is turning the watched into the watchers. These new technologies “have empowered citizens to intensely scrutinise government elites, corporations and law enforcement officials … increasing their exposure to reputational risks,” the study found. From the angry citizen who takes a photo of a policeman sleeping in his car and posts it to Twitter to an opposition blogger who challenges party orthodoxy, Singapore’s leaders cannot escape the watch of their own citizens.
In this tiny laboratory of big-data mining, the experiment is yielding an unexpected result: The more time Singaporeans spend online, the more they read, the more they share their thoughts with each other and their government, the more they’ve come to realize that Singapore’s light-touch repression is not entirely normal among developed, democratic countries — and that their government is not infallible. To the extent that Singapore is a model for other countries to follow, it may tell them more about the limits of big data and that not every problem can be predicted.
I was strapped into Birdly, a full-body flight simulator designed to make you forget you’re not a bird. “Press the red buttons and pump your arms to start soaring again,” said Max Rheiner, the Swiss artist responsible for my in-flight experience this week at Swissnex. (Birdly flew here from its birthplace at the Zurich University of the Arts.) The reason that Rheiner could make this simulator now, and not 20 years ago when he first dreamed of helping humans feel like birds, is the arrival of the Oculus Rift. The Rift is the first virtual reality headset with two key features: It’s cheap, and it doesn’t make you want to vomit. Now that there’s a way to provide accurate head-tracking at low enough latency to prevent motion sickness, people who were raised on the promise of virtual reality are starting to experiment.
At first the designers of Birdly took the dream a bit too literally and used a physics engine to model airflow around virtual wings. But it turns out to be hard for humans to fly like an actual bird, learning to flap their wings at the right angle and catch thermals to spiral up. To simulate the effortlessness of dream flight, Rheiner made the interface more metaphorical and intuitive. By twisting your arm you control the pitch of the wing: Tip up to soar higher, and tip down to dive. Catch the air with one hand to bank. To climb faster, you can vigorously pump both wings. Pistons provide realistic resistance, and a fan is calibrated to make the windspeed match your virtual velocity.
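Birdly's control code isn't public, but the mapping Rheiner describes — wrist twist controls pitch, pumping both wings climbs, a fan matched to airspeed — can be sketched roughly like this (all constants and function names are invented):

```python
# Illustrative sketch of a Birdly-style control mapping. The constants
# and names are invented; the real simulator's code is not public.

def flight_update(hand_tilt_deg, flap_rate_hz, speed_mps, dt):
    """Map arm inputs to a new airspeed and climb rate, plus the fan
    output that makes the felt wind match virtual velocity.
    hand_tilt_deg: wrist twist; positive tips the wing up, negative dives.
    flap_rate_hz:  how vigorously both wings are being pumped.
    """
    climb = 0.2 * hand_tilt_deg + 1.5 * flap_rate_hz  # m/s: tip up or pump to rise
    accel = -0.1 * hand_tilt_deg                      # diving trades height for speed
    speed = max(0.0, speed_mps + accel * dt)
    fan_pwm = min(1.0, speed / 30.0)                  # fan duty cycle, capped at full
    return speed, climb, fan_pwm
```

The design choice Rheiner made is visible even in a sketch this crude: nothing here models airflow over a wing; the inputs map directly onto the sensations of dream flight.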
It’s admittedly a bit awkward to climb onto Birdly. You bend over a padded frame, strap on a tight headset and headphones, then hook your hands into wooden wings. But then the screen flips on and you find yourself floating above the city, watching your bird-shadow drifting across the rooftops. If you crane your neck, you can see your brown feathers ruffling in the breeze. After a few seconds, flying feels natural.
The experience will soon include smells. Rheiner, working with a Dutch fragrance designer, built a rig to emit little puffs of scented alcohol as you fly. But a realistic cityscape has to include hot asphalt and car exhaust, and it’s tricky to deliver a whiff of those that won’t knock you out of the sky.
While Rheiner’s team was aiming for art, there might also be a future in travel and fitness. Imagine spending an afternoon flying through the Grand Canyon—a beautiful trip, and if you need to flap your wings the whole time to stay aloft, a serious workout. Before a walk to your basement can replace a helicopter ride in Hawaii, though, someone would need to build a compendium of detailed 3-D maps. Promisingly, Google Earth sent 20 people to visit Birdly last week.
An interface like Birdly’s could someday be used to fly a real drone in real time, says Rheiner. You could fly wherever you want and see what’s happening there right now, no mapping necessary.
A stereoscopic view of Coit Tower, as rendered on the screen of an Oculus Rift. Internal lenses correct the distortion to produce a 3-D experience.
Students at Deep Springs College in the California desert, near the Nevada border, where education involves ranching, farming, and self-governance in addition to academics – Jodi Cobb/National Geographic/Getty Images
The financial crush has come just when colleges are starting to think of Internet learning as a substitute for the classroom. And the coincidence has engendered a new variant of the reflection theory. We are living (the digital entrepreneurs and their handlers like to say) in a technological society, or a society in which new technology is rapidly altering people’s ways of thinking, believing, behaving, and learning. It follows that education itself ought to reflect the change. Mastery of computer technology is the major competence schools should be asked to impart. But what if you can get the skills more cheaply without the help of a school?
A troubled awareness of this possibility has prompted universities, in their brochures, bulletins, and advertisements, to heighten the one clear advantage that they maintain over the Internet. Universities are physical places; and physical existence is still felt to be preferable in some ways to virtual existence. Schools have been driven to present as assets, in a way they never did before, nonacademic programs and facilities that provide students with the “quality of life” that makes a college worth the outlay. Auburn University in Alabama recently spent $72 million on a Recreation and Wellness Center. Stanford built Escondido Village Highrise Apartments. Must a college that wants to compete now have a student union with a food court and plasma screens in every room?
The model seems to be the elite club—in this instance, a club whose leading function is to house in comfort thousands of young people while they complete some serious educational tasks and form connections that may help them in later life.
A hidden danger both of intramural systems and of public forums like “Rate My Professors” is that they discourage eccentricity. Samuel Johnson defined a classic of literature as a work that has pleased many and pleased long. Evaluations may foster courses that please many and please fast.
At the utopian edge of the technocratic faith, a rising digital remedy for higher education goes by the acronym MOOCs (massive open online courses). The MOOC movement is represented in Ivory Tower by the Silicon Valley outfit Udacity. “Does it really make sense,” asks a Udacity adept, “to have five hundred professors in five hundred different universities each teach students in a similar way?” What you really want, he thinks, is the academic equivalent of a “rock star” to project knowledge onto the screens and into the brains of students without the impediment of fellow students or a teacher’s intrusive presence in the room. “Maybe,” he adds, “that rock star could do a little bit better job” than the nameless small-time academics whose fame and luster the video lecturer will rightly displace.
That the academic star will do a better job of teaching than the local pedagogue who exactly resembles 499 others of his kind—this, in itself, is an interesting assumption at Udacity and a revealing one. Why suppose that five hundred teachers of, say, the English novel from Defoe to Joyce will all tend to teach the materials in the same way, while the MOOC lecturer will stand out because he teaches the most advanced version of the same way? Here, as in other aspects of the movement, under all the talk of variety there lurks a passion for uniformity.
The pillars of education at Deep Springs are self-governance, academics, and physical labor. The students number scarcely more than the scholar-hackers on Thiel Fellowships—a total of twenty-six—but they are responsible for all the duties of ranching and farming on the campus in Big Pine, California, along with helping to set the curriculum and keep their quarters. Two minutes of a Deep Springs seminar on citizen and state in the philosophy of Hegel give a more vivid impression of what college education can be than all the comments by college administrators in the rest of Ivory Tower.
Teaching at a university, he says, involves a commitment to the preservation of “cultural memory”; it is therefore in some sense “an effort to cheat death.”
Here, I want to consider a German-born artist based in France whose paintings are the most Ballardian I have ever seen. So far as I am aware, Peter Klasen has never previously been discussed in relation to Ballard or his writing. There are good reasons for supposing that Ballard was unaware of Klasen’s work, and I have found no evidence to suggest that the artist was aware of Ballard, though it remains a possibility. The remarkable overlap in their thinking and practice at a critical moment in the 1960s is a matter of synchronicity, not influence.
Ballard’s impact on the art world has been a subject of growing interest, which was given an additional spur by his death in 2009. His readily acknowledged debt to Surrealism is already well covered and critical attention has recently moved to his friendship with the artist Eduardo Paolozzi.
The Gagosian Gallery in London mounted the exhibition “Crash: Homage to J.G. Ballard.” This included artists Ballard is known to have admired — Dalí, De Chirico, Paul Delvaux, Edward Hopper, Ed Ruscha, Francis Bacon, Eduardo Paolozzi, Tacita Dean — as well as artists felt by the curators to share concerns with the writer, including Richard Prince, Jeff Koons, Cindy Sherman, Jake and Dinos Chapman, Douglas Gordon and Damien Hirst. (See the lavish catalogue designed by Graphic Thought Facility.)
The three paintings shown here are typical of Klasen’s work in the mid-1960s. All of these images utilize a combinatorial system derived from modernist montage of the 1920s. Occasionally Klasen glues images and small objects to the canvas, but just as often he paints the entire “montage” as a seamless unit. The component images are shattered into fragments and here Klasen differs from an American Pop artist such as James Rosenquist whose image quotations are more complete, continuous and celebratory. The resemblance to Richard Hamilton, whose painterly probes of popular culture also fused image-sections into new aesthetic configurations, comes in the way Klasen deploys these fragments across the picture plane, allowing zones of unoccupied space to open up between them. Although traditional commercial Pop iconography sometimes appears (a hotdog, a bowl of food, a lipstick), Klasen’s overriding concern is the equivalence between female body parts drawn from advertising and glamour pictures — lips, eyes, breasts, elbow — and the manufactured or mechanical elements, which include taps, valves, plugs, handles, switches, syringes, steering wheels and car windows. He presents both types of image on equal terms within the painting’s symbiotically organized structure. Several of the same image fragments recur from picture to picture and Klasen’s color-drained image-world becomes a semiotic pressure chamber in which new forms of control (and desire?) subordinate the erotic presence of the female subjects.
In an interview in 2008, Klasen recalled the influence during these years of Jean-Luc Godard’s approach to film-collage, his essayistic abstractions, disruptive inter-titles and anti-cinematic moments of rupture. A graphic montage using sources also favored by Klasen can be seen in a poster from 1966 for Godard’s Two or Three Things I Know About Her about the life of a prostitute in Paris. If Klasen’s pictures are still “sexy” to us, despite their coldness and extreme, disassociating fragmentation, then it’s a violently ultra-modern kind of sexiness.
Now consider this passage from a chapter titled “Notes Towards a Mental Breakdown” in The Atrocity Exhibition (first published with the title “The Death Module” in New Worlds no. 173, July 1967):
Operating Formulae. Gesturing Catherine Austin into the chair beside his desk, Dr Nathan studied the elegant and mysterious advertisements which had appeared that afternoon in copies of Vogue and Paris-Match. In sequence they advertised: (1) The left orbit and zygomatic arch of Marina Oswald. (2) The angle between two walls. (3) A “neural interval”— a balcony unit on the twenty-seventh floor of the Hilton Hotel, London. (4) A pause in an unreported conversation outside an exhibition of photographs of automobile accidents. (5) The time, 11:47 a.m., June 23rd, 1975. (6) A gesture — a supine forearm extended across a candlewick bedspread. (7) A moment of recognition — a young woman’s buccal pout and dilated eyes.
This is one of Ballard’s celebrated image lists found throughout The Atrocity Exhibition. The items that comprise the “operating formulae” can be seen as a miniature exhibition list, as an extreme form of conceptual montage, and as a forced marriage of apparently unrelated images (a classic Surrealist stratagem), which replicates the scrambled structure of the narratives within each chapter, and the way these non-linear chapters ultimately cohere as a work. At the same time, it would be possible to use Ballard’s image kit as a set of instructions to assemble a montage on paper that might then resemble a painting by Klasen (zygomatic arch, angle between walls, balcony unit, accident photos, forearm, dilated eyes, etc.). What both Ballard and Klasen share at this point in the mid-1960s is a cold, appraising, analytical eye. It’s impossible to tell how they feel about what they show, or to know what they want us to feel, if anything at all. Their findings are disturbing and perhaps even repellent from a humanist perspective, yet the new aesthetic forms they use to embody them are, even today, exciting, provocative and tantalizingly difficult to resolve.
Ballard’s experiments with condensed collage-novels in the late 1950s have received increasing attention and they were shown at the Gagosian Gallery; the “Advertiser’s Announcements” he presented in Ambit from the summer of 1967 appear in the catalogue. A few months earlier, in New Worlds no. 167 (October 1966), Ballard published a series of comments on his new experimental texts, under the title “Notes from Nowhere.” He considers the intersection of three kinds of plane: the world of public events, the immediate personal environment, and the inner world of the psyche. “Where these planes intersect,” he writes, “images are born.” In Ballard’s attempt to locate himself, by calling on “the geometry of my own postures, the time-values contained in this room, the motion-space of highways, staircases, the angles between these walls,” the intersection of planes again suggests Klasen’s surgically precise combinatorial technique. Ballard goes on to propose that it might one day be possible “to represent a novel or short story, with all its images and relationships, simply as a three-dimensional geometric model.” Then, just a few lines later, in a curious unedited moment that seems to express his ambivalence, he says that he is worried that a work of fiction could become “nothing more than a three-dimensional geometric model.”
By the early 1970s, Klasen had severely reduced the number of image fragments and the agitated visual complexity seen in his earlier montages. In a development that actualizes Ballard’s conception of a new kind of three-dimensional fiction, Klasen’s constructions, while still wall-mounted, become fully three-dimensional with projecting pipes and bathroom fittings. The unrelenting hygienic cruelty of this work, its absolute concentration on a few fetishistic motifs to the exclusion of everything else — breasts and basin, waist and switches, lips and bidet — bears comparison with the strange mental journey Ballard would undertake as he worked on Crash, the ultimate statement of his ideas about the sexualization of our relationship with technology. “Nothing is spontaneous, everything is stylized, including human behaviour,” he said in 1970, in an interview with Lynn Barber in Penthouse. “And once you move into this area where everything is stylized, including sexuality, you’re leaving behind any kind of moral or functional relevance.” Also in 1970, in a brief manifesto, reprinted in his latest monograph, Klasen set out his aims:
Play on the dialectic of a photographic reproduction and its pictorial transposition.
Play on the magical and poetic power of an object out of place.
Respond to the aggression of society with another aggression.
Show that beauty is everywhere, in a bathroom, for example.
Demonstrate that a bidet, a washbasin, a switch can exercise the same fascination on the spectator as the mouth, the body of a woman or a racing car.
Return these images and objects to the spectator-consumer, allowing him to react to these object-tableaux and to project his own fantasies onto them.
Stimulate his awareness by providing him with aesthetic and ideological information about himself and the world that surrounds him.
“Respond to the aggression of society with another aggression”: this is exactly what Ballard had done in The Atrocity Exhibition, responding to what he called the “death of affect” — of ordinary emotional responses to events — by playing it out within the glinting, recursive, multi-planar architecture of his book, returning society’s images to the “spectator-consumer,” with their inherent characteristics pulled to the surface and intensified, as a morally ambiguous invitation to know oneself better. Ballard, too, had found a perverse kind of beauty in this material, which is one reason why his writing of this period continues to exert its extraordinary hold on readers.
The overlapping concerns of Ballard and Klasen in the mid- to late 1960s represent one of the great might-have-beens of contemporary art and literature, but a belated union is still possible. It’s hard to imagine better images than Klasen’s, ready-made or otherwise, for the covers of future editions of The Atrocity Exhibition and Crash. It’s strange that the French, great admirers of both these books, haven’t cracked this one already.