Tag Archives: space

An Idea Whose Time Has Come – Metropolis Magazine – June 2013


Billy Wilder’s The Apartment


Living Office concept by Herman Miller

In Billy Wilder’s 1960 comedy The Apartment, an anatomization of sex and power in the white-collar workplace that anticipated Mad Men by half a century, the great director offered a brutally funny, spot-on portrait of the postwar office, depicting the fictitious Consolidated Life of New York as a cornfield-size, perfectly rectilinear grid of anonymous, identical desks. How long ago and far away that seems. Though in places the old model still prevails, today’s ideal office paradigm could not be more different: fluid rather than fixed, less hierarchical and more egalitarian, and encouraging (mostly) of individuality, creativity, and choice.

A new story requires a new stage, and into this brave new world comes Herman Miller’s Living Office, the initial components of which the Zeeland, Michigan, furniture company is introducing at this year’s edition of NeoCon. The first wave of an anticipated two-year rollout, the Living Office’s first three product portfolios—called PUBLIC Office Landscape, Metaform Portfolio, and Locale, and designed, respectively, by fuseproject, Studio 7.5, and Industrial Facility—represent the company’s carefully considered response, not only to the ways in which a changed business culture has transformed workplace design, but to where our personal aspirations may be headed, and how the office can support them.

It’s a resolutely forward-looking vision. Yet this emphasis on what the company calls “human-centered problem-solving” has been the hallmark of Herman Miller since 1930, when Gilbert Rohde, its first design director, famously declared, “The most important thing in the room is not the furniture—it’s the people.”

In fact, the past is prologue to the Living Office in a central way—specifically, a slender, significant book, published in 1968, called The Office: A Facility Based on Change, by Robert Propst, at the time the company’s head of research. Under George Nelson, the second design director, Herman Miller had produced many of postwar America’s most iconic objects, by the likes of Charles and Ray Eames, Isamu Noguchi, and others, including Nelson himself. But by the late 1950s, the residential and commercial businesses had plateaued, and the company’s out-of-the-box-thinking president D.J. DePree began casting about for untapped revenue streams. DePree discovered Propst at the 1958 Aspen Design Conference, and was immediately taken with the artist/teacher/inventor. “Propst was truly brilliant, an innovative thinker,” explains Mark Schurman, Herman Miller’s corporate communications director. “D.J. figured, ‘We’ll set him up with a research division, and he’ll find new opportunities.’ One of his first directives was, ‘Anything but furniture.’”

Despite the company’s mandate, Propst became increasingly absorbed by the idea of reinventing the office, an interest that dovetailed with Nelson’s, who as early as 1948 had talked about the ideal working environment being a “daytime living room” that would be welcoming and humane. Propst, too, concerned himself with the human factor—specifically how flexible floor plans and porous, intercommunicating spaces might empower both the individual and the organization.


Action Office II’s 12 “principles of operation” encouraged a workplace in which “the individual can participate in goal setting and thus behave like a manager at any level.” Propst’s environment remained “responsive to the goals of the user,” changed gracefully and with minimal disruption, and enabled rapid replanning. It also thrived on contrast: between neatness and chaos, sitting and standing, solitude and collaboration, privacy and community, and, critically, “geometry versus humanism”—that is, a traditional, grid-based floor plan versus a more organic layout.


Alas—and despite Propst’s injunction against the “four-sided enclosure”—by the late 1970s, the dominant application of the Action Office (and its multiple imitations) had become that most despised of office conditions: the cubicle. Propst, who died in 2000, had sought to liberate humankind from the grid, but his invention wound up locking the worker even more tightly into it.

Yet good ideas die hard, and the Living Office—which expresses Propst’s vision in a new-century way—suggests that, 45 years on, it’s an idea whose time has come. For one, when the Action Office appeared, the world depicted in Wilder’s film had its roots in the blue-collar assembly line, an essentially Victorian model. “There was a small group of people who made decisions, and a whole lot of people lined up executing,” says Greg Parsons, Herman Miller’s vice president of New Work Landscape. Today, Parsons points out, “the office is a facility based on creativity, and we need an organizational structure that reflects that.” As well, the anchoring effects of technology, which worsened in the 1980s and 1990s as ever more devices appeared, have been swept away in our wireless world. Both philosophically and physically, the office is far more flexibility-friendly than it was a half-century ago.

No less important is what might be called the Marissa Mayer Effect. Though the Yahoo! CEO’s ban on work-from-home may have been poorly handled, according to Gary Smith, director of design facilitation and exploration at Herman Miller, her point was powerful. “We’re talking about a shift of emphasis, away from housing and technology, capabilities that could exist only in the office,” Smith explains. “Now there’s a different thing that can exist only in the office, and that’s my access to you. I want to tap your potential, because what humans do best is connect and communicate”—something the Living Office is meant to encourage, by creating a multiplicity of differently scaled settings and making the connections between them more logical, adjustable, and fluid.

In keeping with its people-first philosophy, the company focused its predesign research on gathering insight, not information. “Research will expose the manifest behavior of a population, but it won’t reveal innovation,” observes Smith. Instead, Parsons says, “We asked, ‘What’s going on in the world? What’s fundamental about all human beings, and what do they really want to do?’” Toward this end, Herman Miller engaged in a process that Maryln Walton, of the insight and exploration group, describes as “informed dreaming.” Since 2001, the company has completed three rounds of scenarios, in which it looks five years ahead at potential futures; these enable the company to think about how the world might change, and adjust its product development and business strategies accordingly. The brainstorming process begins with a dozen people from different parts of the organization, followed by a two-day “expert workshop” with six individuals representing multiple disciplines—the most recent, which looked ahead to 2018, included two cultural anthropologists, a specialist in Asian HR policies, and a political science professor—to challenge the in-house assumptions.

The team then takes what it’s learned and imagines (and reimagines) the future until it arrives at three possible scenarios. For 2018, these include Datasphere, which looks at how the digital information generated by individuals worldwide can be innovatively repurposed; New Normal, a consideration of potential push-back against organizations, institutions, and governments; and Polarized World, in which the U.S. and China emerge as the two great economic powers. “We ran workshops with groups of people thinking about each scenario,” Walton says. “Then we spent a lot of time synthesizing the results, and developed what we believe are likely workplace realities in 2018.”

These realities—called propositions—are the gold nuggets sieved from the sand of the scenarios. “We don’t think any one of the three stories will come true,” says Walton. “But the eight propositions are things that we really believe.”


PUBLIC Office Landscape
Yves Behar & fuseproject

We found this statistic: 70 percent of collaboration happens at the workstation. This hit me like lightning, and I wrote on the project wall: “THE MAJORITY OF COLLABORATION HAPPENS AT THE DESK, YET DESKS HAVE NEVER BEEN DESIGNED FOR INTERACTION.” Our approach became to think of every place in the office, including one’s individual desk, as a place for collaboration. We came up with the notion of Social Desking.


We believe collaboration doesn’t just happen in conference rooms—it happens everywhere. PUBLIC Office Landscape supports fluid interactions and spontaneous conversations. The seating elements flow into desk surfaces, the fabric elements flow cleanly into hard surfaces. The result is a visual connection that encourages new functionality and casual postures.


“We’re trying to create Living Office products that function in group and community as well as individual zones,” Katie Lane, Herman Miller’s director of product development, tells me as we tour the cheerfully cluttered, bustling obeya space, the company’s fancy name (obeya is Japanese for “big room”) for the R&D skunkworks in its Design Yard, one of several facilities scattered around Zeeland. PUBLIC Office Landscape, the first system Lane showed me, supports areas in which two to six people typically cluster, and is designed specifically “for knowledge transfer and cocreation to occur,” she says. The heart of PUBLIC is the Social Chair, which supports the casual nature of the contemporary workplace by elevating the ergonomic levels of what looks at a glance like hip lawn furniture. Equally suited to perching, slouching, or sitting on the armrests, the Social Chair, which can be easily pulled up to a desk or arranged in clusters, invites the quick chat or collaborative bull session, and supports what fuseproject principal Yves Behar (noting that “70 percent of short meetings happen at a person’s desk”) calls “collaborative density.” PUBLIC Office Landscape also speaks to one of the most compelling of the 2018 propositions: Swarm-Focused Work, in which—like bees—groups of individuals quickly zoom together to one spot to accomplish tasks.

Metaform Portfolio
Studio 7.5

Our approach was based on our observations in American offices: We saw a shift from individual to collaborative work patterns, and we saw walls being lowered to 42 inches to introduce natural light to the floor plan. We observed that a huge amount of content, and the transactions associated with work, had moved to the digital realm, leaving drawers and cabinets empty. We were looking for an environment to support the creative class.


Metaform Portfolio addresses a proposition called Hackable and Kinetic Nodes, a vision of the workplace as a campsite that can be arranged opportunistically and moved when necessary. The design challenge, according to Studio 7.5’s Carola Zwick, involved achieving “an architectural quality that can still be transformed by the inhabitants, since traditional planning cycles miss the needs and dynamics of today’s knowledge workers.” Accordingly, Metaform’s core element is a tiered block of polypropylene, weighing about 18 pounds, which can be combined with identical units to create a semi-enclosed space. The arrangement Lane shows me is formed into a half-circle, with squiggly shelves called Centipedes cantilevered off the tiers, and magazines and work displays tucked into the narrow spaces between them. An adjustable-height table, large enough for small-group collaboration, bisects the half-circle. Vertical versions of the shelving—called Vertipedes—are connected to the top tier and provide light visual screening.

Locale
Industrial Facility

In our office, we all travel from our own neighborhoods to a place where we can collaborate in person, so we thought: Why not design an office landscape that behaves like a good neighborhood? In our first thoughts we talked a lot about how social networks behave. Locale is a physical version of how social networks function; the most relevant participants are kept close so that communication is easy, fast, and frequent.

Locale works like a small high street where everything you need is clustered together. The architect or specifier can build small clusters out of different functional modules to form what we call a Workbase, so that the disparate functions of the office reside comfortably together. The library, social setting, working desk, and meeting table are all formed into an architectonic line.

In Sam Hecht and Kim Colin’s Locale, “individual work areas mix with group and collaborative elements to give a high-performance team everything it needs within a neighborhood on the floorplate,” Lane explains, leading me into a zone shaped by standing-height screens and storage/shelving units incorporating sliding easels, with a low circular coffee table, a stand-alone refreshment center, and a row of curved adjustable-height desks. Locale grew out of what Hecht calls an “autobiographical approach” to design, wherein he and Colin thought about how unnatural it felt to have an impromptu get-together in their own office. “You’re sitting, they’re standing, it’s not very productive,” he explains. “We wanted to create a system in which people would collaborate very naturally—every table can be a meeting table.”


Greg Parsons recalls, “We came up with ten modes of work that are repeated in virtually every organization”—including “administer,” “contemplate,” “create,” “quick chat,” “converse,” “warm up/cool down,” and “gather and build”—“and tied them to the kinds of settings we can create.”

Once an organization’s programmatic needs are understood, and what the mix of work modes might be, Gee’s group develops study plans that suggest how an office’s square footage can be best apportioned. The ones she showed me resemble urban site plans, which seems appropriate: A well-functioning business environment, after all, is akin to a neighborhood, different parts of which cater to varying needs and interactions. “Our team uses a lot of urban planning metaphors when we talk about this,” Gee says. “Because getting the settings right is just part of the equation. That would be like getting one building right in a whole city.”


Corpography in No-Man’s Land | Re-inhabiting No-Man’s Land


From its first entrance into the English language, when it designated a mass burial site for 14th-century victims of the Black Death, the no-man’s land has exhibited an often violent encounter between bodies and the materiality of the earth. So violent, at times, that a distinction between the two is no longer possible.

In his 1922 essay The Battle as Inner Experience, Ernst Jünger describes how the Fronterlebnis – life on the edges of no-man’s land – dissolves the boundary between body and space, transforming the soldier into an integral part of a frontline ecology: “There, the individual is like a raging storm, the tossing sea and the rearing thunder. He has melted into everything.”

The experience Jünger describes is not just a traumatic subjection of the body to mechanised war, but, as Jeffrey Herf notes, an almost erotic rebirth and transfiguration of men into a new, improved community of the trenches that will lead to the creation of “new forms filled with blood and power [that] will be packed with a hard fist”. Rather than resort to nostalgia for a pastoral pre-industrialised era, in the no-man’s land Jünger discovers a landscape where body, machine and soil are fused to form “magnificent and merciless spectacles”.


In Svetlana Alexievich’s remarkable book of testimonies from Chernobyl, the wife of one of the firemen who was exposed to extreme levels of radiation described the bio-chamber in which he was placed during his hospitalization in Moscow, and the extensive quarantine measures that isolated the man from the medical staff. To complete his dehumanisation, one nurse referred to the dying man as “a radioactive object with a strong density of poisoning.[…] That’s not a person anymore, that’s a nuclear reactor”. The radical unmaking of the human body, to the extent that it is no longer distinguishable from the original space of disaster, echoes the violent dissolution of distinctions between body and space that constituted the disastrous corpographies of WWI.

Exploring No-Man’s-Land in the 21st Century — War is Boring — Medium


Barrier walls in the Palestinian territories in 2004. Lisa Nessan/Flickr photo

Following the end of World War I, Europe’s intellectuals tried to understand and explain what everyone had just been through. They also tried to grapple with the reality of industrialized warfare and the no-man’s-lands it created.

Blasted, blown up and raked by machine-gun fire, the no-man’s-land was a place that people couldn’t go without risking death.

Some thinkers on the political left saw no-man’s-land as symbolic of the destruction of Europe’s dying, traditional political order. However, intellectuals on the right saw the battlefield as a place where young men could be reborn into the fascist shock troops of Weimar Germany.

The fixed trenches of World War I are long gone. But the no-man’s-land never really went away, according to Noam Leshem, a political geographer at Durham University in England who studies modern no-man’s-lands.

From Cyprus, Western Sahara and the Palestinian territories to the Korean peninsula, no-man’s-lands are now tourist attractions, environmental preserves and places to make money.

Leshem’s work is available at Re-Inhabiting No-Man’s Land, a collection of writing and research on modern dead zones.


Our concern began with the obvious no-man’s-land of the First World War, but Alasdair reminded me the term was constantly being circulated in reference to very different sites.

So anything from other geopolitical areas, like the demilitarized zone between the Koreas and the Chernobyl Exclusion Zone, to urban geopolitical no-man’s-lands like the one that divided Jerusalem until 1967.

But even beyond the geopolitical vocabulary, what we saw was that no-man’s land entered our lingo to refer to anything from gangland in the heart of North American cities to tax havens in the Caribbean.

When we started looking into this, one of our key goals was to try and understand the history of the term, and to our surprise the term is much older than the trenches of the First World War. It dates back to the 14th century and to London during the months preceding the plague, when the bishop of London bought a lot of land outside the city to prepare a mass grave ahead of the bubonic plague.

We found that relationship between a space and death to be one of the key characteristics of no-man’s-land throughout its history. And what we’re trying to do today is two things: first, to continue to understand the history of the term beyond its Anglo-Saxon origins, and second, to ask what no-man’s-lands mean in the 21st century.

RB: We often think of no-man’s-land as a sort of desolate environment. But in the Cyprus buffer zone there’s actually a lot going on.

NL: Absolutely. Cyprus is a great example. As you know, there’s a lot of economic activity. There’s a lot of farming going on in the designated U.N. buffer zone, but you also get newly constructed industrial zones that are rezoned by the U.N. for civilian use.

So what you get are sub-civilian spaces within the militarized space of the buffer zone designated for economic activity.

In addition you get a lot of smuggling of drugs and people across the no-man’s-land. And I would add to that: tourism. The buffer zone in Cyprus has become one of the key tourist attractions on the island. Beaches, good food, and you get some buffer-zone watchers.

So absolutely this is a very significant space economically and a space that is constantly inhabited, governed, monitored and practiced.

There are things happening in it that make it a significant space, rather than just an empty no-go zone.

RB: There are also environmental features to these spaces. The demilitarized zone between the Koreas is a famous wildlife sanctuary.

NL: Here’s a funny anecdote from when we were in Cyprus a few weeks ago. One of our interviewees told us that Cypriots absolutely love hunting, and although most of the wildlife on the island is all but extinct, he said if you want to find snakes, go to the buffer zone. If you want to find wildlife, go to the buffer zone.

That’s the only place where animals have survived because hunting is not allowed there.

As you pointed out, the demilitarized zone between the Koreas is a very important Asian wildlife sanctuary. Chernobyl is famous for the resuscitation of natural habitats as a result of the withdrawal of human activity. The herds of wild horses that roam Chernobyl these days have become almost as famous as reactor number four.

However, there’s again an interesting history, because in 19th-century notebooks of expeditions in North America we find repeated references to the no-man’s-land as a space between two warring tribes where game finds refuge.

So that association between sanctuary and no-man’s-land was made long before the demilitarized zone between the Koreas was designated a sanctuary, or a wildlife sanctuary was inadvertently created at Chernobyl.

There’s a fantastic film about the huge community of bunnies that found refuge in the no-man’s-land between the two sides of the Berlin Wall.

It’s a really important issue. It sheds light on the interests that preserve these spaces. I think it’s not just about preserving these spaces for the future, but about the sense that they are still a part of human concern.

RB: You had a recent post on your blog about [German war veteran and writer] Ernst Juenger. What were you trying to do there?

NL: Juenger was one of the most important thinkers who repeatedly returned, in his writing and thinking, to the no-man’s-land. The no-man’s-land for Juenger—contrary to the traditional definition of it as a desolate no-go zone—is a very productive space.

The no-man’s-land is a space from which a new man emerges, a man that has fused with machine and with earth to create this new—almost cyborg—creature that has bettered himself to such an extent that he is a new kind of being.

Not only is this happening on an individual level, but also on a social level. He talks about there being a “community of the trenches.”

But it’s important to remember that Juenger was part of a very specific intellectual group traditionally positioned on the right in Weimar Germany that celebrated the no-man’s-land, that romanticized it. On the other side, still in Weimar Germany, we see people like Walter Benjamin.

Benjamin was exempt from military service in the First World War, but he constantly returns to the no-man’s-land as a space where a philosophical crisis happens. Benjamin repeatedly asks, what’s the meaning of this space of destruction?


In the Second World War, that destruction is transplanted from the trenches to the enclosed space of the gas chamber, or delivered remotely through aerial bombardment. And what we have here is a change in status: the no-man’s-land is no longer applied to concrete spaces of warfare and death.

The U.N. buffer zone in Cyprus in December 2012. Athena Lao/Flickr photo

A Japanese Artist Launches Plants Into Space


“Flowers aren’t just beautiful to show on tables,” said Azuma Makoto, a 38-year-old artist based in Tokyo. His latest installation piece, if you could call it that, takes this statement to the extreme. Two botanical objects — “Shiki 1,” a Japanese white pine bonsai suspended from a metal frame, and an untitled arrangement of orchids, hydrangeas, lilies and irises, among other blossoms — were launched into the stratosphere on Tuesday in Black Rock Desert outside Gerlach, Nevada, a site made famous as the host of the annual Burning Man festival. “I wanted to see the movement and beauty of plants and flowers suspended in space,” Makoto explained that morning.


“The best thing about this project is that space is so foreign to most of us,” says Powell, “so seeing a familiar object like a bouquet of flowers flying above Earth domesticates space, and the idea of traveling into it.”


He started with an aerial plant tied to a six-rod axis and studiously added peace lilies, poppy seed pods, dahlias, hydrangeas, orchids, bromeliads and a meaty burgundy heliconia. “I am using brightly colored flowers from around the world so that they contrast against the darkness of space,” he said. The scent of the flowers was stronger and more concentrated in the dry desert breeze than in their humid, natural environments, and the launch site was redolent with their perfume. Makoto worked quietly, until the metal rods were covered completely with plants. Then he directed his attention to his bonsai. For this particular project, Makoto chose a 50-year-old pine from his collection of more than 100 specimens, and flew it over from Tokyo in a special box. While readying it for space, he kept it moist and removed a few brown needles with tweezers.


Using Styrofoam and a very light metal frame, Powell and his volunteers had created two devices to carry the bonsai and the flowers, which would launch separately. JP’s volunteers and Makoto’s team worked to calibrate still cameras, donated by Fujifilm for this project, and six GoPro video cameras tied in a ball that would record the trip into the stratosphere and back in 360 degrees. There were two tracking systems on each device: a Spot GPS tracker that would help locate the vessel once it fell back to Earth, and another that recorded altitude and distance traveled from the launch site. A radio transmitted the data to a computer array in a van. While the crew waited, Makoto took a red carnation, drilled a hole in a crack of the arid, sandy soil and planted it there. It was his nod to the huge red sun that had started to come up.


Away 101 went to 91,800 feet, traveling up for 100 minutes until the helium balloon burst. It fell for 40 minutes; two parachutes in baskets opened automatically when there was enough air in the atmosphere to soften impact. Away 100, which held the arrangement, made it up to 87,000 feet. Both devices were retrieved about five miles from the launch site. The bonsai and flowers, though, were never found.


All You Have Eaten: On Keeping a Perfect Record | Longreads



Over the course of his or her lifetime, the average person will eat 60,000 pounds of food, the weight of six elephants.

The average American will drink over 3,000 gallons of soda. He will eat about 28 pigs, 2,000 chickens, 5,070 apples, and 2,340 pounds of lettuce. How much of that will he remember, and for how long, and how well?


The human memory is famously faulty; the brain remains mostly a mystery. We know that comfort foods make the pleasure centers in our brains light up the way drugs do. We know, because of a study conducted by Northwestern University and published in the Journal of Neuroscience, that by recalling a moment, you’re altering it slightly, like a mental game of Telephone—the more you conjure a memory, the less accurate it will be down the line. Scientists have implanted false memories in mice and grown memories in pieces of brain in test tubes. But we haven’t made many noteworthy strides in the thing that seems most relevant: how not to forget.

Unless committed to memory or written down, what we eat vanishes as soon as it’s consumed. That’s the point, after all. But because the famous diarist Samuel Pepys wrote, in his first entry, “Dined at home in the garret, where my wife dressed the remains of a turkey, and in the doing of it she burned her hand,” we know that Samuel Pepys, in the 1600s, ate turkey. We know that, hundreds of years ago, Samuel Pepys’s wife burned her hand. We know, because Anne Frank wrote it in her diary, that she at one point ate fried potatoes for breakfast. She once ate porridge and “a hash made from kale that came out of the barrel.”

For breakfast on January 2, 2008, I ate oatmeal with pumpkin seeds and brown sugar and drank a cup of green tea.

I know because it’s the first entry in a food log I still keep today. I began it as an experiment in food as a mnemonic device. The idea was this: I’d write something objective every day that would cue my memories into the future—they’d serve as compasses by which to remember moments.

Andy Warhol kept what he called a “smell collection,” switching perfumes every three months so he could reminisce more lucidly on those months whenever he smelled that period’s particular scent. Food, I figured, took this even further. It involves multiple senses, and that’s why memories that surround food can come on so strong.

What I’d like to have is a perfect record of every day. I’ve long been obsessed with this impossibility, that every day be perfectly productive and perfectly remembered. What I remember from January 2, 2008 is that after eating the oatmeal I went to the post office, where an old woman was arguing with a postal worker about postage—she thought what she’d affixed to her envelope was enough and he didn’t.

I’m terrified of forgetting. My grandmother has battled Alzheimer’s for years now, and to watch someone battle Alzheimer’s—we say “battle,” as though there’s some way of winning—is terrifying. If I’m always thinking about dementia, my unscientific logic goes, it can’t happen to me (the way an earthquake comes when you don’t expect it, and so the best course of action is always to expect it). “Really, one might almost live one’s life over, if only one could make a sufficient effort of recollection” is a sentence I once underlined in John Banville’s The Sea (a book that I can’t remember much else about). But effort alone is not enough and isn’t particularly reasonable, anyway. A man named Robert Shields kept the world’s longest diary: he chronicled every five minutes of his life until a stroke in 2006 rendered him unable to. He wrote about microwaving foods, washing dishes, bathroom visits, writing itself. When he died in 2007, he left 37.5 million words behind—ninety-one boxes of paper. Reading his obituary, I wondered if Robert Shields ever managed to watch a movie straight through.

Last spring, as part of a NASA-funded study, a crew of three men and three women with “astronaut-like” characteristics spent four months in a geodesic dome in an abandoned quarry on the northern slope of Hawaii’s Mauna Loa volcano.

For those four months, they lived and ate as though they were on Mars, only venturing outside to the surrounding Mars-like, volcanic terrain, in simulated space suits. Hawaii Space Exploration Analog and Simulation (HI-SEAS) is a four-year project: a series of missions meant to simulate and study the challenges of long-term space travel, in anticipation of mankind’s eventual trip to Mars. This first mission’s focus was food.

Getting to Mars will take roughly six to nine months each way, depending on trajectory; the mission itself will likely span years. So the question becomes: How do you feed astronauts for so long? On “Mars,” the HI-SEAS crew alternated between two days of pre-prepared meals and two days of dome-cooked meals of shelf-stable ingredients. Researchers were interested in a number of behavioral questions: among them, the well-documented phenomenon of menu fatigue (when International Space Station astronauts grow weary of their packeted meals, they tend to lose weight). They wanted to see what patterns would evolve over time if a crew’s members were allowed dietary autonomy and given the opportunity to cook for themselves (“an alternative approach to feeding crews of long term planetary outposts,” read the open call).

Everything was hyper-documented. Everything eaten was logged in painstaking detail: weighed, filmed, and evaluated. The crew filled in surveys before and after meals: queries into how hungry they were, their first impressions, their moods, how the food smelled, what its texture was, how it tasted. They documented their time spent cooking; their water usage; the quantity of leftovers, if any. The goal was to measure the effect of what they ate on their health and morale, along with other basic questions concerning resource use. How much water will it take to cook on Mars? How much water will it take to wash dishes? How much time is required; how much energy? How will everybody feel about it all?


The main food study had a big odor identification component to it: the crew took scratch-n-sniff tests, which Kate said she felt confident about at the mission’s start, and less certain about near the end. “The second-to-last test,” she said, “I would smell grass and feel really wistful.” Their noses were mapped with sonogram because, in space, the shape of your nose changes. And there were, on top of this, studies unrelated to food. They exercised in anti-microbial shirts (laundry doesn’t happen in space), evaluated their experiences hanging out with robot pets, and documented their sleep habits.


“We all had relationships outside that we were trying to maintain in some way,” Kate said. “Some were kind of new, some were tenuous, some were old and established, but they were all very difficult to maintain. A few things that could come off wrong in an e-mail could really bum you out for a long time.”

She told me about another crew member whose boyfriend didn’t email her at his usual time. This was roughly halfway through the mission. She started to get obsessed with the idea that maybe he got into a car accident. “Like seriously obsessed,” Kate said. “I was like, ‘I think your brain is telling you things that aren’t actually happening. Let’s just be calm about this,’ and she was like, ‘Okay, okay.’ But she couldn’t sleep that night. In the end he was just like, ‘Hey, what’s up?’ I knew he would be fine, but I could see how she could think something serious had happened.”

“My wife sent me poems every day but for a couple days she didn’t,” Kate said. “Something was missing from those days, and I don’t think she could have realized how important they were. It was weird. Everything was bigger inside your head because you were living inside your head.”


When I look back on my meals from the past year, the food log does the job I intended more or less effectively.

I can remember, with some clarity, the particulars of given days: who I was with, how I was feeling, the subjects discussed. There was the night in October I stress-scarfed a head of romaine and peanut butter packed onto old, hard bread; the somehow not-sobering bratwurst and fries I ate on day two of a two-day hangover, while trying to keep things light with somebody to whom, the two nights before, I had aired more than I meant to. There was the night in January I cooked “rice, chicken stirfry with bell pepper and mushrooms, tomato-y Chinese broccoli, 1 bottle IPA” with my oldest, best friend, and we ate the stirfry and drank our beers slowly while commiserating about the most recent conversations we’d had with our mothers.

But reading the entries from 2008, that first year, does something else to me: it suffuses me with the same mortification as if I’d written down my most private thoughts (that reaction is what keeps me from maintaining a more conventional journal). There’s nothing especially incriminating about my diet, except maybe that I ate tortilla chips with unusual frequency, but the fact that it’s just food doesn’t spare me from the horror and head-shaking that comes with reading old diaries. Mentions of certain meals conjure specific memories, but mostly what I’m left with are the general feelings from that year. They weren’t happy ones. I was living in San Francisco at the time. A relationship was dissolving.

It seems to me that the success of a relationship depends on a shared trove of memories. Or not shared, necessarily, but not incompatible. That’s the trouble, I think, with parents and children: parents retain memories of their children that the children themselves don’t share. My father’s favorite meal is breakfast and his favorite breakfast restaurant is McDonald’s, and I remember—having just read Michael Pollan or watched Super Size Me—self-righteously not ordering my regular Egg McMuffin one morning, and how that actually hurt him.

When a relationship goes south, it’s hard to pinpoint just where or how—especially after a prolonged period of it heading that direction. I was at a loss with this one. Going forward, I didn’t want not to be able to account for myself. If I could remember everything, I thought, I’d be better equipped; I’d be better able to make proper, comprehensive assessments—informed decisions. But my memory had proved itself unreliable, and I needed something better. Writing down food was a way to turn my life into facts: if I had all the facts, I could keep them straight. So the next time this happened I’d know exactly why—I’d have all the data at hand.

In the wake of that breakup there were stretches of days and weeks of identical breakfasts and identical dinners. Those days and weeks blend into one another, become indistinguishable, and who knows whether I was too sad to be imaginative or all the unimaginative food made me sadder.


“I’m always really curious about who you are in a different context. Who am I completely removed from Earth—or pretending to be removed from Earth? When you’re going further and further from this planet, with all its rules and everything you’ve ever known, what happens? Do you invent new rules? What matters to you when you don’t have constructs? Do you take the constructs with you? On an individual level it was an exploration of who I am in a different context, and on a larger scale, going to another planet is an exploration about what humanity is in a different context.”


What I remember is early that evening, drinking sparkling wine and spreading cream cheese on slices of a soft baguette from the fancy Key Biscayne Publix, then spooning grocery-store caviar onto it (“Lumpfish caviar and Prosecco, definitely, on the balcony”). I remember cooking dinner unhurriedly (“You were comparing prices for the seafood and I was impatient”)—the thinnest pasta I could find, shrimp and squid cooked in wine and lots of garlic—and eating it late (“You cooked something good, but I can’t remember what”) and then drinking a café Cubano even later (“It was so sweet it made our teeth hurt and then, for me at least, immediately precipitated a metabolic crisis”) and how, afterward, we all went to the empty beach and got in the water which was, on that warm summer day, not even cold (“It was just so beautiful after the rain”).

“And this wasn’t the same trip,” wrote that wrong-for-me then-boyfriend, “but remember when you and I walked all the way to that restaurant in Bill Baggs park, at the southern tip of the island, and we had that painfully sweet white sangria, and ceviche, and walked back and got tons of mosquito bites, but we didn’t care, and then we were on the beach somehow and we looked at the red lights on top of all the buildings, and across the channel at Miami Beach, and went in the hot Miami ocean, and most importantly it was National Fish Day?”

And it’s heartening to me that I do remember all that—had remembered without his prompting, or consulting the record (I have written down: “D: ceviche; awful sangria; fried plantains; shrimp paella.” “It is National fish day,” I wrote. “There was lightning all night!”). It’s heartening that my memory isn’t as unreliable as I worry it is. I remember it exactly as he describes: the too-sweet sangria at that restaurant on the water, how the two of us had giggled so hard over nothing and declared that day “National Fish Day,” finding him in the kitchen at four in the morning, dipping a sausage into mustard—me taking that other half of the sausage, dipping it into mustard—the two of us deciding to drive the six hours back to Gainesville, right then.

“That is a really happy memory,” he wrote to me. “That is my nicest memory from that year and from that whole period. I wish we could live it again, in some extra-dimensional parallel life.”

Three years ago I moved back to San Francisco, which was, for me, a new-old city.

I’d lived there twice before. The first time I lived there was a cold summer in 2006, during which I met that man I’d be broken up about a couple years later. And though that summer was before I started writing down the food, and before I truly learned how to cook for myself, I can still remember flashes: a dimly lit party and drinks with limes in them and how, ill-versed in flirting, I took the limes from his drink and put them into mine. I remember a night he cooked circular ravioli he’d bought from an expensive Italian grocery store, and zucchini he’d sliced into thin coins. I remember him splashing Colt 45—leftover from a party—into the zucchini as it was cooking, and all of that charming me: the Colt 45, the expensive ravioli, this dinner of circles.

The second time I lived in San Francisco was the time our thing fell apart. This was where my terror had originated: where I remembered the limes and the ravioli, he remembered or felt the immediacy of something else, and neither of us was right or wrong to remember what we did—all memories, of course, are valid—but still, it sucked. And now I have a record reminding me of the nights I came home drunk and sad and, with nothing else in the house, sautéed kale; blanks on the days I ran hungry to Kezar Stadium from the Lower Haight, running lap after lap after lap to turn my brain off, stopping to read short stories at the bookstore on the way home, all to turn off the inevitable thinking, and at home, of course, the inevitable thinking.


I’m not sure what to make of this data—what conclusions, if any, to draw. What I know is that it accumulates and disappears and accumulates again. No matter how vigilantly we keep track—even if we spend four months in a geodesic dome on a remote volcano with nothing to do but keep track—we experience more than we have the capacity to remember; we eat more than we can retain; we feel more than we can possibly carry with us. And maybe forgetting isn’t so bad. I know there is the “small green apple” from the time we went to a moving sale and he bought bricks, and it was raining lightly, and as we were gathering the bricks we noticed an apple tree at the edge of the property with its branches overhanging into the yard, and we picked two small green apples that’d been washed by the rain, and wiped them off on our shirts. They surprised us by being sweet and tart and good. We put the cores in his car’s cup holders. There was the time he brought chocolate chips and two eggs and a Tupperware of milk to my apartment, and we baked cookies. There are the times he puts candy in my jacket’s small pockets—usually peppermints so ancient they’ve melted and re-hardened inside their wrappers—which I eat anyway, and then are gone, but not gone.

The Fermi Paradox – Wait But Why


Great Filter

SETI (Search for Extraterrestrial Intelligence) is an organization dedicated to listening for signals from other intelligent life. If we’re right that there are 100,000 or more intelligent civilizations in our galaxy, and even a fraction of them are sending out radio waves or laser beams or other modes of attempting to contact others, shouldn’t SETI’s telescope arrays pick up all kinds of signals?

But it hasn’t. Not one. Ever.

Where is everybody?

It gets stranger. Our sun is relatively young in the lifespan of the universe. There are far older stars with far older Earth-like planets, which should in theory mean civilizations far more advanced than our own. As an example, let’s compare our 4.54 billion-year-old Earth to a hypothetical 8 billion-year-old Planet X.


The technology and knowledge of a civilization only 1,000 years ahead of us could be as shocking to us as our world would be to a medieval person. A civilization 1 million years ahead of us might be as incomprehensible to us as human culture is to chimpanzees. And Planet X is 3.4 billion years ahead of us…

There’s something called The Kardashev Scale, which helps us group intelligent civilizations into three broad categories by the amount of energy they use:

A Type I Civilization has the ability to use all of the energy on their planet. We’re not quite a Type I Civilization, but we’re close (Carl Sagan created a formula for this scale which puts us at a Type 0.7 Civilization).
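The formula alluded to above is a logarithmic interpolation on a civilization’s total power use. What follows is one commonly cited form of Sagan’s version, with an assumed round figure for humanity’s present power consumption plugged in (neither number comes from this article):

```latex
K = \frac{\log_{10} P - 6}{10}
```

where $P$ is power in watts. Taking humanity’s consumption as roughly $2 \times 10^{13}$ W gives $K \approx (13.3 - 6)/10 \approx 0.73$ — the Type 0.7 figure mentioned above.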

A Type II Civilization can harness all of the energy of their host star. Our feeble Type I brains can hardly imagine how someone would do this, but we’ve tried our best, imagining things like a Dyson Sphere.

A Type III Civilization blows the other two away, accessing power comparable to that of the entire Milky Way galaxy.

If this level of advancement sounds hard to believe, remember Planet X above and their 3.4 billion years of further development. If a civilization on Planet X were similar to ours and were able to survive all the way to Type III level, the natural thought is that they’d probably have mastered interstellar travel by now, possibly even colonizing the entire galaxy.

Continuing to speculate, if 1% of intelligent life survives long enough to become a potentially galaxy-colonizing Type III Civilization, our calculations above suggest that there should be at least 1,000 Type III Civilizations in our galaxy alone—and given the power of such a civilization, their presence would likely be pretty noticeable. And yet, we see nothing, hear nothing, and we’re visited by no one.
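The arithmetic behind that estimate is a one-liner; a minimal sketch, taking the article’s earlier figure of 100,000 intelligent civilizations in our galaxy as the assumed input:

```python
# Back-of-envelope arithmetic behind the "at least 1,000 Type III
# civilizations" claim. Both inputs are the article's assumptions.
intelligent_civs = 100_000  # estimated intelligent civilizations in our galaxy
survival_rate = 0.01        # 1% assumed to survive long enough to reach Type III

type_iii_civs = int(intelligent_civs * survival_rate)
print(type_iii_civs)  # 1000
```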

So where is everybody?

Welcome to the Fermi Paradox.


In taking a look at some of the most-discussed possible explanations for the Fermi Paradox, let’s divide them into two broad categories—those explanations which assume that there’s no sign of Type II and Type III Civilizations because there are none of them out there, and those which assume they’re out there and we’re not seeing or hearing anything for other reasons:

Explanation Group 1: There are no signs of higher (Type II and III) civilizations because there are no higher civilizations in existence.

Those who subscribe to Group 1 explanations point to something called the non-exclusivity problem, which rebuffs any theory that says, “There are higher civilizations, but none of them have made any kind of contact with us because they all _____.” Group 1 people look at the math, which says there should be so many thousands (or millions) of higher civilizations that at least one of them would be an exception to the rule. Even if a theory held for 99.99% of higher civilizations, the other 0.01% would behave differently and we’d become aware of their existence.

Therefore, say Group 1 explanations, it must be that there are no super-advanced civilizations. And since the math suggests that there are thousands of them just in our own galaxy, something else must be going on.

This something else is called The Great Filter.

The Great Filter theory says that at some point from pre-life to Type III intelligence, there’s a wall that all or nearly all attempts at life hit. There’s some stage in that long evolutionary process that is extremely unlikely or impossible for life to get beyond. That stage is The Great Filter.

If this theory is true, the big question is, Where in the timeline does the Great Filter occur?

It turns out that when it comes to the fate of humankind, this question is very important. Depending on where The Great Filter occurs, we’re left with three possible realities: We’re rare, we’re first, or we’re fucked.

1. We’re Rare (The Great Filter is Behind Us)

One hope we have is that The Great Filter is behind us—we managed to surpass it, which would mean it’s extremely rare for life to make it to our level of intelligence. The diagram below shows only two species making it past, and we’re one of them.

This scenario would explain why there are no Type III Civilizations…but it would also mean that we could be one of the few exceptions now that we’ve made it this far. It would mean we have hope. On the surface, this sounds a bit like people 500 years ago suggesting that the Earth is the center of the universe—it implies that we’re special. However, something scientists call “observation selection effect” suggests that anyone who is pondering their own rarity is inherently part of an intelligent life “success story”—and whether they’re actually rare or quite common, the thoughts they ponder and conclusions they draw will be identical. This forces us to admit that being special is at least a possibility.

And if we are special, when exactly did we become special—i.e. which step did we surpass that almost everyone else gets stuck on?

One possibility: The Great Filter could be at the very beginning—it might be incredibly unusual for life to begin at all. This is a candidate because it took about a billion years of Earth’s existence to finally happen, and because we have tried extensively to replicate that event in labs and have never been able to do it. If this is indeed The Great Filter, it would mean that not only is there no intelligent life out there, there may be no other life at all.

Another possibility: The Great Filter could be the jump from the simple prokaryote cell to the complex eukaryote cell. After prokaryotes came into being, they remained that way for almost two billion years before making the evolutionary jump to being complex and having a nucleus. If this is The Great Filter, it would mean the universe is teeming with simple prokaryote cells and almost nothing beyond that.

There are a number of other possibilities—some even think the most recent leap we’ve made to our current intelligence is a Great Filter candidate. While the leap from semi-intelligent life (chimps) to intelligent life (humans) doesn’t at first seem like a miraculous step, Steven Pinker rejects the idea of an inevitable “climb upward” of evolution: “Since evolution does not strive for a goal but just happens, it uses the adaptation most useful for a given ecological niche, and the fact that, on Earth, this led to technological intelligence only once so far may suggest that this outcome of natural selection is rare and hence by no means a certain development of the evolution of a tree of life.”


If we are indeed rare, it could be because of a fluky biological event, but it also could be attributed to what is called the Rare Earth Hypothesis, which suggests that though there may be many Earth-like planets, the particular conditions on Earth—whether related to the specifics of this solar system, its relationship with the moon (a moon that large is unusual for such a small planet and contributes to our particular weather and ocean conditions), or something about the planet itself—are exceptionally friendly to life.

2. We’re the First

For Group 1 Thinkers, if the Great Filter is not behind us, the one hope we have is that conditions in the universe are just recently, for the first time since the Big Bang, reaching a place that would allow intelligent life to develop. In that case, we and many other species may be on our way to super-intelligence, and it simply hasn’t happened yet. We happen to be here at the right time to become one of the first super-intelligent civilizations.

One example of a phenomenon that could make this realistic is the prevalence of gamma-ray bursts, insanely huge explosions that we’ve observed in distant galaxies. In the same way that it took the early Earth a few hundred million years before the asteroids and volcanoes died down and life became possible, it could be that the first chunk of the universe’s existence was full of cataclysmic events like gamma-ray bursts that would incinerate everything nearby from time to time and prevent any life from developing past a certain stage. Now, perhaps, we’re in the midst of an astrobiological phase transition and this is the first time any life has been able to evolve for this long, uninterrupted.

3. We’re Fucked (The Great Filter is Ahead of Us)

If we’re neither rare nor early, Group 1 thinkers conclude that The Great Filter must be in our future. This would suggest that life regularly evolves to where we are, but that something prevents life from going much further and reaching high intelligence in almost all cases—and we’re unlikely to be an exception.

One possible future Great Filter is a regularly-occurring cataclysmic natural event, like the above-mentioned gamma-ray bursts, except they’re unfortunately not done yet and it’s just a matter of time before all life on Earth is suddenly wiped out by one. Another candidate is the possible inevitability that nearly all intelligent civilizations end up destroying themselves once a certain level of technology is reached.

This is why Oxford University philosopher Nick Bostrom says that “no news is good news.” The discovery of even simple life on Mars would be devastating, because it would cut out a number of potential Great Filters behind us. And if we were to find fossilized complex life on Mars, Bostrom says “it would be by far the worst news ever printed on a newspaper cover,” because it would mean The Great Filter is almost definitely ahead of us—ultimately dooming the species. Bostrom believes that when it comes to The Fermi Paradox, “the silence of the night sky is golden.”

Explanation Group 2: Type II and III intelligent civilizations are out there—and there are logical reasons why we might not have heard from them.

Group 2 explanations get rid of any notion that we’re rare or special or the first at anything—on the contrary, they believe in the Mediocrity Principle, whose starting point is that there is nothing unusual or rare about our galaxy, solar system, planet, or level of intelligence, until evidence proves otherwise. They’re also much less quick to assume that the lack of evidence of higher intelligence beings is evidence of their nonexistence—emphasizing the fact that our search for signals stretches only about 100 light years away from us (0.1% across the galaxy) and suggesting a number of possible explanations. Here are 10:
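The “0.1% across the galaxy” figure follows directly from the Milky Way’s scale; a minimal sketch, assuming a round galactic diameter of 100,000 light-years (a standard approximation, not a number stated in this passage):

```python
# How far our signal searches reach, as a fraction of the galaxy's width.
search_radius_ly = 100        # approximate reach of our searches, per the article
galaxy_diameter_ly = 100_000  # assumed round diameter of the Milky Way

fraction = search_radius_ly / galaxy_diameter_ly
print(f"{fraction:.1%}")  # 0.1%
```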

Possibility 1) Super-intelligent life could very well have already visited Earth, but before we were here. In the scheme of things, sentient humans have only been around for about 50,000 years, a little blip of time—if contact happened before then, it might have made some ducks flip out and run into the water and that’s it. Further, recorded history only goes back 5,500 years—a group of ancient hunter-gatherer tribes may have experienced some crazy alien shit, but they had no good way to tell anyone in the future about it.

Possibility 2) The galaxy has been colonized, but we just live in some desolate rural area of the galaxy. The Americas may have been colonized by Europeans long before anyone in a small Inuit tribe in far northern Canada realized it had happened. There could be an urbanization component to the interstellar dwellings of higher species, in which all the neighboring solar systems in a certain area are colonized and in communication, and it would be impractical and purposeless for anyone to deal with coming all the way out to the random part of the spiral where we live.

Possibility 3) The entire concept of physical colonization is a hilariously backward concept to a more advanced species. Remember the picture of the Type II Civilization above with the sphere around their star? With all that energy, they might have created a perfect environment for themselves that satisfies their every need. They might have crazy-advanced ways of reducing their need for resources and zero interest in leaving their happy utopia to explore the cold, empty, undeveloped universe.

An even more advanced civilization might view the entire physical world as a horribly primitive place, having long ago conquered their own biology and uploaded their brains to a virtual reality, eternal-life paradise. Living in the physical world of biology, mortality, wants, and needs might seem to them the way we view primitive ocean species living in the frigid, dark sea. FYI, thinking about another life form having bested mortality makes me incredibly jealous and upset.

Possibility 4) There are scary predator civilizations out there, and most intelligent life knows better than to broadcast any outgoing signals and advertise their location. This is an unpleasant concept and would help explain the lack of any signals being received by the SETI telescopes. It also means that we might be the super naive newbies who are being unbelievably stupid and risky by ever broadcasting outward signals. There’s a debate going on currently about whether we should engage in METI (Messaging to Extraterrestrial Intelligence—the reverse of SETI) or not, and most people say we should not. Stephen Hawking warns, “If aliens visit us, the outcome would be much as when Columbus landed in America, which didn’t turn out well for the Native Americans.” Even Carl Sagan (a general believer that any civilization advanced enough for interstellar travel would be altruistic, not hostile) called the practice of METI “deeply unwise and immature,” and recommended that “the newest children in a strange and uncertain cosmos should listen quietly for a long time, patiently learning about the universe and comparing notes, before shouting into an unknown jungle that we do not understand.” Scary.[2]

Possibility 5) There’s only one instance of higher-intelligent life—a “superpredator” civilization (like humans are here on Earth)—who is far more advanced than everyone else and keeps it that way by exterminating any intelligent civilization once they get past a certain level. This would suck. The way it might work is that it’s an inefficient use of resources to exterminate all emerging intelligences, maybe because most die out on their own. But past a certain point, the super beings make their move—because to them, an emerging intelligent species becomes like a virus as it starts to grow and spread. This theory suggests that whoever was the first in the galaxy to reach intelligence won, and now no one else has a chance. This would explain the lack of activity out there because it would keep the number of super-intelligent civilizations to just one.

Possibility 6) There’s plenty of activity and noise out there, but our technology is too primitive and we’re listening for the wrong things. Like walking into a modern-day office building, turning on a walkie-talkie, and when you hear no activity (which of course you wouldn’t hear because everyone’s texting, not using walkie-talkies), determining that the building must be empty. Or maybe, as Carl Sagan has pointed out, it could be that our minds work exponentially faster or slower than another form of intelligence out there—e.g. it takes them 12 years to say “Hello,” and when we hear that communication, it just sounds like white noise to us.

Possibility 7) We are receiving contact from other intelligent life, but the government is hiding it. This is an idiotic theory, but I had to mention it because it’s talked about so much.

Possibility 8) Higher civilizations are aware of us and observing us (AKA the “Zoo Hypothesis”). For all we know, super-intelligent civilizations exist in a tightly-regulated galaxy, and our Earth is treated like part of a vast and protected national park, with a strict “Look but don’t touch” rule for planets like ours. We wouldn’t notice them, because if a far smarter species wanted to observe us, it would know how to easily do so without us noticing. Maybe there’s a rule similar to Star Trek’s “Prime Directive” which prohibits super-intelligent beings from making any open contact with lesser species like us or revealing themselves in any way, until the lesser species has reached a certain level of intelligence.

Possibility 9) Higher civilizations are here, all around us. But we’re too primitive to perceive them. Michio Kaku sums it up like this:

Let’s say we have an ant hill in the middle of the forest. And right next to the ant hill, they’re building a ten-lane super-highway. And the question is “Would the ants be able to understand what a ten-lane super-highway is? Would the ants be able to understand the technology and the intentions of the beings building the highway next to them?”

So it’s not that we can’t pick up the signals from Planet X using our technology, it’s that we can’t even comprehend what the beings from Planet X are or what they’re trying to do. It’s so beyond us that even if they really wanted to enlighten us, it would be like trying to teach ants about the internet.

Along those lines, this may also be an answer to “Well if there are so many fancy Type III Civilizations, why haven’t they contacted us yet?” To answer that, let’s ask ourselves—when Pizarro made his way into Peru, did he stop for a while at an anthill to try to communicate? Was he magnanimous, trying to help the ants in the anthill? Did he become hostile and slow his original mission down in order to smash the anthill apart? Or was the anthill of complete and utter and eternal irrelevance to Pizarro? That might be our situation here.

Possibility 10) We’re completely wrong about our reality. There are a lot of ways we could just be totally off with everything we think. The universe might appear one way and be something else entirely, like a hologram. Or maybe we’re the aliens and we were planted here as an experiment or as a form of fertilizer. There’s even a chance that we’re all part of a computer simulation by some researcher from another world, and other forms of life simply weren’t programmed into the simulation.