NHM | 51n4e

NHM is a project for a Dutch National History Museum that does not own a collection and does not plan to build one.

This strategic choice gives the museum immense flexibility in presenting itself across different media. At the same time, the building itself aspires to stimulate a physical and collective experience, one directly related to a spatial concept for a museum.

The proposal thrives on the contrast between its two parts: an extra-large exhibition space and a compact slab housing all other museum-related functions, such as reception, education, meeting and lingering. The overscaled space takes the absence of a collection as an opportunity, allowing the display of potentially any kind of object, from a medieval coin to the latest windmill model. Different eras, cultures and societies are brought into dialogue. The space itself is an abstract, neutral background. It can host separate parallel exhibitions, presentations, concerts and collective events of any scale. Past and present are shown, and happen, side by side.

The various rooms of different functions in the thin slab become balconies overlooking the internal landscape of the exhibition hall. Whatever activity brings visitors to the NHM, the exhibition of history is always present. Surveying the space from a distance, one can individually look back and reflect.

Corpography in No-Man’s Land | Re-inhabiting No-Man’s Land

From its first entrance into the English language, designating a mass burial site for fourteenth-century victims of the Black Death, the no-man’s land has exhibited an encounter between bodies and the materiality of the earth so violent that a distinction between the two is no longer possible.

In his 1922 essay The Battle as Inner Experience, Ernst Jünger describes how the Fronterlebnis – life on the edges of no-man’s land – dissolves the boundary between body and space, transforming the soldier into an integral part of a frontline ecology: “There, the individual is like a raging storm, the tossing sea and the rearing thunder. He has melted into everything.”

The experience Jünger describes is not just a traumatic subjection of the body to mechanised war, but, as Jeffrey Herf notes, an almost erotic rebirth and transfiguration of men into a new, improved community of the trenches that will lead to the creation of “new forms filled with blood and power [that] will be packed with a hard fist”. Rather than resort to nostalgia for a pastoral pre-industrialised era, in the no-man’s land Jünger discovers a landscape where body, machine and soil are fused to form “magnificent and merciless spectacles”.

[…]

In Svetlana Alexievich’s remarkable book of testimonies from Chernobyl, the wife of one of the firemen who was exposed to extreme levels of radiation describes the bio-chamber in which he was placed during his hospitalization in Moscow, and the extensive quarantine measures that isolated the man from the medical staff. Completing his dehumanisation, one nurse referred to the dying man as “a radioactive object with a strong density of poisoning.[…] That’s not a person anymore, that’s a nuclear reactor”. The radical unmaking of the human body, to the point that it is no longer distinguishable from the original space of disaster, echoes the violent dissolution of distinctions between body and space that constituted the disastrous corpographies of WWI.

Maps, cracks & geographical imaginations: The Princess of No-Man’s Land | Re-inhabiting No-Man’s Land

Bir Tawil – Arabic for ‘tall well’ – is an 800-square-mile, trapezoid-shaped tract of land wedged between the southern border of the Arab Republic of Egypt and the northern border of the Republic of the Sudan.

What makes Bir Tawil so fascinating is that it is seemingly so unwanted. It is unclaimed by both of its continental neighbours and, as a consequence, appears to resist, even exceed, the processes of expansion and enclosure that are so associated with the system of modern nation states. Until now, that is. As of 16 June 2014, Bir Tawil has been claimed – by the unlikely sounding Jeremiah Heaton of Abingdon, Virginia.

Heaton, we are led to believe, is a man who would do almost anything for his seven-year-old daughter, including fulfilling a promise that she could be ‘a real princess’. Eschewing the easy option of procuring a natty costume from a local royal outfitter, Heaton instead cast his geopolitical eye around the world in order to establish his own independent kingdom. He initially considered staking a claim to a portion of Antarctica until he “discovered” that sovereignty claims on the continent are suspended under the Antarctic Treaty System, agreed in 1959. The unclaimed Bir Tawil was a natural second choice. In a move discomforting for its similarity to past colonial possession-taking across Africa, Heaton travelled to Bir Tawil where, on 16 June 2014 (yes, you guessed it, his daughter’s seventh birthday) he planted a self-designed flag and ushered into being the ‘Kingdom of Northern Sudan’.

Jeremiah Heaton staking his claim to Bir Tawil (16 June 2014).

[…]

“It is not just a no man’s land, it is actively spurned. It appears to be the only place left on earth that is both habitable and unclaimed.”

The roots of this “unclaiming” date back more than a century to the publication, in 1899 and 1902 respectively, of two maps by British colonial cartographers that created two distinct versions of the border between Egypt and what was, at the time, Anglo-Egyptian Sudan. The 1899 iteration places Bir Tawil within Sudan but incorporates the economically productive pocket of land known as the Hala’ib Triangle within Egypt. The 1902 map reversed this territorial allocation by placing Bir Tawil within Egypt and the Hala’ib Triangle within Sudan.

The effect of this cartographic flip-flopping has been that neither Egypt nor Sudan has pursued an active claim over Bir Tawil, because to do so may undermine their respective national claims to the Hala’ib Triangle. Bir Tawil, as a consequence, exists as a crack between two modern nation states and, as such, is evocative of one of the earliest appearances of no-man’s land (nonesmanneslond) in the English language, from around 1320, when it was used in reference to the barren stretches of land—often used as waste or dumping grounds—between two provinces or kingdoms. While these medieval spaces were frequently economically unproductive and therefore unwanted by feudal lords, the story of Bir Tawil is bound up in a more complex story of sovereignty claims and strategic ‘unclaiming’.

[…]

Bir Tawil was for thousands of years, until comparatively recently, actively used by members of the Ababda tribe in pursuit of their nomadic lifestyle, culture and practices. Even after 1902, the Ababda continued to transgress – or, again, exceed – the newly imagined lines of colonial cartography in order to seasonally graze camels, goats and sheep.

[…]

Satellite imagery reveals more contemporary evidence of occupation (albeit temporary) and movement within – and through – Bir Tawil. Tyre tracks point to frequent visitation – whether for the purpose of military patrols, tourism, or the transportation of goods or people. In any case, No Man’s Lands are rarely empty.

[…]

In the meantime, the world surely trembles in anticipation of 16 June 2015, the date when we will learn what Princess Emily requests and requires for her eighth birthday.

Airports: The True Cities of the 21st Century – J.G. Ballard

Ballardian: The World of JG Ballard

Airports, designed around the needs of their collaborating technologies, seem to be the only form of public architecture free from the pressures of kitsch and nostalgia. As far as I know, there are no half-timbered terminal buildings or pebble-dashed control towers.

[…]

For the past 35 years I have lived in Shepperton, a suburb not of London but of London’s Heathrow Airport. The Heathrow-tinged land extends for at least 10 miles south and west, a zone of motorways, science parks, and industrial estates, a landscape that most people affect to loathe but that I regard as the most advanced and admirable in the British Isles, and a paradigm of the best that the future offers us.

[…]

I value the benevolent social and architectural influence that a huge transit facility like Heathrow casts on the urban landscape around it. I have learned to like the intricate network of car rental offices, air freight depots, and travel clinics, the light industrial and motel architecture that unvaryingly surrounds every major airport in the world. Together they constitute the reality of our lives, rather than a mythical domain of village greens, cathedrals, and manorial vistas. I welcome the landscape’s transience, alienation, and discontinuities, and its unashamed response to the pressures of speed, disposability, and the instant impulse. Here, under the flight paths, everything is designed for the next five minutes.

By comparison, London itself seems hopelessly antiquated. Its hundreds of miles of gentrified stucco are a hangover from the 19th century that should have been bulldozed decades ago. I have the sense of a city devised as an instrument of political control, like the class system that preserves England from revolution. The labyrinth of districts and boroughs, the endless porticos that once guarded the modest terraced cottages of Victorian clerks, make it clear that London is a place where people know their place.

At an airport like Heathrow the individual is defined not by the tangible ground mortgaged into his soul for the next 40 years, but by the indeterminate flicker of flight numbers trembling on a screen. We are no longer citizens with civic obligations, but passengers for whom all destinations are theoretically open, our lightness of baggage mandated by the system. Airports have become a new kind of discontinuous city whose vast populations are entirely transient, purposeful, and, for the most part, happy. An easy camaraderie rules the departure lounges, along with the virtual abolition of nationality—whether we are Scots or Japanese is far less important than where we are going. I’ve long suspected that people are truly happy and aware of a real purpose to their lives only when they hand over their tickets at the check-in.

I suspect that the airport will be the true city of the 21st century. The great airports are already the suburbs of an invisible world capital, a virtual metropolis whose border towns are named Heathrow, Kennedy, Charles de Gaulle, Nagoya, a centripetal city whose population forever circles its notional center and will never need to gain access to its dark heart. Mastery of the discontinuities of metropolitan life has always been essential to successful urban dwellers—we know none of our neighbors, and our close friends live equally isolated lives within 50 square miles around us. We work in a district five miles away, shop in another, and see films and plays in a third. Failure to master these discontinuities leaves some ethnic groups at a disadvantage, forced into enclaves that seem to reconstitute mental maps of ancestral villages.

But the modern airport defuses these tensions and offers its passengers the social reassurance of the boarding lounge, an instantly summoned village whose life span is long enough to calm us and short enough not to be a burden. The terminal concourses are the ramblas and agoras of the future city, time-free zones where all the clocks of the world are displayed, an atlas of arrivals and destinations forever updating itself, where briefly we become true world citizens. Air travel may well be the most important civic duty that we discharge today, erasing class and national distinctions and subsuming them within the unitary global culture of the departure lounge.

The Hi-Tech Mess of Higher Education by David Bromwich | The New York Review of Books

Students at Deep Springs College in the California desert, near the Nevada border, where education involves ranching, farming, and self-governance in addition to academics – Jodi Cobb/National Geographic/Getty Images

The financial crush has come just when colleges are starting to think of Internet learning as a substitute for the classroom. And the coincidence has engendered a new variant of the reflection theory. We are living (the digital entrepreneurs and their handlers like to say) in a technological society, or a society in which new technology is rapidly altering people’s ways of thinking, believing, behaving, and learning. It follows that education itself ought to reflect the change. Mastery of computer technology is the major competence schools should be asked to impart. But what if you can get the skills more cheaply without the help of a school?

A troubled awareness of this possibility has prompted universities, in their brochures, bulletins, and advertisements, to heighten the one clear advantage that they maintain over the Internet. Universities are physical places; and physical existence is still felt to be preferable in some ways to virtual existence. Schools have been driven to present as assets, in a way they never did before, nonacademic programs and facilities that provide students with the “quality of life” that makes a college worth the outlay. Auburn University in Alabama recently spent $72 million on a Recreation and Wellness Center. Stanford built Escondido Village Highrise Apartments. Must a college that wants to compete now have a student union with a food court and plasma screens in every room?

[…]

The model seems to be the elite club—in this instance, a club whose leading function is to house in comfort thousands of young people while they complete some serious educational tasks and form connections that may help them in later life.

[…]

A hidden danger both of intramural systems and of public forums like “Rate My Professors” is that they discourage eccentricity. Samuel Johnson defined a classic of literature as a work that has pleased many and pleased long. Evaluations may foster courses that please many and please fast.

At the utopian edge of the technocratic faith, a rising digital remedy for higher education goes by the acronym MOOCs (massive open online courses). The MOOC movement is represented in Ivory Tower by the Silicon Valley outfit Udacity. “Does it really make sense,” asks a Udacity adept, “to have five hundred professors in five hundred different universities each teach students in a similar way?” What you really want, he thinks, is the academic equivalent of a “rock star” to project knowledge onto the screens and into the brains of students without the impediment of fellow students or a teacher’s intrusive presence in the room. “Maybe,” he adds, “that rock star could do a little bit better job” than the nameless small-time academics whose fame and luster the video lecturer will rightly displace.

That the academic star will do a better job of teaching than the local pedagogue who exactly resembles 499 others of his kind—this, in itself, is an interesting assumption at Udacity and a revealing one. Why suppose that five hundred teachers of, say, the English novel from Defoe to Joyce will all tend to teach the materials in the same way, while the MOOC lecturer will stand out because he teaches the most advanced version of the same way? Here, as in other aspects of the movement, under all the talk of variety there lurks a passion for uniformity.

[…]

The pillars of education at Deep Springs are self-governance, academics, and physical labor. The students number scarcely more than the scholar-hackers on Thiel Fellowships—a total of twenty-six—but they are responsible for all the duties of ranching and farming on the campus in Big Pine, California, along with helping to set the curriculum and keep their quarters. Two minutes of a Deep Springs seminar on citizen and state in the philosophy of Hegel give a more vivid impression of what college education can be than all the comments by college administrators in the rest of Ivory Tower.

[…]

Teaching at a university, he says, involves a commitment to the preservation of “cultural memory”; it is therefore in some sense “an effort to cheat death.”

What I believe, J.G. Ballard

Photos from the Freeway series by Catherine Opie

I believe in the power of the imagination to remake the world, to release the truth within us, to hold back the night, to transcend death, to charm motorways, to ingratiate ourselves with birds, to enlist the confidences of madmen.

I believe in my own obsessions, in the beauty of the car crash, in the peace of the submerged forest, in the excitements of the deserted holiday beach, in the elegance of automobile graveyards, in the mystery of multi-storey car parks, in the poetry of abandoned hotels.

I believe in the forgotten runways of Wake Island, pointing towards the Pacifics of our imaginations.

I believe in the mysterious beauty of Margaret Thatcher, in the arch of her nostrils and the sheen on her lower lip; in the melancholy of wounded Argentine conscripts; in the haunted smiles of filling station personnel; in my dream of Margaret Thatcher caressed by that young Argentine soldier in a forgotten motel watched by a tubercular filling station attendant.

I believe in the beauty of all women, in the treachery of their imaginations, so close to my heart; in the junction of their disenchanted bodies with the enchanted chromium rails of supermarket counters; in their warm tolerance of my perversions.

I believe in the death of tomorrow, in the exhaustion of time, in our search for a new time within the smiles of auto-route waitresses and the tired eyes of air-traffic controllers at out-of-season airports.

I believe in the genital organs of great men and women, in the body postures of Ronald Reagan, Margaret Thatcher and Princess Di, in the sweet odours emanating from their lips as they regard the cameras of the entire world.

I believe in madness, in the truth of the inexplicable, in the common sense of stones, in the lunacy of flowers, in the disease stored up for the human race by the Apollo astronauts.

I believe in nothing.

I believe in Max Ernst, Delvaux, Dali, Titian, Goya, Leonardo, Vermeer, Chirico, Magritte, Redon, Duerer, Tanguy, the Facteur Cheval, the Watts Towers, Boecklin, Francis Bacon, and all the invisible artists within the psychiatric institutions of the planet.

I believe in the impossibility of existence, in the humour of mountains, in the absurdity of electromagnetism, in the farce of geometry, in the cruelty of arithmetic, in the murderous intent of logic.

I believe in adolescent women, in their corruption by their own leg stances, in the purity of their dishevelled bodies, in the traces of their pudenda left in the bathrooms of shabby motels.

I believe in flight, in the beauty of the wing, and in the beauty of everything that has ever flown, in the stone thrown by a small child that carries with it the wisdom of statesmen and midwives.

I believe in the gentleness of the surgeon’s knife, in the limitless geometry of the cinema screen, in the hidden universe within supermarkets, in the loneliness of the sun, in the garrulousness of planets, in the repetitiveness of ourselves, in the inexistence of the universe and the boredom of the atom.

I believe in the light cast by video-recorders in department store windows, in the messianic insights of the radiator grilles of showroom automobiles, in the elegance of the oil stains on the engine nacelles of 747s parked on airport tarmacs.

I believe in the non-existence of the past, in the death of the future, and the infinite possibilities of the present.

I believe in the derangement of the senses: in Rimbaud, William Burroughs, Huysmans, Genet, Celine, Swift, Defoe, Carroll, Coleridge, Kafka.

I believe in the designers of the Pyramids, the Empire State Building, the Berlin Fuehrerbunker, the Wake Island runways.

I believe in the body odours of Princess Di.

I believe in the next five minutes.

I believe in the history of my feet.

I believe in migraines, the boredom of afternoons, the fear of calendars, the treachery of clocks.

I believe in anxiety, psychosis and despair.

I believe in the perversions, in the infatuations with trees, princesses, prime ministers, derelict filling stations (more beautiful than the Taj Mahal), clouds and birds.

I believe in the death of the emotions and the triumph of the imagination.

I believe in Tokyo, Benidorm, La Grande Motte, Wake Island, Eniwetok, Dealey Plaza.

I believe in alcoholism, venereal disease, fever and exhaustion.

I believe in pain.

I believe in despair.

I believe in all children.

I believe in maps, diagrams, codes, chess-games, puzzles, airline timetables, airport indicator signs.

I believe all excuses.

I believe all reasons.

I believe all hallucinations.

I believe all anger.

I believe all mythologies, memories, lies, fantasies, evasions.

I believe in the mystery and melancholy of a hand, in the kindness of trees, in the wisdom of light.

“Tip-of-the-Tongue Syndrome,” Transactive Memory, and How the Internet Is Making Us Smarter | Brain Pickings

Vannevar Bush’s ‘memex’ — short for ‘memory index’ — a primitive vision for a personal hard drive for information storage and management.

“At their best, today’s digital tools help us see more, retain more, communicate more. At their worst, they leave us prey to the manipulation of the toolmakers. But on balance, I’d argue, what is happening is deeply positive. This book is about the transformation.”

[…]

One of his most fascinating and important points has to do with our outsourcing of memory — or, more specifically, our increasingly deft, search-engine-powered skills of replacing the retention of knowledge in our own brains with the on-demand access to knowledge in the collective brain of the internet. Think, for instance, of those moments when you’re trying to recall the name of a movie but only remember certain fragmentary features — the name of the lead actor, the gist of the plot, a song from the soundtrack. Thompson calls this “tip-of-the-tongue syndrome” and points out that, today, you’ll likely be able to reverse-engineer the name of the movie you don’t remember by plugging into Google what you do remember about it.

[…]

“Tip-of-the-tongue syndrome is an experience so common that cultures worldwide have a phrase for it. Cheyenne Indians call it navonotootse’a, which means “I have lost it on my tongue”; in Korean it’s hyeu kkedu-te mam-dol-da, which has an even more gorgeous translation: “sparkling at the end of my tongue.” The phenomenon generally lasts only a minute or so; your brain eventually makes the connection. But … when faced with a tip-of-the-tongue moment, many of us have begun to rely instead on the Internet to locate information on the fly. If lifelogging … stores “episodic,” or personal, memories, Internet search engines do the same for a different sort of memory: “semantic” memory, or factual knowledge about the world. When you visit Paris and have a wonderful time drinking champagne at a café, your personal experience is an episodic memory. Your ability to remember that Paris is a city and that champagne is an alcoholic beverage — that’s semantic memory.”

[…]

“Writing — the original technology for externalizing information — emerged around five thousand years ago, when Mesopotamian merchants began tallying their wares using etchings on clay tablets. It emerged first as an economic tool. As with photography and the telephone and the computer, newfangled technologies for communication nearly always emerge in the world of commerce. The notion of using them for everyday, personal expression seems wasteful, risible, or debased. Then slowly it becomes merely lavish, what “wealthy people” do; then teenagers take over and the technology becomes common to the point of banality.”

Thompson reminds us of the anecdote, by now itself familiar “to the point of banality,” about Socrates and his admonition that the “technology” of writing would devastate the Greek tradition of debate and dialectic, and would render people incapable of committing anything to memory because “knowledge stored was not really knowledge at all.” He cites Socrates’s parable of the Egyptian god Theuth and how he invented writing, offering it as a gift to the king of Egypt,

“This discovery of yours will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.”

That resistance endured as technology changed shape, across the Middle Ages and past Gutenberg’s revolution, but it wasn’t without counter-resistance: Those who recorded their knowledge in writing and, eventually, collected it in the form of books argued that it expanded the scope of their curiosity and the ideas they were able to ponder, whereas the mere act of rote memorization made no guarantees of deeper understanding.

Ultimately, however, Thompson points out that Socrates was both right and wrong: It’s true that, with some deliberately cultivated exceptions and neurological outliers, few thinkers today rely on pure memorization and can recite extensive passages of text from memory. But what Socrates failed to see was the extraordinary dot-connecting enabled by access to knowledge beyond what our own heads can hold — because, as Amanda Palmer poignantly put it, “we can only connect the dots that we collect,” and the outsourcing of memory has exponentially enlarged our dot-collections.

With this in mind, Thompson offers a blueprint to this newly developed system of knowledge management in which access is critical:

“If you are going to read widely but often read books only once; if you are going to tackle the ever-expanding universe of ideas by skimming and glancing as well as reading deeply; then you are going to rely on the semantic-memory version of gisting. By which I mean, you’ll absorb the gist of what you read but rarely retain the specifics. Later, if you want to mull over a detail, you have to be able to refind a book, a passage, a quote, an article, a concept.”

This, he argues, is also how and why libraries were born — the death of the purely oral world and the proliferation of print after Gutenberg placed new demands on organizing and storing human knowledge. And yet storage and organization soon proved to be radically different things:

“The Gutenberg book explosion certainly increased the number of books that libraries acquired, but librarians had no agreed-upon ways to organize them. It was left to the idiosyncrasies of each. A core job of the librarian was thus simply to find the book each patron requested, since nobody else knew where the heck the books were. This created a bottleneck in access to books, one that grew insufferable in the nineteenth century as citizens began swarming into public venues like the British Library. “Complaints about the delays in the delivery of books to readers increased,” as Matthew Battles writes in Library: An Unquiet History, “as did comments about the brusqueness of the staff.” Some patrons were so annoyed by the glacial pace of access that they simply stole books; one was even sentenced to twelve months in prison for the crime. You can understand their frustration. The slow speed was not just a physical nuisance, but a cognitive one.”

The solution came in the late 19th century by way of Melville Dewey, whose decimal system imposed order by creating a taxonomy of book placement, eventually rendering librarians unnecessary — at least in their role as literal book-retrievers. They became, instead, curiosity sherpas who helped patrons decide what to read and carry out comprehensive research. In many ways, they came to resemble the editors and curators who help us navigate the internet today, framing for us what is worth attending to and why.

[…]

“The history of factual memory has been fairly predictable up until now. With each innovation, we’ve outsourced more information, then worked to make searching more efficient. Yet somehow, the Internet age feels different. Quickly pulling up [the answer to a specific esoteric question] on Google seems different from looking up a bit of trivia in an encyclopedia. It’s less like consulting a book than like asking someone a question, consulting a supersmart friend who lurks within our phones.”

And therein lies the magic of the internet — that unprecedented access to humanity’s collective brain. Thompson cites the work of Harvard psychologist Daniel Wegner, who first began exploring this notion of collective rather than individual knowledge in the 1980s by observing how partners in long-term relationships often divide and conquer memory tasks in sharing the household’s administrative duties:

“Wegner suspected this division of labor takes place because we have pretty good “metamemory.” We’re aware of our mental strengths and limits, and we’re good at intuiting the abilities of others. Hang around a workmate or a romantic partner long enough and you begin to realize that while you’re terrible at remembering your corporate meeting schedule, or current affairs in Europe, or how big a kilometer is relative to a mile, they’re great at it. So you begin to subconsciously delegate the task of remembering that stuff to them, treating them like a notepad or encyclopedia. In many respects, Wegner noted, people are superior to these devices, because what we lose in accuracy we make up in speed.

[…]

Wegner called this phenomenon “transactive” memory: two heads are better than one. We share the work of remembering, Wegner argued, because it makes us collectively smarter — expanding our ability to understand the world around us.”

[…]

This very outsourcing of memory requires that we learn what the machine knows — a kind of meta-knowledge that enables us to retrieve the information when we need it. And, reflecting on Sparrow’s findings, Thompson points out that this is neither new nor negative:

“We’ve been using transactive memory for millennia with other humans. In everyday life, we are only rarely isolated, and for good reason. For many thinking tasks, we’re dumber and less cognitively nimble if we’re not around other people. Not only has transactive memory not hurt us, it’s allowed us to perform at higher levels, accomplishing acts of reasoning that are impossible for us alone.”

[…]

Outsourcing our memory to machines rather than to other humans, in fact, offers certain advantages by pulling us into a seemingly infinite rabbit hole of indiscriminate discovery:

“In some ways, machines make for better transactive memory buddies than humans. They know more, but they’re not awkward about pushing it in our faces. When you search the Web, you get your answer — but you also get much more. Consider this: If I’m trying to remember what part of Pakistan has experienced many U.S. drone strikes and I ask a colleague who follows foreign affairs, he’ll tell me “Waziristan.” But when I queried this once on the Internet, I got the Wikipedia page on “Drone attacks in Pakistan.” A chart caught my eye showing the astonishing increase of drone attacks (from 1 a year to 122 a year); then I glanced down to read a précis of studies on how Waziristan residents feel about being bombed. (One report suggested they weren’t as opposed as I’d expected, because many hated the Taliban, too.) Obviously, I was procrastinating. But I was also learning more, reinforcing my schematic understanding of Pakistan.”

[…]

“The real challenge of using machines for transactive memory lies in the inscrutability of their mechanics. Transactive memory works best when you have a sense of how your partners’ minds work — where they’re strong, where they’re weak, where their biases lie. I can judge that for people close to me. But it’s harder with digital tools, particularly search engines. You can certainly learn how they work and develop a mental model of Google’s biases. … But search companies are for-profit firms. They guard their algorithms like crown jewels. This makes them different from previous forms of outboard memory. A public library keeps no intentional secrets about its mechanisms; a search engine keeps many. On top of this inscrutability, it’s hard to know what to trust in a world of self-publishing. To rely on networked digital knowledge, you need to look with skeptical eyes. It’s a skill that should be taught with the same urgency we devote to teaching math and writing.”

Thompson’s most important point, however, has to do with how outsourcing our knowledge to digital tools actually hampers the very process of creative thought, which relies on our ability to connect existing ideas from our mental pool of resources into new combinations, or what the French polymath Henri Poincaré famously termed “sudden illuminations.” Without a mental catalog of materials to mull over and let incubate in our fringe consciousness, our capacity for such illuminations is greatly diminished. Thompson writes:

“These eureka moments are familiar to all of us; they’re why we take a shower or go for a walk when we’re stuck on a problem. But this technique works only if we’ve actually got a lot of knowledge about the problem stored in our brains through long study and focus. … You can’t come to a moment of creative insight if you haven’t got any mental fuel. You can’t be googling the info; it’s got to be inside you.”

[…]

“Evidence suggests that when it comes to knowledge we’re interested in — anything that truly excites us and has meaning — we don’t turn off our memory. Certainly, we outsource when the details are dull, as we now do with phone numbers. These are inherently meaningless strings of information, which offer little purchase on the mind. … It makes sense that our transactive brains would hand this stuff off to machines. But when information engages us — when we really care about a subject — the evidence suggests we don’t turn off our memory at all.”

[…]

“In an ideal world, we’d all fit the Renaissance model — we’d be curious about everything, filled with diverse knowledge and thus absorbing all current events and culture like sponges. But this battle is age-old, because it’s ultimately not just technological. It’s cultural and moral and spiritual; “getting young people to care about the hard stuff” is a struggle that goes back centuries and requires constant societal arguments and work. It’s not that our media and technological environment don’t matter, of course. But the vintage of this problem indicates that the solution isn’t merely in the media environment either.”

[…]

“A tool’s most transformative uses generally take us by surprise.”

[…]

“How should you respond when you get powerful new tools for finding answers?

Think of harder questions.”