
“Tip-of-the-Tongue Syndrome,” Transactive Memory, and How the Internet Is Making Us Smarter | Brain Pickings


Vannevar Bush’s ‘memex’ — short for ‘memory index’ — a primitive vision for a personal hard drive for information storage and management.

“At their best, today’s digital tools help us see more, retain more, communicate more. At their worst, they leave us prey to the manipulation of the toolmakers. But on balance, I’d argue, what is happening is deeply positive. This book is about the transformation.”

[…]

One of his most fascinating and important points has to do with our outsourcing of memory — or, more specifically, our increasingly deft, search-engine-powered skills of replacing the retention of knowledge in our own brains with the on-demand access to knowledge in the collective brain of the internet. Think, for instance, of those moments when you’re trying to recall the name of a movie but only remember certain fragmentary features — the name of the lead actor, the gist of the plot, a song from the soundtrack. Thompson calls this “tip-of-the-tongue syndrome” and points out that, today, you’ll likely be able to reverse-engineer the name of the movie you don’t remember by plugging into Google what you do remember about it.

[…]

“Tip-of-the-tongue syndrome is an experience so common that cultures worldwide have a phrase for it. Cheyenne Indians call it navonotootse’a, which means “I have lost it on my tongue”; in Korean it’s hyeu kkedu-te mam-dol-da, which has an even more gorgeous translation: “sparkling at the end of my tongue.” The phenomenon generally lasts only a minute or so; your brain eventually makes the connection. But … when faced with a tip-of-the-tongue moment, many of us have begun to rely instead on the Internet to locate information on the fly. If lifelogging … stores “episodic,” or personal, memories, Internet search engines do the same for a different sort of memory: “semantic” memory, or factual knowledge about the world. When you visit Paris and have a wonderful time drinking champagne at a café, your personal experience is an episodic memory. Your ability to remember that Paris is a city and that champagne is an alcoholic beverage — that’s semantic memory.”

[…]

“Writing — the original technology for externalizing information — emerged around five thousand years ago, when Mesopotamian merchants began tallying their wares using etchings on clay tablets. It emerged first as an economic tool. As with photography and the telephone and the computer, newfangled technologies for communication nearly always emerge in the world of commerce. The notion of using them for everyday, personal expression seems wasteful, risible, or debased. Then slowly it becomes merely lavish, what “wealthy people” do; then teenagers take over and the technology becomes common to the point of banality.”

Thompson reminds us of the anecdote, by now itself familiar “to the point of banality,” about Socrates and his admonition that the “technology” of writing would devastate the Greek tradition of debate and dialectic, and would render people incapable of committing anything to memory because “knowledge stored was not really knowledge at all.” He cites Socrates’s parable of the Egyptian god Theuth and how he invented writing, offering it as a gift to the king of Egypt:

“This discovery of yours will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.”

That resistance endured as technology changed shape, across the Middle Ages and past Gutenberg’s revolution, but it wasn’t without counter-resistance: Those who recorded their knowledge in writing and, eventually, collected it in the form of books argued that it expanded the scope of their curiosity and the ideas they were able to ponder, whereas the mere act of rote memorization made no guarantees of deeper understanding.

Ultimately, however, Thompson points out that Socrates was both right and wrong: It’s true that, with some deliberately cultivated exceptions and neurological outliers, few thinkers today rely on pure memorization and can recite extensive passages of text from memory. But what Socrates failed to see was the extraordinary dot-connecting enabled by access to knowledge beyond what our own heads can hold — because, as Amanda Palmer poignantly put it, “we can only connect the dots that we collect,” and the outsourcing of memory has exponentially enlarged our dot-collections.

With this in mind, Thompson offers a blueprint to this newly developed system of knowledge management in which access is critical:

“If you are going to read widely but often read books only once; if you are going to tackle the ever-expanding universe of ideas by skimming and glancing as well as reading deeply; then you are going to rely on the semantic-memory version of gisting. By which I mean, you’ll absorb the gist of what you read but rarely retain the specifics. Later, if you want to mull over a detail, you have to be able to refind a book, a passage, a quote, an article, a concept.”

This, he argues, is also how and why libraries were born — the death of the purely oral world and the proliferation of print after Gutenberg placed new demands on organizing and storing human knowledge. And yet storage and organization soon proved to be radically different things:

“The Gutenberg book explosion certainly increased the number of books that libraries acquired, but librarians had no agreed-upon ways to organize them. It was left to the idiosyncrasies of each. A core job of the librarian was thus simply to find the book each patron requested, since nobody else knew where the heck the books were. This created a bottleneck in access to books, one that grew insufferable in the nineteenth century as citizens began swarming into public venues like the British Library. “Complaints about the delays in the delivery of books to readers increased,” as Matthew Battles writes in Library: An Unquiet History, “as did comments about the brusqueness of the staff.” Some patrons were so annoyed by the glacial pace of access that they simply stole books; one was even sentenced to twelve months in prison for the crime. You can understand their frustration. The slow speed was not just a physical nuisance, but a cognitive one.”

The solution came in the late 19th century by way of Melville Dewey, whose decimal system imposed order by creating a taxonomy of book placement, eventually rendering librarians unnecessary — at least in their role as literal book-retrievers. They became, instead, curiosity sherpas who helped patrons decide what to read and carry out comprehensive research. In many ways, they came to resemble the editors and curators who help us navigate the internet today, framing for us what is worth attending to and why.

[…]

“The history of factual memory has been fairly predictable up until now. With each innovation, we’ve outsourced more information, then worked to make searching more efficient. Yet somehow, the Internet age feels different. Quickly pulling up [the answer to a specific esoteric question] on Google seems different from looking up a bit of trivia in an encyclopedia. It’s less like consulting a book than like asking someone a question, consulting a supersmart friend who lurks within our phones.”

And therein lies the magic of the internet — that unprecedented access to humanity’s collective brain. Thompson cites the work of Harvard psychologist Daniel Wegner, who first began exploring this notion of collective rather than individual knowledge in the 1980s by observing how partners in long-term relationships often divide and conquer memory tasks in sharing the household’s administrative duties:

“Wegner suspected this division of labor takes place because we have pretty good “metamemory.” We’re aware of our mental strengths and limits, and we’re good at intuiting the abilities of others. Hang around a workmate or a romantic partner long enough and you begin to realize that while you’re terrible at remembering your corporate meeting schedule, or current affairs in Europe, or how big a kilometer is relative to a mile, they’re great at it. So you begin to subconsciously delegate the task of remembering that stuff to them, treating them like a notepad or encyclopedia. In many respects, Wegner noted, people are superior to these devices, because what we lose in accuracy we make up in speed.

[…]

Wegner called this phenomenon “transactive” memory: two heads are better than one. We share the work of remembering, Wegner argued, because it makes us collectively smarter — expanding our ability to understand the world around us.”

[…]

This very outsourcing of memory requires that we learn what the machine knows — a kind of meta-knowledge that enables us to retrieve the information when we need it. And, reflecting on Sparrow’s findings, Thompson points out that this is neither new nor negative:

“We’ve been using transactive memory for millennia with other humans. In everyday life, we are only rarely isolated, and for good reason. For many thinking tasks, we’re dumber and less cognitively nimble if we’re not around other people. Not only has transactive memory not hurt us, it’s allowed us to perform at higher levels, accomplishing acts of reasoning that are impossible for us alone.”

[…]

Outsourcing our memory to machines rather than to other humans, in fact, offers certain advantages by pulling us into a seemingly infinite rabbit hole of indiscriminate discovery:

“In some ways, machines make for better transactive memory buddies than humans. They know more, but they’re not awkward about pushing it in our faces. When you search the Web, you get your answer — but you also get much more. Consider this: If I’m trying to remember what part of Pakistan has experienced many U.S. drone strikes and I ask a colleague who follows foreign affairs, he’ll tell me “Waziristan.” But when I queried this once on the Internet, I got the Wikipedia page on “Drone attacks in Pakistan.” A chart caught my eye showing the astonishing increase of drone attacks (from 1 a year to 122 a year); then I glanced down to read a précis of studies on how Waziristan residents feel about being bombed. (One report suggested they weren’t as opposed as I’d expected, because many hated the Taliban, too.) Obviously, I was procrastinating. But I was also learning more, reinforcing my schematic understanding of Pakistan.”

[…]

“The real challenge of using machines for transactive memory lies in the inscrutability of their mechanics. Transactive memory works best when you have a sense of how your partners’ minds work — where they’re strong, where they’re weak, where their biases lie. I can judge that for people close to me. But it’s harder with digital tools, particularly search engines. You can certainly learn how they work and develop a mental model of Google’s biases. … But search companies are for-profit firms. They guard their algorithms like crown jewels. This makes them different from previous forms of outboard memory. A public library keeps no intentional secrets about its mechanisms; a search engine keeps many. On top of this inscrutability, it’s hard to know what to trust in a world of self-publishing. To rely on networked digital knowledge, you need to look with skeptical eyes. It’s a skill that should be taught with the same urgency we devote to teaching math and writing.”

Thompson’s most important point, however, has to do with how outsourcing our knowledge to digital tools actually hampers the very process of creative thought, which relies on our ability to connect existing ideas from our mental pool of resources into new combinations, or what the French polymath Henri Poincaré famously termed “sudden illuminations.” Without a mental catalog of materials to mull over and let incubate in our fringe consciousness, our capacity for such illuminations is greatly deflated. Thompson writes:

“These eureka moments are familiar to all of us; they’re why we take a shower or go for a walk when we’re stuck on a problem. But this technique works only if we’ve actually got a lot of knowledge about the problem stored in our brains through long study and focus. … You can’t come to a moment of creative insight if you haven’t got any mental fuel. You can’t be googling the info; it’s got to be inside you.”

[…]

“Evidence suggests that when it comes to knowledge we’re interested in — anything that truly excites us and has meaning — we don’t turn off our memory. Certainly, we outsource when the details are dull, as we now do with phone numbers. These are inherently meaningless strings of information, which offer little purchase on the mind. … It makes sense that our transactive brains would hand this stuff off to machines. But when information engages us — when we really care about a subject — the evidence suggests we don’t turn off our memory at all.”

[…]

“In an ideal world, we’d all fit the Renaissance model — we’d be curious about everything, filled with diverse knowledge and thus absorbing all current events and culture like sponges. But this battle is age-old, because it’s ultimately not just technological. It’s cultural and moral and spiritual; “getting young people to care about the hard stuff” is a struggle that goes back centuries and requires constant societal arguments and work. It’s not that our media and technological environment don’t matter, of course. But the vintage of this problem indicates that the solution isn’t merely in the media environment either.”

[…]

“A tool’s most transformative uses generally take us by surprise.”

[…]

“How should you respond when you get powerful new tools for finding answers?

Think of harder questions.”


Secrets of the Stacks — Medium


Choosing books for a library like mine in New York is a full-time job. The head of acquisitions at the Society Library, Steven McGuirl, reads Publishers Weekly, Library Journal, The Times Literary Supplement, The New Yorker, The New York Review of Books, the London Review of Books, The London Times, and The New York Times to decide which fiction should be ordered. Fiction accounts for fully a quarter of the forty-eight hundred books the library acquires each year. There are standing orders for certain novelists—Martin Amis, Zadie Smith, Toni Morrison, for example. Some popular writers merit standing orders for more than one copy.

But first novels and collections of stories present a problem. McGuirl and his two assistants try to guess what the members of the library will want to read. Of course, they respond to members’ requests. If a book is requested by three people, the staff orders it. There’s also a committee of members that meets monthly to recommend books for purchase. The committee checks on the librarians’ lists and suggests titles they’ve missed. The whole enterprise balances enthusiasm and skepticism.

Boosted by reviews, prizes, large sales, word of mouth, or personal recommendations, a novel may make its way onto the library shelf, but even then it is not guaranteed a chance of being read by future generations. Libraries are constantly getting rid of books they have acquired. They have to, or they would run out of space. The polite word for this is “deaccession,” the usual word, “weeding.” I asked a friend who works for a small public library how they choose books to get rid of. Is there a formula? Who makes the decision, a person or a committee? She told me that there was a formula based on the recommendations of the industry-standard CREW manual.

CREW stands for Continuous Review Evaluation and Weeding, and the manual uses “crew” as a transitive verb, so one can talk about a library’s “crewing” its collection. It means weeding but doesn’t sound so harsh. At the heart of the CREW method is a formula consisting of three factors—the number of years since the last copyright, the number of years since the book was last checked out, and a collection of six negative factors given the acronym MUSTIE, to help decide if a book has outlived its usefulness. M. Is it Misleading or inaccurate? Is its information, as so quickly happens with medical and legal texts or travel books, for example, outdated? U. Is it Ugly? Worn beyond repair? S. Has it been Superseded by a new edition or a better account of the subject? T. Is it Trivial, of no discernible literary or scientific merit? I. Is it Irrelevant to the needs and interests of the community the library serves? E. Can it be found Elsewhere, through interlibrary loan or on the Web?

Obviously, not all the MUSTIE factors are relevant in evaluating fiction, notably Misleading and Superseded. Nor is the copyright date important. For nonfiction, the CREW formula might be 8/3/MUSTIE, which would mean “Consider a book for elimination if it is eight years since the copyright date and three years since it has been checked out and if one or more of the MUSTIE factors obtains.” But for fiction the formula is often X/2/MUSTIE, meaning the copyright date doesn’t matter, but consider a book for elimination if it hasn’t been checked out in two years and if it is TUIE—Trivial, Ugly, Irrelevant, or Elsewhere.
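To make the mechanics concrete, here is a minimal sketch in Python of how such a formula might be applied to a catalog record. The class, field names, and example titles are illustrative assumptions, not taken from the CREW manual; only the formula shapes (“8/3/MUSTIE”, “X/2/MUSTIE”) and the meaning of the three parts come from the passage above.

```python
from dataclasses import dataclass, field

# Hypothetical catalog record; field names are illustrative, not from the CREW manual.
@dataclass
class Book:
    title: str
    copyright_year: int
    last_checkout_year: int
    mustie_flags: set = field(default_factory=set)  # subset of {"M", "U", "S", "T", "I", "E"}

def consider_for_weeding(book: Book, formula: str, current_year: int) -> bool:
    """Apply a CREW-style formula such as "8/3/MUSTIE" or "X/2/MUSTIE".

    The first figure is the maximum age since copyright ("X" means ignore that factor),
    the second is the number of years since the last checkout, and the book must also
    carry at least one MUSTIE flag.
    """
    copyright_limit, checkout_limit, _ = formula.split("/")

    old_enough = (
        copyright_limit.upper() == "X"
        or current_year - book.copyright_year >= int(copyright_limit)
    )
    unread_long_enough = current_year - book.last_checkout_year >= int(checkout_limit)
    flagged = bool(book.mustie_flags)  # for fiction, typically only T, U, I, E apply

    return old_enough and unread_long_enough and flagged

# A dated travel guide under 8/3/MUSTIE and a little-read novel under X/2/MUSTIE.
travel_guide = Book("Budget Europe", 1995, 2009, {"M", "S"})
quiet_novel = Book("A Quiet First Novel", 2001, 2011, {"T"})
print(consider_for_weeding(travel_guide, "8/3/MUSTIE", current_year=2014))  # True
print(consider_for_weeding(quiet_novel, "X/2/MUSTIE", current_year=2014))   # True
```

In practice a library system would supply these figures from its circulation records; the point is simply that the formula is mechanical right up until the MUSTIE judgments, which remain human calls about ugliness, triviality, and relevance.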

[…]

People who feel strongly about retaining books in libraries have a simple way to combat the removal of treasured volumes. Since every system of elimination is based, no matter what they say, on circulation counts, the number of years that have elapsed since a book was last checked out, or the number of times it has been checked out overall, if you feel strongly about a book, you should go to every library you have access to and check out the volume you care about. Take it home awhile. Read it or don’t. Keep it beside you as you read the same book on a Kindle, Nook, or iPad. Let it breathe the air of your home, and then take it back to the library, knowing you have fought the guerrilla war for physical books.

[…]

So many factors affect a novel’s chances of surviving, to say nothing of its becoming one of the immortal works we call a classic: how a book is initially reviewed, whether it sells, whether people continue to read it, whether it is taught in schools, whether it is included in college curricula, what literary critics say about it later, how it responds to various political currents as time moves on.

[…]

De Rerum Natura, lost for fifteen hundred years, was found and its merit recognized. But how many other works of antiquity were not found? How many works from past centuries never got published or, published, were never read?

If you want to see how slippery a judgment is “literary merit” and how unlikely quality is to be recognized at first glance, nothing is more fun—or more comforting to writers—than to read rejection letters or terrible reviews of books that have gone on to prove indispensable to the culture. This, for example, is how the New York Times reviewer greeted Lolita: “Lolita . . . is undeniably news in the world of books. Unfortunately, it is bad news. There are two equally serious reasons why it isn’t worth any adult reader’s attention. The first is that it is dull, dull, dull in a pretentious, florid and archly fatuous fashion. The second is that it is repulsive.”

Negative reviews are fun to write and fun to read, but the world doesn’t need them, since the average work of literary fiction is, in Laura Miller’s words, “invisible to the average reader.” It appears and vanishes from the scene largely unnoticed and unremarked.

[…]

Whether reviews are positive or negative, the attention they bring to a book is rarely sufficient, and it is becoming harder and harder for a novel to lift itself from obscurity. In the succinct and elegant words of James Gleick, “The merchandise of the information economy is not information; it is attention. These commodities have an inverse relationship. When information is cheap, attention becomes expensive.” These days, besides writing, novelists must help draw attention to what they write, tweeting, friending, blogging, and generating meta tags—unacknowledged legislators of the world to Shelley, but now more like unpaid publicists.

On the Web, everyone can be a reviewer, and a consensus about a book can be established covering a range of readers potentially as different as Laura Miller’s cousins and the members of the French Academy. In this changed environment, professional reviewers may become obsolete, replaced by crowd wisdom. More than two centuries ago, Samuel Johnson invented the idea of crowd wisdom as applied to literature, calling it “the common reader.” “I rejoice to concur with the common reader; for by the common sense of readers, uncorrupted by literary prejudices, after all the refinements of subtilty and the dogmatism of learning, must be finally decided all claim to poetical honours.” Virginia Woolf agreed and titled her wonderful collection of essays on literature The Common Reader.

[…]

The Common Reader, however, is not one person. It is a statistical average, the mean between this reader’s one star for One God Clapping and twenty other readers’ enthusiasm for this book, the autobiography of a “Zen rabbi,” producing a four-star rating. What the rating says to me is that if I were the kind of person who wanted to read the autobiography of a Zen rabbi, I’d be very likely to enjoy it. That Amazon reviewers are a self-selected group needs underlining. If you are like Laura Miller’s cousins who have never heard of Jonathan Franzen, you will be unlikely to read Freedom, and even less likely to review it. If you read everything that John Grisham has ever written, you will probably read his latest novel and might even report on it. If you read Lolita, it’s either because you’ve heard it’s one of the great novels of the twentieth century or because you’ve heard it’s a dirty book. Whatever brings you to it, you are likely to enjoy it. Four and a half stars.

The idea of the wisdom of crowds, popularized by James Surowiecki, dates to 1906, when the English statistician Francis Galton (Darwin’s cousin) focused on a contest at a county fair for guessing the weight of an ox. For sixpence, a person could buy a ticket, fill in his name, and guess the weight of the animal after butchering. The person whose guess was closest to the actual weight of the ox won a prize. Galton, having the kind of mind he did, played around with the numbers he gathered from this contest and discovered that the average of all the guesses was only one pound off from the actual weight of the ox, 1,198 pounds. If you’re looking for the Common Reader’s response to a novel, you can’t take any one review as truth but merely as a passionate assertion of one point of view, one person’s guess at the weight of the ox.
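As a toy illustration of the arithmetic at work, here is a short Python sketch. The individual guesses are simulated, not Galton’s actual data; only the 1,198-pound figure comes from the account above. The point is that the average of many noisy guesses can land far closer to the truth than a typical single guess does.

```python
import random

TRUE_WEIGHT = 1198  # pounds, the figure reported above
random.seed(0)

# Simulated fairgoers: each individual guess is noisy.
guesses = [random.gauss(TRUE_WEIGHT, 75) for _ in range(800)]

crowd_estimate = sum(guesses) / len(guesses)
average_individual_error = sum(abs(g - TRUE_WEIGHT) for g in guesses) / len(guesses)

print(f"crowd estimate: {crowd_estimate:.0f} lb "
      f"(off by {abs(crowd_estimate - TRUE_WEIGHT):.1f} lb)")
print(f"average individual error: {average_individual_error:.0f} lb")
```

Any one Amazon review, like any one ticket at the fair, is a single guess at the weight of the ox; the star rating is the crowd’s average.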

“I really enjoy reading this novel it makes you think about a sex offender’s mind. I’m also happy that I purchased this novel on Amazon because I was able to find it easily with a suitable price for me.”

“Vladimir has a way with words. The prose in this book is simply remarkable.”

“Overrated and pretentious. Overly flowery language encapsulating an uninteresting and overdone plot. Older man and pre-adolescent hypersexual woman—please let’s not exaggerate the originality of that concept, it has existed for millennia now. In fact, you’ll find similar stories in every chapter of the Bible.”

“Like many other folk I read Lolita when it first came out. I was a normally-sexed man and I found it excitingly erotic. Now, nearing 80, I still felt the erotic thrill but was more open to the beauty of Nabokov’s prose.”

“Presenting the story from Humbert’s self-serving viewpoint was Nabokov’s peculiarly brilliant means by which a straight, non-perverted reader is taken to secret places she/he might otherwise dare not go.”

“A man who was ‘hip’ while maintaining a bemused detachment from trendiness, what would he have made of shopping malls? Political correctness? Cable television? Alternative music? The Internet? . . . Or some of this decade’s greatest scandals, near-Nabokovian events in themselves, like Joey Buttafuoco, Lorena Bobbitt, O. J. Simpson, Bill and Monica? Wherever he is (Heaven, Hell, Nirvana, Anti-Terra), I would like to thank Nabokov for providing us with a compelling and unique model of how to read, write, and perceive life.”

What would the hip, bemused author of Lolita have made of Amazon ratings? I like to think that he would have reveled in them as evidence of the cheerful self-assurance, the lunatic democracy of his adopted culture.

“Once a populist gimmick, the reviews are vital to make sure a new product is not lost in the digital wilderness,” the Times reports.

Amazon’s own gatekeepers have removed thousands of reviews from its site in an attempt to curb what has become widespread manipulation of its ratings. They eliminated some reviews by family members and people considered too biased to be entitled to an opinion, competing writers, for example. They did not, however, eliminate reviews by people who admit they have not read the book. “We do not require people to have experienced the product in order to review,” said an Amazon spokesman.