Tag Archives: library

Reimagining 448 Local Libraries in Moscow, One Space at a Time | ArchDaily

SVESMI, an unassuming studio based in central Rotterdam, is at the center of a dauntingly complex project that may eventually see the renovation of 448 dilapidated and disused branch libraries in Moscow. Architects Anastassia Smirnova and Alexander Sverdlov divide their time between Rotterdam, home to their design studio, and Moscow, where, alongside architects Maria Kataryan and Pavel Rueda, they oversee the project at large. Faced with the challenge of reimagining over 450 public ‘living rooms’ spread across the Russian capital, a task demanding unusually high levels of spatial articulation and social understanding, the Open Library project is also unwinding the hidden narrative of Moscow’s local libraries.

The project began in 2012 with an idea formulated between the part-Dutch, part-Russian practice SVESMI, urban designer Paola Viganò, and Boris Kupriyanov (of Falanster), a Muscovite bibliophile described as an ‘island of literary independence’. Sverdlov and Kupriyanov took the lead, assisted by a group of thirty-five multidisciplinary minds, in producing a provocative research document which boldly called for the restoration of Moscow’s vast network of small-scale libraries. This field research was followed by the thesis of Giovanni Bellotti and Ruaro, under the supervision of Paola Viganò and Alexander Sverdlov, at the Università IUAV di Venezia. The foremost goal of this research as a whole was to explore what libraries were, are and should be, in order to prove that a dose of fresh ambition could shock the system into rapid reform.

Bellotti and Ruaro’s Moscow Library Atlas analysed a proportion of the city’s libraries in fantastic detail. The publication exposed the complex individual relationships between these public nodes and the wider urban context, bringing the characteristics of certain library types to light. Interestingly, the number of libraries per capita in Moscow rivals that of other European cultural capitals, yet, prior to the inception of this project, they were unpopular and disproportionately underpopulated public places. The vast majority of them remain dense with unfulfilled potential and, according to SVESMI, “do not play any significant role in the shaping of the city’s cultural landscape.” Armed with a research document demonstrating, among other things, that Moscow spends €43 per visitor per year compared with Amsterdam’s €4.50 per visitor per year, the team had a degree of leverage to convince Sergei Kapkov, Moscow’s Culture Minister, to help set the project in motion.

[…]

As with most spaces that appear aesthetically ‘simple’, the social, strategic and spatial complexity behind these projects is enormous. Conversation with SVESMI’s Alexander Sverdlov uncovered interesting observations about the design of the libraries. Rather than describing them as introverted spaces, they are, for Sverdlov, “spaces of elevated neutrality.” “People can be engaged with themselves whilst also being observant of the city around them, just by being beautifully disconnected.” Neutrality – “a political project in itself” – is a difficult state to attain and then maintain. “To not be colored left or right, but to just be there in a state of silence and concentration, gives independence.” In this sense, the designers saw the windows as “completely crucial”, not only for those looking into the libraries but also for those readers looking out towards the street from the comfort of a beautiful, calm, well-lit space.

[…]

With such a vast collection of small spaces across Moscow ready for renovation, the practice is now prioritizing the creation of a set of guidelines which clearly explains, for example, the correct layout of furniture (designed in-house because of the incredibly short construction period). In such didactic designs the arrangement of space matters at every scale. The tables in Library #127, for example, are positioned in a way that engages library dwellers with one another, facilitating social encounters within public space.

“Tip-of-the-Tongue Syndrome,” Transactive Memory, and How the Internet Is Making Us Smarter | Brain Pickings

Vannevar Bush’s ‘memex’ — short for ‘memory index’ — a primitive vision for a personal hard drive for information storage and management.

“At their best, today’s digital tools help us see more, retain more, communicate more. At their worst, they leave us prey to the manipulation of the toolmakers. But on balance, I’d argue, what is happening is deeply positive. This book is about the transformation.”

[…]

One of his most fascinating and important points has to do with our outsourcing of memory — or, more specifically, our increasingly deft, search-engine-powered skills of replacing the retention of knowledge in our own brains with the on-demand access to knowledge in the collective brain of the internet. Think, for instance, of those moments when you’re trying to recall the name of a movie but only remember certain fragmentary features — the name of the lead actor, the gist of the plot, a song from the soundtrack. Thompson calls this “tip-of-the-tongue syndrome” and points out that, today, you’ll likely be able to reverse-engineer the name of the movie you don’t remember by plugging into Google what you do remember about it.

[…]

“Tip-of-the-tongue syndrome is an experience so common that cultures worldwide have a phrase for it. Cheyenne Indians call it navonotootse’a, which means “I have lost it on my tongue”; in Korean it’s hyeu kkedu-te mam-dol-da, which has an even more gorgeous translation: “sparkling at the end of my tongue.” The phenomenon generally lasts only a minute or so; your brain eventually makes the connection. But … when faced with a tip-of-the-tongue moment, many of us have begun to rely instead on the Internet to locate information on the fly. If lifelogging … stores “episodic,” or personal, memories, Internet search engines do the same for a different sort of memory: “semantic” memory, or factual knowledge about the world. When you visit Paris and have a wonderful time drinking champagne at a café, your personal experience is an episodic memory. Your ability to remember that Paris is a city and that champagne is an alcoholic beverage — that’s semantic memory.”

[…]

“Writing — the original technology for externalizing information — emerged around five thousand years ago, when Mesopotamian merchants began tallying their wares using etchings on clay tablets. It emerged first as an economic tool. As with photography and the telephone and the computer, newfangled technologies for communication nearly always emerge in the world of commerce. The notion of using them for everyday, personal expression seems wasteful, risible, or debased. Then slowly it becomes merely lavish, what “wealthy people” do; then teenagers take over and the technology becomes common to the point of banality.”

Thompson reminds us of the anecdote, by now itself familiar “to the point of banality,” about Socrates and his admonition that the “technology” of writing would devastate the Greek tradition of debate and dialectic, and would render people incapable of committing anything to memory because “knowledge stored was not really knowledge at all.” He cites Socrates’s parable of the Egyptian god Theuth and how he invented writing, offering it as a gift to the king of Egypt:

“This discovery of yours will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.”

That resistance endured as technology changed shape, across the Middle Ages and past Gutenberg’s revolution, but it wasn’t without counter-resistance: Those who recorded their knowledge in writing and, eventually, collected it in the form of books argued that it expanded the scope of their curiosity and the ideas they were able to ponder, whereas the mere act of rote memorization made no guarantees of deeper understanding.

Ultimately, however, Thompson points out that Socrates was both right and wrong: It’s true that, with some deliberately cultivated exceptions and neurological outliers, few thinkers today rely on pure memorization and can recite extensive passages of text from memory. But what Socrates failed to see was the extraordinary dot-connecting enabled by access to knowledge beyond what our own heads can hold — because, as Amanda Palmer poignantly put it, “we can only connect the dots that we collect,” and the outsourcing of memory has exponentially enlarged our dot-collections.

With this in mind, Thompson offers a blueprint to this newly developed system of knowledge management in which access is critical:

“If you are going to read widely but often read books only once; if you are going to tackle the ever-expanding universe of ideas by skimming and glancing as well as reading deeply; then you are going to rely on the semantic-memory version of gisting. By which I mean, you’ll absorb the gist of what you read but rarely retain the specifics. Later, if you want to mull over a detail, you have to be able to refind a book, a passage, a quote, an article, a concept.”

This, he argues, is also how and why libraries were born — the death of the purely oral world and the proliferation of print after Gutenberg placed new demands on organizing and storing human knowledge. And yet storage and organization soon proved to be radically different things:

“The Gutenberg book explosion certainly increased the number of books that libraries acquired, but librarians had no agreed-upon ways to organize them. It was left to the idiosyncrasies of each. A core job of the librarian was thus simply to find the book each patron requested, since nobody else knew where the heck the books were. This created a bottleneck in access to books, one that grew insufferable in the nineteenth century as citizens began swarming into public venues like the British Library. “Complaints about the delays in the delivery of books to readers increased,” as Matthew Battles writes in Library: An Unquiet History, “as did comments about the brusqueness of the staff.” Some patrons were so annoyed by the glacial pace of access that they simply stole books; one was even sentenced to twelve months in prison for the crime. You can understand their frustration. The slow speed was not just a physical nuisance, but a cognitive one.”

The solution came in the late 19th century by way of Melvil Dewey, whose decimal system imposed order by creating a taxonomy of book placement, eventually rendering librarians unnecessary — at least in their role as literal book-retrievers. They became, instead, curiosity sherpas who helped patrons decide what to read and carry out comprehensive research. In many ways, they came to resemble the editors and curators who help us navigate the internet today, framing for us what is worth attending to and why.

[…]

“The history of factual memory has been fairly predictable up until now. With each innovation, we’ve outsourced more information, then worked to make searching more efficient. Yet somehow, the Internet age feels different. Quickly pulling up [the answer to a specific esoteric question] on Google seems different from looking up a bit of trivia in an encyclopedia. It’s less like consulting a book than like asking someone a question, consulting a supersmart friend who lurks within our phones.”

And therein lies the magic of the internet — that unprecedented access to humanity’s collective brain. Thompson cites the work of Harvard psychologist Daniel Wegner, who first began exploring this notion of collective rather than individual knowledge in the 1980s by observing how partners in long-term relationships often divide and conquer memory tasks in sharing the household’s administrative duties:

“Wegner suspected this division of labor takes place because we have pretty good “metamemory.” We’re aware of our mental strengths and limits, and we’re good at intuiting the abilities of others. Hang around a workmate or a romantic partner long enough and you begin to realize that while you’re terrible at remembering your corporate meeting schedule, or current affairs in Europe, or how big a kilometer is relative to a mile, they’re great at it. So you begin to subconsciously delegate the task of remembering that stuff to them, treating them like a notepad or encyclopedia. In many respects, Wegner noted, people are superior to these devices, because what we lose in accuracy we make up in speed.

[…]

Wegner called this phenomenon “transactive” memory: two heads are better than one. We share the work of remembering, Wegner argued, because it makes us collectively smarter — expanding our ability to understand the world around us.”

[…]

This very outsourcing of memory requires that we learn what the machine knows — a kind of meta-knowledge that enables us to retrieve the information when we need it. And, reflecting on Sparrow’s findings, Thompson points out that this is neither new nor negative:

“We’ve been using transactive memory for millennia with other humans. In everyday life, we are only rarely isolated, and for good reason. For many thinking tasks, we’re dumber and less cognitively nimble if we’re not around other people. Not only has transactive memory not hurt us, it’s allowed us to perform at higher levels, accomplishing acts of reasoning that are impossible for us alone.”

[…]

Outsourcing our memory to machines rather than to other humans, in fact, offers certain advantages by pulling us into a seemingly infinite rabbit hole of indiscriminate discovery:

“In some ways, machines make for better transactive memory buddies than humans. They know more, but they’re not awkward about pushing it in our faces. When you search the Web, you get your answer — but you also get much more. Consider this: If I’m trying to remember what part of Pakistan has experienced many U.S. drone strikes and I ask a colleague who follows foreign affairs, he’ll tell me “Waziristan.” But when I queried this once on the Internet, I got the Wikipedia page on “Drone attacks in Pakistan.” A chart caught my eye showing the astonishing increase of drone attacks (from 1 a year to 122 a year); then I glanced down to read a précis of studies on how Waziristan residents feel about being bombed. (One report suggested they weren’t as opposed as I’d expected, because many hated the Taliban, too.) Obviously, I was procrastinating. But I was also learning more, reinforcing my schematic understanding of Pakistan.”

[…]

“The real challenge of using machines for transactive memory lies in the inscrutability of their mechanics. Transactive memory works best when you have a sense of how your partners’ minds work — where they’re strong, where they’re weak, where their biases lie. I can judge that for people close to me. But it’s harder with digital tools, particularly search engines. You can certainly learn how they work and develop a mental model of Google’s biases. … But search companies are for-profit firms. They guard their algorithms like crown jewels. This makes them different from previous forms of outboard memory. A public library keeps no intentional secrets about its mechanisms; a search engine keeps many. On top of this inscrutability, it’s hard to know what to trust in a world of self-publishing. To rely on networked digital knowledge, you need to look with skeptical eyes. It’s a skill that should be taught with the same urgency we devote to teaching math and writing.”

Thompson’s most important point, however, has to do with how outsourcing our knowledge to digital tools can hamper the very process of creative thought, which relies on our ability to connect existing ideas from our mental pool of resources into new combinations, or what the French polymath Henri Poincaré famously termed “sudden illuminations.” Without a mental catalog of materials to mull over and let incubate in our fringe consciousness, our capacity for such illuminations is greatly deflated. Thompson writes:

“These eureka moments are familiar to all of us; they’re why we take a shower or go for a walk when we’re stuck on a problem. But this technique works only if we’ve actually got a lot of knowledge about the problem stored in our brains through long study and focus. … You can’t come to a moment of creative insight if you haven’t got any mental fuel. You can’t be googling the info; it’s got to be inside you.”

[…]

“Evidence suggests that when it comes to knowledge we’re interested in — anything that truly excites us and has meaning — we don’t turn off our memory. Certainly, we outsource when the details are dull, as we now do with phone numbers. These are inherently meaningless strings of information, which offer little purchase on the mind. … It makes sense that our transactive brains would hand this stuff off to machines. But when information engages us — when we really care about a subject — the evidence suggests we don’t turn off our memory at all.”

[…]

“In an ideal world, we’d all fit the Renaissance model — we’d be curious about everything, filled with diverse knowledge and thus absorbing all current events and culture like sponges. But this battle is age-old, because it’s ultimately not just technological. It’s cultural and moral and spiritual; “getting young people to care about the hard stuff” is a struggle that goes back centuries and requires constant societal arguments and work. It’s not that our media and technological environment don’t matter, of course. But the vintage of this problem indicates that the solution isn’t merely in the media environment either.”

[…]

“A tool’s most transformative uses generally take us by surprise.”

[…]

“How should you respond when you get powerful new tools for finding answers?

Think of harder questions.”

Secrets of the Stacks — Medium

Choosing books for a library like mine in New York is a full-time job. The head of acquisitions at the Society Library, Steven McGuirl, reads Publishers Weekly, Library Journal, The Times Literary Supplement, The New Yorker, The New York Review of Books, the London Review of Books, The London Times, and The New York Times to decide which fiction should be ordered. Fiction accounts for fully a quarter of the forty-eight hundred books the library acquires each year. There are standing orders for certain novelists—Martin Amis, Zadie Smith, Toni Morrison, for example. Some popular writers merit standing orders for more than one copy.

But first novels and collections of stories present a problem. McGuirl and his two assistants try to guess what the members of the library will want to read. Of course, they respond to members’ requests. If a book is requested by three people, the staff orders it. There’s also a committee of members that meets monthly to recommend books for purchase. The committee checks on the librarians’ lists and suggests titles they’ve missed. The whole enterprise balances enthusiasm and skepticism.

Boosted by reviews, prizes, large sales, word of mouth, or personal recommendations, a novel may make its way onto the library shelf, but even then it is not guaranteed a chance of being read by future generations. Libraries are constantly getting rid of books they have acquired. They have to, or they would run out of space. The polite word for this is “deaccession,” the usual word, “weeding.” I asked a friend who works for a small public library how they choose books to get rid of. Is there a formula? Who makes the decision, a person or a committee? She told me that there was a formula based on the recommendations of the industry-standard CREW manual.

CREW stands for Continuous Review, Evaluation, and Weeding, and the manual uses “crew” as a transitive verb, so one can talk about a library’s “crewing” its collection. It means weeding but doesn’t sound so harsh. At the heart of the CREW method is a formula consisting of three factors—the number of years since the last copyright, the number of years since the book was last checked out, and a collection of six negative factors given the acronym MUSTIE, to help decide if a book has outlived its usefulness. M. Is it Misleading or inaccurate? Is its information, as so quickly happens with medical and legal texts or travel books, for example, outdated? U. Is it Ugly? Worn beyond repair? S. Has it been Superseded by a new edition or a better account of the subject? T. Is it Trivial, of no discernible literary or scientific merit? I. Is it Irrelevant to the needs and interests of the community the library serves? E. Can it be found Elsewhere, through interlibrary loan or on the Web?

Obviously, not all the MUSTIE factors are relevant in evaluating fiction, notably Misleading and Superseded. Nor is the copyright date important. For nonfiction, the CREW formula might be 8/3/MUSTIE, which would mean “Consider a book for elimination if it is eight years since the copyright date and three years since it has been checked out and if one or more of the MUSTIE factors obtains.” But for fiction the formula is often X/2/MUSTIE, meaning the copyright date doesn’t matter, but consider a book for elimination if it hasn’t been checked out in two years and if it is TUIE—Trivial, Ugly, Irrelevant, or Elsewhere.

[…]

People who feel strongly about retaining books in libraries have a simple way to combat the removal of treasured volumes. Every system of elimination is based, no matter what its designers say, on circulation counts: the number of years that have elapsed since a book was last checked out, or the number of times it has been checked out overall. So if you feel strongly about a book, go to every library you have access to and check out the volume you care about. Take it home awhile. Read it or don’t. Keep it beside you as you read the same book on a Kindle, Nook, or iPad. Let it breathe the air of your home, and then take it back to the library, knowing you have fought the guerrilla war for physical books.

[…]

So many factors affect a novel’s chances of surviving, to say nothing of its becoming one of the immortal works we call a classic: how a book is initially reviewed, whether it sells, whether people continue to read it, whether it is taught in schools, whether it is included in college curricula, what literary critics say about it later, how it responds to various political currents as time moves on.

[…]

De Rerum Natura, lost for fifteen hundred years, was found and its merit recognized. But how many other works of antiquity were not found? How many works from past centuries never got published or, published, were never read?

If you want to see how slippery a judgment is “literary merit” and how unlikely quality is to be recognized at first glance, nothing is more fun—or more comforting to writers—than to read rejection letters or terrible reviews of books that have gone on to prove indispensable to the culture. This, for example, is how the New York Times reviewer greeted Lolita: “Lolita . . . is undeniably news in the world of books. Unfortunately, it is bad news. There are two equally serious reasons why it isn’t worth any adult reader’s attention. The first is that it is dull, dull, dull in a pretentious, florid and archly fatuous fashion. The second is that it is repulsive.”

Negative reviews are fun to write and fun to read, but the world doesn’t need them, since the average work of literary fiction is, in Laura Miller’s words, “invisible to the average reader.” It appears and vanishes from the scene largely unnoticed and unremarked.

[…]

Whether reviews are positive or negative, the attention they bring to a book is rarely sufficient, and it is becoming harder and harder for a novel to lift itself from obscurity. In the succinct and elegant words of James Gleick, “The merchandise of the information economy is not information; it is attention. These commodities have an inverse relationship. When information is cheap, attention becomes expensive.” These days, besides writing, novelists must help draw attention to what they write, tweeting, friending, blogging, and generating meta tags—unacknowledged legislators to Shelley, but now more like unpaid publicists.

On the Web, everyone can be a reviewer, and a consensus about a book can be established covering a range of readers potentially as different as Laura Miller’s cousins and the members of the French Academy. In this changed environment, professional reviewers may become obsolete, replaced by crowd wisdom. More than two centuries ago, Samuel Johnson invented the idea of crowd wisdom as applied to literature, calling it “the common reader.” “I rejoice to concur with the common reader; for by the common sense of readers, uncorrupted by literary prejudices, after all the refinements of subtilty and the dogmatism of learning, must be finally decided all claim to poetical honours.” Virginia Woolf agreed and titled her wonderful collection of essays on literature The Common Reader.

[…]

The Common Reader, however, is not one person. It is a statistical average, the mean between this reader’s one star for One God Clapping and twenty other readers’ enthusiasm for this book, the autobiography of a “Zen rabbi,” producing a four-star rating. What the rating says to me is that if I were the kind of person who wanted to read the autobiography of a Zen rabbi, I’d be very likely to enjoy it. That Amazon reviewers are a self-selected group needs underlining. If you are like Laura Miller’s cousins who have never heard of Jonathan Franzen, you will be unlikely to read Freedom, and even less likely to review it. If you read everything that John Grisham has ever written, you will probably read his latest novel and might even report on it. If you read Lolita, it’s either because you’ve heard it’s one of the great novels of the twentieth century or because you’ve heard it’s a dirty book. Whatever brings you to it, you are likely to enjoy it. Four and a half stars.

The idea of the wisdom of crowds, popularized by James Surowiecki, dates to 1906, when the English statistician Francis Galton (Darwin’s cousin) focused on a contest at a county fair for guessing the weight of an ox. For sixpence, a person could buy a ticket, fill in his name, and guess the weight of the animal after butchering. The person whose guess was closest to the actual weight of the ox won a prize. Galton, having the kind of mind he did, played around with the numbers he gathered from this contest and discovered that the average of all the guesses was only one pound off from the actual weight of the ox, 1,198 pounds. If you’re looking for the Common Reader’s response to a novel, you can’t take any one review as truth but merely as a passionate assertion of one point of view, one person’s guess at the weight of the ox.

“I really enjoy reading this novel it makes you think about a sex offender’s mind. I’m also happy that I purchased this novel on Amazon because I was able to find it easily with a suitable price for me.”

“Vladimir has a way with words. The prose in this book is simply remarkable.”

“Overrated and pretentious. Overly flowery language encapsulating an uninteresting and overdone plot. Older man and pre-adolescent hypersexual woman—please let’s not exaggerate the originality of that concept, it has existed for millennia now. In fact, you’ll find similar stories in every chapter of the Bible.”

“Like many other folk I read Lolita when it first came out. I was a normally-sexed man and I found it excitingly erotic. Now, nearing 80, I still felt the erotic thrill but was more open to the beauty of Nabokov’s prose.”

“Presenting the story from Humbert’s self-serving viewpoint was Nabokov’s peculiarly brilliant means by which a straight, non-perverted reader is taken to secret places she/he might otherwise dare not go.”

“A man who was ‘hip’ while maintaining a bemused detachment from trendiness, what would he have made of shopping malls? Political correctness? Cable television? Alternative music? The Internet? . . . Or some of this decade’s greatest scandals, near-Nabokovian events in themselves, like Joey Buttafuoco, Lorena Bobbitt, O. J. Simpson, Bill and Monica? Wherever he is (Heaven, Hell, Nirvana, Anti-Terra), I would like to thank Nabokov for providing us with a compelling and unique model of how to read, write, and perceive life.”

What would the hip, bemused author of Lolita have made of Amazon ratings? I like to think that he would have reveled in them as evidence of the cheerful self-assurance, the lunatic democracy of his adopted culture.

“Once a populist gimmick, the reviews are vital to make sure a new product is not lost in the digital wilderness,” the Times reports.

Amazon’s own gatekeepers have removed thousands of reviews from its site in an attempt to curb what has become widespread manipulation of its ratings. They eliminated some reviews by family members and by people considered too biased to be entitled to an opinion (competing writers, for example). They did not, however, eliminate reviews by people who admit they have not read the book. “We do not require people to have experienced the product in order to review,” said an Amazon spokesman.

A World Digital Library Is Coming True! by Robert Darnton | The New York Review of Books

In the scramble to gain market share in cyberspace, something is getting lost: the public interest. Libraries and laboratories—crucial nodes of the World Wide Web—are buckling under economic pressure, and the information they diffuse is being diverted away from the public sphere, where it can do most good.

Not that information comes free or “wants to be free,” as Internet enthusiasts proclaimed twenty years ago. [1] It comes filtered through expensive technologies and financed by powerful corporations. No one can ignore the economic realities that underlie the new information age, but who would argue that we have reached the right balance between commercialization and democratization?

Consider the cost of scientific periodicals, most of which are published exclusively online. It has increased at four times the rate of inflation since 1986. The average price of a year’s subscription to a chemistry journal is now $4,044. In 1970 it was $33. A subscription to the Journal of Comparative Neurology cost $30,860 in 2012—the equivalent of six hundred monographs. Three giant publishers—Reed Elsevier, Wiley-Blackwell, and Springer—publish 42 percent of all academic articles, and they make giant profits from them. In 2013 Elsevier turned a 39 percent profit on an income of £2.1 billion from its science, technical, and medical journals.

All over the country research libraries are canceling subscriptions to academic journals, because they are caught between decreasing budgets and increasing costs. The logic of the bottom line is inescapable, but there is a higher logic that deserves consideration—namely, that the public should have access to knowledge produced with public funds.

[…]

The struggle over academic journals should not be dismissed as an “academic question,” because a great deal is at stake. Access to research drives large sectors of the economy—the freer and quicker the access, the more powerful its effect. The Human Genome Project cost $3.8 billion in federal funds to develop, and thanks to the free accessibility of the results, it has already produced $796 billion in commercial applications. Linux, the free, open-source software system, has brought in billions in revenue for many companies, including Google.

[…]

According to a study completed in 2006 by John Houghton, a specialist in the economics of information, a 5 percent increase in the accessibility of research would have produced an increase in productivity worth $16 billion.

[…]

Yet accessibility may decrease, because the price of journals has escalated so disastrously that libraries—and also hospitals, small-scale laboratories, and data-driven enterprises—are canceling subscriptions. Publishers respond by charging still more to institutions with budgets strong enough to carry the additional weight.

[…]

In the long run, journals can be sustained only through a transformation of the economic basis of academic publishing. The current system developed as a component of the professionalization of academic disciplines in the nineteenth century. It served the public interest well through most of the twentieth century, but it has become dysfunctional in the age of the Internet.

[…]

The entire system of communicating research could be made less expensive and more beneficial for the public by a process known as “flipping.” Instead of subsisting on subscriptions, a flipped journal covers its costs by charging processing fees before publication and making its articles freely available, as “open access,” afterward. That will sound strange to many academic authors. Why, they may ask, should we pay to get published? But they may not understand the dysfunctions of the present system, in which they furnish the research, writing, and refereeing free of charge to the subscription journals and then buy back the product of their work—not personally, of course, but through their libraries—at an exorbitant price. The public pays twice—first as taxpayers who subsidize the research, then as taxpayers or tuition payers who support public or private university libraries.

By creating open-access journals, a flipped system directly benefits the public. Anyone can consult the research free of charge online, and libraries are liberated from the spiraling costs of subscriptions. Of course, the publication expenses do not evaporate miraculously, but they are greatly reduced, especially for nonprofit journals, which do not need to satisfy shareholders. The processing fees, which can run to a thousand dollars or more, depending on the complexities of the text and the process of peer review, can be covered in various ways. They are often included in research grants to scientists, and they are increasingly financed by the author’s university or a group of universities.

[…]

The main impediment to public-spirited publishing of this kind is not financial. It involves prestige. Scientists prefer to publish in expensive journals like Nature, Science, and Cell, because the aura attached to them glows on CVs and promotes careers. But some prominent scientists have undercut the prestige effect by founding open-access journals and recruiting the best talent to write and referee for them. Harold Varmus, a Nobel laureate in physiology and medicine, has made a huge success of Public Library of Science, and Paul Crutzen, a Nobel laureate in chemistry, has done the same with Atmospheric Chemistry and Physics. They have proven the feasibility of high-quality, open-access journals. Not only do they cover costs through processing fees, but they produce a profit—or rather, a “surplus,” which they invest in further open-access projects.

[…]

DASH now includes 17,000 articles, and it has registered three million downloads from countries on every continent. Repositories at other universities also report very high download counts. They make knowledge available to a broad public, including researchers who have no connection to an academic institution; and at the same time, they make it possible for writers to reach far more readers than would be possible by means of subscription journals.

The desire to reach readers may be one of the most underestimated forces in the world of knowledge. Aside from journal articles, academics produce a large number of books, yet they rarely make much money from them. Authors in general derive little income from a book a year or two after its publication. Once its commercial life has ended, it dies a slow death, lying unread, except on rare occasions, on the shelves of libraries, inaccessible to the vast majority of readers. At that stage, authors generally have one dominant desire — for their work to circulate freely through the public; and their interest coincides with the goals of the open-access movement.

[…]

All sorts of complexities remain to be worked out before such a plan can succeed: How to accommodate the interests of publishers, who want to keep books on their backlists? Where to leave room for rights holders to opt out and for the revival of books that take on new economic life? Whether to devise some form of royalties, as in the extended collective licensing programs that have proven to be successful in the Scandinavian countries? It should be possible to enlist vested interests in a solution that will serve the public interest, not by appealing to altruism but rather by rethinking business plans in ways that will make the most of modern technology.

Several experimental enterprises illustrate possibilities of this kind. Knowledge Unlatched gathers commitments and collects funds from libraries that agree to purchase scholarly books at rates that will guarantee payment of a fixed amount to the publishers who are taking part in the program. The more libraries participating in the pool, the lower the price each will have to pay. While electronic editions of the books will be available everywhere free of charge through Knowledge Unlatched, the subscribing libraries will have the exclusive right to download and print out copies.

[…]

OpenEdition Books, located in Marseille, operates on a somewhat similar principle. It provides a platform for publishers who want to develop open-access online collections, and it sells the e-content to subscribers in formats that can be downloaded and printed. Operating from Cambridge, England, Open Book Publishers also charges for PDFs, which can be used with print-on-demand technology to produce physical books, and it applies the income to subsidies for free copies online. It recruits academic authors who are willing to provide manuscripts without payment in order to reach the largest possible audience and to further the cause of open access.

The famous quip of Samuel Johnson, “No man but a blockhead ever wrote, except for money,” no longer has the force of a self-evident truth in the age of the Internet. By tapping the goodwill of unpaid authors, Open Book Publishers has produced forty-one books in the humanities and social sciences, all rigorously peer-reviewed, since its foundation in 2008. “We envisage a world in which all research is freely available to all readers,” it proclaims on its website.

[…]

Google set out to digitize millions of books in research libraries and then proposed to sell subscriptions to the resulting database. Having provided the books to Google free of charge, the libraries would then have to buy back access to them, in digital form, at a price to be determined by Google, one that could escalate as disastrously as the prices of scholarly journals.

Google Book Search actually began as a search service, which made available only snippets or short passages of books. But because many of the books were covered by copyright, Google was sued by the rights holders; and after lengthy negotiations the plaintiffs and Google agreed on a settlement, which transformed the search service into a gigantic commercial library financed by subscriptions. But the settlement had to be approved by a court, and on March 22, 2011, the federal district court for the Southern District of New York rejected it on the grounds that, among other things, it threatened to constitute a monopoly in restraint of trade. That decision put an end to Google’s project and cleared the way for the DPLA to offer digitized holdings—but nothing covered by copyright—to readers everywhere, free of charge.

Aside from its not-for-profit character, the DPLA differs from Google Book Search in a crucial respect: it is not a vertical organization erected on a database of its own. It is a distributed, horizontal system, which links digital collections already in the possession of the participating institutions, and it does so by means of a technological infrastructure that makes them instantly available to the user with one click on an electronic device. It is fundamentally horizontal, both in organization and in spirit.

Instead of working from the top down, the DPLA relies on “service hubs,” or small administrative centers, to promote local collections and aggregate them at the state level. “Content hubs” located in institutions with collections of at least 250,000 items—for example, the New York Public Library, the Smithsonian Institution, and the collective digital repository known as HathiTrust—provide the bulk of the DPLA’s holdings. There are now two dozen service and content hubs, and soon, if financing can be found, they will exist in every state of the union.

Such horizontality reinforces the democratizing impulse behind the DPLA. Although it is a small, nonprofit corporation with headquarters and a minimal staff in Boston, the DPLA functions as a network that covers the entire country. It relies heavily on volunteers. More than a thousand computer scientists collaborated free of charge in the design of its infrastructure, which aggregates metadata (catalog-type descriptions of documents) in a way that allows easy searching.

Therefore, for example, a ninth-grader in Dallas who is preparing a report on an episode of the American Revolution can download a manuscript from New York, a pamphlet from Chicago, and a map from San Francisco in order to study them side by side. Unfortunately, he or she will not be able to consult any recent books, because copyright laws keep virtually everything published after 1923 out of the public domain. But the courts, which are considering a flurry of cases about the “fair use” of copyright, may sustain a broad-enough interpretation for the DPLA to make a great deal of post-1923 material available for educational purposes.

A small army of volunteer “Community Reps,” mainly librarians with technical skills, is fanning out across the country to promote various outreach programs sponsored by the DPLA. They reinforce the work of the service hubs, which concentrate on public libraries as centers of collection-building. A grant from the Bill and Melinda Gates Foundation is financing a Public Library Partnerships Project to train local librarians in the latest digital technologies. Equipped with new skills, the librarians will invite people to bring in material of their own—family letters, high school yearbooks, postcard collections stored in trunks and attics—to be digitized, curated, preserved, and made accessible online by the DPLA. While developing local community consciousness about culture and history, this project will also help integrate local collections in the national network.

[…]

In these and other ways, the DPLA will go beyond its basic mission of making the cultural heritage of America available to all Americans. It will provide opportunities for them to interact with the material and to develop materials of their own. It will empower librarians and reinforce public libraries everywhere, not only in the United States. Its technological infrastructure has been designed to be interoperable with that of Europeana, a similar enterprise that is aggregating the holdings of libraries in the twenty-eight member states of the European Union. The DPLA’s collections include works in more than four hundred languages, and nearly 30 percent of its users come from outside the US. Ten years from now, the DPLA’s first year of activity may look like the beginning of an international library system.

It would be naive, however, to imagine a future free from the vested interests that have blocked the flow of information in the past. The lobbies at work in Washington also operate in Brussels, and a newly elected European Parliament will soon have to deal with the same issues that remain to be resolved in the US Congress. Commercialization and democratization operate on a global scale, and a great deal of access must be opened before the World Wide Web can accommodate a worldwide library.

Why Libraries Should Be the Next Great Start-Up Incubators – CityLab

One of the world’s first and most famous libraries, in Alexandria, Egypt, was frequently home some 2,000 years ago to the self-starters and self-employed of that era. “When you look back in history, they had philosophers and mathematicians and all sorts of folks who would get together and solve the problems of their time,” says Tracy Lea, the venture manager with Arizona State University’s economic development and community engagement arm. “We kind of look at it as the first template for the university. They had lecture halls, gathering spaces. They had co-working spaces.”

This old idea of the public library as co-working space now offers a modern answer – one among many – for how these aging institutions could become more relevant two millennia after the original Alexandria library burned to the ground. Would-be entrepreneurs everywhere are looking for business know-how and physical space to incubate their start-ups. Libraries meanwhile may be associated today with an outmoded product in paper books. But they also happen to have just about everything a 21st century innovator could need: Internet access, work space, reference materials, professional guidance.

[…]

Libraries also provide a perfect venue to expand the concept of start-up accelerators beyond the renovated warehouses and stylish offices of “innovation districts.” They offer a more familiar entry-point for potential entrepreneurs less likely to walk into a traditional start-up incubator (or an ASU office, for that matter). Public libraries long ago democratized access to knowledge; now they could do the same in a start-up economy.

“We refer to it as democratizing entrepreneurship,” Lea says, “so everyone really can be involved.”

Library as Infrastructure: Places: Design Observer

Melvil Dewey was a one-man Silicon Valley born a century before Steve Jobs. He was the quintessential Industrial Age entrepreneur, but unlike the Carnegies and Rockefellers, with their industries of heavy materiality and heavy labor, Dewey sold ideas. His ambition revealed itself early: in 1876, shortly after graduating from Amherst College, he copyrighted his library classification scheme. That same year, he helped found the American Library Association, served as founding editor of Library Journal, and launched the American Metric Bureau, which campaigned for adoption of the metric system. He was 24 years old. He had already established the Library Bureau, a company that sold (and helped standardize) library supplies, furniture, media display and storage devices, and equipment for managing the circulation of collection materials. Its catalog (which would later include another Dewey invention, the hanging vertical file) represented the library as a “machine” of uplift and enlightenment that enabled proto-Taylorist approaches to public education and the provision of social services. As chief librarian at Columbia College, Dewey established the first library school — called, notably, the School of Library Economy — whose first class was 85% female; then he brought the school to Albany, where he directed the New York State Library. In his spare time, he founded the Lake Placid Club and helped win the bid for the 1932 Winter Olympics.

Dewey was thus simultaneously in the furniture business, the office-supply business, the consulting business, the publishing business, the education business, the human resources business, and what we might today call the “knowledge solutions” business. Not only did he recognize the potential for monetizing and cross-promoting his work across these fields; he also saw that each field would be the better for it. His career (which was not without its significant controversies) embodied a belief that classification systems and labeling standards and furniture designs and people work best when they work towards the same end — in other words, that intellectual and material systems and labor practices are mutually constructed and mutually reinforcing.

Today’s libraries, Apple-era versions of the Dewey/Carnegie institution, continue to materialize, at multiple scales, their underlying bureaucratic and epistemic structures — from the design of their web interfaces to the architecture of their buildings to the networking of their technical infrastructures. This has been true of knowledge institutions throughout history, and it will be true of our future institutions, too. I propose that thinking about the library as a network of integrated, mutually reinforcing, evolving infrastructures — in particular, architectural, technological, social, epistemological and ethical infrastructures — can help us better identify what roles we want our libraries to serve, and what we can reasonably expect of them. What ideas, values and social responsibilities can we scaffold within the library’s material systems — its walls and wires, shelves and servers?

Library as Platform

For millennia libraries have acquired resources, organized them, preserved them and made them accessible (or not) to patrons. But the forms of those resources have changed — from scrolls and codices; to LPs and LaserDiscs; to e-books, electronic databases and open data sets. Libraries have had at least to comprehend, if not become a key node within, evolving systems of media production and distribution. Consider the medieval scriptoria where manuscripts were produced; the evolution of the publishing industry and book trade after Gutenberg; the rise of information technology and its webs of wires, protocols and regulations. [1] At every stage, the contexts — spatial, political, economic, cultural — in which libraries function have shifted; so they are continuously reinventing themselves and the means by which they provide those vital information services.

Libraries have also assumed a host of ever-changing social and symbolic functions. They have been expected to symbolize the eminence of a ruler or state, to integrally link “knowledge” and “power” — and, more recently, to serve as “community centers,” “public squares” or “think tanks.” Even those seemingly modern metaphors have deep histories. The ancient Library of Alexandria was a prototypical think tank, [2] and the early Carnegie buildings of the 1880s were community centers with swimming pools and public baths, bowling alleys, billiard rooms, even rifle ranges, as well as book stacks. [3] As the Carnegie funding program expanded internationally — to more than 2,500 libraries worldwide — secretary James Bertram standardized the design in his 1911 pamphlet “Notes on the Erection of Library Buildings,” which offered grantees a choice of six models, believed to be the work of architect Edward Tilton. Notably, they all included a lecture room.

In short, the library has always been a place where informational and social infrastructures intersect within a physical infrastructure that (ideally) supports that program.

Now we are seeing the rise of a new metaphor: the library as “platform” — a buzzy word that refers to a base upon which developers create new applications, technologies and processes. In an influential 2012 article in Library Journal, David Weinberger proposed that we think of libraries as “open platforms” — not only for the creation of software, but also for the development of knowledge and community. [4] Weinberger argued that libraries should open up their entire collections, all their metadata, and any technologies they’ve created, and allow anyone to build new products and services on top of that foundation. The platform model, he wrote, “focuses our attention away from the provisioning of resources to the foment” — the “messy, rich networks of people and ideas” — that “those resources engender.” Thus the ancient Library of Alexandria, part of a larger museum with botanical gardens, laboratories, living quarters and dining halls, was a platform not only for the translation and copying of myriad texts and the compilation of a magnificent collection, but also for the launch of works by Euclid, Archimedes, Eratosthenes and their peers.

Yet the platform metaphor has limitations. For one thing, it smacks of Silicon Valley entrepreneurial epistemology, which prioritizes “monetizable” “knowledge solutions.” Further, its association with new media tends to bracket out the similarly generative capacities of low-tech, and even non-technical, library resources. One key misperception of those who proclaim the library’s obsolescence is that its function as a knowledge institution can be reduced to its technical services and information offerings. Knowledge is never solely a product of technology and the information it delivers.

Another problem with the platform model is the image it evokes: a flat, two-dimensional stage on which resources are laid out for users to do stuff with. The platform doesn’t have any implied depth, so we’re not inclined to look underneath or behind it, or to question its structure. Weinberger encourages us to “think of the library not as a portal we go through on occasion but as infrastructure that is as ubiquitous and persistent as the streets and sidewalks of a town.” It’s like a “canopy,” he says — or like a “cloud.” But these metaphors are more poetic than critical; they obfuscate all the wires, pulleys, lights and scaffolding that you inevitably find underneath and above that stage — and the casting, staging and direction that determine what happens on the stage, and that allow it to function as a stage. Libraries are infrastructures not only because they are ubiquitous and persistent, but also, and primarily, because they are made of interconnected networks that undergird all that foment, that create what Pierre Bourdieu would call “structuring structures” that support Weinberger’s “messy, rich networks of people and ideas.”

It can be instructive for our libraries’ publics — and critical for our libraries’ leaders — to assess those structuring structures. In this age of e-books, smartphones, firewalls, proprietary media platforms and digital rights management; of atrophying mega-bookstores and resurgent independent bookshops and a metastasizing Amazon; of Google Books and Google Search and Google Glass; of economic disparity and the continuing privatization of public space and services — which is simultaneously an age of democratized media production and vibrant DIY and activist cultures — libraries play a critical role as mediators, at the hub of all the hubbub. Thus we need to understand how our libraries function as, and as part of, infrastructural ecologies — as sites where spatial, technological, intellectual and social infrastructures shape and inform one another. And we must consider how those infrastructures can embody the epistemological, political, economic and cultural values that we want to define our communities.

Library as Social Infrastructure

Public libraries are often seen as “opportunity institutions,” opening doors to, and for, the disenfranchised. [6] People turn to libraries to access the internet, take a GED class, get help with a resumé or job search, and seek referrals to other community resources. A recent report by the Center for an Urban Future highlighted the benefits to immigrants, seniors, individuals searching for work, public school students and aspiring entrepreneurs: “No other institution, public or private, does a better job of reaching people who have been left behind in today’s economy, have failed to reach their potential in the city’s public school system or who simply need help navigating an increasingly complex world.” [7]

[…]

Partly because of their skill in reaching populations that others miss, libraries have recently reported record circulation and visitation, despite severe budget cuts, decreased hours and the threatened closure or sale of “underperforming” branches.

[…]

Libraries also bring communities together in times of calamity or disaster. Toyo Ito, architect of the acclaimed Sendai Mediatheque, recalled that after the 2011 earthquake in Japan, local officials reopened the library quickly even though it had sustained minor damage, “because it functions as a kind of cultural refuge in the city.” He continued, “Most people who use the building are not going there just to read a book or watch a film; many of them probably do not have any definite purpose at all. They go just to be part of the community in the building.” [10]

We need to attend more closely to such “social infrastructures,” the “facilities and conditions that allow connection between people,” says sociologist Eric Klinenberg. In a recent interview, he argued that urban resilience can be measured not only by the condition of transit systems and basic utilities and communication networks, but also by the condition of parks, libraries and community organizations: “open, accessible, and welcoming public places where residents can congregate and provide social support during times of need but also every day.” [11] In his book Heat Wave, Klinenberg noted that a vital public culture in Chicago neighborhoods drew people out of sweltering apartments during the 1995 heat wave, and into cooler public spaces, thus saving lives.

The need for physical spaces that promote a vibrant social infrastructure presents many design opportunities, and some libraries are devising innovative solutions. The Brooklyn Public Library and other cultural institutions have partnered with the Uni, a modular, portable library that I wrote about earlier in this journal. And modular solutions — kits of parts — are under consideration in a design study sponsored by the Center for an Urban Future and the Architectural League of New York, which aims to reimagine New York City’s library branches so that they can more efficiently and effectively serve their communities. CUF also plans to publish, at the end of June, an audit of, and a proposal for, New York’s three library systems. [12] New York Times architecture critic Michael Kimmelman, reflecting on the roles played by New York libraries during recent hurricanes, goes so far as to suggest that the city’s branch libraries, which have “become our de facto community centers,” “could be designed in the future with electrical systems out of harm’s way and set up with backup generators and solar panels, even kitchens and wireless mesh networks.”

[…]

I’ve recently returned from Seattle, where I revisited OMA’s Central Library on its 10th anniversary and toured several new branch libraries. [15] Under the 1998 bond measure “Libraries for All,” citizens voted to tax themselves to support construction of the Central Library and four new branches, and to upgrade every branch in the system. The vibrant, sweeping Ballard branch (2005), by Bohlin Cywinski Jackson, includes a separate entrance for the Ballard Neighborhood Service Center, a “little city hall” where residents can find information about public services, get pet licenses, pay utility bills, and apply for passports and city jobs. While the librarians undoubtedly field questions about such services, they’re also able to refer patrons next door, where city employees are better equipped to meet their needs — thus affording the library staff more time to answer reference questions and host writing groups and children’s story hours.

[…]

These entrepreneurial models reflect what seems to be an increasingly widespread sentiment: that while libraries continue to serve a vital role as “opportunity institutions” for the disenfranchised, this cannot be their primary self-justification. They cannot duplicate the responsibilities of our community centers and social service agencies. “Their narrative” — or what I’d call an “epistemic framing,” by which I mean the way the library packages its program as a knowledge institution, and the infrastructures that support it — “must include everyone,” says the University of Michigan’s Kristin Fontichiaro. [19] What programs and services are consistent with an institution dedicated to lifelong learning? Should libraries be reconceived as hubs for civic engagement, where communities can discuss local issues, create media, and archive community history? [20] Should they incorporate media production studios, maker-spaces and hacker labs, repositioning themselves in an evolving ecology of information and educational infrastructures?

These new social functions — which may require new physical infrastructures to support them — broaden the library’s narrative to include everyone, not only the “have-nots.” This is not to say that the library should abandon the needy and focus on an elite patron group; rather, the library should incorporate the “enfranchised” as a key public, both so that the institution can reinforce its mission as a social infrastructure for an inclusive public, and so that privileged, educated users can bring their knowledge and talents to the library and offer them up as social-infrastructural resources.

Many among this well-resourced population — those who have jobs and home internet access and can navigate the government bureaucracy with relative ease — already see themselves as part of the library’s public. They regard the library as a space of openness, egalitarianism and freedom (in multiple senses of the term), within a proprietary, commercial, segregated and surveilled landscape. They understand that no matter how well-connected they are, they actually don’t have the world at their fingertips — that “material protected by stringent copyright and held in proprietary databases is often inaccessible outside libraries” and that, “as digital rights management becomes ever more complicated, we … rely even more on our libraries to help us navigate an increasingly fractured and litigious digital terrain.” [21] And they recognize that they cannot depend on Google to organize the world’s information. As the librarian noted in that discussion on Metafilter:

The [American Library Association] has a proven history of commitment to intellectual freedom. The public service that we’ve been replaced with has a spotty history of “not being evil.” When we’re gone, you middle class, you wealthy, you tech-savvy, who will fight for that with no profit motivation? Even if you never step foot in our doors, and all of your media comes to a brightly lit screen, we’re still working for you.

The library’s social infrastructure thus benefits even those who don’t have an immediate need for its space or its services.

Finally, we must acknowledge the library’s role as a civic landmark — a symbol of what a community values highly enough to place on a prominent site, to materialize in dignified architecture that communicates its openness to everyone, and to support with sufficient public funding despite the fact that it’ll never make a profit. A well-designed library — a contextually designed library — can reflect a community’s character back to itself, clarifying who it is, in all its multiplicity, and what it stands for. [22] David Adjaye’s Bellevue and Francis Gregory branch libraries, in historically underserved neighborhoods of Washington, D.C., have been lauded for performing precisely this function. As Sarah Williams Goldhagen writes:

Adjaye is so attuned to the nuances of urban context that one might be hard pressed to identify [the two libraries] as the work of one designer. Francis Gregory is steel and glass, Bellevue is concrete and wood. Francis Gregory presents a single monolithic volume, Bellevue an irregular accretion of concrete pavilions. Context drives the aesthetic.

His designs “make of this humble municipal building an arena for social interaction, …a distinctive civic icon that helps build a sense of common identity.” This kind of social infrastructure serves a vital need for an entire community.

Library as Technological-Intellectual Infrastructure
Of course, we must not forget the library collection itself. The old-fashioned bookstack was at the center of the recent debate over the proposed renovation of the New York Public Library’s Schwarzman Building on 42nd Street, a plan that was cancelled last month after more than a year of lawsuits and protests. This storage infrastructure, and the delivery system it accommodates, have tremendous significance even in a digital age. For scholars, the stacks represent near-instant access to any materials within the extensive collection. Architectural historians defended the historical significance of the stacks, and engineers argued that they are critical to the structural integrity of the building.

The way a library’s collection is stored and made accessible shapes the intellectual infrastructure of the institution. The Seattle Public Library uses translucent acrylic bookcases made by Spacesaver — and even here this seemingly mundane, utilitarian consideration cultivates a character, an ambience, that reflects the library’s identity and its intellectual values. It might sound corny, but the luminescent glow permeating the stacks acts as a beacon, a welcoming gesture. There are still many contemporary libraries that privilege — perhaps even fetishize — the book and the bookstack: take MVRDV’s Book Mountain (2012), for a town in the Netherlands; or TAX arquitectura’s Biblioteca Jose Vasconcelos (2006) in Mexico City.

Stacks occupy a different, though also fetishized, space in Helmut Jahn’s Mansueto Library (2011) at the University of Chicago, which mixes diverse infrastructures to accommodate media of varying materialities: a grand reading room, a conservation department, a digitization department, and a subterranean warehouse of books retrieved by robot. (It’s worth noting that Boston and other libraries contained book railways and conveyor-belt retrieval systems — proto-robots — a century ago.) Snøhetta’s James B. Hunt Jr. Library (2013) at North Carolina State University also incorporates a robotic storage and retrieval system, so that the library can store more books on site, as well as meet its goal of providing seating for 20 percent of the student population. [23] Here the patrons come before the collection.

Back in the early aughts, when I spent a summer touring libraries, the institutions on the leading edge were integrating media production facilities, recognizing that media “consumption” and “creation” lie on a gradient of knowledge production. Today there’s a lot of talk about — and action around — integrating hacker labs and maker-spaces. [24] As Anne Balsamo explains, these sites offer opportunities — embodied, often inter-generational learning experiences that are integral to the development of a “technological imagination” — that are rarely offered in formal learning institutions. [25]

The Hunt Library has a maker-space, a GameLab, various other production labs and studios, an immersion theater, and, rather eyebrow-raisingly, an Apple Technology Showcase (named after library donors whose surname is Apple, with an intentional pun on the electronics company). [26] One might think major funding is needed for those kinds of programs, but the trend actually began in 2011 in tiny Fayetteville, New York (pop. 4,373), whose public library is thought to be the first to have incorporated a maker-space. The following year, the Carnegie Library of Pittsburgh — which for years has hosted film competitions, gaming tournaments, and media-making projects for youth — launched, with Google and Heinz Foundation support, The Labs: weekly workshops at three locations where teenagers can access equipment, software and mentors. Around the same time, Chattanooga — a city blessed with a super-high-speed municipal fiber network — opened its lauded 4th Floor, a 12,000-square-foot “public laboratory and educational facility” that “supports the production, connection, and sharing of knowledge by offering access to tools and instruction.” Those tools include 3D printers, laser cutters and vinyl cutters, and the instruction includes everything from tech classes, to incubator projects for female tech entrepreneurs, to business pitch competitions.

Last year, the Brooklyn Public Library, just a couple blocks from where I live, opened its Levy Info Commons, which includes space for laptop users and lots of desktop machines featuring creative software suites; seven reservable teleconference-ready meeting rooms, including one that doubles as a recording studio; and a training lab, which offers an array of digital media workshops led by a local arts and design organization and also invites patrons to lead their own courses. A typical month on their robust event calendar includes résumé-editing workshops, a Creative Business Tech prototyping workshop, individual meetings with business counselors, Teen Tech tutorials, computer classes for seniors, workshops on podcasting and oral history and “adaptive gaming” for people with disabilities, and even an audio-recording and editing workshop targeted to poets, to help them disseminate their work in new formats. Also last year, the Martin Luther King, Jr., Memorial Library in Washington, D.C., opened its Digital Commons, where patrons can use a print-on-demand bookmaking machine, a 3D printer, and a co-working space known as the “Dream Lab,” or try out a variety of e-book readers. The Chicago Public Library partnered with the Museum of Science and Industry to open a pop-up maker lab featuring open-source design software, laser cutters, a milling machine, and (of course) 3D printers — not one, but three.

Some have proposed that libraries — following in the tradition of Alexandria’s “think tank,” and compelled by a desire to “democratize entrepreneurship” — make for ideal co-working or incubator spaces, where patrons with diverse skill sets can organize themselves into start-ups-for-the-people. [27] Others recommend that librarians entrepreneurialize themselves, rebranding themselves as professional consultants in a complex information economy. Librarians, in this view, are uniquely qualified digital literacy tutors; experts in “copyright compliance, licensing, privacy, information use, and ethics”; gurus of “aligning … programs with collections, space, and resources”; skilled creators of “custom ontologies, vocabularies, taxonomies” and structured data; adept practitioners of data mining. [28] Others recommend that libraries get into the content production business. In the face of increasing pressure to rent and license proprietary digital content with stringent use policies, why don’t libraries do more to promote the creation of independent media or develop their own free, open-source technologies? Not many libraries have the time and resources to undertake such endeavors, but NYPL Labs and Harvard’s Library Test Kitchen have demonstrated what’s possible when even back-of-house library spaces become sites of technological praxis. Unfortunately, those innovative projects are typically hidden behind the interface (as with so much library labor). Why not bring those operations to the front of the building, as part of the public program?
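To give a flavor of the “structured data” work described above, here is a minimal sketch in Python. The schema.org vocabulary and its Book type are real and public, but every title, name and identifier below is invented for illustration; this is one plausible shape such a record might take, not any particular library’s system.

import json

# A hypothetical catalog record expressed as JSON-LD using the public
# schema.org vocabulary; the item described is invented for illustration.
record = {
    "@context": "https://schema.org",
    "@type": "Book",
    "name": "A History of the Branch Library",   # hypothetical title
    "author": {"@type": "Person", "name": "A. Librarian"},
    "inLanguage": "en",
    "isbn": "978-0-00-000000-0",                 # placeholder ISBN
    "publisher": {"@type": "Organization", "name": "Example Press"},
}

# Print the record as it might be embedded in a catalog page, where
# search engines and civic applications can parse it.
print(json.dumps(record, indent=2))

Published this way, a record is legible not only to the library’s own catalog but to any crawler or community application that speaks the same open vocabulary — one modest version of the free, interoperable infrastructure the paragraph above calls for.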

Of course, with all these new activities come new spatial requirements. Library buildings must incorporate a wide variety of furniture arrangements, lighting designs, acoustical conditions, etc., to accommodate multiple sensory registers, modes of working, postures and more. Librarians and designers are now acknowledging — and designing for, rather than designing out — activities that make noise and can occasionally be a bit messy. I did a study several years ago on the evolution of library sounds and found widespread recognition that knowledge-making doesn’t readily happen when “shhh!” is the prevailing rule.

These new physical infrastructures create space for an epistemology embracing the integration of knowledge consumption and production, of thinking and making. Yet sometimes I have to wonder, given all the hoopla over “making”: are tools of computational fabrication really the holy grail of the knowledge economy? What knowledge is produced when I churn out, say, a keychain on a MakerBot? I worry that the boosterism surrounding such projects — and the much-deserved acclaim they’ve received for “rebranding” the library — glosses over the neoliberal values that these technologies sometimes embody. Neoliberalism channels the pursuit of individual freedom through property rights and free markets [30] — and what better way to express yourself than by 3D-printing a bust of your own head at the library, or using the library’s CNC router to launch your customizable cutting board business on Etsy? While librarians have long been advocates of free and democratic access to information, I trust — I hope — that they’re helping their patrons to cultivate a critical perspective regarding the politics of “technological innovation” — and the potential instrumentalism of makerhood. Sure, Dewey was part of this instrumentalist tradition, too. But our contemporary pursuit of “innovation” promotes the idea that “making new stuff” = “producing knowledge,” which can be a dangerous falsehood.

Library staff might want to take up the critique of “innovation,” too. Each new Google product release, new mobile technology development, new e-reader launch brings new opportunities for the library to innovate in response. And while “keeping current” is a crucial goal, it’s important to place that pursuit in a larger cultural, political-economic and institutional context. Striving to stay technologically relevant can backfire when it means merely responding to the profit-driven innovations of commercial media; we see these mistakes — innovation for innovation’s sake — in the ed-tech arena quite often.

[…]

As Zadie Smith argued beautifully in the New York Review of Books, we risk losing the library’s role as a “different kind of social reality (of the three dimensional kind), which by its very existence teaches a system of values beyond the fiscal.” [31] Barbara Fister, a librarian at Gustavus Adolphus College, offered an equally eloquent plea for the library as a space of exception:

Libraries are not, or at least should not be, engines of productivity. If anything, they should slow people down and seduce them with the unexpected, the irrelevant, the odd and the unexplainable. Productivity is a destructive way to justify the individual’s value in a system that is naturally communal, not an individualistic or entrepreneurial zero-sum game to be won by the most industrious. [32]

Libraries, she argued, “will always be at a disadvantage” to Google and Amazon because they value privacy; they refuse to exploit users’ private data to improve the search experience. Yet libraries’ failure to compete in efficiency is what affords them the opportunity to offer a “different kind of social reality.” I’d venture that there is room for entrepreneurial learning in the library, but there also has to be room for that alternate reality where knowledge needn’t have monetary value, where learning isn’t driven by a profit motive. We can accommodate both spaces for entrepreneurship and spaces of exception, provided the institution has a strong epistemic framing that encompasses both. This means that the library needs to know how to read itself as a social-technical-intellectual infrastructure.

It’s particularly important to cultivate these critical capacities — the ability to “read” our libraries’ multiple infrastructures and the politics and ethics they embody — when the concrete infrastructures look like San Antonio’s BiblioTech, a “bookless” library featuring 10,000 e-books, downloadable via the 3M Cloud App; 600 circulating “stripped down” 3M e-readers; 200 “enhanced” tablets for kids; and, for use on-site, 48 computers, plus laptops and iPads. The library, which opened last fall, also offers computer classes and meeting space, but it’s all locked within a proprietary platformed world.

In libraries like BiblioTech — and the Digital Public Library of America — the collection itself is off-site. Do patrons wonder where, exactly, all those books and periodicals and cloud-based materials live? What’s under, or floating above, the “platform”? Do they think about the algorithms that lead them to particular library materials, and the conduits and protocols through which they access them? Do they consider what it means to supplant bookstacks with server stacks — whose metal racks we can’t kick, lights we can’t adjust, knobs we can’t fiddle with? Do they think about the librarians negotiating access licenses and adding metadata to “digital assets,” or the engineers maintaining the servers? As these technical infrastructures, and the human labor that supports them, recede further off-site, behind the interface, deeper inside the black box, how can we understand the ways in which those structures structure our intellect and sociality?
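Those questions are not merely rhetorical. As a toy illustration in Python, assuming nothing about any vendor’s actual system, the sketch below ranks a tiny invented catalog with plain TF-IDF; whatever weighting a real discovery layer hides behind its search box decides, just as invisibly, which materials a patron ever encounters.

import math
from collections import Counter

# A tiny invented catalog: title -> indexed text.
catalog = {
    "Mundaneum: An Archive of the World": "archive world knowledge index",
    "Networks of Knowledge": "network knowledge infrastructure",
    "A History of the Card Catalog": "catalog card index history",
}

def tfidf_rank(query, docs):
    """Rank documents against a query with plain TF-IDF scoring."""
    n = len(docs)
    tokenized = {title: text.split() for title, text in docs.items()}
    # Document frequency: in how many documents does each term appear?
    df = Counter()
    for words in tokenized.values():
        for term in set(words):
            df[term] += 1
    ranked = []
    for title, words in tokenized.items():
        tf = Counter(words)
        score = sum(
            (tf[term] / len(words)) * math.log(n / df[term])
            for term in query.split()
            if df[term]
        )
        ranked.append((score, title))
    return sorted(ranked, reverse=True)

for score, title in tfidf_rank("knowledge index", catalog):
    print(round(score, 3), title)

Change a single weight and a different title rises to the top; in a proprietary system, neither patrons nor librarians can see, let alone adjust, that weighting.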

We need to develop — both among library patrons and librarians themselves — new critical capacities to understand the distributed physical, technical and social architectures that scaffold our institutions of knowledge and program our values. And we must consider where those infrastructures intersect — where they should be, and perhaps aren’t, mutually reinforcing. When do our social obligations compromise our intellectual aspirations, or vice versa? And when do those social or intellectual aspirations for the library exceed — or fail to fully exploit — the capacities of our architectural and technological infrastructures? Ultimately, we need to ensure that we have a strong epistemological framework — a narrative that explains how the library promotes learning and stewards knowledge — so that everything hangs together, so there’s some institutional coherence. We need to sync the library’s intersecting infrastructures so that they work together to support our shared intellectual and ethical goals.

 

The Birth of the Information Age: How Paul Otlet’s Vision for Cataloging and Connecting Humanity Shaped Our World | Brain Pickings

The Birth of the Information Age: How Paul Otlet’s Vision for Cataloging and Connecting Humanity Shaped Our World | Brain Pickings.

Decades before Alan Turing pioneered computer science and Vannevar Bush imagined the web, a visionary Belgian idealist named Paul Otlet (August 23, 1868–December 10, 1944) set out to organize the world’s information. For nearly half a century, he worked unrelentingly to index and catalog every significant piece of human thought ever published or recorded, building a massive Universal Bibliography of 15 million books, magazines, newspapers, photographs, posters, museum pieces, and other assorted media. His monumental collection was predicated not on ownership but on access and sharing — while amassing it, he kept devising increasingly ambitious schemes for enabling universal access, fostering peaceful relations between nations, and democratizing human knowledge through a global information network he called the “Mundaneum” — a concept partway between Voltaire’s Republic of Letters, Marshall McLuhan’s “global village,” and the übermind of the future. Otlet’s work would go on to inspire generations of information science pioneers, including the founding fathers of the modern internet and the world wide web.

[…]

Otlet tried to assemble a great catalog of the world’s published information, create an encyclopedic atlas of human knowledge, build a network of federated museums and other cultural institutions, and establish a World City that would serve as the headquarters for a new world government. For Otlet these were not disconnected activities but part of a larger vision of worldwide harmony. In his later years he started to describe the Mundaneum in transcendental terms, envisioning his global knowledge network as something akin to a universal consciousness and as a gateway to collective enlightenment.

[…]

What the Nazis saw as a “pile of rubbish,” Otlet saw as the foundation for a global network that, one day, would make knowledge freely available to people all over the world. In 1934, he described his vision for a system of networked computers — “electric telescopes,” he called them — that would allow people to search through millions of interlinked documents, images, and audio and video files. He imagined that individuals would have desktop workstations — each equipped with a viewing screen and multiple movable surfaces — connected to a central repository that would provide access to a wide range of resources on whatever topics might interest them. As the network spread, it would unite individuals and institutions of all stripes — from local bookstores and classrooms to universities and governments. The system would also feature so-called selection machines capable of pinpointing a particular passage or individual fact in a document stored on microfilm, retrieved via a mechanical indexing and retrieval tool. He dubbed the whole thing a réseau mondial: a “worldwide network” or, as the scholar Charles van den Heuvel puts it, an “analog World Wide Web.”

Twenty-five years before the first microchip, forty years before the first personal computer, and fifty years before the first Web browser, Paul Otlet had envisioned something very much like today’s Internet.

[…]

Everything in the universe, and everything of man, would be registered at a distance as it was produced. In this way a moving image of the world will be established, a true mirror of [its] memory. From a distance, everyone will be able to read text, enlarged and limited to the desired subject, projected on an individual screen. In this way, everyone from his armchair will be able to contemplate creation, in whole or in certain parts.

Otlet’s prescience, Wright notes, didn’t end there — he also envisioned speech recognition tools, wireless networks that would enable people to upload files to remote servers, social networks and virtual communities around individual pieces of media that would allow people to “participate, applaud, give ovations, sing in the chorus,” and even concepts we have yet to crack with present-day technology, such as transmitting sensory experiences like smell and taste.

[…]

By today’s standards, Otlet’s proto-Web was a clumsy affair, relying on a patchwork system of index cards, file cabinets, telegraph machines, and a small army of clerical workers. But in his writing he looked far ahead to a future in which networks circled the globe and data could travel freely. Moreover, he imagined a wide range of expression taking shape across the network: distributed encyclopedias, virtual classrooms, three-dimensional information spaces, social networks, and other forms of knowledge that anticipated the hyperlinked structure of today’s Web. He saw these developments as fundamentally connected to a larger utopian project that would bring the world closer to a state of permanent and lasting peace and toward a state of collective spiritual enlightenment.

[…]

The contemporary construct of “the user” that underlies so much software design figures nowhere in Otlet’s work. He saw the mission of the Mundaneum as benefiting humanity as a whole, rather than serving the whims of individuals. While he imagined personalized workstations (his Mondotheques), he never envisioned the network along the lines of a client-server “architecture” (a term that would not come into being for another two decades). Instead, each machine would act as a kind of “dumb” terminal, fetching and displaying material stored in a central location.

The counterculture programmers who paved the way for the Web believed they were participating in a process of personal liberation. Otlet saw it as a collective undertaking, one dedicated to a higher purpose than mere personal gratification. And while he might well have been flummoxed by the anything-goes ethos of present-day social networking sites like Facebook or Twitter, he also imagined a system that allowed groups of individuals to take part in collaborative experiences like lectures, opera performances, or scholarly meetings, where they might “applaud” or “give ovations.” It seems a short conceptual hop from here to Facebook’s ubiquitous “Like” button.

[…]

Would the Internet have turned out any differently had Paul Otlet’s vision come to fruition? Counterfactual history is a fool’s game, but it is perhaps worth considering a few possible lessons from the Mundaneum. First and foremost, Otlet acted not out of a desire to make money — something he never succeeded at doing — but out of sheer idealism. His was a quest for universal knowledge, world peace, and progress for humanity as a whole. The Mundaneum was to remain, as he said, “pure.” While many entrepreneurs vow to “change the world” in one way or another, the high-tech industry’s particular brand of utopianism almost always carries with it an underlying strain of free-market ideology: a preference for private enterprise over central planning and a distrust of large organizational structures. This faith in the power of “bottom-up” initiatives has long been a hallmark of Silicon Valley culture, and one that all but precludes the possibility of a large-scale knowledge network emanating from anywhere but the private sector.