Tag Archives: connectivity

Edward Snowden: The Untold Story | Threat Level | WIRED

Edward Snowden: The Untold Story | Threat Level | WIRED.

The message arrives on my “clean machine,” a MacBook Air loaded only with a sophisticated encryption package. “Change in plans,” my contact says. “Be in the lobby of the Hotel ______ by 1 pm. Bring a book and wait for ES to find you.”

[…]

He is a uniquely postmodern breed of whistle-blower. Physically, very few people have seen him since he disappeared into Moscow’s airport complex last June. But he has nevertheless maintained a presence on the world stage—not only as a man without a country but as a man without a body. When being interviewed at the South by Southwest conference or receiving humanitarian awards, his disembodied image smiles down from jumbotron screens. For an interview at the TED conference in March, he went a step further—a small screen bearing a live image of his face was placed on two leg-like poles attached vertically to remotely controlled wheels, giving him the ability to “walk” around the event, talk to people, and even pose for selfies with them. The spectacle suggests a sort of Big Brother in reverse: Orwell’s Winston Smith, the low-ranking party functionary, suddenly dominating telescreens throughout Oceania with messages promoting encryption and denouncing encroachments on privacy.

[…]

I read a recent Washington Post report. The story, by Greg Miller, recounts daily meetings with senior officials from the FBI, CIA, and State Department, all desperately trying to come up with ways to capture Snowden. One official told Miller: “We were hoping he was going to be stupid enough to get on some kind of airplane, and then have an ally say: ‘You’re in our airspace. Land.’ ” He wasn’t. And since he disappeared into Russia, the US seems to have lost all trace of him.

I do my best to avoid being followed as I head to the designated hotel for the interview, one that is a bit out of the way and attracts few Western visitors. I take a seat in the lobby facing the front door and open the book I was instructed to bring. Just past one, Snowden walks by, dressed in dark jeans and a brown sport coat and carrying a large black backpack over his right shoulder. He doesn’t see me until I stand up and walk beside him. “Where were you?” he asks. “I missed you.” I point to my seat. “And you were with the CIA?” I tease. He laughs.

[…]

He has been in Russia for more than a year now. He shops at a local grocery store where no one recognizes him, and he has picked up some of the language. He has learned to live modestly in an expensive city that is cleaner than New York and more sophisticated than Washington. In August, Snowden’s temporary asylum was set to expire. (On August 7, the government announced that he’d been granted a permit allowing him to stay three more years.)

[…]

Snowden is careful about what’s known in the intelligence world as operational security. As we sit down, he removes the battery from his cell phone. I left my iPhone back at my hotel. Snowden’s handlers repeatedly warned me that, even switched off, a cell phone can easily be turned into an NSA microphone. Knowledge of the agency’s tricks is one of the ways that Snowden has managed to stay free. Another is by avoiding areas frequented by Americans and other Westerners. Nevertheless, when he’s out in public at, say, a computer store, Russians occasionally recognize him. “Shh,” Snowden tells them, smiling, putting a finger to his lips.

[…]

Snowden still holds out hope that he will someday be allowed to return to the US. “I told the government I’d volunteer for prison, as long as it served the right purpose,” he says. “I care more about the country than what happens to me. But we can’t allow the law to become a political weapon or agree to scare people away from standing up for their rights, no matter how good the deal. I’m not going to be part of that.”

Meanwhile, Snowden will continue to haunt the US, the unpredictable impact of his actions resonating at home and around the world. The documents themselves, however, are out of his control. Snowden no longer has access to them; he says he didn’t bring them with him to Russia. Copies are now in the hands of three groups: First Look Media, set up by journalist Glenn Greenwald and American documentary filmmaker Laura Poitras, the two original recipients of the documents; The Guardian newspaper, which also received copies before the British government pressured it into transferring physical custody (but not ownership) to The New York Times; and Barton Gellman, a writer for The Washington Post. It’s highly unlikely that the current custodians will ever return the documents to the NSA.

That has left US officials in something like a state of impotent expectation, waiting for the next round of revelations, the next diplomatic upheaval, a fresh dose of humiliation. Snowden tells me it doesn’t have to be like this. He says that he actually intended the government to have a good idea about what exactly he stole. Before he made off with the documents, he tried to leave a trail of digital bread crumbs so investigators could determine which documents he copied and took and which he just “touched.” That way, he hoped, the agency would see that his motive was whistle-blowing and not spying for a foreign government. It would also give the government time to prepare for leaks in the future, allowing it to change code words, revise operational plans, and take other steps to mitigate damage. But he believes the NSA’s audit missed those clues and simply reported the total number of documents he touched—1.7 million. (Snowden says he actually took far fewer.) “I figured they would have a hard time,” he says. “I didn’t figure they would be completely incapable.”

[…]

Snowden speculates that the government fears that the documents contain material that’s deeply damaging—secrets the custodians have yet to find. “I think they think there’s a smoking gun in there that would be the death of them all politically,” Snowden says. “The fact that the government’s investigation failed—that they don’t know what was taken and that they keep throwing out these ridiculous huge numbers—implies to me that somewhere in their damage assessment they must have seen something that was like, ‘Holy shit.’ And they think it’s still out there.”

Yet it is very likely that no one knows precisely what is in the mammoth haul of documents—not the NSA, not the custodians, not even Snowden himself. He would not say exactly how he gathered them, but others in the intelligence community have speculated that he simply used a web crawler, a program that can search for and copy all documents containing particular keywords or combinations of keywords. This could account for many of the documents that simply list highly technical and nearly unintelligible signal parameters and other statistics.
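Purely as an illustration of the kind of keyword-driven crawler that this speculation describes, and not a description of any tool Snowden actually used, here is a minimal Python sketch. Every path, file extension, and keyword in it is a hypothetical placeholder.

```python
# Illustrative sketch only: a keyword-driven document crawler of the sort
# speculated about above. All paths and keywords are hypothetical placeholders.
import shutil
from pathlib import Path

KEYWORDS = {"example-codeword", "example-program"}   # invented search terms
SOURCE_ROOT = Path("/mnt/shared_docs")               # invented source share
DEST_ROOT = Path("/mnt/collected")                   # invented destination

def matches(text: str) -> bool:
    """Return True if any keyword appears in the document text."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in KEYWORDS)

def crawl() -> int:
    """Walk the source tree and copy every plain-text document that matches."""
    copied = 0
    for path in SOURCE_ROOT.rglob("*.txt"):
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file: skip it
        if matches(text):
            target = DEST_ROOT / path.relative_to(SOURCE_ROOT)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(path, target)  # preserve timestamps and metadata
            copied += 1
    return copied

if __name__ == "__main__":
    print(f"Copied {crawl()} matching documents")
```

A crawler of this sort would also sweep up files that merely mention a keyword in passing, which would be consistent with a cache full of barely intelligible technical listings.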

And there’s another prospect that further complicates matters: Some of the revelations attributed to Snowden may not in fact have come from him but from another leaker spilling secrets under Snowden’s name. Snowden himself adamantly refuses to address this possibility on the record. But independent of my visit to Snowden, I was given unrestricted access to his cache of documents in various locations. And going through this archive using a sophisticated digital search tool, I could not find some of the documents that have made their way into public view, leading me to conclude that there must be a second leaker somewhere. I’m not alone in reaching that conclusion. Both Greenwald and security expert Bruce Schneier—who have had extensive access to the cache—have publicly stated that they believe another whistle-blower is releasing secret documents to the media.

[…]

Some have even raised doubts about whether the infamous revelation that the NSA was tapping German chancellor Angela Merkel’s cell phone, long attributed to Snowden, came from his trove. At the time of that revelation, Der Spiegel simply attributed the information to Snowden and other unnamed sources. If other leakers exist within the NSA, it would be more than another nightmare for the agency—it would underscore its inability to control its own information and might indicate that Snowden’s rogue protest of government overreach has inspired others within the intelligence community. “They still haven’t fixed their problems,” Snowden says. “They still have negligent auditing, they still have things going for a walk, and they have no idea where they’re coming from and they have no idea where they’re going. And if that’s the case, how can we as the public trust the NSA with all of our information, with all of our private records, the permanent record of our lives?”

[…]

Snowden keeps close tabs on his evolving public profile, but he has been resistant to talking about himself. In part, this is because of his natural shyness and his reluctance about “dragging family into it and getting a biography.” He says he worries that sharing personal details will make him look narcissistic and arrogant. But mostly he’s concerned that he may inadvertently detract from the cause he has risked his life to promote. “I’m an engineer, not a politician,” he says. “I don’t want the stage. I’m terrified of giving these talking heads some distraction, some excuse to jeopardize, smear, and delegitimize a very important movement.”

[…]

While in Geneva, Snowden says, he met many spies who were deeply opposed to the war in Iraq and US policies in the Middle East. “The CIA case officers were all going, what the hell are we doing?” Because of his job maintaining computer systems and network operations, he had more access than ever to information about the conduct of the war. What he learned troubled him deeply. “This was the Bush period, when the war on terror had gotten really dark,” he says. “We were torturing people; we had warrantless wiretapping.”

He began to consider becoming a whistle-blower, but with Obama about to be elected, he held off. “I think even Obama’s critics were impressed and optimistic about the values that he represented,” he says. “He said that we’re not going to sacrifice our rights. We’re not going to change who we are just to catch some small percentage more terrorists.” But Snowden grew disappointed as, in his view, Obama didn’t follow through on his lofty rhetoric. “Not only did they not fulfill those promises, but they entirely repudiated them,” he says. “They went in the other direction. What does that mean for a society, for a democracy, when the people that you elect on the basis of promises can basically suborn the will of the electorate?”

[…]

Snowden’s disenchantment would only grow. It was bad enough when spies were getting bankers drunk to recruit them; now he was learning about targeted killings and mass surveillance, all piped into monitors at the NSA facilities around the world. Snowden would watch as military and CIA drones silently turned people into body parts. And he would also begin to appreciate the enormous scope of the NSA’s surveillance capabilities, an ability to map the movement of everyone in a city by monitoring their MAC address, a unique identifier emitted by every cell phone, computer, and other electronic device.

[…]

Snowden adjusts his glasses; one of the nose pads is missing, making them slip occasionally. He seems lost in thought, looking back to the moment of decision, the point of no return. The time when, thumb drive in hand, aware of the enormous potential consequences, he secretly went to work. “If the government will not represent our interests,” he says, his face serious, his words slow, “then the public will champion its own interests. And whistle-blowing provides a traditional means to do so.”

[…]

Snowden landed a job as an infrastructure analyst with another giant NSA contractor, Booz Allen. The role gave him rare dual-hat authority covering both domestic and foreign intercept capabilities—allowing him to trace domestic cyberattacks back to their country of origin. In his new job, Snowden became immersed in the highly secret world of planting malware into systems around the world and stealing gigabytes of foreign secrets. At the same time, he was also able to confirm, he says, that vast amounts of US communications “were being intercepted and stored without a warrant, without any requirement for criminal suspicion, probable cause, or individual designation.” He gathered that evidence and secreted it safely away.

[…]

One day an intelligence officer told him that TAO—a division of NSA hackers—had attempted in 2012 to remotely install an exploit in one of the core routers at a major Internet service provider in Syria, which was in the midst of a prolonged civil war. This would have given the NSA access to email and other Internet traffic from much of the country. But something went wrong, and the router was bricked instead—rendered totally inoperable. The failure of this router caused Syria to suddenly lose all connection to the Internet—although the public didn’t know that the US government was responsible. (This is the first time the claim has been revealed.)

[…]

“It’s no secret that we hack China very aggressively,” he says. “But we’ve crossed lines. We’re hacking universities and hospitals and wholly civilian infrastructure rather than actual government targets and military targets. And that’s a real concern.”

The last straw for Snowden was a secret program he discovered while getting up to speed on the capabilities of the NSA’s enormous and highly secret data storage facility in Bluffdale, Utah. Potentially capable of holding upwards of a yottabyte of data, some 500 quintillion pages of text, the 1 million-square-foot building is known within the NSA as the Mission Data Repository. (According to Snowden, the original name was Massive Data Repository, but it was changed after some staffers thought it sounded too creepy—and accurate.) Billions of phone calls, faxes, emails, computer-to-computer data transfers, and text messages from around the world flow through the MDR every hour. Some flow right through, some are kept briefly, and some are held forever.

The massive surveillance effort was bad enough, but Snowden was even more disturbed to discover a new, Strangelovian cyberwarfare program in the works, codenamed MonsterMind. The program, disclosed here for the first time, would automate the process of hunting for the beginnings of a foreign cyberattack. Software would constantly be on the lookout for traffic patterns indicating known or suspected attacks. When it detected an attack, MonsterMind would automatically block it from entering the country—a “kill” in cyber terminology.

Programs like this had existed for decades, but MonsterMind software would add a unique new capability: Instead of simply detecting and killing the malware at the point of entry, MonsterMind would automatically fire back, with no human involvement. That’s a problem, Snowden says, because the initial attacks are often routed through computers in innocent third countries. “These attacks can be spoofed,” he says. “You could have someone sitting in China, for example, making it appear that one of these attacks is originating in Russia. And then we end up shooting back at a Russian hospital. What happens next?”

In addition to the possibility of accidentally starting a war, Snowden views MonsterMind as the ultimate threat to privacy because, in order for the system to work, the NSA first would have to secretly get access to virtually all private communications coming in from overseas to people in the US. “The argument is that the only way we can identify these malicious traffic flows and respond to them is if we’re analyzing all traffic flows,” he says. “And if we’re analyzing all traffic flows, that means we have to be intercepting all traffic flows. That means violating the Fourth Amendment, seizing private communications without a warrant, without probable cause or even a suspicion of wrongdoing. For everyone, all the time.”
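No technical details of MonsterMind are public, so the following is only a toy sketch of the general pattern described above: signature-based screening that automatically blocks matching traffic. The flow fields, signatures, and addresses are invented, and the comments note why an automated counterstrike keyed to the apparent source address would be dangerous.

```python
# Toy illustration of signature-based traffic screening. This is NOT a
# description of MonsterMind; its actual design has never been made public.
from dataclasses import dataclass

@dataclass
class Flow:
    src_ip: str            # claimed source address -- trivially spoofable
    dst_port: int
    payload_sample: bytes  # first bytes of the traffic

# Invented signatures standing in for "known or suspected attack" patterns.
ATTACK_SIGNATURES = [b"\x90\x90\x90\x90", b"example-c2-beacon"]

def looks_malicious(flow: Flow) -> bool:
    """Flag a flow whose payload matches a known attack signature."""
    return any(sig in flow.payload_sample for sig in ATTACK_SIGNATURES)

def screen(flow: Flow, blocked: set) -> None:
    """Block matching traffic at the border (a "kill," in the article's terms).

    An automated "fire back" step would have to target flow.src_ip. Because
    that address can be spoofed, retaliation could land on an innocent third
    party -- the Russian-hospital scenario Snowden describes.
    """
    if looks_malicious(flow):
        blocked.add(flow.src_ip)

blocked_sources = set()
screen(Flow("203.0.113.7", 443, b"...example-c2-beacon..."), blocked_sources)
print(blocked_sources)  # {'203.0.113.7'}
```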

[…]

Given the NSA’s new data storage mausoleum in Bluffdale, its potential to start an accidental war, and the charge to conduct surveillance on all incoming communications, Snowden believed he had no choice but to take his thumb drives and tell the world what he knew. The only question was when.

On March 13, 2013, sitting at his desk in the “tunnel” surrounded by computer screens, Snowden read a news story that convinced him that the time had come to act. It was an account of director of national intelligence James Clapper telling a Senate committee that the NSA does “not wittingly” collect information on millions of Americans. “I think I was reading it in the paper the next day, talking to coworkers, saying, can you believe this shit?”

Snowden and his colleagues had discussed the routine deception around the breadth of the NSA’s spying many times, so it wasn’t surprising to him when they had little reaction to Clapper’s testimony. “It was more of just acceptance,” he says, calling it “the banality of evil”—a reference to Hannah Arendt’s study of bureaucrats in Nazi Germany.

“It’s like the boiling frog,” Snowden tells me. “You get exposed to a little bit of evil, a little bit of rule-breaking, a little bit of dishonesty, a little bit of deceptiveness, a little bit of disservice to the public interest, and you can brush it off, you can come to justify it. But if you do that, it creates a slippery slope that just increases over time, and by the time you’ve been in 15 years, 20 years, 25 years, you’ve seen it all and it doesn’t shock you. And so you see it as normal. And that’s the problem, that’s what the Clapper event was all about. He saw deceiving the American people as what he does, as his job, as something completely ordinary. And he was right that he wouldn’t be punished for it, because he was revealed as having lied under oath and he didn’t even get a slap on the wrist for it. It says a lot about the system and a lot about our leaders.” Snowden decided it was time to hop out of the water before he too was boiled alive.

At the same time, he knew there would be dire consequences. “It’s really hard to take that step—not only do I believe in something, I believe in it enough that I’m willing to set my own life on fire and burn it to the ground.”

But he felt that he had no choice. Two months later he boarded a flight to Hong Kong with a pocket full of thumb drives.

[…]

rather than the Russian secret police, it’s his old employers, the CIA and the NSA, that Snowden most fears. “If somebody’s really watching me, they’ve got a team of guys whose job is just to hack me,” he says. “I don’t think they’ve geolocated me, but they almost certainly monitor who I’m talking to online. Even if they don’t know what you’re saying, because it’s encrypted, they can still get a lot from who you’re talking to and when you’re talking to them.”
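That last point is the logic of traffic analysis: encryption hides what is said, but not who is talking to whom or when. A minimal sketch, using invented metadata records and names, shows how little is needed to build a contact profile.

```python
# Minimal illustration of traffic analysis: with only (sender, recipient,
# timestamp) metadata -- no message contents -- a contact profile falls out.
# All records and names below are invented.
from collections import Counter

METADATA = [
    ("alice", "bob",   "2014-08-01T09:00"),
    ("alice", "bob",   "2014-08-01T23:10"),
    ("alice", "carol", "2014-08-02T10:30"),
]

def contact_profile(records, who):
    """Count how often `who` initiates contact with each correspondent."""
    return Counter(recipient for sender, recipient, _ in records if sender == who)

print(contact_profile(METADATA, "alice"))  # Counter({'bob': 2, 'carol': 1})
```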

More than anything, Snowden fears a blunder that will destroy all the progress toward reforms for which he has sacrificed so much. “I’m not self-destructive. I don’t want to self-immolate and erase myself from the pages of history. But if we don’t take chances, we can’t win,” he says. And so he takes great pains to stay one step ahead of his presumed pursuers—he switches computers and email accounts constantly. Nevertheless, he knows he’s liable to be compromised eventually: “I’m going to slip up and they’re going to hack me. It’s going to happen.”

Indeed, some of his fellow travelers have already committed some egregious mistakes. Last year, Greenwald found himself unable to open the encryption on a large trove of secrets from GCHQ—the British counterpart of the NSA—that Snowden had passed to him. So he sent his longtime partner, David Miranda, from their home in Rio to Berlin to get another set from Poitras. But in making the arrangements, The Guardian booked a transfer through London. Tipped off, probably as a result of GCHQ surveillance, British authorities detained Miranda as soon as he arrived and questioned him for nine hours. In addition, an external hard drive containing 60 gigabytes of data—about 58,000 pages of documents—was seized. Although the documents had been encrypted using a sophisticated program known as TrueCrypt, the British authorities discovered a paper of Miranda’s with the password for one of the files, and they were able to decrypt about 75 pages. (Greenwald has still not gained access to the complete GCHQ documents.)

Another concern for Snowden is what he calls NSA fatigue—the public becoming numb to disclosures of mass surveillance, just as it becomes inured to news of battle deaths during a war. “One death is a tragedy, and a million is a statistic,” he says, mordantly quoting Stalin. “Just as the violation of Angela Merkel’s rights is a massive scandal and the violation of 80 million Germans is a nonstory.”

Nor is he optimistic that the next election will bring any meaningful reform. In the end, Snowden thinks we should put our faith in technology—not politicians. “We have the means and we have the technology to end mass surveillance without any legislative action at all, without any policy changes.” The answer, he says, is robust encryption. “By basically adopting changes like making encryption a universal standard—where all communications are encrypted by default—we can end mass surveillance not just in the United States but around the world.”

[…]

“The question for us is not what new story will come out next. The question is, what are we going to do about it?”

I Want It, and I Want It Now — It’s Time for Instant Gratification | Re/code

I Want It, and I Want It Now — It’s Time for Instant Gratification | Re/code (part 1)

It Takes a New Kind of Worker to Make “Instant” Happen | Re/code (part 2)

Can “Instant” Become a Viable Business? | Re/code (part 3)

Instant Gratification Pioneers Kozmo, Webvan, Pets.com Still Believe | Re/code (part 4)

Living in an Instant World: What’s Next After Now? | Re/code (part 5)

Carrying two iPhones that beep out assignments throughout the day, Lyons works for four different app-enabled bike-courier services: WunWun, UberRush, Zipments and Petal by Pedal. He does about 25 to 30 deliveries per day, which adds up to about 50 miles, including the commute.

When he first got started last year, Lyons tried working for traditional bike-courier services where he would make $3 per delivery. “It was outrageous,” he says. “They treat you like an animal.”

Some of the newer services Lyons works for are subsidized. When it first started, Uber was giving away free courier service for its UberRush local delivery trial. Lyons says that demand has dropped a bit since the initial promos wore off.

WunWun — which has the insane premise of deliveries from any store or restaurant in Manhattan within an hour, for free — keeps Lyons the busiest.

Lyons claims WunWun’s system of working for tips, which are suggested within the app at 30 percent, somehow actually works. “You never really get snubbed out on a tip,” he says.

By literally working his butt off, Lyons thinks he will make between $45,000 and $60,000 this year.

[…]

“If people wanted it so badly, why did it not exist?” he says. “It was too darned expensive, and it was not sustainable. Even in 2010, a business like ours would be incredibly difficult to start because not enough sections of the population had smartphones.”

Still, Xu will admit that Palo Alto might not be the most representative test market in the world. As we drive to pick up the delivery, we pass three Teslas parked in a row in the shopping-center parking lot. “Only in Palo Alto,” he says.

But it’s bigger than Palo Alto. It’s bigger than San Francisco or New York. Take all these stories together and the larger point is: The business of bringing people what they want, when they want it, is booming.

A decade ago, we got iTunes, and the ability to have a song bought and delivered with the push of a button. Then Facebook helped us stay in touch with our spread-out friends and family from the comfort of our couch. Then Netflix movies started coming over the air instead of DVDs to our mailboxes. Now it’s not just Web pages that we can load up instantly, it’s the physical world.

Not to neglect the important historical contributions of pizza joints and Chinese restaurants, but the groundwork for what you might call the instant gratification economy was laid by Amazon, which spent years building up its inventory, fulfillment infrastructure and, most importantly, customer expectations for getting whatever they want delivered to their doors two days later.

Then Uber came along and established the precedent of a large-scale marketplace powered by independent workers and smartphones. After that started to work, every pitch deck in Silicon Valley seemed to morph overnight into an “Uber for X” startup.

On the one hand, this is a positive development. As startups merge online expectations with offline reality, the Internet is becoming more than a glowing screen drawing us away from the real world. On the other hand, instant gratification tempts us to be profoundly lazy and perhaps unreasonably impatient.

[…]

As for whether there’s demand, forces are converging to fulfill the notion of what some pundits label “IWWIWWIWI.” That is, “I want what I want when I want it.” It’s not the easiest acronym to get your tongue around — but it’s pretty to look at, and it’s right on the money.

[…]

Yarrow thinks we’ve become conditioned for impatience by technology like Internet search and smartphones. “Today, we have almost no tolerance for boredom,” she told me. “Our brains are malleable, and I think they have shifted to accommodate much more stimulation. We’re fascinated by newness, and we desire to get the new thing right away. We want what we want when we want it.”

[…]

Someone had told me the day before that one way to think about all this instant gratification stuff is that it basically brings rich-people benefits to the average person.

In his view, the magic of Uber and services modeled on Uber is that they help you value your time the way a rich person would, without spending your money the way a rich person would.

[…]

For decades, books and TV shows planted seeds of desire for instant gratification in impressionable minds. But across many of these stories about suburban genies and witches, magic wands and technology of the future, there’s a shadow side to getting what you want when you want it. The princesses always seem to run out of wishes before they get what they really need. Their greed is their doom.

“Don’t care how, I want it nooow,” sings greedy little Veruca Salt, right up until she falls into Willy Wonka’s garbage chute, never to be seen again.

[…]

In Pixar’s wistful animated sci-fi story “Wall-E,” the people of the future zoom around in hovering chairs in a climate-controlled dome, with robots refilling their sodas. Their bodies are so flabby they can’t even stand. It’s the ultimate incarnation of the couch potato.

[…]

The most important reason that this is happening now is that workers have smartphones. After a briefer-than-brief application process, companies like Uber hand out phones to workers — or just give them an app to download onto their personal devices — and suddenly, for better or worse, they’ve got a branded on-demand service.

Over and over again, startups in the instant gratification space tell me that the most crucial part of their arsenal is an app to help remote workers receive assignments, schedule jobs and map where they are going.

In large part because they are powered by a mobile workforce, instant gratification startups avoid much of the hassle and expense of building physical infrastructure.

“Remote controls for real life” is how venture capitalist Matt Cohler described mobile apps like Uber and the food-delivery service GrubHub two years ago — because their simple interfaces summon things to happen in the physical world.

Today, that real-life remote control feels even more like a magic wand. At a lunch meeting, investor Shervin Pishevar pulls out his phone, opens the Uber app and sets his location to Japan. “If I push this button right now,” he marvels, “I’m going to move metal in Tokyo.”

[…]

He describes this as a boomerang back to a village economy. After years of trends toward suburbs, big-box stores and car ownership, smartphones could be helping us get back to where we came from. The combined forces of urbanization, online commerce and trust mean that people can efficiently share goods and services on a local level, more than ever before.

[…]

Caviar, which was founded on the premise that “no good restaurants in San Francisco deliver,” became profitable within three months of launching. It has a much snazzier list of restaurants than GrubHub, including Momofuku in New York and Delfina in San Francisco.

Caviar CEO Jason Wang says his startup plans to soon drop delivery fees to $4.99 from $9.99. It pays drivers $15 per delivery and takes a cut of up to 25 percent of each order, depending on the restaurant. Even after the price cut, “We’ll still make money, because our margins are very good,” Wang says.

[…]

Uber is a company that owns nothing. It connects available drivers and their cars to people who want to be their passengers. By juicing supply with surge pricing and demand with discounts, Uber is able to create — out of thin air — a reliable service that exists in 140 cities around the world.

Without fail, instant gratification startups say they will win because they are smart at logistics.

Describing his business, Instacart founder and CEO Apoorva Mehta says, “It really is a data-science problem masked into a consumer product.”

[…]

DoorDash’s Xu describes his purpose as a machine-learning problem: Discovering “the variance of the variance” so his algorithm can reliably estimate prep and delivery time based on factors like how long a type of food stays warm, what a restaurant’s error rate is (the norm is 25 percent) and how fast a particular driver has been in the past.
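As a rough illustration of the kind of estimate Xu is describing, here is a back-of-the-envelope sketch. The feature names and numbers are invented for illustration; DoorDash's real model is not public and is certainly far more involved.

```python
# Invented, simplified sketch of estimating delivery time from the kinds of
# factors mentioned above (prep time, restaurant error rate, driver speed).
from dataclasses import dataclass

@dataclass
class Order:
    base_prep_minutes: float      # typical kitchen time for this dish
    restaurant_error_rate: float  # fraction of orders that must be re-made
    driver_avg_mph: float         # this driver's historical average speed
    distance_miles: float         # restaurant-to-customer distance

def estimated_delivery_minutes(order: Order) -> float:
    """Estimate total time: prep (padded for likely re-makes) plus travel."""
    expected_prep = order.base_prep_minutes * (1 + order.restaurant_error_rate)
    travel = 60 * order.distance_miles / order.driver_avg_mph
    return expected_prep + travel

# 12-minute dish, 25% error rate, 15 mph rider, 2.5 miles -> about 25 minutes.
print(estimated_delivery_minutes(Order(12.0, 0.25, 15.0, 2.5)))
```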

Uber aims to match up a driver and passenger as quickly as possible. Food delivery is more complicated, according to Xu.

“It’s almost never the driver that’s closest to the restaurant when the order is placed,” Xu says.

[…]

a mobile medical-marijuana delivery startup called Eaze launched in San Francisco. Not only was Eaze open for business, it was open for business 24 hours a day.

It Takes a New Kind of Worker to Make “Instant” Happen | Re/code (part 2)

it can be too easy to forget that people make “instant” happen. And, generally, these people are not a traditionally stable workforce. They are instead a flexible and scalable network of workers — “fractional employees” — that tap in and tap out as needed, and as suits them.

[…]

The smartphone is at the center of the sharing economy. Every company mentioned in this series on the instant gratification economy runs on worker smartphones. GPS, texting and mobile-app notifications are the ways to make flexible work actually work.

[…]

It’s very common for people to pick up gigs from multiple services — in the morning, grab some grocery orders on Instacart; then when you get tired of lifting large bags, run a shift during Sprig’s prime lunch hours; then when you get lonely from ferrying around inanimate objects, sign into Lyft to interact with an actual person.

NYU business school professor Arun Sundararajan’s summer research project is counting the number of jobs created by the sharing economy. He doesn’t have an estimate yet, but he points out that the U.S. workforce is already 20 percent to 25 percent freelance.

Sundararajan says he sees a lot of good in the sharing economy. “It will lead people to entrepreneurship without the extreme risks.” He thinks of platforms like Uber as gateways. “It’s even easier than finding a full-time job, which is easier than freelance.”

Can “Instant” Become a Viable Business? | Re/code (part 3).

Redefining delivery for a new era of customers who want everything right away requires rethinking operations. By focusing attention on creating a powerful logistical system, and tying into the “sharing economy,” many of the new crop of startups in the on-demand space are trying to offer faster service at a much lower operational cost.

And so the young players in the instant gratification economy are ferrying cargo across town via crowdsourced workers.

Usually, these are independent contractors, who decide when they want to work, drive their own vehicles, receive directions about where they need to be via smartphone — and cover the cost of their own parking tickets. The new buzzword for this is “fractional employment.”

[…]

Deliv is trying to do deliveries of almost anything and everything later that day, for as little as $5.

[…]

Crowdsourced drivers pick up batches of orders, and then take them out to people’s homes.

“I don’t own trucks, I don’t pay for drivers I don’t use, I don’t pay for hubs,” Carmeli says. “The malls are my hubs.”

[…]

Amazon said last year that more than 20 million members signed up for its two-day delivery service, Prime, which now costs $99 per year. While that’s a small number in the grand scheme of things, the high-spending habits of the group — estimated to be more than twice as much as regular Amazon customers — are having a magnetic effect on the rest of the industry.

A skunkworks team at Google developed what became Google Shopping Express last year, by putting the Amazon Prime model under a microscope. According to a source familiar with the project, the biggest lesson was that it’s worth investing ahead of where the market might be today.

Which is to say, many people still don’t know they want same-day delivery, because today they think same-day delivery means fuss, friction and expense. But if you make something fast and easy, consumers will come to appreciate it — and maybe even pay for it. So the upfront investment is worth it.

“It’s better to build volume first, than to launch with a ‘gotcha,’” the source says.

That’s the hypothesis, anyway.

And Google isn’t testing the last part of that hypothesis — charging people money — yet.

It is currently subsidizing six-month trials of unlimited free delivery. In fact, the company is throwing something like $500 million at Google Shopping Express.

Competing with that kind of budget is a scary prospect for startups.

[…]

The scrum now includes two Ubers for home cleaning, a few Ubers for handypeople, at least three Ubers for massages, five Ubers for valet parking, a couple of Ubers for laundries, an emerging group of Ubers for hair and makeup, and so very many Ubers for food.

[…]

Could you actually make a business out of offering same-day delivery — for free? Permanently, not as a promotion.

[…]

WunWun promises to buy anything from any store or any food from any restaurant in Manhattan, parts of Brooklyn and the Hamptons, and deliver it to any place in that same zone. It’s free.

[…]

Hnetinka was inspired by an April 2013 investment memo from Jefferies called “Same-Day: The Next Killer App,” which made two big points: 1) Free shipping has become a “must-have” in e-commerce. Half of consumers abandon online shopping carts without it; and 2) there’s the opportunity to improve on that service by making it same-day.

[…]

For today, WunWun is making money by taking a slice of tips, and by pocketing the discounts it gets from the retailers it spends a lot of money with rather than passing those savings along to customers.

Tomorrow, WunWun will try to create the offline equivalent of search advertising, Hnetinka says.

Stores will be able to bid to be the supplier for WunWun orders, whether tennis balls, ChapStick or Yankees hats.

“That’s when WunWun really starts to make a lot of money,” Hnetinka says. “We have created the largest demand funnel. We’ve brought together convenience of ordering online with immediacy of offline. So we’re not talking about profitability margins, we’re talking about marketing budgets.”

Instant Gratification Pioneers Kozmo, Webvan, Pets.com Still Believe | Re/code (part 4)

at that moment in time, it seemed like all you had to do was pick a noun, add “.com,” and you were in business.

As a sign of the times, one company called Computer.com spent half its $5.8 million in venture capital airing Super Bowl ads on the day it launched a site purporting to teach people about using computers.

And there were parties, legendary parties, where the likes of Elvis Costello and Beck and the B-52s played, sponsor banners bedecked the walls, and many of the revelers collected their mountains of swag while having no idea which company was even throwing that night’s bash.

Even if Kozmo and its cohort had a chance at a business model that worked, they were all spending more money than they could possibly earn on advertising and parties and weird promotional tie-ups to return movies at Starbucks.

As we all know, that boom went bust in 2000. The period’s most famous flameouts — Pets.com, Urbanfetch, Kozmo, Webvan, even Computer.com, somehow — were all gone by 2001. What’s left — a cautionary tale and some mascot dolls for sale on eBay.

[…]

Same-day service is the single-biggest wave in e-commerce, Wainwright says. The single best experience she had shopping online was when she forgot to pack a certain special black cashmere sweater before flying to New York for a business trip.

Wainwright says she realized the sweater was missing at 11 pm, when she unpacked her bag at the hotel. But it was still posted on the online retailer Net-A-Porter, where she originally bought it, so she placed another order and it was delivered to her office at 10:30 the next morning by a deliveryman in a bellboy suit bearing an iPad for her signature.

“It was absolutely the most amazing thing,” Wainwright says. “It was like $25, it was nothing. Now, the sweater wasn’t cheap — but it was the exact same sweater I had left on my bed.”

Living in an Instant World: What’s Next After Now? | Re/code (part 5)

Jennings has set up a virtual Google Voice number attached to his doorbell so he can let people into his entryway from his phone when he’s not home.

“Say you run out of toothpaste in the morning, you can order it, and then it’s ready for when you brush your teeth at night,” he says.

“The majority of the time, there’s no interaction,” Jennings says, meaning he doesn’t have to say hello to a delivery person or sign for a package.

And in the future, people may be taken out of the delivery equation altogether.

That future is coming sooner than you think. Two years ago, the geek world went wild for an idea called Tacocopter. “Flying robots deliver tacos to your location,” said its website. “Easy ordering on your smartphone.”

[…]

“It wouldn’t surprise me to see that the regulations that now limit such uses of drone technology will almost certainly remain in effect much longer than the technological limitations remain a hurdle,” wrote Mike Masnick.

Eight months ago, Amazon upped the Tacocopter stakes with a promo video for Amazon Prime Air, showing a hovering robotic aircraft depositing a package on a suburban patio. It was a marketing stunt designed to jumpstart the holiday shopping season.

Or was it?

In July, Amazon wrote to the FAA asking for permission to test flying commercial drones outside at speeds of up to 50 miles per hour. The company said it hopes to deliver packages weighing up to five pounds within 30 minutes of orders being placed.

[…]

“A lot of things fundamentally change,” he says. “Does the architecture of homes change because there’s more space when you don’t need garages and kitchens? Do you really need a grocery store? You shouldn’t use all that real estate in a city for giant parking lots, you should push a button and be able to get what you want delivered, like Instacart.”

He continues. “And then you argue, is there a world where you have Munchery [another San Francisco food creation and distribution service] delivered to a restaurant that’s not really a restaurant, but it’s a … it’s a front-end. It’s a beautiful spot with a beautiful view, and it doesn’t need a kitchen, just have a few tables for a sit-down dinner.”

This train of thought has taken him to a new place. “You know, I hadn’t thought about that,” Pishevar says. “It’s just a … a distributed table. And then someone would come serve you.”

[…]

A popular justification for all this food-startup fundraising is frequency: Most people eat three times a day, at least.

No, really, that’s what every venture capitalist will remind you. This market is an opportunity because it ties into existing daily habits. People eat more often than they need to Uber across town. And so, the biggest opportunity in “instant” is food.

[…]

Sure, making food is not novel. The innovation here is making food that ties into smart logistics systems that match supply and demand, and coordinating crowdsourced workers so that meals arrive so fast it seems like magic.

“We’re mass-producing the same meal for all these people. We get economies of scale that no restaurant will ever have because of the physical location. Whereas, we can serve the whole Bay Area with the same supply.”

This is not just a restaurant, says Tsui. Combining the core mobile functions of location and real-time makes for a fundamental shift beyond what other mobile apps — besides Uber — are doing.

[…]

Especially for those who live in the cities well served by these services, it’s probably time to start thinking about what deserves to be slowed down, and what things we’d prefer to wait for and savor. Either that, or the inexorable march toward convenience will bring us ever closer to fulfilling the prophecy of those shapeless “Wall-E” couch potatoes, who have trouble standing up after sitting on the couch for so long.

But beyond instant — what comes next?

It’s probably making those brilliant on-demand logistics systems even more brilliant, anticipating our wants and needs before we even have them, and starting to send things our way before we push the button.

Both Amazon and Google are already working in this direction. Or maybe instead of tacos and drones, we’ll all just get 3-D printers, so we can replicate our meals at the table, just like Jane Jetson.

And maybe then Veruca Salt would just calm down.

The Social Laboratory

The Social Laboratory.

[Image: animation of security cameras overlaid on Singapore]

In October 2002, Peter Ho, the permanent secretary of defense for the tiny island city-state of Singapore, paid a visit to the offices of the Defense Advanced Research Projects Agency (DARPA), the U.S. Defense Department’s R&D outfit best known for developing the M16 rifle, stealth aircraft technology, and the Internet. Ho didn’t want to talk about military hardware. Rather, he had made the daylong plane trip to meet with retired Navy Rear Adm. John Poindexter, one of DARPA’s then-senior program directors and a former national security advisor to President Ronald Reagan. Ho had heard that Poindexter was running a novel experiment to harness enormous amounts of electronic information and analyze it for patterns of suspicious activity — mainly potential terrorist attacks.

The two men met in Poindexter’s small office in Virginia, and on a whiteboard, Poindexter sketched out for Ho the core concepts of his imagined system, which Poindexter called Total Information Awareness (TIA). It would gather up all manner of electronic records — emails, phone logs, Internet searches, airline reservations, hotel bookings, credit card transactions, medical reports — and then, based on predetermined scenarios of possible terrorist plots, look for the digital “signatures” or footprints that would-be attackers might have left in the data space. The idea was to spot the bad guys in the planning stages and to alert law enforcement and intelligence officials to intervene.
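Stripped of its scale and secrecy, the concept Poindexter sketched amounts to matching predefined scenario "signatures" against pooled event records. The following toy sketch, with invented record fields and an invented scenario, is only meant to illustrate that mechanic.

```python
# Toy sketch of scenario-signature matching over pooled records, in the spirit
# of the TIA concept described above. The records and the example scenario are
# invented for illustration.
from collections import defaultdict

# Each record pairs a person with one observed event type.
RECORDS = [
    ("p1", "one_way_ticket"),
    ("p1", "cash_hotel_booking"),
    ("p1", "chemical_purchase"),
    ("p2", "one_way_ticket"),
]

# A "scenario" is a set of event types that together form a suspicious signature.
SCENARIO = {"one_way_ticket", "cash_hotel_booking", "chemical_purchase"}

def flag_matches(records, scenario):
    """Return the ids of people whose combined events cover the full scenario."""
    events_by_person = defaultdict(set)
    for person, event in records:
        events_by_person[person].add(event)
    return [p for p, events in events_by_person.items() if scenario <= events]

print(flag_matches(RECORDS, SCENARIO))  # ['p1']
```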

[…]

Ho returned home inspired that Singapore could put a TIA-like system to good use. Four months later he got his chance, when an outbreak of severe acute respiratory syndrome (SARS) swept through the country, killing 33, dramatically slowing the economy, and shaking the tiny island nation to its core. Using Poindexter’s design, the government soon established the Risk Assessment and Horizon Scanning program (RAHS, pronounced “roz”) inside a Defense Ministry agency responsible for preventing terrorist attacks and “nonconventional” strikes, such as those using chemical or biological weapons — an effort to see how Singapore could avoid or better manage “future shocks.” Singaporean officials gave speeches and interviews about how they were deploying big data in the service of national defense — a pitch that jibed perfectly with the country’s technophilic culture.

[…]

many current and former U.S. officials have come to see Singapore as a model for how they’d build an intelligence apparatus if privacy laws and a long tradition of civil liberties weren’t standing in the way.

[…]

They are drawn not just to Singapore’s embrace of mass surveillance but also to the country’s curious mix of democracy and authoritarianism, in which a paternalistic government ensures people’s basic needs — housing, education, security — in return for almost reverential deference. It is a law-and-order society, and the definition of “order” is all-encompassing.

Ten years after its founding, the RAHS program has evolved beyond anything Poindexter could have imagined. Across Singapore’s national ministries and departments today, armies of civil servants use scenario-based planning and big-data analysis from RAHS for a host of applications beyond fending off bombs and bugs. They use it to plan procurement cycles and budgets, make economic forecasts, inform immigration policy, study housing markets, and develop education plans for Singaporean schoolchildren — and they are looking to analyze Facebook posts, Twitter messages, and other social media in an attempt to “gauge the nation’s mood” about everything from government social programs to the potential for civil unrest.

In other words, Singapore has become a laboratory not only for testing how mass surveillance and big-data analysis might prevent terrorism, but for determining whether technology can be used to engineer a more harmonious society.

In a country run by engineers and technocrats, it’s an article of faith among the governing elite, and seemingly among most of the public, that Singapore’s 3.8 million citizens and permanent residents — a mix of ethnic Chinese, Indians, and Malays who live crammed into 716 square kilometers along with another 1.5 million nonresident immigrants and foreign workers — are perpetually on a knife’s edge between harmony and chaos.

“Singapore is a small island,” residents are quick to tell visitors, reciting the mantra to explain both their young country’s inherent fragility and its obsessive vigilance. Since Singapore gained independence from its union with Malaysia in 1965, the nation has been fixated on the forces aligned against it, from the military superiority of potentially aggressive and much larger neighbors, to its lack of indigenous energy resources, to the country’s longtime dependence on Malaysia for fresh water. “Singapore shouldn’t exist. It’s an invented country,” one top-ranking government official told me on a recent visit, trying to capture the existential peril that seems to inform so many of the country’s decisions.

But in less than 50 years, Singapore has achieved extraordinary success. Despite the government’s quasi-socialistic cradle-to-grave care, the city-state is enthusiastically pro-business, and a 2012 report ranked it as the world’s wealthiest country, based on GDP per capita. Singapore’s port handles 20 percent of the world’s shipping containers and nearly half of the world’s crude oil shipments; its airport is the principal air-cargo hub for all of Southeast Asia; and thousands of corporations have placed their Asian regional headquarters there. This economic rise might be unprecedented in the modern era, yet the more Singapore has grown, the more Singaporeans fear loss. The colloquial word kiasu, which stems from a vernacular Chinese word that means “fear of losing,” is a shorthand by which natives concisely convey the sense of vulnerability that seems coded into their social DNA (as well as their anxiety about missing out — on the best schools, the best jobs, the best new consumer products). Singaporeans’ boundless ambition is matched only by their extreme aversion to risk.

That is one reason the SARS outbreak flung the door wide open for RAHS. From late February to July of 2003, the virus flamed through the country. It turned out that three women who were hospitalized and treated for pneumonia in Singapore had contracted SARS while traveling in Hong Kong. Although two of the women recovered without infecting anyone, the third patient sparked an outbreak when she passed the virus to 22 people, including a nurse who went on to infect dozens of others. The officials identified a network of three more so-called “superspreaders” — together, five people caused more than half the country’s 238 infections. If Singaporean officials had detected any of these cases sooner, they might have halted the spread of the virus.

Health officials formed a task force two weeks after the virus was first spotted and took extraordinary measures to contain it, but they knew little about how it was spreading. They distributed thermometers to more than 1 million households, along with descriptions of SARS’s symptoms. Officials checked for fevers at schools and businesses, and they even used infrared thermal imagers to scan travelers at the airport. The government invoked Singapore’s Infectious Diseases Act and ordered in-home quarantines for more than 850 people who showed signs of infection, enforcing the rule with surveillance devices and electronic monitoring equipment. Investigators tracked down all people with whom the victims had been in contact. The government closed all schools at the pre-university level, affecting 600,000 students.

[…]

The SARS outbreak reminded Singaporeans that their national prosperity could be imperiled in just a few months by a microscopic invader that might wipe out a significant portion of the densely packed island’s population.

Months after the virus abated, Ho and his colleagues ran a simulation using Poindexter’s TIA ideas to see whether they could have detected the outbreak. Ho will not reveal what forms of information he and his colleagues used — by U.S. standards, Singapore’s privacy laws are virtually nonexistent, and it’s possible that the government collected private communications, financial data, public transportation records, and medical information without any court approval or private consent — but Ho claims that the experiment was very encouraging. It showed that if Singapore had previously installed a big-data analysis system, it could have spotted the signs of a potential outbreak two months before the virus hit the country’s shores. Prior to the SARS outbreak, for example, there were reports of strange, unexplained lung infections in China. Threads of information like that, if woven together, could in theory warn analysts of pending crises.

[…]

The system uses a mixture of proprietary and commercial technology and is based on a “cognitive model” designed to mimic the human thought process — a key design feature influenced by Poindexter’s TIA system. RAHS, itself, doesn’t think. It’s a tool that helps human beings sift huge stores of data for clues on just about everything. It is designed to analyze information from practically any source — the input is almost incidental — and to create models that can be used to forecast potential events. Those scenarios can then be shared across the Singaporean government and be picked up by whatever ministry or department might find them useful. Using a repository of information called an ideas database, RAHS and its teams of analysts create “narratives” about how various threats or strategic opportunities might play out. The point is not so much to predict the future as to envision a number of potential futures that can tell the government what to watch and when to dig further.

The officials running RAHS today are tight-lipped about exactly what data they monitor, though they acknowledge that a significant portion of “articles” in their databases come from publicly available information, including news reports, blog posts, Facebook updates, and Twitter messages. (“These articles have been trawled in by robots or uploaded manually” by analysts, says one program document.) But RAHS doesn’t need to rely only on open-source material or even the sorts of intelligence that most governments routinely collect: In Singapore, electronic surveillance of residents and visitors is pervasive and widely accepted.

Surveillance starts in the home, where all Internet traffic in Singapore is filtered, a senior Defense Ministry official told me (commercial and business traffic is not screened, the official said). Traffic is monitored primarily for two sources of prohibited content: porn and racist invective. About 100 websites featuring sexual content are officially blocked. The list is a state secret, but it’s generally believed to include Playboy and Hustler magazine’s websites and others with sexually laden words in the title. (One Singaporean told me it’s easy to find porn — just look for the web addresses without any obviously sexual words in them.) All other sites, including foreign media, social networks, and blogs, are open to Singaporeans. But post a comment or an article that the law deems racially offensive or inflammatory, and the police may come to your door.

Singaporeans have been charged under the Sedition Act for making racist statements online, but officials are quick to point out that they don’t consider this censorship. Hateful speech threatens to tear the nation’s multiethnic social fabric and is therefore a national security threat, they say. After the 2012 arrest of two Chinese teenage boys, who police alleged had made racist comments on Facebook and Twitter about ethnic Malays, a senior police official explained to reporters: “The right to free speech does not extend to making remarks that incite racial and religious friction and conflict. The Internet may be a convenient medium to express one’s views, but members of the public should bear in mind that they are no less accountable for their actions online.”

Singaporean officials stress that citizens are free to criticize the government, and they do.

[…]

Commentary that impugns an individual’s character or motives, however, is off-limits because, like racial invective, it is seen as a threat to the nation’s delicate balance. Journalists, including foreign news organizations, have frequently been charged under the country’s strict libel laws.

[…]

Not only does the government keep a close eye on what its citizens write and say publicly, but it also has the legal authority to monitor all manner of electronic communications, including phone calls, under several domestic security laws aimed at preventing terrorism, prosecuting drug dealing, and blocking the printing of “undesirable” material.

[…]

The surveillance extends to visitors as well. Mobile-phone SIM cards are an easy way for tourists to make cheap calls and are available at nearly any store — as ubiquitous as chewing gum in the United States. (Incidentally, the Singaporean government banned commercial sales of gum because chewers were depositing their used wads on subway doors, among other places.) Criminals like disposable SIM cards because they can be hard to trace to an individual user. But to purchase a card in Singapore, a customer has to provide a passport number, which is linked to the card, meaning the phone company — and, presumably, by extension the government — has a record of every call made on a supposedly disposable, anonymous device.

Privacy International reported that Singaporeans who want to obtain an Internet account must also show identification — in the form of the national ID card that every citizen carries — and Internet service providers “reportedly provide, on a regular basis, information on users to government officials.” The Ministry of Home Affairs also has the authority to compel businesses in Singapore to hand over information about threats against their computer networks in order to defend the country’s computer systems from malicious software and hackers.

[…]

Perhaps no form of surveillance is as pervasive in Singapore as its network of security cameras, which police have installed in more than 150 “zones” across the country. Even though they adorn the corners of buildings, are fastened to elevator ceilings, and protrude from the walls of hotels, stores, and apartment lobbies, I had little sense of being surrounded by digital hawk eyes while walking around Singapore, any more than while surfing the web I could detect the digital filters of government speech-minders. Most Singaporeans I met hardly cared that they live in a surveillance bubble and were acutely aware that they’re not unique in some respects. “Don’t you have cameras everywhere in London and New York?” many of the people I talked to asked. (In fact, according to city officials, “London has one of the highest number of CCTV cameras of any city in the world.”) Singaporeans presumed that the cameras deterred criminals and accepted that in a densely populated country, there are simply things you shouldn’t say. “In Singapore, people generally feel that if you’re not a criminal or an opponent of the government, you don’t have anything to worry about,” one senior government official told me.

This year, the World Justice Project, a U.S.-based advocacy group that studies adherence to the rule of law, ranked Singapore as the world’s second-safest country. Prized by Singaporeans, this distinction has earned the country a reputation as one of the most stable places to do business in Asia. Interpol is also building a massive new center in Singapore to police cybercrime. It’s only the third major Interpol site outside Lyon, France, and Argentina, and it reflects both the international law enforcement group’s desire to crack down on cybercrime and its confidence that Singapore is the best place in Asia to lead that fight.

But it’s hard to know whether the low crime rates and adherence to the rule of law are more a result of pervasive surveillance or Singaporeans’ unspoken agreement that they mustn’t turn on one another, lest the tiny island come apart at the seams. If it’s the latter, then the Singapore experiment suggests that governments can install cameras on every block in their cities and mine every piece of online data and all that still wouldn’t be enough to dramatically curb crime, prevent terrorism, or halt an epidemic. A national unity of purpose, a sense that we all sink or swim together, has to be instilled in the population. So Singapore is using technology to do that too.

[…]

The provision of affordable, equitable housing is a fundamental promise that the government makes to its citizens, and keeping them happy in their neighborhoods has been deemed essential to national harmony. Eighty percent of Singapore’s citizens live in public housing — fashionable, multiroom apartments in high-rise buildings, some of which would sell for around U.S. $1 million on the open market. The government, which also owns about 80 percent of the city’s land, sells apartments with mortgage financing at interest rates below 3 percent and allows buyers to repay their mortgages out of a forced retirement savings account, to which employers also make a contribution. The effect is that nearly all Singaporean citizens own their own home, and it doesn’t take much of a bite out of their income.

[…]

The Singapore Tourism Board used the methodology to examine trends about who will be visiting the country over the next decade. Officials have tried to forecast whether “alternative foods” derived from experiments and laboratories could reduce Singapore’s near-total dependence on food imports.

[…]

Singapore is now undertaking a multiyear initiative to study how people in lower-level service or manufacturing jobs could be replaced by automated systems like computers or robots, or be outsourced. Officials want to understand where the jobs of the future will come from so that they can retrain current workers and adjust education curricula. But turning lower-end jobs into more highly skilled ones — which native Singaporeans can do — is a step toward pushing lower-skilled immigrants out of the country.

[…]

Singaporeans speak, often reverently, of the “social contract” between the people and their government. They have consciously chosen to surrender certain civil liberties and individual freedoms in exchange for fundamental guarantees: security, education, affordable housing, health care.

[…]

One futures study that examined “surveillance from below” concluded that the proliferation of smartphones and social media is turning the watched into the watchers. These new technologies “have empowered citizens to intensely scrutinise government elites, corporations and law enforcement officials … increasing their exposure to reputational risks,” the study found. From the angry citizen who takes a photo of a policeman sleeping in his car and posts it to Twitter to an opposition blogger who challenges party orthodoxy, Singapore’s leaders cannot escape the watch of their own citizens.

[…]

In this tiny laboratory of big-data mining, the experiment is yielding an unexpected result: The more time Singaporeans spend online, the more they read, the more they share their thoughts with each other and their government, the more they’ve come to realize that Singapore’s light-touch repression is not entirely normal among developed, democratic countries — and that their government is not infallible. To the extent that Singapore is a model for other countries to follow, it may tell them more about the limits of big data and that not every problem can be predicted.

“Tip-of-the-Tongue Syndrome,” Transactive Memory, and How the Internet Is Making Us Smarter | Brain Pickings

“Tip-of-the-Tongue Syndrome,” Transactive Memory, and How the Internet Is Making Us Smarter | Brain Pickings.

Vannevar Bush’s ‘memex’ — short for ‘memory index’ — a primitive vision for a personal hard drive for information storage and management.

“At their best, today’s digital tools help us see more, retain more, communicate more. At their worst, they leave us prey to the manipulation of the toolmakers. But on balance, I’d argue, what is happening is deeply positive. This book is about the transformation.”

[…]

One of his most fascinating and important points has to do with our outsourcing of memory — or, more specifically, our increasingly deft, search-engine-powered skills of replacing the retention of knowledge in our own brains with the on-demand access to knowledge in the collective brain of the internet. Think, for instance, of those moments when you’re trying to recall the name of a movie but only remember certain fragmentary features — the name of the lead actor, the gist of the plot, a song from the soundtrack. Thompson calls this “tip-of-the-tongue syndrome” and points out that, today, you’ll likely be able to reverse-engineer the name of the movie you don’t remember by plugging into Google what you do remember about it.

[…]

“Tip-of-the-tongue syndrome is an experience so common that cultures worldwide have a phrase for it. Cheyenne Indians call it navonotootse’a, which means “I have lost it on my tongue”; in Korean it’s hyeu kkedu-te mam-dol-da, which has an even more gorgeous translation: “sparkling at the end of my tongue.” The phenomenon generally lasts only a minute or so; your brain eventually makes the connection. But … when faced with a tip-of-the-tongue moment, many of us have begun to rely instead on the Internet to locate information on the fly. If lifelogging … stores “episodic,” or personal, memories, Internet search engines do the same for a different sort of memory: “semantic” memory, or factual knowledge about the world. When you visit Paris and have a wonderful time drinking champagne at a café, your personal experience is an episodic memory. Your ability to remember that Paris is a city and that champagne is an alcoholic beverage — that’s semantic memory.”

[…]

“Writing — the original technology for externalizing information — emerged around five thousand years ago, when Mesopotamian merchants began tallying their wares using etchings on clay tablets. It emerged first as an economic tool. As with photography and the telephone and the computer, newfangled technologies for communication nearly always emerge in the world of commerce. The notion of using them for everyday, personal expression seems wasteful, risible, or debased. Then slowly it becomes merely lavish, what “wealthy people” do; then teenagers take over and the technology becomes common to the point of banality.”

Thompson reminds us of the anecdote, by now itself familiar “to the point of banality,” about Socrates and his admonition that the “technology” of writing would devastate the Greek tradition of debate and dialectic, and would render people incapable of committing anything to memory because “knowledge stored was not really knowledge at all.” He cites Socrates’s parable of the Egyptian god Theuth and how he invented writing, offering it as a gift to the king of Egypt:

“This discovery of yours will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.”

That resistance endured as technology changed shape, across the Middle Ages and past Gutenberg’s revolution, but it wasn’t without counter-resistance: Those who recorded their knowledge in writing and, eventually, collected it in the form of books argued that it expanded the scope of their curiosity and the ideas they were able to ponder, whereas the mere act of rote memorization made no guarantees of deeper understanding.

Ultimately, however, Thompson points out that Socrates was both right and wrong: It’s true that, with some deliberately cultivated exceptions and neurological outliers, few thinkers today rely on pure memorization and can recite extensive passages of text from memory. But what Socrates failed to see was the extraordinary dot-connecting enabled by access to knowledge beyond what our own heads can hold — because, as Amanda Palmer poignantly put it, “we can only connect the dots that we collect,” and the outsourcing of memory has exponentially enlarged our dot-collections.

With this in mind, Thompson offers a blueprint to this newly developed system of knowledge management in which access is critical:

“If you are going to read widely but often read books only once; if you are going to tackle the ever-expanding universe of ideas by skimming and glancing as well as reading deeply; then you are going to rely on the semantic-memory version of gisting. By which I mean, you’ll absorb the gist of what you read but rarely retain the specifics. Later, if you want to mull over a detail, you have to be able to refind a book, a passage, a quote, an article, a concept.”

This, he argues, is also how and why libraries were born — the death of the purely oral world and the proliferation of print after Gutenberg placed new demands on organizing and storing human knowledge. And yet storage and organization soon proved to be radically different things:

“The Gutenberg book explosion certainly increased the number of books that libraries acquired, but librarians had no agreed-upon ways to organize them. It was left to the idiosyncrasies of each. A core job of the librarian was thus simply to find the book each patron requested, since nobody else knew where the heck the books were. This created a bottleneck in access to books, one that grew insufferable in the nineteenth century as citizens began swarming into public venues like the British Library. “Complaints about the delays in the delivery of books to readers increased,” as Matthew Battles writes in Library: An Unquiet History, “as did comments about the brusqueness of the staff.” Some patrons were so annoyed by the glacial pace of access that they simply stole books; one was even sentenced to twelve months in prison for the crime. You can understand their frustration. The slow speed was not just a physical nuisance, but a cognitive one.”

The solution came in the late 19th century by way of Melville Dewey, whose decimal system imposed order by creating a taxonomy of book placement, eventually rendering librarians unnecessary — at least in their role as literal book-retrievers. They became, instead, curiosity sherpas who helped patrons decide what to read and carry out comprehensive research. In many ways, they came to resemble the editors and curators who help us navigate the internet today, framing for us what is worth attending to and why.

[…]

“The history of factual memory has been fairly predictable up until now. With each innovation, we’ve outsourced more information, then worked to make searching more efficient. Yet somehow, the Internet age feels different. Quickly pulling up [the answer to a specific esoteric question] on Google seems different from looking up a bit of trivia in an encyclopedia. It’s less like consulting a book than like asking someone a question, consulting a supersmart friend who lurks within our phones.”

And therein lies the magic of the internet — that unprecedented access to humanity’s collective brain. Thompson cites the work of Harvard psychologist Daniel Wegner, who first began exploring this notion of collective rather than individual knowledge in the 1980s by observing how partners in long-term relationships often divide and conquer memory tasks in sharing the household’s administrative duties:

“Wegner suspected this division of labor takes place because we have pretty good “metamemory.” We’re aware of our mental strengths and limits, and we’re good at intuiting the abilities of others. Hang around a workmate or a romantic partner long enough and you begin to realize that while you’re terrible at remembering your corporate meeting schedule, or current affairs in Europe, or how big a kilometer is relative to a mile, they’re great at it. So you begin to subconsciously delegate the task of remembering that stuff to them, treating them like a notepad or encyclopedia. In many respects, Wegner noted, people are superior to these devices, because what we lose in accuracy we make up in speed.

[…]

Wegner called this phenomenon “transactive” memory: two heads are better than one. We share the work of remembering, Wegner argued, because it makes us collectively smarter — expanding our ability to understand the world around us.”

[…]

This very outsourcing of memory requires that we learn what the machine knows — a kind of meta-knowledge that enables us to retrieve the information when we need it. And, reflecting on Sparrow’s findings, Thompson points out that this is neither new nor negative:

“We’ve been using transactive memory for millennia with other humans. In everyday life, we are only rarely isolated, and for good reason. For many thinking tasks, we’re dumber and less cognitively nimble if we’re not around other people. Not only has transactive memory not hurt us, it’s allowed us to perform at higher levels, accomplishing acts of reasoning that are impossible for us alone.”

[…]

Outsourcing our memory to machines rather than to other humans, in fact, offers certain advantages by pulling us into a seemingly infinite rabbit hole of indiscriminate discovery:

“In some ways, machines make for better transactive memory buddies than humans. They know more, but they’re not awkward about pushing it in our faces. When you search the Web, you get your answer — but you also get much more. Consider this: If I’m trying to remember what part of Pakistan has experienced many U.S. drone strikes and I ask a colleague who follows foreign affairs, he’ll tell me “Waziristan.” But when I queried this once on the Internet, I got the Wikipedia page on “Drone attacks in Pakistan.” A chart caught my eye showing the astonishing increase of drone attacks (from 1 a year to 122 a year); then I glanced down to read a précis of studies on how Waziristan residents feel about being bombed. (One report suggested they weren’t as opposed as I’d expected, because many hated the Taliban, too.) Obviously, I was procrastinating. But I was also learning more, reinforcing my schematic understanding of Pakistan.”

[…]

“The real challenge of using machines for transactive memory lies in the inscrutability of their mechanics. Transactive memory works best when you have a sense of how your partners’ minds work — where they’re strong, where they’re weak, where their biases lie. I can judge that for people close to me. But it’s harder with digital tools, particularly search engines. You can certainly learn how they work and develop a mental model of Google’s biases. … But search companies are for-profit firms. They guard their algorithms like crown jewels. This makes them different from previous forms of outboard memory. A public library keeps no intentional secrets about its mechanisms; a search engine keeps many. On top of this inscrutability, it’s hard to know what to trust in a world of self-publishing. To rely on networked digital knowledge, you need to look with skeptical eyes. It’s a skill that should be taught with the same urgency we devote to teaching math and writing.”

Thompson’s most important point, however, has to do with how outsourcing our knowledge to digital tools actually hampers the very process of creative thought, which relies on our ability to connect existing ideas from our mental pool of resources into new combinations, or what the French polymath Henri Poincaré has famously termed “sudden illuminations.” Without a mental catalog of materials to mull over and let incubate in our fringe consciousness, our capacity for such illuminations is greatly deflated. Thompson writes:

“These eureka moments are familiar to all of us; they’re why we take a shower or go for a walk when we’re stuck on a problem. But this technique works only if we’ve actually got a lot of knowledge about the problem stored in our brains through long study and focus. … You can’t come to a moment of creative insight if you haven’t got any mental fuel. You can’t be googling the info; it’s got to be inside you.”

[…]

“Evidence suggests that when it comes to knowledge we’re interested in — anything that truly excites us and has meaning — we don’t turn off our memory. Certainly, we outsource when the details are dull, as we now do with phone numbers. These are inherently meaningless strings of information, which offer little purchase on the mind. … It makes sense that our transactive brains would hand this stuff off to machines. But when information engages us — when we really care about a subject — the evidence suggests we don’t turn off our memory at all.”

[…]

“In an ideal world, we’d all fit the Renaissance model — we’d be curious about everything, filled with diverse knowledge and thus absorbing all current events and culture like sponges. But this battle is age-old, because it’s ultimately not just technological. It’s cultural and moral and spiritual; “getting young people to care about the hard stuff” is a struggle that goes back centuries and requires constant societal arguments and work. It’s not that our media and technological environment don’t matter, of course. But the vintage of this problem indicates that the solution isn’t merely in the media environment either.”

[…]

“A tool’s most transformative uses generally take us by surprise.”

[…]

“How should you respond when you get powerful new tools for finding answers?

Think of harder questions.”

Secrets of the Stacks — Medium

Secrets of the Stacks — Medium.

Choosing books for a library like mine in New York is a full-time job. The head of acquisitions at the Society Library, Steven McGuirl, reads Publishers Weekly, Library Journal, The Times Literary Supplement, The New Yorker, The New York Review of Books, the London Review of Books, The London Times, and The New York Times to decide which fiction should be ordered. Fiction accounts for fully a quarter of the forty-eight hundred books the library acquires each year. There are standing orders for certain novelists—Martin Amis, Zadie Smith, Toni Morrison, for example. Some popular writers merit standing orders for more than one copy.

But first novels and collections of stories present a problem. McGuirl and his two assistants try to guess what the members of the library will want to read. Of course, they respond to members’ requests. If a book is requested by three people, the staff orders it. There’s also a committee of members that meets monthly to recommend books for purchase. The committee checks on the librarians’ lists and suggests titles they’ve missed. The whole enterprise balances enthusiasm and skepticism.

Boosted by reviews, prizes, large sales, word of mouth, or personal recommendations, a novel may make its way onto the library shelf, but even then it is not guaranteed a chance of being read by future generations. Libraries are constantly getting rid of books they have acquired. They have to, or they would run out of space. The polite word for this is “deaccession,” the usual word, “weeding.” I asked a friend who works for a small public library how they choose books to get rid of. Is there a formula? Who makes the decision, a person or a committee? She told me that there was a formula based on the recommendations of the industry-standard CREW manual.

CREW stands for Continuous Review Evaluation and Weeding, and the manual uses “crew” as a transitive verb, so one can talk about a library’s “crewing” its collection. It means weeding but doesn’t sound so harsh. At the heart of the CREW method is a formula consisting of three factors—the number of years since the last copyright, the number of years since the book was last checked out, and a collection of six negative factors given the acronym MUSTIE, to help decide if a book has outlived its usefulness. M. Is it Misleading or inaccurate? Is its information, as so quickly happens with medical and legal texts or travel books, for example, outdated? U. Is it Ugly? Worn beyond repair? S. Has it been Superseded by a new edition or a better account of the subject? T. Is it Trivial, of no discernible literary or scientific merit? I. Is it Irrelevant to the needs and interests of the community the library serves? E. Can it be found Elsewhere, through interlibrary loan or on the Web?

Obviously, not all the MUSTIE factors are relevant in evaluating fiction, notably Misleading and Superseded. Nor is the copyright date important. For nonfiction, the CREW formula might be 8/3/MUSTIE, which would mean “Consider a book for elimination if it is eight years since the copyright date and three years since it has been checked out and if one or more of the MUSTIE factors obtains.” But for fiction the formula is often X/2/MUSTIE, meaning the copyright date doesn’t matter, but consider a book for elimination if it hasn’t been checked out in two years and if it is TUIE—Trivial, Ugly, Irrelevant, or Elsewhere.
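To make the weeding arithmetic concrete, here is a minimal sketch in Python. The function, field names, and sample values are illustrative only, not anything prescribed by the CREW manual, which ultimately calls for judgment rather than code.

```python
from datetime import date

# Illustrative sketch of the CREW-style test described above.
# "8/3/MUSTIE" for nonfiction: at least 8 years since copyright, at least 3
# years since the last checkout, and one or more MUSTIE factors.
# "X/2/MUSTIE" for fiction: the copyright age is ignored and the checkout
# threshold drops to 2 years.
MUSTIE = {"misleading", "ugly", "superseded", "trivial", "irrelevant", "elsewhere"}

def consider_for_weeding(copyright_year, last_checkout_year, factors,
                         max_copyright_age=8, max_checkout_age=3,
                         ignore_copyright=False, year=None):
    """Return True if a book should be considered for elimination."""
    year = year or date.today().year
    factors = {f.lower() for f in factors} & MUSTIE
    old_enough = ignore_copyright or (year - copyright_year >= max_copyright_age)
    unread = year - last_checkout_year >= max_checkout_age
    return old_enough and unread and bool(factors)

# A dated legal text that nobody has borrowed in years: a candidate under 8/3/MUSTIE.
print(consider_for_weeding(2010, 2019, {"misleading"}, year=2024))          # True

# A novel unread for two years: a candidate under X/2/MUSTIE.
print(consider_for_weeding(2022, 2021, {"trivial"}, max_checkout_age=2,
                           ignore_copyright=True, year=2024))               # True
```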

[…]

People who feel strongly about retaining books in libraries have a simple way to combat the removal of treasured volumes. Every system of elimination is based, no matter what anyone says, on circulation counts: the number of years that have elapsed since a book was last checked out, or the number of times it has been checked out overall. So if you feel strongly about a book, go to every library you have access to and check out the volume you care about. Take it home awhile. Read it or don’t. Keep it beside you as you read the same book on a Kindle, Nook, or iPad. Let it breathe the air of your home, and then take it back to the library, knowing you have fought the guerrilla war for physical books.

[…]

So many factors affect a novel’s chances of surviving, to say nothing of its becoming one of the immortal works we call a classic: how a book is initially reviewed, whether it sells, whether people continue to read it, whether it is taught in schools, whether it is included in college curricula, what literary critics say about it later, how it responds to various political currents as time moves on.

[…]

De Rerum Natura, lost for fifteen hundred years, was found and its merit recognized. But how many other works of antiquity were not found? How many works from past centuries never got published or, published, were never read?

If you want to see how slippery a judgment is “literary merit” and how unlikely quality is to be recognized at first glance, nothing is more fun—or more comforting to writers—than to read rejection letters or terrible reviews of books that have gone on to prove indispensable to the culture. This, for example, is how the New York Times reviewer greeted Lolita: “Lolita . . . is undeniably news in the world of books. Unfortunately, it is bad news. There are two equally serious reasons why it isn’t worth any adult reader’s attention. The first is that it is dull, dull, dull in a pretentious, florid and archly fatuous fashion. The second is that it is repulsive.”

Negative reviews are fun to write and fun to read, but the world doesn’t need them, since the average work of literary fiction is, in Laura Miller’s words, “invisible to the average reader.” It appears and vanishes from the scene largely unnoticed and unremarked.

[…]

Whether reviews are positive or negative, the attention they bring to a book is rarely sufficient, and it is becoming harder and harder for a novel to lift itself from obscurity. In the succinct and elegant words of James Gleick, “The merchandise of the information economy is not information; it is attention. These commodities have an inverse relationship. When information is cheap, attention becomes expensive.” These days, besides writing, novelists must help draw attention to what they write, tweeting, friending, blogging, and generating meta tags—unacknowledged legislators to Shelley, but now more like unpaid publicists.

On the Web, everyone can be a reviewer, and a consensus about a book can be established covering a range of readers potentially as different as Laura Miller’s cousins and the members of the French Academy. In this changed environment, professional reviewers may become obsolete, replaced by crowd wisdom. More than two centuries ago, Samuel Johnson invented the idea of crowd wisdom as applied to literature, calling it “the common reader.” “I rejoice to concur with the common reader; for by the common sense of readers, uncorrupted by literary prejudices, after all the refinements of subtilty and the dogmatism of learning, must be finally decided all claim to poetical honours.” Virginia Woolf agreed and titled her wonderful collection of essays on literature The Common Reader.

[…]

The Common Reader, however, is not one person. It is a statistical average, the mean between this reader’s one star for One God Clapping and twenty other readers’ enthusiasm for this book, the autobiography of a “Zen rabbi,” producing a four-star rating. What the rating says to me is that if I were the kind of person who wanted to read the autobiography of a Zen rabbi, I’d be very likely to enjoy it. That Amazon reviewers are a self-selected group needs underlining. If you are like Laura Miller’s cousins who have never heard of Jonathan Franzen, you will be unlikely to read Freedom, and even less likely to review it. If you read everything that John Grisham has ever written, you will probably read his latest novel and might even report on it. If you read Lolita, it’s either because you’ve heard it’s one of the great novels of the twentieth century or because you’ve heard it’s a dirty book. Whatever brings you to it, you are likely to enjoy it. Four and a half stars.

The idea of the wisdom of crowds, popularized by James Surowiecki, dates to 1906, when the English statistician Francis Galton (Darwin’s cousin) focused on a contest at a county fair for guessing the weight of an ox. For sixpence, a person could buy a ticket, fill in his name, and guess the weight of the animal after butchering. The person whose guess was closest to the actual weight of the ox won a prize. Galton, having the kind of mind he did, played around with the numbers he gathered from this contest and discovered that the average of all the guesses was only one pound off from the actual weight of the ox, 1,198 pounds. If you’re looking for the Common Reader’s response to a novel, you can’t take any one review as truth but merely as a passionate assertion of one point of view, one person’s guess at the weight of the ox.
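The statistical point is easy to reproduce with invented numbers: individual guesses scatter widely, yet their average hugs the true value. The simulated guesses below are randomly generated and stand in for Galton’s actual data, which is not reproduced here.

```python
import random
from statistics import mean

# Simulated version of the ox-weighing contest described above. The guesses
# are randomly generated, not Galton's data.
random.seed(1906)
true_weight = 1198                                        # pounds, as cited above
guesses = [random.gauss(true_weight, 75) for _ in range(800)]

print(round(mean(guesses)))                               # close to 1198
print(round(max(abs(g - true_weight) for g in guesses)))  # the worst single guess is far off
```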

“I really enjoy reading this novel it makes you think about a sex offender’s mind. I’m also happy that I purchased this novel on Amazon because I was able to find it easily with a suitable price for me.”

“Vladimir has a way with words. The prose in this book is simply remarkable.”

“Overrated and pretentious. Overly flowery language encapsulating an uninteresting and overdone plot. Older man and pre-adolescent hypersexual woman—please let’s not exaggerate the originality of that concept, it has existed for millennia now. In fact, you’ll find similar stories in every chapter of the Bible.”

“Like many other folk I read Lolita when it first came out. I was a normally-sexed man and I found it excitingly erotic. Now, nearing 80, I still felt the erotic thrill but was more open to the beauty of Nabokov’s prose.”

“Presenting the story from Humbert’s self-serving viewpoint was Nabokov’s peculiarly brilliant means by which a straight, non-perverted reader is taken to secret places she/he might otherwise dare not go.”

“A man who was ‘hip’ while maintaining a bemused detachment from trendiness, what would he have made of shopping malls? Political correctness? Cable television? Alternative music? The Internet? . . . Or some of this decade’s greatest scandals, near-Nabokovian events in themselves, like Joey Buttafuoco, Lorena Bobbitt, O. J. Simpson, Bill and Monica? Wherever he is (Heaven, Hell, Nirvana, Anti-Terra), I would like to thank Nabokov for providing us with a compelling and unique model of how to read, write, and perceive life.”

What would the hip, bemused author of Lolita have made of Amazon ratings? I like to think that he would have reveled in them as evidence of the cheerful self-assurance, the lunatic democracy of his adopted culture.

“Once a populist gimmick, the reviews are vital to make sure a new product is not lost in the digital wilderness,” the Times reports.

Amazon’s own gatekeepers have removed thousands of reviews from its site in an attempt to curb what has become widespread manipulation of its ratings. They eliminated some reviews by family members and people considered too biased to be entitled to an opinion, competing writers, for example. They did not, however, eliminate reviews by people who admit they have not read the book. “We do not require people to have experienced the product in order to review,” said an Amazon spokesman.

A World Digital Library Is Coming True! by Robert Darnton | The New York Review of Books

A World Digital Library Is Coming True! by Robert Darnton | The New York Review of Books.


In the scramble to gain market share in cyberspace, something is getting lost: the public interest. Libraries and laboratories—crucial nodes of the World Wide Web—are buckling under economic pressure, and the information they diffuse is being diverted away from the public sphere, where it can do most good.

Not that information comes free or “wants to be free,” as Internet enthusiasts proclaimed twenty years ago. It comes filtered through expensive technologies and financed by powerful corporations. No one can ignore the economic realities that underlie the new information age, but who would argue that we have reached the right balance between commercialization and democratization?

Consider the cost of scientific periodicals, most of which are published exclusively online. It has increased at four times the rate of inflation since 1986. The average price of a year’s subscription to a chemistry journal is now $4,044. In 1970 it was $33. A subscription to the Journal of Comparative Neurology cost $30,860 in 2012—the equivalent of six hundred monographs. Three giant publishers—Reed Elsevier, Wiley-Blackwell, and Springer—publish 42 percent of all academic articles, and they make giant profits from them. In 2013 Elsevier turned a 39 percent profit on an income of £2.1 billion from its science, technical, and medical journals.

All over the country research libraries are canceling subscriptions to academic journals, because they are caught between decreasing budgets and increasing costs. The logic of the bottom line is inescapable, but there is a higher logic that deserves consideration—namely, that the public should have access to knowledge produced with public funds.

[…]

The struggle over academic journals should not be dismissed as an “academic question,” because a great deal is at stake. Access to research drives large sectors of the economy—the freer and quicker the access, the more powerful its effect. The Human Genome Project cost $3.8 billion in federal funds to develop, and thanks to the free accessibility of the results, it has already produced $796 billion in commercial applications. Linux, the free, open-source software system, has brought in billions in revenue for many companies, including Google.

[…]

According to a study completed in 2006 by John Houghton, a specialist in the economics of information, a 5 percent increase in the accessibility of research would have produced an increase in productivity worth $16 billion.

[…]

Yet accessibility may decrease, because the price of journals has escalated so disastrously that libraries—and also hospitals, small-scale laboratories, and data-driven enterprises—are canceling subscriptions. Publishers respond by charging still more to institutions with budgets strong enough to carry the additional weight.

[…]

In the long run, journals can be sustained only through a transformation of the economic basis of academic publishing. The current system developed as a component of the professionalization of academic disciplines in the nineteenth century. It served the public interest well through most of the twentieth century, but it has become dysfunctional in the age of the Internet.

[…]

The entire system of communicating research could be made less expensive and more beneficial for the public by a process known as “flipping.” Instead of subsisting on subscriptions, a flipped journal covers its costs by charging processing fees before publication and making its articles freely available, as “open access,” afterward. That will sound strange to many academic authors. Why, they may ask, should we pay to get published? But they may not understand the dysfunctions of the present system, in which they furnish the research, writing, and refereeing free of charge to the subscription journals and then buy back the product of their work—not personally, of course, but through their libraries—at an exorbitant price. The public pays twice—first as taxpayers who subsidize the research, then as taxpayers or tuition payers who support public or private university libraries.

By creating open-access journals, a flipped system directly benefits the public. Anyone can consult the research free of charge online, and libraries are liberated from the spiraling costs of subscriptions. Of course, the publication expenses do not evaporate miraculously, but they are greatly reduced, especially for nonprofit journals, which do not need to satisfy shareholders. The processing fees, which can run to a thousand dollars or more, depending on the complexities of the text and the process of peer review, can be covered in various ways. They are often included in research grants to scientists, and they are increasingly financed by the author’s university or a group of universities.

[…]

The main impediment to public-spirited publishing of this kind is not financial. It involves prestige. Scientists prefer to publish in expensive journals like Nature, Science, and Cell, because the aura attached to them glows on CVs and promotes careers. But some prominent scientists have undercut the prestige effect by founding open-access journals and recruiting the best talent to write and referee for them. Harold Varmus, a Nobel laureate in physiology and medicine, has made a huge success of Public Library of Science, and Paul Crutzen, a Nobel laureate in chemistry, has done the same with Atmospheric Chemistry and Physics. They have proven the feasibility of high-quality, open-access journals. Not only do they cover costs through processing fees, but they produce a profit—or rather, a “surplus,” which they invest in further open-access projects.

[…]

DASH now includes 17,000 articles, and it has registered three million downloads from countries on every continent. Repositories at other universities also report very high download counts. They make knowledge available to a broad public, including researchers who have no connection to an academic institution; and at the same time, they make it possible for writers to reach far more readers than would be possible by means of subscription journals.

The desire to reach readers may be one of the most underestimated forces in the world of knowledge. Aside from journal articles, academics produce a large number of books, yet they rarely make much money from them. Authors in general derive little income from a book a year or two after its publication. Once its commercial life has ended, it dies a slow death, lying unread, except for rare occasions, on the shelves of libraries, inaccessible to the vast majority of readers. At that stage, authors generally have one dominant desire—for their work to circulate freely through the public; and their interest coincides with the goals of the open-access movement.

[…]

All sorts of complexities remain to be worked out before such a plan can succeed: How to accommodate the interests of publishers, who want to keep books on their backlists? Where to leave room for rights holders to opt out and for the revival of books that take on new economic life? Whether to devise some form of royalties, as in the extended collective licensing programs that have proven to be successful in the Scandinavian countries? It should be possible to enlist vested interests in a solution that will serve the public interest, not by appealing to altruism but rather by rethinking business plans in ways that will make the most of modern technology.

Several experimental enterprises illustrate possibilities of this kind. Knowledge Unlatched gathers commitments and collects funds from libraries that agree to purchase scholarly books at rates that will guarantee payment of a fixed amount to the publishers who are taking part in the program. The more libraries participating in the pool, the lower the price each will have to pay. While electronic editions of the books will be available everywhere free of charge through Knowledge Unlatched, the subscribing libraries will have the exclusive right to download and print out copies.

[…]

OpenEdition Books, located in Marseille, operates on a somewhat similar principle. It provides a platform for publishers who want to develop open-access online collections, and it sells the e-content to subscribers in formats that can be downloaded and printed. Operating from Cambridge, England, Open Book Publishers also charges for PDFs, which can be used with print-on-demand technology to produce physical books, and it applies the income to subsidies for free copies online. It recruits academic authors who are willing to provide manuscripts without payment in order to reach the largest possible audience and to further the cause of open access.

The famous quip of Samuel Johnson, “No man but a blockhead ever wrote, except for money,” no longer has the force of a self-evident truth in the age of the Internet. By tapping the goodwill of unpaid authors, Open Book Publishers has produced forty-one books in the humanities and social sciences, all rigorously peer-reviewed, since its foundation in 2008. “We envisage a world in which all research is freely available to all readers,” it proclaims on its website.

[…]

Google set out to digitize millions of books in research libraries and then proposed to sell subscriptions to the resulting database. Having provided the books to Google free of charge, the libraries would then have to buy back access to them, in digital form, at a price to be determined by Google and that could escalate as disastrously as the prices of scholarly journals.

Google Book Search actually began as a search service, which made available only snippets or short passages of books. But because many of the books were covered by copyright, Google was sued by the rights holders; and after lengthy negotiations the plaintiffs and Google agreed on a settlement, which transformed the search service into a gigantic commercial library financed by subscriptions. But the settlement had to be approved by a court, and on March 22, 2011, the Southern Federal District Court of New York rejected it on the grounds that, among other things, it threatened to constitute a monopoly in restraint of trade. That decision put an end to Google’s project and cleared the way for the DPLA to offer digitized holdings—but nothing covered by copyright—to readers everywhere, free of charge.

Aside from its not-for-profit character, the DPLA differs from Google Book Search in a crucial respect: it is not a vertical organization erected on a database of its own. It is a distributed, horizontal system, which links digital collections already in the possession of the participating institutions, and it does so by means of a technological infrastructure that makes them instantly available to the user with one click on an electronic device. It is fundamentally horizontal, both in organization and in spirit.

Instead of working from the top down, the DPLA relies on “service hubs,” or small administrative centers, to promote local collections and aggregate them at the state level. “Content hubs” located in institutions with collections of at least 250,000 items—for example, the New York Public Library, the Smithsonian Institution, and the collective digital repository known as HathiTrust—provide the bulk of the DPLA’s holdings. There are now two dozen service and content hubs, and soon, if financing can be found, they will exist in every state of the union.

Such horizontality reinforces the democratizing impulse behind the DPLA. Although it is a small, nonprofit corporation with headquarters and a minimal staff in Boston, the DPLA functions as a network that covers the entire country. It relies heavily on volunteers. More than a thousand computer scientists collaborated free of charge in the design of its infrastructure, which aggregates metadata (catalog-type descriptions of documents) in a way that allows easy searching.
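The horizontal design is easier to picture with a toy aggregator. The hub names and records below are hypothetical stand-ins for the catalog-type metadata the hubs contribute; they are not the DPLA’s actual schema or API.

```python
# Toy metadata aggregator in the spirit of the horizontal design described
# above: hubs contribute catalog-type records, and a shared index makes them
# searchable in one place. Hypothetical data, not the DPLA's schema or API.

def build_index(hubs):
    """Merge records from all hubs into one index keyed by title words."""
    index = {}
    for hub, records in hubs.items():
        for record in records:
            for term in record["title"].lower().split():
                index.setdefault(term, []).append({**record, "hub": hub})
    return index

hubs = {
    "new_york":      [{"title": "Revolutionary War manuscript", "type": "manuscript"}],
    "chicago":       [{"title": "Revolutionary era pamphlet",   "type": "pamphlet"}],
    "san_francisco": [{"title": "Map of the colonies",          "type": "map"}],
}

index = build_index(hubs)
for hit in index.get("revolutionary", []):
    print(hit["hub"], "-", hit["title"])
```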

Therefore, for example, a ninth-grader in Dallas who is preparing a report on an episode of the American Revolution can download a manuscript from New York, a pamphlet from Chicago, and a map from San Francisco in order to study them side by side. Unfortunately, he or she will not be able to consult any recent books, because copyright laws keep virtually everything published after 1923 out of the public domain. But the courts, which are considering a flurry of cases about the “fair use” of copyright, may sustain a broad-enough interpretation for the DPLA to make a great deal of post-1923 material available for educational purposes.

A small army of volunteer “Community Reps,” mainly librarians with technical skills, is fanning out across the country to promote various outreach programs sponsored by the DPLA. They reinforce the work of the service hubs, which concentrate on public libraries as centers of collection-building. A grant from the Bill and Melinda Gates Foundation is financing a Public Library Partnerships Project to train local librarians in the latest digital technologies. Equipped with new skills, the librarians will invite people to bring in material of their own—family letters, high school yearbooks, postcard collections stored in trunks and attics—to be digitized, curated, preserved, and made accessible online by the DPLA. While developing local community consciousness about culture and history, this project will also help integrate local collections in the national network.

[…]

In these and other ways, the DPLA will go beyond its basic mission of making the cultural heritage of America available to all Americans. It will provide opportunities for them to interact with the material and to develop materials of their own. It will empower librarians and reinforce public libraries everywhere, not only in the United States. Its technological infrastructure has been designed to be interoperable with that of Europeana, a similar enterprise that is aggregating the holdings of libraries in the twenty-eight member states of the European Union. The DPLA’s collections include works in more than four hundred languages, and nearly 30 percent of its users come from outside the US. Ten years from now, the DPLA’s first year of activity may look like the beginning of an international library system.

It would be naive, however, to imagine a future free from the vested interests that have blocked the flow of information in the past. The lobbies at work in Washington also operate in Brussels, and a newly elected European Parliament will soon have to deal with the same issues that remain to be resolved in the US Congress. Commercialization and democratization operate on a global scale, and a great deal of access must be opened before the World Wide Web can accommodate a worldwide library.

The Fasinatng … Frustrating … Fascinating History of Autocorrect | Gadget Lab | WIRED

The Fasinatng … Frustrating … Fascinating History of Autocorrect | Gadget Lab | WIRED.

It’s not too much of an exaggeration to call autocorrect the overlooked underwriter of our era of mobile prolixity. Without it, we wouldn’t be able to compose windy love letters from stadium bleachers, write novels on subway commutes, or dash off breakup texts while in line at the post office. Without it, we probably couldn’t even have phones that look anything like the ingots we tickle—the whole notion of touchscreen typing, where our podgy physical fingers are expected to land with precision on tiny virtual keys, is viable only when we have some serious software to tidy up after us. Because we know autocorrect is there as brace and cushion, we’re free to write with increased abandon, at times and in places where writing would otherwise be impossible. Thanks to autocorrect, the gap between whim and word is narrower than it’s ever been, and our world is awash in easily rendered thought.

[…]

I find him in a drably pastel conference room at Microsoft headquarters in Redmond, Washington. Dean Hachamovitch—inventor on the patent for autocorrect and the closest thing it has to an individual creator—reaches across the table to introduce himself.

[…]

Hachamovitch, now a vice president at Microsoft and head of data science for the entire corporation, is a likable and modest man. He freely concedes that he types teh as much as anyone. (Almost certainly he does not often type hte. As researchers have discovered, initial-letter transposition is a much rarer error.)

[…]

The notion of autocorrect was born when Hachamovitch began thinking about a functionality that already existed in Word. Thanks to Charles Simonyi, the longtime Microsoft executive widely recognized as the father of graphical word processing, Word had a “glossary” that could be used as a sort of auto-expander. You could set up a string of words—like insert logo—which, when typed and followed by a press of the F3 button, would get replaced by a JPEG of your company’s logo. Hachamovitch realized that this glossary could be used far more aggressively to correct common mistakes. He drew up a little code that would allow you to press the left arrow and F3 at any time and immediately replace teh with the. His aha moment came when he realized that, because English words are space-delimited, the space bar itself could trigger the replacement, to make correction … automatic! Hachamovitch drew up a list of common errors, and over the next years he and his team went on to solve many of the thorniest. Seperate would automatically change to separate. Accidental cap locks would adjust immediately (making dEAR grEG into Dear Greg). One Microsoft manager dubbed them the Department of Stupid PC Tricks.
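The trick is simple enough to sketch: because words end at a space, the space bar can double as the trigger for a glossary lookup. Here is a toy version; the correction table is illustrative rather than Word’s actual list, and the real feature grew considerably more elaborate than this.

```python
# Toy version of the space-triggered replacement described above. The
# correction table is illustrative, not Word's actual list.
CORRECTIONS = {"teh": "the", "seperate": "separate", "adn": "and"}

def type_text(keystrokes):
    """Simulate typing character by character, correcting each word at the
    moment the space bar 'commits' it."""
    output, word = [], []
    for ch in keystrokes:
        if ch == " ":
            committed = "".join(word)
            output.append(CORRECTIONS.get(committed, committed))
            output.append(" ")
            word = []
        else:
            word.append(ch)
    output.append("".join(word))   # a final word never followed by a space stays as typed
    return "".join(output)

print(type_text("teh cat adn teh dog sat seperate "))
# prints: "the cat and the dog sat separate "
```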

[…]

One day Hachamovitch went into his boss’s machine and changed the autocorrect dictionary so that any time he typed Dean it was automatically changed to the name of his coworker Mike, and vice versa. (His boss kept both his computer and office locked after that.) Children were even quicker to grasp the comedic ramifications of the new tool. After Hachamovitch went to speak to his daughter’s third-grade class, he got emails from parents that read along the lines of “Thank you for coming to talk to my daughter’s class, but whenever I try to type her name I find it automatically transforms itself into ‘The pretty princess.’”

[…]

On idiom, some of its calls seemed fairly clear-cut: gorilla warfare became guerrilla warfare, for example, even though a wildlife biologist might find that an inconvenient assumption. But some of the calls were quite tricky, and one of the trickiest involved the issue of obscenity. On one hand, Word didn’t want to seem priggish; on the other, it couldn’t very well go around recommending the correct spelling of mothrefukcer. Microsoft was sensitive to these issues. The solution lay in expanding one of spell-check’s most special lists, bearing the understated title: “Words which should neither be flagged nor suggested.”
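The mechanics of that special list are easy to sketch: words on it are accepted when typed deliberately, but they are never drawn on as suggestions for anything else. The tiny word lists below are invented stand-ins (the two special entries echo the list Thorpe compiled, quoted further down); Word’s real lists were far larger and proprietary.

```python
import difflib

# Sketch of the "neither flagged nor suggested" behavior described above.
# Invented stand-in word lists; Word's real lists were far larger.
DICTIONARY = {"the", "separate", "guerrilla", "warfare", "bungalow"}
NEITHER_FLAG_NOR_SUGGEST = {"asshole", "bimbo"}

def check(word):
    """Return (flagged, suggestions) for a typed word."""
    w = word.lower()
    if w in DICTIONARY or w in NEITHER_FLAG_NOR_SUGGEST:
        return False, []              # accepted silently either way
    # Suggestions come only from the ordinary dictionary, never the special list.
    return True, difflib.get_close_matches(w, DICTIONARY, n=2)

print(check("guerrilla"))   # (False, [])
print(check("bimbo"))       # (False, []): accepted, though it will never be offered
print(check("bimbbo"))      # (True, []): flagged, and 'bimbo' is not among the suggestions
```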

[…]

One day Vignola sent Bill Gates an email. (Thorpe couldn’t recall who Bill Vignola was or what he did.) Whenever Bill Vignola typed his own name in MS Word, the email to Gates explained, it was automatically changed to Bill Vaginal. Presumably Vignola caught this sometimes, but not always, and no doubt this serious man was sad to come across like a character in a Thomas Pynchon novel. His email made it down the chain of command to Thorpe. And Bill Vaginal wasn’t the only complainant: As Thorpe recalls, Goldman Sachs was mad that Word was always turning it into Goddamn Sachs.

Thorpe went through the dictionary and took out all the words marked as “vulgar.” Then he threw in a few anatomical terms for good measure. The resulting list ran to hundreds of entries:

anally, asshole, battle-axe, battleaxe, bimbo, booger, boogers, butthead, Butthead …

With these sorts of master lists in place—the corrections, the exceptions, and the to-be-primly-ignored—the joists of autocorrect, then still a subdomain of spell-check, were in place for the early releases of Word. Microsoft’s dominance at the time ensured that autocorrect became globally ubiquitous, along with some of its idiosyncrasies. By the early 2000s, European bureaucrats would begin to notice what came to be called the Cupertino effect, whereby the word cooperation (bizarrely included only in hyphenated form in the standard Word dictionary) would be marked wrong, with a suggested change to Cupertino. There are thus many instances where one parliamentary back-bencher or another longs for increased Cupertino between nations. Since then, linguists have adopted the word cupertino as a term of art for such trapdoors that have been assimilated into the language.
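A toy reconstruction shows how the trapdoor opens: the unhyphenated spelling is missing from the dictionary, so it gets flagged, and the suggester then ranks whatever nearby entries it is allowed to offer. The tiny word list, the difflib similarity measure, and the rule that hyphenated entries are never offered as suggestions are all assumptions of this sketch, made to reproduce the behavior described; Word’s actual ranking was proprietary.

```python
import difflib

# Toy reconstruction of the Cupertino trapdoor described above. The word list,
# the similarity measure, and the no-hyphenated-suggestions rule are
# assumptions of this sketch, not Word's actual algorithm.
DICTIONARY = ["co-operation", "Cupertino", "California", "nations"]

def suggest(word, n=3):
    """Return None if the word is known, otherwise ranked suggestions."""
    if word in DICTIONARY:
        return None
    candidates = [w for w in DICTIONARY if "-" not in w]   # assumed restriction
    return sorted(candidates, reverse=True,
                  key=lambda w: difflib.SequenceMatcher(None, word.lower(), w.lower()).ratio())[:n]

print(suggest("cooperation"))   # ['Cupertino', 'nations', 'California']: the trapdoor ranks first
print(suggest("Cupertino"))     # None: in the dictionary, so never flagged
```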

[…]

Autocorrection is no longer an overqualified intern drawing up lists of directives; it’s now a vast statistical affair in which petabytes of public words are examined to decide when a usage is popular enough to become a probabilistically savvy replacement. The work of the autocorrect team has been made algorithmic and outsourced to the cloud.

A handful of factors are taken into account to weight the variables: keyboard proximity, phonetic similarity, linguistic context. But it’s essentially a big popularity contest. A Microsoft engineer showed me a slide where somebody was trying to search for the long-named Austrian action star who became governor of California. Schwarzenegger, he explained, “is about 10,000 times more popular in the world than its variants”—Shwaranegar or Scuzzynectar or what have you. Autocorrect has become an index of the most popular way to spell and order certain words.
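Here is a minimal sketch of that kind of weighting. The candidates, frequency counts, and weights are invented for illustration; real systems fold in keyboard geometry, phonetics, and surrounding context rather than a single string-similarity score.

```python
import difflib

# Sketch of the weighted "popularity contest" described above. The candidate
# list, frequency counts, and weights are invented; real models are far richer.
CORPUS_FREQUENCY = {                 # hypothetical counts of each spelling in the wild
    "Schwarzenegger": 10_000_000,
    "Shwaranegar": 1_000,
    "Scuzzynectar": 10,
}
TOTAL = sum(CORPUS_FREQUENCY.values())

def score(typed, candidate, w_similarity=0.5, w_popularity=0.5):
    """Blend string similarity with how common the candidate spelling is."""
    similarity = difflib.SequenceMatcher(None, typed.lower(), candidate.lower()).ratio()
    popularity = CORPUS_FREQUENCY[candidate] / TOTAL
    return w_similarity * similarity + w_popularity * popularity

typed = "Shwarzeneger"
for candidate in sorted(CORPUS_FREQUENCY, key=lambda c: score(typed, c), reverse=True):
    print(f"{candidate:15} {score(typed, candidate):.3f}")
# "Schwarzenegger" comes out on top: it is both close to the typo and
# overwhelmingly more popular than the rival spellings.
```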

When English spelling was first standardized, it was by the effective fiat of those who controlled the communicative means of production. Dictionaries and usage guides have always represented compromises between top-down prescriptivists—those who believe language ought to be used a certain way—and bottom-up descriptivists—those who believe, instead, that there’s no ought about it.

The emerging consensus on usage will be a matter of statistical arbitration, between the way “most” people spell something and the way “some” people do. If it proceeds as it has, it’s likely to be a winner-take-all affair, as alternatives drop out. (Though Apple’s recent introduction of personalized, “contextual” autocorrect—which can distinguish between the language you use with your friends and the language you use with your boss—might complicate that process of standardization and allow us the favor of our characteristic errors.)

[…]

The possibility of linguistic communication is grounded in the fact of what some philosophers of language have called the principle of charity: The first step in a successful interpretation of an utterance is the belief that it somehow accords with the universe as we understand it. This means that we have a propensity to take a sort of ownership over even our errors, hoping for the possibility of meaning in even the most perverse string of letters. We feel honored to have a companion like autocorrect who trusts that, despite surface clumsiness or nonsense, inside us always smiles an articulate truth.

[…]

Today the influence of autocorrect is everywhere: A commenter on the Language Log blog recently mentioned hearing of an entire dialect in Asia based on phone cupertinos, where teens used the first suggestion from autocomplete instead of their chosen word, thus creating a slang that others couldn’t decode. (It’s similar to the Anglophone teenagers who, in a previous texting era, claimed to have replaced the term of approval cool with that of book because of happenstance T9 input priority.) Surrealists once encouraged the practice of écriture automatique, or automatic writing, in order to reveal the peculiar longings of the unconscious. The crackpot suggestions of autocorrect have become our own form of automatic writing—but what they reveal are the peculiar statistics of a world id.