Tag Archives: google

What Google Learned From Its Quest to Build the Perfect Team – The New York Times

New research reveals surprising truths about why some work groups thrive and others falter.

Source: What Google Learned From Its Quest to Build the Perfect Team – The New York Times

Five years ago, Google — one of the most public proselytizers of how studying workers can transform productivity — became focused on building the perfect team. In the last decade, the tech giant has spent untold millions of dollars measuring nearly every aspect of its employees’ lives. Google’s People Operations department has scrutinized everything from how frequently particular people eat together (the most productive employees tend to build larger networks by rotating dining companions) to which traits the best managers share (unsurprisingly, good communication and avoiding micromanaging are critical; more shocking, this was news to many Google managers).

The company’s top executives long believed that building the best teams meant combining the best people. They embraced other bits of conventional wisdom as well, like ‘‘It’s better to put introverts together,’’ said Abeer Dubey, a manager in Google’s People Analytics division, or ‘‘Teams are more effective when everyone is friends away from work.’’ But, Dubey went on, ‘‘it turned out no one had really studied which of those were true.’’

In 2012, the company embarked on an initiative — code-named Project Aristotle — to study hundreds of Google’s teams and figure out why some stumbled while others soared. Dubey, a leader of the project, gathered some of the company’s best statisticians, organizational psychologists, sociologists and engineers. He also needed researchers. Julia Rozovsky, by then, had decided that what she wanted to do with her life was study people’s habits and tendencies. After graduating from Yale, she was hired by Google and was soon assigned to Project Aristotle.

Project Aristotle’s researchers began by reviewing a half-century of academic studies looking at how teams worked. Were the best teams made up of people with similar interests? Or did it matter more whether everyone was motivated by the same kinds of rewards? Based on those studies, the researchers scrutinized the composition of groups inside Google: How often did teammates socialize outside the office? Did they have the same hobbies? Were their educational backgrounds similar? Was it better for all teammates to be outgoing or for all of them to be shy? They drew diagrams showing which teams had overlapping memberships and which groups had exceeded their departments’ goals. They studied how long teams stuck together and if gender balance seemed to have an impact on a team’s success.

No matter how researchers arranged the data, though, it was almost impossible to find patterns — or any evidence that the composition of a team made any difference. ‘‘We looked at 180 teams from all over the company,’’ Dubey said. ‘‘We had lots of data, but there was nothing showing that a mix of specific personality types or skills or backgrounds made any difference. The ‘who’ part of the equation didn’t seem to matter.’’

As they struggled to figure out what made a team successful, Rozovsky and her colleagues kept coming across research by psychologists and sociologists that focused on what are known as ‘‘group norms.’’ Norms are the traditions, behavioral standards and unwritten rules that govern how we function when we gather: One team may come to a consensus that avoiding disagreement is more valuable than debate; another team might develop a culture that encourages vigorous arguments and spurns groupthink. Norms can be unspoken or openly acknowledged, but their influence is often profound. Team members may behave in certain ways as individuals — they may chafe against authority or prefer working independently — but when they gather, the group’s norms typically override individual proclivities and encourage deference to the team.

Project Aristotle’s researchers began searching through the data they had collected, looking for norms. They looked for instances when team members described a particular behavior as an ‘‘unwritten rule’’ or when they explained certain things as part of the ‘‘team’s culture.’’ Some groups said that teammates interrupted one another constantly and that team leaders reinforced that behavior by interrupting others themselves. On other teams, leaders enforced conversational order, and when someone cut off a teammate, group members would politely ask everyone to wait his or her turn. Some teams celebrated birthdays and began each meeting with informal chitchat about weekend plans. Other groups got right to business and discouraged gossip. There were teams that contained outsize personalities who hewed to their group’s sedate norms, and others in which introverts came out of their shells as soon as meetings began.

After looking at over a hundred groups for more than a year, Project Aristotle researchers concluded that understanding and influencing group norms were the keys to improving Google’s teams. But Rozovsky, now a lead researcher, needed to figure out which norms mattered most. Google’s research had identified dozens of behaviors that seemed important, except that sometimes the norms of one effective team contrasted sharply with those of another equally successful group. Was it better to let everyone speak as much as they wanted, or should strong leaders end meandering debates? Was it more effective for people to openly disagree with one another, or should conflicts be played down? The data didn’t offer clear verdicts. In fact, the data sometimes pointed in opposite directions. The only thing worse than not finding a pattern is finding too many of them. Which norms, Rozovsky and her colleagues wondered, were the ones that successful teams shared?

Imagine you have been invited to join one of two groups.

Team A is composed of people who are all exceptionally smart and successful. When you watch a video of this group working, you see professionals who wait until a topic arises in which they are expert, and then they speak at length, explaining what the group ought to do. When someone makes a side comment, the speaker stops, reminds everyone of the agenda and pushes the meeting back on track. This team is efficient. There is no idle chitchat or long debates. The meeting ends as scheduled and disbands so everyone can get back to their desks.

Team B is different. It’s evenly divided between successful executives and middle managers with few professional accomplishments. Teammates jump in and out of discussions. People interject and complete one another’s thoughts. When a team member abruptly changes the topic, the rest of the group follows him off the agenda. At the end of the meeting, the meeting doesn’t actually end: Everyone sits around to gossip and talk about their lives.

Which group would you rather join?

In 2008, a group of psychologists from Carnegie Mellon and M.I.T. began to try to answer a question very much like this one. ‘‘Over the past century, psychologists made considerable progress in defining and systematically measuring intelligence in individuals,’’ the researchers wrote in the journal Science in 2010. ‘‘We have used the statistical approach they developed for individual intelligence to systematically measure the intelligence of groups.’’ Put differently, the researchers wanted to know if there is a collective I.Q. that emerges within a team that is distinct from the smarts of any single member.

To accomplish this, the researchers recruited 699 people, divided them into small groups and gave each a series of assignments that required different kinds of cooperation. One assignment, for instance, asked participants to brainstorm possible uses for a brick. Some teams came up with dozens of clever uses; others kept describing the same ideas in different words. Another had the groups plan a shopping trip and gave each teammate a different list of groceries. The only way to maximize the group’s score was for each person to sacrifice an item they really wanted for something the team needed. Some groups easily divvied up the buying; others couldn’t fill their shopping carts because no one was willing to compromise.

What interested the researchers most, however, was that teams that did well on one assignment usually did well on all the others. Conversely, teams that failed at one thing seemed to fail at everything. The researchers eventually concluded that what distinguished the ‘‘good’’ teams from the dysfunctional groups was how teammates treated one another. The right norms, in other words, could raise a group’s collective intelligence, whereas the wrong norms could hobble a team, even if, individually, all the members were exceptionally bright.

But what was confusing was that not all the good teams appeared to behave in the same ways. ‘‘Some teams had a bunch of smart people who figured out how to break up work evenly,’’ said Anita Woolley, the study’s lead author. ‘‘Other groups had pretty average members, but they came up with ways to take advantage of everyone’s relative strengths. Some groups had one strong leader. Others were more fluid, and everyone took a leadership role.’’

As the researchers studied the groups, however, they noticed two behaviors that all the good teams generally shared. First, on the good teams, members spoke in roughly the same proportion, a phenomenon the researchers referred to as ‘‘equality in distribution of conversational turn-taking.’’ On some teams, everyone spoke during each task; on others, leadership shifted among teammates from assignment to assignment. But in each case, by the end of the day, everyone had spoken roughly the same amount. ‘‘As long as everyone got a chance to talk, the team did well,’’ Woolley said. ‘‘But if only one person or a small group spoke all the time, the collective intelligence declined.’’

Second, the good teams all had high ‘‘average social sensitivity’’ — a fancy way of saying they were skilled at intuiting how others felt based on their tone of voice, their expressions and other nonverbal cues. One of the easiest ways to gauge social sensitivity is to show someone photos of people’s eyes and ask him or her to describe what the people are thinking or feeling — an exam known as the Reading the Mind in the Eyes test. People on the more successful teams in Woolley’s experiment scored above average on the Reading the Mind in the Eyes test. They seemed to know when someone was feeling upset or left out. People on the ineffective teams, in contrast, scored below average. They seemed, as a group, to have less sensitivity toward their colleagues.
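
The article doesn’t describe how Woolley’s team actually computed these measures, but the turn-taking idea is easy to make concrete. As a purely hypothetical illustration (not the researchers’ method), here is a short Python sketch that scores how evenly total speaking time is spread across a group, using normalized entropy, where 1.0 means everyone spoke an equal amount:

    # Hypothetical illustration only -- not the measure used in Woolley's study.
    # Score how evenly speaking time is distributed: 1.0 = perfectly equal turns,
    # values near 0 = one voice dominates the conversation.
    import math

    def turn_taking_equality(seconds_spoken):
        total = sum(seconds_spoken)
        shares = [s / total for s in seconds_spoken if s > 0]
        entropy = -sum(p * math.log(p) for p in shares)
        return entropy / math.log(len(seconds_spoken))

    print(turn_taking_equality([300, 310, 295, 305]))  # ~1.0: everyone spoke about equally
    print(turn_taking_equality([1000, 50, 40, 10]))    # ~0.28: one member dominated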

In other words, if you are given a choice between the serious-minded Team A and the free-flowing Team B, you should probably opt for Team B. Team A may be filled with smart people, all optimized for peak individual efficiency. But the group’s norms discourage equal speaking; there are few exchanges of the kind of personal information that lets teammates pick up on what people are feeling or leaving unsaid. There’s a good chance the members of Team A will continue to act like individuals once they come together, and there’s little to suggest that, as a group, they will become more collectively intelligent.

In contrast, on Team B, people may speak over one another, go on tangents and socialize instead of remaining focused on the agenda. The team may seem inefficient to a casual observer. But all the team members speak as much as they need to. They are sensitive to one another’s moods and share personal stories and emotions. While Team B might not contain as many individual stars, the sum will be greater than its parts.

Within psychology, researchers sometimes colloquially refer to traits like ‘‘conversational turn-taking’’ and ‘‘average social sensitivity’’ as aspects of what’s known as psychological safety — a group culture that the Harvard Business School professor Amy Edmondson defines as a ‘‘shared belief held by members of a team that the team is safe for interpersonal risk-taking.’’ Psychological safety is ‘‘a sense of confidence that the team will not embarrass, reject or punish someone for speaking up,’’ Edmondson wrote in a study published in 1999. ‘‘It describes a team climate characterized by interpersonal trust and mutual respect in which people are comfortable being themselves.’’

When Rozovsky and her Google colleagues encountered the concept of psychological safety in academic papers, it was as if everything suddenly fell into place. One engineer, for instance, had told researchers that his team leader was ‘‘direct and straightforward, which creates a safe space for you to take risks.’’ That team, researchers estimated, was among Google’s accomplished groups. By contrast, another engineer had told the researchers that his ‘‘team leader has poor emotional control.’’ He added: ‘‘He panics over small issues and keeps trying to grab control. I would hate to be driving with him being in the passenger seat, because he would keep trying to grab the steering wheel and crash the car.’’ That team, researchers presumed, did not perform well.

Most of all, employees had talked about how various teams felt. ‘‘And that made a lot of sense to me, maybe because of my experiences at Yale,’’ Rozovsky said. ‘‘I’d been on some teams that left me feeling totally exhausted and others where I got so much energy from the group.’’ Rozovsky’s study group at Yale was draining because the norms — the fights over leadership, the tendency to critique — put her on guard. Whereas the norms of her case-competition team — enthusiasm for one another’s ideas, joking around and having fun — allowed everyone to feel relaxed and energized.

For Project Aristotle, research on psychological safety pointed to particular norms that are vital to success. There were other behaviors that seemed important as well — like making sure teams had clear goals and creating a culture of dependability. But Google’s data indicated that psychological safety, more than anything else, was critical to making a team work.

‘‘We had to get people to establish psychologically safe environments,’’ Rozovsky told me. But it wasn’t clear how to do that. ‘‘People here are really busy,’’ she said. ‘‘We needed clear guidelines.’’

However, establishing psychological safety is, by its very nature, somewhat messy and difficult to implement. You can tell people to take turns during a conversation and to listen to one another more. You can instruct employees to be sensitive to how their colleagues feel and to notice when someone seems upset. But the kinds of people who work at Google are often the ones who became software engineers because they wanted to avoid talking about feelings in the first place.

Rozovsky and her colleagues had figured out which norms were most critical. Now they had to find a way to make communication and empathy — the building blocks of forging real connections — into an algorithm they could easily scale.

In late 2014, Rozovsky and her fellow Project Aristotle number-crunchers began sharing their findings with select groups of Google’s 51,000 employees. By then, they had been collecting surveys, conducting interviews and analyzing statistics for almost three years. They hadn’t yet figured out how to make psychological safety easy, but they hoped that publicizing their research within Google would prompt employees to come up with some ideas of their own.

[…]

Matt Sakaguchi was particularly interested in Project Aristotle because the team he previously oversaw at Google hadn’t jelled particularly well. ‘‘There was one senior engineer who would just talk and talk, and everyone was scared to disagree with him,’’ Sakaguchi said. ‘‘The hardest part was that everyone liked this guy outside the group setting, but whenever they got together as a team, something happened that made the culture go wrong.’’

[…]

When asked to rate whether the role of the team was clearly understood and whether their work had impact, members of the team gave middling to poor scores. These responses troubled Sakaguchi, because he hadn’t picked up on this discontent. He wanted everyone to feel fulfilled by their work. He asked the team to gather, off site, to discuss the survey’s results. He began by asking everyone to share something personal about themselves. He went first.

‘‘I think one of the things most people don’t know about me,’’ he told the group, ‘‘is that I have Stage 4 cancer.’’ In 2001, he said, a doctor discovered a tumor in his kidney. By the time the cancer was detected, it had spread to his spine. For nearly half a decade, it had grown slowly as he underwent treatment while working at Google. Recently, however, doctors had found a new, worrisome spot on a scan of his liver. That was far more serious, he explained.

[…]

After Sakaguchi spoke, another teammate stood and described some health issues of her own. Then another discussed a difficult breakup. Eventually, the team shifted its focus to the survey. They found it easier to speak honestly about the things that had been bothering them, their small frictions and everyday annoyances. They agreed to adopt some new norms: From now on, Sakaguchi would make an extra effort to let the team members know how their work fit into Google’s larger mission; they agreed to try harder to notice when someone on the team was feeling excluded or down.

There was nothing in the survey that instructed Sakaguchi to share his illness with the group. There was nothing in Project Aristotle’s research that said that getting people to open up about their struggles was critical to discussing a group’s norms. But to Sakaguchi, it made sense that psychological safety and emotional conversations were related. The behaviors that create psychological safety — conversational turn-taking and empathy — are part of the same unwritten rules we often turn to, as individuals, when we need to establish a bond. And those human bonds matter as much at work as anywhere else. In fact, they sometimes matter more.

‘‘I think, until the off-site, I had separated things in my head into work life and life life,’’ Laurent told me. ‘‘But the thing is, my work is my life. I spend the majority of my time working. Most of my friends I know through work. If I can’t be open and honest at work, then I’m not really living, am I?’’

What Project Aristotle has taught people within Google is that no one wants to put on a ‘‘work face’’ when they get to the office. No one wants to leave part of their personality and inner life at home. But to be fully present at work, to feel ‘‘psychologically safe,’’ we must know that we can be free enough, sometimes, to share the things that scare us without fear of recriminations. We must be able to talk about what is messy or sad, to have hard conversations with colleagues who are driving us crazy. We can’t be focused just on efficiency. Rather, when we start the morning by collaborating with a team of engineers and then send emails to our marketing colleagues and then jump on a conference call, we want to know that those people really hear us. We want to know that work is more than just labor.

[…]

The paradox, of course, is that Google’s intense data collection and number crunching have led it to the same conclusions that good managers have always known. In the best teams, members listen to one another and show sensitivity to feelings and needs.

[…]

‘‘Just having data that proves to people that these things are worth paying attention to sometimes is the most important step in getting them to actually pay attention,’’ Rozovsky told me. ‘‘Don’t underestimate the power of giving people a common platform and operating language.’’

How to Solve Google’s Crazy Open-Ended Interview Questions | Business | WIRED

How to Solve Google’s Crazy Open-Ended Interview Questions | Business | WIRED.

One of the most important tools in critical thinking about numbers is to grant yourself permission to generate wrong answers to mathematical problems you encounter. Deliberately wrong answers!

Engineers and scientists do it all the time, so there’s no reason we shouldn’t all be let in on their little secret: the art of approximating, or the “back of the napkin” calculation. As the British writer Saki wrote, “a little bit of inaccuracy saves a great deal of explanation.”

For over a decade, when Google conducted job interviews, they’d ask their applicants questions that have no answers. Google is a company whose very existence depends on innovation—on inventing things that are new and didn’t exist before, and on refining existing ideas and technologies to allow consumers to do things they couldn’t do before.

Contrast this with how most companies conduct job interviews: In the skills portion of the interview, the company wants to know if you can actually do the things that they need doing.

But Google doesn’t even know what skills they need new employees to have. What they need to know is whether an employee can think his way through a problem.

Of Piano Tuners and Skyscrapers

Consider the following question that has been asked at actual Google job interviews: How much does the Empire State Building weigh?

Now, there is no correct answer to this question in any practical sense because no one knows the answer. Google isn’t interested in the answer, though; they’re interested in the process. They want to see a reasoned, rational way of approaching the problem to give them insight into how an applicant’s mind works, how organized a thinker she is.

There are four common responses to the problem. The first two: people throw up their hands and say “that’s impossible,” or they try to look up the answer somewhere.

The third response? Asking for more information. By “weight of the Empire State Building,” do you mean with or without furniture? Do I count the people in it? But questions like this are a distraction. They don’t bring you any closer to solving the problem; they only postpone being able to start it.

The fourth response is the correct one, using approximating, or what some people call guesstimating. These types of problems are also called estimation problems or Fermi problems, after the physicist Enrico Fermi, who was famous for being able to make estimates with little or no actual data, for questions that seemed impossible to answer. Approximating involves making a series of educated guesses systematically by partitioning the problem into manageable chunks, identifying assumptions, and then using your general knowledge of the world to fill in the blanks.

How would you solve the Fermi problem of “How many piano tuners are there in Chicago?”

 

Where to begin? As with many Fermi problems, it’s often helpful to estimate some intermediate quantity, not the one you’re being asked to estimate, but something that will help you get where you want to go. In this case, it might be easier to start with the number of pianos that you think are in Chicago and then figure out how many tuners it would take to keep them in tune.

In any Fermi problem, we first lay out what it is we need to know, then list some assumptions:

  1. How often pianos are tuned

  2. How long it takes to tune a piano

  3. How many hours a year the average piano tuner works

  4. The number of pianos in Chicago

Knowing these will help you arrive at an answer. If you know how often pianos are tuned and how long it takes to tune a piano, you know how many hours are spent tuning one piano. Then you multiply that by the number of pianos in Chicago to find out how many hours are spent every year tuning Chicago’s pianos. Divide this by the number of hours each tuner works, and you have the number of tuners.

Assumption 1: The average piano owner tunes his piano once a year.

Where did this number come from? I made it up! But that’s what you do when you’re approximating. It’s certainly within an order of magnitude: The average piano owner isn’t tuning only one time every ten years, nor ten times a year. One time a year seems like a reasonable guesstimate.

Assumption 2: It takes 2 hours to tune a piano. A guess. Maybe it’s only 1 hour, but 2 is within an order of magnitude, so it’s good enough.

Assumption 3: How many hours a year does the average piano tuner work? Let’s assume 40 hours a week, and that the tuner takes 2 weeks’ vacation every year: 40 hours a week x 50 weeks is a 2,000-hour work year. Piano tuners travel to their jobs—people don’t bring their pianos in—so the piano tuner may spend 10 percent–20 percent of his or her time getting from house to house. Keep this in mind and adjust the estimate for it at the end.

Assumption 4: To estimate the number of pianos in Chicago, you might guess that 1 out of 100 people have a piano—again, a wild guess, but probably within an order of magnitude. In addition, there are schools and other institutions with pianos, many of them with multiple pianos. This estimate is trickier to base on facts, but assume that when these are factored in, they roughly equal the number of private pianos, for a total of 2 pianos for every 100 people.

Now to estimate the number of people in Chicago. If you don’t know the answer to this, you might know that it is the third-largest city in the United States after New York (8 million) and Los Angeles (4 million). You might guess 2.5 million, meaning that 25,000 people have pianos. We decided to double this number to account for institutional pianos, so the result is 50,000 pianos.

So, here are the various estimates:

  1. There are 2.5 million people in Chicago.

  2. There are 2 pianos for every 100 people.

  3. There are 50,000 pianos in Chicago.

  4. Pianos are tuned once a year.

  5. It takes 2 hours to tune a piano.

  6. Piano tuners work 2,000 hours a year.

  7. In one year, a piano tuner can tune 1,000 pianos (2,000 hours per year ÷ 2 hours per piano).

  8. It would take 50 tuners to tune 50,000 pianos (50,000 pianos ÷ 1,000 pianos tuned by each piano tuner).

  9. Add 15 percent to that number to account for travel time, meaning that there are approximately 58 piano tuners in Chicago.

What is the real answer? The Yellow Pages for Chicago lists 83. This includes some duplicates (businesses with more than one phone number are listed twice), and the category includes piano and organ technicians who are not tuners. Deduct 25 for these anomalies, and an estimate of 58 appears to be very close.
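
For readers who want to check the arithmetic, the same chain of guesses can be written as a few lines of Python. Every number below is one of the assumptions above, not measured data:

    # Back-of-the-envelope piano-tuner estimate; all inputs are the guesses above.
    chicago_population = 2_500_000          # assumed population of Chicago
    pianos_per_person = 2 / 100             # ~2 pianos per 100 people (private + institutional)
    tunings_per_year = 1                    # each piano tuned about once a year
    hours_per_tuning = 2                    # rough guess
    tuner_hours_per_year = 40 * 50          # 40 hours/week for 50 weeks = 2,000 hours
    travel_overhead = 0.15                  # ~15% of a tuner's time goes to travel

    pianos = chicago_population * pianos_per_person                # 50,000 pianos
    tuning_hours = pianos * tunings_per_year * hours_per_tuning    # 100,000 hours per year
    tuners = tuning_hours / tuner_hours_per_year                   # 50 tuners, before travel
    tuners_with_travel = tuners * (1 + travel_overhead)            # 57.5

    print(tuners_with_travel)  # 57.5 -- call it roughly 58, close to the adjusted listing count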

But Wait, What About the Empire State Building?

Back to the Google interview and the Empire State Building question. If you were sitting in that interview chair, your interviewer would ask you to think out loud and walk her through your reasoning. There is an infinity of ways one might solve the problem, but to give you a flavor of how a bright, creative, and systematic thinker might do it, here is one possible “answer.” And remember, the final number is not the point—the thought process, the set of assumptions and deliberations, is the answer.

Let’s see. One way to start would be to estimate its size, and then estimate the weight based on that. I’ll begin with some assumptions. I’m going to calculate the weight of the building empty—with no human occupants, no furnishings, appliances, or fixtures. I’m going to assume that the building has a square base and straight sides with no taper at the top, just to simplify the calculations.

For size I need to know height, length, and width. I don’t know how tall the Empire State Building is, but I know that it is definitely more than 20 stories tall and probably less than 200 stories.

I don’t know how tall one story is, but I know from other office buildings I’ve been in that the ceiling is at least 8 feet high inside each floor and that there are typically false ceilings to hide electrical wires, conduits, heating ducts, and so on. I’ll guess that these are probably 2 feet. So I’ll approximate 10–15 feet per story.

I’m going to refine my height estimate to say that the building is probably more than 50 stories high. I’ve been in lots of buildings that are 30–35 stories high. My boundary conditions are that it is between 50 and 100 stories; 50 stories works out to 500–750 feet tall (at 10–15 feet per story), and 100 stories works out to 1,000–1,500 feet tall. So my height estimate is between 500 and 1,500 feet. To make the calculations easier, I’ll take the average, 1,000 feet.

Now for its footprint. I don’t know how large its base is, but it isn’t larger than a city block, and I remember learning once that there are typically 10 city blocks to a mile.

A mile is 5,280 feet, so a city block is 1/10 of that, or 528 feet. I’ll call it 500 to make calculating easier. I’m going to guess that the Empire State Building is about half of a city block, or about 265 feet on each side. If the building is square, it is 265 x 265 feet in its length x width. I can’t do that in my head, but I know how to calculate 250 x 250 (that is, 25 x 25 = 625, and I add two zeros to get 62,500). I’ll round this total to 60,000, an easier number to work with moving forward.

Now we’ve got the size. There are several ways to go from here. All rely on the fact that most of the building is empty—that is, it is hollow. The weight of the building is mostly in the walls and floors and ceilings. I imagine that the building is made of steel (for the walls) and some combination of steel and concrete for the floors.

 

The volume of the building is its footprint times its height. My footprint estimate above was 60,000 square feet. My height estimate was 1,000 feet. So 60,000 x 1,000 = 60,000,000 cubic feet. I’m not accounting for the fact that it tapers as it goes up.

I could estimate the thickness of the walls and floors and estimate how much a cubic foot of the materials weighs and come up then with an estimate of the weight per story. Alternatively, I could set boundary conditions for the volume of the building. That is, I can say that it weighs more than an equivalent volume of solid air and less than an equivalent volume of solid steel (because it is mostly empty). The former seems like a lot of work. The latter isn’t satisfying because it generates numbers that are likely to be very far apart. Here’s a hybrid option: I’ll assume that on any given floor, 95 percent of the volume is air, and 5 percent is steel.

I’m just pulling this estimate out of the air, really, but it seems reasonable. If the width of a floor is about 265 feet, 5 percent of 265 ≈ 13 feet. That means that the walls on each side, and any interior supporting walls, total 13 feet. As an order of magnitude estimate, that checks out—the total walls can’t be a mere 1.3 feet (one order of magnitude smaller) and they’re not 130 feet (one order of magnitude larger).

I happen to remember from school that a cubic foot of air weighs 0.08 pounds. I’ll round that up to 0.1. Obviously, the building is not all air, but a lot of it is—virtually the entire interior space—and so this sets a minimum boundary for the weight. The volume times the weight of air gives an estimate of 60,000,000 cubic feet x 0.1 pounds = 6,000,000 pounds.

I don’t know what a cubic foot of steel weighs. But I can estimate that, based on some comparisons. It seems to me that 1 cubic foot of steel must certainly weigh more than a cubic foot of wood. I don’t know what a cubic foot of wood weighs either, but when I stack firewood, I know that an armful weighs about as much as a 50-pound bag of dog food. So I’m going to guess that a cubic foot of wood is about 50 pounds and that steel is about 10 times heavier than that. If the entire Empire State Building were steel, it would weigh 60,000,000 cubic feet x 500 pounds = 30,000,000,000 pounds.

This gives me two boundary conditions: 6 million pounds if the building were all air, and 30 billion pounds if it were solid steel. But as I said, I’m going to assume a mix of 5 percent steel and 95 percent air.

5% x 30 billion = 1,500,000,000 pounds

95% x 6 million = 5,700,000 pounds

Total: 1,505,700,000 pounds

or roughly 1.5 billion pounds. Converting to tons, 1 ton = 2,000 pounds, so 1.5 billion pounds/2,000 = 750,000 tons.

This hypothetical interviewee stated her assumptions at each stage, established boundary conditions, and then concluded with a point estimate of 750,000 tons. Nicely done!
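
Her reasoning is easy to replay in code. This Python sketch simply encodes the guesses above (height, footprint, the densities of air and steel, and the 5-percent-steel assumption); nothing in it is a real measurement:

    # Empire State Building weight estimate, using only the guesses from the text.
    height_ft = 1_000                         # midpoint of the 500-1,500 ft range
    footprint_sqft = 60_000                   # 265 x 265 ft, rounded down for easy arithmetic
    volume_cuft = footprint_sqft * height_ft  # 60,000,000 cubic feet

    air_lb_per_cuft = 0.1                     # 0.08 lb, rounded up
    steel_lb_per_cuft = 500                   # wood ~50 lb/cu ft, steel guessed ~10x heavier

    all_air_lb = volume_cuft * air_lb_per_cuft      # 6,000,000 lb (lower bound)
    all_steel_lb = volume_cuft * steel_lb_per_cuft  # 30,000,000,000 lb (upper bound)

    # Hybrid assumption: each floor is 5 percent steel and 95 percent air.
    weight_lb = 0.05 * all_steel_lb + 0.95 * all_air_lb  # 1,505,700,000 lb
    weight_tons = weight_lb / 2_000                       # ~750,000 tons

    print(f"{weight_lb:,.0f} lb, or about {weight_tons:,.0f} tons")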

Now Do It With Cars

Another job interviewee might approach the problem much more parsimoniously. Using the same assumptions about the size of the building, and assumptions about its being empty, a concise protocol might come down to this.

Skyscrapers are constructed from steel. Imagine that the Empire State Building is filled up with cars. Cars also have a lot of air in them and are also made of steel, so they could be a good proxy. I know that a car weighs about 2 tons and it is about 15 feet long, 5 feet wide, and 5 feet high. The floors, as estimated above, are about 265 x 265 feet each. If I stacked the cars side by side on the floor, I could get 265/15 = 18 cars in one row, which I’ll round to 20 (one of the beauties of guesstimating).

How many rows will fit? Cars are about 5 feet wide, and the building is 265 feet wide, so 265/5 = 53, which I’ll round to 50. That’s 20 cars x 50 rows = 1,000 cars on each floor. Each floor is 10 feet high and the cars are 5 feet high, so I can fit 2 cars up to the ceiling. 2 x 1,000 = 2,000 cars per floor. And 2,000 cars per floor x 100 floors = 200,000 cars. Add in their weight, 200,000 cars x 4,000 pounds = 800,000,000 pounds, or in tons, 400,000 tons.
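
Written out as a Python sketch, with the same deliberately rounded numbers:

    # Car-proxy estimate: fill the building with 2-ton cars and add up their weight.
    floors = 100
    cars_per_row = 20        # 265 ft / 15 ft per car ~ 18, rounded up to 20
    rows_per_floor = 50      # 265 ft / 5 ft per car = 53, rounded down to 50
    layers = 2               # 10-ft ceilings, 5-ft-tall cars

    cars_per_floor = cars_per_row * rows_per_floor * layers  # 2,000 cars
    total_cars = cars_per_floor * floors                     # 200,000 cars
    weight_lb = total_cars * 4_000                           # 800,000,000 lb
    print(weight_lb / 2_000)  # 400000.0 tons -- same order of magnitude as the first method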

These two methods produced estimates that are relatively close—one is a bit less than twice the other—so they help us to perform an important sanity check. Because this has become a somewhat famous problem (and a frequent Google search), the New York State Department of Transportation has taken to giving their estimate of the weight, and it comes in at 365,000 tons. So we find that both guesstimates brought us within an order of magnitude of the official estimate, which is just what was required.

These so-called back-of-the-envelope problems are just one window into assessing creativity. Another test that gets at both creativity and flexible thinking without relying on quantitative skills is the “name as many uses” test.

For example, how many uses can you come up with for a broomstick? A lemon? These are skills that can be nurtured beginning at a young age. Most jobs require some degree of creativity and flexible thinking.

The name-as-many-uses test was used as an admissions test for commercial airline flight school because pilots need to be able to react quickly in an emergency and to think of alternative approaches when systems fail. How would you put out a fire in the cabin if the fire extinguisher doesn’t work? How do you control the elevators if the hydraulic system fails?

Exercising this part of your brain involves harnessing the power of free association—the brain’s daydreaming mode—in the service of problem solving, and you want pilots who can do this in a pinch. This type of thinking can be taught and practiced, and can be nurtured in children as young as five years old. It is an increasingly important skill in a technology-driven world with untold unknowns.

There are no right answers, just opportunities to exercise ingenuity, find new connections, and to allow whimsy and experimentation to become a normal and habitual part of our thinking, which will lead to better problem solving.

“Tip-of-the-Tongue Syndrome,” Transactive Memory, and How the Internet Is Making Us Smarter | Brain Pickings

“Tip-of-the-Tongue Syndrome,” Transactive Memory, and How the Internet Is Making Us Smarter | Brain Pickings.

Vannevar Bush’s ‘memex’ — short for ‘memory index’ — a primitive vision for a personal hard drive for information storage and management.

“At their best, today’s digital tools help us see more, retain more, communicate more. At their worst, they leave us prey to the manipulation of the toolmakers. But on balance, I’d argue, what is happening is deeply positive. This book is about the transformation.”

[…]

One of his most fascinating and important points has to do with our outsourcing of memory — or, more specifically, our increasingly deft, search-engine-powered skills of replacing the retention of knowledge in our own brains with the on-demand access to knowledge in the collective brain of the internet. Think, for instance, of those moments when you’re trying to recall the name of a movie but only remember certain fragmentary features — the name of the lead actor, the gist of the plot, a song from the soundtrack. Thompson calls this “tip-of-the-tongue syndrome” and points out that, today, you’ll likely be able to reverse-engineer the name of the movie you don’t remember by plugging into Google what you do remember about it.

[…]

“Tip-of-the-tongue syndrome is an experience so common that cultures worldwide have a phrase for it. Cheyenne Indians call it navonotootse’a, which means “I have lost it on my tongue”; in Korean it’s hyeu kkedu-te mam-dol-da, which has an even more gorgeous translation: “sparkling at the end of my tongue.” The phenomenon generally lasts only a minute or so; your brain eventually makes the connection. But … when faced with a tip-of-the-tongue moment, many of us have begun to rely instead on the Internet to locate information on the fly. If lifelogging … stores “episodic,” or personal, memories, Internet search engines do the same for a different sort of memory: “semantic” memory, or factual knowledge about the world. When you visit Paris and have a wonderful time drinking champagne at a café, your personal experience is an episodic memory. Your ability to remember that Paris is a city and that champagne is an alcoholic beverage — that’s semantic memory.”

[…]

“Writing — the original technology for externalizing information — emerged around five thousand years ago, when Mesopotamian merchants began tallying their wares using etchings on clay tablets. It emerged first as an economic tool. As with photography and the telephone and the computer, newfangled technologies for communication nearly always emerge in the world of commerce. The notion of using them for everyday, personal expression seems wasteful, risible, or debased. Then slowly it becomes merely lavish, what “wealthy people” do; then teenagers take over and the technology becomes common to the point of banality.”

Thompson reminds us of the anecdote, by now itself familiar “to the point of banality,” about Socrates and his admonition that the “technology” of writing would devastate the Greek tradition of debate and dialectic, and would render people incapable of committing anything to memory because “knowledge stored was not really knowledge at all.” He cites Socrates’s parable of the Egyptian god Theuth, who invented writing and offered it as a gift to the king of Egypt, to which the king replied:

“This discovery of yours will create forgetfulness in the learners’ souls, because they will not use their memories; they will trust to the external written characters and not remember of themselves. The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.”

That resistance endured as technology changed shape, across the Middle Ages and past Gutenberg’s revolution, but it wasn’t without counter-resistance: Those who recorded their knowledge in writing and, eventually, collected it in the form of books argued that it expanded the scope of their curiosity and the ideas they were able to ponder, whereas the mere act of rote memorization made no guarantees of deeper understanding.

Ultimately, however, Thompson points out that Socrates was both right and wrong: It’s true that, with some deliberately cultivated exceptions and neurological outliers, few thinkers today rely on pure memorization and can recite extensive passages of text from memory. But what Socrates failed to see was the extraordinary dot-connecting enabled by access to knowledge beyond what our own heads can hold — because, as Amanda Palmer poignantly put it, “we can only connect the dots that we collect,” and the outsourcing of memory has exponentially enlarged our dot-collections.

With this in mind, Thompson offers a blueprint to this newly developed system of knowledge management in which access is critical:

“If you are going to read widely but often read books only once; if you’re going to tackle the ever-expanding universe of ideas by skimming and glancing as well as reading deeply; then you are going to rely on the semantic-memory version of gisting. By which I mean, you’ll absorb the gist of what you read but rarely retain the specifics. Later, if you want to mull over a detail, you have to be able to refind a book, a passage, a quote, an article, a concept.”

This, he argues, is also how and why libraries were born — the death of the purely oral world and the proliferation of print after Gutenberg placed new demands on organizing and storing human knowledge. And yet storage and organization soon proved to be radically different things:

“The Gutenberg book explosion certainly increased the number of books that libraries acquired, but librarians had no agreed-upon ways to organize them. It was left to the idiosyncrasies of each. A core job of the librarian was thus simply to find the book each patron requested, since nobody else knew where the heck the books were. This created a bottleneck in access to books, one that grew insufferable in the nineteenth century as citizens began swarming into public venues like the British Library. “Complaints about the delays in the delivery of books to readers increased,” as Matthew Battles writes in Library: An Unquiet History, “as did comments about the brusqueness of the staff.” Some patrons were so annoyed by the glacial pace of access that they simply stole books; one was even sentenced to twelve months in prison for the crime. You can understand their frustration. The slow speed was not just a physical nuisance, but a cognitive one.”

The solution came in the late 19th century by way of Melville Dewey, whose decimal system imposed order by creating a taxonomy of book placement, eventually rendering librarians unnecessary — at least in their role as literal book-retrievers. They became, instead, curiosity sherpas who helped patrons decide what to read and carry out comprehensive research. In many ways, they came to resemble the editors and curators who help us navigate the internet today, framing for us what is worth attending to and why.

[…]

“The history of factual memory has been fairly predictable up until now. With each innovation, we’ve outsourced more information, then worked to make searching more efficient. Yet somehow, the Internet age feels different. Quickly pulling up [the answer to a specific esoteric question] on Google seems different from looking up a bit of trivia in an encyclopedia. It’s less like consulting a book than like asking someone a question, consulting a supersmart friend who lurks within our phones.”

And therein lies the magic of the internet — that unprecedented access to humanity’s collective brain. Thompson cites the work of Harvard psychologist Daniel Wegner, who first began exploring this notion of collective rather than individual knowledge in the 1980s by observing how partners in long-term relationships often divide and conquer memory tasks in sharing the household’s administrative duties:

“Wegner suspected this division of labor takes place because we have pretty good “metamemory.” We’re aware of our mental strengths and limits, and we’re good at intuiting the abilities of others. Hang around a workmate or a romantic partner long enough and you begin to realize that while you’re terrible at remembering your corporate meeting schedule, or current affairs in Europe, or how big a kilometer is relative to a mile, they’re great at it. So you begin to subconsciously delegate the task of remembering that stuff to them, treating them like a notepad or encyclopedia. In many respects, Wegner noted, people are superior to these devices, because what we lose in accuracy we make up in speed.

[…]

Wegner called this phenomenon “transactive” memory: two heads are better than one. We share the work of remembering, Wegner argued, because it makes us collectively smarter — expanding our ability to understand the world around us.”

[…]

This very outsourcing of memory requires that we learn what the machine knows — a kind of meta-knowledge that enables us to retrieve the information when we need it. And, reflecting on Sparrow’s findings, Thompson points out that this is neither new nor negative:

“We’ve been using transactive memory for millennia with other humans. In everyday life, we are only rarely isolated, and for good reason. For many thinking tasks, we’re dumber and less cognitively nimble if we’re not around other people. Not only has transactive memory not hurt us, it’s allowed us to perform at higher levels, accomplishing acts of reasoning that are impossible for us alone.”

[…]

Outsourcing our memory to machines rather than to other humans, in fact, offers certain advantages by pulling us into a seemingly infinite rabbit hole of indiscriminate discovery:

“In some ways, machines make for better transactive memory buddies than humans. They know more, but they’re not awkward about pushing it in our faces. When you search the Web, you get your answer — but you also get much more. Consider this: If I’m trying to remember what part of Pakistan has experienced many U.S. drone strikes and I ask a colleague who follows foreign affairs, he’ll tell me “Waziristan.” But when I queried this once on the Internet, I got the Wikipedia page on “Drone attacks in Pakistan.” A chart caught my eye showing the astonishing increase of drone attacks (from 1 a year to 122 a year); then I glanced down to read a précis of studies on how Waziristan residents feel about being bombed. (One report suggested they weren’t as opposed as I’d expected, because many hated the Taliban, too.) Obviously, I was procrastinating. But I was also learning more, reinforcing my schematic understanding of Pakistan.”

[…]

“The real challenge of using machines for transactive memory lies in the inscrutability of their mechanics. Transactive memory works best when you have a sense of how your partners’ minds work — where they’re strong, where they’re weak, where their biases lie. I can judge that for people close to me. But it’s harder with digital tools, particularly search engines. You can certainly learn how they work and develop a mental model of Google’s biases. … But search companies are for-profit firms. They guard their algorithms like crown jewels. This makes them different from previous forms of outboard memory. A public library keeps no intentional secrets about its mechanisms; a search engine keeps many. On top of this inscrutability, it’s hard to know what to trust in a world of self-publishing. To rely on networked digital knowledge, you need to look with skeptical eyes. It’s a skill that should be taught with the same urgency we devote to teaching math and writing.”

Thompson’s most important point, however, has to do with how outsourcing our knowledge to digital tools can actually hamper the very process of creative thought, which relies on our ability to connect existing ideas from our mental pool of resources into new combinations, or what the French polymath Henri Poincaré famously termed “sudden illuminations.” Without a mental catalog of materials to mull over and let incubate in our fringe consciousness, our capacity for such illuminations is greatly deflated. Thompson writes:

“These eureka moments are familiar to all of us; they’re why we take a shower or go for a walk when we’re stuck on a problem. But this technique works only if we’ve actually got a lot of knowledge about the problem stored in our brains through long study and focus. … You can’t come to a moment of creative insight if you haven’t got any mental fuel. You can’t be googling the info; it’s got to be inside you.”

[…]

“Evidence suggests that when it comes to knowledge we’re interested in — anything that truly excites us and has meaning — we don’t turn off our memory. Certainly, we outsource when the details are dull, as we now do with phone numbers. These are inherently meaningless strings of information, which offer little purchase on the mind. … It makes sense that our transactive brains would hand this stuff off to machines. But when information engages us — when we really care about a subject — the evidence suggests we don’t turn off our memory at all.”

[…]

“In an ideal world, we’d all fit the Renaissance model — we’d be curious about everything, filled with diverse knowledge and thus absorbing all current events and culture like sponges. But this battle is age-old, because it’s ultimately not just technological. It’s cultural and moral and spiritual; “getting young people to care about the hard stuff” is a struggle that goes back centuries and requires constant societal arguments and work. It’s not that our media and technological environment don’t matter, of course. But the vintage of this problem indicates that the solution isn’t merely in the media environment either.”

[…]

“A tool’s most transformative uses generally take us by surprise.”

[…]

“How should you respond when you get powerful new tools for finding answers?

Think of harder questions.”

A World Digital Library Is Coming True! by Robert Darnton | The New York Review of Books

A World Digital Library Is Coming True! by Robert Darnton | The New York Review of Books.

In the scramble to gain market share in cyberspace, something is getting lost: the public interest. Libraries and laboratories—crucial nodes of the World Wide Web—are buckling under economic pressure, and the information they diffuse is being diverted away from the public sphere, where it can do most good.

Not that information comes free or “wants to be free,” as Internet enthusiasts proclaimed twenty years ago. It comes filtered through expensive technologies and financed by powerful corporations. No one can ignore the economic realities that underlie the new information age, but who would argue that we have reached the right balance between commercialization and democratization?

Consider the cost of scientific periodicals, most of which are published exclusively online. It has increased at four times the rate of inflation since 1986. The average price of a year’s subscription to a chemistry journal is now $4,044. In 1970 it was $33. A subscription to the Journal of Comparative Neurology cost $30,860 in 2012—the equivalent of six hundred monographs. Three giant publishers—Reed Elsevier, Wiley-Blackwell, and Springer—publish 42 percent of all academic articles, and they make giant profits from them. In 2013 Elsevier turned a 39 percent profit on an income of £2.1 billion from its science, technical, and medical journals.

All over the country research libraries are canceling subscriptions to academic journals, because they are caught between decreasing budgets and increasing costs. The logic of the bottom line is inescapable, but there is a higher logic that deserves consideration—namely, that the public should have access to knowledge produced with public funds.

[…]

The struggle over academic journals should not be dismissed as an “academic question,” because a great deal is at stake. Access to research drives large sectors of the economy—the freer and quicker the access, the more powerful its effect. The Human Genome Project cost $3.8 billion in federal funds to develop, and thanks to the free accessibility of the results, it has already produced $796 billion in commercial applications. Linux, the free, open-source software system, has brought in billions in revenue for many companies, including Google.

[…]

According to a study completed in 2006 by John Houghton, a specialist in the economics of information, a 5 percent increase in the accessibility of research would have produced an increase in productivity worth $16 billion.

[…]

Yet accessibility may decrease, because the price of journals has escalated so disastrously that libraries—and also hospitals, small-scale laboratories, and data-driven enterprises—are canceling subscriptions. Publishers respond by charging still more to institutions with budgets strong enough to carry the additional weight.

[…]

In the long run, journals can be sustained only through a transformation of the economic basis of academic publishing. The current system developed as a component of the professionalization of academic disciplines in the nineteenth century. It served the public interest well through most of the twentieth century, but it has become dysfunctional in the age of the Internet.

[…]

The entire system of communicating research could be made less expensive and more beneficial for the public by a process known as “flipping.” Instead of subsisting on subscriptions, a flipped journal covers its costs by charging processing fees before publication and making its articles freely available, as “open access,” afterward. That will sound strange to many academic authors. Why, they may ask, should we pay to get published? But they may not understand the dysfunctions of the present system, in which they furnish the research, writing, and refereeing free of charge to the subscription journals and then buy back the product of their work—not personally, of course, but through their libraries—at an exorbitant price. The public pays twice—first as taxpayers who subsidize the research, then as taxpayers or tuition payers who support public or private university libraries.

By creating open-access journals, a flipped system directly benefits the public. Anyone can consult the research free of charge online, and libraries are liberated from the spiraling costs of subscriptions. Of course, the publication expenses do not evaporate miraculously, but they are greatly reduced, especially for nonprofit journals, which do not need to satisfy shareholders. The processing fees, which can run to a thousand dollars or more, depending on the complexities of the text and the process of peer review, can be covered in various ways. They are often included in research grants to scientists, and they are increasingly financed by the author’s university or a group of universities.

[…]

The main impediment to public-spirited publishing of this kind is not financial. It involves prestige. Scientists prefer to publish in expensive journals like Nature, Science, and Cell, because the aura attached to them glows on CVs and promotes careers. But some prominent scientists have undercut the prestige effect by founding open-access journals and recruiting the best talent to write and referee for them. Harold Varmus, a Nobel laureate in physiology and medicine, has made a huge success of Public Library of Science, and Paul Crutzen, a Nobel laureate in chemistry, has done the same with Atmospheric Chemistry and Physics. They have proven the feasibility of high-quality, open-access journals. Not only do they cover costs through processing fees, but they produce a profit—or rather, a “surplus,” which they invest in further open-access projects.

[…]

DASH, Harvard’s open-access repository, now includes 17,000 articles, and it has registered three million downloads from countries on every continent. Repositories in other universities also report very high scores in their counts of downloads. They make knowledge available to a broad public, including researchers who have no connection to an academic institution; and at the same time, they make it possible for writers to reach far more readers than would be possible by means of subscription journals.

The desire to reach readers may be one of the most underestimated forces in the world of knowledge. Aside from journal articles, academics produce a large number of books, yet they rarely make much money from them. Authors in general derive little income from a book a year or two after its publication. Once its commercial life has ended, it dies a slow death, lying unread, except for rare occasions, on the shelves of libraries, inaccessible to the vast majority of readers. At that stage, authors generally have one dominant desire—for their work to circulate freely through the public; and their interest coincides with the goals of the open-access movement.

[…]

All sorts of complexities remain to be worked out before such a plan can succeed: How to accommodate the interests of publishers, who want to keep books on their backlists? Where to leave room for rights holders to opt out and for the revival of books that take on new economic life? Whether to devise some form of royalties, as in the extended collective licensing programs that have proven to be successful in the Scandinavian countries? It should be possible to enlist vested interests in a solution that will serve the public interest, not by appealing to altruism but rather by rethinking business plans in ways that will make the most of modern technology.

Several experimental enterprises illustrate possibilities of this kind. Knowledge Unlatched gathers commitments and collects funds from libraries that agree to purchase scholarly books at rates that will guarantee payment of a fixed amount to the publishers who are taking part in the program. The more libraries participating in the pool, the lower the price each will have to pay. While electronic editions of the books will be available everywhere free of charge through Knowledge Unlatched, the subscribing libraries will have the exclusive right to download and print out copies.
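
The cost-sharing arithmetic behind that model can be made concrete with a short sketch. The figures below are entirely hypothetical (the passage does not specify the fixed fee or the pool sizes), but they show why each additional participating library lowers the price for all of them:

# Hypothetical illustration of the Knowledge Unlatched cost-sharing model:
# a fixed fee guaranteed to the publisher for a title is split evenly across
# the participating libraries, so a larger pool means a lower per-library price.

FIXED_FEE_PER_TITLE = 12_000  # hypothetical dollar amount, not a published figure

for pool_size in (100, 200, 300, 400):
    per_library = FIXED_FEE_PER_TITLE / pool_size
    print(f"{pool_size} libraries -> ${per_library:,.2f} per library for this title")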

[…]

OpenEdition Books, located in Marseille, operates on a somewhat similar principle. It provides a platform for publishers who want to develop open-access online collections, and it sells the e-content to subscribers in formats that can be downloaded and printed. Operating from Cambridge, England, Open Book Publishers also charges for PDFs, which can be used with print-on-demand technology to produce physical books, and it applies the income to subsidies for free copies online. It recruits academic authors who are willing to provide manuscripts without payment in order to reach the largest possible audience and to further the cause of open access.

The famous quip of Samuel Johnson, “No man but a blockhead ever wrote, except for money,” no longer has the force of a self-evident truth in the age of the Internet. By tapping the goodwill of unpaid authors, Open Book Publishers has produced forty-one books in the humanities and social sciences, all rigorously peer-reviewed, since its foundation in 2008. “We envisage a world in which all research is freely available to all readers,” it proclaims on its website.

[…]

Google set out to digitize millions of books in research libraries and then proposed to sell subscriptions to the resulting database. Having provided the books to Google free of charge, the libraries would then have to buy back access to them, in digital form, at a price to be determined by Google and that could escalate as disastrously as the prices of scholarly journals.

Google Book Search actually began as a search service, which made available only snippets or short passages of books. But because many of the books were covered by copyright, Google was sued by the rights holders; and after lengthy negotiations the plaintiffs and Google agreed on a settlement, which transformed the search service into a gigantic commercial library financed by subscriptions. But the settlement had to be approved by a court, and on March 22, 2011, the federal district court for the Southern District of New York rejected it on the grounds that, among other things, it threatened to constitute a monopoly in restraint of trade. That decision put an end to Google’s project and cleared the way for the DPLA to offer digitized holdings—but nothing covered by copyright—to readers everywhere, free of charge.

Aside from its not-for-profit character, the DPLA differs from Google Book Search in a crucial respect: it is not a vertical organization erected on a database of its own. It is a distributed, horizontal system, which links digital collections already in the possession of the participating institutions, and it does so by means of a technological infrastructure that makes them instantly available to the user with one click on an electronic device. It is fundamentally horizontal, both in organization and in spirit.

Instead of working from the top down, the DPLA relies on “service hubs,” or small administrative centers, to promote local collections and aggregate them at the state level. “Content hubs” located in institutions with collections of at least 250,000 items—for example, the New York Public Library, the Smithsonian Institution, and the collective digital repository known as HathiTrust—provide the bulk of the DPLA’s holdings. There are now two dozen service and content hubs, and soon, if financing can be found, they will exist in every state of the union.

Such horizontality reinforces the democratizing impulse behind the DPLA. Although it is a small, nonprofit corporation with headquarters and a minimal staff in Boston, the DPLA functions as a network that covers the entire country. It relies heavily on volunteers. More than a thousand computer scientists collaborated free of charge in the design of its infrastructure, which aggregates metadata (catalog-type descriptions of documents) in a way that allows easy searching.
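
To make that architecture concrete, here is a minimal sketch, assuming invented hub names, record fields, and sample data rather than the DPLA’s actual systems, of how a horizontal network can aggregate catalog-type metadata from many institutions into a single searchable index:

# Hypothetical sketch of horizontal metadata aggregation: each "hub" supplies
# catalog-type records, and the central index simply merges and searches them.

from typing import Dict, List

# Invented sample records standing in for hub-supplied metadata.
HUB_RECORDS: Dict[str, List[dict]] = {
    "example-state-hub": [
        {"title": "Revolutionary War pamphlet", "type": "text", "provider": "example-state-hub"},
    ],
    "example-content-hub": [
        {"title": "Map of colonial Boston", "type": "image", "provider": "example-content-hub"},
    ],
}

def aggregate(hubs: Dict[str, List[dict]]) -> List[dict]:
    """Merge the records from every hub into one flat list."""
    return [record for records in hubs.values() for record in records]

def search(index: List[dict], query: str) -> List[dict]:
    """Return records whose titles contain the query, case-insensitively."""
    q = query.lower()
    return [r for r in index if q in r["title"].lower()]

index = aggregate(HUB_RECORDS)
for hit in search(index, "map"):
    print(hit["provider"], "->", hit["title"])

The point of the sketch is the shape of the system: the digitized items stay with the contributing institutions, and the aggregator needs only their metadata to make everything findable in one place.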

Thus, for example, a ninth-grader in Dallas who is preparing a report on an episode of the American Revolution can download a manuscript from New York, a pamphlet from Chicago, and a map from San Francisco in order to study them side by side. Unfortunately, he or she will not be able to consult any recent books, because copyright laws keep virtually everything published after 1923 out of the public domain. But the courts, which are considering a flurry of cases about the “fair use” of copyright, may sustain a broad-enough interpretation for the DPLA to make a great deal of post-1923 material available for educational purposes.

A small army of volunteer “Community Reps,” mainly librarians with technical skills, is fanning out across the country to promote various outreach programs sponsored by the DPLA. They reinforce the work of the service hubs, which concentrate on public libraries as centers of collection-building. A grant from the Bill and Melinda Gates Foundation is financing a Public Library Partnerships Project to train local librarians in the latest digital technologies. Equipped with new skills, the librarians will invite people to bring in material of their own—family letters, high school yearbooks, postcard collections stored in trunks and attics—to be digitized, curated, preserved, and made accessible online by the DPLA. While developing local community consciousness about culture and history, this project will also help integrate local collections in the national network.

[…]

In these and other ways, the DPLA will go beyond its basic mission of making the cultural heritage of America available to all Americans. It will provide opportunities for them to interact with the material and to develop materials of their own. It will empower librarians and reinforce public libraries everywhere, not only in the United States. Its technological infrastructure has been designed to be interoperable with that of Europeana, a similar enterprise that is aggregating the holdings of libraries in the twenty-eight member states of the European Union. The DPLA’s collections include works in more than four hundred languages, and nearly 30 percent of its users come from outside the US. Ten years from now, the DPLA’s first year of activity may look like the beginning of an international library system.

It would be naive, however, to imagine a future free from the vested interests that have blocked the flow of information in the past. The lobbies at work in Washington also operate in Brussels, and a newly elected European Parliament will soon have to deal with the same issues that remain to be resolved in the US Congress. Commercialization and democratization operate on a global scale, and a great deal of access must be opened before the World Wide Web can accommodate a worldwide library.

Adobe and Google Debut Typeface Family of Asian Languages

Source: Adobe and Google Debut Typeface Family of Asian Languages

Original sketch by type designer Ryoko Nishizuka.

The Adobe font, named Source Han Sans, is the company’s new open-source Pan-CJK typeface family.

Google is simultaneously releasing its own version of this font under the name Noto Sans CJK as part of a plan to build out its Noto Pan-Unicode font family. Both sets, developed in collaboration, are identical except for the name and will serve 1.5 billion people — roughly a quarter of the world’s population.

The new typeface family is available in seven weights, supporting Japanese, Korean, Traditional Chinese and Simplified Chinese, all in one font.

[…]

“The design is relatively modern in style, but it has simple strokes and is monolinear so it makes text clear and readable on small devices such as tablets and smartphones,” said Nicole Minoza, Adobe’s product marketing manager.

“Because it’s a sans serif typeface, it’s a workhorse font — good for a single line of text or a short phrase or something you might see in a software menu, as well as longer strings of text that would appear in an ebook or a printed publication.”

[…]

Each font weight in the family has a total of 65,535 glyphs (the maximum number of characters supported in the OpenType format), and the entire family contains just under half a million total glyphs.
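
That total is easy to verify: seven weights at the OpenType ceiling of 65,535 glyphs apiece (2^16 - 1 glyph IDs) comes to 458,745 glyphs, which is indeed just under half a million. A one-line check:

# Verify the glyph arithmetic quoted above.
weights = 7
glyphs_per_weight = 2**16 - 1       # 65,535, the OpenType glyph limit
print(weights * glyphs_per_weight)  # 458745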

[…]

“Not only are the open source fonts free, but users can extend and modify them,” Minoza said. “They would have the right to add Vietnamese characters, for example. Hardware and software manufacturers can install the fonts on their devices. There’s a really big audience and the licensing rights for open source makes it good for device manufacturers.”

[…]

Discussions around creating a Pan-CJK font started about 15 years ago at Adobe, but the overall cost in time and resources kept the project from getting off the ground.

With this joint project, Adobe contributed its design and technical skill, in-country type experience, coordination, and automation, while Google handled project direction, requirements definition, in-country testing resources and expertise, and funding.

[…]

To make sure the font was authentic for native readers, Adobe sought expertise from foundries such as Iwata Corp., which expanded the Japanese glyph selection; Sandoll Communication, designer of Korean Hangul (the native Korean alphabet); and Changzhou SinoType, Adobe’s longtime collaborator in China.

Each foundry was assigned a different task for a unique contribution to the project. Said Minoza, “Iwata fleshed out the original Japanese design, which was provided to our other partners. Sandoll created the Hangul characters from scratch — and they needed to make sure they harmonized with the other characters as well as with the Latin characters — and SinoType not only had to expand the Chinese glyph sets but they had to analyze each of the glyphs to make sure they satisfied regional considerations.

“There are a lot of instances and regional variations for the characters even though they all evolved from the same character originally.” The new font also features Hong Kong and Taiwanese character sets.

Ryoko Nishizuka, an Adobe senior designer on the Tokyo type team, created the overall type design from which the other language variations are derived.

Introduction – Material Design – Google design guidelines

Source: Introduction – Material Design – Google design guidelines

A material metaphor is the unifying theory of a rationalized space and a system of motion. The material is grounded in tactile reality, inspired by the study of paper and ink, yet technologically advanced and open to imagination and magic.

Surfaces and edges of the material provide visual cues that are grounded in reality. The use of familiar tactile attributes helps users quickly understand affordances. Yet the flexibility of the material creates new affordances that supersede those in the physical world, without breaking the rules of physics.

The fundamentals of light, surface, and movement are key to conveying how objects move, interact, and exist in space in relation to each other. Realistic lighting shows seams, divides space, and indicates moving parts.

[…]

The foundational elements of print-based design—typography, grids, space, scale, color, and use of imagery—guide visual treatments. These elements do far more than please the eye; they create hierarchy, meaning, and focus. Deliberate color choices, edge-to-edge imagery, large-scale typography, and intentional white space create a bold and graphic interface that immerses the user in the experience.