Tag Archives: artificial intelligence

The Fasinatng … Frustrating … Fascinating History of Autocorrect | Gadget Lab | WIRED


It’s not too much of an exaggeration to call autocorrect the overlooked underwriter of our era of mobile prolixity. Without it, we wouldn’t be able to compose windy love letters from stadium bleachers, write novels on subway commutes, or dash off breakup texts while in line at the post office. Without it, we probably couldn’t even have phones that look anything like the ingots we tickle—the whole notion of touchscreen typing, where our podgy physical fingers are expected to land with precision on tiny virtual keys, is viable only when we have some serious software to tidy up after us. Because we know autocorrect is there as brace and cushion, we’re free to write with increased abandon, at times and in places where writing would otherwise be impossible. Thanks to autocorrect, the gap between whim and word is narrower than it’s ever been, and our world is awash in easily rendered thought.

[…]

I find him in a drably pastel conference room at Microsoft headquarters in Redmond, Washington. Dean Hachamovitch—inventor on the patent for autocorrect and the closest thing it has to an individual creator—reaches across the table to introduce himself.

[…]

Hachamovitch, now a vice president at Microsoft and head of data science for the entire corporation, is a likable and modest man. He freely concedes that he types teh as much as anyone. (Almost certainly he does not often type hte. As researchers have discovered, initial-letter transposition is a much rarer error.)

[…]

The notion of autocorrect was born when Hachamovitch began thinking about a functionality that already existed in Word. Thanks to Charles Simonyi, the longtime Microsoft executive widely recognized as the father of graphical word processing, Word had a “glossary” that could be used as a sort of auto-expander. You could set up a string of words—like insert logo—which, when typed and followed by a press of the F3 key, would get replaced by a JPEG of your company’s logo. Hachamovitch realized that this glossary could be used far more aggressively to correct common mistakes. He wrote a little code that would allow you to press the left arrow and F3 at any time and immediately replace teh with the. His aha moment came when he realized that, because English words are space-delimited, the space bar itself could trigger the replacement, to make correction … automatic! Hachamovitch drew up a list of common errors, and over the following years he and his team went on to solve many of the thorniest. Seperate would automatically change to separate. Accidental caps-lock typing would adjust immediately (making dEAR grEG into Dear Greg). One Microsoft manager dubbed them the Department of Stupid PC Tricks.
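
In code, the core trick is tiny. Here is a minimal sketch in Python of the space-triggered replacement described above; the correction table and the caps-lock heuristic are illustrative assumptions, not Microsoft’s actual implementation.

```python
# Minimal sketch of space-triggered autocorrect (illustrative, not Word's code).

CORRECTIONS = {
    "teh": "the",
    "seperate": "separate",
}

def fix_caps_lock(word: str) -> str:
    # Heuristic: 'dEAR' looks like it was typed with caps lock on;
    # swapping the case of every letter yields 'Dear'.
    if len(word) > 1 and word[0].islower() and word[1:].isupper():
        return word.swapcase()
    return word

def on_space(buffer: str) -> str:
    # Because English words are space-delimited, pressing the space bar
    # is the moment the just-finished word can be silently corrected.
    head, _, last = buffer.rpartition(" ")
    last = CORRECTIONS.get(last.lower(), last)
    last = fix_caps_lock(last)
    return (head + " " + last if head else last) + " "

print(on_space("I typed teh"))  # -> 'I typed the '
print(on_space("dEAR"))         # -> 'Dear '
```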

[…]

One day Hachamovitch went into his boss’s machine and changed the autocorrect dictionary so that any time he typed Dean it was automatically changed to the name of his coworker Mike, and vice versa. (His boss kept both his computer and office locked after that.) Children were even quicker to grasp the comedic ramifications of the new tool. After Hachamovitch went to speak to his daughter’s third-grade class, he got emails from parents that read along the lines of “Thank you for coming to talk to my daughter’s class, but whenever I try to type her name I find it automatically transforms itself into ‘The pretty princess.’”

[…]

On idiom, some of its calls seemed fairly clear-cut: gorilla warfare became guerrilla warfare, for example, even though a wildlife biologist might find that an inconvenient assumption. But some of the calls were quite tricky, and one of the trickiest involved the issue of obscenity. On one hand, Word didn’t want to seem priggish; on the other, it couldn’t very well go around recommending the correct spelling of mothrefukcer. Microsoft was sensitive to these issues. The solution lay in expanding one of spell-check’s most special lists, bearing the understated title: “Words which should neither be flagged nor suggested.”

[…]

One day Vignola sent Bill Gates an email. (Thorpe couldn’t recall who Bill Vignola was or what he did.) Whenever Bill Vignola typed his own name in MS Word, the email to Gates explained, it was automatically changed to Bill Vaginal. Presumably Vignola caught this sometimes, but not always, and no doubt this serious man was sad to come across like a character in a Thomas Pynchon novel. His email made it down the chain of command to Thorpe. And Bill Vaginal wasn’t the only complainant: As Thorpe recalls, Goldman Sachs was mad that Word was always turning it into Goddamn Sachs.

Thorpe went through the dictionary and took out all the words marked as “vulgar.” Then he threw in a few anatomical terms for good measure. The resulting list ran to hundreds of entries:

anally, asshole, battle-axe, battleaxe, bimbo, booger, boogers, butthead, Butthead …

With these sorts of master lists established—the corrections, the exceptions, and the to-be-primly-ignored—the joists of autocorrect, then still a subdomain of spell-check, were in place for the early releases of Word. Microsoft’s dominance at the time ensured that autocorrect became globally ubiquitous, along with some of its idiosyncrasies. By the early 2000s, European bureaucrats would begin to notice what came to be called the Cupertino effect, whereby the word cooperation (bizarrely included only in hyphenated form in the standard Word dictionary) would be marked wrong, with a suggested change to Cupertino. There are thus many instances where one parliamentary back-bencher or another longs for increased Cupertino between nations. Since then, linguists have adopted the word cupertino as a term of art for such trapdoors that have been assimilated into the language.
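
To make the interplay of those lists concrete, here is a toy sketch in Python. The list names and sample entries are assumptions for illustration, not Word’s actual data, and real suggestion logic is far richer than the similarity matching used here.

```python
import difflib

# Toy versions of the three master lists (entries are illustrative).
AUTO_CORRECT = {"teh": "the", "seperate": "separate"}   # silently fixed
NO_FLAG_NO_SUGGEST = {"asshole", "bimbo", "butthead"}   # recognized, never offered
LEXICON = {"the", "separate", "guerrilla", "warfare"} | NO_FLAG_NO_SUGGEST

def check(word: str):
    """What a Word-style checker might do with one typed word."""
    if word in AUTO_CORRECT:
        return "replace", AUTO_CORRECT[word]
    if word in LEXICON:
        return "accept", word  # covers the neither-flagged-nor-suggested list
    # A misspelling: offer near matches, but never from the ignored list.
    pool = sorted(LEXICON - NO_FLAG_NO_SUGGEST)
    return "flag", difflib.get_close_matches(word, pool)

print(check("teh"))      # ('replace', 'the')
print(check("gorilla"))  # ('flag', ['guerrilla'])
print(check("asshole"))  # ('accept', 'asshole')
```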

[…]

Autocorrection is no longer an overqualified intern drawing up lists of directives; it’s now a vast statistical affair in which petabytes of public words are examined to decide when a usage is popular enough to become a probabilistically savvy replacement. The work of the autocorrect team has been made algorithmic and outsourced to the cloud.

A handful of factors are taken into account to weight the variables: keyboard proximity, phonetic similarity, linguistic context. But it’s essentially a big popularity contest. A Microsoft engineer showed me a slide where somebody was trying to search for the long-named Austrian action star who became governor of California. Schwarzenegger, he explained, “is about 10,000 times more popular in the world than its variants”—Shwaranegar or Scuzzynectar or what have you. Autocorrect has become an index of the most popular way to spell and order certain words.
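
As a toy illustration of that popularity contest, the sketch below scores candidate corrections by string similarity weighted by word frequency. The frequency counts are invented, and a real system would also fold in keyboard proximity, phonetics, and context.

```python
from difflib import SequenceMatcher

# Hypothetical frequency counts standing in for "popularity in the world".
WORD_FREQ = {"schwarzenegger": 10_000, "scuzzynectar": 1}

def best_correction(typo: str) -> str:
    # Score = how similar the candidate looks x how common it is.
    def score(candidate: str) -> float:
        similarity = SequenceMatcher(None, typo, candidate).ratio()
        return similarity * WORD_FREQ[candidate]
    return max(WORD_FREQ, key=score)

print(best_correction("shwaranegar"))  # -> 'schwarzenegger'
```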

When English spelling was first standardized, it was by the effective fiat of those who controlled the communicative means of production. Dictionaries and usage guides have always represented compromises between top-down prescriptivists—those who believe language ought to be used a certain way—and bottom-up descriptivists—those who believe, instead, that there’s no ought about it.

The emerging consensus on usage will be a matter of statistical arbitration, between the way “most” people spell something and the way “some” people do. If it proceeds as it has, it’s likely to be a winner-take-all affair, as alternatives drop out. (Though Apple’s recent introduction of personalized, “contextual” autocorrect—which can distinguish between the language you use with your friends and the language you use with your boss—might complicate that process of standardization and allow us the favor of our characteristic errors.)

[…]

The possibility of linguistic communication is grounded in the fact of what some philosophers of language have called the principle of charity: The first step in a successful interpretation of an utterance is the belief that it somehow accords with the universe as we understand it. This means that we have a propensity to take a sort of ownership over even our errors, hoping for the possibility of meaning in even the most perverse string of letters. We feel honored to have a companion like autocorrect who trusts that, despite surface clumsiness or nonsense, inside us always smiles an articulate truth.

[…]

Today the influence of autocorrect is everywhere: A commenter on the Language Log blog recently mentioned hearing of an entire dialect in Asia based on phone cupertinos, where teens used the first suggestion from autocomplete instead of their chosen word, thus creating a slang that others couldn’t decode. (It’s similar to the Anglophone teenagers who, in a previous texting era, claimed to have replaced the term of approval cool with that of book because of happenstance T9 input priority.) Surrealists once encouraged the practice of écriture automatique, or automatic writing, in order to reveal the peculiar longings of the unconscious. The crackpot suggestions of autocorrect have become our own form of automatic writing—but what they reveal are the peculiar statistics of a world id.

The Moral Hazards and Legal Conundrums of Our Robot-Filled Future | Science | WIRED



Whether you find it exhilarating or terrifying (or both), progress in robotics and related fields like AI is raising new ethical quandaries and challenging legal codes that were created for a world in which a sharp line separates man from machine. Last week, roboticists, legal scholars, and other experts met at the University of California, Berkeley law school to talk through some of the social, moral, and legal hazards that are likely to arise as that line starts to blur.

[…]

We May Have Feelings for Robots

Darling studies the attachments people form with robots. “There’s evidence that people respond very strongly to robots that are designed to be lifelike,” she said. “We tend to project onto them and anthropomorphize them.”

Most of the evidence for this so far is anecdotal. Darling’s ex-boyfriend, for example, named his Roomba and would feel bad for it when it got stuck under the couch. She’s trying to study human empathy for robots in a more systematic way. In one ongoing study she’s investigating how people react when they’re asked to “hurt” or “kill” a robot by hitting it with various objects. Preliminary evidence suggests they don’t like it one bit.

Another study by Julie Carpenter, a University of Washington graduate student, found that soldiers develop attachments to the robots they use to detect and defuse roadside bombs and other weapons. In interviews with service members, Carpenter found that in some cases they named their robots, ascribed personality traits to them, and felt angry or even sad when their robot got blown up in the line of duty.

This emerging field of research has implications for robot design, Darling says. If you’re building a robot to help take care of elderly people, for example, you might want to foster a deep sense of engagement. But if you’re building a robot for military use, you wouldn’t want the humans to get so attached that they risk their own lives.

There might also be more profound implications. In a 2012 paper, Darling considers the possibility of robot rights. She admits it’s a provocative proposition, but notes that some arguments for animal rights focus not on the animals’ ability to experience pain and anguish but on the effect that cruelty to animals has on humans. If research supports the idea that abusing robots makes people more abusive towards people, it might be a good idea to have legal protections for social robots, Darling says.

Robots Will Have Sex With Us

Robotics is taking sex toys to a new level, and that raises some interesting issues, ranging from the appropriateness of human-robot marriages to using robots to replace prostitutes or spice up the sex lives of the elderly. Some of the most provocative questions involve child-like sex robots. Arkin, the Georgia Tech roboticist, thinks it’s worth investigating whether they could be used to rehabilitate sex offenders.

“We have a problem with pedophilia in society,” Arkin said. “What do we do with these people after they get out of prison? There are very high recidivism rates.” If convicted sex offenders were “prescribed” a child-like sex robot, much like heroin addicts are prescribed methadone as part of a program to kick the habit, it might be possible to reduce recidivism, Arkin suggests. A government agency would probably never fund such a project, Arkin says, and he doesn’t know of anyone else who would either. “But nonetheless I do believe there is a possibility that we may be able to better protect society through this kind of research, rather than having the sex robot cottage industry develop in seedy back rooms, which indeed it is already,” he said.

Even if—and it’s a big if—such a project could win funding and ethical approval, it would be difficult to carry out, Sharkey cautions. “How do you actually do the research until these things are out there in the wild and used for a while? How do you know you’re not creating pedophiles?” he said.

How the legal system would deal with child-like sex robots isn’t entirely clear, according to Ryan Calo, a law professor at the University of Washington. In 2002, the Supreme Court ruled that simulated child pornography (in which young adults or computer generated characters play the parts of children) is protected by the First Amendment and can’t be criminalized. “I could see that extending to embodied [robotic] children, but I can also see courts and regulators getting really upset about that,” Calo said.

Our Laws Aren’t Made for Robots

Child-like sex robots are just one of the many ways in which robots are likely to challenge the legal system in the future, Calo said. “The law assumes, by and large, a dichotomy between a person and a thing. Yet robotics is a place where that gets conflated,” he said.

For example, the concept of mens rea (Latin for “guilty mind”) is central to criminal law: For an act to be considered a crime, there has to be intent. Artificial intelligence could throw a wrench into that thinking, Calo said. “The prospect of robotics behaving in the wild, displaying emergent or learned behavior creates the possibility there will be crimes that no one really intended.”

To illustrate the point, Calo used the example of Darius Kazemi, a programmer who created a bot that buys random stuff for him on Amazon. “He comes home and he’s delighted to find some box that his bot purchased,” Calo said. But what if Kazemi’s bot bought some alcoholic candy, which is illegal in his home state of Massachusetts? Could he be held accountable? So far the bot hasn’t stumbled on Amazon’s chocolate liqueur candy offerings—it’s just hypothetical. But Calo thinks we’ll soon start seeing cases that raise these kinds of questions.

And it won’t stop there. The apparently imminent arrival of autonomous vehicles will raise new questions in liability law. Social robots inside the home will raise Fourth Amendment issues. “Could the FBI get a warrant to plant a question in a robot you talk to, ‘So, where’d you go this weekend?’” Calo asked. Then there are issues of how to establish the limits that society deems appropriate. Should robots or the roboticists who make them be the target of our laws and regulations?

The Mystery of Go, the Ancient Game That Computers Still Can’t Win | Enterprise | WIRED


Rémi Coulom (left) plays against Norimoto Yoda in Tokyo. Photo: Takashi Osato/WIRED

Crazy Stone and Nomitan are locked in a game of Go, the Eastern version of chess. On each screen, you can see a Go board — a grid of 19 lines by 19 lines — filling up with black and white playing pieces, each placed at the intersection of two lines. If Crazy Stone can win and advance to the finals, it will earn the right to play one of the best human Go players in Japan. No machine has ever beaten a top human Go player — at least not without a huge head-start. Even if it does advance to the man-machine match, Crazy Stone has no chance of changing this, but Coulom wants to see how far his creation has come.

[…]

The challenge is daunting. In 1994, machines took the checkers crown when a program called Chinook beat the top human. Then, three years later, they topped the chess world, IBM’s Deep Blue supercomputer besting world champion Garry Kasparov. Now, computers match or surpass top humans in a wide variety of games: Othello, Scrabble, backgammon, poker, even Jeopardy. But not Go. It’s the one classic game where wetware still dominates hardware.

Invented over 2,500 years ago in China, Go is a pastime beloved by emperors and generals, intellectuals and child prodigies. Like chess, it’s a deterministic perfect information game — a game where no information is hidden from either player, and there are no built-in elements of chance, such as dice. And like chess, it’s a two-person war game. Play begins with an empty board, where players alternate the placement of black and white stones, attempting to surround territory while avoiding capture by the enemy. That may seem simpler than chess, but it’s not. When Deep Blue was busy beating Kasparov, the best Go programs couldn’t even challenge a decent amateur. And despite huge computing advances in the years since — Kasparov would probably lose to your home computer — the automation of expert-level Go remains one of AI’s greatest unsolved riddles.

[…]

… games of Go are often so complex that only extremely high-level players can understand how they’re progressing.

[…]

‘THERE IS CHESS IN THE WESTERN WORLD, BUT GO IS INCOMPARABLY MORE SUBTLE AND INTELLECTUAL.’

This is not for lack of trying on the part of programmers, who have worked on Go alongside chess for the last fifty years, with substantially less success. The first chess programs were written in the early fifties, one by Turing himself. By the 1970s, they were quite good. But as late as 1962, despite the game’s popularity among programmers, only two people had succeeded at publishing Go programs, neither of which was implemented or tested against humans.

Finally, in 1968, computer game theory genius Albert Zobrist authored the first Go program capable of beating an absolute beginner. It was a promising first step, but notwithstanding enormous amounts of time, effort, brilliance, and quantum leaps in processing power, programs remained incapable of beating accomplished amateurs for the next four decades.

To understand this, think about Go in relation to chess. At the beginning of a chess game, White has twenty possible moves. After that, Black also has twenty possible moves. Once both sides have played, there are 400 possible board positions. Go, by contrast, begins with an empty board, where Black has 361 possible opening moves, one at every intersection of the 19 by 19 grid. White can follow with 360 moves. That makes for 129,960 possible board positions after just the first round of moves.
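
The arithmetic checks out (a quick sketch, counting raw move pairs and ignoring symmetries and captures):

```python
# First-round position counts from the paragraph above.
chess_positions = 20 * 20   # 20 White openings, then 20 Black replies
go_positions = 361 * 360    # 361 openings on a 19-by-19 board, then 360 replies

print(chess_positions)  # 400
print(go_positions)     # 129960
```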

The rate at which possible positions increase is directly related to a game’s “branching factor,” or the average number of moves available on any given turn. Chess’s branching factor is 35. Go’s is 250. Games with high branching factors make classic search algorithms like minimax extremely costly. Minimax creates a search tree that evaluates possible moves by simulating all possible games that might follow, and then it chooses the move that minimizes the opponent’s best-case scenario. Improvements on the algorithm — such as alpha-beta search and null-move — can prune the chess game tree, identifying which moves deserve more attention and facilitating faster and deeper searches. But what works for chess — and checkers and Othello — does not work for Go.
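
For reference, here is a compact sketch in Python of minimax with alpha-beta pruning; the game hooks (moves, apply_move, evaluate) are abstract placeholders, not a real chess or Go engine. A plain minimax visits on the order of b^d positions at branching factor b and depth d, which is why b = 250 hurts so much more than b = 35.

```python
def alphabeta(state, depth, alpha, beta, maximizing, moves, apply_move, evaluate):
    """Minimax search to `depth` plies, skipping branches that provably
    cannot change the result (alpha-beta pruning)."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    if maximizing:
        value = float("-inf")
        for m in legal:
            value = max(value, alphabeta(apply_move(state, m), depth - 1,
                                         alpha, beta, False,
                                         moves, apply_move, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # the minimizer already has a better option elsewhere
        return value
    value = float("inf")
    for m in legal:
        value = min(value, alphabeta(apply_move(state, m), depth - 1,
                                     alpha, beta, True,
                                     moves, apply_move, evaluate))
        beta = min(beta, value)
        if alpha >= beta:
            break  # the maximizer already has a better option elsewhere
    return value
```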

[…]

“A lot of people peak out at a certain level of amateur and never get any stronger,” David Fotland explains. Fotland, an early computer Go innovator, also worked as chief engineer of Hewlett Packard’s PA-RISC processor in the ’80s, and tested the system with his Go program. “There’s some kind of mental leap that has to happen to get you past that block, and the programs ran into the same issue. The issue is being able to look at the whole board, not just the local fights.”

[…]

Coulom had exchanged ideas with a fellow academic named Bruno Bouzy, who believed that the secret to computer Go might lie in a search algorithm known as Monte Carlo. Developed in the late 1940s to model nuclear explosions, Monte Carlo replaces an exhaustive search with a statistical sampling of fewer possibilities. The approach made sense for Go. Rather than having to search every branch of the game tree, Monte Carlo would play out a series of random games from each possible move, and then deduce the value of the move from an analysis of the results.
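
Here is a minimal sketch in Python of that idea, pure Monte Carlo evaluation over an assumed abstract game interface. Crazy Stone’s actual method grew into the more sophisticated Monte Carlo tree search, so treat this as the core intuition only.

```python
import random

def playout(state, moves, apply_move, winner, me):
    """Play uniformly random moves until the game ends; 1.0 if `me` wins."""
    while winner(state) is None:
        state = apply_move(state, random.choice(moves(state)))
    return 1.0 if winner(state) == me else 0.0

def monte_carlo_move(state, moves, apply_move, winner, me, n_playouts=1000):
    """Rank each legal move by the average outcome of random games from it."""
    def value(move):
        start = apply_move(state, move)
        wins = sum(playout(start, moves, apply_move, winner, me)
                   for _ in range(n_playouts))
        return wins / n_playouts
    return max(moves(state), key=value)
```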

[…]

Black and white stones continue to fill the board, beautiful as always, forming what is technically known as a percolated fractal.

[…]

Coulom plays down the Electric Sage Battle. “The real competition is program against program,” he told me during one early phone interview. “When my opponent is a programmer, we are doing the same thing. We can talk to each other. But when I play against a professional and he explains the moves to me, it is too high level. I can’t understand, and he can’t understand what I am doing. The Densei-sen — it is good for publicity. I am not so interested in that.”

[…]

According to University of Sydney cognitive scientist and complex systems theorist Michael Harré, professional Go players behave in ways that are incredibly hard to predict. In a recent study, Harré analyzed Go players of various strengths, focusing on the predictability of their moves given a specific local configuration of stones. “The result was totally unexpected,” he says. “Moves became steadily more predictable until players reached near-professional level. But at that point, moves started getting less predictable, and we don’t know why. Our best guess is that information from the rest of the board started influencing decision-making in a unique way.”

[…]

…no programmers think of their creations as “intelligent.” “The game of Go is spectacularly challenging,” says Coulom, “but there is nothing to do with making a human intelligence.” In other words, Watson and Crazy Stone are not beings. They are solutions to specific problems. That’s why it’s inaccurate to say that IBM Watson will be used to fight cancer, unless playing Jeopardy helps reduce tumors. Developing Watson might have led to insights that help create an artificial diagnostician, but that diagnostician isn’t Watson, just as MCTS programs used in hospital planning are not Crazy Stone.

The public relations folks at IBM paint a different picture, and so does the press. Anthropomorphized algorithms make for a better story. Deep Blue and Watson can be pitted against humans in highly produced man-machine battles, and IBM becomes the gatekeeper of a new era in artificial intelligence. Caught between atheism and a crippling fear of death, Ray Kurzweil and other futurists feed this mischaracterization by trumpeting the impending technological apotheosis of humanity, their breathless idiocy echoing through popular media. “The Brain’s Last Stand,” read the cover of Newsweek after Kasparov’s defeat. But in truth, these machines are nowhere close to mimicking the brain, and their creators admit as much.

Many Go players see the game as the final bastion of human dominance over computers. This view, which tacitly accepts the existence of a battle of intellects between humans and machines, is deeply misguided. In fact, computers can’t “win” at anything, not until they can experience real joy in victory and sadness in defeat, a programming challenge that makes Go look like tic-tac-toe. Computer Go matches aren’t the brain’s last stand. Rather, they help show just how far machines have to go before achieving something akin to true human intelligence. Until that day comes, perhaps it’s best to view the Densei-sen as programmers do. “It is fun for me,” says Coulom, “but that’s all.”