Tag Archives: communication

The Hi-Tech Mess of Higher Education by David Bromwich | The New York Review of Books



Students at Deep Springs College in the California desert, near the Nevada border, where education involves ranching, farming, and self-governance in addition to academics – Jodi Cobb/National Geographic/Getty Images

The financial crush has come just when colleges are starting to think of Internet learning as a substitute for the classroom. And the coincidence has engendered a new variant of the reflection theory. We are living (the digital entrepreneurs and their handlers like to say) in a technological society, or a society in which new technology is rapidly altering people’s ways of thinking, believing, behaving, and learning. It follows that education itself ought to reflect the change. Mastery of computer technology is the major competence schools should be asked to impart. But what if you can get the skills more cheaply without the help of a school?

A troubled awareness of this possibility has prompted universities, in their brochures, bulletins, and advertisements, to heighten the one clear advantage that they maintain over the Internet. Universities are physical places; and physical existence is still felt to be preferable in some ways to virtual existence. Schools have been driven to present as assets, in a way they never did before, nonacademic programs and facilities that provide students with the “quality of life” that makes a college worth the outlay. Auburn University in Alabama recently spent $72 million on a Recreation and Wellness Center. Stanford built Escondido Village Highrise Apartments. Must a college that wants to compete now have a student union with a food court and plasma screens in every room?


The model seems to be the elite club—in this instance, a club whose leading function is to house in comfort thousands of young people while they complete some serious educational tasks and form connections that may help them in later life.


A hidden danger both of intramural systems and of public forums like “Rate My Professors” is that they discourage eccentricity. Samuel Johnson defined a classic of literature as a work that has pleased many and pleased long. Evaluations may foster courses that please many and please fast.

At the utopian edge of the technocratic faith, a rising digital remedy for higher education goes by the acronym MOOCs (massive open online courses). The MOOC movement is represented in Ivory Tower by the Silicon Valley outfit Udacity. “Does it really make sense,” asks a Udacity adept, “to have five hundred professors in five hundred different universities each teach students in a similar way?” What you really want, he thinks, is the academic equivalent of a “rock star” to project knowledge onto the screens and into the brains of students without the impediment of fellow students or a teacher’s intrusive presence in the room. “Maybe,” he adds, “that rock star could do a little bit better job” than the nameless small-time academics whose fame and luster the video lecturer will rightly displace.

That the academic star will do a better job of teaching than the local pedagogue who exactly resembles 499 others of his kind—this, in itself, is an interesting assumption at Udacity and a revealing one. Why suppose that five hundred teachers of, say, the English novel from Defoe to Joyce will all tend to teach the materials in the same way, while the MOOC lecturer will stand out because he teaches the most advanced version of the same way? Here, as in other aspects of the movement, under all the talk of variety there lurks a passion for uniformity.


The pillars of education at Deep Springs are self-governance, academics, and physical labor. The students number scarcely more than the scholar-hackers on Thiel Fellowships—a total of twenty-six—but they are responsible for all the duties of ranching and farming on the campus in Big Pine, California, along with helping to set the curriculum and keep their quarters. Two minutes of a Deep Springs seminar on citizen and state in the philosophy of Hegel give a more vivid impression of what college education can be than all the comments by college administrators in the rest of Ivory Tower.


Teaching at a university, he says, involves a commitment to the preservation of “cultural memory”; it is therefore in some sense “an effort to cheat death.”

A Declaration of the Independence of Cyberspace (February 8, 1996)


Wonderful attention to detail.

Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.


Cyberspace consists of transactions, relationships, and thought itself, arrayed like a standing wave in the web of our communications. Ours is a world that is both everywhere and nowhere, but it is not where bodies live.

We are creating a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth.

We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.

Your legal concepts of property, expression, identity, movement, and context do not apply to us. They are all based on matter, and there is no matter here.

Our identities have no bodies, so, unlike you, we cannot obtain order by physical coercion. We believe that from ethics, enlightened self-interest, and the commonweal, our governance will emerge. Our identities may be distributed across many of your jurisdictions. The only law that all our constituent cultures would generally recognize is the Golden Rule. We hope we will be able to build our particular solutions on that basis. But we cannot accept the solutions you are attempting to impose.


Your increasingly obsolete information industries would perpetuate themselves by proposing laws, in America and elsewhere, that claim to own speech itself throughout the world. These laws would declare ideas to be another industrial product, no more noble than pig iron. In our world, whatever the human mind may create can be reproduced and distributed infinitely at no cost. The global conveyance of thought no longer requires your factories to accomplish.


We will create a civilization of the Mind in Cyberspace. May it be more humane and fair than the world your governments have made before.

Davos, Switzerland

February 8, 1996

by John Perry Barlow <barlow@eff.org>

The Fasinatng … Frustrating … Fascinating History of Autocorrect | Gadget Lab | WIRED


It’s not too much of an exaggeration to call autocorrect the overlooked underwriter of our era of mobile prolixity. Without it, we wouldn’t be able to compose windy love letters from stadium bleachers, write novels on subway commutes, or dash off breakup texts while in line at the post office. Without it, we probably couldn’t even have phones that look anything like the ingots we tickle—the whole notion of touchscreen typing, where our podgy physical fingers are expected to land with precision on tiny virtual keys, is viable only when we have some serious software to tidy up after us. Because we know autocorrect is there as brace and cushion, we’re free to write with increased abandon, at times and in places where writing would otherwise be impossible. Thanks to autocorrect, the gap between whim and word is narrower than it’s ever been, and our world is awash in easily rendered thought.


I find him in a drably pastel conference room at Microsoft headquarters in Redmond, Washington. Dean Hachamovitch—inventor on the patent for autocorrect and the closest thing it has to an individual creator—reaches across the table to introduce himself.


Hachamovitch, now a vice president at Microsoft and head of data science for the entire corporation, is a likable and modest man. He freely concedes that he types teh as much as anyone. (Almost certainly he does not often type hte. As researchers have discovered, initial-letter transposition is a much rarer error.)


The notion of autocorrect was born when Hachamovitch began thinking about a functionality that already existed in Word. Thanks to Charles Simonyi, the longtime Microsoft executive widely recognized as the father of graphical word processing, Word had a “glossary” that could be used as a sort of auto-expander. You could set up a string of words—like insert logo—which, when typed and followed by a press of the F3 button, would get replaced by a JPEG of your company’s logo. Hachamovitch realized that this glossary could be used far more aggressively to correct common mistakes. He wrote a little code that would allow you to press the left arrow and F3 at any time and immediately replace teh with the. His aha moment came when he realized that, because English words are space-delimited, the space bar itself could trigger the replacement, to make correction … automatic! Hachamovitch drew up a list of common errors, and over the next few years he and his team went on to solve many of the thorniest. Seperate would automatically change to separate. Accidental cap locks would adjust immediately (making dEAR grEG into Dear Greg). One Microsoft manager dubbed them the Department of Stupid PC Tricks.
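To make the mechanism concrete, here's a minimal sketch in Python of that space-triggered replacement. The correction table and the caps-lock heuristic below are simplified stand-ins invented for illustration, not Word's actual lists or logic.

```python
# Space-triggered autocorrection, sketched: a lookup table of common
# errors, applied as each space-delimited word is completed.
CORRECTIONS = {"teh": "the", "seperate": "separate"}  # illustrative entries

def fix_caps_lock(word: str) -> str:
    """Naive heuristic for accidental Caps Lock: 'dEAR' -> 'Dear'.
    (It would also 'fix' words like iPhone, so real rules are subtler.)"""
    if word and word[0].islower() and any(c.isupper() for c in word[1:]):
        return word[0].upper() + word[1:].lower()
    return word

def autocorrect(text: str) -> str:
    """Correct word by word; the space bar delimits words, which is
    what lets each replacement fire automatically."""
    fixed = []
    for word in text.split(" "):
        if word.lower() in CORRECTIONS:
            fixed.append(CORRECTIONS[word.lower()])
        else:
            fixed.append(fix_caps_lock(word))
    return " ".join(fixed)

print(autocorrect("dEAR grEG, teh files are seperate"))
# -> Dear Greg, the files are separate
```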


One day Hachamovitch went into his boss’s machine and changed the autocorrect dictionary so that any time he typed Dean it was automatically changed to the name of his coworker Mike, and vice versa. (His boss kept both his computer and office locked after that.) Children were even quicker to grasp the comedic ramifications of the new tool. After Hachamovitch went to speak to his daughter’s third-grade class, he got emails from parents that read along the lines of “Thank you for coming to talk to my daughter’s class, but whenever I try to type her name I find it automatically transforms itself into ‘The pretty princess.’”


On idiom, some of its calls seemed fairly clear-cut: gorilla warfare became guerrilla warfare, for example, even though a wildlife biologist might find that an inconvenient assumption. But some of the calls were quite tricky, and one of the trickiest involved the issue of obscenity. On one hand, Word didn’t want to seem priggish; on the other, it couldn’t very well go around recommending the correct spelling of mothrefukcer. Microsoft was sensitive to these issues. The solution lay in expanding one of spell-check’s most special lists, bearing the understated title: “Words which should neither be flagged nor suggested.”


One day Vignola sent Bill Gates an email. (Thorpe couldn’t recall who Bill Vignola was or what he did.) Whenever Bill Vignola typed his own name in MS Word, the email to Gates explained, it was automatically changed to Bill Vaginal. Presumably Vignola caught this sometimes, but not always, and no doubt this serious man was sad to come across like a character in a Thomas Pynchon novel. His email made it down the chain of command to Thorpe. And Bill Vaginal wasn’t the only complainant: As Thorpe recalls, Goldman Sachs was mad that Word was always turning it into Goddamn Sachs.

Thorpe went through the dictionary and took out all the words marked as “vulgar.” Then he threw in a few anatomical terms for good measure. The resulting list ran to hundreds of entries:

anally, asshole, battle-axe, battleaxe, bimbo, booger, boogers, butthead, Butthead …

With these sorts of master lists in place—the corrections, the exceptions, and the to-be-primly-ignored—the joists of autocorrect, then still a subdomain of spell-check, were in place for the early releases of Word. Microsoft’s dominance at the time ensured that autocorrect became globally ubiquitous, along with some of its idiosyncrasies. By the early 2000s, European bureaucrats would begin to notice what came to be called the Cupertino effect, whereby the word cooperation (bizarrely included only in hyphenated form in the standard Word dictionary) would be marked wrong, with a suggested change to Cupertino. There are thus many instances where one parliamentary back-bencher or another longs for increased Cupertino between nations. Since then, linguists have adopted the word cupertino as a term of art for such trapdoors that have been assimilated into the language.
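A toy sketch of how those three lists might interact, with Python's difflib standing in for Word's real suggester. Every entry is invented for illustration, and the unhyphenated cooperation is deliberately left out of the toy dictionary to mimic the quirk described above.

```python
import difflib

AUTOCORRECT = {"seperate": "separate", "teh": "the"}  # fixed on the fly
SILENT = {"asshole", "bimbo", "butthead"}  # neither flagged nor suggested
DICTIONARY = {"separate", "the", "Cupertino"}  # known-good spellings

def spellcheck(word: str):
    if word in AUTOCORRECT:
        return ("autocorrected", AUTOCORRECT[word])
    if word in SILENT or word in DICTIONARY:
        return ("accepted", word)
    # Flag the word and offer the nearest dictionary entry as a suggestion.
    close = difflib.get_close_matches(word, DICTIONARY, n=1, cutoff=0.5)
    return ("flagged", close[0] if close else None)

print(spellcheck("cooperation"))  # ('flagged', 'Cupertino')
```

With the unhyphenated form missing, the nearest entry a naive suggester finds is Cupertino: exactly the trapdoor the bureaucrats kept falling through.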


Autocorrection is no longer an overqualified intern drawing up lists of directives; it’s now a vast statistical affair in which petabytes of public words are examined to decide when a usage is popular enough to become a probabilistically savvy replacement. The work of the autocorrect team has been made algorithmic and outsourced to the cloud.

A handful of factors are taken into account to weight the variables: keyboard proximity, phonetic similarity, linguistic context. But it’s essentially a big popularity contest. A Microsoft engineer showed me a slide where somebody was trying to search for the long-named Austrian action star who became governor of California. Schwarzenegger, he explained, “is about 10,000 times more popular in the world than its variants”—Shwaranegar or Scuzzynectar or what have you. Autocorrect has become an index of the most popular way to spell and order certain words.
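Here's a toy version of that popularity contest. The frequency counts, weights, and similarity proxy are all invented for the sketch; a real system mines web-scale corpora and much richer context features.

```python
import math

# Toy corpus counts standing in for "petabytes of public words".
FREQUENCY = {"schwarzenegger": 10_000, "shwaranegar": 1, "scuzzynectar": 1}

def similarity(a: str, b: str) -> float:
    """Crude stand-in for keyboard/phonetic similarity: shared letters."""
    return len(set(a) & set(b)) / len(set(a) | set(b))

def score(candidate: str, typed: str) -> float:
    # Popularity dominates, per the "10,000 times more popular" remark.
    return 0.7 * math.log1p(FREQUENCY[candidate]) + 0.3 * similarity(candidate, typed)

def best_correction(typed: str) -> str:
    return max(FREQUENCY, key=lambda c: score(c, typed))

print(best_correction("shwaranegar"))  # schwarzenegger
```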

When English spelling was first standardized, it was by the effective fiat of those who controlled the communicative means of production. Dictionaries and usage guides have always represented compromises between top-down prescriptivists—those who believe language ought to be used a certain way—and bottom-up descriptivists—those who believe, instead, that there’s no ought about it.

The emerging consensus on usage will be a matter of statistical arbitration, between the way “most” people spell something and the way “some” people do. If it proceeds as it has, it’s likely to be a winner-take-all affair, as alternatives drop out. (Though Apple’s recent introduction of personalized, “contextual” autocorrect—which can distinguish between the language you use with your friends and the language you use with your boss—might complicate that process of standardization and allow us the favor of our characteristic errors.)


The possibility of linguistic communication is grounded in the fact of what some philosophers of language have called the principle of charity: The first step in a successful interpretation of an utterance is the belief that it somehow accords with the universe as we understand it. This means that we have a propensity to take a sort of ownership over even our errors, hoping for the possibility of meaning in even the most perverse string of letters. We feel honored to have a companion like autocorrect who trusts that, despite surface clumsiness or nonsense, inside us always smiles an articulate truth.


Today the influence of autocorrect is everywhere: A commenter on the Language Log blog recently mentioned hearing of an entire dialect in Asia based on phone cupertinos, where teens used the first suggestion from autocomplete instead of their chosen word, thus creating a slang that others couldn’t decode. (It’s similar to the Anglophone teenagers who, in a previous texting era, claimed to have replaced the term of approval cool with that of book because of happenstance T9 input priority.) Surrealists once encouraged the practice of écriture automatique, or automatic writing, in order to reveal the peculiar longings of the unconscious. The crackpot suggestions of autocorrect have become our own form of automatic writing—but what they reveal are the peculiar statistics of a world id.
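The T9 aside is easy to check: on a standard ITU phone keypad, cool and book are typed with exactly the same key sequence, so whichever candidate the phone ranks first wins. A quick sketch:

```python
# Standard ITU nine-key layout: each digit covers a run of letters.
KEYPAD = {"2": "abc", "3": "def", "4": "ghi", "5": "jkl",
          "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}
LETTER_TO_KEY = {ch: key for key, letters in KEYPAD.items() for ch in letters}

def t9_keys(word: str) -> str:
    """The digit sequence a word requires on a nine-key pad."""
    return "".join(LETTER_TO_KEY[ch] for ch in word.lower())

print(t9_keys("cool"), t9_keys("book"))  # both '2665'
```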

Why the World Cup always reminds me that the yellow card is a brilliant object.: Design Observer


As objects go, it doesn’t look like much. It’s, you know, a yellow card. But when theatrically brandished by an official, almost literally in the face of a player who has done something uncool, it has wild power. It sets off a stadium-full of whistling, and cartoonish arm-flailing from the carded player and his colleagues. A yellow card has real consequences: possession, a free kick, and the possibility that if the carded competitor blunders again he’ll leave his team understaffed for this match, and will sit out the next.

It strikes me as more interesting than the other penalty card, the red. That one results in instant ejection, in response to some plainly egregious act. Its presentation is memorable, of course. But the yellow card is both more ambiguous and more humane. It’s a warning: There’s trouble; but it could be worse.

According to an item on FIFA’s site, the penalty-card notion was invented by an official named Ken Aston, in the wake of a 1966 World Cup quarterfinal match between England and Argentina. Apparently there was some controversy about whether the referee had clearly communicated penalty warnings leveled against two English players:

It started a train of thought in Aston’s head too. He began to think about ways to avoid such problems in the future. “As I drove down Kensington High Street, the traffic light turned red. I thought, ‘Yellow, take it easy; red, stop, you’re off’.”

Yellow and red penalty cards became part of the World Cup in 1970, and are of course now a routine element of soccer and (according to Wikipedia, anyway) a number of other sports.

The cards are such a brilliant solution to the problem of making sure a penalty has been adequately signaled — they transcend language; they’re clear not just to everyone on the field, but in the stadium, or watching on a screen — that it’s hard to imagine the game without them.

Moreover (and this is the line of thought that idle World Cup moments lead me to every four years), I semi-seriously wish we could port the card system into daily life. Imagine a yellow card as a warning to the shop employee whose insolence is getting close to inspiring a boycott; to the dinner companion whose habit of checking his phone is on the verge of becoming a friendship-ender; to the aggressive tailgater who is just about to inspire a road-rage incident.

Glancing ETech 2004


1 – Eye contact is a polite way to start conversations

Erving Goffman in his book “Behavior in Public Places” studied the way people interacted in twos and threes and small groups and looked at how people move from unfocused interactions, where they’re in the same place but not together, to encounters, where they’re actually talking to each other.

He saw that people didn’t just start talking, but first used ambiguous expressive communication to ask if it was okay to start talking.

Hang on, expressive communication? Right, he made a division into two kinds of messages:

Linguistic messages are your spoken ones. You speak about whatever you want, and deliberately communicate the meaning you want to communicate. Like me giving this talk.

Expressive messages are the ones you – you’re the message receiver – glean about me. The fact I chose to use this particular word rather than another. My body language. The fact I’m here at all! A nervous laugh.

Expressive messages are usually involuntary, but you can pretend if you want: that’s like a poker face.

The great thing about expressive messages is that your intention in sending them is usually unclear — or at least unprovable! The reason I’m talking (and talking is linguistic communication) is to give you information; that much is obvious. But if I look in your direction, am I trying to get your attention, or just staring into space?

So Goffman found that a person would try to start a conversation with a glance that is…

“sufficiently tentative and ambiguous to allow him to act as if no initiation has been intended, if it appears that his overture is not desired.”

Which makes sense. It’s a good way of saving face. Rather than being a person other people ignore, you can just act as though your thoughts were on other things. Letting people save face is really important if you want to keep them happy.

Howard Rheingold in his book Smart Mobs gives a good example of text messaging being used for this. He talked about kids in Sweden after a party. Say you’ve seen someone you quite liked and you’d like to see them again, but don’t know if the feeling’s shared. You’d send them a blank text message, or maybe just a really bland one like “hey, good party”. If they reply, ask for a date. The first message is almost entirely expressive communication: tentative, deniable.

So what usually happens in cyberspace, if I want to approach someone? I could send them an email to see if it’s okay to start emailing… it’s all quite blunt, and although I can be tentative in what I write in that email it’d be better if it was built into the software itself.


2 – Healthier small groups

So the way eye contact works as a tentative conversation opener is you look at someone, and they give you a clearance sign for that conversation by meeting your eyes. The reason this works, says Goffman, is that the very fact we’re using a sense can itself be noticed. And the way we notice it is by using those very same senses!

If two people look at each other, they can see each other and simultaneously see that the other person has seen them. It’s really efficient.

This visibility is used in small groups. Whenever you have more than two people together, there’s the chance that a pair of them might be carrying on with their own secret interaction, just between the two of them. They’re being disloyal to the gathering.

This is no problem in the real world because if it gets too bad then everyone else in the group can see what’s going on. That visibility moderates the behaviour and keeps everyone concentrated on the main activity.

No such luck in cyberspace. If there’s a bunch of us chatting, it’s usually really easy for a couple of people to start a direct connection, to start talking without anyone else noticing, even about the same subject. It doesn’t feel impolite, as it would in the physical world, because nobody’s going to notice, even though it still shifts their attention from the main event.

In the real world, people generally opt to stick with the group and feel uncomfortable about not doing so.

In other words, they’re polite. I’m quite up for this idea of politeness. Number one, people want to be polite. Number two, people don’t want to put other people in the position of having to be rude.

You can see this in software.

There’s an example here in a piece of software called Montage, which a research group developed to help a team of people work together even though they were in geographically distributed offices. Montage simulated popping your head into someone’s office to see if they were busy and, if they were free, asking them a question.

The way it did this was to have a button on your computer that brought up the video from a webcam on somebody else’s machine. They called looking through this webcam a glance. Glances were reciprocal, so if you looked into someone’s office with the webcam, a video of you faded up on their computer.

It worked pretty well as it happens, but people did say they felt more obliged to let those video glances turn into encounters than if someone looked through the door.

Why? I’d say it’s because there’s no plausible way to pretend you didn’t notice the video approach. You’re working on an Excel spreadsheet when bang a video pops up on your screen. No way you’re not going to notice that. In fact, it’s so obvious that you can’t not notice that, the person who’s glancing in must have a really important request! So either you ignore them, and implicitly accuse them of frivolously wasting your time, or you take the message. People take the message.

So. People want to be polite, in general. In a group situation they’ll moderate disloyal activity and join in with the whole group instead of carrying on with a side-interaction. That’s why, in Glancing, you glance not at individual people but at the whole group. Because in real life, politeness would encourage you to look at the whole group. The software default is to assume you want to be polite.

This isn’t true, for example, with email. It’s all too easy to reply only to the sender on a cc’d email. Even if this doesn’t happen to you, you’re not sure whether anyone else is doing it. There’s a lack of visibility.

Incidentally, I’ll come back to the question of why software doesn’t generally give you visibility of sense use in a bit. But for the moment I’m talking about why eye contact is good, so,

3 – Recognition

When you look at someone, you’re recognising they’re there.

Recognition is important because it helps with human bonding.

Why is bonding important in this context? Well, it’s because in small groups we’re dealing with people who are closest to you, and these are the people who you need to bond with the most.

Here’s a tool to help think about this kind of thing. Transactional Analysis is a psychological tool from the 1950s. It models communication between people in terms of transactions, a request and response. The smallest unit of a transaction, the basic unit of human recognition, TA calls a stroke.

It’s a nice way of thinking about it: recognising someone, making eye contact with someone, is a stroke: think of protohumans on the African savannah grooming one another, swapping strokes.

Now, Robin Dunbar, an anthropologist, talked about grooming in his paper on neocortex size and social group size in primates. He said we have a maximum cohesive social group of about 150. That’s the maximum stable size of your community in a given context — so, we find that scientific research specialities have a size of about 150 people. My mum has about 150 people on her Christmas card list. It was the size of early villages across the world 8,000 years ago, and in comparable cultures now. It’s been the size of army units through the ages. It’s the maximum number of buddies the AOL instant messenger server allows you to have.

Actually, 150 is the number of people the social computing centres of your brain can work with. You know, if you’re keeping track of who you owe favours, who nicked your berries last time you climbed a tree, that kind of thing. 150.

But actually that number is dictated by how much time you spend grooming your primary network. Primary network? This large social group is made out of many smaller networks.

Dunbar found that the primary network, the small group, they’re cohorts. They protect each other, stand up for each other against the big group as a whole. Individuals in too large a social group get stressed; it’s important to have your supportive primary network around, and you maintain that by expending effort on them.

Grooming, for chimps, is picking fleas and lice, but we have a way which is more efficient: conversation. Whereas you can only pick fleas from one other person at a time, you can talk to several at once. One of the key characteristics of this kind of grooming, however, is that it’s public.

This can be seen in the exchange of text messages in Alex Taylor’s paper looking at 16-19 year olds in an English school. They send each other quite mundane messages, with their mobile phones, but what’s important is the reciprocity. They establish their peer networks and social status, inside their community, by who sent what to whom, and who replied. Taylor said it resembled descriptions of gift-giving cultures in Polynesia.

It’s important you can see who’s grooming who because it’s like a public assertion of “don’t mess with my friends”. It’s meaningful when you publicly put your neck on the line for someone.

The kids simulate visibility of the grooming, the strokes of recognition, by showing each other the text messages. They treat them as things of value to show off.

So SMS is brilliant. It covers the two most important parts of being human: figuring out the pecking order in your community, and getting dates.


These three together tell us what attributes of eye contact we need to support for small groups. We need use of eye contact to be:

  • unconscious or involuntary (but deliberate if you want)

  • visible to other people in the group

Out of that, as we can see with SMS and mobiles, you can grow the tentative requests for encounters and social grooming.

Two other aspects. You need to feel the presence of people around so you can decide to make eye contact.

And I’d prefer it to be polite.

That’s the why. That’s very roughly what we’re aiming for. Now for the how.

Done in two ways:

1 – Presence

Telepresence is a huge topic I wish I had time to go into more here. As it is, I’ll just point you towards At The Heart Of It All which summarises different kinds of presence and why presence is good.

In a nutshell, we’re interested in the subjective feeling that there are other people nearby, and that you all feel like the same people are there. That’s social copresence. And presence is good because it does things like improve social judgement, learning ability, and memory.

All you really need for presence is to be able to detect the actions of another person on your computer. It can be anything above seeing whether the other person has turned the application on or not. Realism, little avatars or faces, isn’t important.
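To make that concrete, here's a minimal sketch assuming nothing more than timestamps of user actions; the names and the five-minute window are invented for illustration.

```python
import time

last_seen: dict[str, float] = {}  # user -> timestamp of most recent action

def report_action(user: str) -> None:
    """Record that the user did anything at all with the app running."""
    last_seen[user] = time.time()

def is_present(user: str, window_seconds: float = 300) -> bool:
    """Someone feels present if they have acted within the window."""
    return time.time() - last_seen.get(user, 0.0) < window_seconds

report_action("alice")
print(is_present("alice"), is_present("bob"))  # True False
```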


2 – Interface needs to be close to unconscious, visible, and tentative

To make the interface to Glancing almost backgrounded, and to encourage perhaps unconscious use, there are a number of tricks we can use:

  • it’s small, both physically and in how much it stands out among other applications. It’s got a tiny icon and it operates in a very Mac-like way, sitting where these sorts of applications usually sit. Looking at the icon and opening the menu is a familiar gesture, so there’s a low cognitive overhead in looking at who’s online, made even lower by the fact you don’t actually choose to glance — it’s a side-effect of seeing the list of who’s in your group. And seeing that list is only a single click away from whatever you’re doing, because that menu is always available.

  • it’s slow. The icons are deliberately very similar so that when the glancing activity changes it doesn’t immediately catch your attention. If it did, that might mean each person in the group would decide to reciprocate, and suddenly you’re all in an encounter situation you didn’t want. So the icons are different enough to tell you the activity level, but not different enough to be distracting. Given that people might not notice the level for a while, Glancing is a slow application. A glance persists for 2 hours — that is, two hours after you’ve done a glance, the eye will still be open a little bit (there’s a code sketch of this decay just after the list).

  • ambiguous. Those two choices contribute to the feeling that you don’t know whether people have deliberately opened the menu or not, or whether they’ve even noticed you’ve been sending glances. It brings in that ‘tentative’ aspect I was talking about earlier, and hopefully addresses the problem we saw in Montage. Something that adds to this is that you don’t glance at a specific person, you glance at the whole group. Just to restate this politeness thing: if you were sitting round a pub table with your mates, you wouldn’t just keep on looking at a single person — that’s a subactivity and frowned upon. Besides, everyone else would see you doing it and think you were weird. So to be polite you’d distribute your glances, your little strokes of recognition, around the entire table. What Glancing, the application, does is glance at the entire group: it assumes in the first instance that you want to be polite, and just does that for you.
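And here's the promised sketch of that slow decay: the time since the group's last glance mapped onto an eye icon that fades over the two-hour persistence window. The two hours is from the talk; the level names and the intermediate thresholds are illustrative rather than the app's actual values.

```python
import time

GLANCE_PERSISTENCE = 2 * 60 * 60  # seconds: a glance persists for two hours

def icon_level(last_glance_at: float, now: float | None = None) -> str:
    """Map the age of the group's last glance onto a slowly changing icon."""
    now = time.time() if now is None else now
    age = now - last_glance_at
    if age < GLANCE_PERSISTENCE / 3:
        return "eye wide open"
    if age < GLANCE_PERSISTENCE:
        return "eye open a little bit"
    return "eye closed"

print(icon_level(time.time() - 90 * 60))  # eye open a little bit
```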


We can place ourselves in the middle of two long-term trends.

The first trend is the mixing of cyberspace and the real world, which has tides in two directions.

Coming from cyberspace we expect to be able to manipulate objects and automate that manipulation. That requires giving things handles and names. Coming into the physical world, we find it’s not like that: it’s a continuous world, we can’t get handles on it. So we end up creating handles for things: MP3s for music, GeoURL for locations, email addresses for people. Look at how much effort the social software community is spending talking about Identity, which is just moot, not important, when we socialise face to face. Not only do we create handles in the real world, but we get upset when we can’t make full use of them. Why we get upset about that I’ll come back to at the end because I think it’s important.

This isn’t unique to cyberspace. We do the same thing with scientific models, or any way of talking about the world. We externalise our mental models. This process is called constructivism: we have to partition and name the world around us in order to interact with it. Now, I’m not saying this is new: it’s the industrial mindset (the conduit metaphor) — the ability to break a process down into discrete steps is

(a) what gave us the ability to make production lines, to commoditise goods, and to complete the second half of the industrial revolution. That was Fordism, early twentieth century

(b) and this is the same as being able to program. That is, to decide that you can represent a process using only numbers and simple manipulations. You break it up into performable steps.

The problem being that to name and identify things is contentious. In cyberspace we’ve limited the number of people who can name things (have ontic powers), who can bring things into existence by contributing to the naming. So we can’t all create webpages, create new email protocols, or whatever. I’ll come back to this, because I want to talk about the other direction of this tide.

Coming into cyberspace we’re bombarded with data. In the physical world we’re used to handling this with our senses, peripheral vision. So we demand to not just read the data about the stockmarket or our social network, but to convert it into a format where it can be gleaned, experienced.

This is what Mark Rantzer has called supersenses: new communication senses to understand the huge mass of information that confronts us.

The idea is that by compressing complex data and presenting it in a way that minimises cognitive overhead, we can have a kind of background awareness of otherwise difficult to understand qualities.

This is the idea behind the Ambient Orb, which glows different colours depending on different variables. So it could glow red if the stockmarket was falling. Once you’d gotten used to the device, you wouldn’t even notice it was there, it would just be sitting there quietly white or green in the corner of your eye. Then one day it glows red and suddenly you become really aware of it: you’re losing money!

What I really love about the Ambient Orb is that it takes advantage of its presence in a physical world to do things I’ve been complaining are hard online. Other people nearby can tell if you’re looking at it, there’s a visibility of use. You can catch it in your peripheral vision, take it for granted, and never really focus on it until you see it’s red.

I think what’s missing from it is an aspect of how we process complex data normally. It doesn’t have an aspect of “look closer”: you can’t examine it harder to get a better representation of the stockmarket.

It’s the same with the Dangling String, which is a device that hangs in your peripheral vision, a piece of string hanging from the ceiling, and it jiggles about the more network traffic there is on your local network. It’s a terrific example of what Mark Weiser, the father of ubiquitous computing, calls “calm technology”. In fact, I think this kind of calm technology is the future of public computing in general. But let’s say it’s jiggling really badly one day and you want to see what’s going on — so you look really close, but what do you see? Just more string!

That ‘look closer’ bit is missing. What we’re finding with these new supersenses – the Ambient Orb, Dangling String and Montage – is that we can’t use our normal computer-world metaphors of objects-and-messages to approximate how human beings really work. How we actually use our senses, not just looking and hearing but our social senses too.

That is, before now we could think about the email and the email client as being separate things. We didn’t have to consider what it really means for one person to send an email to another person, not in the social sense. It’s all abstraction layers, after all. An email client receives an email: why should the program care who it’s from, whether it was expected or not? They’re orthogonal issues, surely?

Well, what we’re finding is that with small groups the abstraction layers break down. From a design perspective we can’t just think about discrete events, we have to enable [garden] the dynamic processes of ongoing communication too. And that’s part of the second big trend.

The second big trend is the gradual improvement of our models for understanding dynamic processes.

A very brief history.

The computing world comes out of first-order cybernetics. This way of looking at the world came from the 1950s and was all about controlling systems with loops and feedback. From that came the idea of sending messages, of systems responding to messages and sending more messages out. If we could structure the world into objects and information, all in messages, all nicely abstracted, that’s all we’d need to do, we’d be sorted.

That’s the worldview that produced the computer chip, programming, and cyberspace. It’s all request and response, messages being sent between boxes.

We’re now confronting issues already identified by the more mature second-order cybernetics, which arose in the 1970s but was pretty vague and so not as influential. It’s all about human processes: instead of looking at individual objects and messages, it talked about systems which self-create and change. For this we need to allow fuzzier edges. There should be visibility of the messages being sent around, so nearby objects can alter their behaviour and adapt. Systems should be able to complexify and simplify.

Now the reason this second trend is so important is the constructivist nature of cyberspace I mentioned earlier. We use our mental models to understand the world, and there’s feedback too: we use our mental models to create it.

If we understand the world through the lens of first-order cybernetics, that means we model the world in terms of people being objects sending messages to one another. That’s the world in which all we care about is that person A can send an email to person B.

On the other hand, if we understand the world in terms of dynamic processes, then we’re more interested in how people band together into small groups. We’re more interested in making email work better for the people you’re really close to. To help defuse arguments, help people save face.

And that’s the world we’re gradually moving into.


What the philosopher Luciano Floridi points out is that cyberspace is still relatively simple. The actions of a single individual can disproportionately affect the composition or evolution of the society that exists online. What’s more, the composition of the environment quite directly affects the kinds of actions people can perform: the existence of the email protocol allows a new form of interpersonal communication.

This combination – of being powerful and having clear consequences – puts us in a similar situation to what’s happening in the real world with the environment. When humans became powerful enough to affect the environment on a global scale, a new kind of ethics emerged, one that gave value to things which might inadvertently be damaged: the atmosphere, rainforests, rocks. We give these things intrinsic value. Actually it happens even on a small scale. Geologists have a code too, where the rocks have an intrinsic worth: you don’t bore holes into them in obvious places, you don’t leave paint splashed around.

In the context of cyberspace, Floridi calls this cyberethics.

Information objects themselves, he says, have moral worth. The more able we are to manipulate and use an object, that is, the more handles it has, the more valuable it is, the more worthy it is. If you improve the information, you’re doing a good deed. That’s wiki gardening, the concept of idly improving a website just as you wander by. If you leave the object open to be used in as many ways as possible, to be more manipulable, you’re doing a good deed. Well, that’s the free software movement.

Floridi underpins, with a simple, graspable concept, what those of us who have lived with the internet instinctively feel is good and bad.

So from this perspective, concepts like adaptable design, and designing for hackability and unintended consequences aren’t just design rules of thumb, they’re aspects of how to be a good person and create a just society.

From Floridi’s environmental cyberethics, wiki gardening and free software are the cyberspace equivalents of respecting rainforests and biodiversity.