1 – Eye contact is a polite way to start conversations
Erving Goffman, in his book “Behavior in Public Places”, studied the way people interacted in twos and threes and small groups, and looked at how people move from unfocused interactions, where they’re in the same place but not together, to encounters, where they’re actually talking to each other.
He saw that people didn’t just start talking: they first used ambiguous expressive communication to ask whether it was okay to start talking.
Hang on, expressive communication? Right, he made a division into two kinds of messages:
Linguistic messages are your spoken ones. You speak about whatever you want, and deliberately communicate the meaning you want to communicate. Like me giving this talk.
Expressive messages are the ones you – you’re the message receiver – glean about me. The fact I chose to use this particular word rather than another. My body language. The fact I’m here at all! A nervous laugh.
Expressive messages are usually involuntary, but you can pretend if you want: that’s like a poker face.
The great thing about expressive messages is that your intention of sending them is usually unclear — or at least unprovable! The reason I’m talking, and talking is linguistic communication, is to give you information; that much is obvious. But if I look in your direction, am I trying to get your attention, or just staring into space?
So Goffman found that a person would try to start a conversation with a glance that is…
“sufficiently tentative and ambiguous to allow him to act as if no initiation has been intended, if it appears that his overture is not desired.”
Which makes sense. It’s a good way of saving face. Rather than being a person other people ignore, you can just say their thoughts were on other things. Letting people save face is really important if you want to keep them happy.
Howard Rheingold in his book Smart Mobs gives a good example of text messaging being used for this. He talked about kids in Sweden after a party. Say you’ve seen someone you quite liked and you’d like to see them again, but don’t know if the feeling’s shared. You’d send them a blank text message, or maybe just a really bland one like “hey, good party”. If they reply, ask for a date. The first message is almost entirely expressive communication: tentative, deniable.
So what usually happens in cyberspace, if I want to approach someone? I could send them an email to see if it’s okay to start emailing… it’s all quite blunt, and although I can be tentative in what I write in that email it’d be better if it was built into the software itself.
2 – Healthier small groups
So the way eye contact works as a tentative conversation opener is you look at someone, and they give you a clearance sign for that conversation by meeting your eyes. The reason this works, says Goffman, is that the very fact we’re using a sense can itself be noticed. And the way we notice that is by using those very same senses!
If two people look at each other, they can see each other and simultaneously see that the other person has seen them. It’s really efficient.
This visibility is used in small groups. Whenever you have more than two people together, there’s the chance that a pair of them might be carrying on with their own secret interaction, just between the two of them. They’re being disloyal to the gathering.
This is no problem in the real world because if it gets too bad then everyone else in the group can see what’s going on. That visibility moderates the behaviour and keeps everyone concentrated on the main activity.
No such luck in cyberspace. If there’s a bunch of us chatting, it’s usually really easy for a couple of people to start a direct connection, to start talking without anyone else noticing, even about the same subject. It doesn’t feel impolite, as it would in the physical world, because nobody’s going to notice, even though it still shifts their attention from the main event.
In the real world, people generally opt to stick with the group and feel uncomfortable about not doing so.
In other words, they’re polite. I’m quite up for this idea of politeness. Number one, people want to be polite. Number two, people don’t want to put other people in the position of having to be rude.
You can see this in software.
There’s an example here in a piece of software called Montage which a research group developed to help a team of people work together even though they were in geographically distributed offices. Montage simulated popping your head into someone’s office to see if they were busy, and if they were free, you could ask them a question.
The way it did this was to have a button on your computer that brought up the video from a webcam on somebody else’s machine. Looking through this webcam was called a glance. Glances were reciprocal, so if you looked into someone’s office with the webcam, a video of you faded up on their computer.
It worked pretty well as it happens, but people did say they felt more obliged to let those video glances turn into encounters than if someone looked through the door.
Why? I’d say it’s because there’s no plausible way to pretend you didn’t notice the video approach. You’re working on an Excel spreadsheet when bang a video pops up on your screen. No way you’re not going to notice that. In fact, it’s so obvious that you can’t not notice that, the person who’s glancing in must have a really important request! So either you ignore them, and implicitly accuse them of frivolously wasting your time, or you take the message. People take the message.
So. People want to be polite, in general. In a group situation they’ll moderate disloyal activity and join in with the whole group instead of carrying on with a side-interaction. That’s why, in Glancing, you glance not at individual people but at the whole group. Because in real life, politeness would encourage you to look at the whole group. The software default is to assume you want to be polite.
This isn’t true, for example, with email. It’s all too easy to reply only to the sender on a cc’d email. Even if this doesn’t happen to you, you’re not sure whether anyone else is doing it. There’s a lack of visibility.
Incidentally, I’ll come back to the question of why software doesn’t generally give you visibility of sense use in a bit. But for the moment I’m talking about why eye contact is good, so,
3 – Recognition
When you look at someone, you’re recognising they’re there.
Recognition is important because it helps with human bonding.
Why is bonding important in this context? Well, it’s because small groups are made of the people who are closest to you, and these are the people you need to bond with the most.
Here’s a tool to help think about this kind of thing. Transactional Analysis is a psychological tool from the 1950s. It models communication between people in terms of transactions, a request and response. The smallest unit of a transaction, the basic unit of human recognition, TA calls a stroke.
It’s a nice way of thinking about it: recognising someone, making eye contact with someone, is a stroke: think of protohumans on the African savannah grooming one another, swapping strokes.
Now, Robin Dunbar, an anthropologist, talked about grooming in his paper on neocortex size and social group size in primates. He said we have a maximum cohesive social group of about 150. That’s the maximum stable size of your community in a given context — so, we find that scientific research specialities have a size of about 150 people. My mum has about 150 people on her Christmas card list. It was the size of early villages across the world 8000 years ago, and in comparable cultures now. It’s been the size of army units through the ages. It’s the maximum number of buddies the AOL instant messenger server allows you to have.
Actually, 150 is the number of people the social computing centres of your brain can work with. You know, if you’re keeping track of who you owe favours, who nicked your berries last time you climbed a tree, that kind of thing. 150.
But actually that number is dictated by how much time you spend grooming your primary network. Primary network? This large social group is made out of many smaller networks.
Dunbar found that the primary network, the small group, they’re cohorts. They protect each other, stand up for each other against the big group as a whole. Individuals in too large a social group get stressed; it’s important to have your supportive primary network around, and you maintain that by expending effort on them.
Grooming, for chimps, is picking fleas and lice, but we have a way which is more efficient: conversation. Whereas you can only pick fleas from one other person at a time, you can talk to several at once. One of the key characteristics of this kind of grooming, however, is that it’s public.
This can be seen in the exchange of text messages in Alex Taylor’s paper looking at 16–19 year olds in an English school. They send each other quite mundane messages with their mobile phones, but what’s important is the reciprocity. They establish their peer networks and social status, inside their community, by who sent what to whom, and who replied. Taylor said it resembled descriptions of gift-giving cultures in Polynesia.
It’s important you can see who’s grooming who because it’s like a public assertion of “don’t mess with my friends”. It’s meaningful when you publicly put your neck on the line for someone.
The kids simulate visibility of the grooming, the strokes of recognition, by showing each other the text messages. They treat them as things of value to show off.
So SMS is brilliant. It does the two most important things for what it means to be human: figuring out the pecking order in your community, and getting dates.
These three together tell us what attributes of eye contact we need to support for small groups. We need use of eye contact to be:
- unconscious or involuntary (but deliberate if you want)
- visible to other people in the group
Out of that, as we can see with SMS and mobiles, you can grow the tentative requests for encounters and social grooming.
Two other aspects. You need to feel the presence of people around so you can decide to make eye contact.
And I’d prefer it to be polite.
That’s the why. That’s very roughly what we’re aiming for. Now for the How.
Done in two ways:
1 – Presence
Telepresence is a huge topic I wish I had time to go into more here. As it is, I’ll just point you towards At The Heart Of It All which summarises different kinds of presence and why presence is good.
In a nutshell, we’re interested in the subjective feeling there are other people nearby, and that you all feel like the same people are there. That’s social copresence. And presence is good because it does things like improve social judgement, learning ability and memory.
All you really need for presence is to be able to detect the actions of another person on your computer. It can be anything above seeing whether the other person has turned the application on or not. Realism, little avatars or faces, isn’t important.
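Since all presence needs is detecting the actions of another person, it can be sketched very simply. Here’s a minimal, hypothetical version — the names, the fifteen-minute window and the structure are my assumptions, not Glancing’s actual code:

```python
import time

# Hypothetical sketch: presence is nothing fancier than noticing whether
# each person's client has reported an action recently. No avatars, no
# realism -- just "signs of life" timestamps.

PRESENCE_WINDOW = 15 * 60  # seconds: how recently counts as "around" (my guess)

last_seen = {}  # user -> timestamp of their last detected action

def record_action(user, now=None):
    """Call whenever any action from a user's client is detected."""
    last_seen[user] = now if now is not None else time.time()

def who_is_present(now=None):
    """Everyone whose client has shown signs of life recently."""
    now = now if now is not None else time.time()
    return sorted(u for u, t in last_seen.items() if now - t < PRESENCE_WINDOW)

record_action("alice", now=1000.0)
record_action("bob", now=200.0)
print(who_is_present(now=1100.0))  # only alice acted recently enough
```

Anything the application can observe — launching it, opening a menu — can feed `record_action`; that’s the whole trick.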
2 – Interface needs to be close to unconscious, visible, and tentative
To make the interface to Glancing almost backgrounded, and to encourage perhaps unconscious use, there are a number of tricks we can use:
- it’s small, both physically, and in how much it stands out among other applications. It’s got a tiny icon and it operates in a very Mac-like way, sitting where these sorts of applications usually sit. Looking at the icon and opening the menu is a familiar gesture, so there’s a low cognitive overhead in looking at who’s online, made even lower by the fact you don’t actually choose to glance — it’s a side-effect of seeing the list of who’s in your group. And seeing that list is only a single click away from whatever you’re doing, because that menu is always available.
- it’s slow. The icons are deliberately very similar so that when the glancing activity changes it doesn’t immediately catch your attention. If it did, that might mean each person in the group would decide to reciprocate, and suddenly you’re all in an encounter situation you didn’t want. So the icons are different enough to tell you the activity level, but not different enough to be distracting. Given that people might not notice the level for a while, Glancing is a slow application. A glance persists for 2 hours — that is, two hours after you’ve done a glance, the eye will still be open a little bit.
- it’s ambiguous. These two contribute to the feeling that you don’t know whether people have deliberately opened the menu or not, or whether they’ve even noticed you’ve been sending glances. It brings in that ‘tentative’ aspect I was talking about earlier, and hopefully addresses that problem we saw in Montage. Something that adds to this is that you don’t glance at a specific person, you glance at the whole group. Just to restate this politeness thing: if you were sitting round a pub table with your mates, you wouldn’t just keep on looking at a single person — that’s a subactivity and frowned upon. Besides, everyone else would see you doing it and think you were weird. So to be polite you’d distribute your glances, your little strokes of recognition, around the entire table. What Glancing, the application, does by glancing at the entire group is assume in the first instance you want to be polite and just do that instead.
We can place ourselves in the middle of two long-term trends.
The first trend is the mixing of cyberspace and the real world, which has tides in two directions.
Coming from cyberspace we expect to be able to manipulate objects and automate that manipulation. That requires giving things handles and names. Coming into the physical world, we find it’s not like that: it’s a continuous world, we can’t get handles on it. So we end up creating handles for things: MP3s for music, GeoURL for locations, email addresses for people. Look at how much effort the social software community is spending talking about Identity, which is just moot, not important, when we socialise face to face. Not only do we create handles in the real world, but we get upset when we can’t make full use of them. Why we get upset about that I’ll come back to at the end because I think it’s important.
This isn’t unique to cyberspace. We do the same thing with scientific models, or any way of talking about the world. We externalise our mental models. This process is constructivist: we have to partition and name the world around us in order to interact with it. Now, I’m not saying this is new: it’s the industrial mindset (the conduit metaphor) — the ability to break down a process into discrete steps is
(a) what gave us the ability to make production lines, to commoditise goods, and to complete the second half of the industrial revolution. That was Fordism, early twentieth century
(b) and this is the same as being able to program. That is, to decide that you can represent a process using only numbers and simple manipulations. You break it up into performable steps.
The problem being that to name and identify things is contentious. In cyberspace we’ve limited the number of people who can name things (have ontic powers), who can bring things into existence by contributing to the naming. So we can’t all create webpages, create new email protocols, or whatever. I’ll come back to this, because I want to talk about the other direction of this tide.
Coming into cyberspace we’re bombarded with data. In the physical world we’re used to handling this with our senses, peripheral vision. So we demand to not just read the data about the stockmarket or our social network, but to convert it into a format where it can be gleaned, experienced.
This is what Mark Rantzer has called supersenses: new communication senses to understand the huge mass of information that confronts us.
The idea is that by compressing complex data and presenting it in a way that minimises cognitive overhead, we can have a kind of background awareness of otherwise difficult to understand qualities.
This is the idea behind the Ambient Orb, which glows different colours depending on different variables. So it could glow red if the stockmarket was falling. Once you’d gotten used to the device, you wouldn’t even notice it was there, it would just be sitting there quietly white or green in the corner of your eye. Then one day it glows red and suddenly you become really aware of it: you’re losing money!
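The Orb’s job is just a compression of one variable into a glow. A toy version of that mapping might look like this — the thresholds and colours are my guesses for illustration, not the Orb’s actual behaviour:

```python
# A toy version of the Ambient Orb's mapping: compress one variable
# (say, today's stockmarket change in percent) into a glow colour you
# can take in peripherally. Thresholds and colours are assumptions.

def orb_colour(market_change_percent):
    """Map a day's market movement to a background glow."""
    if market_change_percent <= -1.0:
        return "red"     # losing money: suddenly you notice the Orb
    if market_change_percent < 1.0:
        return "white"   # quiet, ignorable, corner-of-your-eye stuff
    return "green"       # gently good news

print(orb_colour(-2.5))  # red
print(orb_colour(0.2))   # white
```

Most days it sits in the “white” band, which is exactly what lets it fade into the background.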
What I really love about the Ambient Orb is that it takes advantage of its presence in a physical world to do things I’ve been complaining are hard online. Other people nearby can tell if you’re looking at it, there’s a visibility of use. You can catch it in your peripheral vision, take it for granted, and never really focus on it until you see it’s red.
I think what’s missing from it is an aspect of how we process complex data normally. It doesn’t have an aspect of “look closer”, you know: you don’t examine it harder to get a better representation of the stockmarket.
It’s the same with the Dangling String, which is a device that hangs in your peripheral vision, a piece of string hanging from the ceiling, and it jiggles about the more network traffic there is on your local network. It’s a terrific example of what Mark Weiser, the father of ubiquitous computing, calls “calm technology”. In fact, I think this kind of calm technology is the future of public computing in general. But let’s say it’s jiggling really badly one day and you want to see what’s going on — so you look really close, but what do you see? Just more string!
That ‘look closer’ bit is missing. What we’re finding with these new supersenses – the Ambient Orb, Dangling String and Montage – is that we can’t use our normal computer-world metaphors of objects-and-messages to approximate how human beings really work. How we actually use our senses, not just looking and hearing but our social senses too.
That is, before now we could think about the email and the email client as being separate things. We didn’t have to consider what it really means for one person to send an email to another person, not in the social sense. It’s all abstraction layers, after all. An email client receives an email: why should the program care who it’s from, whether it was expected or not? They’re orthogonal issues, surely?
Well, what we’re finding is that with small groups the abstraction layers break down. From a design perspective we can’t just think about discrete events; we have to enable, even garden, the dynamic processes of ongoing communication too. And that’s part of the second big trend.
The second big trend is the gradual improvement of our models for understanding dynamic processes.
A very brief history.
The computing world comes out of first-order cybernetics. This way of looking at the world came from the 1950s and was all about controlling systems with loops and feedback. From that came the idea of sending messages, of systems responding to messages and sending more messages out. If we could structure the world into objects and information, all in messages, all nicely abstracted, that’s all we’d need to do, we’d be sorted.
That’s the worldview that produced the computer chip, programming, and cyberspace. It’s all request and response, messages being sent between boxes.
We’re now confronting issues already identified by the more mature second-order cybernetics which arose in the 1970s, but it was pretty vague so not so influential. It’s all about human processes and instead of looking at individual objects and messages, talked about systems which self-created and changed. For this we need to allow fuzzier edges. There should be visibility of those messages being sent around so nearby objects can alter their behaviour and adapt. Systems should be able to complexify, simplify.
Now the reason this is so important, this second trend, is the constructivist nature of cyberspace I mentioned earlier. We use our mental models both to understand the world, and there’s feedback too: we use our mental models to create it.
If we understand the world through the lens of first-order cybernetics, that means we model the world in terms of people being objects sending messages to one another. That’s the world in which all we care about is that person A can send an email to person B.
On the other hand, if we understand the world in terms of dynamic processes, then we’re more interested in how people band together into small groups. We’re more interested in making email work better for sending to people you’re really close to. To help defuse arguments, help people save face.
And that’s the world we’re gradually moving into.
What Floridi points out is that cyberspace is still relatively simple. The actions of a single individual can disproportionately affect the composition or evolution of the society that exists online. What’s more, the composition of the environment quite directly affects the kinds of actions people can perform: the existence of the email protocol allows a new form of interpersonal communication.
This combination – of being powerful and having clear consequences – puts us in a similar situation to what’s happening in the real world with the environment. When humans became powerful enough to affect the environment on a global scale, a new kind of ethics emerged, one that gave value to things which might inadvertently be damaged: the atmosphere, rainforests, rocks. We give these things intrinsic value. Actually it happens even on a small scale. Geologists have a code too, where the rocks have an intrinsic worth: you don’t bore holes into them in obvious places, you don’t leave paint splashed around.
In the context of cyberspace, Floridi calls this cyberethics.
Information objects themselves, he says, have moral worth. The more able we are to manipulate and use an object, that is, the more handles it has, the more valuable it is, the more worthy it is. If you improve the information, you’re doing a good deed. That’s wiki gardening, the concept of idly improving a website just as you wander by. If you leave the object open to be used in as many ways as possible, to be more manipulable, you’re doing a good deed. Well, that’s the free software movement.
Floridi underpins, with a simple, graspable concept, what those of us who have lived with the internet feel instinctively is good and bad.
So from this perspective, concepts like adaptable design, and designing for hackability and unintended consequences aren’t just design rules of thumb, they’re aspects of how to be a good person and create a just society.
From Floridi’s environmental cyberethics, wiki gardening and free software are the cyberspace equivalents of respecting rainforests and biodiversity.