David Heller questions whether there can be affordance and convention in this digital world (via LukeW).

Dave talks about how we as children learn to mimic the behaviors of the people around us. We need only to look at a phone to know how to use it. Children see others talking on the phone, they hear voices through the phone, they have toy phones, and most (at least in the so-called developed world) know how to converse on a phone by the time they are three years old. It is only intuitive after demonstration. It is learned socially, through what Dave calls “passive exhibition.”

It reminds me of Vygotsky’s theories of social learning. In his book “Mind in Society,” he notes that “human learning presupposes a specific social nature and a process by which children grow into the intellectual life around them.”

The example of the phone is interesting, since it is learned by both “passive exhibition” and by explicit instruction. Initiating a phone call requires quite a bit of teaching and some advanced intellectual concepts (a series of numbers associated with a person and a place, a series of steps to make the connection). In addition, the effects of the phone conversation are not intuitive or easily observed. I remember my son at three years old talking on the phone and showing off his new toys without realizing that the person on the other end couldn’t actually see him. When everyone else who talks to you can also see you, why should the person on the phone be any different?

David Heller argues that in designing computer software, we cannot take advantage of this social learning. “The computer, the PC more specifically, does not have an analog to that experience. The PC is primarily used as user in front of screen an[d] keyboard. It is an isolating device that you seldom [ever] see others passively exhibit.”

While it’s true that the occasions to learn how to use software through social observation are limited, I have noticed that people learn this way anyhow. How many times have you worked on a project with someone and picked up tips on using software from them? Only the geekiest amongst us learn from the docs; the rest learn from lore that is passed from colleague to colleague, whether verbally in the office or via the communal written work of blogs and forum postings. I believe that we as humans tend to seek a social learning situation even when the medium isolates us. This is why many writers have writing groups: not where they write together, but where they learn about writing through reading and talking.

“As IT systems finally make their way into the non-info service sector such as health care practices, blue collar e-learning, etc., we are starting to really see that our notions of ‘conventions,’ even the most basic, are just totally bogus. People can’t even use a mouse, let alone know that a blue underlined piece of text means something that will show me something else.”

Here are a few basic GUI conventions:
* objects that are clickable provide feedback when you roll the mouse over them
* if you see a blinking vertical bar you can type text there
* blank bordered rectangles, often with labels, will show that blinking vertical bar if you click on them or tab to them

None of these conventions have anything to do with the real world, and I would guess that the vast majority of people who use computers learned these conventions from another human being. While it’s true that these conventions are not intuitive, they are nonetheless real conventions that persist across almost all software today.

Most people learn those basic lessons in their first experience using a computer, and all the rest of our sophisticated user interface elements are built on that shared knowledge. As people move between applications, user interface elements that are shared between them are established as conventions. Objects such as scrollbars, radio buttons, and menus become familiar, if not intuitive.
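To make those three conventions concrete, here is a minimal sketch of how they surface in code, assuming a plain browser DOM; the element ids and class name are made up for illustration:

```typescript
// A minimal sketch of these GUI conventions in a browser DOM.
// The ids ("more-info", "name") and class name ("hot") are hypothetical.
const link = document.getElementById("more-info"); // a clickable object
const field = document.getElementById("name") as HTMLInputElement | null;

if (link && field) {
  // Convention 1: clickable objects give feedback when you roll over them.
  link.addEventListener("mouseover", () => link.classList.add("hot"));
  link.addEventListener("mouseout", () => link.classList.remove("hot"));

  // Conventions 2 and 3: a labeled, bordered rectangle takes the blinking
  // insertion point when you click it or tab to it. The browser handles
  // clicking and tabbing for free; focus() simply moves the caret there
  // programmatically, to the same place it would blink after a click.
  link.addEventListener("click", () => field.focus());
}
```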

While I would argue that current software UI conventions are valid, I completely agree with Dave that what we’ve got now is insufficient to make software and web sites approachable for the general population. I also agree that we can do better, that it is possible to create new UIs that feel familiar and make sense to folks who are not members of the digital elite.

2 thoughts on “GUI conventions”

  1. My grandparents are in their mid-80s and have a computer. My grandfather uses it the most, doing basic word processing and email. And despite the fact that he was quite the technologist in his younger years, using “the machine” is always immensely frustrating for him. I believe it’s because most of the metaphors have never sunk in.

    He gets the “click the blue underlined text” thing, and maybe even the “the mouse cursor changes” thing… but even the simple metaphors that go along with word processing, drawing, etc. have never clicked. So every time he’s at the computer, he’s relearning most everything from scratch. I’d have pitched the thing out the window already.

  2. I mostly agree, Sarah. Unfortunately, most mortals (not the ones you or I would generally run into at work or in our field) don’t know HTML, don’t know how to make web sites, and even if they can pick up a WYSIWYG editor, don’t generally understand the correlation between what they are doing, the resulting HTML, and many problems, including layout and compatibility differences across browsers.

    What Apple was doing with Newton when we met was right on. I kinda got distracted by that stuff, because it was so much better than most of what we have today. Yes, Laszlo is neat, but it’s actually much harder for “mere mortals” to stomach XML (I consider myself one of those for the purpose of this case) instead of the JavaScript syntax.

    An architecture like Newton’s, which is not so different in theory and many concepts from LZX, is better for UI design because you no longer have to do “hard, mostly static work” to implement UI controls. You simply, à la HyperCard, override certain methods/hooks/handlers to implement functionality. You can rapidly and iteratively build, try, and inspect your creation “from within” instead of jumping contexts back and forth between an IDE/development environment and what you’re working on, and UI programming no longer becomes an arduous task that you “have to debug the same way you debug your other stuff.”
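    Roughly, the shape of it looks something like this. This is just a made-up sketch, not real NewtonScript or LZX; the class and handler names are hypothetical:

    ```typescript
    // A made-up sketch of the handler-override style. The framework supplies
    // a finished control; you only fill in the hooks you care about.
    class Button {
      constructor(public label: string) {}

      // The framework calls this when the user clicks; the default does nothing.
      onClick(): void {}

      // Drawing, hit-testing, focus, etc. would already live here, so
      // "implementing a button" means overriding one method.
      click(): void {
        this.onClick();
      }
    }

    // Override just the behavior, a la HyperCard handlers.
    const saveButton = new (class extends Button {
      onClick(): void {
        console.log(`"${this.label}" pressed; saving...`);
      }
    })("Save");

    saveButton.click(); // simulate a user click
    ```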

    Instead, you provide an environment where rapid iteration and experimentation are very simple. This widens your audience and user base and overcomes many if not most of the problems inherent in web design and development.

    What do you think?

    Happy New Year. I really like your articles, and try to check in frequently to read what you have to say.

    Steve
