# User interfaces

I have just bought a new food processor, and I am in awe of its user interface. On the front there are two HUGE buttons, one marked ON, one marked OFF.

That’s it!

If I want to watch TV, however, I need to use two of these four remotes, depending on what exactly I am watching.

And that’s only because I have lost the fifth remote, the one for the sound system. The four surviving remotes have a total of 174 keys (45 + 47 + 44 + 38)…

I can easily reconstruct from memory Håstad’s proof that Max 3SAT is hard to approximate within $\frac{7}{8} - \epsilon$, but every time the sound system gets disconnected from the power, I struggle to remember the particular sequence and duration of key presses needed to set the time. Now it just shows noon; it’s not worth the effort. And I have seen more than one computer science Ph.D. (not theoreticians!) turn the TV off while trying to turn the cable box on (if you get cable from Comcast, you know what I am talking about).

What bothers me is that these interfaces are designed by people whose job is to design them. There must be a person who decides how the time is set, and a person who decides what keys you press to turn the TV on or the cable box on, or how many keys to put on the remote and how big they should be, and so on. Why would they do something that is so obviously wrong? “Let’s see, first the user has to press ‘clock’ for three seconds, then the time will start flashing; at that point he first presses ‘FM’, then ‘volume up’ and then …”

What made Google so successful was certainly the math, and the fact that it worked and that it was the first commercial search engine to return relevant answers instead of random ones. But having such a clean and pleasant design, at a time when the notion of a “portal” was popular, also played a role, and the design has been widely copied since. And usability and design are probably the main reasons why the iPod has become so popular.

There is an unfortunate tendency among computer scientists, not just among theoreticians, to look down on HCI work. We do so at our own risk, including the risk that a disgruntled designer, with an evil smirk, thinks to himself, “Let’s see who is laughing when you have to press the ‘aux’ key with the ‘seek’ key until it beeps, and then press the ‘grft’ key while…”

## 10 thoughts on “User interfaces”

1. I have found that, at least for me, there exist two kinds of good user interfaces:

– that of personal computers
– very simple ones (like your food processor, but also, say, digital cameras)

Learning the interface of a PC takes quite a while, and it is good if you can use the same interface when you are struggling with some other piece of electronics.

Programming, say, a VHS recorder is not quite as complex as using a computer, but it still takes extra time to learn, and then you forget it quickly if you don’t do it routinely.

Even in the case of digital cameras I would feel more comfortable if I could use a full PC keyboard. This, of course, would make it nearly impossible to carry your camera around.

That said, there is at least one exception for me: I am also used to Nokia mobile phones and their UI. (I am using an E61 nowadays.) Switching to some other manufacturer would require too much learning, be it iPhone or Prada.

2. My food processor does not have any buttons at all! When you screw the lid on and it is safe to start up, screwing it on starts the machine automatically.

As for HCI folks, some of the work they do is very cool, as neat as (gasp!) math theorems. But it rubs some people the wrong way that they choose to include it in computer *science* when it is not science; it is more of an engineering/design field, as is computer architecture to some extent.

3. All you say is true, but in my view it is not because we under-appreciate HCI and UI design. It is just a case of capitalism gone berserk.

Nowadays features are far too easy to implement in software (and even in hardware). Most corporations still believe that more is better, so if your cell phone can come with 97 ringtones, Tetris, satellite TV, and a built-in corkscrew at no cost, then why not? Sure, to make a phone call now you have to hit 17 buttons, but getting rid of a “feature” that a competitor has is a big no-no.

4. The comment of ‘Samuel Jackson’ gets to the point I was trying to make: there is a risk in the reasoning that computer science is a science, design is not a science, hence design is not computer science.

The risk is that the whole community will be alienated from computer science and will find a home somewhere else (or become completely insular), to everybody’s loss. (I much prefer a ‘the more the merrier’ approach.)

Another negative consequence of this attitude is that HCI people could feel pressure to make their work artificially more math-y and science-y, resulting in terrible work that will only reinforce the “this is not a science” attitude, leading to a vicious cycle. It is entirely possible that there will eventually be a “science of design,” with surprising connections to math and possibly to theory, but such things cannot be willed into existence; they have to happen in their own time. (And such a development will be more difficult if the HCI community is not organically part of the computer science community.)

5. #3: D’oh! I can remember the proof but not the statement… There must be something wrong with the design of the definition of approximation.

6. Regarding your remote control issues:

I recently purchased an activity-based universal remote (a Harman Kardon TC 30, a rebranded version of the Logitech Harmony 880). You can program the remote using a web-based interface to respond to ‘activities.’

If you want to turn on your comcast box, you can simply hit ‘turn on pvr’ and it will