
Why is designing a good user interface so hard? Why do so many pieces of software on our mobile devices, desktops and gaming machines leave us lost, frustrated and railing at their arcane and ugly buttons, choices and placements?

There are four reasons, I think.

Constraints are good

As any artist or designer knows, real creativity emerges when confronted by constraint. Architects, for example, have a big one: gravity. Overcoming or working with the undeniable property of mass has led to arches, flying buttresses, girders and Doric columns. Artists working in oil paint have to wrestle with viscosity, watercolourists with bleed and spread. Appliance designers weigh the cost of materials and cope with the range of hand and body sizes. Constraint is creativity cocaine.

But in the digital world of interface design on high-powered devices with incredibly high-resolution screens, constraints fade into the background.

Whatever gravity or physical laws your interface elements obey are established by the device’s operating system or through a physics engine you design yourself. So things like viscosity or spread are settings, not constraints. In fact, the only real limits are the actual screen real estate and touch points that are big and well-spaced enough for human fingers to hit consistently.

In short, screen interfaces these days can act and look pretty much any way a programmer wants them to. That freedom has often had the effect of free beer at a cockfight. This is especially true because even today interface design is often dominated by software engineers who not only like complexity (or love showing off what their physics engine can do), but often have the aesthetic sense of a depressed Klingon. It has taken years for Android to get out from under the thrall of bad design, and Samsung still slathers ugly skins on the improving OS. Constraint would do them good.

Humans are flawed

If our brains had to cope with the unfiltered firehose of reality flowing in from our senses we would go insane — literally. We lack the bandwidth for the unadulterated world. So, we selectively see, hear and categorize. We use paradigms, we zone out, we focus our attention on motion at the periphery and ignore the unchanging. We manufacture a simplified reality on the fly. And, we don’t pay attention. We are one place in our body, another in our mind — that meeting coming up, the fight we just had. We are distracted drivers on the highway of living.

So, designing interfaces for flawed and filtering senses is a game of pattern recognition and best guesses. It’s easy to get the labels, colours and placement we use wrong, or wrong for an audience expecting some other pattern, paradigm or choice. Much of interface design is about delivering the familiar wrapped in the enigma of the new. You get the balance of the fresh and the threadbare wrong, and you instill confusion.

Great graphic design and affordance can be at odds

Affordance is the way an object telegraphs its story to a user. A chair with scooped bum prints in it suggests sitting. The curved bottom edge of the first iPad invited users to pick it up. A convex button on a screen suggests pressing. But, recently, as OS interfaces from Microsoft, Google and Apple have gotten flatter and more elegantly sparse, it's become harder for newcomers to know what to press, and when. In older versions of those interfaces, buttons looked like buttons. Now the same functional element can be a simple word on a flat background, with no affordance to declare its purpose. Naive users are left guessing, while veterans welcome the graphic purity. The net effect of newer OS designs like Metro, iOS 8 or Material is an increase in purity of design at the cost of intuitiveness.

We’re not done with user interaction yet

As gadgets like virtual reality glasses and smartwatches show, we're just starting to explore how we interact with our devices, and how they can tell us what we need to know. Just when we thought apps had supplanted web pages, we now see that notifications and near-autonomous actions can usurp apps. Google Now on an Android smartwatch is a great example of this. The Google Now functionality of a smartphone is secondary to the just-in-time notifications the watch can display without being asked. The Uber app on the Apple Watch promises to reduce the whole experience to a single button press. The button is pressed on the watch, an Uber car is ordered, and autonomous alerts tell you of its proximity and arrival. Payment takes place in the background. The experience could just as easily be triggered by a voice command, "Order an Uber." In those cases, the interface fades and all that remains is intelligent notification and gesture. In the near future, where and when we are may be the affordance our apps need to take their own actions.

It’s sobering to remember that it took 50 years for the basic user interface elements of a book to emerge after the Gutenberg press. And that was for a physical object with dozens of constraints. In a world where our devices are morphing and shrinking and constraints are a quaint memory, it’s no surprise that the task of creating the liminal surface between the real and the functionally virtual is so impossibly complex.

Listen to an audio version of this column, read by the author, here.

Wayne MacPhail has been a print and online journalist for 25 years, and is a long-time writer for rabble.ca on technology and the Internet.

Photo: Mia Kos/flickr
