In the late 80s and early 90s, desktop software was just starting to grow up, and interfaces were, frankly, a mess. Until IBM published its "Common User Access" guidelines and software writers started to take them seriously, users had to learn to navigate every application from scratch. As the CUA page on Wikipedia notes, even contemporary word processing packages had completely different keystroke combinations to achieve identical basic tasks.
We grew up, thankfully, and interfaces started to consolidate around basic good ideas. Human-computer interaction became a science; design and experience guidelines were published; and the world heaved a collective sigh of relief. Today, for instance, anything other than the familiar keys for copy, paste, open, new, and so on feels downright weird.
Until the Web came along, and we threw it all away. Early Web pages were dreadfully ugly, but ugliness is not the issue. I'm talking about behaviour, because consistency in interface behaviour played a big part in bringing computing within the reach of non-technical users. If we want Web apps and cloud computing to keep expanding to new users, we need to take a long, hard look at our approach to interfaces.
Arguably, the free-for-all isn't a bad thing: beautiful and intuitive products will thrive, and ugly ones will not. But even if that's true, it suggests that over time we'll trend towards consistency around the UX components that work. Trend faster, is what I'm saying.
Clutter and chaos
Just look at the mess we have today. Keystrokes and navigation? Ick. Sometimes a keystroke is captured by the browser, sometimes by the Web page, sometimes not at all. Back buttons sometimes go back a page, sometimes back a whole site, sometimes nowhere, and sometimes the Ajax just breaks entirely.
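To make the keystroke problem concrete, here's a minimal sketch in TypeScript of how any page can grab a shortcut before the browser ever sees it - which is exactly why the same key combination does different things on different sites. The saveDocumentDraft handler is hypothetical, standing in for whatever "save" means to a given Web app.

```typescript
// Sketch: a page intercepting Ctrl+S / Cmd+S for its own purposes.
// On a page without a listener like this, the browser's own "Save page"
// behaviour runs instead - hence the inconsistency between sites.
document.addEventListener("keydown", (event: KeyboardEvent) => {
  const isSave = (event.ctrlKey || event.metaKey) && event.key === "s";
  if (isSave) {
    event.preventDefault();   // stop the browser's default action
    saveDocumentDraft();      // hypothetical app-specific save handler
  }
});

function saveDocumentDraft(): void {
  // Placeholder: whatever "save" means to this particular Web app.
  console.log("Draft saved by the page, not the browser.");
}
```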
And take menus. Menu consistency was something the 90s introduced - the idea that menus should look and feel roughly the same within an application and between applications. In Windows, that gave us the familiar File menu in the top left, and Help in the top right. Other menu metaphors appeared and became the norm, like the idea of greying out items rather than removing them. Microsoft largely abandoned that thinking with the controversial Ribbon in Office 2007, opting for a more fluid approach, but it does at least aim for consistency in context.
Online, not so much. While Web browsers had traditional menu layouts for a while, we chucked them away when it became obvious that most user interaction wasn't actually with the browser at all, but with the Web page. And if you want a word to describe menus in Web pages, "consistency" would probably not be at the top of your list.
Everyone does menus differently. Every site has its own layout and styling, its own approach to click versus hover, and JavaScript that may work differently (or not at all) depending on the browser or device you're using... there really is nothing standard about it. Even placement can be random within a controlled ecosystem - Google rearranges its menu items at the top of the screen, selecting sites to promote versus ones to demote into the "more" dropdown, but the selection is apparently driven by some super-advanced Google algorithm which manages to carefully avoid the sites I actually use. It also differs from one account to the next, which means I generally just give up and turn off the menu entirely to reclaim a little vertical real estate (via Stylebot).
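For the curious, the Stylebot trick amounts to injecting a little user CSS to hide the bar. The sketch below does the same thing from a user script, written in TypeScript; the selector is hypothetical, since the real one depends on Google's current markup and tends to change without notice.

```typescript
// Sketch of the Stylebot-style fix: inject user CSS that hides the top menu bar.
// The selector is hypothetical and will need adjusting to the live markup.
const HIDE_TOP_BAR_CSS = `
  #gb-top-menu-bar { display: none !important; } /* hypothetical selector */
`;

const style = document.createElement("style");
style.textContent = HIDE_TOP_BAR_CSS;
document.head.appendChild(style);
```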
Desktop applications also taught us that menus are most useful when pinned to the top or side of an application. No matter how far you've scrolled down a spreadsheet, the menu is still right there at the top of the screen. Web applications forgot that, letting menus sometimes scroll off the page, sometimes not, and sometimes both - Facebook (as I write this - it could well change tomorrow!) pins the activity stream on the right of the page, but lets the (arguably more useful) left-hand category menu scroll off the page.
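If a Web app wants that desktop behaviour back, the fix is genuinely small these days. A minimal sketch in TypeScript, assuming a menu element with a hypothetical id of "app-menu":

```typescript
// Sketch: pin a Web app's menu to the top of the viewport so it survives
// scrolling, the way desktop menu bars always have.
const menu = document.getElementById("app-menu"); // hypothetical id
if (menu) {
  menu.style.position = "sticky";
  menu.style.top = "0";
}
```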
All over the place
Gmail solves the dilemma of which approach to use by embracing all of them. Some menus are pinned, some aren't. Some click to expand, some hover. Some items are in multiple menus. Some menus are embedded midway down message threads... Gmail's CSS is living proof that an infinite number of monkeys on terminals will eventually code a Webmail interface. Contrast this feature creep (interface creep?) with the simplicity of the product at launch. What happened, Google?
Things are improving, though, and tablet computing has helped a lot: Apple and Google both publish firm design guidelines covering both appearance and behaviour, recognising that UX really does matter when it comes to wooing, and retaining, users. So while we still have some oddities in how apps handle interaction, menus, actions and so on, the overall user experience is greatly simplified and far more consistent - strongly so on Apple's iOS devices - and that consistency has been a major contributing factor to the iPhone and iPad's success.
That new consistency of behaviour is starting to trickle back into Web apps now, since many are designed with mobile devices in mind, or are outright ports from mobile to Web. Trello, for example, has quite a different Web interface from its mobile apps, but the navigation metaphors are consistent between them, and the learning curve (from "ooh, nifty" to actual productivity) is short and sweet.
And that is the point. I don't want every Web app to look the same. But I do want them to work consistently, to streamline the path to productivity. And of course it's hard - a desktop UI has relatively few weird environment vagaries to deal with, whereas a Web browser has hundreds. That's a hard problem, for sure. But we're computer scientists - solving hard problems is why we exist, and Web UX is just one hard problem among many. So get solving.