‘Thin’ user interfaces: techno-centric rather than human-centric design

In response to my recent blog post on TDD, several people suggested that some of the problems I was having arose because my system had a tightly coupled UI and I was trying to test through that UI. They suggested that TDD is more effective when the UI is made as ‘thin’ as possible. The problems of testing GUIs using TDD are well known, so the general idea is to build systems where a simple UI interacts with the underlying system through an API. Most of the testing required is then API testing rather than interface testing.
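As a minimal sketch of the suggested architecture (the names and prices here are invented purely for illustration), the application logic sits behind a plain API that tests drive directly, with the UI reduced to calls on that API:

```typescript
// Sketch: the application logic sits behind a plain API that unit tests can
// drive directly; the UI would be a thin layer of calls on this API.
interface OrderApi {
  addItem(orderId: string, sku: string, quantity: number): void;
  total(orderId: string): number;
}

class InMemoryOrderApi implements OrderApi {
  private items = new Map<string, { sku: string; quantity: number }[]>();

  addItem(orderId: string, sku: string, quantity: number): void {
    const list = this.items.get(orderId) ?? [];
    list.push({ sku, quantity });
    this.items.set(orderId, list);
  }

  total(orderId: string): number {
    // A flat unit price of 10 is assumed purely for illustration.
    return (this.items.get(orderId) ?? []).reduce(
      (sum, item) => sum + item.quantity * 10, 0);
  }
}

// A TDD-style test exercises the API, never the form that renders it.
const api = new InMemoryOrderApi();
api.addItem("order-1", "ABC", 3);
console.assert(api.total("order-1") === 30, "total should be 30");
```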

I’d like to argue here that this notion only applies to one class of system and that, even when it is possible to structure systems in this way, doing so means we cannot use the affordances of the digital medium to provide a more effective user experience.

If we step back and look at what a user interface is, it is essentially a way of rendering and interacting with an underlying object model (data and operations). If you have a loosely coupled object model, where interactions with one object have minimal effects on other objects, a small number of operations and a sequential interaction model, then you can develop a ‘thin’ interface. Object data is presented in forms, operations as buttons or menus, and the sequence of objects presented depends on the business process supported. Quite a lot of business processes can be represented in this way (they started out with paper forms, after all), so it’s not too hard to build web-based interfaces for them. These interfaces are rarely aesthetically satisfying and often pretty user-unfriendly, especially for occasional users, but they are sort-of tolerable and people put up with them because they have no alternative.
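As a rough sketch of how little such an interface involves (the names and fields are invented for illustration), a ‘thin’ form-based UI can be generated almost mechanically from the object model’s fields and operations:

```typescript
// Sketch of a 'thin' interface: the form is generated almost mechanically
// from the object model, so there is very little UI code of its own to test.
interface FieldSpec { name: string; label: string; }
interface ObjectModel {
  fields: FieldSpec[];
  operations: Record<string, (values: Record<string, string>) => void>;
}

// Object data becomes form fields; operations become buttons (shown as text).
function renderAsForm(model: ObjectModel): string {
  const fields = model.fields.map(f => `${f.label}: [ ${f.name} ]`).join("\n");
  const buttons = Object.keys(model.operations).map(op => `( ${op} )`).join(" ");
  return `${fields}\n${buttons}`;
}

const customer: ObjectModel = {
  fields: [
    { name: "name", label: "Name" },
    { name: "email", label: "Email" },
  ],
  operations: { save: values => console.log("saving", values) },
};

console.log(renderAsForm(customer));
```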

However, if we want to render object models in richer forms, such as an image, and present users with richer, perhaps more natural, forms of interaction, then the idea of a ‘thin’ interface has to be discarded. Take a simple document editor as an example. You can render the document using characters in a single font, with markup used to show how it should be displayed. You can use an editor like emacs to access this and memorise a range of keyboard commands to do so. On the other hand, you can render the document as an image that shows how it will look (fonts, colours, spacing, etc.) and allow the user to manipulate the document directly through that image – adding text, changing styles and so on. This needs an awful lot more UI code, and you need to find a way of testing that code.
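A minimal sketch of the contrast (invented types, greatly simplified): the same document model can be shown as markup, which is close to a direct dump of the model, or as styled output, where the UI has to make presentation decisions of its own:

```typescript
// Sketch: the same document model rendered in two ways. The markup view is
// close to a direct dump of the model; the styled view needs presentation
// decisions (fonts, weights, spacing) that belong to UI code, not the model.
interface Span { text: string; bold?: boolean; }
type Doc = Span[];

// 'Thin' rendering: show markup and let the user edit the characters directly.
function renderAsMarkup(doc: Doc): string {
  return doc.map(s => (s.bold ? `*${s.text}*` : s.text)).join("");
}

// Richer rendering: decide how each span should look on screen. Real WYSIWYG
// code would also compute positions, line breaks, hit-testing, and so on.
function renderAsStyledRuns(doc: Doc): { text: string; fontWeight: string }[] {
  return doc.map(s => ({ text: s.text, fontWeight: s.bold ? "bold" : "normal" }));
}

const sample: Doc = [{ text: "Hello, " }, { text: "world", bold: true }];
console.log(renderAsMarkup(sample));     // Hello, *world*
console.log(renderAsStyledRuns(sample)); // styled runs for a graphical view
```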

For some applications, such as a visualisation application, the base application object model may be pretty simple and easy to test (using TDD or any other approach). The majority of the application code is in the UI, and if TDD is difficult to use there, then it really isn’t surprising that TDD isn’t that effective for this type of application.

If we want to make our systems human-centric then we need more natural models of interaction with them, and it may make sense for interfaces to adapt automatically to user characteristics and actions. Our ‘thin’ interfaces are still mostly based on form-filling and menu selection because, to make them ‘thin’, there has to be very little between the user and the system object model. It’s a complete failure of imagination on the part of system designers to think that we can’t do better than this. Thin interfaces may make TDD easier, but it is really quite disappointing that we are still building interfaces that, frankly, haven’t really moved on from those presented on 1980s VT100 terminals.

2 thoughts on “‘Thin’ user interfaces: techno-centric rather than human-centric design”

  • March 22, 2016 at 6:27 pm

    It’s true that a word processor (for example) has a much richer user interface than a form. It’s also true that there is a lot more UI code in a word processor than in a simple form. However, that UI code does not need to be “thick” from the point of view of testability. Indeed, it can be as thin as any simple form.

    The trick here is to make sure that all the complex formatting code is _not_ part of the UI. Are we justifying a paragraph? Well, that’s just math. No need for UI code until the math is done and the characters are all laid out. Are we offering intelli-sense for misspelled words or poor grammar? Well, that’s just string lookups and language parsing; no need for UI code until all the decisions and determinations are made.
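    A minimal sketch of the idea described here (invented code, not taken from the comment): line breaking written as a pure function that can be unit tested with no UI involved at all:

    ```typescript
    // Sketch: greedy line breaking as a pure function. Given words and a line
    // width (in characters), it returns the laid-out lines; no UI code is
    // involved, so the decision can be unit tested directly.
    function breakIntoLines(words: string[], maxWidth: number): string[] {
      const lines: string[] = [];
      let current = "";
      for (const word of words) {
        if (current === "") {
          current = word;
        } else if (current.length + 1 + word.length <= maxWidth) {
          current += " " + word;
        } else {
          lines.push(current);
          current = word;
        }
      }
      if (current !== "") lines.push(current);
      return lines;
    }

    // A TDD-style check: the layout decision is tested without any rendering.
    const lines = breakIntoLines(["the", "quick", "brown", "fox"], 9);
    console.assert(lines.length === 2 && lines[0] === "the quick", "unexpected layout");
    ```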

    Yes, there will be many gestures that the UI has to contend with, and that means a great deal of UI code. But each gesture can be thin. So thin that the only meaningful tests are eyeball tests. And so the UI, though large, can still be thin. So thin that all significant decisions and processing can be tested with unit tests, and therefore driven by TDD.
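    And a corresponding sketch (again invented) of a ‘thin’ gesture handler that does nothing but translate a gesture into a call on testable application code:

    ```typescript
    // Sketch: the UI-facing handler is little more than a translation from a
    // gesture to an API call; every real decision lives in testable code.
    interface EditorCore {
      insertText(position: number, text: string): void;
    }

    function onKeyPress(core: EditorCore, caret: number, key: string): void {
      // Thin: nothing here to test beyond an 'eyeball test'.
      core.insertText(caret, key);
    }

    // Example usage with a stand-in core implementation.
    const core: EditorCore = {
      insertText: (pos, text) => console.log(`insert "${text}" at ${pos}`),
    };
    onKeyPress(core, 0, "a");
    ```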

    • March 23, 2016 at 10:08 pm

      Actually, things like justifying a paragraph aren’t just math – unless you are using a fixed-width font such as Courier. You need to know about UI things such as kerning, which are really quite separate from the application.
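      A small illustration of this point (the metrics below are made up): once the font is proportional, the width of a line depends on per-glyph widths and kerning adjustments that come from the rendering side, so the layout ‘math’ cannot be done without that information:

      ```typescript
      // Sketch: measuring a line in a proportional font. The widths and kerning
      // adjustments below are invented values standing in for real font metrics.
      const glyphWidths: Record<string, number> = { A: 7, V: 7, o: 6, " ": 3 };
      const kerningPairs: Record<string, number> = { AV: -2, Vo: -1 };

      function lineWidth(text: string): number {
        let width = 0;
        for (let i = 0; i < text.length; i++) {
          width += glyphWidths[text[i]] ?? 6;          // default glyph width
          width += kerningPairs[text.slice(i, i + 2)] ?? 0; // pair adjustment
        }
        return width;
      }

      // With a fixed-width (Courier-like) font every character counts the same;
      // with real metrics, "AVo" is narrower than the character count suggests.
      console.log(lineWidth("AVo")); // 7 + 7 + 6 - 2 - 1 = 17, not 3 * 7 = 21
      ```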
