Sculpting water: How to design a hyper-personalised digital experience

Artificial Intelligence (AI) is now the biggest buzzword in digital. And though it is unlikely to put everyone out of a job (or turn us into slaves), it is bound to radically transform our digital experience.

When they hear 'AI', people tend to think of some futuristic HAL or Skynet. But AI is already here, and it looks more like Spotify and Pinterest recommendations. Anthropomorphic 'intelligences' (such as Google's reservation-making robot) might be taking the limelight. Personalisation algorithms, however, are the applications that will have the most impact in the short term.

Traditionally, digital platforms resembled the systems and tools of the material world. Microsoft's Encarta was a finished, static product, like paper-based encyclopaedias. That was also the case with the first GPS systems: like maps, they didn't take notice of road works and live traffic. The Internet freed these platforms from the tyranny of permanence. Unlike their predecessors, Wikipedia and Waze could change and evolve constantly to mirror human knowledge and the condition of the urban landscape.

The shape of these systems, however, remained relatively static. Yes, they became responsive to fit the myriad screens used to access them, but their fundamentals stayed the same. Users needed to learn their interfaces and navigate through them to find what they were looking for.

Machine learning is the latest form of disruption in our digital interfaces. Intelligent algorithms are always getting better at mapping users and their behaviour, and responding to them. Rather than having the user learn the machine, the machine 'learns' the human.

This is now ubiquitous, although limited. The likes of Facebook, Amazon and Netflix present content driven by algorithms. All these platforms collect data about our most minute interactions and use it to try to predict what we will respond to best. We say 'limited' because this personalisation applies only to the content.

[Illustration: two buildings with the same windows]

It is as if we were looking at a building across the street. Every one of us, based on our past behaviour, would see different scenes playing in each window. But the building itself would be the same.

The possibilities of personalisation, however, go well beyond what content we are presented with. An infinitely flexible platform could change how and when it presents content. It could change the medium (text to audio, for example). And it could respond not only to previous user behaviour or stated preferences, but also to changes in the environment or situation. 

[Illustration: buildings with different structures]

In this scenario, the machine seems to be the designer when, in fact, it is only iterating according to its algorithms: it is simply assembling. The underlying system, the components that the machine assembles, still has to be created and designed. And it has to be created and designed by humans.

Machines, by definition, lack empathy. And, as we know very well, user-centred design requires understanding and empathising with the user. Machines might very soon be able to fool humans into thinking they're human every time. However, if we conceive of these intelligent machines as humane, we are only fooling ourselves.

But how do we design for an infinitely flexible system? What role do traditional, inflexible UX and UI deliverables (from user journeys to static screen mock-ups) play in this process? As we anticipated in our previous article, this is a key challenge when thinking about digital. In this article, we want to build a conceptual framework to answer this question.

Atomic Design to the rescue

If the platform is to be assembled, there must be some basic units of assembly. Thinking about the natural world, the classical Greek philosophers Leucippus and his pupil Democritus coined the term atomos to describe the essential units of matter. Much more recently, US web designer Brad Frost applied the concept of the atom to design thinking.

Frost describes design atoms as "our HTML tags, such as a form label, an input or a button".

 

[Image: the Periodic Table of the HTML5 Elements. Image credit: ryanhayes.net]

As in nature, these atoms aren't very useful in themselves and, in fact, are very rarely seen in isolation. Rather, they come together to create molecules: a label, an input and a button, for example, combining to make up a form.

Molecules also rarely exist by themselves. Instead, they give rise to organisms: complex, distinct sections of an interface. Collections of organisms are then assembled to form (departing from the metaphor) templates and, once content is introduced, pages. 
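To make this hierarchy concrete, here is a minimal sketch in TypeScript. Every name in it (Label, Input, Button, SearchForm, Header) is hypothetical, and a real system would use a component framework rather than raw strings; only the way atoms combine into molecules and organisms matters here.

```typescript
// A minimal sketch of the Atomic Design hierarchy.
// All component names are illustrative, not part of any real library.

// Atoms: the smallest building blocks, mirroring basic HTML tags.
const Label = (text: string) => `<label>${text}</label>`;
const Input = (placeholder: string) => `<input placeholder="${placeholder}">`;
const Button = (caption: string) => `<button>${caption}</button>`;

// A molecule: atoms combined into a simple, reusable unit.
const SearchForm = () =>
  `<form>${Label("Search")}${Input("Type a keyword")}${Button("Go")}</form>`;

// An organism: a complex, distinct section of the interface.
const Header = () => `<header><h1>Our site</h1>${SearchForm()}</header>`;

// A template becomes a page once real content is introduced.
console.log(Header());
```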

Atomic Design was a response to two problems faced by designers. The first was the problem of responsiveness: the advent of the mobile web meant that designs had to adapt to a near-infinite number of screen sizes. By conceiving of each element as an independent part within a system, designers (and developers) could establish rules about its shape and position in relation to the other elements at different screen sizes.
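Captured as data, such rules might look like the following sketch. The breakpoint names and properties are our own invention, not any particular framework's; the point is that each element declares its shape and position relative to the other elements at each screen size.

```typescript
// Hypothetical sketch: an element carrying its own layout rules
// for different screen sizes, independent of any specific page.

type Breakpoint = "phone" | "tablet" | "desktop";

interface LayoutRule {
  columns: number;     // grid columns the element spans (out of 12)
  stackBelow?: string; // element this one stacks under when space is tight
}

// Rules for an illustrative "sidebar" element.
const sidebarRules: Record<Breakpoint, LayoutRule> = {
  phone:   { columns: 12, stackBelow: "main" }, // full width, below the main content
  tablet:  { columns: 4 },
  desktop: { columns: 3 },
};

console.log(sidebarRules.phone); // { columns: 12, stackBelow: "main" }
```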

The other problem had to do with a different kind of scalability. Unlike traditional desktop software, websites and native apps had bespoke user interfaces that responded to their branding rather than to the dictates of an operating system. While this was sustainable in the early days of the web and of smartphones, it quickly became a problem. On the one hand, users expected a seamless experience across versions of the same application (the Facebook website and native apps, for instance). On the other, designers strove to maintain consistency across families of products (such as the Google office suite). By identifying basic building blocks and their rules, designers could create pattern libraries. These could then be used to guarantee brand compliance and consistent experiences across versions of a platform or families of products. Material Design, created to support Google's world domination aspirations, is perhaps the most comprehensive of these.

Atomic Design has been very effective in addressing problems of scalability. And, in a way, an infinitely flexible platform presents the ultimate problem of scalability. We no longer have to account only for a vast array of screen sizes or an ever-growing family of products: what we want to create is a system tailored to every user in the specific context they are currently in.

Traditional deliverables in a brave new world

But how do we wireframe, mock up and prototype a system that, by definition, has no defining shape? Asking that question is, in a way, akin to asking: how do we sculpt water?

At first sight, this seems an impossible problem to solve. But as with Atomic Design, our current tools already hold the answers.

Best practice in UX design is very good at encouraging designers to keep problems and solutions separate. Early UX deliverables, such as personas, empathy maps and user journey maps, provide a high-level view of user wants and needs. Even more detailed deliverables, such as service blueprints, keep this comprehensive view: they look at the user in their wider context, not at the specifics of how they interact with content or UI. Since a hyper-personalised platform is only one way (albeit a powerful one) of meeting user needs, it can take advantage of all these methods.

But what about later-stage deliverables, such as wireframes, user flows and mock-ups? Obviously, the days of wireframing and designing progressions of fixed screens would have to be over. But wireframes and mock-ups are still useful.

Rather than wireframing and crafting complete pages or screens, we design the atoms and establish their rules. Rather than delivering a series of wireframes or designs, we deliver a pattern library. Pattern libraries are now so ubiquitous that there is a wealth of tools and principles designers can use to facilitate the process.

And what about prototyping? Our current prototyping tools, built to display a fixed series of states in response to user interactions, will have to change or expand. It might even prove more useful to move away from them and prototype in code. The hardest part to code (the algorithm) can be simulated through the Wizard of Oz method, in which a human plays the part of the machine behind the scenes.

In this sense, prototyping could be as simple as the designer re-arranging pieces of paper in response to the user's preferences, actions and context.

[Illustration: multiple interconnected staircases]

At higher fidelities, these prototypes can also be used to stress-test the system or to make proofs-of-concept of specific user journeys. Like the final system, they should be composed of modules that can be infinitely reconfigured.
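As a sketch of what such a coded, Wizard of Oz prototype might look like (every name below is hypothetical): the prototype renders whatever ordered list of modules it is given, and a facilitator, standing in for the not-yet-built algorithm, re-orders that list live in response to the participant's preferences, actions and context.

```typescript
// Wizard of Oz sketch: the "algorithm" is a list edited by a human.
// Module names and the render functions are illustrative.

type Module = { id: string; render: () => string };

const modules: Record<string, Module> = {
  news:    { id: "news",    render: () => "<section>Latest news</section>" },
  weather: { id: "weather", render: () => "<section>Weather</section>" },
  tasks:   { id: "tasks",   render: () => "<section>Your tasks</section>" },
};

// The facilitator (the "wizard") edits this ordering between interactions,
// simulating what the personalisation algorithm would eventually decide.
const wizardChoice: string[] = ["weather", "news", "tasks"];

// The prototype itself just assembles whatever it is given.
function renderPage(order: string[]): string {
  return order.map((id) => modules[id].render()).join("\n");
}

console.log(renderPage(wizardChoice));
```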

Overall, what we want all these deliverables to do is capture the essence of the experience and the rules that underpin the system. While deliverables (and the software used to create them) will have to expand and change in response to these new challenges, they are already good enough to get us started.

Towards a design chemistry

Even though this system should be flexible enough to adapt to any user, it still needs rules and limits. Like physical atoms, our digital equivalents should have intrinsic values that govern how they are combined. When the AI assembles unique experiences for a user, it should follow this "digital chemistry". There are several reasons (a rough sketch of what such rules might look like follows the list):

  • Usability. It’s impossible to guarantee that every single combination of components will be usable. Usability heuristics need to be embedded into the algorithm. The assembling machine must be intelligent enough to understand and privilege more usable combinations. This is made easier if individual components are built around usability and accessibility.
  • Aesthetics. While the machine might be, in a sense, "vomiting" the digital experience, we don't want the result to be a mess of pixels. Beautiful things are easier to use (or at least are perceived to be). The aesthetics of the system should also be embedded in the components.
  • Meaning. This is perhaps the hardest part to get right. But it is essential if we want to engage with the user. A personalised system is only better if it makes more sense to the user. When algorithms fail (such as when Facebook serves completely inappropriate ads), it makes the experience worse than if there had been no personalisation at all. The result is frustration and mistrust.
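Here is that sketch, in TypeScript. The properties, weights and components are entirely hypothetical; the point is that each component carries its own usability, accessibility and aesthetic constraints, and the assembling machine scores candidate combinations against them instead of trusting any arbitrary arrangement.

```typescript
// Hypothetical sketch: components carry their own "chemistry" so the
// assembler can privilege more usable, more coherent combinations.

interface Component {
  id: string;
  minWidthPx: number;      // usability: don't render below this width
  contrastRatio: number;   // accessibility: text/background contrast
  pairsWellWith: string[]; // aesthetics: pairings the design system endorses
}

const hero: Component  = { id: "hero",  minWidthPx: 320, contrastRatio: 7.1, pairsWellWith: ["cta"] };
const cta: Component   = { id: "cta",   minWidthPx: 120, contrastRatio: 4.6, pairsWellWith: ["hero"] };
const table: Component = { id: "table", minWidthPx: 600, contrastRatio: 4.5, pairsWellWith: [] };

// Score a candidate layout for a given context; higher is better.
function score(layout: Component[], viewportWidth: number): number {
  let s = 0;
  for (const c of layout) {
    if (c.minWidthPx > viewportWidth) s -= 10; // unusable at this size
    if (c.contrastRatio < 4.5) s -= 5;         // fails WCAG AA contrast
    for (const other of layout) {
      if (c.pairsWellWith.includes(other.id)) s += 1; // endorsed pairing
    }
  }
  return s;
}

// On a phone, the assembler should prefer hero + cta over anything
// containing the wide data table.
console.log(score([hero, cta], 375));   // 2
console.log(score([hero, table], 375)); // -10
```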

We can leverage existing design principles and tools to design the framework of the system. But what can we leverage to construct this chemistry? Our next article will explore the problems encountered in developing rules to guarantee that AI-driven digital experiences are superior to their primitive, though lovingly built, ancestors.

 

Marcos Villaseñor is Lead UX Consultant at Precedent. Illustrations by Design Director Will Krüger.

 
