From AI to UI

Kristina Höök, Professor in Human-Machine Interaction at DSV since February 2003

 

Abstract

In my work I have come, step by step, to respect the complexities of designing software systems for people. When I started doing research I naively believed that we could model users’ behaviour and adapt to it. Today, what thrills me the most is when I can design systems that allow users to be the incredibly adaptive, creative, always learning people that we are. Meaning is created by the people and their activities in the space – not by computing systems. A computer system is not built of bricks and wood, but of software, a fantastically fluid and changeable building material. What we need to do is to build on this material to give end users substantial power over both the ‘material’ and the content in the systems we build.

In the beginning…

In the early 1990s the future looked bright and just about anything was possible, even when it came to computers and their use. The field of artificial intelligence (AI) had just about found its shape and goals, and the promises given were fantastic. Soon we would be able to create systems that people could talk to, that could tutor our kids individually, that would adapt their way of functioning to us rather than the other way around, that would have humanoid characteristics and that would be able to act on our behalf, knowing what we wanted even before we knew it ourselves. My own interest was already then in how people and machines should interact. I wanted to invent novel ways for computers to behave so that we could make them accessible to all. There had recently been some insights pointing out that the so-called normal user was probably someone other than a 30-year-old engineer with good programming skills. A normal user could in fact be someone without those skills – still male and young of course – but at least someone who did not understand computers inside-out.

Some believed that if we could only make programming languages that had a similar structure to how the brain worked, then people would just have to express their thoughts and the computer would understand and execute the given statements. Thus, there was a great need to understand human cognition and the brain. How do people really solve problems? I was one of those asking that question. I was very curious as to how people solve problems and how they understood one of the, at the time, very popular programming languages, Prolog. I had done one study while visiting Sussex University, and as it turned out, people solve problems using all kinds of resources, drawing upon everything they knew, including real-world knowledge, and not only that, they also made things up as they went along! In fact, they were even learning and changing their behaviour during my one-hour study with them – without any feedback from me! While on the one hand this is indeed rational behaviour, it was on the other hand nothing that easily translated in a one-to-one manner into programming statements.

This experience made me forever reluctant to claim anything about when and why people learn anything – my firm belief was and still is that learning is key in our thinking and cannot be confined to one small process that works only in one way.

Moving to DSV

In this state of mind, I first met Carl-Gustaf Jansson at DSV. He looked exactly as he does today. Long arms waving around in excitement and all that unruly curly hair and beard – a true image of a professor to be. At the time, he was very interested in learning processes, just as I had been. His interest was in human learning, but also, perhaps more importantly to him, machine learning. He truly believed that machine learning was the key to creating intelligence in machines. I am inclined to believe that he is still right. Machines that mimic human reasoning without the ability to associate and learn will not behave intelligently as soon as they are removed from their context.

I had been employed at SICS at the time, and as DSV was located in the same building, Electrum, several of us young researchers at SICS became PhD students at DSV. Calle was building his first research group and it was a fantastic bunch of very interdisciplinary students, many of whom are still around in Kista, amongst others: Robert Ramberg, a very young psychology student, Jussi Karlgren, who had studied linguistics, Henke Boström, doing machine learning, and many, many others. In a sense the group was spread over both DSV and parts of SICS. We would have joint meetings and study groups from time to time. Calle organised, and still organises, a meeting in Åre where we from SICS would sometimes be invited to join in. The meeting in Åre was probably the most intense meeting I have ever been to. Calle would bang on our doors at 7 in the morning, making us work all morning, and after skiing all afternoon, he would make us work again for several hours before dinner. As we were all young and energetic we would stay up all night, and then Calle came banging on the door again at 7, asking us to get up and be creative again. After several days of this treatment we would stagger home again, deadly tired, but also with a bunch of great research ideas and with a strong group feeling.

Meeting reality

It was a glorious time in terms of funding as well. After some initial struggling with building route guidance systems for cars, making use of various intelligent route planning methods, Annika Waern, I, and some of my colleagues at SICS applied for money from Ellemtel AB (jointly owned by Telia and Ericsson). Through Calle we also applied for money from NUTEK. We were granted money from both sources for three years! Given this money, we started to investigate whether it was indeed possible to create machines that would adapt to people and provide help just when it was needed. Our idea was that it should be possible to model users’ help needs from their actions at the interface and then present only the most relevant information.

This project started a joint journey towards taking people and their interactions with systems seriously. We spent lots of time at Ellemtel trying to figure out what people were doing and what their help needs were. And of course, the real-life needs of people trying to create complex systems were not at all as neat and tidy as those we had imagined in the research lab. It turned out that when seeking help, most users had very little use for on-line documentation. Their foremost urge was to talk with someone who had experience of the task they were attempting. The information needed to be contextualised to their particular problem at hand. And just as I had learnt that learning is a fantastic, fluid, on-going process, I now learnt that information search is not a simple rule-based process where a need can easily be matched with some information items. Again, while someone was searching for one piece of information, they would discover other items, learn more about the structure of the overall information, and their help need would change – while searching! We are indeed incredibly adaptive and creative beings.

Users are people

In our joint project, we now had to search for the theoretical and practical foundations needed to understand and address this problem. We found those foundations in the (at the time) recent critique by Lucy Suchman of AI solutions. She had analysed some of the assumptions made by early AI researchers and found that their rule- and plan-based approach did not at all capture the real behaviour of people. People are situated. We act based on changes in our information. Plans are resources for us in these situations, but we change them quickly as soon as new facts arise. Suchman had an enormous influence on the field she was critiquing. AI researchers turned to new kinds of knowledge representations and rapid situated planning algorithms.

We made the same turn in our project. The help system that we built continuously adapted to the user’s behaviour. It did not assume that the user had one and only one information goal. In addition, we made sure that the user could both understand what was going on with the adaptations and reverse them if they did not match the user’s needs. Annika Waern, Jussi Karlgren, Calle, myself and the other colleagues in the project wrote up our experiences in a journal paper that, according to my current favourite programme, scholar.google.com, is the most cited of all my and Calle’s scientific writings.

The work we did in this project did of course not exist in a vacuum. The whole AI world was turning more towards solutions in which context and context limitations were key. At DSV there were several projects along these lines – studying learning processes as situated learning, studying distributed intelligence, and creating machine learning systems. In a sense, it became a whole strand of work that laid the foundations for both the K2-lab at DSV and the HUMLE-lab at SICS. The K2-lab at DSV, led by Calle, grew and soon consisted of more than 30 researchers. The HUMLE-lab at SICS, led first by Annika Waern and then by myself, grew to about 25 researchers. Not all the research was done from exactly the same theoretical foundation or perspective on the world, but common to it all was a keen interest in applying AI techniques in more realistic and humanistic ways.

People are social

My own work after this point was inspired by the social processes around information search that we found at Ellemtel. If information is not and cannot be de-contextualised, and people typically want either to talk directly to others or to be able to see what others have done in similar circumstances, then why not try to facilitate this process? We named the process social navigation and have now spent several years trying to figure out exactly how to implement systems that make other users’ actions visible, aggregate their behaviour to provide recommendations, or simply put users in contact with one another so that they can help each other. In a sense, this strand of work took me even further away from the original AI dream. Instead of modelling this or that abstractly, it became more important to put users’ intelligence to use – putting the human in the loop. The problem, in my mind, shifted from being an AI problem to becoming a UI (User Interface) issue, building upon human intelligence rather than artificial intelligence.
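The aggregation side of social navigation can be sketched in a few lines of code. The sketch below is only a minimal illustration of the general idea – recommending to a user the items visited by others who have left similar trails – and the function name, data and weighting are hypothetical, not taken from any of the systems described here.

```python
# Minimal sketch of social navigation by aggregation: other users' trails
# (sequences of visited items) are pooled, and items are recommended to a
# user in proportion to how much their own trail overlaps with the trails
# those items appear in. All names and data are made up for illustration.

from collections import Counter

def recommend(trails, user, top_n=3):
    """Suggest items the user has not yet visited, weighted by how many
    items the recommending users share with this user."""
    seen = set(trails.get(user, []))
    scores = Counter()
    for other, trail in trails.items():
        if other == user:
            continue
        overlap = len(seen & set(trail))  # crude similarity: shared visits
        if overlap:
            for item in trail:
                if item not in seen:
                    scores[item] += overlap
    return [item for item, _ in scores.most_common(top_n)]

# Hypothetical trails left by three users browsing a help system.
trails = {
    "ann":   ["intro", "search", "help"],
    "jussi": ["intro", "search", "prolog"],
    "calle": ["search", "prolog", "ml"],
}
print(recommend(trails, "ann"))  # → ['prolog', 'ml']
```

The point of the sketch is that no model of the individual user is needed: the “intelligence” comes from aggregating the traces that other people have already left.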

If we, metaphorically, look upon system design as a building where the walls have been set up, the floor is laid, and the roof is securely in place, we also know that once the building starts to be used, people will start leaving their traces in it. They will put up wallpaper, furnish it, and sometimes tear down walls to create the kind of spaces they need for the kinds of activities that the building will host over time. Depending upon the activities in the building and the traces they leave in the physical layout and social activities, new visitors to the building will be able to ‘see’ how to act, where to interact, whom to talk to. The space will be turned into a place, as phrased by Harrison and Dourish in 1996.

There are two important aspects of this activity that we need to consider. First, it is important to remember that meaning will not arise from setting up the walls. Meaning is created by the people and their activities in the space. Second, the design of the building seems to be an on-going process where certain spaces are left ‘open’, inscribable, sometimes purposefully by the architect, sometimes because the inhabitants take charge of the house and rebuild it, but in any case allowing the inhabitants of the house to leave their marks on it.

If the architect has made a very strong statement in the building design, it might be harder for users to appropriate the building. They will hesitate to change it because they are scared of destroying the intended meaning. Nevertheless, over time the activities do leave their marks on it – it gets worn, wallpapers have to be changed, new tenants move into the house. And in our daily activities in the building, other people can see what we do and will react to it.

What is truly interesting about computer system architecture is that it is so much easier to change. A computer system is not built of bricks and wood, but of software, a fantastically fluid and changeable building material. It is not impossible to provide end users with substantial power over both the ‘material’ and the content.

A fluid design material

Just as I turned towards social navigation, the K2-lab took a similar turn when some researchers, such as Robert Ramberg, Klas Karlgren, and others, turned to new theoretical viewpoints in order to provide for human learning processes. They were inspired by the idea that much of human learning can be characterised as language games – we learn the lingo of some subject area and thereby learn both how to talk about the subject matter and obtain the tools that enable us to think about problems in novel ways. In the area of learning, the K2-lab also had several projects looking at children’s learning processes. The most recent advancements lie in the work by Jakob Tholander and Ylva Fernues. Jakob and Ylva are interested in making the new medium that computers and programming offer available also to kids. In school today we learn how to write, draw and paint, we get music lessons, and we do woodwork, sewing and cooking, but we do not teach children how to express themselves through programming or other IT artefacts. Jakob and Ylva have attempted to make this medium accessible to children by making parts of it tangible. That is, kids programme by manipulating physical objects that in turn interact with the digital world, translating their activities into digital activities.

Research in the area that originally interested us in the HUMLE-lab and the K2-lab can perhaps best be characterised today as an exploration of the fluid design material that computing is. We are trying to understand its inherent and emergent properties as well as extending it using sensors, tangibles, music, colours, haptics, and just about any material at hand. We are applying it to new areas, such as learning, collaboration, affective interaction and meeting situations.

Through all the research that I have briefly touched upon above, I would say that we are inspired and humbled by one simple fact: people are fantastic! And as long as we humbly attempt to build systems that harmonise with this fact, we cannot fail. Using AI techniques in user interaction design might be very fruitful indeed, but most important is the users’ intelligence. Thus, from AI to UI.


About Kristina Höök

Kristina Höök has been a professor in Human-Machine Interaction at DSV since 2003. She also holds a part-time position at SICS, where she is the manager of the Interaction Laboratory. She became Associate Professor (docent) in 2002, received her PhD in 1996, her Licentiate in 1991, and her MSc in 1987.
