On what good research can be and the problem of design

(This was written during a discussion about the purpose of doing user studies within the project HUMAINE, emotion-research.net. WP9, which is mentioned below, is one of the work packages of the project.)

I find this discussion fascinating, as it happens to take place during a phase in my life where I have had to reflect on what good design insights are – what knowledge we are really producing – and what the proper process is to get it.

To clarify my position, I’d like to start with an example that illustrates it more clearly and is somewhat easier to grasp than the problem of designing for affective interaction. One problem we have attempted to tackle is that of designing privacy-protecting solutions. In mobile phones, for example, we need to figure out ways to build technology from the start in such a way that it protects our privacy. School kids take photographs of their friends in the shower and then send them around, positioning systems used in services such as Friendfinder (put on the market by the Swedish telecom operator TeliaSonera) make it possible to locate your friends, and pervasive games make use of people nearby as part of the game without their knowledge (and may even encourage behaviours such as stalking).

How can we design technology so that privacy is protected? Can we provide design solutions or design insights that can be reused across many different systems and still produce good results? Can we find design principles that can generate several instantiations that all work satisfactorily?

Now the first lesson to be learnt is that privacy is not perceived as one coherent concept by all end-users. What I want to keep to myself is not necessarily the same set of information/behaviour/aspects of myself that you may want to keep to yourself. Even worse, we are brought up in different cultures that encourage openness/closedness in different ways. In Sweden the authorities publish people’s incomes – you can actually go to the taxation authority and ask what my income was last year. And they’ll tell you! In the US this is not at all OK. That information is definitely considered to be private.

And when we think more about this concept of privacy, we realise that it is really not only about protection and setting up a fence towards others. It is equally about feeling safe enough in certain situations, with certain people, that you want to share. I tell my husband much more than I tell you. I tell my friends more than I tell my colleagues. I tell my close colleagues more than I tell my European colleagues, and so on. And whether I tell people things or not is a matter of trust. This trust and level of openness/closedness is negotiated between us over time. It is both a matter of becoming friends and a matter of social control mechanisms. As I get to know someone I feel safer and safer, and I open up more and more. Social control mechanisms are, for example, “social translucence” situations, where I know as much about you as you know about me, and we both know that the other one knows. This relationship concerns power. Thus, privacy is related to trust and power. (In the Swedish system, as I know that everyone can know about everybody else’s salary level, this piece of knowledge cannot be used for or against anyone – it is not a tool of power. It has been neutralised, more or less. At least this is the idea behind making many things “public”.)
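To make the idea of social translucence concrete, here is a minimal sketch (my own illustration, with hypothetical names – nothing here is prescribed by the argument above) of a symmetric visibility rule for a Friendfinder-style location service: I may only locate you if you may also locate me, and every lookup is recorded so that the person being located can see it.

    # Sketch only: a "socially translucent" location-sharing rule.
    # Visibility is symmetric, and every lookup is visible to the person looked up.
    from dataclasses import dataclass, field

    @dataclass
    class TranslucentDirectory:
        # pairs (owner, viewer) meaning "owner has agreed that viewer may locate them"
        sharing: set = field(default_factory=set)
        # log of lookups (viewer, owner), shown to both parties
        lookups: list = field(default_factory=list)

        def agree_to_share(self, owner: str, viewer: str) -> None:
            self.sharing.add((owner, viewer))

        def can_locate(self, viewer: str, owner: str) -> bool:
            # Symmetry: I may only locate you if you may also locate me.
            return (owner, viewer) in self.sharing and (viewer, owner) in self.sharing

        def locate(self, viewer: str, owner: str) -> bool:
            if not self.can_locate(viewer, owner):
                return False
            # The lookup itself is logged and visible to both parties,
            # so neither can watch the other unnoticed.
            self.lookups.append((viewer, owner))
            return True

The point of the sketch is only the shape of the rule: both parties know what the other can see, and both know that the other knows.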

Thus, privacy cannot be defined only in terms of what types of information should be kept private. Instead it is a dialectic process between people, and between people and practice/culture. Good privacy solutions built into technology are therefore those where users are given tools to negotiate privacy between themselves, and where social mechanisms are built into the system so that power can be balanced. (An example of a tool-based solution is a chat environment where you can decide, over time and in interaction with others, who counts as a friend, what they can see, and so on.)
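As a minimal sketch of the tool-based idea (again my own illustration; the aspect names and granularity are assumptions, not something the argument prescribes), per-relationship visibility could be something the users themselves grant and withdraw over time, rather than a fixed global policy:

    # Sketch only: per-relationship privacy settings that users renegotiate over time.
    from collections import defaultdict

    class NegotiatedPrivacy:
        def __init__(self):
            # what each person currently lets each other person see,
            # e.g. visible["anna"]["bob"] == {"status", "photos"}
            self.visible = defaultdict(lambda: defaultdict(set))

        def grant(self, owner: str, viewer: str, aspect: str) -> None:
            # The owner decides, here and now, to open up this aspect to the viewer.
            self.visible[owner][viewer].add(aspect)

        def revoke(self, owner: str, viewer: str, aspect: str) -> None:
            # Trust can also be withdrawn; the negotiation never ends.
            self.visible[owner][viewer].discard(aspect)

        def can_view(self, viewer: str, owner: str, aspect: str) -> bool:
            return aspect in self.visible[owner][viewer]

The design choice that matters here is that openness is a relation between two particular people, adjusted as their relationship develops, not a system-wide setting.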

Now, when we put new technology into this dialectic process, we will not only build upon the prevailing cultural views and practices. We will also change the current culture. If we build technology where it is easy to take pictures of kids in the shower and then spread them without it being possible to trace them back to whoever did it (accountability), then people will start adjusting to this new situation. We often hear people saying “I do not care about being logged on the internet, filmed in public spaces, ...[etc.] because it is everywhere anyway”. I am not saying that this is a negative development – that needs to be evaluated against the consequences it may have. I am just arguing that technology, and the uptake of technology, changes society and our views on various values, such as privacy.

Thus, coming back to the issue of what a design insight might be that may generate many good privacy-protecting systems, I believe that it is not by bringing people into the lab and testing when they feel violated or protected by the technology that we will gain insights. It is when we study how privacy-protecting practices unfold in the “real” world that we may understand what the real mechanisms are. And then we may find solutions like providing users with tools for negotiating privacy, or socially translucent visibility solutions.

 

A principle like social translucence is a good example of a design insight, a design pattern, that can generate many different particular implemented services, but it does not always work. In some situations it is impossible to distribute power in this way (for example, when one of the parties is a company and the other is an end-user). Thus, our design patterns need to be presented in such a way that we understand their scope.

Now, what does this have to do with affective interaction? Well, again I believe that we need to divide the issue of how to proceed in this area into two different problems if we are going to reach successful, usable affective interactive systems (that is, not theory of what emotions are or what is going on in the brain, etc., but design principles/patterns for what kinds of affective interactive processes do indeed work and that can generate many systems – not only one system).

The two problems are (in my mind): First, we need to make sure that when we try to recognise an emotion, or make an interface express an emotion in some way, we are indeed interpreting/expressing the right emotion. Here we can learn a lot from laboratory studies: isolating variables, checking whether raising an eyebrow in your ECA indeed produces the intended expression, etc.

But the second problem, in my mind, lies in finding the kinds of design patterns that build upon the “real” human practices of affective interaction – and, similar to privacy negotiation, we know that affective interaction is also a dialectic, culture-dependent process that we need to tap into. We will not find those design patterns through laboratory experiments. We need to study practice “out there, in the wild”, and we need to pick up on design patterns that work to achieve the overall goal. The overall goal can be to make users affectively involved in some interaction with a game or persuasive technology, but that is only one class of systems. Other goals may be to make students learn better, or to diagnose stress and emotional processes to help end-users manage their own stress levels, etc. And once we measure the success of some affective interactive system, it is not enough to look for the criteria that measure whether we managed to express/interpret the correct emotion; we need to look for the criteria that are relevant vis-à-vis the overall goal of the system. This is where ecological validity is key.

Thus, we can find a basis for design in the “real” practice, out there. But we also need to recognise that we are building something new. And these new affective interactive systems will in turn change the way people interact with systems, our perceptions of systems, etc. And this in turn will change our understanding of, for example, where ECAs can and should be used in interaction.

Now, this is written from a WP9 perspective, that is: how do we create better, usable applications that make use of affective computing techniques? I do not mean for this to be the overall goal of HUMAINE.

To me, the following concepts need to be revisited in order to really understand what we are up to here:

  • what is design knowledge?
  • what does it mean to have a theory in this field? is there any predictive or generative power in the design principles/patterns we produce?
  • how can we, already from the start, take change into account (change of human practice, change of culture, the change that the technology will bring to society)? What are the best theoretical starting points for actually addressing change? (Not cognitivism, for sure; probably something more in the direction of phenomenology, activity theory and similar.)

Please observe that I am not arguing for total relativism here. In every culture on earth people have notions of privacy. In every culture we express and experience emotions. But these take on different shapes and forms depending upon culture. And technology will interfere with those shapes and forms. The socialisation from childhood to adulthood that changes our behaviours, understanding, meaning-making processes, brains, and even our emotional processes is not only done via other human beings, but also in dialogue with the artefacts and practices that the prevailing culture provides. Our technologies/innovations will be part of this socialisation process.

 
