1984?


Ridley Scott’s advert that launched the Macintosh personal computer in 1984 sought to show:

…the fight for the control of computer technology as a struggle of the few against the many, says TBWA/Chiat/Day’s Lee Clow. Apple wanted the Mac to symbolize the idea of empowerment, with the ad showcasing the Mac as a tool for combating conformity and asserting originality. What better way to do that than have a striking blonde athlete take a sledgehammer to the face of that ultimate symbol of conformity, Big Brother? [link]

One of the aims of this project is to connect individuals working in the design for behaviour change field with policy makers looking for ways to encourage behaviour change. When you think of the public policy implications of persuasive technology, do you think it will empower individuals, or open the door to Big Brother?

The persuasive technology discourse (despite its rather Orwellian name) adopts much the same general stance as Google’s “don’t be evil”. This emphasis was set at the first Persuasive conference in 2006:

“In the PERSUASIVE 2006 conference, a particular emphasis was put on those applications that serve a beneficial purpose for humans in terms of increasing health, comfort, and well-being, improving personal relationships, stimulating learning and education, improving environmental conservation, et cetera.” [1]

However, the ethics of anything designed to change behaviour remain a bit of a minefield. Here are four points that seem to capture the ethical issues raised by writers publishing within the persuasive technology discourse.

1. Awareness or Deception

B. J. Fogg’s definition of persuasive technology precludes coercion or deception [2], meaning that persuasion must be voluntary. But doesn’t avoiding deception require the user to have fairly sophisticated knowledge of how the techniques employed by a piece of persuasive technology work?

The possible problem is illustrated by the point of view of practitioners like Wai and Mortensen [3] who, writing from a commercial perspective, suggest that the successful adoption of some devices by consumers lies in making them as boring as possible, and in making efforts to “mask any behaviour change”.

The point is picked up by Atkinson [4] in a critical review of Fogg’s book, who writes that persuasive technology could only be ethical “if [users] are aware of the intention from the outset of their participation with the program [or product]”. Atkinson maintains that going further than this would be manipulation.

2. Who has the right?

The designer’s mandate is usually to keep the desires of the user firmly at the centre of their decision making (user-centred design is the mot juste). The techniques of persuasive technology, however, as Johnson [5] observes in his review of Persuasive Technology, shift the focus from the user’s desires to those areas in which the user could buck up his or her ideas and change behaviour (paraphrased).

This is presumably not such a big deal in a free market, where any person is free to buy a particular product or not (providing, as per the previous point, the product is not deceptive), but what happens when the state gets interested?

3. Which behaviours?

The third area of concern is which behaviours are fair game for designers to encourage. Berdichevsky and Neuenschwander note that any persuasive attempt (technological or otherwise) rests on “uneasy ethical ground” and propose a golden rule of persuasion:

“The creators of a persuasive technology should never seek to persuade anyone of something they themselves would not consent to be persuaded of.” [6]

Fallman [7] calls for Human-Computer Interaction (HCI) to develop a philosophy of technology to help decide which behaviours persuasive technology could ethically encourage.

4. Infantilisation?

The final point (and to my mind an important one) is well made by Atkinson who, while conceding that persuasive technology might be ethical if the designer’s intent were altruistic, asks:

“But would not this sort of benevolent intent be better constructed and represented by the sound reasoning we know as advocacy or even education, where intent is exposed at the outset or revealed through simple inquiry about course content? … Exposure to both is cognitively enriching and can result in attitude, belief and behavioural change, but both remain respectful of the individual’s own ability to synthesise the offerings provided by new information into a worldview that is meaningful for that individual.” [4]

That seems to me to be a whole blog posting in itself… Check back soon for more.

Big Brother or Empowering Individuals? How could ethical public policy be developed?

References:
[1] IJsselsteijn, W., de Kort, Y., Midden, C., Eggen, B. and van den Hoven, E. (2006), Preface. Lecture Notes in Computer Science, 3962, V.
[2] Fogg, B.J. (2003), Persuasive Technology: Using Computers to Change What We Think and Do. Morgan Kaufmann.
[3] Wai, C. and Mortensen, P. (2007), Persuasive Technologies Should Be Boring. Lecture Notes in Computer Science, 4744, 96.
[4] Atkinson, B.M.C. (2006), Captology: A Critical Review. Lecture Notes in Computer Science, 3962, 171.
[5] Johnson, R. R. (2004), Book Reviews: Persuasive Technology. Journal of Business and Technical Communication, April, 251–254.
[6] Berdichevsky, D. and Neuenschwander, E. (1999), Toward an ethics of persuasive technology. Communications of the ACM, 42, 51–58.
[7] Fallman, D. (2007), Persuade Into What? Why Human-Computer Interaction Needs a Philosophy of Technology. Lecture Notes in Computer Science, 4744, 295.


5 Responses to “1984?”


  1. Dan Lockton, February 10, 2009 at 2:08 pm

    One of the dilemmas at the heart of the field seems to be whether it is ethical to try to change someone’s behaviour towards something which, though it might ultimately benefit the individual, or society in general, doesn’t benefit the individual at the moment that the intervention happens (or at least, the individual doesn’t think it is of benefit).

    Some would see technology that does that as unethical (the person doesn’t want to change) even though it would seem to satisfy Berdichevsky and Neuenschwander’s golden rule, in the sense that the designer is presumably “better informed” on the issue and would be quite happy to have his/her behaviour changed in this way.

    But if we take ‘technology’ out of the example, it starts to sound different. Say there’s a kid who loves eating chocolate cake, and eats it to excess, because he or she enjoys it. So the parents, worried about their child’s behaviour, read the kid a bedtime story (Simulation in Fogg’s terms) about a very similar little child (Social proof) who eats and eats and eats and is very ill in some amusing way (maybe turns into a chocolate cake him/herself). And it causes the real kid to think twice before eating so much, in real life. Is that unethical? (In the sense it exaggerates the effects, it’s perhaps deceitful too.)

    In this sense (as that Atkinson quote suggests) it’s basically education. A child might not want to learn about healthy eating, but the teacher (or education policy makers) know it’s for the best (for the kid, and on a larger scale, for society), so it’s taught, and in later years the child may (or may not) come to appreciate that it was beneficial to be taught it.

    Isn’t that what a lot of government policy tries to do anyway? Is the level of tax on, say, cigarettes, aimed at changing public behaviour through persuasion, punishment (of a sort), or not at all?

    • rsadesignbehaviour, February 10, 2009 at 7:03 pm

      Thanks for commenting, Dan.

      I’m no expert on edukayshun, but I suspect that some of the techniques that persuasive technology (or behavioural economics) uses would be thought of as teaching, while others go a bit further (e.g. by being less transparent or towards the stronger end of persuasion). As I blogged, I think this is fine when you choose to buy a product because it might help you to do something you’ve always wanted to, but there are questions raised when it’s being done to you…

      Relatedly, who’s doing the educating? A parent-child relationship is different to most people’s relationship with the state. Is it the role of government to educate us or to provide us with enough information to decide whether we should change our behaviour or not?

      It’s true that there is lots of policy that already attempts to change behaviour, but I think one of the drivers of interest in behaviour change is the ineffectiveness of those existing methods (price signals, awareness campaigns).

      Jamie

  2. rsadesignbehaviour, February 11, 2009 at 9:56 am

    Hmm. Just re-reading my above comment and realised that I’ve unconsciously strayed towards a “there’s no role for persuasive technology in public policy” position, which isn’t really what I think. I do think there’s a promising overlap between the behaviour change agenda in policy and design that can help us change our behaviour, but I also think the policies need to be carefully developed with people – probably at community level. Which is what this project is all about, really.

    Jamie

  3. Erika, February 14, 2009 at 5:33 am

    Another perspective worth taking into account is that designs necessarily drive behavior in some way. Thus, while there are thorny ethical issues surrounding the design of systems and objects whose sole purpose is to change user behavior, the issues seem easier to resolve when you are designing something and have to choose how some aspect of it will affect the user.

    To use a non-technological example from Thaler and Sunstein’s book _Nudge_: a cafeteria designed to make healthier foods easier to access and unhealthy foods harder to access, say by putting the healthy foods at eye level, does not seem problematic. The alternatives, such as putting the junk food at eye level, also drive user behavior. There is, in this case, no neutral default (well, you could have a uniform random distribution of all food items at all access points, but that would just be annoying). The designer has to make some decision, and what is good for the user is an important input into that decision-making process.

    There must be relevant literature on the ethical implications of marketing and advertising. It seems that it must address many of the same issues as technological persuasion. Both have to deal with the fact that humans are not rational creatures and are more likely to be persuaded by emotion than reason. Both have to deal with the tension between persuading people with and without their noticing. And marketing has a long history of being used by both government and the private sector.

  4. Rennie, June 21, 2009 at 8:11 pm

    Was just drafting out my thoughts when I read Erika’s – who captures them perfectly. That is, whatever you do is going to ‘nudge’ people towards one way of doing things or another. You can count on the individual to have a really powerful sense of what is in their interest. So the nudge will only work in the face of low differentiation on the part of the individual. In which case, that which serves the community best should be where the design points. The “Nudge” book Erika refers to, as well as “Sway” and “Predictably Irrational” (Dan Ariely), have examples of forms, sequences, etc. that actually benefit nobody today – and could be better constructed so that at least somebody benefitted!

