
Understanding Automated Decisions

Using design research to investigate communication of personal data use within automated decision making contexts

This project uses design wire-framing, public engagement including a gallery show and public lecture, and stakeholder consultation to understand different options for making the data behind real-time insurance decision-making transparent.

The Understanding Automated Decisions project uses design methods and public engagement to investigate how to make data-based, automated decision-making understandable to people; how to communicate the processes through which automated systems operate and their implications for personal data privacy and collective data governance; and how to engage with complex issues of algorithmic transparency.

This is a unique research and design collaboration directed by Dr Alison Powell of the LSE Department of Media and Communications in partnership with service design consultancy . This university/industry partnership is well placed to experiment with ways of communicating and generating public discussion about automated decision-making in a context where automated decisions cannot easily be made transparent.


Overview

Automated decisions are present in many everyday services: for example, bank fraud systems deciding whether a particular payment from an account is fraudulent. In the past, many of these decisions were made by people following a flowchart. But as flowcharts are replaced by code and algorithms, the decisions are becoming increasingly opaque. Most people don’t know how a decision has been made, what metrics or data were used, or why the decision process was set up in a particular way.
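
A purely illustrative sketch (not drawn from any real bank’s system) can show the difference. In the Python below, the rule-based check mirrors a flowchart that a person could follow and explain step by step; the model-based check, written against a hypothetical scikit-learn-style classifier, returns only a score, with no reason attached at the point of use. All names, fields and thresholds are invented.

    def rule_based_check(payment):
        """Decision logic a person could read aloud and justify step by step."""
        if payment["amount"] > 5000:
            return "flag", "amount exceeds the 5,000 threshold"
        if payment["country"] not in payment["familiar_countries"]:
            return "flag", "payment made from an unfamiliar country"
        return "allow", "no rule triggered"

    def model_based_check(payment, model):
        """The same decision delegated to a trained model: only a score comes back."""
        score = model.predict_proba([payment["features"]])[0][1]
        decision = "flag" if score > 0.8 else "allow"
        return decision, f"model score {score:.2f}"  # no human-readable reason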

In addition, code introduces, or exacerbates, other issues. Researchers have previously demonstrated that biased outcomes can be produced by a number of interrelated factors, including the following (the training-data case is sketched below):

  • ‘Black-box’ design patterns that make decision-making opaque to people using a service
  • Biases in data collection reproduced by the system itself
  • Biases in data used to train an algorithm
  • Biases that result from the way an algorithm is ‘tuned’ by use

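To make the training-data case concrete, here is a minimal sketch using synthetic data, assuming scikit-learn and NumPy; the “group” attribute, coefficients and thresholds are all invented. The only point is that a bias encoded in the training labels reappears in the trained model’s scores.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    group = rng.integers(0, 2, n)                 # two hypothetical groups, 0 and 1
    risk = rng.normal(0, 1, n)                    # genuine risk signal
    # Historical labelling was harsher on group 1, so the labels themselves are biased
    label = (risk + 0.8 * group + rng.normal(0, 0.5, n)) > 0.5

    X = np.column_stack([risk, group])
    model = LogisticRegression().fit(X, label)

    # At an identical level of genuine risk, the model now scores group 1 as riskier
    same_risk = np.column_stack([np.zeros(2), [0, 1]])
    print(model.predict_proba(same_risk)[:, 1])   # higher flagged probability for group 1
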
These biases, and others, are united by a common theme: how a machine makes a decision can be difficult to interrogate, challenge, or even identify. Working out how to open up automated decisions so they are scrutable and accountable to people is becoming a critical design problem, and it grows more urgent as automated decisions move into areas like national security, news, and health care that directly affect people’s wellbeing.

To date, much of the response to this opacity has focused on proving that it is a problem. AI partnerships have proliferated, academic symposia have produced new principles for transparency, and there have been experiments like ProPublica’s Breaking the Black Box.

Whilst these discussions are important, few have started from the perspective of the people who use services and what they need. No response has been truly accessible to the public, or shown the positive use cases of making automated decisions more accountable. The conversation needs case studies and demonstrations of practical, applicable approaches to accountability.

We propose taking a user-centred design approach to exploring the accountability of automated decisions. Based on the workings of a live service, we’ll develop high-fidelity prototypes that help us define appropriate approaches to accountability and transparency.

This is a departure from other methods of making services accountable. Computer science techniques can scrutinise codebases, but the results of such scrutiny are often not legible to the people who use a service. Audits of these services can hold developers to account for biases built into the machines, but do little to communicate what’s happening with data at the point of use. Advocates of regulatory responses have yet to contend with the shortage of skills required to parse data effectively.

These approaches to transparency all restate and reinforce an imbalance of power in which only a few people with expert knowledge of systems can hold them to account. We believe instead that accountability comes when the people who use a service can object to how data is used or processed, an attitude the EU’s General Data Protection Regulation echoes. This ground-up approach is a departure from the existing literature and debate around accountable algorithms.

This involves researching what the users of a service that relies on an automated decision need, whether they understand what that service is doing, and how to bridge any gap by redesigning the service. This will generate patterns for communicating automated decisions that can guide the people building such services, so those services are more legible and accountable to the people who use them. Sharing these findings with a wide community – made up of those making, critiquing and regulating such services – has the potential to move the conversation forward into a practical space.

LSE Participants


Dr Alison Powell
Principal Investigator

Dr Alison Powell is Assistant Professor in the Department of Media and Communications at LSE, where she was the inaugural programme director for the . She researches how people’s values influence the way technology is built, and how technological systems in turn change the way we work and live together.

 

Funding

Funding for this project is gratefully received from .