Clash of the Principles

Posted / 14th February 2020

Oliver Smith, Strategy Director and Head of Ethics, Alpha Health

 

I wrote recently about the ethical framework that we integrate into everything we do here at Alpha Health.

 

Within the framework sit our five core principles, which include the need for user control, transparency and accountability. Principles such as these are not simply a nice-to-have; they are essential if we are to have the impact we want from AI in healthcare.

 

As with the implementation of any framework or guidelines, there are inevitable hurdles. This week, we published a paper about a clash of our principles, which I recently presented at AIES in New York.

 

Bias vs. privacy – which triumphs?

 

This conflict was unearthed for us by Eticas and Universitat Pompeu Fabra (UPF) in Barcelona, whom we commissioned to undertake an algorithmic audit of our prototype wellbeing app, REM!X. REM!X used a reasonably simple algorithm to suggest wellbeing activities to the user.

 

We were seeking to understand whether the activities the algorithm suggested were in any way discriminatory on the basis of gender, religion, or other characteristics protected under anti-discrimination legislation. However, when Eticas and UPF first analysed REM!X, they told us that there was no direct evidence either way as to whether the app was biased, because we hadn't collected any personal data.

 

In retrospect, we perhaps could have seen this coming. Because we had strongly adhered to our principle relating to user control and privacy, we had not actually collected any of the data that would allow us to understand whether the suggestions made by REM!X were biased. The lengths we had gone to on data minimisation, in order to reduce any privacy concerns, meant that we were simply unable to tell whether our algorithm was discriminatory – and therefore unable to demonstrate how we were adhering to another of our principles.

 

Fortunately, Eticas and UPF were able to assess REM!X using indirect evidence, such as documentation of how we created the algorithm and a digital ethnography of more than 200 user comments in the Google Play Store. Using this evidence, they found that REM!X was not biased.

 

What happens when right meets right?

 

While we're refining our ethical principles to reduce the possibility of future clashes, what do we do when two 'rights' collide? How do we prioritise one above the other?

 

This is an age-old problem that ethicists and philosophers have vigorously debated for hundreds of years. When both outcomes (in our case, the removal of bias and the preservation of privacy) have value and merit, it's an incredibly difficult decision to make. There is no consensus on what the 'correct' answer is, but in our case we look at the potential harm, and in particular the risk of further disadvantaging an already disadvantaged group.

 

In the case of REM!X, where the suggestions related to everyday activities, the risk of further disadvantaging a particular group was considered to be low, and relying on indirect evidence of bias was therefore acceptable. However, in other situations, such as medical treatments, the potential harms from bias could be higher. In such cases we may decide that the 'right' balance of our principles is to collect some personal data, from a sample of users, for the purpose of assessing bias.
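
To make that idea concrete, here is a minimal sketch (illustrative only, not how REM!X was built) of what a bias check on a small, consented sample could look like. The field names "user_group" and "suggestion_category" are hypothetical; the sketch simply cross-tabulates suggested activity categories against a protected characteristic and applies a chi-squared test of independence.

```python
# Illustrative sketch only: REM!X did not collect this data, and the record
# fields "user_group" and "suggestion_category" are hypothetical.
from collections import Counter, defaultdict
from scipy.stats import chi2_contingency

def suggestion_counts_by_group(sample):
    """Cross-tabulate suggested activity categories against a protected
    characteristic gathered, with consent, from a small sample of users."""
    counts = defaultdict(Counter)
    for record in sample:
        counts[record["user_group"]][record["suggestion_category"]] += 1
    return counts

def flag_possible_bias(sample, alpha=0.05):
    """Chi-squared test of independence: do suggestion categories differ
    across groups by more than chance would explain?"""
    counts = suggestion_counts_by_group(sample)
    categories = sorted({c for group in counts.values() for c in group})
    table = [[group[c] for c in categories] for group in counts.values()]
    chi2, p_value, _, _ = chi2_contingency(table)
    return {"chi2": chi2, "p_value": p_value, "review_needed": p_value < alpha}
```

A check like this only flags statistically significant differences for human review; deciding what counts as an acceptable difference, and what to do about it, remains an ethical judgement rather than a statistical one.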

 

For us, the clash between privacy preservation and the removal of bias served as a valuable reminder that we need to think up-front about all of our principles, and agree how to balance them.

 

If you’d like to read the full paper I presented at AIES, you can find it here.