Intro to our ethical framework

Posted 15th January 2020

Oliver Smith, Strategy Director and Head of Ethics, Alpha Health. 

I’ve spoken extensively about the importance of trust recently. In my last blog, I argued that artificial intelligence will never be successful without it, before elaborating on the subject with Gavin Freeguard and Jenni Tenison as we closed proceedings at the ODI Summit.


In both cases I talked about the importance of companies caring deeply about trust; how it has to be built into their core business and not simply viewed as an add-on. Throughout 2019, an increasing number of large companies committed to ethics programmes, allocating significant resources and time.


Here at Alpha Health we use five core principles to underpin our work and our ethical commitments to our users:


1. Improve your health and happiness


The first step to building trust is delivering against promises. Alpha Health’s mission is to build products that help our users lead happy and healthy lives. This must be achieved for each and every user – one part of this, for example, is ensuring that our algorithms do not discriminate on the basis of gender, race, religion, or any other group protected by anti-discrimination law.
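
To make that a little more concrete, here is the kind of check this implies, shown as a minimal, generic sketch in Python; the numbers and the fairness criterion (demographic parity) are illustrative choices, not a description of our production tooling.

```python
# Generic fairness check: compare a model's positive-prediction rate across
# groups defined by a protected attribute. All numbers here are invented.
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in positive-prediction rate between any two groups."""
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])                  # hypothetical binary predictions
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])  # hypothetical protected attribute
print(f"demographic parity gap: {demographic_parity_gap(preds, group):.2f}")
```

Demographic parity is only one of several possible fairness criteria, and which one is appropriate depends on the product and its legal context.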


2. Put you in control


We believe that our applications should work for you, not the other way around. To this end we are developing techniques to identify and prevent unhealthy patterns of use. With respect to control over data, the General Data Protection Regulation is clear that users must always know exactly how their data is being used, and must be able to easily take control of their sharing preferences. The challenge, however, lies in how this is implemented, which for us highlights the importance of our third principle.
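
Before moving on to that third principle, here is a very rough illustration of what identifying unhealthy patterns of use could mean in practice; the rule and the thresholds are hypothetical placeholders, not our actual approach.

```python
# Toy rule for flagging potentially unhealthy usage patterns.
# The thresholds and the notion of "unhealthy" are hypothetical placeholders.
from statistics import mean

def flag_unhealthy_use(daily_minutes, max_daily=180, max_average=120):
    """Flag if any single day, or the average over the period, exceeds a chosen limit."""
    return max(daily_minutes) > max_daily or mean(daily_minutes) > max_average

week = [95, 110, 200, 90, 130, 85, 100]   # minutes of in-app use per day (invented)
print(flag_unhealthy_use(week))           # True: one day exceeded 180 minutes
```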


3. Be understandable and transparent


Let’s face it: privacy policies and terms and conditions are often unintelligible, even to those of us willing to dedicate the hours required to read them. At Alpha Health we believe this is unacceptable, and we are experimenting with more visual designs and simpler language to ensure that users really do understand what our apps do, and how. This work extends to creating new technologies that allow the results of algorithms to be explained, rather than remaining a black box. Finally, I recognise that Alpha Health needs to do more to publish its work, such as ethics audits. This is something we’ll start doing in earnest in 2020, both on our website and through presentations at ACM FAT in Barcelona this month and AIES 2020 in New York in February.
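
By way of illustration only (the model, feature names and weights below are invented), even something as simple as breaking a linear model’s score into per-feature contributions can turn an opaque number into something a user or an auditor can interrogate:

```python
# Explaining one prediction of a linear model as per-feature contributions.
# Feature names, weights and values are invented for illustration.
feature_names = ["sleep_hours", "daily_steps_k", "stress_score"]
weights = [0.8, 0.5, -0.6]      # hypothetical model coefficients
bias = -1.0
user = [6.5, 4.2, 7.2]          # one user's feature values

contributions = {n: w * x for n, w, x in zip(feature_names, weights, user)}
score = bias + sum(contributions.values())

print(f"score = {score:.2f}")
for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name:15s} contributed {c:+.2f}")
```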


4. Secure your data


Alpha Health is working toward an AI system that will ultimately mean we do not have to see any of our users’ personal data. There’s simply no better way to guarantee a user’s security and privacy than by avoiding the transfer of any private data in the first place. As I am sure many will appreciate, this is incredibly difficult to do with the technology available today, which is why we are investing heavily in leading research and development in the field.
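
One family of techniques consistent with this goal is federated learning, in which raw data never leaves a user’s device and only model updates are shared and aggregated. The sketch below is a toy illustration of that idea, with invented data, not a description of our system:

```python
# Toy federated averaging: each device computes a model update locally;
# the server only ever sees the updates, never the underlying data.
import numpy as np

def local_update(weights, local_data, lr=0.1):
    """On-device step: nudge the model towards the local data mean. Raw data never leaves."""
    return weights + lr * (local_data.mean(axis=0) - weights)

def federated_average(updates):
    """The server averages the model updates it receives."""
    return np.mean(updates, axis=0)

global_weights = np.zeros(3)
devices = [np.random.default_rng(i).normal(size=(20, 3)) for i in range(5)]  # private per-device data

for _ in range(10):                                     # a few federated rounds
    updates = [local_update(global_weights, data) for data in devices]
    global_weights = federated_average(updates)

print(global_weights)
```

Even here, the updates themselves can leak information, which is one reason complementary techniques such as differential privacy and secure aggregation remain active areas of research and development.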


5. Be accountable


Accountability is fundamental to becoming an ethical leader. People must be able to understand the decisions an algorithm makes, and its creators must ultimately be held responsible, rather than pointing the finger elsewhere. Therefore, at Alpha Health, we regularly get an external view on the work we do, whether through audits by recognised leaders in the field, such as Eticas, or by partnering with industry leaders who have the expertise to help us serve users with full accountability.


Putting our principles into practice, however, has resulted in a number of challenges. Firstly, we need to get the level of specificity right, balancing meaningfulness in the present with room for ambition in the future. Secondly, there may be conflict between the principles as we develop them. For instance, the algorithmic audit of one of our apps identified that we had emphasised privacy to such an extent that we struggled to provide direct evidence on algorithmic bias (indirect evidence showed that we had avoided bias) – I shall be speaking about this at AIES 2020 in February. Lastly, we need to ensure that as we grow and learn, our principles do too.


We’re working hard to address all of these issues, and will continue trying to lead the charge in bringing to life truly ethical AI.