Alpha Health’s Approach to Ethics

Posted: 22nd June 2020

Introduction

This paper provides background to the development of Alpha Health’s Ethical Principles and Commitments: why ethics is important to us and how we developed them.

 

Why ethics is important to us

Fundamentally, we believe that we won’t be able to help people improve their health unless we are trusted and trustworthy, and that being ethical will underpin this trust. This belief rests on three rationales.

 

Firstly, ethics is intrinsic to issues of health and healthy behaviours, especially when providing advice and recommendations on what to do. Whilst we ensure that our recommendations are as evidence-based as possible, there will be occasions where it is not obvious what the ‘right’ answer is. This is not because the evidence is wrong, but because people can have competing priorities, for instance, when deciding whether to recommend activities that favour long-term health (e.g. going to the gym after work) or those that offer more short-term benefits (e.g. socialising with friends after work). Whilst there may be options that combine the best of both, and it can be argued that over time such choices even out, there will be occasions where Alpha Health has to choose which recommendation to favour, when, and why, and we must understand the ethical values that underpin such a position.

 

Secondly, to help people effectively, we need access to large amounts of data. This data is necessarily highly personal and sensitive; without it, we cannot create the highly personalised support that our ambitions require. We therefore need to earn people’s trust in order to access such personal data. Being ethical (doing the right thing) and being seen to be ethical are significant contributors to earning that trust.

 

Last, but not least, we know that people increasingly demand that technology be developed and deployed in an ethical and trustworthy manner. We have heard this first hand from the clinicians we work with, who, for example, insist on the explainability of our predictive algorithms. We can also infer it from general trends in trust. For instance, a survey by Rock Health in 2018 found that only 11% of people would be willing to share their health data with a technology company.

 

Our assumptions and approach

Alpha Health’s goal is to improve people’s health status. We follow the World Health Organisation’s definition of health: a state of complete physical, mental, and social wellbeing, and not merely the absence of disease or infirmity1. Clearly this definition goes beyond merely looking at life expectancy, and requires us to think about wellbeing as well as the burden of disease.

 

In terms of the latter, there are a number of measures available which try to provide an overall sense of a person’s health, such as the Quality Adjusted Life Year (QALY) or the Disability Adjusted Life Year (DALY). Alpha Health is still exploring which of these is the most appropriate to use. This does not mean that we are not measuring health status for our users; instead, we are focused on measures that relate to the specific challenges our apps are currently addressing, e.g. measures for stress, depression, and anxiety.
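As a rough illustration of how such a measure works (a minimal sketch; the quality weights below are invented for the example and are not values Alpha Health has adopted), a QALY weights each period of life by a quality-of-life score between 0 (dead) and 1 (full health):

```python
# Illustrative sketch only: QALYs weight each year lived by a quality-of-life
# score between 0 (dead) and 1 (full health). The weights below are made up
# for the example, not values Alpha Health uses.

def qalys(periods):
    """Sum of (years lived x quality weight) over each period of life."""
    return sum(years * weight for years, weight in periods)

# e.g. 5 years in full health followed by 10 years at a weight of 0.7
print(qalys([(5, 1.0), (10, 0.7)]))  # 12.0 QALYs
```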

 

In terms of wellbeing, emerging evidence from across cognitive science suggests that people optimise for happiness. On this view, happiness is a mental feedback signal that some behaviour or experience is good and should be repeated, with sadness as the converse. This is not to suggest that humans are all hedonistic; happiness can be broken down into two parts: pleasure and purpose2. Improving health status often fits more into the purpose part, although it doesn’t have to be without pleasure: some people find running marathons very enjoyable.

 

Within Alpha, we are expanding the definition of happiness further. We consider that control over our affairs is also an aspect of happiness, i.e. a part of what humans optimise for.

 

We must also account for the fact that no one exists as an island; for each of us, our health and happiness hugely affect, and are affected by, those around us. However, the reality is that our ability to understand what people who are not our customers are doing, and so how they may impact the health of our customers, is very limited. As a result, we have chosen to focus on societal harms in general, and the indirect impact they have on the health of our customers.

 

To frame our understanding of societal-level harms we have turned to the Universal Declaration of Human Rights and the European Convention on Human Rights. Our view is that there are many rights that we are highly unlikely to undermine, such as the right to life and liberty, the right to a fair trial, etc. Instead, we believe that there are two core harms that we should be concerned about: undermining privacy (UDHR art 12 and ECHR art 8) – through the level of data we collect; and discriminating against people based on gender, race, etc. (UDHR art 7 and ECHR art 14) – because our products could make recommendations that are biased against one or more groups, creating a less effective service and undermining equality.

 

Taken together, health status, pleasure, purpose, and control provide a rich picture of what it means to support individuals to improve their health, much more so than relying solely on traditional measures of health status; and avoiding societal harms allows us to account for the broader context within which all of us live our lives.

 

However, achieving all of these factors is not the same as being ethical or trusted, so we must turn to defining these terms3:

Ethics — the study of what is morally4 right and wrong, or a set of beliefs about what is morally right and wrong

Trust — to believe that someone is good and honest and will not harm you, or that something is safe and reliable.

Ethics is thus intimately connected to, but not sufficient for, trust, since it does not speak to reliability and safety. It is interesting to note that trust in an object or tool relates to safety and reliability, but not necessarily to being good or honest; as such, we usually don’t ascribe ethics to things. That this is changing for tools that deploy automated decision-making appears to reflect the fact that such tools are entering a domain that has previously been the preserve of humans, and it is this that is driving the need to consider ethical dimensions in how these tools are developed and deployed.

 

To understand how to pull together these threads of meeting our goal and being ethical and trusted, we have drawn on the ethical frameworks developed by a wide range of organisations (see the appendix to this paper, below, for an overview of the frameworks we have considered). Our own ethical principles have resulted from this analysis and cover both ethics and trust. However, we continue to describe them as an ethical framework, in line with common practice.

 

Further reading

You can read our Ethical Principles and Commitments here

You can also read about Alpha Health’s Delivery Plan for its Ethics Strategy, and Alpha Health’s Governance Framework for its Ethics Strategy.


1 Preamble to the Constitution of WHO as adopted by the International Health Conference, New York, 19 June – 22 July 1946; signed on 22 July 1946 by the representatives of 61 States (Official Records of WHO, no. 2, p. 100) and entered into force on 7 April 1948

2  Dolan, P. (2015). Happiness by design. London: Penguin.

3 Definitions from the Cambridge English Dictionary.

4 For completeness, the definition of morals is: relating to the standards of good or bad behaviour, fairness, honesty, etc. that each person believes in, rather than to laws.

 

Appendix

Consumer views on trust and ethics

 

People are increasingly thinking about trust in terms of transparency and data privacy; these are the two primary drivers of trust, and people also say that they want more control over their data.

 

A survey of 4,000 people in the USA by Rock Health in the autumn of 2018 asked people with whom they would be willing to share health data:

    • My doctor: 72% willing to share health data
    • My health insurer: 49%
    • My pharmacy: 47%
    • Research institution: 35%
    • Pharmaceutical company: 20%
    • Government organisation: 12%
    • Tech company: 11%

 

It is unsurprising that the list is topped by doctors, given the strong professional framework that underpins their trusted position with patients and the public, something that is true to a lesser extent for pharmacists. It is somewhat surprising that health insurers are second, but we can speculate that the people surveyed may feel that the health insurer is on their side because the health insurer does well when the customer is healthy – notwithstanding insurers’ efforts to hike deductibles or avoid payments when people are actually ill.

 

The fact that tech companies score badly is not surprising, but that they score worse than government is testament to the impact of recent scandals. Rock Health dug deeper into this category to see how individual companies fare:

    • Google: 60%
    • Amazon: 55%
    • Microsoft: 51%
    • Apple: 49%
    • Samsung: 46%
    • Facebook: 40%
    • IBM: 34%

 

Rock Health themselves express surprise at this result, having expected Apple to score highly given how strongly it promotes privacy as a core value. Indeed, the top and bottom positions of the list are hard to rationalise. Perhaps the best that can be said is that consumers do not see much differentiation between tech companies. This suggests that we, or any company, will have to work hard to truly demonstrate that we are to be trusted with health data.

 

Ethical frameworks that we have drawn on

The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems is an extensive effort aimed at developing standards specific to different uses of AI, such as robotics, mixed-reality software (VR, AR, etc.), and autonomous weapons systems. However, the entire effort is guided by a set of general principles:

    • Human Rights — ensuring that AI does not infringe on internationally recognised human rights
    • Prioritising well-being — they define well-being as: human satisfaction with life and the conditions of life, along with an appropriate balance between positive and negative affect.
    • Accountability — designers and operators of AI should be aware of what an AI is doing and why, and be able to take responsibility for its actions
    • Transparency — AI systems should be able to explain why they took an action, both to experts and lay individuals
    • Awareness of potential misuse of technology – with a focus on education of developers, operators, and users of AI

 

IEEE is also doing a lot of work to understand what values should be embedded in AI so that it can do good and be ethical. In doing so they inevitably struggle to come up with a set of universal values, instead highlighting that norms must be identified for a particular community. This embrace of moral relativism is to be constrained by universal human rights.

 

The European Commission’s High-Level Expert Group on AI takes a rights-based approach, drawing on the UDHR and the EU Charter of Fundamental Rights, with a particular focus on: respect for human dignity; freedom of the individual; respect for democracy, justice, and the rule of law; equality and non-discrimination; and citizens’ rights. The group then sets out five principles for AI:

    • Beneficence: ‘do good’ — improve individual and collective wellbeing. This is not defined in great detail
    • Non-maleficence: ‘do no harm’ — reference is made to various human rights but a full definition of harm is not given
    • Preserve human agency — humans must remain in control of their own actions and decisions, and not be undermined by the AI
    • Be fair — ensure that the development, operation, and use of AI remains free from bias
    • Operate transparently — be able to explain the operations of AI to people with varying degrees of knowledge. This principle also relates to transparency with respect to business models

 

Having set out five principles, the EC group then lists ten separate requirements of Trustworthy AI: accountability; data governance; design for all; human oversight of AI; non-discrimination; respect for human autonomy; respect for privacy; robustness; safety; and transparency. It is not clear how these relate to the principles.

 

The Association for Computing Machinery set out its Statement on Algorithmic Transparency and Accountability in January 2017. It sets out seven principles:

    • Awareness — owners, designers, builders, users, and other stakeholders of analytic systems should be aware of the possible biases involved in their design, implementation, and use, and the potential harm that biases can cause to individuals and society
    • Access and redress — regulators should encourage the adoption of mechanisms that enable questioning and redress for individuals and groups that are adversely affected by algorithmically informed decisions
    • Accountability — institutions should be held responsible for decisions made by algorithms that they use, even if it is not feasible to explain in detail how the algorithms produce their results
    • Explanation — systems and institutions that use algorithmic decision-making are encouraged to produce explanations regarding both the procedures followed by the algorithm and the specific decisions that are made
    • Data provenance — a description of the way in which the training data was collected should be maintained by the builders of the algorithms, accompanied by an exploration of the potential biases induced by the human or algorithmic data-gathering process. Public scrutiny of the data provides the maximum opportunity for corrections. However, concerns over privacy, protecting trade secrets, or revelation of analytics that might allow malicious actors to game the system can justify restricting access to qualified and authorised individuals
    • Auditability — models, algorithms, data, and decisions should be recorded so that they can be audited in cases where harm is suspected
    • Validation and testing — institutions should use rigorous methods to validate their models and document those methods and results. In particular, they should routinely perform tests to assess and determine whether the model generates discriminatory harm (a minimal sketch of one such check follows this list). Institutions are encouraged to make the results of such tests public
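As a hedged sketch of what the validation-and-testing principle can look like in practice (the group labels and the 0.1 threshold below are illustrative assumptions, not part of the ACM statement or of Alpha Health’s tooling), one routine check is to compare a model’s positive-recommendation rate across demographic groups:

```python
# Minimal sketch of a routine fairness check: compare the rate at which a
# model produces a given outcome across demographic groups (a demographic
# parity check). Group labels and the 0.1 threshold are illustrative only.
from collections import defaultdict

def positive_rates(predictions, groups):
    """Fraction of positive predictions per group."""
    counts, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        counts[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / counts[g] for g in counts}

def parity_gap(predictions, groups):
    """Largest difference in positive rates between any two groups."""
    rates = positive_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# e.g. flag the model for review if the gap exceeds an agreed threshold
preds = [1, 0, 1, 1, 0, 1, 0, 0]
grps = ["a", "a", "a", "a", "b", "b", "b", "b"]
if parity_gap(preds, grps) > 0.1:
    print("Potential disparity between groups - investigate before release")
```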

 

The UK House of Lords Select Committee on AI has also proposed five principles:

    • AI should be developed for the common good and benefit of humanity
    • AI should operate on principles of intelligibility and fairness
    • AI should not be used to diminish the data rights or privacy of individuals, families, or communities
    • All citizens should have the right to be educated to enable them to flourish mentally, emotionally, and economically alongside AI
    • The autonomous power to hurt, destroy, or deceive human beings should never be vested in AI

 

Finally, Telefonica also has a set of principles for AI: 

    • Fair AI — guaranteeing that there will be no discrimination by Telefonica AIs
    • Transparent and explainable AI — giving users information about what is done with their data and taking sufficient measures to ensure that this information is comprehensible
    • Human-centric AI — AI that always respects the rights of humans 
    • Privacy and security by design — building this into the architecture
    • Working with partners and third parties — being clear that such organisations are themselves trustworthy with respect to Telefonica customers