Posted / 11th November 2019
Oliver Smith, Strategy Director and Head of Ethics, Alpha Health.
Technology is often hailed as the solution to many of the world’s problems. From the internet connecting the world nearly thirty years ago, to renewable energy solutions now doing their part to save the planet, technology has had many once unimaginable applications.
A much-touted example of a sector ripe for technological disruption is healthcare. With systems and services struggling under ever-increasing demand, it is widely expected that AI-driven technology, underpinned by healthcare data, could unlock critical benefits ranging from simply improving communications to personalising treatments.
But we’re not there yet, and it’s important to look at why. While the cost of new technology is certainly a factor, one of the main obstacles to disruption in the healthcare sector is trust. Trust is a small word with huge implications.
Nowhere is trust more pertinent than when it comes to people’s health. Healthcare data is incredibly sensitive, and cultivating patients’ trust is crucial in order to implement technology which relies on healthcare data to unlock life-changing benefits and improve healthcare outcomes.
As patients see technology companies potentially commercialising huge sets of patient data without paying sufficient attention to its sensitivity, it’s understandable that trust wanes and, at times, disappears entirely. Many AI-driven solutions may even appear mystical to the average patient, with powerful algorithms making increasingly important decisions that not only patients but also health professionals find impossible to understand, and therefore to trust.
For us to fully harness the benefits that AI can bring to healthcare it’s paramount that we get this right. Without patient trust, these technologies simply won’t be able to have the impact that we need them to. We need to overcome users’ scepticism by building technology which allows users to understand how their data is being captured and used, and which delivers transparent benefits to patients and healthcare systems.
How to build that technology?
Our work at Alpha Health is focused on creating digital personal health assistants that support people in changing their everyday behaviours and habits, helping them lead happier and healthier lives, using best-in-class technology, including AI.
Since Alpha Health launched in 2016, we’ve put a great deal of effort into developing and implementing an ethical framework, because we have always understood that without our users’ trust we would not be able to help people with their health and happiness.
Our ethical framework consists of three layers. First, underpinning everything, a set of ethical principles which draw on the myriad of new ethical AI frameworks, adapted to our work in health. Second, we put these principles into practice: from redesigning the process of agreeing privacy policies and terms and conditions to make them more transparent, to building ethics into the very heart of our technology, which is why we are investing effort in Privacy Preserving AI and Explainable AI. Finally, we seek to strengthen our work through external review, which is why we use the independent ethics consultancy Eticas to audit our work.
How can we do better?
We know there’s always more that we can do. The most important element of our work is that we always strive to be better: to gain, and then hold onto, our users’ trust by always putting them first. It’s why we use Eticas to review all of our work, so that we learn from everything we do. It’s also why we want to start talking more publicly about our work, including a commitment to start publishing our audits – so that we can learn from others, and hopefully people can learn from us.
Interested in hearing more about what Alpha Health is doing? We’ll be at the Open Data Institute Summit in London tomorrow, where we’ll be demoing our explainable AI work and I’ll be speaking at the closing conversation on the future impact of data.