Health – AI, Governance, Ethics
Ethical, social and political challenges of artificial intelligence in health
Future Advocacy 2018 – A Report with Wellcome Trust
Written and researched by Matthew Fenech, Nika Strukelj, and Olly Buston for the Wellcome Trust, April 2018 :: 59 pages
Executive Summary [Excerpt]
…As AI systems become better at sorting data, finding patterns, and making predictions, these technologies will take on an expanded role in health and care, from research, to medical diagnostics, and even in treatment. This increasing use of AI in health is forcing nurses, doctors, and researchers to ask: “How do longstanding principles of medical ethics apply in this new world of technological innovation?”
To address this question, we have undertaken a detailed review of the existing literature and interviewed more than 70 experts around the world, to understand how AI is being used in healthcare, how it could be used in the near future, and what ethical, social, and political challenges these current and prospective uses present. We have also sought the views of patients, their representatives, and members of the public.
We have categorised the current and potential use cases of AI in healthcare into five key areas:
:: Process optimisation e.g. procurement, logistics, and staff scheduling
:: Preclinical research e.g. drug discovery and genomic science
:: Clinical pathways e.g. diagnostics and prognostication
:: Patient-facing applications e.g. delivery of therapies or the provision of information
:: Population-level applications e.g. identifying epidemics and understanding non-communicable chronic diseases…
SUMMARY OF ETHICAL, SOCIAL, AND POLITICAL CHALLENGES AND SUGGESTIONS FOR FURTHER RESEARCH
01 What effect will AI have on human relationships in health and care?
:: What effect will these technologies have on relationships between patients and healthcare practitioners?
:: What effect will these technologies have on relationships between different healthcare practitioners?
:: What do healthcare practitioners think about the potential for these technologies to change their jobs, or to lead to job displacement?
:: How do these tools fit into the trend of enabling patients to have greater knowledge and understanding of their own conditions? How different are they from looking up one’s symptoms on a search engine before going to see a healthcare practitioner?
:: Given that AI is trained primarily on ‘measurable’ data, does reliance on AI risk missing non-quantifiable information that is so important in healthcare interactions?
:: If AI systems become more autonomous, how should transitions between AI and human control be incorporated into care pathways?
02 How is the use, storage, and sharing of medical data impacted by AI?
:: How is medical data different from other forms of personal data?
:: What is the most ethical way to collect and use large volumes of data to train AI, if the consent model is impractical or insufficient?
:: How do we check datasets for bias or incompleteness, and how do we tackle these where we find them?
:: Should patients who provide data that is used to train healthcare algorithms be the primary beneficiaries of these technologies, or is it sufficient to ensure that they are not exploited?
03 What are the implications of algorithmic transparency and explainability for health?
:: Are expert systems or rule-based AI systems more suitable for healthcare applications than less interpretable machine learning methods?
:: What do patients and healthcare practitioners want from algorithmic transparency and explainability?
:: Are improved patient outcomes, efficiency and accuracy sufficient to justify the use of ‘black box’ algorithms? If such an algorithm outperforms a human operator at a particular healthcare-related task, is there an ethical obligation to use it?
:: Could ‘explanatory systems’ running alongside the algorithm be sufficient to address ‘black box’ issues?
04 Will these technologies help eradicate or exacerbate existing health inequalities?
:: Which populations may be excluded from these technologies, and how can these populations be included?
:: Will these technologies primarily affect inequalities of access, or of outcomes?
05 What is the difference between an algorithmic decision and a human decision?
:: How do we rank the importance of a human decision as compared to an algorithmic decision, particularly when they are in conflict?
:: Do human and algorithmic errors differ simply in degree, or is there an essential, qualitative difference between a machine ‘giving the wrong answer’ and a human making a mistake?
:: How will patients and service users react to algorithmic errors?
:: Who will be held responsible for algorithmic errors?
06 What do patients and members of the public want from AI and related technologies?
:: How do patients and members of the public think these technologies should be used in health and medical research?
:: How comfortable are patients and members of the public with sharing their medical data to develop these technologies?
:: How do patients and other members of the public differ in their thinking on these issues?
:: What is the best way to speak to patients and members of the public about these technologies?
07 How should these technologies be regulated?
:: Are current regulatory frameworks fit for purpose?
:: What does ‘duty of care’ mean when applied to those who are developing algorithms for use in healthcare and medical research?
:: How should existing health regulators interact with AI regulators that may be established?
:: How should we regulate dynamic systems that continue to learn from new data (‘online learning’), as opposed to fixed algorithms?
08 Just because these technologies could give us access to new information, should we always use it?
:: What would the impact of ever-greater precision in predicting health outcomes be on patients and healthcare practitioners?
:: What are the implications of algorithmic profiling in the context of healthcare?
09 What makes algorithms, and the entities that create them, trustworthy?
10 What are the implications of collaboration between public and private sector organisations in the development of these tools?
:: What are the most ethical ways to collaborate?
:: How do we ensure value for both the public sector and for the private sector organisation, for example in the use of data? In publicly-owned/taxpayer-funded healthcare systems, such as the UK NHS, how do we ensure that citizens receive value too?
:: What are the implications of the concentration of intellectual capacity in private sector organisations?