Laura Sobola

Socially responsible AI in medicine: Informed consent and unintended consequences

The companies that provide machine learning and AI products and services are justifiably worried about potential ethical issues that could cost them the trust of their customers. Meanwhile, a recent Capgemini report highlights that healthcare is one of the main AI use cases causing ethical concern to both consumers and executives.

The concern is twofold – that the data will be processed for other purposes, and that the data could be obtained without proper consent. Under the GDPR, consent is only one of several lawful bases for data processing, and data can be used for scientific research even without renewed consent.

Informed consent is a term used mostly in medicine and biomedical research, and it has a strict definition. It states that a person must have the capacity to make the decision – the ability to understand the options provided and the consequences of action or inaction. The provider of an intervention must disclose relevant information, including the probability of benefits and risks, in a way the person can fully comprehend so that they can evaluate the personal cost. Significantly, consent must be given voluntarily, which also means that a person has the right to refuse.

In the context of AI, there is an immediate issue: many algorithms, such as neural networks, are “black boxes”, and even their creators would be unable to explain every single aspect of their decision-making processes. Additionally, the hype and media attention surrounding AI have often led to misunderstanding, manifested as fear or, on the contrary, overconfidence.

In recent years, several ethics frameworks covering the guiding principles of digital health, AI, or both have been published by companies, governments and NGOs. How do these two types of frameworks address the issue of informed consent? And is there a difference between them?

Professor Floridi has written that biomedical research and AI have a lot in common because both involve novel experimentation. As a former biomedical scientist and current data scientist, I would like to share with you my perspective on the ethical issues surrounding both of these fields.


Laura is a Senior Consultant at Elucidata and has participated in the management and delivery of healthcare-related projects as well as data science projects. Recently, she has been researching and helping to implement an ethical framework for Elucidata, which has reignited her interest in ethics.

She studied Human Genetics at Newcastle University and has a doctorate in biochemistry from the University of Oxford. It was at Oxford that she discovered a passion for technology and software development and started to follow trends in artificial intelligence. Currently, she participates in the local healthtech community in Bristol and volunteers for the One HealthTech Bristol chapter.