Considerations related to the use of AI in healthcare

By Keith A. Hovan

“Artificial Intelligence (AI) has been increasingly utilized in healthcare delivery, from assisting in medical diagnosis to improving patient outcomes. While AI technology brings many benefits, it also raises ethical concerns, particularly in the areas of privacy, bias, and accountability. The potential misuse of AI in healthcare delivery can have serious consequences, including patient harm and violations of individual rights. In this blog post, we will explore the ethical considerations of using AI in healthcare delivery and discuss potential solutions to mitigate these risks. It is important that we engage in a thoughtful and informed conversation on this topic, as the decisions made now will have a profound impact on the future of healthcare delivery.”

The opening paragraph above was not written by me. It is OpenAI's response to my request that it write an introduction to a blog post on the ethical considerations of using AI in healthcare delivery. It's not too bad, is it?

Artificial intelligence (AI) seems to be the topic on everyone's mind, and this is certainly true in healthcare. And, as my AI-written introduction states, I will discuss several considerations healthcare leaders and providers need to ponder as we navigate the introduction of this new and rapidly evolving technology into our delivery of care.

Often, technological terms come onto the scene and enter our day-to-day parlance so quickly that we skip the important step of defining them. So first, I will pause to explain what I mean by AI in this context. AI, which here largely means machine learning, refers to computer programs that are fed massive amounts of data (e.g., text, images, numbers, video, or anything else that can be encoded) and then analyze and draw inferences from patterns within that data. To use an example from healthcare, oncologists could load millions of pictures of normal and abnormal skin onto a computer. Through the power of AI, the computer could learn the patterns that distinguish cancerous skin lesions and flag matching images. In seconds, AI could do what would take a human specialist countless hours, freeing specialists up to care for their patients. Examples like this are only the tip of the iceberg.
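To make the pattern-learning idea concrete, here is a toy sketch of that workflow in Python. It is purely illustrative, not a clinical tool: the "images" are random numbers standing in for labeled skin photos, and a real dermatology system would use deep neural networks trained on curated, clinically validated datasets.

```python
# Toy sketch of the pattern-learning workflow described above.
# Illustrative only: the "images" are random stand-ins, and a real
# dermatology system would use deep neural networks and curated data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for millions of labeled photos: each row is a flattened
# 64x64 "image," each label is 0 (normal) or 1 (suspicious lesion).
X = rng.normal(size=(1000, 64 * 64))
y = (X[:, :100].mean(axis=1) > 0).astype(int)  # a hidden pattern to learn

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)  # "learning" = fitting patterns in the data

print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```

The point of the sketch is the division of labor: humans supply labeled examples, and the program finds the statistical pattern that separates the categories.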

However, while AI can look for patterns and accrue and arrange coherent data, it is limited by an inability to synthesize and evaluate the quality of that data, form an argument, or make decisions. Machine learning will not be a panacea for every problem, and it has great potential to create new ones. There are many considerations healthcare leaders must weigh as they evaluate AI in their particular contexts.

The need for interpretation

AI technologies currently fall into two broad categories: generative and general purpose. Generative AI creates text or images based on a series of previous examples; systems trained on large amounts of data produce an output based on predictive analysis. General-purpose AI, by contrast, is not built for a specific application and does not perform at the level of a purpose-built system. ChatGPT, for example, is optimized for dialogue but is not ideal for solving complex computations.
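The "predict the next output from previous examples" idea can be shown with a deliberately tiny sketch. A real generative model is a neural network trained on billions of documents; this toy merely counts which word follows which in an invented snippet of clinical text, but the underlying principle is the same.

```python
# Deliberately tiny sketch of "predict the next word from previous
# examples," the core idea behind generative text models. A real model
# is a neural network trained on billions of documents; this toy just
# counts word pairs in a made-up snippet.
from collections import Counter, defaultdict

corpus = "the patient presents with a rash the patient reports itching".split()

# Count how often each word follows each other word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str):
    """Return the most frequently observed next word, if any."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))      # -> "patient" (seen twice after "the")
print(predict_next("patient"))  # -> "presents" (one of two observed options)
```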

At this point, AI still needs a significant amount of human oversight and intervention to be of benefit in a healthcare environment. AI will certainly have a role in care delivery, but initially as an adjunct to the human healthcare provider. The place for AI is in situations where it can do what it does best: be fed large amounts of data and look for patterns.

This function alone opens up many dynamic uses of AI. The technology can promote patient safety by flagging issues, potential complications, or risk factors for a given patient, alerting providers to the need for early intervention that could prevent or minimize complications and lead to better outcomes. Machine learning could also be crucial in determining the course of cancer treatment: armed with large amounts of data on malignancy type, location, and duration, as well as information from patients with a similar disease burden, it could recommend the treatment options most likely to produce optimal outcomes.
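The flag-and-alert pattern might look something like the sketch below. Everything here is hypothetical: the fields, the weights, and the threshold are invented stand-ins, and a real system would use a clinically validated model rather than a hand-written scoring rule.

```python
# Hypothetical sketch of the flag-and-alert pattern: a scoring rule
# surfaces high-risk patients for human review; it never acts alone.
# All fields, weights, and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class Patient:
    patient_id: str
    age: int
    creatinine: float       # kidney-function marker, mg/dL
    on_anticoagulant: bool

def risk_score(p: Patient) -> float:
    """Toy additive score; a real system would use a validated model."""
    score = 0.0
    if p.age >= 75:
        score += 2.0
    if p.creatinine > 1.5:
        score += 1.5
    if p.on_anticoagulant:
        score += 1.0
    return score

ALERT_THRESHOLD = 3.0  # hypothetical cutoff, chosen with clinical input

for patient in [Patient("A-101", 82, 1.8, True),
                Patient("B-202", 45, 0.9, False)]:
    if risk_score(patient) >= ALERT_THRESHOLD:
        # The alert goes to a clinician; a human makes the decision.
        print(f"ALERT: review patient {patient.patient_id} "
              f"(score {risk_score(patient):.1f})")
```

Note that the output of the sketch is an alert, not an action; that design choice is the point of the next paragraph.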

The above examples still require human interpretation of the alert or flagged data, processing of context, and acceptance of the finding. I view AI as a promising collaborator, but not a technology to be relied upon without question, as there are risks related to accuracy and potential bias. AI provides a starting point requiring human intervention for refinement.

Data relevance

AI on its own is not intelligent; it mimics human intelligence. As such, it must be properly "refereed." If we use AI, we are responsible for ensuring that its output is not biased by what was put into it. For example, if the health data available for a certain issue or diagnosis are drawn largely from American Caucasian males, the observations AI offers will not apply as helpfully or as widely. In fact, applying them could even be dangerous for patients who do not fit that profile.

The data that come out of machine learning are only as good as the data that go in. AI is inherently probabilistic: it makes predictions based on the data it was trained on and on how well its prompts are crafted. In ChatGPT, if you ask silly or imprecise questions, you will get silly or imprecise answers. The same principle applies to the generative forms of AI being incorporated into healthcare. A human gatekeeper is essential to ensure that relevant data are input and that providers apply AI's recommendations appropriately.
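One concrete gatekeeping step, consistent with the bias concern above, is to measure a model's accuracy separately for each patient population before trusting it broadly. The sketch below is hypothetical in every detail (the group labels, the results, and the 80% bar are all invented), but it shows the shape of such an audit.

```python
# Sketch of a simple fairness audit: measure accuracy per patient group
# before trusting a model broadly. Groups, results, and the 80% bar are
# hypothetical; real audits use validated cohorts and richer metrics.
from collections import defaultdict

# (group, model_was_correct) pairs from a hypothetical validation set
results = [
    ("well_represented", True), ("well_represented", True),
    ("well_represented", True), ("well_represented", True),
    ("underrepresented", True), ("underrepresented", False),
    ("underrepresented", False), ("underrepresented", False),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, ok in results:
    total[group] += 1
    correct[group] += ok

for group, n in total.items():
    accuracy = correct[group] / n
    status = "OK" if accuracy >= 0.80 else "DO NOT DEPLOY without more data"
    print(f"{group}: accuracy {accuracy:.0%} -> {status}")
```

A model that looks excellent in aggregate can fail badly for an underrepresented group; the aggregate number hides exactly the patients most at risk.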

Cultural context

Each part of the world has its own cultural and regional context, unique religious and moral structures, and its own ethical principles and considerations. Because AI is not currently suited to account for these differences and nuances, there will be wide variance in AI's effectiveness depending on where it is used.

Language models are trained on large amounts of text, often in a specific language. A specific language carries its own set of norms (e.g., legal, ethnic, national, religious), which can introduce bias when the model is applied to an unrelated population or group. Most models are built in English, which creates an immediate bias problem for users who speak a language with a character-based writing system, such as Chinese or Japanese, or a language written in a non-Latin script, such as Arabic or Hebrew. Furthermore, language models can "hallucinate": produce confident but fabricated output rooted in false training data (such as "fake news") or misinterpreted patterns.

English is the most digitized of the world's languages, so most models are trained predominantly on English text, and that text carries the social biases of the people who wrote it. This creates the potential for biases against protected classes such as women, Black and Brown communities, and LGBTQIA+ persons.

We have worked for many years to create equity in our care delivery and to recognize the challenges of providing care for diverse populations. We have also endeavored to ensure that the care we deliver is guideline-adherent and meets a high standard. This has been a struggle, and I wonder whether introducing AI into healthcare could make matters even worse if left unchecked.

Regulation and compliance issues

Healthcare leaders, providers, and government officials must consider how we can safeguard against these cultural biases. What level of input data moderation will this require? Will we be able to identify AI-generated outcomes and differentiate them from human-generated ones? How do we incorporate common sense/human interpretation without losing the efficiencies of using AI technology?

Without proper regulation and oversight, AI could not only reinforce existing inequities but amplify them. Sixteen U.S. states have some form of AI regulation, but effective regulation is a challenge because machine learning is developing so quickly.

Furthermore, HIPAA compliance issues have prevented the use of some applications and the acquisition and input of certain types of data. Regulatory and compliance standards must catch up to the times with an eye toward protecting vulnerable populations if AI is to continue to expand to its full potential in healthcare delivery.

Conclusion

AI’s increasing presence in our lives will have an influence on the way we experience and interact with the world. I think it is safe to say that there will be evolving ethical and social issues that impact healthcare and beyond.

Yet we, as healthcare professionals, have committed ourselves to the philosophy of Hippocrates: "do no harm." As we increasingly adopt AI technologies, we must ensure we cause no harm to those we serve. To do that, we have to see to it that significant limitations are placed on AI in the current phase of its evolution. Like Tesla's Autopilot, AI is best suited as an "electronic co-pilot" with the potential to increase the effectiveness of human providers, but it is not something we should rely on heavily just yet.
