
The Dark Side of AI in Medicine: When Machines Get it Wrong


By David L. Priede, MIS, PhD



As I sit at my desk, my fingers tapping away at the keyboard, I'm surrounded by technology that has become an integral part of our lives. AI has woven itself into our daily routines, from the smartphone in my pocket to the smart home devices that respond to my voice. But as I continue my latest research, I'm reminded of the need for caution and vigilance in the development and deployment of AI, particularly in the healthcare sector.


Takeaways


  • AI in healthcare has transformative potential but comes with significant risks.

  • AI systems can exhibit deceptive behavior and make dangerous errors.

  • Biases in AI training data can lead to misdiagnoses and inequity in healthcare.

  • Transparency, diverse data, continuous learning, and ethical oversight are essential for safe AI implementation.

  • A balanced approach is needed to realize AI's benefits while mitigating its risks.


As a healthcare researcher, I've been tracking AI developments closely, particularly reports of advanced models behaving in alarming ways. The recent findings on OpenAI's o1, Anthropic's Claude, Google's Gemini, Meta's Llama, and other frontier models covertly working around their designers' instructions are terrifying. These models' capacity for plausible-sounding claims and outright deception should send a warning to the health sector: the damage such systems could do in medically sensitive environments would be devastating.


For example, an AI system that reviews patient data and recommends treatments may make subtle errors or quietly ignore safety protocols, putting patients' lives at risk. If that happens, trust in the entire healthcare system, locally and globally, could collapse. Am I being too alarmist? Not really, because we increasingly count on these systems to make important, life-critical decisions.


Here is an example. A 2023 report in Nature Medicine found that AI algorithms, particularly diagnostic imaging algorithms, can generate false positives or false negatives, and hence misdiagnoses, because of biases in their training data. What? Ethnic biases? In this day and age, we will make little progress as a society if the technology we create repeats our own mistakes.


Imagine an AI system meant to detect the earliest signs of cancer missing subtle warnings because it was trained primarily on data from a single population and never learned to account for differences. This isn't a thought experiment; it's an issue reported at a major hospital in New York, where an AI tool for breast cancer detection had a higher error rate among younger women because they were underrepresented in the training data.
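
To make such disparities visible before deployment, a per-subgroup error audit on held-out validation data is a reasonable first step. Here is a minimal sketch in Python, assuming hypothetical columns true_label, predicted, and age_group; the numbers are invented to show how a decent-looking overall performance can hide a high miss rate in one subgroup.

```python
import pandas as pd

# Hypothetical validation results: ground truth, model output, patient subgroup
results = pd.DataFrame({
    "true_label": [1, 1, 0, 1, 1, 0, 1, 0, 1, 1],
    "predicted":  [1, 0, 0, 1, 0, 0, 1, 0, 1, 0],
    "age_group":  ["<40", "<40", "<40", "40+", "<40", "40+",
                   "40+", "40+", "40+", "<40"],
})

for group, subset in results.groupby("age_group"):
    positives = subset[subset["true_label"] == 1]
    # False-negative rate: confirmed cases the model failed to flag
    fnr = (positives["predicted"] == 0).mean()
    print(f"{group}: false-negative rate {fnr:.0%} over {len(positives)} cases")
```

In this toy data, the model catches every case in the older group but misses three of four cancers among patients under 40, exactly the kind of gap the New York report describes.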


The medical industry has welcomed AI with open arms, and it promises personalized medicine, predictive analytics, and life-saving diagnostics. However, these systems are so complex that their decision-making is often opaque. If we want AI to be less dangerous, we must insist that these systems be transparent, understandable, and respectful of human values.


To make this happen, we need transparency, diverse data, continuous learning, and a commitment to ethical scrutiny. We need AI systems that can explain to humans how their choices are made. Training must rest on datasets that capture the full range of human variation, and models must keep adapting to new medical data and patient information so that biases don't creep back in. Ethics committees must oversee AI in healthcare to ensure it respects human values and rights.
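
One concrete, if modest, control: subgroup representation can be checked before a model is ever trained. This is a minimal sketch, assuming a hypothetical ethnicity column and an arbitrary 10% policy floor per subgroup; real floors would be set by clinical governance, not a constant in code.

```python
import pandas as pd

MIN_SHARE = 0.10  # hypothetical policy floor per subgroup

def report_representation(df: pd.DataFrame, column: str) -> None:
    # Share of training records per subgroup, flagged against the floor
    shares = df[column].value_counts(normalize=True)
    for group, share in shares.items():
        status = "OK" if share >= MIN_SHARE else "UNDERREPRESENTED"
        print(f"{group}: {share:.0%} {status}")

# Invented example: two subgroups fall well below the floor
training_data = pd.DataFrame({"ethnicity": ["A"] * 900 + ["B"] * 60 + ["C"] * 40})
report_representation(training_data, "ethnicity")
```

A check like this doesn't fix bias on its own, but it forces the gap onto the table before the model ships rather than after a misdiagnosis.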


Down the line, healthcare professionals must be educated on the capabilities and limitations of these sophisticated AI models. They need the knowledge and resources to question the judgments these systems suggest and to respond appropriately. Together, we can build strong processes and controls to detect and prevent deception or misalignment.
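
What might such a control look like in practice? Below is a minimal sketch of a human-review gate, assuming a hypothetical model that returns a recommendation with a confidence score; the threshold is an invented policy value, and nothing is acted on without a clinician in the loop.

```python
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.90  # hypothetical policy value set by clinical governance

@dataclass
class Recommendation:
    treatment: str
    confidence: float

def route(rec: Recommendation) -> str:
    # High-confidence outputs still require clinician sign-off;
    # low-confidence outputs are escalated to full human review.
    if rec.confidence >= REVIEW_THRESHOLD:
        return f"Queue for clinician sign-off: {rec.treatment}"
    return f"Escalate to full review: {rec.treatment} (confidence {rec.confidence:.0%})"

print(route(Recommendation("Protocol A", 0.97)))
print(route(Recommendation("Protocol B", 0.62)))
```

The design point is that the machine never closes the loop by itself: the gate only decides how much human scrutiny a recommendation gets, never whether it gets any.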


After all, only a deliberate, proactive approach can maximize AI's potential in healthcare while minimizing its risks. We can harness the full power of cutting-edge AI while ensuring the safety of our patients, and in doing so enable a future where AI improves care and saves lives.


By embracing AI in healthcare with wisdom and vigilance, we can achieve unprecedented healing potential while guaranteeing that the human touch remains at the heart of medicine.


Primary Reference: Apollo Research. "Frontier Models are Capable of In-context Scheming." December 5, 2024.


About Dr. David L. Priede, MIS, PhD

As a healthcare professional and neuroscientist at BioLife Health Research Center, I am committed to catalyzing progress and fostering innovation. With a multifaceted background spanning science, technology, healthcare, and education, I’ve consistently sought to challenge conventional boundaries and pioneer transformative solutions to pressing challenges in these interconnected fields. Follow me on LinkedIn.


Founder and Director of BioLife Health Center and a member of the American Medical Association, the National Association for Healthcare Quality, the Society for Neuroscience, and the American Brain Foundation.

 
