
Transforming Healthcare: The Impact of AI on Medical Diagnoses

By Ilan Ackelsberg

March 21, 2024

When you show up at the doctor’s office with an illness, the first step towards getting healthy is making the correct diagnosis. The faster this happens, the better. However, researchers at the Johns Hopkins Armstrong Institute Center for Diagnostic Excellence recently found that a staggering 11% of all medical problems in the US are initially misdiagnosed. Not only does this make recovery harder; it can also create new psychological and emotional side effects. If that 11% figure is accurate, there’s a chance that some of you have first-hand experience with these frustrations.

For more severe conditions, a faster diagnosis can be a matter of life and death. Take cancer, for example. The Mayo Clinic reports that “61% of people diagnosed with early-stage lung cancer live for at least five years after diagnosis. The five-year survival rate for people diagnosed with late-stage lung cancer that has spread to other areas of the body is 7%.” David Newman-Toker, lead investigator of the team at Johns Hopkins, estimates that around 795,000 people die or are permanently disabled each year due to these misdiagnoses. “Reducing diagnostic errors by 50% for stroke, sepsis, pneumonia, pulmonary embolism, and lung cancer could cut permanent disabilities and deaths by 150,000 per year.”

When the Johns Hopkins report was published in the summer of 2023, ChatGPT had only recently launched, and medical practitioners were already considering applications of the tool and others like it. Researchers at Harvard found that ChatGPT could successfully pass the US Medical Licensing Exam, solve internal medicine case files, and “provide safe and helpful answers to questions posed by health care professionals and patients.”

Generative tools may mark the beginning of the next epoch in medicine, but AI and machine learning applications have already been in use for some time. Clearly, accurate and early diagnoses of illnesses can save lives. How can new advancements be used to make this happen?

Current Uses of AI in Medicine

The average doctor, through a quick appraisal of your vital signs, personal and family health history, medication list, lab work, and environmental indicators, can tell you—in broad terms—the illnesses for which you are most at risk. The best doctor, thanks to her encyclopedic memory and years of experience, will notice trends that the average doctor might miss. 

AI has the potential to do this at a population-wide scale. The American Health Information Management Association defines predictive analytics as “a type of advanced analysis that can be used to predict future outcomes, such as health outcomes, using historical data combined with statistical modeling, data mining techniques, and machine learning.” This allows health practitioners to make interventions before their patients even get sick. 
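To make the idea concrete, here is a minimal sketch of what predictive analytics can look like in practice, written in Python with scikit-learn. The file names, feature columns, and outcome label are hypothetical stand-ins; a real clinical model would involve far more careful data collection, validation, and oversight.

```python
# A minimal, illustrative predictive-analytics sketch (hypothetical data).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Historical records: vitals, history, and labs, plus a known outcome label.
# "historical_patients.csv" and the column names below are invented for this example.
records = pd.read_csv("historical_patients.csv")
features = ["age", "systolic_bp", "resting_pulse", "a1c", "family_history_cvd"]
X = records[features]
y = records["developed_condition"]  # 1 if the patient later developed the condition

# Learn from past outcomes...
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# ...then estimate risk for current patients so clinicians can intervene early.
new_patients = pd.read_csv("current_patients.csv")  # also hypothetical
risk_scores = model.predict_proba(new_patients[features])[:, 1]
print(risk_scores[:5])  # estimated probability that each patient develops the condition
```

The pattern is exactly what the definition above describes: statistical modeling over historical data, used to flag today’s patients whose profiles resemble those of people who later became ill.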

In addition to identifying potential risk factors, AI can be employed to make more accurate medical diagnoses and help doctors shape their treatment plans. “AI can analyze large amounts of patient data, including medical 2D/3D imaging, bio-signals, vital signs (e.g., body temperature, pulse rate, respiration rate, and blood pressure), demographic information, medical history, and laboratory test results. This could support decision making and provide accurate prediction results.” This approach has already been successful in oncology settings, where researchers have developed targeted therapies based on the molecular profile of cancer cells.

Beyond accuracy, AI algorithms are fast. A single MRI scan can produce thousands of images. Image classification technology can label the ones that matter, putting them in front of the doctor’s eyes first. “Machine learning algorithms can accurately interpret medical images like x-rays or MRI scans within seconds without human intervention, potentially saving time for physicians while increasing accuracy rates for diagnoses.”
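As a rough illustration of that triage step, the sketch below scores each slice of a simulated scan and surfaces the highest-scoring ones first. The relevance_score function is a hypothetical stand-in for a trained classifier, and the “scan” is random data; nothing here is a real diagnostic pipeline.

```python
# Illustrative triage of MRI slices: score each slice, review the top ones first.
import numpy as np

def relevance_score(image_slice: np.ndarray) -> float:
    """Hypothetical stand-in for a trained classifier.

    A real system would return the model's probability that a slice contains
    a finding; here a crude intensity statistic serves as a placeholder.
    """
    return float(np.abs(image_slice - image_slice.mean()).mean())

# Simulated scan: 500 slices of 128x128 pixels.
scan = np.random.rand(500, 128, 128)

# Score every slice, then present them to the radiologist in priority order.
scores = np.array([relevance_score(s) for s in scan])
priority_order = np.argsort(scores)[::-1]
print("Slices to review first:", priority_order[:10])
```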

Time-saving happens outside of clinical contexts as well. AI, especially new generative tools, can reduce the administrative burden on health practitioners. “Machine learning algorithms applied to EHRs (electronic health records) could automate record-keeping tasks, freeing up more time for caregivers to focus on improving overall quality of care.” Hospitals and clinics are adopting AI-powered chatbots and virtual assistants to answer basic patient questions about symptoms, medications, and treatments while reducing wait times for appointments. AI also makes it possible to automate routine tasks such as scheduling appointments or sending patients reminders about medication refills.
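To picture the routine-task side, here is a deliberately simple, invented example of the refill-reminder case. The patient records are made up, and a production system would pull this information from the EHR rather than a hard-coded list; in practice, the “AI” is usually in extracting and summarizing that information, not in the date arithmetic.

```python
# Illustrative automation of a routine task: flagging patients due for a refill reminder.
from datetime import date, timedelta

# Hypothetical records; a real system would read these from the EHR.
patients = [
    {"name": "Patient A", "last_refill": date(2024, 2, 1), "supply_days": 30},
    {"name": "Patient B", "last_refill": date(2024, 3, 10), "supply_days": 90},
]

def needs_reminder(record, today=None, lead_days=5):
    """Return True if the prescription runs out within `lead_days` days (or already has)."""
    today = today or date.today()
    runs_out = record["last_refill"] + timedelta(days=record["supply_days"])
    return runs_out - timedelta(days=lead_days) <= today

due = [p["name"] for p in patients if needs_reminder(p)]
print("Send refill reminders to:", due)
```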

Challenges and Risks

Of course, risk is unavoidable in any rollout of a new technology. AI in medicine is nothing new, but generative applications have only taken off over the past year, and risks often reveal themselves only through multiple rounds of testing and debugging. This phenomenon is exacerbated in a world where companies are scrambling to get their products to market as fast as possible, knowing that those who get there first may remain there the longest.

But even before these programs can work, they must be trained on massive datasets. Obtaining good data is a challenge, and bad data can disrupt the efficacy of the algorithms. “The first challenge,” writes Mugahed A. Al-Antari in Diagnostics, “is due to medical data quality and availability, where AI algorithms require large amounts of high-quality labeled data to be effective, and this can be a challenge in the medical field, where data are often fragmented, incomplete, unlabeled, or unavailable.” And even if the data are high quality, cleaned, and complete, the composition of the datasets can reinforce existing health inequities. “AI algorithms can be biased,” says Al-Antari, “if they are trained on data that is not representative of the population they are intended to serve, leading to incorrect or unfair diagnoses.” Remember: AI algorithms, even those that are “generating” new content, are only as good as the data they are trained on.

Taking this a step further, what would happen if an AI medical diagnostics algorithm, trained on biased data, led a doctor to give their patient bad advice? Or worse, if an AI tool caused harm to a patient? Shockingly, even as some of these tools are rolled out, “there is no systematic overview of the legal concerns raised by the use of health-related AI.” An article in the medical journal Front Med puts this well: “Since operators are not allowed to understand and verify the logical processes which drive the machine to the results, human–machine interaction in actions and omissions, especially when AI systems take autonomous choices, poses relevant issues of liability in the event of damage to third parties. New technological approaches introduce new realities that might unlikely fit within the solid edges of the current law.” You can imagine a tense triangle of blame: doctors, AI tech companies, and malpractice insurance providers all pointing fingers at one another while the patient stands to the side scratching his head.

Finally, when it comes to task automation, health workers in administrative roles are concerned that AI algorithms will supplant, rather than assist. An OECD report recently found that, in the aggregate, 27% of administrative jobs are at “high risk of automation.” For those who do find themselves working alongside new tools, adjusting is not always easy. “Numerous health professionals have not yet been equipped with the skills to understand the value of AI tools in their practice to save time and improve health outcomes. Further, the development of health-related AI tools does not always involve health professionals and resulting solutions can add to the burden of already stretched workloads, contributing to worker burnout.” 

The healthy skeptic will wonder whether the consumer will feel these automation-related cost savings or whether medical and tech companies will pocket the difference.

As an experiment, I asked ChatGPT to summarize the Hippocratic oath, which it did without issue. When I asked it to swear by it, it refused: 

“As an AI, I don’t have personal beliefs or the capacity to swear oaths.”

Should an entity that is merely an approximation of a huge amount of human-produced knowledge, that can recite but not swear by an oath—because it has no self and cannot think—be allowed to take on such an outsized role in medicine? This is less a medical question than it is a metaphysical one. 

The reality is that AI is not, at the moment, usurping power from doctors and that there are as many benefits as there are risks. As the technology grows in popularity, the burden may fall on patients to hold developers and healthcare providers accountable, ensuring benefits are maximized while risks are minimized. The report Communities in Action: Pathways to Health Equity offers a few suggestions: medical-legal partnerships and community-focused health equity initiatives, among others. None of these are foreign to NYC. The Connections to Care and Building Healthy Communities initiatives both attempt to address health equity at a neighborhood level. But when it comes to AI in healthcare right here in NYC, Mayor Adams’s recently issued AI Action Plan fails to mention the subject beyond the introduction.

It may be up to us to shape the regulation of this technology locally, and we can do so by staying involved, going to community board meetings, and even talking to our neighbors. We must ensure that this transformative technology ultimately serves patients, supports doctors, and builds a more robust workforce—it should increase the overall quality of healthcare delivery without cutting corners. 

If it doesn’t, we must diagnose and treat the issue—before it spreads.
