Sperling Prostate Center

Artificial Intelligence in Medicine – Part 3: Is There a Downside to AI in Medicine?

From deep learning algorithms that can read CT scans faster than humans to natural language processing (NLP) that can comb through unstructured data in electronic health records (EHRs), the applications for AI in healthcare seem endless.[i]

Endless applications imply endless possibilities for Artificial Intelligence (AI) in medicine and healthcare. Sounds terrific! So, what’s the downside?

So far, none has clearly emerged. No one has died, no systems have collapsed, no physicians have been replaced by robots. And yet, many experts are concerned about unintended negative consequences in a variety of areas: harmful medical errors, violations of patient privacy, changes in the physician-patient relationship, loss of radiologists’ jobs, the economics of developing machine learning, and ethical/legal/regulatory issues. Let’s look at some examples.

Harmful medical errors

Would you trust your doctor’s treatment plan to a computer diagnosis? Machine learning requires an enormous amount of input. How much is enough? A recent study employed artificial intelligence to screen for lung cancer, based on 42,290 CT lung scans from nearly 15,000 patients. “The researchers did not tell the AI what to look for, just which patients went on to get cancer and which did not.”[ii] The computer proved more effective than experienced radiologists, with 5% better accuracy and 11% fewer false diagnoses. However, no one knows exactly how many images it takes to train a self-learning computer adequately, and if the training data itself is skewed, even machines will produce biased output.
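For readers who are curious, here is a minimal, purely illustrative Python sketch of what “outcome-only” training looks like. It is not the study’s actual deep-learning model: the “scans” below are just random numbers standing in for image features, and the only supervision the model receives is the outcome label (did the patient go on to develop cancer or not).

# Illustrative sketch only: synthetic stand-in data, not real CT scans.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical stand-in: 1,000 "scans" reduced to 64 numeric features each.
X = rng.normal(size=(1000, 64))

# Outcome-only labels: 1 = patient later diagnosed with cancer, 0 = not.
# (Here the label is generated from the data so the toy example has a learnable pattern.)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# The model is never told which features matter; it infers that from the labels alone.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")

The same property cuts both ways: because the model learns whatever patterns separate the labeled groups, skewed or unrepresentative training data will be learned just as faithfully as genuine disease signals.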

Violations of patient privacy

Patients’ medical records must be accessed in order for machine learning to develop predictive analytics. How many women whose mammograms showed a certain type of suspicious area actually developed breast cancer? How many people carrying a particular gene mutation went on to develop dementia? Tens of thousands of electronic medical records must be sifted through. Who is doing the sifting? How secure are the electronic systems from possible hackers? A data breach has legal implications: who is responsible? A computer can’t be sued or sent to prison for mishandling patient files.

Change in the physician-patient relationship

While some experts believe that AI will free up physicians’ time so they can spend more quality time with patients, others worry that if doctors become dependent on AI reports, the trade-off will be the loss of the intangible qualities of physician intuition and empathy. After all, the deep learning that generates diagnostic reports and the like is governed not by emotional intelligence but by complex algorithms, logic and probability. Will this add an element of coldness or distance to the doctor-patient relationship? Will patients trust their doctors less, or perhaps more, if computer-generated results prove more reliable than “old-fashioned” human experience and intuition?

Loss of jobs

In a previous blog I addressed this in more depth. Suffice it to say that despite some fear that human readers will be put out of business by machine learning, human radiologists will still be needed to explain computer reports in terms patients can understand and, in dialogue with patients, to exercise judgment and help solve problems in guiding them toward treatment, something it’s hard to envision a computer doing.

How much will it cost?

Artificial intelligence for medical applications is not cheap. One report states, “The AI in healthcare market is slated to expand from its current $2.1 billion to $36.1 billion in 2025, representing a staggering compound annual growth rate (CAGR) of 50.2 percent…Hospitals and physician providers will be the major investors in machine learning and artificial intelligence solutions and services, the report predicts.”[iii] Many end users became disillusioned or cynical over the challenges of keeping electronic medical records (EMR); will AI demonstrate that it doesn’t create more burdens for doctors, nurses and hospital systems? Will it prove worth the investment?
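As a quick sanity check on those figures (assuming the $2.1 billion baseline refers to roughly 2018, which the quoted report does not state explicitly), the arithmetic does hold together:

# Rough check of the projection, assuming a 2018 baseline of $2.1B
# and the reported 50.2% compound annual growth rate through 2025.
baseline_billions = 2.1
cagr = 0.502
years = 2025 - 2018  # 7 years of compounding
projected = baseline_billions * (1 + cagr) ** years
print(f"Projected 2025 market: ${projected:.1f}B")  # prints roughly $36.2B

Of course, checking the arithmetic says nothing about whether the investment will pay off in practice.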

Ethical/legal/regulatory issues

What are the rules that will govern the development and implementation of medically intelligent computer programs? Who will develop them? State and federal legislators? Lawyers? Academics? Writing AI programs for self-teaching electronic brains does not guarantee the brains will operate as expected. If they don’t, do you sue the computer, the hospital, or the individuals who wrote the program code? Artificial intelligence in medicine, as in other areas like self-driving cars, raises problems “…related to predictability and the ability to act independently while at the same time not being held responsible.”[iv] When polled, “Two-thirds of executives believe that they are developing platforms and services that fall into ‘regulatory grey areas that do not clearly address the privacy, security, and ethical concerns of an extremely liquid big data environment.’”[v] You can see how thorny this will be.

I hope you’re looking forward to the next article on specific AI challenges, as much as I am to writing it.

NOTE: This content is solely for purposes of information and does not substitute for diagnostic or medical advice. Talk to your doctor if you are experiencing pelvic pain, or have any other health concerns or questions of a personal medical nature.

[i] Bresnick, Jennifer. “Arguing the Pros and Cons of Artificial Intelligence in Healthcare.” Health IT Analytics, Sep. 17, 2018.
[ii] Gallagher, James. “Artificial Intelligence Diagnoses Lung Cancer.” BBC News Online, May 20, 2019. https://www.bbc.com/news/health-48334649
[iii] Bresnick, Jennifer. “Artificial Intelligence in Healthcare Spending to Hit $36B.” Health IT Analytics, Dec. 28, 2018.
[iv] Karliuk, Maksim. “The Ethical and Legal Issues of Artificial Intelligence.” Modern Diplomacy, Apr. 24, 2018.
[v] Bresnick, Jennifer. Ibid.

 

About Dr. Dan Sperling

Dan Sperling, MD, DABR, is a board-certified radiologist who is globally recognized as a leader in multiparametric MRI for the detection and diagnosis of a range of disease conditions. As Medical Director of the Sperling Prostate Center, Sperling Medical Group and Sperling Neurosurgery Associates, he and his team are on the leading edge of significant change in medical practice. He is the co-author of the new patient book Redefining Prostate Cancer, and is a contributing author on over 25 published studies. For more information, contact the Sperling Prostate Center.

