Bioethics and Artificial Intelligence

Image by Marcel Scholte

Artificial Intelligence, often shortened to AI, refers to the rapidly developing field of computer systems and algorithms that can imitate intelligent human behavior. The increasing use of AI in medicine raises many bioethical questions.

An interesting recent experiment in telemedicine used AI in place of doctors to diagnose illnesses. Patients reported their symptoms to the AI system and received a correct diagnosis from the computer more frequently than from human clinicians. It is not surprising that computers linked to enormous databases, with extremely fast processing speeds, would outperform human beings who get tired, cannot easily recall rare diseases they studied in medical school years earlier, and so on. AI programs have long demonstrated that they can consistently defeat chess grandmasters by “thinking” many more moves ahead. A revolutionary change also seems to be coming in the medical analysis of diagnostic images. Simply put, AI systems can give highly accurate diagnoses in almost real time at significant cost savings.

The New England Journal of Medicine announced the 2024 launch of a new journal, NEJM AI. Its aim is to provide an interdisciplinary forum for the best ways to integrate AI into health care, with the goal of transforming medicine. The enthusiasm for AI seems boundless, with advocates seeing potential applications in almost every area of medicine and health care delivery.

And yet … the thought of AI flooding health care and replacing humans raises significant ethical and other concerns. Few people would feel comfortable putting their lives in the “hands” of an autonomous robotic surgeon, for instance, even if assured that it performs significantly better than a human one. That is why such systems are still operated by flesh-and-blood surgeons, even though in many circumstances they could function without them. There is something cold, even threatening, about a computer or robot supplanting a living doctor or nurse. Human beings have moral consciences and empathy that cannot be computer generated.

A bioethicist colleague commented on the ethical problems associated with AI by pointing out that the “black box” aspect of these computer intelligences is disturbing. One typically cannot see or clearly understand the inner workings or reasoning used to arrive at a solution or output. To act ethically, one must understand why one is doing something and the circumstances of the situation. Simply surrendering understanding or reasoning to computers is not ethical. We use them as tools but reserve judgment for people with moral consciences.

A major challenge of AI is that it blurs the distinction between serving as an advanced tool and substituting for human intelligence. Practically speaking, it has the potential to put huge numbers of people out of work by performing certain human tasks more efficiently. The problem with efficiency, however, is that it is almost the opposite of love. A key aspect of love is “wasting time” with the beloved, simply being with them without doing anything “productive.” AI is all about productivity and rapidly mimicking deeper realities. Using ChatGPT to write a term paper almost effortlessly subverts the purpose of education: students learn to think well and argue clearly through the effort expended in research and writing.

I can understand why Elon Musk is both fascinated by and extremely fearful of AI. His solution to the problem seems to be a version of the old proverb, “if you can’t beat them, join them.” Musk’s Neuralink company is developing brain-computer interfaces that directly connect a person’s brain to outside computers. I wrote about the problematic transhumanist agenda involved in this company’s goals in a previous column.

Personalist bioethics develops the key insight that the dignity and sacredness of the human person must guide our ethical decision-making. Artificial Intelligence comes from a very different perspective. Certain applications can and should be used as tools in the service of humanity, but we should be aware of the dangers of surrendering what is essential about humanity to machines. Theologically, we have assurances that only God creates life, so there is no risk of creating a self-aware “artificial life form” that, as some science fiction posits, could turn against and exterminate humanity. On the other hand, we could set in motion highly dangerous computer intelligences. There were instances during the Cold War when human beings refused to launch nuclear missiles; automated computer systems, there is no doubt, would have launched them without a second thought and brought about the nuclear destruction of the world.

Calculation and algorithms should not be seen as superior to what makes humans unique. There is a real danger of human beings developing an inferiority complex vis-à-vis computers. Yes, computers can perform some tasks better than we can, and that list is rapidly growing with the development of AI. We must draw the line, however, at love, moral decision-making, and our own education. These are uniquely important and human; they cannot be outsourced to AI. It is also vital that a field as crucial as health care not be stripped of its distinctively human element. The bedside manner of a robot is not the future for which we should be striving.