
What Does It Mean to Be Human?


John Wyatt

This article is an excerpt transcribed from an interview recorded for a CMDA Matters podcast episode released in November 2023 with CMDA CEO Mike Chupp, MD, FACS; CMDA Senior Vice President of Bioethics and Public Policy Jeff Barrows, DO, MA (Ethics); and Professor John Wyatt. During the conversation, they discussed what it means to be human in the age of artificial intelligence (AI), especially how it relates to healthcare. To listen to the full episode, visit cmda.org/cmdamatters.


Mike Chupp, MD, FACS: Jeff and I were introduced to one of your newest books entitled The Robot Will See You Now: Artificial Intelligence and the Christian Faith. And you joined Stephen Williams, a theologian, to write it. What goals did you have as you wrote that book? What have you seen come from this project since 2021?

 

Professor John Wyatt: Together, Stephen and I helped lead a project at the Faraday Institute at Cambridge, where we called together computer scientists, theologians, thinkers and philosophers to have a conversation about what artificial intelligence is doing and what it means to be human. The book is the product of that research collaboration, and I wanted to make it more accessible to general readers. It’s a preliminary contribution. One of the challenges of this area is the speed with which things are changing. The research project came to an end just before the pandemic started in 2020, and the book was published in 2021. It’s already massively out of date, and that’s part of the challenge, isn’t it? The fundamental issues are unchanged and very, very troubling.

 

Jeffrey Barrows, DO, MA (Ethics): I know the introduction of the book examines distinctions between various things, including information technology (IT), artificial intelligence and robots. Could you help us understand the differences between them as you discuss them in the book?

 

Professor Wyatt: Information technology is a broad phrase covering anything to do with the storage, processing and transformation of information by computers, and it has been around us for decades. Artificial intelligence is a very poorly defined phrase; many computer scientists don’t like it. It’s a bucket term, but in general it means developing computer systems to do tasks which until now only the human brain could do. The idea is to have computers take over as many tasks as possible that human beings previously did, including speech, language and visual processing. Whereas the previous generation of AI systems largely took in information and processed it, there are now astonishing new developments in what is called generative AI. These are systems that can generate text, images, music, videos and so on; they can generate a stream of new information. I can have a conversation with a computer system, and it instantaneously responds in an apparently intelligent, thoughtful and incredibly well-informed way. This is like science fiction, and yet it is becoming part of our commonplace experience.

 

Dr. Chupp: Several prominent figures—Elon Musk, Stephen Hawking—have called for a halt to progress in AI because of concerns about what’s coming. What’s your perspective on those concerns, coming as they do from people who seem to stand to gain a great deal from the development of AI, including within healthcare?

 

Professor Wyatt: I think it’s important to understand that science fiction plays a huge role in all this; it’s a very unusual situation. Previously in the history of the world, technology came first and then all the authors, writers and creators came along. You have the Industrial Revolution, then along comes Dickens writing novels about the Industrial Revolution and the impact it’s having on families. What is completely unique is that for 100 years, science fiction writers have been imagining a future in which the machines take control, in which the machines become as intelligent as we are. The fascinating thing is, nearly all these science fiction stories end badly. Once the machines start to become as intelligent as human beings, you can pretty well guarantee you know the ending of this story, and it’s not going to be good.

 

The second thing is that the guys in Silicon Valley and the technologists who are making this happen have all been reared on science fiction from their mother’s knee, and many of them, including Elon Musk, are trying to make it come true. At the very same time they are trying to make it come true, they also know how science fiction ends, and they’re frightened.

 

Dr. Barrows: One of the chapters you wrote in this book is entitled “Being Human in a World of Intelligent Machines,” and you spend a good portion of the chapter discussing modern controversies and various topics of anthropology, intelligence and personhood, then you conclude with this quote: “The ubiquity and effectiveness of various forms of machine intelligence have created a distorting lens through which our humanity is being perceived in new ways. The dangers seem obvious…but perhaps this time in history represents a unique opportunity for creative thought and engagement as a Christian community to deepen and enrich our understanding of what it means to be human, of the extraordinary possibilities of the tools we are creating, and of the strange new world in which we find ourselves.” I’d like you to expand on how the Christian community should use the societal turmoil being brought about by AI and robots to deepen and enrich our understanding of what it means to be human.

 

Professor Wyatt: “What does it mean to be human?” is a question I hear more and more people asking. That question is being asked on chat shows and by scientists, technologists, thinkers and philosophers, because the technology raises it afresh. If we can make a machine that seems to perform all the things human beings do, then what does that tell me about myself, about what it means to be a human being? Perhaps I really am just a computer made of meat. And then, second, what are human beings for?

 

I think this gives us, as Christians, a unique opportunity. We have a distinctive understanding of what it means to be human: that we are embodied, vulnerable, fragile biological creatures created in God’s image, created to reflect the very being and character of God. There are numerous people in our secular society watching this rush of AI who feel deeply, intuitively, that this can’t be right; this cannot be the future; this is not the kind of world I want my child to grow up in. They can’t give you a reasoned, logical and philosophically robust answer for why it’s not right. I think we as Christians can. We have a deep understanding, grounded in our faith and revealed in the Scriptures, of what it means to be human, one which provides an alternative to this technological, computer-based understanding of what the future holds.

 

Dr. Barrows: You also wrote a chapter entitled “The Impact of AI and Robotics on Health and Social Care” and you state: “It seems likely that AI technology will become ubiquitous within healthcare across the world, although its pervasive role will be largely hidden from view.” What are some examples that stick out to you today, and what are the potential negatives for us in the healthcare profession?

 

Professor Wyatt: When I was brought up, we were told science was going to be so amazing. We were going to have hover cars and colonies on the moon, and the biggest problem for people in the 21st century would be how to spend all their free time. You know, the machines would do everything, and it would be a real problem. Everybody would sit around wondering how to fill the endless hours. Well, they got that a bit wrong, didn’t they?

 

When I think of those kinds of predictions and then try to predict how healthcare is going to develop in the next 20 years, I find I’m very cautious. I suspect there are several areas where AI is going to make a big difference. The whole field of diagnostics and interpretation of records, from interpretation of scans to analysis of blood work, is going to be transformed. We’re all going to have an up-to-date review of the world literature available instantaneously on our smartphones or laptops. In the middle of a consultation, you may want to know the latest and most authoritative perspective on anything you like, and it will be instantaneously available.

 

Where I am most troubled is the human interface. There are already technologists and physicians saying that systems like ChatGPT—and ChatGPT is just the very first version—are improving massively. Give it another year or two and they’re going to be so much better.

What is being suggested is that these systems become the interface between physician and patient. Instead of the familiar two-way process of meeting with the patient and having a face-to-face conversation, what’s going to be new is a three-way process. There’s going to be a third “person” in the room, and it’s the AI, interacting simultaneously with the physician and the patient. There is a three-way exchange of information going on in human language. Yet this third “person” sitting in the room is the most experienced, profound expert in any field.

 

How does that change the nature of healthcare? It makes me think about medical education. For instance, to be involved in that three-way process, do you really need to do all that anatomy and physiology: dissecting the human body, learning the Krebs cycle, pharmacology, you name it? Does it really take six, nine or 12 years to train a health professional to sit in that room with AI and provide expert, quality healthcare? I’m not sure it does. Interestingly, this is one of the things this kind of automation technology does. It takes a traditional professional role, like a lawyer, banker, accountant, physician and so on, and it decomposes that role. It looks at the individual tasks and divides them up, saying, “Well, actually this box can do that, and this box can do that, and where we need the human being is here.” I think that’s what’s going to happen to the physician. I can see both great positives and negatives.

 

Dr. Chupp: That makes me think of the transition to electronic medical records and how we were told it was going to make things go so much faster. Dr. Eric Topol, whom you mentioned, wrote in his 2019 book Deep Medicine about how AI can make healthcare human again and vastly improve efficiency, giving physicians more time for human-to-human interaction. I’m just a little bit skeptical.

 

Professor Wyatt: I’m with you. So often, it seems, in the real world whatever is designed achieves exactly the opposite of its intended effect. I do fear this new world. For me, the distilled essence of the physician’s role, particularly with people facing terminal illness or catastrophic health problems, is to be a wise friend who says, “I am a human being like you. I know what it means to suffer. I know what it means to be terrified of death. I know what it means to be in pain and frightened, and I’m here to walk alongside you and promise we won’t abandon you.” No machine can ever say that and mean it.

 

Dr. Chupp: I finally signed up and did a test asking ChatGPT, “What are the bioethical concerns we should have within healthcare because of artificial intelligence?” In less than a second, it generated a list of 10 things, which I shared with you just a few days ago. What do you think about that list AI generated about itself? Did AI leave anything out of that list of bioethical concerns?

 

Professor Wyatt: It’s brilliant. It’s a comprehensive survey of a very complex and rapidly developing field. Somehow it has managed to extract the major headlines across a huge range of issues: data privacy, informed consent, bias, fairness, transparency, accountability, equity, patient autonomy and regulation, and it created all of this instantaneously at the touch of a button. What’s more, you could take each of these, push it back and you would then get a detailed paper on each issue. It is extraordinary technology, but I’m afraid all these ethical issues are very real and very problematic.

 

Behind this apparently human language, there is a mind-bogglingly complex series of probabilistic equations churning away and producing text. Accountability for what is said is therefore extremely problematic. What are the hidden biases? What has been twisted to give an impression that is not entirely truthful, or that carries hidden ethical, philosophical, even spiritual bias? All these things are hidden, and, for me, they therefore come with an immense health warning. I think any Christian who uses AI-generated text without the most careful scrutiny and reflection is potentially opening themselves, and anybody they send it to, to quite significant hidden forces and biases.

 

Dr. Barrows: I’m curious about your thoughts on other authors who have made significant contributions to this whole field of study, especially as it relates to healthcare internationally.

 

Professor Wyatt: I have to say, sadly, I think Christian thinking is way behind the curve. It’s one of the things that really troubles me when I think about, for instance, the amount of energy and effort which is, quite rightly, being spent on issues like sexuality, abortion and so on. I have been involved very much in debates like that; yet I think of this new AI technology, but also human enhancement, transhumanism, brain-machine interfaces and so on. These are issues Christians need to be engaged in, trying to work out how we can be salt and light in this astonishing place God has put us. Of all the times in world history to be serving Christ! There were physicians in the time of Christ, like Luke, and physicians in the age of the great missionary expansion who went out to Africa taking their coffins with them. But God, in His providential purposes, has placed you and me as healthcare professionals in a very unusual time of world history. How can we be faithful? How can we be salt and light in this time and this place?

 

Dr. Barrows: Are there any specifics that come to mind from your research of how individual healthcare professionals can make a difference by, for instance, speaking to their elected representatives, or are there things they can ask for? Are there specifics in terms of advocacy efforts we can undertake that will have a positive impact in directing how the whole realm of AI unfolds?

 

Professor Wyatt: There are several different things I’d want to say. Number one, I think we have a responsibility to be informed. There’s a tendency to think sometimes, “This is just all too much, it’s all too complicated. I don’t want to go there. I just want to carry on in my little corner.” I think that’s not a responsible Christian response. I think of the phrase Jesus said, “Much is required from those to whom much is given.” If we’re healthcare professionals, God has given us a great deal, and much is required. We owe it to our Christian brothers and sisters to inform ourselves as much as possible, to read and to listen, to keep up to date and to use all the tools that are out there to do so.

 

Number two, it’s about being salt and light. Salt is about preserving, minimizing corruption and evil. Light is about doing good and shining truth into dark places. If all of us, as Christian believers, are acting as salt and light in the world of healthcare and in the world of AI, then I do believe by God’s grace we can really have an influence. To change the metaphor, it seems to me a lot of this technology has this fundamental twist towards the dark side.

 

It’s ultimately about power, about human beings using machines for human power, and because human beings are fallen, this technology contains within itself the seeds of the fall, the seeds of evil. But by faith, I believe this technology can be redeemed. It can be brought out of the hands of the evil one and used for good, for the kingdom and for human flourishing. That’s the challenge, isn’t it? How do we redeem this technology? It’s not that we’re opposed to it, that we are Luddites who say, “Take me back, I want to go back to the agrarian past.” This is the future, and I think it is inevitable. It’s almost like a new Tower of Babel, but it’s coming in the age of grace, after the coming of Christ and the resurrection, in the age of the Spirit, and it’s all part of God’s ultimate purpose. The question is, how can we who’ve been called to be servants at this extraordinary time learn to redeem this technology and use it for God’s purposes?

 


About John Wyatt

John Wyatt is Emeritus Professor of Neonatal Pediatrics, Ethics and Perinatology at University College London and a Senior Associate at the Faraday Institute, Cambridge. He has a clinical background as a neonatologist and as a medical researcher in applied neuroscience, developing new methods for the prevention of brain injury in newborn babies. He has a long-standing interest in ethical dilemmas raised by advances in medical and digital technologies and has frequently engaged in public and professional consultations and debates on complex ethical issues concerning the beginning and ending of life. His book Matters of Life and Death: Human Dilemmas in the Light of the Christian Faith has been translated into more than 10 languages. He co-edited a new book on the ethics of artificial intelligence, The Robot Will See You Now, published in 2021, and he is currently writing another book on artificial intelligence to be published by IVP. He is married to Celia, and they have three grown-up children and four grandchildren. To learn more, visit johnwyatt.com.