How should Christians respond to the challenges of AI?

To use AI wisely, we need to develop an ethical framework that prioritizes human dignity and adapts to changing technologies.

A friend recently confided to me that he and his dad had used ChatGPT, an artificial intelligence text chatbot, to write an obituary for his uncle. His dad knew the details he wanted to include but was unsure how to write it.

After benefiting from ChatGPT’s stylistic ability and conversational writing skill, my friend and his dad added details and corrections. Even though the obituary turned out well, at the wake he shushed his mom when she tried to tell a family member that they had used ChatGPT.

ChatGPT writes a pretty good obituary if you provide the right prompts. However, the average wake attendee might judge someone negatively for using AI to write it.

What does this say about humans, AI, and ethics? No matter how useful AI is, we need to think more deeply about how it reshapes human relationships, education, economies, and creativity.


ChatGPT and other chatbots that employ neural network language modeling dominate current headlines about AI. However, artificial intelligence is found in lots of digital technologies.

AI is a digital technology that performs tasks generally associated with human intelligence, such as sorting, recognizing patterns, and making decisions about images, words, and translation. AI can process information on a scale that humans cannot. ChatGPT uses natural language processing not only to interpret data and follow commands but also to learn and change in response to that data.

When AI makes manual tasks easier, society often responds positively. But when ChatGPT shows signs of being smart, such as when it passes the bar exam, society becomes wary.

Digital technologies are never just tools. They shape us, and we shape them. For example, ChatGPT won’t put teachers out of a job, but it will influence those jobs. Many teachers have had to change the kind of assignments they give because ChatGPT can produce essays that slip past plagiarism-detection software.


We have all witnessed how digital technology shifts the way we form relationships. We keep up with family and friends on social media. We game with strangers halfway around the world. As I detail in Sex, Tech, and Faith (Eerdmans), more people meet their partners on dating apps than offline.

Technological growth rapidly outpaces social response and government regulation, sometimes making us feel like we must get on board or get left in Luddite dust. It’s okay to slow down and ask questions.

Pope Francis, in his 2015 encyclical Laudato Si’ (On Care for Our Common Home), addresses digital technological development from social and ecological perspectives. And for the past three years, the Vatican has been trying to promote conversations specific to AI. The “Rome Call for AI Ethics” is a helpful document when considering issues of design, implementation, and impact from the perspective of government regulation, business practices, and industry standards.

Pope Francis utilized the document’s value-based framework in January 2023 when he criticized the use of artificial intelligence surveillance systems against asylum seekers.


In addition to government and corporate regulations and social policies, people in the pews need guidance for everyday encounters with AI. As Christians, we should already be asking how our faith affects how we live out our values when it comes to responding to strangers and promoting justice. Why do we so rarely bring such an approach to digital technological engagement?

We need an ethical framework to help Christians respond to AI tools such as ChatGPT. In Christian Ethics for a Digital Society (Rowman & Littlefield), I advocate for flexible, adaptive, and responsive ethical approaches grounded in shared values rather than specific rules.

For example, if we consider any use of AI to write something to be “cheating,” we will negatively judge my friend and his father, the obituary writers, as lacking integrity. This is a rules-based ethical approach: Writing should be done only by a human, never by AI.

Yet most of us do not judge ourselves as dishonest if we select the response prompts provided after a text message. Is clicking “OK” or “sounds good” morally neutral?


In the case of the obituary writers, we might say they were acting out of love for their family member, wanting to represent them in the best form possible. Is ChatGPT’s assistance any different from a funeral director’s help in writing an obituary? AI takes stylistic cues from millions of obituaries available on the internet, just as the funeral director draws on their years of experience.

Context matters when making ethical judgments about AI. Using a values-based approach rather than relying on broad rules allows our ethical responses to adapt as technologies change. Evaluating a technology should include learning how it works and how it was designed, in addition to considering its impact on the humans who use it.


The “Rome Call for AI Ethics” suggests the following principles when determining the ethics of a technology:

“Transparency—AI systems must be understandable to all; inclusion—these systems must not discriminate against anyone because every human being has equal dignity; accountability—there must always be someone who takes responsibility for what a machine does; impartiality—AI systems must not follow or create biases; reliability—AI must be reliable; and, security and privacy—these systems must be secure and respect the privacy of users.”


Even if the use of ChatGPT results in a human expression of love, we have a responsibility to ask whether the tech design leads to inclusivity, is accountable, and protects the users’ privacy.

In everyday encounters with AI, we could follow these steps:

  • Start with an issue people are already talking about, such as ChatGPT, an AI image generator such as DALL-E, or even a social media platform.
  • Find resources to increase digital literacy. Christian Ethics for a Digital Society offers discussion resources at the end of each chapter covering such topics as algorithms, online presence, digital surveillance, technology, and environmental issues. For discussions on relationships, there is a study guide at the end of Sex, Tech, and Faith. WIRED magazine and podcasts offer helpful tech primers. And the documentary The Social Dilemma explores how algorithms and AI relate to social media.
  • Ask critical questions such as: What values guide my Christian belief and actions?
  • Revise your own use practices to align with your values.
  • Talk with friends and family about your knowledge of how a technology is designed and why you have changed your practices. This builds the digital literacy of others and starts ethics conversations about AI and relationships.
  • Join or lead a congregational study using the resources named above.
  • Engage on a wider community level. Write your elected officials asking for greater regulation in line with some of the principles outlined by the Vatican’s “Rome Call.”

Although these steps may seem like inconsequential interventions, they can help grow the ethical muscles needed to apply our values to digital technologies.


This article also appears in the July 2023 issue of U.S. Catholic (Vol. 88, No. 7, pages 15-16).


About the author

Kate Ott

Kate Ott is a professor of Christian social ethics at Garrett-Evangelical Theological Seminary in Evanston, Illinois, where she serves as the director of the Stead Center on Ethics and Values. She is the author of Sex, Tech, and Faith: Christian Ethics in a Digital Age (Eerdmans). Learn more about her work at kateott.org.
