AI isn’t all doom and gloom, says this theologian

We can tell a better story about the future of artificial intelligence, says theologian Joshua K. Smith.

Although the connection between artificial intelligence and theology might seem a bit tenuous to some, for Joshua K. Smith, author of Robot Theology (Wipf and Stock), the connection couldn’t be clearer. “Technology is very much a theological, eschatological conversation,” he says.

Joshua K. Smith is a pastor, theologian, and author of multiple books about artificial intelligence and robotics.

Now a pastor, theologian, and writer, Smith began his career in the military, working on teams that programmed and operated semiautonomous weapons systems. That firsthand experience working side by side with machines eventually led him to explore the connections between humanity, robots, and faith in Robot Theology. The book tackles a wide range of theological and ethical topics—from personhood and rights to racial justice and pastoral ministry—as they relate to the new wave of AI-based technologies quickly making their way into society.

As Smith surveys this complicated landscape, his core conclusion is quite simple: Far from being the harbingers of a technological apocalypse, these technologies may actually have a place in building a society oriented toward the common good. While he notes that there are certainly some perils that cannot be ignored, including algorithm-driven racism and negative impacts on the working class, Smith sees real potential for robots to become not only tools, but also friends. The key to working toward that future? Setting aside our long-held prejudices and telling a better, more nuanced story about not just AI but also technology as a whole.

How did you end up working at the intersection of theology and robotics? They seem like unlikely topics to mix.

You have to go back to my time in the military to see how the two intersect. My original career trajectory was to work with robotics in the industrial field, where you’re programming robots and observing what you’ve created. That’s always fascinated me. In high school, that’s where I thought my life was going to go.

There weren’t a lot of programs for that type of work when I graduated, and the few options that existed weren’t open to me because of where I came from economically and socially. I joined the military primarily to go to college. I ended up working with the Phalanx, a semiautonomous weapon system. It was really interesting to be up close to it and see it work.

It’s fascinating how the human–machine interaction plays out. It’s this really intimate space where a human designs, codes, and then implements a robotic system. You form bonds with it, in a way, because it becomes part of your team. It’s the same thing with bomb disposal robots: They are an integral part of the team.

It’s always been that way: Humans have always tried to figure out where we stand between machines and natural biology. There is this longing for something more, to be something different, to merge with something that is more powerful. I’ve always been fascinated by this desire, and that led me to the humanities. I’ve always thought a little differently than some of my theological peers, and I’ve always been a little more open to some things that make them uncomfortable. But I think that’s why I’m here: to challenge some of those assumptions and to ask questions alongside people in the fields of engineering and technology.

Christian ethics often takes an apocalyptic, fearmongering approach to AI and robotics. Why do you think that’s the case? Is there a way to move past it?

I think the biggest reason why we are drawn to apocalypse is because of the narrative around it. If you look at science fiction, images such as the Terminator are very hard to get out of our narrative of what AI and advanced robotics are. But that’s not realistic: We’ve had human–machine teams for a long time, and we’re quite intertwined with our technology.

Technology is very much a theological, eschatological conversation. There’s a reason why technology and theology are tied together and why we have two extreme views regarding the fear of new technology. On one end you have people such as Nick Bostrom, Stephen Hawking, and Elon Musk who say, “Well, we should be afraid of technology. It’s going to bring about the end of humanity.” Then you have people on the other side of the spectrum, such as Ray Kurzweil, inside these deep echelons of power in tech, who say, “No, no, no. Technology is going to usher in a utopia. It’s going to solve issues of poverty and bring about equality and social justice.”

Does it have to be one or the other? I don’t think technology is deterministic or fatalistic. It just depends on whether we’re asking the right questions and whether we’re open to dialogue about them. There are all kinds of things that we don’t completely understand that we use, develop, and love. AI can be a part of that, if we’re willing to tell a better story about it.

You talk about how robots have always been part of our history and our culture. Is that a part of how we could tell a better story?

I think so, once we get past the initial fear. Since the 1920s, when we were discerning how to integrate more machines into the workforce, there’s been this idea that we’re going to be replaced by something. But that’s not quite the full story. The story is that we find some implement, or we discover something, and then it makes these slow modifications to daily life. As it creeps into our life, it slowly changes the social fabric. Even if you go back to the Middle Ages, before economic trade and commerce were really wide-scale, simple implements like the plow drastically changed society.

I think many times, especially in Christian circles, people tend to promote a kind of digital monasticism where they try to seclude themselves from the world and hide away from technology. Yes, we have the potential to abuse technology and bring about harm, but the same is true of fire. The fire that warms you and cooks your food also has the potential to destroy you and burn down your house. I think it’s the same way with any tech.

Even if we go back and look step-by-step in history, there are so many different stories and narratives about what technology is for. I think it’s a very theological story: As we exist in bondage, brokenness, and estrangement from the world and from a Creator, how do we close that gap? How do we deal with death, disease, and brokenness?

God is the ultimate engineer, and God is a person of healing. I think God gives us the ideas and inspirations that lead to technological developments as a common grace.

In your book, you discuss the possibility of AI one day being considered people. To start with, what does it mean to be a person?

A person is more than just a human being. There’s this idea in philosophy that personhood is a much broader concept about how we relate to the world: to our environment, to animals, to inanimate objects. What I mean by persons is simply that we are inside a narrative that is much bigger than our individual selves.

In the Western world, we tend to think of a person as a human alone, in that we are on the top of the scaffolding of creation. It’s just us, and we determine and define everything that matters. But personhood is really much broader than that. People don’t live in isolation. Without the legacy of my grandparents and parents, I don’t exist, literally. In the wider scheme, without an ecosystem to live in, without trees, oxygen, and animals, people don’t exist. If you zoom out even more, there’s God: Without God, none of those other things exist. What it means to be a person is understanding how we connect to all those tiers and categories.

I think a person is a character within a story, even going back to the root of what person means.

You make a distinction between legal and moral personhood, saying that you expect robots will one day be considered legal, though maybe not moral, people. What’s the difference?

There are multiple tiers of persons, both in philosophy and in the legal world. Legal personhood essentially grants a limited set of rights or protections to inanimate objects for the sake of, for example, protecting this river from pollution or this animal from extinction. More recently, we’ve granted certain pedestrian rights to the Starship delivery robot, in this case so the robot can use the sidewalk to deliver goods.

When I talk about robot rights, it’s under the category of legal personhood. Moral rights are a little bit different. Those are what you and I share as human beings. When we are born, we have a certain set of standards and rights that are afforded to us based on our status as human. That’s not quite the same as when a robot is created.

As objects move into different relationships with us, they take on different accountabilities and responsibilities. If I’m working in close proximity with an AI or an embodied robot, it’s not quite the same as if you and I are working together. We need to acknowledge that.

Traditionally, at least in the legal community, we want to treat robotics the same way we would treat any other piece of technology. The difference with AI is that it’s making decisions. They’re not conscious, sentient decisions: It is using its mathematical models to make a decision, left or right, fast or slow, hard or soft. It doesn’t care whether you’re in proximity, whether that harms you or someone you love. It’s just computation. But that doesn’t make it any less problematic as we try to deal with how much we want to incorporate that into our society.

If we genuinely think about robots and AI as people in at least some sense, what kind of change might that bring on a societal level?

There are different theories about this. Depending on who you ask, I think the major narrative is that it will accomplish what we thought would happen with the Industrial Revolution: People will work less, and there will be greater economic gains. I think those pieces are true, but people often assume that everyone is going to profit from AI. I don’t think that’s true.

On one side, automation produces a lot of goods and makes them cheaper. That drives down prices, but there are going to be fewer jobs because of automation. That’s just a hard reality: Companies are going to cut costs where they can. It’s cheaper to buy a $100,000 robotic arm and have one or two people maintaining it than to employ a whole staff to do one isolated job. If you have two or three or four robots that can do one job extremely well, all day, without pay, it’s a lot cheaper than paying somebody minimum wage. Once that happens, there will be a major shift in our economy, and we’ll become more reliant on technology than we’ve ever been.

But I think there’s also another shift happening, one similar to the widespread introduction of electricity and, later, smartphones. I think once we find a way to make a robot that is affordable, that can help with the dishes, homework, and the family calendar, there’s no going back. We’ll never see these AI as less than persons in our story.

Just like a dog, it will become a part of the family. I think that’s going to happen in tandem with the disruption in our economy as we integrate more robotics into the workforce.

You note that a lot of our contemporary conception of robots, machines, and technology in general has roots in racism and slavery.

Most people don’t make the connection, unfortunately. But the word robot was first used in the 1920 Czech play R.U.R. (Rossum’s Universal Robots), written by Karel Čapek. The word means “forced labor” or “servant.” The reason we create robots is to make them servants. We don’t want a piece of technology that’s not going to serve us.

In science fiction, robots are frequently envisioned as Black. Black scholarship on robotics, race, and tech tells us that yes, many times the vision of modern robotics is to invent a type of slave. My question for us is: Do we want to do that? Do we want to make arguments for a new type of slavery? People say, “No, it’s just math. It’s just programming. There’s nothing underneath it.” But the message is there, and the narrative is there. We just don’t want to listen.

Technology doesn’t just happen in a vacuum. You can’t erase our culture from it. That is a witness to the tension between race and tech, but I think there’s some positive to that. Instead of trying to remove all culture from tech, there are scholars such as Philip Butler who are asking, “Is there a way that we can make a distinctly Black AI?” That’s a very different way to shape the narrative: that there might be a way to help people see themselves reflected in the technology they create. Are we making things in a way that a person of color might see themselves in them? Or are we colonizing the technology we produce?

I think in a lot of ways it is the latter, especially if we don’t acknowledge the bias that is present even in something like Google. There was one point where if you Googled Black hair, the results would show very pornographic images of Black women. That is based on the data set and on how the search engine was trained. Google has addressed some of those issues, but the problem is so massive that even Microsoft, when it released a chatbot, had to take it off the internet within a couple of hours because it started saying racist things like, “Why was Hitler so bad?”

In some ways the early thinking about robotics was that it’s like a child that we can train. There’s some truth to that, but it’s going to be a very sociopathic and messed-up child if we don’t train it on the right data sets. It can be a very harmful thing. Automated programs are passing people over for job interviews based on their name, their sex, and the information in their resume. These are real harms happening in real life based on an algorithm’s decision.

What do you see as the most ideal role that robots could play in society?

I really like Kate Darling’s approach: She thinks about robots the same way that we think about animals. They are a distinct entity: not human, but also not alien. They have a communal role to play in our society. When we do that, it takes away the burden of trying to say, “Well, how is AI distinct from a human? Will they ever be human?”

The answer is no. AI will never be human, but that doesn’t mean that it won’t fool us. It doesn’t mean that it won’t be helpful, either. Once we make that leap philosophically and theologically, we’re able to incorporate these systems into our life and also set up proper boundaries and put them in proper places. That’s a healthier way to think about it.

Do you see any role for AI within the church or in ministry?

As far as ministry goes, we need to see AI as a supplement, not as a substitute. It’s the same thing with online church. Nobody is arguing for a purely online church—it’s always a supplement. Robotics and AI can be the same way. Nobody is arguing that we should replace humans with robots. I don’t think robots will ever replace clergy or caregivers, but they could be great supplements to them.

I would really like to see AI used in a similar way to how we might use therapy dogs or therapy robots. There’s nothing on the market like that right now. How can we use AI for mental health and spiritual growth?

I think we trust robots, for whatever reason. We sometimes trust our search engines more than we trust embodied people. It removes that layer of vulnerability.


This article also appears in the July 2023 issue of U.S. Catholic (Vol. 88, No. 7, pages 10-14).

Image: Pexels/Tara Winstead