AI and ethics aren’t mutually exclusive, says this scientist

This professor is on a mission to get data scientists to think with their hearts.

As a data scientist and professor at Marquette University, Shion Guha knows computer science isn’t just about math. There are “lots and lots of issues” with artificial intelligence (AI), he says. He lists plenty of examples of AI that have had unintended consequences, from hiring filters to criminal sentencing decisions made by privately designed software.

These growing ethical issues in AI have provoked a movement among data scientists to promote better ethics education for scientists. Guha is one of the scientists in the United States encouraging AI-specific ethical training for students that goes beyond core ethics and theology courses—making sure that future computer and data scientists consider the common good when designing new technologies. “Broadly, the field of computer science is starting to take notice of the fact that we can’t pretend to be neutral technology designers,” he says. “There’s no such thing as a neutral technology designer.”

Guha saw a need for an AI-specific ethics course in his department, so he created one, thinking a few graduate students might sign up. But the course quickly filled—and had a waiting list. “I think our students are smart,” he says. “They felt that there was this intersectional hole in their curriculum that needed to be filled.”

He’s also busy developing AI that works for the public good. At Marquette, Guha is glad his goal aligns with the Catholic university’s mission. “The basic mission at Marquette—social justice—very much aligns with my research program,” he says.

Marquette isn’t alone in encouraging this type of research and work. To address the growing need for ethics in tech, the theme of the Vatican’s Pontifical Academy for Life’s plenary meeting this month is artificial intelligence. Pope Francis has stated that, “Artificial intelligence, robotics and other technological innovations must be so employed that they contribute to the service of humanity and to the protection of our common home, rather than to the contrary.”

Together, Guha and his students are working to answer the pope’s call.

What is AI?

There’s a difference between the popular narrative of AI and what we actually have in the world right now. The popular narrative of AI is driven by the imagination and derived from things like science fiction—such as Star Trek with its Ultimate Computer or Isaac Asimov’s anthropomorphic robots, which have their own intelligence. In general, that kind of AI is known as “hard AI,” which is something that we have not been able to accomplish yet. We are nowhere near that.

Nowadays we have what we call “soft AI,” or the ability to build a predictive system that simulates intelligence. So, something like Apple’s Siri is not actually intelligent but simulates intelligence and gives us the perception of intelligence. Or, for instance, think about your humble Roomba vacuum. Roomba is a robot, but it does not have true intelligence. Roomba is a good example of “soft AI,” because it has these underlying predictive systems that enable it to map out where it should travel in your house and vacuum up dirt.

There’s a lot of debate about where AI is going, but at the end of the day we don’t have true AI. Instead, what we call AI right now is this backbone of large amounts of data stored in databases and extremely sophisticated and complicated algorithms that derive inferences from that data to try to provide value to our lives. That’s the definition of AI that I work with.
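
To make that definition concrete, here is a minimal, entirely hypothetical sketch of "soft AI" in the sense Guha describes: a few rows of stored data plus an off-the-shelf algorithm (scikit-learn's k-nearest-neighbors classifier) deriving an inference from them. The robot-vacuum scheduling scenario and all of the data are invented for illustration; nothing in it is intelligent, it only simulates a sensible decision by matching new situations to past data.

```python
# Hypothetical illustration of "soft AI": no real intelligence, just an
# algorithm deriving an inference from stored data. All data is invented.
from sklearn.neighbors import KNeighborsClassifier

# Pretend database for a robot vacuum: [hour of day, battery level] for past
# cleaning runs, with a label recording whether that run went well.
past_runs = [[9, 80], [10, 75], [14, 60], [22, 90], [23, 85], [2, 50]]
went_well = [1, 1, 1, 0, 0, 0]  # 1 = good time to clean, 0 = not

model = KNeighborsClassifier(n_neighbors=3).fit(past_runs, went_well)

# The "intelligent-seeming" choice is only a prediction from prior data:
# 11 a.m. with a 70 percent battery looks like the past runs that went well.
print(model.predict([[11, 70]]))  # -> [1]
```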

You’ve spoken about the need for “holistic computer scientists.” Why is it important that computer scientists broaden their skill set?

There are lots and lots of issues with computing technology—and with big technology companies, period. Many of these issues have arisen from a historical tendency for computer scientists to say things like, “Oh, we’re just engineers, we just develop a system, we don’t know how it’s going to be used.”

However, every system that is built will reflect societal norms as well as the biases and values of the designers and developers. So if we just train computer scientists to write code and build systems, and if we don’t train them to understand how they engage in the design of these systems and what potential implications their design, deployment, and implementation may have down the line, then we’re not doing our job properly.

What’s an example of AI gone wrong?

A few months ago Amazon received bad press because for years the company had used algorithms as an initial hiring filter. Amazon receives a huge number of applicants, since everyone wants to work there, so it built an internal algorithm to help screen résumés and used it in hiring.

But then they found that the algorithm over time became insanely sexist. Amazon’s computer models were trained to vet applicants by observing patterns in résumés submitted to the company over 10 years. But what they didn’t realize is that most of those applications came from men because of male dominance across the tech industry. So Amazon’s system taught itself that male candidates are preferable. This is a bad thing, because no one thought about it critically and considered the possibility that this could happen.
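
The mechanism behind that failure is easy to reproduce. Below is a hedged, synthetic sketch (not Amazon's actual system, whose internals were never published): a simple scikit-learn model is trained on invented "historical hiring" data in which a gender-correlated feature, here a hypothetical women's-club membership flag, happens to track the past skew toward male hires. The model obligingly learns a negative weight on that proxy, which is the kind of pattern Amazon reportedly found.

```python
# Synthetic illustration of how a hiring model trained on skewed historical
# data can learn a gender proxy. This is NOT Amazon's system; the data and
# features are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Features: years of experience, a skill score, and a proxy flag such as
# membership in a women's coding club (correlated with gender, not with skill).
years = rng.normal(5, 2, n).clip(0, 15)
skills = rng.integers(0, 11, n).astype(float)
womens_club = (rng.random(n) < 0.2).astype(float)

# Historical labels: past recruiters hired mostly men, so the proxy flag is
# unfairly associated with fewer hires, independent of actual skill.
hired = (skills + rng.normal(0, 2, n) - 3 * womens_club) > 5

X = np.column_stack([years, skills, womens_club])
model = LogisticRegression().fit(X, hired)

# The learned coefficient on the proxy is strongly negative: the model has
# "taught itself" to downgrade those candidates.
print(dict(zip(["years", "skills", "womens_club"], model.coef_[0].round(2))))
```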

How can AI that’s become racist or sexist affect society more broadly?

There was a Wisconsin Supreme Court case a few years ago, Loomis v. Wisconsin. The case challenged the State of Wisconsin’s use of private risk assessment software in the sentencing of a man named Eric Loomis to six years in prison. It was the first time a court was asked whether it’s constitutional to use an algorithm developed by a private third party to inform sentencing decisions.

If you consider the Sixth, Seventh, and Eighth Amendments, there are definitely some concerns with an algorithm deciding sentencing. The Wisconsin Supreme Court ultimately ruled against Loomis and let the sentence stand, though it cautioned judges about how such risk scores may be used. Part of the issue is that in our court system, lawyers and judges have little to no empirical training. They don’t know anything about statistics or data or algorithms, yet as we increasingly apply more and more of these types of predictive systems in public policy, we are going to see these types of issues.

So on the one hand you have issues in the private sector, like the sexist Amazon hiring algorithm, but on the other hand you also have algorithms being used for sentencing in the criminal justice system. And this is not just a Wisconsin issue. This is a nationwide issue, where anecdotally we have established that these algorithms are incredibly racist, yet they continue to be employed.

I’m not suggesting that anyone is doing anything with malicious intent. One of the biggest reasons algorithms are used in public policy is because of cost. The underlying rationale is that most of our public services—be it the criminal justice system, the education system, or the child welfare system—are overburdened and understaffed in most states. And so one way in which we get over this is by using algorithms to make decisions. Then you don’t have to depend on people.

On the face of it, that’s fine. I believe in a data-driven world. But I also believe in a data-driven world where we do things properly, where we do things ethically. And that’s not quite what’s happening right now.

Can AI ever work for the social good?

Right now I’m working with the Wisconsin Department of Children and Families, which is the child welfare services department in Wisconsin. That department has been under an American Civil Liberties Union (ACLU) lawsuit for the past 20 years because the outcomes have been very poor in Wisconsin. The department has been using various data-driven assessments for a long time trying to get out of this lawsuit.

The ACLU settlement proposed 13 different things that the department has to achieve. The department has been able to do most of them in the past 20 years, but unfortunately it’s had trouble with something called “placement stability.” So I work with one of the department’s private nonprofit partners, and we are trying to look at this particular issue using better AI.

If you make a decision to remove a child from their home to put them in a foster home, you only want to put them in one foster home. You don’t want to constantly bounce them around from home to home. It has really bad outcomes for a child down the line. So we decided to think about a data-driven project that will develop a predictive system that might give us better results for placement stability yet guard against some of these ethical issues that arise whenever you use any of these algorithms.

How can AI help solve this problem?

One way in which we’re approaching this is that, historically, there have been a lot of social workers and case workers who know exactly what’s happening on the ground. They know exactly the right conditions under which a child thrives. Unfortunately, if you use data-driven systems, you can’t quite take these messy qualitative perceptions of social workers into account in your algorithm. Most algorithms work well when based on data that is easily quantifiable. But here we’re talking about data that’s not easily quantifiable.

However, these social workers write detailed narratives about each case that are then stored in a system. A lot of this data has been stored in the system for the past 25 years, and no one’s ever used it. Remember, this is not quantitative data, this is qualitative data—written narratives about the kids in which the social workers describe what worked well and what didn’t work well.

But AI has evolved so much that we can now use it to look at these unstructured qualitative pieces of text. And if we have enough of this information over time, then we can quantify what these workers say works well or doesn’t work well in the context of the child welfare system. We can actually get the context of the case workers, and we can develop a predictive system that could incorporate this qualitative context, like these social worker narratives, in a quantitative form.
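
As a rough, hypothetical sketch of the kind of pipeline Guha is describing, not the actual Wisconsin system, modern text-mining tools can turn free-text case notes into numbers a predictive model can use. The example below feeds a few invented narratives and invented placement-stability labels through scikit-learn's TF-IDF vectorizer and a logistic regression; a real project would need far more data, careful validation, and the ethical safeguards he mentions.

```python
# Hypothetical sketch: turning free-text case-worker narratives into
# quantitative features for a placement-stability model. The narratives,
# labels, and model choices here are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

narratives = [
    "Child is settling in well, strong bond with foster parents, attendance steady.",
    "Multiple conflicts in the home; caregiver requested respite twice this month.",
    "Sibling placed nearby, and regular visits appear to stabilize the child.",
    "Placement disrupted after behavioral incidents; caregiver feels unsupported.",
]
stable = [1, 0, 1, 0]  # 1 = placement stayed stable, 0 = later disrupted (invented)

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # quantifies the narrative text
    LogisticRegression(),
)
model.fit(narratives, stable)

new_note = "Caregiver reports frequent conflicts and is asking about respite care."
print(model.predict_proba([new_note])[0, 1])  # estimated probability of stability
```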

We don’t want to repeat the same mistakes that people have made over and over, because data that is easily quantifiable is also data that usually doesn’t work very well. We want to look at what’s hard—we want to look at the professionals on the ground and what has worked for them. And if we can quantify what has worked for them in these types of data-driven systems, we can have a positive impact. It’s something that I’m pretty passionate about.

You created a course on ethics for your computer science students at Marquette. What made you interested in this?

Catholic and Jesuit schools, especially in the United States, have been leaders in the field of ethics education. However, there’s a big difference between a general study of ethics, or even an applied study of ethics, and the types of ethical implications that we deal with, especially in terms of data and algorithms. This demands an intersectional knowledge of computing systems and of the technical underpinnings of algorithms and data. And it also demands an appreciation for learning about these theoretical ethical frameworks and how they integrate with algorithms and data.

When I first proposed the course, it was welcomed at Marquette because we have both an undergraduate and graduate data science curriculum that we’ve developed in the last three years. And our computer science majors and our data science majors take other ethics courses from the theology and philosophy departments, because it’s part of the core curriculum at Marquette. You can’t graduate with a Marquette education without having a rigorous liberal arts curriculum that is informed by different ethics courses.

However, our computer science and data science majors also learn a lot of technical systems. They have a lot of technical proficiency about computing, technology systems, data algorithms, etc. And they need and want to know more about the specific ethical consequences and implications that arise from designing, building, deploying, and maintaining these types of systems. So I wanted to propose the course and see what the interest was.

The interest in the course was pretty high—it had a waiting list!

It did. And to be perfectly honest, I didn’t really try to advertise or popularize the course. I was thinking, “All right, we’re going to have 10 people sitting around a table and we are going to discuss things.” And lo and behold the entire class filled up, and more and more people were emailing me about it. It makes a lot of sense, because in the past few years our computer science and data science majors have seen a tremendous surge in enrollment. Marquette has invested more resources into computing, computer science, and data science because of it.

Ultimately, I think that if we do not inculcate our computer science and data science majors with ethical training, then we’re really doing them a disservice. The world is a lot more complicated, and therefore we have to help them think in complicated ways.

This article also appears in the February 2020 issue of U.S. Catholic (Vol. 85, No. 2, pages 26–29).

Image: Courtesy of Shion Guha