
PSU study: People trust AI more than they should


Lloyd Rogers


UNIVERSITY PARK — Artificial intelligence might be able to write essays, diagnose diseases and predict the weather, but according to new research from Penn State, most people can’t recognize when the technology itself is biased, even when the evidence is right in front of them.

A study led by S. Shyam Sundar, James P. Jimirro Professor of Media Effects at Penn State’s Bellisario College of Communications and director of the Penn State Center for Socially Responsible Artificial Intelligence, found that users overwhelmingly failed to notice racial bias in an emotion-recognition system, despite being shown an obviously skewed training dataset. The findings, published in “Media Psychology,” point to a deeper problem: people tend to trust machines more than they should.

“In our experiment, participants saw a mock dataset where all the happy faces were white and all the unhappy faces were Black,” Sundar explained. “We thought this would be easily noticed as a problem. But they didn’t think twice about it because they thought the system was simply doing its job by recognizing whether a face was happy or sad, rather than think about the race of the people portrayed.”

That misplaced confidence, Sundar said, stems from what psychologists call the “machine heuristic,” the belief that machines are objective, neutral and infallible.

“Humans can be subjective and fallible,” he said. “We tend to think machines are not like that. What we don’t think about is the fact that machines might be fed data or trained on data that might have some systematic biases built into them.”

To illustrate the danger, Sundar offered a simple analogy.

“If an AI learns that most pictures of dogs are taken in parks, it might decide that any animal with a green background is a dog,” he said. “In the same way, if the system sees more white happy faces and Black unhappy faces, it can internalize that racial pattern as part of what defines happiness or sadness, even though race has nothing to do with emotion.”

The implications reach far beyond facial recognition. In one real-world case, Amazon abandoned an AI hiring tool in 2018 after discovering it favored men over women, a bias inherited from the company’s own historical hiring data.

“AI doesn’t magically have its own mind or have an independent way of figuring out the truth about things,” Sundar said. “It reflects the data on which it’s trained. So the historic practices of sexist and racist hiring will get perpetuated if you use the historic data to create, to make decisions about newer incoming applicant pool, for example.”

Equally troubling, he said, is how resistant users are to seeing bias even when it’s shown to them.

“Where they do recognize the bias is if in fact the AI performs badly. When it actually takes a person who’s African American and classifies that person as unhappy when they’re clearly smiling, that is when people take note,” Sundar said. “But even then, many chalk it up to a one-time failure of the system.”

Sundar believes the solution starts with education and transparency, but with a caveat.

“The biggest kind of concern going forward from a study like this is we talk so much about the companies being transparent in their products and telling the users everything about a system so that the users can see for themselves,” he said. “But as we saw in our study, even when users see for themselves, they are not willing to let go of the strong machine heuristic.”

AI literacy, Sundar added, is now a survival skill.

“Users need to understand that not everything they see online is real,” he said. “With deepfakes and generative AI, people must approach content with a journalist’s mindset and verify before believing.”
