
By Megan Schumann, Rutgers University Office of Communications
Results from a new Rutgers University–New Brunswick survey tracking public trust in artificial intelligence (AI) suggest a growing divide in how Americans engage with the technology.
People with higher income and education levels are more likely to use and trust AI and have greater knowledge about the technology.
The survey, part of the National AI Opinion Monitor (NAIOM), was conducted between Oct. 25 and Nov. 8 and gathered insights from nearly 4,800 respondents across demographic groups, socioeconomic statuses and geographic locations. It examined public attitudes toward AI, including trust in AI systems, the companies using them and AI-generated news content.
When asked about their trust in AI to act in the public interest, 47% of Americans reported having "a fair amount" or "a great deal" of confidence in the technology. This level of trust was higher than that for social media (39%) or Congress (42%).
Trust in AI was highest among individuals ages 18 to 24 (60%), those earning $100,000 or more a year (62%) and graduate degree holders (60%).
“At this point, the AI divide does not seem insurmountable,” said Katherine (Katya) Ognyanova, an associate professor of communication at the Rutgers School of Communication and Information and a coauthor of the report. “Yet, if these tools remain more accessible and trusted among higher-income groups, they could deepen existing economic disparities. Given AI’s growing role across industries, unequal access and understanding could lead to missed opportunities for many.”
The researchers define AI as a collection of advanced technologies that allow machines to perform tasks typically requiring human intelligence, such as understanding language, making decisions and recognizing images.
“While AI is quickly becoming an increasingly important part of our work, education, and public life, its adoption and use are still premised on public trust,” she added.
Americans Trust Journalists More Than AI-Generated News
The survey also found Americans trust news produced by mainstream journalists more than AI-generated content. While 62% of respondents said they trust journalistic content “some” or “a lot,” 48% said the same about AI-generated information.
Despite concerns over AI-generated misinformation, many Americans are unsure of their ability to distinguish between human- and AI-produced content. Only about 43% of respondents said they were “somewhat” or “very” confident they could tell the difference, meaning fewer than half felt certain they could accurately spot AI-generated content.
“Research suggests a significant amount of online content is AI-generated, from machine-translated pages to social media posts,” said Vivek Singh, an associate professor with the School of Communication and Information, a coauthor of the report and an expert in AI and algorithmic fairness. “Even major news organizations use AI tools, like Reuters’ Lynx Insight, to produce short news stories reviewed by human editors.”
The Need for AI Education
As AI continues to shape daily life, Ognyanova emphasized the need for education to help people make informed decisions about the technology.
To measure AI knowledge, respondents were presented with eight statements about AI and asked to classify each as “accurate,” “inaccurate” or “not sure.” Three statements were correct; the rest were false. Participants were then scored based on the number of correct responses.
Respondents were categorized into three knowledge groups:
- Low knowledge (0-2 correct answers) – 27% of respondents
- Medium knowledge (3-4 correct answers) – 51% of respondents
- High knowledge (5-8 correct answers) – 23% of respondents
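The scoring scheme described above can be sketched in code. To be clear, the eight statements and the answer key below are invented placeholders (the report does not list the actual survey items here); only the three bucket thresholds come from the survey methodology.

```python
# Sketch of the survey's knowledge-scoring scheme, under the assumption
# noted above: the answer key is a placeholder, the 0-2 / 3-4 / 5-8
# bucket thresholds come from the report.

# Placeholder key: True = the statement is accurate, False = inaccurate.
ANSWER_KEY = [True, False, False, True, False, True, False, False]

def knowledge_group(responses):
    """Score one respondent and assign a knowledge bucket.

    `responses` holds eight entries: True ("accurate"),
    False ("inaccurate"), or None ("not sure"). A "not sure"
    answer never counts as correct.
    """
    correct = sum(
        1 for answer, key in zip(responses, ANSWER_KEY)
        if answer is not None and answer == key
    )
    if correct <= 2:
        return "low", correct
    elif correct <= 4:
        return "medium", correct
    return "high", correct
```

For example, a respondent who answers "not sure" to every statement scores 0 and lands in the low-knowledge group, while one who matches the key on all eight items lands in the high-knowledge group.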
The survey revealed a correlation between education, income and AI literacy. Among graduate degree holders, 29% demonstrated high AI knowledge, compared with 20% of those without a college degree. Twenty-seven percent of respondents earning more than $100,000 were classified as highly knowledgeable about AI, compared with 19% of those earning less than $25,000.
“We need to integrate AI literacy into school curricula, starting in K-12,” Ognyanova said. “Information literacy training must evolve alongside technological advancements. Right now, a third of respondents are familiar with basic AI facts, and that needs to change.”
This survey is part of an ongoing, long-term project aimed at monitoring public attitudes toward AI. Members of the team plan to conduct national surveys three times annually, with a sample of 5,000 respondents. This sample will be nationally representative, with additional oversampling of groups such as individuals under age 25, those over 65 and Hispanic and Black respondents.
Learn more about the Communication Department at the Rutgers School of Communication and Information on the website.
This story was originally published in Rutgers Today on February 10, 2025.
Image: Courtesy of the Rutgers University Office of Communications