September 27, 2023

PU Professor and His Graduate Student Join List of Most Influential People in AI

By Donald Gilpin

Arvind Narayanan (Photo by Nick Donnoli, Princeton University)

Leading the discussion on one of the most hotly debated subjects in the news are a Princeton University professor and his graduate student, who have captured the attention of millions in their explorations of the dangers and potential benefits of artificial intelligence (AI).

Arvind Narayanan, a computer science professor and director of the Center for Information Technology Policy (CITP), and Sayash Kapoor, a computer science Ph.D. candidate at CITP, have been selected for Time magazine’s list of the 100 most influential people in artificial intelligence.

Narayanan and Kapoor are now collaborating on a book titled AI Snake Oil, due out next year, and they write a digital newsletter of the same name providing critical commentary on AI. Narayanan’s research focuses on the societal impact of digital technologies, especially AI, and he has co-taught a related course at Princeton on limits to prediction.

“We started our AI Snake Oil newsletter because it allows us to dissect false or misleading claims about AI in real time,” Kapoor wrote in an email. “Through it, we have been able to share our scholarship and analysis on AI with the public at large. Many false or misleading claims about AI are made every day, and we hope the newsletter has helped push back against some of the misleading narratives.”

About the AI 100, Time Editor-in-Chief Sam Jacobs wrote in the September 7 issue, “This group of 100 individuals is in many ways a map of the relationships and power centers driving the development of AI. They are rivals and regulators, scientists and artists, advocates and executives — the competing and cooperating humans whose insights, desires, and flaws will shape the direction of an increasingly influential technology.”

In an interview with Time, Narayanan noted that there is a great deal of demand in our society for “snake oil” or “bullsh-t.”

“Our thesis in the book is that broken AI is very appealing to broken institutions,” he said. “The reason that HR departments are desperate for AI that claims to predict which candidates are going to be good is because they’re getting hundreds, perhaps thousands, of applications per open position. And the idea of evaluating them thoroughly in the way that one would hope for is just not working. And that’s because the hiring processes themselves seem to be broken.”

He added, “A lot of people who are pushing hype are not just fooling others, they’re fooling themselves.”

Both Narayanan and Kapoor suggest that AI has been a net positive for society. “As an engineer by training, I think AI has been super useful and the impact can hardly be overstated,” said Kapoor. “I think the reason a lot of what we write on the blog seems to be critical is because there is a pretty big vacuum in the space of people who have the capacity to respond to AI critically, but also have the incentive. If either of us were in the industry, or if we were working at one of these companies, it would be much harder for us to talk about the harmful impacts of AI.”

Narayanan added, “I’ve been working on tech accountability for 15-plus years … I think tech in general can and largely has been a force for good in this world. But that can only continue to happen if there are strong countervailing forces that try to align incentives between companies and the rest of society. So it is in that sense that I do all this work.”

In addition to Narayanan and Kapoor, four Princeton University alumni are featured in the Time 100 AI list: Dario Amodei, a Ph.D. alumnus and CEO and president of Anthropic; Fei-Fei Li, a 1999 graduate and professor of computer science at Stanford University; Eric Schmidt, a 1976 graduate, co-founder of Schmidt Futures, and former CEO of Google; and Richard Socher, a 2008-2009 graduate student and the CEO and founder of You.com.