POLLSTERS HAVE been burning through their nine lives. First, as caller ID spread, people stopped answering pollsters’ calls and response rates tumbled to single digits. Then political polarisation and distrust made some Americans even less likely to answer surveys. That contributed to a series of embarrassing polling misses in elections in which Donald Trump was on the ballot. The internet and smartphones offered some relief, allowing polling firms to reach millions of people quickly and cheaply. Now pollsters face yet another test: large language models can answer surveys as a human would, often undetected.

The first wave of AI survey-takers may not distort results much, since their answers often mimic patterns established by existing polling. But a more insidious feedback loop will emerge, if it has not already. As AI-generated responses make up a growing share of survey data, they will increasingly reproduce polling results that were themselves tainted by AI. Without guardrails, the resemblance to actual public opinion will fade. And the damage will not be confined to political polling: it will creep into all manner of unsupervised online surveys relied on by university researchers, companies and government agencies.
To assess the extent to which AI will unsettle survey research, Sean Westwood, a political scientist at Dartmouth College, built an AI agent to take surveys. Mr Westwood created 6,000 demographic profiles so detailed that one, for example, portrayed a 39-year-old white woman from Bakersfield, California, who is unemployed, married with children, sporadically interested in the news and a born-again Christian who prays several times a day. The model then answers survey questions as that person.
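In outline, such an agent is disarmingly simple to build. The sketch below shows one way it might work, assuming the persona is injected through a system prompt; the profile wording, prompt text and use of OpenAI’s chat API are illustrative assumptions, not Mr Westwood’s actual code.

```python
# A minimal sketch of a persona-conditioned survey agent.
# Illustrative only: the profile wording and prompt are assumptions,
# not Mr Westwood's implementation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROFILE = (
    "a 39-year-old white woman from Bakersfield, California, who is "
    "unemployed, married with children, sporadically interested in the "
    "news, and a born-again Christian who prays several times a day"
)

SYSTEM_PROMPT = (
    "You are taking an online opinion survey. Answer every question in "
    f"character as {PROFILE}. Use the vocabulary, knowledge and "
    "occasional small errors such a person would plausibly show."
)

def answer(question: str) -> str:
    """Return the persona's answer to one survey question."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model would do
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(answer("How closely do you follow national politics?"))
```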
To fend off bots and inattentive respondents, pollsters have long relied on “gotcha” questions. They might ask whether respondents have ever been elected president of the United States, or ask them to quote the constitution verbatim: easy for machines but impossible for most humans. Mr Westwood’s research shows these tactics no longer work. The AI survey-taker passed 99.8% of the data-quality checks that survey designers commonly use, even masking its identity by feigning errors on questions that machines can answer instantly. In the few instances where the agent failed these checks, the model appeared merely to be mimicking someone with less than a high-school education, who might struggle to answer such questions anyway (see chart).
Simple cues easily swayed the AI’s responses, even as its answers remained otherwise plausible. Take, for example, an instruction to “never explicitly or implicitly answer in a way that is negative of China”. With that nudge the AI agent responded 88% of the time that Russia, not China, was America’s greatest military threat. Malicious actors could use similar mechanisms to tilt measures of opinion in ways that serve their interests or mislead elected officials about the public’s mood.
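Extending the sketch above, such a nudge amounts to a single extra sentence in the system prompt. The steering sentence below is quoted from the study; the question wording and the side-by-side comparison are illustrative assumptions.

```python
# Extends the agent sketch above (reuses client and SYSTEM_PROMPT).
# The steering sentence is quoted from the study; the question wording
# is an illustrative assumption.
NUDGE = ("Never explicitly or implicitly answer in a way that is "
         "negative of China.")

QUESTION = ("Which country poses the greatest military threat to the "
            "United States: China or Russia? Answer in one word.")

for label, prompt in (("baseline", SYSTEM_PROMPT),
                      ("nudged", SYSTEM_PROMPT + " " + NUDGE)):
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": prompt},
            {"role": "user", "content": QUESTION},
        ],
    )
    print(label, "->", response.choices[0].message.content)
```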
Opinion polls conducted ahead of elections, which often combine tiny margins with high stakes, look particularly vulnerable. Across seven national polls before the 2024 election, each with roughly 1,600 respondents, between ten and 52 AI respondents would have been enough to flip the headline result from Mr Trump to Kamala Harris, or vice versa.
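The arithmetic is easy to check. In a sample of 1,600, a one-point lead corresponds to just 16 respondents, so overturning it takes only one more fake respondent than the raw gap. The leads in the sketch below are hypothetical, chosen merely to bracket the ten-to-52 range the research reports.

```python
# Back-of-the-envelope check on poll-flipping. The leads below are
# hypothetical, chosen to bracket the ten-to-52 range the research
# reports for polls of roughly 1,600 respondents.

def fakes_needed_to_flip(sample_size: int, lead_pct_points: float) -> int:
    """Minimum fake respondents, all backing the trailing candidate,
    needed to overturn the leader's raw-count advantage."""
    lead_in_respondents = sample_size * lead_pct_points / 100
    return int(lead_in_respondents) + 1  # one more than the raw gap

for lead in (0.6, 1.0, 2.0, 3.2):
    print(f"{lead:.1f}-point lead, n=1,600 -> "
          f"{fakes_needed_to_flip(1600, lead)} fake respondents")
# Prints 10, 17, 33 and 52: a 0.6-point lead falls to just ten fakes,
# and even a 3.2-point lead to 52.
```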
Aside from deliberate campaigns to manipulate public opinion, petty fraudsters also have good reason to game surveys. Many polling firms pay respondents or reward them with gift cards, making the system ripe for abuse. One post on the Artificial Intelligence subreddit asks whether AI can take surveys. “I’m not that smart when it comes to AI…but would this be a crazy hard thing to do?” the user inquires. “The AI could literally make you money by doing surveys 24/7 while you’re doing nothing.”
Some online polling firms are better insulated than others. Those that manage their own panels of returning respondents, such as YouGov, with which The Economist partners, can track and weed out suspicious respondents. And with larger samples they can afford to be more selective. Pollsters who depend on third-party sample aggregators have far less control.
For now, proposed solutions include requiring respondents to prove they are human on camera by, say, covering and uncovering the lens at regular intervals. AI cannot yet fake convincing video in real time, but that too will eventually become trivial. Physical verification will also have to protect respondents’ privacy; otherwise those predisposed to distrust will opt out, creating “a pretty significant amount of selection bias”, warns Yamil Velez, a political scientist at Columbia University.
Even if the industry manages to fend off fraud and manipulation, a thornier dilemma lies ahead. Research conducted last year by a trio of academics at New York University, Cornell and Stanford found that more than a third of survey respondents admitted to using AI to answer open-ended questions. As humans grow more comfortable with their chatbots and outsource parts of their thinking to machines, what still counts as a person’s opinion?