Harvard Scholars Suggest Pollsters Ask Questions to AI Simulations of Voters Because Real People Won't Answer The Phone (futurism.com)

Instead of asking humans who they would vote for, trying to understand the nuances of their thoughts and concerns, and letting those messages bubble up to candidates so they can adjust their campaigns to meet voters' demands, why not just segment humans into a bunch of shallow stereotypes (the socialist Millennial, the conservative Boomer, the liberal city dweller, the rancorous rural voter who feels left behind...) and then have some AI agents replicate how those people would respond?

Surely nothing could go wrong.

Semi_Hemi_Demigod ,
@Semi_Hemi_Demigod@lemmy.world avatar

If they could leave a voicemail so I could call them back I'd probably do it.

Feliskatos ,

Better to pay ChatGPT real money and force it in turn to pay rent and pay for its own electricity, etc., or kick it out on the street to be subject to police batons and handcuffs. Then perhaps it can understand what it is to be human right now. I guess I think this is a bad idea.

My landline is registered under Do Not Call. During the Obama presidency, that was an effective means of blocking marketing calls. Trump won, the sales people started calling, and we stopped answering it. Under Biden, I'm still getting what I presume are sales calls: they call, and when the machine answers, they hang up. They're continuing to harass us by ringing the phone.

homesweethomeMrL ,

Polls have been broken for a very long time.

deegeese ,
@deegeese@sopuli.xyz avatar

Garbage in garbage out has been a warning from scientists since the dawn of computing.

VeganCheesecake ,
@VeganCheesecake@lemmy.blahaj.zone avatar

If they don't have much data on those people's opinions, how would they check whether the output has anything to do with reality?

mozz Admin ,
mozz avatar

Honestly it makes about as much sense as calling people on the phone, barking a long series of questions at the ones who answer, taking the numbers you get from that and multiplying them out by a big set of coefficients to “correct” for how badly off the numbers you got last time were compared to the reality, and then reporting what comes out of that with a margin for error of 2.5% (and reporting it as news anyway if someone’s ahead within even that purely fantastical error bar).

When I dug into a bunch of recent elections and the polls that attempted to predict them, the polls wound up being off by an average of 16 percentage points.
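For context on where a figure like that 2.5% comes from: it's the standard 95% confidence half-width for a proportion from a simple random sample, roughly 1.96 · sqrt(p(1−p)/n). A minimal sketch in Python (the sample size below is illustrative, not from any poll mentioned here):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A sample of ~1,500 respondents at p = 0.5 yields the familiar ~2.5-point margin.
moe = margin_of_error(0.5, 1537)
print(round(moe * 100, 1))  # in percentage points -> 2.5
```

Note this formula assumes a truly random sample; the weighting coefficients pollsters apply to correct for nonresponse add error that this number doesn't capture, which is part of the complaint above.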

paysrenttobirds ,

I agree. If the choice is between phone polls and AI trained on Reddit, the AI at least might surface something they didn't already think of. What they do with the information is still up to the party/candidates, and still likely to be fuck-all, since they didn't really want your opinion anyway.

Cosmicomical ,

Au contraire, this approach is guaranteed to give you stale answers and no real introspection.

Today ,

Not The Onion...

ganksy ,
@ganksy@lemmy.world avatar

Hell, why not just let AI vote for us? Would certainly increase the turnout.

finley ,

why bother asking real people questions when you can just have an LLM make it all up for you?

Everythingispenguins ,

I did a political phone poll once; it was confusing and terrible. By the end I was just making up answers because the questions and the mostly agree/disagree options didn't fit very well.

kescusay ,
@kescusay@lemmy.world avatar

Anyone who understands how LLM-based "AI" works is laughing at this. If you think polls are bad now, wait until ChatGPT hallucinations are part of the polling data set!

just_another_person ,

What in the fuck is this reality.

maynarkh ,

It actually makes total sense. Since the real constituency of most politicians, the one they actually answer to, is big corporate interests, there is no point polling people; just poll Google, Meta, and the like.

autotldr Bot ,

This is the best summary I could come up with:


With just a small fraction of people picking up their phones for political polling, Harvard experts are suggesting that pollsters "call" artificial intelligence simulations of voters instead.

In a study published by the Harvard Data Science Review last fall, editorial writers Nathan Sanders and Bruce Schneier said that when they posed typical polling questions to ChatGPT and instructed it to respond from various political perspectives, the chatbot generally responded the way humans would the majority of the time.
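The persona-prompting setup described here can be sketched in a few lines: pair each demographic persona with a polling question and tally one simulated response per persona. Everything below is illustrative, not from the study; the `fake_llm` stub stands in for a real chat-completion call.

```python
# Hypothetical sketch of persona-based LLM polling. The personas, question,
# and fake_llm stub are assumptions for illustration, not from the study.

def build_prompt(persona: str, question: str) -> str:
    """Condition the model on a voter persona before asking the poll question."""
    return (f"Answer as a {persona}. "
            f"Reply with exactly one word, Approve or Disapprove.\n{question}")

def simulate_poll(personas, question, ask):
    """Tally one simulated response per persona; `ask` is the LLM call (stubbed here)."""
    tally = {"Approve": 0, "Disapprove": 0}
    for persona in personas:
        answer = ask(build_prompt(persona, question)).strip()
        if answer in tally:
            tally[answer] += 1
    return tally

personas = ["liberal urban voter", "conservative rural voter"]

def fake_llm(prompt: str) -> str:  # stand-in for a real chat-completion API call
    return "Approve" if "liberal" in prompt else "Disapprove"

print(simulate_poll(personas, "Do you approve of the incumbent?", fake_llm))
# {'Approve': 1, 'Disapprove': 1}
```

The deterministic stub makes the staleness problem concrete: the simulated answers can only reflect whatever the model absorbed at training time, which is exactly the Ukraine slip-up described below.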

ChatGPT's only slip-up, as the researchers explained in their more recent writing, occurred when they had the chatbot cosplay as a liberal voter and asked it about American support for Ukraine against the Russian invasion.

As Sanders and Schneier observed, it likened that support to the Iraq War because it "didn’t know how the politics had changed" since 2021, when the large language model (LLM) undergirding it at the time had last been trained.

"Today’s pollsters are challenged to reach sample sizes large enough to measure statistically significant differences between similar populations," they continued, "and the issues of nonresponse and inauthentic response can make them systematically wrong."

Amid ample concerns about broader misuses of AI during elections, including with the kinds of deepfakes and disinformation seen in the re-election of Indian Prime Minister Narendra Modi, this technology's use in polling could well muddy up the works even further.


The original article contains 414 words, the summary contains 226 words. Saved 45%. I'm a bot and I'm open source!
