I liked this
essay about polls, prediction markets, and prediction models.
Re: people saying prediction markets are accurate because people put money on the line: the thing is, money is worth less to some people than to others. Why does a poker player chase an inside straight on the river when the other remaining player is making them pay far more than the pot odds on their roughly 4-in-46 draw? Sometimes they don't know what they're doing, but sometimes they just do not care about paying $100 for a 4/46 chance of winning $200 total, because it's worth seeing the last card, or worth the time in the spotlight, or worth seeing how people react. Maybe they value money so little because they have an inheritance, or because they have very little to look forward to in life. Either way, $100 is a lot cheaper to them than to the hypothetical guy holding the high pair in this scenario.
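The arithmetic behind that call is worth spelling out. A quick sketch, using the numbers above (4 outs among the 46 unseen cards after two hole cards and four board cards, a $100 call, a $200 total payout — all illustrative):

```python
# Hypothetical numbers from the hand described above.
cost = 100           # price of the call
outs, unseen = 4, 46 # inside-straight outs among unseen cards before the river
pot_if_win = 200     # total collected if the straight comes in

p_hit = outs / unseen
ev = p_hit * pot_if_win - cost  # pay $100 regardless; collect $200 with prob p_hit
print(f"P(hit) = {p_hit:.3f}, EV of calling = ${ev:.2f}")
```

The expected value is strongly negative, which is exactly the point: the caller isn't making a pricing mistake so much as valuing the $100 differently than the math does.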
tl;dr Putting your money where your mouth is isn't that big a deal for some people, regardless of the correctness of their mouth.
But the prices they deliver are influenced by other opaque methodologies. In recent weeks on Polymarket, for example, a mysterious trader named Fredi9999 — along with a few other, possibly linked accounts — has taken an enormously outsized pro-Trump position, essentially dictating the former president’s price. By way of explanation, Fredi9999 has insisted that he has “no political preference” and simply sees Trump as a big favourite. But internet sleuths and academic experts have suggested ulterior motives. Perhaps these traders have inside information, or are trying to affect public perception about the candidates. As with polling, it is always tricky to measure a thing without changing that thing.
Here I'm going to admit: I wholly believed Nate Silver's prediction model in 2008, despite not being told at all how it worked. Then, of course, the averages caught up to him, and he was wrong a bunch of times. Though of course he said he wasn't wrong, because he had only specified a probability, which may or may not pan out. Conveniently, that's not falsifiable, and meanwhile he was riding on people who treated his probabilities as traditional predictions. If Nate Silver had said in 2016 that Trump had a 1% chance of winning, he could just say, hey, a 1% chance pans out about 1 in every 100 times.
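The unfalsifiability point can be made concrete with a toy simulation: a well-calibrated 1% forecast really does come true about 1 time in 100, so a single outcome tells you almost nothing about the forecaster. The catch is that checking calibration requires many repeated forecasts, and presidential elections supply one data point every four years. (The numbers below are illustrative, not anyone's actual forecasts.)

```python
import random

random.seed(0)
p = 0.01           # stated probability of the "upset" outcome
trials = 100_000   # hypothetical: many parallel elections with this forecast

# Simulate a perfectly calibrated forecaster: the upset occurs with
# exactly the stated probability each time.
upsets = sum(random.random() < p for _ in range(trials))
rate = upsets / trials
print(f"observed upset rate: {rate:.4f}")  # close to the stated 0.01
```

Across 100,000 imaginary elections the observed rate converges to the stated 1%; across one real election, any single result is consistent with the forecast.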
Sayre’s law: the fighting is so bitter because the stakes are so low. No matter how much we pore over the polls and the results, we will never know exactly why Harris or Trump won the 2024 election, and no probabilistic forecast is ever strictly “wrong”. FiveThirtyEight infamously gave Trump only a 29 per cent chance in 2016 — but 29 per cent chances happen all the time. And each polls-based model is driven by assumptions and opaque methods all its own, with just a single night every four years to check them against.
This is interesting to think about:
Consider an extreme hypothetical, not too ridiculous given the present scarcity of undecided voters, where the real world and the electoral world are entirely unlinked: precisely half the country only ever votes for one party and half only ever for the other. The only variance in polls, then, comes from statistical noise or sampling error; real-world events are electorally meaningless; and elections are decided by a hair’s breadth, determined by turnout and weather patterns in the Milwaukee suburbs, say. Telling empirical or predictive stories here is like inventing pictures in the static on a broken television screen.
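The quoted hypothetical is easy to simulate. With an electorate fixed at exactly 50/50, repeated polls of 1,000 voters still bounce around by a point or two purely from sampling error — the "static on a broken television screen". (Poll count and sample size below are arbitrary choices for illustration.)

```python
import random
import statistics

random.seed(1)
n, num_polls = 1000, 500  # hypothetical: 500 polls of 1,000 voters each

# Every voter is a fair coin flip: the electorate is exactly 50/50,
# so all variation in the poll results is pure sampling noise.
results = [sum(random.random() < 0.5 for _ in range(n)) / n
           for _ in range(num_polls)]

print(f"mean share = {statistics.mean(results):.3f}")
print(f"poll-to-poll stdev = {statistics.pstdev(results):.4f}")
```

Theory says the standard deviation of a single poll here is sqrt(0.25 / 1000) ≈ 1.6 points, so swings of ±3 points between polls are routine even though nothing in the "real world" has changed at all.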