According to election forecasts, the number was closer to 70 percent. So that becomes an argument in favor of making further predictions.
Now, what counts as a “good” forecast? If we go back to 2016, Nate Silver’s model, as you say, gave Trump a 30 percent chance of winning, while other models put his odds closer to 1 percent or low single digits. The sense is that because Trump won, Nate Silver was “right.” But of course we can’t really say that. If you say there’s a 1 in 100 chance of something happening and it happens, it could mean you underestimated the odds, or it could simply mean the 1-in-100 event occurred.
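One outcome can’t settle which forecast was better, but you can at least compare how likely each model made the result that actually happened. A minimal sketch, using the 30 percent and 1 percent figures from the discussion above (the likelihood-ratio framing is my illustration, not something either forecaster published):

```python
# Compare two probabilistic forecasts of the same event after it happens.
# One model gave the outcome a 30% chance; a rival model gave it ~1%.
p_model_a = 0.30
p_model_b = 0.01

# Likelihood ratio: how many times more probable the observed outcome
# was under model A than under model B.
likelihood_ratio = p_model_a / p_model_b
print(f"likelihood ratio: {likelihood_ratio:.0f}x")  # 30x
```

A 30-to-1 ratio from a single event is suggestive, but it is nowhere near proof that either model was well calibrated.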
This is the problem with figuring out whether election forecasting models are well calibrated to real events. Since 1940, we’ve had only about 20 presidential elections in our sample. So there is no real statistical justification for an exact probability here: 97 percent versus 96 percent, say. With such a limited sample, it’s extremely hard to know whether these models are calibrated to within 1 percentage point. The whole exercise is far more uncertain than the press, I think, would like consumers of surveys and forecasts to believe.
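To see why roughly 20 elections can’t distinguish a 97 percent forecast from a 96 percent one, compare that 1-point gap to the sampling error of an observed hit rate. A back-of-the-envelope sketch (my illustration, using the standard binomial error formula):

```python
import math

n_elections = 20  # roughly the number of presidential elections since 1940

for true_rate in (0.96, 0.97):
    # Standard error of the observed hit rate over n independent trials.
    se = math.sqrt(true_rate * (1 - true_rate) / n_elections)
    print(f"true rate {true_rate:.2f}: observed rate varies by about ±{se:.3f}")

# The two rates differ by 0.01, but each observed hit rate wobbles by
# roughly ±0.04, so 20 elections cannot tell the models apart.
```

In other words, the noise from the tiny sample is about four times larger than the difference you would be trying to detect.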
In your book you talk about Franklin Roosevelt’s pollster, an early polling genius – but even his career eventually went up in flames, right?
This guy, Emil Hurja, was Franklin Roosevelt’s top pollster and election forecaster. He developed the first tracking poll. A truly fascinating figure in the history of polling. He’s insanely accurate at first. In 1932 he predicted that Franklin Roosevelt would win by 7.5 million votes, even though others predicted that Roosevelt would lose. Roosevelt wins by 7.1 million votes. So Hurja is better calibrated than the other pollsters of the time. But then he flops in 1940, and later he’s basically only as accurate as your average pollster.
In investing, it’s difficult to beat the market over a long period of time. It’s the same with surveys: you need to constantly rethink your methods and assumptions. Although Emil Hurja was dubbed “the wizard of Washington” and “the crystal gazer of Crystal Falls, Michigan” early on, his record slips over time. Or maybe he just got lucky at the start. It’s hard to say in hindsight whether he really was such a brilliant forecaster.
I mention this because – well, I’m not trying to scare you, but your biggest mistake may still be ahead of you.
That’s sort of the lesson here, and I want people to think about it: just because the polls were biased in one direction in the last few elections doesn’t mean they will be biased the same way, for the same reasons, in the next few. The smartest thing we can do is read every single survey with an eye to how the data was generated. Are the questions phrased correctly? Does this poll reflect American demographic and political trends? Is this a reputable outlet? Is there anything in the political environment that might cause Democrats or Republicans to answer phone calls or online polls at higher or lower rates than the other party? You have to think through all of these possible sources of error before accepting the data. So that’s an argument for treating polls with more uncertainty than we’ve treated them with in the past. I think that’s a pretty self-evident conclusion from the last few elections. But more important is how pollsters arrive at their estimates. Ultimately, these are uncertain estimates; they are not the ground truth about public opinion. And that’s how I want people to think about them.
https://www.wired.com/story/2022-midterms-polls-g-elliott-morris-q-a/ Have Pollsters Cleaned Up Their Act in Time for the Midterms?