Except when he gets it wrong, like Bernie winning Michigan.


"If Bernie Sanders were to defeat Hillary Clinton in Michigan’s Democratic primary, it would be “among the greatest polling errors in primary history,” our editor in chief, Nate Silver, wrote Tuesday evening when results started to come in. Sanders pulled it off, and now we’re left wondering how it happened. How did Sanders win by 1.5 percentage points when our polling average showed Clinton ahead by 21 points and our forecasts showed that Sanders had less than a 1 percent chance of winning?"


Nate Silver didn't get it wrong; the polls did. Nate does not forecast state primaries. He says they're too unpredictable.


The current crop of polls has Clinton winning by a six-point margin. That's my gut feeling as well. I think she'll win, and it will be on par with Obama vs. McCain.

But I'm not going to pretend that because Nate Silver says she has an 80% chance she actually has an 80% chance.


If you think your gut feeling is more accurate than Nate Silver's statistical model, you're delusional.
 
Yes, but a bookie doesn't care if <insert local sports team> *really* has a 75% chance of winning their next game. They set odds where they think 3/4 of people will take the favorite and 1/4 will take the underdog. If 75% of their customers pick the favorite, it doesn't matter who wins the game; they break even, so long as they gave the underdog 3-to-1 odds. They charge (usually) 5% extra to make a bet; that's where they make money. If more people than expected bet on one team or the other, they'll adjust the odds before the event.
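The balanced-book arithmetic above checks out; here is a quick sketch using just the example's own numbers (75 units wagered on the favorite, 25 on the underdog), not any real odds:

```python
# Checking the balanced-book example above: 75 units wagered on the favorite
# at 1:3 (risk 3 to win 1), 25 units on the underdog at 3:1 (risk 1 to win 3).
favorite_stakes = 75.0
underdog_stakes = 25.0
handle = favorite_stakes + underdog_stakes  # 100 units collected up front

# Favorite wins: return their stake plus 1/3 of it in winnings.
payout_if_favorite_wins = favorite_stakes + favorite_stakes / 3

# Underdog wins: return their stake plus 3x their stake in winnings.
payout_if_underdog_wins = underdog_stakes + underdog_stakes * 3

# Either way the bookie pays out exactly what was collected:
print(payout_if_favorite_wins, payout_if_underdog_wins, handle)  # 100.0 100.0 100.0
```

Either outcome pays out the full handle, so the bookie's profit comes entirely from the vig, not from predicting the game.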

That is not what Nate Silver does. He looks at polling data. If the data is flawed, his predictions will not be accurate. And anyway, how many times has he been wrong when giving a candidate odds that long? If he's correctly called 99 other elections at 100:1, then so what? It's like people being angry at the weatherman: "Well, you said there was a 90% chance of sunshine and it rained!"

People keep saying that he called 99 out of 100 states like it's some remarkable feat. There are four problems with that. First, only about 10 states are actually in play, so pretty much any pollster can get 80 out of 100 right in two consecutive elections. Second, of the swing states, half of them weren't really that close, so you probably had at least an 80% chance of being right on those. Third, there is huge correlation between even the coin toss states, so if you get one right, you're likely to get them all. Fourth, we're mostly only hearing about Silver because he did well. If he had done poorly and some guy named Gold or Platinum had gotten lucky instead, we'd probably be talking about them.

Bottom line, yes, he did well, but it doesn't prove much.
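The third point (correlation) is the crucial one, and a toy Monte Carlo makes it concrete. All the numbers below are made up for illustration, not taken from Silver's model: 40 safe states assumed certain, and 10 swing states that look dead even in the polls but share a single national polling error.

```python
import random

# Toy illustration of the correlation point (all numbers made up, not Silver's
# model): 40 safe states are assumed certain, and 10 swing states look dead
# even in the polls but share a single national polling error.
random.seed(1)

def simulate_states_called(trials=10_000, swing_states=10, state_noise=0.02):
    """Count states 'called' correctly when swing-state errors are correlated."""
    results = []
    for _ in range(trials):
        national_swing = random.gauss(0, 0.03)  # shared error, same in every swing state
        correct = 40  # safe states are essentially free
        for _ in range(swing_states):
            margin = national_swing + random.gauss(0, state_noise)
            # The polls read dead even, so the forecaster picks the same
            # candidate everywhere; the call is right whenever margin >= 0.
            correct += int(margin >= 0)
        results.append(correct)
    return results

calls = simulate_states_called()
# Because the errors are shared, "49 or 50 right" comes up often -- with ten
# independent coin flips, 9-or-more right would happen only ~1% of the time.
print(sum(c >= 49 for c in calls) / len(calls))
```

The point is only that correlated errors make near-perfect scores far more likely than treating each toss-up as an independent coin flip would suggest.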
 
Are those the only two events they've ever predicted?

We're not testing the accuracy of Nate Silver's predictions. We're testing the accuracy of a specific statistical model to predict a specific event.

That particular model has been used twice before to near perfect success.

That Nate Silver made some personal prediction about some unrelated event that proved to be inaccurate is irrelevant to the efficacy and accuracy of this statistical model.
 
Bookies absolutely have to know what they're doing when they set odds or they lose their asses. It's not just punditry, there's real money at stake.

And are you talking about Nate Silver? The one who completely blew it on Trump?


Nate is a statistician, not a pundit. At least not usually: he admits that he was just guessing about Trump. Don't confuse Nate the occasional pundit with Nate the world-class statistician. His predictive model for the U.S. presidential general election is incredibly sophisticated, and for the umpteenth time, it correctly picked the winner in all 50 states in 2012.
 
People keep saying that he called 99 out of 100 states like it's some remarkable feat. There are four problems with that. First, only about 10 states are actually in play, so pretty much any pollster can get 80 out of 100 right in two consecutive elections. Second, of the swing states, half of them weren't really that close, so you probably had at least an 80% chance of being right on those. Third, there is huge correlation between even the coin toss states, so if you get one right, you're likely to get them all. Fourth, we're mostly only hearing about Silver because he did well. If he had done poorly and some guy named Gold or Platinum had gotten lucky instead, we'd probably be talking about them.

Bottom line, yes, he did well, but it doesn't prove much.

What other pollsters or pundits can you name who matched or exceeded Silver's accuracy?
 
I would say that Nate Silver is making an educated bet using what he thinks is the best data and he's usually pretty close. However, there is no denying that his selection of criteria for his model is completely subjective.


I'll deny that. There is certainly some subjectivity involved in statistical model building, but to say that the process is "completely subjective" is more wrong than right. The elements to include in the model have largely been determined by an objective metric: how accurate the final model is in predicting actual election results.
 
Nate Silver didn't get it wrong; the polls did. Nate does not forecast state primaries. He says they're too unpredictable.

I don't know about Nate, but his website certainly forecasts state primaries. Are you claiming Nate wasn't involved in all the primary forecasts that are on his site?





If you think your gut feeling is more accurate than Nate Silver's statistical model, you're delusional.

I think a lot can happen between now and November. Silver is one of the best in the business but that doesn't make him infallible.
 
Again, I'm not even a gambler, but I know that poker is part chance (meaning odds do come into effect) and part skill ("gut" reactions and whatnot).

An election is clearly more akin to a straight up game of chance (roulette wheel) than a skill-based game like poker.

That's funny, I would compare it more to poker, where some factors are known and some are unknown. In a straight-up game of chance, the odds of winning never change. The odds in a political matchup change constantly, just like the odds of winning a poker hand change after each card is turned over.
 
Bookies absolutely have to know what they're doing when they set odds or they lose their asses. It's not just punditry, there's real money at stake.
In general, bookies establish odds based on anticipated wagering in order to get equal action, not based on predicted outcome. I don't know if this holds true for election wagering, but I suspect so. FWIW.
 
Sam Wang did better last election, getting 50/50 states, 10/10 senate races and the exact popular vote percentage. He has Clinton at an 85% chance of winning.

Okay, that's one...

sunmaster14's claim is that this level of accuracy is relatively easy to achieve. So I expect there to be many others who matched Silver's success rate.
 
I have a lot of respect for political polls, and for Silver's methods of weighting them. But the fact remains, his models can't properly account for effects that have not happened yet. His models can tell you with pretty good accuracy what the results of an election held today would be. He doesn't have much to go on as to how much polls can change over four months in an uncertain world made far more uncertain by an extremely unpredictable Republican candidate and a Democratic candidate who is being investigated by the FBI.


He actually has different models, including one for the candidates' chances in November (a forecast model) and one for what the candidates' chances would be if the election were held today (a now-cast model). The difference between the two is that the forecast model includes additional uncertainty due to the election being in the future. But you are correct that he has made no attempt to adjust his models specifically for Trump's idiosyncratic unpredictability or the FBI's investigation of Hillary.

His models aren't attempting to account for unpredictable future events, nor do they need to. They are based on the data as it currently exists. And as before, the results will be modified as that data changes.


Nate's forecast models do not account for specific unpredictable future events, an impossible feat by definition; but they do account for the inherent uncertainty in early polling. This uncertainty, which diminishes with time until the election, is accounted for in the model.
 
I don't know about Nate, but his website certainly forecasts state primaries. Are you claiming Nate wasn't involved in all the primary forecasts that are on his site?


Can you post a link to those primary forecasts? I might be wrong.


I think a lot can happen between now and November.


Indeed, but that uncertainty is built into the forecast model, although it's true that if this election is really less predictable than those on which his model was derived, his model will underestimate the uncertainty.
 
Not according to Nate's now-cast model: 77% chance of Hillary winning if the election were held today.
The now-cast had Hillary at a higher chance of winning than the regular forecast just a little while ago. I'm guessing the difference is one Rasmussen poll that has Trump at +4. That poll is an outlier and a 9-point swing from just a week ago. Rasmussen is a terrible pollster.
 
In general, bookies establish odds based on anticipated wagering in order to get equal action, not based on predicted outcome. I don't know if this holds true for election wagering, but I suspect so. FWIW.

That's true. I was thinking more about prediction markets.
 
Can you post a link to those primary forecasts? I might be wrong.





Indeed, but that uncertainty is built into the forecast model, although it's true that if this election is really less predictable than those on which his model was derived, his model will underestimate the uncertainty.

Here's the Michigan one, where 538 had Hillary forecast at 99%:

http://projects.fivethirtyeight.com/election-2016/primary-forecast/michigan-democratic/

ETA: For someone who likes to call people delusional, how did you end up thinking 538 doesn't do primary forecasts? There must be at least a dozen links to various 538 primary forecasts in this subforum.
 
The now-cast had Hillary at a higher chance of winning than the regular forecast just a little while ago. I'm guessing the difference is one Rasmussen poll that has Trump at +4. That poll is an outlier and a 9-point swing from just a week ago. Rasmussen is a terrible pollster.


He already adjusts for the bias of the Rasmussen poll (and others).

The candidates' chances should always be closer in the forecast than the now-cast, because the forecast model has more uncertainty built in. The less certain a model is about the election outcome, the closer its prediction should be to 50:50, which would imply complete uncertainty.
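A quick sketch of that point, with made-up numbers: treat the final margin as normally distributed around the polling lead, with the model's uncertainty as the standard deviation. Widening the uncertainty pulls the win probability toward 50:50, even though the lead is unchanged.

```python
from math import erf, sqrt

# Illustrative only: the lead and uncertainty values are made up, not 538's.
def win_probability(polling_lead, uncertainty):
    """P(actual margin > 0) when the margin ~ Normal(polling_lead, uncertainty)."""
    return 0.5 * (1 + erf(polling_lead / (uncertainty * sqrt(2))))

# Same 3-point lead, two levels of model uncertainty:
tight = win_probability(3, 4)   # a "now-cast"-style, lower-uncertainty model
wide = win_probability(3, 8)    # a "forecast"-style, higher-uncertainty model
print(round(tight, 3), round(wide, 3))  # the wider model is closer to 0.5
```

With no lead at all, both models would say exactly 50:50; the extra uncertainty only matters when there is a lead to discount.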
 
Here's the Michigan one, where 538 had Hillary forecast at 99%:

http://projects.fivethirtyeight.com/election-2016/primary-forecast/michigan-democratic/

ETA: For someone who likes to call people delusional, how did you end up thinking 538 doesn't do primary forecasts? There must be at least a dozen links to various 538 primary forecasts in this subforum.


You're right. He does forecast primaries; he just states that they are inherently harder to predict than the general election.
 
For the Michigan one in particular, check out the list of polls - none had Sanders winning, or were even particularly close. Some of the (usually) more reliable ones had total landslides. Now I'll have to go and read more about what the hell happened there, because that's really extreme... anyway, I wouldn't use that as any specific evidence against FiveThirtyEight or Silver since it doesn't seem like anyone really called it, but it does serve as a great reminder that 99% is not the same as 100%, and no polling is ever perfect.
 
