In an online chat session a week after the 2012 election, Silver commented: "As tempting as it might be to pull a Jim Brown/Sandy Koufax and just mic-drop/retire from elections forecasting, I expect that we'll be making forecasts in 2014 and 2016." Trump outperformed his national polls by only 1 to 2 percentage points in losing the popular vote to Clinton, making them slightly closer to the mark than they were in 2012. If you go back and check our coverage, you'll see that most of these points are things that FiveThirtyEight (and sometimes also other data-friendly news sites) raised throughout the campaign. On Nov. 1, Karen Tumulty and Paul Kane described how Clinton's email problems — brought back to life by the Comey letter — were expanding Republicans' strategic options. Bloomberg often provided good reporting on Trump's data operations — taking them more seriously than other news outlets — including an Oct. 27 profile of life inside the Trump "bunker." Not every article from The New York Times's political desk was a misfire.
It mostly contradicts the way they covered the election while it was underway (when demographics were often assumed to provide Clinton with an Electoral College advantage, for instance). The first half will cover what I view as technical errors, while the second half will fall under the heading of journalistic errors and cognitive biases. Ground rule No. 1: These articles will focus on the general election. Meaning: coverage of campaign tactics and the Electoral College, polls and forecasts, demographics and other data, and the causes of Trump's eventual defeat of Hillary Clinton. I'd also argue that data journalists are increasingly making some of the same non-analytical errors as traditional journalists, such as using social media in a way that tends to suppress reasonable dissenting opinion. That may still largely be true for local reporters, but at the major national news outlets, campaign correspondents rarely stick to just-the-facts reporting ("Hillary Clinton held a rally in Des Moines today"). While Nate Silver doesn't spell it out on his site, he appears to be using either a linear regression or a logistic regression. Obviously, I'm mostly taking a critical focus here, but in the footnotes you can find a list of examples of outstanding horse-race stories — articles that sagely used reporting and analysis to scrutinize the conventional wisdom that Clinton was the inevitable winner.7 But also, the Times is a good place to look for where coverage went wrong. (If Clinton had won Michigan and Wisconsin, she'd still have only 258 electoral votes.4 To beat Trump, she'd have also needed a state such as Pennsylvania or Florida, where she campaigned extensively.) To be clear, if the polls themselves have gotten too much blame, then misinterpretation and misreporting of the polls is a major part of the story.
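The footnoted arithmetic in that parenthetical is easy to check. A quick sketch, using the official 2016 electoral-vote allocations (Clinton's actual total was 232):

```python
# Verify the footnote's arithmetic: flipping Michigan and Wisconsin alone
# would have left Clinton short of the 270 votes needed to win.
CLINTON_ACTUAL = 232  # Clinton's 2016 electoral-vote total
EV = {"Michigan": 16, "Wisconsin": 10, "Pennsylvania": 20, "Florida": 29}
NEEDED = 270

with_mi_wi = CLINTON_ACTUAL + EV["Michigan"] + EV["Wisconsin"]
print(with_mi_wi, with_mi_wi >= NEEDED)           # 258 False

# Adding either Pennsylvania or Florida closes the gap:
print(with_mi_wi + EV["Pennsylvania"] >= NEEDED)  # True
print(with_mi_wi + EV["Florida"] >= NEEDED)       # True
```

Which is the point of the parenthetical: the two "neglected" Rust Belt states were necessary but not sufficient.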
After Trump's victory, the various academics and journalists who'd built models to estimate the election odds engaged in detailed self-assessments of how their forecasts had performed. Not all of these assessments were mea culpas — ours emphatically wasn't (more about that in a moment) — but they at least grappled with the reality of what the models had said.2 For other detailed reflections, I'd recommend my colleague Clare Malone's piece on what Trump's win in the primary told us about the Republican Party, and my article on how the media covered Trump during the nomination process. He makes the case for either a large or small impact, and personally leans toward a small one; the letter dropped her lead in swing states from 4.5 points to just 1.7 points a couple of days before the election. But you couldn't really pretend that you'd put Trump's chances at 40 percent instead. Perhaps the biggest myth is traditional journalists' claim that they weren't making predictions about the outcome. Nate Silver, a statistician who got his start as a baseball stats whiz after college, put himself on the map by correctly predicting the outcomes of all but one state in the 2008 presidential election. Some of the models were based only on the past few elections, ignoring earlier years, such as 1980, when the polling had been way off. While our model almost never5 had Trump as an outright favorite, it gave him a much better chance than other statistical models, some of which had him with as little as a 1 percent chance of victory. The criticism is ironic given that many stories during the campaign heralded the Clinton campaign's savviness, while skewering Trump for having campaigned in "solidly blue" states such as Michigan and Wisconsin.
While FiveThirtyEight's final "polls-only" forecast gave Trump a comparatively generous 3-in-10 chance (29 percent) of winning the Electoral College, it was somewhat outside the consensus, with some other forecasts showing Trump with less than a 1-in-100 shot. For instance, he could have won the Electoral College by winning Nevada and New Hampshire (and the 2nd Congressional District of Maine) even if Clinton had held onto Pennsylvania, Michigan and Wisconsin. But the result was not some sort of massive outlier; on the contrary, the polls were pretty much as accurate as they'd been, on average, since 1968. Nate Silver argues that a story that was at the top of the news for six of the seven days following the Oct. 28 letter clearly had an impact on Clinton's numbers. The gap was about 3 points in 2016. It's much easier to blame the polls for the failure to foresee the outcome, or the Clinton campaign for blowing a sure thing. His name is not Nate Silver or Sam Wang or Nate Cohn. Silver has spoken in the past about how his forecasts would anger campaign sources, including some in the Romney camp during the 2012 election. I think it's important to single out examples of better and worse coverage, as opposed to presuming that news organizations didn't have any choice in how they portrayed the race, or bashing "the media" at large.
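One standard way to compare those probabilities after the fact is a proper scoring rule such as the Brier score, which penalizes a forecast by its squared distance from the outcome. The sketch below uses the 29 percent and 1-in-100 figures quoted above; it is an illustration of the scoring rule, not a claim about how the independent evaluations were actually conducted:

```python
def brier(p: float, outcome: int) -> float:
    """Brier score for one binary event: (p - outcome)^2, lower is better."""
    return (p - outcome) ** 2

TRUMP_WON = 1  # scoring against the actual result

print(round(brier(0.29, TRUMP_WON), 4))  # 0.5041 -- the 29 percent forecast
print(round(brier(0.01, TRUMP_WON), 4))  # 0.9801 -- a 1-in-100 forecast
```

On a single event both forecasts take a penalty, but the near-certain one is punished far more heavily — which is roughly what "least inaccurate" means in practice.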
But the answers are potentially a lot more instructive for how to cover Trump's White House and future elections than the ones you'd get by simply blaming the polls for the failure to foresee the outcome. It puts a fair amount of emphasis on news events such as the Comey letter, which leads to questions about how those stories were covered. It looks similar for Biden — around a 3-point gap. Moreover, we "leaned into" this view in the tone and emphasis of our articles, which often scolded the media for overrating Trump's chances. But it isn't as though Trump lucked out and just happened to win in exactly the right combination of states. It's a somewhat fuzzy distinction, but important for what lessons might be drawn from them. Still, when Democrats saw Trump win states like Florida and Ohio after Biden had jumped out to early leads, it undoubtedly brought back memories of the 2016 election. But they won't be easy to correct unless journalists' incentives or the culture of political journalism change. Since the logistic regression is a better choice, I'll assume he is using that. By contrast, some traditional reporters and editors have built a revisionist history about how they covered Trump and why he won. This is the question I've spent the past two to three months thinking about. And if almost everyone got the first draft of history wrong in 2016, perhaps there's still time to get the second draft right. So here's how we'll proceed. On Election Day, Trump's chances were 18 percent according to betting markets and 11 percent based on the average of six forecasting models tracked by The New York Times, so 15 percent seems like a reasonable reflection of the consensus evidence.
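For readers unfamiliar with the model being attributed to Silver here: logistic regression maps a continuous input, such as a candidate's polling margin, onto a probability between 0 and 1. The sketch below is purely illustrative — the coefficients are made-up placeholders, and nothing about FiveThirtyEight's actual model is being claimed:

```python
import math

def win_probability(margin: float, intercept: float = 0.0,
                    slope: float = 0.25) -> float:
    """Logistic curve: P(win) = 1 / (1 + exp(-(intercept + slope * margin))).

    The intercept and slope are hypothetical values chosen only to
    illustrate the functional form, not coefficients fitted to real polls.
    """
    z = intercept + slope * margin
    return 1.0 / (1.0 + math.exp(-z))

print(win_probability(0.0))             # a tied race maps to 0.5
print(round(win_probability(4.0), 2))   # a 4-point lead -> about 0.73
print(round(win_probability(-4.0), 2))  # symmetric for a 4-point deficit
```

Unlike a linear regression, the output is always a valid probability, which is why it is the more natural choice for modeling a win/lose outcome.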
That's because we spent a lot of time last spring and summer reflecting on the nomination campaign. Instead, it's increasingly common for articles about the campaign to contain a mix of analysis and reporting and to make plenty of explicit and implicit predictions. The focus on conventional journalism in this article is not meant to imply that data journalists got everything right, however. We'll release these a couple of articles at a time over the course of the next few weeks, adding links as we go along. Meanwhile, he beat his polls by only 2 to 3 percentage points in the average swing state.3 Certainly, there were individual pollsters that had some explaining to do, especially in Michigan, Wisconsin and Pennsylvania, where Trump beat his polls by a larger amount. As editor-in-chief of FiveThirtyEight, which takes a different and more data-driven perspective than many news organizations, I don't claim to speak to every question about how to cover Trump. On Friday at noon, a Category 5 political cyclone that few journalists saw coming will deposit Donald Trump atop the Capitol Building, where he'll be sworn in as the 45th president of the United States. The tone and emphasis of our coverage drew attention to the uncertainty in the outcome and to factors such as Clinton's weak position in the Electoral College, since we felt these were misreported and neglected subjects. After all, having made his reputation as a statistical wunderkind by predicting 49 states correctly in the 2008 race, Silver called five states wrong in the 2016 election, assuming Hillary Clinton would end up with 302 electoral votes (she got 232), writes Nathan J. Robinson. We're forecasting the election with three models. I want to lay down a few ground rules for how this series of articles will proceed — but first, a few words about FiveThirtyEight's coverage of Trump. Something like the opposite was true in the general election, in our view.
Trump made a mockery of the predictions of all the erudite analytical election forecast modelers. And at several key moments they'd also shown a close race. But for better or worse, what we're saying here isn't just hindsight bias. This average reflects some states (such as Wisconsin) where Trump beat his polls by more than 2.7 points, along with others (such as Nevada) where Clinton beat her polls. We even got into a couple of very public screaming matches with people who we thought were unjustly overconfident in Trump's chances. Traditional journalists, as I'll argue in this series of articles, mostly interpreted the polls as indicating extreme confidence in Clinton's chances, however. Some people might conflate logistic regression and a binomial GLM with a logistic link; for binary outcomes they are in fact the same model. Most of these mistakes were replicated by other mainstream news organizations, and also often by empirically minded journalists and model-builders. In the week leading up to Election Day, Clinton was only barely ahead in the states she'd need to secure 270 electoral votes. Technically speaking, Trump ended the day on July 30 with a 50.1 percent chance of winning in our polls-only forecast. That is, they're highly relevant for forecasting future presidential and midterm elections, but probably not for covering other sorts of news events. It's going to be a lot of 2016 coverage, at the same time that we're also covering what's sure to be a tumultuous 2017.
Among many examples of strong horse-race reporting: In July, Brandon Finnigan took a very, very deep dive into the Pennsylvania data. In mid-October, at a time when Clinton was riding high in the polls, Annie Karni and Glenn Thrush at Politico sagely noted that Clinton still had a number of obstacles to overcome. Also in mid-October, Jelani Cobb at The New Yorker covered Clinton's struggles to excite millennial voters. And there were many examples of strong bread-and-butter reporting from The Washington Post. To some of you, a forecast that showed Trump with about a 30 percent chance of winning when the consensus view was that his chances were around 15 percent6 will self-evidently seem smart. There's obviously a lot to criticize in how certain statistical models were designed, for instance. As you read these, keep in mind that this is mostly intended as a critique of 2016 coverage in general, using The New York Times as an example, as opposed to a critique of the Times in particular. What exactly, then, is the "right" story for how Trump won the election? Independent evaluations also judged FiveThirtyEight's forecast to be the most accurate (or perhaps better put, the least inaccurate) of the models. There is only one person who correctly forecast the U.S. presidential election of 2016. Of all people, Nate Silver should probably not have been gloating the morning after Election Day. Specifically, it will be stories published by the Times's political desk (as opposed to by its investigations team, in its editorial pages or by its data-oriented subsite, The Upshot). And I don't expect many of the answers to be obvious or easy. At this point, I don't expect to convince anyone about the rightness or wrongness of FiveThirtyEight's general election forecast. Election forecaster Nate Silver said on Sunday that Hillary Clinton is the clear favorite to be the next president but argued the race is closer than most analysts are anticipating.
As a quick review, however, the main reasons that some of the models underestimated Trump's chances are as follows. Some underestimated the extent to which polling errors were correlated from state to state. And most of the models didn't account for the additional uncertainty added by the large number of undecided and third-party voters, a factor that allowed Trump to catch up to and surpass Clinton in states such as Michigan. Put a pin in these points because they'll come up again. Among our mistakes: That forecast wasn't based on a statistical model, it relied too heavily on a single theory of the nomination campaign ("The Party Decides"), and it didn't adjust quickly enough when the evidence didn't fit our preconceptions about the race. It turns out to have some complicated answers, which is why it's taken some time to put this article together (and this is actually the introduction to a long series of articles on this question that we'll publish over the next few weeks). It's tempting to use the inauguration as an excuse to finally close the chapter on the 2016 election and instead turn the page to the four years ahead. Each one will form the basis for a short article that reveals what I view as a significant error in how 2016 was covered. It is Donald Trump. Furthermore, editors and reporters make judgments about the horse race in order to decide which stories to devote resources to and how to frame them for their readers: Go back and read their coverage and it's clear that The Washington Post was prepared for the possibility of a Trump victory in a way that The New York Times wasn't, for instance. An article it published on Nov. 1 smartly focused on signs of poor turnout for Clinton among black voters. Elsewhere at the Times, Nate Cohn at The Upshot provided a number of excellent analyses, including a Sept. 20 article that gave four pollsters the same data and got four different results. And from the start of the general election onward, Sean Trende at RealClearPolitics wrote of a potential "populist revolt" against Clinton.
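One reason often cited for why some models underestimated Trump's chances — that they underestimated how strongly polling errors are correlated from state to state — can be made concrete with a small simulation. The numbers below are illustrative, not anyone's actual model parameters: a candidate trails by 2 points in three must-win states, and each state's polling error either is fully independent or shares a common national component of the same total variance:

```python
import random

def sweep_probability(correlated: bool, n_sims: int = 50_000,
                      deficit: float = 2.0, seed: int = 42) -> float:
    """Chance that a candidate trailing by `deficit` points in three
    states wins all three of them.

    Correlated case: each state's error = shared national error
    (sd = 2) + state-specific error (sd = 2).
    Independent case: one independent error per state with the same
    total variance (sd = sqrt(8)), but no shared component.
    """
    rng = random.Random(seed)
    sweeps = 0
    for _ in range(n_sims):
        national = rng.gauss(0, 2) if correlated else 0.0
        state_sd = 2.0 if correlated else 8 ** 0.5
        errors = [national + rng.gauss(0, state_sd) for _ in range(3)]
        # The trailing candidate sweeps only if the polls missed in
        # his favor by more than the deficit in every state.
        if all(e > deficit for e in errors):
            sweeps += 1
    return sweeps / n_sims

print(sweep_probability(correlated=False))  # on the order of 1-2 percent
print(sweep_probability(correlated=True))   # several times higher
```

Because a shared error moves every state at once, the sweep stops being a parlay of three long shots — which is essentially why a forecast that accounts for that correlation will give the trailing candidate a meaningfully better chance than models that treat states as independent.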
Articles commissioned by the Times's political desk regularly asserted that the Electoral College was a strength for Clinton, when in fact it was a weakness. So did many of the statistical models of the campaign, of course. Nate Silver is the founder and editor in chief of FiveThirtyEight. While it's challenging to judge a probabilistic forecast on the basis of a single outcome, we have no doubt that we got the Republican primary "wrong." The table below contains some important examples of this. Another myth is that Trump's victory represented some sort of catastrophic failure for the polls. For instance, it's now become fashionable to bash Clinton for having failed to devote enough resources to Michigan and Wisconsin. You can find our self-critique of our primary coverage here. One critic, in a piece filed 29 December 2016 in Politics, argued that what Nate Silver was trying to do by criticizing other pollsters is limit his competition. When FiveThirtyEight Editor-in-Chief Nate Silver is not busy getting election predictions wrong, he tweets things such as a largely irrelevant statistical observation about new COVID-19 cases.
Silver did have many words of caution in his final Election Update on Nov. 8, 2016, and the FiveThirtyEight forecasts showed high uncertainty, indicating a volatile election with large numbers of undecided voters. Reporters who covered the campaign, however, usually interpreted conflicting and contradictory information as confirming their prior belief that Clinton would win. Conservative-leaning sites like National Review often provided excellent coverage of the campaign, and the nice thing about statistical forecasts is that they don't leave a lot of room for ambiguity. But we think the evidence lines up with our version of events.