In 1936, the United States faced a choice between the Democratic incumbent Franklin D. Roosevelt and the Republican hopeful Alfred Landon. The world was still reeling from the Great Depression, which had begun only seven years earlier, and Fred Astaire topped the charts. A popular magazine, The Literary Digest, ran a poll ahead of the election. With a sample of 2.4 million Americans, it was one of the largest and most expensive polls of all time, and it predicted that sitting President Roosevelt would be ousted by Landon, 57% to 43%.

On election day, even the Democrats could not believe how decisive their victory was. Roosevelt took 62% of the popular vote, meaning the Literary Digest poll had missed by a whopping 19 percentage points. How was a poll of 2.4 million Americans so inaccurate? Can we trust political polls at all?

Polls can be inaccurate through several mechanisms – sampling error, non-response error, processing error, and social desirability bias.

The Literary Digest poll surveyed only Americans who owned a car or a telephone. Although the sample was enormous, it was intrinsically flawed: in the middle of the Depression, car and telephone ownership skewed heavily toward wealthier Americans.

The poll began with a mailing list of 10 million Americans who owned a car or a telephone. Of those 10 million, only 2.4 million responded, a response rate of 24%. This low response rate also contributed to the poll’s errors, because people who respond to polls tend to differ systematically from those who do not. Do they have leisure time? Are they politically engaged? Are they rural or urban?

One way that modern polls try to address non-response bias is by processing, or weighting, the results. For example, if pollsters believe that someone from a rural area was half as likely to pay the postage to mail a ballot back to Literary Digest HQ, they can double the weight of every rural person who did respond. This brings polls closer to the mark, but it relies on accurately estimating who is more or less likely to respond, which remains something of a guessing game.
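To make the weighting idea concrete, here is a minimal sketch in Python. The groups, response rates, and answers are made-up assumptions purely for illustration, not any pollster’s actual model:

```python
# Minimal sketch of inverse-response-rate weighting on a toy poll.
# Groups, response rates, and answers are illustrative assumptions.

responses = [
    # (group, supports_candidate_a)
    ("urban", True), ("urban", True), ("urban", True), ("urban", False),
    ("rural", False), ("rural", False),
]

# Assumed response rates: rural recipients are half as likely to mail the
# survey back, so each rural response stands in for twice as many people.
response_rate = {"urban": 0.4, "rural": 0.2}

weighted_support = 0.0
weighted_total = 0.0
for group, supports_a in responses:
    weight = 1.0 / response_rate[group]   # rural responses get double weight
    weighted_total += weight
    if supports_a:
        weighted_support += weight

raw = sum(1 for _, s in responses if s) / len(responses)
print(f"Raw support for candidate A:      {raw:.0%}")                                 # 50%
print(f"Weighted support for candidate A: {weighted_support / weighted_total:.0%}")   # 38%
```

The weighted figure shifts because the under-represented rural responses now count for more; the whole correction hinges on the assumed 0.2 versus 0.4 response rates being right.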

Finally, social desirability bias can drive people’s responses in polls and surveys. People want to be seen as virtuous by their peers. Even though polls are anonymous, respondents still tend to answer as their virtuous selves, right up until it is time to vote for real, in the privacy of the polling booth. This is one possible reason that few media outlets, commentators, or polls predicted Donald Trump’s victory in the 2016 United States presidential election.


With most of the world voting this year, the Nexus APAC team has investigated what factors contribute to polls’ accuracy and how they can help a leader or company understand a political party’s priorities and likelihood of victory.


Modern polling utilises the Gallup method, where around 1,000 adults are sampled at random in a way that approximates the nation’s demographic spread. In 1936, Gallup used this approach, with a far smaller and cheaper poll than the Literary Digest’s, to correctly predict that President Roosevelt would win a second term.
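A back-of-the-envelope calculation helps explain why a well-drawn sample of only around 1,000 people can compete with a sample of millions. The sketch below uses the textbook formula for the 95% margin of error of a sample proportion, a simplification that ignores weighting and design effects:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Textbook 95% margin of error for a sample proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case (p = 0.5) for a Gallup-style random sample of 1,000 adults.
print(f"n = 1,000:     +/- {margin_of_error(0.5, 1_000):.1%}")      # ~3.1%

# The Literary Digest's 2.4 million responses carried almost no sampling
# error, so its 19-point miss had to come from a biased sample, not a small one.
print(f"n = 2,400,000: +/- {margin_of_error(0.5, 2_400_000):.1%}")  # ~0.1%
```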

Polls can be a relatively accurate indicator of public sentiment.

Australia’s own Newspoll has correctly predicted 80% of election outcomes. Unsurprisingly, polls become more accurate the closer to the election they are taken. Polls are not a crystal ball, but taken with a grain of salt they can be a useful signal that it is time to change strategy or tack.

Modern Gallup-style polls are now conducted by mobile phone and internet survey, although these channels still introduce some bias. Exit polls are another form of polling, using a mock ballot at polling stations to survey people who have just voted. They tend to be accurate because pollsters ask people what they have just done rather than what they might do. In countries with voluntary voting, exit polls collect data only from people who have actually voted. UK exit polls are a notable example and have been consistently accurate for years; most recently, they correctly predicted Labour’s landslide in the 2024 general election.

At the time of writing, Newspoll has Labor at 51% and the Coalition at 49% on the two-party-preferred (2PP) vote, with Prime Minister Anthony Albanese at 46% as preferred Prime Minister, compared to Opposition Leader the Hon. Peter Dutton MP at 39%.

With margins this slim, the 5% margin of error that many polls claim becomes decisive: it is wider than the gap itself. The current Newspoll tells us the race remains close, but only time will tell the result.
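As a rough illustration of why that matters, the sketch below applies the same textbook confidence-interval formula to a 51% two-party-preferred share, assuming a sample of about 1,500 respondents. That sample size is an assumption for illustration only; Newspoll’s actual sample sizes, weighting, and quoted margins differ:

```python
import math

n = 1_500          # assumed sample size, for illustration only
p = 0.51           # Labor's reported 2PP share

moe = 1.96 * math.sqrt(p * (1 - p) / n)   # textbook 95% margin of error
low, high = p - moe, p + moe
print(f"51% +/- {moe:.1%}  ->  interval {low:.1%} to {high:.1%}")

# The interval runs from roughly 48.5% to 53.5%: it straddles 50%,
# so a 51-49 headline figure alone cannot tell us who is really ahead.
```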

Political parties poll individual electorates and specific issues to understand where resources should be allocated to win the most seats, rather than simply the most votes.

It is worth looking at several polls to see whether a consistent trend emerges. The Nexus APAC team recommends Roy Morgan, Freshwater, Redbridge, and Newspoll. Importantly, remember that how questions are phrased and how results are analysed can both affect the final numbers and their interpretation.

People struggle with long-term planning and extrapolation. Polls ask people who they will vote for months in advance, before they have all the information. Data analysis is only as good as the data itself: garbage in, garbage out.

Polls will always be limited by human error, and when small margins decide elections, polls cannot call the result in advance. They can give you a ballpark, but you never know until the dust settles.