Straight Talk for Twisted Numbers: Challenging the Results of Polls and Research Studies

Posted: 04/29/2017

Lawmakers, journalists, and talking heads are always quick to fire off the results of polls – yet gloss over or completely overlook how they were obtained.

But the actual results and the method by which they were obtained are inseparable – a fact that came to light after the recent elections.

The public's “opinion” on an issue is a function of three factors: the questions asked, the responses given, and the analysis applied.

Most importantly, the results of any poll depend upon sampling a clearly defined "statistical universe" or "population." (See Perils of Polling: Part 1)

The Foundations of Sampling Design 

The group of individuals that we want information about is what statisticians call the statistical universe or population.

A sample is a subset of the population that we examine in order to gather information about the whole. If the sample is representative, we can draw valid conclusions about the entire population from it.
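The idea can be illustrated with a short simulation. The sketch below uses Python's standard `random` module on a hypothetical population of 100,000 people, 40 percent of whom hold some opinion (the population, the 40 percent figure, and the sample size are all made-up assumptions for illustration): a simple random sample of 1,000 typically lands close to the true proportion.

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical population of 100,000 people; 40% hold the opinion (1).
population = [1] * 40_000 + [0] * 60_000

# Draw a simple random sample of 1,000 and estimate the true proportion.
sample = random.sample(population, 1_000)
estimate = sum(sample) / len(sample)

print(f"True proportion: {40_000 / 100_000:.3f}")
print(f"Sample estimate: {estimate:.3f}")
```

With a genuinely random sample, the estimate is close to the true 40 percent; everything that follows in this article is about what happens when the sampling is not random.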

Put bluntly: If the population is improperly defined, the sample will produce misleading conclusions. (See: Perils of Polling Part 2)

The simplest example of a poorly defined statistical universe is the unscientific poll run by nightly cable news hosts, who ask viewers to answer a specific question about a given topic. This is called voluntary response sampling.

The results of these polls are worthless as representations of public opinion. The sample is self-selected and represents only that show's viewers – who, in all likelihood, over-represent people with strong opinions (most often negative ones about a person or topic) and who are precisely the people most likely to respond to the survey.

Unfortunately, social media outlets sometimes present these poll results as fact. 
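The distortion from self-selection can be made concrete with the same toy population as before (all figures here – the population, the 40 percent disapproval rate, and the response rates – are invented assumptions for illustration): if people with strong negative opinions respond far more often than everyone else, a call-in poll can roughly double the apparent disapproval rate.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Hypothetical population: 40% disapprove (1) of some policy, 60% approve (0).
population = [1] * 40_000 + [0] * 60_000

# Assume strong disapprovers call in at 30%, everyone else at only 5%.
response_rate = {1: 0.30, 0: 0.05}
respondents = [p for p in population if random.random() < response_rate[p]]

poll_result = sum(respondents) / len(respondents)
print(f"True disapproval rate: 0.400")
print(f"Call-in poll result:   {poll_result:.3f}")
```

Under these assumed response rates, roughly 12,000 disapprovers and 3,000 approvers respond, so the self-selected "poll" reports about 80 percent disapproval – twice the true rate – without anyone falsifying a single answer.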

A Neglected Truth About Opinion Polls 

In a marvelous book entitled Technopoly, author Neil Postman delightfully exposed an inherent weakness in interpreting surveys: 

"Pollsters ask questions that will elicit yes or no answers. Polling ignores what people know about the subjects they are queried on.

In a culture that is not obsessed with measuring and ranking things, this omission would probably be regarded as bizarre.

But let us imagine what we would think of opinion polls if the questions came in pairs, indicating what people 'believe' and what they 'know' about the subject.

If I may make up some figures, let us suppose we read the following:

The latest poll indicates that 72 percent of the American public believes we should withdraw economic aid from Nicaragua.  

Of those who expressed this opinion, 28 percent thought Nicaragua was in Central Asia, 18 percent thought it was an island near New Zealand... and 27.4 percent believed that ‘Africans should help themselves,’ obviously confusing Nicaragua with Nigeria.

Moreover, of those polled, 61.8 percent did not know that Americans give economic aid to Nicaragua, and 23 percent did not know what ‘economic aid’ means.

Were pollsters inclined to provide such information, the prestige and power of polling would be considerably reduced." 

Dissecting Data Collection Methods 

The results of research reports relating to topics ranging from sexual harassment on campuses to employee discontent in the workplace are frequently quoted on nightly news shows and in media sources of all kinds. 

Various talk show hosts and their guests often jump to conclusions about the results without ever questioning how the data was collected. 

One particular study serves as an excellent example of the potential flaws and fallacies of all the "truths" we are constantly told – and the importance of questioning data collection methods. 

For years, "statistical evidence" suggested that the instigators of barroom brawls in London were more likely to be killed in such fights than those "forced" to defend themselves.

Sociologists, psychologists, neuroscientists and substance abuse agencies studying this phenomenon formulated theories and produced voluminous articles that attempted to explain this rather unexpected and unusual outcome.    

The Royal Statistical Society, however, was somewhat suspicious and set out to check the findings' accuracy for itself.

The statisticians quickly discovered that the data was indeed questionable: the collection method itself invited inaccurate answers.

What the statisticians discovered was that every time a bar brawl ended with a fatality or with serious injuries, the investigating police officer dispatched to the incident would invariably ask: "Who started this?"  Witnesses would immediately point to the victim lying on the floor and say "he did!" 

This response was duly noted and because of the victim’s inability to dispute the accusation, the witnesses' assertions were usually taken to be true. 

Summary & Conclusions

Meaningless statistics are not new. But their wide distribution due to the advent of social media and cable news shows is quite new indeed.

People with even a minimum of rigorous statistical training are well aware of the increasingly faulty and misleading ways information is reported in newspapers, cable shows, books, and speeches.

Said celebrated statistician Stephen K. Campbell: "For many years I have been distressed by the frequency with which (1) relatively simple statistical tools such as percents, graphs, and averages are misused and (2) faulty conclusions are drawn from, perhaps, flawless data in our news media simply because the purveyors of the information don't know any better…

...Moreover, I have been annoyed – indeed, made angry – by the frequency with which bogus statistical evidence is used intentionally by some unconscionable people to sell their products or pet ideas to others…"

Perhaps Shakespeare summed it up best when he wrote: "… A tale told by an idiot, full of sound and fury, signifying nothing…"
