“How easy it is,” said Mark Twain, “to make people believe a lie, and how hard it is to undo that work again!” Nowhere has Twain’s timeless insight been more apt than during the Covid-19 and Global Warming scandals. In this essay, David Solway illustrates the many ways people are brainwashed about these two subjects by those in power, who further their political agendas by manipulating statistics to spread fear and control the public.
PART I: CLIMATE CHANGE
“There are three kinds of lies: lies, damned lies and statistics,” runs the quip that Mark Twain in his Autobiography attributed to Benjamin Disraeli—though it more likely derives from the obiter dicta of the First Earl of Balfour. We all know—or should know—that statistics can be deceptive. Like language itself, they serve a dual function: to tell the truth and to lie—except that, unlike ordinary language, statistical contrivances appear to share the property of pure mathematics; that is, they seem objective, factual, impartial, and irrefutable. People are easily convinced, writes Darrell Huff in How to Lie with Statistics, by a “spurious air of scientific precision.”
The only way to disarm plausible but specious statistical accounts is to dig down into the source data or, when feasible, simply to use one’s common sense. Of course, statistics can be woven out of whole cloth, total fabrications which are easily rumbled with a modicum of attention, but it is their subtlety, their playing with half-truths, that can be most persuasive and damaging. Telling half the truth can be more insidious than a manifest falsehood.
The tactic is to present a lesser truth that disguises a greater one.
Global Warming statistics are among the most readily manipulable, delivering factoids that are true and yet false—in other words, half-truths. The tactic is to present a lesser truth that disguises a greater one. For brevity’s sake, let’s take just a few examples of how “climate change” statistics can rank among the most effective means of producing assent to outright mendacities, coating whoppers with honey.
Consider the twaddle that came out of the University of Illinois’ 2009 survey purporting to show that 97.4 percent of scientists agree that mankind is responsible for global warming, a finding easily debunked once one examines the selection methodology.
As Lawrence Solomon explains in a crushing putdown, the Illinois researchers decided that of the 10,257 respondents, the 10,180 who demurred from the so-called consensus “weren’t qualified to comment on the issue because they were merely solar scientists, space scientists, cosmologists, physicists, meteorologists, astronomers and the like.” Of the remaining 77 scientists whose votes were counted, 75 agreed with the proposition that mankind was causing catastrophic changes in the climate. And, since 75 is 97.4 percent of 77, overwhelming consensus was demonstrated.
The real percentage, however, of concurring scientists in the original survey is a paltry 0.73 percent. That the chosen 75 were, as Solomon writes, “scientists of unknown qualifications” adds yet another layer to the boondoggle. This sort of thing is not a little white lie or an inadvertent statistical error. Once it reaches the point where a deliberate misconstrual must be maintained by the omission of details, the distortion of data and a suspicious susceptibility to intentional error, we are in the presence of the great statistical charade as it is practiced by our accredited “experts.”
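The winnowing arithmetic is easy to check; here is a minimal sketch, in Python, using only the figures quoted above:

```python
# Reproducing the consensus arithmetic described above.
total_respondents = 10_257  # scientists who answered the survey
counted = 77                # the subset whose votes were actually counted
agreed = 75                 # of those, the number endorsing the proposition

# The headline figure: share of the counted subset that agreed.
print(f"Headline consensus: {agreed / counted:.1%}")                  # 97.4%

# The same 75 scientists as a share of ALL respondents.
print(f"Share of all respondents: {agreed / total_respondents:.2%}")  # 0.73%
```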

Not to be outdone, the Climatic Research Unit (CRU) at the University of East Anglia developed a graph showing the trend to global warming, but neglected to note that it is calibrated in tenths of degrees rather than whole degrees, giving the misleading impression that the world is heating up when there is, in fact, little to no global warming to speak of. Similarly, the British technology news site The Register points out that NASA data have been “consistently adjusted towards a bias of greater warming. The years prior to the 1970s have again been adjusted to lower temperatures, and recent years have been adjusted towards higher temperatures.” Moreover, NASA data sets, as is so often the case, were predicated on omission—so-called “lost continents” where temperature readings were colder than the desired result.
As The Register writes, “The vast majority of the earth had normal temperatures or below. Given that NASA has lost track of a number of large cold regions, it is understandable that their averages are on the high side.” Additionally, NASA reports their global temperature measurements “within one one-hundredth of a degree. This is a classic mathematics error, since they have no data from 20 percent of the earth’s land area. The reported precision is much greater than the error bar.”
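The Register’s precision point can be made concrete with a back-of-the-envelope sketch. The 20 percent coverage gap is from the article; the baseline temperature and the plus-or-minus 1 °C range assumed for the unmeasured regions are illustrative inventions, not reported values:

```python
# How a coverage gap swamps reported precision (illustrative numbers).
measured_fraction = 0.80   # share of land area with readings (from the article)
missing_fraction = 1 - measured_fraction
measured_mean = 14.32      # hypothetical mean over measured regions, deg C
spread = 1.0               # assumed +/- range for the unmeasured regions, deg C

# The true global mean is an area-weighted blend of measured and unmeasured parts.
low = measured_fraction * measured_mean + missing_fraction * (measured_mean - spread)
high = measured_fraction * measured_mean + missing_fraction * (measured_mean + spread)

print(f"Global mean lies somewhere in [{low:.2f}, {high:.2f}] deg C")
# The interval is +/- 0.20 deg C wide: twenty times coarser than the
# one-hundredth-of-a-degree precision being reported.
```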
The whole point, of course, is obfuscation, to keep people in the dark.
The problem, warns Joel Best in Damned Lies and Statistics, is that “bad statistics live on; they take on a life of their own.” Their longevity supports their putative truthfulness. And the public is gullible, prey to the baked-in lies that Best calls “mutant statistics,” no matter how implausible.
Similarly, Tim Harford in The Data Detective, a celebration of good and useful statistical models, refers to the tendency toward motivated reasoning, i.e., “thinking through a topic with the aim, conscious or otherwise, of reaching a particular kind of conclusion.” Obviously, such thinking can work both ways, disparaging reliable statistics as well as valorizing dubious ones. The whole point, of course, is obfuscation, to keep people in the dark. Our soi-disant climatologists could just as well have written that climate is defined by a statistical curve in relation to a congruence subgroup of a modular elliptic curve, and the effect would have been the same. Whatever it means, it sounds official and incontrovertible.
In his essay, “March of the Zealots,” John Brignell comments on such acts of dissimulation. “If the general public ever got to know of the scandals surrounding the collection and processing of data [about global warming] . . . the whole movement would be dead in the water . . . It is a tenuous hypothesis supported by ill-founded computer models and data from botched measurement, dubiously processed.”
Examples of data manipulation abound. For more thorough analyses, see Michael Shellenberger’s Apocalypse Never, Steven Koonin’s Unsettled, Tim Ball’s The Deliberate Corruption of Climate Science, and Rupert Darwall’s Green Tyranny, all of which are eye-openers. As Stanford professor Dr. John Ioannidis writes in a much-circulated paper provocatively titled “Why Most Published Research Findings Are False,” “There is increasing concern that in modern research, false findings may be the majority or even the vast majority of published research claims. However, this should not be surprising.”
Flawed statistical analyses have become the established currency of the climate economy.

PART II: COVID-19
It would appear that to accept statistics at face value is a fool’s bargain. People are obviously swayed by the media hype, which assures them that the casualty numbers of the virus are statistically significant and that adverse reactions to the vaccines are statistically negligible. But are they?
Sucharit Bhakdi, formerly of the Max Planck Institute of Immunobiology and Epigenetics, former chair of Medical Microbiology at the University of Mainz, and co-author of Corona, False Alarm?, shows how Germany’s federal agency for disease control, the RKI—the country’s counterpart of the CDC in the U.S.—had juggled the numbers. The RKI, he writes, “calculated that 170,000 infections with 7000 coronavirus deaths equals a 4% case fatality rate.” The problem is that “the number of infections was at least ten times higher because mild and asymptomatic cases had not been sought and detected. This would bring us to a much more realistic fatality rate of 0.4%.”
“If I drive to the hospital to be tested and later have a fatal car accident . . . I become a coronavirus death.”
Dr. Sucharit Bhakdi
Additionally, deaths from other causes were folded into the mortality count. A true statistical correction “would yield an estimate of between 0.24% and 0.26%.” Bhakdi wryly provides a hypothetical example. “If I drive to the hospital to be tested and later have a fatal car accident . . . I become a coronavirus death. If I am diagnosed positive for coronavirus and jump off the balcony in shock, I also become a coronavirus death.” Statistically speaking, it’s a good gig if you can get it.
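Bhakdi’s correction is elementary arithmetic; here is a minimal sketch using the figures he quotes (the tenfold undercount is his estimate):

```python
# The RKI case-fatality arithmetic as Bhakdi describes it.
deaths = 7_000
confirmed_infections = 170_000

# Naive rate: deaths divided by *detected* infections only.
print(f"Reported case fatality rate: {deaths / confirmed_infections:.1%}")
# prints 4.1% -- the RKI's "4%"

# Bhakdi's correction: mild and asymptomatic infections went uncounted,
# so the true denominator is at least ten times larger (his estimate).
undercount_factor = 10
true_infections = confirmed_infections * undercount_factor
print(f"Corrected fatality rate: {deaths / true_infections:.1%}")  # 0.4%
```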
The trim-and-shuffle seems to be par for the course. For example, the FDA’s report on Pfizer’s vaccine counts “3410 total cases of suspected, but unconfirmed covid-19 in the overall study population.” Nonetheless, the pharmaceutical company reported “a 95% relative risk-reduction figure.” As Dr. Peter Doshi explains in the British Medical Journal: “With 20 times more suspected than confirmed cases, this category of disease cannot be ignored simply because there was no positive PCR test result.” Bundling in both the suspected and confirmed cases, Doshi notes, would drop “the 95% relative risk-reduction figure down to only 19%.”
A relative risk-reduction of 19 percent is far below the 50 percent effectiveness threshold for authorization set by regulators. And even the 19 percent tally assumes that the data are veridical. Bluntly speaking, what these interested participants are doing is eliminating unfavorable factors.
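The fall from 95 to 19 percent drops straight out of the case counts. The sketch below assumes the per-arm splits reported in Doshi’s analysis (8 vaccine vs. 162 placebo confirmed cases; 1,594 vs. 1,816 suspected, which sum to the 3,410 cited above); since the trial arms were nearly equal in size, case ratios approximate risk ratios:

```python
# Relative risk reduction with and without the suspected cases.
confirmed = {"vaccine": 8, "placebo": 162}        # PCR-confirmed cases
suspected = {"vaccine": 1_594, "placebo": 1_816}  # suspected, unconfirmed cases

# Confirmed cases only: the basis of the headline figure.
rrr_confirmed = 1 - confirmed["vaccine"] / confirmed["placebo"]
print(f"RRR, confirmed only: {rrr_confirmed:.0%}")   # 95%

# Folding in the suspected cases, as Doshi argues one must.
rrr_all = 1 - (confirmed["vaccine"] + suspected["vaccine"]) / (
    confirmed["placebo"] + suspected["placebo"])
print(f"RRR, confirmed + suspected: {rrr_all:.0%}")  # 19%
```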

The Defender (Children’s Health Defense) points out that when one does the real math, the Pfizer clinical trial numbers showed that “the risk reduction in absolute terms [was] only 0.7%, from an already very low risk of 0.74% [in the placebo group] to a minimal risk of 0.04% [in the vaccine group].” Dividing 0.7 by 0.74 is the calculation that produced the touted “95% effective” number. The result has been corroborated in the virological journal Vaccines, which reports that the absolute risk reduction is less than 1 percent. Clearly, Pfizer cooked the books. The math was right, so far as it went—which was not very far—but the statistical implications were misleading. What cannot be denied is that vaccine efficacy in absolute terms remains low. It seems evident that data collection is often intended to paint the wished-for statistical canvas.
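The distinction between absolute and relative risk reduction is worth spelling out; here is a minimal sketch using the rates quoted above:

```python
# Absolute vs. relative risk reduction from the same trial rates.
placebo_risk = 0.0074   # 0.74% infection risk in the placebo group
vaccine_risk = 0.0004   # 0.04% infection risk in the vaccine group

arr = placebo_risk - vaccine_risk   # absolute risk reduction
rrr = arr / placebo_risk            # relative risk reduction

print(f"Absolute risk reduction: {arr:.2%}")  # 0.70% -- The Defender's figure
print(f"Relative risk reduction: {rrr:.0%}")  # 95% -- the headline figure
```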
Equivocal statistical lattices are the optimal way of suppressing inauspicious or compromising information. A recent case in point: the CDC took steps to expunge unfavorable data concerning breakthrough infections from its reporting systems. In fact, as The Off-Guardian reports, the CDC is now adjusting its testing protocols to reduce the number of “breakthrough cases” by lowering test cycles for vaccinated people and raising them for the unvaccinated. Higher test cycles will pick up junk virus and dead virus, creating a disease “that can appear or disappear depending on how you measure it . . . This is a policy designed to continuously inflate one number, and systematically minimize the other. What is that if not an obvious and deliberate act of deception?” The ginned-up statistical count will be through the roof. The Off-Guardian concludes: “If these new policies had been the global approach to ‘Covid,’” that is, employing reasonable test cycles for all participants, “there would never have been a pandemic at all.”
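To see how measurement policy alone can manufacture a disparity, consider a toy sketch. The Ct (cycle-threshold) readings and the two cutoffs below are invented for illustration; only the asymmetry, a lower cutoff for the vaccinated and a higher one for the unvaccinated, follows the policy described above:

```python
# Toy model: identical samples, different cycle-threshold cutoffs.
# A higher Ct means more amplification cycles were needed, i.e., a weaker
# signal; a generous cutoff therefore sweeps in junk and dead virus.
ct_values = [18, 22, 25, 27, 29, 31, 33, 35, 37, 39]  # invented Ct readings

def positives(cutoff):
    """Count samples called 'positive' at or below a Ct cutoff."""
    return sum(1 for ct in ct_values if ct <= cutoff)

print("Vaccinated   (hypothetical cutoff Ct 28):", positives(28))  # 4 "cases"
print("Unvaccinated (hypothetical cutoff Ct 40):", positives(40))  # 10 "cases"
# Same samples, a 2.5x difference in "cases" -- produced entirely by the
# measurement policy, which is exactly the inflation the article alleges.
```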
In effect, the tendency is to boost the casualty count with respect to the virus and to reduce it with respect to the vaccines. That is how the game is played by disreputable proponents of progressivist, “social justice,” and authoritarian causes, of which the most prominent today is COVID prevention. A medicated statistic is nothing less than a damn lie. The distinction is moot, the only difference being that between an outright falsehood and a clever dissimulation. Damn lies and statistics are among the best weapons our so-called “experts” can deploy. And it must be admitted that these are effective instruments of subterfuge and control, owing to their deceptiveness, their air of authority, and their sheer volume.
In The Data Detective, Tim Harford puts a more benign slant on the issue, pointing to inevitable selection bias in statistical claims. There are huge disparities in such claims since, for various reasons, many facts may not be recorded. Sometimes, what he wittily calls “premature evaluation” plays a role, i.e., early overcounting, undercounting or counting the wrong items. Harford shows how easy it is for researchers to get things wrong. Researchers may not be duplicitous—or at least, not always or often—but merely sloppy.
[T]he pandemic may be “a once-in-a-century evidence fiasco.”
Renowned Stanford University epidemiologist John Ioannidis is less sanguine. Ioannidis predicted in a landmark 2020 research paper that we would see exaggerated estimates regarding COVID cases, infection spread and mortality rates, in other words, false research findings. As he writes, “the majority of modern biomedical research is operating in areas with very low pre- and post-study probability for true findings.” Moreover, “conflicts of interest tend to bury significant findings.” Ironically, data relationships “reaching formal statistical significance . . . may often be simply accurate measures of prevailing bias” rather than of objective facts. In another paper written at approximately the same time, Ioannidis speculates that the pandemic may be “a once-in-a-century evidence fiasco.”
Instead of heeding Ioannidis’ warnings, the official echelons and media platforms chose to follow the doomsday statements, contradictory evidence and “voodoo mitigation efforts” promulgated by the increasingly distrusted Dr. Anthony Fauci, as Steve Deace and Todd Erzen amply detail in their cleverly titled, must-read volume Faucian Bargain. They show how data can be routinely finessed to support prior assumptions and to erect statistical scaffolds for problematic arguments and false conclusions. Samples are regularly chosen either to swell or to shrink averages. If one instrument doesn’t work, switch to another.
This is standard-issue practice, a form of statistical gerrymandering, manipulating the boundaries of a given data-set in order to produce a desired result. Pandemicians have not only, to use a popular phrase, continued to “move the goalposts,” they have performed the magic trick of moving the whole football field. For example, Fauci has constantly changed his “herd immunity” percentages from 60 to 70 to 75 to 80 to 85 and even 90 percent. Similarly, the WHO suddenly and for no sound reason changed its definition of herd immunity several times. The instances I have flagged above are merely illustrations of systematic deceptive protocols to shore up spurious or hypothetical clinical claims.
It has been said that if the facts don’t fit the theory, change the facts. Analogously, if the statistics don’t accommodate the wished-for results, massage the statistics. The game is rigged from the start.
*Aside from those mentioned within, I cite three important books that deal, wholly or in part, with the nature of data manipulation and institutional guile: Liberty or Lockdown by Jeffrey Tucker, The Price of Panic by Douglas Axe, William Briggs and Jay Richards, and The FEAR-19 Pandemic: How lies, damn lies and statistics created a pandemic of fear by Tommy Madison. They make for essential reading.
David Solway is a Canadian poet and essayist. His most recent volume of poetry The Herb Garden appeared in spring 2018 with Guernica Editions. A partly autobiographical prose manifesto Reflections on Music, Poetry & Politics was released by Shomron Press in spring 2016. A CD of his original songs Blood Guitar and Other Tales appeared in 2016 and a second CD Partial to Cain accompanied by his pianist wife Janice Fiamengo appeared in June of this year. Solway continues to write for American political sites such as PJ Media, American Thinker and WorldNetDaily. His latest book Notes from a Derelict Culture (Black House, 2019) was delisted from Amazon as of Sunday December 13, 2020.
I appreciate your analysis of the data-cooking by NASA with regard to the average temperatures around the earth from year to year. I have suspected for a while that “climate change” and the responses to it are things that the industrialists are willing to work with, and think they can do so profitably. They would much rather have the masses fixated on climate rather than the much more all-around damage they are doing to the air, water and soil, in fact the entire biome, with their multiple types of pollution, genetic manipulation, nanoparticles and so much else.
I do have one question though about the climate: what is causing the retreat of so many glaciers? While it may be true that the melting of ice sheets in Greenland and Antarctica is exaggerated, it’s incontrovertible that the glaciers in the US Glacier National Park are extremely diminished from just 100 years ago, and never before in my life have I heard of sea lanes open through the north polar region. Are these things part of cycles that last for several hundreds of years, while our records only go back two or three hundred at most? Or is there a more concerning picture there?