Theme: A wrong turn in the mathematical modeling of time-series was taken almost a century ago. Today, Academia should engage in remediation to overcome the detrimental influence on the teaching and practice of time-series analysis in Science and Engineering.
The objective of this page is to discuss the proper place in science and engineering of the fraction-of-time (FOT) probability model for time-series data, and to expose the resistance this proposed paradigm shift has met from those indoctrinated in the theory of stochastic processes to the exclusion of the alternative FOT-probability theory. To understand this resistance, and the failure of the proposed paradigm shift to be adopted in the 33 years since it was presented in comprehensive form in the tutorial book Statistical Spectral Analysis: A Nonprobabilistic Theory, the discussion here first broadens to survey the historical backdrop of a wider issue: the ubiquitous resistance to paradigm shifts in science and engineering.
The following selection of quotations was compiled by this website’s content manager for the Inaugural Symposium of the Institute for Venture Science, 25 September 2015.
All the sciences have a relation, greater or less, to human nature; and …
however wide any of them may seem to run from it, they still return back by one passage or another
David Hume, 1711 – 1776
All great truths begin as blasphemies
George Bernard Shaw, 1856 – 1950
… First, it is ridiculed. Second, it is violently opposed. Third, it is accepted as being self-evident
Arthur Schopenhauer, 1788 – 1860
What is right is not always popular and what is popular is not always right
Albert Einstein, 1879 – 1955
A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it
Max Planck, 1858 – 1947
Almost always the men who achieve these fundamental inventions of a new paradigm have been either very young or very new to the field whose paradigm they change
Thomas Samuel Kuhn, 1922 – 1996
Science progresses funeral by funeral
George Bernard Shaw, 1856 – 1950
The mind likes a strange idea as little as the body likes a strange protein and resists it with similar energy. It would not perhaps be too fanciful to say that a new idea is the most quickly acting antigen known to science. If we watch ourselves honestly, we shall often find that we have begun to argue against a new idea even before it has been completely stated.
Wilfred Batten Lewis Trotter, 1872 – 1939
The study of history is a powerful antidote to contemporary arrogance. It is humbling to discover how many of our glib assumptions, which seem to us novel and plausible, have been tested before, not once but many times and in innumerable guises; and discovered to be, at great human cost, wholly false
Paul Bede Johnson, 1928 –
All of physics is either impossible or trivial. It is impossible until you understand it, and then it is trivial.
Ernest Rutherford, 1871 – 1937
A central lesson of science is that to understand complex issues (or even simple ones), we must try to free our minds of dogma and to guarantee the freedom to publish, to contradict, and to experiment. Arguments from authority are unacceptable
Carl Sagan, 1934 – 1996
Physicists, being in no way different from the rest of the population, have short memories for what is inconvenient
Anthony Standen, 1906 – 1993
As for your doctrines I am prepared to go to the Stake if requisite … I trust you will not allow yourself to be in any way disgusted or annoyed by the considerable abuse & misrepresentation which unless I greatly mistake is in store for you… And as to the curs which will bark and yelp – you must recollect that some of your friends at any rate are endowed with an amount of combativeness which (though you have often & justly rebuked it) may stand you in good stead –
I am sharpening up my claws and beak in readiness.
Thomas Henry Huxley, 1825 – 1895
Letter (23 Nov 1859) to Charles Darwin a few days after the publication of Origin of Species
The inability of researchers to rid themselves of earlier ideas led to centuries of stagnation. An incredible series of deliberate oversights, indefensible verbal evasions, myopia, and plain pig-headedness characterize the pedestrian progress along this elusive road for science. We must be constantly on our guard, critically examining all the hidden assumptions in our work
Simon Mitton, 1946 –
In Review of The Milky Way by Stanley L. Jaki, New Scientist, 5 July 1973
Almost every major revolutionary breakthrough had some thinkers who rejected it as “crackpot” at first – Frank J. Sulloway, historian and sociologist of science. Sulloway provides 20 examples from the past, among them:
Hutton’s theory of the earth (modern geology, deep time, gradual)
Evolution before and after Darwin
Bacon and Descartes—scientific method
Harvey and blood circulation
Newtonian celestial mechanics
Lavoisier’s chemical revolution
Lyell and geological uniformitarianism
Planck’s Quantum hypothesis
Einstein and general relativity
Indeterminacy in physics
Refutation of spontaneous generation
Lister and antisepsis
Semmelweis and puerperal fever
The following was written by Founders
From Galileo to Jesus Christ, heretical thinkers have been met with hostility, even death, and vindicated by posterity. That ideological outcasts have shaped the world is an observation so often made it would be bereft of interest were the actions of our society not so entirely at odds with the wisdom of the point: troublemakers are essential to mankind’s progress, and so we must protect them. But while our culture is fascinated by the righteousness of our historical heretics, it is obsessed with the destruction of the heretics among us today. It is certainly true that the great majority of heretical thinkers are wrong. But how does one tell the difference between “dangerous” dissent and the dissent that brought us flight, the theory of evolution, non-Euclidean geometry? It could be argued there are no ‘real’ heretics left. Perhaps we’ve arrived at the end of knowledge, and dissent today is nothing more than mischief or malice in need of punishment. But however unclear the nature of our witches, it cannot be denied that we are burning them. The only question is whether our heretics are the first in history who deserve to be burned.
We don’t think so.
We believe dissent is essential to the progressive march of human civilization. We believe there’s more in science, technology, and business to discover, that it must be discovered, and that in order to make such discovery we must learn to engage with new — if even sometimes frightening — ideas.
Every great thinker, every great scientist, every great founder of every great company in history has been, in some dimension, a heretic. Heretics have discovered new knowledge. Heretics have seen possibility before us, and portentous signs of danger. But the persecution of our heretics is itself a sign of danger. The potential of the human race is predicated on our ability to learn new things and to grow, and growth is impossible without dissent. A world without heretics is a world in decline, and in a declining civilization everything we value, from science and technology to prosperity and freedom, is in jeopardy.
People of science were repressed and persecuted by medieval prejudices for over 1500 years—more than 75 generations of mankind, with the exception of a brief reprieve during the Renaissance. In the two centuries since this suppression was largely overcome, science has had an immense positive impact on humanity.
Yet throughout this period, great scientists have consistently decried the penalty science pays for not being practiced according to the Scientific Method, the essential operating principles of science. The low fidelity with which the Scientific Method is followed today, by both scientists and the systems for administering science, is most likely responsible for the quantifiable decline in the number and magnitude of scientific breakthroughs and revolutions in scientific thought over the last century—a decline unanimously confirmed by the National Science Board in 2006. Where are the solutions to today’s unprecedented threats to human existence: dwindling energy resources, diminishing supplies of potable water, and the increasing incidence of chronic disease? Where are today’s counterparts to yesterday’s discovery of bacterial disease, which led to antibiotics, and of electricity, which led to instantaneous worldwide communication, and of other major breakthroughs?
The source of this unsolved problem has been recognized by many of science’s greatest achievers throughout history to be human nature: the ingrained, often subconscious behavioral motivations that sociologists tell us are responsible for our species’ very existence. The Scientific Method consists of systematic observation, measurement, and experiment, and the formulation, testing, and modification of hypotheses—all with the utmost objectivity. To implement this method with fidelity, scientists must be honest, impersonal, neutral, unprejudiced, incorruptible, resistant to peer pressure, and open to the risks associated with probing the unknown. But to be all this consistently is to be inhuman. Thus, the Scientific Method is an unattainable ideal to strive for, not a recipe to simply follow. That scientists are true to the Scientific Method is argued to be a myth in Henry H. Bauer’s 1992 book The Myth of the Scientific Method.
Max Planck, the originator of the quantum theory of physics, said a hundred years ago, “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”
Thomas Kuhn, author of The Structure of Scientific Revolutions, written a half century ago, put forth the idea that to understand what holds paradigm shifts back, we must put more emphasis on the individual humans involved as scientists, rather than abstracting science into a purely logical or philosophical venture.
Considering that this problem source is inherent in people, it should not be surprising that we have not yet solved the problem: its source cannot be removed. Regarding his theory of the human condition, the biologist Jeremy Griffith said in the 1980s, “The human condition is the most important frontier of the natural sciences.” But, in reaction, it has been predicted that no idea will be more fiercely resisted than the explanation of the human condition, because arriving at an understanding of the human condition is an extremely exposing and confronting development.
Nevertheless, civilizations have indeed made progress toward correcting for undesirable effects of human nature on society, especially through social standards for upbringing, social mores, legal systems, and judicial processes. So, why have our systems for administering science, where integrity is so important, not been more successful?
Armed with insights provided by the science of Science, direct experience from careers of conducting and administering science, and humanitarian compassion, the founders of The Institute for Venture Science (IVS) formulated a thesis (www.theinstituteforventurescience.net/). This thesis goes to the heart of the problem—the specific reasons our present inherited systems for administering science are failing to support venture science—and it proposes specific solutions that reflect solutions that are working today in other human endeavors, such as incubators for venture capitalism. It faces head-on those aspects of human nature and also those systems of science administration that are evidently at odds with venture science—the key to major advances in our understanding of Nature.
The IVS Operative Principles, paraphrased from the IVS website, are:
A prime example of what traditional administration and practice of science has not yet been able to deliver is an understanding of the physical origin of inertia, mass, and gravitation: This remains an outstanding puzzle. And the same is true for electric and magnetic fields: We can measure them, predict their behavior, and utilize them; but we still do not understand their origins.
IAI News – Changing how the world thinks. An online magazine of big ideas produced by The Institute of Arts and Ideas. Issue 84, 8 January 2020
Why the foundations of physics have not progressed for 40 years, by Sabine Hossenfelder, research fellow at the Frankfurt Institute for Advanced Studies and author of the blog Backreaction
Physicists face stagnation if they continue to treat the philosophy of science as a joke
In the foundations of physics, we have not seen progress since the mid-1970s when the standard model of particle physics was completed. Ever since then, the theories we use to describe observations have remained unchanged. . . .
The consequence has been that experiments in the foundations of physics past the 1970s have only confirmed the already existing theories. None found evidence of anything beyond what we already know.
But theoretical physicists did not learn the lesson and still ignore the philosophy and sociology of science. I encounter this dismissive behavior personally pretty much every time I try to explain to a cosmologist or particle physicist that we need smarter ways to share information and make decisions in large, like-minded communities. If they react at all, they are insulted if I point out that social reinforcement – aka group-think – befalls us all, unless we actively take measures to prevent it.
Instead of examining the way that they propose hypotheses and revising their methods, theoretical physicists have developed a habit of putting forward entirely baseless speculations. Over and over again I have heard them justifying their mindless production of mathematical fiction as “healthy speculation” – entirely ignoring that this type of speculation has demonstrably not worked for decades and continues to not work. There is nothing healthy about this. It’s sick science. And, embarrassingly enough, that’s plain to see for everyone who does not work in the field.
And so, what we have here in the foundation of physics is a plain failure of the scientific method. All these wrong predictions should have taught physicists that just because they can write down equations for something does not mean this math is a scientifically promising hypothesis. String theory, supersymmetry, multiverses. There’s math for it, alright. Pretty math, even. But that doesn’t mean this math describes reality.
Why don’t physicists have a hard look at their history and learn from their failure? Because the existing scientific system does not encourage learning. Physicists today can happily make a career by writing papers about things no one has ever observed, and never will observe. This continues to go on because there is nothing and no one that can stop it.
A contrarian argues that modern physicists’ obsession with beauty has given us wonderful math but bad science
Whether pondering black holes or predicting discoveries at CERN, physicists believe the best theories are beautiful, natural, and elegant, and this standard separates popular theories from disposable ones. This is why, Sabine Hossenfelder argues, we have not seen a major breakthrough in the foundations of physics for more than four decades. The belief in beauty has become so dogmatic that it now conflicts with scientific objectivity: observation has been unable to confirm mindboggling theories, like supersymmetry or grand unification, invented by physicists based on aesthetic criteria. Worse, these “too good to not be true” theories are actually untestable and they have left the field in a cul-de-sac. To escape, physicists must rethink their methods. Only by embracing reality as it is can science discover the truth.
Having set the stage in the previous five pages for discussing the resistance to adoption of the particular proposed paradigm shift, from stochastic-process models to FOT models of time-series data, the discussion can now proceed with the understanding that proposers of paradigm shifts in science not infrequently deserve to be heard and considered credible unless and until proven otherwise.
The 1987 book Statistical Spectral Analysis: A Nonprobabilistic Theory argues against indiscriminate use of the modern stochastic-process model (arising from the work of mathematicians in the 1930s, such as Khinchin and Kolmogorov) and in favor of the more realistic predecessor: the time-series model first developed mathematically by Norbert Wiener in 1930 (see also page 59 of Wiener 1949, written in 1942, regarding the historical relationship between his and Kolmogorov’s approaches), which was briefly revisited in the 1960s by engineers before it was buried by mathematicians. The brief tongue-in-cheek essay Ensembles in Wonderland, published in IEEE Signal Processing Magazine (AP Forum, 1994) and reproduced below, satirizes the outrage typified by narrow-minded thinkers, exemplified by two outspoken skeptics, Neil Gerr and Melvin Hinich, who wrote scathing remarks and a book review characterizing the book as utter nonsense.
Consider the parallel to the book Alice in Wonderland; the following is composed of excerpts taken from https://en.wikipedia.org/wiki/Alice’s_Adventures_in_Wonderland: Martin Gardner and other scholars have shown the book Alice in Wonderland [written by Charles Lutwidge Dodgson under the pseudonym Lewis Carroll] to be filled with many parodies of Victorian popular culture. Since Carroll was a mathematician at Christ Church, it has been argued that there are many references to mathematical concepts in both this story and his later story Through the Looking-Glass; examples include what have been suggested to be illustrations of the concept of a limit, number bases and positional numeral systems, the converse relation in logic, and the ring of integers modulo a specific integer. Deep abstraction of concepts, such as non-Euclidean geometry, abstract algebra, and the beginnings of mathematical logic, was taking over mathematics at the time Alice in Wonderland was being written (the 1860s). Literary scholar Melanie Bayley asserted in the magazine New Scientist that Alice in Wonderland in its final form was written as a scathing satire on the new modern mathematics that was emerging in the mid-19th century.
Today, Dodgson’s satire appears backward-looking because, after all, there are strong arguments that modern mathematics has triumphed. Similarly, stochastic processes have triumphed in the sense of being wholly adopted in mathematics, science, and engineering, except by a relatively small contingent of empirically minded scientists and engineers. Yet recent mathematical arguments [Napolitano’s book, 2019 (to be added to the bibliography page and, if possible, linked to the complete book)] provide a sound mathematical basis for reversing this outcome, especially when the overwhelming evidence of practical, pragmatic, pedagogic, and overarching conceptual advantages provided in the 1987 book is considered. The present dominance of the more abstract and less realistic stochastic-process theory might be viewed as an example of the pitfalls of what has become known as groupthink, or of the inertia of human nature that resists changes in thinking, which is exemplified in pages 4.1 – 1.4.
July 2, 1995
To the Editor:
This is my final letter to SP Forum in the debate initiated by Mr. Melvin Hinich’s challenge to the resolution made in the book , and carried on by Mr. Neil Gerr through his letters to SP Forum.
In this letter, I supplement my previous remarks aimed at clarifying the precariousness of Hinich’s and Gerr’s position by explaining the link between my argument in favor of the utility of fraction-of-time (FOT) probability and the subject of a plenary lecture delivered at ICASSP ’94. In the process of discussing this link, I hope to continue the progress made in my previous two letters in discrediting the naysayers and thereby moving toward broader acceptance of the resolution that was made and argued for in  and is currently being challenged. My continuing approach is to show that the position taken by the opposition–that the fraction-of-time probability concept and the corresponding time-average framework for statistical signal processing theory and method have nothing to offer in addition to the concept of probability associated with ensembles and the corresponding stochastic process framework–simply cannot be defended if argument is to be based on fact and logic.
David J. Thomson’s Transcontinental Waveguide Problem
To illustrate that the stochastic-process conceptual framework is often applied to physical situations where the time-average framework is a more natural choice, I have chosen an example from D. J. Thomson’s recent plenary lecture on the project that gave birth to the multiple-window method of spectral analysis . The project, initiated back in the mid-1960s, was to study the feasibility of a transcontinental millimeter waveguide for a telecommunications transmission system potentially targeted for introduction in the mid-1980s. It was found that the accumulated attenuation of a signal propagating along a circular waveguide was directly dependent on the spectrum of the series, indexed by distance, of the erratic diameters of the waveguide. So, the problem that Thomson tackled was that of estimating the spectrum of the more than 4,000-mile-long distance-series using a relatively small segment of this series, broken into a number of 30-foot-long subsegments. (It would take more than 700,000 such 30-foot sections to span 4,000 miles.) The spectrum had a dynamic range of over 100 dB and contained many periodic components, indicative of the unusual challenge Thomson faced.
When a signal travels down a waveguide (at the speed of light) it encounters the distance-series of erratic waveguide-diameters. Because of the constant velocity, the distance-series is equivalent to a time-series. Similarly, the series of diameters that is measured for purposes of analysis is—due to the constant effective velocity of the measurement device—equivalent to a time-series. So, here we have a problem where there is one and only one long time-series of interest (which is equivalent to a distance-series); there is no ensemble of long series over which average characteristics are of interest and, therefore, there is no obvious reason to introduce the concept of a stochastic process. That is, in the physical problem being investigated, there was no desire to build an ensemble of transcontinental waveguides. Only one (if any at all) was to be built, and it was the spectral density of distance-averaged (time-averaged) power of the single long distance-series (time-series) that was to be estimated, using a relatively short segment, not the spectral density of ensemble-averaged power. Similarly, if one wanted to analytically characterize the average behavior of the spectral density estimate (the estimator mean), it was the average of a sliding estimator over distance (time), not the average over some hypothetical ensemble, that was of interest. Likewise, to characterize the variability of the estimator, it was the distance-average squared deviation of the sliding estimator about its distance-average value (the estimator variance) that was of interest, not the variance over an ensemble.
The only apparent reason for introducing a stochastic process model with its associated ensemble, instead of a time-series model, is that one might have been trained to think about spectral analysis of erratic data only in terms of such a conceptual artifice and might, therefore, have been unaware of the fact that one could think in terms of a more suitable alternative that is based entirely on the concept of time averaging over the single time-series. (Although it is true that the time-series segments obtained from multiple 30 ft. sections of waveguide could be thought of as independent random samples from a population, this still does not motivate the concept of an ensemble of infinitely long time-series–a stationary stochastic process. The fact remains that, physically, the 30-foot sections represent subsegments of one long time-series in the communications system concept that was being studied.)
It is obvious in this example that there is no advantage to introducing the irrelevant abstraction of a stochastic process (the model adopted by Thomson) except to accommodate lack of familiarity with alternatives. Yet Gerr turns this around and says there is no obvious advantage to using the time-average framework. Somehow, he does not recognize the mental gyrations required to force this and other physical problems into the stochastic process framework.
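The time-average framework described above can be sketched in a few lines. This is a minimal illustration only, not Thomson's multiple-window method: the record below is a synthetic stand-in for the waveguide diameter series (the tone frequency, noise level, and segment sizes are this example's choices), and the spectral estimate, its average behavior, and its variability are all computed as averages over position along the one series, with no ensemble anywhere.

```python
import numpy as np

rng = np.random.default_rng(0)

# One long "distance-series": a strong periodic component (erratic
# diameter ripple) plus broadband noise.  Illustrative stand-in only.
n_seg, seg_len = 400, 256
x = np.cos(2 * np.pi * 0.125 * np.arange(n_seg * seg_len))
x += 0.1 * rng.standard_normal(x.size)

# Periodogram of each consecutive subsegment of the single record
# (the "sliding estimator", sampled at consecutive positions).
segs = x.reshape(n_seg, seg_len)
pgrams = np.abs(np.fft.rfft(segs, axis=1)) ** 2 / seg_len

# Distance-average (time-average) spectral estimate, and the
# distance-average squared deviation of the sliding estimator about
# that value -- the estimator mean and variance, with no ensemble.
spec_mean = pgrams.mean(axis=0)
spec_var = pgrams.var(axis=0)

freqs = np.fft.rfftfreq(seg_len)
print("peak at f =", freqs[spec_mean.argmax()])  # expect 0.125
```

The per-segment periodograms play the role of the sliding estimator; `spec_mean` and `spec_var` are its average value and variability, both defined entirely by averaging along the single series.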
Having explained the link between my argument in favor of the utility of FOT probability and Thomson’s work, let us return to Gerr’s letter. Mr. Gerr, in discussing what he refers to as “a battle of philosophies,” states that I have erred in likening skeptics to religious fanatics. But in the same paragraph we find him defensively trying to convince his readers that the “statistical/probabilistic paradigm” has not “run out of gas” when no one has even suggested that it has. No one, to my knowledge, is trying to make blanket negative statements about the value of what is obviously a conceptual tool of tremendous importance (probability), and no one is trying to denigrate statistical concepts and methods. It is only being explained that interpreting probability in terms of the fraction of time of occurrence of an event is a useful concept in some applications. To argue, as Mr. Gerr does again in the same paragraph, that in general this concept “has no obvious advantages” and that using it is “like building a house without power tools: it can certainly be done, but to what end?” is, as I stated in my previous letter, to behave like a religious fanatic — one who believes there can be only One True Religion. This is a very untenable position in scientific research.
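The fraction-of-time interpretation of probability invoked above is easy to make concrete. In this minimal sketch (the sinusoidal test signal and the function name are this example's choices, not anything from the letter), the "probability" of the event x(t) ≤ ξ is simply the fraction of time instants at which the event occurs, computed from one record with no ensemble; for a unit-amplitude sinusoid this fraction converges to the arcsine law F(ξ) = 1/2 + arcsin(ξ)/π.

```python
import numpy as np

def fot_distribution(x, xi):
    """Fraction of samples of the single series x that do not exceed xi."""
    x = np.asarray(x)
    return np.count_nonzero(x <= xi) / x.size

# One long record of a sinusoid: no ensemble is ever constructed.
t = np.arange(1_000_000)
x = np.sin(2 * np.pi * 0.01 * t)

# Compare the empirical fraction of time against the arcsine law.
for xi in (-0.5, 0.0, 0.5):
    empirical = fot_distribution(x, xi)
    arcsine = 0.5 + np.arcsin(xi) / np.pi
    print(f"xi={xi:+.1f}: FOT={empirical:.4f}, arcsine law={arcsine:.4f}")
```

The point of the sketch is that every "probabilistic" quantity here is an honest time average over the one observed record, which is exactly the sense in which the FOT framework assigns probabilities.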
As I have also pointed out in my previous letter, Mr. Gerr is not at all careful in his thinking. To illustrate his lack of care, I point out that Gerr’s statement “Professor Gardner has chosen to work within the context of an alternative paradigm [fraction-of-time probability]”, and the implications of this statement in Gerr’s following remarks, completely ignore the facts that I have written entire books and many papers within the stochastic process framework, that I teach this subject to my students, and that I have always extolled its benefits where appropriate. If Mr. Gerr believes in set theory and logic, then he would see that I cannot be “within” paradigm A and also within paradigm B unless A and B are not mutually exclusive. But he insists on making them mutually exclusive, as illustrated in the statement “From my perspective, developing signal processing results using the fraction-of-time approach (and not probability/statistics) … .” (The parenthetical remark in this quotation is part of Mr. Gerr’s statement.) Why does Mr. Gerr continue to deny that the fraction-of-time approach involves both probability and statistics?
Another example of the lack of care in Mr. Gerr’s thinking is the convoluted logic that leads him to conclude “Thus, spectral smoothing of the biperiodogram is to be preferred when little is known of the signal a priori.” As I stated in my previous letter, it is mathematically proven* in  that the frequency smoothing and time averaging methods yield approximately the same result. Gerr has given us no basis for arguing that one is superior to the other and yet he continues to try to make such an argument. And what does this have to do with the utility of the fraction-of-time concept anyway? These are data processing methods; they do not belong to one or another conceptual framework.
To further demonstrate the indefensibility of Gerr’s claim that the fraction-of-time probability concept has “no obvious advantages,” I cite two more examples to supplement the advantage of avoiding “unnecessary mental gyrations” that was illustrated using Thomson’s waveguide problem. The first example stems from the fact that the fundamental equivalence between time averaging and frequency smoothing referred to above was first derived by using the fraction-of-time conceptual framework . If there is no conceptual advantage to this framework, why wasn’t such a fundamental result derived during the half century of research based on stochastic processes that preceded ? The second example is taken from the first attempt to develop a theory of higher-order cyclostationarity for the conceptualization and solution of problems in communication system design. In , it is shown that a fundamental inquiry into the nature of communication signals subjected to nonlinear transformations led naturally to the fraction-of-time probability concept and to a derivation of the cumulant as the solution to a practically motivated problem. This is, to my knowledge, the first derivation of the cumulant. In all other work, which is based on stochastic processes (or non-fraction-of-time probability) and which dates back to the turn of the century, cumulants are defined, by analogy with moments, to be coefficients in an infinite series expansion of a transformation of the probability density function (the characteristic function), which has some useful properties. If there is no conceptual advantage to the fraction-of-time framework, why wasn’t the cumulant derived as the solution to the above-mentioned practical problem or some other practical problem using the orthodox stochastic-probability framework?
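The equivalence between time averaging and frequency smoothing cited above can be illustrated numerically. This is an illustrative sketch, not the cited proof: unit-variance white noise is used, and the record length, segment length, and smoothing width are this example's choices. It compares averaging periodograms of consecutive segments of one record ("time averaging") against smoothing the full-record periodogram over frequency ("frequency smoothing"); the two estimates nearly coincide.

```python
import numpy as np

rng = np.random.default_rng(1)

N, L = 65536, 256           # record length; segment length
K, M = N // L, N // L       # number of segments; smoothing width in bins
x = rng.standard_normal(N)  # white noise: true spectrum is flat at 1.0

# Method 1: time averaging -- periodograms of K consecutive segments.
time_avg = (np.abs(np.fft.rfft(x.reshape(K, L), axis=1)) ** 2 / L).mean(axis=0)

# Method 2: frequency smoothing -- full-record periodogram passed
# through an M-bin moving average.
pgram = np.abs(np.fft.rfft(x)) ** 2 / N
freq_smooth = np.convolve(pgram, np.ones(M) / M, mode="same")

# Compare the two estimates at their common frequencies k/L,
# away from the edge bins where the moving average is truncated.
diff = np.abs(time_avg[2:-2] - freq_smooth[::M][2:-2])
print("mean |difference|:", np.round(diff.mean(), 3))  # small
```

Both estimates have comparable resolution (1/L versus M bins of width 1/N) and comparable variance, which is the practical content of the equivalence being discussed.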
Since no one in the preceding year has entered the debate to indicate that they have new arguments for or against the philosophy and corresponding theory and methodology presented in , it seems fair to proclaim the debate closed. The readers may decide for themselves whether the resolution put forth in  was defeated or was upheld.
* A more detailed and tutorial proof of this fundamental equivalence is given in the article “The history and the equivalence of two methods of spectral analysis,” in this issue of Signal Processing Magazine.
But regarding the skeptics, I sign off with a humorous anecdote:
When Mr. Fulton first showed off his new invention, the steamboat, skeptics were crowded on the bank, yelling ‘It’ll never start, it’ll never start.’
It did. It got going with a lot of clanking and groaning and, as it made its way down the river, the skeptics were quiet.
For one minute.
Then they started shouting. ‘It’ll never stop, it’ll never stop.’
— William A. Gardner
Excerpts from earlier versions of the above letter to the editor, before it was condensed for publication:
April 15, 1995
In this, my final letter to SP Forum in the debate initiated by Mr. Melvin Hinich’s challenge to the resolution made in the book , I shall begin by addressing two remarks in the opening paragraph of Mr. Neil Gerr’s last letter (in March 1995 SP Forum). In the first remark, Mr. Gerr suggests that the “bumps and bruises” he sustained by venturing into the “battle” [debate] were to be expected. But I think that such injuries could have been avoided if he had all the relevant information at hand before deciding to enter the debate. This reminds me of a story I recently heard:
Georgios and Melvin liked to hunt. Hearing about the big moose up north, they
went to the wilds of Canada to hunt. They had hunted for a week, and each had
bagged a huge moose. When their pilot Neil landed on the lake to take them out
of the wilderness, he saw their gear and the two moose. He said, “I can’t fly out of
here with you, your gear, and both moose.”
“Why not?” Georgios asked.
“Because the load will be too heavy. The plane won’t be able to take off.”
They argued for a few minutes, and then Melvin said, “I don’t understand. Last year, each of us had a moose, and the pilot loaded everything.”
“Well,” said Neil, “I guess if you did it last year, I can do it too.”
So, they loaded the plane. It moved slowly across the lake and rose toward the mountain ahead. Alas, it was too heavy and crashed into the mountainside. No one was seriously hurt and, as they crawled out of the wreckage in a daze, the bumped and bruised Neil asked, “Where are we?”
Melvin and Georgios surveyed the scene and answered, “Oh, about a mile farther than we got last year.”
If Mr. Gerr had read the book  and put forth an appropriate level of effort to understand what it was telling him, he would have questioned Mr. Hinich’s book review and would have seen that the course he was about to steer together with the excess baggage he was about to take on made a crash inevitable.
A friend of mine recently offered me some advice regarding my participation in this debate. “Why challenge the status quo,” he said, “when everybody seems happy with the way things are?” My feeling about this is summed up in the following anecdote:
“Many years ago, a large American shoe manufacturer sent two sales reps out to different parts of the Australian outback to see if they could drum up some business among the aborigines. Sometime later, the company received telegrams from both agents.
The first one said, ‘No business. Natives don’t wear shoes.’
The second one said, ‘Great opportunity here–natives don’t wear shoes.'”
Another friend asked, “Why spend your time on this [debate] when you could be solving important problems?” I think Albert Einstein answered that question when he wrote:
“The mere formulation of a problem is far more essential than its solution, which may be merely a matter of mathematical or experimental skills. To raise new questions, new possibilities, to regard old problems from a new angle requires creative imagination and marks real advances in science”
This underscores my belief that we are overemphasizing “engineering training” in our university curricula at the expense of “engineering science.” It is this belief that motivates my participation in this debate. Instead of plodding along in our research and teaching with the same old stochastic process model for every problem involving time-series data, we should be looking for new ways to think about time-series analysis.
In the second remark in Mr. Gerr’s opening paragraph, regarding my response to Mr. Gerr’s October 1994 SP Forum letter in sympathy with “Hinich’s gleefully vicious no-holds-barred review” of , Mr. Gerr says “Even by New York standards, it [my response] seemed a bit much.” Well, I guess I was thinking about what John Hancock said, on boldly signing the Declaration of Independence:
There, I guess King George will be able to read that!
Like the King of England who turned a deaf ear to the messages coming from the New World, orthodox statisticians like Messrs. Hinich and Gerr, who are mired in tradition, seem to be hard of hearing–a little shouting might be needed to get through to them.
Nevertheless, I am disappointed to see no apparent progress, on Mr. Gerr’s part, in understanding the technical issues involved in his and Hinich’s unsupportable position that the time-average framework for statistical signal processing has, and I quote Gerr’s most recent letter, “no obvious advantages.” I hasten to point out, however, that this most recent position is a giant step back from the earlier even more indefensible position taken by Hinich in his book review, reprinted in April 1994 SP Forum, where much more derogatory language was used.
In this letter, I make a final attempt to clarify the precariousness of Hinich’s and Gerr’s position by explaining links between my arguments and the subjects of two plenary lectures delivered at ICASSP ’94. In the process of discussing these links and one of Mr. Gerr’s research papers, I hope to continue the progress made in my previous two letters in discrediting the naysayers and thereby moving toward broader acceptance of the resolution that was made and argued for in  and is currently being challenged. My continuing approach is to show that the position taken by the opposition, that the fraction-of-time probability concept and the corresponding time-average framework for statistical signal processing theory and method have nothing to offer in addition to the concept of probability associated with ensembles and the corresponding stochastic process framework, simply cannot be defended if argument is to be based on fact and logic.
Lotfi Zadeh and Fuzzy Logic
I wish that Mr. Gerr would let go of the fantasy about “the field where the Fraction-of-Timers and Statisticians do battle.” There do not exist two mutually exclusive groups of people—one of which can think only in terms of fraction-of-time probability and the other of which calls itself Statisticians. How many times and in how many ways does this have to be said before Mr. Gerr will realize that some people are capable of using both fraction-of-time probability and stochastic process concepts, and of making choices between these alternatives by assessing the appropriateness of each for each particular application? Mr. Gerr’s “battle” of “fraction-of-time versus probability/statistics” simply does not exist. This insistence on a dichotomy is strongly reminiscent of the difficulties some people have had accepting the proposition that the concept of fuzziness is a useful alternative to the concept of probability. The vehement protests against fuzziness are for most of us now almost laughable.
To quote Professor Lotfi Zadeh in his recent plenary lecture 
“[although fuzzy logic] offers an enhanced ability to model real-world phenomena…[and] eventually fuzzy logic will pervade most scientific theories…the successes of fuzzy logic have also generated a skeptical and sometimes hostile reaction…Most of the criticisms directed at fuzzy logic are rooted in a misunderstanding of what it is and/or a lack of familiarity with it.”
I would not suggest that the time-average approach to probabilistic modeling and statistical inference is as deep a concept, as large a departure from orthodox thinking, or as broadly applicable as is fuzzy logic, but there are some definite parallels, and Professor Zadeh’s explanation of the roots of criticism of fuzzy logic applies equally well to the roots of criticism of the time-average approach as an alternative to the ensemble-average or, more accurately, the stochastic-process approach. In the case of fuzzy logic, its proponents are not saying that one must choose either conventional logic and conventional set theory or their fuzzy counterparts as two mutually exclusive alternative truths. Each has its own place in the world. Those opponents who argue vehemently that the unorthodox alternative is worthless can be likened to religious fanatics. This kind of intolerance should have no place in science. But it is all too commonplace and it has been so down through the history of science. So surely, one cannot expect to find its absence in connection with the time-average approach to probabilistic modeling and statistical inference. Even though experimentalists in time-series analysis (including communication systems analysis and other engineered-systems analysis) have been using the time-average approach (to various extents) for more than half a century, there are those like Gerr and Hinich who “see no obvious advantages.” This seems to imply that Mr. Gerr has one and only one interpretation of a time-average measurement on time series data—namely an estimate of some random variable in an abstract stochastic process model. To claim that this mathematical model is, in all circumstances, the preferred one is just plain silly.
David J. Thomson and the Transcontinental Waveguide – addition to published discussion:
[It is obvious in this example that there is no advantage to introducing the irrelevant abstraction of a stochastic process except to accommodate unfamiliarity with alternatives. Yet Gerr turns this around and says there is no obvious advantage to using the time-average framework.] It is correct in this case that a sufficiently capable person would obtain the same result using either framework, but it is incorrect to not recognize the mental gyrations required to force this physical problem into the stochastic process framework. My claim—and the reason I wrote the book —is that our students deserve to be made aware of the fact that there are two alternatives. It is pigheaded to hide this from our students and force them to go through the unnecessary and sometimes confusing mental gyrations required to force-fit the stochastic process framework to real-world problems where it is truly an unnecessary and, possibly, even inappropriate artifice.
Gerr’s Letter—addition to published letter:
To further demonstrate the indefensibility of Gerr’s claim that the fraction-of-time probability concept has “no obvious advantages,” I cite two more examples to supplement the advantage of avoiding “unnecessary mental gyrations” that was illustrated using Thomson’s waveguide problem. The first example stems from the fact that the fundamental equivalence between time averaging and frequency smoothing, whose proof is outlined in the Appendix at the end of this letter, was first derived by using the fraction-of-time conceptual framework .
An Illustration of Blinding Prejudice
To further illustrate the extent to which Mr. Gerr’s prejudiced approach to scientific inquiry has blinded him, I have chosen one of his research papers on the subject of cyclostationary stochastic processes. In , Mr. Gerr (and his coauthor) tackle the problem of detecting the presence of cyclostationarity in an observed time-series. He includes an introduction and references sprinkled throughout that tie his work to great probabilists, statisticians, and mathematicians. (We might think of these as the “Saints” in Mr. Gerr’s One True Religion.) This is strange, since his paper is nothing more than an illustration of the application of a known statistical test (and a minor variation thereof) to synthetic data. It is even more strange that he fails to properly reference work that is far more relevant to the problem of cyclostationarity detection. But I think we can see that there is no mystery here. The highly relevant work that is not cited is authored by someone who champions the value of fraction-of-time probabilistic concepts. The fact that the relevant publications (known to Gerr) actually use the stochastic process framework apparently does not remove Mr. Gerr’s blinders. All he can see–it would seem–is that the author is known to argue (elsewhere) that the stochastic process framework is not always the most appropriate one for time-series analysis, and this is enough justification for Mr. Gerr to ignore the highly relevant work by this “heretic” author (author of the book  that Hinich all but said should be burned).
To be specific, Mr. Gerr completely ignores the paper  (published 1-1/2 years prior to the submission of Gerr’s paper) and the book  (published 4 years prior) wherein the problem of cyclostationarity detection is tackled using maximum-likelihood , maximum-signal-to-noise ratio , , and other optimality criteria, all of which lead to detection statistics that involve smoothed biperiodograms (and that also identify optimal smoothing) which are treated by Gerr as if they were ad hoc. Mr. Gerr also cites a 1990 publication (which does not appear in his reference list) that purportedly shows that the integrated biperiodogram (cyclic periodogram) equals the cyclic mean square value of the data (cf. (12)); but this is a special case of the much more useful result, derived much earlier than 1990, that the inverse Fourier transform of the cyclic periodogram equals the cyclic correlogram. The argument, by example, that Gerr proffers to show that (12) (the cyclic correlogram at zero lag) is sometimes a good test statistic and sometimes a bad one is trivialized by this Fourier transform relation (cf. ) and the numerous mathematical models for data for which the idealized quantities (cyclic autocorrelations, and cyclic spectral densities) in this relation have been explicitly calculated (cf. , ). These models include, as special cases, the examples that Gerr discusses superficially. The results in ,  show clearly when and why the choice of zero lag made by Gerr in (12) is a poor choice. As another example, consider Mr. Gerr’s offhand remark that a Mr. Robert Lund (no reference cited) “has recently shown that for the current example (an AM signal with a square wave carrier) only lines [corresponding to cycle frequencies] spaced at even multiples of d=8 [the reciprocal of the period of the carrier] will have nonzero spectral (rz) measure.” This result was established in a more general form many years earlier in his coauthor’s Ph.D. dissertation (as well as in ) where one need only apply the extremely well-known fact that a symmetrical square wave contains only odd harmonics.
To go on, the coherence statistic that Gerr borrows from Goodman for application to cyclostationary processes has been shown in  to be nothing more than the standard sample statistic for the standard coherence function (a function of a single frequency variable) for two processes obtained from the one process of interest by frequency-shifting data transformations–except for one minor modification; namely, that time-averaged values of expected values are used in place of non-averaged expected values in the definition of coherence because the processes are asymptotically mean stationary, rather than stationary. Therefore, the well-known issues regarding frequency smoothing in these cross-spectrum statistics need not be discussed further, particularly in the haphazard way this is done by Gerr, with no reliance on analysis of specific underlying stochastic process models.
Continuing, the incoherent average (13) proposed by Gerr for use with the coherence statistic is the only novel contribution of this paper, and I claim that it is a poor statistic. The examples used by Gerr show that this “incoherent statistic” outperforms the “coherent statistic,” but what he does not recognize is that he chose the wrong coherent statistic for comparison. He chose the cyclic correlogram with zero lag (12), which is known to be a poor choice for his examples. For his example in Figure 9, zero lag produces a useless statistic, whereas a lag equal to T/2 is known to be optimum, and produces a “coherent statistic” that is superior to Gerr’s incoherent statistic. Thus, previous work ,  suggests that a superior alternative to Gerr’s incoherent statistic is the maximum over a set of lag-indexed coherent statistics.
Finally, Mr. Gerr’s vague remarks about choosing the frequency-smoothing window-width parameter M are like stabs in the dark by comparison with the thorough and careful mathematical analysis carried out within–guess what–the time-average conceptual framework in , in which the exact mathematical dependence of the bias and variance of smoothed biperiodograms on the data-tapering window shape, the spectral-smoothing window shape, and the ideal spectral correlation function for the data model is derived, and in which the equivalence between spectral correlation measurement and conventional cross-spectrum measurement is exploited to show how conventional wisdom [1, Chapters 5, 7] applies to spectral correlation measurement [1, Chapters 11, 13, 15].
In summary, Gerr’s paper is completely trivialized by previously published work of which he was fully aware. What appears to be his choice to “stick his head in the sand” because the author of much of this earlier, highly relevant work was not a member of his One True Religion exemplifies what Gerr is trying to deny. Thus, I repeat: it is indeed appropriate to liken those (including Gerr) whom Gerr would like to call skeptics to religious fanatics who are blinded by their faith.
In closing this letter, I would like to request that Mr. Gerr refrain from writing letters to the editor on this subject. To say, as he does in his last letter, “There are many points on which Professor Gardner and I disagree, but only two that are worthy of further discussion,” is to try to worm his way out of the debate without admitting defeat. I claim to have used careful reasoning to refute beyond all reasonable doubt every point Mr. Gerr (and Mr. Hinich) has attempted to make. Since he has shown that he cannot provide convincing arguments based on fact and logic to support his position, he should consider the debate closed. To sum up the debate:
– The resolution, cited in contrapositive form in the introductory section of my 2 July 1995 letter to the editor, was made by myself in .
– The resolution was challenged by Hinich and defended by myself in April 1994 SP Forum.
– Hinich’s challenge was supported and my defense was challenged by Gerr in October 1994 SP Forum.
– Gerr’s arguments were challenged by myself in January 1995 SP Forum.
– Gerr defended his arguments in March 1995 SP Forum.
– Gerr’s presumably final defense was challenged, and the final arguments in support of the resolution are made by myself in this letter.
APPENDIX – Proof of Equivalence Between Time-Averaged and Frequency-Smoothed Cyclic Periodograms
Mr. Gerr refuses to believe the following statement, which is of fundamental importance:
…when the data block, over which spectral smoothing of the biperiodogram is performed, is partitioned into subblocks over which time averaging of the biperiodogram is performed instead, the results from these two methods can closely approximate each other if the subblock length and window shape are chosen properly.
Mr. Gerr is also apparently unwilling to look at the references I cited, where a mathematical proof of this statement is given. In an attempt to put an end to his continued denial, I am including a concise summary of that proof here.
Let a(t) be a data-tapering window satisfying a(t) = 0 for |t| > T/2, normalized so that ∫ a²(t) dt = 1, and let ra(τ) be its autocorrelation

ra(τ) = ∫ a(t + τ/2) a(t − τ/2) dt,

and let A(f) be its Fourier transform

A(f) = ∫ a(t) e^{−i2πft} dt.

Let Xa(t,f) be the sliding (in time t) complex spectrum of data x(t) seen through window a,

Xa(t,f) = ∫ a(u − t) x(u) e^{−i2πfu} du.

Similarly, let b(t) be a rectangular window of width V centered at the origin, and let Xb(t,f) be the corresponding sliding complex spectrum (without tapering).

Also, let Ra(t,τ;α) be the sliding cyclic correlogram for the tapered data

Ra(t,τ;α) = ∫ a(u − t + τ/2) a(u − t − τ/2) x(u + τ/2) x(u − τ/2) e^{−i2παu} du,

and let Rb(t,τ;α) be the sliding cyclic correlogram without tapering

Rb(t,τ;α) = (1/V) ∫ b(u − t + τ/2) b(u − t − τ/2) x(u + τ/2) x(u − τ/2) e^{−i2παu} du.

To complete the definitions needed, let Sa(t; f1,f2) and Sb(t; f1,f2) be the sliding biperiodograms (or cyclic periodograms) for the data x(t),

Sa(t; f1,f2) = Xa(t,f1) Xa*(t,f2),  Sb(t; f1,f2) = (1/V) Xb(t,f1) Xb*(t,f2).

It can be shown (using α = f1 − f2 and f = (f1 + f2)/2) that (cf. [1, Chapter 11])

(1/V) ∫_{t−V/2}^{t+V/2} Sa(u; f1,f2) du = ∫ [ (1/V) ∫_{t−V/2}^{t+V/2} Ra(u,τ;α) du ] e^{−i2πfτ} dτ ≈ ∫ ra(τ) Rb(t,τ;α) e^{−i2πfτ} dτ = ∫ |A(ν)|² Sb(t; f1 − ν, f2 − ν) dν.

This approximation becomes more accurate as the inequality V >> T grows in strength (assuming that there are no outliers in the data near the edges of the V-length segment, cf. exercise 1 in [1, Chapt. 3], exercise 4b in [1, Chapt. 5], and Section B in [1, Chapt. 11]). The first and last equalities above are simply applications of the cyclic-periodogram/cyclic-correlogram relation established in [1, Chapter 11] together with the convolution theorem (which is used in the last equality). The left-most member of the above string of equalities (and an approximation) is a biperiodogram of tapered data seen through a sliding window of length T and time-averaged over a window of length V. If this average is discretized, then we are averaging a finite number of biperiodograms of overlapping subsegments over the V-length data record. (It is fairly well known that little is gained—although nothing but computational efficiency is lost—by overlapping segments more than about 50%.) The right-most member of the above string is a biperiodogram of un-tapered data seen through a window of length V and frequency-smoothed along the anti-diagonal using the smoothing window |A(ν)|². Therefore, given a V-length segment of data, one obtains approximately the same result whether one averages biperiodograms on subsegments or frequency-smooths one biperiodogram on the undivided segment. Given V, the choice of T determines both the width of the frequency-smoothing window in one method and the length of the subsegments in the other method. Given V and choosing T << V, one can choose either of the two methods and obtain approximately the same result (barring outliers within T of the edges of the data segment of length V). In the special circumstance where T << V cannot be satisfied because of the degree of spectral resolution (smallness of 1/T) that is required, there is no general and provable argument that either method is superior to the other.
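For readers who would like to verify the equivalence numerically rather than read the proof, the following sketch of my own (all parameter choices illustrative) checks the α = f1 − f2 = 0 slice of the statement (the ordinary periodogram) with rectangular windows: time-averaging the periodograms of T-length subblocks of a V-length record closely matches frequency-smoothing the single V-length periodogram over bands of width 1/T.

```python
import numpy as np

rng = np.random.default_rng(1)
V, T = 8192, 64             # record length and subblock length, V >> T
K = V // T                  # number of subblocks = smoothing width in bins
x = rng.standard_normal(V)  # unit-variance white noise: true spectrum = 1

# Method 1: time-average the periodograms of K non-overlapping subblocks.
blocks = x.reshape(K, T)
S_time = np.mean(np.abs(np.fft.fft(blocks, axis=1))**2, axis=0) / T

# Method 2: frequency-smooth the single length-V periodogram with a
# centered circular moving average of width K bins (i.e., width 1/T in
# frequency), then sample it on the coarser length-T frequency grid.
P = np.abs(np.fft.fft(x))**2 / V
P_ext = np.concatenate([P, P[:K - 1]])        # wrap-around for circularity
S_full = np.convolve(P_ext, np.ones(K) / K, mode='valid')  # length V
idx = (np.arange(T) * K - K // 2) % V         # centered windows at bins j*K
S_freq = S_full[idx]

# The two estimates approximate each other (and the true flat spectrum).
rel_diff = np.linalg.norm(S_time - S_freq) / np.linalg.norm(S_time)
print(rel_diff)  # small compared to 1
```

The residual discrepancy reflects the difference between the two implicit smoothing kernels, and it shrinks as V/T grows, exactly as claimed above.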
— William A. Gardner
The debate preceding the above final argument:
1 — April 1994, reprint of Hinich in SP Mag
2 — April 1994, Author’s Comments including Ensembles in Wonderland
3 — Oct 1994, Gerr’s comments
4 — Jan 1995, My comments
It is hard for me to decide whether or not Mr. Gerr’s letter in the Forum section of the October 1994 issue of this magazine deserves a response. He does not seem to address the basic issue of whether or not fraction-of-time probability is a useful concept. This is the issue being debated, isn’t it? In fact, I cannot find one technical point in his letter that is both valid and clearly stated. But, because Mr. Gerr has clearly stated in his letter that, regarding philosophical issues in science and engineering, he prefers “New York”-style vicious attacks like Hinich’s to carefully worded, slyly mocking replies like mine, it has occurred to me that I might get through a little better to the Mr. Gerrs out there if I tried my hand at being just a little vicious. I hope the readers will understand that I am new at this; I give them my apologies now in case I fail to overcome my propensity for writing carefully and, when appropriate, slyly.
Mr. Gerr’s letter reveals a lot of misunderstanding and this provides us with some insight into what may motivate vicious attacks on attempts to educate people about alternative ways to conceptualize problem solving. It is hard for me to imagine how Mr. Gerr could have missed the main point of my response to Hinich’s review. This point, which is clearly stated in both the book  under attack and the unappreciated response to this attack, is that, and I quote from my response,
“…there is really no basis for controversy. The only real issue is one of judgement—judgement in choosing for each particular time-series analysis problem the most appropriate of two alternative approaches.”
To argue against this point is to be a zealot in the truest sense of the word, fanatically fighting for the One True Religion in statistics.
Sociologists and psychologists tell us that vicious behavior is often the result of paranoia born out of ignorance. In the example before us, both Hinich and Gerr demonstrate substantial ignorance regarding nonstochastic statistical concepts and methods, including fraction-of-time (FOT) probability. This case has already been made for Hinich in the Forum section of the April 1994 issue of this magazine. So, let us consider Gerr’s letter. First off, Gerr admits to the kind of behavior that is supposed to have no place in science and engineering, by identifying himself as a “partisan spectator”. Webster’s Ninth New Collegiate Dictionary defines partisan as “a firm adherent to a party, faction, cause, or person; esp: one exhibiting blind, prejudiced, and unreasoning allegiance.” On the basis of this admission alone one has to wonder whether to continue reading Gerr’s letter or flip the page. (It’s interesting that Gerr is into partisanship and Hinich’s university appointment is in the Government Department.) But what the heck, let’s see if we can find some technical content in his letter.
Mr. Gerr’s first of three technical remarks is quoted here:
“For me, the statistical approach to signal analysis begins with a probabilistic model (e.g., ARMA) for the signal. The signal time series is viewed as a single realization and as data arising from the model. The time series data is used in conjunction with statistical techniques (e.g., maximum likelihood) to infer parameters, order, appropriateness, etc. of the model. The abstract notion of an infinite population plays no role.”
Not too surprisingly, it is difficult to tell what point Mr. Gerr is trying to make here. He starts with a probabilistic model and ends with a denial of the notion of a population. Would Mr. Gerr care to tell us how he interprets “probability” in “probabilistic model” if he denies the notion of population? My guess is that his thinking does not go this deep. But let’s try to extract some meaning by reading between the lines. In spite of his sympathy with Hinich, Mr. Gerr seems to be agreeing that the problem-solving machinery of probability theory (e.g., ARMA modeling and maximum likelihood estimation) can be used regardless of whether one conceptualizes its use in terms of stochastic probability (with its associated ensembles or populations) or in terms of fraction-of-time (FOT) probability. This is the point that is made by the book  under attack: This book does include ARMA models and the maximum likelihood method as parts of the nonstochastic theory. True to the “blind allegiance” definition of partisanship, Mr. Gerr is apparently agreeing with the book while sympathizing with the attack on the book. Either Mr. Gerr has not read the book at all, or he may simply not have thought hard enough and long enough about these things. This is important to point out because I suspect it is the primary reason that there is any controversy at all.
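For the record, here is how “probability” can be interpreted with no population whatsoever, in a sketch of my own (the waveform and level are merely illustrative): the fraction-of-time distribution of a single sine wave, namely the fraction of time the waveform lies at or below each level, reproduces the arcsine law.

```python
import numpy as np

# One long record of a single deterministic waveform -- no ensemble anywhere.
t = np.linspace(0.0, 200.0, 200_000, endpoint=False)  # 200 periods
x = np.sin(2 * np.pi * t)

def fot_cdf(x, v):
    """Fraction-of-time distribution: fraction of time x(t) <= v."""
    return np.mean(x <= v)

# For a unit sine wave the FOT distribution is the arcsine law:
#   F(v) = 1/2 + arcsin(v)/pi,  |v| <= 1.
v = 0.5
print(fot_cdf(x, v))               # ~0.6667, measured from the record
print(0.5 + np.arcsin(v) / np.pi)  # ~0.6667, the analytic arcsine law
```

Every probabilistic quantity here is an honest time average over one record; an ensemble never enters.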
Mr. Gerr then goes on to admit that the FOT approach may be required for chaotic time series. But again, true to form, he then makes a remark that is difficult to interpret:
“…the fraction-of-time approach may be required, though not necessarily: in , it is shown that statistical model-fitting techniques developed for stochastic time series models can also be useful in fitting chaotic time series models.”
This sounds like Mr. Gerr is again confused about the fact that many probabilistic models can be interpreted or conceptualized in terms of either stochastic probability or FOT probability. Thus, regardless of the fact that a model was originally derived in the stochastic probability framework, it can—depending on the particular model—still be used (and/or rederived) in the FOT framework. In fact, AR models were originally derived within the FOT framework, not the stochastic framework  – . This will probably surprise Mr. Gerr. And if he is not confused about this, then he is again agreeing with the book  whose attack he supports.
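To make the point concrete, here is a sketch of my own (the coefficients and record length are illustrative) of the AR machinery set up entirely with time averages: the Yule-Walker equations below are formed from lag products averaged over a single record.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100_000
a1, a2 = 0.5, -0.3  # true AR(2) coefficients (illustrative)

# Generate one long record: x[n] = a1*x[n-1] + a2*x[n-2] + e[n].
e = rng.standard_normal(N)
x = np.zeros(N)
for n in range(2, N):
    x[n] = a1 * x[n - 1] + a2 * x[n - 2] + e[n]

# Time-averaged (FOT) autocorrelations r(k) = <x[n] x[n-k]>.
def r(k):
    return np.mean(x[k:] * x[:N - k]) if k > 0 else np.mean(x * x)

# Yule-Walker: [r(0) r(1); r(1) r(0)] [a1; a2] = [r(1); r(2)].
R = np.array([[r(0), r(1)], [r(1), r(0)]])
a_hat = np.linalg.solve(R, np.array([r(1), r(2)]))
print(a_hat)  # close to (0.5, -0.3)
```

The same numerical machinery is conventionally described in stochastic terms, but nothing in the computation requires an ensemble.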
Because it was assumed that people working with stochastic processes would understand the subject well enough to compare it with the nonstochastic theory presented in , this comparison was not made very explicit in . Responses to , such as those of Messrs. Hinich and Gerr, suggest that this assumption is false more often than it is true. To make up for this, an explicit comparison and contrast between the theories of stochastic processes and nonstochastic time-series analysis is made in Chapter 1 of .
Mr. Gerr concludes his letter by considering transient time-series and erroneously concluding that time averaging a biperiodogram over successive blocks of data (which he identifies with FOT methodology) is inappropriate, whereas spectrally smoothing a biperiodogram is appropriate. Obviously, he does not realize that the infamous book  that proposes FOT concepts and methods shows that when the data block, over which spectral smoothing of the biperiodogram is performed, is partitioned into subblocks over which time averaging of the biperiodogram is performed instead, the results from these two methods can closely approximate each other if the subblock length and window shape are chosen properly. In other words, it is very clearly explained in  that the FOT framework for spectral analysis includes frequency smoothing as well as time-averaging methods. This again brings up the question, did Mr. Gerr read the book , and if so, did he comprehend anything?
It is my recommendation to Mr. Gerr, and others who would entertain joining this discussion of the merit of considering alternatives to stochastic thinking, that the book  that started the furor so nicely exemplified by Hinich’s review, and Chapter 1 of , be read carefully, the way they were written. This should be a prerequisite to criticism, vicious or otherwise.
Before closing this letter, I should point out that the so-called controversy that statisticians like Hinich and Gerr are promoting is about as productive as the statisticians’ endless debate between the “Bayesians” and the “frequentists” over whether or not prior probabilities (“prior” meaning “before data collection”) should be included in the One True Religion of statistics . The debate is endless, because it is based on the faulty premise that there is One True Religion. In fact, the subject of our “controversy” is not unrelated to the Bayesian/frequentist debate. This debate dates back to the 1920s, and involves many well-known statisticians, some 40 of whom are referenced in  for their contributions to this debate. The conclusion in , published just last month, is, I am happy to report:
“The Bayesians have been right all along! And so have the frequentists! Both schools are correct (and better than the other) under specific (and complementary) circumstances . . . Neither approach will uniformly dominate the other . . . knowing when to [use] one or the other remains a tricky question. It is nonetheless helpful to know that neither approach can be ignored”
This is very encouraging! These pragmatic statisticians are attempting to dispel belief in the One True Religion.
I conclude this reply with a little dialogue that I find both amusing and supportive of my response to vicious attacks:
Can old dogs be taught new tricks?
Maybe, but the teacher might get barked at for trying.
Should the teacher accept the barking graciously?
Maybe, but if the old dogs band together into a pack, the teacher better bark back.
— William A. Gardner
5 — March 1995, Gerr’s second try
6 — July 1995, my final response (inserted at the beginning above)