As an example suggesting that the unnecessary use of abstract stochastic process models can interfere with conceptualization, with progress in developing methodology, and even with performance analysis, a straightforward theoretical development in the non-stochastic probabilistic analysis of methods of statistical spectral analysis is reviewed here. The source of this non-stochastic characterization of the problem of designing quadratic signal processors for estimation of ideal statistical spectral densities (and, more generally, cross-spectral densities and spectral correlation densities) is Section C of Chapter 5 and Section B of Chapter 15 of the 1987 book [Bk2].
As shown in [Bk2], essentially all traditional and commonly used methods of direct statistical spectral analysis (which excludes the indirect methods based on data modeling reviewed on Page 11.2) can be characterized as quadratic functionals of the observed data which, in the most general case, slide along the time series of data. These functionals are, in turn, characterized by the weighting kernels of quadratic forms for both discrete-time and continuous-time data. It is further shown that the spectral resolution, temporal resolution, and spectral-leakage properties of all these individual spectrum estimators (which collectively determine estimator bias), as well as the reliability (variance and coefficient of variation) of these estimators, are characterized in terms of properties of these kernels. These kernels are explicitly specified by the particular window functions used by the specific spectrum estimators, including data-tapering windows, autocorrelation-tapering windows, time-smoothing and frequency-smoothing windows, and variations on these. With the use of the simple tabular collection of these kernels, Table 5-1 in [Bk2], all the direct spectrum estimators are easily compared, both qualitatively and quantitatively. Many users guided by traditional stochastic-process treatments of spectrum estimation appear to be perplexed about the true role, interaction, and quantitative impact of operations such as time smoothing, frequency smoothing, data tapering, autocorrelation tapering, direct Fourier transformation of data, indirect Fourier transformation of autocorrelation functions, time-hopped time averaging versus continuously sliding time averaging, and more.
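To make the quadratic-form characterization concrete, here is a minimal Python sketch (not taken from [Bk2]; the Hanning taper, moving-average smoother, and record length are illustrative assumptions) of one member of this class of direct estimators: a frequency-smoothed periodogram of tapered data.

```python
import numpy as np

def smoothed_periodogram(x, smooth_len=9):
    """Direct spectrum estimate as a quadratic functional of the data:
    taper the record, form the periodogram, then smooth across frequency.
    (The Hanning taper and moving-average smoother are illustrative choices;
    any of the tabulated windows could be substituted.)"""
    N = len(x)
    taper = np.hanning(N)                            # data-tapering window
    X = np.fft.fft(x * taper)
    pgram = np.abs(X) ** 2 / np.sum(taper ** 2)      # tapered periodogram
    kernel = np.zeros(N)
    kernel[:smooth_len] = 1.0 / smooth_len
    kernel = np.roll(kernel, -(smooth_len // 2))     # centered frequency-smoothing window
    # circular convolution across frequency bins
    return np.real(np.fft.ifft(np.fft.fft(pgram) * np.fft.fft(kernel)))

# Example: 4096-sample record of a sinusoid in white noise
rng = np.random.default_rng(0)
n = np.arange(4096)
x = np.cos(2 * np.pi * 0.1 * n) + rng.standard_normal(n.size)
S_hat = smoothed_periodogram(x)
```

In this form, the data-tapering window governs spectral leakage, while the width of the frequency-smoothing window trades spectral resolution against variance; these are exactly the kernel-level tradeoffs that Table 5-1 exposes.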
With this general approach, the often-confusing (judging from the literature) comparison of spectrum estimators is rendered transparent.
Given Table 5-2 and the results in the above-cited chapter sections of [Bk2], which quantitatively compare all the direct methods of spectrum analysis, one can set about optimizing the design of a spectrum analyzer so as to achieve a desired level of temporal resolution, spectral resolution, spectral leakage, and reliability within the limits of this entire class of estimators. One can easily see the tradeoffs among all these performance parameters and select what one considers the optimal tradeoff for the particular application at hand. This is done by selecting, from existing catalogs of window functions, particular windows for operations such as time smoothing, frequency smoothing, data tapering, and autocorrelation tapering.
It should be mentioned before concluding this overview that there is another performance characteristic of spectrum estimators, and especially of spectral correlation analyzers, which typically require the computation of many cross-spectra for a single record of data: computational cost and data-storage requirements. These performance parameters are not characterized in Table 5-1 mentioned above, and evaluating them is generally more challenging than the performance evaluation discussed above, which is greatly facilitated by Table 5-1. Nevertheless, thorough analysis of the competing algorithms for spectral correlation analysis (also called cyclic spectrum analysis) has been reported in the literature on cyclic spectrum analysis dating back to the seminal paper by my colleagues R. Roberts, W. Brown, and H. Loomis in the 1991 special issue of the IEEE Signal Processing Magazine [JP36], pp. 38–49.
One of the reasons the achievements in [Bk2] are not commonly found in the literature based on stochastic processes may be that the key tradeoff between spectral resolution and spectral leakage is not a stochastic phenomenon; it is simply a characteristic of deterministic functions. This fact also may be responsible for the misleading claims about the superiority of the multi-taper method of direct spectrum estimation relative to the classical methods, based on time-averaged and/or frequency-smoothed periodograms of possibly tapered data, treated in [Bk2]. An in-depth discussion is provided in the careful analysis presented here (in progress, being completed in November 2024), entitled “The Multi-Taper Method of Spectrum Estimation: Another Comparison with Periodogram Methods.”
The theory and methodology of parametric spectrum analysis developed over a long period of time, with increased emphasis during later periods on observations of data containing multiple sine waves with similar frequencies, i.e., frequencies whose differences are comparable to or smaller than the reciprocal of the observation time, particularly when only one time record of data is available. After early initial work in time-series analysis on what was called the “problem of hidden periodicities” (see Page 4.1) using non-probabilistic models, a concerted effort based on the use of stochastic process models ensued and led to a substantial variety of methods, particularly for high-resolution spectral analysis (resolving spectral peaks associated with additive sine waves with closely spaced frequencies). This effort is another example of methodology development based on unnecessarily abstract data models; that is, stochastic process models that mask the fact that ensembles of sample paths and abstract probability measures on these ensembles (sample spaces) are completely unnecessary in the formulation and solution of the problems addressed (cf. Page 3).
The first, and evidently still the only, comprehensive treatment of this methodology within the non-stochastic framework of Fraction-of-Time Probability Theory is presented in Chapter 9 of the book [Bk2]. The treatment provided covers the following topics:
The extensive comparison of methods in the experimental study leads to the conclusion that, in general, for data having spectral densities including both smooth parts and impulsive parts (spectral lines), the best-performing methods are hybrids of direct methods based on processed periodograms and indirect methods based on model fitting. A well-designed hybrid method can take advantage of the complementary strengths of both direct and indirect methods.
Since the WCM’s initial theoretical considerations of (1) cyclostationary signal modeling of star radiation and (2) interplanetary baseline interferometry between the mid-1980s and the mid-2010s, technology has caught up and a number of relevant reports are available in the literature. The associated theoretical developments and increased capabilities for topic (2) over the last 10 years are quite impressive and render the WCM’s initial preliminary work irrelevant. Consequently, the modest content of this section—written in the mid-2010s—has been removed. The ongoing work in the field on topic (1) is revealing the utility of cyclostationarity modeling of astrophysical time series measurements and observations.
The purpose of this page is to describe a substantial advance in theory and methodology for high-performance location of RF emitters using multiple moving sensors, based on statistically optimum aperture synthesis. This advance was developed at SSPI during the several years preceding 2010 as part of the work outlined on Page 12.1. I developed the BASEL concept and mathematical formulation described below without SSPI support because of the lack of an indirect-cost budget for this work. The software implementation and testing with real data were carried out with full support by SSPI, originating with government-funded contracts.
What’s New About Bayesian Emitter Location Technology?
In this introductory discussion, key differences between the following alternative technologies are exposed: (1) classical TDOA/FDOA-based and AOA-based estimation of unknown (non-random) emitter location using multiple sensors and calculation of corresponding confidence regions using stochastic process models of the received data at the collection system’s sensors and averaging over the sample space of all possible received data; (2) the relatively novel Bayesian formulation of estimation of random emitter location coordinates using multiple sensors and calculation of Bayesian confidence regions conditioned on observation of the actual received data; and (3) ad hoc methods for RF Imaging of spatially discrete emitters, using multiple sensors, derived from VLBI (Very Long Baseline Interferometry) concepts and methods developed for Radio Astronomy.
A key concept in (2) and (3) is aperture synthesis and the production of RF images: the output image produced—in which bright spots or corresponding peaks in amplitudes are sought as the probable locations of sources—can be thought of as being somewhat analogous to an antenna pattern. The image is designed to exhibit peaks above the surface upon which the emitters reside (e.g., Earth) at whatever coordinates RF emitters are located, and the heights of such peaks typically increase as the average transmitted power of the emitters increases, relative to background radiation and/or antenna and receiver noise.
Before proceeding, the topic to be addressed is put in perspective relative to more well-known methods of aperture synthesis.
Synthetic Aperture Photography (SAP) is analogous to Synthetic Aperture RF Imaging (SARFI), neither of which is as strongly analogous to Synthetic Aperture Radar (SAR). The former two are passive and endeavor to image single sources or spatial arrangements of radiating energy sources, whereas the latter is active: it transmits energy at reflecting structures and synthesizes an image of the structure from the received reflections. To be more precise, SARFI normally images sources of RF energy, but it can under certain circumstances be used to distinguish between sources and reflectors of sources and can thereby perform imaging without line of sight (given favorable characteristics of reflectors) and can mitigate multipath-propagation degradation of images. In contrast, SAP typically images structures on the basis of the optical energy they reflect from the environment.
Consequently, these three types of aperture synthesis are quite distinct. In Medical Imaging, a CAT scan using computer-aided tomography does perform a type of synthetic aperture imaging, but it is distinct from SAP, SARFI, and SAR. Somewhat more closely related is Radio Telescope Imaging (RTI), intended for imaging essentially contiguous, spatially extended sources of RF radiation, such as stars. The assumption of a single RF point source to be located, or a configuration of spatially discrete sources of RF energy to be jointly located, justifies a unique mathematical model and statistical inference method.
I formulated such a model and developed a method of aperture synthesis that appears to have capability advantages beyond those of earlier RF imaging techniques for locating RF emitters on Earth using non-geostationary satellite receivers. I never received funding to compare the performances of these two competing approaches, and I have no idea what might have been done in this vein during the 13 years since I retired from that work. To my knowledge, any work in this area was likely classified by the US government on the basis of national defense. My internal (independent) R&D on this approach, performed at my company SSPI in the first decade of the 21st Century, was not classified and is summarized in what follows here.
In the Bayesian Aperture Synthesis for Emitter Location (BASEL) system described herein, the Image is actually either the likelihood function or the posterior probability density function of emitter location coordinates, given the received data impinging on the array. This image can be displayed as either grey-level or multicolor intensity overlaid on a map of the geographical area seen by the sensors, or it can be displayed as the height of a surface above the relevant region of Earth’s surface.
Significant reductions in computational complexity can be realized by producing images that are only proportional to these statistical functions, or even only monotonic, nonlinearly warped versions of them. Contours of constant elevation for such images produce probability containment regions and can be computed exactly, unlike the classical location-error containment regions (typically ellipses or ellipsoids) used with TDOA/FDOA-based location systems, which are only approximations and, in some cases, quite crude approximations.
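As a rough illustration of how such containment regions can be computed directly from an image, the following sketch (function and variable names are hypothetical; a posterior already evaluated on a coordinate grid is assumed) thresholds the image at the constant-elevation contour that captures a desired probability mass, with no elliptical approximation.

```python
import numpy as np

def containment_region(posterior, cell_area, mass=0.9):
    """Probability-containment region (exact up to grid resolution):
    the set of grid cells enclosed by a contour of constant elevation
    of the posterior PDF that captures `mass` of the probability.
    `posterior` is a 2-D array of PDF values over candidate coordinates."""
    p = posterior.ravel() * cell_area            # probability per grid cell
    order = np.argsort(p)[::-1]                  # highest elevation first
    cum = np.cumsum(p[order])
    k = np.searchsorted(cum, mass) + 1           # cells needed to reach `mass`
    region = np.zeros(p.size, dtype=bool)
    region[order[:k]] = True
    return region.reshape(posterior.shape)       # boolean mask of the region
```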
In addition to the Image derived from the received data, explicit formulas for an ideal Image can be obtained by substituting a multiple-signal-plus-noise model for the received data in the image formula and then replacing the finite-time-average autocorrelation functions of the signals and the noise that appear in the formula with their idealized versions, which produces a sort of antenna pattern—but not in the usual sense. Rather, this produces an idealized (independent of specific received data) image for whatever multi-signal spatial scenario is modeled.
One of the keys to transitioning from TDOA/FDOA- and AOA-based RF-emitter location to RF Imaging is replacing the unknown parameters (TDOA, FDOA, and AOA) in data models with their functional dependence on unknown emitter-location coordinate variables on Earth’s surface and known sensor locations overhead. This process is called geo-registration. In this way, when sensors move along known trajectories during data collection, the TDOA, FDOA, and AOA functions vary in a known manner while the unknown emitter-location coordinates remain fixed (as long as the emitter is not moving), which enables long coherent integration over periods during which the sensor positions change substantially. These sensor trajectories increase the size of the synthesized aperture beyond the distance between fixed sensors.
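A minimal sketch of the geo-registration idea, assuming straight-line propagation and a stationary emitter (the function and its arguments are illustrative constructions, not BASEL code): TDOA and FDOA for a sensor pair are computed as functions of a candidate emitter location and the known sensor positions and velocities at each instant, so the same candidate coordinates can be tested coherently across the entire collection interval.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def georegistered_tdoa_fdoa(candidate_xyz, sensor_pos, sensor_vel, fc):
    """Express TDOA and FDOA for one sensor pair as functions of a candidate
    emitter location (assumed stationary) and the known, time-varying sensor
    positions and velocities.  Sign conventions vary by application; the ones
    used here are illustrative.
    sensor_pos, sensor_vel: arrays of shape (2, 3) for the sensor pair."""
    r = sensor_pos - candidate_xyz                  # range vectors, shape (2, 3)
    ranges = np.linalg.norm(r, axis=1)
    tdoa = (ranges[0] - ranges[1]) / C              # differential delay, s
    # radial velocities give differential Doppler at carrier frequency fc
    vrad = np.sum(sensor_vel * r / ranges[:, None], axis=1)
    fdoa = fc * (vrad[0] - vrad[1]) / C             # differential Doppler, Hz
    return tdoa, fdoa
```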
In many applications, prior probability densities for emitter locations used in the Bayesian formulation of statistical inference are unknown, in which case they can be specified as uniform over the entire region of practically feasible locations of interest. This is not a weakness of this approach. It simply means that the location estimation optimization criterion is based on the likelihood function of candidate emitter coordinates over a specified region instead of the posterior probability density function of those coordinates. Moreover, when prior information is available, this Bayesian approach incorporates that information in a statistically optimum manner through the prior probability densities. Furthermore, these priors play a critical role in location updating through Bayesian Learning for which posteriors from one period of collection become priors for the next period of collection.
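The Bayesian-learning update described above can be sketched as follows (hypothetical names; a gridded representation of priors and likelihoods is assumed): the posterior computed from one collection period, renormalized over the grid, becomes the prior for the next period, and a uniform prior over the feasible region is used when no prior information is available.

```python
import numpy as np

def update_location_posterior(prior, log_likelihood, cell_area):
    """One cycle of Bayesian learning over a grid of candidate emitter
    coordinates: combine the prior from the previous collection period with
    the likelihood computed from the new data, then renormalize.  The result
    serves as the prior for the next collection period."""
    post = prior * np.exp(log_likelihood - log_likelihood.max())
    return post / (post.sum() * cell_area)

# With no prior information, start uniform over the feasible region, e.g.:
# prior = np.full(grid_shape, 1.0 / (n_cells * cell_area))
```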
Also, starting with the posterior probability formulation leads to a natural alternative to the classical (and often low-quality) approximate error containment regions for performance quantification: the regions contained within contours of constant elevation of the emitter-location posterior probability density function above the surface defined by the set of all candidate emitter coordinates.
This alternative metric for quantifying location accuracy has the important advantage of not averaging over all possible received data (the population) in a stochastic process model, but rather conditioning the probability of emitter location on the actual data received. It also has the advantage of not requiring the classical approximation for containment regions, which is known to be inaccurate in important types of applications where attainable location accuracy is not high.
Besides this advantageous difference in location-accuracy quantification, the Bayesian approach produces optimal imaging algorithms that have aspects in common with now-classical Imaging systems for radio astronomy, referred to as VLBI, and also open the door to various methods for improving the capability of RF imaging for spatially discrete emitters. The improvements over ad hoc VLBI-like processing result from abandoning the interpretation in terms of imaging essentially continuous spatial distributions of radiation sources in favor of locating spatially discrete sources of radiation. In contrast to radio astronomy, where the sensors are on the surface of rotating Earth and the diffuse sources of radiating energy to be imaged are in outer space, in RF Imaging of manmade radio emitters, the sources are typically located on the surface of Earth and the sensors follow overhead trajectories.
In conclusion, BASEL technology improves significantly on both (1) traditional radar-emitter location methods, based on sets of TDOA and/or FDOA and/or AOA measurements, applied to communications emitters and (2) more recent methods of RF Imaging for communications emitters derived from classical VLBI technology for radio astronomy. The improvement comes in the form of not only higher sensitivity and higher spatial resolution, resulting from increases in coherent signal-processing gain, but also novel types of capability that address several types of location-system impairments.
An especially productive change in the modeling of received data that is responsible for some of the unusual capability of BASEL technology, particularly coherent combining of statistics over multiple sensor pairs with unknown phase relationships, is the adoption of time-partitioned models for situations in which signals of interest are known in some subintervals of time and unknown in others. This occurs, for example, when receiver-training data is transmitted periodically, such as in the GSM cellular telephone standard. For such signals, the posterior PDF calculator takes on two distinct forms, one of which applies during known-signal time intervals and the other of which applies during unknown-signal time intervals. When the emitter location is not changing over a contiguous set of such intervals of time, these two distinct posterior PDFs (images) can be combined into a single image, which enjoys benefits accruing from both models. Thus, BASEL can perform two types of time partitioning when appropriate: (1) partitioning the data into known-signal time-intervals and unknown-signal time intervals, and (2) partitioning time into subintervals that are short enough to ensure the narrowband approximation to the Doppler effect due to sensor motion is accurate in each reduced subinterval. It doesn’t matter which partitioning results in shorter time intervals: The data from all intervals is optimally combined.
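One simple way to picture the combining of per-interval images, under the assumptions that the emitter is stationary across the intervals and that the interval data are treated as statistically independent (this is an illustrative sketch, not the BASEL combining rule itself): the known-signal and unknown-signal images, expressed as log-likelihood or log-posterior surfaces over a common grid, simply add in the log domain.

```python
import numpy as np

def combine_interval_images(log_images_known, log_images_unknown):
    """Combine per-interval images (log-likelihood or log-posterior surfaces
    over the same grid of candidate coordinates).  With a stationary emitter
    and intervals treated as independent, the surfaces add in the log domain,
    regardless of whether each interval used the known-signal or the
    unknown-signal form of the calculator."""
    total = np.zeros_like(log_images_known[0])
    for img in list(log_images_known) + list(log_images_unknown):
        total += img
    return total - total.max()      # peak normalized to 0 for display
```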
There is more than one way to beneficially combine RF images, depending on the benefits desired. For example, the presence of known portions of a signal in received data enables signal-selective geolocation. In addition, it enables measurement of the phase offsets of the data from distinct sensor pairs. I have formulated a method for combining images that essentially equalizes these phases over multiple contiguous subintervals of time, thereby enabling coherent combining of signal-selective images over distinct pairs of moving sensors. This method combines CAFs (Cross Ambiguity Functions) from the data portions in which the signal is unknown with RCAFs (Radar CAFs) from the remaining data portions using a simple quadratic form. In this configuration, BASEL coherently combines images over both time and space, using narrowband RCAF and CAF measurements.
Another way the BASEL Processor provides innovative geolocation capability is through processing that mitigates multipath propagation and blocked line of sight. Highlights of this aspect of BASEL are listed below.
SOME EXAMPLES OF BASEL CAPABILITY
The following material provides a few details about BASEL RF Imaging.
This page presents a way to use the Bayesian Minimum-Risk Inference methodology subject to structural constraints on the functionals of available time-series data to be used for making inferences.
Because the approach of minimizing risk subject to such a constraint is not tractable and, in fact, is even less tractable than unconstrained minimum-risk inference, an alternative suboptimum method is developed. This method produces minimum-risk (i.e., minimum-mean-squared-error) structurally constrained estimates of the required posterior probabilities or PDFs, and then uses these estimates as if they were exact in the standard Bayesian methodology for hypothesis testing and parameter estimation. Since all the computational complexity in the Bayesian methodology is contained in the computation of the posterior probabilities or PDFs, this approach to constraining the complexity of computation is appropriate, and it is tractable: it requires only inversion of linear operators, regardless of the nonlinearities allowed by the structural constraint. The dimension of the linear operators does, however, increase as the polynomial order of the allowed nonlinearities is increased.
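A minimal sketch of this idea (my own illustrative construction using sample moments in place of the model moments; names are hypothetical): with the estimator constrained to be a second-order polynomial in the data, the MMSE coefficients for estimating the posterior probability of a hypothesis satisfy linear normal equations, so only a linear solve is required even though the estimator itself is nonlinear in the data.

```python
import numpy as np

def constrained_posterior_estimator(Z, y):
    """MMSE estimate of a posterior probability constrained to be a
    second-order polynomial in the data.  Z holds sample vectors (rows),
    y holds the 0/1 indicator of the hypothesis.  The optimum coefficients
    satisfy linear (normal) equations, so only a linear solve is needed."""
    n, d = Z.shape
    # polynomial features: 1, z_i, z_i * z_j (i <= j)
    quad = np.einsum('ni,nj->nij', Z, Z)
    iu = np.triu_indices(d)
    F = np.hstack([np.ones((n, 1)), Z, quad[:, iu[0], iu[1]]])
    R = F.T @ F / n                        # feature correlation matrix
    p = F.T @ y / n                        # cross-correlation with indicator
    c = np.linalg.lstsq(R, p, rcond=None)[0]   # linear equations only
    return lambda z: float(np.hstack([1.0, z, np.outer(z, z)[iu]]) @ c)
```

The returned function maps a new data vector to an estimate of the posterior probability, which is then used as if it were exact in the standard Bayesian decision rule.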
A full presentation of this different method of moments can be viewed here. An in-depth study, in the form of an unpublished PhD dissertation, on inference from scalar-valued time series based on the Constrained Bayesian Methodology invented by the WCM, upon which this method of moments is based, is available here.
This method is an alternative to both the classical Method of Moments and the Generalized Method of Moments. The four well-known primary methods of moment-based probability-density estimation and associated moment-based parameter estimation (Pearson, Provost, Lindsay) are briefly described as background for introducing the new method. This method is radically different in approach yet provides a solution that requires essentially the same information as the existing methods: (1) model moments with known dependence on unknown parameters and (2) associated sample moments. However, the new method, unlike the classical method of moments and its generalized counterparts, requires only the solution of simultaneous linear equations. A theoretical comparison between the new and old methods is made, and reference is made to the author’s earlier work on analytical comparisons with Bayesian parameter estimation and decision for time-series data arising in digital communication systems receivers.
Considering the depth of technical detail in the above-linked source, the WCM has chosen to also give users access to a podcast that provides an excellent overview in the form of an easy-to-listen-to chat between two conversationalists. This podcast was produced by AI using Google’s experimental NotebookLM. The WCM confirms that the technical content is accurate and that the podcast does an admirable job of getting across the main points of the research paper.
The work cited on page 11.4 ([JP2], [JP4], and [JP9]) provides the first derivations of linearly constrained Bayesian receivers for synchronous M-ary digital communications signals. These derivations reveal insightful receiver characterizations in terms of linearly constrained MMSE estimates of posterior probabilities of the transmitted digital symbols, which provide a basis for formulating data-adaptive versions of these receivers.
Further work here reveals that linearly constrained Bayesian receivers for optical M-ary digital communications signals are closely related to the above receivers for additive-noise channels such as those used in radio-frequency transmission systems and cable transmission systems. That is, these linear receivers for the optical signals can be insightfully characterized in terms of linearly constrained MMSE estimates of posterior probabilities of the optically transmitted digital symbols over fiber optic channels.
The signal models for both these classes of digital communications signals are cyclostationary and lead to a receiver structure comprising a bank of matched filters for the M distinct pulse types, followed by symbol-rate subsamplers and a matrix of Fractionally Spaced Equalizers, which in fact perform more than simple equalization: they are more akin to discrete-time multivariate Wiener filters.
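A hedged sketch of the front end of this structure (pulse shapes, sample rates, and names are illustrative assumptions; the matrix of fractionally spaced equalizers is not shown):

```python
import numpy as np

def matched_filter_bank_outputs(r, pulses, sps):
    """Pass the received sequence through a bank of filters matched to the
    M pulse types, then subsample each output at the symbol rate.
    r: received samples; pulses: list of M pulse waveforms; sps: samples per
    symbol.  The resulting M streams would then feed the equalizer matrix
    (multivariate Wiener filtering), which is not shown here."""
    outputs = []
    for p in pulses:
        mf = np.convolve(r, np.conj(p[::-1]), mode='full')   # matched filter
        outputs.append(mf[len(p) - 1::sps])                  # symbol-rate subsampling
    return np.array(outputs)          # shape (M, number of symbols)
```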
This similarity in receiver structures can be interpreted as a direct result of an equivalent linear model for the Marked and Filtered Doubly Stochastic Poisson Point Processes used to model the optical signals. This equivalent model is derived in the above-linked paper [JP5]. The signal models for both the RF and optical signals used in this work are cyclostationary.
The initial concept discussed here is to use the approximation of narrowband FM by DSB-AM plus quadrature carrier, and then exploit the 100% spectral redundancy of DSB-AM, due to its cyclostationarity, to suppress co-channel interference. This suppression is achieved with the technique of FRESH filtering (see page 2.5.1).
The simplest version of the problem addressed by this approach is that for which interference exists only on one side of the carrier. Nevertheless, it is possible to correct for interference on both sides of the carrier, provided that the set of corrupted sub-bands on one side of the carrier has a frequency-support set with a mirror image about the center frequency that does not intersect the set of corrupted sub-bands on the other side of the center frequency.
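To illustrate the one-sided case in the simplest possible terms, the following sketch (an illustrative construction at complex baseband, not the implementation referenced on Page 2.5.1) exploits the conjugate symmetry of the DSB-AM spectrum about the carrier: a corrupted sub-band on one side is replaced by the conjugate of its clean mirror image, which is a rudimentary instance of conjugate FRESH filtering.

```python
import numpy as np

def mirror_repair(r, fs, band):
    """FRESH-style interference suppression sketch for DSB-AM at complex
    baseband (carrier removed).  The real message makes the spectrum
    conjugate-symmetric about 0 Hz, so a corrupted sub-band on one side can
    be replaced by the conjugate of its (clean) mirror image.
    band = (f_lo, f_hi) in Hz marks the corrupted positive-frequency band."""
    N = len(r)
    R = np.fft.fft(r)
    f = np.fft.fftfreq(N, d=1.0 / fs)
    bad = (f >= band[0]) & (f <= band[1])        # corrupted bins
    mirror = (-np.arange(N)) % N                 # bin index of frequency -f
    R[bad] = np.conj(R[mirror][bad])             # conjugate of mirror band
    return np.fft.ifft(R).real                   # message estimate

# Example: real message plus one-sided interference above the carrier
fs = 1.0
n = np.arange(2048)
a = np.cos(2 * np.pi * 0.05 * n)                 # "message"
i = 0.8 * np.exp(2j * np.pi * 0.2 * n)           # interference on one side only
a_hat = mirror_repair(a + i, fs, (0.15, 0.25))
```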
For WBFM signals, in order to meet the conditions under which FM is approximately equal to DSB-AM plus a quadrature carrier, we must first pass the signal through a frequency divider, which divides the instantaneous frequency of the FM signal by some sufficiently large integer. This approach is explained here, and the challenges presented by the impact of interference on the behavior of the frequency divider are identified and discussed. This leads to the identification of a threshold phenomenon for FM signals in additive interference that is similar to, but distinct from, the well-known threshold phenomenon for demodulation of FM signals in additive noise.
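A minimal sketch of the frequency-divider idea (an illustrative construction assuming an analytic-signal phase estimate, not the implementation referenced above):

```python
import numpy as np
from scipy.signal import hilbert

def frequency_divide(x, divisor):
    """Frequency-divider sketch: estimate the instantaneous phase of the FM
    signal from its analytic signal, divide the phase by an integer, and
    re-synthesize.  Dividing the instantaneous frequency narrows the FM
    deviation so the narrowband (DSB-AM plus quadrature carrier)
    approximation becomes applicable.  Interference corrupts the phase
    estimate, which is the source of the threshold effect discussed above."""
    phase = np.unwrap(np.angle(hilbert(x)))
    return np.cos(phase / divisor)
```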
The concepts here are preliminary, and attainable performance is expected to be limited by the threshold phenomenon for WBFM.
Content in preparation by Professor Davide Mattera to be posted to this website in the future.