State-space multitaper time-frequency analysis
- a. Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, MA 02139;
- b. Department of Electronics and Control Engineering, Hanbat National University, Daejeon 34158, Korea;
- c. Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139;
- d. School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138;
- e. Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Boston, MA 02114;
- f. Institute of Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, MA 02139
Edited by David L. Donoho, Stanford University, Stanford, CA, and approved November 10, 2017 (received for review February 19, 2017)

Significance
Rapid growth in sensor and recording technologies is spurring rapid growth in time series data. Nonstationary and oscillatory structure in time series is commonly analyzed using time-varying spectral methods. These widely used techniques lack a statistical inference framework applicable to the entire time series. We develop a state-space multitaper (SS-MT) framework for time-varying spectral analysis of nonstationary time series. We efficiently implement the SS-MT spectrogram estimation algorithm in the frequency domain as parallel 1D complex Kalman filters. In analyses of human EEGs recorded under general anesthesia, the SS-MT paradigm provides enhanced denoising (>10 dB) and spectral resolution relative to standard multitaper methods, a flexible time-domain decomposition of the time series, and a broadly applicable, empirical Bayes’ framework for statistical inference.
Abstract
Time series are an important data class that includes recordings ranging from radio emissions, seismic activity, global positioning data, and stock prices to EEG measurements, vital signs, and voice recordings. Rapid growth in sensor and recording technologies is increasing the production of time series data and the importance of rapid, accurate analyses. Time series data are commonly analyzed using time-varying spectral methods to characterize their nonstationary and often oscillatory structure. Current methods provide local estimates of data features. However, they do not offer a statistical inference framework that applies to the entire time series. The important advances that we report are state-space multitaper (SS-MT) methods, which provide a statistical inference framework for time-varying spectral analysis of nonstationary time series. We model nonstationary time series as a sequence of second-order stationary Gaussian processes defined on nonoverlapping intervals. We use a frequency-domain random-walk model to relate the spectral representations of the Gaussian processes across intervals. The SS-MT algorithm efficiently computes spectral updates using parallel 1D complex Kalman filters. An expectation–maximization algorithm computes static and dynamic model parameter estimates. We test the framework in time-varying spectral analyses of simulated time series and EEG recordings from patients receiving general anesthesia. Relative to standard multitaper (MT), SS-MT gave enhanced spectral resolution and noise reduction (>10 dB) and allowed statistical comparisons of spectral properties among arbitrary time series segments. SS-MT also extracts time-domain estimates of signal components. The SS-MT paradigm is a broadly applicable, empirical Bayes’ framework for statistical inference that can help ensure accurate, reproducible findings from nonstationary time series analyses.
- nonparametric spectral analysis
- spectral representation theorem
- complex Kalman filter
- statistical inference
- big data
The importance of developing principled methods to solve big data problems is now broadly appreciated (sites.nationalacademies.org/DEPS/BMSA/DEPS_171738). Time series are an important big data class that includes signals ranging from gravitational waves (1), solar variations (2), radar emissions (3), seismic activity (4), global positioning data (5), and stock prices (6) to neural spike train measurements (7), EEG recordings (8), vital signs (9), and voice recordings (10). Rapid growth in sensor and recording technologies in science, engineering, and economics is increasing time series data production and, with it, the importance of conducting rapid, accurate analyses. Such analyses require extracting specific data features and characterizing their uncertainty in a way that makes formal statistical inference possible, just as it is conducted in simpler problems.
A range of time-frequency methods is used to characterize the nonstationary and often oscillatory features in time series data. Standard nonparametric spectral methods estimate the spectrum (i.e., the frequency content of the time series in a small time interval on which the data are presumed to be stationary) (8, 11, 12). Fourier-based spectral methods estimate only signal power as a function of frequency and therefore cannot provide time-domain signal estimates. Spectrogram estimation (time-varying spectral analysis), which entails estimating the frequency content as a function of time for nonstationary data, is carried out by simply repeating spectrum estimation on overlapping or nonoverlapping time intervals. Spectrum estimates on adjacent intervals (8, 10–14) are not formally related. While recently developed time-frequency methods address the general problem of minimizing the resolution tradeoff between time and frequency, these techniques are computationally intensive, give their best performance in high signal-to-noise problems, and to date have had limited application in actual time series analyses (15, 16). Despite their use to study important problems (1–10), current time-frequency methods have a critical shortcoming: none of them provides a statistical inference framework applicable to the entire time series.
State-space (SS) modeling is a flexible, established inference framework for analyzing systems with properties that evolve with time (17–21). This paradigm has been used for time-frequency analysis of nonstationary time series with parametric time series models (22–24), harmonic regression models (25), and nonparametric time series models using batch processing (26). Given stationary data recorded on a finite interval, multitaper (MT) methods are optimal for balancing the bias–variance tradeoff in spectrum estimation conducted by combining Fourier-based methods with tapering (8, 11, 12). Therefore, a plausible approach to analyzing nonstationary and oscillatory time series is to combine SS modeling with MT methods to optimize locally the bias–variance tradeoff, relate spectral estimates across local intervals, and conduct formal statistical inference.
The important advances that we report are state-space multitaper (SS-MT) methods, which provide a statistical inference framework for time-varying spectral analysis of nonstationary time series. The balance of the paper is organized as follows. In Theory, we define the SS-MT time-frequency model, SS-MT spectrogram estimation algorithm, time-domain signal extraction algorithm, and empirical Bayes’ inference framework. In Applications, we illustrate use of the algorithms in the analysis of a simulated nonstationary time series and of EEG time series recorded from patients under general anesthesia. Discussion summarizes the properties of the SS-MT paradigm and highlights the implications of this research, and of future work building on it, for solving large data inference problems.
Theory
An SS-MT Time-Frequency Model.
Assume that we observe a nonstationary time series of the form
We present the data on stationary interval k as the vector
To relate the data on adjacent intervals, we assume that the Gaussian increment differences are linked by the random walk model
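The forms of Eqs. 1–3 do not appear in this excerpt. A minimal sketch consistent with the surrounding description is given below; the symbols y_{k,j}, x_{k,j}, ε_{k,j}, ΔZ_k(ω), and v_k(ω) are introduced here only for illustration and are assumptions, not text copied from the paper.

```latex
% Hedged sketch (assumed notation): observation model on stationary interval k, with J samples per interval
y_{k,j} = x_{k,j} + \varepsilon_{k,j}, \qquad \varepsilon_{k,j} \sim \mathcal{N}\!\left(0, \sigma_{\varepsilon}^{2}\right), \qquad j = 1, \dots, J
% Spectral representation of the locally stationary signal in terms of its increments dZ_k(\omega)
x_{k,j} = \int_{-\pi}^{\pi} e^{\,i \omega j}\, dZ_k(\omega)
% Random-walk (stochastic continuity) model linking the increment differences across intervals
\Delta Z_{k}(\omega) = \Delta Z_{k-1}(\omega) + v_{k}(\omega), \qquad v_{k}(\omega) \sim \mathcal{CN}\!\left(0, \sigma_{v}^{2}(\omega)\right)
```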
To represent the observation model [2] in the frequency domain, we let F be the Fourier transform operator defined as the
To combine the SS and MT paradigms, we note that, in the absence of Eq. 3, MT methods with Slepian functions selected as tapers would be used to estimate the spectrum on each stationary interval (11, 12). Therefore, as in the application of MT methods, given J, the number of data points in the stationary interval and Δ, the data sampling rate, we also assume that
We let
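As noted above, in the absence of the state model, the spectrum on each stationary interval would be estimated with MT methods using Slepian (DPSS) tapers. A minimal sketch of that single-interval step is shown below; it uses scipy's DPSS windows, and the interval length, sampling interval, and time-bandwidth product are illustrative choices rather than values prescribed by the paper.

```python
import numpy as np
from scipy.signal.windows import dpss


def mt_spectrum(y, delta, time_bandwidth=2.0, n_tapers=3):
    """Multitaper spectrum estimate for one stationary interval.

    y: 1D array of J samples; delta: sampling interval in seconds.
    Returns the FFT frequencies (Hz) and the MT estimate (average of the eigenspectra).
    """
    J = len(y)
    tapers = dpss(J, time_bandwidth, Kmax=n_tapers)   # (n_tapers, J) Slepian tapers, unit energy
    yf = np.fft.fft(tapers * y[None, :], axis=1)      # tapered Fourier transforms, one row per taper
    eigenspectra = delta * np.abs(yf) ** 2            # per-taper spectrum estimates (usual PSD scaling)
    freqs = np.fft.fftfreq(J, d=delta)
    return freqs, eigenspectra.mean(axis=0)
```

The SS-MT algorithm described next replaces the raw tapered Fourier coefficients in this estimate with their Kalman-filtered counterparts.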
SS-MT Spectrogram Estimation Algorithm.
The linear complex Gaussian form of Eqs. 5 and 6 suggests that a Kalman filter algorithm can be used to compute the sequence of increment differences (23) and thus, the sequence of spectrum estimates. For this problem, the Kalman filter has a special structure. Because the M
Eqs. 7–9 define the SS-MT algorithm for spectrogram estimation for nonstationary time series. For each tapered series, the increment difference estimate on interval k is a weighted average between the increment difference estimate on interval
In the absence of the state models [3 and 6], Eq. 9 becomes the MT spectrogram estimate
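The update equations [7–9] are not reproduced above. The sketch below illustrates the per-frequency recursion as described in the text: under the random-walk state model, each frequency (for each tapered series) is filtered by an independent one-dimensional complex Kalman filter, and the new estimate is a Kalman-gain-weighted combination of the previous estimate and the current tapered Fourier coefficient. Variable names, the initialization, and the single-taper simplification are assumptions of this sketch.

```python
import numpy as np


def ss_mt_filter(Y, state_var, obs_var, z0=None, p0=None):
    """Parallel 1D complex Kalman filters, one per frequency, for a single tapered series.

    Y         : complex array (K, J); row k holds the tapered Fourier coefficients on interval k.
    state_var : array (J,) random-walk increment variance at each frequency.
    obs_var   : array (J,) observation noise variance at each frequency.
    Returns filtered increment-difference estimates Z (K, J) and their posterior variances P (K, J).
    """
    K, J = Y.shape
    Z = np.zeros((K, J), dtype=complex)
    P = np.zeros((K, J))
    z_prev = np.zeros(J, dtype=complex) if z0 is None else z0
    p_prev = np.ones(J) if p0 is None else p0
    for k in range(K):
        # One-step prediction: the random walk predicts no change in the increment differences.
        p_pred = p_prev + state_var
        # Update: the Kalman gain weights the new coefficient against the prediction.
        gain = p_pred / (p_pred + obs_var)
        Z[k] = z_prev + gain * (Y[k] - z_prev)
        P[k] = (1.0 - gain) * p_pred
        z_prev, p_prev = Z[k], P[k]
    return Z, P
```

In this sketch, an SS-MT spectrogram estimate on interval k would be obtained (up to a scaling constant) by averaging |Z[k]|² over the M tapered series; when the gain is 1, Z[k] equals Y[k] and the estimate reduces to the usual MT eigenspectrum average.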
Time-Domain Signal Extraction.
Given the
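Eqs. 14 and 15 are likewise not shown here. Under the spectral representation view used throughout the paper, a time-domain estimate restricted to a chosen frequency band can be obtained by zeroing the estimated increment differences outside the band and inverting the Fourier transform interval by interval; the sketch below is built on that assumption and on the illustrative ss_mt_filter output above.

```python
import numpy as np


def extract_band_signal(Z, delta, band):
    """Reconstruct a band-limited time-domain signal from filtered increment differences.

    Z     : complex array (K, J) of estimated increment differences (single taper or taper-averaged).
    delta : sampling interval in seconds.
    band  : (low_hz, high_hz) frequency band to retain.
    Returns a real-valued signal of length K * J containing only the selected band.
    """
    K, J = Z.shape
    freqs = np.fft.fftfreq(J, d=delta)
    keep = (np.abs(freqs) >= band[0]) & (np.abs(freqs) <= band[1])
    x = np.zeros(K * J)
    for k in range(K):
        Zk = np.where(keep, Z[k], 0.0)                  # keep only the requested frequency band
        x[k * J:(k + 1) * J] = np.fft.ifft(Zk).real     # back to the time domain on interval k
    return x


# Example: slow-delta (0.1-4 Hz) and alpha (8-12 Hz) components can be extracted separately
# and summed, mirroring the flexibility described in the text.
```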
Inferences for Functions of the Increment Differences.
To make inferences for functions of the increment differences at any time points, we compute the joint distribution of the increment differences conditional on all of the data in the time series using the fixed interval smoothing algorithm (20, 21), which is
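The fixed-interval smoother itself [16] is not reproduced above. For the 1D complex random-walk model assumed in the earlier sketches, the standard Rauch–Tung–Striebel backward pass takes the following form (again with illustrative names):

```python
import numpy as np


def ss_mt_smoother(Z, P, state_var):
    """Fixed-interval (RTS) smoother applied independently at each frequency.

    Z, P      : filtered means (K, J) and variances (K, J) from the forward pass.
    state_var : array (J,) of random-walk increment variances.
    Returns smoothed means, smoothed variances, and the smoother gains.
    """
    K, J = Z.shape
    Zs, Ps = Z.copy(), P.copy()
    A = np.zeros((K - 1, J))
    for k in range(K - 2, -1, -1):
        p_pred = P[k] + state_var                     # one-step prediction variance for interval k+1
        A[k] = P[k] / p_pred                          # smoother gain
        Zs[k] = Z[k] + A[k] * (Zs[k + 1] - Z[k])      # random walk: the predicted mean at k+1 is Z[k]
        Ps[k] = P[k] + A[k] ** 2 * (Ps[k + 1] - p_pred)
    return Zs, Ps, A
```

The smoothed means and variances (together with the lag-one covariances) are the quantities the inference examples below draw on.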
Model Parameter and Initial Condition Estimation.
The Kalman filter [7 and 8], Kalman smoother [16], and covariance smoothing [17] algorithms assume that the initial states
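The EM updates are not shown in this excerpt. A common M-step for the variances of a 1D complex random-walk model, written in terms of the smoother output from the sketches above, is given below; this is only a sketch of how such an update could look under those assumptions, not the paper's algorithm, and it omits estimation of the initial conditions.

```python
import numpy as np


def em_variance_update(Y, Zs, Ps, A):
    """One EM M-step for the per-frequency state and observation noise variances.

    Y      : complex array (K, J) of tapered Fourier coefficients (the observations).
    Zs, Ps : smoothed means and variances from ss_mt_smoother; A: smoother gains.
    Returns updated state_var and obs_var, each of shape (J,).
    """
    # Lag-one smoothed covariance cov(z_k, z_{k-1} | all data) for this linear Gaussian model.
    cross = A * Ps[1:]
    # E[|z_k - z_{k-1}|^2 | all data], interval by interval.
    dz2 = np.abs(Zs[1:] - Zs[:-1]) ** 2 + Ps[1:] + Ps[:-1] - 2.0 * cross
    state_var = dz2.mean(axis=0)
    # E[|y_k - z_k|^2 | all data].
    resid2 = np.abs(Y - Zs) ** 2 + Ps
    obs_var = resid2.mean(axis=0)
    return state_var, obs_var
```

Iterating the forward filter, the smoother, and this update until the variances stabilize would give maximum likelihood estimates under the assumed model.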
Applications
Spectrogram Analysis of Simulated Data.
We first tested the SS-MT algorithm on the simulated nonstationary process (Fig. 1A) defined by the sixth-order autoregressive model adapted from ref. 12:
Spectrogram analysis of the time-varying sixth-order autoregressive process defined in Eq. 18. (A) Ten-second segments from the simulated time series starting at 5, 16, and 25 min. (B) True spectrogram. (C) Periodogram. (D) MT spectrogram. (E) SS periodogram. (F) SS-MT spectrogram. Right shows for each panel a zoomed-in display of the 3 min between 24 and 27 min. The color scale is in decibels.
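The coefficients of Eq. 18 are not reproduced in this excerpt. The sketch below shows one way to generate a qualitatively similar test signal: a time-varying AR(6) process built from three pole pairs whose moduli change over time so that the spectral peaks grow. The sampling rate, pole radii, growth schedule, and the 3.5-Hz placement of the lowest peak are hypothetical choices, not the values of Eq. 18.

```python
import numpy as np


def simulate_tv_ar6(peak_freqs_hz, pole_radius, fs, n_samples, seed=0):
    """Simulate a time-varying AR(6) process from three pole pairs.

    peak_freqs_hz : sequence of 3 fixed center frequencies (Hz).
    pole_radius   : callable t -> 3 pole moduli in (0, 1); values near 1 give sharp peaks.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(n_samples)
    w = 2j * np.pi * np.asarray(peak_freqs_hz) / fs
    for t in range(6, n_samples):
        poles = np.asarray(pole_radius(t)) * np.exp(w)
        a = np.poly(np.concatenate([poles, poles.conj()])).real   # [1, a1, ..., a6]
        x[t] = -np.dot(a[1:], x[t - 6:t][::-1]) + rng.standard_normal()
    return x


# Hypothetical example: peaks at 3.5, 9.5, and 11.5 Hz; the two upper peaks sharpen (grow) over time.
fs, minutes = 64.0, 30
n = int(minutes * 60 * fs)
x = simulate_tv_ar6((3.5, 9.5, 11.5), lambda t: (0.95, 0.90 + 0.08 * t / n, 0.90 + 0.08 * t / n), fs, n)
```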
Fig. 1 C–F shows the periodogram (Fig. 1C), the MT spectrogram (Fig. 1D), the SS periodogram (Fig. 1E), and the SS-MT spectrogram (Fig. 1F). While all four methods capture the general structure of the true spectrogram (Fig. 1B), there are clear differences. The MT spectrogram (Fig. 1D) shows, as expected, better resolution of (less variability in estimating) the three peaks compared with the periodogram (Fig. 1C). However, when comparing the power in the frequency bands outside the three peaks, both the periodogram (Fig. 1C) and the MT spectrogram (Fig. 1D and SI Appendix, Fig. S6) overestimate the noise relative to the true spectrogram (Fig. 1B and SI Appendix, Fig. S6 C and D) by 10–15 dB. The SS periodogram (Fig. 1E) and the SS-MT spectrogram (Fig. 1F and SI Appendix, Fig. S6 C and D) estimate the noise outside the three peaks to be at or near −10 dB as in the true spectrogram (Fig. 1B and SI Appendix, Fig. S6 C and D). The MT and the SS-MT spectrograms capture well and agree closely in their estimates of the power in the three peaks (Fig. 1 D, Right and F, Right and SI Appendix, Fig. S6).
A key difference between the MT and the SS-MT spectrograms appears as the power increases. As the heights of the spectral peaks at 9.5 and 11.5 Hz increase, the depth of the “valley” in the spectrogram between them increases also (Fig. 1B and SI Appendix, Fig. S6 C and D). The valley is at 5 dB between minutes 24 and 27 (Fig. 1B, Right and SI Appendix, Fig. S6D). The MT spectrogram estimates the valley to be at 10 dB (Fig. 1D, Right and SI Appendix, Fig. S6D). In contrast, the SS-MT spectrogram estimates the valley to be at 4 dB (Fig. 1F, Right and SI Appendix, Fig. S6D). In addition, the mean squared error was lower at all frequencies for the SS-MT algorithm compared with the other three methods (SI Appendix, Table S2). We explain the enhanced denoising and enhanced spectral resolution of the SS-MT algorithm relative to the MT algorithm after we analyze the real EEG recordings in the next examples. In SI Appendix, we assess the effect of stationary interval length on spectrogram estimation.
Spectrogram Analysis of the EEG During General Anesthesia.
Anesthetic drugs act in the brain to create the altered states of general anesthesia by producing highly structured oscillations that disrupt normal information flow between brain regions (34, 35). Because these oscillations are readily visible in the EEG, EEG and EEG-derived measures are commonly used to track in real time the brain states of patients receiving general anesthesia and sedation (36). We illustrate the SS-MT methods by comparing them with the other three spectrogram methods in the analysis of EEG recordings from patients during general anesthesia.
The EEG recordings, in this example and the subsequent examples, are deidentified data collected as part of protocols at the Massachusetts General Hospital (MGH) that have been approved by the MGH Human Research Committee. For EEGs recorded from patients, informed consent was not required, whereas for EEGs recorded from volunteer subjects, informed consent was required and was obtained. For each patient, the EEG was continuously recorded during general anesthesia using the Sedline monitor (Masimo) with the standard six-electrode frontal montage. The Sedline array records from electrodes located approximately at positions Fp1, Fp2, F7, and F8. On each channel, the electrode impedance was less than 5 kohms. We used the EEG data recorded at Fp1 for the spectral analyses and the EEG data recorded at Fp1 and Fp2 for the coherence analyses (SI Appendix, Fig. S12). We began the EEG recordings approximately 3–5 min before induction of general anesthesia and continued the recordings for ∼3–5 min after extubation.
The data analyzed in Fig. 2 consist of 190 min of EEG recorded at 250 Hz during maintenance of general anesthesia using the ether anesthetic sevoflurane with oxygen. Hence, we take T = 2,850,000. We set J = 500 based on our several years of experience analyzing the EEG of anesthetized patients. Hence, we have K = 5,700. We chose M, the number of tapers, to be three. This corresponds to a spectral resolution of 2 Hz for the MT method. We estimated the model parameters and initial conditions from the first 5 min of data using the EM algorithm. To estimate the observation noise variance in the EM algorithm, we restricted the analysis to frequencies in the physiologically relevant range from 0.1 to 30 Hz. The raw EEG signal (Fig. 2B) shows strong modulation with changes in the sevoflurane concentration (Fig. 2A).
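As a quick check of the bookkeeping above (the values come from the text; the taper count and spectral resolution are related through the standard MT time-bandwidth rule):

```python
fs = 250.0                      # sampling rate (Hz)
T = int(190 * 60 * fs)          # 190 min of data -> 2,850,000 samples
J = 500                         # samples per stationary interval -> J / fs = 2 s windows
K = T // J                      # 5,700 nonoverlapping intervals
M = 3                           # number of Slepian tapers
NW = (M + 1) / 2                # standard MT rule M = 2*NW - 1 gives time-bandwidth product NW = 2
resolution_hz = 2 * NW / (J / fs)   # full bandwidth 2W = 2*NW / (window length in s) = 2 Hz
print(T, K, resolution_hz)
```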
Spectrogram analysis of EEG time series recorded from a patient under general anesthesia maintained with sevoflurane and oxygen. (A) The expired concentration of sevoflurane. (B) Raw EEG signals. (C) Periodogram. (D) MT method spectrogram. (E) SS periodogram. (F) SS-MT spectrogram. The color scale is in decibels.
All four spectrograms for these data show the well-known alpha-beta oscillations (8–17 Hz) and slow-delta oscillations (0.1–4 Hz) that are characteristic of general anesthesia maintained by sevoflurane (36). When the sevoflurane concentration increases, the power in the alpha-beta band shifts to lower frequencies, while the power in the slow-delta band shifts to higher frequencies. The opposite changes occur when the sevoflurane concentration decreases. The spectral changes associated with increases in the sevoflurane concentration appear as increases in theta oscillation power (4–8 Hz) (36). The periodogram (Fig. 2C) shows diffuse, grainy power between 10 and 17 Hz and in the slow-delta range. As expected, the MT spectrogram (Fig. 2D) has higher spectral resolution relative to the periodogram. Both the periodogram and the MT spectrogram show diffuse power ranging from 7 to −2 dB in the theta range and from −5 to −15 dB in the beta-gamma range (>17 Hz). Relative to the periodogram and the MT spectrogram, the SS periodogram (Fig. 2E) and the SS-MT spectrogram (Fig. 2F) show substantially greater denoising, defined as a reduction in power in the frequency bands with low power. For the latter two spectrograms, the power in the beta-gamma range is uniformly at −15 dB, which is a 5–15 dB power reduction relative to the MT spectrogram. Both the SS periodogram and the SS-MT spectrogram estimate the power in the theta band to be 10–15 dB less than that for either the periodogram or the MT spectrogram. As in the periodogram, the prominent alpha-beta and slow-delta power in the SS periodogram is grainy and diffuse.
Spectrogram Analysis of the EEG During Transitions Among Anesthetic States.
To illustrate the full potential of the SS-MT algorithm, we reanalyze 155.4 min of EEG data recorded at 250 Hz from a frontal lead in a human volunteer subject receiving i.v. propofol administered by a computer-controlled infusion at an increasing and then, a decreasing infusion rate (35). Gradually increasing the propofol infusion rate allows the subject to transition gradually from being awake to unconsciousness (UNC). Gradually decreasing the propofol infusion rate from the rate required to achieve the maximum target effect site concentration (model-derived brain concentration) allows the subject to transition from UNC to the awake state. Baseline EEG was recorded for 20 min while the subject lay supine with eyes closed and received no propofol. After the baseline period, propofol was administered through a computer-controlled infusion to achieve five different effect site concentrations in increasing levels (Fig. 3A) (35). After completing the fifth level, the propofol infusion rate was systematically decreased to achieve a similar sequence of target effect site concentrations in decreasing order until the infusion was stopped. Each target effect site concentration was maintained for 14 min.
Spectrogram analysis of EEG recorded in a volunteer subject receiving a computer-controlled infusion of propofol. (A) Time course of propofol target effect site concentrations based on the Schnider model (35). The black dashed vertical lines define the anesthetic states determined by the behavioral analysis: awake1, baseline conscious state; LC; UNC; RC; and awake2, final conscious state. (B) Two seconds of unprocessed EEG (black curves) and of EEG extracted from the SS-MT analysis (red curves) [14] at different target effect site concentrations. (C) MT method spectrogram. (D) SS-MT spectrogram. The color scale is in decibels.
Based on the analyses of the subject’s responses to yes–no questions administered every 4 s, we identified five distinct behavioral or anesthetic states: conscious, loss of consciousness (LC), UNC, recovering consciousness (RC), and conscious (Fig. 3A, black dashed vertical lines) (35). The important scientific question to answer is whether these distinct anesthetic states are associated with distinct EEG signatures.
We take
Inferring Differences in Spectral Power Between Anesthetic States.
To make a formal statistical inference about the relationship between these anesthetic states and EEG signatures, we compare the power across the spectrogram from 0.1 to 30 Hz among representative 100-s intervals during each of the anesthetic states (Fig. 3A). The 100-s intervals are baseline awake period awake1, 500–600 s; LC, 3,100–3,200 s; UNC, 4,600–4,700 s; RC, 6,600–6,700 s; and final awake state awake2, 9,000–9,100 s. To compare two 100-s intervals for each frequency ω in a given frequency range, we compute the average difference spectrogram between two intervals:
The 95% CIs for comparisons of the average power differences between the anesthetic states in Fig. 3A. Each panel shows the 95% empirical Bayes confidence interval (CI) for the average power difference defined by the lower 2.5% CI bound (blue curves), and the upper 97.5% CI bound (red curves). (A) LC-awake1: LC compared with baseline conscious state. (B) UNC-awake1: UNC compared with baseline conscious state. (C) RC-awake1: RC compared with baseline conscious state. (D) Awake2-awake1: final conscious state compared with baseline conscious state. (E) LC-UNC: LC compared with the UNC state. (F) LC-RC: LC compared with RC.
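The averaging formula itself is not reproduced above. The sketch below illustrates one way, consistent with the description, to obtain empirical Bayes CIs for such comparisons: draw Monte Carlo samples of the increment differences from their Gaussian posterior (here, for brevity, treating frequencies and intervals as independent given the smoothed means and variances from the earlier sketches), convert each draw to decibel power averaged over the two 100-s segments, and take percentiles of the difference. The function name, the independence simplification, and the draw count are assumptions of this sketch.

```python
import numpy as np


def power_difference_ci(Zs, Ps, idx_a, idx_b, n_draws=2000, seed=0):
    """95% empirical Bayes CI for the average dB power difference between two segments.

    Zs, Ps       : smoothed posterior means (K, J, complex) and variances (K, J).
    idx_a, idx_b : index arrays selecting the intervals belonging to the two 100-s segments.
    Returns (lower, upper), each of length J (one CI bound per frequency).
    """
    rng = np.random.default_rng(seed)
    diffs = np.empty((n_draws, Zs.shape[1]))
    for d in range(n_draws):
        # Circularly symmetric complex Gaussian draws around the posterior means.
        noise = (rng.standard_normal(Zs.shape) + 1j * rng.standard_normal(Zs.shape)) * np.sqrt(Ps / 2.0)
        power_db = 10.0 * np.log10(np.abs(Zs + noise) ** 2)
        diffs[d] = power_db[idx_a].mean(axis=0) - power_db[idx_b].mean(axis=0)
    return np.percentile(diffs, 2.5, axis=0), np.percentile(diffs, 97.5, axis=0)
```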
We show 6 of 10 possible comparisons of differences in power among the five anesthetic states (Fig. 4). The LC and UNC states show significant increases in power in the slow-delta and alpha bands relative to awake1, the baseline awake state (Fig. 4 A and B). There is also a significant increase in power in the upper part of the slow-delta and in the alpha bands between RC and awake1 (Fig. 4C) and between LC and the UNC state (Fig. 4E). In contrast, there are no appreciable differences in power between awake2 and awake1 (Fig. 4D) or between LC and RC (Fig. 4F). These findings are in complete agreement with the structure of the power in the spectrogram (Fig. 3D). We can conclude that there are significant quantifiable differences in EEG power between different anesthetic states and that those differences can range from 10 to 20 dB (95% CI). These findings also agree with and go beyond the original analyses of these data, in which hypothesis testing methods with Bonferroni corrections were used to compare the anesthetized states with just awake1 (35). Using the model in Eq. 18, we assessed the coverage probability of the empirical Bayes’ CIs in a simulation study (SI Appendix) and found that the actual and nominal coverage probabilities are in good agreement.
Spectrogram Denoising.
The SS-MT spectrogram has greater denoising than the MT spectrogram (Figs. 1–3) because of the stochastic continuity constraints (Eq. 3) and the eigenspectrogram averaging (SI Appendix, Figs. S3 and S4). The stochastic continuity constraint has a different independent effect at each frequency. In both the theoretical and the real data examples, the state variances,
Averaging the MT eigenspectrograms reduces the variability in the MT spectrogram (SI Appendix, Fig. S3) (11). Each SS-MT eigenspectrogram (SI Appendix, Fig. S4 A–C) has variability comparable with the SS periodogram (Fig. 2E) (29). Averaging the SS-MT eigenspectrograms reduces the variability of the SS-MT spectrogram at each frequency relative to the SS periodogram (SI Appendix, Fig. S4D) by
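The factor in the truncated sentence above is presumably the usual MT one; assuming the M eigenspectrograms are approximately independent at a given frequency, averaging reduces the variance roughly as

```latex
\operatorname{Var}\!\left[\hat{S}^{\mathrm{SS\text{-}MT}}_{k}(\omega)\right]
  = \operatorname{Var}\!\left[\frac{1}{M}\sum_{m=1}^{M} \hat{S}^{(m)}_{k}(\omega)\right]
  \approx \frac{1}{M}\,\operatorname{Var}\!\left[\hat{S}^{(m)}_{k}(\omega)\right].
```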
Spectral Resolution and Leakage Reduction.
Kalman filter updating [7c and 20] enhances the spectral resolution of the SS-MT spectrogram relative to the MT spectrogram by reducing leakage. To see why, assume that
For example, at 70 min (Fig. 2 D and F and SI Appendix, Fig. S9 A and C), the MT and SS-MT spectrograms agree in the high-power frequency bands (i.e., 0.1–5 and 9.5–15.5 Hz) and disagree in the low-power frequency bands (5.1–9.4 and 15.6–30 Hz); 6 Hz is just on the border of the narrow-band leakage from 5 Hz for the MT spectrogram (SI Appendix, Fig. S9A). The 12-dB difference between the MT and the SS-MT spectrograms at 6 Hz results because the former has leakage from the power at 5 Hz, whereas the latter has enhanced denoising and reduced leakage. A 10- to 15-dB power difference persists between the MT and SS-MT spectrograms beyond 15 Hz because of the small values of the Kalman gain in this frequency band (SI Appendix, Fig. S9 C and D).
At 80 min (Fig. 2 D and F and SI Appendix, Fig. S9 B and D), the MT and SS-MT spectrograms also agree in the high-power frequency bands (0.1–5 and 10.5–15 Hz) and disagree in the low-power frequency bands (i.e., 5.1–9.4 and 15.1–30 Hz). A similar argument explains the 7-dB difference in power at 16 Hz between the MT and the SS-MT spectrograms at minute 80. The same argument also explains, in the analysis of the simulated data example, the 7-dB difference in power at 11 Hz between the MT and SS-MT spectrograms at 25 min (SI Appendix, Fig. S6D). The differences between MT and SS-MT tapering are discussed in SI Appendix.
Discussion
Time series are a prominent big data class with growth that is being spurred by innovations in sensing and recording technologies. These data track dynamic processes, making accurate real-time analyses an important objective. Aside from simply extracting important features from the series, the analysis should provide associated measures of uncertainty, so that formal statistical inference can be conducted the same way that it would be conducted for questions arising from smaller, simpler datasets. No current time-frequency analysis method provides an inference framework applicable to the entire series (8, 11–16). To address the inference problem for nonstationary time series, we combined the MT and the SS approaches into an empirical Bayes’ paradigm for frequency- and time-domain analyses of nonstationary time series. We demonstrated the SS-MT inference paradigm by analyzing differences in EEG power between different propofol-induced anesthetic states (Fig. 4). By reporting the results in terms of 95% empirical Bayes’ CIs, we measure directly the effect size (i.e., how much EEG power differs among propofol’s anesthetic states). We base our inferences on 95% CIs derived from the empirical Bayes’ estimate of the joint posterior distribution of the power across all of the frequencies and all times in the time series rather than on tests of multiple hypotheses. Our small simulation study (SI Appendix) suggests that the nominal and actual coverage probabilities of the empirical Bayes’ CIs are in good agreement. The empirical Bayes’ paradigm has been suggested as a practical approach to solving large-scale inference problems (37).
Our analyses offer scientific insights. The SS-MT spectrograms show denoising and spectral resolution that more clearly define the frequency content of anesthetic EEG signals. As a consequence, the results in Fig. 2 suggest that most of the theta oscillation power in the sevoflurane spectrogram could be caused by power shifts from both the alpha band above and the slow-delta band below. The results in Fig. 4 allow us to make formal inferences about EEG power difference as a function of the level of unconsciousness in a single individual.
SS-MT Time-Frequency Analysis of Nonstationary Time Series.
In addition to providing a statistical inference framework, the SS-MT paradigm has other desirable features. By the spectral representation theorem, the orthogonal increment differences are the fundamental process that underlies a stationary time series (27, 38). Hence, we defined nonstationarity by starting with the common practice of choosing a minimal interval on which the time series is assumed stationary (SI Appendix, Table S1) and then applying a stochastic continuity constraint [3 and 6] to link the increment differences across the minimal intervals. We constructed the SS model by taking the observed data to be the Fourier transforms of the M tapered data series [5]. For a given taper, the Fourier transform of the tapered data is J independent, complex Gaussian observations in the frequency domain. Hence, to estimate the increment differences, we implemented in parallel J independent 1D complex Kalman filters [7a–7d]. Given the M tapers, the
Both the SS and the MT components of SS-MT analysis contribute significantly to spectrogram denoising. The state variances and Kalman gains are high (low) at frequencies with high (low) power (SI Appendix, Fig. S5). Therefore, the Kalman filter updating [7c and 20] denoises the spectrogram by heightening the contrast between high- and low-power spectrogram ordinates (Figs. 1 D and F, 2 D and F, and 3 C and D). The MT component of the SS-MT algorithm further contributes to the denoising by averaging the eigenspectrograms to reduce the variance at all frequencies by
By applying the spectral representation theorem to the estimated increment differences [14 and 15], we extracted the time-domain signals within specified frequency bands as well as instantaneous phase and amplitude (Fig. 3B and SI Appendix, Figs. S1 and S2). The SS-MT paradigm is highly flexible, because arbitrary combinations of frequency components can be chosen to construct the time-domain signal. Time-domain signal extraction is not possible with standard nonparametric spectral methods, which only estimate power as a function of frequency. Estimating instantaneous phase and amplitude with conventional approaches requires a separate analysis (14). The SS-MT paradigm conducts spectral analysis, signal extraction, and instantaneous phase and amplitude estimation as parts of a unified framework.
Theoretical and Problem-Specific Methods Development.
Our local stationarity definition differs from the Dahlhaus definition, which assumes time-invariant increment differences with deterministic continuity of the time-dependent transfer function in the spectral representation (33). The lengths of the local intervals are either assumed (33) or estimated (39). The local spectral estimates are computed as the squared modulus of the Fourier transform of the data after applying a single taper or by fitting a parametric model using local Whittle likelihoods (40). Like the MT methods, these local estimation approaches do not combine information across the local stationary intervals.
In contrast, the stochastic continuity constraint imposed by the random walk model enables recursive estimation of the time-dependent increment differences and the spectrogram. The current form of the continuity constraint has a theoretical drawback. It implies that, at each frequency, spectral power grows with time, since the theoretical spectrum on stationary interval k at frequency
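Under the random-walk form sketched earlier (an assumed notation, consistent with the description of Eq. 3), the growth referred to above follows directly:

```latex
\operatorname{Var}\!\left[\Delta Z_{k}(\omega)\right]
  = \operatorname{Var}\!\left[\Delta Z_{0}(\omega)\right] + k\,\sigma_{v}^{2}(\omega),
```

so the implied theoretical power at each frequency increases linearly with the interval index k, which is the drawback noted in the text.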
In our data analyses, we followed the common practice of setting the stationary interval a priori (SI Appendix, Table S1). Our analyses (SI Appendix, Fig. S14 and Table S2) suggest that the spectrogram estimates can be sensitive to interval choice and that different stationary intervals could be optimal for different frequencies. Therefore, in future work, we will incorporate interval choice into the estimation by evaluating the likelihood as a function of stationary interval length.
At present, both our model-fitting algorithms and inference framework depend critically on the Gaussian observation and state models [2 and 3]. Description of signal frequency content and inferences may be inaccurate when these assumptions do not hold. As an alternative, model-fitting and inference using non-Gaussian SS models can be readily carried out using sequential Monte Carlo methods (42). This extension will be the topic of a future investigation.
Application of the SS-MT paradigm in time-frequency analyses of different types of nonstationary time series is a productive way to extend our methods by allowing the question under investigation to guide problem-specific development of the framework. We will continue to study EEG time series recorded under different anesthetic drugs (36). The EEG recorded during sevoflurane general anesthesia (Fig. 2) suggests a smooth process defining continuity among the time intervals. Therefore, higher-order stationary models could be chosen to impose a greater degree of smoothness on the increment differences and the spectrogram. In contrast, the EEG recorded during induction of and recovery from propofol-induced unconsciousness (Fig. 3) suggests that a process with jump discontinuities may be a more appropriate state model for these data. SI Appendix, Table S1 summarizes problems from different fields of science that have used MT spectral methods to study nonstationary processes. These several applications suggest a rich testbed for further development of the SS-MT paradigm.
Adaptive model parameter estimation using local likelihood, SS, or method of moments techniques can be combined with different continuity models. A key decision in using adaptive estimation is defining the timescale of the parameter updates. We used the target anesthetic levels as covariates—subject matter constraints—to set this timescale in Fig. 3. Subject matter constraints may also be used to reduce the number of parameters. We limited state variance estimation to frequencies below 30 Hz based on knowledge of the frequency range relevant for tracking anesthetic states (36). The naïve empirical Bayes’ CIs had good coverage probabilities based on our small simulation study of the model in Eq. 18. These intervals can be further calibrated by taking account of the uncertainty in the maximum likelihood parameter estimates obtained from the EM algorithm (43). Computation of the SS-MT cross-spectrogram (SI Appendix, Fig. S12) suggests that our paradigm can be extended to inference problems for multivariate nonstationary time series.
The SS-MT paradigm provides a computationally efficient framework for spectrogram estimation, time-domain signal extraction, and statistical inference for nonstationary time series.
Acknowledgments
This work was partially supported by National Research Foundation of Korea Grant 2017R1C1B5017254 from the Korean Government (to S.K.), a Guggenheim Fellowship in Applied Mathematics (to E.N.B.), NIH Transformative Research Award R01-GM104948 (to E.N.B.), NIH Award P01-GM118629 (to E.N.B.), funds from MGH (E.N.B.), and funds from the Picower Institute for Learning and Memory (E.N.B.).
Footnotes
- ¹S.K., M.K.B., and D.B. contributed equally to this work.
- ²To whom correspondence should be addressed. Email: enb@neurostat.mit.edu.
Author contributions: S.K., D.B., and E.N.B. designed research; S.K., M.K.B., D.B., and E.N.B. performed the research; S.K., M.K.B., D.B., and E.N.B. contributed new analytic tools; S.K., M.K.B., D.B., and E.N.B. analyzed the data; and S.K. and E.N.B. wrote the paper.
Conflict of interest statement: The authors have applied for patents on the use of these methods to track brain states of patients receiving general anesthesia and sedation.
This article is a PNAS Direct Submission.
This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1702877115/-/DCSupplemental.
- Copyright © 2017 the Author(s). Published by PNAS.
This open access article is distributed under Creative Commons Attribution-NonCommercial-NoDerivatives License 4.0 (CC BY-NC-ND).
References
- Chatterji S, Blackburn L, Martin G, Katsavounidis E
- Haykin SS, Steinhardt AO
- Emery WJ, Thomson RE
- Misra P, Enge P
- Zheng X, Chen BM
- Mitra P, Bokil H
- Yilmaz T, Foster R, Hao Y
- Quatieri TF
- Percival DB, Walden AT
- Koornwinder TH
- Huang NE, et al.
- Fahrmeir L, Tutz G
- Durbin J, Koopman SJ
- Brown EN, Frank LM, Tang D, Quirk MC, Wilson MA
- Kitagawa G, Gersch W
- Qi Y, Minka TP, Picard RW
- Ba D, Babadi B, Purdon PL, Brown EN
- Shumway RH, Stoffer DS
- Prerau MJ, Brown RE, Bianchi MT, Ellenbogen JM, Purdon PL
- Brillinger DR
- Carlin BP, Louis TA
- Dempster AP, Laird NM, Rubin DB
- Dahlhaus R
- Cimenser A, et al.
- Purdon PL, et al.
- Efron B
- Priestley MB
- Adak S
- Dahlhaus R
- Doucet A, De Freitas N, Gordon N
- McLachlan GJ, Krishnan T