State-space multitaper time-frequency analysis

Edited by David L. Donoho, Stanford University, Stanford, CA, and approved November 10, 2017 (received for review February 19, 2017)
December 18, 2017
115 (1) E5-E14

Significance

Rapid growth in sensor and recording technologies is spurring rapid growth in time series data. Nonstationary and oscillatory structure in time series is commonly analyzed using time-varying spectral methods. These widely used techniques lack a statistical inference framework applicable to the entire time series. We develop a state-space multitaper (SS-MT) framework for time-varying spectral analysis of nonstationary time series. We efficiently implement the SS-MT spectrogram estimation algorithm in the frequency domain as parallel 1D complex Kalman filters. In analyses of human EEGs recorded under general anesthesia, the SS-MT paradigm provides enhanced denoising (>10 dB) and spectral resolution relative to standard multitaper methods, a flexible time-domain decomposition of the time series, and a broadly applicable, empirical Bayes’ framework for statistical inference.

Abstract

Time series are an important data class that includes recordings ranging from radio emissions, seismic activity, global positioning data, and stock prices to EEG measurements, vital signs, and voice recordings. Rapid growth in sensor and recording technologies is increasing the production of time series data and the importance of rapid, accurate analyses. Time series data are commonly analyzed using time-varying spectral methods to characterize their nonstationary and often oscillatory structure. Current methods provide local estimates of data features. However, they do not offer a statistical inference framework that applies to the entire time series. The important advances that we report are state-space multitaper (SS-MT) methods, which provide a statistical inference framework for time-varying spectral analysis of nonstationary time series. We model nonstationary time series as a sequence of second-order stationary Gaussian processes defined on nonoverlapping intervals. We use a frequency-domain random-walk model to relate the spectral representations of the Gaussian processes across intervals. The SS-MT algorithm efficiently computes spectral updates using parallel 1D complex Kalman filters. An expectation–maximization algorithm computes static and dynamic model parameter estimates. We test the framework in time-varying spectral analyses of simulated time series and EEG recordings from patients receiving general anesthesia. Relative to standard multitaper (MT), SS-MT gave enhanced spectral resolution and noise reduction (>10 dB) and allowed statistical comparisons of spectral properties among arbitrary time series segments. SS-MT also extracts time-domain estimates of signal components. The SS-MT paradigm is a broadly applicable, empirical Bayes’ framework for statistical inference that can help ensure accurate, reproducible findings from nonstationary time series analyses.
The importance of developing principled methods to solve big data problems is now broadly appreciated (sites.nationalacademies.org/DEPS/BMSA/DEPS_171738). Time series are an important big data class that includes signals ranging from gravitational waves (1), solar variations (2), radar emissions (3), seismic activity (4), global positioning data (5), and stock prices (6) to neural spike train measurements (7), EEG recordings (8), vital signs (9), and voice recordings (10). Rapid growth in sensor and recording technologies in science, engineering, and economics is increasing time series data production and with it, the importance of conducting rapid, accurate analyses. Such analyses require extracting specific data features and characterizing their uncertainty in a way that makes possible formal statistical inferences the same way as they are conducted in simpler problems.
A range of time-frequency methods is used to characterize the nonstationary and often oscillatory features in time series data. Standard nonparametric spectral methods estimate the spectrum (i.e., the frequency content of the time series in a small time interval on which the data are presumed to be stationary) (8, 11, 12). Fourier-based spectral methods estimate only signal power as a function of frequency and therefore, cannot provide time-domain signal estimates. Spectrogram estimation (time-varying spectral analysis), which entails estimating the frequency content as a function of time for nonstationary data, is carried out by simply repeating spectrum estimation on overlapping or nonoverlapping time intervals. Spectrum estimates on adjacent intervals (8, 10–14) are not formally related. While recently developed time-frequency methods address the general problem of minimizing the resolution tradeoff between time and frequency, these techniques are computationally intensive, give their best performance in high signal-to-noise problems, and to date, have had limited application in actual time series analyses (15, 16). Despite their use to study important problems, current time-frequency methods have a critical shortcoming. None of these methods provides a statistical inference framework applicable to the entire time series (1–10).
State-space (SS) modeling is a flexible, established inference framework for analyzing systems with properties that evolve with time (17–21). This paradigm has been used for time-frequency analysis of nonstationary time series with parametric time series models (22–24), harmonic regression models (25), and nonparametric time series models using batch processing (26). Given stationary data recorded on a finite interval, multitaper (MT) methods are optimal for balancing the bias–variance tradeoff in spectrum estimation conducted by combining Fourier-based methods with tapering (8, 11, 12). Therefore, a plausible approach to analyzing nonstationary and oscillatory time series is to combine SS modeling with MT methods to optimize locally the bias–variance tradeoff, relate spectral estimates across local intervals, and conduct formal statistical inference.
The important advances that we report are state-space multitaper (SS-MT) methods, which provide a statistical inference framework for time-varying spectral analysis of nonstationary time series. The balance of the paper is organized as follows. In Theory, we define the SS-MT time-frequency model, SS-MT spectrogram estimation algorithm, time-domain signal extraction algorithm, and empirical Bayes’ inference framework. In Applications, we illustrate use of the algorithms in the analysis of a simulated nonstationary time series and of EEG time series recorded from patients under general anesthesia. Discussion summarizes the properties of the SS-MT paradigm and highlights the implications of this research for solving large data inference problems.

Theory

An SS-MT Time-Frequency Model.

Assume that we observe a nonstationary time series of the form
$y_t = x_t + \varepsilon_t,$
[1]
where $x_t$ is a zero mean, second-order, locally stationary Gaussian process and $\varepsilon_t$ is independent, zero mean Gaussian noise with common variance $\sigma_\varepsilon^2$ for $t = 1, 2, \ldots, T$. A common practice in analyses of nonstationary time series is to assume a minimal interval length and that data are stationary on intervals having this minimal length (SI Appendix, Table S1). We define the local stationarity of $x_t$ by assuming that we can write $T = KJ$, where $K$ defines the number of distinct, nonoverlapping stationary intervals in $x_t$ and $J$ is the number of observations per stationary interval. We index the stationary intervals as $k = 1, \ldots, K$ and the points per interval as $j = 1, \ldots, J$. For example, if we have 1,440 s of a time series that is stationary on 1-s intervals and recorded at 250 Hz, then K = 1,440, J = 250, and T = 360,000.
We present the data on stationary interval $k$ as the vector $Y_k$ of length $J$, with the $j$th element that is $Y_{k,j} = y_{J(k-1)+j}$, $X_{k,j} = x_{J(k-1)+j}$, and $\varepsilon_{k,j} = \varepsilon_{J(k-1)+j}$ for $k = 1, \ldots, K$ and $j = 1, \ldots, J$. By the spectral representation theorem (27), we can express each $Y_k$ as
$Y_k = X_k + \varepsilon_k = W\Delta Z_k + \varepsilon_k,$
[2]
where $W$ is a $J \times J$ matrix with the $(l,j)$th element that is $(W)_{l,j} = J^{-1/2}\exp(i2\pi(l-1)j/J)$. $\Delta Z_k = (\Delta Z_k(\omega_1), \ldots, \Delta Z_k(\omega_J))$ are differences of orthogonal Gaussian increments, and we define $\omega_j = 2\pi(j-1)J^{-1}$.
To relate the data on adjacent intervals, we assume that the Gaussian increment differences are linked by the random walk model
$\Delta Z_k = \Delta Z_{k-1} + v_k,$
[3]
where we assume that $v_k$ is a zero mean, independent complex Gaussian process with $J \times J$ diagonal covariance matrix $I(\sigma_{v,j}^2)$ for $j = 1, \ldots, J$. Eq. 3 defines a stochastic continuity constraint on the nonstationary time series in the frequency domain.
To represent the observation model [2] in the frequency domain, we let $F$ be the Fourier transform operator defined as the $J \times J$ matrix with the $(j,l)$th element that is $(F)_{j,l} = J^{-1/2}\exp(-i2\pi(l-1)j/J)$. Taking the Fourier transform of Eq. 2 yields
$Y_k^F = \Delta Z_k + \varepsilon_k^F,$
[4]
where $Y_k^F = FY_k$, $FW = I$, and $\varepsilon_k^F = F\varepsilon_k$ is a zero mean, complex Gaussian vector with $J \times J$ diagonal covariance matrix $I(\sigma_\varepsilon^2)$. Eqs. 3 and 4 define a frequency-domain SS model for the nonstationary time series.
To combine the SS and MT paradigms, we note that, in the absence of Eq. 3, MT methods with Slepian functions selected as tapers would be used to estimate the spectrum on each stationary interval (11, 12). Therefore, as in the application of MT methods, given $J$, the number of data points in the stationary interval, and $\Delta$, the data sampling rate, we also assume that $\omega_r$, the desired frequency resolution for the spectral analysis, has been specified for the problem. Next, $M$, the number of tapers, is chosen to minimize the local bias–variance tradeoff for spectrum estimation on each stationary interval using the standard MT formula $M \leq 2[J\Delta^{-1}\omega_r] - 1$ (28). We index the tapers as $m = 1, \ldots, M$.
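As a concrete illustration of this taper choice (not taken from the paper; the function name, the use of SciPy, and the common $2NW-1$ rule for the taper count are our own assumptions), the sketch below generates orthonormal Slepian tapers for one stationary interval from $J$, the sampling rate, and the desired resolution. This rule reproduces the $M = 3$ and $M = 5$ choices used in the EEG analyses below, although the paper's exact convention for selecting $M$ may differ.

```python
import numpy as np
from scipy.signal.windows import dpss

def slepian_tapers(J, fs, freq_res):
    """Return orthonormal Slepian (DPSS) tapers for one stationary interval.

    J        -- number of samples per stationary interval
    fs       -- sampling rate in Hz
    freq_res -- desired spectral resolution (full bandwidth 2W) in Hz
    """
    NW = J * (freq_res / 2.0) / fs                   # time-half-bandwidth product
    n_tapers = max(1, int(np.floor(2 * NW)) - 1)     # common 2NW - 1 taper count
    tapers = dpss(J, NW, Kmax=n_tapers)              # shape (n_tapers, J)
    tapers /= np.linalg.norm(tapers, axis=1, keepdims=True)   # enforce unit energy
    return tapers, n_tapers

# Sevoflurane EEG settings below (J = 500, fs = 250 Hz, 2-Hz resolution) give M = 3 tapers;
# the propofol settings (J = 1,000, 1.5-Hz resolution) give M = 5.
tapers, M = slepian_tapers(J=500, fs=250.0, freq_res=2.0)
```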
We let $S^{(m)}$ denote the operator for applying the $m$th Slepian taper to the data, $Y_k^{(m)} = S^{(m)}Y_k$ denote the tapered data, and $Y_k^{(m),F} = FY_k^{(m)}$ denote the Fourier transform of the tapered data. If we take the Slepian tapers to be orthonormal, then by theorem 4.4.2 in ref. 29, the Fourier transform of each tapered series has the same probability distribution and thus, the same spectral representation as $Y_k^F$ in Eq. 4. Therefore, we write
$Y_k^{(m),F} = \Delta Z_k^{(m)} + \varepsilon_k^{(m),F},$
[5]
and we view $\Delta Z_k^{(m)}$ and $\varepsilon_k^{(m),F}$ as a realization of $\Delta Z_k$ and of $\varepsilon_k^F$, respectively, observable through the $m$th tapered series. It follows that the random walk model in Eq. 3 has the realization
$\Delta Z_k^{(m)} = \Delta Z_{k-1}^{(m)} + v_k^{(m)},$
[6]
where we assume that $v_k^{(m)}$ is a zero mean, independent complex Gaussian vector with a $J \times J$ diagonal covariance matrix $I(\sigma_{v,j}^{2,(m)})$ for $j = 1, \ldots, J$ and $m = 1, \ldots, M$. Eq. 6 induces stochastic continuity constraints on the tapered nonstationary time series. Eqs. 5 and 6 define an SS-MT time-frequency model. Use of the Slepian tapers to achieve the desired frequency resolution given the assumed length of the local stationary intervals transforms the original time series [2] and its state model [3] into $M$ independent time series [5] and their respective $M$ independent state models [6].
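Continuing the sketch above, the $M$ tapered Fourier series of Eq. 5 can be formed by tapering each stationary-interval data vector and applying a unitary FFT; `norm="ortho"` matches the $J^{-1/2}$ scaling of $F$ defined earlier. The function and array names below are illustrative only.

```python
import numpy as np

def tapered_ffts(Y, tapers):
    """Eq. 5 observations Y_k^{(m),F} for all intervals at once.

    Y      -- (K, J) array, the time series cut into K stationary intervals of length J
    tapers -- (M, J) array of Slepian tapers
    Returns a (K, M, J) complex array of tapered Fourier coefficients.
    """
    tapered = Y[:, np.newaxis, :] * tapers[np.newaxis, :, :]   # apply each taper to each interval
    return np.fft.fft(tapered, norm="ortho", axis=-1)          # unitary DFT, as for F in Eq. 4
```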

SS-MT Spectrogram Estimation Algorithm.

The linear complex Gaussian form of Eqs. 5 and 6 suggests that a Kalman filter algorithm can be used to compute the sequence of increment differences (23) and thus, the sequence of spectrum estimates. For this problem, the Kalman filter has a special structure. Because the $M$ $\Delta Z_k^{(m)}$ are independent, there are $M$ separate, independent $J$-dimensional Kalman filters. In addition, because $\Delta Z_k^{(m)}(\omega_j)$ is orthogonal across frequencies, there are, for each tapered series, $J$ parallel 1D complex Kalman filter algorithms, one for each frequency $\omega_j$. Hence, the Gaussian increment differences can be recursively estimated by applying $MJ$ 1D complex Kalman filter algorithms to the $M$ tapered time series. Assuming that the increment difference estimates have been computed on interval $k-1$, then for tapered series $m$, the 1D complex Kalman filter algorithm for estimating $\Delta Z_k^{(m)}(\omega_j)$ on interval $k$ is
$\Delta Z_{k|k-1}^{(m)}(\omega_j) = \Delta Z_{k-1|k-1}^{(m)}(\omega_j)$
[7a]
$\sigma_{k|k-1,j}^{2,(m)} = \sigma_{k-1|k-1,j}^{2,(m)} + \sigma_{v,j}^{2,(m)}$
[7b]
$\Delta Z_{k|k}^{(m)}(\omega_j) = \Delta Z_{k|k-1}^{(m)}(\omega_j) + C_{k,j}^{(m)}\left(Y_{k,j}^{(m),F} - \Delta Z_{k|k-1}^{(m)}(\omega_j)\right)$
[7c]
$\sigma_{k|k,j}^{2,(m)} = \left(1 - C_{k,j}^{(m)}\right)\sigma_{k|k-1,j}^{2,(m)},$
[7d]
where the Kalman gain for $m = 1, \ldots, M$, $k = 1, \ldots, K$, and $j = 1, \ldots, J$ is
$C_{k,j}^{(m)} = \left(\sigma_\varepsilon^{2,(m)} + \sigma_{k|k-1,j}^{2,(m)}\right)^{-1}\sigma_{k|k-1,j}^{2,(m)}.$
[8]
The notation $k|s$ denotes the estimate on stationary interval $k$ given all of the data observed through stationary interval $s$. The derivation of the Kalman filter algorithm is in SI Appendix. We assume that the algorithm has initial conditions $\Delta Z_0^{(m)}(\omega_j)$ and $\sigma_{0,j}^{2,(m)}$. We carry out their estimation along with the model parameters using an expectation–maximization (EM) algorithm, which we describe in SI Appendix. Given the Kalman filter estimate of the increment differences on interval $k$, the SS-MT spectrogram estimate at frequency $\omega_j$ on interval $k$ is
$f_{k|k}^{\text{SS-MT}}(\omega_j) = M^{-1}\sum_{m=1}^{M}\left\|\Delta Z_{k|k}^{(m)}(\omega_j)\right\|^2,$
[9]
where $\|\Delta Z_{k|k}^{(m)}(\omega_j)\|^2$ is the $m$th SS eigenspectrogram at frequency $\omega_j$ (11). Each SS eigenspectrogram is a spectrogram estimate computed by weighting the data with a different Slepian taper. Like the MT spectrogram defined below [10], the SS-MT spectrogram estimate [9] is the average of the $M$ approximately independent SS eigenspectrograms.
Eqs. 7–9 define the SS-MT algorithm for spectrogram estimation for nonstationary time series. For each tapered series, the increment difference estimate on interval $k$ is a weighted average between the increment difference estimate on interval $k-1$ and the difference between the Fourier transform of the tapered series and the increment difference estimate on interval $k-1$. The weighting depends on the Kalman gain, which is between zero and one by construction. If the Kalman gain is close to zero, then the one-step prediction variance $\sigma_{k|k-1,j}^{2,(m)}$ is small relative to the observation variance $\sigma_\varepsilon^{2,(m)}$, and hence, the increment difference estimate on interval $k$ is close to the estimate on interval $k-1$. If the Kalman gain is close to one, then the one-step prediction variance is large relative to the observation variance, meaning that the uncertainty in the prediction of the increment difference on interval $k$ based on the data up through interval $k-1$ is large. In this case, the increment difference estimate on interval $k$ is close to the Fourier transform of the tapered series observed on interval $k$.
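A minimal sketch of Eqs. 7a–7d, 8, and 9 is given below, with the $MJ$ 1D complex Kalman filters vectorized over tapers and frequencies. The function name is hypothetical, and the model parameters and initial conditions are assumed to have been estimated already (e.g., by the EM algorithm described in SI Appendix).

```python
import numpy as np

def ss_mt_filter(Y_F, sigma2_eps, sigma2_v, dZ0=None, P0=None):
    """Sketch of the SS-MT filtering recursions (Eqs. 7a-7d and 8) and spectrogram (Eq. 9).

    Y_F        -- (K, M, J) complex array of tapered Fourier coefficients (Eq. 5)
    sigma2_eps -- observation noise variance, scalar or shape (M,)
    sigma2_v   -- state noise variances, shape (M, J)
    Returns the (K, J) spectrogram plus filtered means/variances needed later for smoothing.
    """
    K, M, J = Y_F.shape
    dZ = np.zeros((M, J), dtype=complex) if dZ0 is None else np.asarray(dZ0, dtype=complex)
    P = np.ones((M, J)) if P0 is None else np.asarray(P0, dtype=float)
    sig_eps = np.broadcast_to(np.asarray(sigma2_eps, dtype=float).reshape(-1, 1), (M, J))

    dZ_filt = np.empty((K, M, J), dtype=complex)
    P_filt = np.empty((K, M, J))
    P_pred = np.empty((K, M, J))
    spec = np.empty((K, J))
    for k in range(K):
        P_pred[k] = P + sigma2_v                      # Eq. 7b (Eq. 7a: prediction = previous dZ)
        C = P_pred[k] / (sig_eps + P_pred[k])         # Eq. 8, the Kalman gain
        dZ = dZ + C * (Y_F[k] - dZ)                   # Eq. 7c, the update
        P = (1.0 - C) * P_pred[k]                     # Eq. 7d
        dZ_filt[k], P_filt[k] = dZ, P
        spec[k] = np.mean(np.abs(dZ) ** 2, axis=0)    # Eq. 9: average of the M eigenspectrograms
    return spec, dZ_filt, P_filt, P_pred
```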
In the absence of the state models [3 and 6], Eq. 9 becomes the MT spectrogram estimate
$f_k^{\text{MT}}(\omega_j) = M^{-1}\sum_{m=1}^{M}\left\|Y_{k,j}^{(m),F}\right\|^2,$
[10]
where $Y_k^{(m),F} = (Y_{k,1}^{(m),F}, \ldots, Y_{k,J}^{(m),F})$ and $\|Y_{k,j}^{(m),F}\|^2$ is the $m$th MT eigenspectrogram at frequency $\omega_j$ (12). In the absence of tapering, Eq. 9 becomes the SS periodogram estimate
$f_{k|k}^{\text{SS-P}}(\omega_j) = \left\|\Delta Z_{k|k}^{\text{SS-P}}(\omega_j)\right\|^2,$
[11]
which is computed by applying $J$ parallel 1D complex Kalman filters to the Fourier-transformed data $Y_k^F$. In the absence of tapering and the SS model, Eq. 11 becomes the periodogram estimate
$f_k^{\text{P}}(\omega_j) = \left\|Y_{k,j}^F\right\|^2,$
[12]
where $Y_k^F = (Y_{k,1}^F, \ldots, Y_{k,J}^F)$. By comparing the SS-MT algorithm [7–9] with the standard MT [10], the periodogram [12], and the SS periodogram [11] algorithms, it is possible to understand the effects on spectrogram estimation of combining the MT approach with SS modeling. In addition, the SS-MT paradigm can be applied to compute cross-spectrograms between two or more time series, as described in SI Appendix (SI Appendix, Fig. S12).
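For comparison with the filtering sketch above, the MT spectrogram [10] and the periodogram [12] are simple functions of the same Fourier coefficients; the following illustrative helpers (our own naming) make that relationship explicit.

```python
import numpy as np

def mt_spectrogram(Y_F):
    """Eq. 10: standard MT spectrogram, the eigenspectrogram average with no state model.
    Y_F is the (K, M, J) array of tapered Fourier coefficients."""
    return np.mean(np.abs(Y_F) ** 2, axis=1)

def periodogram(Y):
    """Eq. 12: periodogram of the untapered data Y, shape (K, J), one row per stationary interval."""
    return np.abs(np.fft.fft(Y, norm="ortho", axis=-1)) ** 2
```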

Time-Domain Signal Extraction.

Given the $\Delta Z_{k|k}^{(m)}$, we can estimate the denoised time-domain signal as
$X_{k|k} = W\Delta Z_{k|k},$
[13]
where $\Delta Z_{k|k} = M^{-1}\sum_{m=1}^{M}\Delta Z_{k|k}^{(m)}$. The extracted signal is a linear combination of the estimated increment differences across all of the frequencies. Frequency components on different stationary intervals are related, because all are estimated by the complex Kalman filter algorithm in Eqs. 7a–7d. Hence, selective filtering, such as high-pass, low-pass, and band-pass filtering, can be performed by simply choosing the components of $\Delta Z_{k|k}$ in the desired frequency range. Given a set of $L$, not necessarily sequential, frequencies $\omega_j$ for $j = s_1, \ldots, s_L$, we can obtain the filtered time-domain signal as
$X_{k|k}^L = W\Delta Z_{k|k}^L,$
[14]
where the components of $\Delta Z_{k|k}^L$, outside the $L$ frequencies and their conjugate symmetric frequencies, are all zero. Eq. 14 provides a highly flexible alternative to an empirical mode decomposition that allows extraction of a time-domain signal composed of any specified frequency components. The analytic version of the filtered time-domain signal can be computed as
$R_{k|k,t}^L + iI_{k|k,t}^L = 2J^{-\frac{1}{2}}\sum_{j=s_1}^{s_L}\Delta Z_{k|k}^L(\omega_j)e^{i\omega_j t}$
[15]
for $t = J(k-1)+l$ and $l = 1, \ldots, J$. Here, $[(R_{k|k,t}^L)^2 + (I_{k|k,t}^L)^2]^{1/2}$ and $\tan^{-1}(I_{k|k,t}^L/R_{k|k,t}^L)$ are the instantaneous amplitude and phase of the time-domain signal in the specified frequency range, respectively (SI Appendix, Figs. S1 and S2). This computation obviates the need to apply a Hilbert–Huang transform to either filtered data or data processed by an empirical mode decomposition to estimate instantaneous amplitude and phase.
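A sketch of this time-domain extraction is given below (illustrative names; 0-based frequency-bin indices are assumed). It averages the filtered increment differences over tapers [13], zeroes all bins outside the chosen set and its conjugate partners before inverting with the unitary inverse FFT [14], and forms the analytic signal from the positive-frequency bins alone [15].

```python
import numpy as np

def extract_band(dZ_filt, freq_idx):
    """Sketch of Eqs. 13-15: band-limited signal, instantaneous amplitude, and phase.

    dZ_filt  -- (K, M, J) complex array from the Kalman-filter sketch above
    freq_idx -- 0-based indices of the desired positive-frequency bins (0 < idx < J/2)
    """
    K, M, J = dZ_filt.shape
    freq_idx = np.asarray(freq_idx)
    dZ = dZ_filt.mean(axis=1)                                  # Eq. 13: average over tapers
    conj_idx = (J - freq_idx) % J                              # conjugate-symmetric bins

    sel = np.zeros((K, J), dtype=complex)                      # Eq. 14: keep only the chosen
    sel[:, freq_idx] = dZ[:, freq_idx]                         # frequencies and their
    sel[:, conj_idx] = dZ[:, conj_idx]                         # conjugate partners
    x_band = np.fft.ifft(sel, norm="ortho", axis=1).real       # W applied interval by interval

    pos = np.zeros((K, J), dtype=complex)                      # Eq. 15: analytic signal from
    pos[:, freq_idx] = dZ[:, freq_idx]                         # positive frequencies only
    analytic = 2.0 * np.fft.ifft(pos, norm="ortho", axis=1)

    amp, phase = np.abs(analytic), np.angle(analytic)          # instantaneous amplitude/phase
    return x_band.ravel(), amp.ravel(), phase.ravel()
```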

Inferences for Functions of the Increment Differences.

To make inferences for functions of the increment differences at any time points, we compute the joint distribution of the increment differences conditional on all of the data in the time series using the fixed interval smoothing algorithm (20, 21), which is
$\Delta Z_{k|K}^{(m)}(\omega_j) = \Delta Z_{k|k}^{(m)}(\omega_j) + A_{k,j}\left(\Delta Z_{k+1|K}^{(m)}(\omega_j) - \Delta Z_{k+1|k}^{(m)}(\omega_j)\right)$
$\sigma_{k|K,j}^{2,(m)} = \sigma_{k|k,j}^{2,(m)} + A_{k,j}^2\left(\sigma_{k+1|K,j}^{2,(m)} - \sigma_{k+1|k,j}^{2,(m)}\right)$
$A_{k,j} = \sigma_{k|k,j}^{2,(m)}\left(\sigma_{k+1|k,j}^{2,(m)}\right)^{-1},$
[16]
where the initial conditions are $\Delta Z_{K|K}^{(m)}(\omega_j)$ and $\sigma_{K|K,j}^{2,(m)}$, and the recursions are computed for $k = K-1, K-2, \ldots, 1$ and $j = 1, 2, \ldots, J$. To compute the covariances between any two states, we use the covariance smoothing algorithm defined as (20)
$\sigma_{k,u|K,j}^{(m)} = A_{k,j}\,\sigma_{k+1,u|K,j}^{(m)}$
[17]
for $1 \leq k \leq u \leq K$. Eqs. 16 and 17 allow us to compute the joint distribution of the increment differences conditional on all of the data. Therefore, we can compute the distribution of any function of the increment differences by Monte Carlo methods (30). For each Monte Carlo sample, we draw from the joint distribution and compute the function of interest. The histogram of the function of interest provides a Monte Carlo estimate of its posterior probability density. The estimate is empirical Bayes, because it is computed conditional on the maximum likelihood parameter estimates (31).
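The sketch below implements the 1D fixed-interval smoother [16] and the covariance smoothing recursion [17] for the random-walk state model, assuming the filtered means, filtered variances, and one-step prediction variances were saved while filtering (as in the earlier filtering sketch); function names are illustrative.

```python
import numpy as np

def fixed_interval_smoother(dZ_filt, P_filt, P_pred):
    """Eq. 16: RTS-type smoother run independently for each taper m and frequency j.
    dZ_filt, P_filt, P_pred are the (K, M, J) filtered means, filtered variances, and
    one-step prediction variances from the filtering sketch."""
    K = dZ_filt.shape[0]
    dZ_s, P_s, A = dZ_filt.copy(), P_filt.copy(), np.zeros_like(P_filt)
    for k in range(K - 2, -1, -1):
        A[k] = P_filt[k] / P_pred[k + 1]                               # A_{k,j}
        dZ_s[k] = dZ_filt[k] + A[k] * (dZ_s[k + 1] - dZ_filt[k])       # smoothed mean
        P_s[k] = P_filt[k] + A[k] ** 2 * (P_s[k + 1] - P_pred[k + 1])  # smoothed variance
    return dZ_s, P_s, A

def lag_covariances(P_s, A, u):
    """Eq. 17: covariances sigma_{k,u|K} between interval u and all earlier intervals k <= u,
    obtained by propagating the smoothed variance at u backward with the gains A."""
    cov = np.empty((u + 1,) + P_s.shape[1:])
    cov[u] = P_s[u]
    for k in range(u - 1, -1, -1):
        cov[k] = A[k] * cov[k + 1]
    return cov
```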

Model Parameter and Initial Condition Estimation.

The Kalman filter [7 and 8], Kalman smoother [16], and covariance smoothing [17] algorithms assume that the initial states $\Delta Z_0^{(m)}(\omega_j)$, the initial state variances $\sigma_{0,j}^{2,(m)}$, and the model parameters $\sigma_{v,j}^{2,(m)}$ and $\sigma_\varepsilon^{2,(m)}$ are known. We use an EM algorithm (32), designed after refs. 20 and 21, to compute maximum likelihood estimates of the initial conditions and the model parameters. The details are given in SI Appendix.
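As a rough indication of the flavor of the M-step for this model (the authors' exact EM updates are in SI Appendix; the sketch below follows the generic Shumway–Stoffer form for a random walk plus observation noise rather than the paper's equations, and its names are our own), the state and observation variances can be updated from the smoothed moments as follows.

```python
import numpy as np

def em_m_step(Y_F, dZ_s, P_s, A):
    """Illustrative M-step for the random-walk model, one update per taper and frequency.

    Y_F       -- (K, M, J) tapered Fourier coefficients
    dZ_s, P_s -- (K, M, J) smoothed state means and variances from the smoother sketch
    A         -- (K, M, J) smoother gains; the lag-one covariance sigma_{k-1,k|K} is A[k-1]*P_s[k]
    """
    cov1 = A[:-1] * P_s[1:]                                    # lag-one smoothed covariances
    e_diff2 = (np.abs(dZ_s[1:] - dZ_s[:-1]) ** 2
               + P_s[1:] + P_s[:-1] - 2.0 * cov1)              # E|dZ_k - dZ_{k-1}|^2 given the data
    sigma2_v = e_diff2.mean(axis=0)                            # (M, J) state variances
    sigma2_eps = (np.abs(Y_F - dZ_s) ** 2 + P_s).mean(axis=(0, 2))   # (M,) observation variances
    return sigma2_v, sigma2_eps
```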

Applications

Spectrogram Analysis of Simulated Data.

We first tested the SS-MT algorithm on the simulated nonstationary process (Fig. 1A) defined by the sixth-order autoregressive model adapted from ref. 12:
$x_t = 3.9515x_{t-1} - 7.8885x_{t-2} + 9.7340x_{t-3} - 7.7435x_{t-4} + 3.8078x_{t-5} - 0.9472x_{t-6} + 16\frac{t}{T}v_t,$
[18]
where $T$ = 128,000 and $v_t$ is independent, zero mean Gaussian noise with unit variance. The spectrogram of the model has three peaks at 3.5, 9.5, and 11.5 Hz (Fig. 1B). All three peaks grow linearly in height and width with time. We added an independent zero mean Gaussian noise with variance set to achieve a signal-to-noise ratio of 0 dB. The sampling rate is 64 Hz. Eq. 18 is nonstationary at each time $t$. However, because it can be approximated in small intervals by a stationary sixth-order autoregressive process, it satisfies the Dahlhaus definition of local stationarity (33). We set $J$ = 1,024 and $K$ = 125. We chose $M$, the number of tapers, to be four, which corresponds to a spectral resolution of 0.5 Hz for the MT methods. We estimated the model parameters from the first 50 observations using the EM algorithm (SI Appendix).
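A minimal simulation sketch consistent with Eq. 18 as reconstructed above is shown below; the time-varying innovation term follows that reconstruction, and the way the 0-dB observation noise is calibrated (relative to the overall signal variance) is an illustrative choice, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 128_000                                   # K = 125 intervals of J = 1,024 samples at 64 Hz
a = np.array([3.9515, -7.8885, 9.7340, -7.7435, 3.8078, -0.9472])   # AR(6) coefficients of Eq. 18

x = np.zeros(T)
v = rng.standard_normal(T)                    # unit-variance driving noise v_t
for t in range(6, T):
    x[t] = a @ x[t - 6:t][::-1] + 16.0 * (t / T) * v[t]   # time-growing innovation amplitude

snr_db = 0.0                                  # additive observation noise at 0-dB SNR
noise_sd = np.sqrt(np.var(x) / 10 ** (snr_db / 10))
y = x + noise_sd * rng.standard_normal(T)
Y = y.reshape(125, 1024)                      # K x J matrix of stationary intervals
```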
Fig. 1.
Spectrogram analysis of the time-varying sixth-order autoregressive process defined in Eq. 18. (A) Ten-second segments from the simulated time series starting at 5, 16, and 25 min. (B) True spectrogram. (C) Periodogram. (D) MT spectrogram. (E) SS periodogram. (F) SS-MT spectrogram. Right shows for each panel a zoomed-in display of the 3 min between 24 and 27 min. The color scale is in decibels.
Fig. 1 C–F shows the periodogram (Fig. 1C), the MT spectrogram (Fig. 1D), the SS periodogram (Fig. 1E), and the SS-MT spectrogram (Fig. 1F). While all four methods capture the general structure of the true spectrogram (Fig. 1B), there are clear differences. The MT spectrogram (Fig. 1D) shows, as expected, better resolution of (less variability in estimating) the three peaks compared with the periodogram (Fig. 1C). However, when comparing the power in the frequency bands outside the three peaks, both the periodogram (Fig. 1C) and the MT spectrogram (Fig. 1D and SI Appendix, Fig. S6) overestimate the noise relative to the true spectrogram (Fig. 1B and SI Appendix, Fig. S6 C and D) by 10–15 dB. The SS periodogram (Fig. 1E) and the SS-MT spectrogram (Fig. 1F and SI Appendix, Fig. S6 C and D) estimate the noise outside the three peaks to be at or near −10 dB as in the true spectrogram (Fig. 1B and SI Appendix, Fig. S6 C and D). The MT and the SS-MT spectrograms capture well and agree closely in their estimates of the power in the three peaks (Fig. 1 D, Right and F, Right and SI Appendix, Fig. S6).
A key difference between the MT and the SS-MT spectrograms appears as the power increases. As the heights of the spectral peaks at 9.5 and 11.5 Hz increase, the depth of the “valley” in the spectrogram between them increases also (Fig. 1B and SI Appendix, Fig. S6 C and D). The valley is at 5 dB between minutes 24 and 27 (Fig. 1B, Right and SI Appendix, Fig. S6D). The MT spectrogram estimates the valley to be at 10 dB (Fig. 1D, Right and SI Appendix, Fig. S6D). In contrast, the SS-MT spectrogram estimates the valley to be at 4 dB (Fig. 1F, Right and SI Appendix, Fig. S6D). In addition, the mean squared error was lower at all frequencies for the SS-MT algorithm compared with the other three methods (SI Appendix, Table S2). We explain the enhanced denoising and enhanced spectral resolution of the SS-MT algorithm relative to the MT algorithm after we analyze the real EEG recordings in the next examples. In SI Appendix, we assess the effect of stationary interval length on spectrogram estimation.

Spectrogram Analysis of the EEG During General Anesthesia.

Anesthetic drugs act in the brain to create the altered states of general anesthesia by producing highly structured oscillations that disrupt normal information flow between brain regions (34, 35). Because these oscillations are readily visible in the EEG, EEG and EEG-derived measures are commonly used to track in real time the brain states of patients receiving general anesthesia and sedation (36). We illustrate the SS-MT methods by comparing them with the other three spectrogram methods in the analysis of EEG recordings from patients during general anesthesia.
The EEG recordings, in this example and the subsequent examples, are deidentified data collected as part of protocols at the Massachusetts General Hospital (MGH) that have been approved by the MGH Human Research Committee. For EEGs recorded from patients, informed consent was not required, whereas for EEGs recorded from volunteer subjects, informed consent was required and was obtained. For each patient, the EEG was continuously recorded during general anesthesia using the Sedline monitor (Masimo) with the standard six-electrode frontal montage. The Sedline array records from electrodes located approximately at positions Fp1, Fp2, F7, and F8. On each channel, the electrode impedance was less than 5 kohms. We used the EEG data recorded at Fp1 for the spectral analyses and the EEG data recorded at Fp1 and Fp2 for the coherence analyses (SI Appendix, Fig. S12). We began the EEG recordings approximately 3–5 min before induction of general anesthesia and continued the recordings for ∼3–5 min after extubation.
The data analyzed in Fig. 2 consist of 190 min of EEG recorded at 250 Hz during maintenance of general anesthesia using the ether anesthetic sevoflurane with oxygen. Hence, we take T = 2,850,000. We set J = 500 based on our several years of experience analyzing the EEG of anesthetized patients. Hence, we have K = 5,700. We chose M, the number of tapers, to be three. This corresponds to a spectral resolution of 2 Hz for the MT method. We estimated the model parameters and initial conditions from the first 5 min of data using the EM algorithm. To estimate the observation noise variance in the EM algorithm, we restricted the analysis to frequencies in the physiologically relevant range from 0.1 to 30 Hz. The raw EEG signal (Fig. 2B) shows strong modulation with changes in the sevoflurane concentration (Fig. 2A).
Fig. 2.
Spectrogram analysis of EEG time series recorded from a patient under general anesthesia maintained with sevoflurane and oxygen. (A) The expired concentration of sevoflurane. (B) Raw EEG signals. (C) Periodogram. (D) MT method spectrogram. (E) SS periodogram. (F) SS-MT spectrogram. The color scale is in decibels.
All four spectrograms for these data show the well-known alpha-beta oscillations (8–17 Hz) and slow-delta oscillations (0.1–4 Hz) that are characteristic of general anesthesia maintained by sevoflurane (36). When the sevoflurane concentration increases, the power in the alpha-beta band shifts to lower frequencies, while the power in the slow-delta band power shifts to higher frequencies. The opposite changes occur when the sevoflurane concentration decreases. The spectral changes associated with increases in the sevoflurane concentration appear as increases in theta oscillation power (4–8 Hz) (36). The periodogram (Fig. 2C) shows diffuse, grainy power between 10 and 17 Hz and in the slow-delta range. As expected, the MT spectrogram (Fig. 2D) has higher spectral resolution relative to the periodogram. Both the periodogram and the MT spectrogram show diffuse power ranging from 7 to −2 dB in the theta range and from −5 to −15 dB in the beta-gamma range (>17 Hz). Relative to the periodogram and the MT spectrogram, the SS periodogram (Fig. 2E) and the SS-MT spectrogram (Fig. 2F) show substantially greater denoising defined as a reduction in power in the frequency bands with low power. For the latter two spectrograms, the power in the beta-gamma range is uniformly at −15 dB, which is a 5–15 dB power reduction relative to the MT spectrogram. Both the SS periodogram and the SS-MT spectrogram estimate the power in the theta band to be 10–15 dB less than that for either the periodogram or the MT spectrogram. Like the periodogram, the prominent alpha-beta and the slow-delta power in the SS periodogram is grainy and diffuse.

Spectrogram Analysis of the EEG During Transitions Among Anesthetic States.

To illustrate the full potential of the SS-MT algorithm, we reanalyze 155.4 min of EEG data recorded at 250 Hz from a frontal lead in a human volunteer subject receiving i.v. propofol administered by a computer-controlled infusion at an increasing and then, a decreasing infusion rate (35). Gradually increasing the propofol infusion rate allows the subject to transition gradually from being awake to unconsciousness (UNC). Gradually decreasing the propofol infusion rate from the rate required to achieve the maximum target effect site concentration (model-derived brain concentration) allows the subject to transition from UNC to the awake state. Baseline EEG was recorded for 20 min while the subject lay supine with eyes closed and received no propofol. After the baseline period, propofol was administered through a computer-controlled infusion to achieve five different effect site concentrations in increasing levels (Fig. 3A) (35). After completing the fifth level, the propofol infusion rate was systematically decreased to achieve a similar sequence of target effect site concentrations in decreasing order until the infusion was stopped. Each target effect site concentration was maintained for 14 min.
Fig. 3.
Spectrogram analysis of EEG recorded in a volunteer subject receiving a computer-controlled infusion of propofol. (A) Time course of propofol target effect site concentrations based on the Schneider model (35). The black dashed vertical lines define the anesthetic states determined by the behavioral analysis: awake1, baseline conscious state; LC; UNC; RC; and awake2, final conscious state. (B) Two seconds of unprocessed EEG (black curves) and of EEG extracted from the SS-MT analysis (red curves) [14] at different target effect site concentrations. (C) MT method spectrogram. (D) SS-MT spectrogram. The color scale is in decibels.
Based on the analyses of the subject’s responses to yes–no questions administered every 4 s, we identified five distinct behavioral or anesthetic states: conscious, loss of consciousness (LC), UNC, recovering consciousness (RC), and conscious (Fig. 3A, black dashed vertical lines) (35). The important scientific question to answer is whether these distinct anesthetic states are associated with distinct EEG signatures.
We take T= 2,331,000 and set J= 1,000, K= 2,331, and M= 5. This gives a spectral resolution of 1.5 Hz for the MT method. Because we expect the EEG frequency content to change substantially with changes in target effect site concentration, we estimated, using the EM algorithm, a new set of model parameters from the first 5 min of EEG data recorded at each level. The effects of changing the propofol infusion rate are apparent in the unprocessed EEG (Fig. 3B, black curves), the denoised time-domain signal (Eq. 14 and Fig. 3B, red curves), the MT spectrogram (Fig. 3C), and the SS-MT spectrogram (Fig. 3D). At baseline, moderate-amplitude slow oscillations dominate the EEG. Low-amplitude beta-gamma oscillations appear midway through level 2 and transition into narrow-band, high-amplitude alpha oscillations by level 4. At the same time, the slow oscillations transition to high-amplitude, slow-delta oscillations. By level 5, the alpha oscillations have nearly dissipated, and the EEG is dominated by slow-delta oscillations. As the propofol infusion rate is decreased, EEG dynamics are recapitulated in reverse. As in the previous examples, the SS-MT spectrogram shows substantial spectral denoising and increased resolution relative to the MT spectrogram. The denoised time-domain signals and the SS-MT spectrogram strongly suggest that different oscillatory components are present in the EEG when the subject is in different anesthetic states.

Inferring Differences in Spectral Power Between Anesthetic States.

To make a formal statistical inference about the relationship between these anesthetic states and EEG signatures, we compare the power across the spectrogram from 0.1 to 30 Hz among representative 100-s intervals during each of the anesthetic states (Fig. 3A). The 100-s intervals are baseline awake period awake1, 500–600 s; LC, 3,100–3,200 s; UNC, 4,600–4,700 s; RC, 6,600–6,700 s; and final awake state awake2, 9,000–9,100 s. To compare two 100-s intervals for each frequency ω in a given frequency range, we compute the average difference spectrogram between two intervals:
$\Delta \bar{f}_{r,s}(\omega) = \frac{1}{100}\left[\int_r f_t^{\text{SS-MT}}(\omega)\,dt - \int_s f_t^{\text{SS-MT}}(\omega)\,dt\right],$
[19]
where $r$ and $s$ are two distinct 100-s intervals. To determine if there is a significant change in the spectrogram between any two of the anesthetic states, we use a Monte Carlo procedure to compute an approximate empirical Bayes’ 95% confidence interval (95% CI) for $\Delta \bar{f}_{r,s}(\omega)$ (30). Together, the Kalman filter [7 and 8], the Kalman smoothing [16], and the covariance smoothing [17] algorithms define the multivariate complex Gaussian joint posterior density of $\Delta Z_{k|K}$ for $k = 1, \ldots, K$, conditional on the parameter estimates. The quantity $\Delta \bar{f}_{r,s}(\omega)$ is a function of the $\Delta Z_{k|K}$, so that given a random sample of the $\Delta Z_{k|K}$, we can compute $\Delta \bar{f}_{r,s}(\omega)$. By drawing a large number of the $\Delta Z_{k|K}$, say 1,000, we can, therefore, compute 95% CIs for $\Delta \bar{f}_{r,s}(\omega)$ (Fig. 4). A significant difference in power is observed if zero is not in the 95% CI.
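The sketch below illustrates this Monte Carlo computation. For brevity it draws each interval independently from its smoothed marginal posterior rather than from the full joint posterior that uses the covariances of Eq. 17, so it understates the dependence that the paper's procedure retains; the names and this simplification are our own.

```python
import numpy as np

def power_difference_ci(dZ_s, P_s, idx_r, idx_s, n_draws=1000, alpha=0.05, seed=0):
    """Monte Carlo sketch of the empirical Bayes CI for the average power difference of Eq. 19.

    dZ_s, P_s    -- (K, M, J) smoothed means and variances of the increment differences
    idx_r, idx_s -- arrays of stationary-interval indices covering the two 100-s windows
    Returns (lower, upper) CI bounds for each frequency bin.
    """
    rng = np.random.default_rng(seed)
    idx = np.concatenate([idx_r, idx_s])
    m, p = dZ_s[idx], P_s[idx]                       # only the intervals that are needed
    n_r = len(idx_r)

    diffs = np.empty((n_draws, dZ_s.shape[2]))
    for i in range(n_draws):
        z = rng.standard_normal(m.shape) + 1j * rng.standard_normal(m.shape)
        draw = m + np.sqrt(p / 2.0) * z              # circular complex Gaussian posterior draws
        spec = np.mean(np.abs(draw) ** 2, axis=1)    # Eq. 9 applied to each draw
        diffs[i] = spec[:n_r].mean(axis=0) - spec[n_r:].mean(axis=0)   # Eq. 19
    lo, hi = np.quantile(diffs, [alpha / 2, 1 - alpha / 2], axis=0)
    return lo, hi
```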
Fig. 4.
The 95% CIs for comparisons of the average power differences between the anesthetic states in Fig. 3A. Each panel shows the 95% empirical Bayes confidence interval (CI) for the average power difference defined by the lower 2.5% CI bound (blue curves), and the upper 97.5% CI bound (red curves). (A) LC-awake1: LC compared with baseline conscious state. (B) UNC-awake1: UNC compared with baseline conscious state. (C) RC-awake1: RC compared with baseline conscious state. (D) Awake2-awake1: final conscious state compared with baseline conscious state. (E) LC-UNC: LC compared with the UNC state. (F) LC-RC: LC compared with RC.
We show 6 of 10 possible comparisons of differences in power among the five anesthetic states (Fig. 4). The LC and UNC states show significant increases in power in the slow-delta and alpha bands relative to awake1, the baseline awake state (Fig. 4 A and B). There is also a significant increase in power in the upper part of the slow-delta and in the alpha bands between RC and awake1 (Fig. 4C) and between LC and the UNC state (Fig. 4E). In contrast, there are no appreciable differences in power between awake2 and awake1 (Fig. 4D) or between LC and RC (Fig. 4F). These findings are in complete agreement with the structure of the power in the spectrogram (Fig. 3D). We can conclude that there are significant quantifiable differences in EEG power between different anesthetic states and that those differences can range from 10 to 20 dB (95% CI). These findings also agree with and go beyond the original analyses of these data, in which hypothesis testing methods with Bonferroni corrections were used to compare the anesthetized states with just awake1 (35). Using the model in Eq. 18, we assessed the coverage probability of the empirical Bayes’ CIs in a simulation study (SI Appendix) and found that the actual and nominal coverage probabilities are in good agreement.

Spectrogram Denoising.

The SS-MT spectrogram has greater denoising than the MT spectrogram (Figs. 1–3) because of the stochastic continuity constraints (Eq. 3) and the eigenspectrogram averaging (SI Appendix, Figs. S3 and S4). The stochastic continuity constraint has a different independent effect at each frequency. In both the theoretical and the real data examples, the state variances, $\sigma_{v,l}^{2,(m)}$, are small (0.05–4 dB) for frequencies with low power and large (27–38 dB) for frequencies with high power (SI Appendix, Fig. S5, blue curves). The Kalman gains, $C_{k,l}^{(m)}$, reached steady-state values within 5–10 updates, and like the state variances, the Kalman gains were small (0.1–0.4) for frequencies with low power and large (0.7–0.95) for frequencies with high power (SI Appendix, Fig. S5, red curves). Rewriting Eq. 7c as
$\Delta Z_{k|k}^{(m)}(\omega_l) = \left(1 - C_{k,l}^{(m)}\right)\Delta Z_{k-1|k-1}^{(m)}(\omega_l) + C_{k,l}^{(m)}Y_{k,l}^{(m),F}$
[20]
shows that the increment difference estimate on interval $k$ is a weighted average between the increment difference estimate on interval $k-1$ and the Fourier transform of the tapered data on interval $k$. In particular, frequencies with low power weight $\Delta Z_{k-1|k-1}^{(m)}(\omega_l)$ more than $Y_{k,l}^{(m),F}$. This weighting favors suppressing increases or fluctuations in the low-power or noise frequencies. In contrast, frequencies with high power weight the new information in $Y_{k,l}^{(m),F}$ more heavily. These differential effects denoise the spectrogram by heightening the contrast between frequencies with high power and those with low power in the analysis of the simulated data (Fig. 1F and SI Appendix, Fig. S6 C and D) and in the analysis of the actual EEG recordings (Fig. 2F and SI Appendix, Figs. S6C and S9C).
Averaging the MT eigenspectrograms reduces the variability in the MT spectrogram (SI Appendix, Fig. S3) (11). Each SS-MT eigenspectrogram (SI Appendix, Fig. S4 A–C) has variability comparable with the SS periodogram (Fig. 2E) (29). Averaging the SS-MT eigenspectrograms reduces the variability of the SS-MT spectrogram at each frequency relative to the SS periodogram (SI Appendix, Fig. S4D) by $M^{-1}$ [9], thus giving the SS-MT spectrogram a further denoising enhancement.

Spectral Resolution and Leakage Reduction.

Kalman filter updating [7c and 20] enhances the spectral resolution of the SS-MT spectrogram relative to the MT spectrogram by reducing leakage. To see why, assume that $f_k(\omega_j)$ and $f_k(\omega_l)$ are the true spectrograms on time interval $k$ at two frequencies $\omega_j$ and $\omega_l$, respectively, and that $f_k(\omega_j) \gg f_k(\omega_l)$. Let $\Delta\omega_r$ be the frequency resolution chosen for the MT analysis. If $|\omega_j - \omega_l| < \Delta\omega_r$ ($|\omega_j - \omega_l| > \Delta\omega_r$), then in the MT analysis, the narrow (broad)-band power at $\omega_j$ leaks into the power at $\omega_l$ (12). The extent of the leakage is governed by the power spectral density of each taper (SI Appendix, Figs. S7, S8, S10, and S11). In the SS-MT analysis, because $\omega_l$ has low power, $\Delta Z_{k|k}^{(m)}(\omega_l)$ weights $\Delta Z_{k-1|k-1}^{(m)}(\omega_l)$ much more than $Y_{k,l}^{(m),F}$, the term in Eq. 20 that carries the leakage from $\omega_j$. Hence, broad- and narrow-band power leakage from $\omega_j$ into the power at $\omega_l$ is reduced, because the Kalman gain at $\omega_l$ is small.
For example, at 70 min (Fig. 2 D and F and SI Appendix, Fig. S9 A and C), the MT and SS-MT spectrograms agree in the high-power frequency bands (i.e., 0.1–5 and 9.5–15.5 Hz) and disagree in the low-power frequency bands (5.1–9.4 and 15.6–30 Hz); 6 Hz is just on the border of the narrow-band leakage from 5 Hz for the MT spectrogram (SI Appendix, Fig. S9A). The 12-dB difference between the MT and the SS-MT spectrograms at 6 Hz results, because the former has leakage from the power at 5 Hz, whereas the latter has enhanced denoising and reduced leakage. A 10- to 15-dB power difference persists between the MT and SS-MT spectrograms beyond 15 Hz because of the small values of the Kalman gain in this frequency band (SI Appendix, Fig. S9 C and D).
At 80 min (Fig. 2 D and F and SI Appendix, Fig. S9 B and D), the MT and SS-MT spectrograms also agree in the high-power frequency bands (0.1–5 and 10.5–15 Hz) and disagree in the low-power frequency bands (i.e., 5.1–9.4 and 15.1–30 Hz). A similar argument explains the 7-dB difference in power at 16 Hz between the MT and the SS-MT spectrograms at minute 80. The same argument also explains in the analysis of the simulated data example the 7-dB difference in power at 11 Hz in the MT and SS-MT spectrograms at 25 min (SI Appendix, Fig. S6D). The differences between MT and SS-MT tapering are discussed in SI Appendix.

Discussion

Time series are a prominent big data class with growth that is being spurred by innovations in sensing and recording technologies. These data track dynamic processes, making accurate real-time analyses an important objective. Aside from simply extracting important features from the series, the analysis should provide associated measures of uncertainty, so that formal statistical inference can be conducted the same way that it would be conducted for questions arising from smaller, simpler datasets. No current time-frequency analysis method provides an inference framework applicable to the entire series (8, 11–16). To address the inference problem for nonstationary time series, we combined the MT and the SS approaches into an empirical Bayes’ paradigm for frequency- and time-domain analyses of nonstationary time series. We demonstrated the SS-MT inference paradigm by analyzing differences in EEG power between different propofol-induced anesthetic states (Fig. 4). By reporting the results in terms of 95% empirical Bayes’ CIs, we measure directly the effect size (i.e., how much EEG power differs among propofol’s anesthetic states). We base our inferences on 95% CIs derived from the empirical Bayes’ estimate of the joint posterior distribution of the power across all of the frequencies and all times in the time series rather than on tests of multiple hypotheses. Our small simulation study (SI Appendix) suggests that the nominal and actual coverage probabilities of the empirical Bayes’ CIs are in good agreement. The empirical Bayes’ paradigm has been suggested as a practical approach to solving large-scale inference problems (37).
Our analyses offer scientific insights. The SS-MT spectrograms show denoising and spectral resolution that more clearly define the frequency content of anesthetic EEG signals. As a consequence, the results in Fig. 2 suggest that most of the theta oscillation power in the sevoflurane spectrogram could be caused by power shifts from both the alpha band above and the slow-delta band below. The results in Fig. 4 allow us to make formal inferences about EEG power difference as a function of the level of unconsciousness in a single individual.

SS-MT Time-Frequency Analysis of Nonstationary Time Series.

In addition to providing a statistical inference framework, the SS-MT paradigm has other desirable features. By the spectral representation theorem, the orthogonal increment differences are the fundamental process that underlies a stationary time series (27, 38). Hence, we defined nonstationarity by starting with the common practice of choosing a minimal interval on which the time series is assumed stationary (SI Appendix, Table S1) and then applying a stochastic continuity constraint [3 and 6] to link the increment differences across the minimal intervals. We constructed the SS model by taking the observed data to be the Fourier transforms of the $M$ tapered data series [5]. For a given taper, the Fourier transform of the tapered data is a set of $J$ independent, complex Gaussian observations in the frequency domain. Hence, to estimate the increment differences, we implemented in parallel $J$ independent 1D complex Kalman filters [7a–7d]. Given the $M$ tapers, the $MJ$ algorithms run in parallel, and the SS-MT spectrogram (cross-spectrogram) estimates are computed by summing the $M$ eigenspectrograms (eigen cross-spectrograms) in Eq. 9 (SI Appendix, Eq. S14) at each Kalman filter update. Parallel computation makes SS-MT spectrogram estimation attractive for real-time applications. Each 1D complex Kalman filter has an associated Kalman smoother [16], covariance smoothing [17], and an EM algorithm for parameter estimation at each frequency (SI Appendix).
Both the SS and the MT components of SS-MT analysis contribute significantly to spectrogram denoising. The state variances and Kalman gains are high (low) at frequencies with high (low) power (SI Appendix, Fig. S5). Therefore, the Kalman filter updating [7c and 20] denoises the spectrogram by heightening the contrast between high- and low-power spectrogram ordinates (Figs. 1 D and F, 2 D and F, and 3 C and D). The MT component of the SS-MT algorithm further contributes to the denoising by averaging the eigenspectrograms to reduce the variance at all frequencies by $M^{-1}$ (SI Appendix, Figs. S3 and S4). In addition, SS estimation [7c and 20] enhances the spectral resolution in the SS-MT spectrogram relative to the MT spectrogram by reducing both narrow-band and broad-band leakage (SI Appendix, Figs. S6 and S9). Because the Kalman gains at low-power frequencies are small, leakage from even nearby frequencies with high power is reduced (SI Appendix, Figs. S6 and S9). In our simulated and real data examples, the effect of SS updating on denoising and spectral resolution was a 10- to 15-dB difference between the SS-MT and the MT spectrograms in the low-power frequency bands (Figs. 1 D and F, 2 D and F, and 3 C and D and SI Appendix, Figs. S6 and S9).
By applying the spectral representation theorem to the estimated increment differences [14 and 15], we extracted the time-domain signals within specified frequency bands as well as instantaneous phase and amplitude (Fig. 3B and SI Appendix, Figs. S1 and S2). The SS-MT paradigm is highly flexible, because arbitrary combinations of frequency components can be chosen to construct the time-domain signal. Time-domain signal extraction is not possible with standard nonparametric spectral methods, which only estimate power as a function of frequency. Estimating instantaneous phase and amplitude with conventional approaches requires a separate analysis (14). The SS-MT paradigm conducts spectral analysis, signal extraction, and instantaneous phase and amplitude estimation as parts of a unified framework.

Theoretical and Problem-Specific Methods Development.

Our local stationarity definition differs from the Dahlhaus definition, which assumes time-invariant increment differences with deterministic continuity of the time-dependent transfer function in the spectral representation (33). The lengths of the local intervals are either assumed (33) or estimated (39). The local spectral estimates are computed as the squared modulus of the Fourier transform of the data after applying a single taper or by fitting a parametric model using local Whittle likelihoods (40). Like the MT methods, these local estimation approaches do not combine information across the local stationary intervals.
In contrast, the stochastic continuity constraint imposed by the random walk model enables recursive estimation of the time-dependent increment differences and the spectrogram. The current form of the continuity constraint has a theoretical drawback. It implies that, at each frequency, spectral power grows with time, since the theoretical spectrum on stationary interval $k$ at frequency $\omega_j$ is
$f_k(\omega_j)\,d\omega_j = f_{k-1}(\omega_j)\,d\omega_j + \sigma_j^2.$
[21]
In practice, the spectrogram estimate does not explode, because on every stationary interval $k$, $C_{k,l}^{(m)}$, the Kalman gain, is bounded between zero and one [8]. If the Kalman gain is zero, then the spectrogram estimate on interval $k$ is the SS-MT spectrogram estimate on interval $k-1$, whereas if the Kalman gain is one, then the spectrogram estimate on interval $k$ is the MT spectrogram estimate on interval $k$ [20]. Nevertheless, a possible correction to Eq. 21 is to assume that the increment differences follow a stationary model, such as
$\Delta Z_k(\omega_j) = \rho_j\Delta Z_{k-1}(\omega_j) + v_k(\omega_j),$
[22]
where we assume that $0 < \rho_j < 1$. Hence, we have
$f_k(\omega_j)\,d\omega_j = E\left\|\Delta Z_k(\omega_j)\right\|^2 = \rho_j^2 E\left\|\Delta Z_{k-1}(\omega_j)\right\|^2 + \sigma_j^2 = \rho_j^2 f_{k-1}(\omega_j)\,d\omega_j + \sigma_j^2.$
[23]
Eq. 22 means that the nonstationary time series has an underlying stationary increments process. The parameter $\rho_j$ can easily be estimated in the EM algorithm. Our SS approach falls into the class of regularization methods for solving big data problems (41). Thus, the current wealth of regularization research can be applied to the SS-MT paradigm.
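In the filtering sketch given earlier, this correction changes only the prediction step; a minimal illustration is shown below, where $\rho_j = 1$ recovers the random-walk prediction and $\rho_j$ would be estimated in the EM algorithm as noted above.

```python
import numpy as np

def ar1_prediction(dZ, P, rho, sigma2_v):
    """Prediction step under the stationary increment model of Eq. 22; rho = 1 recovers the
    random-walk prediction of Eqs. 7a and 7b, and the gain, update, and variance steps
    (Eqs. 7c, 7d, and 8) are unchanged. Arguments broadcast over tapers and frequencies."""
    return rho * dZ, rho ** 2 * P + sigma2_v
```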
In our data analyses, we followed the common practice of setting the stationary interval a priori (SI Appendix, Table S1). Our analyses (SI Appendix, Fig. S14 and Table S2) suggest that the spectrogram estimates can be sensitive to interval choice and that different stationary intervals could be optimal for different frequencies. Therefore, in future work, we will incorporate interval choice into the estimation by evaluating the likelihood as a function of stationary interval length.
At present, both our model-fitting algorithms and inference framework depend critically on the Gaussian observation and state models [2 and 3]. Description of signal frequency content and inferences may be inaccurate when these assumptions do not hold. As an alternative, model-fitting and inference using non-Gaussian SS models can be readily carried out using sequential Monte Carlo methods (42). This extension will be the topic of a future investigation.
Application of the SS-MT paradigm in time-frequency analyses of different types of nonstationary time series is a productive way to extend our methods by allowing the question under investigation to guide problem-specific development of the framework. We will continue to study EEG time series recorded under different anesthetic drugs (36). The EEG recorded during sevoflurane general anesthesia (Fig. 2) suggests a smooth process defining continuity among the time intervals. Therefore, higher-order stationary models could be chosen to impose a greater degree of smoothness on the increment differences and the spectrogram. In contrast, the EEG recorded during induction of and recovery from propofol-induced unconsciousness (Fig. 3) suggests that a process with jump discontinuities may be a more appropriate state model for these data. SI Appendix, Table S1 summarizes problems from different fields of science that have used MT spectral methods to study nonstationary processes. These several applications suggest a rich testbed for further development of the SS-MT paradigm.
Adaptive model parameter estimation using local likelihood, SS, or method of moments techniques can be combined with different continuity models. A key decision in using adaptive estimation is defining the timescale of the parameter updates. We used the target anesthetic levels as covariates—subject matter constraints—to set this timescale in Fig. 3. Subject matter constraints may also be used to reduce the number of parameters. We limited state variance estimation to frequencies below 30 Hz based on knowledge of the frequency range relevant for tracking anesthetic states (36). The naïve empirical Bayes’ CIs had good coverage probabilities based on our small simulation study of the model in Eq. 18. These intervals can be further calibrated by taking account of the uncertainty in the maximum likelihood parameter estimates obtained from the EM algorithm (43). Computation of the SS-MT cross-spectrogram (SI Appendix, Fig. S12) suggests that our paradigm can be extended to inference problems for multivariate nonstationary time series.
The SS-MT paradigm provides a computationally efficient framework for spectrogram estimation, time-domain signal extraction, and statistical inference for nonstationary time series.

Acknowledgments

This work was partially supported by National Research Foundation of Korea Grant 2017R1C1B5017254 from the Korean Government (to S.K.), a Guggenheim Fellowship in Applied Mathematics (to E.N.B.), NIH Transformative Research Award R01-GM104948 (to E.N.B.), NIH Award P01-GM118629 (to E.N.B.), funds from MGH (E.N.B.), and funds from the Picower Institute for Learning and Memory (E.N.B.).

Supporting Information

Appendix (PDF)

References

1
S Chatterji, L Blackburn, G Martin, E Katsavounidis, Multiresolution techniques for the detection of gravitational-wave bursts. Classical Quantum Gravity 21, S1809–S1818 (2004).
2
JD Haigh, AR Winning, R Toumi, JW Harder, An influence of solar spectral variations on radiative forcing of climate. Nature 467, 696–699 (2010).
3
SS Haykin, AO Steinhardt Adaptive Radar Detection and Estimation (Wiley-Interscience, New York) Vol 11 (1992).
4
WJ Emery, RE Thomson Data Analysis Methods in Physical Oceanography (Elsevier Science, Amsterdam, 2001).
5
P Misra, P Enge Global Positioning Systems: Signals, Measurements, and Performance (Ganga-Jamuna, Lincoln, MA, 2006).
6
X Zheng, BM Chen Stock Market Modeling and Forecasting (Springer, London) Vol 442 (2013).
7
W Truccolo, UT Eden, MR Fellows, JP Donoghue, EN Brown, A point process framework for relating neural spiking activity to spiking history, neural ensemble, and extrinsic covariate effects. J Neurophysiol 93, 1074–1089 (2005).
8
P Mitra, H Bokil Observed Brain Dynamics (Oxford Univ Press, New York, 2007).
9
T Yilmaz, R Foster, Y Hao, Detecting vital signs with wearable wireless sensors. Sensors 10, 10837–10862 (2010).
10
TF Quatieri Discrete-Time Speech Signal Processing: Principles and Practice (Prentice Hall, Englewood Cliffs, NJ, 2008).
11
DB Percival, AT Walden Spectral Analysis for Physical Applications (Cambridge Univ Press, Cambridge, UK, 1993).
12
B Babadi, EN Brown, A review of multitaper spectral analysis. IEEE Trans Biomed Eng 61, 1555–1564 (2014).
13
TH Koornwinder Wavelets: An Elementary Treatment of Theory and Applications (World Scientific, River Edge, NJ, 1995).
14
NE Huang, et al., The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis. Proc R Soc Math Phys Eng Sci 454, 903–995 (1998).
15
I Daubechies, J Lu, H-T Wu, Synchrosqueezed wavelet transforms: An empirical mode decomposition-like tool. Appl Comput Harmon Anal 30, 243–261 (2011).
16
I Daubechies, YG Wang, H-T Wu, ConceFT: Concentration of frequency and time via a multitapered synchrosqueezed transform. Philos Trans A Math Phys Eng Sci 374, 20150193 (2016).
17
L Fahrmeir, G Tutz Multivariate Statistical Modeling Based on Generalized Linear Models (Springer, New York, 2001).
18
J Durbin, SJ Koopman Time Series Analysis by State Space Methods (Oxford Univ Press, Oxford) Vol 38 (2012).
19
EN Brown, LM Frank, D Tang, MC Quirk, MA Wilson, A statistical paradigm for neural spike train decoding applied to position prediction from ensemble firing patterns of rat hippocampal place cells. J Neurosci 18, 7411–7425 (1998).
20
AC Smith, EN Brown, Estimating a state-space model from point process observations. Neural Comput 15, 965–991 (2003).
21
RH Shumway, DS Stoffer, An approach to time series smoothing and forecasting using the EM algorithm. J Time Ser Anal 3, 253–264 (1982).
22
T Bohlin, Analysis of EEG signals with changing spectra a short-word Kalman estimator. Math Biosci 35, 221–259 (1977).
23
G Kitagawa, W Gersch Smoothness Priors Analysis of Time Series (Springer, Berlin) Vol 116 (1996).
24
MP Tarvainen, JK Hiltunen, PO Ranta-aho, PA Karjalainen, Estimation of nonstationary EEG with Kalman smoother approach: An application to event-related synchronization (ERS). IEEE Trans Biomed Eng 51, 516–524 (2004).
25
Y Qi, TP Minka, RW Picard, Bayesian spectrum estimation of unevenly sampled nonstationary data. Proc IEEE Int Conf Acoust Speech Signal Process 2, 1473–1476 (2002).
26
D Ba, B Babadi, PL Purdon, EN Brown, Robust spectrotemporal decomposition by iteratively reweighted least squares. Proc Natl Acad Sci USA 111, E5336–E5345 (2014).
27
RH Shumway, DS Stoffer Time Series Analysis and Its Applications (Springer, New York, 2011).
28
MJ Prerau, RE Brown, MT Bianchi, JM Ellenbogen, PL Purdon, Sleep neurophysiological dynamics through the lens of multitaper spectral analysis. Physiology 32, 60–92 (2017).
29
DR Brillinger Time Series: Data Analysis and Theory (SIAM, Philadelphia) Vol 38 (2001).
30
G Czanner, UT Eden, S Wirth, M Yanike, WA Suzuki, EN Brown, Analysis of between-trial and within-trial neural spiking dynamics. J Neurophysiol 99, 2672–2693 (2008).
31
BP Carlin, TA Louis Bayesian Methods for Data Analysis (CRC, 3rd Ed, Boca Raton, FL, 2009).
32
AP Dempster, NM Laird, DB Rubin, Maximum likelihood from incomplete data via the EM algorithm. J R Stat Soc Ser B Stat Methodol 39, 1–38 (1977).
33
R Dahlhaus, Fitting time series models to nonstationary processes. Ann Stat 25, 1–37 (1997).
34
A Cimenser, et al., Tracking brain states under general anesthesia by using global coherence analysis. Proc Natl Acad Sci USA 108, 8832–8837 (2011).
35
PL Purdon, et al., Electroencephalogram signatures of loss and recovery of consciousness from propofol. Proc Natl Acad Sci USA 110, E1142–E1151 (2013).
36
PL Purdon, A Sampson, KJ Pavone, EN Brown, Clinical electroencephalography for anesthesiologists part I: Background and basic signatures. Anesthesiology 123, 937–960 (2015).
37
B Efron Large-Scale Inference: Empirical Bayes Methods for Estimation, Testing, and Prediction (Cambridge Univ Press, Cambridge, UK, 2013).
38
MB Priestley Spectral Analysis and Time Series (Academic, London, 1981).
39
S Adak, Time-dependent spectral analysis of nonstationary time series. J Amer Statist Assoc 93, 1488–1501 (1998).
40
R Dahlhaus, A likelihood approximation for locally stationary processes. Ann Stat 28, 1762–1794 (2000).
41
J Fan, F Han, H Liu, Challenges of big data analysis. Natl Sci Rev 1, 293–314 (2014).
42
A Doucet, N De Freitas, N Gordon Sequential Monte Carlo Methods in Practice (Springer, New York, 2001).
43
GJ McLachlan, T Krishnan The EM Algorithm and Extensions (Wiley, Hoboken, NJ, 2008).

Keywords

  1. nonparametric spectral analysis
  2. spectral representation theorem
  3. complex Kalman filter
  4. statistical inference
  5. big data

Notes

This article is a PNAS Direct Submission.

Authors

Affiliations

Seong-Eun Kim1
Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, MA 02139;
Department of Electronics and Control Engineering, Hanbat National University, Daejeon 34158, Korea;
Michael K. Behr1
Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139;
Demba Ba1
School of Engineering and Applied Sciences, Harvard University, Cambridge, MA 02138;
Emery N. Brown2 [email protected]
Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, MA 02139;
Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139;
Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Boston, MA 02114;
Institute of Medical Engineering and Science, Massachusetts Institute of Technology, Cambridge, MA 02139

Notes

2
To whom correspondence should be addressed. Email: [email protected].
Author contributions: S.K., D.B., and E.N.B. designed research; S.K., M.K.B., D.B., and E.N.B. performed the research; S.K., M.K.B., D.B., and E.N.B. contributed new analytic tools; S.K., M.K.B., D.B., and E.N.B. analyzed the data; and S.K. and E.N.B. wrote the paper.
1
S.K., M.K.B., and D.B. contributed equally to this work.

Competing Interests

Conflict of interest statement: The authors have applied for patents on the use of these methods to track brain states of patients receiving general anesthesia and sedation.
