Spectrum Estimation
Reference:
Slides of EE4C03, TUD
Hayes M H. Statistical digital signal processing and modeling
Content
How to estimate the power spectral density of a wide-sense stationary random process?
Since the power spectrum is the Fourier transform of the autocorrelation sequence, estimating the power
spectrum is equivalent to estimating the autocorrelation. For an autocorrelation ergodic process, recall that
$$\lim_{N\to \infty}\left\{\frac{1}{2N+1}\sum_{n=-N}^{N}x(n+k)x^*(n)\right\}=r_x(k)$$
Thus, if $x(n)$ is known for all $n$, estimating the power spectrum is straightforward.
However, there are two difficulties with this approach:
- The amount of data is never unlimited and, in many cases, it may be very small.
- The data is often corrupted by noise or contaminated with an interfering signal.
Therefore, we need to consider other approaches. They may generally be categorized into one of two classes: nonparametric or parametric.
Nonparametric Methods
The Periodogram
Suppose that $x(n)$ is only measured over a finite interval, say $n=0,1,\cdots,N-1$; then the autocorrelation sequence must be estimated using a finite sum,
$$\hat r_x(k)=\frac{1}{N}\sum_{n=0}^{N-1}x(n+k)x^*(n)\tag{PD.1}$$
In order to ensure that the values of $x(n)$ that fall outside the interval $[0,N-1]$ are excluded from the sum, (PD.1) is rewritten as follows (this is the biased estimate):
$$\hat r_x(k)=\frac{1}{N}\sum_{n=0}^{N-1-k}x(n+k)x^*(n);\quad k=0,1,\cdots,N-1 \tag{PD.2}$$
with
$$\begin{aligned} &\hat r_x(-k)=\hat r_x^*(k);&& k<0\\ &\hat r_x(k)=0;&& |k|\ge N \end{aligned} \tag{PD.3}$$
Taking the discrete-time Fourier transform of $\hat r_x(k)$ leads to an estimate of the power spectrum known as the periodogram:
$$\hat P_{per}(e^{j\omega})=\sum_{k=-N+1}^{N-1}\hat r_x(k)e^{-jk\omega }\tag{PD.4}$$
Although the periodogram is defined in terms of the estimated autocorrelation sequence $\hat r_x(k)$, it is more convenient to express it directly in terms of the process $x(n)$. Define $x_N(n)$ as the finite-length signal of length $N$ that is equal to $x(n)$ over the interval $[0,N-1]$ and is zero otherwise,
$$x_N(n)=\left\{\begin{aligned}&x(n);&& 0\le n< N\\ &0;&&\text{otherwise}\end{aligned}\right.\tag{PD.5}$$
i.e., $x_N(n)$ is the product of $x(n)$ with a rectangular window $w_R(n)$,
$$x_N(n)=w_R(n)x(n)\tag{PD.6}$$
In terms of $x_{N}(n)$, the estimated autocorrelation sequence may be written as
$$\hat{r}_{x}(k)=\frac{1}{N} \sum_{n=-\infty}^{\infty} x_{N}(n+k) x_{N}^{*}(n)=\frac{1}{N} x_{N}(k) * x_{N}^{*}(-k)\tag{PD.7}$$
Taking the Fourier transform and using the convolution theorem, the periodogram becomes
$$\hat{P}_{per}\left(e^{j \omega}\right)=\frac{1}{N} X_{N}\left(e^{j \omega}\right) X_{N}^{*}\left(e^{j \omega}\right)=\frac{1}{N}\left|X_{N}\left(e^{j \omega}\right)\right|^{2} \tag{PD.8}$$
where $X_{N}(e^{j \omega})$ is the discrete-time Fourier transform of the $N$-point data sequence $x_{N}(n)$,
$$X_{N}\left(e^{j \omega}\right)=\sum_{n=-\infty}^{\infty} x_{N}(n) e^{-j n \omega}=\sum_{n=0}^{N-1} x(n) e^{-j n \omega}$$
Thus, the periodogram is proportional to the squared magnitude of the DTFT of $x_{N}(n)$ and may be computed conveniently using a DFT:
$$x_{N}(n) \stackrel{\text { DFT }}{\longrightarrow} X_{N}(k) \longrightarrow \frac{1}{N}\left|X_{N}(k)\right|^{2}=\hat{P}_{per}\left(e^{j 2 \pi k / N}\right)\tag{PD.9}$$
A MATLAB program for the periodogram is given below:
```matlab
function Px = periodogram(x,n1,n2)
% Periodogram of x(n1:n2), evaluated on a 1024-point frequency grid (PD.9).
x = x(:);
if nargin == 1
    n1 = 1; n2 = length(x);
end
Px = abs(fft(x(n1:n2),1024)).^2/(n2-n1+1);
Px(1) = Px(2);      % replace the DC value (often dominated by the sample mean)
end
```
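As a quick usage sketch (the test signal and plotting choices below are illustrative, not from the reference), the function can be exercised on a short record of two complex exponentials in noise. Note that the name clashes with the Signal Processing Toolbox function `periodogram`; rename the local function if that toolbox is on the path.

```matlab
% Illustrative test: two complex exponentials in complex white noise.
N = 256; n = (0:N-1)';
x = exp(1j*0.4*pi*n) + 0.5*exp(1j*0.6*pi*n) ...
    + sqrt(0.5)*(randn(N,1) + 1j*randn(N,1));

Px = periodogram(x);                  % 1024-point grid, as in the function above
w  = 2*pi*(0:1023)'/1024;             % corresponding frequencies in [0, 2*pi)
plot(w, 10*log10(Px)); xlabel('\omega  [rad/sample]'); ylabel('dB');
```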
A filter bank interpretation of the periodogram
Let $h_i(n)$ be an FIR filter of length $N$ defined as
$$h_i(n)=\frac{1}{N}e^{jn\omega_i}w_R(n)=\left\{\begin{aligned}& \frac{1}{N}e^{jn\omega_i};&& 0\le n< N\\ &0;&&\text{otherwise}\end{aligned}\right.\tag{PD.10}$$
The frequency response of this filter is
$$H_{i}\left(e^{j \omega}\right)=\sum_{n=0}^{N-1} h_{i}(n) e^{-j n \omega}=e^{-j\left(\omega-\omega_{i}\right)(N-1) / 2} \frac{\sin \left[N\left(\omega-\omega_{i}\right) / 2\right]}{N \sin \left[\left(\omega-\omega_{i}\right) / 2\right]}\tag{PD.11}$$
which is a bandpass filter with center frequency $\omega_{i}$ and a bandwidth approximately equal to $\Delta \omega=2 \pi / N$.
If a WSS random process $x(n)$ is filtered with $h_{i}(n)$, then the output process is
$$y_{i}(n)=x(n) * h_{i}(n)=\sum_{k=n-N+1}^{n} x(k) h_{i}(n-k)=\frac{1}{N} \sum_{k=n-N+1}^{n} x(k) e^{j(n-k) \omega_{i}}\tag{PD.12}$$
Since $\left|H_{i}\left(e^{j \omega}\right)\right|_{\omega=\omega_{i}}=1$, the power spectra of $x(n)$ and $y_i(n)$ are equal at frequency $\omega_{i}$:
$$P_{x}\left(e^{j \omega_{i}}\right)=P_{y}\left(e^{j \omega_{i}}\right)\tag{PD.13}$$
Furthermore, if the bandwidth of the filter is small enough that the power spectrum of $x(n)$ may be assumed to be approximately constant over the passband of the filter, then the power in $y_{i}(n)$ will be approximately
$$E\left\{\left|y_{i}(n)\right|^{2}\right\}=\frac{1}{2 \pi} \int_{-\pi}^{\pi} P_{x}\left(e^{j \omega}\right)\left|H_{i}\left(e^{j \omega}\right)\right|^{2} \mathrm{d} \omega \approx \frac{\Delta \omega}{2 \pi} P_{x}\left(e^{j \omega_{i}}\right)=\frac{1}{N} P_{x}\left(e^{j \omega_{i}}\right)\tag{PD.14}$$
and, therefore,
$$P_{x}\left(e^{j \omega_{i}}\right) \approx N E\left\{\left|y_{i}(n)\right|^{2}\right\}$$
Thus, if we are able to estimate the power in $y_{i}(n)$, then the power spectrum at frequency $\omega_{i}$ may be estimated as
$$\hat{P}_{x}\left(e^{j \omega_{i}}\right)=N \hat{E}\left\{\left|y_{i}(n)\right|^{2}\right\}\tag{PD.15}$$
One simple yet very crude way to estimate the power is to use a one-point sample average,
$$\hat{E}\left\{\left|y_{i}(n)\right|^{2}\right\}=\left|y_{i}(N-1)\right|^{2}\tag{PD.16}$$
From (PD.12) we see that this is equivalent to
$$\left|y_{i}(N-1)\right|^{2}=\frac{1}{N^{2}}\left|\sum_{k=0}^{N-1} x(k) e^{-j k \omega_{i}}\right|^{2}\tag{PD.17}$$
Therefore,
$$\hat{P}_{x}\left(e^{j \omega_{i}}\right)=N\left|y_{i}(N-1)\right|^{2}=\frac{1}{N}\left|\sum_{k=0}^{N-1} x(k) e^{-j k \omega_{i}}\right|^{2}\tag{PD.18}$$
which is equivalent to the periodogram. Thus, the periodogram may be viewed as a power spectrum estimate formed with a bank of bandpass filters $h_i(n)=\frac{1}{N}e^{jn\omega_i}w_R(n)$, one for each frequency of interest.
In Section [Minimum Variance Spectrum Estimation](# Minimum Variance Spectrum Estimation) we will consider a generalization of this filter bank idea that allows the filters to be data dependent and to have a frequency response that varies with the center frequency $\omega_i$.
Performance of the periodogram
Since $\hat P_{per}(e^{j\omega})$ is a function of the random variables $x(0),\cdots,x(N-1)$, it is necessary to consider the convergence of the periodogram, i.e., we will be interested in whether or not
$$\lim_{N\to \infty}E\left\{\Big[\hat P_{per}(e^{j\omega})-P_x(e^{j\omega})\Big]^2\right\}=0$$
In order for the periodogram to be mean-square convergent, it is necessary that
- It is asymptotically unbiased:
  $$\lim_{N\to \infty}E\left\{\hat P_{per}(e^{j\omega})\right\}=P_x(e^{j\omega})$$
- Its variance goes to zero as the data record length $N$ goes to infinity:
  $$\lim_{N\to \infty}\mathrm{Var}\left\{\hat P_{per}(e^{j\omega})\right\}=0$$
Periodogram Bias
- Method 1, using expression (PD.4): $\hat P_{per}(e^{j\omega})=\sum_{k=-N+1}^{N-1}\hat r_x(k)e^{-jk\omega}$.
  From (PD.2) it follows that the expected value of $\hat r_x(k)$ for $k=0,1,\cdots,N-1$ is
  $$E\{\hat r_x(k)\}=\frac{1}{N}\sum_{n=0}^{N-1-k}E\{x(n+k)x^*(n)\}=\frac{1}{N}\sum_{n=0}^{N-1-k}r_x(k)=\frac{N-k}{N}r_x(k)$$
  and, for $k\ge N$, the expected value is zero. Using the conjugate symmetry of $\hat r_x(k)$ we have
  $$E\{\hat r_x(k)\}=w_B(k)r_x(k)\tag{PD.19}$$
  where
  $$w_B(k)=\left\{\begin{aligned}&\frac{N-|k|}{N};&&|k|\le N\\&0;&&|k|>N\end{aligned}\right.$$
  is a Bartlett (triangular) window. Therefore, $\hat r_x(k)$ is a biased estimate of the autocorrelation. Using (PD.4), it follows that the expected value of the periodogram is
  $$E\{\hat P_{per}(e^{j\omega})\}=E\left\{\sum_{k=-N+1}^{N-1}\hat r_x(k)e^{-jk\omega }\right\}=\sum_{k=-N+1}^{N-1}E\left\{\hat r_x(k)\right\}e^{-jk\omega }=\sum_{k=-\infty}^{\infty}r_x(k)w_B(k)e^{-jk\omega}\tag{PD.20}$$
  Since $E\{\hat P_{per}(e^{j\omega})\}$ is the Fourier transform of the product $r_x(k)w_B(k)$, the frequency convolution theorem gives
  $$E\{\hat P_{per}(e^{j\omega})\}=\frac{1}{2\pi}P_x(e^{j\omega})*W_B(e^{j\omega})\tag{PD.21}$$
  where $W_B(e^{j\omega})$ is the Fourier transform of the Bartlett window $w_B(k)$,
  $$W_B(e^{j\omega})=\frac{1}{N}\left[\frac{\sin(N\omega/2)}{\sin(\omega/2)}\right]^2$$
  Thus, the expected value of the periodogram is the convolution of the power spectrum $P_x(e^{j\omega})$ with the Fourier transform of a Bartlett window, and therefore the periodogram is a biased estimate. However, since
  $$\lim_{\omega\to 0}W_B(e^{j\omega})=\lim_{\omega\to 0}\frac{1}{N}\left[\frac{\sin(N\omega/2)}{\sin(\omega/2)}\right]^2=N$$
  and the bandwidth is approximately $\Delta \omega=2 \pi / N$, $\frac{1}{2\pi}W_B(e^{j\omega})$ can be viewed as a rectangular window with unit area. As $N$ goes to infinity, $\frac{1}{2\pi}W_B(e^{j\omega})$ converges to an impulse.

Therefore, from (PD.21), the periodogram is asymptotically unbiased:
$$\lim_{N\to \infty}E\{\hat P_{per}(e^{j\omega})\}=P_x(e^{j\omega})\tag{PD.22}$$
- Method 2, using the equivalent expression (PD.8): $\hat{P}_{per}(e^{j \omega})=\frac{1}{N} X_{N}(e^{j \omega}) X_{N}^{*}(e^{j \omega})=\frac{1}{N}|X_{N}(e^{j \omega})|^{2}$.
  Since $x_N(n)=w_R(n)x(n)$, we have
  $$\begin{aligned} E\left\{\hat{P}_{per}\left(e^{j \omega}\right)\right\} &=\frac{1}{N} E\left\{\left[\sum_{n=-\infty}^{\infty} x(n) w_{R}(n) e^{-j n \omega}\right]\left[\sum_{m=-\infty}^{\infty} x(m) w_{R}(m) e^{-j m \omega}\right]^{*}\right\} \\ &=\frac{1}{N} E\left\{\sum_{m=-\infty}^{\infty} \sum_{n=-\infty}^{\infty} x(n) x^{*}(m) w_{R}(m) w_{R}(n) e^{-j(n-m) \omega}\right\} \\ &=\frac{1}{N} \sum_{m=-\infty}^{\infty} \sum_{n=-\infty}^{\infty} r_{x}(n-m) w_{R}(m) w_{R}(n) e^{-j(n-m) \omega}\\ &=\frac{1}{N} \sum_{k=-\infty}^{\infty} \sum_{n=-\infty}^{\infty} r_{x}(k) w_{R}(n-k) w_{R}(n) e^{-jk \omega}\\ &=\frac{1}{N} \sum_{k=-\infty}^{\infty} r_{x}(k)\left[ \sum_{n=-\infty}^{\infty} w_{R}(n-k) w_{R}(n)\right] e^{-jk \omega}\\ &=\frac{1}{N} \sum_{k=-\infty}^{\infty} r_{x}(k)\,[w_R(k)*w_R(-k)]\, e^{-jk \omega}\\ &=\frac{1}{2\pi N}P_x(e^{j\omega})*|W_R(e^{j\omega})|^2 \end{aligned} \tag{PD.23}$$
  Note that
  $$w_B(k)=\frac{1}{N}w_R(k)*w_R(-k)\tag{PD.24}$$
  so, with (PD.24), (PD.23) coincides with (PD.21). The remaining steps are the same as in Method 1.
Effect of the convolution with $W_{B}(e^{j \omega})$:
- Spectral smoothing (smearing): the response to a signal consisting of a single frequency $\omega_{0}$ is not an impulse, but $W_{B}(e^{j(\omega-\omega_{0})})$, with a bandwidth of approximately $\frac{2 \pi}{N}$.
- Confusion: due to the sidelobes of $W_{B}(e^{j(\omega-\omega_{0})})$ at $\omega_{0} \pm \frac{2 \pi}{N} k$, one might think there are additional frequencies.
This limits the resolution, i.e., the ability to resolve closely spaced sinusoids. The resolution is defined by the half-power width of the main lobe,
$$\Delta \omega=\mathrm{Res}[\hat P_{per}(e^{j\omega})]=0.89\frac{2\pi}{N} \tag{PD.25}$$
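As a small worked illustration of (PD.25) (the numbers are chosen only as an example): for a record of $N=256$ samples,
$$\Delta \omega = 0.89\,\frac{2\pi}{256} \approx 0.022\ \text{rad/sample},$$
so two equal-amplitude sinusoids spaced closer than about $0.89/256 \approx 0.0035$ in normalized frequency cannot be resolved, and doubling $N$ halves this limit.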
Variance of the periodogram
(see derivation in 8.2.2)
$$\mathrm{Var}\{\hat P_{per}(e^{j\omega})\}\approx P_x^2(e^{j\omega}) \tag{PD.26}$$
This is not a function of $N$: the variance does not go to $0$ as $N \to \infty$, so the periodogram is not a consistent estimate of the power spectrum.
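This can be checked numerically. The sketch below (an illustrative experiment, not from the reference) estimates the periodogram of unit-variance white noise at one frequency bin over many independent trials; the sample variance stays near $P_x^2(e^{j\omega})=1$ for both short and long records.

```matlab
% Periodogram variance for unit-variance white noise does not decrease with N.
trials = 2000; Nfft = 1024; k0 = 200;       % arbitrary frequency bin away from DC
for N = [64 1024]
    P = zeros(trials,1);
    for t = 1:trials
        X    = fft(randn(N,1), Nfft);
        P(t) = abs(X(k0))^2 / N;            % periodogram value at bin k0
    end
    fprintf('N = %4d: sample variance at bin %d is %.2f\n', N, k0, var(P));
end
```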
The Modified Periodogram
Recall that from Section [Performance of the periodogram: Periodogram Bias -> Method 2](# Performance of the periodogram) we obtain
$$\hat P_{per}(e^{j\omega})=\frac{1}{N}|X_N(e^{j\omega})|^2=\frac{1}{N}\left|\sum_{n=-\infty}^{\infty}x(n)w_R(n)e^{-j\omega n}\right|^2\tag{PD.8}$$
for which
$$E\left\{\hat{P}_{per}\left(e^{j \omega}\right)\right\}=\frac{1}{2\pi N}P_x(e^{j\omega})*|W_R(e^{j\omega})|^2\tag{PD.23}$$
Instead of a rectangular window, we can use a general window $w(n)$:
$$\hat P_{per}(e^{j\omega})=\frac{1}{NU}\left|\sum_{n=-\infty}^{\infty}x(n)w(n)e^{-j\omega n}\right|^2\tag{MD.1}$$
where $U$ is a suitable normalization constant (the energy of the window),
$$U=\frac{1}{N}\sum_{n=0}^{N-1}|w(n)|^2 \tag{MD.2}$$
This is called the modified periodogram.
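A minimal MATLAB sketch of (MD.1)-(MD.2), assuming a data vector `x` is available; the Hamming window is built inline so that the Signal Processing Toolbox is not required, and the function name is illustrative:

```matlab
function Px = mper(x,Nfft)
% Modified periodogram (MD.1) with a Hamming data window.
x = x(:);  N = length(x);
if nargin < 2, Nfft = 1024; end
w  = 0.54 - 0.46*cos(2*pi*(0:N-1)'/(N-1));   % Hamming window of length N
U  = sum(abs(w).^2)/N;                       % window energy, as in (MD.2)
Px = abs(fft(x.*w,Nfft)).^2/(N*U);
end
```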
Performance of the modified periodogram
Bias
Analogous to (PD.8) and (PD.23), from (MD.1) we have
$$E\left\{\hat{P}_{per}\left(e^{j \omega}\right)\right\}=\frac{1}{2\pi NU}P_x(e^{j\omega})*|W(e^{j\omega})|^2\tag{MD.3}$$
where $W(e^{j\omega})$ is the Fourier transform of $w(n)$.
Since (by Parseval)
$$U=\frac{1}{N}\sum_{n=0}^{N-1}|w(n)|^2=\frac{1}{2\pi N}\int_{-\pi}^{\pi}|W(e^{j\omega})|^2d\omega \tag{MD.4}$$
it follows that
$$\frac{1}{2\pi NU}\int_{-\pi}^\pi|W(e^{j\omega})|^2d\omega=1 \tag{MD.5}$$
so that $|W(e^{j\omega})|^2/(2\pi NU)$ has unit area and, as $N\to \infty$, converges to a unit impulse. Therefore, the modified periodogram is asymptotically unbiased.
Variance
The window does not change the situation
$$\mathrm{Var}\{\hat P_{per}(e^{j\omega})\}\approx P_x^2(e^{j\omega}) \tag{PD.26}$$
Thus, the modified periodogram is not a consistent estimate of the power spectrum.
Effect of the window
The window provides a way to trade off spectral resolution (main lobe width) against confusion (sidelobe amplitude).
Bartlett’s Method: Periodogram Averaging
In this section, we look at Bartlett’s method of periodogram averaging, which, unlike either the periodogram or the modified periodogram, produces a consistent estimate of the power spectrum. This can be done by averaging several periodograms (based on independent realizations of the same random process).
Let $x_{i}(n),\ i=1,2,\cdots,K$, be $K$ uncorrelated realizations of $x(n)$ over the interval $0 \leq n \leq L-1$, so that $N=LK$. Compute the periodogram of each realization:
$$\hat{P}_{per}^{(i)}\left(e^{j \omega}\right)=\frac{1}{L}\left|\sum_{n=0}^{L-1} x_{i}(n) e^{-j n \omega}\right|^{2}, \quad i=1,2, \cdots, K \tag{BM.1}$$
Then compute the average:
$$\hat{P}_{x}\left(e^{j \omega}\right)=\frac{1}{K} \sum_{i=1}^{K} \hat{P}_{per}^{(i)}\left(e^{j \omega}\right) \tag{BM.2}$$
Bias
$$E\{\hat{P}_{x}\left(e^{j \omega}\right)\}=\frac{1}{K} \sum_{i=1}^{K} E\{\hat{P}_{per}^{(i)}\left(e^{j \omega}\right)\}= E\{\hat{P}_{per}^{(i)}\left(e^{j \omega}\right)\}=\frac{1}{2\pi}P_x(e^{j\omega})*W_B(e^{j\omega}) \tag{BM.3}$$
with $W_B(e^{j\omega})$ based on a rectangular window of $L$ samples.
As before, the estimate $\hat P_x(e^{j\omega})$ is asymptotically unbiased (as $L\to \infty$).
Variance
$$\mathrm{Var}\{\hat P_x(e^{j\omega})\}=\mathrm{Var}\left\{ \frac{1}{K}\sum_{i=1}^{K} \hat{P}_{per}^{(i)}\left(e^{j \omega}\right) \right\}=\frac{1}{K}\mathrm{Var}\{\hat{P}_{per}^{(i)}\left(e^{j \omega}\right)\}\approx \frac{1}{K}P_x^2(e^{j\omega}) \tag{BM.4}$$
This goes to $0$ as $K\to \infty$. Thus, $\hat P_x(e^{j\omega})$ is a consistent estimate if both $L$ and $K$ go to infinity.
However, the difficulty with this approach is that uncorrelated realizations of a process are generally not available; typically only a single realization of length $N$ is given. Therefore, Bartlett proposed that $x(n)$ be partitioned into $K$ nonoverlapping sequences of length $L$, where $N = KL$ (as illustrated in Fig. 8.12 of Hayes).
The Bartlett estimate is then computed as in (BM.1) and (BM.2) with
$$\begin{aligned} x_i(n)=x(n+iL)\quad &n=0,1,\cdots,L-1\\ & i=0,1,\cdots,K-1 \end{aligned} \tag{BM.5}$$
Thus, the Bartlett estimate is
$$\hat P_B(e^{j\omega})=\frac{1}{KL}\sum_{i=0}^{K-1}\left|\sum_{n=0}^{L-1}x(n+iL)e^{-jn\omega}\right|^2= \frac{1}{N}\sum_{i=0}^{K-1}\left|\sum_{n=0}^{L-1}x(n+iL)e^{-jn\omega}\right|^2\tag{BM.6}$$
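A minimal MATLAB sketch of (BM.6), assuming the record is at least $KL$ samples long (any leftover samples are discarded); the function name is illustrative:

```matlab
function Px = bartlett_psd(x,L,Nfft)
% Bartlett estimate (BM.6): average the periodograms of K = floor(N/L)
% nonoverlapping length-L segments of x.
x = x(:);  N = length(x);
if nargin < 3, Nfft = 1024; end
K  = floor(N/L);
Px = zeros(Nfft,1);
for i = 0:K-1
    seg = x(i*L+1 : (i+1)*L);
    Px  = Px + abs(fft(seg,Nfft)).^2/L;   % periodogram of the i-th segment
end
Px = Px/K;
end
```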
Performance of Bartlett's Method
Bias
Precisely as before,
$$E\{\hat{P}_{B}\left(e^{j \omega}\right)\}=\frac{1}{K} \sum_{i=0}^{K-1} E\left\{\frac{1}{L}\left|\sum_{n=0}^{L-1}x(n+iL)e^{-jn\omega}\right|^2\right\}=\frac{1}{2\pi}P_x(e^{j\omega})*W_B(e^{j\omega}) \tag{BM.7}$$
with $W_B(e^{j\omega})$ based on a rectangular window of $L$ samples. The estimate $\hat P_B(e^{j\omega})$ is asymptotically unbiased (as $L\to \infty$).
The resolution is now determined by $L$:
$$\Delta\omega=\mathrm{Res}[\hat P_B(e^{j\omega})]=0.89\frac{2\pi}{L}=0.89K\frac{2\pi}{N} \tag{BM.8}$$
which is $K$ times worse than for the periodogram.
Variance
$$\mathrm{Var}\{\hat P_B(e^{j\omega})\}=\frac{1}{K}\mathrm{Var}\{\hat{P}_{per}^{(i)}\left(e^{j \omega}\right)\}\approx \frac{1}{K}P_x^2(e^{j\omega}) \tag{BM.9}$$
This goes to $0$ as $K\to \infty$. Thus, $\hat P_B(e^{j\omega})$ is a consistent estimate if both $L$ and $K$ go to infinity.
Welch’s Method: Averaging Modified Periodograms
In 1967, Welch proposed two modifications to Bartlett’s method.
- The first is to allow the sequences $x_i(n)$ to overlap. Using overlapping segments (each of length $L$, but spaced by $D$),
  $$x_i(n)=x(n+iD),\quad n=0,1,\cdots,L-1 \tag{WM.1}$$
  the total number of data points used is
  $$N=L+D(K-1) \tag{WM.2}$$
  Often, $50\%$ overlap is used: $D=L/2$ and $K=2\frac{N}{L}-1$.
- The second is to allow a data window $w(n)$ to be applied to each sequence:
  $$\hat P_{M}^{(i)}(e^{j\omega})=\frac{1}{LU}\left|\sum_{n=0}^{L-1}x(n+iD)w(n)e^{-j\omega n}\right|^2\tag{WM.3}$$
Thereby we produce a set of modified periodograms that are to be averaged. The resulting Welch’s estimate is given by
$$\hat P_W(e^{j\omega})=\frac{1}{K}\sum_{i=0}^{K-1}\hat P_{M}^{(i)}(e^{j\omega})=\frac{1}{KLU}\sum_{i=0}^{K-1}\left|\sum_{n=0}^{L-1}x(n+iD)w(n)e^{-j\omega n}\right|^2\tag{WM.4}$$
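A minimal MATLAB sketch of (WM.4) with $50\%$ overlap and a Hamming data window; the function name and defaults are illustrative, not from the reference:

```matlab
function Px = welch_psd(x,L,Nfft)
% Welch estimate (WM.4): average modified periodograms of overlapping segments.
x = x(:);  N = length(x);
if nargin < 3, Nfft = 1024; end
D  = floor(L/2);                              % 50% overlap
w  = 0.54 - 0.46*cos(2*pi*(0:L-1)'/(L-1));    % Hamming window of length L
U  = sum(abs(w).^2)/L;                        % window energy, as in (MD.2)
K  = floor((N-L)/D) + 1;                      % number of full segments that fit
Px = zeros(Nfft,1);
for i = 0:K-1
    seg = x(i*D+1 : i*D+L);
    Px  = Px + abs(fft(seg.*w,Nfft)).^2/(L*U);   % modified periodogram (WM.3)
end
Px = Px/K;
end
```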
Performance of Welch's Method
Bias
From (MD.3), we have
$$E\{\hat P_W(e^{j\omega})\}=E\{\hat P_{M}^{(i)}(e^{j\omega})\}=\frac{1}{2\pi LU}P_x(e^{j\omega})*|W(e^{j\omega})|^2 \tag{WM.5}$$
where $W(e^{j\omega})$ is the Fourier transform of the length-$L$ window $w(n)$.
Variance
It has been shown that, with a Bartlett window and a 50% overlap, the variance is approximately
$$\mathrm{Var}\{\hat P_W(e^{j\omega})\}\approx \frac{9}{8K}P_x^2(e^{j\omega})\approx \frac{9}{8}\mathrm{Var}\{\hat P_B(e^{j\omega})\}\tag{WM.6}$$
Comparing (BM.9) and (WM.6), we see that, for a given number of sections $K$, the variance of the estimate using Welch's method is larger than that of Bartlett's method by a factor of $9/8$.
However, for a fixed amount of data $N$ and a given resolution (sequence length $L$), with a $50\%$ overlap twice as many sections may be averaged in Welch's method. Expressing the variance in terms of $N$ and $L$,
$$\mathrm{Var}\{\hat P_W(e^{j\omega})\}\approx \frac{9L}{16N}P_x^2(e^{j\omega})\approx \frac{9}{16}\mathrm{Var}\{\hat P_B(e^{j\omega})\}\tag{WM.7}$$
Minimum Variance (MV) Spectrum Estimation
In the MV method, the power spectrum is estimated by filtering the process with a bank of narrowband bandpass filters. Unlike the filter bank derived in Section [A filter bank interpretation of the periodogram](# A filter bank interpretation of the periodogram), the filters in the MV method are data adaptive, so that each filter may be designed to be “optimum” in the sense of rejecting as much out-of-band signal power as possible.
The motivation for this approach may be seen by looking at the effect of filtering a WSS random process with a narrowband bandpass filter.
Let $x(n)$ be a zero-mean wide-sense stationary random process with power spectrum $P_x(e^{j\omega})$, and let $g_i(n)$ be an ideal bandpass filter with bandwidth $\Delta$ and center frequency $\omega_i$:
$$|G_i(e^{j\omega})|=\left\{ \begin{aligned}& 1; && |\omega-\omega_i|<\Delta /2 \\ &0; && \text{otherwise} \end{aligned}\right.$$
If $x(n)$ is filtered with $g_i(n)$, then the power spectrum of the output process $y_i(n)$ is
$$P_i(e^{j\omega})=P_x(e^{j\omega})|G_i(e^{j\omega})|^2$$
and the power in $y_i(n)$ is
$$E\{|y_i(n)|^2\}=\frac{1}{2\pi}\int_{-\pi}^\pi P_i(e^{j\omega})d\omega=\frac{1}{2\pi}\int_{-\pi}^\pi P_x(e^{j\omega})|G_i(e^{j\omega})|^2d\omega=\frac{1}{2\pi}\int_{\omega_i-\Delta/2}^{\omega_i+\Delta/2} P_x(e^{j\omega})d\omega \tag{MV.1}$$
If $\Delta$ is small enough that $P_x(e^{j\omega})$ is approximately constant over the passband of the filter, then the power in $y_i(n)$ is approximately
$$E\{|y_i(n)|^2\}\approx P_x(e^{j\omega_i}) \frac{\Delta}{2\pi} \tag{MV.2}$$
Therefore, it is possible to estimate the power spectral density of $x(n)$ at frequency $\omega=\omega_i$ from the filtered process by estimating the power in $y_i(n)$ and dividing by the normalized filter bandwidth $\Delta/2\pi$,
$$\hat P_x(e^{j\omega_i})=\frac{E\{|y_i(n)|^2\}}{\Delta/2\pi} \tag{MV.3}$$
As we saw in Section [A filter bank interpretation of the periodogram](# A filter bank interpretation of the periodogram), the periodogram produces an estimate of the power spectrum in a similar fashion. Specifically, $x(n)$ is filtered with a bank of bandpass filters $h_i(n)$, where
$$H_{i}\left(e^{j \omega}\right)=\sum_{n=0}^{N-1} h_{i}(n) e^{-j n \omega}=e^{-j\left(\omega-\omega_{i}\right)(N-1) / 2} \frac{\sin \left[N\left(\omega-\omega_{i}\right) / 2\right]}{N \sin \left[\left(\omega-\omega_{i}\right) / 2\right]}\tag{PD.11}$$
and
$$\hat{E}\left\{\left|y_{i}(n)\right|^{2}\right\}=\left|y_{i}(N-1)\right|^{2}\tag{PD.16}$$
Since each filter in the periodogram filter bank is the same, differing only in its center frequency, these filters are data independent, i.e., they do not use any information in the data. The MV spectrum estimate, however, is data adaptive: it uses the data to design a filter bank that rejects the maximum amount of out-of-band power.
The minimum variance spectrum estimation technique described in this section is based on this idea and involves the following steps:
- Design a bank of bandpass filters $g_i(n)$ with center frequencies $\omega_i$ so that each filter rejects the maximum amount of out-of-band power while passing the component at frequency $\omega_i$ with no distortion.
- Filter $x(n)$ with each filter in the filter bank and estimate the power in each output process $y_i(n)$.
- Set $\hat P_x(e^{j\omega_i})$ equal to the power estimated in step (2) divided by the filter bandwidth (see (MV.3)).
Suppose that we would like to estimate the power spectral density of $x(n)$ at frequency $\omega_i$. Let $g_i(n)$ be a complex-valued FIR bandpass filter of order $p$.
To ensure that the filter does not alter the power in the input process at frequency $\omega_i$, $G_i(e^{j\omega})$ is constrained to have a gain of one at $\omega=\omega_i$ (see (MV.1)):
$$G_{i}\left(e^{j \omega_{i}}\right)=\sum_{n=0}^{p} g_{i}(n) e^{-j n \omega_{i}}=1 \tag{MV.4}$$
Let $\mathbf{g}_{i}$ be the vector of filter coefficients $g_{i}(n)$,
$$\mathbf{g}_{i}=\left[g_{i}(0), g_{i}(1), \ldots, g_{i}(p)\right]^{T}$$
and let $\mathbf{e}_{i}$ be the vector of complex exponentials $e^{j k \omega_{i}}$,
$$\mathbf{e}_{i}=\left[1, e^{j \omega_{i}}, \ldots, e^{j p \omega_{i}}\right]^{T}$$
The constraint on the frequency response given in (MV.4) may then be written in vector form as
$$\mathbf{g}_{i}^H \mathbf{e}_{i}=\mathbf{e}_{i}^H \mathbf{g}_{i}=1 \tag{MV.5}$$
The next step is to minimize the power in the output process subject to the linear constraint (MV.5), i.e.,
$$\min_{\mathbf g_i}E\{|y_i(n)|^2\}\tag{MV.6}$$
The power in $y_i(n)$ may be expressed in terms of the autocorrelation matrix $\mathbf R_x$ as follows:
$$\begin{aligned} E\{|y_i(n)|^2\}&=r_y(0)=r_x(k)*g_i(k)*g_i^*(-k)\big|_{k=0}\\ &=\Big(\sum_{l=-\infty}^\infty \sum_{m=-\infty}^\infty g_i(l)r_x(m-l+k)g_i^*(m)\Big)\Big|_{k=0}\\ &=\sum_{l=-\infty}^\infty \sum_{m=-\infty}^\infty g_i(l)r_x(m-l)g_i^*(m)\\ &=\mathbf g_i^H \mathbf R_x \mathbf g_i \end{aligned}\tag{MV.7}$$
From (MV.5)-(MV.7), we formulate the optimization problem
$$\begin{aligned} &\min_{\mathbf g_i} &&\mathbf g_i^H \mathbf R_x \mathbf g_i\\ &\mathrm{subject~to} &&\mathbf{g}_{i}^H \mathbf{e}_{i}=1 \end{aligned}\tag{MV.8}$$
Introduce a Lagrange multiplier $\lambda$ and minimize the unconstrained objective function
$$L(\mathbf g_i,\lambda)=\mathbf g_i^H \mathbf R_x \mathbf g_i-\lambda(\mathbf g_i^H\mathbf e_i-1)$$
Setting the gradient of $L(\mathbf g_i,\lambda)$ with respect to $\mathbf g_i^H$ and $\lambda$ equal to zero,
$$\left\{\begin{aligned} &\frac{\partial L}{\partial \mathbf g_i^H}=\mathbf R_x \mathbf g_i-\lambda \mathbf e_i=0\\ &\frac{\partial L}{\partial \lambda}=\mathbf g_i^H\mathbf e_i-1=0 \end{aligned} \right.$$
we obtain
$$\mathbf g_i=\lambda \mathbf R_x^{-1}\mathbf e_i$$
Substituting this into $\partial L/\partial \lambda=0$ gives
$$\lambda=\frac{1}{\mathbf e_i^H \mathbf R_x^{-1}\mathbf e_i}$$
Therefore, the solution of problem (MV.8) is
$$\mathbf g_i=\frac{\mathbf R_x^{-1}\mathbf e_i}{\mathbf e_i^H \mathbf R_x^{-1}\mathbf e_i}\tag{MV.9}$$
and the corresponding minimum value of $E\{|y_i(n)|^2\}$ is
$$\min_{\mathbf g_i}E\{|y_i(n)|^2\}=\frac{1}{\mathbf e_i^H \mathbf R_x^{-1}\mathbf e_i} \tag{MV.10}$$
Having designed the bandpass filter bank and estimated the distribution of power in $x(n)$ as a function of frequency, we may now estimate the power spectrum by dividing the power estimate by the bandwidth of the bandpass filter (see (MV.3)).
One criterion for choosing the bandwidth is to use the value of $\Delta$ that produces the correct power spectral density for white noise. Since the minimum variance estimate of the power in white noise is $E\{|y_i(n)|^2\}=\sigma_x^2/(p+1)$, where $\sigma_x^2$ is the variance of the white noise, it follows from (MV.3) that the spectrum estimate is
$$\hat P_x(e^{j\omega})=\frac{E\{|y_i(n)|^2\}}{\Delta/2\pi}=\frac{2\pi\sigma_x^2}{(p+1)\Delta}$$
Therefore, if we set
$$\Delta=\frac{2\pi}{p+1} \tag{MV.11}$$
then $\hat P_x(e^{j\omega})=\sigma_x^2$. Using (MV.11) as the bandwidth of the filter $g_i(n)$, the power spectrum estimate becomes, in general,
$$\hat P_x(e^{j\omega})=\frac{E\{|y_i(n)|^2\}}{\Delta/2\pi}=\frac{p+1}{\mathbf e^H \mathbf R_x^{-1}\mathbf e} \tag{MV.12}$$
which is the minimum variance spectrum estimate. Since, as is normally the case, the autocorrelation matrix is unknown, $\mathbf R_x$ is replaced in practice with an estimated autocorrelation matrix $\hat {\mathbf R}_x$.
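A minimal MATLAB sketch of (MV.12), assuming the autocorrelation lags $r_x(0),\cdots,r_x(p)$ have already been estimated (for example with (PD.2)) and that the resulting Toeplitz matrix is invertible; the function name is illustrative:

```matlab
function Px = mv_psd(r,Nfft)
% Minimum variance spectrum (MV.12) from autocorrelation lags r = [r(0) ... r(p)].
r = r(:);  p = length(r)-1;
if nargin < 2, Nfft = 1024; end
Rx   = toeplitz(r, r');                 % Hermitian Toeplitz autocorrelation matrix
Rinv = inv(Rx);                         % may be ill-conditioned for poor estimates
Px   = zeros(Nfft,1);
for k = 0:Nfft-1
    w       = 2*pi*k/Nfft;
    e       = exp(1j*w*(0:p)');         % steering vector e(omega)
    Px(k+1) = (p+1) / real(e'*Rinv*e);  % real() strips numerical residue
end
end
```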
Parametric Methods
With parametric methods, we incorporate information that may be available about the process into the estimation procedure. We first select an appropriate model for the process and then estimate the model parameters from the given data. The final step is to estimate the power spectrum by inserting the estimated parameters into the parametric form of the spectrum.
Spectrum estimation via ARMA modeling
If we have modeled a random process using an ARMA$(p, q)$ model,
$$X(z)=\frac{B(z)}{A(z)} V(z)=\frac{\sum_{k=0}^{q} b(k) z^{-k}}{1+\sum_{k=1}^{p} a(k) z^{-k}} V(z)\tag{PM.1}$$
then the spectrum is
$$P_{x}\left(e^{j \omega}\right)=\frac{\left|\sum_{k=0}^{q} b(k) e^{-j \omega k}\right|^{2}}{\left|1+\sum_{k=1}^{p} a(k) e^{-j \omega k}\right|^{2}}\tag{PM.2}$$
The quality of this estimate really depends on how well the model fits the process.
All-pole model
Recall the notes in Signal Modeling
- The Autocorrelation Method
For an AR model, $q=0$. The model coefficients are determined by solving the Yule-Walker equations (the Yule-Walker method),
$$\left[\begin{array}{ccccc} r_{x}(0) & r_{x}^{*}(1) & r_{x}^{*}(2) & \cdots & r_{x}^{*}(p) \\ r_{x}(1) & r_{x}(0) & r_{x}^{*}(1) & \cdots & r_{x}^{*}(p-1) \\ r_{x}(2) & r_{x}(1) & r_{x}(0) & \cdots & r_{x}^{*}(p-2) \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ r_{x}(p) & r_{x}(p-1) & r_{x}(p-2) & \cdots & r_{x}(0) \end{array}\right]\left[\begin{array}{c} 1 \\ a(1) \\ a(2) \\ \vdots \\ a(p) \end{array}\right]=\left[\begin{array}{c} \epsilon_{p} \\ 0 \\ 0 \\ \vdots \\ 0 \end{array}\right]$$
where $r_x(k)$ is estimated by
$$\hat{r}_{x}(k)=\frac{1}{N} \sum_{n=0}^{N-1-k} x(n+k) x^{*}(n), \quad k=0,1, \cdots, p$$
and setting $|b(0)|^{2}=\epsilon_{p}$. The equations can be solved efficiently with the Levinson-Durbin recursion. Note that this is a biased estimate of $r_{x}(k)$. (A MATLAB sketch of this procedure appears after this list.)
We can also use an unbiased estimate of $r_x(k)$,
$$\hat{r}_{x}(k)=\frac{1}{N-k} \sum_{n=0}^{N-1-k} x(n+k) x^{*}(n), \quad k=0,1, \cdots, p$$
but then the autocorrelation matrix is not guaranteed to be positive definite and can become ill-conditioned (leading to a large variance of the spectrum estimate).
- The Covariance Method
  $$\left[\begin{array}{cccc} r_{x}(0,0) & r_{x}(0,1) & \cdots & r_{x}(0, p) \\ r_{x}(1,0) & r_{x}(1,1) & \cdots & r_{x}(1, p) \\ \vdots & \vdots & \ddots & \vdots \\ r_{x}(p, 0) & r_{x}(p, 1) & \cdots & r_{x}(p, p) \end{array}\right]\left[\begin{array}{c} 1 \\ a(1) \\ \vdots \\ a(p) \end{array}\right]=\left[\begin{array}{c} \epsilon_{p} \\ 0 \\ \vdots \\ 0 \end{array}\right]$$
  where $r_x(k,l)$ is estimated by
  $$\hat r_x(k,l)=\sum_{n=p}^{N-1}x(n-l)x^*(n-k)$$
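As referenced in the list above, here is a minimal MATLAB sketch of the autocorrelation (Yule-Walker) route to an AR($p$) spectrum estimate. It uses the biased lag estimates and a direct linear solve in place of the Levinson recursion; the function name is illustrative.

```matlab
function Px = ar_yw_psd(x,p,Nfft)
% AR(p) spectrum via the Yule-Walker (autocorrelation) method, (PM.2) with q = 0.
x = x(:);  N = length(x);
if nargin < 3, Nfft = 1024; end
r = zeros(p+1,1);
for k = 0:p
    r(k+1) = (x(1:N-k)' * x(k+1:N)) / N;      % biased lag estimates, as in (PD.2)
end
R    = toeplitz(r(1:p), r(1:p)');             % p x p Hermitian Toeplitz matrix
a    = [1; -(R \ r(2:p+1))];                  % [1, a(1), ..., a(p)]^T
b0sq = real(r(1) + r(2:p+1)' * a(2:p+1));     % epsilon_p = |b(0)|^2
Px   = b0sq ./ abs(fft(a,Nfft)).^2;           % all-pole spectrum on the FFT grid
end
```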
Frequency Estimation
In the previous section, we considered the problem of estimating the power spectrum of a WSS random process that could be modeled as the output of a linear shift-invariant filter driven by white noise. Another model of importance is one in which $x(n)$ is a sum of complex exponentials in white noise,
$$x(n)=\sum_{i=1}^p A_i e^{jn\omega_i}+w(n)$$
It is assumed that the amplitudes $A_i$ are complex,
$$A_i=|A_i|e^{j\phi_i}$$
with the phases $\phi_i$ uncorrelated random variables that are uniformly distributed over the interval $[-\pi,\pi]$. Although the frequencies and magnitudes of the complex exponentials, $\omega_i$ and $|A_i|$, are not random, they are assumed to be unknown.
Eigendecomposition of the Autocorrelation Matrix
- First-order process: a single complex exponential in white noise,
  $$x(n)=A_1e^{jn\omega_1}+w(n) \tag{FE.1}$$
  The autocorrelation sequence of $x(n)$ is
  $$r_x(k)=P_1e^{jk\omega_1}+\sigma_w^2\delta(k)\tag{FE.2}$$
  where $P_1=|A_1|^2$ is the power in the complex exponential. Therefore, the $M\times M$ autocorrelation matrix for $x(n)$ is the sum of an autocorrelation matrix due to the signal, $\mathbf R_s$, and an autocorrelation matrix due to the noise, $\mathbf R_n$,
  $$\mathbf R_x=\mathbf R_s+\mathbf R_n\tag{FE.3}$$
  where the signal autocorrelation matrix
  $$\mathbf{R}_{s}=P_{1}\left[\begin{array}{ccccc} 1 & e^{-j \omega_{1}} & e^{-j 2 \omega_{1}} & \cdots & e^{-j(M-1) \omega_{1}} \\ e^{j \omega_{1}} & 1 & e^{-j \omega_{1}} & \cdots & e^{-j(M-2) \omega_{1}} \\ e^{j 2 \omega_{1}} & e^{j \omega_{1}} & 1 & \cdots & e^{-j(M-3) \omega_{1}} \\ \vdots & \vdots & \vdots & & \vdots \\ e^{j(M-1) \omega_{1}} & e^{j(M-2) \omega_{1}} & e^{j(M-3) \omega_{1}} & \cdots & 1 \end{array}\right] \tag{FE.4}$$
  has a rank of one, and the noise autocorrelation matrix
  $$\mathbf R_n=\sigma_w^2\mathbf I \tag{FE.5}$$
  is diagonal and has full rank. Note that if we define
  $$\mathbf e_1=[1,e^{j\omega_1},\cdots,e^{j(M-1)\omega_1}]^T \tag{FE.6}$$
  then $\mathbf R_s$ may be written in terms of $\mathbf e_1$ as
  $$\mathbf R_s=P_1 \mathbf e_1 \mathbf e_1^H \tag{FE.7}$$
  Consider an eigenvalue decomposition of $\mathbf R_s$:
  $$\mathbf{R}_{s}=\mathbf{V}\mathbf{\Lambda}_{s} \mathbf{V}^{H}=\left[\mathbf{v}_{1}, \mathbf{v}_{2}, \cdots, \mathbf{v}_{M}\right]\operatorname{diag}\left(\lambda_{1}^{s},\ \lambda_{2}^{s},\ \cdots,\ \lambda_{M}^{s}\right)\left[\begin{array}{c} \mathbf{v}_{1}^{H} \\ \mathbf{v}_{2}^{H} \\ \vdots \\ \mathbf{v}_{M}^{H} \end{array}\right]\tag{FE.8}$$
  $\mathbf R_s$ has only one nonzero eigenvalue. With
  $$\mathbf R_s\mathbf v_1=\mathbf R_s(\mathbf e_1/\sqrt{M})=P_1 \mathbf e_1 \mathbf e_1^H \mathbf e_1/\sqrt{M}=MP_1\mathbf v_1 \tag{FE.9}$$
  it follows that the nonzero eigenvalue is equal to $M P_{1}$ and that $\mathbf v_{1}=\mathbf e_1/\sqrt{M}$ is the corresponding eigenvector. In addition, since $\mathbf R_x$ is Hermitian, the remaining eigenvectors $\mathbf v_2,\mathbf v_3,\cdots,\mathbf v_M$ are orthogonal to $\mathbf e_1$,
  $$\mathbf e_1^H \mathbf v_i=0;\quad i=2,3,\cdots,M \tag{FE.10}$$
  Therefore, the eigenvalue decomposition of $\mathbf{R}_{x}$ is
  $$\mathbf{R}_{x}=\mathbf{V}\left(\mathbf{\Lambda}_{s}+\sigma_{w}^{2} \mathbf{I}\right) \mathbf{V}^{H}=\left[\mathbf{v}_{1}, \mathbf{v}_{2}, \cdots, \mathbf{v}_{M}\right]\operatorname{diag}\left(\lambda_{1}^{s}+\sigma_{w}^{2},\ \sigma_{w}^{2},\ \cdots,\ \sigma_{w}^{2}\right)\left[\begin{array}{c} \mathbf{v}_{1}^{H} \\ \mathbf{v}_{2}^{H} \\ \vdots \\ \mathbf{v}_{M}^{H} \end{array}\right]\tag{FE.11}$$
  where $\lambda_1^s=MP_1$. Thus, it is possible to extract all of the parameters of interest about $x(n)$ from the eigenvalues and eigenvectors of $\mathbf R_x$ as follows:
  - Perform an eigendecomposition of the autocorrelation matrix $\mathbf{R}_{x}$. The largest eigenvalue will be equal to $M P_{1}+\sigma_{w}^{2}$ and the remaining eigenvalues will be equal to $\sigma_{w}^{2}$.
  - Use the eigenvalues of $\mathbf{R}_{x}$ to solve for the power $P_{1}$ and the noise variance as follows:
    $$\begin{aligned} \sigma_{w}^{2} &=\lambda_{\min } \\ P_{1} &=\frac{1}{M}\left(\lambda_{\max }-\lambda_{\min }\right) \end{aligned}\tag{FE.12}$$
  - Determine the frequency $\omega_{1}$ from the eigenvector $\mathbf{v}_{\max}$ associated with the largest eigenvalue using, for example, its second coefficient:
    $$\omega_{1}=\arg \left\{v_{\max }(1)\right\}\tag{FE.13}$$
  In practice, instead of estimating the frequency of the complex exponential from a single eigenvector, we may consider using a weighted average as follows. By the orthogonality condition given in (FE.10), the function
  $$\hat P(\omega)=\frac{1}{|\mathbf e(\omega)^H\mathbf v_i|^2},\quad \text{for any }i=2,\cdots,M$$
  has a peak at $\omega=\omega_1$. The noise eigenvectors can be combined into
  $$\hat P(\omega)=\frac{1}{\sum_{i=2}^M|\mathbf e(\omega)^H\mathbf v_i|^2}=\frac{1}{\mathbf e(\omega)^H\mathbf V_n \mathbf V_n^H\mathbf e(\omega)},\quad \mathbf V_n=[\mathbf v_2,\cdots,\mathbf v_M]\tag{FE.14}$$
  which also has a peak at $\omega=\omega_1$. This is called the MUSIC spectrum (a pseudospectrum).
- Second-order process: two complex exponentials in white noise,
  $$x(n)=A_1e^{jn\omega_1}+A_2e^{jn\omega_2}+w(n) \tag{FE.15}$$
  The autocorrelation sequence of $x(n)$ is then
  $$r_x(k)=P_1 e^{jk\omega_1}+P_2 e^{jk\omega_2}+\sigma_w^2\delta(k) \tag{FE.16}$$
  where $P_1=|A_1|^2$ and $P_2=|A_2|^2$. Thus, the autocorrelation matrix may again be written as
  $$\mathbf R_x=P_1 \mathbf e_1 \mathbf e_1^H+P_2\mathbf e_2 \mathbf e_2^H+\sigma_w^2\mathbf I \tag{FE.17}$$
  where
  $$\mathbf R_s=P_1 \mathbf e_1 \mathbf e_1^H+P_2\mathbf e_2 \mathbf e_2^H \tag{FE.18}$$
  is a rank-two matrix representing the component of $\mathbf R_x$ that is due to the signal, and
  $$\mathbf R_n=\sigma_w^2\mathbf I \tag{FE.19}$$
  is a diagonal matrix due to the noise. Let the eigenvalue decomposition of $\mathbf R_s$ be
  $$\mathbf{R}_{s}=\mathbf{V}\mathbf{\Lambda}_{s} \mathbf{V}^{H}=\left[\mathbf{v}_{1}, \mathbf{v}_{2}, \cdots, \mathbf{v}_{M}\right]\operatorname{diag}\left(\lambda_{1}^{s},\ \lambda_{2}^{s},\ 0,\ \cdots,\ 0\right)\left[\begin{array}{c} \mathbf{v}_{1}^{H} \\ \mathbf{v}_{2}^{H} \\ \vdots \\ \mathbf{v}_{M}^{H} \end{array}\right]\tag{FE.20}$$
  Then the eigenvalue decomposition of $\mathbf R_x$ is
  $$\mathbf{R}_{x}=\mathbf{V}\left(\mathbf{\Lambda}_{s}+\sigma_w^2\mathbf I\right) \mathbf{V}^{H}=\left[\mathbf{v}_{1}, \mathbf{v}_{2}, \cdots, \mathbf{v}_{M}\right]\operatorname{diag}\left(\lambda_{1}^{s}+\sigma_w^2,\ \lambda_{2}^{s}+\sigma_w^2,\ \sigma_w^2,\ \cdots,\ \sigma_w^2\right)\left[\begin{array}{c} \mathbf{v}_{1}^{H} \\ \mathbf{v}_{2}^{H} \\ \vdots \\ \mathbf{v}_{M}^{H} \end{array}\right]\tag{FE.21}$$
  The eigenvalues and eigenvectors of $\mathbf R_x$ may be divided into two groups. The first group, consisting of the two eigenvectors whose eigenvalues are greater than $\sigma_w^2$, are referred to as the signal eigenvectors and span a two-dimensional subspace called the signal subspace. The second group, consisting of the eigenvectors whose eigenvalues equal $\sigma_w^2$, are referred to as the noise eigenvectors and span an $(M-2)$-dimensional subspace called the noise subspace. Since $\mathbf R_x$ is Hermitian, the eigenvectors $\mathbf v_i$ form an orthonormal set; therefore, the signal and noise subspaces are orthogonal. That is, for any vector $\mathbf u$ in the signal subspace and any vector $\mathbf v$ in the noise subspace, $\mathbf u^H\mathbf v=0$.
  Unlike the case of a single complex exponential, with a sum of two complex exponentials in noise the signal eigenvectors will generally not be equal to $\mathbf e_1$ and $\mathbf e_2$. Nevertheless, $\mathbf e_1$ and $\mathbf e_2$ lie in the signal subspace spanned by the signal eigenvectors $\mathbf v_1$ and $\mathbf v_2$, and since the signal subspace is orthogonal to the noise subspace, $\mathbf e_1$ and $\mathbf e_2$ are orthogonal to the noise eigenvectors $\mathbf v_i$, i.e.,
  $$\begin{aligned} &\mathbf e_1^H \mathbf v_i=0;&& i=3,4,\cdots,M\\ &\mathbf e_2^H \mathbf v_i=0;&& i=3,4,\cdots,M \end{aligned}\tag{FE.22}$$
  Therefore, as in the case of one complex exponential, the frequencies $\omega_1$ and $\omega_2$ may be estimated using a frequency estimation function of the form
  $$\hat P(\omega)=\frac{1}{\sum_{i=3}^M|\mathbf e(\omega)^H\mathbf v_i|^2}=\frac{1}{\mathbf e(\omega)^H\mathbf V_n \mathbf V_n^H\mathbf e(\omega)},\quad \mathbf V_n=[\mathbf v_3,\cdots,\mathbf v_M]\tag{FE.23}$$
  which has peaks at $\omega_1$ and $\omega_2$.
- $p$-th-order process: $p$ distinct complex exponentials in white noise.
  The autocorrelation sequence is
  $$r_x(k)=\sum_{i=1}^{p}P_i e^{jk\omega_i}+\sigma_w^2\delta(k) \tag{FE.24}$$
  where $P_i=|A_i|^2$ is the power in the $i$-th component. Therefore, the $M\times M$ autocorrelation matrix may be written as
  $$\mathbf R_x=\mathbf R_s+\mathbf R_n=\sum_{i=1}^p P_i \mathbf e_i \mathbf e_i^H+\sigma_w^2 \mathbf I \tag{FE.25}$$
  The eigenvalue decomposition of $\mathbf R_x$ is
  $$\mathbf{R}_{x}=\mathbf{V}\left(\mathbf{\Lambda}_{s}+\sigma_w^2\mathbf I\right) \mathbf{V}^{H}=\left[\mathbf{v}_{1}, \mathbf{v}_{2}, \cdots, \mathbf{v}_{M}\right]\operatorname{diag}\left(\lambda_{1}^{s}+\sigma_w^2,\ \cdots,\ \lambda_{p}^{s}+\sigma_w^2,\ \sigma_w^2,\ \cdots,\ \sigma_w^2\right)\left[\begin{array}{c} \mathbf{v}_{1}^{H} \\ \mathbf{v}_{2}^{H} \\ \vdots \\ \mathbf{v}_{M}^{H} \end{array}\right]\tag{FE.26}$$
  Denote the signal eigenvectors $\mathbf V_s$ and the noise eigenvectors $\mathbf V_n$ by
  $$\begin{aligned} & \mathbf V_s=[\mathbf v_1,\mathbf v_2,\cdots,\mathbf v_p]\\ & \mathbf V_n=[\mathbf v_{p+1},\mathbf v_{p+2},\cdots,\mathbf v_M] \end{aligned}\tag{FE.27}$$
  Since each signal vector $\mathbf e_1,\cdots ,\mathbf e_p$ lies in the signal subspace, the orthogonality of the two subspaces implies that each $\mathbf e_i$ is orthogonal to every noise eigenvector,
  $$\mathbf{e}_{i}^{H} \mathbf{v}_{k}=0 ; \quad \begin{array}{l} i=1,2, \ldots, p \\ k=p+1, p+2, \ldots, M \end{array}\tag{FE.28}$$
  Therefore, the frequencies may be estimated using a frequency estimation function such as
  $$\hat P(\omega)=\frac{1}{\sum_{i=p+1}^M|\mathbf e(\omega)^H\mathbf v_i|^2}=\frac{1}{\mathbf e(\omega)^H\mathbf V_n \mathbf V_n^H\mathbf e(\omega)},\quad \mathbf V_n=[\mathbf v_{p+1},\cdots,\mathbf v_M]\tag{FE.29}$$
  (A sketch of this estimator appears after this list.)
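A minimal MATLAB sketch of the frequency estimation function (FE.29), assuming the model order $p$ is known and the $M\times M$ autocorrelation matrix is estimated from overlapping length-$M$ snapshots of a single record; the function name is illustrative:

```matlab
function Pmu = music_psd(x,p,M,Nfft)
% MUSIC pseudospectrum (FE.29) for p complex exponentials in white noise.
x = x(:);  N = length(x);
if nargin < 4, Nfft = 1024; end
Rx = zeros(M,M);
for n = 1:N-M+1                                % sample autocorrelation matrix
    v  = x(n:n+M-1);
    Rx = Rx + v*v';
end
Rx = Rx/(N-M+1);
[V,D]  = eig(Rx);
[~,ix] = sort(real(diag(D)),'descend');
Vn     = V(:, ix(p+1:M));                      % noise-subspace eigenvectors
Pmu    = zeros(Nfft,1);
for k = 0:Nfft-1
    e        = exp(1j*2*pi*k/Nfft*(0:M-1)');   % steering vector e(omega)
    Pmu(k+1) = 1/real((e'*Vn)*(Vn'*e));        % peaks near the true frequencies
end
end
```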