Re-understanding of the Fourier Expansion and Signal Expansion

Why do we apply the Fourier transform to signals? The first answer that came to my mind is that the Fourier transform has many nice properties: it turns complicated convolution in the time domain into a simple product in the frequency domain, it simplifies filter design in the frequency domain, it is linear, it has symmetry properties, and so on. But those nice properties don’t really explain why we need this particular transform rather than some other orthogonal expansion. While looking for the answer to this question, my understanding of the Fourier expansion was refreshed and also extended to some applications of signal expansion in other fields.

Transforming a signal into another domain gives a new representation of the signal, and the best transform for a signal depends on the problem you want to solve: it should greatly simplify the analysis and the solution of that problem. The same rule applies to the Fourier transform. The answer to the question above is simply this: the Fourier expansion is the best transform for LTI systems, because the complex exponential function is the eigenfunction of an LTI system. If we feed a complex exponential signal into an LTI system, the output is still a complex exponential signal; the only change is the (complex) amplitude:

\[\begin{equation} \label{eq1} HX=kX \end{equation}\]

\(X\) is the eigenfunction \(e^{j\omega t}\), \(k\) is the frequency response \(H(j\omega)\) of the system, and \(H\) is the system function (also called the transfer function).

So if we decompose the input into a series of eigenfunctions, the analysis of an LTI system is greatly simplified: we only need to find the constant coefficient \(k\), which is the frequency response \(H(j\omega)\); then we can obtain the output of the LTI system even when the system function is complicated. A very important subset of LTI systems consists of systems described by linear constant-coefficient differential equations. The Fourier transform converts differential or difference equations into polynomial equations: in the frequency domain, differentiation becomes a simple multiplication, which greatly reduces the complexity of the problem, so the transform is very useful for analyzing this kind of system.
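As a small numerical check of the eigenfunction property, here is a numpy sketch for a discrete-time LTI system (the filter coefficients and test frequency are arbitrary toy values, not from the text): a complex exponential input comes out as the same exponential, scaled by \(H(e^{j\omega})\).

```python
import numpy as np

# A discrete-time LTI system given by an FIR impulse response h
# (illustrative values; any FIR filter works).
h = np.array([0.5, 0.3, 0.2])

omega = 0.7                      # test frequency (rad/sample)
n = np.arange(50)
x = np.exp(1j * omega * n)       # complex exponential input

# Output via convolution; keep the steady-state part, where the
# filter has seen the whole input history.
y = np.convolve(x, h)[:len(n)]
steady = slice(len(h) - 1, len(n))

# Eigenvalue k = H(e^{j omega}) = sum_m h[m] e^{-j omega m}
H = np.sum(h * np.exp(-1j * omega * np.arange(len(h))))

# y[n] = H * x[n] in steady state: the exponential's shape survives,
# only its complex amplitude changes.
assert np.allclose(y[steady], H * x[steady])
```

The same check fails for a generic input (e.g. a square wave), which is exactly why the complex exponentials are the natural basis here.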

In parallel to the LTI system, there is the LSI system (linear shift-invariant system) in image processing. An LSI system has the shift-invariance property in the spatial domain: a shift of the input leads to a corresponding shift of the output. The LSI system has the same eigenfunction as the LTI system, the complex exponential, so we can use the Fourier transform in the same way to analyze LSI system responses or LSI filters in the frequency domain.
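A minimal numpy sketch of the 2D analogue (the 16×16 “image” and the 3×3 averaging kernel are toy assumptions): pointwise multiplication of the 2D spectra gives the same result as explicit circular convolution in the spatial domain.

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.standard_normal((16, 16))   # stand-in "image"

# A 3x3 averaging kernel, zero-padded to the image size.
kern = np.zeros((16, 16))
kern[:3, :3] = 1.0 / 9.0

# Frequency-domain filtering: multiply the 2D spectra pointwise.
out_freq = np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kern)).real

# Spatial-domain (circular) convolution, done explicitly.
out_spat = np.zeros_like(img)
for i in range(3):
    for j in range(3):
        out_spat += np.roll(img, (i, j), axis=(0, 1)) / 9.0

assert np.allclose(out_freq, out_spat)
```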

So far we have only talked about the Fourier transform in a deterministic setting, i.e. for deterministic signals. The next question that came to my mind is: is there a similar transform or expansion for random signals, i.e. random processes? Is the answer the power spectral density (PSD), which is the Fourier transform of the autocorrelation function? It turns out there is something more general than the PSD.
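The PSD–autocorrelation pair can at least be checked numerically in its deterministic, finite-length form: for any sequence, the periodogram equals the DFT of the circular autocorrelation. A short numpy sketch (the sequence length and random data are my choices):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(256)          # one sample path
N = len(x)

# Circular (biased) autocorrelation estimate R[k] = (1/N) sum_n x[n] x[n+k].
R = np.array([np.sum(x * np.roll(x, -k)) / N for k in range(N)])

# Wiener-Khinchin, finite form: the periodogram is the DFT of the
# autocorrelation.
periodogram = np.abs(np.fft.fft(x)) ** 2 / N
assert np.allclose(np.fft.fft(R).real, periodogram)
```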

To be clear, a sample path of a random signal extends infinitely in time and is generally not absolutely integrable, so it cannot be Fourier transformed directly. The PSD, defined as the Fourier transform of the autocorrelation function, is also known as the spectral decomposition theorem for stationary processes or stationary sequences; by “stationary” here we mean wide-sense (generalized) stationary. The second-order statistical characteristics of a stationary process are completely determined by its correlation function, so the spectral function (or spectral density) of a stationary process completely describes those characteristics. For general random processes this conclusion does not hold. There, the Karhunen-Loève expansion provides a complete characterization: it decomposes the sample function of a random process over a set of orthogonal basis functions, and the resulting coefficients are uncorrelated random variables.

\[\begin{equation} \label{eq2} x(t)=\lim_{N\rightarrow+\infty}\sum_{i=1}^N x_i\phi_i(t), \quad 0 \leq t \leq T \end{equation}\]

This expansion is guaranteed by an integral equation:

\[\begin{equation} \label{eq3} \lambda_j\phi_j(t)=\int_0^TK_x(t,u)\phi_j(u)du \end{equation}\]

\(K_x(t,u)\) is the covariance function, \(\lambda_j\) is called an eigenvalue, and the basis function \(\phi_j(t)\) is called an eigenfunction.
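To make the integral equation concrete, here is a numpy sketch (the Brownian-motion example and the discretization size are my choices): discretizing the integral turns it into a matrix eigenproblem, and for the covariance \(K(t,u)=\min(t,u)\) on \([0,1]\) the eigenvalues are known in closed form, \(\lambda_j = 1/((j-\tfrac{1}{2})\pi)^2\), so we can check the numerics against them.

```python
import numpy as np

# Discretize the K-L integral equation for Brownian motion on [0, 1],
# whose covariance is K(t, u) = min(t, u).  The integral
#   lambda_j phi_j(t) = int_0^1 K(t, u) phi_j(u) du
# becomes an eigenproblem for the matrix K(t_i, t_j) * dt.
M = 500
t = (np.arange(M) + 0.5) / M
dt = 1.0 / M
K = np.minimum.outer(t, t)

lam, phi = np.linalg.eigh(K * dt)
lam = lam[::-1]                      # largest eigenvalues first

# Analytic eigenvalues for this kernel: 1 / ((j - 1/2) pi)^2.
analytic = 1.0 / (((np.arange(1, 6) - 0.5) * np.pi) ** 2)
assert np.allclose(lam[:5], analytic, rtol=1e-2)
```

The columns of `phi` (reversed to match) approximate the eigenfunctions, which for this kernel are sinusoids \(\sin((j-\tfrac{1}{2})\pi t)\).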

A special case of the Karhunen-Loève expansion arises when the stochastic process is stationary: the basis functions turn out to be the complex exponentials. So \(e^{j\omega t}\) is the eigenfunction of a stationary process, and the eigenvalue \(\lambda\) is the power spectral density at frequency \(\omega\).
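The discrete, circular version of this fact is easy to verify with numpy (the toy covariance sequence below is my own choice): a circularly stationary process has a circulant covariance matrix, the DFT vectors are its eigenvectors, and the eigenvalues are the PSD samples.

```python
import numpy as np

# For a circularly stationary process the covariance matrix is
# circulant: C[m, n] = r[(m - n) mod N].  Circulant matrices are
# diagonalized by the DFT, so the K-L basis is the set of complex
# exponentials and the eigenvalues are the PSD samples.
N = 64
r = 0.9 ** np.minimum(np.arange(N), N - np.arange(N))  # toy covariance sequence
C = np.array([[r[(m - n) % N] for n in range(N)] for m in range(N)])

psd = np.fft.fft(r).real             # PSD = DFT of the covariance sequence

# Check: each Fourier vector e^{j 2 pi k n / N} is an eigenvector of C
# with eigenvalue psd[k].
for k in range(N):
    v = np.exp(2j * np.pi * k * np.arange(N) / N)
    assert np.allclose(C @ v, psd[k] * v)
```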

The Karhunen-Loève expansion not only provides a theoretically complete characterization of stochastic processes, it also has important applications in signal detection and estimation. With the Karhunen-Loève expansion, we can transform a continuous-time signal detection problem into one over a set of uncorrelated random variables and build the detection statistic from them. In particular, for a Gaussian random process we can prove that the expansion coefficients are independent Gaussian random variables. In simple binary detection, if we choose \(\phi_1(t)=s(t)\) as the first basis function of the expansion, we get the first coefficient \(r_1=\int_0^Ts(t)r(t)dt\). This is a sufficient statistic for the detection; the other basis functions do not affect the decision at all. The resulting detector is also called the correlation receiver. In general binary detection, we choose \(\phi_1(t)=s_1(t)\) as the first basis function and \(\phi_2(t)=[s_0(t)-\rho s_1(t)]/ \sqrt{1-\rho^2}\) as the second, obtain \(r_1\) and \(r_2\), and construct the sufficient statistic from their difference. This generalizes to M-ary detection in noise.
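A toy Monte-Carlo sketch of the simple binary case (the signal shape, noise level, and threshold are illustrative assumptions, not from the text): the correlation receiver computes the single coefficient \(r_1\) and compares it to a threshold.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simple binary detection of a known signal in white Gaussian noise,
# using the single K-L coefficient r1 = int_0^T s(t) r(t) dt as the
# statistic (the correlation receiver).
T, M = 1.0, 1000
t = np.linspace(0, T, M, endpoint=False)
dt = T / M
s = np.sin(2 * np.pi * 5 * t)          # known signal
E = np.sum(s * s) * dt                 # signal energy

sigma = 0.5                            # noise level (illustrative)

def r1(received):
    return np.sum(s * received) * dt   # correlate with the known signal

# H1: signal present; H0: noise only.  Threshold at E/2 (equal priors).
trials = 2000
correct = 0
for _ in range(trials):
    noise = sigma * rng.standard_normal(M) / np.sqrt(dt)  # white noise
    correct += int(r1(s + noise) > E / 2)    # H1 decided correctly
    correct += int(r1(noise) <= E / 2)       # H0 decided correctly
accuracy = correct / (2 * trials)
assert accuracy > 0.6
```

With this SNR the receiver is clearly better than chance but not perfect, which matches the Gaussian error probability \(Q(\sqrt{E}/2\sigma)\) for this threshold.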

Note that the correlation receiver for signal detection can also be derived directly from the maximum likelihood principle when the signal observations are uncorrelated with each other, but the Karhunen-Loève expansion applies in general, even when the observations are correlated.

In signal estimation, the Karhunen-Loève expansion gives the best linear time-varying filter (estimator) for a sample function of a random process under the MSE metric; the filter is determined by the eigenfunctions and eigenvalues of the covariance function of the process. In practice, we usually keep the K terms with the largest eigenvalues as a good approximation. This is also known as principal component analysis (PCA) in data dimensionality reduction.
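The truncated expansion is exactly PCA, and the MSE it incurs is the sum of the discarded eigenvalues. A numpy sketch (the data dimensions and the toy covariance construction are my assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

# Truncated K-L expansion as PCA: keep the K eigenvectors of the
# sample covariance with the largest eigenvalues.  The mean squared
# reconstruction error equals the sum of the discarded eigenvalues.
n_samples, dim, K = 5000, 10, 3

# Toy data with a few dominant directions.
scales = np.array([5, 3, 2, .5, .4, .3, .2, .2, .1, .1])
A = rng.standard_normal((dim, dim)) * scales
X = rng.standard_normal((n_samples, dim)) @ A.T
X -= X.mean(axis=0)

C = X.T @ X / n_samples
lam, phi = np.linalg.eigh(C)
lam, phi = lam[::-1], phi[:, ::-1]     # sort descending

coeffs = X @ phi[:, :K]                # K-L coefficients (principal components)
X_hat = coeffs @ phi[:, :K].T          # rank-K reconstruction

mse = np.mean(np.sum((X - X_hat) ** 2, axis=1))
assert np.isclose(mse, np.sum(lam[K:]))
```

Keeping only the top-K terms is exactly the “first K largest eigenvalues” approximation described above; the assertion confirms that nothing beyond the discarded eigenvalues is lost.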