In the series "Modeling digital signal processing in MATLAB", the previous articles were devoted to modeling digital FIR and IIR filters, including fixed-point implementations, using MATLAB software tools.


A.B. Sergienko. Signal Processing Toolbox Overview

Signal processing has always been one of the most important application areas of the MATLAB system. This is evidenced above all by the fact that the Signal Processing Toolbox was one of the first specialized packages: it appeared as early as 1988, just four years after the creation of MATLAB itself.

To date, the Signal Processing package contains almost two hundred carefully designed specialized functions that allow you to solve a wide variety of signal analysis and signal processing problems.

The currently distributed version of MATLAB 6.1 (Release 12.1) contains the Signal Processing package version 5.1. The upcoming release of MATLAB 6.5 (Release 13) will include the Signal Processing package version 6.0.

According to the official documentation, the package's functions are divided into twenty categories. In the list below, these categories are grouped into larger groups. Thus, by purpose, the functions of the Signal Processing package can be divided as follows:

In addition, the package includes three graphical environments:

MATLAB and its extension packages are primarily focused on digital signal processing, so the functions related to the calculation of analog circuits are regarded as auxiliary. They are mainly intended to be called from other functions that use analog prototypes when synthesizing digital filters. Nevertheless, these functions can be quite useful on their own. In turn, they can be divided into several groups.

The first group consists of functions for calculating analog prototype filters, that is, low-pass filters with a cutoff frequency of 1 rad/s. The functions return the zeros, poles, and gain of the filter and allow calculation of prototype Butterworth (buttap), Chebyshev of the first kind (cheb1ap), Chebyshev of the second kind (cheb2ap), elliptic (Cauer) (ellipap), and Bessel (besselap) filters.

The second group consists of analog filter conversion functions that allow you to convert a prototype LPF into an LPF with a different cutoff frequency (lp2lp), into a high-pass filter (HPF) with a given cutoff frequency (lp2hp), into a bandpass filter with a given center frequency and bandwidth (lp2bp), and into a band-stop (notch) filter with a given center frequency and stopband width (lp2bs). The functions can accept and return filter descriptions as vectors of coefficients of the transfer function's numerator and denominator polynomials, or as state-space parameters.
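As a brief illustration of how the first two groups can be combined (a minimal sketch of typical usage with arbitrarily chosen frequencies, not a fragment of the original article):

[z, p, k] = buttap(4); % 4th-order Butterworth LPF prototype with cutoff 1 rad/s
[b, a] = zp2tf(z, p, k); % convert zeros, poles and gain to transfer-function form
w0 = 2*pi*1000; % desired center frequency, rad/s
bw = 2*pi*200; % desired bandwidth, rad/s
[bb, ab] = lp2bp(b, a, w0, bw); % LPF prototype -> bandpass filter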

The third group consists of functions for calculating analog filters with specified parameters; in the course of the calculation they call the functions of the first two groups. The set of initial data required for the calculation includes the filter order, its type (LPF, HPF, bandpass or notch), one or several cutoff frequencies, and also (depending on the prototype) the ripple parameters of the frequency response. There are functions for calculating Butterworth filters (butter), Chebyshev filters of the first kind (cheby1), Chebyshev filters of the second kind (cheby2), elliptic (Cauer) filters (ellip) and Bessel filters (besself). All these functions, except for besself, can also be used to calculate discrete filters (see below). The analog version of the calculation is indicated by the string 's' as the last input parameter.

The fourth group consists of functions for determining the required filter order from the specified frequency response parameters (cutoff frequencies of the pass and stop bands, as well as allowable ripples in these bands). Each type of filter has its own function for determining the required order: buttord for the Butterworth filter, cheb1ord for the Chebyshev filter of the first kind, cheb2ord for the Chebyshev filter of the second kind, and ellipord for the elliptic filter. Just like the functions of the previous group, these functions can also determine the required order for discrete filters (see below). The analog version of the calculation is indicated by the string 's' as the last input parameter.

The fifth group consists of functions for converting between forms of description of analog linear systems. For analog systems, four such description forms are supported:

  • coefficients of the numerator and denominator polynomials of the transfer function;
  • zeros and poles (together with the gain);
  • poles and residues (partial-fraction expansion);
  • state-space parameters.

The Signal Processing package has functions that implement mutual conversions between these four forms of representation of analog systems (only the function residue, which works with poles and residues, belongs not to the Signal Processing package but to the MATLAB core). These functions are listed in the following table.

Original form \ Final form | Transfer function polynomial coefficients | Zeros and poles | Poles and residues | State space
Transfer function polynomial coefficients | - | tf2zp | residue | tf2ss
Zeros and poles | zp2tf | - | | zp2ss
Poles and residues | residue | | - |
State space | ss2tf | ss2zp | | -

The function residue can convert in both directions; the direction of the conversion is determined by the number of input and output parameters.

The functions listed in the table, except for the function residue, can also perform transformations of discrete systems, since the transformation formulas for analog and discrete systems are the same.

Finally, the sixth group contains a single function, freqs, which allows you to calculate or display graphically the amplitude-frequency and phase-frequency characteristics (AFC and PFC) of an analog linear system. The initial data are the coefficients of the numerator and denominator polynomials of the system transfer function.

As an example of how the analog functions can be applied, let us calculate a 4th-order elliptic low-pass filter with a cutoff frequency of 3 kHz, a passband ripple of 1 dB and a stopband rejection of 20 dB, and then plot its frequency response and phase response.

[b, a] = ellip(4, 1, 20, 2*pi*3000, 's'); % filter calculation
f = 0:10:10000; % frequency vector for calculating frequency response and phase response
h = freqs(b, a, 2*pi*f); % complex gain
subplot(2, 1, 1)
plot(f, abs(h)) % frequency response plot
grid
subplot(2, 1, 2)
plot(f, unwrap(angle(h))*180/pi) % phase response plot (in degrees)
grid

The function unwrap used in the example code eliminates insignificant 360° jumps in the phase response.

This category of functions is quite numerous and combines various means of analyzing discrete linear systems, which are, as a rule, represented by the vectors of coefficients of the numerator and denominator polynomials of the transfer function (in the z-domain).

The function freqz is the discrete analogue of freqs: it allows you to calculate the complex transfer function or plot the frequency response and phase response of a discrete system. The initial data are the coefficients of the numerator and denominator polynomials of the system transfer function.

The function grpdelay works similarly to freqz: it allows you to calculate or display graphically the frequency dependence of the group delay introduced by a discrete filter.

The function impz is designed to calculate or display graphically the impulse response of a discrete system. The initial data, as for the previous functions, are the coefficients of the numerator and denominator polynomials of the system transfer function.

The function zplane displays the zeros and poles of a system on the complex plane, additionally drawing the unit circle that bounds the allowable region for the poles of a stable discrete system. The initial data can be either the vectors of the numerator and denominator polynomial coefficients, or directly the vectors of zeros and poles of the system transfer function.

The function filternorm calculates the norm of a discrete filter. This parameter is used, for example, when choosing the scaling factors for individual sections of a filter implemented in a sequential (cascade) form in order to reduce rounding errors. Two options are supported: the 2-norm and the ∞-norm. The 2-norm is the rms value of the filter frequency response (or, equivalently, the square root of the sum of the squares of the samples of the filter impulse response), and the ∞-norm is the maximum value of the frequency response.

The function fvtool is essentially a graphical environment designed to analyze and visualize the characteristics of discrete systems (Filter Visualization Tool). However, unlike the other graphical environments in the package, fvtool really is a function, since it requires input parameters when called - the coefficients of the numerator and denominator polynomials of the transfer function of the analyzed filter. An essential advantage of this function is the ability to view the characteristics of several filters simultaneously. The GUI provided by this function is practically the same as in the FDATool filter analysis and synthesis environment. Below is an example of calling fvtool and the graphic window it creates in the mode showing the frequency dependence of the group delay introduced by the filter.

b = [1 0.5]; % numerator coefficients (illustrative values; the coefficients of the original example were not preserved)
a = [1 -0.8]; % denominator coefficients (illustrative values)
fvtool(b, a)

A large group consists of functions for converting between forms of description of discrete systems. More representation forms are supported for discrete systems than for analog ones:

  • Coefficients of the numerator and denominator polynomials of the system transfer function.
  • Zeros, poles and system gain (factorization of the transfer function).
  • Poles and residues (representation of the system transfer function as a sum of partial fractions).
  • State-space parameters.
  • Representation as a set of cascade-connected second-order sections.
  • Representation in the form of a lattice or lattice-ladder structure.

Functions that perform conversions between different representation forms are listed in the following table.

Original form \ Final form | Transfer function polynomial coefficients | Zeros and poles | Poles and residues | State space | Second-order sections | Lattice structure
Transfer function polynomial coefficients | - | tf2zp | residuez | tf2ss | tf2sos | tf2latc
Zeros and poles | zp2tf | - | | zp2ss | zp2sos |
Poles and residues | residuez | | - | | |
State space | ss2tf | ss2zp | | - | ss2sos |
Second-order sections | sos2tf | sos2zp | | sos2ss | - |
Lattice structure | latc2tf | | | | | -

As can be seen from the table, the function residuez (the analogue of residue intended for working with descriptions of discrete systems) can convert in both directions; the direction of the conversion is determined by the number of input and output parameters.

Two more functions manipulate polynomial coefficient vectors by modifying the positions of the polynomial roots in the complex plane. The function polyscale multiplies all the roots of the processed polynomial by a given coefficient, and the function polystab makes all the roots of the polynomial lie inside the unit circle: the roots whose absolute value exceeds unity are inverted, that is, their magnitudes are replaced by the reciprocal values while their phases are preserved.
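A minimal sketch of such root manipulation (the polynomial below is chosen arbitrarily for illustration):

p = [1 -2.5 1]; % polynomial with roots 2 and 0.5 (one root outside the unit circle)
roots(p) % returns 2 and 0.5
ps = polystab(p); % reflect the unstable root inside the unit circle
roots(ps) % returns 0.5 and 0.5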

The remaining functions in this category are auxiliary. The function freqspace calculates a vector of evenly spaced frequency values, the function freqzplot is intended for plotting frequency response graphs, and the function unwrap eliminates insignificant 2π jumps in phase-frequency characteristics by looking for such jumps between the elements of a vector and shifting the necessary fragments of the vector by 2πn.

The operation of linear discrete filtering is generally described as follows:

y(k) = b0·x(k) + b1·x(k−1) + … + bm·x(k−m) − a1·y(k−1) − … − an·y(k−n)

Here x(k) are the samples of the input signal, y(k) are the samples of the output signal, and ai and bj are constant coefficients. The larger of the numbers m and n is called the filter order.

The previous output samples may not be used in the calculations; in that case all ai = 0 and the filter is called non-recursive or transversal. If previous output samples are used, the filter is called recursive.

Linear discrete filtering is a general-purpose data processing operation, so the corresponding function, filter, does not belong to the Signal Processing package but is built into the MATLAB core. The function filter2, also belonging to the MATLAB core, implements two-dimensional discrete filtering.

Since a constant-coefficient filter is a linear stationary discrete system, its response to an arbitrary input signal can be represented as the discrete convolution of the input signal with the filter impulse response:

y(k) = Σ h(i)·x(k−i), where the sum is taken over i ≥ 0.

Here h(k) are the samples of the filter impulse response. The impulse response is the filter's response to a single unit sample applied to its input.

Of course, calculations using the convolution formula can be implemented in practice only for an impulse response of finite length. This operation is carried out by the function conv, which, like the implementation of the discrete filtering algorithm itself, belongs not to the Signal Processing package but to the MATLAB core. In fact, the implementation of conv reduces to calling the function filter with appropriate input parameters. The function conv2 implements two-dimensional discrete convolution. Another MATLAB core function, deconv, implements deconvolution: given the result of a convolution and one of the input vectors, it determines the other input vector.
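A small sketch of the relationship between conv and deconv (the vectors are arbitrary illustrative data):

x = [1 2 3]; % first sequence
h = [1 -1]; % second sequence
y = conv(x, h); % convolution result: [1 1 1 -3]
[q, r] = deconv(y, h); % recovers q = [1 2 3] with zero remainder r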

The basic discrete filtering functions mentioned above belong to the MATLAB core; the Signal Processing package itself contains functions that solve more specific filtering tasks.

First of all, it should be noted that the function filter provides access to the initial and final internal states of the filter, which makes it possible to organize block-by-block signal processing. Sometimes it is necessary to form the internal state vector of the filter from a certain number of previous input and output samples; this calculation is performed by the function filtic.
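A minimal sketch of such block-by-block processing (the filter coefficients and test signal are chosen arbitrarily for illustration):

b = [0.5 0.5]; % illustrative numerator coefficients
a = [1 -0.9]; % illustrative denominator coefficients
x = randn(1, 200); % test signal
[y1, zf] = filter(b, a, x(1:100)); % first block; zf is the final state
y2 = filter(b, a, x(101:200), zf); % second block continues from that state
y_block = [y1 y2];
y_whole = filter(b, a, x); % reference: processing the whole signal at once
max(abs(y_block - y_whole)) % the difference is at the level of rounding errors
% filtic builds the same kind of state vector from past output and input samples:
% zi = filtic(b, a, y_past, x_past)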

The function fftfilt implements discrete filtering using the fast Fourier transform (FFT) combined with block-by-block processing of the signal. Only non-recursive filters can be implemented in this way. The result coincides (up to computational errors) with the result of conventional filtering implemented with the function filter. However, FFT-based filtering can be significantly faster, especially if the length of the input signal is many times the length of the filter impulse response (or vice versa).

The function filtfilt compensates for the phase shift introduced by conventional filtering (in other words, it implements filtering without introducing a time delay). This is achieved by bidirectional signal processing: the first filtering pass is carried out in the usual way, and then the resulting output signal is filtered a second time, from the end to the beginning. As a result the phase shifts are compensated, while the resulting filter order is doubled. It should be noted that the resulting filter (equivalent to the two filtering passes) does not satisfy the causality condition.
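A small sketch comparing ordinary and zero-phase filtering (the filter and test signal parameters are assumptions made for illustration):

[b, a] = butter(4, 0.2); % illustrative 4th-order Butterworth LPF
x = sin(2*pi*0.05*(0:199)) + 0.2*randn(1, 200); % noisy sine wave
y1 = filter(b, a, x); % ordinary filtering: the output is delayed
y2 = filtfilt(b, a, x); % forward-backward filtering: no phase delay
plot(1:200, x, 1:200, y1, 1:200, y2)
legend('input', 'filter', 'filtfilt')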

In the practical implementation of high-order recursive filters, they are often represented as second-order sections connected in series. This makes it possible to attenuate calculation errors arising from rounding and quantization errors of the filter coefficients. Tools for analyzing errors of this kind are concentrated in the Filter Design package, and in the Signal Processing package there is a function sosfilt, which allows implementing discrete data filtering using a filter represented as second-order sections.

Another possible discrete filter structure is the lattice structure. The function latcfilt is used to perform filtering with a filter represented in this form.

The function medfilt1, which implements one-dimensional median filtering, belongs to nonlinear filtering algorithms. Its operation consists in applying a sliding window of a given length to the input signal, ordering the samples within the window, and returning the value from the middle of the ordered window as the output sample (or the half-sum of the two elements closest to the middle if the window length is even). Median filtering is used, for example, to eliminate impulse noise (clicks) when processing audio signals. The function medfilt2, which implements the two-dimensional variant of median filtering, is located in the Image Processing package.

The function sgolayfilt performs discrete filtering using the Savitzky-Golay filter. Its essence is that the input signal is divided into blocks of a given size and within each block the signal is approximated by a polynomial of a given degree according to the minimum mean square error criterion. If the degree of the polynomials is one less than the block size, the output equals the input; with a lower polynomial degree the signal is smoothed. Savitzky-Golay filters are used to "clean" signals from noise.
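A minimal smoothing sketch (the signal and filter parameters are arbitrary illustrative choices):

x = cos(2*pi*(0:0.01:2)) + 0.3*randn(1, 201); % slow oscillation with noise
y = sgolayfilt(x, 3, 21); % 3rd-degree polynomials over frames of 21 samples
plot(0:200, x, 0:200, y) % the smoothed curve follows the slow oscillation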

Discrete filter synthesis is understood as the choice of sets of coefficients {ai} and {bj} for which the characteristics of the resulting filter satisfy the specified requirements. Strictly speaking, the design task also includes the choice of an appropriate filter structure taking into account the finite accuracy of the calculations. This is especially important when implementing filters "in hardware", using specialized LSI chips or digital signal processors. The effects associated with the finite accuracy of calculations can be analyzed using the functions of the Filter Design package; the filter synthesis functions do not take these effects into account.

The Signal Processing package contains a large number of functions that implement various discrete filter synthesis algorithms. We present the main characteristics of these functions in the form of a table, and then give some additional comments.

Function | Filter type | Frequency response | Synthesis method
butter | recursive | Butterworth | bilinear z-transform
cheby1 | recursive | Chebyshev of the first kind | bilinear z-transform
cheby2 | recursive | Chebyshev of the second kind | bilinear z-transform
ellip | recursive | Cauer (elliptic) | bilinear z-transform
bilinear | recursive | arbitrary analog prototype | bilinear z-transform
impinvar | recursive | arbitrary analog prototype | invariant impulse response transformation
yulewalk | recursive | piecewise linear | autoregressive method
invfreqz | recursive | arbitrary | minimization of the difference between the numerator of the transfer function and the product of its denominator and the desired frequency response
prony | recursive | synthesis from a given impulse response | exponential Prony approximation
fir1 | non-recursive | multiband | inverse Fourier transform using windows
fir2 | non-recursive | piecewise linear | inverse Fourier transform using windows
firls | non-recursive | piecewise linear | minimization of the root mean square error
fircls | non-recursive | piecewise constant | minimization of the RMS error with a limit on the maximum deviation
fircls1 | non-recursive | LPF, HPF | minimization of the RMS error with a limit on the maximum deviation
firrcos | non-recursive | LPF with cosine smoothing of the frequency response |
intfilt | non-recursive | LPF | minimax approximation
remez | non-recursive | piecewise linear with transition bands | minimax approximation
cremez | non-recursive (including nonlinear phase response and complex coefficients) | piecewise linear with transition bands | minimax approximation

Discrete filter synthesis methods can be divided into two large groups: those that use an analog prototype and those that do not. When an analog prototype filter is used, the analog transfer function defined in the s-domain must somehow be converted into a discrete transfer function defined in the z-domain. The Signal Processing package implements two methods for such a conversion: the invariant impulse response method and the bilinear z-transform method. Both methods result in recursive discrete filters.

In the invariant impulse response method, the impulse response of the analog prototype is sampled. The frequency response of the resulting discrete filter is therefore the periodically repeated frequency response of the analog prototype. For this reason the method is unsuitable for the synthesis of high-pass filters and, in general, of filters whose transfer coefficient does not tend to zero with increasing frequency. The invariant impulse response method is implemented in the Signal Processing package by the function impinvar.

In the bilinear z-transform method, the characteristics of the analog prototype are distorted only along the frequency axis. The frequency range of the analog filter (from zero to infinity) is mapped onto the working frequency range of the discrete filter (from zero to half the sampling frequency). The frequency axis transformation is described by the arctangent function, so frequencies much lower than the sampling rate are converted approximately linearly. This method is implemented by the function bilinear for an arbitrary analog prototype. In addition, there are ready-made functions for calculating low-pass, high-pass, bandpass and notch filters by the bilinear z-transform method using analog prototypes with Butterworth, Chebyshev (first and second kind) and Cauer (elliptic) frequency responses. These are the functions butter, cheby1, cheby2 and ellip. All of them can also be used to calculate analog filters (see earlier); the discrete variant of the calculation is indicated by the absence of the string 's' in the list of input parameters. There are also functions for determining the required order of these filters from given frequency response parameters (the passband and stopband edge frequencies and the allowable ripples in these bands): buttord, cheb1ord, cheb2ord and ellipord. Just like the filter synthesis functions, they can also determine the required order for analog filters (see earlier); the discrete variant of the calculation is indicated by the absence of the string 's' in the list of input parameters.
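For instance, the required order of a discrete elliptic filter can be estimated and the filter then synthesized as follows (a sketch with illustrative passband and stopband requirements):

Wp = 0.5; % passband edge, normalized to the Nyquist frequency
Ws = 0.6; % stopband edge
Rp = 1; % allowable passband ripple, dB
Rs = 40; % required stopband attenuation, dB
[n, Wn] = ellipord(Wp, Ws, Rp, Rs); % minimum order and natural frequency
[b, a] = ellip(n, Rp, Rs, Wn); % discrete filter (no 's' parameter)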

As an example, we synthesize a fourth-order elliptical low-pass filter with the same parameters as the analog filter in one of the previous examples (cutoff frequency 3 kHz, frequency response ripple in the passband 1 dB, and signal rejection in the stopband 20 dB). We will take the sampling frequency equal to 12 kHz. After synthesis, we plot the frequency response and phase response of the resulting filter using the function freqz.

    Fs = 12000; % sampling frequency
    F0 = 3000; % cutoff frequency
    [b, a] = ellip(4, 1, 20, F0/Fs*2); % filter calculation
    freqz(b, a, [], Fs); % graph output

Synthesis methods that do not use an analog prototype are called direct. They, in turn, can also be divided into two groups: methods for synthesizing recursive and non-recursive filters.

Direct synthesis functions of non-recursive filters include the following:

  • Functions that synthesize filters by the inverse Fourier transform of the desired frequency response, followed by multiplying the resulting impulse response by some weight function (window) in order to attenuate the frequency response ripples that appear because of the Gibbs effect. These are the functions fir1 and fir2. This group also includes the function for synthesizing low-pass filters with cosine smoothing of the frequency response, firrcos. In addition, the function kaiserord allows you to estimate, from the given frequency response parameters, the filter order required for synthesis using the Kaiser window (a short sketch of this approach follows the list below).
  • Functions that minimize the root mean square deviation of the frequency response of the resulting filter from the given one. These are the functions firls, fircls and fircls1. The last two functions solve the optimization problem with a limit on the maximum deviation of the frequency response from the given one, which avoids large frequency response peaks near the transition bands.
  • Functions that implement minimax optimization, that is, minimization of the peak deviation of the frequency response of the resulting filter from the given one. The result is filters with equal-ripple frequency responses. This group includes the functions remez (the standard version of the Remez method, present since the very first versions of the Signal Processing package) and cremez (an extended version that supports the synthesis of filters with nonlinear phase response and with complex coefficients). In addition, the function remezord allows you to estimate the required filter order for synthesis by the Remez method from the given frequency response parameters.
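A sketch of window-based synthesis with order estimation via kaiserord (the design requirements below are illustrative assumptions):

Fs = 8000; % sampling frequency, Hz
f = [1000 1500]; % passband and stopband edge frequencies, Hz
a = [1 0]; % desired amplitudes in the bands: a low-pass filter
dev = [0.05 0.01]; % allowable ripples in the two bands
[n, Wn, beta, ftype] = kaiserord(f, a, dev, Fs); % order and Kaiser window parameter
b = fir1(n, Wn, ftype, kaiser(n+1, beta), 'noscale'); % non-recursive filter synthesis
freqz(b, 1, [], Fs) % check the resulting frequency response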

As an example, let us synthesize a 32nd-order non-recursive LPF by the Remez method with the same cutoff and sampling frequencies as in the previous example (cutoff frequency 3 kHz, sampling frequency 12 kHz), setting the start of the stopband to 3.5 kHz. After the synthesis we will plot the impulse response as well as the frequency response of the resulting filter (the phase response of the filter is linear, so there is no point in plotting it). The frequency response will be displayed on a linear vertical scale in order to clearly show the uniformity of its ripples.

    Fs = 12000; % sampling frequency
    F0 = 3000; % cutoff frequency
    F1=3500; % stopband start
    b = remez(32, [0 F0 F1 Fs/2]/(Fs/2), [1 1 0 0]); % filter calculation
    impz(b) % impulse response graph
    [h, f] = freqz(b, 1, [], Fs); % complex gain
    figure
    plot(f, abs(h)) % frequency response graph
    grid

The direct synthesis functions of recursive filters include the following:

  • yulewalk - synthesis of a recursive filter with an arbitrary piecewise-linear frequency response by the Yule-Walker method.
  • invfreqz - this function is intended for solving the system identification problem: it determines the coefficients of the numerator and denominator of the transfer function of a discrete system from a set of values of this transfer function at different frequencies.

In conclusion, we list several functions that do not fall into the groups listed above. The function maxflat is intended for the synthesis of a generalized Butterworth filter (for such filters the number of zeros of the transfer function exceeds the number of its poles). The function intfilt synthesizes filters intended for filtering the signal when performing interpolation and decimation. The vector convolution operation can be represented as a vector-matrix product, and the matrix involved in this product can be calculated with the function convmtx. Finally, the function sgolay performs the synthesis of the Savitzky-Golay smoothing filter. Since, as described above, the Savitzky-Golay filter processes individual signal blocks, such a filter is not a stationary system; therefore the function sgolay returns a whole matrix of time-varying coefficients of the equivalent non-recursive filter.

In addition to the Signal Processing package, a number of discrete filter synthesis functions are available in the Communications and Filter Design packages.

The words "spectral analysis" in the minds of many MATLAB users are strongly associated with the function fft(see below the section "Functions of transformations of discrete signals"), which performs a discrete Fourier transform (DFT). However, this is just a one-to-one linear transformation, giving performance deterministic signal in the frequency domain. If the analyzed signal is random, it only makes sense for him grade spectral density power, for the calculation of which it is necessary to average the available data in one way or another. In addition, in some cases we know some Additional Information about the analyzed signal, and it is desirable to take this information into account in the spectral analysis.

Methods for the spectral analysis of random signals are divided into two large classes: nonparametric and parametric. Nonparametric methods use only the information contained in the samples of the analyzed signal. Parametric methods assume the presence of some statistical model of the random signal, and the spectral analysis in this case includes determining the parameters of this model. The term "model-based spectrum analysis" (MBSA) is also used.

The Signal Processing package contains functions that implement various methods of spectral analysis, both parametric and nonparametric (it should be emphasized once again that spectral analysis here means estimation of the power spectral density of a random process). In addition, there are functions for obtaining other averaged characteristics of random discrete signals.

To determine the spectral characteristics of a discrete random process, the average power spectrum of a fragment of limited length is calculated, and the fragment length is then allowed to tend to infinity:

W(\omega) = \lim_{N \to \infty} \frac{1}{NT} \overline{\left| T \sum_{k=0}^{N-1} x(k) e^{-j \omega k T} \right|^2}.   (1)

Here x(k) are the samples of the random process and T is the sampling period. The overline denotes averaging over the ensemble of realizations.

In addition, this spectrum can be expressed in terms of the correlation function of a random process:

W(\omega) = T \sum_{m=-\infty}^{\infty} R_x(m) e^{-j \omega m T},   (2)

where R_x(m) is the correlation function of the process.

This expression is a discrete analogue of the Wiener-Khinchin theorem: the spectrum of a discrete random process is the Fourier transform of its correlation function.

As already mentioned, when using nonparametric methods for calculating the spectrum of a random process, only the information contained in the signal samples is used, without any additional assumptions. Three such methods are implemented in the Signal Processing package - the periodogram, the Welch method, and the Thomson method.

A periodogram is an estimate of the power spectral density obtained from N samples of one realization of a random process according to definition (1) (naturally, without taking the limit, using a finite number of terms). If a weighting function (window) is used in the calculation of the spectrum, the resulting power spectrum estimate is called a modified periodogram.

Relation (2) is satisfied only for an infinite number of samples, so for any finite N the periodogram estimate of the power spectral density turns out to be biased: in effect, inside the sum (2) the signal correlation function is multiplied by a triangular weight function. In addition, it can be shown that the periodogram is not a consistent estimate of the power spectral density, since the variance of this estimate is comparable to the square of its expected value for any N. As the number of samples grows, the values of the periodogram fluctuate faster and faster, and its graph becomes more and more jagged.

In the Signal Processing package, the calculation of the periodogram (including the modified one) is performed using the function periodogram.

To reduce the raggedness of the periodogram, some kind of averaging has to be applied. Bartlett suggested dividing the analyzed signal into non-overlapping segments, calculating a periodogram for each segment, and then averaging these periodograms. If the correlation function of the signal decays to negligible values within the duration of a segment, the periodograms of individual segments can be considered independent. Welch introduced two improvements to Bartlett's method: the use of a weighting function and splitting the signal into overlapping fragments. The weighting function reduces spectral leakage and the bias of the obtained power spectral density estimate at the cost of a slight loss of resolution. The overlapping of segments is introduced in order to increase their number and reduce the variance of the estimate.

Calculations by the Welch method (also called the method of averaged modified periodograms) are organized as follows: the vector of signal samples is divided into overlapping segments, each segment is multiplied by the weighting function, modified periodograms are calculated for the weighted segments, and the periodograms of all segments are averaged.

The Welch method is the most popular periodogram method of spectral analysis. In the Signal Processing package, it is implemented using the function pwelch.

Thomson's method, implemented by the function pmtm, is based on the use of prolate spheroidal functions. These functions of finite duration provide the maximum concentration of energy in a given frequency band. In addition to the spectral estimate itself, the function pmtm can return its confidence interval. Calculating the prolate spheroidal functions takes some time, so when pmtm is used repeatedly the calculations can be sped up by precomputing the functions needed for the analysis and saving them in a database. A family of functions whose names begin with dpss is intended for working with this database (a MAT-file named dpss.mat): dpss computes the functions, dpsssave saves a set of functions to the database, dpssload loads a set of functions from the database, dpssdir displays the database directory, and dpssclear removes a set of functions from the database.

As an example, let us generate a realization of an exponentially correlated random process and perform its spectral analysis using the three methods listed above. The random signal we need is generated by passing normal discrete white noise through a first-order recursive filter:

X0 = randn(1, 1000);
a = 0.9;
X = filter(1, [1 -a], X0);

We build a periodogram:

periodogram(X, [], [], 1)

As you can see, the periodogram is very jagged. Now let's estimate the spectrum of the same implementation using the Welch method:

pwelch(X, [], [], [], 1)

The irregularity of the graph is much less. Finally, we use the Thomson method:

pmtm(X, [], [], 1)

In the graph produced by the function pmtm, the bounds of the confidence interval are shown along with the power spectrum estimate.

The use of parametric methods presupposes some mathematical model of the analyzed random process. Spectral analysis in this case reduces to solving an optimization problem, that is, to finding the model parameters for which the model is closest to the actually observed signal. The Signal Processing package implements a number of varieties of autoregressive analysis and two methods based on the analysis of the eigenvalues and eigenvectors of the signal correlation matrix: MUSIC (MUltiple SIgnal Classification) and EV (EigenVectors).

According to the autoregressive model, the signal is generated by passing discrete white noise through a "purely recursive" filter of order N. The power spectral density of such a signal is proportional to the squared magnitude of the frequency response of the shaping filter. Thus, this method of spectral analysis reduces to determining the coefficients of a filter of a given order, estimating the power of the exciting white noise, and analytically calculating the power spectral density. To determine the model coefficients, the linear prediction error of the signal is minimized. Theoretical analysis shows that the optimal model coefficients are determined only by the correlation function of the signal.

In practice the true correlation function of the signal under study is not known, so estimates of the correlation function (CF) obtained by time averaging are used to minimize the prediction errors. A whole series of autoregressive analysis methods has been developed; they differ mainly in the treatment of edge effects (that is, in how the calculations involve those edge samples of the signal for which no shifted pair is available when estimating the CF). The Signal Processing package implements the Burg method, the covariance method, the modified covariance method, and the Yule-Walker autoregressive method.

Autoregressive spectrum analysis methods are best suited for signals that really are autoregressive processes. In general, these methods give good results when the spectrum of the analyzed signal has clearly defined peaks. In particular, such signals include a sum of several sinusoids in noise.

When using autoregressive methods it is important to choose the order of the autoregressive model correctly: it should be twice the number of sinusoidal components presumably contained in the analyzed signal.
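For instance (a sketch with arbitrarily chosen frequencies and noise level), two sinusoids in noise call for a model of order four:

n = 0:999;
x = sin(0.3*pi*n) + sin(0.4*pi*n) + 0.5*randn(size(n)); % two sinusoids plus noise
pburg(x, 4, [], 1) % AR order 4 = 2 x (number of sinusoids); two spectral peaks appear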

Each method of autoregressive analysis in the Signal Processing package corresponds to two functions - the function of calculating the coefficients of the model and the function of the spectral analysis itself. The spectral analysis function calls the function for calculating the coefficients of the model, and then calculates the spectrum. The function names are summarized in the following table.

Method name | Model coefficient calculation function | Spectral analysis function
Covariance method | arcov | pcov
Modified covariance method | armcov | pmcov
Burg method | arburg | pburg
Yule-Walker autoregressive method | aryule | pyulear

The exponentially correlated random signal formed in the example above is a first-order autoregressive process, so the listed methods of spectral analysis are quite adequate for it. Let us apply the Burg method, setting the order of the autoregressive model to one (this is the second parameter of the function pburg):

pburg(X, 1, [], 1)

The resulting smooth curve practically coincides with the theoretical spectrum of this random process.

The MUSIC (MUltiple SIgnal Classification) method is intended for the spectral analysis of signals that are the sum of several sinusoids (more precisely, in the general case, several complex exponentials) and white noise. The purpose of the spectral analysis of such signals is usually not to calculate the spectrum as such, but to determine the frequencies and levels (amplitudes or powers) of the harmonic components. The MUSIC method is designed specifically for this, so the dependence of signal level on frequency obtained with it is called a pseudospectrum.

The method is based on the analysis of the eigenvalues and eigenvectors of the signal correlation matrix. When performing the analysis, you must specify the order of the model, that is, the number of complex exponentials presumed to be contained in the signal.

In the Signal Processing package, the MUSIC method is implemented using the function pmusic, and the function rootmusic makes it possible to obtain estimates of the frequencies and powers of the harmonic components of the signal.
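A sketch of frequency estimation with rootmusic (the signal parameters are assumptions made for illustration):

n = 0:499;
x = cos(0.2*pi*n) + cos(0.25*pi*n) + 0.1*randn(size(n)); % two real sinusoids in noise
[w, pow] = rootmusic(x, 4); % model order 4: two real sinusoids = four complex exponentials
w/pi % estimated frequencies in units of pi rad/sample (pairs near ±0.2 and ±0.25)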

A close relative of MUSIC is the eigenvectors (EV) method. Its only difference is that in the calculation formulas the eigenvectors are multiplied by weight coefficients inversely proportional to the corresponding eigenvalues. There is evidence in the literature that the EV method generates fewer false spectral peaks than MUSIC and generally better reproduces the shape of the noise spectrum.

In the Signal Processing package, the EV method is implemented using the function peig, and the function rooteig makes it possible to obtain estimates of the frequencies and powers of the harmonic components of the signal.

It should be emphasized that pseudospectra are not estimates of the true power spectral density; they are only pseudo-estimates that make it possible to determine the frequencies of sinusoidal or narrow-band signal components with a resolution somewhat higher than that of autoregressive methods.

The discrete Fourier transform, used in all nonparametric spectral estimation methods, implies a periodic continuation of the analyzed signal fragment. Jumps can then occur at the junctions of the fragments, leading to side lobes of significant level in the spectral domain. To reduce this effect, before performing the DFT the signal is multiplied by a weight function (window) that decays from the center toward the edges. As a result, the magnitude of the jumps at the junctions of the segments decreases and the level of the unwanted side lobes of the spectrum becomes smaller; the price for this is some broadening of the spectral peaks.

In addition to spectral analysis, weight functions are used in the synthesis of non-recursive filters by inverse Fourier transform of the desired frequency response. In this case, they allow you to increase the signal suppression in the filter's stopband due to some expansion of the passband.

Currently, the Signal Processing package contains about a dozen weight functions. The distribution of some of them is due to computational simplicity, while others are optimal in some sense.

The simplest is the rectangular window implemented by the function rectwin (in package versions up to and including 5.0 this function was called boxcar). The rectangular window corresponds to the absence of weighting; the function is included in the package only for the formal completeness of the set of weighting functions. The triangular window is implemented by the function triang; the Bartlett window (function bartlett) also has a triangular shape and differs only slightly in the method of calculation.

Several weight functions are combinations of harmonic components. We list them in ascending order of the number of cosine terms:

  • The Hann window (function hann), sometimes incorrectly called the Hanning window, contains a single cosine term.
  • The Hamming window (function hamming) also contains one cosine term.
  • The Blackman window (function blackman) contains two cosine terms.
  • The Blackman-Harris window (function blackmanharris) contains three cosine terms.
  • The Nuttall window (an alternative version of the Blackman-Harris window, function nuttallwin) contains three cosine terms.

The remaining windows are described by more complex mathematical relationships. The shape of the Gaussian window (function gausswin) needs no explanation. The modified Bartlett-Hann window (function barthannwin) is a linear combination of the Bartlett and Hann windows. The Bohman window (function bohmanwin) is the convolution of two identical cosine pulses. The Chebyshev window (function chebwin) has side lobes of a fixed level (specified in the calculation) and is calculated via the inverse Fourier transform of the window's frequency response. The Kaiser window (function kaiser) also has a parameter that controls the side-lobe level and the width of the main lobe; modified Bessel functions are used in its calculation. The Tukey window (function tukeywin) is a rectangle with cosine-smoothed edges; at the extreme allowed values of the smoothing factor it turns into a rectangular window or a Hann window.

Finally, the function window provides a general interface for calling specific window calculation functions.

Functions belonging to this category calculate various statistical parameters of the signals. Functions can be divided into several groups.

The first group relates to the calculation of correlation and covariance functions (it should be recalled that these concepts are defined differently in Russian and in Western terminology; this review uses the Western convention adopted in MATLAB). The function xcorr estimates the correlation function of a signal or the cross-correlation function of two signals. A variant of this function intended for two-dimensional signals is called xcorr2. The function xcov estimates the covariance function of a signal or the cross-covariance function of two signals. The functions cov and corrcoef, which belong to the MATLAB core, allow you to obtain, respectively, the covariance matrix and the matrix of correlation coefficients by averaging several realizations of random data. The function corrmtx returns a matrix of intermediate data for estimating the correlation matrix of a signal, and can also return this matrix itself.
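A small sketch of using xcorr to estimate the delay between two signals (the test data are assumptions made for illustration):

x = randn(1, 1000); % reference signal
y = [zeros(1, 20) x(1:980)]; % the same signal delayed by 20 samples
[c, lags] = xcorr(y, x); % cross-correlation estimate
[cmax, imax] = max(c);
lags(imax) % returns 20, the introduced delay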

The next group of functions calculates statistical characteristics in the frequency domain using the nonparametric Welch method (see above). The function csd is intended for estimating the cross power spectral density of two random processes; it can also return a confidence interval for the resulting estimate. The function cohere estimates the magnitude-squared coherence function of two random processes. The function tfe estimates the complex transfer function of a system from realizations of its input and output signals.

Finally, the function psdplot is used by all spectral estimation functions to plot the power spectral density. It can also be called explicitly - for example, to display a graph on a linear scale instead of the default logarithmic one, or to show several spectra on one graph.

Parametric modeling and linear prediction functions

Parametric modeling is understood as the choice of a certain mathematical model of a random process and the subsequent selection of the parameters of this model so as to ensure the best possible correspondence between the signal generated by the model and the available sample of real data.

One model widely used in practice is the autoregressive (AR) model, in which a random signal is generated by passing discrete white noise through a "purely recursive" (that is, not using delayed input samples) shaping filter. Four functions of the Signal Processing package - arburg, arcov, armcov and aryule - are designed to obtain estimates of the shaping filter coefficients and of the variance (power) of the white noise that excites the filter. The calculation methods used by these functions were described earlier in the section on autoregressive spectral analysis.

If an estimate of the complex transfer function of a system at different frequencies is available, it is possible to construct a realizable model of the system whose frequency response is as close as possible to the measured one. Realizability here means that the transfer function can be represented as a rational function with given orders of the numerator and denominator polynomials. Parametric modeling in this case reduces to finding the optimal coefficients of the numerator and denominator polynomials of the transfer function. This task is solved by two functions of the Signal Processing package: invfreqs builds a model of an analog system, and invfreqz performs the same operation for discrete systems.

Another variant of the parametric modeling problem is building a model of a system from an available estimate of its impulse response. The Signal Processing package provides two functions for this. The function prony uses the fact that the impulse response of a recursive discrete system without multiple poles is a sum of discrete exponential functions (complex in the general case). The algorithm implemented by this function was originally developed in the 18th century by Baron de Prony for fitting the parameters of an exponential analytical model to experimental data. The stability of the resulting system is not guaranteed, but the first n samples of its impulse response (where n is the order of the numerator of the system transfer function specified in the calculation) exactly match the given ones.

The second function for modeling a system from its impulse response, stmcb, does not seek to ensure an exact match of the initial fragments of the impulse responses; instead it minimizes the standard deviation of the obtained characteristic from the given one, that is, the sum of the squared magnitudes of the differences between the samples of the obtained and desired impulse responses. The function implements the iterative Steiglitz-McBride method, which reduces to the repeated solution of a system of linear equations with respect to the coefficients of the transfer function polynomials of the desired system.

As an example, let us obtain a model of a third-order system using the Prony and Steiglitz-McBride methods, taking a triangular impulse response as the target:

h = [0 1 2 3 2 1 0]; % triangular impulse response (illustrative values; the sample values of the original example were not preserved)
[b1, a1] = prony(h, 3, 3); % Prony method
[b2, a2] = stmcb(h, 3, 3); % Steiglitz-McBride method
% graphs of the impulse responses of the obtained systems
impz(b1, a1, 30)
title('Prony')
figure
impz(b2, a2, 30)
title('Stmcb')

Comparison of the graphs clearly demonstrates the differences between the two algorithms. With the Prony method, the first four samples of the obtained impulse response exactly coincide with the given ones, but further on the deviations from the given values grow considerably, and after the end of the given fragment a "tail" of rather large level is observed, since the function prony makes no assumptions about the required values of the impulse response outside the given fragment. The function stmcb minimizes the quadratic error of reproducing the entire (infinite) impulse response, which beyond the end of the explicitly specified fragment is assumed to be zero. As a result there is no exact correspondence between the samples of the given and obtained impulse responses (except for the first one), but the reproduction error is "smeared" over the samples more evenly.

If a discrete random process is not white noise, its samples are mutually correlated. This makes it possible, knowing the correlation function of the process, to predict the value of its next sample. The predicted value is calculated as a linear combination of the previous samples of the process. This is the main idea of linear prediction. Linear prediction is used for parametric spectrum analysis (see earlier), system identification, speech signal analysis, and data compression for transmission.
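A minimal sketch of linear prediction with lpc (the test process is an assumption chosen for illustration):

x = filter(1, [1 -0.7], randn(1, 1000)); % exponentially correlated test process
a = lpc(x, 3); % 3rd-order linear prediction coefficients, a(1) = 1
xp = filter([0 -a(2:end)], 1, x); % prediction of each sample from the previous ones
e = x - xp; % prediction error (close to white noise)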

System models based on linear prediction can be represented in various forms and, accordingly, described using different sets of parameters. A number of functions in the Signal Processing package allow you to convert a model description from one form to another. These functions are listed in the following table.

Original form \ Final form | Autocorrelation sequence | Reflection coefficients | Prediction coefficients | Arcsine parameters | Logarithmic ratios | Line spectral frequencies
Autocorrelation sequence | - | ac2rc, schurrc | ac2poly | | |
Reflection coefficients | rc2ac | - | rc2poly | rc2is | rc2lar |
Prediction coefficients | poly2ac | poly2rc | - | | | poly2lsf
Arcsine parameters | | is2rc | | - | |
Logarithmic ratios | | lar2rc | | | - |
Line spectral frequencies | | | lsf2poly | | | -

In addition, the Signal Processing package has several more functions related to linear prediction. To calculate the coefficients of the predicting filter, one has to solve a system of linear equations whose matrix is the correlation matrix of the input signal. This matrix has a number of properties that make it possible to reduce the number of computational operations required to solve the system. First, the correlation matrix is self-adjoint (that is, it does not change under Hermitian conjugation - the combination of transposition and complex conjugation). For a real signal, self-adjointness simply means that the matrix is symmetric about the main diagonal. Second, in the case of a stationary random process (and only for such processes can a predicting filter with constant parameters be used), the correlation matrix is a Toeplitz matrix: along each diagonal parallel to the main one the entries are the same. Finally, the right-hand side of the system of equations is the first column of the correlation matrix shifted by one position. Systems of linear equations with matrices having these properties are called Yule-Walker equations, and a recursive Levinson-Durbin algorithm is used to solve them. This iterative algorithm is implemented by the function levinson. The function rlevinson solves the inverse problem: it finds the vector of samples of the signal correlation function for given linear prediction coefficients.

The function lpc calculates linear prediction coefficients by the autocorrelation method and is an analogue of the function aryule (see the earlier section on parametric spectral analysis). The two functions differ only in the MATLAB code used to compute the estimate of the correlation matrix; the results they give coincide up to computational errors.

Signal Generation Functions

The Signal Processing package contains a number of functions designed to generate standard waveforms that are often encountered in solving various signal processing problems.

Generation of non-periodic signals

All functions for generating non-periodic signals receive as parameters a vector of time points and additional arguments describing the parameters of the generated pulse. The returned result is a vector of samples of the resulting signal. There are functions for generating signals of the following form:

  • rectpuls - generation of a single rectangular pulse; the only additional parameter is the pulse duration;
  • tripuls - generation of a single triangular pulse; additional parameters are the pulse duration and its asymmetry coefficient;
  • sinc - generation of a pulse with a rectangular spectrum, according to the formula sinc(x) = sin(πx)/(πx); this function has no additional parameters;
  • gauspuls - generation of a radio pulse with a Gaussian envelope; additional parameters are the carrier frequency, the relative bandwidth, and the level (in decibels) at which this bandwidth is measured;
  • gmonopuls - generation of a Gaussian monopulse (its shape is the first derivative of a Gaussian function); the additional parameter is the center frequency of the generated signal spectrum.

Generation of periodic signals

The functions belonging to this group receive as parameters a vector of time points and additional arguments describing the parameters of the generated signal. The period of the generated signals is 2π; to generate signals with a different period, the time argument passed to the function must be scaled accordingly. The returned result is a vector of samples of the resulting signal. There are functions for generating periodic signals of the following forms:

  • square - generation of a periodic sequence of rectangular pulses; an additional parameter is the duty cycle (the ratio of the pulse duration to the repetition period);
  • sawtooth - generation of a periodic sawtooth signal; an additional parameter is the asymmetry coefficient of the triangular pulses that make up the periodic sequence;
  • diric - the Dirichlet function; an additional parameter is the integer order of the function. The Dirichlet function is calculated by the formula diric(x) = sin(nx/2)/(n·sin(x/2)).

Generation of oscillations with changing frequency

This group includes two functions: chirp and vco. The function chirp generates oscillations whose instantaneous frequency varies according to one of three possible laws - linear, quadratic or exponential. The function vco (Voltage Controlled Oscillator) has wider possibilities: it generates oscillations with an arbitrary law of variation of the instantaneous frequency. In fact, this function performs frequency modulation.

Pulse train generation

Function pulstran serves to generate a finite sequence of pulses of the same shape with arbitrarily set delays and amplitude factors. The shape of the pulses can be specified in one of two ways: by the name of the function that generates the pulse, or by an already calculated sample vector.

As an example, consider the use of functions pulstran and sinc to restore an analog signal from its discrete samples according to the Kotelnikov theorem.

t = -5:0.1:10; % time for the analog signal
k = 0:5; % numbers of the discrete signal samples
sd = [1 2 3 1 -1 2]; % discrete signal (sample values chosen arbitrarily for illustration)
sa = pulstran(t, [k' sd'], 'sinc'); % recovered analog signal
stem(k, sd) % graph of the discrete signal
hold on
plot(t, sa, 'r') % analog signal plot
hold off

Discrete Signal Transformation Functions

Perhaps the best known of the discrete signal transformations is the discrete Fourier transform (DFT). The corresponding functions, which use the fast Fourier transform (FFT) algorithm, belong in MATLAB to the category of data processing functions and are built-in: fft and ifft are the one-dimensional versions, fft2 and ifft2 the two-dimensional versions, and fftshift and ifftshift rearrange the halves of the vector of spectral samples so that the zero frequency is moved to the middle of the vector. The Signal Processing package itself contains functions that implement more specialized transformations.

Like any linear transformation, the DFT can be represented as the multiplication of the transformation matrix by a column of samples of the converted signal. This transformation matrix for the DFT is calculated by the function dftmtx.
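A minimal check (the signal vector is illustrative) that the matrix returned by dftmtx indeed performs the DFT:

x = [1 2 3 4].'; % illustrative signal
X1 = dftmtx(4) * x; % DFT as a matrix-vector product
X2 = fft(x); % built-in FFT
max(abs(X1 - X2)) % the difference is at the level of rounding errors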

Again, due to the linearity of the DFT, any spectral sample can be represented as the result of processing the original signal by some filter. Representing this filter in recursive form gives the Goertzel algorithm, implemented by the function goertzel. If only a few spectral samples need to be computed, this algorithm is faster than the FFT.
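A minimal sketch (the signal and the set of bins are illustrative) computing only a few spectral samples with goertzel and comparing them with the corresponding fft bins:

x = randn(128, 1); % illustrative signal
k = 5:8; % indices of the required spectral samples
Xg = goertzel(x, k); % only the selected bins are computed
Xf = fft(x);
max(abs(Xg - Xf(k))) % agreement up to rounding errors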

A close relative of the DFT is the discrete cosine transform (DCT). When it is calculated, instead of the periodic continuation of the signal assumed in the DFT, neighboring fragments of the continued signal are mirrored in time. As a result, the signal becomes an even function of time and its spectrum becomes real. For this reason, only cosines, instead of complex exponentials, appear in the transform formula, which gave this transformation its name. The forward and inverse discrete cosine transforms are calculated by the functions dct and idct, respectively.
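A minimal sketch (the signal is illustrative) showing the energy-compaction property that makes the DCT useful: a smooth signal is well approximated by a few low-order DCT coefficients:

x = (1:64).' + 10*sin((1:64).'/5); % illustrative smooth signal
X = dct(x);
X(9:end) = 0; % keep only the first 8 coefficients
xr = idct(X); % approximate reconstruction
norm(x - xr)/norm(x) % small relative error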

An important method for analyzing discrete numerical sequences is the z-transform, which maps a sequence x(k) into a function of the complex variable z:

X(z) = Σ x(k)·z^(-k),

where the summation is over all sample indices k.

In some problems it is necessary to calculate the z-transform at points located on a spiral contour in the z-plane: zk = A·W^(-k), k = 0, 1, ..., M-1, where A sets the starting point of the contour and W the ratio between neighboring points. A computationally efficient calculation of the z-transform along such a contour uses the fast Fourier transform; it is implemented in the function czt.
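A minimal sketch (all parameter values are assumed) using czt to examine a narrow frequency band in more detail than the FFT grid allows:

fs = 1000; f1 = 100; f2 = 150; m = 64; % assumed sampling rate and band of interest
x = cos(2*pi*120*(0:255)/fs).'; % illustrative signal with a 120 Hz component
w = exp(-1j*2*pi*(f2-f1)/(m*fs)); % ratio between neighboring contour points
a = exp(1j*2*pi*f1/fs); % starting point of the contour
X = czt(x, m, w, a); % z-transform along the contour
plot(f1 + (0:m-1)*(f2-f1)/m, abs(X)) % the peak appears near 120 Hz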

When implementing some variants of the fast Fourier transform algorithm, in order to increase efficiency, it is necessary to rearrange the elements of the processed vector in bit-reversed order (this means that the bits of the binary representations of the element indices are read in reverse order, and the vector is then reordered according to the new indices). Such a permutation is performed by the function bitrevorder.
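A small example of such a permutation for an eight-element vector:

x = (0:7).';
bitrevorder(x) % returns the column 0 4 2 6 1 5 3 7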

When analyzing narrowband signals, it can be useful to represent the signal as an oscillation with time-varying amplitude and initial phase. To obtain such a representation, a complex analytic signal is used, whose real part coincides with the original signal and whose imaginary part is its Hilbert transform. In the frequency domain the analytic signal has a one-sided spectrum: its spectral function is nonzero only at positive frequencies. The function hilbert calculates the analytic signal in the frequency domain: it computes the forward DFT, zeroes out half of the spectrum and then computes the inverse DFT.

As an example, let's calculate an analytical signal for a rectangular radio pulse:

s = zeros(256,1);
s(65:192) = cos(pi/2*(0:127)'); % samples of the radio pulse
sa = hilbert(s); % analytic signal
f = (-128:127)/128; % normalized frequency values for the graphs
subplot(2,1,1)
plot(f, abs(fftshift(fft(s)))) % spectrum of the real signal
subplot(2,1,2)
plot(f, abs(fftshift(fft(sa)))) % spectrum of the analytic signal

The lower graph clearly shows the zeroing of the spectrum in the region of negative frequencies and its doubling in the region of positive frequencies.

Cepstral Analysis

Cepstral analysis is related to homomorphic signal processing. Such processing obeys the generalized principle of superposition: if the input signal of the system is a combination of several signals obtained using a mathematical operation A, then at the output the results of processing the individual signals are combined using an operation B. Cepstral analysis has found application, in particular, in speech processing. The Signal Processing package has several functions related to cepstral analysis.

The function rceps calculates the real cepstrum of the signal, ignoring the information contained in the phase spectrum. The same function also returns a minimum-phase reconstruction of the signal: the zeros of the z-transform of such a signal lie inside the unit circle of the complex plane, and its cepstrum coincides with the cepstrum of the original signal.

The complex cepstrum, calculated using the function cceps, takes into account both amplitude and phase information, so its relationship with the original signal is one-to-one. The inverse transformation, that is, the calculation of the signal from a known complex cepstrum, is performed by the function icceps.
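A minimal sketch (the sequence is illustrative) contrasting the two cepstra: the complex cepstrum can be inverted exactly, while the real cepstrum only yields a minimum-phase reconstruction:

x = [1 2 3 2 1 0.5].'; % illustrative sequence
[xhat, nd] = cceps(x); % complex cepstrum and the delay applied by the function
xr = icceps(xhat, nd); % exact reconstruction of x
[c, ymin] = rceps(x); % real cepstrum and the minimum-phase reconstruction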

Changing the sample rate

The transformations also include a group of functions that change the sampling rate of a signal. Most often the sampling rate needs to be changed by an integer factor. Increasing the sampling rate is called interpolation, and decreasing it is called decimation (thinning). Any sampling-rate conversion requires several steps to be performed in succession. The Signal Processing package contains functions that implement both the individual steps and the necessary sequences of them.

To perform interpolation, the required number of zeros is inserted between the signal samples (function upsample), and the resulting signal is then passed through a low-pass filter. Together, these two stages are implemented by the function interp.

To perform decimation, the signal is passed through a low-pass filter, and then only every N-th sample is retained (function downsample). Together, these two stages are implemented by the function decimate.

The combination of interpolation and decimation makes it possible to change the sampling rate by a factor expressed by an arbitrary rational fraction. To increase the efficiency of the calculations, such a conversion is performed by the specialized function resample, which in turn calls the function upfirdn (zero insertion, filtering and decimation). The main difference between these two functions is that upfirdn allows you to specify the impulse response of the filter used, whereas resample calculates the filter automatically.
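A minimal sketch (the rates and the signal are assumed) converting a signal from 8 kHz to 6 kHz, that is, by the rational factor 3/4:

fs = 8000; % assumed original sampling rate
x = sin(2*pi*50*(0:fs-1)/fs).'; % one second of an illustrative 50 Hz tone
y = resample(x, 3, 4); % new sampling rate is 6000 Hz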

Finally, for completely arbitrary recalculation of the sampling instants, general-purpose interpolation functions from the MATLAB core, such as interp1 and spline, can be used.

As an example, let us interpolate the signal used above to demonstrate the functions pulstran and sinc, increasing the sampling rate by a factor of four. Zero samples are added at the edges of the signal, since the function interp requires the original sequence to be at least nine samples long.

sd = [0 0 1 2 3 1 -1 2 0 0]; % original signal padded with zeros (illustrative values from the example above)
t = 0:9; % discrete time for the original signal
sd4 = interp(sd, 4); % interpolation
t4 = (0:39)/4; % discrete time for the interpolated signal
stem(t, sd) % graph of the original signal
hold on
plot(t4, sd4, 'x') % plot of the interpolated signal
hold off

In the resulting graph, the original signal is shown by stems and the interpolated signal by crosses.

Auxiliary functions

This group contains quite a large number of functions. Many of them are intended primarily for "internal use": they are called by other functions of the package. However, a number of functions in this category are of entirely independent value.

The function buffer allows you to represent a vector of signal samples as a matrix of consecutive frames, and these frames may overlap.
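A minimal sketch (the signal and frame parameters are illustrative) splitting a signal into overlapping frames:

x = randn(1000, 1); % illustrative signal
F = buffer(x, 64, 16); % frames of 64 samples overlapping by 16 samples
size(F) % 64 rows; the number of columns is the number of frames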

The functions modulate and demod perform modulation and demodulation, respectively. The following types of modulation are supported: amplitude, amplitude with suppressed carrier, single-sideband, phase, frequency, quadrature, pulse-width and pulse-position.
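A minimal sketch (the carrier and sampling frequencies are assumed) of amplitude modulation followed by demodulation:

fs = 8000; fc = 1000; % assumed sampling and carrier frequencies
t = (0:fs-1).'/fs;
x = sin(2*pi*50*t); % illustrative message signal
y = modulate(x, fc, fs, 'am'); % amplitude modulation
z = demod(y, fc, fs, 'am'); % demodulation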

The functions uencode and udecode implement, respectively, uniform quantization of a signal and its restoration from the quantization level numbers.
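A minimal sketch of 3-bit uniform quantization of a ramp spanning the default range [-1, 1]:

x = -1 : 0.25 : 1;
q = uencode(x, 3); % quantization level numbers 0..7
y = udecode(q, 3); % restored (quantized) signal values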

The function strips is designed to display a signal graph in several rows. This is useful when you need to look at a long signal in its entirety and the resolution along the vertical axis is not very important.

A family of functions whose names begin with dpss is designed to calculate discrete prolate spheroidal sequences and to work with a database intended for storing the calculated sequences. Discrete prolate spheroidal sequences are used in spectral analysis by Thomson's method (see the section "Functions of Spectral Analysis and Statistical Signal Processing", subsection "Nonparametric Methods", above).

The functions cell2sos and sos2cell allow information about a filter represented as a cascade of second-order sections to be stored not only as a matrix but also as a cell array. These two functions convert between these two representations.

Function specgram calculates the spectrogram of the signal, i.e. the spectra of successive signal fragments selected using a sliding window. Calculation results can be returned as a matrix or displayed in color in time-frequency coordinates.

Function cplxpair selects complex conjugate pairs in vectors of complex numbers.

Function eqtflength allows you to equalize the lengths of two vectors by padding the shorter one with zeros at the end.

Function seqperiod is designed to check the elements of the vector for periodicity and determine the period of their repetition.

The filter design and analysis program FDATool (Filter Design & Analysis Tool) is a shell for calling the functions for synthesis and analysis of discrete filters. If the Filter Design Toolbox is installed, this program also allows you to analyze the effects associated with quantization of filter coefficients and to perform filter type conversions (low-pass to high-pass, etc.).

The program allows you not only to calculate filters from scratch, but also to save and load work sessions and edit previously saved filters. In addition, you can import a filter description from the MATLAB workspace or from files in the XILINX CORE Generator (*.coe) format. Imported filters can only be analyzed.

When analyzing, you can view the following filter characteristics (in the latest versions of the package, several characteristics can be viewed at the same time):

  • Magnitude response.
  • Phase response.
  • Magnitude and phase responses simultaneously.
  • Group delay.
  • Phase delay.
  • Impulse response.
  • Step response.
  • Location of zeros and poles on the complex plane.
  • Filter coefficients.
  • Information about the filter structure.

The calculated filter can be used as follows:

  • Save the session for later loading into the FDATool program for further editing and analysis.
  • Export to MATLAB workspace.
  • Export to text file.
  • Export to MAT-file.
  • Export to a C header (*.h) file.
  • Export to SPTool (see below).

The figure below shows the view of the FDATool program window with the results of calculating the elliptic low-pass filter using the bilinear z-transform method. The frequency response and group delay graphs are displayed.

The signal processing program SPTool (Signal Processing Tool) allows you to perform the following operations: import signals from MAT-files or from the MATLAB workspace; view signal graphs (including several at once); apply various spectral analysis methods to signals and view the resulting graphs; calculate discrete filters using the package functions (including by direct editing of the zero and pole locations); pass signals through filters and analyze the resulting output signals.

It should be noted that in terms of filter analysis and synthesis, the capabilities of SPTool are narrower than those of FDATool (the only exception is the ability to directly edit the locations of zeros and poles, which FDATool lacks). However, these limitations are offset by the ability to export a calculated filter from FDATool to SPTool.

The figures below show the view of the SPTool program window when viewing the signal graph, when analyzing the spectrum, and when editing the location of zeros and poles of the filter.

The program for the synthesis and analysis of weighting (window) functions WinTool (Window Design and Analysis Tool), which appeared in version 6.0 (R13) of the package, is a shell for calling the window calculation functions available in the package. It displays the characteristics of a window (or of several windows at once) in the time and frequency domains.

The figure below shows the WinTool program window with the results of calculating a Chebyshev window of length 64 with a sidelobe level of -100 dB. The time-domain plot of the window and its spectrum are displayed.

Digital Filter Design

4.1.1. Basic definitions of digital filter design

A digital filter (DF) in the broad sense is any digital system that extracts a digital signal or its parameters from the mixture of signal and interference present at the system input.

A digital filter in the narrow sense is a frequency-selective circuit that performs frequency-based selection of digital signals.

Digital filters in a broad sense include:

Amplitude and phase correctors of frequency characteristics;

Differentiators;

Hilbert transformers;

Matched filters.

Digital filters in the narrow sense include frequency-selective filters:

Low pass filter (LPF);

High pass filter (HPF);

Band pass filter (PF);

Notch filter (RF).

Digital filters can be implemented:

In hardware;

In software;

In combined hardware and software form.

Hardware implementation implies the use of functional elements in the form of registers, adders, multipliers, memory devices, logic elements.

Software implementation means that the filter is presented as a program written in a programming language.

Hardware-software implementation means the implementation of part of the filter functions in hardware (ADC, DAC, multiplication, data reception / transmission) while the other part of the functions is executed in software.

Digital filter design is understood as the process that results in a program or a digital device meeting the given requirements.

The design of the digital filter includes the following steps:

1. Synthesis.

2. Development of calculation algorithms.

3. Verification by simulation.

4. Practical implementation and debugging.

The result of synthesis is the filter block diagram and a set of coefficients of the difference equation and the transfer function.

The development of the calculation algorithm depends on the word length of the registers, the number of processor accumulators, the possibility of parallelizing operations, and the presence of multiply-accumulate units. The final algorithm should ensure that the filter operates in real time with minimal loss of quality.

Verification by simulation is carried out off-line (not in real time) on standard test signals using software emulators. At this stage, logical errors are eliminated and the compliance of the filter with the specified characteristics is checked.

Practical implementation and debugging are carried out in real time with the help of debugging modules.



General information about the synthesis of digital filters

During the synthesis process, the following steps are performed:

Filter requirements are set;

The coefficients of the transfer function or difference equation are calculated;

The block diagram of the digital filter is formed.

The requirements for the digital filter can be specified either in the time domain or in the frequency domain, depending on the purpose of the filter.

Requirements in the time domain are usually specified for matched filters through the required impulse response.

Requirements in the frequency domain are usually specified for frequency selective filters.

For example, the requirements for a bandpass filter are characterized by five frequency bands (Figure 2.1):

The central passband (PP);

Two stopbands (PZ1, PZ2);

Two transition bands.

Figure 2.1 - Diagram of the requirements for the frequency response of the bandpass filter (PF)

Figure 2.1 shows:

The cutoff frequency of the first stopband PZ1 and the width of that stopband;

The left passband cutoff frequency;

The right passband cutoff frequency and the passband width;

The cutoff frequency of the second stopband PZ2 and the width of that stopband;

Transition bands 1 and 2 and their widths.

One parameter specifies the maximum permissible deviation of the frequency response from 1 within the passband (the passband ripple), and another the maximum permissible deviation of the frequency response from 0 within the stopbands (the stopband attenuation). Requirements on the frequency response within the transition bands are usually not imposed.

There are the following types of synthesis methods:

Direct methods of synthesis;

Synthesis methods using an analog prototype.

Direct methods fall into two categories:

Optimal methods;

Suboptimal methods.

In optimal methods, numerical methods are used to find the minimum of a given objective function. As the minimized measure of the deviation of the filter characteristic from the desired one, an error functional (2.1) is used.

Suboptimal methods make it possible to simplify calculations by taking into account the specifics of the problem.

The theoretical part for this laboratory work is contained in the tutorial "Methods for synthesizing digital filters" (file filtrs\lecture_dsp.doc)

  1. Working with the QEDESIGN 1000 program.

Run QED.exe to launch the QEDESIGN 1000 Digital Filter Design System.

In the main menu that opens, select the Design submenu; a window for selecting the type of filter to be calculated will open:

Fig.1. Menu "Design" Fig.2. Options Menu

Here you can select the filter design method:

1. Calculation of the IIR filter (IIR Design).

For IIR filters, additionally open the Options menu and select one of the methods:

    Bilinear Transformation

    method of invariant impulse response (Impulse Invariant)

2. Calculation of the FIR filter by the weighting method using the window (FIR Design (Windows))

3. Calculation of an optimal (equiripple) FIR filter (Equiripple FIR Filter Design)

Select the desired method by clicking the left mouse button.

1 - low-pass filter (Lowpass)

2 - high pass filter (Highpass)

3 - bandpass filter (Bandpass)

4 - notch filter (Bandstop)

After selecting the filter type, a window for setting filter parameters will open.

Fig.3. Window for setting low-pass filter parameters

Fig 4. Window for setting the parameters of the high-pass filter

In this window you need to set:

Sampling frequency Fd (Sampling Frequency);

Passband cutoff frequencies Wp (Passband Frequency);

Stopband cutoff frequencies Ws (Stopband Frequency);

Ripple (attenuation) in the passband (Passband Ripple);

Ripple (attenuation) in the stopband (Stopband Ripple).

In doing so, remember:

for low-pass filters Wp < Ws;

for high-pass filters Wp > Ws;

for bandpass filters Ws1 < Wp1 < Wp2 < Ws2;

for notch filters Wp1 < Ws1 < Ws2 < Wp2.

The cutoff frequencies must be less than the sample rate divided by 2.

Typical ranges of ripple (attenuation):

in the passband: 0.1-3 dB;

in the stopband: 20-100 dB.

Fill in all the fields and click the "Accept" button.

Then go to the main menu and select the "Start" menu item.

For IIR filters, the window shown in Fig. 5 opens.

Fig 5. Window for calculating IIR filters

Select the prototype analog filter type (Select Analog Filter Type):

1- Butterworth;

2 - Chebyshev;

3 - inverse Chebyshev;

4 - elliptic;

5 - Bessel;

Enter the desired filter order (Enter Order Desired).

The filter order must be even and must not exceed the estimated order (Estimated Filter Order).

For FIR filters calculated by the window method, the window shown in Fig. 6 opens.

Fig. 6. Window for calculating FIR filters by the window method.

The possible window functions (Window Functions) are listed here, together with the recommended number of taps (Estimated Order):

1 - rectangular;

2 - triangular;

4 - Hamming;

5 - Blackman;

and others.

Select number of filter taps (Enter Number of Taps Desired)

The number of taps must be at least 2 and must not exceed the recommended value.

For optimal FIR filters, enter the number of taps (Estimated Number of Taps).

The number of taps must be at least 3 and must not exceed the recommended value.

The user has the ability to display the following graphs on the screen and on the printer:

For IIR filters:

Amplitude characteristic (Magnitude)

Phase response (Phase);

Poles and zeros of the transfer function (Poles and Zeros);

Group delay characteristic (Group Delay);

For FIR filters:

Amplitude characteristic (Magnitude);

Amplitude response on a logarithmic scale (Log10 Magnitude);

Impulse response (Impulse Response);

Transient response (Step Response);

The menu for saving the results will open (Fig. 7).

Fig. 7. Save Results menu

The common filename (eg filtr1) is used to save the results.

If the Save Problem Specifications option has been selected, the filter specification will be saved in the *.spc file.

If the Create Quantized Coefficient File option is selected, the filter coefficients will be saved; the system performs quantization of the filter coefficients. This option must always be selected, otherwise the filter coefficients will not be saved.

Fig. 8 shows the window for setting the quantization parameters; it also contains information about how the filter is implemented.

Fig. 8. Quantize Options menu

For IIR filters, the following implementation types (Implementation Type) are possible:

    Cascaded form with second-order sections in transposed form (Cascaded Transposed Second Order Section);

    Cascaded form with second-order sections in canonic form (Cascaded Canonic Second Order Section);

    Parallel form with second-order sections in transposed form (Parallel Transposed Second Order Section).

The results of calculating the filter coefficients are saved in the *.flt file.

The following is an example of the contents of a file of quantized band pass filter coefficients.

FILTER COEFFICIENT FILE

FILTER TYPE BAND PASS

ANALOG FILTER TYPE ELLIPTIC

PASSBAND RIPPLE IN -dB -3.0000

STOPBAND RIPPLE IN -dB -20.0000

PASSBAND CUTOFF FREQUENCIES .500000E+03 .600000E+03 HERTZ

STOPBAND CUTOFF FREQUENCIES .400000E+03 .700000E+03 HERTZ

SAMPLING FREQUENCY .400000E+04 HERTZ

FILTER DESIGN METHOD: BILINEAR TRANSFORMATION

FILTER ORDER 4 0004h

NUMBER OF SECTIONS 2 0002h

NO. OF QUANTIZED BITS 16 0010h

QUANTIZATION TYPE - BLOCK FLOATING POINT

COEFFICIENTS SCALED FOR CASCADE FORM II

4 FFFFFFFC /* shift count for overall gain */

21237 000052F5 /* overall gain */

2 FFFFFFFE /* shift count for section 1 numerator values */

30006 00007536 /* section 1 coefficient B0 */

21867 FFFFAA95 /* section 1 coefficient B1 */

30006 00007536 /* section 1 coefficient B2 */

1 00000001 /* shift count for section 1 denominator values */

19208 00004B08 /* section 1 coefficient A1 */

15554 FFFFC33E /* section 1 coefficient A2 */

3 00000003 /* shift count for section 2 numerator values */

15490 00003C82 /* section 2 coefficient B0 */

25573 FFFF9C1B /* section 2 coefficient B1 */

15490 00003C82 /* section 2 coefficient B2 */

1 00000001 /* shift count for section 2 denominator values */

22299 0000571B /* section 2 coefficient A1 */

15636 FFFFC2EC /* section 2 coefficient A2 */

2289276123046875D+00 3FCD4D8000000000 .22893000E+00 /* section 1 B0 */

1668319702148437D+00 BFC55AC000000000 -.16683484E+00 /* section 1 B1 */

2289276123046875D+00 3FCD4D8000000000 .22893000E+00 /* section 1 B2 */

1172363281250000D+01 3FF2C20000000000 -.11723777E+01 /* section 1 A1 */

9493560791015625D+00 BFEE612000000000 .94936011E+00 /* section 1 A2 */

3781799316406250D+01 400E412000000000 .37818290E+01 /* section 2 B0 */

6243408203125000D+01 C018F94000000000 -.62435106E+01 /* section 2 B1 */

3781799316406250D+01 400E412000000000 .37818290E+01 /* section 2 B2 */

1361022949218750D+01 3FF5C6C000000000 -.13610497E+01 /* section 2 A1 */

9543914794921875D+00 BFEE8A6000000000 .95439489E+00 /* section 2 A2 */

Implementation of the cascaded-form structure of an IIR filter with second-order sections in canonic form (type 2)

The synthesized filter can be represented as a product of second-order sections:

H(z) = H1(z) · H2(z) · ... · HK(z),

where K is the integer part of (N+1)/2 and N is the filter order. Each section Hk(z) has the general form

Hk(z) = (B0k + B1k·z^-1 + B2k·z^-2) / (1 + A1k·z^-1 + A2k·z^-2).

The filter can be implemented using the following form for each second-order section in canonical form:

Fig.9. Second order section in canonical form

(What is the significant difference between such an implementation and a direct implementation?)
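As a rough illustration of how such a cascade processes a signal, here is a MATLAB sketch; the section coefficients are the unquantized values from the listing above (the overall gain factor is omitted), the input signal is arbitrary, and the denominator convention 1 + A1·z^-1 + A2·z^-2 is assumed:

sos = [0.22893 -0.16683 0.22893 1 -1.17238 0.94936; % section 1: B0 B1 B2 1 A1 A2
       3.78183 -6.24351 3.78183 1 -1.36105 0.95439]; % section 2
x = randn(1000, 1); % arbitrary input signal
y = x;
for k = 1:size(sos, 1) % pass the signal through each section in turn
    y = filter(sos(k, 1:3), sos(k, 4:6), y);
end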

Implementation of the structure of the cascaded form of an FIR filter with second-order sections in canonical form

H(z) can be decomposed into a product of second-order sections:

H(z) = H1(z) · H2(z) · ... · HK(z), where Hk(z) = B0k + B1k·z^-1 + B2k·z^-2

and K is the integer part of (M+1)/2 (M is the length of the filter).

A simple cascaded implementation of an FIR filter with second-order sections:

Fig. 10. A simple cascaded implementation of an FIR filter.