The concept of "signal" can be interpreted in different ways. This is a code or a sign transferred into space, a carrier of information, a physical process. The nature of alerts and their relationship to noise influence its design. Signal spectra can be classified in several ways, but one of the most fundamental is their change over time (constant and variable). The second main classification category is frequencies. If we consider in the time domain in more detail, among them we can distinguish: static, quasi-static, periodic, repetitive, transient, random and chaotic. Each of these signals has certain properties which may influence the relevant design decisions.

Signal types

A static signal is, by definition, unchanged over a very long period of time. A quasi-static signal is determined by its direct-current level, so it must be handled in low-drift amplifier circuits. Signals of this kind are also encountered at radio frequencies, since some RF circuits produce a steady voltage level - for example, a continuous-wave signal with constant amplitude.

The term "quasi-static" means "nearly unchanged" and therefore refers to a signal that changes unusually slowly over a long time. It has characteristics that are more like static alerts (permanent) than dynamic alerts.

Periodic Signals

These are signals that repeat exactly on a regular basis. Examples of periodic waveforms include sine, square, sawtooth and triangular waves. The nature of a periodic waveform means that it is identical at corresponding points along the timeline: if the timeline advances by exactly one period (T), then the voltage, polarity and direction of change of the waveform repeat. For the voltage waveform this can be expressed by the formula V(t) = V(t + T).
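As a quick illustration of the condition V(t) = V(t + T), the short NumPy sketch below (not part of the original notes; the frequency and amplitude are arbitrary example values) checks numerically that a sine wave satisfies it.

    import numpy as np

    # Illustrative check of the periodicity condition V(t) = V(t + T) for a
    # sine wave; f0 and the amplitude are arbitrary example values.
    f0 = 50.0                  # fundamental frequency, Hz
    T = 1.0 / f0               # period, s
    t = np.linspace(0.0, 2 * T, 1000)

    V = lambda x: 2.0 * np.sin(2 * np.pi * f0 * x)   # peak amplitude 2

    # For a truly periodic waveform the difference is zero up to rounding error.
    print(np.max(np.abs(V(t) - V(t + T))))           # ~1e-15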

Repeating Signals

Repeating signals are quasi-periodic in nature and therefore bear some resemblance to a periodic waveform. The main difference is found by comparing the signal at f(t) and f(t + T), where T is the signal period. Unlike periodic signals, in repeating signals these points may not be identical, although they will be very similar, as will the overall waveform. Such a signal may contain either transient or persistent features, which vary.

Transient signals and impulse signals

Both kinds are either one-time events or periodic events whose duration is very short compared to the period of the waveform, i.e. t1 << t2. Such signals, when they are transients, are either deliberately generated in radio-frequency circuits as pulses or appear as transient noise. Thus, from the above it can be concluded that the phase spectrum of a signal gives rise to its oscillations in time, which may be constant or periodic.

Fourier series

All continuous periodic signals can be represented by a fundamental-frequency sine wave and a set of harmonically related sine and cosine components that add linearly; together these oscillations make up the waveform. An elementary sine wave is described by the formula v = Vm sin(ωt), where:

  • v is the instantaneous amplitude.
  • Vm is the peak amplitude.
  • ω is the angular frequency.
  • t is the time in seconds.

The period is the time between repetitions of identical events: T = 2π/ω = 1/F, where F is the frequency in cycles per second (hertz).

The Fourier series that makes up a waveform can be found by decomposing the given waveform into its component frequencies, either with a bank of frequency-selective filters or with a digital signal processing algorithm called the fast Fourier transform (FFT). The waveform can also be built up from scratch from its components. The Fourier series of any waveform can be expressed by the formula: f(t) = a0/2 + Σn (an cos nωt + bn sin nωt), where the summation runs over n = 1, 2, 3, …
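As a hedged illustration of this decomposition (not from the original text), the NumPy sketch below extracts the coefficients an and bn of a sampled periodic waveform with the fast Fourier transform; the square-wave test signal and the number of samples are arbitrary choices.

    import numpy as np

    # Sketch: estimate Fourier-series coefficients a_n, b_n of one period of a
    # sampled waveform using the FFT (test signal and N are illustrative).
    N = 1024                                  # samples per period
    t = np.arange(N) / N                      # one period, T = 1
    f = np.sign(np.sin(2 * np.pi * t))        # square wave to decompose

    S = np.fft.rfft(f) / N
    a = 2 * S.real                            # a_n, n = 0, 1, 2, ...
    b = -2 * S.imag                           # b_n
    # a[0]/2 is the mean (dc) value of the waveform.

    # For an ideal square wave b_n is close to 4/(pi*n) for odd n, ~0 for even n.
    print(b[1:6])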

9. Properties of the Fourier transform. Linearity properties, time scale changes, others. Theorem on the spectrum of the derivative. Theorem on the spectrum of the integral.

10. Discrete Fourier Transform. Radio interference. Interference classification.

The Discrete Fourier Transform can be obtained directly from the integral transform by discretizing its arguments (tk = kΔt, fn = nΔf):

S(f) = ∫ s(t) exp(-j2πft) dt,   S(fn) = Δt Σk s(tk) exp(-j2πfn kΔt),   (6.1.1)

s(t) = ∫ S(f) exp(j2πft) df,   s(tk) = Δf Σn S(fn) exp(j2πnΔf tk).   (6.1.2)

Recall that discretization of a function in time leads to periodization of its spectrum, and discretization of the spectrum in frequency leads to periodization of the function. It should also be kept in mind that the values (6.1.1) of the number series S(fn) are samples of the continuous function S′(f), the spectrum of the discrete function s(tk), just as the values (6.1.2) of the number series s(tk) are samples of the continuous function s′(t); when these continuous functions S′(f) and s′(t) are restored from their discrete samples, the correspondence S′(f) = S(f) and s′(t) = s(t) is guaranteed only if the Kotelnikov-Shannon theorem is satisfied.

For the discrete transform pair s(kΔt) ⇔ S(nΔf), both the function and its spectrum are discrete and periodic, and the numerical arrays that represent them correspond to their definition on the main periods T = NΔt (from 0 to T, or from -T/2 to T/2) and 2fN = NΔf (from -fN to fN), where N is the number of samples, while:

f = 1/T = 1/(Nt), t = 1/2f N = 1/(Nf), tf = 1/N, N = 2Tf N . (6.1.3)

Relations (6.1.3) are the conditions of informational equivalence of the dynamic (time) and frequency forms of representation of discrete signals. In other words, the number of samples of the function and of its spectrum must be the same. But each sample of the complex spectrum is represented by two real numbers, so does the complex spectrum not contain twice as many numbers as the function has samples? It does. However, the complex form of the spectrum is merely a convenient mathematical representation of the spectral function, whose real samples are formed by adding two conjugate complex samples. Complete information about the spectrum of the function in complex form is therefore contained in just one half of it - the real and imaginary parts of the complex samples in the frequency interval from 0 to fN - because the second half of the range, from 0 to -fN, is conjugate to the first half and carries no additional information.
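The NumPy sketch below (an illustration added here, with arbitrary N) shows this redundancy directly: for a real signal the second half of the DFT spectrum consists of the complex conjugates of the first half, so the half-spectrum from 0 to fN carries all of the information. Note that the code uses zero-based indexing, so the conjugate pairs appear as S[n] and S[N-n].

    import numpy as np

    # Conjugate symmetry of the spectrum of a real discrete signal.
    N = 16
    s = np.random.randn(N)          # any real discrete signal
    S = np.fft.fft(s)

    n = 3
    print(S[n], np.conj(S[N - n]))  # equal: S[N-n] is the conjugate of S[n]

    # np.fft.rfft returns exactly this non-redundant half (N/2 + 1 samples).
    print(np.allclose(np.fft.rfft(s), S[:N // 2 + 1]))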

In the discrete representation of signals, the argument tk is usually replaced by the sample number k (by default Δt = 1, k = 0, 1, …, N-1), and the Fourier transform is performed over the argument n (the number of the frequency step) on the main periods. For N a multiple of 2:

S(fn) ≡ Sn = Σk sk exp(-j2πkn/N),   n = -N/2, …, 0, …, N/2.   (6.1.4)

s(tk) ≡ sk = (1/N) Σn Sn exp(j2πkn/N),   k = 0, 1, …, N-1.   (6.1.5)

The main period of the spectrum in (6.1.4) spans cyclic frequencies from -0.5 to 0.5 and angular frequencies from -π to π. For an odd value of N, the boundaries of the main period in frequency (the values ±fN) lie half a frequency step beyond the samples ±N/2 and, accordingly, the upper summation limit in (6.1.5) is set equal to N/2.

In computer calculations, in order to avoid negative frequency arguments (negative values of the index n) and to use identical algorithms for the direct and inverse Fourier transforms, the main period of the spectrum is usually taken in the range from 0 to 2fN (0 ≤ n ≤ N), and the summation in (6.1.5) is accordingly carried out from 0 to N-1. In this case it should be taken into account that the complex-conjugate samples Sn* of the interval (-N, 0) of the two-sided spectrum correspond, in the interval 0 to 2fN, to the samples SN+1-n (i.e., in the interval 0 to 2fN the conjugate samples are Sn and SN+1-n).
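A minimal NumPy sketch of the pair (6.1.4)-(6.1.5), written directly from the sums with the one-sided index range 0 ≤ n ≤ N-1 discussed above, is given below; the function names and the test signal are illustrative, not part of the text.

    import numpy as np

    def dft(s):
        # S_n = sum_k s_k * exp(-j*2*pi*k*n/N), n = 0..N-1
        N = len(s)
        n = np.arange(N)
        k = n.reshape(-1, 1)
        return s @ np.exp(-2j * np.pi * k * n / N)

    def idft(S):
        # s_k = (1/N) * sum_n S_n * exp(j*2*pi*k*n/N), k = 0..N-1
        N = len(S)
        n = np.arange(N)
        k = n.reshape(-1, 1)
        return (S @ np.exp(2j * np.pi * k * n / N)) / N

    s = np.random.randn(32)
    print(np.allclose(idft(dft(s)), s))        # True: the pair is exactly invertible
    print(np.allclose(dft(s), np.fft.fft(s)))  # matches the library FFT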

Example. On the interval T = [0, 99], N = 100, a discrete signal s(k) is given: a rectangular pulse with unit values at the points k from 3 to 8. The signal shape and the modulus of its spectrum in the main frequency range, calculated by the formula S(n) = Σk s(k) exp(-j2πkn/100) for n numbered from -50 to +50 with the frequency step Δω = 2π/100, are shown in Fig. 6.1.1.

Fig. 6.1.1. Discrete signal and the modulus of its spectrum.

Fig. 6.1.2 shows the envelope of the spectrum values in another form of representation of the main range. Regardless of the form of representation, the spectrum is periodic, which is easy to see if the spectrum values are calculated for a larger interval of the argument n while keeping the same frequency step, as shown in Fig. 6.1.3 for the envelope of the spectrum values.

Fig. 6.1.2. Spectrum modulus. Fig. 6.1.3. Spectrum modulus.

Fig. 6.1.4 shows the inverse Fourier transform of the discrete spectrum, performed by the formula s′(k) = (1/100) Σn S(n) exp(j2πkn/100). It demonstrates the periodization of the original function s(k), but the main period k = 0…99 of this function coincides exactly with the original signal s(k).

Fig. 6.1.4. Inverse Fourier transform.
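The example above can be reproduced with the short NumPy sketch below (added here for illustration; only the values N = 100 and k = 3…8 are taken from the text, the rest is an assumption of the sketch).

    import numpy as np

    N = 100
    k = np.arange(N)
    s = np.where((k >= 3) & (k <= 8), 1.0, 0.0)        # rectangular pulse

    # Spectrum on the main range, n = -50..50.
    n = np.arange(-N // 2, N // 2 + 1)
    S = np.array([np.sum(s * np.exp(-2j * np.pi * k * m / N)) for m in n])
    print(np.abs(S)[:5])                               # modulus of the spectrum

    # Inverse transform over one full period of n restores s(k) exactly.
    s_rec = np.array([np.mean(S[:-1] * np.exp(2j * np.pi * n[:-1] * m / N)) for m in k])
    print(np.allclose(s_rec.real, s))                  # True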

Transformations (6.1.4)-(6.1.5) are called the Discrete Fourier Transform (DFT). In principle, all the properties of the integral Fourier transform remain valid for the DFT, but the periodicity of the discrete functions and spectra must be taken into account. The product of the spectra of two discrete functions (which arises in any frequency-domain processing operation, for example when filtering signals directly in the frequency form) corresponds to the convolution of the periodized functions in the time domain (and vice versa). Such a convolution is called cyclic (see Section 6.4), and at the end sections of the information intervals its results can differ significantly from the convolution of the finite discrete functions (linear convolution).
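The difference between cyclic and linear convolution mentioned here can be seen in the small NumPy sketch below (the sequences are arbitrary illustrative values).

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0])
    h = np.array([1.0, 1.0, 1.0])

    linear = np.convolve(x, h)              # linear convolution, length 4 + 3 - 1 = 6

    # Cyclic convolution on N = 4 points via the product of DFT spectra.
    N = len(x)
    cyclic = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, N)).real

    print(linear)   # [1. 3. 6. 9. 7. 4.]
    print(cyclic)   # [8. 7. 6. 9.]  -- the end samples wrap around and differ

    # Zero-padding both sequences to at least 6 points makes the results coincide.
    M = len(x) + len(h) - 1
    print(np.fft.ifft(np.fft.fft(x, M) * np.fft.fft(h, M)).real)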

It can be seen from the DFT expressions that computing each harmonic requires N complex multiply-add operations, and hence N² operations for the complete DFT. For large data arrays this can lead to considerable computation time. The calculations are accelerated by using the fast Fourier transform (FFT).
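A rough timing sketch of this difference is given below (added for illustration; N and the measured times depend entirely on the machine and are only indicative of the N² versus N log N behaviour).

    import time
    import numpy as np

    N = 2048
    s = np.random.randn(N)

    t0 = time.perf_counter()
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)   # N x N matrix: ~N^2 operations
    S_direct = W @ s
    t1 = time.perf_counter()

    S_fast = np.fft.fft(s)                         # fast Fourier transform
    t2 = time.perf_counter()

    print(np.allclose(S_direct, S_fast))           # same result
    print(f"direct: {t1 - t0:.4f} s, fft: {t2 - t1:.6f} s")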

Interference is the name usually given to extraneous electrical disturbances that are superimposed on the transmitted signal and make it harder to receive. At high interference intensity, reception becomes practically impossible.

Interference classification:

a) interference from neighboring radio transmitters (stations);

b) interference from industrial installations;

c) atmospheric interference (thunderstorms, precipitation);

d) interference caused by the passage of electromagnetic waves through the layers of the atmosphere: troposphere, ionosphere;

e) thermal and shot noise in the elements of radio circuits, due to the thermal motion of electrons.

Mathematically, the signal at the receiver input can be represented either as the sum of the transmitted signal and the interference, in which case the interference is called additive (or simply noise), or as the product of the transmitted signal and the interference, in which case the interference is called multiplicative. Multiplicative interference leads to significant changes in the signal intensity at the receiver input and explains phenomena such as fading.
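A toy NumPy sketch of the two models (all signals and coefficients are arbitrary illustrative choices, not from the text) is given below.

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 1000)
    s = np.cos(2 * np.pi * 10 * t)                  # transmitted signal

    # Additive interference: x(t) = s(t) + n(t).
    noise = 0.3 * rng.standard_normal(t.size)
    x_additive = s + noise

    # Multiplicative interference: x(t) = mu(t) * s(t); a slowly varying mu(t)
    # changes the signal intensity at the receiver input, i.e. produces fading.
    mu = 1.0 + 0.8 * np.sin(2 * np.pi * 0.5 * t)
    x_multiplicative = mu * s

    print(x_additive[:3], x_multiplicative[:3])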

The presence of interference makes it difficult to receive signals; at a high interference intensity, signal recognition can become almost impossible. The ability of a system to resist interference is called noise immunity.

External natural active interference is the noise resulting from the radio emission of the earth's surface and of space objects, and from the operation of other electronic equipment. The set of measures aimed at reducing the mutual interference of radio-electronic systems (RES) is called electromagnetic compatibility. This set includes technical measures, such as improving the radio equipment and choosing the signal shape and the method of processing it, as well as organizational measures: frequency regulation, spatial separation of RES, normalization of the level of out-of-band and spurious emissions, etc.

11. Discretization of continuous signals. Theorem of Kotelnikov (counts). The concept of the Nyquist frequency. Concept of discretization interval.

Discretization of analog signals. Kotelnikov series

Any continuous message s(t) that occupies a finite time interval Tc can be transmitted with sufficient accuracy by a finite number N of samples s(nT), i.e. by a sequence of short pulses separated by pauses.

Discretization of a message in time is a procedure that consists in replacing the uncountable set of instantaneous signal values with a countable (discrete) set that contains information about the values of the continuous signal at certain points in time.

With the discrete method of transmitting a continuous message, the time during which the communication channel is busy transmitting the message can be reduced from Tc to roughly Nτ, where τ is the duration of the pulse used to transmit one sample; this also makes it possible to transmit several messages simultaneously over one communication channel (time multiplexing of signals).

The simplest discretization method is based on the sampling theorem formulated by V.A. Kotelnikov for signals with a limited spectrum:

if the highest frequency in the spectrum of the function s(t) is less than Fm, then the function s(t) is completely determined by the sequence of its values at moments separated from each other by no more than T = 1/(2Fm) seconds and can be represented by the series:

s(t) = Σn s(nT) · sin[ωm(t - nT)] / [ωm(t - nT)],   ωm = 2πFm,   n = …, -1, 0, 1, …   (1)

Here T = 1/(2Fm) denotes the interval between samples on the time axis (the sampling interval), nT is the sampling instant, and s(nT) is the value of the signal at that instant.

Series (1) is called the Kotelnikov series, and the samples s(nT) of the signal are sometimes called the time spectrum of the signal.
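A minimal NumPy sketch of reconstruction by the Kotelnikov series is shown below (added for illustration; the value of Fm, the test tone and the number of samples are assumptions of the sketch). It uses np.sinc(x) = sin(πx)/(πx), which coincides with the basis function when the argument is (t - nT)/T.

    import numpy as np

    Fm = 100.0                 # highest frequency in the signal spectrum, Hz
    T = 1.0 / (2.0 * Fm)       # sampling interval from the theorem
    n = np.arange(0, 40)       # sample numbers
    samples = np.sin(2 * np.pi * 60.0 * n * T)       # a 60 Hz tone, 60 < Fm

    def reconstruct(t):
        # s(t) = sum_n s(nT) * sin(wm*(t - nT)) / (wm*(t - nT)), wm = 2*pi*Fm
        return np.sum(samples * np.sinc((t - n * T) / T))

    t0 = 20.3 * T              # a point between the samples
    print(reconstruct(t0), np.sin(2 * np.pi * 60.0 * t0))
    # The two values are close; truncating the series to 40 terms limits accuracy.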

The basis function un(t) = sin[ωm(t - nT)] / [ωm(t - nT)] has the following properties:

a) at the point t = nT the function is equal to 1, because at this point its argument is zero and the limit of sin(x)/x as x → 0 is 1;

b) at the points t = kT, k ≠ n, the function is equal to zero, because the argument of the sine at these points is a multiple of π, and the sine itself is equal to zero;

c) the spectral density of the function un(t) is uniform in the frequency band |ω| ≤ ωm and equal to π/ωm = T. This conclusion follows from the frequency-time reciprocity of the Fourier transform pair. The phase-frequency characteristic (PFC) of this spectral density is linear and equal to -ωnT (by the signal shift theorem). In this way,

Un(ω) = T·exp(-jωnT) for |ω| ≤ ωm,   Un(ω) = 0 for |ω| > ωm.

The time and frequency representations of the function un(t) are given in Fig. 3.

A graphical interpretation of the Kotelnikov series is shown in Fig.4.

The Kotelnikov series (1) has all the properties of a generalized Fourier series with the basis functions un(t), and therefore defines the function s(t) not only at the reference points but also at any moment in time.

The orthogonality interval of the functions un(t) is infinite. The squared norm is ||un||² = ∫ un²(t) dt = π/ωm = T.

The coefficients of the series, determined by the general formula for a generalized Fourier series, are found (using Parseval's equality) as Cn = (1/||un||²) ∫ s(t) un(t) dt.

Consequently, Cn = s(nT): the coefficients of the series are the samples of the signal taken at the instants t = nT.

When the signal spectrum is limited by a finite highest frequency, series (1) converges to the function s(t) for any value of t.

If we take the interval T between samples to be less than 1/(2Fm), then the width of the spectrum of the basis function will be greater than the width of the signal spectrum, and the fidelity of signal reproduction will therefore be higher, especially in cases where the signal spectrum is not limited in frequency and the highest frequency Fm has to be chosen from energy or informational considerations, leaving the "tails" of the signal spectrum unaccounted for.

With an increase in the distance between samples (T greater than 1/(2Fm)), the spectrum of the basis function becomes narrower than the spectrum of the signal, and the coefficients Cn become samples of another function s1(t), whose spectrum is limited by the frequency 1/(2T).

If the duration of the signal Tc is finite, then, strictly speaking, its frequency band is infinite, because finite duration and finite bandwidth are incompatible. However, one can almost always choose the highest frequency Fm so that the "tails" of the spectrum either contain a small fraction of the energy or have little effect on the shape of the analog signal. Under this assumption, the number of samples N over the time Tc will be equal to Tc/T, i.e. N = 2FmTc. Series (1) in this case has the summation limits 0 and N.
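The effect of choosing T too large (leaving spectral "tails" above Fm unaccounted for) is aliasing, illustrated by the short sketch below (frequencies are arbitrary example values).

    import numpy as np

    # A 70 Hz tone sampled at 100 Hz (so Fm is effectively 50 Hz) produces exactly
    # the same samples as a 30 Hz tone: the component above Fm folds back.
    fs = 100.0                      # sampling rate, 1/T
    n = np.arange(20)
    x_70 = np.cos(2 * np.pi * 70.0 * n / fs)
    x_30 = np.cos(2 * np.pi * 30.0 * n / fs)

    print(np.allclose(x_70, x_30))  # True: the two tones are indistinguishable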

The number N is sometimes referred to as the number of degrees of freedom of the signal, or the signal base. As the base increases, the accuracy of restoring the analog signal from the discrete one increases.

12. Time and frequency characteristics of linear radio circuits. The concept of impulse response. The concept of transient response. The concept of input and transfer frequency response.

When considering radio engineering signals, it was established that a signal can be represented both in the time domain (dynamic representation) and in the frequency domain (spectral representation). Obviously, when analyzing the processes of signal transformation, the circuits must likewise be described by appropriate time or frequency characteristics.

Let us start by considering the time characteristics of linear circuits with constant parameters. If a linear circuit performs a transformation in accordance with its operator and a signal in the form of a delta function δ(t) (in practice, a very short pulse) is applied to the input of the circuit, then the output signal (the circuit's response) h(t) is called the impulse response of the circuit. The impulse response forms the basis of one of the methods for analyzing signal transformation, which will be discussed below.

If a signal in the form of a unit step σ(t) (a "unit jump") arrives at the input of the linear circuit, then the output signal of the circuit g(t) is called the transient (step) response.

There is an unambiguous relationship between the impulse response and the transient response. Since the delta function (see subsection 1.3) is the derivative of the unit step,

δ(t) = dσ(t)/dt,

then, substituting this expression into (5.5), we obtain the impulse response as the derivative of the transient response:

h(t) = dg(t)/dt.

In turn, the transient response is the integral of the impulse response:

g(t) = ∫ h(τ) dτ, with the integration taken from -∞ to t.   (5.8)
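In discrete time the derivative becomes a first difference and the integral a cumulative sum, which the NumPy sketch below illustrates for a simple first-order filter (the filter and its coefficient are an arbitrary example, not a circuit from the text).

    import numpy as np

    def first_order(x, a):
        # y[n] = a*x[n] + (1 - a)*y[n-1] -- a simple linear circuit model
        y = np.zeros_like(x)
        for n in range(len(x)):
            y[n] = a * x[n] + (1.0 - a) * (y[n - 1] if n else 0.0)
        return y

    a = 0.2
    N = 50
    delta = np.zeros(N)
    delta[0] = 1.0                             # discrete delta function
    step = np.ones(N)                          # discrete unit step

    h = first_order(delta, a)                  # impulse response
    g = first_order(step, a)                   # transient (step) response

    print(np.allclose(g, np.cumsum(h)))              # g is the running sum of h
    print(np.allclose(h, np.diff(g, prepend=0.0)))   # h is the first difference of g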

Let us move on to the frequency characteristics of linear circuits. Applying the direct Fourier transform to the input and output signals gives their complex spectra Sin(jω) and Sout(jω).

The ratio of the complex spectrum of the output signal to the complex spectrum of the input signal is called the complex gain (complex transfer coefficient):

K(jω) = Sout(jω) / Sin(jω).   (5.9)

It follows that

Sout(jω) = K(jω) Sin(jω).

In this way, the operator of signal transformation by a linear circuit in the frequency domain is the complex gain.

We represent the complex transfer coefficient in the form

K(jω) = K(ω) exp[jφ(ω)],

where K(ω) and φ(ω) are the modulus and the argument of the complex function, respectively. The modulus of the complex gain as a function of frequency is called the amplitude-frequency characteristic (frequency response), and the argument the phase-frequency characteristic (PFC). The frequency response is an even function of frequency, and the phase-frequency characteristic is an odd one.

The time and frequency characteristics of linear circuits are interconnected by the Fourier transform,

K(jω) = ∫ h(t) exp(-jωt) dt,

which is quite natural, since they describe the same object - a linear circuit.
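The sketch below (an added illustration with an arbitrary first-order filter) computes the complex gain as the discrete Fourier transform of the impulse response; its modulus is the amplitude-frequency characteristic and its argument the phase-frequency characteristic.

    import numpy as np

    a = 0.2
    N = 512
    delta = np.zeros(N)
    delta[0] = 1.0

    # Impulse response of the example filter y[n] = a*x[n] + (1 - a)*y[n-1].
    h = np.zeros(N)
    for n in range(N):
        h[n] = a * delta[n] + (1.0 - a) * (h[n - 1] if n else 0.0)

    K = np.fft.rfft(h)          # complex gain on a discrete frequency grid
    afc = np.abs(K)             # amplitude-frequency characteristic (even in frequency)
    pfc = np.angle(K)           # phase-frequency characteristic (odd in frequency)

    print(afc[0], afc[-1])      # ~1 at zero frequency, smaller near the band edge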

13. Analysis of the impact of deterministic signals on linear circuits with constant parameters. Time, frequency, operator methods.