Kotelnikov's theorem: formulation, meaning, applications. Why is digital sound so good? Sampling rate and Kotelnikov's theorem

At the end of the nineteenth and the beginning of the twentieth century, telephone and radio communications developed rapidly. In 1882, the first telephone exchange in Russia opened; it had 259 subscribers. In Moscow at about the same time there were 200 subscribers.

In 1896, Alexander Popov transmitted the first radio signal, consisting of only two words, over a distance of 250 meters.

The development of communications has been at the forefront of technological progress. A little over a century has passed since then, and thanks to the work of scientists and engineers in this industry, we see how the world has changed.

We cannot imagine our life without the telephone, radio, television and the Internet. All of them rest on the propagation of electromagnetic waves, the theory of which was developed by James Clerk Maxwell in the middle of the nineteenth century. Electromagnetic waves carry the useful signals, and in the theory of signal transmission a fundamental role is played by a theorem of the Russian scientist and engineer, academician Vladimir Aleksandrovich Kotelnikov.

This result entered science under the name of the Kotelnikov theorem.

Vladimir Alexandrovich Kotelnikov

The future academician was born in 1908 into a family of Kazan University teachers. He studied at the Bauman Moscow Higher Technical School (MVTU) and attended the lectures that interested him at Moscow State University. In 1930, the electrical engineering faculty where Kotelnikov studied was transformed into the Moscow Power Engineering Institute, from which Kotelnikov graduated. After graduation he worked in various universities and laboratories. During the war he headed the laboratory of a classified research institute in Ufa, where he dealt with secure communication channels and message encoding.

Developments of roughly this kind are mentioned by Solzhenitsyn in his novel "In the First Circle".

For about forty years he headed the Fundamentals of Radio Engineering department and served as dean of the Faculty of Radio Engineering. Later he became director of the Institute of Radio Engineering and Electronics of the USSR Academy of Sciences.

Students of the relevant specialties still study from Kotelnikov's textbook "Fundamentals of Radio Engineering".

Kotelnikov also dealt with the problems of radio astronomy, radiophysical research of the oceans, and space research.

His last work, "Model Quantum Mechanics", written when he was almost 97, he did not manage to publish himself; it came out only in 2008.

V. A. Kotelnikov died on February 11, 2005, at the age of 96. He was twice Hero of Socialist Labour and received many state awards. A minor planet is named after him.

Kotelnikov's theorem

The development of communication systems raised many theoretical questions, for example: signals of what frequency range can be transmitted over communication channels of different physical structure and different bandwidth so that no information is lost at the receiving end?

In 1933, Kotelnikov proved his theorem, which is otherwise called the sampling theorem.

Statement of Kotelnikov's theorem:

If an analog signal has a finite (limited in width) spectrum, then it can be reconstructed uniquely and without losses from its discrete samples taken at a frequency strictly greater than twice the upper frequency.

This describes an ideal case: the signal lasts infinitely long and has no interruptions, but its spectrum is limited, as the theorem requires. Nevertheless, the mathematical model of signals with a limited spectrum applies well to real signals in practice.

Based on the Kotelnikov theorem, a method of discrete transmission of continuous signals can be implemented.

The physical meaning of the theorem

In simple words, Kotelnikov's theorem can be explained as follows. If you need to transmit a certain signal, it is not necessary to transmit it in its entirety: you can transmit its instantaneous values. The rate at which these values are taken is called the sampling frequency. It must exceed twice the upper frequency of the signal. In that case, at the receiving end, the signal can be restored without distortion.

Very important conclusions about sampling follow from Kotelnikov's theorem: different types of signals require different sampling frequencies. For voice (telephone) communication with a channel width of 3.4 kHz the sampling frequency is 6.8 kHz, and for a television signal it is about 16 MHz.
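To make the idea concrete, here is a minimal numerical sketch (in Python with NumPy; the signal and the rates are made up for illustration): a signal band-limited to 3 Hz is sampled at 8 Hz, which is more than twice the upper frequency, and is then rebuilt between the sample points by summing shifted sin x/x (sinc) terms, exactly as the theorem prescribes.

```python
import numpy as np

f_max = 3.0        # upper frequency of the test signal, Hz (illustrative)
fs = 8.0           # sampling frequency, strictly greater than 2 * f_max
dt = 1.0 / fs      # sampling interval

def signal(t):
    # a band-limited test signal: two tones at 1 Hz and 3 Hz
    return np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.cos(2 * np.pi * 3.0 * t)

# samples on a generous interval, so that truncating the series costs little
n = np.arange(-400, 401)
samples = signal(n * dt)

def reconstruct(t):
    # Kotelnikov series: u(t) = sum_k u(k*dt) * sinc((t - k*dt) / dt),
    # where np.sinc(x) = sin(pi*x) / (pi*x)
    return np.sum(samples * np.sinc((t - n * dt) / dt))

t_test = 0.123     # an instant between the sample points
print(signal(t_test), reconstruct(t_test))   # the two values agree closely
```

The reconstruction is only approximate here because the series is truncated to a finite number of samples; the theorem itself assumes an infinite sequence.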

In communication theory there are several types of communication channels. At the physical level these are wired, acoustic, optical, infrared and radio channels. And although the theorem was derived for an ideal communication channel, it is applicable to all the other channel types as well.

Multichannel telecommunications

The Kotelnikov theorem underlies multichannel telecommunications. When a signal is sampled and transmitted as pulses, the period between pulses is much greater than their duration (the ratio of the two is called the duty cycle). This means that in the gaps between the pulses of one signal, pulses of other signals can be transmitted. Systems for 12, 15, 30, 120, 180 and 1920 voice channels have been implemented; that is, about 2000 telephone conversations can be carried simultaneously over one pair of wires.
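The interleaving idea can be sketched in a few lines of Python (a toy model; real systems add framing, synchronization and many more channels): samples of several channels are merged into one pulse stream and separated again at the receiving end.

```python
def multiplex(channels):
    """Interleave equal-length channels: ch0[0], ch1[0], ..., ch0[1], ch1[1], ..."""
    return [ch[i] for i in range(len(channels[0])) for ch in channels]

def demultiplex(stream, n_channels):
    """Recover channel k by taking every n-th pulse of the combined stream."""
    return [stream[k::n_channels] for k in range(n_channels)]

# three toy "voice channels" of three samples each (values are made up)
channels = [[1, 2, 3], [10, 20, 30], [100, 200, 300]]
stream = multiplex(channels)
print(stream)                  # [1, 10, 100, 2, 20, 200, 3, 30, 300]
print(demultiplex(stream, 3))  # [[1, 2, 3], [10, 20, 30], [100, 200, 300]]
```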

In simple terms, almost all modern communication systems are built on the Kotelnikov theorem.

Harry Nyquist

As sometimes happens in science, scientists working on similar problems arrive at the same conclusions almost simultaneously, which is quite natural. To this day disputes have not subsided over who discovered the law of conservation, Lomonosov or Lavoisier; who invented the incandescent lamp, Yablochkov or Edison; who invented radio, Popov or Marconi. This list can be continued without end.

Thus, the American physicist of Swedish origin Harry Nyquist published his research, with conclusions similar to Kotelnikov's, in the 1928 paper "Certain Topics in Telegraph Transmission Theory". The theorem is therefore sometimes called the Kotelnikov-Nyquist theorem.

Harry Nyquist was born in 1889, received his PhD from Yale University, and worked at Bell Labs, where he studied thermal noise in amplifiers and took part in the development of the first phototelegraph. His work served as the basis for the later results of Claude Shannon. Nyquist died in 1976.

Claude Shannon

Claude Shannon is sometimes called the father of the information age, so great is his contribution to communication theory and computer science. He was born in 1916 in the USA and worked at Bell Labs and at a number of American universities. During the war he was engaged in breaking the codes of German submarines.

In 1948, in the article "A Mathematical Theory of Communication", he proposed the term bit as the name of the minimal unit of information. In 1949 he proved, independently of Kotelnikov, a theorem on the reconstruction of a signal from its discrete samples. It is sometimes called the Kotelnikov-Shannon theorem, although in the West the name Nyquist-Shannon theorem is more common.

Shannon introduced the concept of entropy into communication theory and studied codes. Thanks to his work, cryptography became a full-fledged science.

Kotelnikov and cryptography

Kotelnikov also dealt with the problems of codes and cryptography. Unfortunately, in the days of the USSR everything related to codes and ciphers was strictly classified, so many of Kotelnikov's works could not be published openly. Nevertheless, he worked on creating closed communication channels whose codes the enemy could not crack.

On June 18, 1941, almost on the eve of the war, Kotelnikov wrote the article "Basics of Automatic Encryption", which was published only in the 2006 collection "Quantum Cryptography and V. A. Kotelnikov's Theorems on One-Time Keys and on Samples".

Noise immunity

Kotelnikov's work led to the theory of potential noise immunity, which determines the maximum amount of interference a communication channel can tolerate without information being lost. It considers an idealized receiver, far from a real one, but it clearly defines the ways to improve a communication channel.

Space research

The team led by Kotelnikov made a great contribution to automation and telemetry systems. Sergei Pavlovich Korolev involved Kotelnikov's laboratory in solving the problems of the space industry.

Dozens of control and measuring points were built, connected into a single control and measuring complex.

Radar equipment was developed for interplanetary space stations, and mapping was carried out through the opaque atmosphere of Venus. With instruments developed under Kotelnikov's direction, the space stations "Venera" and "Magellan" performed radar surveys of predetermined sectors of the planet. As a result, we know what is hidden on Venus behind its dense clouds. Mars, Jupiter and Mercury were also explored.

Kotelnikov's developments have found application in orbital stations and modern radio telescopes.

In 1998, V. A. Kotelnikov was awarded the von Karman Prize. This is an award of the International Academy of Astronautics, which is given to people with creative thinking for a significant contribution to space research.

Search for radio signals of extraterrestrial civilizations

The international program of searching for radio signals of extraterrestrial civilizations, SETI, using the largest radio telescopes, was launched in the 1990s. It was Kotelnikov who justified the need to use multichannel receivers for this purpose. Modern receivers listen to millions of radio channels simultaneously, covering the entire plausible range.

Under his leadership, work was also carried out to define criteria for recognizing an artificial narrowband signal amid general noise and interference.

Unfortunately, these searches have not yet been successful. But on the scale of history they have been conducted for a very short time.

Kotelnikov's theorem refers to fundamental discoveries in science. It can be safely put on a par with the theorems of Pythagoras, Euler, Gauss, Lorentz, etc.

In every area where electromagnetic signals must be transmitted or received, we use the Kotelnikov theorem, consciously or not. We talk on the phone, watch TV, listen to the radio, use the Internet. At the base of all of this lies the principle of signal discretization.

I was inspired to write this article by the following puzzle:

As is known from the Kotelnikov theorem, for an analog signal to be digitized and then restored, it is necessary and sufficient that the sampling frequency be greater than or equal to twice the upper frequency of the analog signal. Suppose we have a sine with a period of 1 second. Then f = 1/T = 1 hertz, sin((2π/T)·t) = sin(2π·t), the sampling frequency is 2 hertz, and the sampling period is 0.5 seconds. Substituting multiples of 0.5 seconds into the sine formula gives sin(2π·0) = sin(2π·0.5) = sin(2π·1) = … = 0.
There are zeros everywhere. How then can this sine be restored?
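The puzzle is easy to reproduce numerically (Python with NumPy; the numbers are those from the puzzle itself): sampling a 1 Hz sine at exactly 2 Hz lands every sample on a zero crossing.

```python
import numpy as np

f = 1.0                      # sine frequency, Hz (period 1 s)
fs = 2.0                     # sampling frequency equal to exactly 2*f
t = np.arange(0, 5) / fs     # sample instants 0, 0.5, 1.0, 1.5, 2.0 s
samples = np.sin(2 * np.pi * f * t)
print(np.allclose(samples, 0))   # True: every sample is (numerically) zero
```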

An Internet search did not give me the answer to this question; the most I managed to find were various forum discussions with rather bizarre arguments for and against, up to links to experiments with various filters. It should be pointed out that the Kotelnikov theorem is a mathematical theorem, and it should be proved or refuted only by mathematical methods. Which is what I did. It turned out that there are quite a few proofs of this theorem in various textbooks and monographs, but for a long time I failed to find where the contradiction arises, because the proofs were given without many subtleties and details. I will also note that the very formulation of the theorem differs between sources. Therefore, in the first section I will give a detailed proof of this theorem, following the original work of the academician himself (V. A. Kotelnikov, "On the transmission capacity of 'ether' and wire in electric communications", Materials for the I All-Union Congress on the technical reconstruction of communications and the development of the low-current industry, 1933).

We formulate the theorem as it is given in the original source:
Any function F(t) consisting of frequencies from 0 to f1 cycles per second can be represented by the series

F(t) = Σk Dk · sin ω1(t − k/(2f1)) / (t − k/(2f1)),

where k is an integer; ω1 = 2πf1; Dk are constants depending on F(t).

Proof: Any function F(t) that satisfies the Dirichlet conditions (a finite number of maxima, minima and discontinuity points on any finite segment) and is integrable from −∞ to +∞, which is always the case in electrical engineering, can be represented by the Fourier integral:

F(t) = ∫0^∞ C(ω) cos ωt dω + ∫0^∞ S(ω) sin ωt dω,

i.e. as the sum of an infinite number of sinusoidal oscillations with frequencies from 0 to +∞ and frequency-dependent amplitudes C(ω)dω and S(ω)dω. Here

C(ω) = (1/π) ∫−∞^+∞ F(t) cos ωt dt,    S(ω) = (1/π) ∫−∞^+∞ F(t) sin ωt dt.

In our case, when F(t) consists only of frequencies from 0 to f1, obviously

C(ω) = S(ω) = 0 for ω > ω1 = 2πf1,

and so F(t) can be represented like this:

F(t) = ∫0^ω1 C(ω) cos ωt dω + ∫0^ω1 S(ω) sin ωt dω.

The functions C(ω) and S(ω), like any other functions on the segment [0, ω1], can always be represented by Fourier series, and these series can, at our request, consist only of cosines or only of sines if we take a period of twice the length of the segment, i.e. 2ω1.

Author's note: an explanation is needed here. Kotelnikov uses the freedom to extend the functions C(ω) and S(ω) onto the doubled segment so that C(ω) becomes an even function and S(ω) an odd function with respect to ω1. Accordingly, on the second half of the segment the values of these functions are C(2ω1 − ω) and −S(2ω1 − ω): the functions are reflected about the vertical axis at ω1, and S(ω) in addition changes sign.

In this way

C(ω) = Σk=0^∞ ak cos(kπω/ω1),    S(ω) = Σk=1^∞ bk sin(kπω/ω1).

Substituting these series into the integral, transforming the products of trigonometric functions into sums, integrating term by term, and replacing ω1 with 2πf1, we arrive at

F(t) = Σk Dk · sin ω1(t − k/(2f1)) / (t − k/(2f1)),

where the constants Dk combine the coefficients ak and bk. At the sampling instant t = k/(2f1) all terms of the series vanish except the k-th, which equals Dkω1; hence Dk = F(k/(2f1))/ω1, and the constants are determined by the instantaneous values of the signal at the sampling points.

Inaccuracy in Kotelnikov's theorem

The entire proof looks rigorous. So what is the problem? To understand, let us turn to one not very widely known property of the inverse Fourier transform: at a point where the original function has a jump, the transform converges not to the function's value but to the half-sum of its one-sided limits,

(1/2)·[F(x + 0) + F(x − 0)].

What does this lead to? If our function is continuous, nothing. But if the function has a finite jump, the values of the function after the direct and inverse Fourier transforms will not coincide with the initial values. Recall now the step in the proof where the interval is doubled: the function S(ω) is extended by −S(2ω1 − ω). If S(ω1), the value at the point ω1, is zero, nothing bad happens. However, if S(ω1) is not zero, the reconstructed function will not equal the original one, since at this point there is a discontinuity of size 2S(ω1).
Let us now return to the original problem about the sine. As is known, the sine is an odd function whose image after the Fourier transform is a delta function δ(ω − Ω0). That is, in our case, for a sine of frequency ω1 we get

S(ω) = δ(ω − ω1), C(ω) = 0.

Obviously, at the point ω1 the delta function of S(ω) and the reflected delta function of −S(2ω1 − ω) are summed, forming a zero, which is exactly what we observe.

Conclusion

Kotelnikov's theorem is certainly a great theorem. However, it must be supplemented with one more condition, namely:

the spectrum of the signal must contain no component at the boundary frequency itself, i.e. S(ω1) = 0.

In this formulation the boundary cases are excluded, in particular the case of a sine whose frequency equals the boundary frequency ω1, since the Kotelnikov theorem with the above condition cannot be applied to it.
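A quick numerical illustration of this condition (Python with NumPy; the frequencies are illustrative): as soon as the sampling frequency is strictly greater than 2f, the samples of the same 1 Hz sine are no longer all zero, and sinc interpolation recovers the signal between them.

```python
import numpy as np

f = 1.0                    # the same 1 Hz sine as in the puzzle
fs = 2.5                   # sampling frequency strictly greater than 2*f
dt = 1.0 / fs
n = np.arange(-2000, 2001)
samples = np.sin(2 * np.pi * f * n * dt)   # no longer all zeros

def reconstruct(t):
    # truncated Kotelnikov series with np.sinc(x) = sin(pi*x)/(pi*x)
    return np.sum(samples * np.sinc((t - n * dt) / dt))

t_test = 0.2
print(np.sin(2 * np.pi * f * t_test), reconstruct(t_test))  # agree closely
```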

In 1933 V. A. Kotelnikov proved the sampling theorem, which is important in communication theory: a continuous signal with a limited spectrum can be exactly restored (interpolated) from its samples taken at intervals Δt = 1/(2fв), where fв is the upper frequency of the signal spectrum.

In accordance with this theorem, the signal can be represented by the Kotelnikov series:

u(t) = Σk=−∞^+∞ u(kΔt) · sin ωв(t − kΔt) / (ωв(t − kΔt)).

Thus, the signal can be represented absolutely accurately by a sequence of samples given at discrete points in time (Fig. 1.16).

The functions sk(t) = sin ωв(t − kΔt) / (ωв(t − kΔt)) form an orthogonal basis in the space of signals characterized by a limited spectrum:

S(ω) = 0 at |ω| > ωв.

Usually, for real signals one can specify the frequency range within which the main part of the energy is concentrated and which determines the width of the signal spectrum. In some cases the spectrum is deliberately narrowed, because the equipment and the communication line should occupy a minimum frequency band. The narrowing is performed based on the allowable signal distortion. For example, in telephone communication, good speech intelligibility and speaker recognition are ensured when signals are transmitted in the frequency band from 0.3 to 3.4 kHz; widening this band leads to unjustified complication of the equipment and increased costs. For transmitting a television image with the 625-line standard, the bandwidth occupied by the signal is about 6 MHz.

It follows from the foregoing that processes with limited spectra can serve as adequate mathematical models of many real signals.

A function of the form sk(t) = sin ωв(t − kΔt) / (ωв(t − kΔt)) is called the sampling function (Fig. 1.17).

It has the following properties: at t = kΔt the sampling function reaches its maximum value 1, and at the instants (k ± n)Δt, n = 1, 2, …, it vanishes; the width of the main lobe of the sampling function at the zero level is 2Δt = 1/fв, so Δt is the minimum pulse duration that can exist at the output of a linear system with a bandwidth equal to fв; the sampling functions are orthogonal on an infinite time interval.
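These properties are easy to verify numerically (a Python sketch; the interval value is arbitrary): the sampling function equals 1 at its own reference point, vanishes at every other reference point, and distinct sampling functions are orthogonal, with squared norm Δt.

```python
import numpy as np

dt = 0.5                                  # sampling interval (illustrative)

def s(k, t):
    # sampling function s_k(t) = sin(w*(t - k*dt)) / (w*(t - k*dt)), w = pi/dt
    return np.sinc((t - k * dt) / dt)

# 1 at its own reference point, 0 at every other reference point
print(s(3, 3 * dt), abs(s(3, 5 * dt)) < 1e-12)   # 1.0 True

# orthogonality on a (long, truncated) time interval
step = 0.001
t = np.arange(-200.0, 200.0, step)
inner = np.sum(s(0, t) * s(1, t)) * step  # ~0: different functions
norm = np.sum(s(0, t) ** 2) * step        # ~dt: the squared norm
print(abs(inner) < 1e-3, abs(norm - dt) < 1e-3)  # True True
```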

Based on the Kotelnikov theorem, the following method of discrete transmission of continuous signals can be proposed:

To transmit a continuous signal over a communication channel with bandwidth fв, we determine the instantaneous values of the signal at the discrete times kΔt (k = 0, 1, 2, …). We then transfer these values over the communication channel in any of the possible ways and restore the transmitted samples on the receiving side. To convert the stream of sample pulses into a continuous function, we pass them through an ideal low-pass filter with cutoff frequency fв.

It can be shown that the signal energy is given by the formula:

E = Δt Σk u²(kΔt). (1.25)

Expression (1.25) is widely used in the theory of noise-immune signal reception, but it is approximate, because signals cannot be limited in frequency and time at the same time.
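A small numerical check of a formula of the form (1.25) (Python sketch; the test pulse and rates are chosen for illustration): for a band-limited pulse sampled above the Nyquist rate, the sum of squared samples times Δt matches the directly integrated energy.

```python
import numpy as np

f_max = 2.0                  # upper frequency of the test pulse, Hz
fs = 5.0                     # sampling frequency, greater than 2 * f_max
dt = 1.0 / fs

def u(t):
    # a finite-energy pulse whose spectrum is confined to |f| <= f_max;
    # its exact energy is 1 / (2 * f_max) = 0.25
    return np.sinc(2 * f_max * t)

# energy from the samples: E = dt * sum(u_k^2)
k = np.arange(-5000, 5001)
E_samples = dt * np.sum(u(k * dt) ** 2)

# energy by direct numerical integration of u^2
step = 0.001
t = np.arange(-1000.0, 1000.0, step)
E_direct = np.sum(u(t) ** 2) * step

print(round(E_samples, 3), round(E_direct, 3))   # 0.25 0.25
```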



5.3. Theorem of Kotelnikov.

5.3.1. Continuous signals are described by continuous functions of time. The instantaneous values of such signals change smoothly, without sharp jumps (discontinuities). An example of the timing diagram of a continuous signal is shown in Fig. 5.2a. The signals whose timing diagrams are shown in Fig. 5.1 are not continuous, since their instantaneous values change in jumps at certain moments in time. Many real signals are continuous; these include, for example, the electrical signals produced in the transmission of speech, music and many images.

Fig. 5.1. Timing diagram of a telegraph signal realization.

a)

b)

c)

d)
Fig. 5.2. Discretization and quantization of a continuous signal: a – continuous signal; b – time-discrete (pulse) signal; c – signal discrete in time and in values (digital); d – quantization error

5.3.2. Signals with discrete time.

They can be obtained from continuous signals by a special transformation called time discretization (sampling). We illustrate it with the timing diagrams in Fig. 5.2. Assume that we can measure the instantaneous values of the signal u(t) at the times Δt, 2Δt, 3Δt, …; Δt is called the time sampling interval. The measured values u(Δt), u(2Δt), u(3Δt) are marked in Fig. 5.2a with dots. From these values one can form a sequence of short rectangular pulses whose duration is the same and less than the sampling interval Δt, and whose amplitudes are equal to the measured values of the signal u(t). Such a sequence of rectangular pulses is shown in Fig. 5.2b and is often called a pulse signal or a discrete-time signal; we will denote it uΔ(t). Note that the time sampling step here is constant and equal to Δt, and the amplitude of each pulse equals the instantaneous value of the signal u(t) at the corresponding moment. Since the continuous signal u(t) can take any value at the selected moments, the pulse amplitudes of the pulse signal obtained from it by time sampling can also take any values. In Fig. 5.2b the pulse amplitudes are indicated to only one decimal place; to state them exactly, an unlimited number of decimal places might be required, i.e. the pulse amplitudes continuously fill a certain interval. For this reason the pulse amplitudes of uΔ(t) are sometimes said to be continuous in values.

5.3.3. Digital signals.

As will be shown later, a special transformation is often used when transmitting pulse signals in telecommunications. Assume that during transmission each pulse may only take an amplitude with a permitted value, and that the number of permitted amplitude values is finite and given in advance. For example, in Fig. 5.2c the permitted amplitude values are numbered 1, 2, 3, …; the value Δu equals the difference between any two adjacent permitted amplitudes. If the true amplitude of a pulse of the signal uΔ(t) falls between permitted values, the amplitude of the transmitted pulse is taken equal to the permitted value closest to the true one. This transformation is called quantization, the set of permitted amplitude values is called the quantization scale, and the interval Δu between adjacent permitted values is called the quantization step. In Fig. 5.2c the permitted pulse amplitudes are taken equal to the integers 0, 1, 2, 3 and form a uniform quantization scale, which can be extended to the region of negative values of the signal u(t); the quantization step is Δu = 1.

The sequence of pulses obtained by quantizing the pulses of the signal uΔ(t) is also a pulse signal, which we denote uц(t). The peculiarity of this signal is that its pulse amplitudes now take only permitted values and can therefore be represented by decimal numbers with a finite number of digits. Such signals are called discrete or digital. Quantization introduces a quantization error e(t) = uц(t) − uΔ(t); Fig. 5.2d shows an example of the time diagram of e(t). Transmitting the digital signal uц(t) instead of uΔ(t) is in fact equivalent to transmitting the pulse signal uΔ(t) with the error signal e(t) superimposed on it in advance, and in this case e(t) can be regarded as interference. Therefore e(t) is often called quantization noise.
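The quantization step described above can be sketched in a few lines of Python (the sample values and the step are made up for illustration): each amplitude is replaced by the nearest permitted level, and the difference is the quantization error.

```python
def quantize(samples, step):
    """Replace each amplitude by the nearest multiple of the quantization step."""
    return [round(x / step) * step for x in samples]

true_amplitudes = [0.12, 0.98, 1.54, 2.47, 3.01]    # made-up pulse amplitudes
quantized = quantize(true_amplitudes, step=1.0)     # uniform scale 0, 1, 2, 3, ...
errors = [q - x for q, x in zip(quantized, true_amplitudes)]

print(quantized)                             # [0.0, 1.0, 2.0, 2.0, 3.0]
print(max(abs(e) for e in errors) <= 0.5)    # True: error is at most step/2
```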

5.3.4. Theorem of Kotelnikov.

Because discrete signals are now widely used for transmitting messages, while many real signals are continuous, it is important to know whether continuous signals can be represented by discrete ones, and under what conditions such a representation is exact. The answers to these questions are given by the theorem proved in 1933 by the Soviet scientist V. A. Kotelnikov, one of the fundamental results of theoretical radio engineering. The theorem is formulated as follows: if a continuous signal u(t) has a limited spectrum and the highest frequency in its spectrum is less than fв hertz, then the signal u(t) is completely determined by the sequence of its instantaneous values at discrete moments of time spaced no more than 1/(2fв) seconds apart.

Let us explain the meaning of Kotelnikov's theorem using the time diagrams shown in Fig. 5.2a. Let this be part of the time diagram of a signal u(t) with a limited spectrum and upper cutoff frequency fв. If the sampling interval Δt < 1/(2fв), then the theorem asserts that from the values u(Δt), u(2Δt), u(3Δt), … one can determine the exact value of the signal u(t) at any given moment t lying between the sampling moments. In accordance with this theorem, a signal with a limited spectrum and upper frequency ωв ≤ ωΔ/2 can be represented by the series

u(t) = Σn=−∞^+∞ u(nΔt) · sin ωв(t − nΔt) / (ωв(t − nΔt)), (2)

where u(nΔt), n = …, −1, 0, +1, … are the samples of the instantaneous values of the signal u(t), ωΔ = 2πfΔ, and fΔ = 1/Δt is the time sampling frequency.

Series (2) has an infinite number of terms, so to calculate the value of the signal u(t) at a moment t one needs to know the values of all samples u(nΔt), n = …, −1, 0, +1, …, both before and after the moment t. Exact equality in (2) is achieved only when all terms are taken into account; if we restrict ourselves to a finite number of terms on the right-hand side of (2), their sum gives only an approximate value of the signal u(t).

The representation of the signal u(t) by series (2) is illustrated in Fig. 5.3, which shows the time diagrams of the signal u(t) and of three terms of series (2).

Fig. 5.3. Representation of a limited-spectrum signal by the Kotelnikov series.

Thus, the Kotelnikov theorem states the conditions under which a continuous signal can be exactly restored from the corresponding discrete-time signal. Real continuous signals usually have spectra that, although tending to zero rather quickly with increasing frequency, are still unlimited. Such signals can be reconstructed from their discrete samples only approximately. However, by choosing a sufficiently small sampling step Δt, the error of reconstructing the continuous signal from its transmitted samples can be made negligible. For example, when transmitting a telephone signal, whose spectrum is unlimited, the conventional upper cutoff frequency is usually taken as fв = 3.4 kHz. Then the sampling frequency must satisfy fΔ ≥ 6.8 kHz, i.e. 6.8 thousand samples must be transmitted per second; the quality of voice transmission is then quite satisfactory. Increasing the sampling rate above this value is permissible and gives a slight increase in the accuracy of reconstruction; if fΔ < 6.8 kHz is taken, the accuracy of reconstruction of the telephone signal falls noticeably.


To restore the original continuous signal from a sampled one with small distortions (errors), the sampling step must be chosen rationally. Therefore, when converting an analog signal into a discrete one, the question of the sampling step size necessarily arises.

The following idea is not difficult to grasp intuitively. If an analog signal has a low-frequency spectrum limited by some upper frequency Fв (i.e. the function u(t) has the form of a smoothly varying curve, without sharp changes in amplitude), then this function is unlikely to change significantly over some small sampling interval.
It is quite obvious that the accuracy of restoring an analog signal from the sequence of its samples depends on the sampling interval Δt. The shorter it is, the less the function u(t) differs from the smooth curve passing through the reference points. However, as the sampling interval decreases, the complexity and volume of the processing equipment grow considerably; with too large a sampling interval, the probability of distortion or loss of information during restoration increases.
The optimal value of the sampling interval is established by the Kotelnikov theorem (other names: the sampling theorem, the Shannon theorem, the Nyquist theorem; in mathematics the result was first found by Cauchy and later described again by Carson and Hartley), which he proved in 1933. The theorem of V. A. Kotelnikov is of great theoretical and practical importance: it makes it possible to sample an analog signal correctly and determines the optimal way of restoring it at the receiving end from the sample values.
Fig. 14.1. Representation of the spectral density.

According to one of the best-known and simplest interpretations of the Kotelnikov theorem, an arbitrary signal u(t) whose spectrum is limited by some frequency Fв can be completely restored from the sequence of its sample values following at the time interval

Δt = 1/(2Fв). (1)

The sampling interval Δt and the frequency Fв in radio engineering are often called the Nyquist interval and the Nyquist frequency, respectively. Analytically, the Kotelnikov theorem is represented by the series

u(t) = Σk u(kΔt) · sin ωв(t − kΔt) / (ωв(t − kΔt)), (2)

where k is the sample number, u(kΔt) is the signal value at the reference points, and ωв = 2πFв is the upper frequency of the signal spectrum.
To prove the Kotelnikov theorem, consider an arbitrary continuous signal u(t) whose spectral density S(ω) is concentrated in the frequency band −ωв ≤ ω ≤ ωв (solid line in Fig. 14.1).
Let us mentally extend the graph of the spectral density symmetrically, with values repeating with the period 2ωв (dashed lines in Fig. 14.1). We expand the periodic function thus obtained in a Fourier series, in which the argument is now the frequency ω instead of the time t and the period is 2ωв:

S(ω) = Σk Ck e^(−jkπω/ωв), where Ck = (1/(2ωв)) ∫−ωв^ωв S(ω) e^(jkπω/ωв) dω, (3)

and the sampling interval is written as

Δt = π/ωв = 1/(2Fв). (4)

Using the formula of the inverse Fourier transform, we represent the original continuous signal in the following form:

u(t) = (1/(2π)) ∫−ωв^ωв S(ω) e^(jωt) dω. (5)

In the same way we write the value of the discretized signal at the k-th reference time tk = kΔt = kπ/ωв:

u(kΔt) = (1/(2π)) ∫−ωв^ωв S(ω) e^(jkπω/ωв) dω. (6)

Comparing this expression with the formula for Ck, we notice that Ck = (π/ωв) u(kΔt) = Δt·u(kΔt). Taking this relation into account, the spectral function (3), after simple transformations, takes the form

S(ω) = Δt Σk u(kΔt) e^(−jkπω/ωв). (7)

Then we do the following: substitute expression (7) into relation (5), change the order of integration and summation, and calculate the integral

(1/(2π)) ∫−ωв^ωв e^(jω(t − kΔt)) dω = sin ωв(t − kΔt) / (π(t − kΔt)).

As a result, we get the following formula:

u(t) = Σk u(kΔt) · sin ωв(t − kΔt) / (ωв(t − kΔt)).

It follows from this relation that the continuous function u(t) is indeed determined by the totality of its discrete amplitude values at the reference times kΔt, which proves the Kotelnikov theorem.
The simplest signals of the form sk(t) = sin ωв(t − kΔt) / (ωв(t − kΔt)), orthogonal to each other on the time interval (−∞, +∞), are called sampling functions, basis functions, or Kotelnikov functions. The graph of the k-th Kotelnikov function is shown in Fig. 14.2. Each basis function sk(t) is shifted relative to the similar nearest function sk−1(t) or sk+1(t) by the sampling interval Δt. An elementary analysis of the formula and of the graph in Fig. 14.3 shows that the signal sk(t) follows the function sin x/x, which also characterizes the envelope of the spectral density of a rectangular pulse.

Fig. 14.2. Graph of the Kotelnikov basis function.

Fig. 14.3. Approximation of a continuous signal by the Kotelnikov series.

The representation (more precisely, the approximation) of a given continuous signal u(t) by the Kotelnikov series (2) is illustrated by the diagrams in Fig. 14.3: for simplicity, the basis functions are shown without the argument t, and the first four terms of the series are constructed, corresponding to the signal samples at the times 0, Δt, 2Δt and 3Δt.