Why is analog technology still alive in the digital world? Nostalgia for the analog

An analog signal is a signal that changes continuously, most often over time. At the macroscopic level all the signals reaching us from the surrounding world are analog in nature, and it is in analog form that they are registered by the corresponding sensors and converted into signals of another nature, most often electrical ones. (True, we know from the school physics course that in atoms all transitions occur abruptly, in quanta, but individual atoms are outside our present discussion.)

In a computer we deal with a binary digital signal that takes only two discrete values: one value is assigned 0 and the other 1.

In the processor and RAM this corresponds to the presence or absence of an electric charge in an elementary memory cell (or, equivalently, the presence or absence of a voltage). Incidentally, in some computer circuits the value 0 is assigned to the signal with voltage present and the absence of voltage denotes 1, although at first glance the opposite convention would seem more logical.

On a hard disk, 0 corresponds to magnetization of a region of the disk in one direction and 1 to magnetization in the opposite direction. On compact discs the binary code is stamped as pits in the plastic, and on recordable CD-R and CD-RW discs it is formed by darkening the recording layer under a laser beam. The very first binary codes were recorded by punching holes in cardboard: a hole meant 0, intact cardboard meant 1.

In other words, it does not matter how the binary signal is produced, as long as two well-separated levels are obtained.

A binary digital signal does not occur in nature; it is created by people, because it is convenient for us to work with information recorded in digital form. One could say that we are witnessing humanity building its own digital universe. Human society has created many discrete things. Text, for example, consists of separate, discrete letters; there is no continuity in it. For text and other discrete man-made material, digital computer technology is better suited than analog technology.

If we compare digital and analog electronics, digital video and audio processing devices are, as a rule, equipment of a higher class than analog ones, and digital technology keeps gaining in quality and prestige.

Compared with analog, a digital signal has two advantages and one disadvantage. Let us consider them in order.

1. With an analog signal it is impossible, in principle, to transmit information without distortion; a digital signal allows information to be transmitted entirely without distortion.

Why is that? During transmission some interference always arises in the communication line and distorts the transmitted signal (the dotted lines in the figure). Interference is absent only in the ideal case, which, like any ideal, is unattainable. The receiver cannot restore the original analog signal, since only the transmitter knows what the original signal was.

The situation is completely different with a digital signal. Here, too, interference arises during transmission; there is no getting away from it (the dotted lines in the figure). But at the receiving end the task is only to recognize each signal as a 0 or a 1, with no middle ground. If all the 0s and 1s are recognized correctly, the information has been transmitted without distortion.
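As a rough illustration of this recognition step, here is a minimal Pascal sketch (the nominal voltage levels and the noisy sample values are invented for the example, they are not taken from the article): each received level is compared with the threshold halfway between the two nominal levels, and any noise smaller than half the level spacing vanishes completely.

program RestoreBits;
{ A received voltage is compared with a threshold halfway between the two levels; }
{ noise smaller than half the level spacing is removed completely. }
const
  LevelZero = 0.0;   { nominal voltage of a logical 0 (assumed) }
  LevelOne  = 5.0;   { nominal voltage of a logical 1 (assumed) }
var
  received: array[1..6] of real;
  i: integer;
begin
  { noisy samples as they might arrive at the receiver }
  received[1] := 0.4;  received[2] := 4.6;  received[3] := 5.3;
  received[4] := -0.2; received[5] := 4.9;  received[6] := 0.1;
  for i := 1 to 6 do
    if received[i] > (LevelZero + LevelOne) / 2 then
      write('1')
    else
      write('0');
  writeln
end.

With an analog signal there is no such fixed level to snap back to, which is exactly why the receiver cannot undo the distortion.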

Interference arises not only when information is transmitted over long distances. Strong interference and pickup can also occur inside a device (a TV set, a computer, and so on).

Two important conclusions follow from what has been said.

a) Digital technology works more reliably.

b) The pulse (digital) method of information transfer allows you to create an unlimited number of absolutely identical copies.

With an analog signal, every copying stage adds interference; as the number of successive copies grows, the signal quality gets worse and worse until, in the end, the information can no longer be read at all.

In a digital signal the interference can be removed, because it is known what has to be removed: everything that differs from 0 and 1. And from each successive copy a new copy can be made that is just as good as the original. True, this advantage has an unpleasant consequence: it creates fertile ground for piracy and the unauthorized use of other people's intellectual property.

Signals are information codes that people use to convey messages in an information system. A signal may be sent but never received, whereas a message is only a signal (or set of signals) that has been received and decoded by the recipient.

One of the first methods of transmitting information without the participation of people or other living beings was the signal fire: when danger arose, bonfires were lit in succession from one post to the next. Below we consider the transmission of information by electromagnetic signals and dwell in detail on analog and digital signals.

Any signal can be represented as a function describing the change of its characteristics. Such a representation is convenient for studying radio-engineering devices and systems. Besides the signal, radio engineering also deals with noise, which carries no useful information and distorts the signal by interacting with it.

The very concept of a signal makes it possible to abstract from specific physical quantities when considering the phenomena of encoding and decoding information. The mathematical model of a signal allows research to rely on the parameters of a function of time.

Signal types

By the physical medium that carries the information, signals are divided into electrical, optical, acoustic and electromagnetic.

By the way they are specified, signals can be regular or irregular. A regular signal is described by a deterministic function of time. An irregular signal is described by a chaotic function of time and is analyzed with a probabilistic approach.

Depending on the function that describes their parameters, signals can be analog or discrete. A discrete signal whose values have also been quantized is called a digital signal.

Signal processing

Both analog and digital signals are processed in order to transmit and receive the information encoded in them. Once the information is extracted, it can be used for various purposes; in particular cases it is formatted.

Analog signals are amplified, filtered, modulated and demodulated. Digital signals, in addition, can also be compressed, detected, and so on.

Analog signal

Our sense organs perceive all incoming information in analog form. For example, when a car drives past, we see its movement continuously. If our brain received information about its position only once every 10 seconds, people would constantly end up under its wheels. But we estimate distance much faster, and that distance is well defined at every moment of time.

Exactly the same applies to other information: we can judge loudness at any moment, feel how hard our fingers press on objects, and so on. In other words, almost all information that arises in nature has an analog form. The easiest way to transmit such information is with analog signals, which are continuous and defined at every moment of time.

To picture what an analog electrical signal looks like, imagine a graph with amplitude on the vertical axis and time on the horizontal axis. If we measure, say, a changing temperature, the graph shows a continuous line displaying its value at each moment of time. To transmit such a signal with an electric current, we have to map the temperature value onto a voltage value: for example, 35.342 degrees Celsius can be encoded as a voltage of 3.5342 V.

Analog signals used to be employed in all types of communication. To overcome interference, such a signal has to be amplified: the higher the noise level, the more the signal must be amplified so that it can be received without distortion. This approach wastes a great deal of energy as heat, and the amplified signal may itself cause interference in other communication channels.

Analog signals are still used in television and radio and to carry the input signal from microphones, but in general this type of signal has been or is being superseded everywhere by digital signals.

Digital signal

A digital signal is represented by a sequence of digital values. Binary digital signals are the most common today, since they are used in binary electronics and are the easiest to encode.

Unlike the previous type of signal, a digital signal has only two values, "1" and "0". If we recall our temperature-measurement example, the signal is formed differently here. Whereas the voltage of the analog signal corresponds directly to the measured temperature, in the digital signal a certain number of voltage pulses is transmitted for each temperature value. A voltage pulse corresponds to "1" and the absence of voltage to "0". The receiving equipment decodes the pulses and restores the original data.

If we imagine how a digital signal looks on a graph, we see that the transition from zero to the maximum value happens abruptly. It is precisely this feature that lets the receiving equipment "see" the signal more clearly: if interference occurs, it is easier for the receiver to decode the signal than with analog transmission.

However, a digital signal cannot be restored at a very high noise level, whereas information can still be "fished out" of an analog signal even with heavy distortion. This is due to the cut-off (cliff) effect: a digital signal can be transmitted over a certain distance and then simply breaks off. The problem occurs everywhere and is solved by simple signal regeneration: where the signal breaks off, a repeater is inserted, or the communication line is shortened. The repeater does not amplify the signal; it recognizes its original form and produces an exact copy, and repeaters can be placed along the line as needed. Such signal-regeneration methods are actively used in network technologies.

Among other things, analog and digital signals differ in how easily the information can be encoded and encrypted. This is one of the reasons mobile communications moved to digital.

Analog and digital signal and digital-to-analog conversion

A little more needs to be said about how analog information is transmitted over digital communication channels. Let us return to our examples. As already mentioned, sound is an analog signal.

What happens in mobile phones, which transmit information over digital channels?

The sound entering the microphone undergoes analog-to-digital conversion (ADC). The process consists of three steps. First, individual signal values are taken at regular intervals; this is called sampling. By the Kotelnikov theorem on channel bandwidth, the rate at which these values are taken must be at least twice the highest frequency in the signal; so if our channel is limited to 4 kHz, the sampling frequency will be 8 kHz. Next, all the sampled values are rounded or, in other words, quantized; the more quantization levels there are, the more accurately the signal can be reconstructed at the receiver. Then all the values are converted into a binary code, which is transmitted to the base station and on to the other subscriber, the receiver. In the receiver's phone the reverse procedure, digital-to-analog conversion (DAC), takes place; its purpose is to produce an output as close as possible to the original signal. The analog signal then emerges as sound from the phone's speaker.
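A minimal Pascal sketch of the first two steps described above, sampling and quantization (the 440 Hz test tone and the 8-bit register are assumptions made for the example; the 8 kHz rate matches the 4 kHz channel mentioned in the text):

program ADCSketch;
{ Sample a tone at regular intervals and round each sample to an integer level. }
const
  SampleRate = 8000;    { Hz, twice the 4 kHz channel limit mentioned above }
  Levels     = 256;     { 8-bit quantization, an assumption for this sketch }
  ToneFreq   = 440;     { Hz, test tone (assumed) }
var
  i, code: integer;
  t, sample: real;
begin
  for i := 0 to 15 do                             { first 16 samples only }
  begin
    t := i / SampleRate;                          { sampling instant }
    sample := sin(2 * Pi * ToneFreq * t);         { analog value in [-1, 1] }
    code := round((sample + 1) / 2 * (Levels - 1));   { quantized level 0..255 }
    writeln('t = ', t:8:6, ' s   code = ', code)
  end
end.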

Digital electronics is now increasingly replacing the traditional analog. Leading companies producing a wide variety of electronic equipment are increasingly declaring a complete transition to digital technology.

Advances in the technology of production of electronic microcircuits ensured the rapid development of digital technology and devices. The use of digital methods for processing and transmitting signals can significantly improve the quality of communication lines. Digital methods of signal processing and switching in telephony make it possible to reduce the weight and size characteristics of switching devices by several times, increase communication reliability, and introduce additional functionality.

The appearance of high-speed microprocessors, large-capacity RAM chips and compact high-capacity storage devices made it possible to create fairly inexpensive universal personal computers, which are now widely used at home and at work.

Digital technology is indispensable in telesignaling and telecontrol systems used in automated production, control of remote objects, such as spacecraft, gas pumping stations, etc. Digital technology has also taken a strong place in electrical and radio measuring systems. Modern devices for recording and reproducing signals are also inconceivable without the use of digital devices. Digital devices are widely used to control household appliances.

It is very likely that digital devices will dominate the electronics market in the future.

Let's start with some basic definitions.

A signal is any physical quantity (for example, temperature, air pressure, light intensity, electric current) that changes over time. It is precisely this change over time that allows a signal to carry information.

An electrical signal is an electrical quantity (for example, voltage, current or power) that changes over time. Electronics works primarily with electrical signals, although light signals, which are time-varying light intensities, are used more and more.

An analog signal is a signal that can take any value within certain limits (for example, a voltage that varies smoothly from zero to ten volts). Devices that work only with analog signals are called analog devices.


A digital signal is a signal that can take only two values (sometimes three), with certain deviations from these values allowed (Fig. 1.1). For example, the voltage can take one of two values: from 0 to 0.5 V (the zero level) or from 2.5 to 5 V (the one level). Devices that work exclusively with digital signals are called digital devices.
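A small Pascal sketch of how a digital device might interpret these zones (the example voltage is arbitrary; the level boundaries are the ones given above):

program LogicLevels;
{ Classify a voltage according to the zones named above: }
{ 0..0.5 V is a logical 0, 2.5..5 V is a logical 1, anything else is invalid. }
var
  u: real;
begin
  u := 3.1;                                   { example input voltage (assumed) }
  if (u >= 0.0) and (u <= 0.5) then
    writeln(u:4:1, ' V -> logical 0')
  else if (u >= 2.5) and (u <= 5.0) then
    writeln(u:4:1, ' V -> logical 1')
  else
    writeln(u:4:1, ' V -> outside the allowed zones')
end.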

In nature, almost all signals are analog, that is, they change continuously within certain limits. That is why the first electronic devices were analog. They converted physical quantities into voltage or current proportional to them, performed some operations on them and then performed inverse transformations into physical quantities. For example, a person's voice (air vibrations) is converted into electrical vibrations with the help of a microphone, then these electrical signals are amplified by an electronic amplifier and, using an acoustic system, are again converted into air vibrations, into a louder sound.

Fig. 1.1. Electrical signals: analog (left) and digital (right).

All operations performed by electronic devices on signals can be divided into three large groups:

processing (or transformation);

transmission;

storage.

In all these cases the useful signals are distorted by parasitic signals: noise, interference, pickup. In addition, during processing (for example, amplification or filtering) the signal waveform is also distorted by the imperfections of the electronic devices themselves. And during transmission over long distances and during storage, the signals are weakened as well.

Fig. 1.2. Distortion of an analog signal (left) and a digital signal (right) by noise and interference.

For analog signals all of this significantly degrades the useful signal, since every one of its values is an allowed value (Fig. 1.2). Therefore every conversion, every intermediate storage, every transmission over a cable or over the air degrades an analog signal, sometimes to the point of destroying it completely. It must also be kept in mind that noise, interference and pickup cannot, in principle, be calculated exactly, so the behavior of analog devices can never be described exactly either. Moreover, the parameters of all analog devices drift over time as their components age, so their characteristics do not stay constant.

Digital signals, which have only two allowed values, are much better protected from noise, interference and pickup. Small deviations from the allowed values do not distort the digital signal at all, since there are always zones of permissible deviation (Fig. 1.2). That is why digital signals allow much more complex, multi-stage processing, much longer lossless storage, and much higher-quality transmission than analog signals. The behavior of digital devices can always be calculated and predicted exactly. Digital devices are far less susceptible to aging, since a small change in their parameters does not affect their operation at all. They are also easier to design and debug. Clearly, all these advantages drive the rapid development of digital electronics.

However, digital signals also have a major drawback. The fact is that at each of its allowed levels, the digital signal must remain at least for some minimum time interval, otherwise it will not be possible to recognize it. And an analog signal can take on any value for an infinitesimal time. It can also be said differently: an analog signal is defined in continuous time (that is, at any moment in time), and a digital signal is defined in discrete time (that is, only at selected points in time). Therefore, the maximum achievable performance of analog devices is always fundamentally greater than that of digital devices. Analog devices can handle more rapidly changing signals than digital ones. The speed of processing and transmitting information by an analog device can always be made higher than the speed of its processing and transmission by a digital device.

In addition, a digital signal carries information only through its two levels and the transitions between them, whereas an analog signal also carries information in every current value of its level; that is, it is more informative per signal. Therefore, to transmit the amount of useful information contained in one analog signal, several digital signals usually have to be used (typically from 4 to 16).

In addition, as already noted, nearly all signals in nature are analog, so converting them to digital form and back requires special equipment (analog-to-digital and digital-to-analog converters). Nothing comes for free, and the price of the advantages of digital devices can sometimes be unacceptably high.

Continuation. See No. 5, 6/2009

Toolkit

In all the author's versions of the school informatics course, the concept of information is the central, system-forming concept. The fundamental component of informatics is the science of information and information processes. The profile informatics course for the senior classes offers more opportunities to reveal this fundamental content than the basic-school course. This is helped, first, by the propaedeutics covered in previous classes and, second, by the students' higher level of mathematical and physical training.

The section on information coding is central to the theoretical component of the course. It reflects the basic ideas of representing and transforming information that underlie information technology. Understanding these ideas contributes to a deep grasp of the essence of ICT for the professional user and, more importantly, for the future designer of computer systems.

The features of analog and digital forms of information transmission are discussed here in more detail than in the basic school course. The essence of ADC, analog-to-digital signal conversion, is explained thoroughly.

The key concept of the section, coding, receives a multifaceted explanation. A code is a character sequence containing some information; coding is the process of constructing a code. All encoding options can be divided into two groups:

1) conversion from an analog form to a discrete, symbolic form;

2) conversion from one character system to another.

The coding methods of the second group depend on the purpose. The options include: conversion from one representation standard to another; reducing the data volume (compression, packing); keeping information secret (encryption) and the reverse procedure, decryption; and ensuring error control during data transmission. In all cases specific coding algorithms are used, often with mathematical models behind them. The teacher should give students a systematic understanding of coding tasks and how to solve them; a sketch of one such task, error control, is given below.
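As one minimal illustration of the error-control task, here is a Pascal sketch of a parity bit, a common technique for this purpose that is not described in the paragraph itself (the seven data bits are an arbitrary example):

program ParityBit;
{ Append a parity bit so that the total number of ones is even; }
{ the receiver recomputes the parity and detects any single-bit error. }
var
  data: string;
  i, ones: integer;
begin
  data := '1011001';                 { seven data bits (example) }
  ones := 0;
  for i := 1 to length(data) do
    if data[i] = '1' then ones := ones + 1;
  if ones mod 2 = 0 then
    writeln('transmit: ', data, '0')   { already even, parity bit 0 }
  else
    writeln('transmit: ', data, '1')   { make it even, parity bit 1 }
end.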

Informatics classes are divided into theoretical lessons and a computer workshop (of course, both forms of work can be combined within one academic hour). In this course the authors also offer another form of organizing classes: a research lesson. The material for such a lesson is contained in §5, "Numerical experiments on sound processing". The teacher demonstrates numerical experiments performed in a spreadsheet environment using a computer and projection equipment. Students can repeat the same calculations on their own PCs in parallel and then receive tasks to continue the experiment on their own. The results are discussed collectively.

As part of the coding section, students continue to deepen their spreadsheet and Pascal programming skills.

§1. Signal - information carrier

A person perceives information from the outside world with the help of the senses. Most information reaches us through sight and hearing.

The organs of hearing perceive sound signals, which are carried by sound waves. The organs of vision perceive visual signals, whose nature is electromagnetic waves in a certain frequency range. Any signal is a change in some physical quantity that carries information to the receiving object (a living being or a technical device). A sound signal is associated with the change in air pressure produced by a sound wave and acting on the organ of hearing. A visual signal is associated with changes in the parameters of the electromagnetic light radiation perceived by the organs of vision.

For many centuries people could hear sounds only within natural earshot of their source and see only objects within their field of view. The development of science and technology allowed people to go beyond these natural limits of perception.

Over the past two centuries, scientists and inventors have achieved great results in creating means of communication for transmitting information over a distance. Various technical means of communication provide the transmission of signals of two types: analog and discrete.

A synonym for the word "analog" is "continuous". For example, sound is a continuous wave process occurring in the atmosphere or other continuous medium. The term "discrete" means "separated", consisting of separate particles, elements, quanta. The first technical means of communication in history were designed to transmit texts in discrete form.

The 19th century was a great age of technical invention. In 1831 Michael Faraday discovered the phenomenon of electromagnetic induction, after which electrical engineering developed rapidly: the electric generator was invented, and means of transmitting electricity over a distance were created. Electricity has many uses, the most important being electric lighting and heating, the electric motor, and telecommunications, the transmission of information by means of electricity. The idea of transmitting information over wires seemed fantastic at the time: it became possible to transmit text at the speed of an electrical signal, close to the speed of light.

The first electromagnetic telegraph was created by the Russian scientist Pavel Lvovich Schilling in 1832. In 1837 the American Samuel Morse patented his design of an electromagnetic telegraph apparatus. He also developed the telegraph code known as Morse code.

A telegraph message is a sequence of electrical signals carried over wires from one telegraph apparatus to another. These technical circumstances led S. Morse to the idea of using only two types of signals, short and long, to encode a message transmitted over telegraph lines.

In Morse code each letter of the alphabet is encoded by a sequence of short signals (dots) and long signals (dashes). The table in Fig. 1 shows Morse code for the Latin and Russian alphabets.

Fig. 1. Morse code table

The most famous telegraph message is the SOS distress signal (Save Our Souls). Here is what it looks like in Morse code:

. . .   - - -   . . .

Three dots denote the Latin letter S, three dashes denote the letter O. Two pauses separate the letters from each other. The telegraph operator, who transmitted the message in Morse code, “tapped” it with a telegraph key: a dot - a short signal, a dash - a long signal, after each letter - a pause. On the receiving device, the message was recorded on paper tape in the form of graphic dots, dashes and spaces, which were visually read by the telegraph operator.

Morse code is a non-uniform code: the code length for different letters of the alphabet varies from one to six symbols (dots and dashes). For this reason a third symbol, a pause, is needed to separate the letters from each other.
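A tiny Pascal sketch of this three-symbol alphabet (only the letters S and O, whose codes are given above for the SOS example, are tabulated; all other letters are deliberately left out):

program MorseSOS;
{ Non-uniform code: letters have different lengths, so a pause (here a space) }
{ must separate them. }
function MorseLetter(c: char): string;
begin
  case c of
    'S': MorseLetter := '...';
    'O': MorseLetter := '---'
  else
    MorseLetter := '?'                   { letters outside this tiny table }
  end
end;
var
  msg: string;
  i: integer;
begin
  msg := 'SOS';
  for i := 1 to length(msg) do
    write(MorseLetter(msg[i]), ' ');     { the space plays the role of the pause }
  writeln
end.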

A uniform telegraph code was invented by the Frenchman Jean Maurice Baudot in 1870. It used only two different types of signals. It does not matter what you call them: dot and dash, plus and minus, zero and one. They are simply two distinguishable electrical signals.

In the Baudot code table the code length is the same for every character of the alphabet and equals five. In this case the problem of separating letters from one another does not arise: every five signals represent one text character.

Thanks to Baudot's idea it became possible to automate the transmission and printing of letters. In 1901 a keyboard telegraph apparatus was created. Pressing a key with a certain letter generates the corresponding five-pulse signal, which is transmitted over the communication line. The receiving apparatus, under the action of this signal, prints the same letter on a paper tape.

The Morse and Baudot telegraphs are discrete ways of transmitting information.

The next important development in communication technology was the invention of the telephone. In 1876 the American Alexander Bell received a patent for his invention. A year later Thomas Alva Edison invented a telephone set with a carbon microphone, which can still be found in use. Telephone communication transmits sound over a distance by means of a continuous electrical signal modulated at the frequencies of the sound vibrations. An alternating electrical voltage is created in the speaker's microphone and converted back into sound vibrations in the listener's earpiece. Telephony is an analog way of transmitting sound.

Thanks to Heinrich Hertz's discovery of electromagnetic waves in 1888, the invention of radio communication became possible. Almost simultaneously, Alexander Popov in Russia in 1895 and the Italian G. Marconi in 1896 invented the first radio transmitters and receivers. Contemporaries called the invention a wireless telephone. The principle of transmitting sound by radio is to send high-frequency (carrier) electromagnetic waves through space, amplitude-modulated by low-frequency sound vibrations. In the radio receiver the sound vibrations are separated from the carrier frequency and converted into sound. Radio communication is an analog method of transmitting sound.

In the 20th century, with the invention of television, it became possible to transmit images over a distance. The television electromagnetic signal is also an analog method of transmitting audio and video information.

In the second half of the twentieth century a transition began to a predominantly discrete form of representing information for its storage, transmission and processing. The process started with the invention of digital computing and measuring technology. Today computer processing is becoming an element of all communication systems: telephone, radio and television. Digital telephony and digital television are developing. The Internet, as a universal communication system, is based entirely on discrete, digital technology for storing, transmitting and processing information.

Questions and tasks

1. What is a signal?

2. Justify the correct use of the phrase “traffic signal”.

3. Give examples of analog signals in nature that transmit information.

4. Do you think human speech is an analog or discrete form of information transfer?

5. List the main events in the history of the invention of technical means of communication.

6. Why are digital communication technologies replacing analog ones lately?

§2. Text encoding

What is coding

Encoding is the representation of information as a combination of characters. Coding occurs according to certain rules. Encoding rules depend on the purpose of the code, i.e. on how and for what it will be used.

Writing is a way of encoding speech in natural language. Written text (also called written speech) is designed to transmit information from one person to other people both in space (letter, note) and in time (books, diaries, archives of documents, etc.). The rules by which people encode information in writing are called the grammar of a language (Russian, English, Chinese, etc.), and a person who can read and write is called a literate person.

If the recording of speech is called coding, then the reading of a written text is its decoding. Since we express our thoughts in the form of oral speech, the process of written information exchange between people can be displayed by the following diagram (see diagram).

With this written method of information exchange, paper is most often used as a medium.

With the invention of technical means of communication it became possible to transmit texts quickly over long distances, but this requires an additional layer of coding. Let us repeat the statement made above: the way of coding depends on the purpose of the code. If the code is intended for sending text over a technical communication system, it must be adapted to the capabilities of that system. An example of such a "technical" code is Morse code.

The process of transmitting a telegraph message using Morse code can be reflected in the diagram:

Ways to encode text

Text encoding always follows this rule: each character of the source text's alphabet is replaced by a combination of characters of the encoding alphabet. For Morse code these rules are given in the table in Fig. 1.

In the Morse code table, two characters, the dot and the dash, are used to encode 32 letters of the Russian alphabet (the letter Yo came into regular written use only in the middle of the 20th century). However, because different letters have codes of different lengths, a gap must also be used between letters when transmitting words: a pause in time or a blank space on the telegraph tape. So, in fact, the alphabet of the Morse telegraph code contains three characters: dot, dash, gap.

The Baudot telegraph code is a uniform five-bit binary code. On its basis the international telegraph code ITA2 was developed in 1932; its code table is shown in Fig. 2.

Fig. 2. Telegraph code ITA2

Binary character codes are written compactly as two-digit hexadecimal numbers in which the first digit takes the value 0 or 1. There are three kinds of characters: letters, figures (digits and signs), and control characters. Switching to letter-input mode is done by the code 1F₁₆ (binary 1 1111). The letter A has the code 03₁₆ (0 0011); the code of the letter R is 0A₁₆ (0 1010). The same code in figure-input mode denotes the digit 4. The word "BODO" is encoded in hexadecimal form as 19 18 09 18. The binary code of this word is 20 bits long.
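A Pascal sketch of the uniform five-bit idea, using only the letter codes quoted in this paragraph (B = 19, O = 18, D = 09 in hexadecimal); the full ITA2 table is in Fig. 2:

program BaudotBODO;
{ Uniform five-bit code: every character occupies exactly five bits, }
{ so no separator between letters is needed. }
var
  codes: array[1..4] of byte;
  i, bit: integer;
begin
  codes[1] := $19;  { B }
  codes[2] := $18;  { O }
  codes[3] := $09;  { D }
  codes[4] := $18;  { O }
  for i := 1 to 4 do
  begin
    for bit := 4 downto 0 do               { print the five bits, high bit first }
      write((codes[i] shr bit) and 1);
    write(' ')
  end;
  writeln;
  writeln('total length: ', 4 * 5, ' bits')   { 20 bits, as stated in the text }
end.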

In the second half of the twentieth century computers were created and spread widely. Computer word processing required a character-encoding standard. In 1963 such a standard was adopted under the name ASCII - American Standard Code for Information Interchange. ASCII is a seven-bit binary code; it is given in Table 1.

A character's code is its ordinal number in the code table. It can be written in the decimal, binary or hexadecimal number system. In computer memory the code is a seven-bit binary number. In Table 1 the ASCII codes are given in compact hexadecimal form. Expanded into binary form, the codes are seven-digit binary integers ranging from 000 0000₂ = 00₁₆ = 0 to 111 1111₂ = 7F₁₆ = 127, giving 2⁷ = 128 characters in total.

The first 32 characters (00 to 1F) are called control characters. They are not shown as any symbol on the screen or in print, but they control certain actions when text is output. For example, code 08₁₆ (BS) erases the previous character; code 07₁₆ (BEL) produces a sound signal; code 0D₁₆ (CR) means a jump to the beginning of the line (carriage return). These characters are inherited from the encoding used for teletype communication, for which ASCII was originally intended, which is why archaic terms such as "carriage" have survived.

Characters that have a graphical representation begin with code 20₁₆, the space, which skips one position in the output. An important property of the ASCII table is that uppercase letters, lowercase letters and decimal digits are encoded in order (alphabetically and numerically). This property is extremely important for software that processes character information, in particular for sorting words alphabetically.
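A small Pascal sketch of what this ordering property gives in practice (the examples are illustrative):

program AsciiOrder;
{ Because letters and digits stand in order in the ASCII table, case conversion }
{ and alphabetical comparison reduce to simple arithmetic on the codes. }
var
  c: char;
begin
  for c := 'a' to 'e' do
    write(chr(ord(c) - ord('a') + ord('A')));   { shift into the uppercase range: ABCDE }
  writeln;
  writeln('code of the digit 7: ', ord('7'));   { 55 decimal = 37 hexadecimal }
  writeln('''a'' < ''b'' is ', 'a' < 'b')       { character ordering follows the codes }
end.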

ASCII code extension. Eight-bit binary encoding makes it possible to encode an alphabet of 2⁸ = 256 characters. The first half of the eight-bit code coincides with ASCII. The second half consists of characters with codes from 128 = 80₁₆ = 1000 0000₂ to 255 = FF₁₆ = 1111 1111₂. This part of the encoding table is called a code page (CP). Non-Latin alphabets, pseudographic characters and other characters not included in the first half are placed on the code page.

Tables 2, 3 and 4 show code pages containing the Russian alphabet. CP866 is used in the MS DOS operating system and CP1251 in the Windows operating system. The KOI8-R encoding is used in the Unix operating system; its first half is the same as ASCII.

Please note that not all encodings follow the rule of sequential encoding of the Russian alphabet. There are other character encoding standards where the Russian alphabet is present.

The 16-bit UNICODE standard. In 1991 the sixteen-bit international character-encoding standard Unicode was developed; it allows 2¹⁶ = 65,536 characters to be encoded. The English (Latin), Russian (Cyrillic) and Greek alphabets, Chinese characters, mathematical symbols and much more fit into such a code table, and code pages are no longer needed. The character codes in hexadecimal form range from 0000 to FFFF.

The beginning of the code table, the range from 0000₁₆ to 007F₁₆, contains the ASCII characters. Cyrillic characters are allocated the code ranges from 0400₁₆ to 052F₁₆, from 2DE0₁₆ to 2DFF₁₆, and from A640₁₆ to A69F₁₆.

Learning to Program

Consider a Pascal program that displays the encoding table for codes from 20 to 255.

program Tabl_code;
uses CRT;                        { library for controlling character output to the screen }
var kod: byte;                   { integer codes from 0 to 255 }
begin
  clrscr;                        { clear the character output screen }
  for kod := 20 to 255 do        { enumerate the character codes }
  begin
    if kod mod 10 = 0 then
      writeln;                   { line feed after every 10 codes }
    write(chr(kod):3, kod:4)     { output the character and its code }
  end
end.

The uses CRT statement attaches to the program a library of routines for controlling character output on the monitor screen. The program then uses one procedure from this library, clrscr, which clears the screen.

A variable of type byte occupies 1 byte of memory and takes non-negative integer values in the range from 0 to 255.

The program uses the standard function chr(kod), which returns the character whose decimal code equals the value of the variable kod.

The values are displayed in pairs: character and code, ten pairs per line. The entire table fits into 24 lines.

Questions and tasks

1. Define the concepts: code, encoding, decoding.

2. Give examples of encoding and decoding that were not mentioned in the paragraph.

3. What is the difference between uniform and non-uniform codes?

4. Encode the word COMPUTER using ITA2 and ASCII codes.

5. How to read the phrase “SPARTAK - CHAMPION”, encoded using CP1251, if the decoding is done using the KOI8-R code?

6. A letter was written in KOI8-R encoding, beginning with the phrase: “Hello, dear Sasha!” Decoding took place according to the seven-bit ASCII code, as a result of which the most significant (eighth) bit of all characters was lost. Write down the final text. Will the recipient be able to understand the content of the letter?

7. Using a spreadsheet, determine which code page is used on your computer. For example, Excel has a CHAR(code) function that returns the character corresponding to a given decimal code. The inverse function is CODE(character).

8. Implement the program Tabl_code on the computer and run it.

9*. Write a similar program that would output binary character codes.

10*. Write a similar program that would output hexadecimal character codes.

§3. Image encoding

According to some estimates, a person perceives about 90% of information from the outside world through visual means. Human vision is a natural ability to perceive the image of objects in the surrounding world. The visual system perceives the light reflected or emitted by objects of observation. The reflected image is everything that we see in daylight or artificial light. For example, we read a book, look through the illustrations in it. Examples of emitted images are images on a TV or computer screen.

Since ancient times, people have learned to save and transmit images in the form of drawings. Photography appeared in the 19th century. The invention of cinema by the Lumiere brothers in 1895 made it possible to transmit moving images. In the 20th century, the video recorder was invented - a means of recording and transmitting images on magnetic tape.

Methods of image encoding have developed with the advent of digital technologies for storing, transmitting and processing images: digital photography, digital video, computer graphics.

When an image is encoded, it is spatially sampled and the light emanating from each discrete element of the image is encoded. In computer technology, the spatial grid of discrete elements from which the image on the monitor screen is built is called a raster, and the discrete image elements on the screen themselves are called pixels (Fig. 3). The denser the grid of pixels, the higher the image quality and the less our eyes notice its discrete structure.

Video information is the binary image code stored in the computer's memory. The complete video code is made up of the codes of the light emitted by the individual pixels.

The natural images we see around us are multicolored. Image-storage technologies provide ways to obtain both monochromatic, i.e. single-color, and color images. As you know, black-and-white photography and black-and-white cinema appeared first, and only later color photography and color cinema; the same is true of television. The first computer displays had black-and-white screens, while modern computers use color monitors.

Color (red, yellow, green, etc.) is the subjective perception of the color of light by a person. The objective difference between light of different colors lies in the different lengths of light waves. The subjective nature of color perception is confirmed, for example, by the fact that people suffering from color blindness do not distinguish certain colors at all.

Monochromatic Light Coding

The word "monochromatic" means single-colored: there is one background color, and the whole image is made up of shades of this background color that differ in brightness (also referred to as transparency). For example, if the background color is black, then by gradually brightening it you can pass through shades of gray to white (Fig. 4). Let us call this continuous set of shades, from black to white, the black-and-white spectrum. Such shades make up the image in black-and-white photography and on film and television screens. All the drawings in this textbook are black and white.

Fig. 4. The continuous black-and-white spectrum

However, the background color does not have to be black. It can be brown, blue, green, etc. This happens in tinted photographs. There were monochrome monitors with a brown or green background color.

The monochrome light code indicates the brightness level of the background color. Computers use positive integer binary numbers to digitally encode light. The size of the binary code in bits is called the light coding depth.

With discrete digital coding, the continuous spectrum of shades of the base color is divided into an integer number of segments, within each of which the brightness is considered constant.

For natural light the number of shades of the background color is infinite; with digital coding the number of shades becomes finite. The number of shades K and the coding bit depth b are related by the formula K = 2ᵇ.

The main formula of computer science works again!

The actual brightness of the image depends on the physical conditions of its transmission: the level of illumination from the light source when the image is reflected or the power of the light flux from the monitor when the image is emitted. If the maximum brightness is taken as one, then the value of the brightness of light in the range from black to white will vary from zero to one.

Figure 5 shows the sampling of the black-and-white spectrum for b = 2. This means the code is 2 bits long and the whole spectrum is divided into four levels, i.e. 4 shades.

Fig. 5. Monochromatic encoding with depth 2

Natural light with brightness from 0 to 1/4 is represented as black, whose decimal code is 0 and binary code 00. Next come two shades of gray. Light in the brightness range from 3/4 to 1 is represented as white, whose code is 3 = 11₂. If the brightness level is expressed as a percentage, the black-and-white coding rules for b = 2 can be shown in a table:

Figure 6 shows the sampling of the black-and-white spectrum for b = 4. Since 2⁴ = 16, sixteen different black-and-white shades are encoded in this way; their decimal and binary codes are given.

Example. Let us consider a model example of encoding a black-and-white image. The monitor raster is 8 x 8 pixels and the encoding depth is b = 2 bits. The image is shown in Fig. 7. The numbers indicate the row and column numbers of the raster; each cell is one pixel of the image.

The letter "P" is drawn on the screen. Its three segments are painted in different shades of the background color: black, dark gray and light gray. The binary code of the image, row by row, is:

11 11 11 11 11 11 11 11
11 01 01 01 01 01 11 11
11 00 11 11 11 10 11 11
11 00 11 11 11 10 11 11
11 00 11 11 11 10 11 11
11 00 11 11 11 10 11 11
11 11 11 11 11 11 11 11
11 11 11 11 11 11 11 11

Fig. 7. The discrete image

For clarity, the binary code is presented as a matrix, the lines of which correspond to the lines of the raster on the screen. In fact, computer memory is one-dimensional and all code is a chain of zeros and ones located in consecutive bytes of memory. The volume of such video information is equal to 16 bytes. If we translate this code into hexadecimal form, then it will be as follows:

FFFF D55F CFEF CFEF CFEF CFEF FFFF FFFF
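A minimal Pascal sketch of the video-code size calculation implied by this example (raster width x height x coding depth), checked against the 16 bytes obtained above:

program VideoVolume;
{ Video code size = number of pixels * coding depth (in bits). }
var
  width, height, depth, bits: longint;
begin
  width := 8; height := 8; depth := 2;      { the model example from the text }
  bits := width * height * depth;
  writeln(bits, ' bits = ', bits div 8, ' bytes')   { prints: 128 bits = 16 bytes }
end.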

When encoding a color image, various approaches are used, which are called color models. This will be discussed in detail in the section on computer graphics technologies.

Questions and tasks

1. Define the concepts: light, color, image.

2. What cannot be seen and what can be seen in absolute darkness?

3. What is a raster, a pixel?

4. What is the difference between a monochrome image and a color one?

5. What is the black and white spectrum?

6. What information is contained in the computer code of the image?

7. What is the size of the video code of the image displayed on the screen, with a raster size of 640x480 and a coding depth of 8 bits?

8. What is the image encoding depth if the video code size is 384 KB and the raster size is 1024x768?

9. The volume of the video code is 600 Kb, the encoding depth is 16 bits. What is the raster size used to display the image: 640x480 or 1024x768?

10. On a black-and-white “toy” monitor with a resolution of 8 x 8 pixels (see the example in the paragraph), the letters are displayed in turn: H, A, W. Develop and write down for each output image its binary and hexadecimal codes. The encoding depth is two. Different elements of letters have different color shades.

11. Restore the image on a black-and-white “toy” monitor by the hexadecimal code: F3F7 F3D7 F37F F1FF F3BF F3EF F3FB FFFF, - if the encoding depth is two.

§4. Analog signal encoding technology

In §1 the concepts "analog signal" and "discrete signal" were defined. A light signal is analog because it is carried by a continuous stream of electromagnetic radiation. A sound signal is carried by an acoustic wave, a continuous process of air-pressure change at a sound frequency.

To store images and sound in digital form, the corresponding analog signals must be encoded, i.e. represented as a discrete sequence of zeros and ones, binary digits. The process of converting an analog signal into discrete digital form is called analog-to-digital conversion, or ADC for short.

Figure 8 shows a scheme for converting an analog signal of natural origin into a discrete digital code.

It follows from this scheme that both light and sound signals are initially converted into a continuous electrical signal, which is then subjected to analog-to-digital conversion.

Digitization of an image takes place during shooting with digital cameras and video cameras, and also when an image is entered into a computer with a scanner. The physical basis of the conversion of light into electric current is the appearance of an electric charge in a semiconductor device, a photodiode, under the influence of the light falling on it.

The magnitude of the electric potential arising on the photodiode is proportional to the brightness of the light flux. This value changes continuously as the brightness of the light changes.

Analog-to-digital conversion consists in measuring the magnitude of the electrical signal.

Measurement results in digital format are stored in the memory device. Spatial sampling occurs through the use of a photodiode array that divides the image into a finite number of elements.

Audio encoding

Let us look at the ADC process in more detail using the example of encoding sound as it is entered into a computer. When sound is recorded into a computer, the device that converts sound waves into an electrical signal is a microphone. Analog-to-digital conversion is performed by an electronic circuit on the computer's sound card, to which the microphone is connected.

The amplitude and frequency of the electrical signal that leaves the microphone and enters the sound card correspond to the amplitude and frequency of the acoustic signal. Therefore, measuring the electrical signal makes it possible to determine the characteristics of the sound wave: its frequency and amplitude.

An analog signal is a process of continuous change of the signal amplitude over time (Fig. 9).

Fig. 9. Sampling of an analog signal

There are two main parameters of audio encoding: the sampling rate and the bit depth. The signal amplitude is measured at regular time intervals. The length of this interval is called the sampling step and is measured in seconds. Let us denote the sampling step by t (s). Then the sampling frequency is expressed by the formula:

H = 1/t (Hz)

The frequency is measured in hertz. One hertz corresponds to one measurement per second: 1 Hz = 1 s⁻¹.

The higher the sampling rate, the more detailed the numerical code will reflect the change in signal amplitude over time. Good sound recording quality is obtained at sampling rates of 44.1 kHz and higher (1 kHz = 1000 Hz).

The encoding bit depth (b) is the size of the binary code that represents the signal amplitude in the computer's memory. The bit depth is related to the number of levels K into which the signal amplitude is divided by the formula K = 2ᵇ.

The process of discretizing the amplitude of the sound is called quantization. The value K can then be called the number of sound quantization levels (Fig. 10).

Fig. 10. Quantization of an analog signal

The measured values are entered into a register of the sound card, a special memory device. The width of the register in bits is b, the encoding bit depth; below this value is also called the quantization bit depth. The measurement result is stored in the register as a binary integer.

The measured physical value is rounded to the nearest integer value that can be stored in the register of the sound card.

Figure 11 shows how this happens with three-bit quantization of an analog signal. Graphically, the sampling and quantization of sound can be represented as a transition from a smooth curve to a broken line consisting of horizontal and vertical segments. It is assumed that during each time step the value of the measured quantity remains constant.

Fig. 11. Measurement of a varying physical quantity
using a three-digit register

The results of such a measurement will be recorded in the computer memory as a sequence of three-digit binary numbers.

The volume of recorded sound information is equal to:

3 x 9 = 27 bits.

In practice three-bit sampling is not actually used; it is considered here only as a study example. The smallest register size in real devices is 8 bits. In that case each measured value occupies 1 byte of computer memory and the number of quantization levels is 2⁸ = 256, so measurements with such a register are 32 times more accurate than with a three-bit register. With a 16-bit register each value occupies 2 bytes of memory, and the number of quantization levels is 2¹⁶ = 65,536. The greater the quantization bit depth, the higher the accuracy with which the physical quantity is measured, but the amount of memory occupied grows as well.

The discrete digital representation of an analog signal reflects it more accurately, the higher the sampling rate and quantization bit depth.
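A minimal Pascal sketch of the quantization step for a b-bit register (the measured amplitude is assumed here to be normalized to the range 0..1, which is an assumption of the sketch, not a statement of the textbook):

program QuantizeSample;
{ Map a measured amplitude onto one of K = 2^b levels, }
{ as a b-bit register of the sound card would store it. }
const
  b = 3;                                  { register width from the example above }
var
  levels, code: integer;
  amplitude: real;                        { measured value, normalized to 0..1 (assumed) }
begin
  levels := 1 shl b;                      { K = 2^b = 8 quantization levels }
  amplitude := 0.63;                      { example measurement }
  code := trunc(amplitude * levels);      { index of the band the value falls into }
  if code >= levels then code := levels - 1;   { keep the maximum inside the register }
  writeln('K = ', levels, '  code = ', code)   { prints: K = 8  code = 5 }
end.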

The Nyquist-Kotelnikov theorem. A person hears sound vibrations roughly in the frequency range from 20 Hz to 20 kHz. Sound with frequencies above this range is called ultrasound; sound of lower frequency, infrasound. Communication theory has the Nyquist-Kotelnikov theorem, according to which the ADC sampling frequency must be at least twice the frequency of the analog signal. This means that if we want to store information about sound with a frequency of 20 kHz in binary code, the sampling frequency must be at least 40 kHz. The modern digital audio standard uses a sampling frequency of 44.1 kHz.

The following figurative analogy of the theorem can be given. The mesh size of a fishing net depends on the size of the fish it is meant to hold: the smaller the mesh, the smaller the fish the net retains. Paraphrased in fishing terms, the Nyquist-Kotelnikov theorem sounds like this: the side of a square mesh of the net must be half the cross-section of the smallest fish you want to catch. For example, if the fish must be at least 10 cm across, the side of a square mesh must be no more than 5 cm. In an ADC the "caught" harmonics are like the fish that end up in the net, and the sampling step is like the mesh size. Harmonics are discussed in more detail in the next section.

Task 1. Sound was recorded into a computer for 10 seconds. Determine the amount of recorded information if the sampling frequency was 10 kHz and the quantization bit length was 16 bits.

The number of measurements of the sound signal (N) at sampling rate H (Hz) over time t (s) is given by the formula N = H·t. Substituting the data of the problem, we get N = 10,000 · 10 = 100,000 measurements. The quantization bit depth is 16 bits = 2 bytes. Hence the volume of the sound information is:

I = 100,000 · 2 = 200,000 bytes = 200,000 / 1024 KB = 195.3125 KB
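The same calculation written as a short Pascal sketch (the numbers are those of Task 1):

program SoundVolume;
{ I = H * t * (b / 8) bytes: the calculation from Task 1 expressed as code. }
var
  H, t, b: longint;        { sampling rate (Hz), time (s), bit depth (bits) }
  bytesTotal: longint;
begin
  H := 10000;  t := 10;  b := 16;             { the data of Task 1 }
  bytesTotal := H * t * (b div 8);
  writeln(bytesTotal, ' bytes = ', bytesTotal / 1024 :0:4, ' KB')   { 195.3125 KB }
end.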

Task 2. The recorded sound is stored in a file. The data was not compressed. The file size is 1 MB. It is known that the recording was made at a frequency of 22 kHz with a sound quantization bit depth of 8 bits. Determine the playback time of the sound stored in the file.

From the solution of the previous problem it follows that the volume of audio information (I), the sampling frequency (H), the quantization bit depth (b) and the recording time (t) are connected by the formula: I = H·b·t

If the recorded sound is played back without distortion, then the playback time is equal to the recording time. From here, the desired value is calculated by the formula:

t = I/(H·b)

For the calculation we convert I and b into bytes and H into hertz:

t = 1 · 1024 · 1024 / (22,000 · 1) ≈ 47.66 s

Questions and tasks

1. Name the main stages of the technology of encoding an analog signal of natural origin.

2. In what devices does light coding take place?

3. What technical devices are used to encode sound?

4. Give definitions to the concepts: sampling rate, quantization bit depth, quantization levels.

5. Determine the volume of the digital code when sound is recorded for 1 minute, if the sampling frequency was 44.1 kHz and the quantization bit depth was 8 bits.

6. Determine the sampling rate when encoding sound, if the volume of the sound file is 500 Kb, the recording time is 0.5 minutes, the quantization bit depth is 16 bits. The file is obtained after 50% compression of the source code.

§5. Numerical experiments on sound processing

The graph of a function Y(x) is a visual (graphical) display of how the value of the function Y depends on the value of the argument x. The graph is plotted over the domain of the function (the range of the argument x) and its range of values Y. If the function has an infinite domain, a segment on which the behavior of the function is most typical is chosen for plotting. The graph of a periodic function should show at least one period of the change in the function's values.

Experiment 1: harmonic vibrations

Let us consider how to plot the graph of a periodic function that describes harmonic oscillations. Harmonic oscillations are periodic changes over time of some physical quantity described by the sine or cosine function. In general form they look like this:

Y = A·sin(2πνt + φ) or Y = A·cos(2πνt + φ)

Here A is the oscillation amplitude; t is time (the argument of the function); ν is the oscillation frequency, measured in hertz; φ is the initial phase of the oscillation.

The period of the sin and cos functions is 2π. The value of the function Y varies in the range from -A to +A. The graph of the sine function is called a sinusoid.

Sound vibrations described by a harmonic function are called harmonic vibrations. Pure musical tones: do, re, mi, etc. - are harmonic sound vibrations of different frequencies. Harmonic sound vibrations are emitted by a tuning fork - a reference source of musical tone. Harmonic oscillations are performed by a mathematical pendulum. In an electric oscillatory circuit, the current strength periodically changes according to a harmonic law.

Consider a way to plot a harmonic function in a spreadsheet environment. We will show how this is done using the MS Excel spreadsheet as an example.

The work takes place in two stages:

1 - function tabulation;

2 - plotting a function graph.

The resulting spreadsheet is shown in Fig. 12.

Fig. 12. Table and graph of the harmonic function.

The parameters of the function are the oscillation frequency ν and the amplitude A. These parameters are entered in cells C1 and C2, respectively. The value of the initial phase φ is taken equal to zero.

Tabulation is the construction of a table of function values on a certain interval of argument values with a constant step. The tabulation step (Δt) is stored in cell G1.

The table is placed in cells A4:B25. Column A contains the values of the argument (time t); column B contains the function values Y = A·sin(2πνt). The time starts from the value t = 0 (cell A5). Cell A6 contains the formula =A5+$G$1, which is then copied into the cells of column A below. This ensures that the time changes with the constant step stored in cell G1.

The following formula is entered in cell B5:

=$C$2*SIN(2*PI()*$C$1*A5).

This formula calculates the value of the function for the argument in cell A5. The standard function PI() returns the value of the number π. The formula from cell B5 is copied down the column to cell B25.

Fig. 12 shows the results of tabulating the function for ν = 10 Hz and A = 1. The tabulation step is 0.005. At a frequency of 10 Hz, the oscillation period is 1/10 = 0.1 s. With a tabulation step of 0.005, 20 steps fit into one period, which is quite enough points to plot the graph of the function.
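
As a cross-check outside the spreadsheet, the same tabulation can be sketched in a few lines of Python (an illustration under the same assumptions: A = 1, ν = 10 Hz, step 0.005, 21 rows):

import math

A, nu, step = 1.0, 10.0, 0.005               # amplitude, frequency (Hz), tabulation step
for k in range(21):                          # 21 rows, like the range A5:B25
    t = k * step
    y = A * math.sin(2 * math.pi * nu * t)   # the same formula as in cell B5
    print(f"{t:.3f}  {y:+.3f}")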

Building the graph. For graphical data processing, the spreadsheet processor has a wizard for constructing charts and graphs. It is invoked from the menu with the commands Insert - Chart. The further steps are as follows:

1 - select the chart type: standard - scatter (XY), view - smooth lines;

2 - set the data range (function values): in columns - B5:B25; on the Series tab, X values: A5:A25;

3 - define the title: Y = A·sin(2πνt); axis labels: t, Y; grid lines; legend (none); data labels (none);

4 - specify on which sheet of the workbook to place the chart.

Click Finish. The chart is built.

Line thickness, background color and grid type can be adjusted separately using the context menu (right mouse button) by setting the desired object formats.

A person hears sound vibrations, on average, in the frequency range from 20 Hz to 20 kHz. A frequency of 10 Hz is an infrasound frequency; some animals can perceive it. If the frequency is doubled, the lower limit of human hearing is reached, and two oscillation periods will then fit into the 0.1-second time interval. Such an experiment is easy to perform on the constructed spreadsheet: change the frequency value in cell C1 to 20, after which the table will be recalculated and the graph will take the form shown in Fig. 13.

Fig. 13. Graph of sound vibrations for ν = 20 Hz

Two periods of the function fit into the 0.1-second time interval; therefore, the oscillation period is 0.05 s.

Tasks. Run some spreadsheet experiments for frequency values: 5, 15, 30, 40 Hz. In each case, determine how many oscillation periods fit into the 0.1 second interval.

Experiment 2: non-harmonic vibrations

In the branch of mathematics called harmonic analysis, it is proved that any periodic function Y(t) with frequency ν can be represented as a sum of harmonic (sinusoidal) functions with frequencies ν, 2ν, 3ν, 4ν, … Such terms are called harmonics, and the representation of a function as a sum of harmonics is called its harmonic expansion:

Y(t) = A1·sin(2πνt + φ1) + A2·sin(4πνt + φ2) + A3·sin(6πνt + φ3) + …

Here A1, A2, … are the amplitudes of the harmonics, and φ1, φ2, … are their initial phases. For some functions the number of terms is finite; for others it may be infinite.

Example. Let us build the graph of a non-harmonic periodic function represented as the sum of two harmonics:

Y(t) = A1·sin(2πνt) + A2·sin(4πνt)

The initial phases are equal to zero. Let us perform the calculations for the following parameter values: ν = 20 Hz, A1 = A2 = 1. As above, the calculations cover the time interval from 0 to 0.1 s with a tabulation step of 0.005.

To get a table of values, just replace the contents of cell B5 with the following formula:

=$C$2*SIN(2*PI()*$C$1*A5)+$C$2*SIN(2*PI()*2*$C$1*A5)

Then copy this formula down column B.

Fig. 14. Graph of non-harmonic oscillations

The plotted graph is shown in Fig. 14. It can be seen from the graph that the oscillation period is 0.05 s, i.e. equal to the period of the first harmonic. The maximum oscillation amplitude has increased and is now approximately 1.54.
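
The value 1.54 refers to the maximum over the tabulated points; a small Python sketch (illustrative only, with the same parameters as above) reproduces it:

import math

nu, A1, A2, step = 20.0, 1.0, 1.0, 0.005
ys = [A1 * math.sin(2 * math.pi * nu * t) + A2 * math.sin(4 * math.pi * nu * t)
      for t in (k * step for k in range(21))]
print(round(max(ys), 2))    # about 1.54 at the tabulated points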

1. Get a graph of oscillations that differs from the one considered in the example in that the amplitude of the second harmonic is half that of the first: A2 = A1/2.

2. Get a graph of oscillations consisting of three harmonics with the following parameters: A1 = 1, ν1 = 20 Hz; A2 = A1/2, ν2 = 2ν1; A3 = A2/2, ν3 = 2ν2. The initial phases are equal to zero.

3. Get a graph of oscillations made up of two harmonics with the following parameters: A1 = 1, ν1 = 20 Hz, φ1 = 0; A2 = A1, ν2 = 2ν1, φ2 = π/2. Compare the resulting graph with the graph in Fig. 14. How did the phase shift between the harmonics affect the amplitude and the period of the oscillations?

Experiment 3: Sampling and Quantization of Sound Vibrations

This experiment simulates the process of analog-to-digital conversion. ADC includes sampling the signal in time and quantizing the signal amplitude values. Time sampling is determined by the sampling frequency H (Hz); the time step between two measurements is 1/H seconds.

The amplitude quantization process is determined by the sound quantization depth parameter b. The number of quantization levels is 2^b. The codes that represent the amplitude of the audio signal are integers in the range from 0 to 2^b − 1.

The model of the audio signal quantization process implemented in the spreadsheet is shown in Fig. 15. We consider a harmonic signal with a frequency ν = 20 Hz. The signal frequency value is stored in cell C1. The ADC sampling frequency is H = 200 Hz (cell C2). The quantization depth is b = 8 bits (cell G2).

Column A contains the moments of time at which the signal is measured during the ADC. Cell A5 holds the initial moment of time t = 0; then the time increases in steps of 1/H s. Cell A6 contains the formula =A5+1/$C$2, which is then copied down column A.

The amplitude value of the analog signal is calculated by the formula:

Y = 0.5·(1 + sin(2πνt))

This transformation of the sinusoid shifts it into the region of non-negative Y values, in the range from 0 to 1, which simplifies the description of the subsequent quantization process. Cell B5 contains the formula =(1+SIN(2*PI()*$C$1*A5))/2, which is then copied down column B.

Column C contains the codes of the signal amplitude measurements, represented as decimal integers. When written to the computer's memory, they are converted to the binary number system. Cell C5 contains the formula =INT(B5*2^$G$2); the INT function selects the integer part of a number. Its meaning is as follows: since Y lies in the range from 0 to 1, the value of the expression is an integer in the range from 0 to 2^b.
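
The same three columns (time, shifted signal, integer code) can be reproduced outside the spreadsheet; here is a minimal Python sketch under the parameter values used above (ν = 20 Hz, H = 200 Hz, b = 8):

import math

nu, H, b = 20.0, 200.0, 8             # signal frequency, sampling rate (Hz), bit depth
levels = 2 ** b                       # number of quantization levels
for k in range(21):                   # 21 samples, as in the table
    t = k / H                         # time step 1/H, like =A5+1/$C$2
    y = (1 + math.sin(2 * math.pi * nu * t)) / 2   # shifted into the range 0..1
    code = int(y * levels)            # integer part, like =INT(B5*2^$G$2)
    print(f"{t:.4f}  {y:.4f}  {code:4d}")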

When constructing the "Signal Coding" chart, select the "Histogram" (bar) type: the discrete form of the histogram clearly reflects the discrete nature of the code. The table is based on 21 signal measurements. For the given values of ν and H, two periods of the signal oscillations were measured.

When any of the three model parameters ν, H and b is changed, the table is automatically recalculated. For example, if you double the sampling rate, i.e. enter the number 400 in cell C2, you get the graphs shown in Fig. 16.

Fig. 15. Harmonic analog signal and quantization results.

Fig. 16. ADC with a sampling rate of 400 Hz

The measurements were made on one oscillation period. The discrete code now describes the oscillatory process in more detail.

Fig. 17. ADC with a quantization depth of 16 bits
and a sampling rate of 400 Hz

The quantization histogram in Fig. 17 was obtained for b = 16. It can be seen that the range of code values has increased; therefore, the encoding gives more accurate information about the signal amplitude than with b = 8.

1. Carry out the calculations with the parameter values ν = 20 Hz, H = 100 Hz, b = 8 bits. Compare with the results in Fig. 15. Draw conclusions.

2. Carry out numerical experiments on coding non-harmonic oscillations. Take the functions describing non-harmonic oscillations from the tasks for experiment No. 2.

3*. It follows from the Nyquist-Kotelnikov theorem that, in order to restore harmonic oscillations of frequency ν from a discrete code, the sampling frequency must be at least 2ν, i.e. the condition H ≥ 2ν must be met. Try to test this theorem on our model and explain your results.

4. Write a Pascal program that simulates the process of encoding an analog signal (without graphics). The program should reproduce the tables that were obtained above in the spreadsheet environment.

§6. Binary Compression

Any information in a computer is represented in the form of a binary code. The larger the volume of this code, the more memory it occupies and the more time is required to transmit it over communication channels. All this affects computer performance and the efficiency of computer networks.

The volume of data is reduced by compressing the binary code. There are two compression situations:

1) loss of information as a result of compression is unacceptable;

2) partial loss of information as a result of compression is acceptable.

In the first case, compression, or packing, of data is performed only for temporary storage on media or transmission over communication channels. To work with the data, it must be unpacked, i.e. restored to its original form. Not a single bit may be lost: if text is compressed, then after decompression not a single character may be distorted; a compressed program must also be fully recoverable, since the slightest corruption will render it inoperable. Lossless compression is commonly used when creating file archives.

Packing with partial loss of information is used when compressing image code (graphics, video) and sound. This possibility is based on the subjective characteristics of human vision and hearing.

Research has shown that our vision is affected more by the brightness of an image point (pixel) than by its color properties. Therefore, the amount of video code can be reduced by storing color codes not for every pixel, but for every second, third, etc. raster pixel. The larger these gaps, the more the video data is compressed, but the worse the image quality.

When encoding video films (dynamic images), the inertia of vision is taken into account: fast-moving scenes can be encoded in less detail than static frames.

Sound code is the hardest to compress. With good recording quality, its uncompressed volume is very large and its redundancy is relatively small. Here, too, the psychophysiological features of human hearing are used: it is taken into account which harmonics of natural sound our hearing is more sensitive to and which less. Weakly perceived harmonics are filtered out by mathematical processing. Compression is also helped by taking into account the non-linear relationship between the amplitude of sound vibrations and our perception of loudness.

Various image and sound compression algorithms are used in the various formats for presenting graphics, video and sound. More will be said about this in the section on information technology.

Packing without loss of information. There are two approaches to compressing information without loss. The first is based on the use of non-uniform (variable-length) character codes. The second is based on the idea of identifying repeated code fragments.

Let us consider a way to implement the first approach. In an eight-bit character encoding table (for example, KOI-8), each character is encoded with eight bits and therefore occupies 1 byte in memory. In section 1.2.3 of our textbook, it was said that the frequency of occurrence of different letters (characters) in text differs. It was also shown there that the lower a character's frequency of occurrence, the greater its informational weight. The idea of compressing text in computer memory is connected with this circumstance: to abandon encoding all characters with codes of the same length. Characters with less informational weight, i.e. frequently occurring ones, are encoded with shorter codes than rarer characters. With this approach, the total volume of the text code, and accordingly the space it occupies in computer memory, can be significantly reduced.

We have already considered Morse code, which applies the principle of a non-uniform code. If a dot is encoded as zero and a dash as one, we get a binary code. True, there is the problem of separating letters from one another; in a telegraph message it is solved by a pause, which is, in effect, a third character of Morse code.

One of the simplest but very effective ways to construct a binary non-uniform code that does not require a special separator is the algorithm of D. Huffman (D.A. Huffman, 1952). A variant of the Huffman code table for the uppercase letters of the Latin alphabet is given in Table 5.

In this table, the letters are arranged in descending order of their frequency of occurrence in text. The most commonly used letters, E and T, have 3-bit codes, while the rarest letters, Q and Z, have 10-bit codes. The larger the text encoded with such a code, the smaller its information volume compared to a single-byte encoding.

The peculiarity of this code is its so-called prefix structure: the code of no character coincides with the beginning of the code of any other character. For example, the code for the letter E is 100; look at Table 5 - there is no other code that starts with these three digits. It is on this basis that characters are separated from each other algorithmically.

Example 1. Using the Huffman code, encode the following text, consisting of 29 characters:

WENEEDMORESNOWFORBETTERSKIING

Using Table 5, we encode the string:

011101 100 1100 100 100 11011 00011 1110 1011 100 0110 1100 1110 011101 01001 1110 1011 011100 100 001 001 100 1011 0110 110100011 1010 1010 1100 00001

After placing this code in memory byte by byte, it will take the form:

01110110 01100100 10011011 00011111 01011100 01101100 11100111 01010011 11010110 11100100 00100110 01011011 01101000 11101010 10110000 001

In hexadecimal form, it will be written like this:

76 64 9B 1F 5C 6C E7 53 D6 E4 26 5B 68 EA B0 20.

Thus, a text that occupies 29 bytes in ASCII encoding takes 16 bytes in the Huffman encoding. The compression ratio is the ratio of the size of the code after compression to its size before compression (i.e. in the 8-bit encoding). In this example the compression ratio is 16/29 ≈ 0.55.
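
The byte packing can be checked with a short Python sketch; the codewords below are copied from the example above, and the final byte is padded with zeros:

bits = ("011101 100 1100 100 100 11011 00011 1110 1011 100 0110 1100 1110 "
        "011101 01001 1110 1011 011100 100 001 001 100 1011 0110 110100011 "
        "1010 1010 1100 00001").replace(" ", "")
padded = bits + "0" * (-len(bits) % 8)                  # pad the last byte with zeros
packed = bytes(int(padded[i:i + 8], 2) for i in range(0, len(padded), 8))
print(len(bits), "bits ->", len(packed), "bytes:", packed.hex(" ").upper())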

Decoding (unpacking) of the text is performed using the binary Huffman coding tree. A graphic image of the Huffman tree corresponding to Table 5 is shown in Fig. 18. A tree is called binary if no more than two branches emerge from each vertex.

The leaves of this tree, located at the ends of the branches, are the symbols of the alphabet. The symbol code is formed from a sequence of binary digits located on the path from the tree root to the symbol leaf.

Text decompression is performed by scanning the binary code from left to right, starting with the first digit and moving from the root along the corresponding branches of the tree (those marked with the same binary digits) until a letter is reached. After a letter has been extracted from the code, decoding of the next letter starts again from the root of the binary tree.
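
Because of the prefix property, the tree walk can equally well be implemented by accumulating bits until they match a complete codeword. The Python sketch below uses a small made-up code table (Table 5 is not reproduced here), so it only illustrates the mechanics of the scan:

codes = {"A": "00", "B": "01", "N": "10", "T": "110", "Q": "111"}   # illustrative prefix code
decode_map = {bits: ch for ch, bits in codes.items()}

def decode(bitstring: str) -> str:
    result, buf = [], ""
    for bit in bitstring:
        buf += bit                    # descend one branch of the implicit tree
        if buf in decode_map:         # a leaf is reached: a complete codeword
            result.append(decode_map[buf])
            buf = ""                  # start the next letter from the root
    return "".join(result)

encoded = "".join(codes[ch] for ch in "BANANA")
print(encoded, "->", decode(encoded))      # 010010001000 -> BANANA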

Example 2. Decode the following binary code obtained by the Huffman algorithm (the code is separated into bytes by spaces):

01010001 00100101 00100011 11111100

Moving along the Huffman tree, starting from the first digit on the left, we obtain the following decoding:

The result is the word HUFFMAN. The packed code took 4 bytes, the source code 7 bytes. Therefore, the compression ratio is 4/7 ≈ 0.57.

The tree in Fig. 18 is an abbreviated version of the Huffman code. In full, it should take into account all possible characters found in a text: spaces, punctuation marks, brackets, etc.

In programs that compress text, a character frequency table is built for each text being processed, and variable-length codes of the Huffman type are then generated from it. In this case, text compression becomes even more efficient, since the encoding is tuned specifically to that text. In programming theory, algorithms that find the optimal solution for each specific variant of a problem are called greedy algorithms.
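
As a sketch of how such a per-text code can be generated, here is a compact Python version of the greedy Huffman construction (illustrative only; it will not reproduce Table 5, since the resulting codes depend on the chosen text and on tie-breaking):

import heapq
from collections import Counter
from itertools import count

def huffman_codes(text):
    """Build a prefix code tuned to the character frequencies of `text`."""
    tie = count()                       # tie-breaker so the heap never compares trees
    heap = [(f, next(tie), ch) for ch, f in Counter(text).items()]
    heapq.heapify(heap)
    while len(heap) > 1:                # greedily merge the two rarest subtrees
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tie), (t1, t2)))
    codes = {}
    def walk(tree, prefix=""):
        if isinstance(tree, tuple):     # internal node: 0 to the left, 1 to the right
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:                           # leaf: a character
            codes[tree] = prefix or "0"
    walk(heap[0][2])
    return codes

text = "WENEEDMORESNOWFORBETTERSKIING"
codes = huffman_codes(text)
encoded = "".join(codes[ch] for ch in text)
print(len(encoded), "bits instead of", 8 * len(text), "bits in an 8-bit encoding")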

Compression methods that take into account the number of repetitions of code fragments include the RLE algorithm and the Lempel-Ziv algorithms. In the RLE algorithm, groups of consecutive identical one-byte codes are identified. Each such group is replaced by two bytes: the first indicates the number of repetitions (no more than 127), the second the repeating byte. Because of its simplicity, this algorithm works quite quickly. It is most effective when compressing graphic information containing large areas of uniform shading.
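
A minimal Python sketch of this two-byte scheme (packing only runs, exactly as described, with no escape for non-repeating data) might look like this:

def rle_pack(data: bytes) -> bytes:
    """Replace each run of identical bytes with (count, value), count <= 127."""
    out, i = bytearray(), 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 127:
            run += 1
        out += bytes([run, data[i]])
        i += run
    return bytes(out)

def rle_unpack(packed: bytes) -> bytes:
    out = bytearray()
    for k in range(0, len(packed), 2):
        count, value = packed[k], packed[k + 1]
        out += bytes([value]) * count
    return bytes(out)

row = bytes([255] * 60 + [0] * 5 + [255] * 60)   # a raster line with uniform areas
packed = rle_pack(row)
assert rle_unpack(packed) == row
print(len(row), "bytes ->", len(packed), "bytes")    # 125 -> 6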

The Lempel-Ziv algorithms (LZ77, LZ78) detect repeated sequences of bytes, which can loosely be called words. If, while scanning the data sequentially, a word is found that has already been encountered before, it is replaced by a reference to it in the form of a backward offset from the current position and the length of the word in bytes. The software implementation of such algorithms is more complicated than for the RLE method, but the compression effect is much higher.
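
The idea of back-references can be illustrated with a deliberately naive LZ77-style sketch in Python (no real output format or sliding-window bookkeeping, just the search for an earlier occurrence and the (offset, length) reference):

def lz77_tokens(data: bytes, window: int = 255):
    """Emit literals and (offset, length) references to earlier "words"."""
    i, tokens = 0, []
    while i < len(data):
        best_len, best_off = 0, 0
        for j in range(max(0, i - window), i):   # try every earlier start position
            k = 0
            while i + k < len(data) and data[j + k] == data[i + k]:
                k += 1
            if k > best_len:
                best_len, best_off = k, i - j
        if best_len >= 3:                        # a reference only pays off if long enough
            tokens.append(("ref", best_off, best_len))
            i += best_len
        else:
            tokens.append(("lit", data[i]))
            i += 1
    return tokens

print(lz77_tokens(b"ABCABCABCD"))
# [('lit', 65), ('lit', 66), ('lit', 67), ('ref', 3, 6), ('lit', 68)]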

We will return to the methods of encoding information when we consider the methods of protecting data by encryption.

Questions and tasks

1. In what cases can partial loss of information be allowed when compressing data, and in what cases is it impossible?

2. How do variable length codes allow text to be “compressed”?

3. Encode the following text using Huffman codes:

HAPPYNEWYEAR. Calculate the compression ratio.

4. Decipher the following code using a binary Huffman tree:

11110111 10111100 00011100 00101100 10010011

01110100 11001111 11101101 001100

5. What is the idea of the RLE compression algorithm? What type of information is best compressed by this algorithm?

6. What is the idea behind the Lempel-Ziv compression algorithm?

7. What properties of human vision and hearing are used to compress graphic and sound information?

For more information about compression algorithms, see: Andreeva E.V., Bosova L.L., Falina I.N. Mathematical foundations of informatics. M.: BINOM. Knowledge Lab, 2007.


Digital technologies are changing our habits, the interiors of our apartments, our lifestyle and the language of our communication. They are transforming business and government, entertainment and education, science and medicine. They have significantly changed people themselves, especially in socio-economic and cultural respects. Every third inhabitant of our planet carries a cell phone, and in places where coverage is poor, signal boosters and directional antennas are used. We spend more and more hours in the "digital space" of the Internet and less and less time with media such as television and radio. Paper media are being replaced by electronic ones. A growing number of subway passengers read not traditional books but their electronic versions downloaded from the Web.

Digital technologies as we know them today have radically changed both our business and our private lives. Storage and transmission of data have become more efficient. The Internet, especially after the creation of the WWW, allows humanity to create and share information and knowledge on a global scale.

Digital, invisible and ubiquitous

The next step in the digital revolution will be the omnipresence of digital technologies. Our cameras and MP3 players, electronic notebooks and cell phones increasingly resemble pocket computers, acquiring the capabilities of video shooting, sound recording and high-speed data transmission.

Technical innovations based on the most diverse technologies, including radio-frequency identification and radio sensors, are changing the patterns of human existence in our digital age. Information and communication capabilities are becoming invisible and ubiquitous.

The theory of the future "omnipresence of computers" by Mark Weiser, the former chief scientist of the Xerox research center in Palo Alto, holds that the most powerful, advanced and profound technologies are "those that disappear, weaving themselves into the fabric of everyday life until they dissolve into it." According to this view, all the things familiar to us will soon turn into miniature computers. And this is not fiction: one only has to look at the trends in the changing generations of computers. They do not just get smaller; they become ever more numerous and indispensable. Solving many problems will no longer require human intervention, and technologies that were so noticeable yesterday will disappear from our field of vision tomorrow. At the same time, the most everyday things around us will acquire the ability to process information.

Two and a half decades ago, computers serving dozens of people were commonplace. Then came personal computers: one machine per person. Now our society is in a phase of transition to ubiquitous computing, with several digital devices serving one person. Figure 2, taken from Mark Weiser's article "The Computer for the 21st Century", illustrates the advent of the era of widespread computerization. It shows the stages of growth, saturation and decline of three generations of computers.

New vectors of development of the Network

The long-predicted digital convergence is becoming a reality in many areas of life. Over the past two decades, telephone communication has changed beyond recognition. Wireless telephony has become widespread. At the same time, the telephone is ceasing to be a means of purely verbal communication. Data traffic in communication networks is growing much faster than voice traffic. And while mobile operators strive to squeeze the maximum benefit out of voice communications, operators of other services - Voice over Internet Protocol (VoIP) - are trying to minimize that benefit.

VoIP technology owes its growing popularity to a multitude of advantages which, taken together, make it an extremely attractive method of communication for many categories of users, from housewives to transcontinental corporations. VoIP calls are often free, or at least cheaper than conventional telephony. Users can call from anywhere with an Internet connection and enjoy a variety of additional services such as call forwarding, video calling, conferencing, file sharing, etc.

VoIP services have been around since the 1990s. However, their mass distribution has become noticeable relatively recently. Skype is one of the most well-known services aimed at specific consumers.

Skype is a service that, through a special computer program, allows you to make free calls to other Skype subscribers around the world. If the subscribers have webcams, Skype allows them to hold video conferences. You can also call regular landline and mobile phones at very low rates. Skype includes the functions of instant messaging systems, allowing chats with up to 100 participants at the same time and saving the information received.

Skype started in 2003 and a couple of years later was bought by eBay, the world's largest online auction site. The addition of Skype to eBay prompted several other large companies to start experimenting with Internet telephony. For example, Microsoft recently acquired the VoIP company Teleo, Yahoo! bought DialPad, and Google began providing the Talk service. Telephony service providers are also showing interest in VoIP. British Telecom and Nokia are testing smart subscriber terminals that seamlessly switch between cellular and VoIP networks, saving the subscriber from having to buy two different terminals and pay bills to two operators.

A new kind of infrastructure

Devices that communicate by radio can be connected to a network easily: there is no need to dig trenches, build cable ducts or lay cables. However, the modern world with its multi-gigabyte streams cannot do without a fixed infrastructure, so fixed networks do not stand still either. The main direction of development here is the creation of full-scale optical networks with enormous bandwidth. In developed countries, the backbone networks providing long-distance and international communication are already completely optical. The networks connecting homes and industrial buildings with the backbone, the so-called access networks, still use copper cables and DSL technologies today. But they too, of course, will be replaced by optical lines implementing the FTTH (fibre-to-the-home) concept. And the last step, optical communication lines inside buildings, will not be long in coming either.

The general opinion of experts is that in the developed world optical networks will constitute the ubiquitous fixed infrastructure. These networks will be complemented by radio networks, whose role will be threefold.

First: to provide convenient connection of terminal devices to the infrastructure. By analogy with the term "last mile", widely used in today's telecommunications literature, tomorrow's radio access networks will be networks of the "last meters" - the distance from local transceivers to the optical networks.

Second: communication for moving objects. This role, like the first, is a classic mobile role.

The third role is relatively new: connecting devices without using the infrastructure at all. Does this make sense? Yes, it does - for example, in all places and situations where there is simply no infrastructure (as in developing countries) or where it is inaccessible or damaged (for example, after an accident). Also, if computers become ubiquitous, one day we will need to network a lot of cheap devices that perform some local task in the office or at home. It is likely to be too expensive to equip such devices with UMTS or WLAN interfaces. This is where the ability to connect devices without connecting them to the network infrastructure is needed. It was for such purposes that Bluetooth technology was invented at one time, becoming the first step in this direction.

New lifestyle

Hardly anyone is able to calculate how big the World Wide Web is today. Yahoo! estimates its size at 40 billion pages. The volume of closed data stored by various organizations is hundreds of times larger.

We often use the Internet without even knowing it. When dialing a phone number, we do not think about the fact that part of our call's route will pass over the Internet through a VoIP segment. When we send an email to a colleague in a nearby office, we are not interested in which servers it will travel through. By clicking the "Search" button on Google or Yahoo!, we just want to get information. The Internet, together with the illusion of the "universality" of knowledge, has brought us a new lifestyle. And along with the new lifestyle, a new market of services.

How big is the digital lifestyle market?

At one level, this is a huge segment that combines such digital industries as communications, broadcasting and the computer industry. At another, it is a market for the individual, who values paid and free services equally. It should be remembered here that the key social force in the market for new communication services is society's tendency towards individualization, the client's desire to choose products and services guided only by his own needs. Therefore, suppliers and operators will have to offer the consumer the possibility of direct, personal choice and "customization" of the services received. Multimedia communication, e-commerce, telemedicine, distance learning, the ubiquity of computers in homes, offices and cars, radio networks in cafes and fitness clubs, shops and hotels, airports and universities - all this together will lead to a significant increase in the global traffic transmitted over the Internet.

Thus, it is quite clear that before our eyes the three components of the new services - communications, broadcasting and the computer industry - must unite and create a new market that does not yet have a proper name; but one will appear, and probably very soon.

New opposites

IBM Global Business Services has released a new report, Moving Through the Media Divide: Innovation and Enabling New Business Models, which outlines the conflict facing traditional content owners and content distributors. It is this conflict that the report calls "the media divide", characterized by tension between traditional participants in the media market and "newcomers" from the field of digital technologies. IBM predicts that over the next four years, total revenues from new forms of media distribution will grow at 23% per year - about five times the growth rate of the traditional media and entertainment market. In addition, according to expert estimates, in the transition to digital technologies for creating, storing and distributing content, the music industry will lose approximately 90-160 billion dollars, and the television and film industries will suffer even greater losses if an acceptable way out of the current conflict is not found.

If you look closely, you can easily see a clear divide between the old and new content distribution environments. The traditional environment is still dominated by content that is created by professionals and distributed through branded platforms. It is protected by holograms and stamped "All rights reserved"; its distribution is monitored by highly paid lawyers, and cases of illegal (read: unpaid) use of such content are heard in courts of various instances. In the new environment, content is often created by users and accessed through open resources. These polar trends clearly define the conflict between incumbents and new market entrants.

Another conflict arises between the existing market participants themselves - the traditional owners of resources (film companies, game developers and recording studios) and their distributors (television companies, retail enterprises, film distributors, cable and satellite operators). The existing divide in the media environment pits partners against each other in the fight for revenue growth.

Today's confrontation between traditional and new media resource providers has reached its highest point of tension. The problem, which was originally purely technical and consisted only in replacing analog communications with digital ones, has grown into an economic, legal and even political one. So it's time to change business models, innovate and redefine partnerships.

New firms and new relationships

Traditionally, markets are measured in terms of supply and demand, on the basis of which manufacturers and service providers decide what "values" consumers will pay for and try to create those values. But in the emerging digital world, it seems that consumers create these values themselves. Classic examples of such "self-service" are massive online games and community web sites.

Even traditional firms such as telecom operators are beginning to move in the direction of "personalization". In the 19th century, telegraph messages were printed and decoded by employees of the telegraph companies; by the 20th century, users could send and receive messages themselves, but the network equipment belonged to the telephone company. In the 21st century, equipment owned by the user is increasingly used to transmit messages.

Similar trends can be observed in the field of computers (for example, the use of free and open-source software) and in broadcasting (where ordinary people are increasingly involved in content creation, appearing on reality shows or calling the studio during live TV and radio programs).

The move towards personalization and increasing user-created value is changing the face of the market. The main indicators of this are the following.

What is a service and who is its consumer?

What can be considered a basic information and communication technology service today? Twenty years ago it was defined as "a telephone in every home." Today the basic service is not only the availability of the necessary services or equipment, but also the quality they provide. In the struggle for quality and bandwidth, and ultimately for the customer, battles are fought, companies merge and go bankrupt, regulatory frameworks collapse, concept papers are written, and forecasts fail to come true.

At the end of 2006, the International Telecommunication Union published its annual report (the seventh in a row) by a group of analysts on trends in the development of the Internet. It is entitled "Digital.life" and says that in the coming decades we should expect the dawn of a new era of digitalization, during which today's "Internet of data and people" will give way to tomorrow's "Internet of things".

In their report, ITU analysts remind the reader how, at the very beginning of the Internet era, we were struck by the possibility of contact - without telephone operators and long-distance calls - with people across the oceans, in other time zones and even in other hemispheres. How unusual it was to access information while sitting in front of the screen of a home computer, and not in the Lenin Library!

The next logical step in this technological revolution, according to experts, will be the networking of inanimate objects. They will communicate in real time and, in doing so, will radically transform the Internet. According to the report, the global network currently has about 875 million users. That number might merely double if humans remain its main users in the future. But experts expect that in the coming decades the number of terminals connected to the network will amount to tens of billions. This is the essence of the Internet of Things. "The Internet of Things will enable new ways of using things that we have never imagined until now," the authors of the report predict.

But despite the fact that there are quite a few reasons for concern, one thing is clear: science and technology continue to move forward. The Internet ceases to be something independent, it covers our entire life. Multibillion-dollar investments in data processing and transmission technologies lead to the emergence of more and more new services and opportunities for the consumer, which means more and more new markets and new incomes. This process is unpredictable, just as the inventor's thought process is unpredictable.

It is hardly worth trying to understand all the paths of progress before continuing to move forward. With the dizzying speed at which technologies emerge and change, an artificial stop "to take stock" can prove quite expensive. And here I am ready to argue with the authors of the ITU report already mentioned, who call for reaping the benefits of the global Internet of Things "only after a full understanding of this progress and the benefits and difficulties associated with it".

Our world is gradually becoming digital. We are now at the very epicenter of the digital revolution, which originated in the early 1980s and is gradually pushing analog services and devices out of our everyday life and business, replacing them with digital ones.