Construction of generator and parity-check matrices of cyclic codes. Construction of the parity-check matrix

The generator matrix of a linear (n, k)-code is a k × n matrix whose rows are basis vectors of the code.

For example,

G = (0 1 1)

is the generator matrix of the two-word code (000, 011).

is the generator matrix for code B from Example 6.3.

We know that codewords are linear combinations of the basis vectors, i.e. of the matrix rows. This means that codewords can be obtained by multiplying a vector by the matrix. So the message is written as a vector m of k bits, and the corresponding codeword is calculated by the formula

c = mG. (6.1)

Thus, a vector of k information bits turns into a sequence of n binary symbols transmitted over a channel or written to the memory of a storage device.
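As a concrete illustration of this encoding step, here is a minimal sketch in Python: it multiplies the message row vector by the generator matrix over GF(2). The (5, 2) generator matrix below is a made-up example, not one from the text.

```python
def encode(m, G):
    """Codeword = message row vector times generator matrix, over GF(2)."""
    n = len(G[0])
    return [sum(m[i] * G[i][j] for i in range(len(G))) % 2 for j in range(n)]

# Illustrative generator matrix of a (5, 2) linear code (not from the text).
G = [[1, 0, 1, 1, 0],
     [0, 1, 0, 1, 1]]

# The four codewords are all linear combinations of the two rows.
codewords = [encode([a, b], G) for a in (0, 1) for b in (0, 1)]
```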

Let's turn to the decoding problem.

Suppose that for some binary vector h all the codewords c of the (n, k)-code satisfy the identity

(c, h) = 0, (6.2)

in which (x, y) denotes the scalar product of the vectors x and y.

About such a vector h we will say that it is orthogonal to the code. Having found such a vector, we could use identity (6.2) to check whether a sequence received from the channel is a codeword.

Note that (6.2) is valid for all codewords if it is true for the basis vectors, i.e. if

Gh^T = 0, (6.3)

where the superscript T denotes transposition.

The more such “checks” we find, the more errors, apparently, we will be able to detect and correct.

Exercise 6.4. Prove that the checks form a linear space.

We call this space the space orthogonal to the linear code, or the check space.

Exercise 6.5. Find the dimension of the linear space of checks.

To complete the last exercise, you need to notice that the matrix G has exactly k linearly independent columns, no more (why?) and no fewer (why?). Let us fix a list of the indices of these columns and call this set of indices the information set; a little later the meaning of this name will become clearer. Choose arbitrary values of a vector h at the positions not included in the information set. What must the values at the positions of the information set be in order for (6.3) to be fulfilled? To answer this question, we have to solve a system of k linear equations, and this system has a unique solution.

The corollary of these arguments is the following theorem.

Theorem. The dimension of the check space of a linear (n, k)-code is equal to n − k.

We write a basis of the check space as the rows of a matrix H, called the check (parity-check) matrix of the code.

The check and generator matrices are related by the relation

GH^T = 0. (6.4)

From this relation we see that for any codeword c we have

cH^T = 0. (6.5)

This identity can be used as a criterion for an arbitrary sequence to belong to the code, i.e. to detect errors.
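The membership criterion can be checked mechanically: compute the syndrome vH^T and compare it with zero. A minimal sketch follows; the one-row parity-check matrix of the (3, 2) even-weight code is chosen purely for illustration.

```python
def syndrome(v, H):
    """v * H^T over GF(2); an all-zero result means v passes every check."""
    return [sum(a * b for a, b in zip(v, row)) % 2 for row in H]

# Parity-check matrix of the (3, 2) even-weight code (illustrative choice).
H = [[1, 1, 1]]

assert syndrome([0, 1, 1], H) == [0]   # codeword: passes the check
assert syndrome([0, 1, 0], H) == [1]   # single error: detected
```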

Knowing G, one can find H. To understand how to do this, note that the same code can be specified by different generator matrices, by choosing the basis of the space differently. Moreover, replacing any row with a linear combination of that row and other rows, we obtain a new generator matrix of the same code.

Rearranging the columns of the matrix, generally speaking, leads to a different code, but this different code does not differ in its characteristics from the original one. Codes that differ only in the numbering of positions are called equivalent.

It is clear that for each code there is an equivalent one for which the first k positions form an information set, i.e. the first k columns form a nonsingular matrix of size k × k. By replacing rows with linear combinations of rows (Gaussian elimination), the generator matrix can be reduced to the form

G = [Ik | P], (6.6)

where Ik is the identity matrix of order k, and P is some matrix of size k × (n − k).

A matrix of the form (6.6) is called a generator matrix reduced to systematic form, and the corresponding code is called systematic. Encoding for a systematic code is a bit easier than for a code of general form:

c = (m, mP), (6.7)

i.e. in a codeword the first k positions are simply a copy of the information sequence, and the remaining (check) positions are obtained by multiplying the information vector by a matrix of size k × (n − k), which is sometimes significantly smaller than k × n. Accordingly, information about a systematic code takes up significantly less memory than information about a general linear code.
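Systematic encoding per (6.7) copies the k information bits and appends the check bits mP. A sketch, assuming a hypothetical 4 × 3 matrix P (it happens to be one common choice for a (7, 4) Hamming-type code, but any binary k × (n − k) matrix would do):

```python
def encode_systematic(m, P):
    """Systematic encoding (6.7): copy the k information bits, append m*P (mod 2)."""
    r = len(P[0])
    checks = [sum(m[i] * P[i][j] for i in range(len(m))) % 2 for j in range(r)]
    return list(m) + checks

# Hypothetical 4 x 3 matrix P (one common choice for a (7,4) Hamming-type code).
P = [[1, 1, 0],
     [0, 1, 1],
     [1, 1, 1],
     [1, 0, 1]]

c = encode_systematic([1, 0, 1, 1], P)   # first 4 bits copy the message
```

Note that only the k × (n − k) matrix P needs to be stored, which is the memory saving mentioned above.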

For a systematic code with a generator matrix in the form (6.6), the parity-check matrix can be calculated by the formula

H = [P^T | I(n−k)]. (6.8)

Exercise 6.6. Verify (6.8). Hint: to do this you need to substitute (6.8) and (6.6) into (6.4).

How does one find the parity-check matrix of a non-systematic code?

Very simply: bring the generator matrix to systematic form and use (6.8). If the first k columns of the generator matrix form a nonsingular submatrix (i.e. the first k positions form an information set), then operations such as rearranging rows and replacing rows with linear combinations of rows are sufficient to bring it to systematic form. If not, you will first need to find an information set and renumber the positions so that the first k positions become information positions.
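The recipe H = [P^T | I(n−k)] and the identity GH^T = 0 can be verified directly. The 3 × 2 matrix P below is an arbitrary illustration for a (5, 3) code, not one from the text:

```python
def systematic_H(P):
    """Build H = [P^T | I_r] from the P part of G = [I_k | P]."""
    k, r = len(P), len(P[0])
    return [[P[i][j] for i in range(k)] + [1 if t == j else 0 for t in range(r)]
            for j in range(r)]

def mat_mul_T(G, H):
    """Compute G * H^T over GF(2)."""
    return [[sum(g[t] * h[t] for t in range(len(g))) % 2 for h in H] for g in G]

# Hypothetical P for a (5, 3) code (any binary k x (n-k) matrix would do).
P = [[1, 1], [0, 1], [1, 0]]
G = [[1, 0, 0] + P[0],
     [0, 1, 0] + P[1],
     [0, 0, 1] + P[2]]          # G = [I_3 | P]
H = systematic_H(P)             # H = [P^T | I_2]

assert all(s == 0 for row in mat_mul_T(G, H) for s in row)   # G H^T = 0
```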

In communication systems, several strategies for dealing with errors are possible:

  • error detection in data blocks with an automatic repeat request (ARQ) for damaged blocks — this approach is used mainly at the data-link and transport layers;
  • error detection in data blocks with damaged blocks discarded — this approach is sometimes used in streaming-media systems, where transmission latency matters and there is no time for retransmission;
  • error correction (forward error correction, FEC) — this approach is applied at the physical layer.

Error detection and correction codes

Correction codes are codes used to detect or correct errors that occur during the transmission of information under the influence of interference, as well as during its storage.

To do this, specially structured redundant information is added to the useful data when recording (transmitting); when reading (receiving), it is used to detect or correct errors. Naturally, the number of errors that can be corrected is limited and depends on the specific code used.

Closely related to error-correcting codes are error-detecting codes, which, unlike the former, can only establish the presence of an error in the transmitted data but cannot correct it.

In fact, the error-detecting codes in use belong to the same classes of codes as error-correcting codes. Indeed, any error-correcting code can also be used to detect errors (and in that role it can detect more errors than it could correct).

According to the way they handle data, error-correcting codes are divided into block codes, which split the information into fragments of constant length and process each of them separately, and convolutional codes, which work with the data as a continuous stream.

Block codes

Let the encoded information be divided into fragments of length k bits, which are converted into codewords of length n bits. The corresponding block code is then usually denoted (n, k). The number R = k/n is called the code rate.

If the code leaves the original k bits unchanged and adds n − k check bits, it is called systematic; otherwise it is called non-systematic.

A block code can be specified in different ways, including by a table in which each set of k information bits is matched with an n-bit codeword. However, a good code must meet at least the following criteria:

  • ability to correct as many errors as possible,
  • as little redundancy as possible,
  • ease of encoding and decoding.

It is easy to see that the above requirements contradict each other. That is why there is a large number of codes, each of which is suitable for its own range of tasks.

Almost all codes used in practice are linear. This is because nonlinear codes are much more difficult to study, and it is difficult to achieve acceptable ease of encoding and decoding with them.

Linear spaces

Generator matrix

This relation establishes a connection between the coefficient vectors and the codeword vectors: by enumerating all coefficient vectors, one can obtain all codewords. In other words, the matrix G generates the linear space.

Check matrix

This means that the encoding operation corresponds to multiplying the original k-bit vector by a nonsingular matrix G, called the generator matrix.

Let C⊥ be the subspace orthogonal to C, and let H be a matrix whose rows define a basis of this subspace. Then for any vector c ∈ C the following holds:

cH^T = 0.

Properties and important theorems

Minimum distance and correction ability

Hamming bound and perfect codes

Let there be a binary block (n, k) code with correcting capability t. Then the following inequality (called the Hamming bound) holds:

2^(n−k) ≥ Σ (i = 0 … t) C(n, i).

Codes that satisfy this bound with equality are called perfect. Perfect codes include, for example, Hamming codes. Codes with high correcting capability that are often used in practice (such as Reed–Solomon codes) are not perfect.
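The Hamming bound is easy to evaluate numerically. The sketch below checks equality for the (7, 4) Hamming code and the (23, 12) binary Golay code, both known to be perfect; the Reed–Solomon parameters are treated as if the code were binary, purely to illustrate that the bound is then strict.

```python
from math import comb

def hamming_sum(n, t):
    """Number of binary words of length n within Hamming distance t of a given word."""
    return sum(comb(n, i) for i in range(t + 1))

def is_perfect(n, k, t):
    """Perfect <=> the Hamming bound 2^(n-k) >= hamming_sum(n, t) holds with equality."""
    return 2 ** (n - k) == hamming_sum(n, t)

assert is_perfect(7, 4, 1)        # the (7,4) Hamming code is perfect
assert is_perfect(23, 12, 3)      # the binary Golay code is perfect
# Reed-Solomon parameters treated as binary lengths, purely for illustration:
assert not is_perfect(255, 223, 16)
```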

Energy gain

When transmitting information over a communication channel, the probability of error depends on the signal-to-noise ratio at the demodulator input; thus, at a constant noise level, the transmitter power is decisive. In satellite and mobile systems, as well as in other types of communications, the issue of energy saving is acute. In addition, in certain communication systems (for example, telephone), technical limitations do not allow the signal power to be increased indefinitely.

Since noise-resistant coding allows errors to be corrected, its use lets the transmitter power be reduced while leaving the information transmission rate unchanged. The energy gain is defined as the difference between the signal-to-noise ratios in the presence and absence of coding.


Let us reduce the matrix H to triangular form:

System of equations:

Information symbols:

Check symbols:

Generator matrix

Information symbols and the corresponding check symbols:

N | x1 x2 x3 x4 x5 x6 x7 | Weight

Consider a black box containing a set of transformations L acting on an input X.

If the responses to x1 and x2 do not interact with each other, the system is linear.

Linearity of a code means that the sum of two allowed codewords is again an allowed codeword.

Ticket 5.
a) The relationship between the value of information and entropy
b) Principles of constructing linear codes. Decoding by syndrome

The assumption that the set of channel output sequences is an n-dimensional vector space over the field GF(q), and that the set Y(n, R) of input sequences (the code) is a subspace of dimension nR, makes decoding much easier. In this case Y(n, R) is a subgroup of the additive group of the space, which can therefore be decomposed into cosets of this subgroup. Let y1, y2, ..., yM be all the codewords; then all elements of the space can be represented using a standard array

(1)

(here 0 denotes the identity element of the group) .

Each row in (1) is formed by the corresponding coset. If the elements of minimum weight in their cosets are taken as the coset leaders 0, e1, e2, ..., es, then any sequence x in the i-th column differs from yi in fewer positions than from any other codeword yk, k ≠ i. If the coset containing x has several elements of minimum weight, then there is the same number of codewords that differ from x in the same smallest number of positions.

For the proof, assume that x = yi + e, where e is the element of minimum weight in its coset. Obviously, d(yi, x) = w(e) and d(yk, x) = w(yi + e − yk). If e is the only element of minimum weight, then d(yi, x) < d(yk, x) for all k ≠ i. If there are several such elements (for example, w(yj + e) = w(e)), then d(yi, x) = d(yk, x) provided that yk = yi − yj. Therefore, for each element yj + e of minimum weight in the coset containing e, there is a codeword yk = yi − yj which is at distance d(yk, x) = w(e) from x.

Thus, for all sequences x in the i-th column of the standard array, the conditional probability P(x | yi) is maximal. If x is in a coset with several elements of minimum weight, then P(x | yi) = P(x | yk), and the probability remains maximal for every yk located at the same distance from x.

The decoding rule can be formulated as follows: find the channel output sequence x in (1) and assume that the transmitted sequence was the codeword yi ∈ Y(n, R) that is in the same column as x.

Obviously, this rule coincides with maximum likelihood decoding and, therefore, is optimal.

The linear-code decoding rule can be formulated this way: after the output sequence x is found in (1), determine the most probable error vector e by taking the leader of the coset that contains x; then find the transmitted sequence from the relation y = x − e.

A similar decoding procedure can be constructed using the one-to-one correspondence between cosets and the syndromes of their leaders. The decoding rule is as follows: calculate the syndrome of the received sequence

S = xH^T = eH^T,

where e is the leader of the coset containing x. Using the found syndrome S, find e; then determine y from the relation y = x − e.

This decoding is also the same as maximum likelihood decoding and therefore remains optimal. The first codes with a similar decoding procedure were Hamming codes that corrected single errors.

However, finding the error sequence when the number of admissible errors is more than one quickly becomes complicated and, for sufficiently long codes with a large number of corrected errors, becomes practically impossible.
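Syndrome decoding as described above — one coset leader per syndrome — can be tabulated for a toy code. The (5, 2) parity-check matrix below is a hypothetical example with minimum distance 3:

```python
from itertools import product

def syndrome(v, H):
    """v * H^T over GF(2)."""
    return tuple(sum(a * b for a, b in zip(v, row)) % 2 for row in H)

# Hypothetical parity-check matrix of a (5, 2) code with minimum distance 3.
H = [[1, 0, 1, 0, 0],
     [1, 1, 0, 1, 0],
     [0, 1, 0, 0, 1]]
n = 5

# One coset leader (a minimum-weight error pattern) per syndrome: iterating
# error patterns in order of increasing weight, keep the first one seen.
leaders = {}
for e in sorted(product([0, 1], repeat=n), key=sum):
    leaders.setdefault(syndrome(e, H), list(e))

def decode(x):
    """Maximum-likelihood decoding: add (= subtract, over GF(2)) the coset leader."""
    e = leaders[syndrome(x, H)]
    return [(a + b) % 2 for a, b in zip(x, e)]

assert decode([1, 1, 0, 0, 1]) == [1, 1, 1, 0, 1]   # single error corrected
```

The table has 2^(n−k) entries, and this exponential growth is exactly the practical difficulty noted above for long codes.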

Ticket 6.
a) Entropy and its properties

The problem of transmitting a continuous message comes down to obtaining a copy of it at the receiving point. An exact copy of the transmitted message cannot be obtained, since that would require infinite precision of reproduction. Therefore, the accuracy with which the transmitted message is reproduced is specified in advance.

ε-entropy is the minimum amount of information that must be transmitted over the channel in order to restore the message with a given accuracy for a given source distribution p(x).

Transmission model of a continuous message:

d² = ε² is the error in restoring the message y(t). It is necessary to establish a connection between x(t) and x′(t) such that x′ carries as little information as possible about x while still ensuring the specified accuracy:

Hε = min I(x, x′).

Each interval is assigned a corresponding number; Δx = (b − a)/2^n.

The larger n, the more intervals and the higher the accuracy.

I(x, x′) = H(x′) − H(x′|x) — mutual information by definition.

H(x′|x) = 0, because the value of the random variable x determines the value of the random variable x′

(“at a fixed x”).

I(x, x′) = H(x′) — the amount of mutual information between the ensembles x and x′ equals the entropy of x′.

(based on the information capacity of the register).

Let x be uniformly distributed on the interval; then all values of x′ are equally probable:

log[(b − a)/Δx] = n — the entropy value is proportional to the register length.

It is necessary to ensure the accuracy d² (mean-square error)

with the minimum amount of mutual information. The connection between x and x′ is defined by the number and length of the intervals Δx; they must be chosen so that x′ is uniformly distributed.

Let x and x' be continuous.

(1) x = x′ − n, where n is the error obtained as a result of approximating x by x′.

n̄ = 0, n̄² = d²ε = ε² — the given error.

Hε = H(x) − max H(n); H(n) is maximal for a Gaussian distribution of n:

H(n) = log(2πe·ε²)/2.

The source itself is most often also assumed to be Gaussian, with variance d²x; then Hε = log(d²x/d²n)/2 = log(d²x/ε²)/2 [bits/sample].

If 2F samples are transmitted per second, then Hε,t = 2F·Hε = F·log(d²x/ε²) [bits/s].

By Shannon's theorem for a Gaussian channel: Hε,t < F·log(1 + q²) [bits/s].
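The ε-entropy formulas above for a Gaussian source reduce to simple arithmetic; a minimal sketch (the function and variable names are mine):

```python
from math import log2

def eps_entropy_gauss(var_x, eps2):
    """Per-sample epsilon-entropy of a Gaussian source: (1/2) * log2(var_x / eps^2)."""
    return 0.5 * log2(var_x / eps2)

def rate_bits_per_second(var_x, eps2, F):
    """With 2F samples per second: 2F * H_eps = F * log2(var_x / eps^2) bits/s."""
    return 2 * F * eps_entropy_gauss(var_x, eps2)

assert eps_entropy_gauss(4.0, 1.0) == 1.0           # halving the std dev twice: 1 bit
assert rate_bits_per_second(4.0, 1.0, 1000.0) == 2000.0
```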


b) Discretization of continuous messages. Kotelnikov's theorem. Signal space.

Ticket 7.
a) Mutual information and its properties

The amount of information that Yj carries about Xi equals the amount of information that Xi carries about Yj. This quantity is called the mutual information of Yj and Xi; it can be > 0, < 0 or = 0, depending on the average information.

To each pair (Xi, Yj) there corresponds its own quantity of information, and since Xi and Yj are random variables, this amount of information is random as well. Therefore, we can introduce the concept of the average information of the ensembles:

A separate state is a pair of numbers.

I(X, Y) — the total mutual information (it is always ≥ 0, and equals 0 when the systems are independent).

Let's make identical transformations:

Then the mutual information can be written as:

, (*)

From the point of view of the information description of a communication system, it makes no difference which of the subsystems is considered the transmitter and which the receiver.

Therefore, the entropies H(X) and H(Y) can be interpreted as the information that enters the communication channel, and the conditional entropies H(X/Y), H(Y/X) as the information that is scattered (lost) in the channel.

According to the theorem, I(X, Y) ≥ 0; from (*) we get:

I(X, Y) = 0 when X and Y are independent.
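The definition of I(X, Y) can be computed directly from a joint distribution table. The two 2 × 2 distributions below are standard illustrative cases, not taken from the text:

```python
from math import log2

def mutual_information(p_xy):
    """I(X,Y) = sum over x,y of p(x,y) * log2( p(x,y) / (p(x) * p(y)) )."""
    px = [sum(row) for row in p_xy]            # marginal of X (rows)
    py = [sum(col) for col in zip(*p_xy)]      # marginal of Y (columns)
    return sum(p * log2(p / (px[i] * py[j]))
               for i, row in enumerate(p_xy)
               for j, p in enumerate(row) if p > 0)

# Independent X and Y: I(X,Y) = 0.
assert abs(mutual_information([[0.25, 0.25], [0.25, 0.25]])) < 1e-12
# Y fully determined by X: I(X,Y) = H(X) = 1 bit.
assert abs(mutual_information([[0.5, 0.0], [0.0, 0.5]]) - 1.0) < 1e-12
```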


b) Adaptive sampling of continuous messages

An arbitrary piecewise continuous function representing a message or signal can be expanded into a generalized Fourier series in a complete system of orthonormal functions, provided the energy of the function is finite.

An infinite system of real functions is called orthogonal on the interval [a, b] if the scalar product of any two different functions of the system is zero, and a separate function is called normalized if its norm equals one.

A system of normalized functions in which every two different functions are mutually orthogonal is called an orthonormal system. When approximating functions, one is usually limited to a finite number of terms of the series. For a given system of functions and a fixed number of terms n, the values of the coefficients can be chosen so that the root-mean-square error of approximation

reaches a minimum. The minimum mean-square error is achieved when the coefficients of the series are the scalar products of the function with the corresponding basis functions. A series with coefficients determined in this way is called a generalized Fourier series.

An orthogonal system is called complete if, by increasing the number of terms in the series, the mean square error can be made as small as desired.

Thus, given a countable set of coefficients, the corresponding function can be restored with a certain accuracy by transmitting the sequence of coefficients. This sequence can be interpreted as a vector in n-dimensional Euclidean space whose coordinates are the coefficients and whose squared length is the sum of their squares.

The last equality is a generalization of the Pythagorean theorem to the case of n-dimensional space. By direct calculation it is easy to establish that the signal energy equals the squared length of this vector.

Thus, discretization is the replacement of a continuous function with a sequence of coefficients … (a vector).

The choice of a system of orthogonal functions is determined by the purpose and physical essence of the problem being solved, and not by purely mathematical conclusions.

For the purpose of transmitting a signal over a communication channel, the expansion of a function into a Kotelnikov series is widely used, which makes it possible to significantly simplify the determination of the coefficients. According to Kotelnikov's theorem, an arbitrary function with a limited spectrum can be identically represented by a countable number of its values, taken at time intervals Δt = 1/(2F), where F is the upper limiting frequency of the signal spectrum. In this case the functions,

forming a system of orthogonal functions, differ from each other only by a shift along the time axis t by a multiple of Δt, and each of them reaches its maximum value at those moments of time when the values of all the other functions are zero. The expansion coefficients are determined by the formula

ck = ⟨x, φk⟩,

which, after identical transformations, can be brought to the form ck = x(kΔt); that is, the coefficient is equal to the value of the function at the moment when the corresponding basis function reaches its maximum.

If the sampled process is a normal (Gaussian) random process whose energy spectrum has a rectangular shape, then the coefficients will be statistically independent random variables coinciding with the values of the random function taken with step Δt [9].

Thus, continuous messages can be transmitted digitally, that is, in the form of a sequence of numbers, with each number approximately expressing the value of the corresponding coefficient.
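The key property of the Kotelnikov basis functions — each equals 1 at its own sample instant and 0 at every other sample instant — can be checked numerically. F and the indices below are arbitrary choices for the illustration:

```python
from math import sin, pi

def phi(k, t, F):
    """Kotelnikov basis function: sinc(2F*t - k)."""
    u = 2 * F * t - k
    return 1.0 if abs(u) < 1e-12 else sin(pi * u) / (pi * u)

F = 4.0            # upper limiting frequency (arbitrary for the illustration)
dt = 1 / (2 * F)   # sampling step from the theorem

# Each basis function is 1 at its own sample instant and 0 at all the others.
assert phi(3, 3 * dt, F) == 1.0
assert abs(phi(3, 5 * dt, F)) < 1e-12
```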

Ticket 8.
a) Ergodic sources. Source performance with independent symbols


b) Relative entropy of continuous random variables

The relative (differential) entropy of a random variable X is the quantity

In particular, if the interval d = 1, then

Let us clarify the physical meaning of the relative entropy H(X).

Let the message source generate a sequence of values of a random variable X. After quantization we obtain a sequence of values of the random variable X′:

X_i1, X_i2, …, X_ik, …, X_in.

As the sequence length increases without bound, with probability equal to one only typical sequences occur, the number of which

equals the number of elementary n-dimensional cubes. The end of the vector depicting a typical sequence is an interior point of such a cube. The product of this number and the cube volume is equal to the volume of some region in n-dimensional space whose interior points represent the ends of the typical vectors (sequences). As d tends to zero, the number of typical sequences tends to infinity and the volume of each elementary cube tends to zero. At the same time, the volume V_T occupied by the typical sequences remains constant.

In the discrete case, entropy could be determined through the number of typical sequences:

Similarly, the relative entropy can be defined in terms of the volume V_T occupied by the typical sequences:

Unlike the discrete case, the relative entropy can be not only positive, but also negative or zero. The larger the volume V_T occupied by the typical sequences, the greater the uncertainty as to which of them will appear. Unit volume (V_T = 1) corresponds to entropy (uncertainty) equal to zero (H(X) = 0); this value is taken as the origin for relative entropy.

In particular, the relative entropy of a random variable uniformly distributed on the unit interval (d = 1) is equal to zero:

In this case the region of n-dimensional space occupied by typical sequences approximately coincides with the domain of definition of all sequences and has the shape of a cube of unit volume (V_T = d^n = 1).
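The sign behaviour of relative (differential) entropy described above is easy to see for the uniform distribution, whose differential entropy is log2 of the interval length (a sketch, with my own function name):

```python
from math import log2

def h_uniform(d):
    """Differential entropy (in bits) of a uniform distribution on an interval of length d."""
    return log2(d)

assert h_uniform(1.0) == 0.0    # unit interval: the origin of relative entropy
assert h_uniform(4.0) == 2.0    # wider interval: positive entropy
assert h_uniform(0.25) == -2.0  # narrower interval: negative entropy
```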

Ticket 9.
a) Performance of a Markov source. Redundancy


Reed-Solomon codes

Advantages and disadvantages of linear codes

Although linear codes, as a rule, cope well with rare but long bursts of errors, their efficiency against frequent but small errors (for example, in a channel with AWGN) is lower. Thanks to linearity, in order to remember or enumerate all the codewords it suffices to store in the memory of the encoder or decoder a much smaller subset of them, namely only the words that form a basis of the corresponding linear space. This greatly simplifies the implementation of encoding and decoding devices and makes linear codes very attractive for practical applications.

Performance assessment

The effectiveness of a code is determined by the number of errors it can correct, the amount of redundant information that must be added, and the complexity of implementing encoding and decoding (both in hardware and in software).



Let x1, x2, ..., xk denote a word of k information bits at the encoder input, encoded into a codeword C of n bits:

encoder input: X = [x1, x2, ..., xk]

encoder output: C = [c1, c2, ..., cn]

Let a generator matrix G(n,k) be given,

specifying the block (n, k) code.

The rows g1, g2, ..., gk of G(n,k) must be linearly independent.

Then the allowed code combination C corresponding to the encoded word X is

C = x1·g1 + x2·g2 + ... + xk·gk.

Systematic (canonical) form of the generator matrix G of size k × n:

The generator matrix of a systematic code creates a linear block code in which the first k bits of any codeword are identical to the information bits, and the remaining r = n − k bits of any codeword are linear combinations of the k information bits.

The check matrix H(n,k) has r × n elements, and the following holds:

C × H^T = 0.

This expression is used to check a received code combination. If the equality to zero does not hold, we obtain a row matrix ||s1, s2, ..., sr||, called the error syndrome.

Hamming code. Corrective and detecting abilities. Rules for choosing the relationship between the length of the code word and the number of information bits. Formation of the generator and parity check matrices of the Hamming code. Interpretation of error syndrome

Consider the Hamming code with code distance d = 3, which allows single errors to be corrected (d = 2q_max + 1).

The number of allowed code combinations for a code with d = 3 is, for the Hamming code, exactly 2^n/(n + 1). The first k bits of the code combinations are used as information bits, and their number is

k = log2[2^n/(n + 1)] = n − log2(n + 1).

This equation has integer solutions k = 0, 1, 4, 11, 26, which determine the corresponding Hamming codes: the (3,1) code, the (7,4) code, the (15,11) code, etc. (always n = 2^w − 1).
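The admissible Hamming-code parameters n = 2^w − 1, k = n − w can be enumerated directly (a small sketch):

```python
def hamming_params(w_max):
    """(n, k) pairs of binary Hamming codes: n = 2^w - 1, k = n - w."""
    return [(2 ** w - 1, 2 ** w - 1 - w) for w in range(2, w_max + 1)]

assert hamming_params(4) == [(3, 1), (7, 4), (15, 11)]
```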

The check matrix H of the Hamming code has r = n − k rows and n columns: for a binary (n, k) code, the n = 2^w − 1 columns consist of all possible binary vectors with r = n − k elements, excluding the all-zero vector.

It is easy to check that G × H^T = 0 (the zero matrix of size k × r elements).

Example. Let us check how the code works when sending the message X = 1011. The transmitted code combination is formed as a linear combination (modulo-2 addition) of rows No. 1, 3, 4 of the matrix G(7,4):

Let us assume that the transmitted codeword C was affected by the error 0000100, as a result of which the word C′ = 1011110 was received on the receiving side.



Then, multiplying C′ by the check matrix H^T, we obtain the error-syndrome row matrix, which corresponds to the column of the check matrix H whose number is the number of the bit containing the error.

Comparing the resulting syndrome with the rows of H^T, we find that bit No. 5 from the left is erroneous.

The Hamming decoder can operate in two mutually exclusive modes:

error correction mode (since d_min = 3, it allows single errors to be corrected);

error detection mode (since d_min = 3, it detects errors of multiplicity q ≤ 2); if the syndrome is not equal to 0, the decoder produces an error signal.

If there are 2 errors in the received codeword and the decoder is operating in correction mode, it will not be able to determine from the syndrome whether one error or two occurred, and will perform an incorrect correction of the codeword.
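The whole cycle — encode, inject a single error, locate it by matching the syndrome against the columns of H — can be sketched for a systematic (7, 4) Hamming code. The particular G = [I4 | P] below is one common choice; the matrices used in the text itself are not shown, so the intermediate bit patterns may differ from the example above:

```python
# Systematic (7,4) Hamming code: G = [I_4 | P], H = [P^T | I_3] (one common choice).
P = [[1, 1, 0], [0, 1, 1], [1, 1, 1], [1, 0, 1]]
G = [[1 if i == j else 0 for j in range(4)] + P[i] for i in range(4)]
H = [[P[i][j] for i in range(4)] + [1 if t == j else 0 for t in range(3)]
     for j in range(3)]

def encode(m):
    """Codeword = m * G over GF(2)."""
    return [sum(m[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]

def syndrome(x):
    return [sum(a * b for a, b in zip(x, row)) % 2 for row in H]

def correct_single(x):
    """A non-zero syndrome equals the H column of the flipped bit; invert that bit."""
    s = syndrome(x)
    if any(s):
        cols = [tuple(row[j] for row in H) for j in range(7)]
        j = cols.index(tuple(s))
        x = x[:j] + [1 - x[j]] + x[j + 1:]
    return x

c = encode([1, 0, 1, 1])
x = c[:]
x[4] ^= 1                         # single error in bit No. 5
assert correct_single(x) == c     # decoder restores the transmitted word
```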

Extended Hamming code. Decoder operating modes, correction and detection abilities. Formation of a code word. Formation of the parity check matrix of the extended Hamming code. Interpretation of error syndrome

The extended Hamming code is obtained by padding the code vectors with an additional binary bit so that the number of ones in each codeword is even. Thanks to this, Hamming codes with the parity check have the following advantages:

the code length increases from 2^w − 1 to 2^w, which is convenient from the point of view of transmitting and storing information;

the minimum distance d_min of extended Hamming codes is 4, which makes it possible to detect (!) triple errors.

An additional parity check bit allows the decoder to be used in a new hybrid mode - error detection and correction.

Consider the extension of the (7,4,3) Hamming code.

Each code vector C_a is obtained from the code vector C by adding an extra parity bit: C_a = (c1, ..., c7, c8), where c8 = c1 ⊕ c2 ⊕ ... ⊕ c7.

The check matrix H of the (8,4) code is obtained from the check matrix of the (7,4) code in two steps:

a zero column is appended to the (7,4)-code matrix;

the resulting matrix is supplemented with a row consisting entirely of ones.

We get:

In syndrome decoding,

s′ = C·H^T,

and all components of s′ must be equal to 0.

In the case of a single error, s′(4) = 1. From the value of the syndrome (the lower 3 bits) we find and correct (invert) the erroneous bit.

In the case of a double error, the component s′(4) = 0 while the syndrome is non-zero. In contrast to the standard Hamming code, this situation is now detected, but not corrected (a request is sent to retransmit the word, etc.).

Thus, the extended Hamming decoder can be used in one of two mutually exclusive modes:

To correct single and detect double errors;

To detect triple errors.
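The hybrid decision logic of the extended-Hamming decoder (correct singles, flag doubles) reduces to inspecting the overall-parity component of the syndrome. A sketch, with the parity component assumed to be the last of the four syndrome bits, matching the check-matrix construction above:

```python
def classify(s):
    """Decision logic for the extended Hamming decoder.

    s is the 4-component syndrome; s[3] is the overall parity check
    (the all-ones row added to H), assumed here to be the last component.
    """
    if not any(s):
        return "no error"
    return "single error" if s[3] == 1 else "double error"

assert classify((0, 0, 0, 0)) == "no error"
assert classify((1, 0, 1, 1)) == "single error"   # correctable: lower 3 bits locate it
assert classify((0, 1, 1, 0)) == "double error"   # detected only: request retransmission
```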