Pauli spin matrices

We continue our discussion of the properties of two-level systems. At the end of the previous chapter we talked about a particle with spin 1/2 in a magnetic field. We described the spin state by specifying the amplitude C_1 that the z-component of the spin angular momentum is +ħ/2, and the amplitude C_2 that it is equal to −ħ/2. In previous chapters we denoted these basic states by |+⟩ and |−⟩. Let us again resort to these notations, although when it is more convenient we will change them to |1⟩ and |2⟩. We saw in the last chapter that when a particle with spin 1/2 and magnetic moment μ is in a magnetic field B = (B_x, B_y, B_z), the amplitudes C_+ (= C_1) and C_− (= C_2) are connected by the following differential equations:

$$i\hbar\,\frac{dC_+}{dt} = -\mu\bigl[B_z C_+ + (B_x - iB_y)C_-\bigr],\qquad i\hbar\,\frac{dC_-}{dt} = -\mu\bigl[(B_x + iB_y)C_+ - B_z C_-\bigr]. \tag{9.1}$$
In other words, the Hamiltonian matrix H_ij has the form

$$H_{11} = -\mu B_z,\quad H_{12} = -\mu(B_x - iB_y),\quad H_{21} = -\mu(B_x + iB_y),\quad H_{22} = +\mu B_z, \tag{9.2}$$

and, of course, equations (9.1) coincide with

$$i\hbar\,\frac{dC_i}{dt} = \sum_j H_{ij}\,C_j, \tag{9.3}$$
where i and j take the values + and − (or 1 and 2).
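Equations (9.1)-(9.3) can be solved numerically. Below is a minimal sketch, assuming the numpy library and working in arbitrary illustrative units with ħ = μ = 1; for a static field along z the populations |C_+|² and |C_−|² stay constant while the phases precess.

```python
import numpy as np

hbar = 1.0   # illustrative units: hbar = mu = 1 (an arbitrary choice)
mu = 1.0
B = np.array([0.0, 0.0, 1.0])   # static field along z

# Hamiltonian matrix of (9.2)
H = -mu * np.array([[B[2], B[0] - 1j * B[1]],
                    [B[0] + 1j * B[1], -B[2]]])

# Solve i*hbar*dC/dt = H C by diagonalizing H: C(t) = V exp(-i E t/hbar) V^dagger C(0)
E, V = np.linalg.eigh(H)

def evolve(C0, t):
    return V @ (np.exp(-1j * E * t / hbar) * (V.conj().T @ C0))

C0 = np.array([1.0, 0.0], dtype=complex)   # start in the "up" state
Ct = evolve(C0, t=0.7)
# For B along z only the phases change; the populations are constant
print(abs(Ct[0])**2, abs(Ct[1])**2)
```

For a field with an x or y component the same routine gives the familiar spin flipping between |+⟩ and |−⟩.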

This two-state system, the spin of the electron, is so important that it would be very useful to find a neater and more elegant way to describe it. We will now take a short mathematical digression to show you how the equations of a two-state system are usually written. It is done like this: first, notice that each term of the Hamiltonian is proportional to μ and to some component of B; therefore (purely formally) we can write

$$H_{ij} = -\mu\bigl(\sigma^x_{ij} B_x + \sigma^y_{ij} B_y + \sigma^z_{ij} B_z\bigr). \tag{9.4}$$
There is no new physics here; these equations simply mean that the coefficients σ^x_ij, σ^y_ij and σ^z_ij (there are 4 × 3 = 12 of them in all) can be chosen so that (9.4) coincides with (9.2).

Let's see why this is so. Let's start with B_z. Since B_z occurs only in H_11 and H_22, everything will be fine if we take

$$\sigma^z_{11} = 1,\qquad \sigma^z_{12} = 0,\qquad \sigma^z_{21} = 0,\qquad \sigma^z_{22} = -1.$$
We often write the matrix H_ij in the form of a little table like this:

$$H_{ij} = \begin{pmatrix} H_{11} & H_{12} \\ H_{21} & H_{22} \end{pmatrix}.$$

For the Hamiltonian of a particle with spin 1/2 in the magnetic field B this is the same as

$$H_{ij} = \begin{pmatrix} -\mu B_z & -\mu(B_x - iB_y) \\ -\mu(B_x + iB_y) & +\mu B_z \end{pmatrix}.$$
In the same way, the coefficients σ^z_ij can be written in the form of the matrix

$$\sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}. \tag{9.5}$$
Working out the coefficients for B_x, we find that the elements of the matrix σ_x must be

$$\sigma^x_{11} = 0,\qquad \sigma^x_{12} = 1,\qquad \sigma^x_{21} = 1,\qquad \sigma^x_{22} = 0,$$

or, in short,

$$\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}. \tag{9.6}$$

And finally, looking at B_y, we get

$$\sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}. \tag{9.7}$$
If we define the three sigma matrices in this way, then equations (9.1) and (9.4) coincide. To leave room for the indices i and j, we marked which σ goes with which component of B by putting the indices x, y, z on top. Usually, however, the i and j are dropped (it is easy to imagine them anyway), and the x, y, z are written at the bottom. Then (9.4) is written as follows:

$$H = -\mu(\sigma_x B_x + \sigma_y B_y + \sigma_z B_z). \tag{9.8}$$
The sigma matrices are so important (they are used constantly) that we have collected them in Table 9.1. (Anyone who is going to work in quantum physics must memorize them.) They are also called the Pauli spin matrices, after the physicist who invented them.

Table 9.1 THE PAULI SPIN MATRICES

$$1 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},\qquad \sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},\qquad \sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix},\qquad \sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$$
In the table we have included one more 2×2 matrix, which is needed when we want to consider a system whose two spin states have the same energy, or when we want to shift to a different zero of energy. In such cases we have to add E_0 C_+ to the first equation in (9.1) and E_0 C_− to the second. This can be taken into account by introducing a new notation, the identity matrix "1", or δ_ij:

$$1 = \delta_{ij} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}, \tag{9.9}$$
and rewriting (9.8) in the form

$$H_{ij} = E_0\,\delta_{ij} - \mu\bigl(\sigma^x_{ij} B_x + \sigma^y_{ij} B_y + \sigma^z_{ij} B_z\bigr). \tag{9.10}$$

Usually it is simply understood, without further reservations, that any constant like E_0 is automatically multiplied by the identity matrix, and then one simply writes

$$H = E_0 - \mu(\sigma_x B_x + \sigma_y B_y + \sigma_z B_z). \tag{9.11}$$
One of the reasons the spin matrices are so useful is that any 2×2 matrix whatever can be expressed in terms of them. Any matrix has four numbers in it, say

$$M = \begin{pmatrix} a & b \\ c & d \end{pmatrix}.$$
It can always be written as a linear combination of four matrices. For example,

$$M = a\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + b\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} + c\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} + d\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}.$$

This can be done in many ways, but, in particular, we can say that M consists of a certain amount of σ_x plus a certain amount of σ_y, and so on, and write

$$M = \alpha\,1 + \beta\,\sigma_x + \gamma\,\sigma_y + \delta\,\sigma_z,$$
where the "amounts" α, β, γ and δ can, in general, be complex numbers.
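The coefficients can be found by matching matrix elements. Here is a short sketch, assuming numpy; the closed forms below (α = (a+d)/2 and so on) are obtained by comparing entries on both sides of the decomposition.

```python
import numpy as np

one = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def decompose(M):
    """Return (alpha, beta, gamma, delta) with M = alpha*1 + beta*sx + gamma*sy + delta*sz."""
    a, b = M[0, 0], M[0, 1]
    c, d = M[1, 0], M[1, 1]
    return (a + d) / 2, (b + c) / 2, 1j * (b - c) / 2, (a - d) / 2

M = np.array([[1.0, 2.0], [3.0, 4.0]], dtype=complex)   # an arbitrary example matrix
al, be, ga, de = decompose(M)
# Rebuilding M from the four coefficients reproduces it exactly
print(np.allclose(al * one + be * sx + ga * sy + de * sz, M))
```

Note that γ comes out complex even for a real M; that is why the "amounts" must in general be allowed to be complex numbers.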

Since any 2×2 matrix can be expressed in terms of the identity matrix and the sigma matrices, we already have everything we could possibly need for any two-state system. Whatever the two-state system, an ammonia molecule, a fuchsine dye, anything at all, the Hamiltonian equation can be rewritten in terms of the sigmas. Although in the physical situation of an electron in a magnetic field the sigmas appear to have a geometric meaning, they can also be regarded simply as useful matrices, suitable for use in any two-state system.
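As a concrete check, assuming numpy and an arbitrary illustrative field and moment, the matrices of Table 9.1 let formula (9.4) reproduce the Hamiltonian matrix (9.2) exactly:

```python
import numpy as np

# The Pauli matrices of Table 9.1
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

mu = 2.0                         # illustrative values, chosen arbitrarily
Bx, By, Bz = 0.3, -0.4, 1.2

# Equation (9.4): H = -mu * (sigma_x Bx + sigma_y By + sigma_z Bz)
H = -mu * (sx * Bx + sy * By + sz * Bz)

# Equation (9.2), written out element by element
H_92 = -mu * np.array([[Bz, Bx - 1j * By],
                       [Bx + 1j * By, -Bz]])

print(np.allclose(H, H_92))
```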

For example, one way to think about the proton and the neutron is to think of them as the same particle in either of two states. We say that a nucleon (proton or neutron) is a two-state system, in this case with respect to its electric charge. Viewed that way, the state |1⟩ can represent the proton and |2⟩ the neutron. A nucleon is said to have two "isotopic spin" states.

Since we will be using the sigma matrices as the "arithmetic" of quantum mechanics for two-state systems, let us quickly run over the conventions of matrix algebra. By the "sum" of two or more matrices we mean exactly what was meant in equation (9.4). In general, if we "add" two matrices A and B, the "sum" C means that each of its elements C_ij is given by

$$C_{ij} = A_{ij} + B_{ij}.$$

Each element of C is the sum of the elements of A and B standing in the same places.

In Chapter 3, Section 6, we already encountered the idea of a matrix "product". The same idea is useful in dealing with the sigma matrices. In general, the "product" of two matrices A and B (in that order) is defined as the matrix C with elements

$$C_{ij} = \sum_k A_{ik} B_{kj}.$$

This is the sum of products of elements taken in pairs from the i-th row of A and the j-th column of B. If the matrices are written out as tables, as in Fig. 9.1, there is a convenient "system" for obtaining the elements of the product matrix.

Fig. 9.1. Multiplication of two matrices.

Let's say you are computing C_23. You run your left index finger along the second row of A and your right index finger down the third column of B, multiplying each pair of numbers and adding the pairs as you go. We have tried to depict this in the figure.
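The finger-tracing rule of Fig. 9.1 can be written out literally as a pair of loops. A sketch, assuming numpy only for array storage and for checking the answer against its built-in product:

```python
import numpy as np

def matmul_by_hand(A, B):
    """C[i][j] = sum over k of A[i][k] * B[k][j], i.e. row i of A against column j of B."""
    n, m = A.shape[0], B.shape[1]
    C = np.zeros((n, m), dtype=complex)
    for i in range(n):
        for j in range(m):
            C[i, j] = sum(A[i, k] * B[k, j] for k in range(A.shape[1]))
    return C

A = np.arange(9).reshape(3, 3).astype(complex)         # an arbitrary 3x3 example
B = (np.arange(9) + 1).reshape(3, 3).astype(complex)
C = matmul_by_hand(A, B)

# C[1, 2] is "C_23" in the text's 1-based counting: second row of A, third column of B
print(C[1, 2], np.allclose(C, A @ B))
```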

For 2×2 matrices this looks especially simple. For example, multiplying σ_x by σ_x gives

$$\sigma_x^2 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},$$
that is, just the identity matrix. Or, for example, let's compute

$$\sigma_x\sigma_y = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} = \begin{pmatrix} i & 0 \\ 0 & -i \end{pmatrix}.$$

Looking at Table 9.1, you see that this is just the matrix σ_z multiplied by i. (Remember that multiplying a matrix by a number means multiplying each element of the matrix by that number.) The pairwise products of the sigmas are very important and come out rather amusingly, so we have listed them in Table 9.2. You can work them out yourself, as we did for σ_x² and σ_xσ_y.

There is one more very interesting and important point about the σ matrices. One can, if one likes, imagine that the three matrices σ_x, σ_y and σ_z are analogous to the three components of a vector; it is sometimes called the "sigma vector" and is denoted σ. It is really a "matrix vector", or a "vector matrix": three different matrices, one associated with each axis x, y and z. With it, the Hamiltonian of the system can be written in a pretty form that works in any coordinate system:

$$H = -\mu\,\boldsymbol{\sigma}\cdot\mathbf{B}. \tag{9.13}$$
Table 9.2 PRODUCTS OF THE SPIN MATRICES

$$\sigma_x^2 = \sigma_y^2 = \sigma_z^2 = 1,$$
$$\sigma_x\sigma_y = -\sigma_y\sigma_x = i\sigma_z,$$
$$\sigma_y\sigma_z = -\sigma_z\sigma_y = i\sigma_x,$$
$$\sigma_z\sigma_x = -\sigma_x\sigma_z = i\sigma_y.$$
Although we wrote these three matrices in the representation in which "up" and "down" refer to the z direction (so that σ_z looks particularly simple), we can imagine what they would look like in any other representation. Although it takes considerable calculation, it can be shown that they transform as the components of a vector. (We won't worry about proving this just now; check it yourself if you like.) You can use σ in different coordinate systems as though it were a vector.
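One numerical spot-check of the "transforms as a vector" claim, assuming numpy and one common sign convention for rotations (the sign of the mixing term depends on the convention chosen): a change of representation by U = exp(−iφσ_z/2) mixes σ_x with σ_y exactly as the x-component of a vector rotated about z.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])

phi = 0.8   # an arbitrary rotation angle about the z axis
# U = exp(-i*phi*sigma_z/2); since sigma_z is diagonal the exponential is just phases
U = np.diag([np.exp(-1j * phi / 2), np.exp(1j * phi / 2)])

# Transform sigma_x to the rotated representation
rotated = U @ sx @ U.conj().T

# It comes out as cos(phi)*sigma_x + sin(phi)*sigma_y, like a vector component
print(np.allclose(rotated, np.cos(phi) * sx + np.sin(phi) * sy))
```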

You remember that in quantum mechanics the Hamiltonian H is tied to energy. In fact it coincides exactly with the energy in the simple case where there is only one state. Even for a two-state system such as the spin of an electron, if we write the Hamiltonian in the form (9.13) it looks very much like the classical formula for the energy of a magnet with magnetic moment μ in a magnetic field B. Classically, that energy is

$$U = -\boldsymbol{\mu}\cdot\mathbf{B}, \tag{9.14}$$
where μ is a property of the object and B is the external field. One can imagine that (9.14) goes over into (9.13) if the classical energy is replaced by the Hamiltonian and the classical μ by the matrix μσ. Then, after this purely formal substitution, the result can be interpreted as a matrix equation. It is sometimes argued that to every quantity in classical physics there corresponds a matrix in quantum mechanics. It would really be more correct to say that the Hamiltonian matrix corresponds to the energy, and that any quantity that can be defined via the energy has a corresponding matrix. For example, the magnetic moment can be defined via the energy by saying that the energy in an external field B is −μ·B. This defines the magnetic moment vector μ. Then we look at the formula for the Hamiltonian of the real (quantum) object in a magnetic field and try to guess which matrices correspond to which quantities in the classical formula. With this trick, some classical quantities sometimes acquire quantum counterparts.
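One place the correspondence shows itself concretely: the eigenvalues of the matrix Hamiltonian (9.13) are ∓μ|B| no matter which way B points, mirroring the classical energies of a moment aligned with or against the field. A sketch assuming numpy and an arbitrary illustrative field:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

mu = 1.5
B = np.array([0.6, -0.8, 1.2])                      # an arbitrary field direction
H = -mu * (B[0] * sx + B[1] * sy + B[2] * sz)       # equation (9.13)

# The two energy levels are -mu|B| and +mu|B|, independent of the direction of B
levels = np.linalg.eigvalsh(H)
print(np.allclose(levels, [-mu * np.linalg.norm(B), mu * np.linalg.norm(B)]))
```

This works because (σ·B)² = |B|²·1, which follows directly from the products in Table 9.2.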

If you want, try to figure out how, or in what sense, the classical vector μ is equal to the matrix μσ; maybe you will discover something. But don't break your head over it. It really isn't worth it: in fact, they are not equal. Quantum mechanics is a completely different kind of theory, a different kind of idea about the world. Sometimes correspondences happen to emerge, but they are hardly more than mnemonic devices, rules for remembering.

In other words, you memorize (9.14) when you learn classical physics; then, if you remember the correspondence μ → μσ, you have a reason to remember (9.13). Of course, nature knows quantum mechanics, and classical mechanics is only an approximation, so there is nothing mysterious in the fact that here and there, from behind classical mechanics, shadows of the quantum-mechanical laws peek out, the laws that really lie beneath. It is impossible to reconstruct the real object from its shadow, but the shadow helps you remember what the object looked like. Equation (9.13) is the truth, and equation (9.14) is its shadow. We learn classical mechanics first, and so we would like to derive the quantum formulas from it, but there is no once-and-for-all established scheme for doing that. Each time we must go back to the real world and discover the correct quantum-mechanical equations. When they come out looking like something classical, we are pleased. If these warnings seem boring to you, if you feel that old truths about the relation of classical physics to quantum physics are being repeated here, then I apologize: the conditioned reflex of a teacher has set in, one who is used to explaining quantum mechanics to students who have never before heard of the Pauli spin matrices. It has always seemed to me that such students never lose hope that quantum mechanics can somehow be derived as a logical consequence of the classical mechanics they diligently studied in earlier years. (Perhaps they just want to avoid learning something new.) But, luckily, you learned the classical formula (9.14) only a few months ago, and even then with the caveat that it is not quite correct, so perhaps you will not be so reluctant to accept the quantum formula (9.13) as the primary truth.


We continue our discussion of the properties of two-level systems. At the end of the previous chapter we talked about a particle with spin in a magnetic field. We described the spin state by specifying the amplitude that the -component of the spin angular momentum is equal to , and the amplitude that it is equal to . In previous chapters, we denoted these basic states and . Let us again resort to these notations, although when it is more convenient, we will change them to and .

We saw in the last chapter that when a particle with spin and magnetic moment is in a magnetic field , then the amplitudes and are related by the following differential equations:

(9.1)

In other words, the Hamiltonian matrix has the form

(9.2)

and, of course, equations (9.1) coincide with

, (9.3)

where and take values ​​and (or 1 and 2).

This two-state system - electron spin - is so important that it would be very useful to find a neater and more elegant way to describe it. We will now take a short mathematical digression to show you how the equations of a two-state system are usually written. This is done like this: first, notice that each term of the Hamiltonian is proportional to , and some component; therefore (purely formally) we can write

There's no new physics here; these equations simply mean that the coefficients and - there are all of them - can be represented in such a way that (9.4) coincides with (9.2).

Let's see why this is so. Let's start with . Since it occurs only in and , then everything will be fine if we take

We often write a matrix in the form of a table like this:

.

For the Hamiltonian, particles with spin in a magnetic field are the same as

.

In the same way, the coefficients can be written in the form of a matrix

. (9.5)

Describing the coefficients for , we find that the elements of the matrix should have the form

Or in short:

And finally, looking at , we get

If we define three sigma matrices in this way, then equations (9.1) and (9.4) will coincide. To leave room for indices and , we noted which component stands for which component, placing the indices on top. Usually, however, they are discarded (they are easy to imagine anyway), and the indices are placed at the bottom. Then (9.4) is written as follows:

Sigma matrices are so important (they are constantly used) that we have written them out in table. 9.1. (Anyone who is going to work in quantum physics is obliged to remember them.) They are also called Pauli spin matrices - after the name of the physicist who invented them.

Table 9.1 Pauli spin matrices

We have included another 2x2 matrix in the table, which is needed when we want to consider a system in which both spin states have the same energy, or when we want to move to another zero-point energy. In such cases, it is necessary to add to the first equation in (9.1), and to the second. This can be taken into account by introducing a new notation - the unit matrix “1”, or:

(9.9)

and rewriting (9.8) in the form

Usually they simply lower it without unnecessary reservations, that any constant like this is automatically multiplied by the identity matrix, and then they simply write

One of the reasons spin matrices are so useful is that any 2x2 matrix can be expressed in terms of them. In any matrix there are four numbers, say

It can always be written as a linear combination of four matrices. For example,

This can be done in any way, but, in particular, you can say what consists of a certain quantity plus a certain quantity, etc., and write

where "quantities" and in general can be complex numbers.

Since any 2x2 matrix can be expressed in terms of an identity matrix and a sigma matrix, then we already have everything we might need for any two-state system. Whatever the system with two states - an ammonia molecule, a fuchsin dye, whatever - the Hamiltonian equation can be rewritten in sigma. Although in the physical case of an electron in a magnetic field, sigmas seem to have a geometric meaning, they can also be considered simply useful matrices, suitable for use in any system with two states.

For example, one way to think about the proton and neutron is to think of them as the same particle in either of two states. We say that a nucleon (proton or neutron) is a system with two states, in this case states in relation to the electric charge. If we consider the nucleon in this way, then the state can be represented by a proton, and - by a neutron. A nucleon is said to have two "isotopic spin" states.

Since we will be using sigma matrices as the “arithmetic” of quantum mechanics for two-state systems, we will quickly become familiar with the conventions of matrix algebra. The “sum” of two or more matrices means exactly what was meant in equation (9.4).

In general, if we “add” two matrices and , then “sum” means that each of its elements is given by the formula

Each element is the sum of the elements and standing in the same places.

In ch. 3, § 6, we have already encountered the idea of ​​a matrix “product”. The same idea is useful when dealing with sigma matrices. In general, the “product” of two matrices and (in this exact order) is defined as a matrix with elements

This is the sum of the product of elements taken in pairs on the th row and th column. If the matrices are written in the form of tables, as in Fig. 9.1, then you can specify a convenient “system” for obtaining the elements of the product matrix. Let's say you are calculating . You move your left index finger down the second line and your right index finger down the third column, multiplying each pair of numbers and adding the pairs as you go. We tried to depict this in a drawing.

For 2x2 matrices this looks especially simple. For example, if multiplied by , then it comes out

.

i.e. just an identity matrix. Or. for example, let's calculate

.

Looking at the table. 9.1, you see that this is simply the matrix multiplied by . (Remember that multiplying a matrix by a number means multiplying each element of the matrix by a number.) Pairwise sigma products are very important and look pretty funny, so we've listed them in the table. 9.2. You can count them yourself, as we did with , and .

Another very interesting and important point is connected with matrices. You can, if you like, imagine that the three matrices , and are similar to the three components of a vector; it is sometimes called the “sigma vector” and is designated . This is actually a "matrix vector", or "vector matrix". These are three different matrices, each associated with its own axis, or. With their help, the Hamiltonian of the system can be written in a beautiful form suitable for any coordinate system:

Figure 9.1. Multiplication of two matrices.

Table 9.2 Product of spin matrices

Although we wrote these three matrices in a representation where the terms "up" and "down" refer to direction (so oh, looks especially simple), we can imagine what they would look like in any other representation. And although this requires considerable calculations, it can still be shown that they change as components of a vector. (We won't worry about proving this for now, though. Check it out for yourself if you want.) You can use it in different coordinate systems as if it were a vector.

Matrix. Then, after such a purely formal replacement, the result can be interpreted as a matrix equation. It is sometimes argued that for every quantity in classical physics there is a corresponding matrix in quantum mechanics. In fact, it would be more correct to say that the Hamiltonian matrix corresponds to energy and that every quantity that can be defined in terms of energy has a corresponding matrix. For example, the magnetic moment can be defined in terms of energy, saying that there is energy in the external field. This determines the magnetic moment vector. We then look at the formula for the Hamiltonian of a real (quantum) object in magnetic field and try to guess which matrices correspond to which quantities in the classical formula. With this trick, sometimes some classical quantities have their quantum counterparts.

If you want, try to figure out how, in what sense, a classical vector is equal to a matrix: maybe you will discover something. But don't worry about it. It’s really not worth it: in fact, they are not equal. Quantum mechanics is a completely different type of theory, a different type of ideas about the world. Sometimes it happens that some correspondences emerge, but they are unlikely to represent anything more than mnemonic devices - rules for memorization.

In other words, you memorize (9.14) when you learn classical physics; then if you remember the correspondence, then you have a reason to remember (9.13). Of course, nature knows quantum mechanics, while classical mechanics is just an approximation, which means there is nothing mysterious in the fact that from behind classical mechanics shadows of quantum mechanical laws peek out here and there, which actually represent their background. It is impossible to directly reconstruct a real object from a shadow, but a shadow helps us remember what the object looked like. Equation (9.13) is the truth, and equation (9.14) is its swamp. We first learn classical mechanics and therefore we want to derive quantum formulas from it, but there is no once and for all established scheme for this. Every time we have to go back to the real world and discover the correct quantum mechanical equations. And when they turn out to be similar to something classic, we rejoice.

If these warnings seem boring to you, if, in your opinion, old truths are being uttered here about the relationship of classical physics to quantum, then I apologize: the conditioned reflex of the teacher, who is used to explaining quantum mechanics to students who have never before heard of Pauli spin matrices, has come into play. It always seemed to me that they did not lose hope that quantum mechanics could somehow be derived as a logical consequence of classical mechanics, the same one that they diligently taught in previous years. (Maybe they just want to avoid learning something new.) But luckily, you learned the classic formula (9.14) just a few months ago, and even then with the caveats that it's not quite correct, so maybe Perhaps you will not be so reluctant to accept the need to consider the quantum formula (9.13) as a primary truth.

We continue our discussion of the properties of two-level systems. At the end of the previous chapter we talked about a particle with spin 1/2 in a magnetic field. We described the spin state by specifying the amplitude C 1 that the z-component of the spin angular momentum is equal to + h/2, and the amplitude C 2 that it is equal to -h/2. In previous chapters we designated these basic states | +> and | ->. Let us again resort to these notations, although when it is more convenient, we will change them to | 1> and | 2>.

We saw in the last chapter that when a particle with spin 1/2 and with a magnetic moment μ is in a magnetic field B = (B x, B y, B z), then the amplitudes C +(=C 1) and C_ (= C 2) are related by the following differential equations:

In other words, the Hamiltonian matrix H¡j has the form

Where i and j take the values ​​+ and - (or 1 and 2).

This two-state system—the spin of the electron—is so important that it would be very useful to find a neater and more elegant way to describe it. We will now take a short mathematical digression to show you how the equations of a two-state system are usually written. This is done like this: first, notice that each term in the Hamiltonian is proportional to μ and some component B; That's why (purely formal) you can write

There's no new physics here; these equations simply mean that the coefficients σ x ¡j, σy¡j and σ z ¡j - there are only 4X 3 = 12 - can be represented in such a way that (9.4) coincides with (9.2).

Let's see why this is so. Let's start with B z. Once B z found only in H 11 And N. 22, then everything will be fine if you take

We often write the matrix N¡j in the form of a sign like this:

For the Hamiltonian, a particle with spin 1/2 in a magnetic field B is the same as

In the same way, the coefficients σ z¡j can be written as a matrix

Describing the coefficients for B x, we find that the matrix elements σ x should look like

And finally, looking at IN , we get

If we define three sigma matrices in this way, then equations (9.1) and (9.4) will coincide. To leave room for indexes i and j, we noted which σ stands for which component of B by placing indices X, y, z above. Usually, however, i and j are discarded (they are easy to imagine anyway), and the indices x, y and z are placed at the bottom. Then (9.4) is written as follows:

Sigma matrices are so important (they are constantly used) that we have written them out in table. 9.1. (Anyone who plans to work in quantum physics must remember them.) They are also called Pauli spin matrices- named after the physicist who invented them.

We have included another 2x2 matrix in the table, which is needed when we want to consider a system in which both spin states have the same energy, or when we want to move to another zero-point energy. In such cases, we have to add to the first equation in (9.1) E 0 C+ , and to the second E 0 C _. This can be taken into account by introducing a new notation - identity matrix"1", or δ ¡j:

Usually just understood without further reservations, that any constant like E 0 is automatically multiplied by the identity matrix, and then they simply write

One of the reasons why spin matrices are so useful is that any a 2x2 matrix can be expressed in terms of them. In any matrix there are four numbers, say

It can always be written as a linear combination of four matrices. For example,

This can be done in any way, but, in particular, we can say that M consists of a certain amount σ x plus some amount of σ y, etc., and write

where the “quantities” α, β, γ and δ can generally be complex numbers.

Since any 2x2 matrix can be expressed in terms of an identity matrix and a sigma matrix, then all that can be needed to any systems with two states, we already have. Whatever the two-state system—an ammonia molecule, a fuchsia dye, whatever—the Hamiltonian equation can be rewritten in sigma. Although in the physical case of an electron in a magnetic field, sigmas seem to have a geometric meaning, they can also be considered simply useful matrices, suitable for use in any system with two states.

For example, one way to think about the proton and neutron is to think of them as one And the same particle in any of the two states. We say that nucleon(proton or neutron) is a system with two states, in this case states in relation to the electric charge. If we consider the nucleon in this way, then the state |1> can represent a proton, and |2> -neutron. A nucleon is said to have two "isotopsnin" states.

Since we will be using sigma matrices as the “arithmetic” of quantum mechanics for two-state systems, we will quickly become familiar with the conventions of matrix algebra. By the “sum” of two or more matrices we mean precisely what was meant V equation (9.4). In general, if we “add” two matrices A And IN, then the “sum” C means that each of its elements C¡j is given by the formula

Each element WITH is the sum of the elements A And IN, standing in the same places.

In ch. 3, § 6, we have already encountered the idea of ​​a matrix “product”. The same idea is useful when dealing with sigma matrices. In general, the “product” of two matrices A And IN(in this exact order) is defined as the matrix WITH with elements

This is the sum of the products of elements taken in pairs from the ¡th line A and kth column IN. If the matrices are written in the form of tables, as in Fig. 9.1, then you can specify a convenient “system” for obtaining the elements of the product matrix. Let's say you're calculating C23. You move your left index finger Bysecond line A, and to the right - down the third column B, multiply each pair of numbers and add the pairs as you go. We tried to depict this in a drawing.

For 2x2 matrices this looks especially simple. For example, if σ x multiplied by σ x, then it turns out

i.e. just an identity matrix. Or. for example, let's calculate

Looking at the table. 9.1, you see that this is simply the matrix σ x multiplied by i. (Remember that multiplying a matrix by a number means multiplying each element of the matrix by a number.) Pairwise sigma products are very important and look pretty funny, so we've listed them in the table. 9.2. You can calculate them yourself, as we did with σ x 2 and σ x σ y.

Another very interesting and important point is connected with the matrices σ. You can, if you like, imagine that the three matrices σ x, σ y and σ z are similar to the three components of a vector; it is sometimes called the "sigma vector" and is denoted σ. This is actually a "matrix vector", or "vector matrix". These are three different matrices, each associated with its own x, y or z axis. With their help, the Hamiltonian of the system can be written in a beautiful form suitable for any coordinate system:

Although we have written these three matrices in a notation in which the terms "up" and "down" refer to the z direction (so σ z looks especially simple), but you can imagine how they would look in any other representation. And although this requires considerable calculations, it can still be shown that they change as components of a vector. (We won't worry about proving this for now, though. Check it out for yourself if you want.) You can use σ in different coordinate systems as if it were a vector.

You remember that the Hamiltonian H is related to energy in quantum mechanics. It really exactly coincides with the energy in the simple case when there is only one state. Even in a system with two states, what is the spin of an electron, if we write the Hamiltonian in the form (9.13), it very much resembles classical formula for the energy of a magnet with a magnetic moment, and in a magnetic field B. Classically it looks like this:

where μ is a property of the object, and B is the external field. One can imagine that (9.14) turns into (9.13) if the classical energy is replaced by a Hamiltonian, and the classical μ — matrix μσ. Then, after such a purely formal replacement, the result can be interpreted as a matrix equation. It is sometimes argued that for every quantity in classical physics there is a corresponding matrix in quantum mechanics. In fact, it would be more correct to say that the Hamiltonian matrix corresponds to energy and that every quantity that can be defined in terms of energy has a corresponding matrix. For example, the magnetic moment can be defined in terms of energy, saying that the energy in the external field B is - μ·V. This defines magnetic moment vector μ. We then look at the formula for the Hamiltonian of a real (quantum) object in a magnetic field and try to guess which matrices correspond to which quantities in the classical formula. With this trick sometimes some classical quantities, their quantum counterparts appear.

If you want, try to figure out how, in what sense, the classical vector is equal to the matrix μσ : maybe you'll discover something. But don't worry about it. Really, it’s not worth it: in fact, they not equal. Quantum mechanics is a completely different type of theory, a different type of ideas about the world. Sometimes it happens that some correspondences emerge, but they are unlikely to represent anything more than mnemonic devices - rules for memorization.

In other words, you memorize (9.14) when you learn classical physics; then if you remember the correspondence μ →μσ, then you have a reason to remember (9.13). Of course, nature knows cable-stayed mechanics, classical mechanics is just an approximation, which means there is nothing mysterious in the fact that because of classical mechanics, shadows of quantum mechanical laws peek out here and there, which actually represent their background. It is impossible to directly reconstruct a real object from a shadow, but a shadow helps us remember what the object looked like. Equation (9.13) is the truth, and equation (9.14) is its shadow. We first learn classical mechanics and therefore we want to derive quantum formulas from it, but there is no once and for all established scheme for this. Every time we have to go back to the real world and discover the correct quantum mechanical equations. And when they turn out to be similar to something classic, we rejoice.

If these warnings seem boring to you — if, in your opinion, old truths about the relationship of classical physics to quantum physics are being repeated here — then I apologize: the conditioned reflex of a teacher has come into play, one who is used to explaining quantum mechanics to students who have never before heard of Pauli spin matrices. It always seemed to me that such students never lose hope that quantum mechanics can somehow be derived as a logical consequence of classical mechanics, the same mechanics they diligently studied in previous years. (Maybe they just want to avoid learning something new.) But luckily, you learned the classical formula (9.14) only a few months ago, and even then with the caveat that it is not quite correct, so perhaps you will not be so reluctant to accept the quantum formula (9.13) as the primary truth.

Consider an electron with spin 1/2. The matrices that represent the projections of its spin then have dimension 2×2.

Let us work in the s_z representation (also called the σ_z representation). In this representation the matrix of the operator ŝ_z is diagonal:

\hat{s}_z = \frac{\hbar}{2}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.

This is the operator in matrix notation. Recall that in the matrix representation the matrix element (kernel) of an operator \hat{A} has the form

A_{mn} = \langle m|\hat{A}|n\rangle.

Then for our representation we have

(s_z)_{11} = \frac{\hbar}{2}, \quad (s_z)_{22} = -\frac{\hbar}{2}, \quad (s_z)_{12} = (s_z)_{21} = 0.

Similarly, one obtains the matrices of all three spin projections:

\hat{s}_x = \frac{\hbar}{2}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},

\hat{s}_y = \frac{\hbar}{2}\begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix},

\hat{s}_z = \frac{\hbar}{2}\begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.

Since \hat{s}_x and \hat{s}_y are not diagonal in this representation, these quantities are not measurable simultaneously with s_z. For a diagonal matrix, the eigenvalues stand along the main diagonal.

One introduces the matrices σ_x, σ_y, σ_z defined by \hat{s}_i = \frac{\hbar}{2}\sigma_i. These are the Pauli matrices:

\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},

\sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix},

\sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.

It is easy to show that

\sigma_x^2 = \sigma_y^2 = \sigma_z^2 = 1,

or, in operator language,

\hat{s}_x^2 = \hat{s}_y^2 = \hat{s}_z^2 = \frac{\hbar^2}{4},

and the commutators are

[\sigma_x, \sigma_y] = 2i\sigma_z,

[\sigma_y, \sigma_z] = 2i\sigma_x,

[\sigma_z, \sigma_x] = 2i\sigma_y.

Then, since \hat{s}^2 = \hat{s}_x^2 + \hat{s}_y^2 + \hat{s}_z^2, we get

\hat{s}^2 = \frac{\hbar^2}{4}\left(\sigma_x^2 + \sigma_y^2 + \sigma_z^2\right) = \frac{3\hbar^2}{4}.

For s = 1/2 this is exactly

\hat{s}^2 = s(s+1)\hbar^2,

as it should be for an angular momentum with quantum number s.
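The algebra above is easy to verify numerically. Below is a short check (a sketch using numpy, in units where ħ = 1) of the relations σ_i² = 1, the commutators, and ŝ² = s(s+1)ħ² for s = 1/2:

```python
import numpy as np

hbar = 1.0  # work in units where hbar = 1

# The Pauli matrices as given above
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

# sigma_i^2 = 1
for s in (sx, sy, sz):
    assert np.allclose(s @ s, I2)

# Commutators: [sigma_x, sigma_y] = 2i sigma_z, and cyclic permutations
assert np.allclose(sx @ sy - sy @ sx, 2j * sz)
assert np.allclose(sy @ sz - sz @ sy, 2j * sx)
assert np.allclose(sz @ sx - sx @ sz, 2j * sy)

# Spin operators s_i = (hbar/2) sigma_i, so that
# s^2 = s_x^2 + s_y^2 + s_z^2 = s(s+1) hbar^2 with s = 1/2
s2 = sum((hbar / 2) ** 2 * (s @ s) for s in (sx, sy, sz))
assert np.allclose(s2, 0.5 * 1.5 * hbar**2 * I2)
print("Pauli algebra checks passed")
```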

§ 40. The principle of identity

This principle in quantum mechanics is related in a certain way to the Heisenberg principle.

If we consider an ensemble of identical particles, it is impossible to tell the individual particles apart.

Identical particles have all the same internal properties (m, e, s, ...). Since a trajectory cannot be introduced in quantum mechanics, identical particles cannot be distinguished.

For example, in an electron gas there are no individual particles, only the ensemble as a whole: such a system consists of identical particles.

In an ensemble of identical particles, states are realized that are invariant under their permutations.

Since particles cannot be identified, we cannot distinguish between states that are caused by rearrangement of particles.

§ 41. Permutation operator and its properties

Let us introduce the notation \hat{P}_{ab} for the operator that permutes the a-th and b-th particles of an ensemble of identical particles.

The Hamiltonian of such a system of identical particles is symmetric: since the particles are identical, they have the same interaction energy, i.e. \hat{H} is invariant with respect to their permutation.

That is, we can write

[\hat{P}_{ab}, \hat{H}] = 0. (50.1)

Since the operator \hat{P}_{ab} does not depend explicitly on time, it follows from (50.1) that it is an integral of motion: its eigenvalues are conserved.

Let us find the eigenvalues of the operator \hat{P}_{ab}. Write the eigenvalue problem:

\hat{P}_{ab}\,\psi = \lambda\,\psi. (50.2)

Acting with the operator \hat{P}_{ab} a second time, we get

\hat{P}_{ab}^2\,\psi = \lambda\,\hat{P}_{ab}\,\psi = \lambda^2\,\psi.

But permuting the same pair of particles twice restores the original function, so

\hat{P}_{ab}^2 = 1. (50.3)

Then from (50.3), taking into account (50.2),

\lambda^2 = 1,
\lambda = \pm 1.

We obtain particles with symmetric and antisymmetric wave functions: bosons and fermions.

In addition, the operator \hat{P}_{ab} is an integral of motion, so its eigenvalues are conserved in time. That is, the symmetry properties of the wave functions under the action of this operator are also preserved.

Functions corresponding to the eigenvalue +1,

\hat{P}_{ab}\,\psi_s = +\psi_s,

are called symmetric; they describe symmetric states. Likewise, functions corresponding to the eigenvalue −1,

\hat{P}_{ab}\,\psi_a = -\psi_a,

are called antisymmetric.

The properties of symmetry and antisymmetry are integrals of motion, i.e. they are conserved: the ensemble cannot pass from one class of states to the other (from symmetric to antisymmetric or vice versa).

Symmetric functions describe the state of systems with integer spin, i.e., an ensemble of bosons.

Antisymmetric functions – an ensemble of fermions.
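The eigenvalue argument above can be illustrated on a discretized two-particle wavefunction, where the permutation operator is simply a transposition of the two coordinate indices (a sketch; the 4-point grid and random amplitudes are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# A two-particle wavefunction on a discrete coordinate grid:
# psi[q1, q2] is the amplitude for particle 1 at q1 and particle 2 at q2.
psi = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))

def permute(f):
    """The permutation operator P_12: exchange the two particles."""
    return f.T  # swap the coordinate indices (no conjugation)

# Project onto the symmetric and antisymmetric parts
psi_s = (psi + permute(psi)) / 2
psi_a = (psi - permute(psi)) / 2

# P psi_s = +psi_s and P psi_a = -psi_a: eigenvalues lambda = +1, -1
assert np.allclose(permute(psi_s), psi_s)
assert np.allclose(permute(psi_a), -psi_a)

# P^2 = 1: permuting the same pair twice restores the original function
assert np.allclose(permute(permute(psi)), psi)
```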

For bosons the spin is integer, for fermions it is half-integer:

\psi_s: \quad s = 0, 1, 2, \ldots \ \text{(bosons)},
\psi_a: \quad s = \tfrac{1}{2}, \tfrac{3}{2}, \tfrac{5}{2}, \ldots \ \text{(fermions)}.

We will consider stationary states, i.e.

\Psi_s(q, t) = \psi_s(q)\,e^{-iEt/\hbar},
\Psi_a(q, t) = \psi_a(q)\,e^{-iEt/\hbar}.

Stationary functions satisfy the stationary Schrödinger equation

\hat{H}\,\psi = E\,\psi.

Since the operators \hat{H} and \hat{P}_{ab} commute,

\hat{H}\,(\hat{P}_{ab}\psi) = \hat{P}_{ab}\,\hat{H}\,\psi = E\,(\hat{P}_{ab}\psi),

i.e. \hat{P}_{ab}\psi is also an eigenfunction of \hat{H} with the same energy E.

If there are N particles, then N! permutations can be performed, so we have N! possible functions

\psi_1, \psi_2, \ldots, \psi_{N!}.

Since all of them satisfy the Schrödinger equation with the same energy E, we get a degeneracy. It is fictitious. In order to get rid of this degeneracy, we symmetrize the functions.
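This exchange degeneracy and its removal by symmetrization can be illustrated with a toy non-interacting two-particle system (a sketch; the three-level single-particle Hamiltonian is an arbitrary choice for illustration):

```python
import numpy as np

# Toy single-particle Hamiltonian h (3 levels) and its eigenstates
h = np.diag([0.0, 1.0, 2.5])
E, V = np.linalg.eigh(h)

# Non-interacting two-particle Hamiltonian H = h (x) 1 + 1 (x) h
I = np.eye(3)
H = np.kron(h, I) + np.kron(I, h)

# Two product states differing only by the particle permutation:
# phi_nm = |n>|m> and phi_mn = |m>|n>
n, m = 0, 1
phi_nm = np.kron(V[:, n], V[:, m])
phi_mn = np.kron(V[:, m], V[:, n])

# Both have the same energy E_n + E_m: the exchange degeneracy
assert np.allclose(H @ phi_nm, (E[n] + E[m]) * phi_nm)
assert np.allclose(H @ phi_mn, (E[n] + E[m]) * phi_mn)

# Symmetrization picks out the physical states: the unique symmetric
# (boson) and antisymmetric (fermion) combinations, still at energy E_n + E_m
psi_s = (phi_nm + phi_mn) / np.sqrt(2)
psi_a = (phi_nm - phi_mn) / np.sqrt(2)
assert np.allclose(H @ psi_s, (E[n] + E[m]) * psi_s)
assert np.allclose(H @ psi_a, (E[n] + E[m]) * psi_a)
```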

Properties of Pauli matrices

A. All Pauli matrices, like the matrices of operators of physical quantities, are Hermitian: \sigma_i^\dagger = \sigma_i.

B. For all Pauli matrices the condition \sigma_i^2 = 1 is satisfied, where 1 is the identity matrix. This can be checked directly. The statement is a consequence of the fact that the square of the spin projection of a particle with spin 1/2 has a definite value in any state (there are two possibilities for the spin projection, +1/2 and −1/2, and the squares of both of these numbers equal 1/4).

C. The Pauli matrices are traceless: \mathrm{Tr}\,\sigma_i = 0.

D. Any 2×2 matrix M can be represented in the form M = a_0\,1 + a_x\sigma_x + a_y\sigma_y + a_z\sigma_z. This is because the identity matrix and the three Pauli matrices form a complete set in the space of 2×2 matrices: that space is four-dimensional (a matrix is determined by specifying four numbers), so any four linearly independent matrices form a basis in it.

E. \sigma_i\sigma_j + \sigma_j\sigma_i = 2\delta_{ij}\,1. In particular, \sigma_x\sigma_y = -\sigma_y\sigma_x, i.e. the Pauli matrices anticommute. Their algebra (as the rules for multiplying the matrices are called) is very simple: when two different Pauli matrices are interchanged, the sign of their product simply changes.

F. Since the Pauli matrices are proportional to the operators of the spin projections onto the coordinate axes, \hat{s}_i = (\hbar/2)\sigma_i, they satisfy the usual commutation relations for angular-momentum projections:

[\sigma_x, \sigma_y] = 2i\sigma_z, \quad [\sigma_y, \sigma_z] = 2i\sigma_x, \quad [\sigma_z, \sigma_x] = 2i\sigma_y.
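Since the Pauli matrices are traceless and satisfy σ_i² = 1, the coefficients of the expansion M = a₀·1 + a·σ can be read off with traces: a₀ = Tr(M)/2 and a_i = Tr(Mσ_i)/2. A short numerical check (a sketch using numpy, with a random matrix as an arbitrary example):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def pauli_coeffs(M):
    """Coefficients in M = a0*1 + ax*sx + ay*sy + az*sz.

    Because Tr(sigma_i) = 0 and Tr(sigma_i sigma_j) = 2*delta_ij,
    the coefficients are a0 = Tr(M)/2 and a_i = Tr(M sigma_i)/2."""
    return tuple(np.trace(M @ s) / 2 for s in (I2, sx, sy, sz))

rng = np.random.default_rng(1)
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
a0, ax, ay, az = pauli_coeffs(M)

# Reassembling from the coefficients reproduces M exactly
assert np.allclose(a0 * I2 + ax * sx + ay * sy + az * sz, M)
```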

Let us now consider the following question. Let the particle be in the state

\chi = \begin{pmatrix} C_1 \\ C_2 \end{pmatrix}, \quad |C_1|^2 + |C_2|^2 = 1. (10)

What values can the spin projection onto the x axis take in this state, and with what probabilities? To answer this question, we must find the eigenfunctions of the operator \hat{s}_x and expand function (10) in them.

Solving the eigenvalue equation

\hat{s}_x\,\chi = s_x\,\chi, \quad \frac{\hbar}{2}\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} a \\ b \end{pmatrix} = s_x\begin{pmatrix} a \\ b \end{pmatrix}, (11)

we obtain a system of homogeneous algebraic equations

-s_x\,a + \frac{\hbar}{2}\,b = 0, \quad \frac{\hbar}{2}\,a - s_x\,b = 0, (12)

which has non-zero solutions when the determinant of this system is equal to zero:

s_x^2 - \frac{\hbar^2}{4} = 0. (13)

From here we find the possible values of the spin projection onto the x axis (which, as it should be, are equal to the possible values of the spin projection onto the z axis):

s_x = \pm\frac{\hbar}{2}. (14)

Now, substituting the eigenvalues (14) into the system of equations (12), we find the eigenfunctions

\chi_{+} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \end{pmatrix}, \quad \chi_{-} = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ -1 \end{pmatrix} (15)

(the factors 1/\sqrt{2} arise from the normalization condition).

Let us now expand function (10) in the eigenfunctions (15). This expansion has the form

\chi = \frac{C_1 + C_2}{\sqrt{2}}\,\chi_{+} + \frac{C_1 - C_2}{\sqrt{2}}\,\chi_{-}. (16)

From here, according to the postulates of quantum mechanics, we find the probabilities of the different values of the spin projection onto the x axis in state (10):

w\left(\pm\frac{\hbar}{2}\right) = \frac{|C_1 \pm C_2|^2}{2}. (17)

From formulas (17) it follows, in particular, that if the particle is in a state with a definite spin projection onto the z axis (C_1 = 1, C_2 = 0 or C_1 = 0, C_2 = 1), then the probabilities of the different values of the spin projection onto the x axis are the same (both equal 1/2), in accordance with the general result obtained earlier for the eigenstates of angular-momentum operators in quantum mechanics. In conclusion, we note that from formulas (17) the average value of the spin projection onto the x axis equals

\langle s_x \rangle = \frac{\hbar}{2}\left[w\!\left(+\tfrac{\hbar}{2}\right) - w\!\left(-\tfrac{\hbar}{2}\right)\right] = \hbar\,\mathrm{Re}(C_1^* C_2).
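The calculation above can be checked numerically for any normalized state (a sketch using numpy, in units ħ = 1; the amplitudes C₁ = 0.8, C₂ = 0.6i are an arbitrary example):

```python
import numpy as np

hbar = 1.0
sx = (hbar / 2) * np.array([[0, 1], [1, 0]], dtype=complex)

# An arbitrary normalized spin state chi = (C1, C2) in the s_z representation
C1, C2 = 0.8, 0.6j
chi = np.array([C1, C2])
assert np.isclose(np.vdot(chi, chi), 1.0)

# Eigenvalues and eigenvectors of s_x (eigh sorts eigenvalues ascending:
# first column of vecs belongs to -hbar/2, second to +hbar/2)
vals, vecs = np.linalg.eigh(sx)

# Probabilities w(+-hbar/2) = |<chi_+-|chi>|^2 = |C1 +- C2|^2 / 2
w = np.abs(vecs.conj().T @ chi) ** 2
w_minus, w_plus = w
assert np.isclose(w_plus, np.abs(C1 + C2) ** 2 / 2)
assert np.isclose(w_minus, np.abs(C1 - C2) ** 2 / 2)

# Mean value <s_x> = hbar * Re(C1* C2)
mean_sx = float(np.real(np.vdot(chi, sx @ chi)))
assert np.isclose(mean_sx, hbar * np.real(np.conj(C1) * C2))
```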