How processor manufacturing begins. Production of modern processors

The CPU is the heart of any modern computer. Any microprocessor is essentially a large integrated circuit on which transistors are placed. By switching electric current on and off, transistors implement binary (on/off) logic and perform calculations. Modern processors are based on 45 nm technology: 45 nm (nanometers) is the feature size of a single transistor on the processor die. Until recently, 90 nm technology was mainly used.

The wafers are made from silicon, the second most abundant element in the earth's crust.

Silicon is obtained by chemical processing that purifies it of impurities. It is then melted and grown into a silicon cylinder (ingot) 300 millimeters in diameter. This cylinder is subsequently sliced into wafers with a diamond wire. The thickness of each wafer is about 1 mm. To give the wafer an ideal surface, it is ground on a special grinding machine after the wire cut.

After this, the surface of the silicon wafer is perfectly smooth. Incidentally, many manufacturers have already announced that they can work with 450 mm wafers. The larger the surface, the more transistors it can accommodate and the higher the processor performance.

A CPU consists of a silicon die on whose surface there are up to nine layers of transistors, separated by oxide layers for insulation.

Development of processor technology

Gordon Moore, one of the founders of Intel, one of the world leaders in processor production, formulated in 1965, based on his observations, the law according to which new models of processors and chips appear at regular intervals: the number of transistors in processors roughly doubles every two years. For some 40 years, Gordon Moore's law has held without deviation. The technologies of the near future are already around the corner: there are working prototypes built on 32 nm and 22 nm production processes. Until mid-2004, processor power depended primarily on clock frequency, but since 2005 clock frequency has practically stopped growing and multi-core processor technology has appeared instead: several processor cores with equal clock frequency are created, and during operation their power adds up, which increases overall processor performance.


Almost everyone knows that in a computer the main element among all the "hardware" components is the central processor. But the circle of people who understand how a processor works is very limited. Most users have no idea about it, and even when the system suddenly starts to slow down, many assume the processor is underperforming and pay no attention to other factors. To understand the situation fully, let's look at some aspects of CPU operation.

What is a central processing unit?

What does the processor consist of?

If we talk about how an Intel processor or its competitor from AMD works, you need to look at how these chips are designed. The first microprocessor (by the way, it was from Intel, the model 4004) appeared back in 1971. It could perform only the simplest addition and subtraction operations and processed only 4 bits of information, i.e. it had a 4-bit architecture.

Modern processors, like that first chip, are based on transistors but are much faster. They are made by photolithography on silicon wafers, from which individual dies (single crystals) with the transistors imprinted into them are obtained. The circuit is doped on a special accelerator using accelerated boron ions (ion implantation). In the internal structure of a processor the main components are the cores, the buses and the functional blocks; a particular iteration of a design is called a revision.

Main characteristics

Like any other device, the processor is characterized by certain parameters, which cannot be ignored when answering the question of how the processor works. First of all this:

  • number of cores;
  • number of threads;
  • cache size (internal memory);
  • clock frequency;
  • bus speed.

For now, let's focus on the clock frequency. It is not for nothing that the processor is called the heart of the computer: like a heart, it operates in a pulsed mode, with a certain number of cycles per second. Clock frequency is measured in MHz or GHz. The higher it is, the more operations the device can perform.

You can find out the processor's operating frequency from its declared specifications or look it up in the system information. While processing commands, however, the frequency can change, and during overclocking it can be pushed to extreme limits. Thus, the declared value is only an average indicator.

The number of cores is an indicator that determines the number of processing centers of the processor (not to be confused with threads - the number of cores and threads may not be the same). Due to this distribution, it is possible to redirect operations to other cores, thereby increasing overall performance.

How a processor works: command processing

Now a little about the structure of executable commands. To understand how a processor works, you need to know that any command has two parts: an operation part and an operand part.

The operation part specifies what the computer system should do at the given moment; the operand specifies what the processor should work on. In addition, a processor core can contain two computing centers (containers, threads), which divide the execution of a command into several stages (a minimal sketch of such a cycle follows the list below):

  • fetch;
  • decode;
  • execution of the command;
  • access to the processor's own memory;
  • saving the result.
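
Where the list above names fetch, decode, execute, memory access and write-back, a small sketch may help. The toy three-instruction machine below is invented purely for illustration and does not correspond to any real instruction set:

```python
# A toy model of the command cycle: fetch, decode, execute, memory access, write-back.
# The three-instruction "ISA" and the memory layout are invented purely for illustration.

memory = [
    ("LOAD", 0, 10),    # r0 <- mem[10]
    ("ADD", 0, 11),     # r0 <- r0 + mem[11]
    ("STORE", 0, 12),   # mem[12] <- r0
    ("HALT", 0, 0),
] + [0] * 6 + [2, 3, 0]   # data area: mem[10] = 2, mem[11] = 3, mem[12] = 0

registers = [0] * 4
pc = 0

while True:
    instruction = memory[pc]          # 1. fetch the command at the PC address
    opcode, reg, addr = instruction   # 2. decode it into an operation part and operands
    if opcode == "HALT":
        break
    if opcode == "LOAD":              # 3-4. execute, accessing memory where needed
        registers[reg] = memory[addr]
    elif opcode == "ADD":
        registers[reg] = registers[reg] + memory[addr]
    elif opcode == "STORE":
        memory[addr] = registers[reg]
    pc += 1                           # 5. the result is saved; move on to the next command

print(memory[12])   # 5
```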

Today, split caching is used in the form of two levels of cache memory, which prevents two or more commands from contending for access to the same memory block.

Based on the type of command processing, processors are divided into linear (executing commands in the order they are written), cyclic, and branching (executing instructions after processing branch conditions).

Operations Performed

Among the main functions assigned to the processor, in terms of the commands or instructions executed, three main tasks are distinguished:

  • mathematical operations based on an arithmetic-logical device;
  • moving data (information) from one type of memory to another;
  • making a decision on the execution of a command, and on its basis, choosing to switch to the execution of other sets of commands.

Interaction with memory (ROM and RAM)

In this process, the components worth noting are the bus and the read/write lines, which are connected to the storage devices. ROM contains a fixed set of bytes. First, the address of the required byte is placed on the address bus, then the read line changes its state, and the ROM puts the requested byte onto the data bus.
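
Roughly, that read handshake can be modeled as follows; the class and signal names here are assumptions made for this sketch, not a description of any particular chipset:

```python
# Simplified model of reading one byte from ROM over an address bus and a data bus.

class ROM:
    def __init__(self, contents):
        self.contents = bytes(contents)

    def read(self, address_bus, read_enable):
        # The ROM drives the data bus only while the read line is asserted.
        if read_enable:
            return self.contents[address_bus]
        return None  # bus not driven

rom = ROM([0x55, 0xAA, 0x0F])
address_bus = 1          # the processor puts the address on the address bus
read_enable = True       # ...then asserts the read line
data_bus = rom.read(address_bus, read_enable)
print(hex(data_bus))     # 0xaa
```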

Processors can not only read data from RAM but also write it; in this case the write line is used. That said, strictly theoretically, modern computers could do without RAM at all, since modern microcontrollers can place the necessary data bytes directly in the memory of the processor chip itself. But there is no way to do without ROM.

Among other things, the system starts in hardware-test mode (BIOS commands), and only then is control transferred to the operating system loader.

How to check if the processor is working?

Now let's look at some aspects of checking the processor's performance. It must be clearly understood that if the processor were not working, the computer would not be able to start loading at all.

It is another matter when you need to see how much of the processor's capacity is being used at a given moment. This can be done from the standard Task Manager (next to each process it shows what percentage of processor load it generates). To see this parameter visually, you can use the Performance tab, where changes are tracked in real time. Advanced parameters can be viewed using special programs, for example CPU-Z.

In addition, you can enable the use of multiple processor cores through msconfig and the additional boot parameters.

Possible problems

Finally, a few words about problems. Many users often ask: why does the processor work, but the monitor does not turn on? This situation has nothing to do with the central processor. The fact is that when any computer is turned on, the graphics adapter is tested first and only then everything else. Perhaps the problem lies in the graphics chip's own processor (all modern video accelerators have their own graphics processors).

But, continuing the analogy with the human body: if the heart stops, the whole body dies. The same applies to computers: if the processor does not work, the entire computer system "dies".

A few years ago, Intel introduced a step-by-step process for manufacturing microprocessors: from sand to the final product. The actual semiconductor elements look truly amazing.

Step 1. Sand

Silicon, which makes up about 25 percent of the earth's crust by mass, is second in abundance only to oxygen. Sand has a high percentage of silicon dioxide (SiO2), which is the basic ingredient not only for Intel processors but for semiconductor production in general.

Step 2. Molten silicon

The material is purified in several steps until it reaches the semiconductor grade used for chip making. Ultimately it takes the form of monocrystalline ingots about 300 millimeters (12 inches) in diameter. Previously the ingots were 200 millimeters (8 inches) in diameter, and back in 1970 even smaller, just 50 millimeters (2 inches).

At this level of processor production, after purification, the crystal purity is one impurity atom per billion silicon atoms. The weight of the ingot is 100 kilograms.

Step 3. Cutting the ingot

The ingot is cut with a very fine saw into individual slices called substrates. Each of them is subsequently polished to produce a defect-free, mirror-smooth surface. It is on this smooth surface that the tiny copper wires will subsequently be applied.

Step 4. Exposure of the photoresist layer

A photoresist liquid (similar to the materials used in traditional photography) is poured onto a substrate spinning at high speed. The rotation spreads a thin, uniform resist layer over the entire surface of the substrate.

An ultraviolet laser shines through a mask and a lens onto the surface of the substrate, forming thin illuminated lines on it. The lens produces a focused image four times smaller than the mask. Wherever the ultraviolet light hits the resist layer, a chemical reaction occurs and those areas become soluble.

Step 5. Etching

The soluble photoresist is then completely dissolved with a chemical solvent. After that, a chemical etchant is used to dissolve away (etch) a small amount of the polished semiconductor material of the substrate. The remaining photoresist is removed in a similar washing process, exposing the etched surface of the substrate.

Step 6. Formation of layers

To create the tiny copper wires that will eventually carry electricity to and from the various connectors, additional photoresist (light-sensitive material) is applied, exposed and washed away. Subsequently, an ion doping step is performed to add impurities, and the sites where copper ions will be deposited from the copper sulfate solution during the electroplating step are defined and protected.

At various stages of this manufacturing process, additional materials are added, etched and polished. The process is repeated 6 times to form 6 layers.

The final product looks like a grid of many microscopic copper strips that conduct electricity. Some of them are connected to others, and some are located at a certain distance from others. But they are all used for one purpose - to transfer electrons. In other words, they are designed to provide what is called "useful work" (for example, adding two numbers as quickly as possible, which is the essence of the computing model these days).

This multi-layer processing is repeated for each small area of the substrate surface on which a chip will be made, including areas that lie partly beyond the edge of the substrate.

Step 7. Testing

Once all the metal layers have been applied and all the transistors have been created, it is time for the next stage of Intel processor production - testing. A device with many pins is placed on the top of the chip. Many microscopic wires are attached to it. Each such wire has an electrical connection to the chip.

To reproduce the operation of the chip, a sequence of test signals is transmitted to it. The testing not only tests traditional computing capabilities, but also performs internal diagnostics to determine voltage values, cascade sequences, and other functions. The chip's response in the form of a test result is stored in a database specially allocated for a given area of ​​the substrate. This process is repeated for each section of the substrate.

Step 8. Wafer cutting

A very small diamond-tipped saw is used to cut the wafer into individual chips. The database populated in the previous step determines which of the cut chips are kept and which are discarded.

Step 9. Enclosure

All working dies are placed in physical packages. Even though the dies have been pre-tested and found to work correctly, this does not yet mean they are good processors.

The packaging process involves placing the silicon die on a substrate material and connecting miniature gold wires to its contacts or ball array. The array of ball leads can be seen on the underside of the package. A heat spreader, which is a metal cover, is installed on top of the package. Once this process is complete, the CPU looks like the finished product ready to be sold.

Note: the metal heat spreader is a key component of modern high-speed semiconductor devices. Previously, packages were ceramic and forced cooling was not used: it was not needed for models such as the 8086 and 80286 and became necessary only for models starting with the 80386. Earlier generations of processors had far fewer transistors.

For example, the 8086 processor had 29 thousand transistors, whereas modern central processing units have hundreds of millions. Such a small number of transistors by today's standards did not generate enough heat to require active cooling. To single out the processors that did need this type of cooling, such ceramic chips were subsequently labeled "Heatsink Required."

Modern processors generate enough heat to melt in a matter of seconds. Only the presence of a heat sink connected to a large radiator and fan allows them to function for a long time.

Sorting processors by characteristics

By this stage of production the processor looks just like the one you buy in a store. However, one more step is required to complete the production process. It is called sorting (binning).

This step measures the actual performance of the individual CPU. Parameters such as voltage, frequency, performance, heat dissipation and other characteristics are measured.

The best chips are set aside as higher-end products. They are sold not only as the fastest components, but also as low-voltage and ultra-low-voltage models.

Chips that are not included in the top processor group are often sold as processors with lower clock speeds. Additionally, lower-end quad-core processors may be sold as dual- or triple-core processors.

Processor performance

The sorting process determines the final speed, voltage and thermal characteristics. For example, on a standard substrate, only 5% of chips produced can operate at frequencies above 3.2 GHz. At the same time, 50% of chips can operate at 2.8 GHz.

Processor manufacturers are constantly investigating why most of their processors are running at 2.8 GHz instead of the required 3.2 GHz. Sometimes changes may be made to the processor design to increase performance.

Profitability of production

The profitability of producing processors and most other semiconductor devices lies in the range of 33-50%. This means that at least one third to one half of the chips on each wafer must be functional for the company to turn a profit.

Intel achieves a yield of 95% with its 45 nm technology on 300 mm wafers. This means that if 500 chips can be produced from a single wafer, 475 of them will be functional and only 25 will be discarded. The more chips that can be produced from one wafer, the more profit the company makes.
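
The arithmetic behind those figures is easy to verify (the 500 and 475 values are taken from the text above):

```python
# Yield check for the figures quoted above: 475 good chips out of 500 per wafer.
chips_per_wafer = 500
good_chips = 475
print(f"yield = {good_chips / chips_per_wafer:.0%}")    # 95%
print(f"discarded = {chips_per_wafer - good_chips}")    # 25
```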

Intel Technologies Used Today

History of the use of new Intel technologies for mass production of processors:

  • 1999 - 180 nm;
  • 2001 - 130 nm;
  • 2003 - 90 nm;
  • 2005 - 65 nm;
  • 2007 - 45 nm;
  • 2009 - 32 nm;
  • 2011 - 22 nm;
  • 2014 - 14 nm;
  • 2019 - 10 nm (planned).

At the beginning of 2018, Intel announced that it would postpone mass production of 10nm processors to 2019. The reason for this is the high production cost. At the moment, the company continues to ship 10nm processors in small volumes.

Let us characterize Intel processor production technologies from a cost point of view. The company's management explains the high cost by the long production cycle and the use of a large number of masks. The 10nm technology is based on deep ultraviolet lithography (DUV) using lasers operating at a wavelength of 193 nm.

The 7 nm process will use extreme ultraviolet (EUV) lithography with a light source operating at a wavelength of 13.5 nm. Thanks to this wavelength, it will be possible to avoid the multi-patterning that is widely used in the 10 nm process.

The company's engineers believe that for now it is better to refine the DUV technology rather than jump directly to the 7 nm process, so the ramp-up of 10 nm processors is being held back for the time being.

Prospects for microprocessor production at AMD

Intel's only real competitor in the processor manufacturing market today is AMD. Thanks to Intel's missteps with 10 nm technology, AMD has somewhat improved its market position: Intel's mass production on the 10 nm process is badly delayed. AMD is known to use third-party foundries to manufacture its chips, and a situation has now arisen in which AMD uses 7 nm production technology that is in no way inferior to its main competitor's.

The main third-party manufacturers of semiconductor devices with advanced logic processes are Taiwan Semiconductor Manufacturing Company (TSMC), US-based GlobalFoundries and Korea's Samsung Foundry.

AMD plans to use TSMC exclusively to produce its next-generation microprocessors on these new process technologies. The company has already released a number of products using the 7 nm process, including a 7 nm GPU; the first of these microprocessors is planned for release in 2019. Mass production of 5 nm chips is planned to begin in two years.

GlobalFoundries has abandoned 7 nm process development to focus on its 14/12 nm processes for customers targeting high-growth markets. AMD is making additional investments in GlobalFoundries to produce the current generation of AMD Ryzen, EPYC and Radeon processors.

Production of microprocessors in Russia

The main microelectronic production facilities are located in the cities of Zelenograd (Mikron, Angstrem) and Moscow (Crocus). Belarus also has its own microelectronic production - the Integral company, which uses the 0.35 micron technological process.

The production of processors in Russia is carried out by the companies MCST and Baikal Electronics. The latest development of MCST is the Elbrus-8S processor. This is an 8-core microprocessor with a clock frequency of 1.1-1.3 GHz. The performance of the Russian processor is 250 gigaflops (floating point operations per second). Company representatives state that in a number of indicators the processor can compete even with the industry leader, Intel.

Production will continue with the Elbrus-16 model with a frequency of 1.5 GHz (the numeric index in the name indicates the number of cores). Mass production of these microprocessors will take place in Taiwan, which should help reduce the price. As is known, the company's products are exorbitantly expensive, while the characteristics of the components lag significantly behind the leaders in this sector. For now, such processors will be used only in government organizations and for defense purposes. This line of processors will be made on a 28 nm process technology.

Baikal Electronics produces processors intended for industrial use, in particular the Baikal T1 model, whose applications include routers, CNC systems and office equipment. The company is not stopping there and is already developing a processor for personal computers, the "Baikal M". There is little information about its characteristics so far; it is known that it will have 8 CPU cores and support for up to 8 graphics cores. The advantage of this microprocessor will be its energy efficiency.

Modern microprocessors are the fastest and smartest chips in the world. They can perform up to 4 billion operations per second and are produced using many different technologies. Since the early 1990s, when processors entered mass use, they have gone through several stages of development. The peak of development of microprocessor structures based on the existing 6th-generation technologies came in 2002, when it became possible to exploit all the basic properties of silicon to obtain high frequencies with minimal losses in manufacturing and logic design. Now the efficiency of new processors is declining somewhat despite the constant increase in crystal frequencies, because silicon technologies are approaching the limit of their capabilities.

A microprocessor is an integrated circuit formed on a small silicon crystal. Silicon is used in microcircuits because it has semiconductor properties: its electrical conductivity is greater than that of dielectrics but less than that of metals. Silicon can be made either an insulator, preventing the movement of electrical charges, or a conductor, in which case charges pass through it freely. The conductivity of a semiconductor is controlled by introducing impurities.

The microprocessor contains millions of transistors connected to each other by thin conductors made of aluminum or copper, which are used to move data; these conductors form the internal buses. As a result, the microprocessor performs many functions, from mathematical and logical operations to controlling other chips and the entire computer.

The main parameters of a microprocessor are the crystal's clock frequency, which determines the number of operations per unit of time, the system bus frequency, and the amount of internal cache memory (SRAM). The processor is labeled according to the operating frequency of the crystal. That frequency is determined by how fast the transistors switch from the closed to the open state, and the transistors' ability to switch faster is determined by the production technology of the silicon wafers from which the chips are made. The dimension of the technological process determines the dimensions of the transistor (its thickness and gate length). For example, with the 90 nm process technology introduced in early 2004, the transistor feature size is 90 nm and the gate length is 50 nm.

All modern processors use field-effect transistors. The transition to a new technical process makes it possible to create transistors with higher switching frequencies, lower leakage currents, and smaller sizes. Reducing the size simultaneously reduces the chip area and therefore heat dissipation, and the thinner gate allows lower switching voltage to be supplied, which also reduces power consumption and heat dissipation.

The 90 nm node turned out to be quite a serious technological barrier for many chip makers. This is confirmed by TSMC, which produces chips for many market giants such as AMD, nVidia, ATI and VIA: for a long time it was unable to organize chip production on the 0.09-micron process, which led to a low yield of usable crystals. This is one of the reasons why AMD long delayed the release of its processors built with SOI (silicon-on-insulator) technology. At this feature size, all sorts of previously barely noticeable negative factors began to manifest themselves strongly, such as leakage currents, a large spread of parameters and an exponential increase in heat generation.

There are two kinds of leakage current: gate leakage and subthreshold leakage. The first is caused by the spontaneous movement of electrons between the silicon of the channel and the polysilicon gate; the second, by the spontaneous movement of electrons from the transistor's source to its drain. Both effects force the supply voltage to be increased in order to control the currents in the transistor, which hurts heat dissipation. When the transistor is shrunk, it is above all its gate and the silicon dioxide (SiO2) layer, the natural barrier between the gate and the channel, that are reduced.

On the one hand, this improves the transistor's speed (switching time), but on the other hand it increases leakage, so a kind of vicious circle arises. The move to 90 nm means another reduction in the thickness of the dioxide layer and, at the same time, more leakage. Fighting the leakage again means raising the control voltages and, accordingly, a significant increase in heat generation. All this delayed the introduction of the new process by the competitors in the microprocessor market, Intel and AMD.

One alternative solution is SOI (silicon-on-insulator) technology, which AMD recently introduced in its 64-bit processors. It cost the company a great deal of effort and the overcoming of many associated difficulties, but the technology itself provides a large number of advantages with relatively few disadvantages.

The essence of the technology is quite logical: the transistor is separated from the silicon substrate by another thin layer of insulator. The advantages are numerous. First, there is no uncontrolled movement of electrons under the transistor channel affecting its electrical characteristics. Second, after the unlocking current is applied to the gate, the time needed to ionize the channel to its operating state, before the operating current flows through it, is reduced; that is, the second key parameter of transistor performance, its on/off switching time, improves. Third, at the same speed the unlocking current can simply be lowered, or some compromise can be found between increasing speed and decreasing voltage. While keeping the same gate current, the gain in transistor performance can be up to 30%; if the frequency is kept the same and the focus is on energy saving, the gain can be even larger, up to 50%.

Finally, the channel characteristics become more predictable, and the transistor itself becomes more resistant to sporadic errors, such as those caused by cosmic particles hitting the channel substrate and unexpectedly ionizing it: now, landing in the substrate beneath the insulator layer, they do not affect the transistor's operation at all. The only drawback of SOI is that the depth of the emitter/collector region has to be reduced, which directly increases its resistance as the thickness decreases.

And finally, the third reason that contributed to the slowdown in frequency growth was the low activity of competitors in the market. One could say everyone was busy with their own business: AMD with the widespread introduction of 64-bit processors, while for Intel it was a period of refining the new process and debugging it to increase the yield of usable crystals.

So, the need to move to new process technologies is obvious, but each time it becomes harder for the technologists. The first Pentium microprocessors (1993) were produced on a 0.8 µm process, then 0.6 µm. In 1995 the 0.35 µm process was used for the first time for 6th-generation processors. In 1997 it changed to 0.25 µm, and in 1999 to 0.18 µm. Modern processors are built on 0.13 and 0.09 µm technologies, the latter introduced in 2004. As you can see, these process nodes follow Moore's law, which states that the number of transistors on a chip doubles roughly every two years. The process technology changes at the same pace, although in the future the "frequency race" may outstrip this law. By 2006 Intel plans to develop a 65 nm process technology, and a 32 nm one by 2009.

Here it is worth recalling the structure of the transistor, namely the thin layer of silicon dioxide, an insulator located between the gate and the channel, which performs a perfectly understandable function: it is a barrier for electrons that prevents gate current leakage.

Obviously, the thicker this layer is, the better it performs its insulating function, but it is an integral part of the channel, and it is no less obvious that if we are going to reduce the length of the channel (the size of the transistor), then its thickness has to be reduced too, and at quite a fast pace. Incidentally, over the past few decades the thickness of this layer has averaged about 1/45 of the channel length. But this process has an end: as Intel itself claimed five years ago, if SiO2 continues to be used as it has been over the past 30 years, the minimum layer thickness will be 2.3 nm, otherwise the gate leakage current becomes simply unrealistic.

Until recently, nothing was done to reduce sub-channel leakage, but now the situation is beginning to change, since the operating current, along with the gate response time, is one of the two main parameters characterizing the transistor's switching speed, and leakage in the off state directly affects it: to preserve the required transistor efficiency, the operating current has to be raised accordingly, with all the attendant consequences.

Manufacturing a microprocessor is a complex process that includes more than 300 stages. Microprocessors are formed on the surface of thin circular silicon wafers (substrates) as a result of a particular sequence of processing steps using chemicals, gases and ultraviolet radiation.

The substrates typically have a diameter of 200 millimeters, or 8 inches. However, Intel has already switched to wafers with a diameter of 300 mm, or 12 inches. The new wafers yield almost 4 times more crystals, and the yield is much higher. The wafers are made from silicon, which is purified, melted and grown into long cylindrical crystals. The crystals are then cut into thin slices and polished until their surfaces are mirror-smooth and free of defects. Then, in a cyclic manner, thermal oxidation (formation of an SiO2 film), photolithography, impurity diffusion (phosphorus) and epitaxy (layer growth) are performed.

During chip manufacturing, the thinnest layers of material are applied to the blank wafers in carefully calculated patterns. Up to several hundred microprocessors can fit on one wafer, and their manufacture requires more than 300 operations. The entire production process can be divided into several stages: growing the silicon dioxide and creating conductive regions, testing, packaging and delivery.

The microprocessor manufacturing process begins with "growing" an insulating layer of silicon dioxide on the surface of the polished wafer. This step is carried out in an electric furnace at a very high temperature. The thickness of the oxide layer depends on the temperature and on the time the wafer spends in the furnace.

Then follows photolithography, a process during which a pattern is formed on the surface of the wafer. First, a temporary layer of light-sensitive material, a photoresist, is applied to the wafer, onto which an image of the transparent sections of the template, or photomask, is projected using ultraviolet radiation. The masks are made during processor design and are used to form the circuit patterns in each layer of the processor. Under the influence of the radiation, the exposed areas of the photolayer become soluble and are removed with a solvent, revealing the silicon dioxide underneath.

The exposed silicon dioxide is removed by a process called "etching". The remaining photolayer is then stripped, leaving a pattern of silicon dioxide on the semiconductor wafer. A series of additional photolithography and etching operations also deposits polycrystalline silicon, which has the properties of a conductor, onto the wafer.

During the next operation, called "doping", the exposed areas of the silicon wafer are bombarded with ions of various chemical elements, which form negative and positive charges in the silicon, changing the electrical conductivity of those areas.

The addition of new layers with subsequent etching of the circuit is carried out several times; for the interlayer connections, "windows" are left in the layers and filled with metal, forming electrical connections between the layers. Intel used copper conductors in its 0.13-micron process technology. In the 0.18-micron process and earlier generations, Intel used aluminum. Both copper and aluminum are excellent conductors of electricity. With the 0.18-µm process, 6 interconnect layers were used; with the introduction of the 90 nm process in 2004, 7 layers were used.

Each layer of the processor has its own pattern; together, all these layers form a three-dimensional electronic circuit. The application of layers is repeated 20 - 25 times over several weeks.

To withstand the stress that the substrates are subjected to during the layering process, the silicon wafers must initially be thick enough. Therefore, before cutting the wafer into individual microprocessors, its thickness is reduced by 33% using special processes and contaminants are removed from the reverse side. Then, a layer of special material is applied to the back side of the “thinner” plate, which improves the subsequent attachment of the crystal to the body. In addition, this layer provides electrical contact between the back surface of the integrated circuit and the package after assembly.

After this, the wafers are tested to check the quality of all machining operations. To determine whether processors are working correctly, individual components are tested. If faults are detected, data about them is analyzed to understand at what stage of processing the failure occurred.

Electrical probes are then connected to each processor and power is applied. Processors are tested by a computer, which determines whether the characteristics of the manufactured processors meet specified requirements.

After testing, the wafers are sent to the assembly facility, where they are cut into small rectangles, each of which contains an integrated circuit. A special precision saw is used to separate the plate. Non-functional crystals are rejected.

Each crystal is then placed in an individual case. The case protects the crystal from external influences and provides its electrical connection to the board on which it will subsequently be installed. Tiny balls of solder, located at specific points on the chip, are soldered to the electrical terminals of the package. Now electrical signals can flow from the board to the chip and back.

In future processors, Intel will apply its BBUL technology, which will make it possible to create fundamentally new packages with lower heat generation and lower capacitance between the CPU pins.

After the chip is installed in the case, the processor is tested again to determine whether it is functional. Faulty processors are rejected, and working ones are subjected to load tests: exposure to various temperature and humidity conditions, as well as electrostatic discharges. After each load test, the processor is tested to determine its functional status. Processors are then sorted based on their behavior at different clock speeds and supply voltages.

Processors that have passed testing are sent to final control, whose task is to confirm that the results of all previous tests were correct, and the parameters of the integrated circuit meet or even exceed established standards. All processors that pass final inspection are marked and packaged for delivery to customers.

For as long as I can remember, I have always dreamed of making a processor. Finally, yesterday I made one. Nothing special: 8 bits, RISC, current operating frequency 4 kHz, but it works. So far only in a logic-circuit simulator, but we all know: "today in the model, tomorrow in reality!"

Below the cut are several animations, a brief introduction to binary logic for the little ones, a short story about the main processor logic chips and, in fact, the circuit diagram.

Binary logic

The binary number system (for those not in the know) is a number system in which there are no digits greater than one. This definition confuses many people until they remember that in the decimal number system there are no digits greater than nine.

The binary system is used in computers because numbers in it are easy to encode with voltage: voltage present means one, no voltage means zero. Additionally, "zero" and "one" can easily be understood as "false" and "true." Moreover, most devices operating in the binary number system treat numbers as arrays of "true" and "false" values, that is, they operate on numbers as logical quantities. For beginners and those not in the know, I will show and tell how the simplest elements of binary logic work.

Buffer element

Imagine that you are sitting in your room and your friend is in the kitchen. You shout to him: "Friend, tell me, is the light on in the corridor?" The friend replies: "Yes, it's on!" or "No, it's not on." Your friend is a buffer between the signal source (the light bulb in the hallway) and the receiver (you). Moreover, he is not just an ordinary buffer but a controlled one: he would be an ordinary buffer if he constantly shouted "The light is on" or "The light is off" without being asked.

Element “Not” - NOT

Now imagine that your friend is a joker who always tells lies. And if the light in the corridor is on, then he will tell you, “No, it’s very, very dark in the corridor,” and if it’s not on, then “Yes, the light is on in the corridor.” If you actually have such a friend, then he is the embodiment of the element “Not”.

“Or” element - OR

Unfortunately, one light bulb and one friend are not enough to explain the essence of the “Or” element. You need two light bulbs. So, you have two light bulbs in the hallway - a floor lamp, for example, and a chandelier. You shout: “Friend, tell me, is at least one light bulb in the corridor shining?”, and your friend answers “Yes” or “No.” Obviously, to answer “No” all the lights must be turned off.

Element “AND” - AND

The same apartment, you, a friend in the kitchen, a floor lamp and a chandelier in the hallway. To your question "Are both lights on in the corridor?" you get a "Yes" or "No" answer. Congratulations, your friend is now the "AND" element.

Exclusive Or Element - XOR

Let's repeat the experiment from the "Or" element, but reformulate our question to the friend: "Friend, tell me, is exactly one light bulb lit in the corridor?" An honest friend will answer "Yes" only if exactly one light bulb in the corridor is actually lit.
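
For readers who prefer code to light bulbs, the same "friends" can be written as tiny truth functions over 0 and 1 (a sketch only; in real hardware these are transistor circuits, not Python):

```python
# The "friend" analogies above, written as truth functions over 0/1.
def buf(a):      return a          # controlled buffer: repeats what it sees
def not_(a):     return 1 - a      # the joker who always lies
def or_(a, b):   return 1 if (a or b) else 0    # "is at least one lamp on?"
def and_(a, b):  return 1 if (a and b) else 0   # "are both lamps on?"
def xor(a, b):   return (a + b) % 2             # "is exactly one lamp on?"

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", or_(a, b), and_(a, b), xor(a, b))
```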

Adders

Quarter adder

The “Exclusive Or” element is called a quarter adder. Why? Let's figure it out.
Let's create an addition table for two numbers in the binary number system:
0+0= 0
0+1= 1
1+0= 1
1+1= 10

Now let’s write down the truth table of the “Exclusive Or” element. To do this, we denote the glowing light bulb as 1, the extinguished light bulb as 0, and the friend’s answers “Yes”/“No” as 1 and 0, respectively.
0 XOR 0 = 0
0 XOR 1 = 1
1 XOR 0 = 1
1 XOR 1 = 0

Very similar, isn't it? The addition table and the truth table of “Exclusive Or” coincide completely, except for one single case. And this case is called "Overflow".

Half adder

When an overflow occurs, the result of the addition no longer fits into the same number of digits as the addends did. The addends are two one-digit numbers (one significant digit each, see?), while the sum is already a two-digit number (two significant digits). It is no longer possible to convey two digits with one light bulb ("On"/"Off"); you need two light bulbs. We need them, so we'll make them!

In addition to XOR, we need an AND element for the adder.
0 XOR 0 = 0 0 AND 0 = 0
0 XOR 1 = 1 0 AND 1 = 0
1 XOR 0 = 1 1 AND 0 = 0
1 XOR 1 = 0 1 AND 1 = 1

Tadam!
0+0= 00
0+1= 01
1+0= 01
1+1= 10

Our wunderwaffe half adder works. It can be considered the simplest specialized processor that adds two numbers. A half adder is called a half adder because it cannot take a carry (the result of another adder) into account, that is, it cannot add three one-bit binary numbers. For this reason, it is impossible to build a multi-bit adder from single-bit half adders alone.

I won't go into detail about how full and multi-bit adders work; I just hope you get the basic idea.
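
For the curious, here is the same idea in code: a half adder built from XOR and AND, a full adder built from two half adders and an OR, and a ripple-carry chain that adds multi-bit numbers (a sketch of the principle, not a gate-accurate model):

```python
def half_adder(a, b):
    """Returns (sum, carry) for two one-bit numbers: XOR gives the sum, AND the carry."""
    return a ^ b, a & b

def full_adder(a, b, carry_in):
    """Two half adders plus an OR also take the carry from the previous digit into account."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2

def ripple_add(a_bits, b_bits):
    """Add two little-endian bit lists by chaining full adders."""
    carry, result = 0, []
    for a, b in zip(a_bits, b_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return result + [carry]

# 3 + 1 = 4: [1, 1, 0] is 3 and [1, 0, 0] is 1 (least significant bit first)
print(ripple_add([1, 1, 0], [1, 0, 0]))   # [0, 0, 1, 0], i.e. binary 100 = 4
```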

More complex elements

Multiplexer

I suggest using your imagination again. So imagine this. You live in a private single-family house, and near its door there is a mailbox. Going out for a walk, you notice a strange postman standing next to that mailbox. He takes a pile of letters out of his bag, reads the number on each envelope and, depending on that number, drops one letter or another into the box. The postman works like a multiplexer: something (the number on the envelope) determines which signal (letter) is sent along the signal line (mailbox).

Multiplexers usually consist only of combinations of elements “And”, “Or” and “Not”. A single-bit multiplexer has one input called “address selection,” two inputs with the general name “input signal,” and one output, which is called “output signal.”

When 0 is applied to the "address select" input, the "output signal" becomes the same as the first "input signal". Accordingly, when 1 is applied to the select, the "output signal" becomes equal to the second "input signal".
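
As a sketch, here is how that one-bit multiplexer can be assembled from nothing but "And", "Or" and "Not":

```python
def not_(a): return 1 - a
def and_(a, b): return a & b
def or_(a, b): return a | b

def mux(select, in0, in1):
    """One-bit 2-to-1 multiplexer: passes in0 when select is 0 and in1 when select is 1."""
    return or_(and_(not_(select), in0), and_(select, in1))

print(mux(0, in0=1, in1=0))   # 1 - the first input signal reaches the output
print(mux(1, in0=1, in1=0))   # 0 - the second input signal reaches the output
```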

Demultiplexer

This thing works exactly the other way around: we give the address to the "address select" input and the data to the "data input", and the data from the input appears at the output whose number equals the address.

Counter

To understand how the counter works, you will again need your friend. Call him in from the kitchen (I hope he wasn't too bored there and, most importantly, didn't eat all your food) and ask him to do the following: let him remember the number 0. Every time you touch him, he should add one to the number he remembers, say the result and remember it. When the result reaches (let's say) 3, he should shout "Abracadabra!" and, the next time you touch him, reply that he now remembers the number 0. A little confusing? See:

You touch a friend. Friend says "One".
You touch a friend. The friend says “Two.”
You touch a friend. The friend says "Three". The friend shouts "Habrahabr!" Critical hit! You are temporarily paralyzed and cannot move.
You touch a friend. Friend says "Zero".

Well, and so on. Very simple, right?
You, of course, have realized that your friend is now a counter. Touching your friend can be considered a "clock signal", or simply put, a signal to continue counting. The cry of "Abracadabra" indicates that the value stored in the counter has reached its maximum and that the next clock signal will reset the counter to zero. There are two differences between a binary counter and your friend. First, a real binary counter outputs the stored value in binary form. Second, it always does only what it is told and never stoops to silly jokes that could disrupt the operation of the entire processor system.
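
A sketch of the same counter in code (the overflow flag plays the role of the "Abracadabra!" shout):

```python
class Counter:
    """Counts 0..3 like the friend in the story, raising overflow on reaching the maximum."""
    def __init__(self, maximum=3):
        self.value = 0
        self.maximum = maximum

    def clock(self):
        """One touch = one clock pulse."""
        if self.value == self.maximum:
            self.value = 0                     # the touch after the shout resets to zero
            return self.value, False
        self.value += 1
        overflow = self.value == self.maximum  # "Abracadabra!"
        return self.value, overflow

counter = Counter()
for _ in range(5):
    print(counter.clock())   # (1, False) (2, False) (3, True) (0, False) (1, False)
```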

Memory

Trigger

Let's continue to mock your unfortunate (perhaps even imaginary) friend. Let him now remember the number zero. When you touch his left hand, he should remember the number zero, and when you touch his right hand, he should remember the number one. When asked “What number do you remember?” a friend must always answer with the number he remembered - zero or one.
The simplest memory cell is an RS flip-flop (also called a trigger, i.e. a switch). An RS flip-flop can store one bit of data ("zero"/"one") and has two inputs. The Set input (your friend's right hand) writes a "one" to the flip-flop, and the Reset input (the left hand, respectively) writes a "zero".
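
A minimal sketch of an RS flip-flop as a little state machine (the hands from the story become the Set and Reset inputs; how a real flip-flop is wired from gates is not shown here):

```python
class RSFlipFlop:
    """Stores one bit: Set writes a one, Reset writes a zero, otherwise the bit is held."""
    def __init__(self):
        self.q = 0

    def update(self, set_, reset):
        # The forbidden combination set_ = reset = 1 is simply ignored in this sketch.
        if set_ and not reset:
            self.q = 1
        elif reset and not set_:
            self.q = 0
        return self.q

ff = RSFlipFlop()
print(ff.update(set_=1, reset=0))   # 1 - "touch the right hand"
print(ff.update(set_=0, reset=0))   # 1 - nothing touched, the bit is remembered
print(ff.update(set_=0, reset=1))   # 0 - "touch the left hand"
```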

Register

The register is a little more complicated. Your friend turns into a register when you ask him to remember something, and then you say, “Hey, remind me what I told you to remember?” and your friend answers correctly.

A register can usually store more than one bit. It necessarily has a data input, a data output and a write-enable input. From the data output you can read at any time what is written in the register. You can put whatever you want to write on the data input, and keep doing so until you get bored: nothing will be written to the register until a "logical one" is applied to the write-enable input.
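
In code (a sketch), a register boils down to a stored value plus a write-enable check:

```python
class Register:
    """Holds a value; a new value is latched only while write_enable is 1."""
    def __init__(self, width=8):
        self.value = 0
        self.width = width

    def clock(self, data_in, write_enable):
        if write_enable:
            self.value = data_in & ((1 << self.width) - 1)
        return self.value   # the data output can be read at any time

reg = Register()
reg.clock(data_in=42, write_enable=0)
print(reg.value)   # 0 - nothing was written
reg.clock(data_in=42, write_enable=1)
print(reg.value)   # 42
```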

Shift register

Have you ever stood in line? They probably were. So you can imagine what it's like to be data in a shift register. People come and stand at the end of the line. The first person in line enters the office of the big shot. The one who was second in line becomes first, and the one who was third is now second, and so on. A queue is such a tricky shift register from which “data” (well, that is, people) can run away on business, having previously warned the neighbors in the queue. In a real shift register, of course, “data” cannot escape from the queue.

So, a shift register has a data input (through which data enters the “queue”) and a data output (from which the very first record in the “queue” can be read). The shift register also has a “shift register” input. As soon as a “logical one” arrives at this input, the entire queue is shifted.

There is one important difference between a queue and a shift register: if the shift register is designed for four entries (for example, four bytes), then the first entry will reach the register's output only after four pulses on the "shift register" input.
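
A sketch of a four-stage shift register that matches the description above:

```python
class ShiftRegister:
    """Four-stage shift register: each pulse on the shift input moves the queue by one."""
    def __init__(self, stages=4):
        self.stages = [0] * stages

    @property
    def output(self):
        return self.stages[-1]   # the data output: the oldest entry in the "queue"

    def shift(self, data_in):
        self.stages = [data_in] + self.stages[:-1]

sr = ShiftRegister()
for value in (1, 2, 3, 4):
    sr.shift(value)
print(sr.output)   # 1 - the first entry reaches the output after four shift pulses
```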

RAM

If many, many flip-flops are combined into registers, and many, many registers are combined on one chip, you get a RAM chip. A memory chip usually has an address input, a bidirectional data line (that is, it can be both written to and read from) and a write-enable input. We apply some number to the address input, and that number selects a specific memory cell. After this, we can read at the data input/output what is written in that cell.
Now let's simultaneously apply to the data input/output what we want to write to the cell, and a "logical one" to the write-enable input. The result is a bit predictable, isn't it?
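
A sketch of that address / data / write-enable interface:

```python
class RAM:
    """Tiny RAM model: an address selects a cell, writes happen only with write_enable = 1."""
    def __init__(self, size=16):
        self.cells = [0] * size

    def access(self, address, data_in=0, write_enable=0):
        if write_enable:
            self.cells[address] = data_in
        return self.cells[address]   # a read returns whatever the cell holds

ram = RAM()
ram.access(address=3, data_in=99, write_enable=1)
print(ram.access(address=3))   # 99 - the result is a bit predictable, isn't it?
```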

CPU

BitBitJump

Processors are sometimes divided into CISC, those that can execute many different commands, and RISC, those that can execute few commands but execute them well. One fine evening I thought: it would be great to make a full-fledged processor that executes just one command. I soon learned that there is a whole class of single-instruction processors, OISC; most often they use the Subleq instruction (subtract, and if less than or equal to zero, jump) or Subeq (subtract, and if equal to zero, jump). While studying various OISC designs, I found the website of Oleg Mazonka, who developed the simplest single-command language, BitBitJump. The only command in this language is called BitBitJump (copy a bit and jump to an address). This decidedly esoteric language is Turing-complete, that is, any computer algorithm can be implemented in it.

A detailed description of BitBitJump and the assembler for this language can be found on the developer's website. To describe the processor operation algorithm, it is enough to know the following:

1. When the processor is turned on, 0 is written to the PC, A and B registers
2. Read the memory cell at the PC address and save what we read into register A
3. Increment the PC
4. Read the memory cell at the PC address and save what we read into register B
5. Increment the PC
6. Copy the contents of the bit at address A into the bit whose address is stored in register B
7. Read the memory cell at the PC address and save what we read into register B
8. Write the contents of register B into the PC register
9. Go back to step 2 of our plan
10. PROFIT!!!

Unfortunately, the algorithm is endless, and therefore PROFIT will not be achieved.
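
Based on that description, a minimal interpreter can be sketched; the 8-bit word size and the single-instruction demo program below are assumptions made for illustration, not Oleg Mazonka's reference implementation:

```python
# A sketch of the cycle described above: bit-addressable memory, word-sized operands.

WORD = 8                # width of the PC and of the A/B operands, in bits (my assumption)
mem = [0] * 256         # memory as individual bits

def read_word(addr):
    """Read WORD bits starting at bit address addr and return them as an integer."""
    return sum(mem[addr + i] << i for i in range(WORD))

def write_word(addr, value):
    for i in range(WORD):
        mem[addr + i] = (value >> i) & 1

def step(pc):
    a = read_word(pc)        # 2. read operand A
    pc += WORD               # 3. advance the PC
    b = read_word(pc)        # 4. read operand B
    pc += WORD               # 5. advance the PC
    mem[b] = mem[a]          # 6. copy one bit from address A to address B
    return read_word(pc)     # 7-8. read the jump target and load it into the PC

# One instruction at bit address 0: copy the bit at address 24 to address 25,
# then jump back to address 0 - an endless loop, as noted above.
write_word(0, 24)    # operand A
write_word(8, 25)    # operand B
write_word(16, 0)    # jump target
mem[24] = 1          # the data bit to be copied

pc = 0
for _ in range(3):   # run a few cycles of the endless loop
    pc = step(pc)

print(mem[25])       # 1 - the bit has been copied
```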

Actually, the scheme

The circuit was built spontaneously, so fear, horror and chaos rule the roost. However, it works, and works well. To turn on the processor, you need to:

1. Enter the program into RAM
2. Press the switch
3. Set the counter to position 4 (this can be done in hardware, but the circuit would become even more cumbersome)
4. Enable clock generator

As you can see, the design uses one register, one shift register, one RAM chip, two binary counters, one demultiplexer (implemented with comparators), two multiplexers and a bit of plain logic.