A modern processor takes the form of a small rectangular silicon die. The die itself is protected by a special package made of plastic or ceramic, which shields all the main circuits that carry out the CPU's work. If the outside looks extremely simple, what about the circuitry itself and how the processor is organized? Let's look at this in more detail.

The CPU is built from a number of different elements. Each performs its own role in moving data and control signals. Ordinary users tend to distinguish processors by clock speed, cache size and core count, but that is far from everything that makes a CPU fast and reliable. Each component deserves attention in its own right.

Architecture

Internal CPU designs differ from one another; each family has its own set of properties and functions, and this is called its architecture. You can see an example of a processor design in the image below.

But many people use "processor architecture" in a slightly different sense. From a programming point of view, the architecture is defined by the instruction set the processor is able to execute. If you buy a modern CPU, it most likely belongs to the x86 architecture.

Cores

The main part of the CPU is the core: it contains all the essential blocks and carries out the logical and arithmetic work. The figure below shows what each functional block of the core does (and a toy simulation of the whole cycle follows the list):

  1. Instruction fetch unit. It reads instructions at the address held in the program counter. How many commands can be read at once depends on the number of decode units installed, so that every cycle is loaded with as many instructions as possible.
  2. Branch predictor. It keeps the instruction fetch unit working optimally: it guesses which sequence of commands will execute next and keeps the core's pipeline loaded.
  3. Decode unit. This part of the core translates fetched instructions into the operations the core must perform. Decoding is a genuinely hard task because instructions vary in length. The newest processors have several such units in a single core.
  4. Data fetch units. They pull the data an instruction needs at that moment from RAM or from cache memory.
  5. Control unit. The name speaks for itself: it is the core's chief component, distributing work among all the other blocks and helping each action complete on time.
  6. Result store unit. It writes the result to RAM after an instruction has been processed; the destination address is specified by the running task.
  7. Interrupt handling. Thanks to interrupts the CPU can juggle several tasks at once: it can suspend one program's progress and switch to another instruction stream.
  8. Registers. Temporary instruction results are stored here; think of it as a tiny, very fast RAM. Its size usually does not exceed a few hundred bytes.
  9. Program counter. It holds the address of the instruction to be executed on the next processor cycle.
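
To make the interaction of these blocks concrete, here is a minimal sketch in C of a toy core: a program counter, an instruction fetch, a decode step and an execute step. The two-instruction "ISA" and all the names are invented for illustration; a real core does all this in hardware, several instructions per cycle.

```c
#include <stdio.h>
#include <stdint.h>

/* A toy "core": the 2-instruction ISA below is invented for illustration. */
enum { OP_ADD = 0x01, OP_HALT = 0xFF };

int main(void) {
    uint8_t memory[] = { OP_ADD, 5, OP_ADD, 7, OP_HALT }; /* program in "RAM" */
    uint32_t pc = 0;   /* program counter: address of the next instruction */
    uint32_t acc = 0;  /* a register holding a temporary result */
    int running = 1;

    while (running) {
        uint8_t opcode = memory[pc++];      /* 1. instruction fetch */
        switch (opcode) {                   /* 2. decode */
        case OP_ADD:                        /* 3. execute + data fetch */
            acc += memory[pc++];            /*    operand comes from "memory" */
            break;
        case OP_HALT:
            running = 0;                    /* 4. result stays in the register */
            break;
        }
    }
    printf("result: %u\n", acc);            /* prints 12 */
    return 0;
}
```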

System bus

The devices inside a PC communicate with the CPU over the system bus. Only the processor is connected to it directly; the remaining components attach through various controllers. The bus itself consists of many signal lines over which information is transmitted, each with its own protocol, which lets the controllers talk to the other connected computer components. The bus has its own frequency: the higher it is, the faster information is exchanged between the connected parts of the system.

Cache

The speed of the CPU depends on how quickly it can fetch instructions and data from memory. Cache memory shortens operations by acting as a temporary buffer between the CPU and RAM, delivering data almost instantly in either direction.

The main characteristic of cache memory is its division into levels: the higher the level, the larger and slower the memory. Level 1 is the smallest and fastest. The principle is simple: the CPU reads data from RAM into a cache level, evicting information that has not been accessed for a long time. If the processor needs that data again, it gets it much faster from this temporary buffer.
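
Here is a minimal sketch of that principle: a toy direct-mapped cache in C with invented sizes, where a hit skips the slow RAM access and a miss fetches the value and evicts whatever the cache line held before.

```c
#include <stdio.h>

/* Toy direct-mapped cache: 4 lines, one value per line. Sizes are invented. */
#define LINES 4

typedef struct { int valid; unsigned tag; int value; } CacheLine;

int slow_ram[64];        /* stands in for RAM */
CacheLine cache[LINES];  /* stands in for L1 */

int read_word(unsigned addr) {
    unsigned idx = addr % LINES;   /* which cache line the address maps to */
    unsigned tag = addr / LINES;
    if (cache[idx].valid && cache[idx].tag == tag) {
        printf("addr %u: cache hit\n", addr);
        return cache[idx].value;   /* fast path: no RAM access */
    }
    printf("addr %u: cache miss, fetching from RAM\n", addr);
    cache[idx].valid = 1;          /* evict whatever was here before */
    cache[idx].tag = tag;
    cache[idx].value = slow_ram[addr];
    return cache[idx].value;
}

int main(void) {
    for (int i = 0; i < 64; i++) slow_ram[i] = i * i;
    read_word(3);   /* miss: first access */
    read_word(3);   /* hit: the data is now cached */
    read_word(7);   /* miss: maps to the same line, evicts addr 3 */
    read_word(3);   /* miss again: it was evicted */
    return 0;
}
```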

Socket (connector)

Because the processor sits in its own socket (or slot), you can easily replace it if it fails, or upgrade your computer. Without a socket, the CPU would simply be soldered to the motherboard, making later repair or replacement difficult. Note that each socket is designed for certain processors only.

Often, users inadvertently buy an incompatible processor and motherboard, which causes additional problems.

It is hard to surprise the modern electronics consumer. We are already used to a smartphone rightfully occupying our pocket, a laptop sitting in our bag, a "smart" watch obediently counting steps on our wrist, and headphones with active noise cancellation caressing our ears.

Funny enough, we are used to carrying not one but two, three or more computers at once - for that is what you can call any device with a CPU. And it does not matter what the particular device looks like: a miniature chip that has come through a turbulent and rapid path of development is responsible for its work.

Why bring up processors at all? It's simple. Over the past ten years there has been a real revolution in the world of mobile devices.

Only 10 years separate these devices. But back then the Nokia N95 seemed to us like space technology, while today we look at ARKit with a certain distrust.

But everything could have turned out differently, and the battered Pentium IV would have remained the ultimate dream of an ordinary buyer.

We have tried to do without complicated technical terms, to tell how the processor works, and to find out which architecture is the future.

1. How it all started

The first processors were completely different from what you can see when you open the lid of your PC system unit.

In the 1940s, instead of microchips there were electromechanical relays supplemented with vacuum tubes. A tube acted as a diode whose state could be controlled by lowering or raising the voltage in the circuit. The structures looked like this:

A single gigantic computer needed hundreds, sometimes thousands of vacuum tubes to operate. Yet even so, on such a computer you could not run even a simple editor like Notepad or TextEdit from the standard set of Windows and macOS - the machine simply would not have enough power.

2. The advent of transistors

The first field-effect transistors appeared as early as 1928. But the world changed only after the arrival of so-called bipolar transistors, discovered in 1947.

In the late 1940s, experimental physicist Walter Brattain and theorist John Bardeen developed the first point-contact transistor. In 1950 it was superseded by the first junction transistor, and in 1954 the well-known manufacturer Texas Instruments announced a silicon transistor.

But the real revolution came in 1959, when the scientist Jean Hoerni developed the first silicon planar (flat) transistor, which became the basis for monolithic integrated circuits.

Yes, it's a bit tricky, so let's dig a little deeper and deal with the theoretical part.

3. How a transistor works

So, the job of the electrical component called a transistor is to control current. Simply put, this clever little switch governs the flow of electricity.

The main advantage of a transistor over an ordinary switch is that it needs no human to operate it. In other words, it can control current by itself, and it switches far faster than you could open or close a circuit by hand.

From a school computer science course you probably remember that a computer "understands" human language through combinations of just two states: "on" and "off". In the machine's understanding, these are the states "0" and "1".

The task of the computer is to represent electricity in the form of numbers.

And if earlier the task of switching states was performed by clumsy, bulky and inefficient electrical relays, now the transistor has taken over this routine work.

From the beginning of the 60s, transistors began to be made from silicon, which made it possible not only to make processors more compact, but also to significantly increase their reliability.

But first, let's deal with the diode

Silicon (Si, "silicium" in the periodic table) belongs to the category of semiconductors: on the one hand it conducts current better than a dielectric, on the other hand it does so worse than a metal.

Like it or not, to understand how processors work and how their development continued, we will have to dive into the structure of a single silicon atom. Don't be afraid: we will make it short and very clear.

The job of the transistor is to amplify a weak signal using an additional power supply.

The silicon atom has four valence electrons, thanks to which it forms bonds (covalent bonds, to be precise) with four identical atoms nearby, forming a crystal lattice. While most of the electrons are locked in bonds, a small fraction can move through the lattice. It is because of this partial movement of electrons that silicon is classed as a semiconductor.

But such a weak movement of electrons would not allow a transistor to be used in practice, so scientists decided to raise transistor performance by doping - simply put, adding atoms of elements with a characteristic electron arrangement to silicon's crystal lattice.

Thus a pentavalent phosphorus impurity came into use, yielding n-type material. The presence of an extra electron made it possible to speed up electron movement, increasing the flow of current.

For p-type doping, boron, with three valence electrons, became that catalyst. Because one electron is missing, holes appear in the crystal lattice (they play the role of a positive charge), but since electrons can fill those holes, the conductivity of silicon rises significantly.

Suppose we take a silicon wafer and dope one part of it with a p-type impurity and the other with an n-type impurity. We have just made a diode - the basic element of the transistor.

Now the electrons in the n-part tend to migrate into the holes in the p-part. As this happens, the n-side acquires a slight positive charge and the p-side a negative one. The electric field that forms as a result - the potential barrier - prevents any further movement of electrons.

If you connect a power source to this diode so that "-" touches the p-side of the wafer and "+" touches the n-side, no current can flow: the holes are drawn toward the source's negative contact and the electrons toward the positive one, the depletion layer widens, and the link between the p and n charge carriers is lost.

But if you connect a supply of sufficient voltage the other way round - "+" to the p-side and "-" to the n-side - the electrons on the n-side are repelled by the negative pole and pushed toward the p-side, where they occupy holes in the p-region.

Now the electrons are attracted to the positive pole of the power source and keep moving through the p-holes. This is called a forward-biased diode.

diode + diode = transistor

A transistor can be thought of as two diodes joined back to back. The p-region (the one where the holes live) is shared between them and is called the base.

An N-P-N transistor has two n-regions with extra electrons - the emitter and the collector - and one weakly doped region with holes, the p-region, called the base.

If you connect a power supply (call it V1) across the transistor's n-regions (whichever way round), one of the two diodes will be reverse-biased and the transistor will stay closed.

But as soon as we connect another power source (call it V2), with its "+" contact on the "central" p-region (the base) and its "-" contact on the n-region (the emitter), part of the electrons will flow through the newly formed circuit (V2) while the rest are attracted by the positively biased n-region. As a result, electrons flow into the collector region, and the weak current is amplified.

Exhale!

4. So how does a computer actually work?

And now for the most important part.

Depending on the applied voltage, the transistor is either open or closed. If the voltage is too low to overcome the potential barrier (the one at the junction of the p and n wafers), the transistor stays closed - "off", or, in the language of binary, "0".

With enough voltage, the transistor turns on, and we get the value "on" or "1" in binary.

This state, 0 or 1, is called a "bit" in the computer industry.

Thus we get the key property of the very switch that opened mankind's path to the computer!
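
In code, that switch behaves like a one-line threshold function. The 0.7 V figure below is the textbook barrier of a silicon p-n junction, used here purely as an illustrative threshold.

```c
#include <stdio.h>

/* Model a transistor as a threshold switch: the 0.7 V barrier of a
   silicon p-n junction is used here only as an illustrative threshold. */
int bit_from_voltage(double volts) {
    return volts >= 0.7 ? 1 : 0;   /* enough voltage: "on" (1), else "off" (0) */
}

int main(void) {
    printf("%d\n", bit_from_voltage(0.2));  /* 0: barrier not overcome */
    printf("%d\n", bit_from_voltage(1.5));  /* 1: the transistor conducts */
    return 0;
}
```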

The first electronic digital computer, ENIAC - more simply, the first computer - used about 18 thousand triode tubes. It was the size of a tennis court and weighed 30 tons.

To understand how the processor works, there are two more key points to grasp.

Point 1. So, we know what a bit is. But by itself it can express only two characteristics of something: "yes" or "no". So that the computer could understand us better, a combination of 8 bits (0s and 1s) was devised, called a byte.

A byte can encode a number from zero to 255 - 256 different combinations of zeros and ones - and with those combinations you can encode anything.
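
A quick sketch of that arithmetic: eight bits give 2^8 = 256 combinations, so one byte holds any value from 0 to 255.

```c
#include <stdio.h>

int main(void) {
    /* Assemble the byte 01000001 bit by bit: each position is a power of two. */
    int bits[8] = {0, 1, 0, 0, 0, 0, 0, 1};   /* most significant bit first */
    int value = 0;
    for (int i = 0; i < 8; i++)
        value = value * 2 + bits[i];           /* shift left, add the next bit */
    printf("01000001 = %d\n", value);          /* prints 65 */
    printf("combinations per byte: %d\n", 1 << 8);  /* 256: values 0..255 */
    return 0;
}
```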

Point 2. Numbers and letters without any logic would get us nowhere. That is why the concept of logical operators appeared.

By connecting just two transistors in a certain way, you can get several logical actions at once: "and", "or". The combination of the voltage on each transistor and the way they are wired yields different combinations of zeros and ones.
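
A sketch of that idea in C: treat each transistor as a switch that either passes current (1) or blocks it (0). Two switches wired in series behave like "and"; two in parallel behave like "or". The functions below model the wiring, not any particular circuit.

```c
#include <stdio.h>

/* Two transistor-switches in series: current flows only if both conduct. */
int gate_and(int a, int b) { return a && b; }

/* Two transistor-switches in parallel: current flows if either conducts. */
int gate_or(int a, int b) { return a || b; }

int main(void) {
    /* Print the full truth table for both gates. */
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf("a=%d b=%d  AND=%d  OR=%d\n",
                   a, b, gate_and(a, b), gate_or(a, b));
    return 0;
}
```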

Through the efforts of programmers, the zeros and ones of the binary system are translated into decimal so that we can understand what exactly the computer "says". And to enter commands, our usual actions, such as typing letters on a keyboard, are represented as binary chains of commands.

Simply put, imagine a lookup table, say ASCII, in which every letter corresponds to a combination of 0s and 1s. You press a key on the keyboard, and at that moment the processor's transistors, driven by the program, switch so that the very letter printed on the key appears on the screen.
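
For instance, in ASCII the letter "A" is code 65 - exactly the byte 01000001 assembled above. A tiny sketch:

```c
#include <stdio.h>

int main(void) {
    char key = 'A';                     /* the key you pressed */
    printf("'%c' is stored as %d\n", key, key);   /* 'A' is stored as 65 */
    /* Print the 8 bits the hardware actually holds, most significant first. */
    for (int i = 7; i >= 0; i--)
        putchar((key >> i & 1) ? '1' : '0');      /* prints 01000001 */
    putchar('\n');
    return 0;
}
```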

This is a rather primitive explanation of how the processor and the computer work, but it is this understanding that allows us to move on.

5. And the transistor race began

After the British radio engineer Geoffrey Dummer proposed in 1952 placing the simplest electronic components in a monolithic semiconductor crystal, the computer industry took a leap forward.

From the integrated circuits Dummer proposed, engineers quickly moved to transistor-based microchips; several such chips, in turn, made up a CPU.

Of course, the dimensions of those processors bear little resemblance to modern ones. Besides, until 1964 all processors shared one problem: they required an individual approach - their own programming language for each processor.

  • 1964. IBM System/360, a computer with universally compatible program code: an instruction set written for one processor model could be used on another.
  • 1970s. The first microprocessors appear. The single-chip Intel 4004: 10 µm process, 2,300 transistors, 740 kHz.
  • 1973. Intel 4040 and Intel 8008: 3,000 transistors at 740 kHz for the 4040 and 3,500 transistors at 500 kHz for the 8008.
  • 1974. Intel 8080: 6 µm process and 6,000 transistors, clocked at about 2 MHz. This processor was used in the Altair-8800 computer. The domestic copy of the Intel 8080 was the KR580VM80A, developed by the Kyiv Research Institute of Microdevices. 8-bit.
  • 1976. Intel 8085: 3 µm process and 6,500 transistors, clocked at 6 MHz. 8-bit.
  • 1976. Zilog Z80: 3 µm process and 8,500 transistors, clocked at up to 8 MHz. 8-bit.
  • 1978. Intel 8086: 3 µm process and 29,000 transistors, clocked at up to 10 MHz, with the x86 instruction set still in use today. 16-bit.
  • 1980. Intel 80186: 3 µm process and 134,000 transistors, clocked at up to 25 MHz. 16-bit.
  • 1982. Intel 80286: 1.5 µm process and 134,000 transistors, clocked at up to 12.5 MHz. 16-bit.
  • 1982. Motorola 68000: 3 µm and 84,000 transistors. This processor was used in the Apple Lisa computer.
  • 1985. Intel 80386: 1.5 µm process and 275,000 transistors, clocked at up to 33 MHz in the 386SX version.

It would seem that the list could be continued indefinitely, but then Intel engineers faced a serious problem.

6. Moore's Law or how chipmakers live on

It was the late 1980s. Back in the 1960s, Intel co-founder Gordon Moore had formulated the so-called "Moore's Law". It goes like this:

Every 24 months, the number of transistors placed on an integrated circuit chip doubles.

It is hard to call this law a law; more accurately, it is an empirical observation. Comparing the pace of technological development, Moore concluded that such a trend might take shape.

But already while developing the fourth generation of Intel processors, the i486, engineers ran into a performance ceiling: they could no longer fit more transistors into the same area. The technology of the time simply did not allow it.

The solution they found was to use a number of additional elements:

  • cache memory;
  • a pipeline;
  • a built-in coprocessor;
  • a clock multiplier.

Part of the computational load was shifted onto the shoulders of these four units. The appearance of cache memory, in particular, complicated the processor's design on the one hand, but made it much more powerful on the other.

The Intel i486 processor already consisted of 1.2 million transistors, and its maximum operating frequency reached 50 MHz.

In 1995, AMD joined the development race and released the fastest i486-compatible processor of the time, the Am5x86, on a 32-bit architecture. It was manufactured on a 350-nanometer process, the number of transistors reached 1.6 million, and the clock frequency rose to 133 MHz.

But chipmakers did not dare to keep chasing ever higher transistor counts on a chip and developing the already utopian CISC (Complex Instruction Set Computing) architecture. Instead, the American engineer David Patterson proposed optimizing how processors work, keeping only the most essential computational instructions.

So processor manufacturers switched to the RISC (Reduced Instruction Set Computing) platform. But even this was not enough.

In 1991, the 64-bit R4000 was released, running at 100 MHz. Three years later came the R8000, and two years after that the R10000, with clock speeds up to 195 MHz. In parallel, the market for SPARC processors developed, whose architectural peculiarity was the absence of multiplication and division instructions.

Instead of fighting over transistor counts, chip manufacturers began to rethink how their chips worked. Dropping "unnecessary" commands, executing instructions in a single cycle, general-purpose registers and pipelining made it possible to raise the clock frequency and power of processors quickly without inflating the number of transistors.

Here are just a few of the architectures that appeared between 1980 and 1995:

  • SPARC;
  • ARM;
  • PowerPC;
  • Intel P5;
  • AMD K5;
  • Intel P6.

They were based on the RISC platform, in some cases combined with partial use of CISC. But the march of technology once again pushed chipmakers back to piling on transistors.

In August 1999, the AMD K7 Athlon entered the market: a 250 nm process and 22 million transistors. Later the bar rose to 38 million transistors, then to 250 million.

Process technology improved and clock frequencies grew. But, as physics reminds us, everything has its limit.

7. The end of the transistor race is near

In 2007, Gordon Moore made a very blunt statement:

Moore's Law will soon cease to hold. It is impossible to keep installing an unlimited number of transistors indefinitely. The reason is the atomic nature of matter.

It is plain to the naked eye that the two leading chipmakers, AMD and Intel, have clearly slowed the pace of processor development in recent years. Process precision has advanced to just a few nanometers, yet placing still more transistors is impossible.

And while semiconductor manufacturers promise multilayer transistors, drawing a parallel with 3D NAND memory, a serious competitor to the walled-in x86 architecture appeared 30 years ago.

8. What awaits "regular" processors

Moore's Law has been considered invalid since 2016: Intel, the largest processor manufacturer, said so officially. Chipmakers are no longer able to double computing power every two years.

And now processor manufacturers have several unpromising options.

The first option is quantum computers. Attempts have already been made to build a computer that uses particles to represent information. Several such quantum devices exist in the world, but they can cope only with algorithms of low complexity.

Moreover, mass production of such devices in the coming decades is out of the question. Expensive, inefficient and… slow!

Yes, quantum computers consume much less power than their modern counterparts, but they will also be slower until developers and component manufacturers switch to new technology.

The second option is processors with multiple transistor layers. Both Intel and AMD have thought seriously about this technology: instead of one layer of transistors, they plan to use several. In the coming years we may well see processors in which not only core count and clock frequency matter, but also the number of transistor layers.

The idea has a right to exist, and it would let the monopolists milk the consumer for another couple of decades, but in the end this technology will hit a ceiling too.

Today, recognizing the rapid rise of the ARM architecture, Intel quietly announced the Ice Lake family of chips. These processors will be manufactured on a 10-nanometer process and will serve as the basis for smartphones, tablets and mobile devices. But that will happen in 2019.

9. ARM is the future

So: the x86 architecture appeared in 1978 and belongs to the CISC type of platform, meaning it carries instructions for every occasion. Versatility is x86's main strength.

But that same versatility played a cruel joke on these processors. x86 has several key disadvantages:

  • complex and, frankly, tangled instructions;
  • high power consumption and heat output.

High performance meant saying goodbye to energy efficiency. Moreover, two companies work on the x86 architecture that can safely be called monopolists: Intel and AMD. Only they can produce x86 processors, which means only they steer the development of the technology.

Meanwhile, several companies take part in developing ARM (originally Acorn RISC Machine). Back in 1985, its developers chose the RISC platform as the basis for the architecture's further development.

Unlike CISC, RISC involves designing a processor with the minimum required number of instructions, but maximum optimization. RISC processors are much smaller than CISC, more power efficient and simpler.

Moreover, ARM was originally created specifically as a competitor to x86: the developers set themselves the task of building an architecture more efficient than x86.

Since the 1940s, engineers have understood that one of the priority tasks is to reduce the size of computers, and, first of all, the processors themselves. But almost 80 years ago, hardly anyone could have imagined that a full-fledged computer would be smaller than a matchbox.

Apple at one time supported the ARM architecture, launching production of its Newton tablets on the ARM6 family of processors.

Desktop computer sales are falling fast, while mobile devices already sell in the billions annually. And besides performance, users choosing an electronic gadget usually care about a few more criteria:

  • portability;
  • battery life.

The x86 architecture is strong on performance, but give up active cooling and even a powerful processor looks pathetic next to the ARM architecture.

10. Why ARM is the undisputed leader

You will hardly be surprised to hear that your smartphone, whether a humble Android or Apple's 2016 flagship, is dozens of times more powerful than full-fledged computers of the late 1990s.

But how much more powerful is the same iPhone?

Comparing two different architectures head-on is genuinely difficult. Measurements here can only be approximate, but they convey the enormous advantage that ARM-based smartphone processors provide.

A universal helper here is the synthetic Geekbench performance test. The utility is available on desktop computers as well as on Android and iOS.

Mid-range and entry-level laptops clearly lag behind the performance of the iPhone 7. In the top segment things are a little more complicated, but in 2017 Apple released the iPhone X on the new A11 Bionic chip.

The ARM architecture there is already familiar to you, yet the Geekbench figures nearly doubled. Laptops from the "upper echelon" grew tense.

And it's only been one year.

ARM is developing by leaps and bounds. While Intel and AMD show 5-10% performance gains year after year, smartphone makers manage to boost processor power two- to two-and-a-half-fold over the same period.

Skeptical users scanning Geekbench's top rankings should keep one thing in mind: in mobile technology, size is what matters most.

Place an all-in-one desktop with a powerful 18-core processor that "rips the ARM architecture to shreds" on the table, then put your iPhone next to it. Feel the difference?

11. Instead of a conclusion

It is impossible to cover the 80-year history of computers in one article. But after reading this one, you can understand how the main element of any computer - the processor - is arranged, and what to expect from the market in the coming years.

Of course, Intel and AMD will keep working to increase the number of transistors on a single chip and to promote the idea of multilayer elements.

But do you, as a customer, need such power?

You are unlikely to be dissatisfied with the performance of the iPad Pro or the flagship iPhone X. I doubt you are unhappy with the performance of the multicooker in your kitchen or the picture quality on a 65-inch 4K TV. Yet all these devices use ARM-based processors.

Microsoft has already officially announced that it is looking at ARM with interest: the company added support for this architecture back in Windows 8.1 and is now actively working in tandem with the leading ARM chipmaker Qualcomm.

Google has also taken a look at ARM: the Chrome OS operating system supports the architecture, and several Linux distributions compatible with it have appeared as well. And this is only the beginning.

Now try for a moment to imagine how pleasant it will be to combine an energy-efficient ARM processor with a graphene battery. It is this architecture that will make it possible to build mobile, ergonomic gadgets that dictate the future.


The processor is, without a doubt, the main component of any computer. It is this small piece of silicon, a few tens of millimeters across, that performs all the difficult tasks you set your computer. The operating system runs here, as do all your programs. But how does it all work? We will try to unpack that question in today's article.

The processor manages the data in your computer and executes millions of instructions per second. And by the word processor I mean exactly what it really is: a small silicon chip that actually performs all of the computer's operations. Before we look at how the processor works, we should first examine what it is and what it consists of.

First, let's establish what a processor is. The CPU, or central processing unit, is a microcircuit with a huge number of transistors made on a silicon crystal. The world's first processor was developed by Intel in 1971. It all started with the Intel 4004, which could only perform computations and handled just 4 bits of data at a time. The next model, the Intel 8080, came out in 1974 and could already process 8 bits of information. Then came the 80286, 80386 and 80486 - the processors from which the x86 architecture takes its name.

The 8088 processor ran at 5 MHz and performed only about 330,000 operations per second, far fewer than a modern processor. Today's devices run at clock speeds of several gigahertz and perform billions of operations per second.

We will not consider transistors, we will move to a higher level. Each processor consists of the following components:

  • Core - all information processing and mathematical operations happen here; there can be several cores;
  • Instruction decoder - a component of the core that converts program commands into the set of signals the core's transistors will execute;
  • Cache - an area of ultra-fast memory of small volume where data read from RAM is stored;
  • Registers - very fast memory cells that hold the data currently being processed. There are only a few of them, each of limited size - 8, 16 or 32 bits; the processor's bit width depends on this;
  • Coprocessor - a separate core optimized for specific operations only, such as video processing or data encryption;
  • Address bus - for communication with all the devices connected to the motherboard; it can be 8, 16 or 32 bits wide;
  • Data bus - for communication with RAM. Over it the processor can write data to memory or read it back; the bus can be 8, 16 or 32 bits wide, which is how much data can be transferred at a time;
  • Clock (synchronization) bus - carries the clock signal that sets the processor's frequency and paces its cycles;
  • Reset bus - resets the processor's state.

The core, or arithmetic-computing unit, together with the processor's registers can be considered the main components; everything else exists to help these two work. Let's look at what the registers are and what they are for.

  • Registers A, B, C - hold data while it is being processed; yes, there are only three of them, but that is quite enough;
  • EIP - holds the address of the next program instruction in RAM;
  • ESP - holds an address of data in RAM (the stack pointer);
  • Z - holds the result of the last comparison operation (the zero flag).

Of course, these are far from all the registers, but they are the most important ones, used most heavily while the processor executes a program. Now that you know what a processor consists of, let's look at how it works.

How does a computer processor work?

The processor core can only perform mathematical operations, comparisons, and moves of data between cells and RAM, but that is enough for you to play games, watch films, browse the web and much more.

In fact, any program consists of commands like these: move, add, multiply, divide, subtract, and jump to another instruction if a comparison condition is met. These are of course not all the commands; others combine the ones listed or simplify their use.

All data movement is done with the move instruction (mov); it moves data between registers, between registers and RAM, and between memory and the hard drive. Arithmetic operations have their own special instructions. Jump instructions exist to implement conditions: for example, check the value of register A and, if it is not zero, jump to the instruction at the desired address. Jump instructions also let you build loops.
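
Here is a hedged sketch of such a machine in C, reusing the register names from the list above (A, B, EIP, Z). The three-instruction encoding is invented for illustration; the program sums 5+4+3+2+1 by jumping back while A is non-zero - exactly the looping trick just described.

```c
#include <stdio.h>

/* Invented mini-ISA for illustration: MOV (load a constant), ADD (reg += reg),
   SUB (reg -= constant, sets Z) and JNZ (jump if the last result was non-zero). */
typedef enum { MOV, ADD, SUB, JNZ, HALT } Op;
typedef struct { Op op; int dst, src; } Ins;

enum { A, B, C };                 /* the document's data registers */

int main(void) {
    Ins program[] = {
        { MOV, A, 5 },            /* A = 5: loop counter */
        { MOV, B, 0 },            /* B = 0: running sum */
        { ADD, B, A },            /* B += A             <- loop body (EIP = 2) */
        { SUB, A, 1 },            /* A -= 1, sets Z from the result */
        { JNZ, 2, 0 },            /* if A != 0, jump back to EIP = 2 */
        { HALT, 0, 0 },
    };
    int reg[3] = {0};             /* registers A, B, C */
    int eip = 0;                  /* next-instruction "register" */
    int z = 0;                    /* Z: was the last result zero? */

    for (;;) {
        Ins i = program[eip++];   /* fetch, then advance EIP */
        switch (i.op) {
        case MOV: reg[i.dst] = i.src; break;
        case ADD: reg[i.dst] += reg[i.src]; break;
        case SUB: reg[i.dst] -= i.src; z = (reg[i.dst] == 0); break;
        case JNZ: if (!z) eip = i.dst; break;          /* the jump makes the loop */
        case HALT: printf("B = %d\n", reg[B]); return 0;  /* prints B = 15 */
        }
    }
}
```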

That is all very well, but how do all these components interact? And how do the transistors understand the instructions? The whole processor is run by the instruction decoder, which makes each component do its part. Let's walk through what happens when a program executes.

In the first step, the decoder loads the address of the program's first instruction into the next-instruction register EIP: it activates the read channel and opens the latch transistor to let the data into the EIP register.

In the second clock cycle, the instruction decoder converts the instruction into a set of signals for the transistors of the computing core, which execute it and write the result to one of the registers, for example, C.

On the third cycle, the decoder increments the instruction address by one so that it points to the next instruction in memory, then proceeds to load the next command, and so on until the program ends.

Each instruction is encoded as its own pattern of transistor switching; converted into signals, it causes physical changes in the processor, such as flipping a latch that lets data be written into a memory cell. Different commands take different numbers of cycles to execute: one may need 5 cycles, while another, more complex one needs up to 20. All of this also depends on the number of transistors in the processor itself.

So far so good, but all of this works while a single program is running. What if there are several, all running at once? You might suppose the processor has several cores and each core runs its own program, but no, there is no such restriction.

At any given moment, only one program can execute. All CPU time is shared among the running programs: each executes for a number of cycles, then the processor switches to another program, saving the entire contents of the registers to RAM. When control returns to the program, the saved values are loaded back into the registers.
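
A sketch of that time slicing, with everything invented except the idea itself: each "program" runs for a few steps, its registers are dumped into a save area standing in for RAM, and the next program's saved registers are loaded back.

```c
#include <stdio.h>
#include <string.h>

/* Toy context switch: the names and sizes are invented for illustration. */
typedef struct { int a, b, eip; } Registers;

Registers cpu;       /* the single set of "real" registers */
Registers saved[2];  /* per-program save areas, standing in for RAM */

void run_slice(int prog, int steps) {
    memcpy(&cpu, &saved[prog], sizeof cpu);   /* restore this program's state */
    for (int s = 0; s < steps; s++) {
        cpu.a += prog + 1;                    /* "execute": each program counts */
        cpu.eip++;                            /* at its own pace */
    }
    memcpy(&saved[prog], &cpu, sizeof cpu);   /* save the state back to "RAM" */
    printf("program %d paused: a=%d eip=%d\n", prog, cpu.a, cpu.eip);
}

int main(void) {
    for (int round = 0; round < 3; round++) { /* round-robin over two programs */
        run_slice(0, 4);
        run_slice(1, 4);
    }
    return 0;
}
```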

Conclusions

That's all: in this article we looked at how a computer processor works, what a processor is, and what it consists of. It may be a bit complicated, but we have tried to cover it simply. I hope you now have a clearer idea of how this very complex device works.

To finish, here is a video about the history of processors:

A personal computer is a very complex and multifaceted thing, but in every system unit we find the center of all operations and processes: the microprocessor. So what does a computer processor consist of, and what is it for?

Many people will probably be delighted to learn what the microprocessor of a personal computer is made of. It consists almost entirely of ordinary stone, rock.

Yes, that's right... The processor contains substances such as silicon - the same material that makes up sand and granite.

Hoff's processor

The first microprocessor for a personal computer was invented almost half a century ago, in 1970, by Marcian Edward Hoff and his team of engineers at Intel.

Hoff's first processor ran at just 750 kHz.

Of course, the main characteristics of today's computer processors cannot be compared with that figure - the current "stones" are several thousand times more powerful than their ancestor - but before getting to numbers, it is better to look at the tasks a processor solves.

Many people believe processors can "think". Let's say right away that there is not a grain of truth in this. Any heavy-duty PC processor consists of many transistors: switches of a sort that perform one single function, to pass the signal on or to stop it. The choice depends on the signal voltage.

Looked at from another angle, the microprocessor consists of registers - cells in which information is processed.

To connect the "stone" with the rest of the PC's devices, a special high-speed road called the bus is used. Tiny electromagnetic signals fly along it at lightning speed. That, in outline, is how the processor of a computer or laptop operates.

Microprocessor structure

How is a computer processor arranged? In any microprocessor, 3 components can be distinguished:

  1. The processor core (where the zeros and ones are crunched);
  2. Cache memory - a small store of information right inside the processor;
  3. The coprocessor - a special brain center of the processor where the most complex operations take place; this is where multimedia is handled.

The computer processor circuit in a simplified version is as follows:

One of the microprocessor's main indicators is its clock frequency: it shows how many cycles the "stone" performs per second. The power of a computer processor depends on the combination of the indicators given above.

It should be noted that rocket launches and satellite operations were once controlled by microprocessors with clock frequencies a thousand times lower than today's "brothers". Meanwhile, a modern transistor measures some 22 nm, with a transistor layer only about 1 nm thick. For reference, 1 nm is about 5 atoms!

Now you know how a computer processor works and what the scientists working for PC manufacturers have achieved.

CPU structure

To make it clear to a non-professional how the central processor of a computer works, consider what blocks it consists of:

  • processor control unit;
  • command and data registers;
  • arithmetic logic units (they perform arithmetic and logical operations);
  • a unit for operations on real numbers, that is, floating-point numbers or, more simply, fractions (the FPU);
  • level 1 buffer memory (cache), kept separate for commands and data;
  • level 2 buffer memory (cache) for storing intermediate results of calculations;
  • in most modern processors, a level 3 cache as well;
  • the system bus interface.

The processor's principle of operation

The operation of a computer's central processor can be represented as the following sequence of actions.

The processor's control unit fetches values (data) and the commands to execute (instructions) from the RAM into which the program is loaded. This data is loaded into the processor's cache memory.

From the processor's buffer memory (cache), the instructions and the fetched data are written into registers: instructions go into instruction registers, values into data registers.

The arithmetic logic unit reads instructions and data from the corresponding processor registers and executes these instructions on the received numbers.

The results are again written to the registers and, if the calculations are completed, to the processor's buffer memory. The processor has very few registers, so it is forced to store intermediate results in cache memory of various levels.

New data and commands needed for the calculations are loaded into the higher-level cache (from the third level to the second, from the second to the first), while unused data moves, conversely, to a lower-level cache.

If the calculation cycle is over, the result is written to the computer's RAM to free up space in the processor's buffer memory for new calculations. The same thing happens when the cache is full of data: unused data is moved to the lower-level cache or to main memory.
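
To see why this shuffling between levels pays off, here is a back-of-the-envelope sketch with invented but typical-order-of-magnitude latencies (register about 1 cycle, L1 about 4, L2 about 12, RAM about 100): reading one value ten times becomes several times cheaper once the cache holds it.

```c
#include <stdio.h>

/* Invented, but typical-order-of-magnitude access latencies, in cycles. */
enum { COST_REG = 1, COST_L1 = 4, COST_L2 = 12, COST_RAM = 100 };

int main(void) {
    /* One value read 10 times: the first touch goes all the way to RAM,
       after that the cache level closest to the core serves it. */
    long cycles_uncached = 10 * COST_RAM;   /* no hierarchy at all */
    long cycles_cached = COST_RAM           /* 1st read: miss, fetched from RAM */
                       + 9 * COST_L1;       /* next 9 reads: L1 hits */
    printf("without caches: %ld cycles\n", cycles_uncached);  /* 1000 */
    printf("with caches:    %ld cycles\n", cycles_cached);    /* 136 */
    return 0;
}
```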

The sequence of these operations forms the processor's working flow. The processor gets very hot during operation; to keep that from becoming a problem, clean your laptop's cooling at home in good time.

To speed up the central processor and raise computational performance, new architectural solutions that increase processor efficiency are constantly being developed. Among them are pipelined execution of operations; tracing, that is, attempting to anticipate the program's next actions; parallel processing of commands (instructions); multithreading; and multiple cores.

A multi-core processor has several computing cores - that is, several arithmetic logic units, floating-point units and register sets, each with its own level 1 cache, bundled into its own core. The cores share level 2 and level 3 buffer memory. The level 3 cache appeared precisely because of multi-core designs and the consequent need for a larger amount of fast buffer memory to store intermediate results of calculations.

The main factors affecting the speed at which a processor handles data are the number of cores, the pipeline length, the clock frequency and the amount of cache memory. Raising a computer's performance often means replacing the processor, which entails replacing the motherboard and RAM as well. If you are wary of assembling and upgrading a computer yourself, our service-center specialists will help you upgrade, configure and repair your computer at home in Moscow.