Today even basic mobile phones cannot do without a microprocessor, to say nothing of tablets, laptops, and desktop personal computers. What is a microprocessor, and how did its history unfold? Put simply, a microprocessor is a particularly complex and multifunctional integrated circuit.

The history of the microcircuit (integrated circuit) begins in 1958, when Jack Kilby, an engineer at the American company Texas Instruments, invented a semiconductor device containing several transistors connected by conductors in a single package. The first microcircuit, the progenitor of the microprocessor, contained only 6 transistors: a thin plate of germanium with gold tracks deposited on it, mounted on a glass substrate. For comparison, today the count runs to millions and even tens of millions of semiconductor elements.

By 1970 quite a few manufacturers were developing and producing integrated circuits of various capacities and functions, but that year can be considered the birth date of the first microprocessor. It was in 1970 that Intel created a memory chip with a capacity of only 1 Kbit, negligible by modern standards but incredibly large for its time: it could store up to 128 bytes of information, far more than comparable devices. At about the same time, the Japanese calculator manufacturer Busicom ordered from the same Intel 12 chips, each with a different function. Intel's engineers managed to implement all 12 functions in a single chip. Moreover, the resulting chip turned out to be multifunctional: its behavior could be changed programmatically, without changing its physical structure, depending on the commands applied to its control pins.

Just a year later, in 1971, Intel released the first 4-bit microprocessor, codenamed 4004. Compared with the first 6-transistor chip, it contained 2,300 semiconductor elements and performed 60,000 operations per second, a huge breakthrough in microelectronics at the time. "4-bit" meant that the 4004 processed data 4 bits at a time. Two years later, in 1973, the company produced the 8-bit 8008 processor, which worked with 8-bit data. Beginning in 1976, the company developed the 16-bit 8086 microprocessor. This processor (through its 8-bit-bus variant, the 8088) found its way into the first IBM personal computers and, in effect, laid one of the foundation stones in the history of computing.

Types of microprocessors

By the nature of the executable code and the organization of the control device, several types of architectures are distinguished:

    A processor with a complex instruction set (CISC). This architecture is characterized by a large number of complex instructions and, as a result, a complex control unit. Early CISC processors and CISC processors for embedded applications have long instruction execution times (from a few cycles to hundreds), determined by the microcode of the control unit. High-performance superscalar CISC processors rely on deep program analysis and out-of-order execution of operations.

    A processor with a reduced instruction set (RISC). This architecture has a much simpler control unit. Most RISC instructions perform the same small number of operations (1, sometimes 2-3), and the instruction words are almost always the same width (PowerPC, ARM), although there are exceptions (ColdFire). Superscalar RISC processors perform only simple grouping of instructions, without reordering their execution.

    A processor with explicit parallelism (EPIC/VLIW). It differs from the others chiefly in that the order and parallelism of operations, and their distribution among functional units, are specified explicitly by the program. Such processors can have a large number of functional units without greatly complicating the control unit or losing efficiency. They typically use a wide instruction word consisting of several syllables, each defining the behavior of one functional unit during a cycle.

    A processor with a minimal instruction set (MISC). This architecture is defined above all by an extremely small number of instructions (a few dozen), almost all of them zero-operand. This makes it possible to pack code very tightly, allocating only 5 to 8 bits per instruction. Intermediate data in such a processor is usually kept on an internal stack, and operations are performed on the values at the top of the stack. The architecture is closely tied to the programming ideology of the Forth language and is usually used to execute programs written in it.

    A processor with a variable instruction set. An architecture that allows the processor to be reprogrammed, adjusting its instruction set to the task being solved.

    A transport-triggered processor (TTA). The architecture originally branched off from EPIC but differs fundamentally from the rest in that its instructions encode not functional operations but so-called transports: data transfers between functional units and memory in an arbitrary order.

According to the method of storing programs, two architectures are distinguished:

    Von Neumann architecture. This architecture uses a single bus and a single memory space for access to both program and data.

    Harvard architecture. Processors of this architecture have separate buses for instruction fetch and for data exchange. In embedded microprocessors, microcontrollers, and DSPs this also implies two independent memory devices, one for programs and one for data; in central processing units it takes the form of separate instruction and data caches. Beyond the cache, the buses can be multiplexed onto a single bus.

Introduction

1 Development of microprocessors

2 Microprocessors i80386

3 Microprocessors i80486

4 Pentium processors

5 Processor performance

6 Coprocessors

Bibliography


Introduction

The most important element of any PC is the microprocessor; it largely determines the capabilities of the computing system. The first microprocessor, the i4004, was manufactured in 1971, and since then Intel has firmly held a leading position in this market segment. Its most successful early design was the i8080: the Altair computer was based on it, and it was for the Altair that B. Gates wrote his first Basic interpreter. The classic i8080 architecture had a huge influence on the further development of single-chip microprocessors. The i8088 microprocessor, announced by Intel in June 1979, became the true industry standard for PCs: in 1981 the "blue giant" (IBM) chose this processor for its PC. Initially the i8088 ran at 4.77 MHz with a speed of about 0.33 Mops; later clones were designed for a higher clock frequency of 8 MHz. The i8086 microprocessor had appeared a year earlier, in June 1978, and became popular thanks to the Compaq DeskPro computer. Building on the i8086 architecture and responding to market demand, Intel released the i80286 in February 1982, at the same time as the IBM PC AT. Along with higher performance, it offered a protected mode that used a more sophisticated memory-management technique; protected mode allowed programs such as Windows 3.0 and OS/2 to use RAM above 1 MB. Thanks to the 16-bit data path of the new system bus, 2-byte transfers could be exchanged with peripheral devices. The new microprocessor could address 16 MB of RAM in protected mode. The i80286 was the first to implement multitasking and virtual-memory management at the chip level. At a clock frequency of 8 MHz it achieved a performance of 1.2 Mips.

1 Development of microprocessors

Computers have been in wide use since the 1950s. At first they were very large and expensive devices, used only by government agencies and large firms. The size and shape of digital computers changed beyond recognition with the development of new devices called microprocessors.

A microprocessor (MP) is a program-controlled electronic digital device designed to process digital information and to control that processing, implemented as one or more integrated circuits with a high degree of integration of electronic elements.

In 1970, Marcian Edward Hoff of Intel designed an integrated circuit similar in function to the central processing unit of a mainframe computer: the first microprocessor, the Intel 4004, which went on sale in 1971.

This was a real breakthrough, because the Intel 4004, less than 3 cm across, was more productive than the giant ENIAC machine. True, it worked much more slowly and could process only 4 bits of information at a time (mainframe processors handled 16 or 32 bits), but the first MP also cost tens of thousands of times less.

The chip was a 4-bit processor with a classic Harvard-type architecture, manufactured in advanced p-channel MOS technology with 10-μm design rules. Its circuit consisted of 2,300 transistors. The MP ran at a clock frequency of 750 kHz with an instruction cycle of 10.8 μs. The i4004 contained an address stack (a program counter and three LIFO stack registers), a block of general-purpose registers (a register file, RF), a 4-bit parallel ALU, an accumulator, an instruction register with an instruction decoder and control circuitry, and an interface circuit for external devices. All these functional units were connected by a 4-bit internal data bus. The instruction memory reached 4 KB (for comparison, the memory of a minicomputer in the early 1970s rarely exceeded 16 KB), and the register file held 16 4-bit registers, which could also be used as 8 8-bit ones. This organization of the general-purpose registers was preserved in Intel's subsequent MPs. The three stack registers provided three levels of subroutine nesting. The i4004 came in a plastic or ceramic-metal DIP (Dual In-line Package) with only 16 pins. Its instruction set included only 46 instructions.

At the same time, the chip had very limited I/O facilities, and the instruction set lacked logical data-processing operations (AND, OR, EXCLUSIVE OR), so these had to be implemented with special subroutines. The i4004 also had no halt (HALT) instruction and no interrupt handling.

The processor's instruction cycle consisted of 8 cycles of the master oscillator. The address and data buses were multiplexed: the 12-bit address was transmitted 4 bits at a time.

On April 1, 1972, Intel began shipping the industry's first 8-bit microprocessor, the i8008. The chip was fabricated in p-channel MOS technology with 10-μm design rules and contained 3,500 transistors. The processor ran at 500 kHz with a machine cycle of 20 μs (10 periods of the master oscillator).

Unlike its predecessors, this MP had a Princeton-type (von Neumann) architecture and allowed a combination of ROM and RAM to be used as memory.

Compared with the i4004, the number of general-purpose registers decreased from 16 to 8, and two registers were reserved for storing the address in indirect memory addressing (a technology limitation: in the 8008, as in the 4004 and 4040, the register block was implemented as dynamic memory). The machine cycle was shortened almost by half (from 8 to 5 states). A READY signal was introduced to synchronize operation with slow devices.

The instruction set consisted of 65 instructions. The MP could address 16 KB of memory. Its performance was 2.3 times that of the 4-bit MP. On average, about 20 medium-scale integrated circuits were needed to interface the processor with memory and I/O devices.

The possibilities of p-channel technology for building complex high-performance MPs were nearly exhausted, so the main effort shifted to n-channel MOS technology.

On April 1, 1974, the Intel 8080 MP was presented to the public. Thanks to the use of n-MOS technology with 6-μm design rules, 6,000 transistors were placed on the chip. The processor's clock frequency was raised to 2 MHz, and the instruction cycle was down to 2 μs. The amount of memory the processor could address grew to 64 KB.

Thanks to a 40-pin package, the address and data buses could be separated, and the total number of chips required for a minimum system configuration dropped to 6.

A stack pointer, used heavily in interrupt handling, was added to the register file, along with two program-inaccessible registers for internal transfers. The register block was implemented on static memory. Moving the accumulator out of the register file and into the ALU simplified the internal bus control scheme.

New to the MP's architecture was a multilevel vectored interrupt system. This solution brought the total number of interrupt sources to 256 (before the advent of LSI interrupt controllers, the interrupt-vector generation circuit required up to 10 additional medium-scale chips). The i8080 also introduced a direct memory access (DMA) mechanism (as previously seen in IBM System/360 mainframes and others).

DMA opened the way for microcomputers to use such complex devices as magnetic disk and tape drives and CRT displays, turning the microcomputer into a full-fledged computing system.

From its first chip onward, the company's tradition was to release not a single CPU chip but a family of LSI chips designed to be used together.

Modern microprocessors are built on the 32-bit x86, or IA-32, architecture (Intel Architecture 32 bit), but a transition to more advanced, higher-performance 64-bit architectures is under way. In fact, the transition has already begun, as shown by the mass production and sale in 2003 of the new Athlon 64 microprocessor from AMD (Advanced Micro Devices), notable for running both 32-bit and 64-bit applications. The performance of 64-bit microprocessors is much higher.

2 Microprocessors i80386

In October 1985, Intel announced the first 32-bit microprocessor, the i80386. The first computer to use it was the Compaq DeskPro 386. The full 32-bit architecture of the new microprocessor was complemented by an advanced memory manager: in addition to the segmentation unit, a paging unit was added, which makes it easy to move segments from one memory location to another. At a clock frequency of 16 MHz the performance was 6 Mips. 32 address lines made it possible to physically address 4 GB of memory, and a new virtual-8086 mode (V86) was introduced, in which several i8086 tasks could run simultaneously.

The full single-chip version of the microprocessor was called the i80386DX. A cheaper 32-bit model, the i80386SX, did not appear until July 1988. The new microprocessor used a 16-bit data bus and a 24-bit address bus, which suited the standard IBM PC AT particularly well. Software written for the i80386DX ran on the i80386SX; the internal registers were completely identical. The SX index comes from the word "sixteen" (16-bit data bus); for the i486, SX came to mean the absence of a coprocessor. At the fall 1989 show, Intel announced the i80486DX, which contained 1.2 million transistors on a single chip and was fully compatible with the other x86 processors. The new chip was the first to combine the CPU, the coprocessor, and cache memory on one die. A pipelined architecture of the kind found in RISC processors made it possible to achieve 4 times the performance of conventional 32-bit systems. 8 KB of built-in cache accelerated execution by keeping frequently used instructions and data close at hand. At a clock frequency of 25 MHz the microprocessor delivered 16.5 Mips. The 50-MHz version, created in January 1991, gave an additional 50% performance increase. The built-in coprocessor significantly accelerated mathematical calculations, but it later became clear that only about 30% of users needed it.

You are using a computer or mobile device to read this article right now, and that computer or mobile device uses a microprocessor to do it. The microprocessor is the heart of any such device, from server to laptop. There are many brands of microprocessor from many different manufacturers, but they all do roughly the same thing, in roughly the same way.
A microprocessor, also known as a processor or central processing unit (CPU), is a complete computation engine fabricated on a single chip. The first microprocessor was the Intel 4004, which appeared in 1971, and it was not very powerful: all it could do was add and subtract, and only 4 bits at a time. Why was it amazing, then? Because it fit on a single chip: before it, engineers built processors either from several chips or from discrete components (transistors in individual packages).

If you have ever wondered what the microprocessor in your computer does, what it looks like, or how it differs from other types of microprocessor, then read on for all the most interesting details.

Microprocessor Progress: Intel

The first microprocessor to become the heart of a simple home computer was the Intel 8080, a complete 8-bit computer on a single chip introduced in 1974; it caused a real surge in the market. A new model, the Intel 8088, was released in 1979. If you are familiar with the PC market and its history, you know that it moved from the Intel 8088 to the 80286, to the 80386, to the 80486, and then to the Pentium, Pentium II, Pentium III, and Pentium 4. All of these microprocessors are made by Intel, and all are improvements on the basic design of the 8088. The Pentium 4 can execute any code that ran on the original 8088, but it does so about 5,000 times faster.

In 2004, Intel introduced microprocessors with multiple cores and millions more transistors, but even these followed the same general rules as earlier chips. Here is what the columns of such a comparison table mean:

  • Date: the year the processor was first introduced. Many processors were re-released at higher clock speeds for years after the original release date.
  • Transistors: the number of transistors on the chip. You can see that the transistor count on a single chip has risen steadily over the years.
  • Micron: the width, in microns, of the smallest wire on the chip. For comparison, a human hair is about 100 microns thick. As feature sizes shrank, transistor counts grew.
  • Clock frequency: the maximum speed the chip can be clocked at. I will say more about clock frequency a little later.
  • Data (bus) width: the width of the ALU (arithmetic logic unit). An 8-bit ALU can add, subtract, multiply, etc. two 8-bit numbers. In many cases the external data bus is the same width as the ALU, but not always: the Intel 8088 had a 16-bit ALU and an 8-bit bus, while current Pentium models are 64-bit.
  • MIPS: millions of instructions per second, a rough unit of measure for microprocessor performance. Modern processors do so many different things that MIPS ratings lose much of their meaning, but the column gives a feel for the relative power of the microprocessors of those times.
Comparing these figures shows that, in general, there is a relationship between clock speed and MIPS. The maximum clock speed is a function of the manufacturing process. There is also a relationship between the number of transistors and MIPS. For example, the Intel 8088, clocked at 5 MHz (compare today's 2.5-3 GHz), executed only 0.33 MIPS: about one instruction per 15 clock cycles. Modern processors can often execute two instructions per clock cycle. That improvement is directly related to the number of transistors on the chip, and I will come back to it later.

What is a chip?


A chip, also called an integrated circuit, is usually a small, thin piece of silicon onto which the transistors making up the microprocessor have been etched. A chip may be as small as an inch on a side yet contain tens of millions of transistors. Simpler processors may consist of a few thousand transistors etched onto a chip just a few square millimeters in size.

How it works



Intel Pentium 4

To understand how a microprocessor works, it helps to look inside and learn about its internals. In the process you can also learn about assembly language, the native language of the microprocessor, and about many of the things engineers do to raise a processor's speed.

A microprocessor executes a collection of machine instructions that tell it what to do. Based on those instructions, a microprocessor does three basic things:

  • Using its ALU (arithmetic logic unit), a microprocessor can perform mathematical operations such as addition, subtraction, multiplication, and division. Modern microprocessors can perform extremely complex operations.
  • A microprocessor can move data from one memory location to another.
  • A microprocessor can make decisions and jump to a new set of instructions based on those decisions.


To be blunt, a microprocessor can do very sophisticated things, but the three activities above are its core. The following describes a very simple microprocessor capable of doing those three things. This microprocessor has:

  • An address bus (8, 16, or 32 bits wide) that sends an address to memory
  • A data bus (8, 16, or 32 bits wide) that can send data to memory or receive data from memory
  • RD (read) and WR (write) lines that tell the memory whether it should fetch or store the addressed location
  • A clock line that feeds the processor's clock sequence
  • A reset line that resets the program counter to zero and restarts execution

Microprocessor memory

Earlier we talked about the address and data buses, and about the read and write lines. These connect to either RAM (random-access memory) or ROM (read-only memory), and usually both. In our example microprocessor, the address bus is 8 bits wide and the data bus is also 8 bits wide. That means the microprocessor can address 2^8 = 256 bytes of memory, and it can read or write 8 bits of memory at a time. Let's assume this simple microprocessor has 128 bytes of ROM starting at address 0 and 128 bytes of RAM starting at address 128.

ROM stands for read-only memory. A ROM chip is programmed with a permanent, preset collection of bytes. The address bus tells the ROM chip which byte to fetch and place on the data bus; when the read line changes state, the ROM chip presents the selected byte on the data bus.

RAM stands for random-access memory. RAM contains bytes of information that the microprocessor can read or write, depending on whether the read or the write line is signaled. One problem with today's RAM chips is that they forget everything as soon as the power goes off. That is why the computer also needs ROM.



A RAM chip or a read-only memory (ROM) chip

By the way, nearly all computers contain some amount of ROM. On a personal computer, the ROM is called the BIOS (Basic Input/Output System). At startup, the microprocessor begins executing the instructions it finds in the BIOS. The BIOS instructions do things like test the hardware, and then they go to the hard disk to fetch the boot sector. The boot sector is one small program, and the BIOS stores it in memory after reading it from the disk. The microprocessor then begins executing the boot sector's instructions from RAM. The boot sector program tells the microprocessor what else to fetch from the hard disk into RAM, the microprocessor executes that, and so on. This is how the microprocessor loads and executes the entire operating system.

Microprocessor instructions

Even the incredibly simple microprocessor just described has a fairly large set of instructions that it can execute. The collection of instructions is implemented as bit patterns, each of which has a different meaning when loaded into the instruction register. Since people are not particularly good at remembering bit patterns, a set of short words is defined to stand for them. This set of short words is called the processor's assembly language. An assembler translates the words into their bit patterns very easily, and the assembler's output is placed into memory for the microprocessor to execute.

Here is a set of assembly language instructions:

  • LOADA mem - load register A from memory address
  • LOADB mem - load register B from memory address
  • CONB con - load a constant value into register B
  • SAVEB mem - save register B to memory address
  • SAVEC mem - save register C to memory address
  • ADD - add A and B and store the result in C
  • SUB - subtract B from A and store the result in C
  • MUL - multiply A and B and store the result in C
  • DIV - divide A by B and store the result in C
  • COM - compare A and B and store the result in test
  • JUMP addr - jump to an address
  • JEQ addr - jump to address if equal
  • JNEQ addr - jump to address if not equal
  • JG addr - jump to address if greater than
  • JGE addr - jump to address if greater than or equal
  • JL addr - jump to address if less than
  • JLE addr - jump to address if less than or equal
  • STOP - stop execution
Assembly language

A C compiler translates C code into assembly language. Assuming that RAM starts at address 128 in this processor and that the ROM (which contains the assembly language program) starts at address 0, the program for our simple microprocessor might look like this:

// Assume a is at address 128
// Assume F is at address 129
0  CONB 1    // a=1;
1  SAVEB 128
2  CONB 1    // f=1;
3  SAVEB 129
4  LOADA 128 // if a > 5 then jump to 17
5  CONB 5
6  COM
7  JG 17
8  LOADA 129 // f=f*a;
9  LOADB 128
10 MUL
11 SAVEC 129
12 LOADA 128 // a=a+1;
13 CONB 1
14 ADD
15 SAVEC 128
16 JUMP 4    // loop back to the if
17 STOP

Read-only memory (ROM)
So now the question is: how do all these instructions get into ROM? Each of these assembly language instructions must be represented as a binary number. For simplicity, let's assume that each assembly language instruction is assigned a unique number, like this:

  • LOADA - 1
  • LOADB - 2
  • CONB - 3
  • SAVEB - 4
  • SAVEC - 5
  • ADD - 6
  • SUB - 7
  • MUL - 8
  • DIV - 9
  • COM - 10
  • JUMP addr - 11
  • JEQ addr - 12
  • JNEQ addr - 13
  • JG addr - 14
  • JGE addr - 15
  • JL addr - 16
  • JLE addr - 17
  • STOP - 18
These numbers are known as operation codes, or opcodes. In ROM, our little program looks like this:

// Assume a is at address 128
// Assume F is at address 129
Addr opcode/value
0  3   // CONB 1
1  1
2  4   // SAVEB 128
3  128
4  3   // CONB 1
5  1
6  4   // SAVEB 129
7  129
8  1   // LOADA 128
9  128
10 3   // CONB 5
11 5
12 10  // COM
13 14  // JG 17
14 31
15 1   // LOADA 129
16 129
17 2   // LOADB 128
18 128
19 8   // MUL
20 5   // SAVEC 129
21 129
22 1   // LOADA 128
23 128
24 3   // CONB 1
25 1
26 6   // ADD
27 5   // SAVEC 128
28 128
29 11  // JUMP 4
30 8
31 18  // STOP

You can see that 7 lines of C code became 18 lines of assembly language, and that those became 32 bytes in ROM. Note that the jump operands stored in ROM (31 and 8) are ROM addresses, while the comments refer to the assembly line numbers (17 and 4).

Decoding
The instruction decoder turns each opcode into the set of signals that drive the various components inside the microprocessor. Let's take the ADD instruction as an example and look at what the decoder needs to do:

  • 1. During the first clock cycle, the instruction itself must be loaded, so the decoder needs to: activate the tri-state buffer for the program counter, activate the RD (read) line, and activate the data-in tri-state buffer so the instruction lands in the instruction register.
  • 2. During the second clock cycle, the ADD instruction is decoded. Very little needs to happen here: set the operation of the arithmetic logic unit (ALU) to addition and latch its output into register C.
  • 3. During the third clock cycle, the program counter is incremented (in theory this can overlap with the second cycle).
Every instruction can be broken down into a sequenced set of operations like these, which manipulate the components of the microprocessor in the correct order. Some instructions, like ADD, may need two or three clock cycles; others may need five or six.

Wrapping up


The number of transistors has a huge impact on processor performance. As noted above, a typical instruction on the Intel 8088 took about 15 clock cycles to execute. The more transistors, the higher the performance: it's that simple. A large number of transistors also makes a technique such as pipelining possible.

A pipelined architecture overlaps the execution of instructions. It may take five clock cycles to execute one instruction, but five instructions can be in different stages of execution at the same time, so it looks as if one instruction completes on every clock cycle.

All of these trends keep pushing the transistor count up, resulting in the multi-million-transistor heavyweights available today. Such processors can perform about a billion operations per second; just imagine. By the way, many manufacturers have now taken an interest in 64-bit mobile processors, and apparently another wave is coming, this time with the 64-bit architecture as the king of fashion. Perhaps I will get to that topic in the near future and explain how it actually works. That's all for today; I hope you enjoyed it and learned a lot.

The first microprocessor was created in 1971, and with it the fourth generation of computers was finally born.


The CPU (central processing unit) is an electronic unit or an integrated circuit (microprocessor) that executes machine instructions (program code). It is sometimes referred to as a microprocessor or simply a processor.

The main characteristics of the central processing unit (CPU) are: clock speed, performance, power consumption and architecture.

Early CPUs were designed as unique building blocks for unique, sometimes one-of-a-kind, computer systems. Later, manufacturers moved away from this expensive practice of developing processors to run one particular program and switched to mass production of standard classes of processors.

The creation of microcircuits made it possible to keep increasing the complexity of CPUs while shrinking their physical size.



In 1971, Intel created the world's first 4-bit microprocessor, the 4004, designed for use in calculators.



It was later succeeded by the 8-bit Intel 8080 and the 16-bit 8086, which laid the foundations for the architecture of all modern desktop computers.




Then followed its modification, the 80186.
The 80286 processor introduced a protected mode, which allowed up to 16 MB of memory to be used.


The Intel 80386 processor appeared in 1985; it introduced an improved protected mode and allowed up to 4 GB of RAM to be used.



The Intel486 (also known as the i486, Intel 80486, or simply the 486) is a fourth-generation x86-compatible microprocessor built on a hybrid core and released by Intel on April 10, 1989.

This microprocessor is an improved version of the 80386 microprocessor. It was first demonstrated at an exhibition in the fall of 1989.

It was the first of these microprocessors with a built-in math coprocessor (FPU). It was used mainly in desktop PCs, servers, and portable PCs (laptops and notebooks).



Processors with the x86 architecture began to be used in personal computers.

Gradually, almost all processors began to be produced in the microprocessor format.

The Intel Pentium microprocessor was presented on March 22, 1993.
The processor's new architecture increased performance fivefold compared to the 33 MHz 486DX.

Number of transistors: 3.1 million.
Connector: 237/238 pins.


Next came Intel's 64-bit processors:
Itanium, Itanium 2, Pentium 4F, Pentium D, Xeon, Intel Core 2, Pentium Dual Core, Celeron Dual Core, Intel Core i3, Intel Core i5, Intel Core i7, Intel Xeon E3...

Multi-core processors contain several processor cores in one package (on one or more chips).

The first multi-core microprocessor was IBM's POWER4, which appeared in 2001 and had two cores. On November 14, 2005, Sun released the eight-core UltraSPARC T1.

AMD went its own way, manufacturing in 2007 a quad-core processor on a single chip.

Processors with 2, 3, 4 and 6 cores, as well as 2-, 3- and 4-module processors of AMD's Bulldozer generation, have become widely available.

8-core Xeon and Nehalem (Intel) and 12-core Opteron (AMD) processors are also available for servers.

To remove heat from microprocessors, passive heatsinks and active coolers are used.

Intel Core i7 is a family of x86-64 Intel processors.
It is a single-chip device: all the cores, the memory controller and the cache are on one die.
It supports Turbo Boost, with which the processor automatically increases performance when needed.


The protective cover of the processors is made of nickel-plated copper, the substrate is silicon, and the contacts are gold-plated copper.
The minimum and maximum storage temperatures for the Core i7 are -55°C and 125°C, respectively.
The maximum heat dissipation of Core i7 processors is 130 W.

The Intel Core i7-3820 is equipped with four physical and eight logical processor cores with a nominal clock frequency of 3.6 GHz and a dynamic frequency of 3.8 GHz, as well as ten megabytes of cache memory. It entered the market in 2012.

Modern computers are compact and convenient, offer high information-processing speed and have large amounts of RAM and storage.



Modern processors can be found not only in computers, but also in cars, mobile phones, household appliances and even in children's toys.

In previous parts of this series, we followed the development of graphics adapters, ranging from the first MDA and CGA models to the latest AMD and NVIDIA architectures. Now it's the turn to follow the development of central processing units - an equally important component of any computer. In this part of the material, we will talk about the 1970s, and therefore the first 4- and 8-bit solutions.

The first CPUs were centipedes

1940s–1960s

Before delving into the history of central processing units, it is necessary to say a few words about the development of computers in general. The first CPUs appeared in the 1940s. They operated using electromechanical relays and vacuum tubes, and ferrite cores served as their storage devices. A computer built on such components required a huge number of processing elements; such a machine was an enormous installation the size of a fairly large room. At the same time, it consumed a great deal of energy, and its performance left much to be desired.

Computer using electromechanical relays

However, already in the 1950s, transistors began to be used in processor designs. Their use allowed engineers to achieve higher chip operating speeds, reduce power consumption and improve reliability.

In the 1960s, the technology of manufacturing integrated circuits was developed, which made it possible to create microchips with transistors located on them. The processor itself consisted of several such circuits. Over time, technology has allowed more and more transistors to be placed on a chip, and as a result, the number of integrated circuits used in the CPU has decreased.

However, the processor architecture was still very, very far from what we see today. But the release of the IBM System/360 in 1964 brought the design of computers and CPUs of that era a little closer to the modern one - primarily in terms of working with software. The fact is that before the advent of this computer, all systems and processors worked only with program code written specifically for them. IBM was the first to apply a different philosophy in its computers: the entire line of CPUs of different performance levels supported the same instruction set, which made it possible to write software that would run on any modification of the System/360.

IBM System/360 computer

Returning to the topic of System/360 compatibility, it must be emphasized that IBM paid a lot of attention to this aspect. For example, modern zSeries computers still support software written for the System/360 platform.

Do not forget about DEC (Digital Equipment Corporation), namely its line of PDP (Programmed Data Processor) computers. The company was founded in 1957, and in 1960 it released its first minicomputer, the PDP-1. The device was an 18-bit system and was smaller than the mainframes of the time, occupying "only" a corner of a room. A CRT monitor was integrated into the computer. Interestingly, the world's first computer game, Spacewar!, was written specifically for the PDP-1 platform. The cost of the computer in 1960 was 120 thousand US dollars, significantly lower than the price of other mainframes. Nevertheless, the PDP-1 was not very popular.

Computer PDP-1

The first commercially successful DEC device was the PDP-8 computer, released in 1965. Unlike the PDP-1, the new system was 12-bit. The cost of the PDP-8 was 16 thousand US dollars - it was the cheapest minicomputer of its time. Thanks to such a low price, the device became affordable for industrial enterprises and scientific laboratories. As a result, about 50 thousand such computers were sold. A distinctive architectural feature of the PDP-8 processor was its simplicity: it had only four 12-bit registers, which were used for various types of tasks. In total, the PDP-8 contained just 519 logic gates.

Computer PDP-8. Frame from the film "Three Days of the Condor"

The architecture of PDP processors directly influenced the design of 4- and 8-bit processors, which will be discussed later.

Intel 4004

1971 went down in history as the year of the first microprocessors - yes, the kind of solutions used today in personal computers, laptops and other devices. And one of the first to make a name for itself was the then newly founded Intel, which launched the 4004 - the world's first commercially available single-chip processor.

Before turning directly to the 4004 processor, it is worth saying a few words about Intel itself. It was created in 1968 by engineers Robert Noyce and Gordon Moore, who had until then worked at Fairchild Semiconductor, together with Andrew Grove. By the way, it was Gordon Moore who published the well-known "Moore's Law", according to which the number of transistors in a processor doubles every year.
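The doubling rule quoted here is a simple exponential, and a few lines of Python make the growth concrete. This is a sketch of the every-year formulation mentioned above; note that Moore later revised the period to roughly two years, so the doubling period is left as a parameter rather than a claim:

```python
# Projected transistor count under the "doubles every year" rule,
# starting from the 4004's 2,300 transistors (both figures appear
# in the article; the doubling period itself is an assumption).
def transistors(start: int, years: int, doubling_period: float = 1.0) -> int:
    """Exponential growth: start * 2^(years / doubling_period)."""
    return round(start * 2 ** (years / doubling_period))

print(transistors(2300, 10))     # yearly doubling, 1971 -> 1981: 2355200
print(transistors(2300, 10, 2))  # with the revised two-year period: 73600
```

Ten yearly doublings already take the 4004's count past two million transistors, which is why the rule could not hold at the one-year pace for long.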

Already in 1969, just a year after its founding, Intel received an order from the Japanese company Nippon Calculating Machine (Busicom Corp.) to produce 12 chips for high-performance desktop calculators. The original chip design was proposed by the customer itself. However, Intel's engineers did not like this architecture, and Ted Hoff, an employee of the American company, proposed reducing the number of chips to four by using a universal CPU that would handle the arithmetic and logical functions. In addition to the central processing unit, the chip architecture included RAM for storing user data, as well as ROM for storing software. After the final structure of the microcircuits was approved, work continued on the design of the microprocessor.

In April 1970, the Italian physicist Federico Faggin, who had also worked at Fairchild, joined the Intel engineering team. He had extensive experience in computer logic design and in silicon-gate MOS (metal-oxide-semiconductor) technology. It was thanks to Federico's contribution that Intel's engineers managed to combine all the microcircuits into one chip. Thus the world's first microprocessor, the 4004, saw the light of day.

Intel 4004 processor

As for the specifications of the Intel 4004, by today's standards they were, of course, more than modest. The chip was manufactured using a 10-micron process technology, contained 2,300 transistors and operated at a frequency of 740 kHz, which meant it could perform 92,600 operations per second. A DIP16 package was used as the form factor. The dimensions of the Intel 4004 die were 3x4 mm, with rows of contacts along the sides. Initially, all rights to the chip belonged to Busicom, which intended to use the microprocessor exclusively in calculators of its own production. However, it ended up allowing Intel to sell the chips. In 1971, anyone could purchase a 4004 processor for about $200. By the way, a little later Intel bought back all the rights to the processor from Busicom, predicting an important role for the chip in the subsequent miniaturization of integrated circuits.
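The 92,600 figure can be sanity-checked from the clock frequency. A small sketch, assuming the commonly cited timing of 8 clock cycles (about 10.8 µs) per 4004 instruction; the cycle count is not stated in the article:

```python
# Rough sanity check of the 4004 throughput figure quoted above.
clock_hz = 740_000          # 740 kHz clock, as stated in the article
cycles_per_instruction = 8  # assumed: commonly cited 4004 timing

ops_per_second = clock_hz / cycles_per_instruction
print(round(ops_per_second))  # 92500, close to the quoted 92,600
```

The small discrepancy comes from rounding: the canonical 92,600 figure is derived from the 10.8 µs instruction cycle rather than from an exact division of the clock.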

Despite the availability of the processor, its scope of application was limited to the Busicom 141-PF calculator. For a long time there were also rumors that the Intel 4004 was used in the on-board computer of the Pioneer 10 unmanned spacecraft, which became the first interplanetary probe to fly near Jupiter. These rumors are directly refuted by the fact that the Pioneers' on-board computers were 18- or 16-bit systems, while the Intel 4004 was a 4-bit processor. However, it is worth noting that NASA engineers did consider using it in their devices, but deemed the chip insufficiently proven for such purposes.

Processor Intel 4040

Three years after the release of the Intel 4004 processor, its successor, the 4-bit Intel 4040, saw the light of day. The chip was produced using the same 10-micron process technology and ran at the same clock frequency of 740 kHz. However, the processor became somewhat more complex and received a richer feature set: the 4040 contained 3,000 transistors (700 more than the 4004). The form factor remained the same, although a 24-pin DIP was used instead of the 16-pin one. Among the improvements in the 4040, it is worth noting support for 14 new instructions, a stack depth increased to 7 levels, and support for interrupts. The 4040 was used mainly in test devices and equipment control systems.

Intel 8008

In addition to 4-bit processors, in the early 70s an 8-bit model appeared in Intel's arsenal - the 8008. At its core, the chip was an 8-bit version of the 4004 processor with a lower clock speed. This should not be surprising, since the development of the 8008 was carried out in parallel with that of the 4004. In 1969, Computer Terminal Corporation (later Datapoint) commissioned Intel to create a processor for Datapoint terminals, providing an architecture diagram. As with the 4004, Ted Hoff suggested integrating all the ICs into a single chip, and CTC agreed to this proposal. Development was slowly coming to an end, but in 1970 CTC abandoned both the chip and further cooperation with Intel. The reasons were banal: Intel's engineers did not meet the development deadlines, and the functionality of the chip did not meet CTC's requirements. The contract between the two companies was terminated, and the rights to all the work remained with Intel. The Japanese company Seiko became interested in the new chip, whose engineers wanted to use the new processor in their calculators.

Processor Intel 8008

One way or another, after the termination of cooperation with CTC, Intel renamed the chip under development the 8008. In April 1972, this processor became available for order at a price of $120. Left without CTC's support, Intel was cautious about the commercial prospects of the new chip, but the doubts proved unfounded - the processor sold well.

The technical characteristics of the 8008 were in many ways similar to those of the 4004. The processor was manufactured in an 18-pin DIP form factor on the same 10-micron process and contained 3,500 transistors. The internal stack supported 8 levels, and up to 16 KB of external memory could be addressed. The clock speed of the 8008 was set at 500 kHz (240 kHz lower than the 4004). Because of this, the 8-bit Intel processor often lost in speed to the 4-bit one.

Several computer systems were built around the 8008. The first of these was a not very well-known project called The Sac State 8008. This system was developed within the walls of the University of Sacramento under the guidance of engineer Bill Pentz. Although the Altair 8800 was long considered the first microcomputer, that title actually belongs to The Sac State 8008. The project was completed in 1972 and was a complete computer for processing and storing patient medical records. The computer included the 8008 processor itself, a hard disk, 8 KB of RAM, a color display, a mainframe interface and a proprietary operating system. The cost of such a system was extremely high, so The Sac State 8008 never achieved wide distribution, although for quite a long time it had no competitors in terms of performance.

This is what The Sac State 8008 looked like

However, The Sac State 8008 is not the only computer built around the 8008 processor. Other systems have been created, such as the American SCELBI-8H, the French Micral N, and the Canadian MCM/70.

Intel 8080

As in the case of the 4004 processor, some time later the 8008 also received an update in the form of the 8080 chip. However, in the case of the 8-bit solution, the changes made to the processor architecture were much more significant.

The Intel 8080 was introduced in April 1974. First of all, it should be noted that the production of the processor was transferred to a new 6-micron process technology. Moreover, N-MOS (n-channel transistors) technology was used in the production - unlike the 8008, which was manufactured using P-MOS logic. The use of a new process technology made it possible to place 6,000 transistors on a chip. The form factor used was a 40-pin DIP.

The 8080 received a richer instruction set, which included 16 data transfer instructions, 31 data processing instructions, 28 direct-address jump instructions and 5 control instructions. The clock frequency of the processor was 2 MHz - 4 times higher than its predecessor's. The 8080 also had a 16-bit address bus, which allowed it to address 64 KB of memory. These innovations ensured the high performance of the new chip - about 10 times higher than that of the 8008.
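The 64 KB figure follows directly from the width of the address bus: an n-bit bus can select 2^n distinct byte addresses. The same arithmetic produces the 16 MB (80286) and 4 GB (80386) limits mentioned earlier; note that the 24- and 32-bit bus widths for those chips are standard specifications assumed here, not stated in the text above. A quick sketch:

```python
# Addressable memory as a function of address bus width.
def addressable_bytes(bus_width_bits: int) -> int:
    """Maximum memory reachable with a given address bus width."""
    return 2 ** bus_width_bits

KB, MB, GB = 2 ** 10, 2 ** 20, 2 ** 30
print(addressable_bytes(16) // KB, "KB")  # 8080:  16-bit bus -> 64 KB
print(addressable_bytes(24) // MB, "MB")  # 80286: 24-bit bus -> 16 MB
print(addressable_bytes(32) // GB, "GB")  # 80386: 32-bit bus -> 4 GB
```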

Processor Intel 8080

The first revision of the 8080 processor contained a serious bug that could cause it to freeze. The error was corrected in an updated revision of the chip, called the 8080A and released only six months later.

Due to its high performance, the 8080 processor became very popular. It was even used in control systems for street lighting and traffic lights. However, it was mainly used in computer systems, the most famous of which was the MITS Altair-8800, introduced in 1975.

The Altair-8800 ran Altair BASIC as its base operating environment, and the S-100 interface was used as the bus, which a few years later became a standard for personal computers. The technical characteristics of the computer were more than modest: it had only 256 bytes of RAM, and no keyboard or monitor. The user interacted with the computer by entering programs and data in binary form, flipping a bank of small switches that could occupy two positions, up and down. The result was also read in binary form - from the dark and lit lamps. Nevertheless, the Altair-8800 became so popular that a small company like MITS simply couldn't keep up with the demand. The computer's popularity was directly helped by its low cost - 621 US dollars, while for 439 US dollars it could be purchased as a kit.

Computer Altair-8800

Returning to the topic of the 8080, it should be noted that there were many clones of it on the market. The market situation at that time was very different from what we see today, and it was profitable for Intel to license third-party companies to manufacture copies of the 8080. Many large companies were involved in the production of clones, such as National Semiconductor, NEC, Siemens and AMD. Yes, in the 70s AMD did not yet have its own processors - the company was exclusively engaged in producing "remakes" of other companies' chips at its own facilities.

Interestingly, there was also a Soviet copy of the 8080 processor. It was developed by the Kyiv Research Institute of Microdevices and was called the KR580VM80A. Several variants of this processor were released, including versions for use in military equipment.

"Square" KR580VM80A

In 1976, an updated version of the 8080 chip appeared, designated the 8085. The new die was manufactured on a 3-micron process technology, which made it possible to place 6,500 transistors on the chip. The maximum clock frequency of the processor was 6 MHz. The supported instruction set contained 79 instructions, among them two new instructions for managing interrupts.

Zilog Z80

The main event after the release of the 8080 was the departure of Federico Faggin. The Italian disagreed with the company's internal policies and decided to leave. Together with former Intel manager Ralph Ungermann and Japanese engineer Masatoshi Shima, he founded Zilog. Immediately after this, development began on a new processor architecturally similar to the 8080. Thus, in July 1976, the Zilog Z80 processor appeared, binary-compatible with the 8080.

Federico Faggin (left)

Compared to the Intel 8080, the Zilog Z80 had many improvements, such as an extended instruction set, new registers and instructions for them, new interrupt modes, two separate register blocks, and an integrated dynamic memory refresh circuit. In addition, the cost of the Z80 was much lower than the 8080.

As for the technical characteristics, the processor was manufactured to 3-micron technological standards using N-MOS and CMOS technologies. The Z80 contained 8,500 transistors, and its die area was 22.54 mm². The clock frequency of the Z80 ranged from 2.5 to 8 MHz. The data bus was 8 bits wide. The processor had a 16-bit address bus, and the amount of addressable memory was 64 KB. The Z80 was produced in several form factors: 40-pin DIP, 44-pin PLCC and 44-pin PQFP.

Zilog Z80 processor

The Z80 quickly surpassed all competing solutions in popularity, including the 8080. The processor was used in computers from companies such as Sharp and NEC, among others. The Z80 also found a home in Sega and Nintendo consoles. In addition, the processor was used in arcade machines, modems, printers, industrial robots and many other devices.

ZX Spectrum

A device called the ZX Spectrum deserves special mention, even though our story today does not concern the solutions of the 1980s. The computer was developed by the British company Sinclair Research and was released in 1982. The ZX Spectrum was far from the company's first product. In the early 1970s, the head of the company and its chief engineer, Clive Sinclair, sold radio components by mail. Toward the mid-70s, Clive created a pocket calculator, which became the company's first successful invention. Note that the company did not develop the components itself; it managed to find a winning combination of design, functionality and price, thanks to which the device sold well. The next Sinclair device was also a calculator, but with a richer set of functions. It was intended for a more "advanced" audience, but failed to achieve much success.

Clive Sinclair - the "father" of the ZX Spectrum

After the calculators, Sinclair decided to focus on developing full-fledged computers, and between 1980 and 1981, the ZX line of home computers appeared: the ZX80 and ZX81. But the most popular solution was a system released in 1982 called the ZX Spectrum. Initially, it was supposed to enter the market under the name ZX83, but at the last moment it was decided to rename the device to emphasize the computer's support for color images.

The ZX Spectrum became popular primarily due to its simplicity and low cost. The computer looked like a game console. A TV, used as a monitor, was connected to it via external interfaces, along with a cassette recorder that acted as a storage device. The Spectrum's case carried a multifunction keyboard with 40 rubber keys. Each key had up to seven meanings when working in different modes.

ZX Spectrum computer

The internal architecture of the ZX Spectrum was also quite simple. Thanks to the use of ULA (Uncommitted Logic Array) technology, the main part of the computer's circuitry was placed on a single chip. The CPU was a Zilog Z80 clocked at 3.5 MHz. The amount of RAM was 16 or 48 KB, although some third-party manufacturers produced 32 KB memory modules that plugged into one of the Spectrum's expansion ports. The amount of ROM was 16 KB, and a dialect of the BASIC language called Sinclair BASIC was stored in it. The ZX Spectrum supported only one-bit sound output through the built-in speaker. The computer worked only in graphics mode (8 colors and 2 brightness levels); consequently, there was no support for text mode. The maximum resolution was 256x192 pixels.
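The figures above pin down the size of the Spectrum's video memory. A minimal sketch; the 8x8-pixel attribute cells (one colour byte per cell) are the Spectrum's well-known attribute scheme and an assumption beyond the text:

```python
# Video memory implied by the ZX Spectrum's 256x192 graphics mode.
WIDTH, HEIGHT = 256, 192  # resolution stated above

bitmap_bytes = WIDTH * HEIGHT // 8              # 1 bit per pixel -> 6144
attribute_bytes = (WIDTH // 8) * (HEIGHT // 8)  # 32 x 24 colour cells -> 768
print(bitmap_bytes + attribute_bytes, "bytes of video RAM")  # 6912
```

Under 7 KB for a full colour screen is a good illustration of why the machine could get by with as little as 16 KB of RAM.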