08.06.2018

Blog of Dmitry Vassiyarov.

Binary code - where and how is it used?

Today I am especially glad to meet you, my dear readers, because I feel like a teacher who, at the very first lesson, begins to introduce the class to letters and numbers. And since we live in a world of digital technology, I will tell you what binary code is, the foundation on which it is built.

Let's start with the terminology and find out what "binary" means. For clarity, let's turn to our usual number system, which is called "decimal". That is, we use 10 digits, which make it convenient to operate with various numbers and to record them.

Following this logic, the binary system uses only two symbols. In our case they are "0" (zero) and "1" (one). And here I want to point out that hypothetically other symbols could have been chosen instead, but it is precisely these values, denoting the absence of a signal (0, "empty") and its presence (1), that will help us understand the structure of binary code further on.

Why do we need binary code?

Before the advent of computers, various automatic systems were used, the principle of operation of which was based on receiving a signal. The sensor is triggered, the circuit is closed and a certain device is turned on. No current in the signal circuit - no operation. It was electronic devices that made it possible to make progress in processing information represented by the presence or absence of voltage in the circuit.

Their further development led to the first processors, which did their job by processing signals made up of pulses alternating in a certain way. We will not go into the software details now; what matters for us is this: electronic devices turned out to be able to distinguish a given sequence of incoming signals. Of course, such a conditional combination could be described like this: "there is a signal"; "no signal"; "there is a signal"; "there is a signal". The notation can even be simplified: "yes"; "no"; "yes"; "yes".

But it is much easier to denote the presence of a signal with a one ("1") and its absence with a zero ("0"). Then instead of all that we can use a simple and concise binary code: 1011.

Of course, processor technology has come a long way since then, and chips now perceive not just sequences of signals but entire programs written as commands made up of individual characters.

But they are recorded with the same binary code of zeros and ones, corresponding to the presence or absence of a signal. Whether the signal is present or not does not matter: for the chip, either option is a single piece of information, which is called a "bit" (the official unit of measurement).

A character can be encoded by a sequence of several such signals. Two signals (or their absence) can describe only four combinations: 00; 01; 10; 11. This encoding method is called two-bit. But the encoding can also be:

  • Four-bit (as in the example 1011 above), which allows 2^4 = 16 character combinations;
  • Eight-bit (for example: 0101 0011; 0111 0001). At one time this was of the greatest interest for programming, because it covers 2^8 = 256 values. This made it possible to describe all the decimal digits, the Latin alphabet and special characters;
  • Sixteen-bit (1100 1001 0110 1010) or longer. Records of such length are reserved for modern, more complex tasks; modern processors use 32- and 64-bit architectures.

To be honest, there was no single official standard; it simply happened that the combination of eight characters became the standard measure of stored information, called a "byte". This applies even to a single letter written in 8-bit binary code. So, my dear friends, please remember (if anyone did not know):

8 bits = 1 byte.

That is the convention. Historically, machines with bytes of other sizes did exist, but today a byte is firmly eight bits. By the way, thanks to binary code we can estimate the size of files, measured in bytes, and the speed of information transfer on the Internet (bits per second).
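For instance, here is a minimal Python sketch of how bytes and bits relate when estimating file sizes and transfer times; the file size and link speed are made-up example figures:

```python
BITS_PER_BYTE = 8                    # 1 byte = 8 bits

file_size_bytes = 3_000_000          # hypothetical 3 MB file
link_speed_bps = 20_000_000          # hypothetical 20 Mbit/s connection

file_size_bits = file_size_bytes * BITS_PER_BYTE
seconds = file_size_bits / link_speed_bps

print(f"{file_size_bytes} bytes = {file_size_bits} bits")
print(f"Transfer time at 20 Mbit/s: {seconds:.1f} s")   # 1.2 s
```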

Binary encoding in action

To standardize the recording of information for computers, several encoding systems have been developed; one of them, ASCII, became widespread. The original ASCII uses 7 bits per character, and its extended versions use 8. The values in it are distributed as follows:

  • the first 32 codes (00000000 to 00011111) are control characters. They serve for service commands, output to a printer or screen, sound signals and text formatting;
  • codes 32 to 127 (00100000 - 01111111) cover the Latin alphabet, digits, auxiliary symbols and punctuation marks;
  • the rest, up to 255 (10000000 - 11111111), form the alternative part of the table, used for special tasks and for displaying national alphabets.

The interpretation of these values is shown in the ASCII code table.

If you think that the "0"s and "1"s are arranged chaotically, you are deeply mistaken. Using a number as an example, I will show you the pattern and teach you how to read numbers written in binary code. But first let us agree on a few conventions:

  • A byte of 8 characters will be read from right to left;
  • If in ordinary numbers we use the places for ones, tens and hundreds, here each bit position corresponds to a power of two. For an 8-bit value, from the highest bit to the lowest, the weights are: 128-64-32-16-8-4-2-1;
  • Now look at the binary code of a number, for example 00110011. Wherever there is a "1" in a position, we take the weight of that bit and sum the weights in the usual way: 0+0+32+16+0+0+2+1 = 51. You can verify this method against the code table (51 is the ASCII code of the character "3"); a small sketch of the same calculation is shown below.
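Here is a minimal Python sketch of that conversion; the helper name and the example value are chosen purely for illustration:

```python
def binary_to_decimal(bits: str) -> int:
    """Sum the powers of two for every position that holds a '1'."""
    value = 0
    for position, bit in enumerate(reversed(bits)):   # read right to left
        if bit == "1":
            value += 2 ** position
    return value

code = "00110011"
number = binary_to_decimal(code)   # 32 + 16 + 2 + 1 = 51
print(number)                      # 51
print(chr(number))                 # '3' -- matches the ASCII table
print(int(code, 2))                # 51, using Python's built-in conversion
```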

Now, my inquisitive friends, you not only know what binary code is, but can also decode the information written in it.

Language understandable to modern technology

Of course, the algorithm for reading binary code by processor devices is much more complicated. But with its help, you can write anything you want:

  • Text information with formatting options;
  • Numbers and any operations with them;
  • Graphic and video images;
  • Sounds, including those that go beyond our hearing;

In addition, thanks to the simplicity of this "presentation", binary information can be recorded in various ways - for example, as changes in a magnetic field.

The advantages of binary coding are complemented by practically unlimited possibilities for transmitting information over any distance. It is this method of communication that is used with spacecraft and artificial satellites.

So, today the binary system is the language that most of the electronic devices we use understand. And, most interestingly, no alternative to it is foreseen yet.

    I think that the information I have provided will be enough for you to get started. And then, if such a need arises, everyone will be able to delve into an independent study of this topic.

I will say goodbye now, and after a short break I will prepare a new blog article for you on some interesting topic.

    It's better if you tell me yourself ;)

    See you soon.

    Computers don't understand words and numbers the way humans do. Modern software allows the end user to ignore this, but at the lowest levels your computer operates with a binary electrical signal that has only two states: there is current or no current. To "understand" complex data, your computer must encode it in binary.

The binary system is based on two digits, 1 and 0, corresponding to the on and off states that your computer can understand. You are probably familiar with the decimal system. It uses ten digits, 0 to 9, and then moves to the next place to form two-digit numbers, with each place worth ten times the one before it. The binary system is similar, with each place worth twice the one before it.

    Counting in Binary

In binary, the first (rightmost) digit is worth 1 in decimal. The second is worth 2, the third 4, the fourth 8, and so on, doubling each time. Adding up the values of the digits that are set to 1 gives you the number in decimal.

    1111 (binary) = 8 + 4 + 2 + 1 = 15 (decimal)

Counting 0 as well gives us 16 possible values for four binary bits. Move up to 8 bits and you get 256 possible values. This takes up a lot more space to write down, since four decimal digits give us 10,000 possible values. Binary takes up more room, but computers handle binary far better than decimal, and for some things, like logic processing, binary is better suited than decimal.

It should be said that there is another base system used in programming: hexadecimal. Although computers do not work in hexadecimal internally, programmers use it to represent binary values in a human-readable form when writing code. This is because two hexadecimal digits can represent a whole byte, replacing eight binary digits. The hexadecimal system uses the digits 0-9 plus the letters A through F for the six extra values.
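A short Python illustration of how two hexadecimal digits cover exactly one byte (the sample value is arbitrary):

```python
byte_value = 0b01101001           # one byte written in binary
print(hex(byte_value))            # 0x69 -- two hex digits
print(format(byte_value, "08b"))  # 01101001 -- eight binary digits

# The six extra hexadecimal digits A-F stand for the values 10-15:
for d in range(10, 16):
    print(d, format(d, "X"))      # 10 A, 11 B, ... 15 F
```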

Why computers use binary

    Short answer: hardware and the laws of physics. Every character in your computer is an electrical signal, and in the early days of computing, measuring electrical signals was much more difficult. It was more reasonable to distinguish between only the "on" state, represented by a negative charge, and the "off" state, represented by a positive charge.

    For those who don't know why "off" is represented by a positive charge, it's because electrons have a negative charge, and more electrons means more current with a negative charge.

Thus the early room-sized computers used binary to build their systems, and although they relied on older, bulkier equipment, they operated on the same fundamental principles. Modern computers use what are called transistors to perform calculations with binary code.

    Here is a schematic of a typical transistor:

Basically, it allows current to flow from the source to the drain if there is current at the gate. This forms a binary switch. Manufacturers can make these transistors as small as 5 nanometers, about the width of two strands of DNA. This is how modern processors work, and even they can struggle to distinguish between the on and off states (although this is due to their nearly molecular size, where the oddities of quantum mechanics come into play).

Why only the binary system?

So you might be thinking, "Why only 0 and 1? Why not add another digit?" Although this is partly due to the traditions of computer design, adding one more digit would mean having to distinguish one more state of the current, not just "off" and "on".

The problem here is that if you want to use multiple voltage levels, you need a way to perform calculations with them easily, and hardware capable of doing that is not viable as a replacement for binary computation. For example, so-called ternary computers were developed in the 1950s, but development stopped there. Ternary logic is more efficient than binary, but so far there is no effective ternary replacement for the binary transistor, or at least none nearly as tiny.

    The reason we can't use ternary logic comes down to how transistors are connected in a computer and how they are used for mathematical calculations. The transistor receives information on two inputs, performs an operation and returns the result to one output.

Thus, binary arithmetic is easier for a computer than anything else. Boolean logic maps directly onto binary, with True and False corresponding to the On and Off states.

A truth table for a two-input binary gate has four possible input combinations, but since ternary gates take three possible values on each input, a ternary truth table would have nine. While a binary system has 16 possible two-input operators (2^(2^2)), a ternary system would have 19683 (3^(3^2)). Scaling becomes a problem because, although ternary is more efficient, it is also exponentially more complex.
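Those counts are easy to check with a short Python sketch that simply counts every possible truth table for a two-input gate:

```python
from itertools import product

def count_two_input_gates(states: int) -> int:
    """Number of distinct two-input gates in a logic with `states` values."""
    rows = states ** 2        # input combinations per truth table
    return states ** rows     # one output value chosen per row

print(count_two_input_gates(2))   # 16     binary:  2**(2**2)
print(count_two_input_gates(3))   # 19683  ternary: 3**(3**2)

# The 16 binary gates, enumerated as output columns over inputs (0,0)..(1,1):
tables = list(product((0, 1), repeat=4))
print(len(tables))                # 16
```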

Who knows? In the future we may well see ternary computers if binary logic runs into the limits of miniaturization. For now, though, the world will continue to run in binary.

A binary translator is a tool for translating binary code into text for reading or printing. You can translate binary into English text using two methods: ASCII and Unicode.

    Binary number system

The binary system is a base-2 number system: it consists of only two digits, 0 and 1.

Although the binary system was used for various purposes in ancient Egypt, China and India, in the modern world it has become the language of electronics and computers. It is the most efficient system for detecting the off (0) and on (1) states of an electrical signal, and it is the basis of the binary-to-text conversion that computers use to store data. Even the digital text you are reading right now consists of binary numbers; you can read it only because it has been decoded from binary back into words.

    What is ASCII?

    ASCII is a character encoding standard for electronic communication, short for American Standard Code for Information Interchange. In computers, telecommunications equipment, and other devices, ASCII codes represent text. While many additional characters are supported, most modern character encoding schemes are based on ASCII.

ASCII is the traditional name for the encoding system; the Internet Assigned Numbers Authority (IANA) prefers the updated name US-ASCII, which clarifies that the system was developed in the US and is based on the typographic characters predominantly used there. ASCII has been recognized as an IEEE milestone.

    Binary to ASCII

Originally based on the English alphabet, ASCII encodes 128 specified characters as seven-bit integers. 95 of the encoded characters are printable, including the digits 0 to 9, lowercase letters a to z, uppercase letters A to Z, and punctuation marks. In addition, the original ASCII specification included 33 non-printing control codes that originated with Teletype machines; most of these are now obsolete, although a few are still in common use, such as carriage return, line feed and tab.

For example, the binary number 1101001 = hexadecimal 69 = decimal 105 represents the ASCII lowercase letter "i" (the ninth letter of the alphabet).
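The same equivalence can be checked in a couple of lines of Python:

```python
ch = "i"
code = ord(ch)          # 105 in decimal
print(bin(code))        # 0b1101001
print(hex(code))        # 0x69
print(chr(0b1101001))   # 'i'
```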

    Using ASCII

As mentioned above, ASCII lets you translate computer text into human text. Simply put, it is a translator from binary to English. All computers receive messages in binary, as series of 0s and 1s. However, just as English and Spanish can share an alphabet yet have completely different words for similar things, computers also have their own versions of language. ASCII is used as a common method that allows all computers to exchange documents and files in the same language.

ASCII is important because it gave computers a common language from the start.

In 1963, ASCII was first used commercially as a seven-bit teleprinter code for the American Telephone & Telegraph TWX (TeletypeWriter eXchange) network. TWX initially used the earlier five-bit ITA2 code, which was also used by the competing Telex teleprinter system. Bob Bemer introduced features such as the escape sequence. According to Bemer, his British colleague Hugh McGregor Ross helped popularize the work - "so much so that the code that became ASCII was first called the Bemer-Ross Code in Europe." Because of his extensive work on ASCII, Bemer has been called the "Father of ASCII".

    Until December 2007, when the UTF-8 encoding surpassed it, ASCII was the most common character encoding on the World Wide Web; UTF-8 is backward compatible with ASCII.

    UTF-8 (Unicode)

UTF-8 is a character encoding that can be as compact as ASCII but can also represent any Unicode character (with some increase in file size). UTF stands for Unicode Transformation Format, and the "8" means that characters are represented using 8-bit blocks. The number of blocks needed to represent a character varies from 1 to 4. One really nice property of UTF-8 is that it is compatible with null-terminated strings: no character's encoding contains a null (0) byte.
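A quick Python sketch of that 1-to-4-byte behaviour; the sample characters are chosen purely as examples:

```python
# How many bytes UTF-8 spends on different characters (1 to 4):
for ch in ("A", "é", "€", "😀"):
    encoded = ch.encode("utf-8")
    print(ch, len(encoded), encoded.hex(" "))
# 'A'  -> 1 byte  (identical to its ASCII code, 0x41)
# 'é'  -> 2 bytes
# '€'  -> 3 bytes
# '😀' -> 4 bytes
# Note that none of the bytes is 0x00, so null-terminated strings stay intact.
```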

Unicode and the ISO/IEC 10646 Universal Character Set (UCS) have a much wider range of characters, and their various encoding forms have quickly begun to replace ISO/IEC 8859 and ASCII in many situations. While ASCII is limited to 128 characters, Unicode and UCS support more characters by separating the concepts of identification (using natural numbers called code points) and encoding (into the UTF-8, UTF-16 and UTF-32 binary formats).

    Difference between ASCII and UTF-8

ASCII was included as the first 128 characters of the Unicode character set (1991), so the 7-bit ASCII characters have the same numeric codes in both sets. This allows UTF-8 to be compatible with 7-bit ASCII, since a UTF-8 file containing only ASCII characters is identical to an ASCII file with the same sequence of characters. Just as importantly, forward compatibility is ensured: software that recognizes only 7-bit ASCII characters as special and does not alter bytes with the highest bit set (as is often done to support 8-bit ASCII extensions such as ISO-8859-1) will pass UTF-8 data through unmodified.

    Binary Translator Applications

    The most common use for this number system can be seen in computer technology. After all, the basis of all computer language and programming is the two-digit number system used in digital coding.

This is what the process of digital encoding amounts to: taking data and representing it with a limited set of bits of information, that is, with the zeros and ones of the binary system. The images on your computer screen are an example of this: a binary string encodes the color of each pixel.

If the screen uses a 16-bit code, each pixel receives instructions about which color to display based on which bits are 0 and which are 1. The result is more than 65,000 colors, represented by 2^16. Beyond that, you will find the binary number system in the branch of mathematics known as Boolean algebra.
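As an illustration, one common 16-bit pixel layout (RGB565, used here purely as an example) packs the three color channels into those 16 bits:

```python
def pack_rgb565(r: int, g: int, b: int) -> int:
    """Pack 5-bit red, 6-bit green and 5-bit blue into one 16-bit value."""
    return (r & 0x1F) << 11 | (g & 0x3F) << 5 | (b & 0x1F)

print(2 ** 16)                                   # 65536 possible pixel values
print(format(pack_rgb565(31, 63, 31), "016b"))   # white: 1111111111111111
```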

The values of logic and truth belong to this area of mathematics. Here, statements are assigned 0 or 1 depending on whether they are true or false. You can try binary-to-text, decimal-to-binary and binary-to-decimal conversion if you are looking for a tool that helps with this.

Advantages of the binary number system

The binary number system is useful for a number of things. For example, a computer adds numbers by flipping switches; you can reproduce a computer's addition by adding binary numbers yourself. There are two main reasons for using this number system in computers. First, it provides a reliable margin between the two signal states. Second, and most importantly, it minimizes the circuitry required, which reduces the space, the energy consumption and the cost.

    You can encode or translate binary messages written in binary numbers. For example,

(01101001) (01101100011011110111011001100101) (011110010110111101110101) is the encoded message. When you copy and paste these numbers into a binary translator, you will get the following English text:

    I love you

    It means

    (01101001) (01101100011011110111011001100101) (011110010110111101110101) = I love you
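A sketch of what such a translator does under the hood, in Python; the groups are the same ones shown above:

```python
groups = [
    "01101001",                          # i
    "01101100011011110111011001100101",  # love
    "011110010110111101110101",          # you
]

words = []
for group in groups:
    # split each group into 8-bit chunks and decode each chunk as ASCII
    chars = [chr(int(group[i:i + 8], 2)) for i in range(0, len(group), 8)]
    words.append("".join(chars))

print(" ".join(words))                   # i love you
```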


A single digital signal is not very informative, because it can take only two values: zero and one. Therefore, when it is necessary to transmit, process or store large amounts of information, several parallel digital signals are usually used. Moreover, all these signals must be considered together; each of them separately makes no sense. In such cases one speaks of binary codes, that is, codes formed by digital (logical, binary) signals. Each of the logical signals included in the code is called a bit. The more digits are included in the code, the more values the code can take.

Unlike the decimal coding of numbers familiar to us, that is, a code with base ten, in binary coding the base of the code is the number two (Fig. 2.9). That is, each digit of a binary code can take not ten values (as in the decimal code: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9) but only two: 0 and 1. The positional notation system remains the same, that is, the least significant digit is written on the right and the most significant on the left. But if in the decimal system the weight of each next digit is ten times greater than the weight of the previous one, in the binary system (with binary coding) it is twice as great. Each digit of a binary code is called a bit (from the English "binary digit").

Fig. 2.9. Decimal and binary encoding

Table 2.3 shows the correspondence between the first twenty numbers in the decimal and binary systems.

The table shows that the number of digits required in binary is much larger than in decimal. The largest number that can be written with three digits is 999 in decimal but only 7 (that is, 111) in binary. In general, an n-digit binary number can take 2^n different values, while an n-digit decimal number can take 10^n. In other words, writing large binary numbers (longer than about ten digits) becomes rather inconvenient.

Table 2.3. Correspondence of numbers in the decimal and binary systems
Decimal   Binary      Decimal   Binary
0         0           10        1010
1         1           11        1011
2         10          12        1100
3         11          13        1101
4         100         14        1110
5         101         15        1111
6         110         16        10000
7         111         17        10001
8         1000        18        10010
9         1001        19        10011

In order to simplify the writing of binary numbers, the so-called hexadecimal system (hexadecimal encoding) was proposed. In this case, the binary digits are divided into groups of four (starting from the least significant one), and each group is then encoded with a single character. Each such group is called a nibble (or tetrad), and two groups (8 bits) make a byte. Table 2.3 shows that a 4-bit binary number can take 16 different values (from 0 to 15). Therefore, the number of characters required for a hexadecimal code is also 16, which is where the name of the code comes from. The digits 0 to 9 are taken as the first 10 characters, and then the first six capital letters of the Latin alphabet are used: A, B, C, D, E, F.

Fig. 2.10. Binary and hexadecimal number notation

Table 2.4 shows examples of hexadecimal coding of the first 20 numbers (the binary values are given in brackets), and Fig. 2.10 shows an example of writing a binary number in hexadecimal form. To denote hexadecimal encoding, the letter "h" or "H" (from Hexadecimal) is sometimes appended to the number; for example, the notation A17Fh denotes the hexadecimal number A17F. Here A1 is the high byte of the number and 7F is the low byte. The whole number (in our case a two-byte one) is called a word.

Table 2.4. Hexadecimal coding system
Decimal   Hexadecimal (binary)      Decimal   Hexadecimal (binary)
0         0 (0)                     10        A (1010)
1         1 (1)                     11        B (1011)
2         2 (10)                    12        C (1100)
3         3 (11)                    13        D (1101)
4         4 (100)                   14        E (1110)
5         5 (101)                   15        F (1111)
6         6 (110)                   16        10 (10000)
7         7 (111)                   17        11 (10001)
8         8 (1000)                  18        12 (10010)
9         9 (1001)                  19        13 (10011)

To convert a hexadecimal number to decimal, multiply the value of its least significant (zeroth) digit by 1, the value of the next (first) digit by 16, the second digit by 256 (16^2), and so on, then add up all the products. For example, take the number A17F:

A17F = F*16^0 + 7*16^1 + 1*16^2 + A*16^3 = 15*1 + 7*16 + 1*256 + 10*4096 = 41343
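The same conversion in Python; the explicit loop mirrors the digit-by-digit multiplication, while int() does it in one call:

```python
digits = "A17F"
value = 0
for ch in digits:                # go from the most significant digit down
    value = value * 16 + int(ch, 16)
print(value)                     # 41343
print(int("A17F", 16))           # 41343, using the built-in conversion
```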

Every digital equipment specialist (designer, operator, repair technician, programmer and so on) needs to learn to use the hexadecimal and binary systems as freely as the familiar decimal one, so that no conversion between systems is ever required.

In addition to the codes considered above, there is also the so-called binary-coded decimal (BCD) representation of numbers. As in the hexadecimal code, each digit of a BCD code corresponds to four binary digits, but each group of four binary digits can take not sixteen but only ten values, encoded by the symbols 0, 1, 2, 3, 4, 5, 6, 7, 8, 9. That is, one decimal digit corresponds to four binary digits. As a result, writing numbers in BCD looks no different from writing them in ordinary decimal (Table 2.6), but in reality it is just a special binary code, each digit of which can take only two values: 0 and 1. BCD is sometimes very handy for driving decimal numeric displays and scoreboards.

Table 2.6. Binary-coded decimal (BCD) system
Decimal   BCD          Decimal   BCD
0         0            10        10000
1         1            11        10001
2         10           12        10010
3         11           13        10011
4         100          14        10100
5         101          15        10101
6         110          16        10110
7         111          17        10111
8         1000         18        11000
9         1001         19        11001

    In binary code, any arithmetic operations can be performed on numbers: addition, subtraction, multiplication, division.

Consider, for example, the addition of two 4-bit binary numbers: 0111 (decimal 7) and 1011 (decimal 11). Adding them is no more difficult than in decimal notation:

Adding 0 and 0 gives 0, adding 1 and 0 gives 1, and adding 1 and 1 gives 0 with a carry of 1 into the next digit. The result is 10010 (decimal 18). When adding any two n-bit binary numbers, the result is an n-bit or (n + 1)-bit number.
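A small Python sketch of the same column addition; bin() is used only to display the results:

```python
a, b = 0b0111, 0b1011             # 7 and 11
print(bin(a + b))                 # 0b10010, i.e. 18

# The carry propagation can also be spelled out bit by bit:
carry, result, position = 0, 0, 0
while a or b or carry:
    bit_sum = (a & 1) + (b & 1) + carry   # add the two current bits plus carry
    result |= (bit_sum & 1) << position   # keep the low bit of the sum
    carry = bit_sum >> 1                  # pass the high bit on as the carry
    a, b, position = a >> 1, b >> 1, position + 1
print(bin(result))                # 0b10010
```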

Subtraction is done in the same way. Let us subtract the number 0111 (7) from the number 10010 (18). We write the numbers aligned by their least significant digits and subtract just as in the decimal system:

Subtracting 0 from 0 gives 0, subtracting 0 from 1 gives 1, subtracting 1 from 1 gives 0, and subtracting 1 from 0 gives 1 with a borrow of 1 from the next digit. The result is 1011 (decimal 11).

    When subtracting, it is possible to obtain negative numbers, so you must use the binary representation of negative numbers.

    For the simultaneous representation of both binary positive and binary negative numbers, the so-called two's complement code is most often used. Negative numbers in this code are expressed by a number that, when added to a positive number of the same magnitude, will result in zero. In order to get a negative number, you need to change all the bits of the same positive number to the opposite ones (0 to 1, 1 to 0) and add 1 to the result. For example, let's write the number -5. The number 5 in binary code looks like 0101. We replace the bits with opposite ones: 1010 and add one: 1011. We sum the result with the original number: 1011 + 0101 = 0000 (we ignore the transfer to the fifth bit).
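The same -5 example, sketched in Python with a 4-bit width to match the text:

```python
BITS = 4
MASK = (1 << BITS) - 1                 # 0b1111

def twos_complement(n: int) -> int:
    """Invert all bits of n and add one, staying within BITS bits."""
    return (~n + 1) & MASK

five = 0b0101
minus_five = twos_complement(five)
print(format(minus_five, "04b"))                   # 1011
print(format((five + minus_five) & MASK, "04b"))   # 0000 -- carry discarded
```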

    Negative numbers in the two's complement code differ from positive ones in the value of the most significant digit: one in the most significant digit determines a negative number, and zero - a positive one.

In addition to the standard arithmetic operations, some specific operations are used in the binary number system, for example modulo-2 addition. This operation (denoted by the symbol ⊕) is bitwise, meaning there are no carries between digits and no borrowing from higher digits. The rules of modulo-2 addition are: 0 ⊕ 0 = 0, 0 ⊕ 1 = 1, 1 ⊕ 0 = 1, 1 ⊕ 1 = 0. The same operation is called the XOR function. For example, adding modulo 2 the two binary numbers 0111 and 1011 gives 1100.

Other bitwise operations on binary numbers include the AND function and the OR function. The AND function gives 1 only when the corresponding bits of both original numbers are 1; otherwise the result is 0. The OR function gives 1 when at least one of the corresponding bits of the original numbers is 1; otherwise the result is 0.
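These three bitwise operations look like this in Python; the sample numbers are the same pair used above:

```python
a, b = 0b0111, 0b1011

print(format(a ^ b, "04b"))   # 1100  XOR (modulo-2 addition, no carries)
print(format(a & b, "04b"))   # 0011  AND
print(format(a | b, "04b"))   # 1111  OR
```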

This lesson covers the topic "Coding information. Binary coding. Units of measurement of information". In it, learners will get an idea of how information is encoded, how computers perceive information, what units it is measured in, and what binary coding is.

Topic: Information around us

    Lesson: Coding information. Binary coding. Information units

    This lesson will cover the following questions:

    1. Coding as a change in the form of information presentation.

    2. How does a computer recognize information?

    3. How to measure information?

    4. Units of measurement of information.

    In the world of codes

    Why do people encode information?

1. To hide it from others (Leonardo da Vinci's mirror writing, military ciphers).

2. To write information down concisely (shorthand, abbreviations, road signs).

    3. For easier processing and transmission (Morse code, translation into electrical signals - machine codes).

    Coding is the representation of information by some code.

A code is a system of symbols for representing information.

    Ways to encode information

    1. Graphic (see Fig. 1) (using drawings and signs).

Fig. 1. System of signal flags (Source)

    2. Numerical (using numbers).

    For example: 11001111 11100101.

    3. Symbolic (using alphabetic characters).

    For example: NKMBM CHGYOU.

Decoding is the action of restoring the original form in which the information was presented. To decode, you need to know the code and the encoding rules.

The means of encoding and decoding is a correspondence table (code table). Examples are the correspondence between different number systems (24 - XXIV) or the correspondence of the alphabet to arbitrary symbols (Fig. 2).


Fig. 2. An example of a cipher (Source)

    Information Encoding Examples

    An example of information encoding is Morse code (see Fig. 3).

Fig. 3. Morse code

    Morse code uses only 2 characters - a dot and a dash (short and long sound).

    Another example of encoding information is the flag alphabet (see Fig. 4).

Fig. 4. Flag alphabet

Another example is the alphabet of signal flags (see Fig. 5).

Fig. 5. ABC of flags

    A well-known example of coding is the musical alphabet (see Fig. 6).

Fig. 6. Musical alphabet

    Consider the following problem:

Using the flag alphabet table (see Fig. 7), solve it:

Fig. 7

Senior mate Lom is taking an exam before Captain Vrungel. Help him read the following text (see Fig. 8):

The signals around us are mostly two-state ones, for example:

    Traffic light: red - green;

    Question: yes - no;

    Lamp: on - off;

    It is possible - it is impossible;

Good - bad;

True - false;

    Back and forth;

    Yes - no;

Each of these signals carries an amount of information equal to 1 bit.

1 bit is the amount of information that allows us to choose one option out of two possible ones.

    A computer is an electrical machine that runs on electronic circuits. In order for the computer to recognize and understand the input information, it must be translated into computer (machine) language.

    The algorithm intended for the performer must be written, that is, encoded, in a language understandable to the computer.

    These are electrical signals: current flows or current does not flow.

Machine binary language is a sequence of "0"s and "1"s. Each binary digit can take the value 0 or 1.

    Each digit of the machine binary code carries an amount of information equal to 1 bit.

A binary digit that represents the smallest unit of information is called a bit. A bit can be either 0 or 1. The presence of a magnetic or electronic signal in the computer means 1, its absence means 0.

A string of 8 bits is called a byte. The computer processes such a string as a single character (a digit, a letter).

Consider an example. The word ALICE consists of 5 letters, each of which is represented in computer language by one byte (see Fig. 10). So ALICE can be measured as 5 bytes.
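A quick way to check this in Python; the word is encoded as ASCII, one byte per letter:

```python
word = "ALICE"
data = word.encode("ascii")      # one byte per letter
print(len(data))                 # 5 bytes
print(" ".join(format(b, "08b") for b in data))
# 01000001 01001100 01001001 01000011 01000101
```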

Fig. 10. Binary code (Source)

    In addition to bits and bytes, there are other units of information.

    Bibliography

    1. Bosova L.L. Informatics and ICT: Textbook for Grade 5. - M.: BINOM. Knowledge Lab, 2012.

    2. Bosova L.L. Informatics: Workbook for grade 5. - M.: BINOM. Knowledge Lab, 2010.

    3. Bosova L.L., Bosova A.Yu. Informatics lessons in grades 5-6: Methodological guide. - M.: BINOM. Knowledge Lab, 2010.

2. Festival "Open Lesson".

    Homework

    1. §1.6, 1.7 (Bosova L.L. Informatics and ICT: Textbook for Grade 5).

    2. Page 28, tasks 1, 4; p. 30, assignments 1, 4, 5, 6 (Bosova L.L. Informatics and ICT: Textbook for grade 5).