We denote the rows of the matrix A by e_i = (a_i1, a_i2, ..., a_in) (for example,
e_1 = (a_11, a_12, ..., a_1n), e_2 = (a_21, a_22, ..., a_2n), etc.). Each of them is a row matrix that can be multiplied by a number or added to another row according to the general rules of operations with matrices.

A linear combination of the rows e_1, e_2, ..., e_k is a sum of these rows multiplied by arbitrary real numbers:
e = λ_1 e_1 + λ_2 e_2 + ... + λ_k e_k, where λ_1, λ_2, ..., λ_k are arbitrary numbers (the coefficients of the linear combination).
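As an illustration, the sketch below (assuming numpy; the matrix and coefficients are hypothetical, not taken from the text) computes such a linear combination of rows element by element:

```python
import numpy as np

# A hypothetical 3x4 matrix; its rows play the role of e_1, e_2, e_3.
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 3.0, 2.0],
              [2.0, 5.0, 3.0, 4.0]])

e1, e2, e3 = A          # unpack the rows as 1-D arrays (row matrices)

lam = [2.0, 1.0, -1.0]  # arbitrary coefficients lambda_1, lambda_2, lambda_3

# The linear combination e = lambda_1*e1 + lambda_2*e2 + lambda_3*e3
e = lam[0] * e1 + lam[1] * e2 + lam[2] * e3
print(e)                # -> [0. 0. 0. 0.]: these particular rows are linearly dependent
```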

The rows e_1, e_2, ..., e_m of a matrix are called linearly dependent if there exist numbers λ_1, λ_2, ..., λ_m, not all equal to zero, such that the linear combination of these rows is equal to the zero row:
λ_1 e_1 + λ_2 e_2 + ... + λ_m e_m = 0, where 0 = (0 0 ... 0).

Linear dependence of the rows of a matrix means that at least one row of the matrix is a linear combination of the others. Indeed, let, for definiteness, the last coefficient λ_m ≠ 0. Then, dividing both sides of the equality by λ_m, we obtain an expression for the last row as a linear combination of the remaining rows:
e_m = -(λ_1/λ_m)e_1 - (λ_2/λ_m)e_2 - ... - (λ_{m-1}/λ_m)e_{m-1}.

If a linear combination of the rows is zero only when all the coefficients are zero, i.e. λ_1 e_1 + λ_2 e_2 + ... + λ_m e_m = 0 ⇔ λ_k = 0 for all k, then the rows are called linearly independent.

Matrix rank theorem. The rank of a matrix is equal to the maximum number of its linearly independent rows (columns), in terms of which all its other rows (columns) can be linearly expressed.

Let us prove this theorem. Let an m × n matrix A have rank r (r(A) ≤ min(m, n)). Then there exists a nonzero minor of order r. Any such minor will be called a basis minor. For definiteness, let it be the minor located in the first r rows and first r columns of A.

The rows of the matrix that enter this minor will also be called basis rows.

Let us prove that the rows e_1, e_2, ..., e_r of the matrix are then linearly independent. Assume the opposite, i.e. that one of these rows, say the r-th, is a linear combination of the others: e_r = λ_1 e_1 + λ_2 e_2 + ... + λ_{r-1} e_{r-1}. Then, if we subtract from the elements of the r-th row the elements of the 1st row multiplied by λ_1, the elements of the 2nd row multiplied by λ_2, and so on, and finally the elements of the (r-1)-th row multiplied by λ_{r-1}, the r-th row becomes zero. By the properties of determinants, the basis minor does not change under these operations, yet at the same time it would have to equal zero. This contradiction proves the linear independence of the rows.

Let us now prove that any r+1 rows of the matrix are linearly dependent, i.e. that any row can be expressed in terms of the basis rows.

Let us augment the minor considered above with one more row (the i-th) and one more column (the j-th). As a result, we obtain a minor of order r+1, which, by the definition of rank, is equal to zero.

where the coefficients are certain numbers (some or even all of them may be equal to zero). This means that the following equalities hold between the elements of the columns:

From (3.3.1) it follows that

If equality (3.3.3) holds if and only if all the coefficients are equal to zero, then the rows are called linearly independent. Relation (3.3.2) shows that if one of the rows is linearly expressed in terms of the others, then the rows are linearly dependent.

The converse is also easy to see: if the rows are linearly dependent, then there is a row that is a linear combination of the other rows.

Let, for example, the last coefficient in (3.3.3) be nonzero; then the corresponding row can be expressed in terms of the others.

Definition. Let a minor M of order r be chosen in the matrix A, and let a minor M' of order r+1 of the same matrix entirely contain the minor M inside it. In this case we say that the minor M' borders the minor M (or that M' is a bordering minor for M).

We now prove an important lemma.

Lemma on bordering minors. If a minor M of order r of the matrix A is nonzero, and all the minors bordering it are equal to zero, then any row (column) of the matrix A is a linear combination of the rows (columns) that make up M.

Proof. Without loss of generality, we assume that the nonzero minor M of order r is located in the upper left corner of the matrix A.

For the first r rows of the matrix A the statement of the lemma is obvious: it suffices to include in the linear combination the row itself with a coefficient equal to one and all the other rows with coefficients equal to zero.

We now prove that the remaining rows of the matrix A are linearly expressed in terms of the first r rows. To do this, we construct a minor of order r+1 by adding to the minor M the k-th row (k > r) and the l-th column (l = 1, 2, ..., n).

The resulting minor is equal to zero for all k and l. If l ≤ r, it is equal to zero because it contains two identical columns. If l > r, the resulting minor is a bordering minor for M and is therefore equal to zero by the hypothesis of the lemma.

Let us expand this minor along the elements of the last (l-th) column. The cofactor of the element a_kl in this expansion coincides with the nonzero minor M; dividing by it, we get:

(3.3.6)

Expression (3.3.6) means that the k-th row of the matrix A is linearly expressed in terms of the first r rows.

Since the values of the minors do not change when the matrix is transposed (by the properties of determinants), everything proved above is also true for columns. The lemma is proved.

Corollary I. Any row (column) of a matrix is a linear combination of its basis rows (columns). Indeed, the basis minor of the matrix is different from zero, and all the minors bordering it are equal to zero.

Corollary II. A determinant of order n is equal to zero if and only if it contains linearly dependent rows (columns). The sufficiency of the linear dependence of rows (columns) for the determinant to vanish was proved earlier as a property of determinants.

Let us prove the necessity. Suppose a square matrix of order n is given whose only minor of order n, the determinant itself, is equal to zero. It follows that the rank of this matrix is less than n, i.e. there is at least one row that is a linear combination of the basis rows of this matrix.

Let us prove one more theorem about the rank of a matrix.

Theorem. The maximum number of linearly independent rows of a matrix is equal to the maximum number of its linearly independent columns and is equal to the rank of this matrix.

Proof. Let the rank of the matrix A be equal to r. Then its r basis rows are linearly independent, for otherwise the basis minor would be equal to zero. On the other hand, any r+1 or more rows are linearly dependent. Assuming the contrary, we could find a nonzero minor of order greater than r by Corollary 2 of the previous lemma, which contradicts the fact that the maximum order of nonzero minors is r. Everything proved for rows is also true for columns.

In conclusion, we present one more method of finding the rank of a matrix. The rank of a matrix can be determined by finding a minor of maximum order that is different from zero.

At first glance, this requires computing a finite, but possibly very large, number of minors of the matrix.

The following theorem, however, allows significant simplifications to be made.

Theorem. If a minor M of order r of the matrix A is nonzero and all the minors bordering it are equal to zero, then the rank of the matrix is r.

Proof. It suffices to show that, under the conditions of the theorem, any system of S > r rows of the matrix is linearly dependent (from this it will follow that r is the maximum number of linearly independent rows of the matrix, and that all of its minors of order greater than r are equal to zero).

Assume the opposite: let these S rows be linearly independent. By the lemma on bordering minors, each of them is linearly expressed in terms of the rows in which the minor M is located and which, since M is nonzero, are linearly independent:

Now consider the following linear combination:

or

Using (3.3.7) and (3.3.8), we obtain

which contradicts the linear independence of the rows.

Consequently, our assumption is false, and any S > r rows are linearly dependent under the conditions of the theorem. The theorem is proved.

Let us consider a rule for calculating the rank of a matrix based on this theorem: the method of bordering minors.

When calculating the rank of a matrix, one should pass from minors of lower orders to minors of higher orders. If a nonzero minor of order r has already been found, then only the minors of order r+1 bordering this minor need to be computed. If they are all zero, then the rank of the matrix is r. This method is also used when we not only calculate the rank of the matrix, but also determine which columns (rows) make up the basis minor of the matrix.
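A minimal computational sketch of this procedure follows (Python with numpy is assumed; the function name, tolerance, and test matrix are illustrative choices, not part of the original text):

```python
import numpy as np

def rank_by_bordering_minors(A, tol=1e-10):
    """Rank via the method of bordering minors (an illustrative sketch).

    Keep a current nonzero minor, given by its row and column index sets,
    and try to border it with one extra row and one extra column.  When
    every bordering minor vanishes, the order of the current minor is the rank.
    """
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    nonzero = np.argwhere(np.abs(A) > tol)
    if nonzero.size == 0:
        return 0, [], []                       # the zero matrix has rank 0
    rows, cols = [nonzero[0][0]], [nonzero[0][1]]
    while True:
        found = False
        for i in range(m):
            if i in rows:
                continue
            for j in range(n):
                if j in cols:
                    continue
                r, c = rows + [i], cols + [j]  # candidate bordering minor
                if abs(np.linalg.det(A[np.ix_(r, c)])) > tol:
                    rows, cols, found = r, c, True
                    break
            if found:
                break
        if not found:
            # all bordering minors are zero: the current order is the rank
            return len(rows), rows, cols

A = np.array([[1, 2, 3, 4],
              [2, 4, 6, 8],
              [1, 0, 1, 0]])
print(rank_by_bordering_minors(A)[0])   # -> 2
```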

Example. Calculate the rank of a matrix by the method of bordering minors.

Solution. The second-order minor in the upper left corner of the matrix A is non-zero:

.

However, all the third-order minors bordering it are equal to zero:

; ;
; ;
; .

Therefore, the rank of the matrix A is equal to two: r(A) = 2.

The first and second rows and the first and second columns of this matrix are basis rows and columns. The remaining rows and columns are linear combinations of them. Indeed, the following equalities hold for the rows:

In conclusion, we note the validity of the following properties:

1) the rank of a product of matrices is not greater than the rank of each of the factors;

2) the rank of the product of an arbitrary matrix A, on the right or on the left, by a nonsingular square matrix Q is equal to the rank of the matrix A.
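These two properties can be spot-checked numerically; the sketch below (numpy assumed, random test matrices of my own choosing) verifies them for one example:

```python
import numpy as np

rank = np.linalg.matrix_rank
rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(4, 5)).astype(float)
B = rng.integers(-3, 4, size=(5, 3)).astype(float)
Q = np.array([[2., 1., 0., 0.],
              [0., 1., 0., 0.],
              [0., 0., 3., 1.],
              [0., 0., 0., 1.]])              # nonsingular: det(Q) = 6

print(rank(A @ B) <= min(rank(A), rank(B)))   # property 1 holds: True
print(rank(Q @ A) == rank(A))                 # property 2 (left multiplication): True
```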

Polynomial matrices

Definition. A polynomial matrix, or λ-matrix, is a rectangular matrix whose elements are polynomials in one variable λ with numerical coefficients.

Elementary transformations can be performed on λ-matrices. These include:

Permutation of two rows (columns);

Multiplying a row (column) by a non-zero number;

Adding to one row (column) another row (column), multiplied by any polynomial.

Two λ-matrices A(λ) and B(λ) of the same size are called equivalent if it is possible to pass from one of them to the other using a finite number of elementary transformations.

Example. Prove the equivalence of matrices

, .

1. Swap the first and second columns in the matrix:

.

2. From the second row, subtract the first, multiplied by ( ):

.

3. Multiply the second row by (-1) and note that

.

4. From the second column, subtract the first, multiplied by ( ); we get

.

The set of all λ-matrices of a given size is divided into disjoint classes of equivalent matrices. Matrices that are equivalent to one another form one class, and non-equivalent matrices belong to different classes.

Each class of equivalent matrices is characterized by a canonical, or normal, λ-matrix of the given size.

Definition. A canonical, or normal, λ-matrix of size m × n is a λ-matrix that has polynomials E_1(λ), E_2(λ), ..., E_p(λ) on the main diagonal, where p is the smaller of the numbers m and n (p = min(m, n)); the polynomials that are not identically zero have leading coefficients equal to 1, and each subsequent polynomial is divisible by the preceding one. All elements outside the main diagonal are 0.

It follows from the definition that if among the polynomials there are polynomials of degree zero, then they are at the beginning of the main diagonal. If there are zeros, then they are at the end of the main diagonal.

The matrix of the previous example is canonical. The matrix

is also canonical.

Each class of λ-matrices contains a unique canonical λ-matrix, i.e. each λ-matrix is equivalent to exactly one canonical matrix, which is called the canonical form, or normal form, of the given matrix.

The polynomials on the main diagonal of the canonical form of a given λ-matrix are called the invariant factors of that matrix.

One of the methods for calculating the invariant factors is to reduce the given λ-matrix to canonical form.

So, for the matrix of the previous example, the invariant factors are

It follows from what has been said that the presence of the same set of invariant factors is a necessary and sufficient condition for the equivalence of λ-matrices.

The reduction of a λ-matrix to canonical form reduces to computing the invariant factors

E_k(λ) = D_k(λ) / D_{k-1}(λ), k = 1, 2, ..., r; D_0(λ) = 1,

where r is the rank of the λ-matrix and D_k(λ) is the greatest common divisor of its minors of order k, taken with leading coefficient equal to 1.

Example. Let the following λ-matrix be given:

.

Solution. Obviously, the greatest common divisor of the first-order minors is , i.e. .

We define second-order minors:

, etc.

These data already suffice to draw a conclusion: therefore, .

We define

,

Consequently, .

Thus, the canonical form of this matrix is the following λ-matrix:

.
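A small sketch of this computation (sympy assumed; the λ-matrix below is an illustrative one of my own, since the matrix of the example above is not reproduced here) finds D_k(λ) as the monic gcd of the k-th order minors and the invariant factors E_k(λ) = D_k(λ)/D_{k-1}(λ):

```python
import sympy as sp
from itertools import combinations
from functools import reduce

lam = sp.symbols('lambda')

# An illustrative lambda-matrix (not the one from the example above).
A = sp.Matrix([[lam, 0,       0],
               [0,   lam + 1, 0],
               [0,   0,       lam*(lam + 1)]])

def gcd_of_minors(M, k):
    """D_k(lambda): monic gcd of all k-th order minors of M (0 if they all vanish)."""
    minors = [M[list(r), list(c)].det()
              for r in combinations(range(M.rows), k)
              for c in combinations(range(M.cols), k)]
    d = reduce(sp.gcd, minors)
    return sp.Poly(d, lam).monic().as_expr() if d != 0 else sp.Integer(0)

r = 3                                    # rank of A: its determinant is a nonzero polynomial
D = [sp.Integer(1)]                      # D_0(lambda) = 1
for k in range(1, r + 1):
    D.append(gcd_of_minors(A, k))

E = [sp.cancel(D[k] / D[k - 1]) for k in range(1, r + 1)]
print(E)   # invariant factors: [1, lambda**2 + lambda, lambda**2 + lambda]
```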

A matrix polynomial is an expression of the form

A(λ) = A_0 λ^S + A_1 λ^{S-1} + ... + A_S,

where λ is a variable and A_0, A_1, ..., A_S are square matrices of order n with numerical elements.

If A_0 ≠ 0, then S is called the degree of the matrix polynomial, and n is called the order of the matrix polynomial.

Any square λ-matrix can be represented as a matrix polynomial. Obviously, the converse statement is also true, i.e. any matrix polynomial can be represented as a square λ-matrix.

The validity of these statements clearly follows from the properties of operations on matrices. Let's look at the following examples:

Example. Represent the polynomial matrix

as a matrix polynomial. This can be done as follows:

.

Example. Matrix polynomial

can be represented as the following polynomial matrix (λ-matrix):

.

This interchangeability of matrix polynomials and polynomial matrices plays an essential role in the mathematical apparatus of factor and component analysis methods.

Matrix polynomials of the same order can be added, subtracted, and multiplied in the same way as ordinary polynomials with numerical coefficients. However, it should be remembered that the multiplication of matrix polynomials, generally speaking, is not commutative, since matrix multiplication is not commutative.

Two matrix polynomials are called equal if their coefficients are equal, i.e. if the corresponding matrix coefficients of the same powers of the variable λ coincide.

The sum (difference) of two matrix polynomials is the matrix polynomial whose coefficient of each power of the variable λ is equal to the sum (difference) of the coefficients of the same power in the two given polynomials.

To multiply one matrix polynomial by another, one must multiply each term of the first matrix polynomial by each term of the second, add the resulting products, and collect like terms.

The degree of a product of matrix polynomials is less than or equal to the sum of the degrees of the factors.

Operations on matrix polynomials can be performed by means of operations on the corresponding λ-matrices.

To add (subtract) matrix polynomials, it suffices to add (subtract) the corresponding λ-matrices. The same applies to multiplication: the λ-matrix of a product of matrix polynomials is equal to the product of the λ-matrices of the factors.
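As a sketch of this rule (numpy assumed; the coefficient matrices are hypothetical numerical data), matrix polynomials can be stored as lists of coefficient matrices and multiplied by collecting like powers; the example also shows that the product depends on the order of the factors:

```python
import numpy as np

# A(lmb) = A0*lmb + A1 and B(lmb) = B0*lmb + B1, both of degree 1 and order 2.
A0 = np.array([[1., 0.], [0., 2.]]); A1 = np.array([[0., 1.], [1., 0.]])
B0 = np.array([[0., 1.], [0., 0.]]); B1 = np.array([[1., 1.], [0., 1.]])

def poly_mul(P, Q):
    """Multiply matrix polynomials given as lists of coefficient matrices
    (highest power first), collecting like powers of the variable."""
    R = [np.zeros_like(P[0]) for _ in range(len(P) + len(Q) - 1)]
    for i, Pi in enumerate(P):
        for j, Qj in enumerate(Q):
            R[i + j] = R[i + j] + Pi @ Qj
    return R

AB = poly_mul([A0, A1], [B0, B1])
BA = poly_mul([B0, B1], [A0, A1])
print([M.tolist() for M in AB])
print([M.tolist() for M in BA])   # differs from AB: the multiplication is not commutative
```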

On the other hand, the matrix polynomials A(λ) and B(λ) can be written in the form indicated above,

where B_0 is a nonsingular matrix.

When A(λ) is divided by B(λ), there exist a uniquely determined right quotient and a right remainder R_1,

where the degree of R_1 is less than the degree of B(λ), or R_1 = 0 (division without remainder); similarly, there exist a left quotient and a left remainder.

Consider an arbitrary, not necessarily square, matrix A of size m × n.

Matrix rank.

The concept of the rank of a matrix is related to the concept of linear dependence (independence) of the rows (columns) of a matrix. We consider this concept for rows; for columns everything is analogous.

Denote the rows of the matrix A by:

e_1 = (a_11, a_12, ..., a_1n); e_2 = (a_21, a_22, ..., a_2n); ...; e_m = (a_m1, a_m2, ..., a_mn).

Two rows are equal, e_k = e_s, if a_kj = a_sj, j = 1, 2, ..., n.

Arithmetic operations on the rows of the matrix (addition, multiplication by a number) are introduced elementwise: λe_k = (λa_k1, λa_k2, ..., λa_kn);

e_k + e_s = [(a_k1 + a_s1), (a_k2 + a_s2), ..., (a_kn + a_sn)].

A row e is called a linear combination of the rows e_1, e_2, ..., e_k if it is equal to the sum of these rows multiplied by arbitrary real numbers:

e = λ_1 e_1 + λ_2 e_2 + ... + λ_k e_k.

The rows e_1, e_2, ..., e_m are called linearly dependent if there exist real numbers λ_1, λ_2, ..., λ_m, not all equal to zero, such that the linear combination of these rows is equal to the zero row: λ_1 e_1 + λ_2 e_2 + ... + λ_m e_m = 0, where 0 = (0, 0, ..., 0). (1)

If the linear combination is equal to zero only when all the coefficients λ_i are equal to zero (λ_1 = λ_2 = ... = λ_m = 0), then the rows e_1, e_2, ..., e_m are called linearly independent.

Theorem 1. For the rows e_1, e_2, ..., e_m to be linearly dependent, it is necessary and sufficient that one of these rows be a linear combination of the others.

Proof. Necessity. Let the rows e_1, e_2, ..., e_m be linearly dependent and, for definiteness, let λ_m ≠ 0 in (1). Then

e_m = -(λ_1/λ_m)e_1 - (λ_2/λ_m)e_2 - ... - (λ_{m-1}/λ_m)e_{m-1}.

Thus the row e_m is a linear combination of the remaining rows. Q.E.D.

Sufficiency. Let one of the rows, for example e_m, be a linear combination of the other rows. Then there are numbers λ_1, λ_2, ..., λ_{m-1} such that e_m = λ_1 e_1 + λ_2 e_2 + ... + λ_{m-1} e_{m-1}, which can be rewritten as

λ_1 e_1 + λ_2 e_2 + ... + λ_{m-1} e_{m-1} + (-1)e_m = 0,

where at least one of the coefficients, namely (-1), is nonzero. Hence the rows are linearly dependent. Q.E.D.

Definition. A minor of order k of an m × n matrix A is a determinant of order k whose elements lie at the intersection of any k rows and any k columns of the matrix A (k ≤ min(m, n)).

Example. The first-order minors are the individual elements of the matrix; the second-order minors are the determinants of its 2 × 2 submatrices; the third-order minor is the determinant of the matrix itself.

A third-order matrix has 9 first-order minors, 9 second-order minors, and 1 third-order minor (the determinant of the matrix).

Definition. The rank of a matrix A is the highest order of the nonzero minors of this matrix. Notation: rg A or r(A).

Matrix rank properties.

1) the rank of an m × n matrix A does not exceed the smaller of its dimensions, i.e.

r(A)≤min(m,n).

2) r(A) = 0 if and only if all elements of the matrix are equal to 0, i.e. A = 0.

3) For a square matrix A of order n, r(A) = n if and only if A is nonsingular.



(The rank of a diagonal matrix is ​​equal to the number of its non-zero diagonal elements).

4) If the rank of a matrix is r, then the matrix has at least one minor of order r that is not equal to zero, and all minors of higher orders are equal to zero.

For matrix ranks, the following relations hold (a numerical spot-check of several of them is sketched after the list):

1) r(A + B) ≤ r(A) + r(B);

2) r(A + B) ≥ |r(A) - r(B)|;

3) r(AB) ≤ min(r(A), r(B));

4) r(A^T A) = r(A);

5) r(AB) = r(A) if B is a square nonsingular matrix;

6) r(AB) ≥ r(A) + r(B) - n, where n is the number of columns of the matrix A (equal to the number of rows of the matrix B).
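The announced numerical spot-check (numpy assumed; the random matrices are mine, chosen only so that the dimensions match):

```python
import numpy as np

rank = np.linalg.matrix_rank
rng = np.random.default_rng(1)
A = rng.integers(-2, 3, size=(4, 6)).astype(float)
B = rng.integers(-2, 3, size=(4, 6)).astype(float)
C = rng.integers(-2, 3, size=(6, 5)).astype(float)

print(rank(A + B) <= rank(A) + rank(B))        # relation 1
print(rank(A + B) >= abs(rank(A) - rank(B)))   # relation 2
print(rank(A @ C) <= min(rank(A), rank(C)))    # relation 3
print(rank(A.T @ A) == rank(A))                # relation 4
print(rank(A @ C) >= rank(A) + rank(C) - 6)    # relation 6 (here n = 6)
```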

Definition. A nonzero minor of order r(A) is called a basis minor. (A matrix A may have several basis minors.) The rows and columns at whose intersection a basis minor stands are called basis rows and basis columns, respectively.

Theorem 2 (basis minor theorem). The basis rows (columns) are linearly independent. Any row (any column) of the matrix A is a linear combination of the basis rows (columns).

Proof (for rows). If the basis rows were linearly dependent, then by Theorem 1 one of these rows would be a linear combination of the other basis rows; then, without changing the value of the basis minor, we could subtract this linear combination from that row and obtain a zero row, which contradicts the fact that the basis minor is nonzero. Thus the basis rows are linearly independent.

Let us prove that any row of the matrix A is a linear combination of the basis rows. Since under arbitrary permutations of rows (columns) the determinant keeps the property of being equal to zero or not, we may assume, without loss of generality, that the basis minor is located in the upper left corner of the matrix A, i.e. in the first r rows and the first r columns. Let 1 ≤ j ≤ n, 1 ≤ i ≤ m. Let us show that the determinant of order r+1 obtained by appending to the basis minor the elements of the i-th row and the j-th column is equal to zero.

If j ≤ r or i ≤ r, then this determinant is equal to zero, because it has two identical columns or two identical rows.

If j > r and i > r, then this determinant is a minor of order r+1 of the matrix A. Since the rank of the matrix is r, every minor of higher order is equal to 0.

Expanding it along the elements of the last (appended) column, we get

a_1j A_1j + a_2j A_2j + ... + a_rj A_rj + a_ij A_ij = 0, where the last cofactor A_ij coincides with the basis minor M_r and therefore A_ij = M_r ≠ 0.

Dividing the last equality by A_ij, we can express the element a_ij as a linear combination: a_ij = λ_1 a_1j + λ_2 a_2j + ... + λ_r a_rj, where λ_k = -A_kj / A_ij (k = 1, 2, ..., r); note that these coefficients do not depend on j, since the cofactors A_kj are built only from the first r columns.

Fixing the value of i (i > r), we find that for any j (j = 1, 2, ..., n) the elements of the i-th row e_i are linearly expressed in terms of the elements of the rows e_1, e_2, ..., e_r, i.e. the i-th row is a linear combination of the basis rows: e_i = λ_1 e_1 + λ_2 e_2 + ... + λ_r e_r. Q.E.D.
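A small numerical illustration of the theorem (numpy assumed; the rank-2 matrix is a hypothetical example): each non-basis row is recovered as a combination of the two basis rows by solving a least-squares system.

```python
import numpy as np

# A hypothetical 4x3 matrix of rank 2 whose basis minor sits in the first two rows and columns.
A = np.array([[1., 2., 3.],
              [0., 1., 1.],
              [2., 5., 7.],
              [1., 3., 4.]])

basis = A[:2, :]                                     # basis rows e_1, e_2
for i in range(2, A.shape[0]):
    # solve basis.T @ lam = A[i] for the coefficients lam = (lambda_1, lambda_2)
    lam, *_ = np.linalg.lstsq(basis.T, A[i], rcond=None)
    print(i, lam, np.allclose(lam @ basis, A[i]))    # True: row i = lam_1*e_1 + lam_2*e_2
```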

Theorem 3 (necessary and sufficient condition for a determinant to vanish). For a determinant D of order n to be equal to zero, it is necessary and sufficient that its rows (columns) be linearly dependent.

Proof. Necessity. If the determinant D of order n is equal to zero, then the basis minor of its matrix has order r < n.

Thus, at least one row is a linear combination of the others. Then, by Theorem 1, the rows of the determinant are linearly dependent.

Sufficiency. If the rows of D are linearly dependent, then by Theorem 1 some row A_i is a linear combination of the other rows. Subtracting this linear combination from the row A_i without changing the value of D, we obtain a zero row. Therefore, by the properties of determinants, D = 0. Q.E.D.

Theorem 4. Under elementary transformations, the rank of the matrix does not change.

Proof. As was shown when considering the properties of determinants, under elementary transformations of square matrices their determinants either do not change, or are multiplied by a nonzero number, or change sign. Hence the highest order of the nonzero minors of the original matrix is preserved, i.e. the rank of the matrix does not change. Q.E.D.

If r(A)=r(B), then A and B are equivalent: A~B.

Theorem 5. Using elementary transformations, any matrix can be reduced to echelon (step) form. A matrix is called an echelon matrix if it has the form:

A = , where a_ii ≠ 0, i = 1, 2, ..., r; r ≤ k.

The condition r ≤ k can always be achieved by transposition.

Theorem 6. The rank of an echelon matrix is equal to the number of its nonzero rows.

That is, the rank of an echelon matrix is r, since it has a nonzero minor of order r, namely the minor formed by the first r rows and the columns containing the elements a_11, a_22, ..., a_rr.
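A sketch of this reduction (numpy assumed; function name and test data are illustrative): elementary row operations bring the matrix to echelon form, and the number of nonzero rows gives the rank, in line with Theorems 4-6.

```python
import numpy as np

def rank_by_elimination(A, tol=1e-10):
    """Reduce a copy of A to row echelon form by elementary row operations
    and count the nonzero rows; by Theorems 4-6 this count is the rank."""
    M = np.array(A, dtype=float)
    m, n = M.shape
    row = 0
    for col in range(n):
        pivot = row + np.argmax(np.abs(M[row:, col]))   # best pivot at or below `row`
        if abs(M[pivot, col]) < tol:
            continue                                    # no pivot in this column
        M[[row, pivot]] = M[[pivot, row]]               # 1) swap two rows
        M[row] = M[row] / M[row, col]                   # 2) scale the pivot row
        for r in range(row + 1, m):                     # 3) add multiples of the pivot row
            M[r] = M[r] - M[r, col] * M[row]
        row += 1
        if row == m:
            break
    return row                                          # number of nonzero rows

A = [[1, 2, 3],
     [2, 4, 6],
     [1, 0, 1]]
print(rank_by_elimination(A))   # -> 2
```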

Let A be a matrix with columns A_1, A_2, ..., A_n, each of dimension m. A linear combination of the columns of the matrix is the column λ_1 A_1 + λ_2 A_2 + ... + λ_n A_n, where λ_1, λ_2, ..., λ_n are some real or complex numbers, called the coefficients of the linear combination. If all the coefficients of a linear combination are taken equal to zero, then the linear combination is equal to the zero column.

The columns of a matrix are called linearly independent if their linear combination is equal to zero only when all the coefficients of the linear combination are equal to zero. The columns of a matrix are called linearly dependent if there is a set of numbers, at least one of which is nonzero, such that the linear combination of the columns with these coefficients is equal to zero.

Similarly, definitions of linear dependence and linear independence of matrix rows can be given. In what follows, all theorems are formulated for the columns of the matrix.

Theorem 5

If there is a zero column among the columns of a matrix, then the columns of the matrix are linearly dependent.

Proof. Consider the linear combination in which the coefficient of every nonzero column is zero and the coefficient of the zero column is one. It is equal to zero, and among the coefficients of the linear combination there is a nonzero one. Therefore, the columns of the matrix are linearly dependent.

Theorem 6

If some of the columns of a matrix are linearly dependent, then the whole system of columns of the matrix is linearly dependent.

Proof. For definiteness, assume that the first few columns of the matrix are linearly dependent. Then, by the definition of linear dependence, there is a set of numbers, at least one of which is nonzero, such that the linear combination of these columns with the given coefficients is equal to zero.

Let us compose a linear combination of all the columns of the matrix, taking the remaining columns with zero coefficients.

But this combination coincides with the previous one, so it is equal to zero while having a nonzero coefficient. Therefore, all the columns of the matrix are linearly dependent.

Corollary. Any columns of a linearly independent system of columns of a matrix are themselves linearly independent. (This assertion is easily proved by contradiction.)

Theorem 7

For the columns of a matrix to be linearly dependent, it is necessary and sufficient that at least one column of the matrix be a linear combination of the others.

Proof.

Necessity. Let the columns of the matrix be linearly dependent, that is, there is a set of numbers, at least one of which is different from zero, such that the linear combination of the columns with these coefficients is equal to zero.

Assume for definiteness that the first coefficient is nonzero. Then the first column can be expressed through the remaining ones, that is, the first column is a linear combination of the rest.



Sufficiency. Let at least one column of the matrix be a linear combination of the others, say the first one, with certain numbers as coefficients.

Then, moving all the terms to one side, we obtain a linear combination of the columns that is equal to zero and in which at least one coefficient (namely -1, the coefficient of the expressed column) is nonzero.

Let the rank of the matrix be r. Any nonzero minor of order r is called a basis minor. The rows and columns at whose intersection a basis minor stands are called basis rows and columns.

The concepts of linear dependence and linear independence are defined for rows and columns in the same way. Therefore, the properties associated with these concepts, formulated for columns, of course, are also valid for rows.

1. If the column system includes a zero column, then it is linearly dependent.

2. If a column system has two equal columns, then it is linearly dependent.

3. If a column system has two proportional columns, then it is linearly dependent.

4. A system of columns is linearly dependent if and only if at least one of the columns is a linear combination of the others.

5. Any columns included in a linearly independent system form a linearly independent subsystem.

6. A column system containing a linearly dependent subsystem is linearly dependent.

7. If a system of columns A_1, ..., A_k is linearly independent and becomes linearly dependent after a column A is added to it, then the column A can be decomposed over the columns A_1, ..., A_k, and moreover in a unique way, i.e. the expansion coefficients are determined uniquely.

Let us prove, for example, the last property. Since the system of columns A_1, ..., A_k, A is linearly dependent, there are numbers λ_1, ..., λ_k, λ, not all equal to 0, such that λ_1 A_1 + ... + λ_k A_k + λA = 0.

Moreover, λ ≠ 0 in this equality. Indeed, if λ = 0, then λ_1 A_1 + ... + λ_k A_k = 0.

This would mean that a nontrivial linear combination of the columns A_1, ..., A_k is equal to the zero column, which contradicts the linear independence of the system A_1, ..., A_k. Therefore λ ≠ 0, and then A = -(λ_1/λ)A_1 - ... - (λ_k/λ)A_k, i.e. the column A is a linear combination of the columns A_1, ..., A_k. It remains to show the uniqueness of such a representation. Assume the opposite: let there be two expansions A = α_1 A_1 + ... + α_k A_k and A = β_1 A_1 + ... + β_k A_k in which not all the corresponding coefficients are equal (for example, α_1 ≠ β_1). Then from the equality of the two expansions

we get (α_1 - β_1)A_1 + ... + (α_k - β_k)A_k = 0,

i.e. a linear combination of the columns A_1, ..., A_k is equal to the zero column. Since not all of its coefficients are equal to zero (at least α_1 - β_1 ≠ 0), this combination is nontrivial, which contradicts the linear independence of the columns A_1, ..., A_k. The resulting contradiction proves the uniqueness of the decomposition.

Example 3.2. Prove that two nonzero columns A_1 and A_2 are linearly dependent if and only if they are proportional, i.e. A_2 = αA_1 for some number α.

Solution. Indeed, if the columns A_1 and A_2 are linearly dependent, then there are numbers λ_1, λ_2, not both equal to zero, such that λ_1 A_1 + λ_2 A_2 = 0, and λ_2 ≠ 0 in this equality. Indeed, assuming λ_2 = 0, we would obtain λ_1 A_1 = 0 with λ_1 ≠ 0, hence A_1 = 0, a contradiction, since the column A_1 is nonzero. Therefore λ_2 ≠ 0, and so there is a number α = -λ_1/λ_2 such that A_2 = αA_1. The necessity is proved.

Conversely, if A_2 = αA_1, then αA_1 - A_2 = 0. We obtain a nontrivial linear combination of the columns that is equal to the zero column, so the columns are linearly dependent.
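A quick numerical check of this fact (numpy assumed; the columns are arbitrary illustrative data): proportional nonzero columns give a matrix of rank 1, non-proportional ones give rank 2.

```python
import numpy as np

A1 = np.array([[1.], [2.], [-1.]])
A2 = 3 * A1                               # proportional: A2 = alpha * A1 with alpha = 3
print(np.linalg.matrix_rank(np.hstack([A1, A2])))   # -> 1: the columns are linearly dependent

B2 = np.array([[3.], [6.], [0.]])         # not proportional to A1
print(np.linalg.matrix_rank(np.hstack([A1, B2])))   # -> 2: the columns are linearly independent
```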

Example 3.3. Consider all possible systems formed from columns

Examine each system for linear dependence.
Solution. Consider the five systems containing one column each. According to paragraph 1 of Remarks 3.1, the systems consisting of a single nonzero column are linearly independent, and the system consisting of the single zero column is linearly dependent.

Consider systems containing two columns each:

– each of the four systems is linearly dependent, since it contains a zero column (property 1);

– the system is linearly dependent, since the columns are proportional (property 3): ;

– each of the five systems is linearly independent, since the columns are non-proportional (see the statement of Example 3.2).

Consider systems containing three columns:

– each of the six systems is linearly dependent, since it contains a zero column (property 1);

– systems are linearly dependent, since they contain a linearly dependent subsystem (property 6);

– the systems are linearly dependent, since the last column is linearly expressed in terms of the rest (property 4).

Finally, systems of four or five columns are linearly dependent (by property 6).

Matrix rank

In this section, we consider another important numerical characteristic of a matrix, related to how much its rows (columns) depend on each other.

Definition 14.10. Let there be a matrix of size m × n and a number k not exceeding the smaller of the numbers m and n: k ≤ min(m, n). Choose arbitrarily k rows and k columns of the matrix (the numbers of the rows may differ from the numbers of the columns). The determinant of the matrix composed of the elements at the intersection of the selected rows and columns is called a minor of order k of the matrix.

Example 14.9. Let a matrix A be given.

A first-order minor is any element of the matrix; for example, 2 is a first-order minor.

Minors of the second order:

1. take rows 1, 2, columns 1, 2, we get a minor ;

2. take rows 1, 3, columns 2, 4, we get a minor ;

3. take rows 2, 3, columns 1, 4, we get a minor

Minors of the third order:

rows here can only be selected in one way,

1. take columns 1, 3, 4, get a minor ;

2. take columns 1, 2, 3, get a minor .

Proposition 14.23. If all minors of order k of a matrix are equal to zero, then all minors of order k+1, if there are any, are also equal to zero.

Proof. Take an arbitrary minor of order k+1. It is the determinant of a matrix of order k+1. Expand it along the first row. Then in each term of the expansion one of the factors is a minor of order k of the original matrix. By assumption, the minors of order k are equal to zero. Therefore, the minor of order k+1 is also equal to zero.

Definition 14.11. The rank of a matrix is the largest order for which the matrix has a nonzero minor. The rank of the zero matrix is taken to be zero.

There is no single standard notation for the rank of a matrix. Following the textbook, we will denote it by r(A).

Example 14.10 The matrix of Example 14.9 has rank 3 because there is a non-zero third-order minor, but there are no fourth-order minors.

The rank of the matrix is equal to 1, since there is a nonzero first-order minor (an element of the matrix), and all second-order minors are equal to zero.

The rank of a nonsingular square matrix of order n is equal to n, since its determinant is a minor of order n and is nonzero for a nonsingular matrix.
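Definition 14.11 translates directly into a brute-force computation; the sketch below (numpy assumed; the function and the test matrix are illustrative) enumerates minors from the largest order downward and stops at the first nonzero one.

```python
import numpy as np
from itertools import combinations

def rank_via_minors(A, tol=1e-10):
    """Rank as the largest order k for which some k-th order minor is nonzero
    (Definition 14.11).  Exponential in the matrix size; for illustration only."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    for k in range(min(m, n), 0, -1):
        for rows in combinations(range(m), k):
            for cols in combinations(range(n), k):
                if abs(np.linalg.det(A[np.ix_(rows, cols)])) > tol:
                    return k
    return 0

A = np.array([[1, 1, 1],
              [1, 2, 3],
              [2, 3, 4]])            # third row = first + second, so det(A) = 0
print(rank_via_minors(A))            # -> 2
print(np.linalg.matrix_rank(A))      # the library routine agrees
```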

Proposition 14.24. When a matrix is transposed, its rank does not change, that is, r(A^T) = r(A).

Proof. Every minor of the original matrix A becomes, after transposition, a minor of the transposed matrix A^T, and conversely, any minor of A^T is the transpose of some minor of the original matrix A. Under transposition the determinant (minor) does not change (Proposition 14.6). Therefore, if all minors of some order in the original matrix are equal to zero, then all minors of the same order in A^T are also equal to zero. If a minor of some order in the original matrix is nonzero, then A^T also has a nonzero minor of the same order. Consequently, r(A^T) = r(A).

Definition 14.12. Let the rank of a matrix be r. Then any nonzero minor of order r is called a basis minor.

Example 14.11. Consider a matrix whose determinant is zero because its third row is equal to the sum of the first two, while the second-order minor located in the first two rows and the first two columns is nonzero. Then the rank of the matrix is equal to two, and the minor considered is a basis minor.

A basis minor is also the minor located, say, in the first and third rows and the first and third columns. The minor in the second and third rows and the first and third columns is a basis minor as well.

The minor in the first and second rows and the second and third columns is equal to zero and therefore is not a basis minor. The reader can check independently which other second-order minors are basis minors and which are not.

Since the columns (rows) of a matrix can be added and multiplied by numbers, i.e. linear combinations of them can be formed, one can introduce definitions of linear dependence and linear independence for a system of columns (rows) of a matrix. These definitions are analogous to Definitions 10.14 and 10.15 for vectors.

Definition 14.13. A system of columns (rows) is called linearly dependent if there is a set of coefficients, at least one of which is nonzero, such that the linear combination of the columns (rows) with these coefficients is equal to zero.

Definition 14.14. A system of columns (rows) is linearly independent if the vanishing of a linear combination of these columns (rows) implies that all the coefficients of this linear combination are equal to zero.

The following proposition, similar to Proposition 10.6, is also true.

Proposition 14.25. A system of columns (rows) is linearly dependent if and only if one of the columns (one of the rows) is a linear combination of the other columns (rows) of this system.

We now state a theorem called the basis minor theorem.

Theorem 14.2. Any column of a matrix is a linear combination of the columns passing through the basis minor.

The proof can be found in textbooks on linear algebra.

Proposition 14.26. The rank of a matrix is equal to the maximum number of its columns that form a linearly independent system.

Proof. Let the rank of the matrix be r. Take the r columns passing through the basis minor. Suppose these columns form a linearly dependent system. Then one of the columns is a linear combination of the others. Consequently, in the basis minor one column would be a linear combination of the other columns, and by Propositions 14.15 and 14.18 this basis minor would have to be equal to zero, which contradicts the definition of a basis minor. Therefore, the assumption that the columns passing through the basis minor are linearly dependent is false. Hence, the maximum number of columns forming a linearly independent system is greater than or equal to r.

Now suppose that some system of more than r columns is linearly independent. Form a matrix from these columns. All minors of this matrix are minors of the original matrix. Therefore, its basis minor has order at most r. By the basis minor theorem, a column of this matrix that does not pass through its basis minor is a linear combination of the columns that do pass through it, that is, the columns of this matrix form a linearly dependent system. This contradicts the choice of the columns forming the matrix. Therefore, the maximum number of columns forming a linearly independent system cannot be greater than r. Hence, it is equal to r, as stated.

Proposition 14.27. The rank of a matrix is equal to the maximum number of its rows that form a linearly independent system.

Proof. By Proposition 14.24, the rank of a matrix does not change under transposition, and the rows of the matrix become the columns of the transposed matrix. By Proposition 14.26, the maximum number of these new columns of the transposed matrix (the former rows of the original one) that form a linearly independent system is equal to its rank, which is the rank of the original matrix.

Proposition 14.28. If the determinant of a matrix is equal to zero, then one of its columns (one of the rows) is a linear combination of the remaining columns (rows).

Proof. Let the order of the matrix be n. The determinant is the only minor of the square matrix that has order n. Since it is equal to zero, the rank of the matrix is less than n. Therefore, by Propositions 14.26 and 14.27, the system of n columns (rows) is linearly dependent, that is, one of the columns (one of the rows) is a linear combination of the others.

The results of Propositions 14.15, 14.18 and 14.28 give the following theorem.

Theorem 14.3. The determinant of a matrix is zero if and only if one of its columns (one of the rows) is a linear combination of the other columns (rows).

Finding the rank of a matrix by calculating all its minors requires too much computational work. (The reader can verify that there are 36 second-order minors in a fourth-order square matrix.) Therefore, a different algorithm is used to find the rank. To describe it, some additional information is required.

Definition 14.15. We call the following operations on matrices elementary transformations:

1) permutation of rows or columns;
2) multiplying a row or column by a non-zero number;
3) adding to one of the rows another row, multiplied by a number, or adding to one of the columns of another column, multiplied by a number.

Proposition 14.29. Under elementary transformations, the rank of a matrix does not change.

Proof. Let the rank of the matrix A be equal to r, and let A' be the matrix resulting from an elementary transformation.

Consider a permutation of rows. Let M be a minor of the matrix A; then the matrix A' has a minor M' that either coincides with M or differs from it by a permutation of rows. Conversely, with any minor of the matrix A' one can associate a minor of the matrix A that either coincides with it or differs from it in the order of the rows. Therefore, from the fact that all minors of a given order in the matrix A are equal to zero, it follows that all minors of that order in A' are also equal to zero. And since A has a nonzero minor of order r, the matrix A' also has a nonzero minor of order r, i.e. r(A') = r(A).

Consider multiplication of a row by a nonzero number λ. A minor M of the matrix A corresponds to a minor M' of the matrix A' that either coincides with M or differs from it in only one row, which is obtained from the corresponding row of M by multiplication by the nonzero number λ. In the latter case M' = λM. In all cases M and M' are either both equal to zero or both different from zero. Consequently, r(A') = r(A).
Consider multiplying a string by a non-zero number. A minor from a matrix corresponds to a minor from a matrix that either coincides with or differs from it by only one row, which is obtained from the minor row by multiplying by a non-zero number. In the last case . In all cases, or and are simultaneously equal to zero, or simultaneously different from zero. Consequently, .