Row equivalence is an equivalence relation

Two matrices A and B over a field are said to be row equivalent if there is a sequence of elementary row operations which transforms A into B. Prove that row equivalence is an equivalence relation.


Recall that the elementary row operations (for matrices over a field F) are as follows:

  1. Interchange two rows
  2. Add an F-multiple of one row to another
  3. Multiply a row by a nonzero element of F

We claim that (1) two of these are redundant and (2) elementary row operations can be achieved using multiplication by a certain class of invertible “elementary matrices”, a class which happens to be closed under matrix inversion.

First, we show that row operation 1 (interchange two rows) can be achieved using a sequence of operations of types 2 and 3. Suppose we have a matrix M whose rows are R_1, R_2, \ldots, R_n, and we wish to interchange R_t and R_u. Let’s record the list of rows of M as [M_1 | R_t | M_2 | R_u | M_3], where M_1, M_2, and M_3 denote the (possibly empty) blocks of rows before, between, and after rows t and u. We now perform a sequence of row operations of types 2 and 3, keeping track of the rows after each step.

Operation                       Type    Rows
Start                                   [M_1 | R_t | M_2 | R_u | M_3]
Add -1 times row t to row u     2       [M_1 | R_t | M_2 | R_u - R_t | M_3]
Add 1 times row u to row t      2       [M_1 | R_u | M_2 | R_u - R_t | M_3]
Add -1 times row t to row u     2       [M_1 | R_u | M_2 | -R_t | M_3]
Multiply row u by -1            3       [M_1 | R_u | M_2 | R_t | M_3]

Note that we have achieved the same effect as a single application of operation 1. In addition, we only needed to use the elements 1 and -1 in F, which always exist. (In fact, we have 1 and -1 in any ring with 1, which is a useful generalization.)
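
As a quick sanity check, here is a small numerical sketch of the four-step sequence above (Python with NumPy; the particular matrix and the choice of rows t and u are arbitrary illustrations, not part of the proof).

```python
import numpy as np

# Arbitrary example matrix; we swap rows t and u using only
# type 2 and type 3 operations, following the table above.
M = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.],
              [10., 11., 12.]])
t, u = 1, 3   # rows indexed from 0 here

A = M.copy()
A[u] -= A[t]    # type 2: add -1 times row t to row u
A[t] += A[u]    # type 2: add  1 times row u to row t
A[u] -= A[t]    # type 2: add -1 times row t to row u
A[u] *= -1      # type 3: multiply row u by -1

expected = M.copy()
expected[[t, u]] = expected[[u, t]]    # M with rows t and u interchanged
assert np.array_equal(A, expected)
```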

We can think of a type 3 operation as a special case of a type 2 operation if we allow the same row to play both parts in a type 2 operation (the elementary matrices defined below make this precise). Thus we can say that two matrices are row equivalent if there is a sequence of these generalized type 2 row operations turning one into the other.

Now define, over a field F, the (s,t,\alpha) elementary matrix as follows: E_{s,t}(\alpha) = [e_{i,j}], where e_{i,j} = \alpha if (i,j) = (s,t), 1 if j = i and (i,j) \neq (s,t), and 0 otherwise. Letting M = [m_{i,j}], we see that E_{s,t}(\alpha) M = [\sum_k e_{i,k} m_{k,j}]. If i \neq s, then e_{i,k} = 0 except when k = i, so this sum is just m_{i,j}. If i = s and t \neq s, then e_{i,k} = 0 except when k \in \{i,t\}, and the sum is m_{s,j} + \alpha m_{t,j}. If i = s and t = s, then e_{i,k} = 0 except when k = i, and the sum is \alpha m_{s,j}. In each case, the matrix E_{s,t}(\alpha) M is simply M after row s has been replaced by row s plus \alpha times row t (when t \neq s), or after row s has been multiplied by \alpha (when t = s).
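
As an illustration, the following minimal sketch (Python/NumPy; the helper name `elementary` and the test values are ad hoc choices, not notation from the text) builds E_{s,t}(\alpha) as defined above and checks its action on the rows of an arbitrary matrix.

```python
import numpy as np

def elementary(n, s, t, alpha):
    """E_{s,t}(alpha): the n x n identity with entry (s,t) set to alpha.
    Note that for s == t this replaces the diagonal 1 by alpha."""
    E = np.eye(n)
    E[s, t] = alpha
    return E

M = np.array([[1., 2.], [3., 4.], [5., 6.]])   # rows indexed from 0 here

# t != s: row s is replaced by row s + alpha * row t.
left = elementary(3, 0, 2, 7.0) @ M
expected = M.copy()
expected[0] += 7.0 * M[2]
assert np.allclose(left, expected)

# t = s: row s is multiplied by alpha.
left = elementary(3, 1, 1, 5.0) @ M
expected = M.copy()
expected[1] *= 5.0
assert np.allclose(left, expected)
```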

That is, two matrices A and B are row equivalent if and only if there is a matrix P = E_{s_1,t_1}(\alpha_1) \cdots E_{s_k,t_k}(\alpha_k), with \alpha_i \neq 0 whenever s_i = t_i, such that PA = B.
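
For example, here is a small sketch (again Python/NumPy, with the same ad hoc `elementary` helper and arbitrarily chosen operations) of how a sequence of elementary row operations applied to A is recorded as a single product P with PA = B.

```python
import numpy as np

def elementary(n, s, t, alpha):
    # E_{s,t}(alpha): the n x n identity with entry (s,t) set to alpha.
    E = np.eye(n)
    E[s, t] = alpha
    return E

A = np.array([[2., 4.], [1., 3.], [0., 5.]])

# A few elementary row operations, chosen arbitrarily (rows indexed from 0).
steps = [elementary(3, 0, 0, 0.5),     # multiply row 0 by 1/2
         elementary(3, 1, 0, -1.0),    # add -1 times row 0 to row 1
         elementary(3, 2, 1, -5.0)]    # add -5 times row 1 to row 2

B = A.copy()
P = np.eye(3)
for E in steps:
    B = E @ B    # apply the operation to the working matrix
    P = E @ P    # record it in the accumulated product

assert np.allclose(P @ A, B)
```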

Note that, if t \neq s, then E_{s,t}(\alpha) \cdot E_{s,t}(-\alpha) = [\sum_k e_{i,k}f_{k,j}], where [e_{i,j}] = E_{s,t}(\alpha) and [f_{i,j}] = E_{s,t}(-\alpha). In this sum, if i \neq s, then e_{i,k} = 0 unless k = i, so the sum is 0 unless j = i, in which case it is 1. If i = s, then e_{i,k} = 0 unless k \in \{s,t\}, so that \sum_k e_{i,k}f_{k,j} = e_{s,s}f_{s,j} + e_{s,t}f_{t,j}. If j = s (= i), this sum is 1. If j = t, this sum is -\alpha + \alpha = 0. Otherwise, the sum is 0. Hence E_{s,t}(\alpha) \cdot E_{s,t}(-\alpha) = I. In particular, E_{s,t}(\alpha) is invertible and E_{s,t}(\alpha)^{-1} = E_{s,t}(-\alpha).

Now note that, if \alpha \neq 0, then E_{s,s}(\alpha) \cdot E_{s,s}(\alpha^{-1}) = [\sum_k e_{i,k}f_{k,j}], where [e_{i,j}] = E_{s,s}(\alpha) and [f_{i,j}] = E_{s,s}(\alpha^{-1}). In this sum, if i \neq s, then e_{i,k} = 0 unless k = i; if j = i the sum is 1, and otherwise it is 0. If i = s, then e_{i,k} = 0 unless k = s, so the sum is 0 unless j = s, in which case it is \alpha\alpha^{-1} = 1. Thus E_{s,s}(\alpha) is invertible (for \alpha \neq 0) and we have E_{s,s}(\alpha)^{-1} = E_{s,s}(\alpha^{-1}).
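
Both inverse formulas are easy to confirm numerically; the following sketch (same ad hoc `elementary` helper, with arbitrary size and \alpha) checks E_{s,t}(\alpha) E_{s,t}(-\alpha) = I for t \neq s and E_{s,s}(\alpha) E_{s,s}(\alpha^{-1}) = I for \alpha \neq 0.

```python
import numpy as np

def elementary(n, s, t, alpha):
    # E_{s,t}(alpha): the n x n identity with entry (s,t) set to alpha.
    E = np.eye(n)
    E[s, t] = alpha
    return E

n, alpha = 4, 3.0

# t != s: E_{s,t}(alpha) * E_{s,t}(-alpha) = I
assert np.allclose(elementary(n, 1, 3, alpha) @ elementary(n, 1, 3, -alpha),
                   np.eye(n))

# t = s, alpha nonzero: E_{s,s}(alpha) * E_{s,s}(1/alpha) = I
assert np.allclose(elementary(n, 2, 2, alpha) @ elementary(n, 2, 2, 1/alpha),
                   np.eye(n))
```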

Thus the matrix P in our definition of row equivalence is invertible. Moreover, P^{-1} = E_{s_k,t_k}(\alpha_k)^{-1} \cdots E_{s_1,t_1}(\alpha_1)^{-1} is also a product of elementary matrices.

Now it is easy to show that row equivalence is an equivalence relation.

  1. (Reflexive) The identity matrix is elementary. Specifically, I = E_{s,s}(1). Thus every matrix is row equivalent to itself.
  2. (Symmetric) Suppose A can be row reduced to C. Then there is a product P of elementary matrices such that PA = C. Then A = P^{-1}C. Since P^{-1} is a product of elementary matrices, C can be row reduced to A.
  3. (Transitive) Suppose A can be row reduced to B and B to C. Then there exist products P and Q of elementary matrices such that PA = B and QB = C. Now QP is a product of elementary matrices, and QPA = C. Thus A can be row reduced to C.

(It turns out that GL_n(F) is generated by these elementary matrices, so that we can simply say A and C are row equivalent if there is an invertible matrix P such that PA = C, rather than specifying that P be a product of elementary matrices. We will prove this later.)
