Monthly Archives: August 2010

ZZ[x] and QQ[x] are not isomorphic

Prove that the rings $\mathbb{Z}[x]$ and $\mathbb{Q}[x]$ are not isomorphic.

We begin with some lemmas.

Lemma 1: Let $R$ and $S$ be rings with 1. If $\varphi : R \rightarrow S$ is a surjective ring homomorphism, then $\varphi(1_R) = 1_S$. Proof: Let $s \in S$. Now $s = \varphi(r)$ for some $r \in R$. Note that $\varphi(1_R) \cdot s = \varphi(1_R \cdot r)$ $= \varphi(r)$ $= s$, and likewise $s \varphi(1_R) = s$. Since the 1 in $S$ is unique, we have $\varphi(1_R) = 1_S$. $\square$

Lemma 2: Let $R$ and $S$ be rings with 1. If $\varphi : R \rightarrow S$ is a ring homomorphism such that $\varphi(1) = 1$, then $\varphi[R^\times] \subseteq S^\times$. Proof: If $u \in R^\times$ is a unit, we have $uv = vu = 1$ for some $v \in R$. Then $\varphi(u)\varphi(v) = \varphi(v)\varphi(u) = \varphi(1) = 1$, so that $\varphi(u)$ is a unit. $\square$

Lemma 3: Let $R$ and $S$ be rings with 1. If $\varphi : R \rightarrow S$ is a ring isomorphism, then $\varphi[R^\times] = S^\times$. Proof: The $(\subseteq)$ direction follows from Lemmas 1 and 2. Now let $w \in S^\times$, say $wz = zw = 1_S$ for some $z \in S$. Since $\varphi$ is surjective, $w = \varphi(u)$ and $z = \varphi(v)$ for some $u, v \in R$. Then $\varphi(uv) = wz = 1_S = \varphi(1_R)$, and since $\varphi$ is injective, $uv = 1_R$; likewise $vu = 1_R$. Thus $u \in R^\times$, and so $w \in \varphi[R^\times]$. $\square$ (Note that surjectivity alone does not suffice here: the reduction map $\mathbb{Z} \rightarrow \mathbb{Z}/(5)$ is surjective, but $\overline{2} \in (\mathbb{Z}/(5))^\times$ is not the image of a unit of $\mathbb{Z}$.)

Lemma 4: Let $R$ and $S$ be rings with 1. If $\varphi : R \rightarrow S$ is a ring isomorphism, then $\varphi|_{R^\times} : R^\times \rightarrow S^\times$ is a multiplicative group isomorphism. Proof: By the previous lemma, $\varphi|_{R^\times}$ indeed maps $R^\times$ surjectively to $S^\times$. Moreover, as the restriction of an injective function, $\varphi|_{R^\times}$ is injective. Finally, $\varphi|_{R^\times}$ is a multiplicative homomorphism since $\varphi$ is. $\square$

Note that $\mathbb{Z}[x]$ and $\mathbb{Q}[x]$ are rings with 1. If $\varphi : \mathbb{Z}[x] \rightarrow \mathbb{Q}[x]$ is a ring isomorphism, then by Lemma 4 the restriction $\psi$ of $\varphi$ to the units is a group isomorphism $\psi : \mathbb{Z}[x]^\times \rightarrow \mathbb{Q}[x]^\times$. However, $\mathbb{Z}[x]^\times = \{1, -1\}$ has only 2 elements, while $\mathbb{Q}[x]^\times = \mathbb{Q} \setminus \{0\}$ is infinite; no bijection (much less a group isomorphism) exists between these sets, and we have a contradiction. Thus $\mathbb{Z}[x]$ and $\mathbb{Q}[x]$ are not isomorphic.
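For completeness, the unit groups used above follow from the degree formula in an integral domain; a sketch:

```latex
% For nonzero f, g in R[x] with R an integral domain, \deg(fg) = \deg f + \deg g.
% So fg = 1 forces \deg f = \deg g = 0; that is, f, g \in R and R[x]^\times = R^\times. Hence
\mathbb{Z}[x]^\times = \mathbb{Z}^\times = \{1, -1\},
\qquad
\mathbb{Q}[x]^\times = \mathbb{Q}^\times = \mathbb{Q} \setminus \{0\}.
```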

2ZZ and 3ZZ are not isomorphic as rings

Prove that the rings $2\mathbb{Z}$ and $3\mathbb{Z}$ are not isomorphic.

Suppose $\varphi : 2\mathbb{Z} \rightarrow 3\mathbb{Z}$ is an isomorphism. Then $\varphi(2) = 3a$ for some integer $a$, and $a \neq 0$ since $\varphi$ is injective and $\varphi(0) = 0$. Now $\varphi(4) = \varphi(2 \cdot 2)$ $= \varphi(2) \cdot \varphi(2)$ $= 9a^2$, and also $\varphi(4) = \varphi(2 + 2)$ $= \varphi(2) + \varphi(2)$ $= 6a$, so that $6a = 9a^2$. Since $\mathbb{Z}$ is an integral domain and $3a \neq 0$, we may cancel $3a$ to get $2 = 3a$. However, this equation has no solutions in the integers, and we have a contradiction. Thus no such isomorphism exists.
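As a quick sanity check (not part of the proof), one can confirm over a finite window that $6a = 9a^2$ has no nonzero integer solution:

```python
# Scan a window of integers for solutions of 6a = 9a^2; only a = 0 survives,
# which is impossible since an injective map already sends 0 to 0.
sols = [a for a in range(-10_000, 10_001) if 6 * a == 9 * a * a]
assert sols == [0]
```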

Characterize the center of a group ring

Let $\mathcal{K} = \{k_1, \ldots, k_m \}$ be a conjugacy class in the finite group $G$. Let $R$ be a ring with 1.

1. Prove that the element $K = \sum_{i=1}^m k_i$ is in the center of the group ring $R[G]$. [Hint: Check that $g^{-1}Kg = K$ for all $g \in G$.]
2. Let $\mathcal{K}_1, \ldots, \mathcal{K}_n$ be the conjugacy classes of $G$, and for each $i$, let $K_i$ be the sum of the elements in $\mathcal{K}_i$ (as described in part 1). Prove that an element $\alpha \in R[G]$ is in the center if and only if $\alpha = \sum_{i=1}^n a_iK_i$ for some elements $a_i \in Z(R)$.

1. Let $g \in G$. Note that conjugation by $g$ permutes the elements of $\mathcal{K}$, so that (as an element of $R[G]$) we have $g^{-1}Kg = g^{-1} \left( \sum_{i=1}^m k_i \right) g$ $= \sum_{i=1}^m g^{-1}k_ig$ $= \sum_{i=1}^m k_i = K$. Thus $gK = Kg$ for all $g \in G$. Then for all $M = \sum_{j=1}^t r_j g_j \in R[G]$, we see the following.
$$\begin{aligned} KM &= \left( \sum_{i=1}^m k_i \right) \left( \sum_{j=1}^t r_j g_j \right) = \sum_{i=1}^m \sum_{j=1}^t r_j k_i g_j = \sum_{j=1}^t \sum_{i=1}^m r_j k_i g_j \\ &= \sum_{j=1}^t r_j \left( \sum_{i=1}^m k_i \right) g_j = \sum_{j=1}^t r_j K g_j = \sum_{j=1}^t r_j g_j K = \left( \sum_{j=1}^t r_j g_j \right) K = MK \end{aligned}$$

Thus $K \in Z(R[G])$.

2. First we show that every element of the form $N = \sum_{i=1}^n a_i K_i$ with $a_i \in Z(R)$ lies in $Z(R[G])$. Let $M = \sum_{j=1}^t r_j g_j \in R[G]$. Then we have the following, using part 1 (each $K_i$ is central) and the fact that each $a_i$ commutes with every $r_j$.
$$\begin{aligned} NM &= \left( \sum_{i=1}^n a_i K_i \right) M = \sum_{i=1}^n a_i K_i M = \sum_{i=1}^n a_i M K_i = \sum_{i=1}^n a_i \left( \sum_{j=1}^t r_j g_j \right) K_i \\ &= \sum_{i=1}^n \left( \sum_{j=1}^t a_i r_j g_j \right) K_i = \sum_{i=1}^n \left( \sum_{j=1}^t r_j a_i g_j \right) K_i = \sum_{i=1}^n \left( \sum_{j=1}^t r_j g_j \right) a_i K_i \\ &= \left( \sum_{j=1}^t r_j g_j \right) \left( \sum_{i=1}^n a_i K_i \right) = MN \end{aligned}$$

Thus $N \in Z(R[G])$.

Now let $M = \sum_{i=1}^t r_i g_i \in Z(R[G])$, where the $g_i$ are distinct. First, let $s \in R$ be arbitrary. Comparing coefficients in $Ms = sM$, we see that $r_i s = s r_i$ for all $i$ and all $s \in R$; thus $r_i \in Z(R)$. Next, recall that $G$ acts transitively (by conjugation) on each of its conjugacy classes. Suppose $g_a$ and $g_b$ are conjugate, say $h^{-1}g_a h = g_b$ for some $h \in G$. Comparing the coefficients of $g_b$ in $M = h^{-1}Mh$, we get $r_b$ on one hand and $r_a$ on the other, so that in fact $r_a = r_b$. Thus the coefficients of $M$ are constant on each conjugacy class of $G$, and grouping the terms of $M$ by conjugacy class, we have $M = \sum_{i=1}^n a_i K_i$ with each $a_i \in Z(R)$, as desired.
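The characterization can be spot-checked for a small group. In this sketch (helper names are mine), permutations of $\{1,2,3\}$ are stored as tuples with the image of $i$ at index $i-1$, composed right-to-left as functions; the conjugacy class sums of $S_3$ are computed directly and tested for centrality in $\mathbb{Z}[S_3]$:

```python
from itertools import permutations

def compose(s, t):  # (s*t)(i) = s(t(i)): apply t first, then s
    return tuple(s[t[i] - 1] for i in range(3))

def inverse(s):
    inv = [0, 0, 0]
    for i in range(3):
        inv[s[i] - 1] = i + 1
    return tuple(inv)

G = list(permutations((1, 2, 3)))

# Conjugacy classes of S3, computed straight from the definition.
classes = []
seen = set()
for g in G:
    if g not in seen:
        cls = {compose(compose(inverse(h), g), h) for h in G}
        seen |= cls
        classes.append(cls)

# Group ring elements are dicts {permutation: integer coefficient}.
def gr_mul(a, b):
    out = {}
    for g, x in a.items():
        for h, y in b.items():
            k = compose(g, h)
            out[k] = out.get(k, 0) + x * y
    return {k: v for k, v in out.items() if v != 0}

class_sums = [{g: 1 for g in cls} for cls in classes]
M = {g: c for g, c in zip(G, (3, -1, 4, 1, -5, 9))}  # an arbitrary element

for K in class_sums:
    assert gr_mul(K, M) == gr_mul(M, K)  # each class sum is central
```

As expected, $S_3$ has three conjugacy classes (of sizes 1, 2, and 3), and each class sum commutes with the arbitrary element chosen.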

Exhibit an element in the center of a group ring

Let $R$ be a ring with $1 \neq 0$, and let $G = \{g_1, \ldots, g_n \}$ be a finite group. Prove that the element $N = \sum_{i=1}^n g_i$ is in the center of the group ring $R[G]$.

Let $M = \sum_{j=1}^n r_j g_j$ be an element of $R[G]$. Note that for each fixed $g_j \in G$, conjugation by $g_j$ permutes the elements of $G$, so that $\sum_{i=1}^n g_j^{-1} g_i g_j = \sum_{i=1}^n g_i$. Then we have the following.

$$\begin{aligned} NM &= \left( \sum_{i=1}^n g_i \right) \left( \sum_{j=1}^n r_jg_j \right) = \sum_{j=1}^n \sum_{i=1}^n r_jg_ig_j = \sum_{j=1}^n \sum_{i=1}^n r_j g_j g_j^{-1} g_ig_j \\ &= \sum_{j=1}^n r_j g_j \left( \sum_{i=1}^n g_j^{-1}g_ig_j \right) = \sum_{j=1}^n r_j g_j \left( \sum_{i=1}^n g_i \right) = \left( \sum_{j=1}^n r_jg_j \right) \left( \sum_{i=1}^n g_i \right) = MN \end{aligned}$$

Thus $N \in Z(R[G])$.
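The argument can also be checked mechanically. Since coefficients commute with group elements, $N$ is central as soon as it commutes with each $g \in G$; the sketch below (taking $G = S_3$, with permutations as tuples composed right-to-left) verifies the stronger fact that $gN = Ng = N$ for every $g$, because multiplication by $g$ permutes $G$:

```python
from itertools import permutations

def compose(s, t):  # (s*t)(i) = s(t(i)): apply t first, then s
    return tuple(s[t[i] - 1] for i in range(3))

G = list(permutations((1, 2, 3)))
N = {g: 1 for g in G}  # the sum of all group elements

def gr_mul(a, b):  # group ring product; elements are {permutation: coefficient}
    out = {}
    for g, x in a.items():
        for h, y in b.items():
            k = compose(g, h)
            out[k] = out.get(k, 0) + x * y
    return out

for g in G:
    # multiplication by g permutes G, so g*N and N*g each "spread" back over G
    assert gr_mul({g: 1}, N) == N
    assert gr_mul(N, {g: 1}) == N
```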

Compute in a group ring

Consider the following elements of the group ring $\mathbb{Z}/(3)[S_3]$: $\alpha = 1(2\ 3) + 2(1\ 2\ 3)$ and $\beta = 2(2\ 3) + 2(1\ 3\ 2)$. Compute $\alpha + \beta$, $2\alpha - 3\beta$, $\alpha\beta$, $\beta\alpha$, and $\alpha^2$.

Evidently (computing permutation products as function composition, right factor first),

1. $\alpha + \beta = 2(1\ 2\ 3) + 2(1\ 3\ 2)$
2. $2\alpha - 3\beta = 2\alpha = 2(2\ 3) + (1\ 2\ 3)$
3. $\alpha\beta = 0$, since the coefficients of $1$ and of $(1\ 2)$ each come to $3 = 0$ in $\mathbb{Z}/(3)$
4. $\beta\alpha = 0$, since the coefficients of $1$ and of $(1\ 3)$ likewise each come to $3 = 0$
5. $\alpha^2 = 1(1) + 2(1\ 2) + 2(1\ 3) + 1(1\ 3\ 2)$
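These sums and products can be double-checked mechanically. In this sketch (helper names are mine), permutations are tuples with the image of $i$ at index $i-1$, multiplied right-to-left as functions, and group ring elements are coefficient dictionaries reduced mod 3:

```python
# Hedged check of the Z/(3)[S3] computations above.
def compose(s, t):  # (s*t)(i) = s(t(i)): apply t first, then s
    return tuple(s[t[i] - 1] for i in range(3))

def gr_add(a, b, mod):
    out = dict(a)
    for g, y in b.items():
        out[g] = out.get(g, 0) + y
    return {g: v % mod for g, v in out.items() if v % mod != 0}

def gr_mul(a, b, mod):
    out = {}
    for g, x in a.items():
        for h, y in b.items():
            k = compose(g, h)
            out[k] = out.get(k, 0) + x * y
    return {g: v % mod for g, v in out.items() if v % mod != 0}

# cycle notation -> tuple encoding
e, c23, c123, c132 = (1, 2, 3), (1, 3, 2), (2, 3, 1), (3, 1, 2)
c12, c13 = (2, 1, 3), (3, 2, 1)

alpha = {c23: 1, c123: 2}
beta = {c23: 2, c132: 2}

assert gr_add(alpha, beta, 3) == {c123: 2, c132: 2}
assert gr_mul(alpha, beta, 3) == {}   # alpha*beta = 0 in Z/(3)[S3]
assert gr_mul(beta, alpha, 3) == {}   # beta*alpha = 0 as well
assert gr_mul(alpha, alpha, 3) == {e: 1, c12: 2, c13: 2, c132: 1}
```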

Compute in a group ring

Consider the following elements of the integral group ring $\mathbb{Z}[S_3]$: $\alpha = 3(1\ 2) - 5(2\ 3) + 14(1\ 2\ 3)$ and $\beta = 6(1) + 2(2\ 3) - 7(1\ 3\ 2)$. Compute the following elements: $\alpha + \beta$, $2\alpha - 3\beta$, $\alpha\beta$, $\beta\alpha$, and $\alpha^2$.

Evidently (again computing permutation products as function composition, right factor first),

1. $\alpha + \beta = 6(1) + 3(1\ 2) - 3(2\ 3) + 14(1\ 2\ 3) - 7(1\ 3\ 2)$
2. $2\alpha - 3\beta = -18(1) + 6(1\ 2) - 16(2\ 3) + 28(1\ 2\ 3) + 21(1\ 3\ 2)$
3. $\alpha\beta = -108(1) + 81(1\ 2) - 21(1\ 3) - 30(2\ 3) + 90(1\ 2\ 3)$
4. $\beta\alpha = -108(1) + 18(1\ 2) + 63(1\ 3) - 51(2\ 3) + 84(1\ 2\ 3) + 6(1\ 3\ 2)$
5. $\alpha^2 = 34(1) - 70(1\ 2) - 28(1\ 3) + 42(2\ 3) - 15(1\ 2\ 3) + 181(1\ 3\ 2)$
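As with the previous problem, these products can be verified by a short script; the encoding below (tuples composed right-to-left, coefficient dictionaries over $\mathbb{Z}$) is mine:

```python
def compose(s, t):  # (s*t)(i) = s(t(i)): apply t first, then s
    return tuple(s[t[i] - 1] for i in range(3))

def gr_mul(a, b):  # product in Z[S3]; elements are {permutation: coefficient}
    out = {}
    for g, x in a.items():
        for h, y in b.items():
            k = compose(g, h)
            out[k] = out.get(k, 0) + x * y
    return {g: v for g, v in out.items() if v != 0}

e, c12, c13 = (1, 2, 3), (2, 1, 3), (3, 2, 1)
c23, c123, c132 = (1, 3, 2), (2, 3, 1), (3, 1, 2)

alpha = {c12: 3, c23: -5, c123: 14}
beta = {e: 6, c23: 2, c132: -7}

assert gr_mul(alpha, beta) == {e: -108, c12: 81, c13: -21, c23: -30, c123: 90}
assert gr_mul(beta, alpha) == {e: -108, c12: 18, c13: 63, c23: -51,
                               c123: 84, c132: 6}
assert gr_mul(alpha, alpha) == {e: 34, c12: -70, c13: -28, c23: 42,
                                c123: -15, c132: 181}
```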

Compute in a group ring over Dih(8)

Let $\alpha = r + r^2 - 2s$ and $\beta = -3r^2 + rs$ be elements of the group ring $\mathbb{Z}[D_8]$. Compute the following: $\beta\alpha$, $\alpha^2$, $\alpha\beta - \beta\alpha$, and $\beta\alpha\beta$.

Evidently,

1. $\beta\alpha = -3 - 2r - 3r^3 + s + 6r^2s + r^3s$
2. $\alpha^2 = 5 + r^2 + 2r^3 - 2rs - 4r^2s - 2r^3s$
3. $\alpha\beta - \beta\alpha = 2r - 2r^3 - s + r^2s$
4. $\beta\alpha\beta = 15r + 10r^2 + 7r^3 - 21s - 6rs - 5r^2s$
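These can likewise be checked mechanically. The sketch below encodes $D_8$ elements as pairs $(k, e)$ standing for $r^k s^e$, with the presentation $r^4 = s^2 = 1$, $sr = r^{-1}s$ assumed (the encoding and helper names are mine):

```python
def d8_mul(a, b):
    # r^{k1} s^{e1} r^{k2} s^{e2} = r^{k1 + (-1)^{e1} k2} s^{e1+e2},
    # using the relation s r^k = r^{-k} s.
    k1, e1 = a
    k2, e2 = b
    return ((k1 + (k2 if e1 == 0 else -k2)) % 4, (e1 + e2) % 2)

def gr_mul(a, b):  # product in Z[D8]; elements are {(k, e): coefficient}
    out = {}
    for g, x in a.items():
        for h, y in b.items():
            k = d8_mul(g, h)
            out[k] = out.get(k, 0) + x * y
    return {g: v for g, v in out.items() if v != 0}

def gr_sub(a, b):
    out = dict(a)
    for g, y in b.items():
        out[g] = out.get(g, 0) - y
    return {g: v for g, v in out.items() if v != 0}

alpha = {(1, 0): 1, (2, 0): 1, (0, 1): -2}   # r + r^2 - 2s
beta = {(2, 0): -3, (1, 1): 1}               # -3r^2 + rs

ba = gr_mul(beta, alpha)
assert ba == {(0, 0): -3, (1, 0): -2, (3, 0): -3,
              (0, 1): 1, (2, 1): 6, (3, 1): 1}
assert gr_mul(alpha, alpha) == {(0, 0): 5, (2, 0): 1, (3, 0): 2,
                                (1, 1): -2, (2, 1): -4, (3, 1): -2}
assert gr_sub(gr_mul(alpha, beta), ba) == {(1, 0): 2, (3, 0): -2,
                                           (0, 1): -1, (2, 1): 1}
assert gr_mul(ba, beta) == {(1, 0): 15, (2, 0): 10, (3, 0): 7,
                            (0, 1): -21, (1, 1): -6, (2, 1): -5}
```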

Strictly upper (lower) triangular matrices are zero divisors

Let $S$ be any ring and let $n \geq 2$ be an integer. Prove that if $A$ is any strictly upper triangular matrix in $M_n(S)$ then $A^n = 0$. (A strictly upper triangular matrix is a matrix whose entries on and below the main diagonal are zero.)

Before approaching this problem, we will introduce some “structural” operations on matrices and prove some basic properties.

Definition: Let $X$ be a set.

1. Suppose $A = [a_{i,j}] \in \mathsf{Mat}_{k,n}(X)$ and $B = [b_{i,j}] \in \mathsf{Mat}_{k,m}(X)$. We define a matrix $[A | B] \in \mathsf{Mat}_{k,n+m}(X)$ as follows: $([A|B])_{i,j} = a_{i,j}$ if $1 \leq j \leq n$ and $b_{i,j-n}$ otherwise.
2. Suppose $A = [a_{i,j}] \in \mathsf{Mat}_{n,k}(X)$ and $B = [b_{i,j}] \in \mathsf{Mat}_{m,k}(X)$. We define a matrix $\left[ \frac{A}{B} \right] \in \mathsf{Mat}_{n+m,k}(X)$ as follows: $\left(\left[ \frac{A}{B} \right]\right)_{i,j} = a_{i,j}$ if $1 \leq i \leq n$ and $b_{i-n,j}$ otherwise.

Lemma: Let $X$ be a set, $A = [a_{i,j}] \in \mathsf{Mat}_{n,k}(X)$, $B = [b_{i,j}] \in \mathsf{Mat}_{n,\ell}(X)$, $C = [c_{i,j}] \in \mathsf{Mat}_{m,k}(X)$, and $D = [d_{i,j}] \in \mathsf{Mat}_{m,\ell}(X)$. Then

 $\left[ \dfrac{[A | B]}{[C | D]} \right] = \left[ \left[ \dfrac{A}{C} \right] \bigg| \left[ \dfrac{B}{D} \right] \right].$

Proof: Let $\mathcal{S}$ denote the matrix on the left hand side of the equals sign and $\mathcal{T}$ the matrix on the right. We consider four possibilities for $(i,j)$.

1. Suppose $0 < i \leq n$ and $0 < j \leq k$. Then $(\mathcal{S})_{i,j} = a_{i,j} = (\mathcal{T})_{i,j}$.
2. Suppose $0 < i \leq n$ and $k < j \leq k+\ell$. Then $(\mathcal{S})_{i,j} = b_{i,j-k} = (\mathcal{T})_{i,j}$.
3. Suppose $n < i \leq n+m$ and $0 < j \leq k$. Then $(\mathcal{S})_{i,j} = c_{i-n,j} = (\mathcal{T})_{i,j}$.
4. Suppose $n < i \leq n+m$ and $k < j \leq k+\ell$. Then $(\mathcal{S})_{i,j} = d_{i-n,j-k} = (\mathcal{T})_{i,j}$.

Thus $\mathcal{S} = \mathcal{T}$. $\square$

Since these two operators “abide”, we will drop the inner brackets and write (for example) $\left[ \dfrac{A}{C} \bigg| \dfrac{B}{D} \right]$ for brevity.

Lemma: Let $R$ be a ring.

1. If $A = [a_{i,j}] \in \mathsf{Mat}_{n,k}(R)$, $B = [b_{i,j}] \in \mathsf{Mat}_{k,m}(R)$, and $C = [c_{i,j}] \in \mathsf{Mat}_{k,\ell}(R)$, then $A \cdot [B|C] = [A \cdot B | A \cdot C]$.
2. If $A = [a_{i,j}] \in \mathsf{Mat}_{n,k}(R)$, $B = [b_{i,j}] \in \mathsf{Mat}_{m,k}(R)$, and $C = [c_{i,j}] \in \mathsf{Mat}_{k,\ell}(R)$, then $\left[ \frac{A}{B} \right] \cdot C = \left[ \frac{A \cdot C}{B \cdot C} \right]$.

Proof: The $(i,j)$ entry of $A \cdot [B|C]$ is $\sum_{p=1}^k a_{i,p}([B|C])_{p,j}$. If $0 < j \leq m$, then this sum is $\sum_{p=1}^k a_{i,p}b_{p,j} = (A \cdot B)_{i,j}$. If $m < j \leq m+\ell$, then this sum is $\sum_{p=1}^k a_{i,p}c_{p,j-m} = (A \cdot C)_{i,j-m}$. Thus $(A \cdot [B|C])_{i,j} = ([A \cdot B|A \cdot C])_{i,j}$ for all $(i,j)$. The proof of the second statement is analogous. $\square$

Lemma: Let $R$ be a ring. If $A = [a_{i,j}] \in \mathsf{Mat}_{n,k}(R)$, $B = [b_{i,j}] \in \mathsf{Mat}_{n,\ell}(R)$, $C = [c_{i,j}] \in \mathsf{Mat}_{k,m}(R)$, and $D = [d_{i,j}] \in \mathsf{Mat}_{\ell,m}(R)$, then $[A|B] \cdot \left[ \dfrac{C}{D} \right] = AC + BD$. Proof: For each $(i,j)$, note the following.

$$\begin{aligned} \left( [A|B] \cdot \left[ \frac{C}{D} \right] \right)_{i,j} &= \sum_{p=1}^{k+\ell} \left( [A|B] \right)_{i,p} \left( \left[ \frac{C}{D} \right] \right)_{p,j} \\ &= \left( \sum_{p=1}^{k} \left( [A|B] \right)_{i,p} \left( \left[ \frac{C}{D} \right] \right)_{p,j} \right) + \left( \sum_{p=k+1}^{k+\ell} \left( [A|B] \right)_{i,p} \left( \left[ \frac{C}{D} \right] \right)_{p,j} \right) \\ &= \left( \sum_{p=1}^k a_{i,p}c_{p,j} \right) + \left( \sum_{p=k+1}^{k+\ell} b_{i,p-k}d_{p-k,j} \right) = (A \cdot C)_{i,j} + (B \cdot D)_{i,j} = (A \cdot C + B \cdot D)_{i,j} \end{aligned}$$

Thus the two matrices are equal. $\square$

Lemma: Let $R$ be a ring. Let $A_1 \in \mathsf{Mat}_{n,k}(R)$, $B_1 \in \mathsf{Mat}_{n,\ell}(R)$, $C_1 \in \mathsf{Mat}_{t,k}(R)$, $D_1 \in \mathsf{Mat}_{t,\ell}(R)$, $A_2 \in \mathsf{Mat}_{k,m}(R)$, $B_2 \in \mathsf{Mat}_{k,p}(R)$, $C_2 \in \mathsf{Mat}_{\ell,m}(R)$, and $D_2 \in \mathsf{Mat}_{\ell,p}(R)$. Then

 $\left[ \dfrac{A_1 | B_1}{C_1 | D_1} \right] \cdot \left[ \dfrac{A_2 | B_2}{C_2 | D_2} \right] = \left[ \dfrac{A_1A_2 + B_1C_2 | A_1B_2 + B_1D_2}{C_1A_2 + D_1C_2 | C_1B_2 + D_1D_2} \right].$

Proof: Using the previous lemmas, we have the following.

$$\begin{aligned} \left[ \dfrac{A_1 | B_1}{C_1 | D_1} \right] \cdot \left[ \dfrac{A_2 | B_2}{C_2 | D_2} \right] &= \left[ \dfrac{A_1 | B_1}{C_1 | D_1} \right] \cdot \left[ \left[ \dfrac{A_2}{C_2} \right] \bigg| \left[ \dfrac{B_2}{D_2} \right] \right] = \left[ \left[ \dfrac{A_1 | B_1}{C_1 | D_1} \right] \cdot \left[ \dfrac{A_2}{C_2} \right] \bigg| \left[ \dfrac{A_1 | B_1}{C_1 | D_1} \right] \cdot \left[ \dfrac{B_2}{D_2} \right] \right] \\ &= \left[ \left[ \dfrac{[A_1|B_1] \cdot \left[ \frac{A_2}{C_2} \right]}{[C_1|D_1] \cdot \left[ \frac{A_2}{C_2} \right]} \right] \Bigg| \left[ \dfrac{[A_1|B_1] \cdot \left[ \frac{B_2}{D_2} \right]}{[C_1|D_1] \cdot \left[ \frac{B_2}{D_2} \right]} \right] \right] \\ &= \left[ \dfrac{A_1A_2 + B_1C_2 | A_1B_2 + B_1D_2}{C_1A_2 + D_1C_2 | C_1B_2 + D_1D_2} \right]. \qquad \square \end{aligned}$$
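The block-multiplication identity can be sanity-checked on concrete matrices. This sketch (pure-Python lists of rows; the helper names and the particular block dimensions are arbitrary choices of mine) builds random integer blocks, multiplies the assembled matrices, and compares against the blockwise formula:

```python
import random

random.seed(0)

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(len(X[0]))] for i in range(len(X))]

def hstack(X, Y):  # [X | Y]
    return [rx + ry for rx, ry in zip(X, Y)]

def vstack(X, Y):  # [X / Y]
    return X + Y

def rnd(rows, cols):
    return [[random.randint(-9, 9) for _ in range(cols)] for _ in range(rows)]

# Block dimensions: n=2, k=3, l=2, t=1, m=2, p=1.
A1, B1, C1, D1 = rnd(2, 3), rnd(2, 2), rnd(1, 3), rnd(1, 2)
A2, B2, C2, D2 = rnd(3, 2), rnd(3, 1), rnd(2, 2), rnd(2, 1)

left = mul(vstack(hstack(A1, B1), hstack(C1, D1)),
           vstack(hstack(A2, B2), hstack(C2, D2)))
right = vstack(
    hstack(add(mul(A1, A2), mul(B1, C2)), add(mul(A1, B2), mul(B1, D2))),
    hstack(add(mul(C1, A2), mul(D1, C2)), add(mul(C1, B2), mul(D1, D2))))

assert left == right
```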

We now introduce another definition.

Definition: Let $R$ be a ring, $n \geq 2$, and $1 \leq k \leq n$. A matrix $M \in \mathsf{Mat}_n(R)$ is called $k$-strictly upper triangular if $M = \left[ \dfrac{0 | M^\prime}{0_k | 0} \right]$ where $0_k$ is the $k \times k$ zero matrix, $M^\prime$ has dimensions $(n-k) \times (n-k)$, and $M^\prime$ is upper triangular.

For example, 1-strictly upper triangular matrices and strictly upper triangular matrices are the same, and an $n \times n$ matrix is zero if and only if it is $n$-strictly upper triangular.

Lemma: Let $R$ be a ring, $n \geq 2$, and $N$ a square matrix over $R$ of dimension $n$. If $N$ is strictly upper triangular and $N = \left[ \dfrac{N_1 | N_2}{N_3 | N_4} \right]$, where $N_4$ is square, then $N_4$ is strictly upper triangular. Proof: The elements on or below the main diagonal of $N_4$ are on or below the main diagonal of $N$, hence are zero. $\square$

Lemma: Let $R$ be a ring, $n \geq 2$, and $1 \leq k \leq n$. If $A = \left[ \begin{array}{c|c} 0 & A^\prime \\ \hline 0_k & 0 \end{array} \right]$ is $k$-strictly upper triangular and $A^\prime$ is strictly upper triangular, then $A$ is $k+1$-strictly upper triangular. Proof: We have $A^\prime = \left[ \begin{array}{c|c} 0 & A^{\prime\prime} \\ \hline 0_1 & 0 \end{array} \right]$, where $A^{\prime\prime}$ is upper triangular and $0_1$ has dimension $1 \times 1$. Thus we have the following.

$A = \left[ \begin{array}{c|c|c} 0 & 0 & A^{\prime\prime} \\ \hline 0 & 0_1 & 0 \\ \hline 0_k & 0 & 0 \end{array} \right] = \left[ \begin{array}{c|c} 0 & A^{\prime\prime} \\ \hline 0_{k+1} & 0 \end{array} \right],$

so that $A$ is $(k+1)$-strictly upper triangular. $\square$

Lemma: Let $R$ be a ring, let $n \geq 2$, and let $M,N \in \mathsf{Mat}_n(R)$. If $M$ is upper triangular and $N$ is strictly upper triangular, then $MN$ is strictly upper triangular. Proof: Recall that $(MN)_{i,j} = \sum_{k=1}^n m_{i,k}n_{k,j}$. Suppose $i \geq j$, and consider the $k$th term. If $k \geq j$, then $n_{k,j} = 0$ since $N$ is strictly upper triangular; if $k < j$, then $k < i$, so $m_{i,k} = 0$ since $M$ is upper triangular. Thus $(MN)_{i,j} = 0$, so that $MN$ is strictly upper triangular. $\square$

Lemma: Let $R$ be a ring, let $n \geq 2$, and let $1 \leq k < n$. If $M,N \in \mathsf{Mat}_n(R)$ are such that $M$ is $k$-strictly upper triangular and $N$ is strictly upper triangular, then $MN$ is $(k+1)$-strictly upper triangular. Proof: Write $M = \left[ \begin{array}{c|c} 0 & M^\prime \\ \hline 0_k & 0 \end{array} \right]$ and $N = \left[ \begin{array}{c|c} N_1 & N_2 \\ \hline N_3 & N_4 \end{array} \right]$, where $N_1$ is $k \times k$ and $M^\prime$ and $N_4$ have dimension $(n-k) \times (n-k)$. Every entry of $N_3$ lies below the main diagonal of $N$, so $N_3 = 0$; moreover $N_4$ is strictly upper triangular, as noted above. Evidently, $MN = \left[ \begin{array}{c|c} M^\prime N_3 & M^\prime N_4 \\ \hline 0_k & 0 \end{array} \right] = \left[ \begin{array}{c|c} 0 & M^\prime N_4 \\ \hline 0_k & 0 \end{array} \right]$. By the previous lemma, since $M^\prime$ is upper triangular and $N_4$ is strictly upper triangular, $M^\prime N_4$ is strictly upper triangular; thus, by the earlier lemma, $MN$ is $(k+1)$-strictly upper triangular. $\square$

Now to the main result.

If $A$ is an $n \times n$ strictly upper triangular matrix over $S$, then $A$ is 1-strictly upper triangular, and by induction using the previous lemma, $A^k$ is $k$-strictly upper triangular for each $1 \leq k \leq n$. In particular, $A^n$ is $n$-strictly upper triangular; that is, $A^n = 0$.
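The induction can be watched happening numerically. This sketch (plain Python lists, with an arbitrary choice of nonzero entries above the diagonal) checks at each step that $A^k$ is $k$-strictly upper triangular, i.e. that every entry with $j - i < k$ vanishes:

```python
n = 5
# A strictly upper triangular matrix with arbitrary nonzero entries above the diagonal.
A = [[(i + j + 1) if j > i else 0 for j in range(n)] for i in range(n)]

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[int(i == j) for j in range(n)] for i in range(n)]  # identity matrix
for k in range(1, n + 1):
    P = mul(P, A)  # now P = A^k
    # A^k is k-strictly upper triangular: entries with j - i < k are zero.
    assert all(P[i][j] == 0 for i in range(n) for j in range(n) if j - i < k)

assert P == [[0] * n for _ in range(n)]  # A^n = 0
```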

The center of a matrix ring over a commutative ring is precisely the scalar matrices

Let $R$ be a commutative ring with 1. Prove that the center of the ring $M_n(R)$ is the set of scalar matrices.

Recall the definition and properties of $E_{i,j}$ from a previous exercise.

We begin with a lemma.

Lemma: $E_{p,q}E_{s,t} = E_{p,t}$ if $q = s$ and $0$ otherwise. Proof: By the previous exercise, the $p$th row of $E_{p,q}E_{s,t}$ is the $q$th row of $E_{s,t}$, and all other rows are 0. If $q = s$, the $q$th row of $E_{s,t}$ has a 1 in column $t$ and zeroes elsewhere; thus $E_{p,q}E_{s,t} = E_{p,t}$. If $q \neq s$, the $q$th row of $E_{s,t}$ is all zeroes; thus $E_{p,q}E_{s,t} = 0$. $\square$

Now suppose $B = [b_{i,j}] \in Z(M_n(R))$.

By the previous exercise, note that the $(p,t)$ entry of $E_{p,q}BE_{s,t} = E_{p,q}E_{s,t}B$ is $b_{q,s}$. By the lemma, if $q \neq s$, then $b_{q,s} = 0$. Thus $B$ is a diagonal matrix. Now if $q = s$, then the $(p,t)$ entry of $E_{p,t}B$ is $b_{q,q}$ on one hand, and $b_{t,t}$ on the other, since the $p$th row of $E_{p,t}B$ is the $t$th row of $B$. Thus $b_{q,q} = b_{t,t}$ for all choices of $q$ and $t$. Hence $B = bI$ for some $b \in R$, and we have $Z(M_n(R)) \subseteq \{ rI \ |\ r \in R \}$.

Conversely, since $R$ is commutative, $(rI)A = rA = Ar = A(rI)$ for every $A \in M_n(R)$. Thus $Z(M_n(R)) = \{ rI \ |\ r \in R \}$.
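The argument can be illustrated numerically over $\mathbb{Z}$. In this sketch (0-indexed matrix units; the helper names are mine), a matrix commuting with every $E_{i,j}$ is tested for: a scalar matrix passes, while a non-scalar diagonal matrix already fails against some $E_{i,j}$:

```python
n = 3

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def E(i, j):  # matrix unit with a single 1 in position (i, j), 0-indexed
    return [[int((a, b) == (i, j)) for b in range(n)] for a in range(n)]

def commutes_with_all_units(B):
    return all(mul(E(i, j), B) == mul(B, E(i, j))
               for i in range(n) for j in range(n))

scalar = [[5 * int(i == j) for j in range(n)] for i in range(n)]      # 5I
diag = [[(i + 1) * int(i == j) for j in range(n)] for i in range(n)]  # diag(1,2,3)

assert commutes_with_all_units(scalar)
assert not commutes_with_all_units(diag)  # non-scalar diagonal is not central
```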

Definition and properties of matrices with a single nonzero entry

Let $S$ be a ring with identity $1 \neq 0$. Let $n$ be a positive integer and let $A = [a_{i,j}]$ be an $n \times n$ matrix over $S$. Let $E_{i,j}$ be the element of $M_n(S)$ whose $(i,j)$ entry is 1 and whose other entries are all 0.

1. Prove that $E_{i,j}A$ is the matrix whose $i$th row is the $j$th row of $A$ and all other rows are 0.
2. Prove that $AE_{i,j}$ is the matrix whose $j$th column is the $i$th column of $A$ and all other columns are 0.
3. Deduce that $E_{p,q}AE_{r,s}$ is the matrix whose $(p,s)$ entry is $a_{q,r}$ and all other entries are 0.

1. By definition, $E_{i,j}A = [c_{p,q}]$, where $c_{p,q} = \sum_{k=1}^n e_{p,k}a_{k,q}$. Note that if $p \neq i$, then $e_{p,k} = 0$, so that $c_{p,q} = 0$. If $p = i$, then $c_{p,q} = a_{j,q}$; thus the $i$th row of $E_{i,j}A$ is the $j$th row of $A$, and all other entries are 0.
2. The proof for $AE_{i,j}$ is analogous, working with columns in place of rows.
3. By part 1, $E_{p,q}A$ is the matrix whose $p$th row is the $q$th row of $A$ and whose other rows are 0. By part 2, $E_{p,q}AE_{r,s}$ is then the matrix whose $s$th column is the $r$th column of $E_{p,q}A$ and whose other columns are 0. That column is zero except in the $p$th row, where its entry is $a_{q,r}$; thus $E_{p,q}AE_{r,s}$ has $(p,s)$ entry $a_{q,r}$ and all other entries 0.
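All three properties are easy to see on a concrete matrix. This sketch keeps the text's 1-indexed convention for $E_{i,j}$ (the helper names are mine):

```python
n = 3
A = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def E(i, j):  # 1-indexed, matching the text: 1 in position (i, j), 0 elsewhere
    return [[int((a + 1, b + 1) == (i, j)) for b in range(n)] for a in range(n)]

# 1. E_{1,2} A: the 2nd row of A appears as row 1; all other rows are 0.
assert mul(E(1, 2), A) == [[4, 5, 6], [0, 0, 0], [0, 0, 0]]
# 2. A E_{1,2}: the 1st column of A appears as column 2; other columns are 0.
assert mul(A, E(1, 2)) == [[0, 1, 0], [0, 4, 0], [0, 7, 0]]
# 3. E_{2,1} A E_{3,2}: the (2,2) entry is a_{1,3} = 3; everything else is 0.
assert mul(mul(E(2, 1), A), E(3, 2)) == [[0, 0, 0], [0, 3, 0], [0, 0, 0]]
```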