## Tag Archives: direct sum

### Facts about power series of matrices

Let $G(x) = \sum_{k \in \mathbb{N}} \alpha_k x^k$ be a power series over $\mathbb{C}$ with radius of convergence $R$. Let $A$ be an $n \times n$ matrix over $\mathbb{C}$, and let $P$ be a nonsingular matrix. Prove the following.

1. If $G(A)$ converges, then $G(P^{-1}AP)$ converges, and $G(P^{-1}AP) = P^{-1}G(A)P$.
2. If $A = B \oplus C$ and $G(A)$ converges, then $G(B)$ and $G(C)$ converge and $G(B \oplus C) = G(B) \oplus G(C)$.
3. If $D$ is a diagonal matrix with diagonal entries $d_i$, then $G(D)$ converges, $G(d_i)$ converges for each $d_i$, and $G(D)$ is diagonal with diagonal entries $G(d_i)$.

Suppose $G(A)$ converges. Then (by definition) the sequence of partial sums $G_N(A) = \sum_{k=0}^N \alpha_k A^k$ converges entrywise. Let $G_N(A) = [a_{i,j}^N]$, $P = [p_{i,j}]$, and $P^{-1} = [q_{i,j}]$. Now $G_N(P^{-1}AP) = P^{-1}G_N(A)P$ $= [\sum_\ell \sum_k q_{i,k} a_{k,\ell}^N p_{\ell,j}]$. That is, the $(i,j)$ entry of $G_N(P^{-1}AP)$ is $\sum_\ell \sum_k q_{i,k} a_{k,\ell}^N p_{\ell,j}$. Since each sequence $a_{k,\ell}^N$ converges (in $N$), this finite sum converges as well. In particular, $G(P^{-1}AP)$ converges (again by definition). Now since $G_N(P^{-1}AP) = P^{-1}G_N(A)P$ for each $N$, for each $(i,j)$ the corresponding entrywise sequences agree term by term, and so have the same limit. Thus $G(P^{-1}AP) = P^{-1}G(A)P$.

Now suppose $A = B \oplus C$. We have $G_N(B \oplus C) = \sum_{k=0}^N \alpha_k (B \oplus C)^k$ $= \sum_{k=0}^N \alpha_k (B^k \oplus C^k)$ $= (\sum_{k = 0}^N \alpha_k B^k) \oplus (\sum_{k=0}^N \alpha_k C^k)$ $= G_N(B) \oplus G_N(C)$. Since $G_N(A)$ converges in each entry, each of $G_N(B)$ and $G_N(C)$ converges in each entry. So $G(B)$ and $G(C)$ converge. Again, because for each $(i,j)$ the corresponding sequences $G_N(A)_{i,j}$ and $(G_N(B) \oplus G_N(C))_{i,j}$ are the same, they converge to the same limit, and thus $G(B \oplus C) = G(B) \oplus G(C)$.

Finally, suppose $D$ is diagonal. Then in fact we have $D = \bigoplus_{t=1}^n [d_t]$, and so by the previous part, $G(D) = \bigoplus_{t=1}^n G(d_t)$. In particular, $G(d_t)$ converges, and $G(D)$ is diagonal with diagonal entries $G(d_t)$ as desired.
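These three facts can be sanity-checked numerically for the exponential series $G(x) = \sum_k x^k/k!$; here is a minimal sketch in Python with NumPy (the matrices, truncation degree, and tolerances are arbitrary choices of mine):

```python
import numpy as np

def G_N(A, N=60):
    """Partial sum of the exponential series sum_k A^k / k! up to degree N."""
    total = np.zeros_like(A, dtype=float)
    term = np.eye(A.shape[0])
    for k in range(N + 1):
        total += term
        term = term @ A / (k + 1)   # next term A^(k+1)/(k+1)!
    return total

B = np.array([[0.0, 1.0], [0.0, 0.0]])
C = np.array([[2.0]])
A = np.block([[B, np.zeros((2, 1))], [np.zeros((1, 2)), C]])  # A = B (+) C
P = np.array([[1.0, 2.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 1.0]])  # invertible

# Part 1: G(P^{-1} A P) = P^{-1} G(A) P
lhs = G_N(np.linalg.inv(P) @ A @ P)
rhs = np.linalg.inv(P) @ G_N(A) @ P
assert np.allclose(lhs, rhs)

# Part 2: G(B (+) C) = G(B) (+) G(C)
GBGC = np.block([[G_N(B), np.zeros((2, 1))], [np.zeros((1, 2)), G_N(C)]])
assert np.allclose(G_N(A), GBGC)

# Part 3: for diagonal D, G(D) is diagonal with diagonal entries G(d_i)
D = np.diag([0.5, -1.0, 3.0])
assert np.allclose(G_N(D), np.diag(np.exp(np.diag(D))))
```

Since each partial-sum identity holds exactly (they are finite sums), the checks pass to within floating-point tolerance.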

### The minimal polynomial of a direct sum is the least common multiple of minimal polynomials

Let $M = A \oplus B = \begin{bmatrix} A & 0 \\ 0 & B \end{bmatrix}$ be a direct sum of square matrices $A$ and $B$. Prove that the minimal polynomial of $M$ is the least common multiple of the minimal polynomials of $A$ and $B$.

Given a linear transformation $T$ on $V$, we let $\mathsf{Ann}_T(V)$ denote the annihilator in $F[x]$ of $V$ under the action induced by $x \cdot v = T(v)$.

Let $p(x) \in \mathsf{Ann}_M(V)$. If $p(x) = \sum r_ix^i$, we have $\sum r_iM^i = 0$ as a linear transformation. Note that $M^k = \begin{bmatrix} A^k & 0 \\ 0 & B^k \end{bmatrix}$. So we have $\begin{bmatrix} \sum r_i A^i & 0 \\ 0 & \sum r_i B^i \end{bmatrix} = 0$, and thus $\sum r_i A^i = 0$ and $\sum r_i B^i = 0$. So $p(x) \in \mathsf{Ann}_A(W_1) \cap \mathsf{Ann}_B(W_2)$, where $V = W_1 \oplus W_2$. Conversely, if $p(x) \in \mathsf{Ann}_A(W_1) \cap \mathsf{Ann}_B(W_2)$, then $p(A) = 0$ and $p(B) = 0$ as linear transformations. Then $p(M) = \sum r_i M^i$ $= \begin{bmatrix} \sum r_iA^i & 0 \\ 0 & \sum r_iB^i \end{bmatrix}$ $= 0$, so that $p(x) \in \mathsf{Ann}_M(V)$. So we have $\mathsf{Ann}_M(V) = \mathsf{Ann}_A(W_1) \cap \mathsf{Ann}_B(W_2)$.

That is, $(m_M) = (m_A) \cap (m_B)$, where $m_T$ is the minimal polynomial of $T$.

Lemma: In a principal ideal domain $R$, if $(a) \cap (b) = (c)$, then $c$ is a least common multiple of $a$ and $b$. Proof: Certainly $c \in (a)$ and $c \in (b)$, so that $c$ is a multiple of both $a$ and $b$. If $d$ is a multiple of $a$ and of $b$, then $d \in (a) \cap (b) = (c)$, so that $d$ is a multiple of $c$. $\square$

The result then follows.
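As a concrete illustration (a numerical sketch; the example matrices are my own choices), take $A = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$ with $m_A = (x-1)^2$ and $B = [2]$ with $m_B = x - 2$, so the minimal polynomial of $M = A \oplus B$ should be $\mathsf{lcm}(m_A, m_B) = (x-1)^2(x-2)$:

```python
import numpy as np

def poly_at(coeffs, M):
    """Evaluate a polynomial (coefficients highest degree first) at a square matrix M."""
    n = M.shape[0]
    result = np.zeros((n, n))
    for c in coeffs:                      # Horner's scheme
        result = result @ M + c * np.eye(n)
    return result

A = np.array([[1.0, 1.0], [0.0, 1.0]])    # m_A = (x-1)^2
B = np.array([[2.0]])                     # m_B = x - 2
M = np.block([[A, np.zeros((2, 1))], [np.zeros((1, 2)), B]])

m_A = [1.0, -2.0, 1.0]                    # (x-1)^2
m_B = [1.0, -2.0]                         # x - 2
lcm = [1.0, -4.0, 5.0, -2.0]              # (x-1)^2 (x-2) = x^3 - 4x^2 + 5x - 2

# The lcm annihilates M, but neither minimal polynomial alone does.
assert np.allclose(poly_at(lcm, M), 0)
assert not np.allclose(poly_at(m_A, M), 0)   # fails on the B block
assert not np.allclose(poly_at(m_B, M), 0)   # fails on the A block
```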

### Every torsion module over a principal ideal domain is the direct sum of its p-primary components

Let $R$ be a principal ideal domain and let $N$ be a torsion $R$-module. Prove that the $p$-primary component of $N$ is a submodule for every prime $p \in R$. Prove that $N$ is the direct sum of its $p$-primary components.

We proved this in a previous exercise.

### Over an integral domain, the rank of a direct sum is the sum of the ranks

Let $R$ be an integral domain and let $A$ and $B$ be (left, unital) $R$-modules. Prove that $\mathsf{rank}(A \oplus B) = \mathsf{rank}(A) + \mathsf{rank}(B)$, where the rank of a module is the largest possible cardinality of a linearly independent subset.

Suppose $A$ has rank $n$ and $B$ has rank $m$. By the previous exercise, there exist free submodules $A_1 \subseteq A$ and $B_1 \subseteq B$ having free ranks $n$ and $m$, respectively, such that the quotients $A/A_1$ and $B/B_1$ are torsion. Note that $A_1 \oplus B_1 \subseteq A \oplus B$ is free. By this previous exercise, we have $(A \oplus B)/(A_1 \oplus B_1) \cong_R (A/A_1) \oplus (B/B_1)$. Note that since $R$ is an integral domain, finite direct sums of torsion modules are torsion. Thus $(A \oplus B)/(A_1 \oplus B_1)$ is torsion. Since $A_1 \oplus B_1$ is free and has free rank $n+m$, by this previous exercise, $A \oplus B$ has rank $n+m$.

### Show that a given stable subspace does not have a stable complement

Let $F = \mathbb{R}$ and $V = \mathbb{R}^2$. Let $v_1 = (1,0)$ and $v_2 = (0,1)$ be the standard basis $B$ for $V$. Let $\varphi : V \rightarrow V$ be the linear transformation whose matrix with respect to this basis (on both sides) is $A = \begin{bmatrix} 2 & 1 \\ 0 & 2 \end{bmatrix}$. Prove that the subspace $W$ spanned by $v_1$ is stable under $\varphi$. Prove that there is no subspace $W^\prime$, also stable under $\varphi$, such that $V = W \oplus W^\prime$.

Note that $\begin{bmatrix} 2 & 1 \\ 0 & 2 \end{bmatrix} \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 2 \\ 0 \end{bmatrix} = 2v_1$. Thus $\varphi[W] \subseteq W$, and so $W$ is stable under $\varphi$.

If there does exist a subspace $W^\prime$, stable under $\varphi$, such that $V = W \oplus W^\prime$, then by this previous exercise, there is a basis $E \subseteq V$ with respect to which the matrix realization of $\varphi$ is (block) diagonal. That is, there exists an invertible matrix $P = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$ such that $PAP^{-1} = D$ is diagonal. Evidently, we have $PAP^{-1} = \begin{bmatrix} 2 - \frac{ac}{ad-bc} & \frac{a^2}{ad-bc} \\ \frac{-c^2}{ad-bc} & 2 + \frac{ac}{ad-bc} \end{bmatrix}$. If this matrix is diagonal, then $a = c = 0$. This is a contradiction, however, since then $P$ is not invertible. So no such subspace $W^\prime$ exists.
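Equivalently: a $\varphi$-stable complement of $W$ would be spanned by an eigenvector of $A$ independent of $v_1$, but the eigenspace of $A$ for its only eigenvalue $2$ is one-dimensional. A quick numerical check (a sketch with NumPy):

```python
import numpy as np

A = np.array([[2.0, 1.0], [0.0, 2.0]])

# The only eigenvalue is 2, with algebraic multiplicity 2...
eigenvalues = np.linalg.eigvals(A)
assert np.allclose(eigenvalues, [2.0, 2.0])

# ...but the eigenspace ker(A - 2I) is one-dimensional, since A - 2I has rank 1.
# So there is no second independent eigenvector to span a stable complement W'.
rank = np.linalg.matrix_rank(A - 2 * np.eye(2))
assert rank == 1
```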

### A linear transformation on a finite dimensional vector space which has a stable subspace decomposes as a direct sum

Let $V$ be a finite dimensional vector space over a field $F$ and let $\varphi : V \rightarrow V$ be a linear transformation. A subspace $W \subseteq V$ is called $\varphi$-stable if $\varphi[W] \subseteq W$. Prove that if $\varphi$ has a stable subspace $W$, then $\varphi$ decomposes as a direct sum of linear transformations. Moreover, show that if each summand is nonsingular, then $\varphi$ is nonsingular.

Conversely, prove that if $\alpha \oplus \beta$ is nonsingular over a finite dimensional vector space, then $\alpha$ and $\beta$ are nonsingular. Prove that this statement is not true over an infinite dimensional vector space.

Suppose $\varphi[W] \subseteq W$. Letting $\pi : V \rightarrow V/W$ denote the natural projection, $\pi \circ \varphi : V \rightarrow V/W$ is a linear transformation. Moreover, since $W$ is $\varphi$-stable, we have $W \subseteq \mathsf{ker}\ \pi \circ \varphi$. By the generalized first isomorphism theorem, we have an induced linear transformation $\overline{\varphi} : V/W \rightarrow V/W$ given by $\overline{\varphi}(v+W) = \varphi(v) + W$. It is clear that the restriction $\varphi|_W : W \rightarrow W$ is a linear transformation. Thus $\varphi$ determines a direct sum of linear transformations, namely $\varphi|_W \oplus \overline{\varphi}$ acting on $W \oplus V/W$. (Note that $W$ need not have a $\varphi$-stable complement, so this direct sum need not be similar to $\varphi$ itself.)

Suppose $\alpha : A \rightarrow A$ and $\beta : B \rightarrow B$ are nonsingular linear transformations; that is, $\mathsf{ker}\ \alpha = \mathsf{ker}\ \beta = 0$. If $(a,b) \in \mathsf{ker}\ \alpha \oplus \beta$, then $a \in \mathsf{ker}\ \alpha$ and $b \in \mathsf{ker}\ \beta$, and we have $(a,b) = 0$. Thus $\alpha \oplus \beta$ is nonsingular.
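In matrix terms (a sketch; the example matrices are arbitrary choices of mine): if $\alpha$ and $\beta$ are realized by invertible matrices, then $\alpha \oplus \beta$ is realized by the block diagonal matrix, whose determinant is the product of the two determinants and hence nonzero:

```python
import numpy as np

alpha = np.array([[1.0, 2.0], [0.0, 3.0]])   # nonsingular: det = 3
beta = np.array([[5.0]])                     # nonsingular: det = 5

# Matrix of alpha (+) beta with respect to a basis adapted to A (+) B.
direct_sum = np.block([[alpha, np.zeros((2, 1))], [np.zeros((1, 2)), beta]])

# det(alpha (+) beta) = det(alpha) * det(beta) != 0, so the kernel is trivial.
assert np.isclose(np.linalg.det(direct_sum), 15.0)
```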

Now suppose $V$ is finite dimensional, $W \subseteq V$ is $\varphi$-stable for some linear transformation $\varphi$, and let $\varphi|_W : W \rightarrow W$ and $\overline{\varphi} : V/W \rightarrow V/W$ be the induced maps discussed above. Suppose $\varphi$ is nonsingular. Since $V$ is finite dimensional, in fact $\varphi$ is an isomorphism. This induces the following short exact sequence of vector spaces.

$0 \rightarrow W \rightarrow V \rightarrow V/W \rightarrow 0$, where the maps are the inclusion and the natural projection, and $\varphi|_W$, $\varphi$, and $\overline{\varphi}$ act compatibly on $W$, $V$, and $V/W$, respectively.

Note that $\mathsf{ker}\ \varphi|_W \subseteq \mathsf{ker}\ \varphi$, so that $\varphi|_W$ is injective (i.e. nonsingular). Again, since $W$ is finite dimensional, $\varphi|_W$ is surjective. Using part (a) of this previous exercise, $\overline{\varphi}$ is injective, that is, nonsingular.

Note that this proof strategy depends essentially on the fact that, on finite dimensional vector spaces, injectivity and surjectivity are equivalent. If we are going to find an infinite dimensional counterexample, then the subspace $W$ must also be infinite dimensional and the induced mapping $\varphi|_W$ must be injective but not surjective.

Consider the vector space $V = \bigoplus_\mathbb{N} F$. Let $\varphi : V \rightarrow V$ be the “right shift operator” given by $\varphi(a)_i = 0$ if $i = 0$ and $\varphi(a)_i = a_{i-1}$ otherwise. Let $W = 0 \oplus \bigoplus_{\mathbb{N}^+} F$; that is, the set of all tuples in $V$ whose first coordinate is 0. Certainly $W$ is a subspace of $V$, and moreover is stable under $\varphi$. We also have $V/W \cong F$. Now $\varphi|_W$ is injective but not surjective, since every element of $\varphi[W]$ has its first two coordinates equal to 0. Suppose $v+W \in V/W$; then $\overline{\varphi}(v+W) = \varphi(v)+W$. Since $\varphi(v) \in W$, in fact $\overline{\varphi} = 0$. So $\overline{\varphi}$ is singular.
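This counterexample can be modeled concretely by representing elements of $\bigoplus_\mathbb{N} F$ as finitely supported tuples (a Python sketch; the helper names are my own):

```python
def shift(a):
    """Right shift: phi(a)_0 = 0 and phi(a)_i = a_{i-1}; trailing zeros are omitted."""
    return (0,) + a

def in_W(a):
    """W consists of the tuples whose first coordinate is 0."""
    return len(a) == 0 or a[0] == 0

v = (3, 1, 4)                       # an element of V = (+)_N F
assert in_W(shift(v))               # phi(v) always lands in W, so phi-bar = 0 on V/W

w = (0, 7, 2)                       # an element of W
assert in_W(w) and in_W(shift(w))   # W is stable under phi

# phi|_W is injective but not surjective: (0, 1) lies in W but not in phi[W],
# since every element of phi[W] has its first *two* coordinates equal to 0.
target = (0, 1)
assert in_W(target) and target[1] != 0
```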

### As an F-vector space, an infinite direct sum of F has strictly smaller dimension than an infinite direct power of F over the same index set

Let $F$ be a field. Prove that a vector space $V$ over $F$ having basis $B$ (regardless of the cardinality of $B$) is isomorphic as a vector space to $\bigoplus_B F$. Prove that $\prod_B F$ is also an $F$-vector space which has strictly larger dimension than that of $\bigoplus_B F$.

(So a free module on any set is isomorphic to a direct sum. We’ve never gotten around to proving this in the best possible generality, though, so we’ll just prove it for vector spaces here.)

Note that, by the universal property of free modules, the natural injection $B \rightarrow \bigoplus_B F$ which sends $b$ to the tuple with 1 in the $b$th component and 0 elsewhere induces a vector space homomorphism $\varphi : V \rightarrow \bigoplus_B F$. This mapping is surjective, since its image contains the standard basis of $\bigoplus_B F$, and injective, since a nonzero kernel element would yield a nontrivial dependence relation on $B$. So $V \cong_F \bigoplus_B F$.

Certainly $\prod_B F$ is an $F$-vector space which contains $\bigoplus_B F$, so that $\mathsf{dim}\ \bigoplus_B F \leq \mathsf{dim}\ \prod_B F$. Suppose these two dimensions are in fact equal. Identify $B$ with the usual basis of $\bigoplus_B F$. By this previous exercise, there is a basis $D$ of $\prod_B F$ which contains $B$, and as argued above, $\prod_B F \cong_F \bigoplus_D F$. By our hypothesis, in fact $B$ and $D$ have the same cardinality, and so there exists a bijection $\theta : B \rightarrow D$. Now $\theta$ induces a vector space isomorphism $\Theta : \bigoplus_B F \rightarrow \prod_B F$.

However, note that $|\prod_B F| = |F|^{|B|}$, while $|\bigoplus_B F| = |\bigcup_{T \subseteq B,\ T\ \mathrm{finite}} \prod_T F|$ $\leq \sum_{|B|} |F|^{|T|}$ $= \sum_{|B|} |F|$ $= |B| \cdot |F|$, since an infinite set $B$ has exactly $|B|$ finite subsets. Provided $|B| \cdot |F| < |F|^{|B|}$ (which holds, for instance, whenever $|F| \leq |B|$, since then $|B| \cdot |F| = |B| < 2^{|B|} \leq |F|^{|B|}$), we have a contradiction. Thus the dimension of $\bigoplus_B F$ is strictly smaller than that of $\prod_B F$.

### The interaction of Hom with direct sums and direct products

Let $R$ be a ring with 1. Let $A$ be a left unital $R$-module, and let $\{B_i\}_I$ be a nonempty family of left unital $R$-modules. Prove that, as abelian groups, $\mathsf{Hom}_R(\bigoplus_I B_i, A) \cong \prod_I \mathsf{Hom}_R(B_i,A)$ and $\mathsf{Hom}_R(A, \prod_I B_i) \cong \prod_I \mathsf{Hom}_R(A,B_i)$. Prove also that if $R$ is commutative, these pairs are $R$-module isomorphic.

First we show that $\mathsf{Hom}_R(\bigoplus_I B_i, A) \cong \prod_I \mathsf{Hom}_R(B_i,A)$ as abelian groups. Recall that for each $i$, we have the canonical injection $\iota_i : B_i \rightarrow \bigoplus_I B_i$. Define for each $i \in I$ the map $\varphi_i : \mathsf{Hom}_R( \bigoplus_I B_i, A) \rightarrow \mathsf{Hom}_R(B_i,A)$ by $\varphi_i(\alpha) = \alpha \circ \iota_i$; certainly each $\varphi_i$ is well defined. By the universal property of direct products of abelian groups, there exists a unique group homomorphism $\Phi : \mathsf{Hom}_R(\bigoplus_I B_i, A) \rightarrow \prod_I \mathsf{Hom}_R(B_i, A)$ such that $\pi_i \circ \Phi = \varphi_i$, where $\pi_i$ denotes the $i$th natural projection from a direct product. We claim that this $\Phi$ is a group isomorphism.

Suppose $\alpha \in \mathsf{ker}\ \Phi$, so $\Phi(\alpha) = 0$. Then $(\pi_i \circ \Phi)(\alpha) = 0$ for each $i$, so that $\varphi_i(\alpha) = 0$ for each $i$. Thus $\alpha \circ \iota_i = 0$ for all $i$. That is, $\alpha$ vanishes on the image of each $\iota_i$; since these images generate $\bigoplus_I B_i$, we have $\alpha = 0$, and thus $\mathsf{ker}\ \Phi = 0$. So $\Phi$ is injective.

Now suppose $\psi = (\psi_i) \in \prod_I \mathsf{Hom}_R(B_i,A)$. Define $\alpha_\psi : \bigoplus_I B_i \rightarrow A$ by $\alpha_\psi((b_i)) = \sum_i \psi_i(b_i)$; this map is well defined since only finitely many terms of $(b_i)$ are nonzero. Moreover, it is clear that $\alpha_\psi$ is an $R$-module homomorphism, so in fact $\alpha_\psi \in \mathsf{Hom}_R(\bigoplus_I B_i, A)$. Note that for all $i \in I$, $\Phi(\alpha_\psi)_i(b) = \varphi_i(\alpha_\psi)(b)$ $= (\alpha_\psi \circ \iota_i)(b)$ $= \alpha_\psi(\iota_i(b))$ $= \psi_i(b)$. Thus $\Phi(\alpha_\psi)_i = \psi_i$ for all $i$, and so $\Phi(\alpha_\psi) = \psi$. Thus $\Phi$ is surjective.

So $\Phi$ is an isomorphism of abelian groups. Suppose now that $R$ is commutative, so that both $\mathsf{Hom}_R(\bigoplus_I B_i,A)$ and $\prod_I \mathsf{Hom}_R(B_i,A)$ are naturally left $R$-modules. For all $r \in R$ and all module homomorphisms $\alpha : \bigoplus_I B_i \rightarrow A$, we have $\Phi(r\alpha) = ((r \alpha) \circ \iota_i)$ $= (r(\alpha \circ \iota_i))$ $= r(\alpha \circ \iota_i)$ $= r\Phi(\alpha)$. Thus $\Phi$ is an isomorphism of left $R$-modules.
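For a concrete instance of this first isomorphism (a sketch with $R = \mathbb{Z}$, $B_i = \mathbb{Z}$ for $i \in \{1,2,3\}$, and $A = \mathbb{Z}$; the helper names are hypothetical): a homomorphism $\mathbb{Z}^3 \rightarrow \mathbb{Z}$ is determined by its values on the standard basis, and $\Phi$ sends it to the tuple of restrictions $\alpha \circ \iota_i$:

```python
# A homomorphism alpha: Z^3 -> Z is determined by the row (alpha(e_1), alpha(e_2), alpha(e_3)).
def apply_hom(row, b):
    return sum(r * x for r, x in zip(row, b))

basis = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]

def Phi(row):
    """Phi(alpha) = (alpha o iota_i)_i : restrict alpha to each summand Z."""
    return tuple(apply_hom(row, e) for e in basis)

alpha = (2, -1, 5)
assert Phi(alpha) == (2, -1, 5)   # Phi reads off the components of alpha

# alpha is recovered from Phi(alpha) (surjectivity), and a general element of
# Z^3 is a finite sum of iota_i(b_i), so Phi(alpha) = 0 forces alpha = 0 (injectivity).
b = (7, 0, -2)
assert apply_hom(alpha, b) == sum(Phi(alpha)[i] * b[i] for i in range(3))
```

So in this case the isomorphism $\mathsf{Hom}_\mathbb{Z}(\mathbb{Z}^3, \mathbb{Z}) \cong \mathbb{Z}^3$ is just "read off the coordinates".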

Next we show that $\mathsf{Hom}_R(A, \prod_I B_i) \cong \prod_I \mathsf{Hom}_R(A,B_i)$ as abelian groups. For each $i \in I$, define $\varphi_i : \mathsf{Hom}_R(A, \prod_I B_i) \rightarrow \mathsf{Hom}_R(A,B_i)$ by $\varphi_i(\alpha) = \pi_i \circ \alpha$, where $\pi_i$ denotes the $i$th canonical projection from a direct product. By the universal property of direct products, we have a unique group homomorphism $\Phi : \mathsf{Hom}_R(A,\prod_I B_i) \rightarrow \prod_I \mathsf{Hom}_R(A, B_i)$ such that $\pi_i \circ \Phi = \varphi_i$ for all $i$. We claim that this $\Phi$ is an isomorphism.

Suppose $\alpha \in \mathsf{ker}\ \Phi$. So $\Phi(\alpha) = 0$, and thus $(\pi_i \circ \Phi)(\alpha) = 0$ for all $i$. Thus $\varphi_i(\alpha) = 0$ for all $i$, so $\pi_i \circ \alpha = 0$ for all $i$. That is, every coordinate of any element in the image of $\alpha$ is 0. So $\alpha = 0$. Thus $\mathsf{ker}\ \Phi = 0$, and so $\Phi$ is injective.

Now let $\psi = (\psi_i) \in \prod_I \mathsf{Hom}_R(A,B_i)$. Define $\alpha_\psi : A \rightarrow \prod_I B_i$ by $\alpha_\psi(a)_i = \psi_i(a)$. Certainly $\alpha_\psi$ is a module homomorphism. Note that, for all $i$, $\Phi(\alpha_\psi)_i(a) = \varphi_i(\alpha_\psi)(a)$ $= (\pi_i \circ \alpha_\psi)(a)$ $= \psi_i(a)$. So $\Phi(\alpha_\psi)_i = \psi_i$ for all $i$, and we have $\Phi(\alpha_\psi) = \psi$. So $\Phi$ is surjective, and thus an isomorphism of groups.

Finally, suppose $R$ is commutative. If $r \in R$ and $\alpha : A \rightarrow \prod_I B_i$ is a module homomorphism, then $\Phi(r\alpha) = (\pi_i \circ r\alpha)$ $= (r(\pi_i \circ \alpha))$ $= r(\pi_i \circ \alpha)$ $= r\Phi(\alpha)$. Thus in this case $\Phi$ is an isomorphism of $R$-modules.
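A concrete instance of the second isomorphism (again a sketch, with $R = \mathbb{Z}$, $A = \mathbb{Z}$, and $B_i = \mathbb{Z}$ for $i \in \{1,2,3\}$; the helper names are my own): a homomorphism $\mathbb{Z} \rightarrow \mathbb{Z}^3$ is determined by the image of 1, and $\Phi$ sends it to the tuple of compositions $\pi_i \circ \alpha$:

```python
# A homomorphism alpha: Z -> Z^3 is determined by alpha(1), a triple of integers.
def make_hom(col):
    return lambda a: tuple(c * a for c in col)

def Phi(alpha):
    """Phi(alpha) = (pi_i o alpha)_i : follow alpha with each projection."""
    image = alpha(1)
    return tuple((lambda a, c=c: c * a) for c in image)

alpha = make_hom((4, 0, -3))
components = Phi(alpha)

# Each pi_i o alpha : Z -> Z is multiplication by the i-th coordinate of alpha(1):
assert tuple(f(2) for f in components) == (8, 0, -6)
# and alpha is rebuilt from its components, so Phi is bijective here:
assert alpha(5) == tuple(f(5) for f in components)
```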

### An arbitrary direct sum of modules is flat if and only if each direct summand is flat

Let $R$ be a ring with 1 and let $\{A_i\}_I$ be a family of (right, unital) $R$-modules. Prove that $\bigoplus_I A_i$ is flat if and only if each $A_i$ is flat.

We begin with a lemma.

Lemma: Let $\{\varphi_i : A_i \rightarrow B_i\}_I$ be a family of (unital) $R$-module homomorphisms. Then $\Phi : \bigoplus_I A_i \rightarrow \bigoplus_I B_i$ given by $\Phi(a_i) = (\varphi_i(a_i))$ is injective if and only if each $\varphi_i$ is injective. Proof: Suppose $\Phi$ is injective. If $\varphi_k(a) = \varphi_k(b)$, and letting $\iota_k$ denote the inclusion $A_k \rightarrow \bigoplus_I A_i$ or $B_k \rightarrow \bigoplus_I B_i$, then $\Phi(\iota_k(a)) = \Phi(\iota_k(b))$, so that $a = b$. Thus each $\varphi_k$ is injective. Conversely, suppose each $\varphi_k$ is injective; then if $\Phi(a_i) = \Phi(b_i)$, we have $\varphi_i(a_i) = \varphi_i(b_i)$ for each $i$, so that $a_i = b_i$ for each $i$. So $(a_i) = (b_i)$, and thus $\Phi$ is injective. $\square$
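The lemma can be illustrated with $R = \mathbb{Z}$ and maps $\varphi_i$ given by multiplication by an integer $n_i$, which are injective exactly when $n_i \neq 0$ (a sketch; elements of the direct sum are modeled as finitely supported dicts, and the helper names are my own):

```python
def direct_sum_map(phis, a):
    """Apply Phi((a_i)) = (phi_i(a_i)) componentwise to a finitely supported element."""
    return {i: phis[i](x) for i, x in a.items() if phis[i](x) != 0}

# phi_i : Z -> Z is multiplication by n_i; it is injective iff n_i != 0.
injective_phis = {0: lambda x: 2 * x, 1: lambda x: -3 * x, 2: lambda x: x}
a = {0: 1, 2: 5}   # the element (1, 0, 5, 0, 0, ...)
b = {0: 1}         # the element (1, 0, 0, ...)
assert direct_sum_map(injective_phis, a) != direct_sum_map(injective_phis, b)

# With phi_1 = 0 (not injective), Phi identifies distinct elements:
broken = {0: lambda x: 2 * x, 1: lambda x: 0, 2: lambda x: x}
assert direct_sum_map(broken, {1: 1}) == direct_sum_map(broken, {1: 7})  # both map to 0
```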

Suppose each $A_i$ is flat. Now let $L$ and $M$ be left unital $R$-modules and let $\theta : L \rightarrow M$ be a module homomorphism. Since each $A_i$ is flat, $1_i \otimes \theta : A_i \otimes_R L \rightarrow A_i \otimes_R M$ is injective. Using the lemma, $\Theta = \bigoplus_I (1_i \otimes \theta) : \bigoplus_I (A_i \otimes L) \rightarrow \bigoplus_I (A_i \otimes M)$ is injective. In this previous exercise, we constructed group isomorphisms $\Phi : (\bigoplus_I A_i) \otimes_R L \rightarrow \bigoplus_I (A_i \otimes_R L)$ and $\Psi : \bigoplus_I (A_i \otimes_R M) \rightarrow (\bigoplus_I A_i) \otimes_R M$. We claim that $\Psi \circ \Theta \circ \Phi = 1 \otimes \theta$. To that end, note that $(\Psi \circ \Theta \circ \Phi)((a_i) \otimes \ell) = (\Psi \circ \Theta)((a_i \otimes \ell))$ $= \Psi((a_i \otimes \theta(\ell)))$ $= \sum_i \iota_i(a_i) \otimes \theta(\ell)$ $= (a_i) \otimes \theta(\ell)$ $= (1 \otimes \theta)((a_i) \otimes \ell)$, as desired. Thus $1 \otimes \theta : (\bigoplus_I A_i) \otimes_R L \rightarrow (\bigoplus_I A_i) \otimes_R M$ is injective, and so $\bigoplus_I A_i$ is flat.

Conversely, suppose $\bigoplus_I A_i$ is flat. Let $L$ and $M$ be (unital, left) $R$-modules and $\theta : L \rightarrow M$ a module homomorphism. Note that $\Theta = 1 \otimes \theta : (\bigoplus_I A_i) \otimes_R L \rightarrow (\bigoplus_I A_i) \otimes_R M$ is injective. Recall the homomorphisms $\Phi$ and $\Psi$ from the previous paragraph; $\Psi^{-1} \circ \Theta \circ \Phi^{-1}$ is an injective module homomorphism mapping $\bigoplus_I (A_i \otimes_R L)$ to $\bigoplus_I (A_i \otimes_R M)$, and in fact $(\Psi^{-1} \circ \Theta \circ \Phi^{-1})((a_i \otimes \ell_i)) = (\Psi^{-1} \circ \Theta)(\sum \iota_i(a_i) \otimes \ell_i)$ $= \Psi^{-1}(\sum \iota_i(a_i) \otimes \theta(\ell_i))$ $= (a_i \otimes \theta(\ell_i))$ $= ((1_i \otimes \theta)(a_i \otimes \ell_i))$. By the lemma, each $1_i \otimes \theta$ is injective, so that each $A_i$ is flat.

### The direct sum of two modules is injective if and only if each direct summand is injective

Let $R$ be a ring with 1 and let $Q_1$ and $Q_2$ be (left, unital) $R$-modules. Prove that $Q_1 \oplus Q_2$ is injective if and only if $Q_1$ and $Q_2$ are injective.

Recall Baer’s Criterion: an $R$-module $Q$ is injective if and only if for every left ideal $I \subseteq R$ and every $R$-module homomorphism $\varphi : I \rightarrow Q$, there exists an $R$-module homomorphism $\Phi : R \rightarrow Q$ such that $\Phi|_I = \varphi$.

Suppose $Q_1$ and $Q_2$ are injective. Let $I \subseteq R$ be a left ideal and let $\varphi : I \rightarrow Q_1 \oplus Q_2$ be an $R$-module homomorphism. Letting $\pi_1$ and $\pi_2$ denote the first and second coordinate projections, $\pi_1 \circ \varphi : I \rightarrow Q_1$ and $\pi_2 \circ \varphi : I \rightarrow Q_2$ are $R$-module homomorphisms. By Baer’s Criterion, there exist $R$-module homomorphisms $\Phi_1 : R \rightarrow Q_1$ and $\Phi_2 : R \rightarrow Q_2$ such that $\Phi_1|_I = \pi_1 \circ \varphi$ and $\Phi_2|_I = \pi_2 \circ \varphi$. Define $\Phi : R \rightarrow Q_1 \oplus Q_2$ by $\Phi(r) = (\Phi_1(r), \Phi_2(r))$. Certainly $\Phi$ is an $R$-module homomorphism. Moreover, $\Phi|_I = \varphi$. By Baer’s Criterion, $Q_1 \oplus Q_2$ is injective.

Suppose $Q_1 \oplus Q_2$ is injective. Let $I \subseteq R$ be a left ideal and let $\varphi_1 : I \rightarrow Q_1$ and $\varphi_2 : I \rightarrow Q_2$ be $R$-module homomorphisms. Define $\varphi : I \rightarrow Q_1 \oplus Q_2$ by $\varphi(i) = (\varphi_1(i), \varphi_2(i))$. Certainly $\varphi$ is an $R$-module homomorphism. By Baer’s Criterion, there exists an $R$-module homomorphism $\Phi : R \rightarrow Q_1 \oplus Q_2$ extending $\varphi$. Now let $\Phi_1 = \pi_1 \circ \Phi$ and $\Phi_2 = \pi_2 \circ \Phi$; certainly $\Phi_1 : R \rightarrow Q_1$ and $\Phi_2 : R \rightarrow Q_2$ are $R$-module homomorphisms, and if $a \in I$ then $(\Phi_1(a), \Phi_2(a)) = \Phi(a) = \varphi(a) = (\varphi_1(a), \varphi_2(a))$, so that $\Phi_1|_I = \varphi_1$ and $\Phi_2|_I = \varphi_2$. By Baer’s Criterion, $Q_1$ and $Q_2$ are injective.