## Monthly Archives: July 2011

### Interaction between the determinant of a square matrix and solutions of a linear matrix equation over a commutative unital ring

Let $R$ be a commutative ring with 1, let $V$ be an $R$-module, and let $X = [x_1\ \cdots\ x_n]^\mathsf{T} \in V^n$. Suppose that for some matrix $A \in \mathsf{Mat}_{n \times n}(R)$ we have $AX = 0$. Prove that $\mathsf{det}(A)x_i = 0$ for all $i$.

Let $B$ be the transpose of the matrix of cofactors of $A$. That is, $B = [(-1)^{i+j} \mathsf{det}(A_{j,i})]_{i,j=1}^n$. By Theorem 30 in D&F, we have $BA = \mathsf{det}(A)I$, where $I$ is the identity matrix.

Now if $AX = 0$, then $BAX = B0 = 0$, so that $\mathsf{det}(A)X = 0$. Comparing entries, we have $\mathsf{det}(A)x_i = 0$ for all $i$.
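This conclusion is interesting precisely when $R$ has zero divisors, since $\mathsf{det}(A)$ may kill the $x_i$ without being zero. Here is a minimal sketch in Python over $V = R = \mathbb{Z}/4\mathbb{Z}$; the helpers `det2`, `adj2`, and `matvec` are hypothetical names for hand-rolled $2 \times 2$ arithmetic mod 4.

```python
# Sketch over R = Z/4Z, a commutative ring with zero divisors.
MOD = 4

def det2(A):
    # determinant of a 2x2 matrix over Z/MOD
    return (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % MOD

def adj2(A):
    # adjugate (transpose of cofactors) of a 2x2 matrix: [[d, -b], [-c, a]]
    a, b = A[0]; c, d = A[1]
    return [[d % MOD, -b % MOD], [-c % MOD, a % MOD]]

def matvec(A, X):
    return [sum(A[i][j] * X[j] for j in range(2)) % MOD for i in range(2)]

A = [[2, 0], [0, 1]]   # det(A) = 2, a zero divisor in Z/4Z
X = [2, 0]             # AX = [4, 0] = [0, 0] in Z/4Z, yet X != 0

assert matvec(A, X) == [0, 0]              # AX = 0
d = det2(A)
assert [d * x % MOD for x in X] == [0, 0]  # det(A) x_i = 0, as proved

# the key identity BA = det(A) I also holds here
B = adj2(A)
BA = [[sum(B[i][k] * A[k][j] for k in range(2)) % MOD
       for j in range(2)] for i in range(2)]
assert BA == [[d, 0], [0, d]]
```

Note that $\mathsf{det}(A) = 2 \neq 0$ and $x_1 = 2 \neq 0$, but $\mathsf{det}(A)x_1 = 4 = 0$ in $\mathbb{Z}/4\mathbb{Z}$.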

### The columns of a square matrix over a field are linearly independent if and only if the determinant of the matrix is nonzero

Let $F$ be a field and let $A = [A_1 | \cdots | A_n]$ be a square matrix of dimension $n \times n$ over $F$. Prove that the set $\{A_i\}_{i=1}^n$ is linearly independent if and only if $\mathsf{det}\ A \neq 0$.

Let $B$ be the reduced row echelon form of $A$, and let $P$ be invertible such that $PA = B$.

Suppose the columns of $A$ are linearly independent. Then $B$ has column rank $n$; in particular, $B = I$. Now $1 = \mathsf{det}(B) = \mathsf{det}(PA) = \mathsf{det}(P) \mathsf{det}(A)$, so $\mathsf{det}(A) \neq 0$.

We prove the converse contrapositively. Suppose the columns of $A$ are linearly dependent; then the column rank of $B$ is strictly less than $n$, so that $B$ has a row of all zeros. Expanding along that row with the cofactor expansion formula, $\mathsf{det}(B) = 0$. Since $P$ is invertible, $\mathsf{det}(P)$ is nonzero, and $\mathsf{det}(P)\mathsf{det}(A) = \mathsf{det}(B) = 0$ forces $\mathsf{det}(A) = 0$. Thus if $\mathsf{det}(A) \neq 0$, then the columns of $A$ are linearly independent.
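A quick sanity check of the equivalence, with sympy's exact rational arithmetic standing in for an arbitrary field $F$ (an illustrative assumption):

```python
# Columns independent <=> det nonzero <=> full rank, checked over Q.
from sympy import Matrix

A = Matrix([[1, 2], [3, 4]])   # columns independent
B = Matrix([[1, 2], [2, 4]])   # second column = 2 * first column

assert A.det() != 0 and A.rank() == 2
assert B.det() == 0 and B.rank() < 2
```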

### The cofactor expansion formula of a square matrix along a column

Let $A$ be a square matrix over a field $F$. Formulate and prove the cofactor expansion formula for $\mathsf{det}\ A$ along the $j$th column.

Let $A = [\alpha_{i,j}]$.

We begin with a definition. If $A = \left[ \begin{array}{c|c|c} A_{1,1} & C_1 & A_{1,2} \\ \hline R_1 & \alpha_{i,j} & R_2 \\ \hline A_{2,1} & C_2 & A_{2,2} \end{array} \right]$, where $A_{1,1}$ has dimension $(i-1) \times (j-1)$, then the $(i,j)$-minor of $A$ is the matrix $A_{i,j} = \left[ \begin{array}{c|c} A_{1,1} & A_{1,2} \\ \hline A_{2,1} & A_{2,2} \end{array} \right]$.

Recall that the cofactor expansion formula for $\mathsf{det}\ A$ along the $i$th row is

$\mathsf{det}\ A = \sum_{j=1}^n (-1)^{i+j} \alpha_{i,j} \mathsf{det}(A_{i,j})$.

The analogous expansion along the $j$th column is

$\mathsf{det}\ A = \sum_{i=1}^n (-1)^{i+j} \alpha_{i,j} \mathsf{det}(A_{i,j})$,

which we now prove. First, note that $(A_{i,j})^\mathsf{T} = (A^\mathsf{T})_{j,i}$; this follows from our definition of minors and the fact that $\left[ \begin{array}{c|c} A & B \\ \hline C & D \end{array} \right]^\mathsf{T} = \left[ \begin{array}{c|c} A^\mathsf{T} & C^\mathsf{T} \\ \hline B^\mathsf{T} & D^\mathsf{T} \end{array} \right]$. Now we have the following.

Expanding $\mathsf{det}(A^\mathsf{T})$ along its $j$th row, whose $(j,i)$ entry is $\alpha_{i,j}$, we have

$\mathsf{det}(A) = \mathsf{det}(A^\mathsf{T}) = \sum_{i=1}^n (-1)^{j+i} \alpha_{i,j} \mathsf{det}((A^\mathsf{T})_{j,i}) = \sum_{i=1}^n (-1)^{i+j} \alpha_{i,j} \mathsf{det}((A_{i,j})^\mathsf{T}) = \sum_{i=1}^n (-1)^{i+j} \alpha_{i,j} \mathsf{det}(A_{i,j})$,

as desired.
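The column expansion just proved can be implemented directly; `det_by_column` below is a hypothetical name for a straightforward recursive version, using sympy for exact arithmetic and for the reference determinant.

```python
# Cofactor expansion of det(A) along a chosen column (0-indexed).
from sympy import Matrix

def det_by_column(A, j=0):
    n = A.rows
    if n == 1:
        return A[0, 0]
    total = 0
    for i in range(n):
        minor = A.minor_submatrix(i, j)   # delete row i and column j
        total += (-1) ** (i + j) * A[i, j] * det_by_column(minor)
    return total

A = Matrix([[2, 0, 1], [1, 3, 5], [0, 4, 2]])
# expansion along any column agrees with the built-in determinant
assert all(det_by_column(A, j) == A.det() for j in range(3))
```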

### If V is an infinite dimensional vector space, then its dual space has strictly larger dimension

Let $V$ be an infinite dimensional vector space over a field $F$, say with basis $A$. Prove that the dual space $\widehat{V} = \mathsf{Hom}_F(V,F)$ has strictly larger dimension than does $V$.

We claim that $\widehat{V} \cong_F \prod_A F$. To prove this, for each $a \in A$ let $F_a$ be a copy of $F$. Now define $\varepsilon_a : \widehat{V} \rightarrow F_a$ by $\varepsilon_a(\widehat{v}) = \widehat{v}(a)$. By the universal property of direct products, there exists a unique $F$-linear transformation $\theta : \widehat{V} \rightarrow \prod_A F_a$ such that $\pi_a \circ \theta = \varepsilon_a$ for all $a \in A$. We claim that $\theta$ is an isomorphism. To see surjectivity, let $(v_a) \in \prod_A F_a$. Now define $\varphi \in \widehat{V}$ by letting $\varphi(a) = v_a$ and extending linearly; certainly $\theta(\varphi) = (v_a)$. To see injectivity, suppose $\varphi \in \mathsf{ker}\ \theta$. Then $\theta(\varphi) = 0$, so that $(\pi_a \circ \theta)(\varphi) = 0$, and thus $\varepsilon_a(\varphi) = \varphi(a) = 0$ for all $a \in A$. Since $A$ is a basis of $V$, we have $\varphi = 0$. Thus $\theta$ is an isomorphism, and we have $\widehat{V} \cong_F \prod_A F$.

By a previous exercise, $\widehat{V}$ has strictly larger dimension than does $V$.

### The dual basis of an infinite dimensional vector space does not span the dual space

Let $F$ be a field and let $V$ be an infinite dimensional vector space over $F$; say $V$ has basis $B = \{v_i\}_I$. Prove that the dual basis $\{\widehat{v}_i\}_I$ does not span the dual space $\widehat{V} = \mathsf{Hom}_F(V,F)$.

Define a linear transformation $\varphi : V \rightarrow F$ by taking $v_i$ to 1 for all $i \in I$ and extending linearly. Suppose now that $\varphi = \sum_{i \in K} \alpha_i\widehat{v}_i$ where $K \subseteq I$ is finite. Since $I$ is infinite, we may choose $j \notin K$; then $(\sum_{i \in K} \alpha_i \widehat{v}_i)(v_j) = 0$, while $\varphi(v_j) = 1$, a contradiction. Thus $\varphi$ is not in $\mathsf{span}\ \{\widehat{v}_i\}_I$.

So the dual basis does not span $\widehat{V}$.

### The annihilator of a subset of a dual vector space

Let $V$ be a vector space over a field $F$ and let $\widehat{V} = \mathsf{Hom}_F(V,F)$ denote the dual vector space of $V$. Given $S \subseteq \widehat{V}$, define $\mathsf{Ann}(S) = \{v \in V \ |\ f(v) = 0\ \mathrm{for\ all}\ f \in S \}$. (This set is called the annihilator of $S$ in $V$.)

1. Prove that $\mathsf{Ann}(\widehat{S})$ is a subspace of $V$ for all $\widehat{S} \subseteq \widehat{V}$.
2. Suppose $\widehat{W}_1$ and $\widehat{W}_2$ are subspaces of $\widehat{V}$. Prove that $\mathsf{Ann}(\widehat{W}_1 + \widehat{W}_2) = \mathsf{Ann}(\widehat{W}_1) \cap \mathsf{Ann}(\widehat{W}_2)$ and $\mathsf{Ann}(\widehat{W}_1 \cap \widehat{W}_2) = \mathsf{Ann}(\widehat{W}_1) + \mathsf{Ann}(\widehat{W}_2)$.
3. Let $\widehat{W}_1, \widehat{W}_2 \subseteq \widehat{V}$ be subspaces. Prove that $\mathsf{Ann}(\widehat{W}_1) = \mathsf{Ann}(\widehat{W}_2)$ if and only if $\widehat{W}_1 = \widehat{W}_2$.
4. Prove that, for all $\widehat{S} \subseteq \widehat{V}$, $\mathsf{Ann}(\widehat{S}) = \mathsf{Ann}(\mathsf{span}\ \widehat{S})$.
5. Assume $V$ is finite dimensional with basis $B = \{v_i\}_{i=1}^n$, and let $\widehat{B} = \{\widehat{v}_i\}_{i=1}^n$ denote the basis dual to $B$. Prove that if $\widehat{S} = \{\widehat{v}_i\}_{i=1}^k$ for some $1 \leq k \leq n$, then $\mathsf{Ann}(\widehat{S}) = \mathsf{span} \{v_i\}_{i=k+1}^n$.
6. Assume $V$ is finite dimensional. Prove that if $\widehat{W} \subseteq \widehat{V}$ is a subspace, then $\mathsf{dim}\ \mathsf{Ann}(\widehat{W}) = \mathsf{dim}\ V - \mathsf{dim}\ \widehat{W}$.


Recall that a bounded lattice is a tuple $(L, \wedge, \vee, \top, \bot)$, where $\wedge$ and $\vee$ are binary operators on $L$ and $\top$ and $\bot$ are elements of $L$ satisfying the following:

1. $\wedge$ and $\vee$ are associative and commutative,
2. $\top$ and $\bot$ are identity elements with respect to $\wedge$ and $\vee$, respectively, and
3. $a \wedge (a \vee b) = a$ and $a \vee (a \wedge b) = a$ for all $a,b \in L$. (Called the “absorption laws”.)

If $L_1$ and $L_2$ are bounded lattices, a bounded lattice homomorphism is a mapping $\varphi : L_1 \rightarrow L_2$ that preserves the operators: $\varphi(a \wedge b) = \varphi(a) \wedge \varphi(b)$, $\varphi(a \vee b) = \varphi(a) \vee \varphi(b)$, $\varphi(\bot) = \bot$, and $\varphi(\top) = \top$. As usual, a lattice homomorphism which is also bijective is called a lattice isomorphism.

The interchangeability of $\wedge$ and $\vee$ (and of $\bot$ and $\top$) immediately suggests the following definition. Given a bounded lattice $L$, we define a new lattice $\widehat{L}$ having the same base set as $L$ but with the roles of $\wedge$ and $\vee$ (and of $\bot$ and $\top$) interchanged. This $\widehat{L}$ is called the dual lattice of $L$.

Let $V$ be a vector space (of arbitrary dimension) over a field $F$. We let $\mathcal{S}_F(V)$ denote the set of all $F$-subspaces of $V$. We claim that $(\mathcal{S}_F(V), \cap, +, V, 0)$ is a bounded lattice. The least obvious of the axioms to check are the absorption laws. Indeed, note that for all subspaces $U,W \subseteq V$, we have $U \cap (U + W) = U$ and $U + (U \cap W) = U$.

Now let $V$ be a vector space (again of arbitrary dimension) over a field $F$, and let $\widehat{V} = \mathsf{Hom}_F(V,F)$ denote its dual space. If $S \subseteq \widehat{V}$ is an arbitrary subset and $\mathsf{Ann}(S)$ is defined as above, note that $f(0) = 0$ for all $f \in S$, and that if $x,y \in \mathsf{Ann}(S)$ and $r \in F$, we have $f(x+ry) = f(x)+rf(y) = 0$ for all $f \in S$. By the submodule criterion, $\mathsf{Ann}(S) \subseteq V$ is a subspace.

Now define $A : \mathcal{S}_F(\widehat{V}) \rightarrow \widehat{\mathcal{S}_F(V)}$ by $A(\widehat{W}) = \mathsf{Ann}(\widehat{W})$. We claim that if $V$ is finite dimensional, then $A$ is a bounded lattice homomorphism.

1. ($A(\widehat{0}) = V$) Note that for all $v \in V$, we have $\widehat{0}(v) = 0$. Thus $V = \mathsf{Ann}(\widehat{0}) = A(\widehat{0})$. ($\widehat{0}$ is the zero function $V \rightarrow F$.)
2. ($A(\widehat{V}) = 0$) Suppose there exists a nonzero element $v \in \mathsf{Ann}(\widehat{V})$. Then there exists a basis $E$ of $V$ containing $v$, and we may construct a homomorphism $\varphi : V \rightarrow F$ such that $\varphi(v) \neq 0$. In particular, $v \notin A(\widehat{V})$. On the other hand, it is certainly the case that $0 \in A(\widehat{V})$. Thus we have $A(\widehat{V}) = 0$.
3. ($A(\widehat{W}_1 + \widehat{W}_2) = A(\widehat{W}_1) \cap A(\widehat{W}_2)$) $(\subseteq)$ Let $v \in A(\widehat{W}_1 + \widehat{W}_2)$. Then for all $f + g \in \widehat{W}_1 + \widehat{W}_2$, we have $(f+g)(v) = f(v) + g(v) = 0$. In particular, if $f \in \widehat{W}_1$, then $f(v) = (f+0)(v) = 0$, so that $v \in A(\widehat{W}_1)$. Similarly, $v \in A(\widehat{W}_2)$, and thus $v \in A(\widehat{W}_1) \cap A(\widehat{W}_2)$. $(\supseteq)$ Suppose $v \in A(\widehat{W}_1) \cap A(\widehat{W}_2)$. Then for all $f+g \in \widehat{W}_1 + \widehat{W}_2$, we have $(f+g)(v) = f(v)+g(v) = 0$; thus $v \in A(\widehat{W}_1+\widehat{W}_2)$. Thus $A(\widehat{W}_1 + \widehat{W}_2) = A(\widehat{W}_1) \cap A(\widehat{W}_2)$.
4. ($A(\widehat{W}_1 \cap \widehat{W}_2) = A(\widehat{W}_1) + A(\widehat{W}_2)$) $(\supseteq)$ Suppose $v \in A(\widehat{W}_1)$. Then for all $f \in \widehat{W}_1$, $f(v) = 0$; in particular, $f(v) = 0$ for all $f \in \widehat{W}_1 \cap \widehat{W}_2$. Thus $v \in A(\widehat{W}_1 \cap \widehat{W}_2)$. Similarly we have $A(\widehat{W}_2) \subseteq A(\widehat{W}_1 \cap \widehat{W}_2)$; thus $A(\widehat{W}_1) + A(\widehat{W}_2) \subseteq A(\widehat{W}_1 \cap \widehat{W}_2)$. $(\subseteq)$ First, we claim that this inclusion holds for all pairs of one dimensional subspaces. If $\widehat{W}_1$ and $\widehat{W}_2$ intersect in a dimension 1 subspace (that is, $\widehat{W}_1 = \widehat{W}_2$), then certainly $A(\widehat{W}_1 \cap \widehat{W}_2) \subseteq A(\widehat{W}_1) + A(\widehat{W}_2)$. If they intersect trivially, then we have $\widehat{W}_1 = F\widehat{t}_1$ and $\widehat{W}_2 = F\widehat{t}_2$, with $A(\widehat{W}_1) = \mathsf{ker}\ \widehat{t}_1$ and $A(\widehat{W}_2) = \mathsf{ker}\ \widehat{t}_2$. Now $\widehat{t}_1$ and $\widehat{t}_2$ are nonzero linear transformations $V \rightarrow F$, and so by the first isomorphism theorem for modules their kernels have dimension $(\mathsf{dim}\ V) - 1$. Note that linear transformations $V \rightarrow F$ are realized (after fixing a basis) by matrices of dimension $1 \times \mathsf{dim}\ V$; in particular, if $\widehat{t}_1$ and $\widehat{t}_2$ have the same kernel, then they are row equivalent, and so are $F$-multiples of each other. Since $\widehat{t}_1$ and $\widehat{t}_2$ are not multiples of each other, their kernels are distinct, and so $A(\widehat{W}_1) + A(\widehat{W}_2) = V \supseteq A(\widehat{W}_1 \cap \widehat{W}_2)$. Now suppose $\widehat{W}_1 = \sum_i \widehat{W}_{1,i}$ and $\widehat{W}_2 = \sum_j \widehat{W}_{2,j}$ are sums of one dimensional subspaces. We have $A(\widehat{W}_1 \cap \widehat{W}_2) = A((\sum_i \widehat{W}_{1,i}) \cap (\sum_j \widehat{W}_{2,j}))$ $= A(\sum_{i,j} (\widehat{W}_{1,i} \cap \widehat{W}_{2,j}))$ $= \bigcap_{i,j} A(\widehat{W}_{1,i} \cap \widehat{W}_{2,j})$.
From the one dimensional case, this is equal to $\bigcap_{i,j} (A(\widehat{W}_{1,i}) + A(\widehat{W}_{2,j})) = (\bigcap_i A(\widehat{W}_{1,i})) + (\bigcap_j A(\widehat{W}_{2,j}))$ $= A(\widehat{W}_1) + A(\widehat{W}_2)$. (Note that our proof depends on $V$ being finite dimensional.)
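Part 3 above is easy to check concretely: after fixing a basis of $V = F^3$, functionals are row vectors, and the annihilator of a span of functionals is the common null space of the rows. A sympy sketch (the particular rows are an illustrative choice):

```python
# Ann(W1 + W2) = Ann(W1) ∩ Ann(W2), with functionals as rows on F^3.
from sympy import Matrix

W1 = Matrix([[1, 0, 0]])        # one functional, as a row vector
W2 = Matrix([[0, 1, 1]])
W_sum = Matrix.vstack(W1, W2)   # rows spanning W1 + W2

# v annihilates W1 + W2 exactly when it annihilates both W1 and W2
for v in W_sum.nullspace():
    assert W1 * v == Matrix([0]) and W2 * v == Matrix([0])

assert len(W1.nullspace()) == 2
assert len(W_sum.nullspace()) == 1   # the annihilator shrinks as functionals are added
```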

Thus $A$ is a bounded lattice homomorphism. We claim also that $A$ is bijective. To see surjectivity, let $W \subseteq V$ be a subspace. Define $\widehat{W} = \{ f \in \widehat{V} \ |\ \mathsf{ker}\ f \supseteq W \}$. We claim that $A(\widehat{W}) = W$. Certainly $W \subseteq A(\widehat{W})$. Conversely, if $v \notin W$, then extending a basis of $W$ by $v$ (and then to a basis of $V$) we may construct $f \in \widehat{W}$ with $f(v) \neq 0$; thus $A(\widehat{W}) \subseteq W$, and so $A(\widehat{W}) = W$. Before we show injectivity, we give a lemma.

Lemma: Let $\widehat{W} \subseteq \widehat{V}$ be a subspace with basis $\{\widehat{v}_i\}_{i=1}^k$, and extend to a basis $\{\widehat{v}_i\}_{i=1}^n$. Let $\{v_i\}_{i=1}^n$ be the dual basis to $\{\widehat{v}_i\}_{i=1}^n$, obtained using the natural isomorphism $V \cong \widehat{\widehat{V}}$. Then $A(\widehat{W}) = \mathsf{span}\ \{v_i\}_{i=k+1}^n$. Proof: Let $\sum \alpha_i v_i \in A(\widehat{W})$. In particular, we have $\widehat{v}_j(\sum \alpha_i v_i) = \alpha_j = 0$ for all $1 \leq j \leq k$. Thus $\sum \alpha_iv_i \in \mathsf{span}\ \{v_i\}_{i=k+1}^n$. Conversely, note that $\widehat{v}_j(v_i) = 0$ for all $k+1 \leq i \leq n$, so that $\mathsf{span}\ \{v_i\}_{i=k+1}^n \subseteq A(\widehat{W})$. $\square$

In particular, we have $\mathsf{dim}\ \widehat{W} + \mathsf{dim}\ A(\widehat{W}) = \mathsf{dim}\ V$. Now suppose $A(\widehat{W}) = \mathsf{span}\ \{v_i\}_{i=1}^k$, and extend to a basis $\{v_i\}_{i=1}^n$ of $V$. Let $\{\widehat{v}_i\}_{i=1}^n$ denote the dual basis. Note that for all $f \in \widehat{W}$, writing $f = \sum \alpha_i \widehat{v}_i$, we have $\alpha_j = f(v_j) = 0$ whenever $1 \leq j \leq k$. In particular, $\widehat{W} \subseteq \mathsf{span}\ \{\widehat{v}_i\}_{i=k+1}^n$. Considering dimension, we have equality. Now to see injectivity of $A$, note that if $A(\widehat{W}_1) = A(\widehat{W}_2)$, then $\widehat{W}_1$ and $\widehat{W}_2$ share a basis; hence $\widehat{W}_1 = \widehat{W}_2$, and so $A$ is injective.

Thus, as lattices, we have $\mathcal{S}_F(\widehat{V}) \cong \widehat{\mathcal{S}_F(V)}$.

Finally, note that $\mathsf{Ann}(\mathsf{span}\ S) \subseteq \mathsf{Ann}(S)$, since $S \subseteq \mathsf{span}\ S$. Conversely, if $v \in \mathsf{Ann}(S)$ and $f = \sum \alpha_i s_i \in \mathsf{span}\ S$, then $f(v) = \sum \alpha_i s_i(v) = 0$. Thus $\mathsf{Ann}(S) = \mathsf{Ann}(\mathsf{span}\ S)$.
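Parts 5 and 6 also have a concrete reading: with a basis of $V = F^n$ fixed, a subspace $\widehat{W}$ of the dual is the row space of a matrix, and $\mathsf{Ann}(\widehat{W})$ is its null space, so $\mathsf{dim}\ \mathsf{Ann}(\widehat{W}) = n - \mathsf{dim}\ \widehat{W}$. A sympy sketch (the rows chosen are illustrative):

```python
# Ann(W^) as the null space of a matrix whose rows span W^ in the dual.
from sympy import Matrix

n = 4
W_hat = Matrix([[1, 0, 2, 0],
                [0, 1, 0, 3]])       # rows: a basis of a subspace of the dual
ann = W_hat.nullspace()              # basis of the common kernel of the rows

assert len(ann) == n - W_hat.rank()  # dim Ann(W^) = dim V - dim W^ (part 6)
for v in ann:
    assert W_hat * v == Matrix([0, 0])   # every functional in W^ kills v
```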

### Express a given linear transformation in terms of a dual basis

Let $V \subseteq \mathbb{Q}[x]$ be the $\mathbb{Q}$-vector space consisting of those polynomials having degree at most 5. Recall that $B = \{1,x,x^2,x^3,x^4,x^5\}$ is a basis for this vector space over $\mathbb{Q}$. For each of the following maps $\varphi : V \rightarrow \mathbb{Q}$, verify that $\varphi$ is a linear transformation and express $\varphi$ in terms of the dual basis $B^\ast$ on $V^\ast = \mathsf{Hom}_{\mathbb{Q}}(V,\mathbb{Q})$.

1. $\varphi(p) = p(\alpha)$, where $\alpha \in \mathbb{Q}$.
2. $\varphi(p) = \int_0^1 p(t)\ dt$
3. $\varphi(p) = \int_0^1 t^2p(t)\ dt$
4. $\varphi(p) = p^\prime(\alpha)$, where $\alpha \in \mathbb{Q}$. (Prime denotes the usual derivative of a polynomial.)

Let $v_i$ be the element of the dual basis $B^\ast$ such that $v_i(x^j) = 1$ if $i = j$ and 0 otherwise. I’m going to just assume that integration over an interval is linear.

1. Note that $\varphi(p+rq) = (p+rq)(\alpha) = p(\alpha) + rq(\alpha) = \varphi(p) + r \varphi(q)$; thus $\varphi$ is indeed a linear transformation. Moreover, note that $(\sum \alpha^i v_i)(\sum c_jx^j) = \sum \alpha^i v_i(\sum c_jx^j)$ $= \sum \alpha^i \sum c_j v_i(x^j)$ $= \sum \alpha^i c_i$ $= (\sum c_ix^i)(\alpha)$. Thus $\varphi = \sum \alpha^i v_i$.
2. Note that $\varphi(\sum \alpha_i x^i) = \sum \frac{\alpha_i}{i+1}$. Now $(\sum \frac{1}{i+1} v_i)(\sum \alpha_j x^j) = \sum \frac{1}{i+1} v_i(\sum \alpha_j x^j)$ $= \sum \frac{1}{i+1} \sum \alpha_j v_i(x^j)$ $= \sum \frac{\alpha_i}{i+1}$ $= \varphi(\sum \alpha_i x^i)$. So $\varphi = \sum \frac{1}{i+1} v_i$.
3. Note that $\varphi(\sum \alpha_i x^i) = \sum \frac{\alpha_i}{i+3}$. Now $(\sum \frac{1}{i+3} v_i)(\sum \alpha_j x^j) = \sum \frac{1}{i+3} v_i(\sum \alpha_j x^j)$ $= \sum \frac{1}{i+3} \sum \alpha_j v_i(x^j)$ $= \sum \frac{\alpha_i}{i+3}$ $= \varphi(\sum \alpha_i x^i)$. Thus $\varphi = \sum \frac{1}{i+3} v_i$.
4. Since differentiation (of polynomials) is linear and the evaluation map is linear, this $\varphi$ is linear. Note that $(\sum (i+1)\alpha^i v_{i+1})(\sum c_jx^j) = \sum (i+1)\alpha^i v_{i+1}(\sum c_jx^j)$ $= \sum (i+1)\alpha^i \sum c_j v_{i+1}(x^j)$ $= \sum (i+1)\alpha^i c_{i+1}$ $= \varphi(\sum c_ix^i)$. Thus $\varphi = \sum (i+1)\alpha^iv_{i+1}$.
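A sympy spot check of parts 1, 2, and 4, assuming $\alpha = 2$ and the test polynomial $p = 1 + x + \cdots + x^5$ (both choices are mine, for illustration); the coefficient of $v_i$ in each expansion is applied to the coordinates of $p$:

```python
# Verify the dual-basis expansions on a sample polynomial.
from sympy import symbols, integrate, diff, Rational

x = symbols('x')
p = sum(x**i for i in range(6))
alpha = 2
coeffs = [1] * 6   # coordinates of p in the basis {1, x, ..., x^5}

# 1. evaluation at alpha: phi = sum alpha^i v_i
assert p.subs(x, alpha) == sum(alpha**i * coeffs[i] for i in range(6))

# 2. integration over [0,1]: phi = sum 1/(i+1) v_i
assert integrate(p, (x, 0, 1)) == sum(Rational(1, i + 1) * coeffs[i] for i in range(6))

# 4. derivative at alpha: phi = sum (i+1) alpha^i v_{i+1}
assert diff(p, x).subs(x, alpha) == sum((i + 1) * alpha**i * coeffs[i + 1] for i in range(5))
```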

### The endomorphism rings of a vector space and its dual space are isomorphic as algebras over the base field

Let $F$ be a field and let $V$ be a vector space over $F$ of some finite dimension $n$. Show that the mapping $\Omega : \mathsf{End}_F(V) \rightarrow \mathsf{End}_F(\mathsf{Hom}_F(V,F))$ given by $\Omega(\varphi)(\tau) = \tau \circ \varphi$ is an $F$-vector space isomorphism but not a ring isomorphism for $n > 1$. Exhibit an $F$-algebra isomorphism $\mathsf{End}_F(V) \rightarrow \mathsf{End}_F(\mathsf{Hom}_F(V,F))$.

We begin with a lemma.

Lemma: Let $R$ be a unital ring and let $M,A,B$ be left unital $R$-modules. If $\varphi : M \times A \rightarrow B$ is $R$-bilinear, then the induced map $\Phi : M \rightarrow \mathsf{Hom}_R(A,B)$ given by $\Phi(m)(a) = \varphi(m,a)$ is a well-defined $R$-module homomorphism. Proof: To see well definedness, we need to verify that $\Phi(m) : A \rightarrow B$ is a module homomorphism. To that end note that $\Phi(m)(x+ry) = \varphi(m,x+ry) = \varphi(m,x) + r \varphi(m,y)$ $= \Phi(m)(x) + r\Phi(m)(y)$. Similarly, to show that $\Phi$ is a module homomorphism, note that $\Phi(x+ry)(a) = \varphi(x+ry,a) = \varphi(x,a)+ r\varphi(y,a)$ $= \Phi(x)(a) + r\Phi(y)(a)$ $= (\Phi(x)+r\Phi(y))(a)$, so that $\Phi(x+ry) = \Phi(x) + r\Phi(y)$. $\square$

[Note to self: In a similar way, if $R$ is a unital ring and $M,N,A,B$ unital modules, and $\varphi : M \times N \times A \rightarrow B$ is trilinear, then $\Phi : M \times N \rightarrow \mathsf{Hom}_R(A,B)$ is bilinear. (So that the induced map $M \rightarrow \mathsf{Hom}_R(N,\mathsf{Hom}_R(A,B))$ is a module homomorphism, or unilinear, if you will.) That is to say, in a concrete fashion we can think of multilinear maps as the uncurried versions of higher order functions on modules. (!!!) (I just had a minor epiphany and it made me happy. Okay, so the usual isomorphism $V \rightarrow \mathsf{Hom}_F(V,F)$ is just this lemma applied to the dot product $V \times V \rightarrow F$… that’s cool.) Moreover, if $A = B$ and if $M$ and $\mathsf{End}_R(A)$ are $R$-algebras, then the induced map $\Phi$ is an algebra homomorphism if and only if $\varphi(m_1m_2,a) = \varphi(m_1,\varphi(m_2,a))$ and $\varphi(1,a) = a$.]

Define $\overline{\Omega} : \mathsf{End}_F(V) \times \mathsf{Hom}_F(V,F) \rightarrow \mathsf{Hom}_F(V,F)$ by $\overline{\Omega}(\varphi,\tau) = \tau \circ \varphi$. This map is certainly bilinear, and so by the lemma induces the linear transformation $\Omega : \mathsf{End}_F(V) \rightarrow \mathsf{End}_F(\mathsf{Hom}_F(V,F))$. Since $V$ has finite dimension, and since its dual space $\mathsf{Hom}_F(V,F)$ has the same dimension, to see that $\Omega$ is an isomorphism of vector spaces it suffices to show that the kernel is trivial. To that end, suppose $\varphi \in \mathsf{ker}\ \Omega$. Then we have $\Omega(\varphi)(\tau) = \tau \circ \varphi = 0$ for all $\tau$; in particular, $\mathsf{im}\ \varphi \subseteq \mathsf{ker}\ \tau$ for all $\tau$. If there were a nonzero element $v \in \mathsf{im}\ \varphi$, then by the Building-up lemma there would be a basis $B$ of $V$ containing $v$, and hence a linear transformation $\tau$ such that $\tau(v) \neq 0$, a contradiction. Thus $\mathsf{im}\ \varphi = 0$, so that $\varphi = 0$. Hence $\Omega$ is injective, and so an isomorphism of vector spaces.

Note that $\Omega(\varphi \circ \psi)(\tau) = \tau \circ \varphi \circ \psi$, while $(\Omega(\varphi) \circ \Omega(\psi))(\tau) = \Omega(\varphi)(\Omega(\psi)(\tau))$ $= \Omega(\varphi)(\tau \circ \psi)$ $= \tau \circ \psi \circ \varphi$. If $V$ has dimension greater than 1, then $\mathsf{End}_F(V)$ is not a commutative ring. Thus these expressions need not be equal in general. In fact, if we choose $\tau$, $\varphi$, and $\psi$ such that $M(\varphi) = \left[ \begin{array}{c|c} 0 & 1 \\ \hline 0 & 0 \end{array} \right]$, $M(\psi) = \left[ \begin{array}{c|c} 0 & 0 \\ \hline 1 & 0 \end{array} \right]$, and $M(\tau) = [1 | 0]$, then clearly $\tau \circ \varphi \circ \psi \neq 0$ and $\tau \circ \psi \circ \varphi = 0$. In particular, $\Omega$ is not a ring isomorphism if $n > 1$. On the other hand, if $n = 1$, then $\mathsf{End}_F(V) \cong F$ is commutative, and $\Omega$ is a ring isomorphism.

Nonetheless, the two rings are isomorphic, since $V$ and $\mathsf{Hom}_F(V,F)$ are vector spaces of the same dimension and vector spaces of the same dimension have isomorphic endomorphism rings.

Note that $\mathsf{End}_F(V)$ and $\mathsf{End}_F(\mathsf{Hom}_F(V,F))$ are both $F$-algebras via the usual scalar multiplication by $F$. Fix a basis $B$ of $V$, and identify the linear transformation $\varphi \in \mathsf{End}_F(V)$ with its matrix $M^B_B(\varphi)$ with respect to this basis. (Likewise for $\mathsf{Hom}_F(V,F)$.) Now define $\Theta : \mathsf{End}_F(V) \rightarrow \mathsf{End}_F(\mathsf{Hom}_F(V,F))$ by $\Theta(M)(N) = NM^\mathsf{T}$. It is clear that $\Theta$ is well defined, and moreover is an $F$-vector space homomorphism. Note also that $\Theta(M_1M_2)(N) = N(M_1M_2)^\mathsf{T}$ $= NM_2^\mathsf{T}M_1^\mathsf{T}$ $= \Theta(M_1)(\Theta(M_2)(N))$, so that $\Theta(M_1M_2) = \Theta(M_1)\Theta(M_2)$. Thus $\Theta$ is a ring homomorphism; since $\Theta(I)(N) = N$, we have $\Theta(I) = 1$, and indeed $\Theta$ is an $F$-algebra homomorphism. It remains to be seen that $\Theta$ is an isomorphism; it suffices to show injectivity. To that end, suppose $\Theta(M)(N) = NM^\mathsf{T} = 0$ for all $N$. Then $MN^\mathsf{T} = (NM^\mathsf{T})^\mathsf{T} = 0$ for all $N$; taking $N = I$, we get $M = 0$. Thus $\Theta$ is an $F$-algebra isomorphism $\mathsf{End}_F(V) \rightarrow \mathsf{End}_F(\mathsf{Hom}_F(V,F))$. Note that $\Theta$ depends essentially on our choice of a basis $B$, and so is not “natural”.
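A numpy sketch of the contrast, using the matrices chosen above: $\Omega$ acts by $\tau \mapsto \tau \varphi$ and reverses products, while $\Theta(M) : N \mapsto NM^\mathsf{T}$ preserves them.

```python
# Omega reverses composition; Theta(M): N -> N M^T preserves it.
import numpy as np

phi = np.array([[0, 1], [0, 0]])
psi = np.array([[0, 0], [1, 0]])
tau = np.array([[1, 0]])          # a functional, as a 1x2 matrix

# Omega(phi psi)(tau) = tau phi psi, but
# (Omega(phi) Omega(psi))(tau) = tau psi phi -- the order is reversed.
assert np.any(tau @ phi @ psi != 0)
assert np.all(tau @ psi @ phi == 0)

# Theta is multiplicative: Theta(M1 M2)(N) = N (M1 M2)^T = (N M2^T) M1^T
M1, M2 = phi, psi
N = np.array([[1, 2], [3, 4]])
assert np.array_equal(N @ (M1 @ M2).T, (N @ M2.T) @ M1.T)
```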

### The matrix of an extension of scalars

Let $K$ be a field and let $F \subseteq K$ be a subfield. Let $\psi : V \rightarrow W$ be a linear transformation of finite dimensional vector spaces over $F$.

1. Prove that $1 \otimes \psi : K \otimes_F V \rightarrow K \otimes_F W$ is a $K$-linear transformation.
2. Fix bases $B$ and $E$ of $V$ and $W$, respectively. Compare the matrices of $\psi$ and $1 \otimes \psi$ with respect to these bases.

Note that since $K$ is a $(K,F)$-bimodule, $1 \otimes \psi$ is a $K$-module homomorphism; that is, a linear transformation.

Let $B = \{v_i\}$ and $E = \{w_i\}$. We claim that $B^\prime = \{ 1 \otimes v_i \}$ and $E^\prime = \{1 \otimes w_i\}$ are bases of $K \otimes_F V$ and $K \otimes_F W$, respectively, as $K$-vector spaces. In fact, we showed this in a previous exercise. (See part (1).)

Suppose $\psi(v_j) = \sum a_{i,j} w_i$. Then $(1 \otimes \psi)(1 \otimes v_j) = \sum a_{i,j} (1 \otimes w_i)$. Thus the columns of $M^E_B(\psi)$ and $M^{E^\prime}_{B^\prime}(1 \otimes \psi)$ agree, and we have $M^E_B(\psi) = M^{E^\prime}_{B^\prime}(1 \otimes \psi)$.

### The trace of a Kronecker product is the product of traces

Let $A$ and $B$ be square matrices over a ring $R$. Recall that the trace of a square matrix is the sum of its diagonal entries. Let $A\ \mathsf{kr}\ B$ denote the Kronecker product of $A$ and $B$. Prove that $\mathsf{tr}(A\ \mathsf{kr}\ B) = \mathsf{tr}(A)\ \mathsf{tr}(B)$.

First we will give a concrete recursive characterization of the Kronecker product.

Let $A$ and $B$ be matrices. If $A = [a]$, then $A\ \mathsf{kr}\ B = aB$. If $A = \left[ \begin{array}{c|c} a & A_{1,2} \\ \hline A_{2,1} & A_{2,2} \end{array} \right]$, then $A\ \mathsf{kr}\ B = \left[ \begin{array}{c|c} aB & A_{1,2}\ \mathsf{kr}\ B \\ \hline A_{2,1}\ \mathsf{kr}\ B & A_{2,2}\ \mathsf{kr}\ B \end{array} \right]$.

Now to the main result; we will proceed by induction on the dimension of $A$. If $A = [a]$, then certainly $\mathsf{tr}(A) = a$, and $\mathsf{tr}(A\ \mathsf{kr}\ B) = \mathsf{tr}(aB) = a\ \mathsf{tr}(B) = \mathsf{tr}(A)\mathsf{tr}(B)$.

For the inductive step, suppose the result holds when $A$ has dimension $n$, and let $A$ be a matrix with dimension $n+1$. Say $A = \left[ \begin{array}{c|c} a & A_{1,2} \\ \hline A_{2,1} & A_{2,2} \end{array} \right]$. Then $\mathsf{tr}(A\ \mathsf{kr}\ B) = \mathsf{tr}\left( \left[ \begin{array}{c|c} aB & A_{1,2}\ \mathsf{kr}\ B \\ \hline A_{2,1}\ \mathsf{kr}\ B & A_{2,2}\ \mathsf{kr}\ B \end{array} \right] \right)$ $= \mathsf{tr}(aB) + \mathsf{tr}(A_{2,2}\ \mathsf{kr}\ B)$ $= a\ \mathsf{tr}(B) + \mathsf{tr}(A_{2,2})\mathsf{tr}(B)$ (using the induction hypothesis) $= (a + \mathsf{tr}(A_{2,2}))\mathsf{tr}(B)$ $= \mathsf{tr}(A)\mathsf{tr}(B)$ as desired.
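The identity is easy to test numerically; numpy's `np.kron` computes the Kronecker product, and the random integer matrices below are an arbitrary choice.

```python
# Check tr(A kr B) = tr(A) tr(B) on sample integer matrices.
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(3, 3))
B = rng.integers(-5, 5, size=(4, 4))

assert np.trace(np.kron(A, B)) == np.trace(A) * np.trace(B)
```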