Tag Archives: vector space

Show that a given linear transformation has no eigenvectors

Let V = \mathbb{R}^\mathbb{N} be the \mathbb{R}-vector space of all sequences of real numbers, and define T : V \rightarrow V by T((a_i))_j = 0 if j = 0 and T((a_i))_j = a_{j-1} otherwise. Show that T has no eigenvectors.


Suppose there exist a nonzero a \in V and r \in \mathbb{R} such that T(a) = ra. If r = 0, then a_{j-1} = T(a)_j = 0 for all j \geq 1, so that a = 0, a contradiction. Suppose instead that r \neq 0. We claim that a_i = 0 for all i, and prove it by induction. For the base case i = 0, we have ra_0 = T(a)_0 = 0. Since r \neq 0, a_0 = 0. For the inductive step, if a_i = 0, then ra_{i+1} = T(a)_{i+1} = a_i = 0. Again since r \neq 0, a_{i+1} = 0. So in fact a = 0, a contradiction.

So T has no eigenvectors.
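
As a quick sanity check (a minimal sympy sketch, not part of the proof; the truncation length N is an arbitrary choice), we can impose the eigenvector equations on the first N coordinates and see that only the zero solution survives:

```python
import sympy as sp

N = 8                                  # number of leading coordinates to check
r = sp.symbols('r', nonzero=True)      # a candidate nonzero eigenvalue
a = sp.symbols(f'a0:{N}')              # coordinates a_0, ..., a_{N-1}

# T(a) = r*a reads: r*a_0 = 0, and r*a_j = a_{j-1} for j >= 1
eqs = [sp.Eq(r * a[0], 0)] + [sp.Eq(r * a[j], a[j - 1]) for j in range(1, N)]
print(sp.solve(eqs, a))                # {a0: 0, a1: 0, ..., a7: 0}
```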

Some properties of a QQ-vector space given the existence of a linear transformation with certain properties

Let V be a finite dimensional vector space over \mathbb{Q} and suppose T is a nonsingular linear transformation on V such that T^{-1} = T^2 + T. Prove that the dimension of V is divisible by 3. If the dimension of V is precisely 3, prove that all such transformations are similar.


If T is such a transformation, then multiplying T^{-1} = T^2 + T by T gives 1 = T^3 + T^2, and so T^3 + T^2 - 1 = 0. So the minimal polynomial of T divides p(x) = x^3 + x^2 - 1. Now p(x) has degree 3, and by the Rational Root Test (Prop. 11 on page 308 in D&F) its only possible rational roots are \pm 1, neither of which is a root; thus p(x) is irreducible over \mathbb{Q}. So the minimal polynomial of T is precisely p(x). Now the characteristic polynomial of T divides some power of p(x) (Prop. 20 in D&F) and so has degree 3k for some k \geq 1. But the degree of the characteristic polynomial is precisely the dimension of V, so the dimension of V is divisible by 3, as desired.

Now if V has dimension 3, then the minimal polynomial of T is also its characteristic polynomial, so that the only invariant factor of T is p(x). Since the list of invariant factors determines the similarity class, all such T are similar.
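
For a concrete illustration (a small sympy sketch; the companion matrix is a standard construction, not part of the exercise), p(x) is irreducible over \mathbb{Q} and its companion matrix realizes such a T on \mathbb{Q}^3:

```python
import sympy as sp

x = sp.symbols('x')
p = sp.Poly(x**3 + x**2 - 1, x, domain='QQ')
print(p.factor_list())        # (1, [(x**3 + x**2 - 1, 1)]): irreducible over Q

# Companion matrix of p(x): its minimal and characteristic polynomials are p(x)
C = sp.Matrix([[0, 0, 1],
               [1, 0, 0],
               [0, 1, -1]])
assert C.charpoly(x).as_expr() == x**3 + x**2 - 1
assert C.inv() == C**2 + C    # the defining relation T^{-1} = T^2 + T
```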

The degree of the minimal polynomial of a matrix of dimension n is at most n²

Let A \in \mathsf{Mat}_n(F) be a matrix over a field F. Prove that the degree of the minimal polynomial m_A(x) of A is at most n^2.


If V = F^n, note that the set \mathsf{End}_F(V) is isomorphic to \mathsf{Mat}_n(F) once we select a basis. The set of all n \times n matrices over F is naturally an F-vector space of dimension n^2. Consider now the n^2 + 1 powers of A: 1 = A^0, A^1, A^2, \ldots, A^{n^2}. These matrices, as elements of \mathsf{Mat}_n(F), are necessarily linearly dependent, so that \sum_{i=0}^{n^2} r_i A^i = p(A) = 0 for some r_i \in F, not all zero. Then p(x) is a nonzero polynomial of degree at most n^2 with p(A) = 0, and the minimal polynomial m_A(x) divides p(x); hence m_A(x) has degree at most n^2.
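
The dependence argument is easy to watch in action; in this minimal sympy sketch (the matrix A is an arbitrary sample), stacking the flattened powers 1, A, \ldots, A^{n^2} as rows shows their rank cannot reach n^2 + 1:

```python
import sympy as sp

n = 3
A = sp.randMatrix(n, n, min=-3, max=3, seed=0)   # a sample 3x3 matrix

# Flatten the n^2 + 1 powers A^0, ..., A^(n^2) into rows of one matrix
powers = sp.Matrix([list(A**k) for k in range(n**2 + 1)])
print(powers.shape, powers.rank())   # (10, 9), rank <= 9: rows are dependent
```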

Over a field of characteristic not two, the tensor square of a vector space decomposes as the direct sum of the symmetric and exterior tensor squares

Let F be a field of characteristic not 2, and let V be an n-dimensional vector space over F. Recall that in this case, we can realize \mathcal{S}^2(V) and \bigwedge^2(V) as subspaces of V \otimes_F V. Prove that V \otimes_F V = \mathcal{S}^2(V) \oplus \bigwedge^2(V).


Recall that \mathsf{dim}_F\ V \otimes_F V = n^2.

Suppose z \in \mathcal{S}^2(V) \cap \bigwedge^2(V). Then we have \mathsf{Alt}_2(z) = z = \mathsf{Sym}_2(z). Expanding these, we see that z + (1\ 2) z = z - (1\ 2) z, so that 2(1\ 2) z = 0. Since \mathsf{char}\ F \neq 2 and (1\ 2) acts invertibly on V \otimes_F V, we get z = 0. That is, \mathcal{S}^2(V) and \bigwedge^2(V) intersect trivially.

Counting dimensions, recall that \mathsf{dim}_F\ \mathcal{S}^2(V) = \frac{n(n+1)}{2} and \mathsf{dim}_F\ \bigwedge^2(V) = \frac{n(n-1)}{2}.

Since \frac{n(n+1)}{2} + \frac{n(n-1)}{2} = n^2 = \mathsf{dim}_F\ V \otimes_F V and the intersection \mathcal{S}^2(V) \cap \bigwedge^2(V) is trivial, we have V \otimes_F V = \mathcal{S}^2(V) \oplus \bigwedge^2(V).
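
As a concrete check (a sympy sketch with n = 3 chosen arbitrarily), we can build the swap operator on V \otimes V \cong F^{n^2} and verify that the symmetric and alternating projections, which require 2 to be invertible, are complementary with the expected ranks:

```python
import sympy as sp

n = 3
I = sp.eye(n * n)

# The swap operator S on V (x) V sends e_i (x) e_j to e_j (x) e_i,
# where e_i (x) e_j sits at index i*n + j
S = sp.zeros(n * n, n * n)
for i in range(n):
    for j in range(n):
        S[j * n + i, i * n + j] = 1

P_sym = (I + S) / 2   # projection onto Sym^2(V); uses that 2 is invertible
P_alt = (I - S) / 2   # projection onto Wedge^2(V)

assert P_sym + P_alt == I and P_sym * P_alt == sp.zeros(n * n, n * n)
print(P_sym.rank(), P_alt.rank())   # 6 and 3, i.e., n(n+1)/2 and n(n-1)/2
```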

Facts about alternating bilinear maps on vector spaces

Let F be a field, let V be an n-dimensional F-vector space, and let f : V \times V \rightarrow W be a bilinear map, where W is an F-vector space.

  1. Prove that if f is an alternating F-bilinear map on V then f(x,y) = -f(y,x) for all x,y \in V.
  2. Suppose \mathsf{char}\ F \neq 2. Prove that f(x,y) is an alternating bilinear map on V if and only if f(x,y) = -f(y,x) for all x,y \in V.
  3. Suppose \mathsf{char}\ F = 2. Prove that every alternating bilinear map f(x,y) on V is symmetric. (I.e. f(x,y) = f(y,x) for all x,y \in V.) Prove that there exist symmetric bilinear maps which are not alternating.

Let x,y \in V, and suppose f is an alternating bilinear map. Now 0 = f(x-y,x-y) = f(x,x) - f(x,y) - f(y,x) + f(y,y) = -f(x,y) - f(y,x), since f(x,x) = f(y,y) = 0. Thus f(x,y) = -f(y,x).

Suppose \mathsf{char}\ F \neq 2; in particular, 2 is a unit in F. If f is bilinear such that f(x,y) = -f(y,x) for all x,y \in V, then in particular we have f(x,x) = -f(x,x), so that 2f(x,x) = 0. Thus f(x,x) = 0 for all x \in V, and so f is alternating. Conversely, if f is alternating then by part (1) above we have f(x,y) = -f(y,x) for all x, y \in V.

Now suppose \mathsf{char}\ F = 2. If f is alternating, then by part (1) we have f(x,y) = -f(y,x) = f(y,x) for all x,y \in V, so that every alternating bilinear map is symmetric. To exhibit a symmetric bilinear map which is not alternating, recall that \mathcal{A}^2(V) denotes the submodule of V \otimes_F V generated by the tensors x \otimes x, and note that (x+y) \otimes (x+y) = x \otimes x + x \otimes y + y \otimes x + y \otimes y. Mod \mathcal{A}^2(V), then, we have x \otimes y + y \otimes x = 0, and since \mathsf{char}\ F = 2, also x \otimes y - y \otimes x = 0. In particular, the submodule \mathcal{C}^2(V) generated by all tensors of the form x \otimes y - y \otimes x is contained in \mathcal{A}^2(V).

We have already seen that \mathsf{dim}_F\ \mathcal{S}^2(V) = {{n+1} \choose 2} = \frac{n(n+1)}{2} and \mathsf{dim}_F\ \bigwedge^2(V) = {n \choose 2} = \frac{n(n-1)}{2}. Since \mathcal{S}^2(V) = (V \otimes_F V)/\mathcal{C}^2(V) and \bigwedge^2(V) = (V \otimes_F V)/\mathcal{A}^2(V), we get \mathsf{dim}_F\ \mathcal{C}^2(V) = n^2 - \frac{n(n+1)}{2} = \frac{n(n-1)}{2} and \mathsf{dim}_F\ \mathcal{A}^2(V) = n^2 - \frac{n(n-1)}{2} = \frac{n(n+1)}{2}. Thus for n \geq 1 the containment \mathcal{C}^2(V) \subsetneq \mathcal{A}^2(V) is proper. Since \mathcal{A}^2(V) is generated by the tensors x \otimes x, there is some x with x \otimes x \notin \mathcal{C}^2(V), and so the bilinear map V \times V \rightarrow (V \otimes_F V)/\mathcal{C}^2(V) given by (x,y) \mapsto x \otimes y + \mathcal{C}^2(V) is symmetric but not alternating.
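
For instance, over \mathbb{F}_2 the ordinary dot product is symmetric but not alternating; here is a brute-force check on \mathbb{F}_2^2 (a small sample choice, illustrative only):

```python
from itertools import product

def f(x, y):                 # the dot product on F_2^2, reduced mod 2
    return sum(a * b for a, b in zip(x, y)) % 2

V = list(product([0, 1], repeat=2))
assert all(f(x, y) == f(y, x) for x in V for y in V)   # f is symmetric
print([x for x in V if f(x, x) != 0])   # [(0, 1), (1, 0)]: not alternating
```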

If V is an infinite dimensional vector space, then its dual space has strictly larger dimension

Let V be an infinite dimensional vector space over a field F, say with basis A. Prove that the dual space \widehat{V} = \mathsf{Hom}_F(V,F) has strictly larger dimension than does V.


We claim that \widehat{V} \cong_F \prod_A F. To prove this, for each a \in A let F_a be a copy of F. Now define \varepsilon_a : \widehat{V} \rightarrow F_a by \varepsilon_a(\widehat{v}) = \widehat{v}(a). By the universal property of direct products, there exists a unique F-linear transformation \theta : \widehat{V} \rightarrow \prod_A F_a such that \pi_a \circ \theta = \varepsilon_a for all a \in A. We claim that \theta is an isomorphism. To see surjectivity, let (v_a) \in \prod_A F_a. Now define \varphi \in \widehat{V} by letting \varphi(a) = v_a and extending linearly; certainly \theta(\varphi) = (v_a). To see injectivity, suppose \varphi \in \mathsf{ker}\ \theta. Then \theta(\varphi) = 0, so that (\pi_a \circ \theta)(\varphi) = 0, and thus \varepsilon_a(\varphi) = \varphi(a) = 0 for all a \in A. Since A is a basis of V, we have \varphi = 0. Thus \theta is an isomorphism, and we have \widehat{V} \cong_F \prod_A F.

By this previous exercise, \widehat{V} has strictly larger dimension than does V.

The dual basis of an infinite dimensional vector space does not span the dual space

Let F be a field and let V be an infinite dimensional vector space over F; say V has basis B = \{v_i\}_I. Prove that the dual basis \{\widehat{v}_i\}_I does not span the dual space \widehat{V} = \mathsf{Hom}_F(V,F).


Define \varphi \in \widehat{V} by taking \varphi(v_i) = 1 for all i \in I and extending linearly; in particular, \varphi(v_i) \neq 0 for all i \in I. Suppose now that \varphi = \sum_{i \in K} \alpha_i\widehat{v}_i where K \subseteq I is finite. Since I is infinite, there exists j \notin K; then (\sum_{i \in K} \alpha_i \widehat{v}_i)(v_j) = 0, while \varphi(v_j) = 1, a contradiction. Thus \varphi is not in \mathsf{span}\ \{\widehat{v}_i\}_I.

So the dual basis does not span \widehat{V}.
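
The obstruction is easy to simulate; in this sketch (which models vectors as finitely supported coordinate dictionaries, an ad hoc encoding of my own), any combination supported on a finite set K disagrees with \varphi on v_j for j \notin K:

```python
def phi(v):                       # the functional sending every v_i to 1
    return sum(v.values())

def dual_combo(alpha):            # sum over i in K of alpha_i * v-hat_i
    return lambda v: sum(a * v.get(i, 0) for i, a in alpha.items())

g = dual_combo({0: 1, 1: 2, 2: 3})   # supported on K = {0, 1, 2}
v_j = {5: 1}                         # the basis vector v_5, with 5 not in K
print(phi(v_j), g(v_j))              # 1 versus 0: g cannot equal phi
```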

The annihilator of a subset of a dual vector space

Let V be a vector space over a field F and let \widehat{V} = \mathsf{Hom}_F(V,F) denote the dual vector space of V. Given S \subseteq \widehat{V}, define \mathsf{Ann}(S) = \{v \in V \ |\ f(v) = 0\ \mathrm{for\ all}\ f \in S \}. (This set is called the annihilator of S in V.)

  1. Prove that \mathsf{Ann}(\widehat{S}) is a subspace of V for all \widehat{S} \subseteq \widehat{V}.
  2. Suppose \widehat{W}_1 and \widehat{W}_2 are subspaces of \widehat{V}. Prove that \mathsf{Ann}(\widehat{W}_1 + \widehat{W}_2) = \mathsf{Ann}(\widehat{W}_1) \cap \mathsf{Ann}(\widehat{W}_2) and \mathsf{Ann}(\widehat{W}_1 \cap \widehat{W}_2) = \mathsf{Ann}(\widehat{W}_1) + \mathsf{Ann}(\widehat{W}_2).
  3. Let \widehat{W}_1, \widehat{W}_2 \subseteq \widehat{V} be subspaces. Prove that \mathsf{Ann}(\widehat{W}_1) = \mathsf{Ann}(\widehat{W}_2) if and only if \widehat{W}_1 = \widehat{W}_2.
  4. Prove that, for all \widehat{S} \subseteq \widehat{V}, \mathsf{Ann}(\widehat{S}) = \mathsf{Ann}(\mathsf{span}\ \widehat{S}).
  5. Assume V is finite dimensional with basis B = \{v_i\}_{i=1}^n, and let \widehat{B} = \{\widehat{v}_i\}_{i=1}^n denote the basis dual to B. Prove that if \widehat{S} = \{\widehat{v}_i\}_{i=1}^k for some 1 \leq k \leq n, then \mathsf{Ann}(\widehat{S}) = \mathsf{span} \{v_i\}_{i=k+1}^n.
  6. Assume V is finite dimensional. Prove that if \widehat{W} \subseteq \widehat{V} is a subspace, then \mathsf{dim}\ \mathsf{Ann}(\widehat{W}) = \mathsf{dim}\ V - \mathsf{dim}\ \widehat{W}.


Recall that a bounded lattice is a tuple (L, \wedge, \vee, \top, \bot), where \wedge and \vee are binary operators on L and \top and \bot are elements of L satisfying the following:

  1. \wedge and \vee are associative and commutative,
  2. \top and \bot are identity elements with respect to \wedge and \vee, respectively, and
  3. a \wedge (a \vee b) = a and a \vee (a \wedge b) = a for all a,b \in L. (Called the “absorption laws”.)

If L_1 and L_2 are bounded lattices, a bounded lattice homomorphism is a mapping \varphi : L_1 \rightarrow L_2 that preserves the operators: \varphi(a \wedge b) = \varphi(a) \wedge \varphi(b), \varphi(a \vee b) = \varphi(a) \vee \varphi(b), \varphi(\bot) = \bot, and \varphi(\top) = \top. As usual, a bounded lattice homomorphism which is also bijective is called a lattice isomorphism.

The interchangeability of \wedge and \vee (and of \bot and \top) immediately suggests the following definition. Given a bounded lattice L, we define a new lattice \widehat{L} having the same base set as L but with the roles of \wedge and \vee (and of \bot and \top) interchanged. This \widehat{L} is called the dual lattice of L.

Let V be a vector space (of arbitrary dimension) over a field F. We let \mathcal{S}_F(V) denote the set of all F-subspaces of V. We claim that (\mathcal{S}_F(V), \cap, +, V, 0) is a bounded lattice. The least obvious of the axioms to check are the absorption laws. Indeed, note that for all subspaces U,W \subseteq V, we have U \cap (U + W) = U and U + (U \cap W) = U.

Now let V be a vector space (again of arbitrary dimension) over a field F, and let \widehat{V} = \mathsf{Hom}_F(V,F) denote its dual space. If S \subseteq \widehat{V} is an arbitrary subset and \mathsf{Ann}(S) is defined as above, note that f(0) = 0 for all f \in S, so that 0 \in \mathsf{Ann}(S), and that if x,y \in \mathsf{Ann}(S) and r \in F, then f(x+ry) = f(x)+rf(y) = 0 for all f \in S. By the submodule criterion, \mathsf{Ann}(S) \subseteq V is a subspace.

Now define A : \mathcal{S}_F(\widehat{V}) \rightarrow \widehat{\mathcal{S}_F(V)} by A(\widehat{W}) = \mathsf{Ann}(\widehat{W}). We claim that if V is finite dimensional, then A is a bounded lattice homomorphism.

  1. (A(\widehat{0}) = V) Note that for all v \in V, we have \widehat{0}(v) = 0. Thus V = \mathsf{Ann}(\widehat{0}) = A(\widehat{0}). (\widehat{0} is the zero function V \rightarrow F.)
  2. (A(\widehat{V}) = 0) Suppose there exists a nonzero element v \in \mathsf{Ann}(\widehat{V}). Then there exists a basis E of V containing v, and we may construct a homomorphism \varphi : V \rightarrow F such that \varphi(v) \neq 0. In particular, v \notin A(\widehat{V}). On the other hand, it is certainly the case that 0 \in A(\widehat{V}). Thus we have A(\widehat{V}) = 0.
  3. (A(\widehat{W}_1 + \widehat{W}_2) = A(\widehat{W}_1) \cap A(\widehat{W}_2)) (\subseteq) Let v \in A(\widehat{W}_1 + \widehat{W}_2). Then for all f + g \in \widehat{W}_1 + \widehat{W}_2, we have (f+g)(v) = f(v) + g(v) = 0. In particular, if f \in \widehat{W}_1, then f(v) = (f+0)(v) = 0, so that v \in A(\widehat{W}_1). Similarly, v \in A(\widehat{W}_2), and thus v \in A(\widehat{W}_1) \cap A(\widehat{W}_2). (\supseteq) Suppose v \in A(\widehat{W}_1) \cap A(\widehat{W}_2). Then for all f+g \in \widehat{W}_1 + \widehat{W}_2, we have (f+g)(v) = f(v)+g(v) = 0; thus v \in A(\widehat{W}_1+\widehat{W}_2). Thus A(\widehat{W}_1 + \widehat{W}_2) = A(\widehat{W}_1) \cap A(\widehat{W}_2).
  4. (A(\widehat{W}_1 \cap \widehat{W}_2) = A(\widehat{W}_1) + A(\widehat{W}_2)) (\supseteq) Suppose v \in A(\widehat{W}_1). Then f(v) = 0 for all f \in \widehat{W}_1, and in particular for all f \in \widehat{W}_1 \cap \widehat{W}_2. Thus v \in A(\widehat{W}_1 \cap \widehat{W}_2). Similarly we have A(\widehat{W}_2) \subseteq A(\widehat{W}_1 \cap \widehat{W}_2); since A(\widehat{W}_1 \cap \widehat{W}_2) is a subspace, A(\widehat{W}_1) + A(\widehat{W}_2) \subseteq A(\widehat{W}_1 \cap \widehat{W}_2). (\subseteq) By the Lemma proved below, \mathsf{dim}\ A(\widehat{W}) = \mathsf{dim}\ V - \mathsf{dim}\ \widehat{W} for every subspace \widehat{W} \subseteq \widehat{V}. Writing n = \mathsf{dim}\ V and using part (3) above together with the dimension formula for a sum of subspaces, we compute \mathsf{dim}(A(\widehat{W}_1) + A(\widehat{W}_2)) = \mathsf{dim}\ A(\widehat{W}_1) + \mathsf{dim}\ A(\widehat{W}_2) - \mathsf{dim}(A(\widehat{W}_1) \cap A(\widehat{W}_2)) = (n - \mathsf{dim}\ \widehat{W}_1) + (n - \mathsf{dim}\ \widehat{W}_2) - \mathsf{dim}\ A(\widehat{W}_1 + \widehat{W}_2) = (n - \mathsf{dim}\ \widehat{W}_1) + (n - \mathsf{dim}\ \widehat{W}_2) - (n - \mathsf{dim}(\widehat{W}_1 + \widehat{W}_2)) = n - \mathsf{dim}(\widehat{W}_1 \cap \widehat{W}_2) = \mathsf{dim}\ A(\widehat{W}_1 \cap \widehat{W}_2). Since A(\widehat{W}_1) + A(\widehat{W}_2) is contained in A(\widehat{W}_1 \cap \widehat{W}_2) and the two have the same finite dimension, they are equal. (Note that our proof depends on V being finite dimensional.)

Thus A is a bounded lattice homomorphism. We claim also that A is bijective. To see surjectivity, let W \subseteq V be a subspace. Define \widehat{W} = \{ f \in \widehat{V} \ |\ \mathsf{ker}\ f \supseteq W \}. We claim that A(\widehat{W}) = W. Certainly W \subseteq A(\widehat{W}). Conversely, if v \notin W, then extending a basis of W by v (and then to a basis of V) we may construct f \in \widehat{V} with f[W] = 0 and f(v) = 1; this f lies in \widehat{W}, so v \notin A(\widehat{W}). Thus A(\widehat{W}) = W. Before we show injectivity, we give a lemma.

Lemma: Let \widehat{W} \subseteq \widehat{V} be a subspace with basis \{\widehat{v}_i\}_{i=1}^k, and extend to a basis \{\widehat{v}_i\}_{i=1}^n of \widehat{V}. Let \{v_i\}_{i=1}^n be the dual basis to \{\widehat{v}_i\}_{i=1}^n, obtained using the natural isomorphism V \cong \widehat{\widehat{V}}. Then A(\widehat{W}) = \mathsf{span}\ \{v_i\}_{i=k+1}^n. Proof: Let \sum \alpha_i v_i \in A(\widehat{W}). In particular, we have \widehat{v}_j(\sum \alpha_i v_i) = \alpha_j = 0 for all 1 \leq j \leq k. Thus \sum \alpha_iv_i \in \mathsf{span}\ \{v_i\}_{i=k+1}^n. Conversely, note that \widehat{v}_j(v_i) = 0 whenever 1 \leq j \leq k and k+1 \leq i \leq n; since the \widehat{v}_j with 1 \leq j \leq k span \widehat{W}, it follows that \mathsf{span}\ \{v_i\}_{i=k+1}^n \subseteq A(\widehat{W}). \square

In particular, we have \mathsf{dim}\ \widehat{W} + \mathsf{dim}\ A(\widehat{W}) = \mathsf{dim}\ V. Now suppose A(\widehat{W}) = \mathsf{span}\ \{v_i\}_{i=1}^k, and extend to a basis \{v_i\}_{i=1}^n of V. Let \{\widehat{v}_i\}_{i=1}^n denote the dual basis. Note that for all f \in \widehat{W}, writing f = \sum \alpha_i \widehat{v}_i, we have \alpha_j = f(v_j) = 0 whenever 1 \leq j \leq k. In particular, \widehat{W} \subseteq \mathsf{span}\ \{\widehat{v}_i\}_{i=k+1}^n. Considering dimensions, we have equality. Now to see injectivity of A, note that if A(\widehat{W}_1) = A(\widehat{W}_2), then by the above construction \widehat{W}_1 and \widehat{W}_2 share a basis; hence \widehat{W}_1 = \widehat{W}_2, and so A is injective.

Thus, as lattices, we have \mathcal{S}_F(\widehat{V}) \cong \widehat{\mathcal{S}_F(V)}.

Finally, note that since S \subseteq \mathsf{span}\ S, it is clear we have \mathsf{Ann}(\mathsf{span}\ S) \subseteq \mathsf{Ann}(S). Conversely, if v \in \mathsf{Ann}(S) and f = \sum \alpha_i s_i \in \mathsf{span}\ S, then f(v) = \sum \alpha_i s_i(v) = 0. Thus \mathsf{Ann}(S) = \mathsf{Ann}(\mathsf{span}\ S).
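
Computationally, after fixing the standard basis of V = \mathbb{Q}^n we may represent functionals as row vectors, so that \mathsf{Ann}(\widehat{W}) is the null space of any matrix whose rows span \widehat{W}. A brief sympy sketch (the subspaces here are arbitrary samples) illustrates the dimension count and part (2):

```python
import sympy as sp

n = 4
W1 = sp.Matrix([[1, 0, 0, 0], [0, 1, 0, 0]])   # rows span a subspace of V-hat
W2 = sp.Matrix([[0, 1, 0, 0], [0, 0, 1, 0]])

ann = lambda W: W.nullspace()                  # basis of Ann(row space of W)
print(len(ann(W1)), n - W1.rank())             # 2 2: dim Ann = dim V - dim W-hat

W_sum = sp.Matrix.vstack(W1, W2)               # rows span W1-hat + W2-hat
print(len(ann(W_sum)), n - W_sum.rank())       # 1 1: Ann(W1-hat + W2-hat) has dim n - 3
```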

Express a given linear transformation in terms of a dual basis

Let V \subseteq \mathbb{Q}[x] be the \mathbb{Q}-vector space consisting of those polynomials having degree at most 5. Recall that B = \{1,x,x^2,x^3,x^4,x^5\} is a basis for this vector space over \mathbb{Q}. For each of the following maps \varphi : V \rightarrow \mathbb{Q}, verify that \varphi is a linear transformation and express \varphi in terms of the dual basis B^\ast on V^\ast = \mathsf{Hom}_{\mathbb{Q}}(V,\mathbb{Q}).

  1. \varphi(p) = p(\alpha), where \alpha \in \mathbb{Q}.
  2. \varphi(p) = \int_0^1 p(t)\ dt
  3. \varphi(p) = \int_0^1 t^2p(t)\ dt
  4. \varphi(p) = p^\prime(\alpha), where \alpha \in \mathbb{Q}. (Prime denotes the usual derivative of a polynomial.)

Let v_i be the element of the dual basis B^\ast such that v_i(x^j) = 1 if i = j and 0 otherwise. I’m going to just assume that integration over an interval is linear.

  1. Note that \varphi(p+rq) = (p+rq)(\alpha) = p(\alpha) + rq(\alpha) = \varphi(p) + r \varphi(q); thus \varphi is indeed a linear transformation. Moreover, note that (\sum \alpha^i v_i)(\sum c_jx^j) = \sum \alpha^i v_i(\sum c_jx^j) = \sum \alpha^i \sum c_j v_i(x^j) = \sum \alpha^i c_i = (\sum c_ix^i)(\alpha). Thus \varphi = \sum \alpha^i v_i.
  2. Note that \varphi(\sum \alpha_i x^i) = \sum \frac{\alpha_i}{i+1}. Now (\sum \frac{1}{i+1} v_i)(\sum \alpha_j x^j) = \sum \frac{1}{i+1} v_i(\sum \alpha_j x^j) = \sum \frac{1}{i+1} \sum \alpha_j v_i(x^j) = \sum \frac{\alpha_i}{i+1} = \varphi(\sum \alpha_i x^i). So \varphi = \sum \frac{1}{i+1} v_i.
  3. Note that \varphi(\sum \alpha_i x^i) = \sum \frac{\alpha_i}{i+3}. Now (\sum \frac{1}{i+3} v_i)(\sum \alpha_j x^j) = \sum \frac{1}{i+3} v_i(\sum \alpha_j x^j) = \sum \frac{1}{i+3} \sum \alpha_j v_i(x^j) = \sum \frac{\alpha_i}{i+3} = \varphi(\sum \alpha_i x^i). Thus \varphi = \sum \frac{1}{i+3} v_i.
  4. Since differentiation (of polynomials) is linear and the evaluation map is linear, this \varphi is linear. Note that (\sum (i+1)\alpha^i v_{i+1})(\sum c_jx^j) = \sum (i+1)\alpha^i v_{i+1}(\sum c_jx^j) = \sum (i+1)\alpha^i \sum c_j v_{i+1}(x^j) = \sum (i+1)\alpha^i c_{i+1} = \varphi(\sum c_ix^i). Thus \varphi = \sum (i+1)\alpha^iv_{i+1}.
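
These coefficient formulas are easy to verify symbolically; here is a short sympy check of parts (3) and (4) for a generic polynomial of degree at most 5 (illustrative only):

```python
import sympy as sp

t, alpha = sp.symbols('t alpha')
c = sp.symbols('c0:6')                          # coefficients c_0, ..., c_5
p = sum(ci * t**i for i, ci in enumerate(c))

# Part 3: integral of t^2 p(t) over [0, 1] equals sum c_i / (i + 3)
lhs = sp.integrate(t**2 * p, (t, 0, 1))
assert sp.simplify(lhs - sum(ci / (i + 3) for i, ci in enumerate(c))) == 0

# Part 4: p'(alpha) equals sum (i + 1) alpha^i c_{i+1}
lhs = sp.diff(p, t).subs(t, alpha)
assert sp.expand(lhs - sum((i + 1) * alpha**i * c[i + 1] for i in range(5))) == 0
```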

The endomorphism rings of a vector space and its dual space are isomorphic as algebras over the base field

Let F be a field and let V be a vector space over F of some finite dimension n. Show that the mapping \Omega : \mathsf{End}_F(V) \rightarrow \mathsf{End}_F(\mathsf{Hom}_F(V,F)) given by \Omega(\varphi)(\tau) = \tau \circ \varphi is an F-vector space isomorphism but not a ring isomorphism for n > 1. Exhibit an F-algebra isomorphism \mathsf{End}_F(V) \rightarrow \mathsf{End}_F(\mathsf{Hom}_F(V,F)).


We begin with a lemma.

Lemma: Let R be a unital ring and let M,A,B be left unital R-modules. If \varphi : M \times A \rightarrow B is R-bilinear, then the induced map \Phi : M \rightarrow \mathsf{Hom}_R(A,B) given by \Phi(m)(a) = \varphi(m,a) is a well-defined R-module homomorphism. Proof: To see well-definedness, we need to verify that \Phi(m) : A \rightarrow B is a module homomorphism. To that end, note that \Phi(m)(x+ry) = \varphi(m,x+ry) = \varphi(m,x) + r \varphi(m,y) = \Phi(m)(x) + r\Phi(m)(y). Similarly, to show that \Phi is a module homomorphism, note that \Phi(x+ry)(a) = \varphi(x+ry,a) = \varphi(x,a)+ r\varphi(y,a) = \Phi(x)(a) + r\Phi(y)(a) = (\Phi(x)+r\Phi(y))(a), so that \Phi(x+ry) = \Phi(x) + r\Phi(y). \square

[Note to self: In a similar way, if R is a unital ring and M,N,A,B unital modules, and \varphi : M \times N \times A \rightarrow B is trilinear, then \Phi : M \times N \rightarrow \mathsf{Hom}_R(A,B) is bilinear. (So that the induced map M \rightarrow \mathsf{Hom}_R(N,\mathsf{Hom}_R(A,B)) is a module homomorphism, or unilinear, if you will.) That is to say, in a concrete fashion we can think of multilinear maps as the uncurried versions of higher order functions on modules. (!!!) (I just had a minor epiphany and it made me happy. Okay, so the usual isomorphism V \rightarrow \mathsf{Hom}_F(V,F) is just this lemma applied to the dot product V \times V \rightarrow F… that’s cool.) Moreover, if A = B and if M and \mathsf{End}_R(A) are R-algebras, then the induced map \Phi is an algebra homomorphism if and only if \varphi(m_1m_2,a) = \varphi(m_1,\varphi(m_2,a)) and \varphi(1,a) = a.]

Define \overline{\Omega} : \mathsf{End}_F(V) \times \mathsf{Hom}_F(V,F) \rightarrow \mathsf{Hom}_F(V,F) by \overline{\Omega}(\varphi,\tau) = \tau \circ \varphi. This map is certainly bilinear, and so by the lemma induces the linear transformation \Omega : \mathsf{End}_F(V) \rightarrow \mathsf{End}_F(\mathsf{Hom}_F(V,F)). Since V has finite dimension, and since its dual space \mathsf{Hom}_F(V,F) has the same dimension, to see that \Omega is an isomorphism of vector spaces it suffices to show that the kernel is trivial. To that end, suppose \varphi \in \mathsf{ker}\ \Omega. Then we have \Omega(\varphi)(\tau) = \tau \circ \varphi = 0 for all \tau; in particular, \mathsf{im}\ \varphi \subseteq \mathsf{ker}\ \tau for all \tau. If there were a nonzero element v \in \mathsf{im}\ \varphi, then by the Building-up lemma there would be a basis B of V containing v, and hence a linear transformation \tau with \tau(v) \neq 0, a contradiction. Thus \mathsf{im}\ \varphi = 0, so that \varphi = 0. Hence \Omega is injective, and so an isomorphism of vector spaces.

Note that \Omega(\varphi \circ \psi)(\tau) = \tau \circ \varphi \circ \psi, while (\Omega(\varphi) \circ \Omega(\psi))(\tau) = \Omega(\varphi)(\Omega(\psi)(\tau)) = \Omega(\varphi)(\tau \circ \psi) = \tau \circ \psi \circ \varphi. If V has dimension greater than 1, then \mathsf{End}_F(V) is not a commutative ring, so these expressions need not be equal. Concretely, for n = 2, if we choose \tau, \varphi, and \psi such that M(\varphi) = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, M(\psi) = \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}, and M(\tau) = \begin{bmatrix} 1 & 0 \end{bmatrix}, then \tau \circ \varphi \circ \psi \neq 0 while \tau \circ \psi \circ \varphi = 0. In particular, \Omega is not a ring isomorphism if n > 1. On the other hand, if n = 1, then \mathsf{End}_F(V) \cong F is commutative, and \Omega is a ring isomorphism.

Nevertheless, the rings \mathsf{End}_F(V) and \mathsf{End}_F(\mathsf{Hom}_F(V,F)) are isomorphic, since V and \mathsf{Hom}_F(V,F) are vector spaces of the same dimension.

Note that \mathsf{End}_F(V) and \mathsf{End}_F(\mathsf{Hom}_F(V,F)) are both F-algebras via the usual scalar multiplication by F. Fix a basis B of V, and identify the linear transformation \varphi \in \mathsf{End}_F(V) with its matrix M^B_B(\varphi) with respect to this basis. (Likewise for \mathsf{End}_F(\mathsf{Hom}_F(V,F)), using the dual basis.) Now define \Theta : \mathsf{End}_F(V) \rightarrow \mathsf{End}_F(\mathsf{Hom}_F(V,F)) by \Theta(M)(N) = NM^\mathsf{T}. It is clear that \Theta is well defined, and moreover is an F-vector space homomorphism. Note also that \Theta(M_1M_2)(N) = N(M_1M_2)^\mathsf{T} = NM_2^\mathsf{T}M_1^\mathsf{T} = \Theta(M_1)(\Theta(M_2)(N)), so that \Theta(M_1M_2) = \Theta(M_1)\Theta(M_2). Thus \Theta is a ring homomorphism; since \Theta(I)(N) = N, we have \Theta(I) = 1, and indeed \Theta is an F-algebra homomorphism. It remains to show that \Theta is an isomorphism; by a dimension count it suffices to show injectivity. To that end, suppose \Theta(M)(N) = NM^\mathsf{T} = 0 for all N. Taking N = I gives M^\mathsf{T} = 0, and so M = 0. Thus \Theta is an F-algebra isomorphism \mathsf{End}_F(V) \rightarrow \mathsf{End}_F(\mathsf{Hom}_F(V,F)). Note that \Theta depends essentially on our choice of a basis B, and so is not “natural”.
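
As a final sanity check (a sympy sketch using the 2 \times 2 matrices above), \Omega reverses composition while \Theta preserves it:

```python
import sympy as sp

phi = sp.Matrix([[0, 1], [0, 0]])
psi = sp.Matrix([[0, 0], [1, 0]])
tau = sp.Matrix([[1, 0]])           # a functional, written as a row vector

print(tau * (phi * psi))            # Matrix([[1, 0]]): tau o phi o psi != 0
print(tau * (psi * phi))            # Matrix([[0, 0]]): tau o psi o phi = 0

# Theta(M)(N) = N M^T is multiplicative because (M1 M2)^T = M2^T M1^T
N = sp.Matrix([[1, 2]])
assert N * (phi * psi).T == (N * psi.T) * phi.T
```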