Tag Archives: kernel

A generating set for the kernel of a given module homomorphism

Let V, T, A, et cetera be as in this previous exercise.

Show that \mathsf{ker}\ \varphi is generated by D = \{\omega_1,\ldots,\omega_n\}.


Let \zeta \in \mathsf{ker}\ \varphi. By this previous exercise, we have \zeta \in \mathsf{span}_{F[x]}(\omega_1,\ldots,\omega_n) + \mathsf{span}_F(\xi_1,\ldots,\xi_n); write \zeta = \eta + \sum b_i \xi_i, where \eta \in \mathsf{span}_{F[x]}(\omega_1,\ldots,\omega_n) and b_i \in F. Recall that the \omega_i are in \mathsf{ker}\ \varphi, so that 0 = \varphi(\zeta) = \varphi(\eta) + \sum b_i \varphi(\xi_i) = \sum b_i v_i. Since the v_i are a basis for V over F, we have b_i = 0 for all i, and thus \zeta = \eta. So \mathsf{ker}\ \varphi \subseteq \mathsf{span}_{F[x]}(\omega_1,\ldots,\omega_n). The reverse inclusion is immediate, and so the \omega_i are a generating set for \mathsf{ker}\ \varphi over F[x].

Two facts about the kernel of a linear transformation on an F[x]-module

Let V, T, B, A, E, and \varphi be as described in the previous exercise.

  1. Show that x \xi_j = \omega_j + f_j, where f_j \in \sum_i F\xi_i is in the F-vector space spanned by the \xi_i.
  2. Show that \mathsf{span}_{F[x]}(\xi_1,\ldots,\xi_n) = \mathsf{span}_{F[x]}(\omega_1,\ldots,\omega_n) + \mathsf{span}_F(\xi_1,\ldots,\xi_n).

Recall that \omega_j = x \xi_j - \sum a_{i,j} \xi_i by definition, so that x \xi_j = \omega_j + \sum a_{i,j} \xi_i = \omega_j + f_j as desired.

We claim that in fact x^t \xi_j \in \mathsf{span}_{F[x]}(\omega_1,\ldots,\omega_n) + \mathsf{span}_F(\xi_1,\ldots,\xi_n) for all t \geq 0, and prove this by induction on t. The base case is already shown; for the inductive step, suppose the conclusion holds for x^t \xi_i for all i. Now x^{t+1} \xi_j = x^t(\omega_j + f_j). Certainly x^t \omega_j \in \mathsf{span}_{F[x]}(\omega_1,\ldots,\omega_n), and since f_j = \sum a_{i,j} \xi_i, we have x^t f_j = \sum a_{i,j} x^t \xi_i, which is in the desired span by the inductive hypothesis. Thus \mathsf{span}_{F[x]}(\xi_1,\ldots,\xi_n) \subseteq \mathsf{span}_{F[x]}(\omega_1,\ldots,\omega_n) + \mathsf{span}_F(\xi_1,\ldots,\xi_n). The reverse inclusion is immediate, and so these subspaces are equal.

A fact about the kernel of a linear transformation on an F[x]-vector space

Let V be an n-dimensional vector space over a field F with basis B = \{v_1, \ldots, v_n\}, let T be a linear transformation on V whose matrix with respect to B is A = [a_{i,j}] (that is, T(v_j) = \sum a_{i,j} v_i for each j), and make V into an F[x]-module in the usual way, with x acting as T. Let F[x]^n be the free module of rank n over F[x] and let E = \{ \xi_1, \ldots, \xi_n \} be a basis. Let \varphi : F[x]^n \rightarrow V be the (surjective) F[x]-module homomorphism given by defining \varphi(\xi_i) = v_i and extending F[x]-linearly.

As demonstrated in the series of exercises beginning here, once we have a generating set for \mathsf{ker}\ \varphi, we can compute the invariant factors of A. In the next few exercises, we will find such a generating set.

Show that the elements \omega_j = \sum_{i=1}^n (\delta_{i,j}x - a_{i,j}) \xi_i = x \xi_j - \sum_{i=1}^n a_{i,j} \xi_i are in \mathsf{ker}\ \varphi, where \delta_{i,j} is the Kronecker delta.


Note that \varphi(\omega_j) = \varphi(x \xi_j - \sum a_{i,j} \xi_i) = x \varphi(\xi_j) - \sum a_{i,j} \varphi(\xi_i) = x \cdot v_j - \sum a_{i,j} v_i = T(v_j) - \sum a_{i,j} v_i = 0 as desired.
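As a sanity check, this computation can be carried out symbolically for a concrete matrix. In the sketch below (using sympy; the matrix A is an arbitrary example, not taken from the exercise), an element \sum p_i \xi_i of F[x]^n is represented as a vector of polynomials, and \varphi is applied by evaluating each p_i at A:

```python
import sympy as sp

x = sp.symbols('x')
A = sp.Matrix([[0, -1], [1, 2]])  # an arbitrary example matrix over Q
n = A.rows

def poly_at_matrix(p, M):
    # evaluate the polynomial p(x) at the square matrix M via Horner's rule
    out = sp.zeros(M.rows, M.cols)
    for c in sp.Poly(p, x).all_coeffs():
        out = out * M + c * sp.eye(M.rows)
    return out

def phi(pvec):
    # phi(sum_i p_i xi_i) = sum_i p_i(T)(v_i); in coordinates, sum_i p_i(A) e_i,
    # and p_i(A) e_i is just the i-th column of p_i(A)
    return sum((poly_at_matrix(p, A).col(i) for i, p in enumerate(pvec)),
               sp.zeros(n, 1))

# omega_j is the j-th column of xI - A, read as a vector of polynomials
for j in range(n):
    omega_j = [(x if i == j else sp.Integer(0)) - A[i, j] for i in range(n)]
    assert phi(omega_j) == sp.zeros(n, 1)
```

Here \varphi(\omega_j) = Ae_j - \sum_i a_{i,j} e_i = 0, matching the hand computation above.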

Exhibit the idempotents and principal ideals in a given semigroup

Denote the function \mathsf{max} on \mathbb{N} by \wedge. Let S = \{0,1\} \times \mathbb{N}, and define a binary operator \star on S by (s,a) \star (t,b) = (0, a \wedge b) if s = t and (1, a \wedge b) if s \neq t. Show that (S,\star) is a semigroup and exhibit all of its idempotents and principal ideals. Does S have a kernel?


First, we argue that \star is associative. To show this, we refer to the following tree diagram.

Associativity diagram

This diagram is to be read from left to right. Labels on an edge indicate an assumption that holds in all subsequent branches. Each path from the root to a leaf corresponds to a string of equalities, and together these imply that \star is associative.

(Alternatively, note that the first coordinate of (s,a) \star (t,b) is s + t \bmod 2 and the second is \mathsf{max}(a,b); since both coordinatewise operations are associative, so is \star.) So (S,\star) is a semigroup.

Suppose (s,a) is idempotent. Then (s,a) = (s,a) \star (s,a) = (0, a), and we have s = 0. Conversely, (0,a) is clearly idempotent for all a \in \mathbb{N}. So the idempotents in S are precisely elements of the form (0,a) with a \in \mathbb{N}.

Next we claim that S is commutative. Indeed, if s = t, then (s,a) \star (t,b) = (0, a \wedge b) = (0, b \wedge a) = (t,b) \star (s,a), and if s \neq t, then (s,a) \star (t,b) = (1, a \wedge b) = (1, b \wedge a) = (t,b) \star (s,a).

We claim that the principal left ideal L(s,a) = \{(s,a)\} \cup S \star (s,a) is \{(t,b)\ |\ b \geq a\}. Indeed, if (t,b) \in L(s,a), then either (t,b) = (s,a), so that b = a, or (t,b) = (u,c) \star (s,a) for some (u,c), so that b = c \wedge a \geq a. Conversely, consider (t,b) with b \geq a, and let u be 0 if s = 1 and 1 otherwise. Now if t = 0, then (t,b) = (s,b) \star (s,a), and if t = 1, then (t,b) = (u,b) \star (s,a). So L(s,a) = \{(t,b)\ |\ b \geq a\}.

In particular, for every element (s,a) \in S, there is an ideal of S not containing (s,a) (for instance, L(s,a+1)). Since the kernel of S, if it exists, is contained in every ideal, S has no kernel.
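All of the claims in this post are easy to spot-check by brute force on a finite truncation S_N = \{0,1\} \times \{0,\ldots,N\}, which is closed under \star since \mathsf{max}(a,b) \leq N whenever a, b \leq N. A sketch (the bound N = 5 is an arbitrary choice):

```python
from itertools import product

def star(p, q):
    # (s,a) * (t,b) = (0, max(a,b)) if s == t, else (1, max(a,b))
    (s, a), (t, b) = p, q
    return (0 if s == t else 1, max(a, b))

N = 5
S = list(product((0, 1), range(N + 1)))

# associativity and commutativity
assert all(star(star(p, q), r) == star(p, star(q, r))
           for p in S for q in S for r in S)
assert all(star(p, q) == star(q, p) for p in S for q in S)

# the idempotents are exactly the elements (0, a)
assert {p for p in S if star(p, p) == p} == {(0, a) for a in range(N + 1)}

# L(s,a) = {(t,b) : b >= a} within the truncation
def L(s, a):
    return {(s, a)} | {star(p, (s, a)) for p in S}

assert all(L(s, a) == {(t, b) for (t, b) in S if b >= a} for (s, a) in S)
```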

A fact about alternating tensors over rings with enough units

Let R be a commutative ring in which k! is a unit, and let M be an (R,R)-bimodule such that rm = mr for all r \in R and m \in M. Recall that \mathcal{A}^k(M) is the submodule of \mathcal{T}^k(M) generated by the simple tensors having two consecutive entries equal, and that \mathsf{Alt}(z) = \sum_{\sigma \in S_k} \epsilon(\sigma)\sigma z, where S_k acts on \mathcal{T}^k(M) by permuting entries.

Prove that z - (1/k!) \mathsf{Alt}(z) = (1/k!) \sum_{\sigma \in S_k} (z - \epsilon(\sigma)\sigma z) for all z \in \mathcal{T}^k(M). Use this to prove that \mathsf{ker}\ \frac{1}{k!} \mathsf{Alt} = \mathcal{A}^k(M).


Let z \in \mathcal{T}^k(M). We have z - \frac{1}{k!} \mathsf{Alt}(z) = z - \frac{1}{k!} \sum_{\sigma \in S_k} \epsilon(\sigma)\sigma z = \frac{1}{k!}(k!z - \sum_{\sigma \in S_k} \epsilon(\sigma)\sigma z) = \frac{1}{k!}(\sum_{\sigma \in S_k} z - \sum_{\sigma \in S_k} \epsilon(\sigma)\sigma z) = \frac{1}{k!}\sum_{\sigma \in S_k} (z - \epsilon(\sigma)\sigma z), as desired.

Suppose z \in \mathcal{A}^k(M); it suffices to consider a generator z whose i-th and (i+1)-th components are equal. Note that \sigma z = \sigma (i\ i+1) z, and in particular \epsilon(\sigma)\sigma z + \epsilon(\sigma(i\ i+1)) \sigma (i\ i+1) z = 0. In the equation \mathsf{Alt}(z) = \sum_{\sigma \in S_k} \epsilon(\sigma)\sigma z, we can break up the right hand side as a summation over the left cosets of \langle (i\ i+1) \rangle, each of which contributes 0. Thus \frac{1}{k!}\mathsf{Alt}(z) = 0, and we have \mathcal{A}^k(M) \subseteq \mathsf{ker}\ \frac{1}{k!}\mathsf{Alt}.

Now suppose z \in \mathsf{ker}\ \frac{1}{k!} \mathsf{Alt}. From the equality proved above, we have \frac{1}{k!} \sum_{\sigma \in S_k} (z - \epsilon(\sigma)\sigma z) = z. Note that z - \epsilon(\sigma)\sigma z \in \mathcal{A}^k(M) for each \sigma. (It suffices to check this for adjacent transpositions \sigma = (i\ i+1), since every permutation is a product of adjacent transpositions; in that case z - \epsilon(\sigma)\sigma z = z + \sigma z lies in \mathcal{A}^k(M) because m \otimes n + n \otimes m = (m+n) \otimes (m+n) - m \otimes m - n \otimes n.) Thus z \in \mathcal{A}^k(M).

So \mathsf{ker}\ \frac{1}{k!} \mathsf{Alt} = \mathcal{A}^k(M).
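For small k, the vanishing of \mathsf{Alt} on tensors with a repeated entry can be checked by brute force, treating a simple tensor as a tuple of factors and \mathsf{Alt}(z) as a formal sum of such tuples. A sketch (the factor labels m, n are arbitrary):

```python
from itertools import permutations

def sign(p):
    # sign of a permutation given as a tuple: parity of the inversion count
    return (-1) ** sum(1 for i in range(len(p)) for j in range(i) if p[j] > p[i])

def alt(factors):
    # Alt(m_1 (x) ... (x) m_k) = sum over sigma of eps(sigma) m_sigma(1) (x) ...,
    # stored as a formal sum {tuple of factors: coefficient}
    out = {}
    for p in permutations(range(len(factors))):
        key = tuple(factors[i] for i in p)
        out[key] = out.get(key, 0) + sign(p)
    return {key: c for key, c in out.items() if c != 0}

m, n = (1, 0), (0, 1)
assert alt((m, m, n)) == {}   # two equal consecutive entries kill Alt
assert alt((m, n, m)) == {}   # indeed any repeated entry does
assert alt((m, n)) == {(m, n): 1, (n, m): -1}
```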

The annihilator of a subset of a dual vector space

Let V be a vector space over a field F and let \widehat{V} = \mathsf{Hom}_F(V,F) denote the dual vector space of V. Given S \subseteq \widehat{V}, define \mathsf{Ann}(S) = \{v \in V \ |\ f(v) = 0\ \mathrm{for\ all}\ f \in S \}. (This set is called the annihilator of S in V.)

  1. Prove that \mathsf{Ann}(\widehat{S}) is a subspace of V for all \widehat{S} \subseteq \widehat{V}.
  2. Suppose \widehat{W}_1 and \widehat{W}_2 are subspaces of \widehat{V}. Prove that \mathsf{Ann}(\widehat{W}_1 + \widehat{W}_2) = \mathsf{Ann}(\widehat{W}_1) \cap \mathsf{Ann}(\widehat{W}_2) and \mathsf{Ann}(\widehat{W}_1 \cap \widehat{W}_2) = \mathsf{Ann}(\widehat{W}_1) + \mathsf{Ann}(\widehat{W}_2).
  3. Let \widehat{W}_1, \widehat{W}_2 \subseteq \widehat{V} be subspaces. Prove that \mathsf{Ann}(\widehat{W}_1) = \mathsf{Ann}(\widehat{W}_2) if and only if \widehat{W}_1 = \widehat{W}_2.
  4. Prove that, for all \widehat{S} \subseteq \widehat{V}, \mathsf{Ann}(\widehat{S}) = \mathsf{Ann}(\mathsf{span}\ \widehat{S}).
  5. Assume V is finite dimensional with basis B = \{v_i\}_{i=1}^n, and let \widehat{B} = \{\widehat{v}_i\}_{i=1}^n denote the basis dual to B. Prove that if \widehat{S} = \{\widehat{v}_i\}_{i=1}^k for some 1 \leq k \leq n, then \mathsf{Ann}(\widehat{S}) = \mathsf{span} \{v_i\}_{i=k+1}^n.
  6. Assume V is finite dimensional. Prove that if \widehat{W} \subseteq \widehat{V} is a subspace, then \mathsf{dim}\ \mathsf{Ann}(\widehat{W}) = \mathsf{dim}\ V - \mathsf{dim}\ \widehat{W}.


Recall that a bounded lattice is a tuple (L, \wedge, \vee, \top, \bot), where \wedge and \vee are binary operators on L and \top and \bot are elements of L satisfying the following:

  1. \wedge and \vee are associative and commutative,
  2. \top and \bot are identity elements with respect to \wedge and \vee, respectively, and
  3. a \wedge (a \vee b) = a and a \vee (a \wedge b) = a for all a,b \in L. (Called the “absorption laws”.)

If L_1 and L_2 are bounded lattices, a bounded lattice homomorphism is a mapping \varphi : L_1 \rightarrow L_2 that preserves the operators: \varphi(a \wedge b) = \varphi(a) \wedge \varphi(b), \varphi(a \vee b) = \varphi(a) \vee \varphi(b), \varphi(\bot) = \bot, and \varphi(\top) = \top. As usual, a bounded lattice homomorphism which is also bijective is called a lattice isomorphism.

The interchangeability of \wedge and \vee (and of \bot and \top) immediately suggests the following definition. Given a bounded lattice L, we define a new lattice \widehat{L} having the same base set as L but with the roles of \wedge and \vee (and of \bot and \top) interchanged. This \widehat{L} is called the dual lattice of L.

Let V be a vector space (of arbitrary dimension) over a field F. We let \mathcal{S}_F(V) denote the set of all F-subspaces of V. We claim that (\mathcal{S}_F(V), \cap, +, V, 0) is a bounded lattice. The least obvious of the axioms to check are the absorption laws. Indeed, note that for all subspaces U,W \subseteq V, we have U \cap (U + W) = U and U + (U \cap W) = U.

Now let V be a vector space (again of arbitrary dimension) over a field F, and let \widehat{V} = \mathsf{Hom}_F(V,F) denote its dual space. If S \subseteq \widehat{V} is an arbitrary subset and \mathsf{Ann}(S) is defined as above, note that f(0) = 0 for all f \in S, and that if x,y \in \mathsf{Ann}(S) and r \in F, we have f(x+ry) = f(x)+rf(y) = 0 for all f \in S. By the submodule criterion, \mathsf{Ann}(S) \subseteq V is a subspace.

Now define A : \mathcal{S}_F(\widehat{V}) \rightarrow \widehat{\mathcal{S}_F(V)} by A(\widehat{W}) = \mathsf{Ann}(\widehat{W}). We claim that if V is finite dimensional, then A is a bounded lattice homomorphism.

  1. (A(\widehat{0}) = V) Note that for all v \in V, we have \widehat{0}(v) = 0. Thus V = \mathsf{Ann}(\widehat{0}) = A(\widehat{0}). (\widehat{0} is the zero function V \rightarrow F.)
  2. (A(\widehat{V}) = 0) Suppose there exists a nonzero element v \in \mathsf{Ann}(\widehat{V}). Then there exists a basis E of V containing v, and using E we may construct a homomorphism \varphi : V \rightarrow F such that \varphi(v) \neq 0 (say, \varphi(v) = 1 and \varphi = 0 on the rest of E). But \varphi \in \widehat{V}, contradicting v \in \mathsf{Ann}(\widehat{V}). On the other hand, it is certainly the case that 0 \in A(\widehat{V}). Thus we have A(\widehat{V}) = 0.
  3. (A(\widehat{W}_1 + \widehat{W}_2) = A(\widehat{W}_1) \cap A(\widehat{W}_2)) (\subseteq) Let v \in A(\widehat{W}_1 + \widehat{W}_2). Then for all f + g \in \widehat{W}_1 + \widehat{W}_2, we have (f+g)(v) = f(v) + g(v) = 0. In particular, if f \in \widehat{W}_1, then f(v) = (f+0)(v) = 0, so that v \in A(\widehat{W}_1). Similarly, v \in A(\widehat{W}_2), and thus v \in A(\widehat{W}_1) \cap A(\widehat{W}_2). (\supseteq) Suppose v \in A(\widehat{W}_1) \cap A(\widehat{W}_2). Then for all f+g \in \widehat{W}_1 + \widehat{W}_2, we have (f+g)(v) = f(v)+g(v) = 0; thus v \in A(\widehat{W}_1+\widehat{W}_2). Thus A(\widehat{W}_1 + \widehat{W}_2) = A(\widehat{W}_1) \cap A(\widehat{W}_2).
  4. (A(\widehat{W}_1 \cap \widehat{W}_2) = A(\widehat{W}_1) + A(\widehat{W}_2)) (\supseteq) Suppose v \in A(\widehat{W}_1). Then f(v) = 0 for all f \in \widehat{W}_1, and in particular for all f \in \widehat{W}_1 \cap \widehat{W}_2. Thus v \in A(\widehat{W}_1 \cap \widehat{W}_2). Similarly we have A(\widehat{W}_2) \subseteq A(\widehat{W}_1 \cap \widehat{W}_2); thus A(\widehat{W}_1) + A(\widehat{W}_2) \subseteq A(\widehat{W}_1 \cap \widehat{W}_2). (\subseteq) By the Lemma proved below, \mathsf{dim}\ A(\widehat{W}) = \mathsf{dim}\ V - \mathsf{dim}\ \widehat{W} for every subspace \widehat{W} \subseteq \widehat{V}. Using this, part (3), and the dimension formula for sums of subspaces, we have \mathsf{dim}(A(\widehat{W}_1) + A(\widehat{W}_2)) = \mathsf{dim}\ A(\widehat{W}_1) + \mathsf{dim}\ A(\widehat{W}_2) - \mathsf{dim}(A(\widehat{W}_1) \cap A(\widehat{W}_2)) = \mathsf{dim}\ A(\widehat{W}_1) + \mathsf{dim}\ A(\widehat{W}_2) - \mathsf{dim}\ A(\widehat{W}_1 + \widehat{W}_2) = (\mathsf{dim}\ V - \mathsf{dim}\ \widehat{W}_1) + (\mathsf{dim}\ V - \mathsf{dim}\ \widehat{W}_2) - (\mathsf{dim}\ V - \mathsf{dim}(\widehat{W}_1 + \widehat{W}_2)) = \mathsf{dim}\ V - \mathsf{dim}(\widehat{W}_1 \cap \widehat{W}_2) = \mathsf{dim}\ A(\widehat{W}_1 \cap \widehat{W}_2). Since the inclusion (\supseteq) holds and the dimensions agree, in fact A(\widehat{W}_1 \cap \widehat{W}_2) = A(\widehat{W}_1) + A(\widehat{W}_2). (Note that our proof depends on V being finite dimensional.)

Thus A is a bounded lattice homomorphism. We claim also that A is bijective. To see surjectivity, let W \subseteq V be a subspace and define \widehat{W} = \{ f \in \widehat{V} \ |\ \mathsf{ker}\ f \supseteq W \}. We claim that A(\widehat{W}) = W. Certainly W \subseteq A(\widehat{W}). Conversely, if v \notin W, then extending a basis of W by v (and then to a basis of V), we may construct f \in \widehat{V} with f[W] = 0 and f(v) = 1; then f \in \widehat{W} and v \notin A(\widehat{W}). Thus A(\widehat{W}) = W. Before we show injectivity, we give a lemma.

Lemma: Let \widehat{W} \subseteq \widehat{V} be a subspace with basis \{\widehat{v}_i\}_{i=1}^k, and extend to a basis \{\widehat{v}_i\}_{i=1}^n. Let \{v_i\}_{i=1}^n be the dual basis to \{\widehat{v}_i\}_{i=1}^n, obtained using the natural isomorphism V \cong \widehat{\widehat{V}}. Then A(\widehat{W}) = \mathsf{span}\ \{v_i\}_{i=k+1}^n. Proof: Let \sum \alpha_i v_i \in A(\widehat{W}). In particular, we have \widehat{v}_j(\sum \alpha_i v_i) = \alpha_j = 0 for all 1 \leq j \leq k. Thus \sum \alpha_iv_i \in \mathsf{span}\ \{v_i\}_{i=k+1}^n. Conversely, note that \widehat{v}_j(v_i) = 0 for all k+1 \leq i \leq n, so that \mathsf{span}\ \{v_i\}_{i=k+1}^n \subseteq A(\widehat{W}). \square

In particular, we have \mathsf{dim}\ \widehat{W} + \mathsf{dim}\ A(\widehat{W}) = \mathsf{dim}\ V. Now suppose A(\widehat{W}) = \mathsf{span}\ \{v_i\}_{i=1}^k, and extend to a basis \{v_i\}_{i=1}^n of V. Let \{\widehat{v}_i\}_{i=1}^n denote the dual basis. Note that for all f \in \widehat{W}, writing f = \sum \alpha_i \widehat{v}_i, we have \alpha_j = f(v_j) = 0 whenever 1 \leq j \leq k. In particular, \widehat{W} \subseteq \mathsf{span}\ \{\widehat{v}_i\}_{i=k+1}^n. Considering dimensions, we have equality. Now to see injectivity for A, note that if A(\widehat{W}_1) = A(\widehat{W}_2), then by the above \widehat{W}_1 and \widehat{W}_2 are spanned by the same set; hence \widehat{W}_1 = \widehat{W}_2, and so A is injective.

Thus, as lattices, we have \mathcal{S}_F(\widehat{V}) \cong \widehat{\mathcal{S}_F(V)}.

Finally, note that it is clear we have \mathsf{Ann}(\mathsf{span}\ S) \subseteq \mathsf{Ann}(S). Conversely, if v \in \mathsf{Ann}(S) and f = \sum \alpha_i s_i \in \mathsf{span}\ S, then f(v) = 0. Thus \mathsf{Ann}(S) = \mathsf{Ann}(\mathsf{span}\ S).
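Concretely, if V = F^n and each functional is written as a row vector, then \mathsf{Ann}(\widehat{W}) is precisely the null space of the matrix whose rows form a basis of \widehat{W}, and the dimension formula of part (6) is rank-nullity. A sympy sketch (the matrix below is an arbitrary example):

```python
import sympy as sp

# rows: a basis of a two-dimensional subspace W-hat of the dual of Q^4
W_hat = sp.Matrix([[1, 0, 2, 0],
                   [0, 1, 0, -1]])

ann = W_hat.nullspace()   # a basis of Ann(W-hat), as column vectors

# every basis vector of Ann(W-hat) is killed by every functional in W-hat
assert all((W_hat * v).is_zero_matrix for v in ann)

# dim Ann(W-hat) = dim V - dim W-hat
assert len(ann) == W_hat.cols - W_hat.rank()
```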

Find bases for the image and kernel of a given linear transformation over different fields

Let F be a field, and let V \subseteq F[x] be the 7-dimensional vector space over F consisting precisely of those polynomials having degree at most 6. Let \varphi : V \rightarrow V be the linear transformation given by \varphi(p) = p^\prime. (See this previous exercise about the derivative of a polynomial.) For each of the following concrete fields F, find bases for the image and kernel of \varphi.

  1. \mathbb{R}
  2. \mathbb{Z}/(2)
  3. \mathbb{Z}/(3)
  4. \mathbb{Z}/(5)

Note that the elements x^i, with 0 \leq i \leq 6, form a basis of V. We will now compute the matrix of \varphi with respect to this basis. To that end, note that \varphi(1) = 0 and \varphi(x^i) = ix^{i-1} for 1 \leq i \leq 6. These computations hold in any field (and indeed in any unital ring), as we regard i as the i-fold sum of 1.

  1. Over \mathbb{R}, the matrix of \varphi is
    A = \left[ \begin{array}{ccccccc} 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 2 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 3 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 4 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 5 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 6 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array} \right].

    The reduced row echelon form of this matrix is

    A^\prime = \left[ \begin{array}{ccccccc} 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array} \right],

    all of whose columns are pivotal except the first. Thus the set \{1,x,x^2,x^3,x^4,x^5\} is a basis of \mathsf{im}\ \varphi. The solutions of A^\prime X = 0 now have the form X(x_1) = [x_1\ 0\ 0\ 0\ 0\ 0\ 0]^\mathsf{T}, and thus \{1\} is a basis of \mathsf{ker}\ \varphi.

  2. Over \mathbb{Z}/(2), the matrix of \varphi is
    A = \left[ \begin{array}{ccccccc} 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array} \right].

    The reduced row echelon form of this matrix is

    A^\prime = \left[ \begin{array}{ccccccc} 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array} \right],

    whose 2nd, 4th, and 6th columns are pivotal. Thus \{1,x^2,x^4\} is a basis of \mathsf{im}\ \varphi. The solutions of A^\prime X = 0 now have the form X(x_1,x_3,x_5,x_7) = [x_1\ 0\ x_3\ 0\ x_5\ 0\ x_7]^\mathsf{T}. Choosing x_i appropriately, we see that \{1,x^2,x^4,x^6\} is a basis for \mathsf{ker}\ \varphi.

  3. Over \mathbb{Z}/(3), the matrix of \varphi is
    A = \left[ \begin{array}{ccccccc} 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 2 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array} \right].

    The reduced row echelon form of this matrix is

    A^\prime = \left[ \begin{array}{ccccccc} 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array} \right],

    whose 2nd, 3rd, 5th, and 6th columns are pivotal. Thus \{1,2x,x^3,2x^4\} is a basis for \mathsf{im}\ \varphi. Now the solutions of A^\prime X = 0 have the form X(x_1,x_4,x_7) = [x_1\ 0\ 0\ x_4\ 0\ 0\ x_7]^\mathsf{T}. Choosing the x_i appropriately, we see that \{1,x^3,x^6\} is a basis for \mathsf{ker}\ \varphi.

  4. Over \mathbb{Z}/(5), the matrix of \varphi is
    A = \left[ \begin{array}{ccccccc} 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 2 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 3 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 4 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array} \right].

    The reduced row echelon form of this matrix is

    A^\prime = \left[ \begin{array}{ccccccc} 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array} \right],

    all of whose columns are pivotal except the 1st and the 6th. Thus \{1,x,x^2,x^3,x^5\} is a basis for \mathsf{im}\ \varphi. The solutions of A^\prime X = 0 have the form X(x_1,x_6) = [x_1\ 0\ 0\ 0\ 0\ x_6\ 0]^\mathsf{T}; thus \{1,x^5\} is a basis for \mathsf{ker}\ \varphi.
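The row reductions above can be reproduced mechanically. The following from-scratch sketch row reduces the matrix of \varphi modulo a prime p and reports the pivot columns (0-indexed):

```python
def rref_mod_p(M, p):
    # Gaussian elimination over Z/(p) for a prime p; returns (RREF, pivot columns)
    M = [[a % p for a in row] for row in M]
    rows, cols, pivots, r = len(M), len(M[0]), [], 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c]), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        inv = pow(M[r][c], p - 2, p)          # inverse mod p, by Fermat
        M[r] = [(a * inv) % p for a in M[r]]
        for i in range(rows):
            if i != r and M[i][c]:
                M[i] = [(a - M[i][c] * b) % p for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return M, pivots

# matrix of p |-> p' on polynomials of degree <= 6: column j is j*x^(j-1)
A = [[(j if i == j - 1 else 0) for j in range(7)] for i in range(7)]

_, piv2 = rref_mod_p(A, 2)
_, piv3 = rref_mod_p(A, 3)
_, piv5 = rref_mod_p(A, 5)
assert piv2 == [1, 3, 5]         # i.e. columns 2, 4, 6 above (1-indexed)
assert piv3 == [1, 2, 4, 5]      # columns 2, 3, 5, 6
assert piv5 == [1, 2, 3, 4, 6]   # columns 2, 3, 4, 5, 7
```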

Find bases for the image and kernel of a given linear transformation

Let V \subseteq \mathbb{Q}[x] be the 6-dimensional vector space over \mathbb{Q} consisting of the polynomials having degree at most 5. Let \varphi : V \rightarrow V be the map given by \varphi(p) = x^2p^{\prime\prime} - 6xp^\prime + 12p, where p^\prime and p^{\prime\prime} denote the first and second derivative of p with respect to x. (See this previous exercise.)

  1. Prove that \varphi is a linear transformation.
  2. We showed previously that the set \{1,x,x^2,x^3,x^4,x^5\} is a basis of V. Find bases for the image and kernel of \varphi with respect to this basis.

We begin with a lemma.

Lemma: Let R be a commutative unital ring. Then D : R[x] \rightarrow R[x] given by D(p) = p^\prime is a module homomorphism. Proof: Let p(x) = \sum p_ix^i, q(x) = \sum q_ix^i \in R[x] and let r \in R. Then (p+rq)^\prime(x) = \sum (i+1)(p_{i+1}+rq_{i+1})x^i = \sum(i+1)p_{i+1}x^i + r\sum (i+1)q_{i+1}x^i = p^\prime(x) + rq^\prime(x). Thus D(p+rq) = D(p) + rD(q), and so D is a module homomorphism. \square

Thus it is clear that \varphi is a linear transformation.

Note the following:

  1. \varphi(1) = 12
  2. \varphi(x) = 6x
  3. \varphi(x^2) = 2x^2
  4. \varphi(x^3) = 0
  5. \varphi(x^4) = 0
  6. \varphi(x^5) = 2x^5

Thus we see that the matrix of \varphi is

A = \left[ \begin{array}{cccccc} 12 & 0 & 0 & 0 & 0 & 0 \\ 0 & 6 & 0 & 0 & 0 & 0 \\ 0 & 0 & 2 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 2 \end{array} \right].

The reduced row echelon form of A is the matrix

A^\prime = \left[ \begin{array}{cccccc} 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{array} \right].

Since only the 1st, 2nd, 3rd, and 6th columns of A^\prime are pivotal, we see that the set \{12, 6x, 2x^2, 2x^5 \} forms a basis for \mathsf{im}\ \varphi. Next, the solutions of A^\prime X = 0 have the form X(x_4,x_5) = \left[ \begin{array}{c} 0 \\ 0 \\ 0 \\ x_4 \\ x_5 \\ 0 \end{array} \right]. Letting (x_4,x_5) \in \{(1,0),(0,1)\}, we see that \{(0,0,0,1,0,0),(0,0,0,0,1,0)\}, that is, \{x^3, x^4\}, is a basis for \mathsf{ker}\ \varphi.
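These computations are easy to verify symbolically; note that \varphi(x^k) = (k(k-1) - 6k + 12)x^k = (k-3)(k-4)x^k, which is why exactly x^3 and x^4 are killed. A sympy sketch:

```python
import sympy as sp

x = sp.symbols('x')

def phi(p):
    # phi(p) = x^2 p'' - 6x p' + 12p
    return sp.expand(x**2 * sp.diff(p, x, 2) - 6*x*sp.diff(p, x) + 12*p)

# phi(x^k) = (k-3)(k-4) x^k for 0 <= k <= 5
for k in range(6):
    assert phi(x**k) == (k - 3)*(k - 4)*x**k

# so the kernel is spanned by x^3 and x^4
assert phi(x**3) == 0 and phi(x**4) == 0
```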

Find bases for the image and kernel of a given linear transformation

Let V = \mathsf{Mat}_{2,2}(\mathbb{R}) be the set of all 2 \times 2 matrices over \mathbb{R}, and consider V as an \mathbb{R} vector space in the usual way. Let \mathsf{tr} : V \rightarrow \mathbb{R} be defined by \mathsf{tr}\left( \left[ \begin{array}{cc} a & b \\ c & d \end{array} \right] \right) = a+d.

  1. Show that E_{1,1} = \left[ \begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array} \right], E_{1,2} = \left[ \begin{array}{cc} 0 & 1 \\ 0 & 0 \end{array} \right], E_{2,1} = \left[ \begin{array}{cc} 0 & 0 \\ 1 & 0 \end{array} \right], and E_{2,2} = \left[ \begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array} \right] form a basis for V.
  2. Prove that \mathsf{tr} is a linear transformation and determine its matrix with respect to the basis given in part (1) and the basis \{1\} of \mathbb{R}. Find bases for the image and kernel of \mathsf{tr}.

Note that

\alpha E_{1,1} + \beta E_{1,2} + \gamma E_{2,1} + \delta E_{2,2} = \left[ \begin{array}{cc} \alpha & \beta \\ \gamma & \delta \end{array} \right].

In particular, this linear combination is zero if and only if each coefficient is zero. Thus the E_{i,j} are linearly independent. Moreover, they certainly generate V, and thus form a basis.

It is clear that \mathsf{tr} is a linear transformation. Note that \mathsf{tr}(E_{1,1}) = 1, \mathsf{tr}(E_{1,2}) = 0, \mathsf{tr}(E_{2,1}) = 0, and \mathsf{tr}(E_{2,2}) = 1. Thus the matrix of \mathsf{tr}, with respect to this basis, is A = [1\ 0\ 0\ 1]. Note that A is in reduced row echelon form, and only its first column is pivotal. Thus \{1\} is a basis for \mathsf{im}\ \mathsf{tr}. The solutions of AX = 0 have the form X(x_2,x_3,x_4) = \left[ \begin{array}{c} -x_4 \\ x_2 \\ x_3 \\ x_4 \end{array} \right]. Letting (x_2,x_3,x_4) \in \{(1,0,0),(0,1,0),(0,0,1)\}, we see that the set \{(0,1,0,0),(0,0,1,0),(-1,0,0,1)\} is a basis for \mathsf{ker}\ \mathsf{tr}.
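A quick symbolic check of these computations (using sympy; the basis matrices are indexed from 0 here):

```python
import sympy as sp

# the standard basis E_{i,j} of 2x2 matrices
E = {(i, j): sp.Matrix(2, 2, lambda r, c: 1 if (r, c) == (i, j) else 0)
     for i in range(2) for j in range(2)}

# the matrix of tr with respect to (E11, E12, E21, E22) is [1 0 0 1]
assert [E[k].trace() for k in [(0, 0), (0, 1), (1, 0), (1, 1)]] == [1, 0, 0, 1]

# the kernel (the traceless matrices) is spanned by E12, E21, and E22 - E11
for M in (E[(0, 1)], E[(1, 0)], E[(1, 1)] - E[(0, 0)]):
    assert M.trace() == 0
```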

Find bases for the image and kernel of a given linear transformation

Consider \mathbb{R}^4 and \mathbb{R}^2 as \mathbb{R}-vector spaces in the usual way. Let \varphi : \mathbb{R}^4 \rightarrow \mathbb{R}^2 be the linear transformation such that \varphi(1,0,0,0) = (1,-1), \varphi(1,-1,0,0) = (0,0), \varphi(1,-1,1,0) = (1,-1), and \varphi(1,-1,1,-1) = (0,0). Find bases for the image and kernel of \varphi.


We will use the strategy given in this previous exercise.

Letting f_1 = (1,0,0,0), f_2 = (1,-1,0,0), f_3 = (1,-1,1,0), and f_4 = (1,-1,1,-1) and letting e_i denote the standard basis vectors, we evidently have e_1 = f_1, e_2 = f_1 - f_2, e_3 = f_3 - f_2, and e_4 = f_3 - f_4. Thus \varphi(e_1) = (1,-1), \varphi(e_2) = (1,-1), \varphi(e_3) = (1,-1), and \varphi(e_4) = (1,-1).

Thus the matrix of \varphi with respect to the standard bases is A = \left[ \begin{array}{cccc} 1 & 1 & 1 & 1 \\ \text{-}1 & \text{-}1 & \text{-}1 & \text{-}1 \end{array} \right]. The reduced row echelon form of A is then A^\prime = \left[ \begin{array}{cccc} 1 & 1 & 1 & 1 \\ 0 & 0 & 0 & 0 \end{array} \right]. Since only the first column of A^\prime is pivotal, the image of \varphi is spanned by (1,-1), which is certainly a basis.

The solutions of A^\prime X = 0 have the form X(x_2,x_3,x_4) = \left[ \begin{array}{c} -x_2-x_3-x_4 \\ x_2 \\ x_3 \\ x_4 \end{array} \right]. Letting (x_2,x_3,x_4) \in \{(1,0,0),(0,1,0),(0,0,1)\}, we see that the set \{(-1,1,0,0),(-1,0,1,0),(-1,0,0,1)\} is a basis for \mathsf{ker}\ \varphi.
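As a final check, sympy agrees with this computation:

```python
import sympy as sp

# the matrix of phi with respect to the standard bases
A = sp.Matrix([[1, 1, 1, 1],
               [-1, -1, -1, -1]])

# the image is the line spanned by (1, -1)
assert A.rank() == 1

# the kernel basis found above
kernel = [sp.Matrix([-1, 1, 0, 0]),
          sp.Matrix([-1, 0, 1, 0]),
          sp.Matrix([-1, 0, 0, 1])]
assert all((A * v).is_zero_matrix for v in kernel)
assert len(A.nullspace()) == 3   # dim ker phi = 3, so these three vectors are a basis
```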