Tag Archives: complex numbers

If the eigenvalues of a complex matrix have negative real parts, then the zero solution of the corresponding linear system of first order differential equations is globally asymptotically stable

Consider the first order linear system of differential equations (1) \frac{d}{dt} Y = AY, where A is an n \times n complex matrix. A solution of (1) (indeed, of any differential equation) is called a steady state if it is constant in t. A steady state C is called globally asymptotically stable if every solution Y satisfies \mathsf{lim}_{t \rightarrow \infty} Y(t) = C. (Limits are taken entrywise.) See pages 507-508 of D&F for a more thorough discussion of steady states and globally asymptotically stable steady states.

Prove that if the eigenvalues of A have negative real parts, then the zero solution of \frac{d}{dt} Y = AY is globally asymptotically stable.


[My usual disclaimer about analytical problems applies. Be on the lookout for Blatantly Silly Statements.]

As we have seen, every solution of (1) is a linear combination of the columns of \mathsf{exp}(At). If P^{-1}AP = J is in Jordan canonical form, then every solution of (1) is a linear combination of the columns of P\mathsf{exp}(Jt). Recall the exponential of a Jordan block as we computed here; in particular, if Y is a solution of (1), then each entry of Y is a linear combination of functions of the form p(t)e^{\lambda t}, where \lambda is an eigenvalue of A and p(t) is a polynomial in t. Say \lambda = a+bi; then e^{\lambda t} = e^{at}(\mathsf{cos}(bt) + i \mathsf{sin}(bt)).

[Start handwaving]

Since a is negative, e^{at} decays to 0 faster than any polynomial grows as t approaches infinity, and the factors \mathsf{cos}(bt) and \mathsf{sin}(bt) are bounded. So each term p(t)e^{\lambda t} tends to 0, and our solution Y tends to the constant solution 0.

[End handwaving]

Thus the zero solution is globally asymptotically stable.
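The handwaving step can be checked numerically. The following sketch (the decay rate a = -1/2 and the polynomial t^3 are arbitrary choices) illustrates that p(t)e^{at} tends to 0 even though p(t) grows without bound.

```python
import math

# For a < 0, p(t) * e^(a t) -> 0 as t -> infinity, even though p(t) grows.
# Sample t^3 * e^(-0.5 t) at increasing times and watch it shrink.
def term(t, a=-0.5, k=3):
    return (t ** k) * math.exp(a * t)

values = [term(t) for t in (10, 50, 100, 200)]
```

Each sampled value is smaller than the last, and by t = 200 the term is far below machine noise.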

A class of group homomorphisms from CC to GL(n,CC)

Fix an n \times n matrix M over \mathbb{C}. Define a mapping \psi_M : \mathbb{C} \rightarrow \mathsf{GL}_n(\mathbb{C}) by \alpha \mapsto \mathsf{exp}(M\alpha). Prove that \psi_M is a group homomorphism from (\mathbb{C},+) to \mathsf{GL}_n(\mathbb{C}).


Note that \mathsf{exp}(M\alpha) is nonsingular by this previous exercise, so that \psi_M is properly defined.

Now \psi_M(\alpha + \beta) = \mathsf{exp}(M(\alpha+\beta)) = \mathsf{exp}(M\alpha + M\beta) = \mathsf{exp}(M\alpha) \cdot \mathsf{exp}(M\beta) by this previous exercise (noting that M\alpha and M\beta commute), which equals \psi_M(\alpha) \psi_M(\beta). So \psi_M is a group homomorphism.
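A numerical sanity check of the homomorphism property, using a truncated power series for the matrix exponential; the matrix M and the scalars \alpha, \beta below are arbitrary choices.

```python
# Check psi_M(alpha + beta) = psi_M(alpha) psi_M(beta) for a sample
# 2x2 complex matrix M, approximating exp by a truncated power series.
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_scale(A, c):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def mat_exp(A, terms=40):
    # exp(A) ~ sum over k of A^k / k!
    result = [[1, 0], [0, 1]]
    power = [[1, 0], [0, 1]]
    fact = 1.0
    for k in range(1, terms):
        power = mat_mul(power, A)
        fact *= k
        result = mat_add(result, mat_scale(power, 1.0 / fact))
    return result

M = [[0.3, -0.2], [0.1j, 0.5]]
a, b = 0.7 + 0.2j, -0.4 + 0.1j
lhs = mat_exp(mat_scale(M, a + b))
rhs = mat_mul(mat_exp(mat_scale(M, a)), mat_exp(mat_scale(M, b)))
err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
```

The identity holds here precisely because M\alpha and M\beta commute; for two matrices that do not commute, \mathsf{exp}(A+B) need not equal \mathsf{exp}(A)\mathsf{exp}(B).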

Compute the Jordan canonical form of a given matrix

Let A be a 2 \times 2 matrix with entries in \mathbb{Q} such that A^3 = I and A \neq I. Compute the rational canonical form of A and the Jordan canonical form of A over \mathbb{C}.


The minimal polynomial of A divides x^3-1 = (x-1)(x^2+x+1) and is not x-1 (since A \neq I), and so must be p(x) = x^2+x+1. Since A has dimension 2, the characteristic polynomial of A has degree 2, so p(x) is the only invariant factor of A. So the rational canonical form of A is \begin{bmatrix} 0 & -1 \\ 1 & -1 \end{bmatrix}.

Now p(x) factors over \mathbb{C} as p(x) = (x - \frac{-1+i\sqrt{3}}{2})(x-\frac{-1-i\sqrt{3}}{2}). So the Jordan canonical form of A is \begin{bmatrix} \frac{-1+i\sqrt{3}}{2} & 0 \\ 0 & \frac{-1-i\sqrt{3}}{2} \end{bmatrix}.
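A quick computational check: the companion matrix of x^2+x+1 does satisfy C^3 = I with C \neq I, and the roots of x^2+x+1 are the primitive cube roots of unity appearing on the diagonal of the Jordan form.

```python
# Verify that the rational canonical form C of x^2 + x + 1 satisfies
# C^3 = I, and that its eigenvalues (-1 +/- i sqrt(3))/2 are cube
# roots of unity.
import cmath

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

C = [[0, -1], [1, -1]]
C3 = mat_mul(mat_mul(C, C), C)

# Roots of the characteristic polynomial x^2 + x + 1 via the
# quadratic formula; the discriminant is 1 - 4 = -3.
disc = cmath.sqrt(1 - 4)
eig1 = (-1 + disc) / 2
eig2 = (-1 - disc) / 2
```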

Any matrix A such that A³ = A can be diagonalized

Let A be an n \times n matrix over \mathbb{C} such that A^3 = A. Show that A can be diagonalized. Is this result true if we replace \mathbb{C} by an arbitrary field F?


Note that the minimal polynomial of A divides x^3-x = x(x+1)(x-1), which has distinct roots in \mathbb{C}. By Corollary 25 in D&F, a matrix is diagonalizable precisely when its minimal polynomial has no repeated roots; thus A is similar to a diagonal matrix D, and moreover the diagonal entries of D are either 0, 1, or -1. Over an arbitrary field F, the same argument applies provided the roots 0, 1, and -1 remain distinct, which holds exactly when the characteristic of F is not 2. In characteristic 2 we have x^3 - x = x(x-1)^2 and the result fails: for instance, \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} over \mathbb{F}_2 satisfies A^3 = A but is a nontrivial Jordan block, hence not diagonalizable.
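Two small checks, using arbitrary sample matrices: a real matrix with A^3 = A (and distinct eigenvalues 1 and -1, hence diagonalizable), and the characteristic-2 matrix \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix} over \mathbb{F}_2, which also satisfies A^3 = A but is a nontrivial Jordan block.

```python
# Check A^3 = A for a diagonalizable real example, and B^3 = B for the
# non-diagonalizable Jordan block B = [[1,1],[0,1]] over F_2 (arithmetic
# mod 2).
def mat_mul(A, B, mod=None):
    C = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
    if mod is not None:
        C = [[x % mod for x in row] for row in C]
    return C

A = [[0, 1], [1, 0]]          # eigenvalues 1 and -1, so A^2 = I
A3 = mat_mul(mat_mul(A, A), A)

B = [[1, 1], [0, 1]]          # over F_2
B2 = mat_mul(B, B, mod=2)     # B^2 = I mod 2, hence B^3 = B
B3 = mat_mul(B2, B, mod=2)
```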

Find the possible Jordan canonical forms of matrices of dimension 2, 3, or 4 over CC

Find all the possible Jordan canonical forms of matrices of dimension 2, 3, or 4 over \mathbb{C}.


We begin by finding the possible lists of invariant factors, starting with the possible minimal polynomials. Recall that every polynomial of degree at least 1 over \mathbb{C} has a root in \mathbb{C}, so that every such polynomial is a product of linear factors.

If A has dimension 2, then the characteristic polynomial of A has degree 2 and thus the minimal polynomial has degree at most 2. The possible minimal polynomials are thus x-\alpha, (x-\alpha)^2, and (x-\alpha)(x-\beta). In this case, with the minimal polynomial chosen the remaining invariant factors are determined. So the possible lists of invariant factors are as follows.

  1. x-\alpha, x-\alpha
  2. (x-\alpha)^2
  3. (x-\alpha)(x-\beta)

The corresponding lists of elementary divisors are as follows.

  1. x-\alpha, x-\alpha
  2. (x-\alpha)^2
  3. x-\alpha, x-\beta

And so the possible Jordan canonical forms are as follows.

  1. \begin{bmatrix} \alpha & 0 \\ 0 & \alpha \end{bmatrix}
  2. \begin{bmatrix} \alpha & 1 \\ 0 & \alpha \end{bmatrix}
  3. \begin{bmatrix} \alpha & 0 \\ 0 & \beta \end{bmatrix}

Again, if A has dimension 3, we can construct all the possible minimal polynomials, and in each case the remaining invariant factors are determined (in one case, only up to interchanging \alpha and \beta, which we may do without loss of generality). The possible lists of invariant factors are as follows.

  1. x-\alpha, x-\alpha, x-\alpha
  2. (x-\alpha)^2, x-\alpha
  3. (x-\alpha)^3
  4. (x-\alpha)(x-\beta), x-\alpha
  5. (x-\alpha)^2(x-\beta)
  6. (x-\alpha)(x-\beta)(x-\gamma)

The possible lists of elementary divisors are as follows.

  1. x-\alpha, x-\alpha, x-\alpha
  2. (x-\alpha)^2, x-\alpha
  3. (x-\alpha)^3
  4. x-\alpha, x-\beta, x-\alpha
  5. (x-\alpha)^2, x-\beta
  6. x-\alpha, x-\beta, x-\gamma

The possible Jordan canonical forms are then as follows.

  1. \begin{bmatrix} \alpha & 0 & 0 \\ 0 & \alpha & 0 \\ 0 & 0 & \alpha \end{bmatrix}
  2. \begin{bmatrix} \alpha & 1 & 0 \\ 0 & \alpha & 0 \\ 0 & 0 & \alpha \end{bmatrix}
  3. \begin{bmatrix} \alpha & 1 & 0 \\ 0 & \alpha & 1 \\ 0 & 0 & \alpha \end{bmatrix}
  4. \begin{bmatrix} \alpha & 0 & 0 \\ 0 & \beta & 0 \\ 0 & 0 & \alpha \end{bmatrix}
  5. \begin{bmatrix} \alpha & 1 & 0 \\ 0 & \alpha & 0 \\ 0 & 0 & \beta \end{bmatrix}
  6. \begin{bmatrix} \alpha & 0 & 0 \\ 0 & \beta & 0 \\ 0 & 0 & \gamma \end{bmatrix}

If A has dimension 4, there are 11 possible minimal polynomials and 14 possible lists of invariant factors (in some cases, without loss of generality due to symmetry). The possible lists of invariant factors are as follows.

  1. x-\alpha, x-\alpha, x-\alpha, x-\alpha
  2. (x-\alpha)^2, (x-\alpha)^2
  3. (x-\alpha)^2, x-\alpha, x-\alpha
  4. (x-\alpha)(x-\beta), (x-\alpha)(x-\beta)
  5. (x-\alpha)(x-\beta), x-\alpha, x-\alpha
  6. (x-\alpha)^3, x-\alpha
  7. (x-\alpha)^2(x-\beta), x-\alpha
  8. (x-\alpha)^2(x-\beta), x-\beta
  9. (x-\alpha)(x-\beta)(x-\gamma), x-\alpha
  10. (x-\alpha)^4
  11. (x-\alpha)^3(x-\beta)
  12. (x-\alpha)^2(x-\beta)^2
  13. (x-\alpha)^2(x-\beta)(x-\gamma)
  14. (x-\alpha)(x-\beta)(x-\gamma)(x-\delta)

The possible lists of elementary divisors are as follows.

  1. x-\alpha, x-\alpha, x-\alpha, x-\alpha
  2. (x-\alpha)^2, (x-\alpha)^2
  3. (x-\alpha)^2, x-\alpha, x-\alpha
  4. x-\alpha, x-\beta, x-\alpha, x-\beta
  5. x-\alpha, x-\beta, x-\alpha, x-\alpha
  6. (x-\alpha)^3, x-\alpha
  7. (x-\alpha)^2, x-\beta, x-\alpha
  8. (x-\alpha)^2, x-\beta, x-\beta
  9. x-\alpha, x-\beta, x-\gamma, x-\alpha
  10. (x-\alpha)^4
  11. (x-\alpha)^3, x-\beta
  12. (x-\alpha)^2, (x-\beta)^2
  13. (x-\alpha)^2, x-\beta, x-\gamma
  14. x-\alpha, x-\beta, x-\gamma, x-\delta

The corresponding Jordan canonical forms are as follows.

  1. \begin{bmatrix} \alpha & 0 & 0 & 0 \\ 0 & \alpha & 0 & 0 \\ 0 & 0 & \alpha & 0 \\ 0 & 0 & 0 & \alpha \end{bmatrix}
  2. \begin{bmatrix} \alpha & 1 & 0 & 0 \\ 0 & \alpha & 0 & 0 \\ 0 & 0 & \alpha & 1 \\ 0 & 0 & 0 & \alpha \end{bmatrix}
  3. \begin{bmatrix} \alpha & 1 & 0 & 0 \\ 0 & \alpha & 0 & 0 \\ 0 & 0 & \alpha & 0 \\ 0 & 0 & 0 & \alpha \end{bmatrix}
  4. \begin{bmatrix} \alpha & 0 & 0 & 0 \\ 0 & \beta & 0 & 0 \\ 0 & 0 & \alpha & 0 \\ 0 & 0 & 0 & \beta \end{bmatrix}
  5. \begin{bmatrix} \alpha & 0 & 0 & 0 \\ 0 & \beta & 0 & 0 \\ 0 & 0 & \alpha & 0 \\ 0 & 0 & 0 & \alpha \end{bmatrix}
  6. \begin{bmatrix} \alpha & 1 & 0 & 0 \\ 0 & \alpha & 1 & 0 \\ 0 & 0 & \alpha & 0 \\ 0 & 0 & 0 & \alpha \end{bmatrix}
  7. \begin{bmatrix} \alpha & 1 & 0 & 0 \\ 0 & \alpha & 0 & 0 \\ 0 & 0 & \beta & 0 \\ 0 & 0 & 0 & \alpha \end{bmatrix}
  8. \begin{bmatrix} \alpha & 1 & 0 & 0 \\ 0 & \alpha & 0 & 0 \\ 0 & 0 & \beta & 0 \\ 0 & 0 & 0 & \beta \end{bmatrix}
  9. \begin{bmatrix} \alpha & 0 & 0 & 0 \\ 0 & \beta & 0 & 0 \\ 0 & 0 & \gamma & 0 \\ 0 & 0 & 0 & \alpha \end{bmatrix}
  10. \begin{bmatrix} \alpha & 1 & 0 & 0 \\ 0 & \alpha & 1 & 0 \\ 0 & 0 & \alpha & 1 \\ 0 & 0 & 0 & \alpha \end{bmatrix}
  11. \begin{bmatrix} \alpha & 1 & 0 & 0 \\ 0 & \alpha & 1 & 0 \\ 0 & 0 & \alpha & 0 \\ 0 & 0 & 0 & \beta \end{bmatrix}
  12. \begin{bmatrix} \alpha & 1 & 0 & 0 \\ 0 & \alpha & 0 & 0 \\ 0 & 0 & \beta & 1 \\ 0 & 0 & 0 & \beta \end{bmatrix}
  13. \begin{bmatrix} \alpha & 1 & 0 & 0 \\ 0 & \alpha & 0 & 0 \\ 0 & 0 & \beta & 0 \\ 0 & 0 & 0 & \gamma \end{bmatrix}
  14. \begin{bmatrix} \alpha & 0 & 0 & 0 \\ 0 & \beta & 0 & 0 \\ 0 & 0 & \gamma & 0 \\ 0 & 0 & 0 & \delta \end{bmatrix}
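The counts 3, 6, and 14 can be verified by a small enumeration. A Jordan form shape assigns to each distinct eigenvalue a partition (its Jordan block sizes), so shapes of dimension n correspond to multisets of partitions of positive integers summing to n; the code below (a sketch, with hypothetical helper names) counts these.

```python
# Count Jordan canonical form "shapes" in dimension n over C by
# enumerating multisets of partitions with total sum n.
def partitions(n, max_part=None):
    """All partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        return [()]
    result = []
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            result.append((first,) + rest)
    return result

def jordan_shapes(n):
    """Number of multisets of partitions summing to n, counted by
    requiring indices to be nondecreasing (avoiding double counting)."""
    all_parts = []
    for m in range(1, n + 1):
        all_parts.extend(partitions(m))
    all_parts.sort()

    def count(remaining, idx):
        if remaining == 0:
            return 1
        total = 0
        for i in range(idx, len(all_parts)):
            if sum(all_parts[i]) <= remaining:
                total += count(remaining - sum(all_parts[i]), i)
        return total

    return count(n, 0)

counts = [jordan_shapes(n) for n in (2, 3, 4)]
```

This reproduces the 3, 6, and 14 forms listed above for dimensions 2, 3, and 4.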

Exhibit the matrices of dimension 2 over QQ having multiplicative order 4

Compute, up to similarity, all the elements of \mathsf{GL}_2(\mathbb{Q}) having order 4. Do the same in \mathsf{GL}_2(\mathbb{C}).


Our task is to find, up to similarity, all the 2 \times 2 matrices over \mathbb{Q} whose minimal polynomial divides p(x) = x^4-1 but does not divide x^2-1 (so that A^4 = I but A^2 \neq I). Note that over \mathbb{Q}, p(x) = (x^2+1)(x+1)(x-1). If A is a matrix of order 4, then its minimal polynomial must be divisible by x^2+1 and must divide x^4-1. Since the characteristic polynomial of A has degree 2, the minimal polynomial has degree at most 2, and so there is only one possible list of invariant factors for A, namely x^2+1. The corresponding rational canonical form matrix is \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}. Thus every element of \mathsf{GL}_2(\mathbb{Q}) of order 4 is similar to this matrix.
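A direct check that this rational canonical form has multiplicative order exactly 4:

```python
# The companion matrix R of x^2 + 1 satisfies R^2 = -I (so R^2 != I)
# and R^4 = I, hence R has order 4 in GL_2(Q).
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

R = [[0, -1], [1, 0]]
R2 = mat_mul(R, R)
R4 = mat_mul(R2, R2)
```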

Now over \mathbb{C}, we have p(x) = (x+1)(x-1)(x+i)(x-i). If A is a matrix of order 4, then its minimal polynomial must be divisible by either x+i or x-i, must divide p(x), and must have degree at most 2. There are seven possibilities for the minimal polynomial of A, and with the minimal polynomial chosen, the remaining invariant factors are determined (since the characteristic polynomial has degree 2). The possible lists of invariant factors are as follows.

  1. x+i, x+i
  2. (x+i)(x-i)
  3. (x+i)(x+1)
  4. (x+i)(x-1)
  5. x-i, x-i
  6. (x-i)(x+1)
  7. (x-i)(x-1)

The corresponding rational canonical forms are as follows.

  1. \begin{bmatrix} -i & 0 \\ 0 & -i \end{bmatrix}
  2. \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}
  3. \begin{bmatrix} 0 & -i \\ 1 & -1-i \end{bmatrix}
  4. \begin{bmatrix} 0 & i \\ 1 & 1-i \end{bmatrix}
  5. \begin{bmatrix} i & 0 \\ 0 & i \end{bmatrix}
  6. \begin{bmatrix} 0 & i \\ 1 & i-1 \end{bmatrix}
  7. \begin{bmatrix} 0 & -i \\ 1 & 1+i \end{bmatrix}

Every matrix in \mathsf{GL}_2(\mathbb{C}) of order 4 is similar (i.e. conjugate) to exactly one matrix in this list.
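As a sanity check, each of the seven canonical forms listed above has multiplicative order exactly 4 (fourth power is I, square is not), which the following sketch verifies numerically.

```python
# Verify order 4 for each of the seven canonical forms over C.
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def close(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) < tol
               for i in range(2) for j in range(2))

I = [[1, 0], [0, 1]]
forms = [
    [[-1j, 0], [0, -1j]],
    [[0, -1], [1, 0]],
    [[0, -1j], [1, -1 - 1j]],
    [[0, 1j], [1, 1 - 1j]],
    [[1j, 0], [0, 1j]],
    [[0, 1j], [1, 1j - 1]],
    [[0, -1j], [1, 1 + 1j]],
]
orders_ok = []
for A in forms:
    A2 = mat_mul(A, A)
    A4 = mat_mul(A2, A2)
    orders_ok.append(close(A4, I) and not close(A2, I))
```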

In any extension of QQ, an algebraic element and its complex conjugate are conjugates

Let E be an extension of \mathbb{Q} contained in \mathbb{C}. Suppose \alpha \in E is algebraic over \mathbb{Q} with minimal polynomial p(x) and such that the complex conjugate \overline{\alpha} of \alpha is also in E. Prove that \alpha and \overline{\alpha} are conjugates in E.


Note that complex conjugation is an automorphism of \mathbb{C} fixing \mathbb{Q}. Thus, since p(x) has rational coefficients, p(\overline{\alpha}) = \overline{p(\alpha)} = \overline{0} = 0. Since \overline{\alpha} is a root of the minimal polynomial p(x) of \alpha, \alpha and \overline{\alpha} are conjugates in E.
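A concrete illustration with a hypothetical example: \alpha = 1+i has minimal polynomial p(x) = x^2 - 2x + 2 over \mathbb{Q} (irreducible, since its discriminant is negative), and its complex conjugate 1-i is also a root, as the argument predicts.

```python
# p(x) = x^2 - 2x + 2 is the minimal polynomial of alpha = 1 + i over Q;
# check that both alpha and its complex conjugate are roots.
def p(x):
    return x * x - 2 * x + 2

alpha = 1 + 1j
val_alpha = p(alpha)
val_conj = p(alpha.conjugate())
```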

As a ring, the ZZ-tensor product of ZZ[i] and RR is isomorphic to CC

Let \mathbb{Z}[i] and \mathbb{R} be \mathbb{Z}-algebras via the inclusion map. Prove that as rings, \mathbb{Z}[i] \otimes_\mathbb{Z} \mathbb{R} and \mathbb{C} are isomorphic.


Define \varphi : \mathbb{Z}[i] \times \mathbb{R} \rightarrow \mathbb{C} by \varphi(z,x) = zx. Certainly this mapping is \mathbb{Z}-bilinear, and so induces a \mathbb{Z}-algebra homomorphism \Phi : \mathbb{Z}[i] \otimes_\mathbb{Z} \mathbb{R} \rightarrow \mathbb{C} such that \Phi(z \otimes x) = zx.

Note that every simple tensor (hence every element) of \mathbb{Z}[i] \otimes_\mathbb{Z} \mathbb{R} can be written in the form 1 \otimes x + i \otimes y, where x,y \in \mathbb{R}. Now \Phi(1 \otimes x + i \otimes y) = x+iy, so that \Phi is surjective. Suppose now that 0 = \Phi(1 \otimes x + i \otimes y) = x+iy; then x = y = 0, so that 1 \otimes x + i \otimes y = 0. Thus \mathsf{ker}\ \Phi = 0, and so \Phi is injective. Thus \Phi is a ring isomorphism.

As a ring, the complex numbers are isomorphic to RR[x]/(x² + 1)

Prove that the ring of complex numbers is isomorphic to \mathbb{R}[x]/(x^2+1).


Define \varphi : \mathbb{R}[x] \rightarrow \mathbb{C} by embedding coefficients as usual and mapping x to i; this map is a ring homomorphism, and moreover is surjective since \varphi(a + bx) = a+bi.

Note that \varphi(x^2 + 1) = i^2 + 1 = 0, so that x^2+1 \in \mathsf{ker}\ \varphi. Now if x^2+1 were reducible over the reals, then (being of degree 2) it would have a linear factor and thus a real root \alpha with \alpha^2 = -1; but \alpha^2 is nonnegative for every real number \alpha, a contradiction. Thus x^2+1 is irreducible over the reals, and thus \mathbb{R}[x]/(x^2+1) is a field. If \mathsf{ker}\ \varphi properly contained (x^2+1), then by the lattice isomorphism theorem for rings (since fields have no nontrivial proper ideals) we would have \mathsf{ker}\ \varphi = \mathbb{R}[x]; this is impossible since \varphi(1) = 1 \neq 0. So we have \mathsf{ker}\ \varphi = (x^2 + 1), and by the first isomorphism theorem, \mathbb{C} \cong \mathbb{R}[x]/(x^2+1).
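The isomorphism can be made tangible by modeling residues of \mathbb{R}[x]/(x^2+1) as pairs (a, b) standing for a + bx, with x^2 reduced to -1; multiplication of residues then matches complex multiplication under a + bx \leftrightarrow a + bi. A minimal sketch:

```python
# Multiplication in R[x]/(x^2 + 1): reduce x^2 to -1, so
# (a + bx)(c + dx) = (ac - bd) + (ad + bc)x, mirroring complex numbers.
def mul_mod(p, q):
    a, b = p
    c, d = q
    return (a * c - b * d, a * d + b * c)

p, q = (2.0, 3.0), (-1.0, 4.0)      # 2 + 3x and -1 + 4x
prod = mul_mod(p, q)
z = complex(*p) * complex(*q)       # (2 + 3i)(-1 + 4i)
```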

Embed the complex numbers in a ring of real matrices

Prove that the ring M_2(\mathbb{R}) contains a subring isomorphic to \mathbb{C}.


Define \varphi : \mathbb{C} \rightarrow M_2(\mathbb{R}) by a+bi \mapsto \begin{bmatrix} a & b \\ -b & a \end{bmatrix}. This mapping is clearly well defined. Moreover, for all a_1 + b_1i, a_2 + b_2i \in \mathbb{C}, note the following.

\varphi((a_1 + b_1i) + (a_2 + b_2i)) = \varphi((a_1+a_2) + (b_1+b_2)i)
= \begin{bmatrix} a_1+a_2 & b_1+b_2 \\ -(b_1+b_2) & a_1+a_2 \end{bmatrix}
= \begin{bmatrix} a_1 & b_1 \\ -b_1 & a_1 \end{bmatrix} + \begin{bmatrix} a_2 & b_2 \\ -b_2 & a_2 \end{bmatrix}
= \varphi(a_1 + b_1i) + \varphi(a_2 + b_2i)

\varphi((a_1 + b_1i)(a_2 + b_2i)) = \varphi((a_1a_2 - b_1b_2) + (a_1b_2 + a_2b_1)i)
= \begin{bmatrix} a_1a_2 - b_1b_2 & a_1b_2 + a_2b_1 \\ -(a_1b_2 + a_2b_1) & a_1a_2 - b_1b_2 \end{bmatrix}
= \begin{bmatrix} a_1 & b_1 \\ -b_1 & a_1 \end{bmatrix} \cdot \begin{bmatrix} a_2 & b_2 \\ -b_2 & a_2 \end{bmatrix}
= \varphi(a_1 + b_1i) \varphi(a_2 + b_2i)

Thus \varphi is a ring homomorphism. Now suppose a+bi is in the kernel of \varphi. Then \begin{bmatrix} a & b \\ -b & a \end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}, so that a+bi = 0. Since the kernel of \varphi is trivial, \varphi is injective.

By Proposition 5 in the text, \mathsf{im}\ \varphi is a subring of M_2(\mathbb{R}) to which \mathbb{C} is isomorphic.
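A spot check of the multiplicativity computation above, for one arbitrary pair of complex numbers:

```python
# The embedding a + bi -> [[a, b], [-b, a]] sends a product to the
# product of the images.
def phi(z):
    a, b = z.real, z.imag
    return [[a, b], [-b, a]]

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

z, w = 2 + 3j, -1 + 4j
lhs = phi(z * w)
rhs = mat_mul(phi(z), phi(w))
```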