Tag Archives: real numbers

Minkowski’s Criterion

Suppose A is an n \times n matrix with real entries such that the diagonal entries are all positive, the off-diagonal entries are all negative, and the row sums are all positive. Prove that \mathsf{det}(A) \neq 0.


Suppose to the contrary that \mathsf{det}(A) = 0. Then there must exist a nonzero solution X to the matrix equation AX = 0. Let X = [x_1\ \cdots\ x_n]^\mathsf{T} be such a solution, and choose k such that |x_k| is maximized; since X is nonzero, |x_k| > 0. The kth row of the equation AX = 0 gives \sum_j a_{k,j}x_j = 0, so that a_{k,k}x_k = -\sum_{j \neq k} a_{k,j}x_j. Taking absolute values and using the triangle inequality (and recalling that a_{k,k} > 0), we have a_{k,k}|x_k| = |\sum_{j \neq k} a_{k,j}x_j| \leq \sum_{j \neq k} |a_{k,j}||x_j| \leq \sum_{j \neq k} |a_{k,j}||x_k|. Dividing by |x_k| and using the fact that a_{k,j} < 0 for j \neq k (so that |a_{k,j}| = -a_{k,j}), we have a_{k,k} \leq -\sum_{j \neq k} a_{k,j}, and thus \sum_j a_{k,j} \leq 0, a contradiction since the row sums of A are all positive.

Thus \mathsf{det}(A) \neq 0.
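As a sanity check on the statement (not a replacement for the proof), we can generate random matrices satisfying the hypotheses and verify that their determinants are nonzero, using exact rational arithmetic. The `det` helper below is an ad hoc Gaussian-elimination routine written for this check.

```python
from fractions import Fraction
import random

def det(M):
    """Exact determinant via Gaussian elimination over the rationals."""
    M = [row[:] for row in M]
    n = len(M)
    d = Fraction(1)
    for c in range(n):
        # find a pivot in column c
        piv = next((r for r in range(c, n) if M[r][c] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            d = -d  # a row swap flips the sign
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for j in range(c, n):
                M[r][j] -= f * M[c][j]
    return d

random.seed(0)
for _ in range(100):
    n = 4
    # negative off-diagonal entries
    A = [[Fraction(-random.randint(1, 9)) for _ in range(n)] for _ in range(n)]
    # positive diagonal chosen so that each row sum is positive
    for i in range(n):
        A[i][i] = -sum(A[i][j] for j in range(n) if j != i) + random.randint(1, 5)
    assert all(sum(row) > 0 for row in A)
    assert det(A) != 0
```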

Compute the matrix of a linear transformation

Consider V = \mathbb{R}^2 as an \mathbb{R}-vector space in the usual way. Let \varphi be the linear transformation V \rightarrow V which rotates the plane counterclockwise about the origin by an angle \theta. Compute the matrix of \varphi with respect to the standard basis on V.


Note that we can write \theta as \theta^\prime + k\frac{\pi}{2}, where k is in \{0,1,2,3\} and 0 \leq \theta^\prime < \frac{\pi}{2}. Moreover, if we let \varphi_\alpha be the transformation which rotates by \alpha, then \varphi_{\alpha+\beta} = \varphi_\alpha \circ \varphi_\beta. Thus we will consider separately the cases 0 \leq \theta < \frac{\pi}{2} and \theta = \frac{\pi}{2}. First suppose 0 \leq \theta < \frac{\pi}{2}.

To compute M(\varphi_\theta), we express \varphi_\theta(1,0) and \varphi_\theta(0,1) in terms of the standard basis. Using some basic trigonometry, we see that \varphi_\theta(1,0) = (\cos \theta, \sin \theta) and \varphi_\theta(0,1) = (-\sin \theta, \cos \theta). (See the diagram below.)

(Diagram: the images of (1,0) and (0,1) under rotation by \theta; angles not to scale.)

Thus M(\varphi_\theta) = \begin{bmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{bmatrix}.

It is easy to see that \varphi_{\pi/2}(1,0) = (0,1) and \varphi_{\pi/2}(0,1) = (-1,0). Thus M(\varphi_{\pi/2}) = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix}.

Suppose \zeta = \theta + \frac{\pi}{2}, where 0 \leq \theta < \frac{\pi}{2}. Now M(\varphi_\zeta) = M(\varphi_\theta) \cdot M(\varphi_{\pi/2}) = \begin{bmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{bmatrix} \cdot \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} -\sin \theta & -\cos \theta \\ \cos \theta & -\sin \theta \end{bmatrix} = \begin{bmatrix} \cos \zeta & -\sin \zeta \\ \sin \zeta & \cos \zeta \end{bmatrix}, where we made use of the trig identities \sin(\theta + \frac{\pi}{2}) = \cos \theta and \cos(\theta + \frac{\pi}{2}) = -\sin \theta.

Applying this argument again for k = 2 and k = 3, we see that M(\varphi_\zeta) = \begin{bmatrix} \cos \zeta & -\sin \zeta \\ \sin \zeta & \cos \zeta \end{bmatrix} for all 0 \leq \zeta < 2\pi.
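As a quick numerical check (not part of the argument), the matrices computed above do compose according to angle addition; a sketch:

```python
import math

def M(theta):
    """Matrix of rotation by theta with respect to the standard basis."""
    return [[math.cos(theta), -math.sin(theta)],
            [math.sin(theta),  math.cos(theta)]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# M(alpha) * M(beta) should agree with M(alpha + beta) up to rounding
alpha, beta = 0.7, 2.3
P, Q = matmul(M(alpha), M(beta)), M(alpha + beta)
assert all(abs(P[i][j] - Q[i][j]) < 1e-12 for i in range(2) for j in range(2))

# the quarter turn sends (1,0) to (0,1) and (0,1) to (-1,0)
R = M(math.pi / 2)
assert abs(R[0][0]) < 1e-12 and abs(R[1][0] - 1) < 1e-12
assert abs(R[0][1] + 1) < 1e-12 and abs(R[1][1]) < 1e-12
```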

As vector spaces over QQ, RR is isomorphic to any finite direct power of itself

Prove that as vector spaces over \mathbb{Q}, \mathbb{R} and \mathbb{R}^n are isomorphic for any positive natural number n.


Note that \mathbb{Q} is countable. By this previous exercise, any basis of \mathbb{R} over \mathbb{Q} must have cardinality \mathsf{card}\ \mathbb{R}. Likewise, any basis of \mathbb{R}^n over \mathbb{Q} must have cardinality \mathsf{card}\ \mathbb{R}^n = \mathsf{card}\ \mathbb{R}.

Thus, as vector spaces over the rationals, \mathbb{R} \cong_\mathbb{Q} \bigoplus_\mathbb{R} \mathbb{Q} \cong_\mathbb{Q} \mathbb{R}^n. In particular, \mathbb{R} and \mathbb{R}^n are isomorphic as abelian groups.

Prove that a subset of a vector space is a subspace and find a basis

Let V = \mathbb{R}^n and let (a_i) \in V be a fixed vector. Prove that the set W = \{ (x_i) \in V \ |\ \sum a_ix_i = 0 \} is a subspace of V. Determine the dimension of W and find a basis.


First we show that W is a subspace of V. Note that 0 \in W, since \sum a_i \cdot 0 = 0. Now let (x_i), (y_i) \in W and let r \in \mathbb{R}, so that \sum a_ix_i = 0 and \sum a_iy_i = 0. Consider (x_i) + r(y_i) = (x_i + ry_i); we have \sum a_i(x_i+ry_i) = (\sum a_ix_i) + r(\sum a_iy_i) = 0 + r \cdot 0 = 0, so that (x_i) + r(y_i) \in W. By the submodule criterion, W \subseteq V is an \mathbb{R}-subspace.

If (a_i) = 0, then clearly W = V, so that \mathsf{dim}\ W = n and we may take the standard basis for W.

Suppose now that (a_i) \neq 0, with a_k \neq 0. Now define vectors e_i \in V, where i \neq k, as follows: e_i = (e_{i,j}), where e_{i,j} = 1 if j = i, e_{i,j} = \frac{-a_i}{a_k} if j = k, and e_{i,j} = 0 otherwise. We claim that the set E = \{e_i \ |\ 1 \leq i \leq n, i \neq k \} is a basis for W.

First, note that for each i \neq k, \sum_j a_je_{i,j} = a_i \cdot 1 + a_k \cdot \frac{-a_i}{a_k} = a_i - a_i = 0, so that E \subseteq W. Now suppose \sum_{i \neq k} r_ie_i = 0. For each i \neq k, the ith component of this sum is \sum_{j \neq k} r_je_{j,i} = r_i, since e_{j,i} = 0 unless j = i. Thus r_i = 0 for all i \neq k, and the e_i are linearly independent.

Finally, suppose (x_i) \in W. Then \sum a_ix_i = 0. Rearranging, we see that x_k = \frac{-1}{a_k} \sum_{i \neq k} a_ix_i. Evidently, then, (x_i) = \sum_{i \neq k} x_ie_i: the components in positions j \neq k agree by construction, and the components in position k agree by the previous equation. Thus E generates W.

E is a linearly independent generating set for W, hence a basis. In particular, W has dimension n-1.
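The construction of E can be checked concretely. The sketch below (with an arbitrarily chosen vector (a_i); the names are ad hoc) builds the basis vectors e_i, verifies that each lies in W, and verifies the decomposition of an element of W:

```python
from fractions import Fraction

# a fixed vector (a_i) with a_k != 0; here n = 4 and we take k = 3
a = [Fraction(2), Fraction(-3), Fraction(0), Fraction(5)]
n = len(a)
k = 3

# e_i has 1 in position i, -a_i/a_k in position k, and 0 elsewhere
basis = []
for i in range(n):
    if i == k:
        continue
    e = [Fraction(0)] * n
    e[i] = Fraction(1)
    e[k] = -a[i] / a[k]
    basis.append(e)

# each e_i lies in W
for e in basis:
    assert sum(ai * ei for ai, ei in zip(a, e)) == 0

# pick x in W by choosing free coordinates and solving for x_k
x = [Fraction(1), Fraction(2), Fraction(7), Fraction(0)]
x[k] = -sum(a[i] * x[i] for i in range(n) if i != k) / a[k]
assert sum(a[i] * x[i] for i in range(n)) == 0

# x = sum over i != k of x_i e_i, as claimed
recon = [Fraction(0)] * n
for e, i in zip(basis, [i for i in range(n) if i != k]):
    for j in range(n):
        recon[j] += x[i] * e[j]
assert recon == x
```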

An inequality involving sqrt(2)

Prove that for all a,b \in \mathbb{Z}^+, | \sqrt{2} - \frac{a}{b} | \geq \frac{1}{3b^2}.


Let p(x) = x^2 - 2.

First we will show that the inequality holds for b = 1. Note that |\sqrt{2} - \frac{1}{1}| \approx 0.414 > 1/3 and |\sqrt{2} - \frac{2}{1}| \approx 0.586 > 1/3. For a > 2, we have |\sqrt{2} - \frac{a}{1}| > 1 > 1/3. So the inequality holds for b = 1. Henceforth, we will assume that b \geq 2.

Suppose |\sqrt{2} - \frac{a}{b}| \geq \frac{3 - 2\sqrt{2}}{2}. Note that since b \geq 2, we have b^2 \geq 4 > \frac{2}{9 - 6\sqrt{2}} \approx 3.89, so that \frac{3 - 2\sqrt{2}}{2} > \frac{1}{3b^2}. Hence |\sqrt{2} - \frac{a}{b}| \geq \frac{1}{3b^2}.

Now suppose |\sqrt{2} - \frac{a}{b}| \leq \frac{3 - 2\sqrt{2}}{2}. By the Mean Value Theorem from calculus (which we will assume to be valid), there exists an element \xi between \sqrt{2} and a/b such that p^\prime(\xi) = \frac{p(\sqrt{2}) - p(a/b)}{\sqrt{2} - a/b}, and hence |p^\prime(\xi)| = \frac{|p(\sqrt{2}) - p(a/b)|}{|\sqrt{2} - a/b|}. Now \xi \in [\sqrt{2} - \frac{3-2\sqrt{2}}{2}, \sqrt{2} + \frac{3-2\sqrt{2}}{2}], and since \sqrt{2} + \frac{3-2\sqrt{2}}{2} = \frac{3}{2} exactly, we have 0 < \xi \leq \frac{3}{2}. Thus |p^\prime(\xi)| = 2\xi \leq 3. Since p(\sqrt{2}) = 0, we have |p(a/b)| \leq 3|\sqrt{2} - \frac{a}{b}|.

Note that p(a/b) \neq 0, since (for example) p(x) is irreducible over \mathbb{Q}. Now |p(a/b)| = |\frac{a^2}{b^2} - 2| = \frac{|a^2 - 2b^2|}{b^2}. Since p(a/b) \neq 0, a^2 - 2b^2 is a nonzero integer. In particular, we have |a^2 - 2b^2| \geq 1, so that |p(a/b)| \geq \frac{1}{b^2}.

Combining this with the bound |p(a/b)| \leq 3|\sqrt{2} - \frac{a}{b}| from above, we have \frac{1}{b^2} \leq 3|\sqrt{2} - \frac{a}{b}|, and hence |\sqrt{2} - \frac{a}{b}| \geq \frac{1}{3b^2}.
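The inequality can also be verified exactly for small a and b. Since both sides are positive, |\sqrt{2} - \frac{a}{b}| \geq \frac{1}{3b^2} is equivalent (after clearing denominators and squaring) to 18b^4 \geq (3ab+1)^2 when a^2 < 2b^2, and to (3ab-1)^2 \geq 18b^4 when a^2 > 2b^2, which is a pure integer check; a sketch:

```python
# exact integer verification of |sqrt(2) - a/b| >= 1/(3b^2) for small a, b
for b in range(1, 101):
    for a in range(1, 301):
        if a * a < 2 * b * b:
            # need sqrt(2) >= a/b + 1/(3b^2), i.e. 18 b^4 >= (3ab + 1)^2
            assert 18 * b**4 >= (3 * a * b + 1) ** 2
        else:
            # need a/b - 1/(3b^2) >= sqrt(2), i.e. (3ab - 1)^2 >= 18 b^4
            assert (3 * a * b - 1) ** 2 >= 18 * b**4
```

The tightest case in this range is the convergent a/b = 17/12, where the two sides differ by less than 0.03%.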

As a ring, the ZZ-tensor product of ZZ[i] and RR is isomorphic to CC

Let \mathbb{Z}[i] and \mathbb{R} be \mathbb{Z}-algebras via the inclusion map. Prove that as rings, \mathbb{Z}[i] \otimes_\mathbb{Z} \mathbb{R} and \mathbb{C} are isomorphic.


Define \varphi : \mathbb{Z}[i] \times \mathbb{R} \rightarrow \mathbb{C} by \varphi(z,x) = zx. Certainly this mapping is \mathbb{Z}-bilinear, and so induces a \mathbb{Z}-algebra homomorphism \Phi : \mathbb{Z}[i] \otimes_\mathbb{Z} \mathbb{R} \rightarrow \mathbb{C} such that \Phi(z \otimes x) = zx.

Note that every simple tensor (hence every element) of \mathbb{Z}[i] \otimes_\mathbb{Z} \mathbb{R} can be written in the form 1 \otimes x + i \otimes y, where x,y \in \mathbb{R}. Now \Phi(1 \otimes x + i \otimes y) = x+iy, so that \Phi is surjective. Suppose now that 0 = \Phi(1 \otimes x + i \otimes y) = x+iy; then x = y = 0, so that 1 \otimes x + i \otimes y = 0. Thus \mathsf{ker}\ \Phi = 0, and so \Phi is injective. Thus \Phi is a ring isomorphism.

Every polynomial in RR[x] which takes only nonnegative values is a sum of two squares

Let p(x) \in \mathbb{R}[x] be a polynomial such that p(c) \geq 0 for all c. Prove that p(x) = a(x)^2 + b(x)^2 for some polynomials a,b \in \mathbb{R}[x].


Note that p(x) must have even degree, since an odd-degree real polynomial takes both positive and negative values. We proceed by induction on the degree of p(x). Note as a lemma that if h = a^2+b^2 and k = c^2 + d^2 are sums of squares, then so is hk since (evidently) hk = (ac-bd)^2 + (ad+bc)^2. We find this identity by rearranging and partially simplifying the factorization hk = (a+bi)(a-bi)(c+di)(c-di).
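The lemma's identity is easy to check numerically; a quick sketch:

```python
import random

# Brahmagupta-Fibonacci identity: a product of sums of two squares
# is again a sum of two squares
random.seed(0)
for _ in range(100):
    a, b, c, d = (random.randint(-50, 50) for _ in range(4))
    assert (a*a + b*b) * (c*c + d*d) == (a*c - b*d) ** 2 + (a*d + b*c) ** 2
```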

The base case p(x) = c is trivial (note that c \geq 0 by hypothesis); p(x) = 0^2 + 0^2 if c = 0 and p(x) = (\sqrt{c/2})^2 + (\sqrt{c/2})^2 if c > 0.

For the inductive step, suppose the result holds for all nonnegative polynomials of degree at most n and let p(x) have degree n+2. Suppose first that p(x) has a real root c. Since p(c) = 0 is a global minimum of p, we have p^\prime(c) = 0. In particular, c is a root of p of multiplicity at least 2. Say p(x) = q(x)(x-c)^2. Since (x-c)^2 > 0 for x \neq c, the quotient q(x) is nonnegative away from c, and by continuity q(c) \geq 0 as well. By the induction hypothesis, q(x) is a sum of two squares, and (x-c)^2 = (x-c)^2 + 0^2 is as well, so that by the lemma p(x) is a sum of two squares. Suppose instead that p(x) has no real roots; then p has a complex root z. Since conjugation is a ring homomorphism, \overline{z} is also a root of p. Letting z = a+bi, we see that (x-z)(x-\overline{z}) = x^2 - 2ax + a^2+b^2 = (x-a)^2 + b^2 is a factor of p(x); say p(x) = q(x)((x-a)^2 + b^2). Since (x-a)^2 + b^2 > 0 for all x, q(x) is nonnegative, and by the induction hypothesis q(x) is a sum of two squares. Again by the lemma, p(x) is a sum of two squares.
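The lemma applies verbatim with polynomials in place of numbers, which is how it is used in the inductive step. As an illustration (the polynomial helpers below are ad hoc, with coefficient lists in increasing degree), take p(x) = (x^2+1)((x-1)^2+1): applying the identity with a = x, b = 1, c = x-1, d = 1 gives p(x) = (x^2-x-1)^2 + (2x-1)^2.

```python
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists (constant term first)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            r[i + j] += pi * qj
    return r

def poly_add(p, q):
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [pi + qi for pi, qi in zip(p, q)]

# p(x) = (x^2 + 1)(x^2 - 2x + 2), a product of two sums of squares
lhs = poly_mul([1, 0, 1], [2, -2, 1])

# by the lemma with a = x, b = 1, c = x - 1, d = 1:
# ac - bd = x^2 - x - 1 and ad + bc = 2x - 1
rhs = poly_add(poly_mul([-1, -1, 1], [-1, -1, 1]),
               poly_mul([-1, 2], [-1, 2]))
assert lhs == rhs
```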

The real numbers contain a maximal subring not containing 1/2

Prove that the real numbers \mathbb{R} contain a subring A with 1 \in A and such that A is inclusion-maximal with respect to the property that 1/2 \notin A.


Let \mathcal{C} be the set of all subrings of \mathbb{R} containing 1 and not containing 1/2, partially ordered by inclusion; since \mathbb{Z} contains 1 and not 1/2, \mathcal{C} is not empty. Now let \{C_i\}_{i \in I} be a nonempty chain in \mathcal{C}, and let C = \bigcup_{i \in I} C_i. Recall that the union of a chain of subrings is a subring of \mathbb{R}. Note that 1 \in C since 1 \in C_i for each i. Suppose 1/2 \in C; then we have 1/2 \in C_i for some i, a contradiction. Thus C \in \mathcal{C}, and thus C is an upper bound for the chain. (The empty chain is bounded above by \mathbb{Z}.) By Zorn’s Lemma, \mathcal{C} contains a maximal element A as desired.

Every subfield of the real numbers must contain the rational numbers

Prove that any subfield of \mathbb{R} must contain \mathbb{Q}.


By the previous exercise, \mathbb{R} contains a unique inclusion-smallest subfield which is isomorphic either to \mathbb{Z}/(p) for a prime p or to \mathbb{Q}.

Suppose the unique smallest subfield of \mathbb{R} is isomorphic to \mathbb{Z}/(p) for some prime p, and let a be a nonzero element of this subfield. Then pa = 0 in \mathbb{R}, and since p \in \mathbb{R} is a unit, a = 0, a contradiction.

Thus the unique smallest subfield of \mathbb{R} is isomorphic to \mathbb{Q}. In particular, any subfield of \mathbb{R} contains a subfield isomorphic to \mathbb{Q}.

Embed the Hamiltonian quaternions in a ring of real matrices

Prove that the ring M_4(\mathbb{R}) contains a subring isomorphic to the real Hamiltonian quaternions \mathbb{H}.


Define \varphi : \mathbb{H} \rightarrow M_4(\mathbb{R}) as follows.

a+bi+cj+dk \mapsto \begin{bmatrix} a & b & c & d \\ -b & a & -d & c \\ -c & d & a & -b \\ -d & -c & b & a \end{bmatrix}

We will show that this mapping is an injective ring homomorphism. To that end, let \alpha = a_1 + b_1i + c_1j + d_1k and \beta = a_2 + b_2i + c_2j + d_2k. Then we have the following.

\varphi(\alpha + \beta)  =  \varphi((a_1 + b_1i + c_1j + d_1k)+(a_2 + b_2i + c_2j + d_2k))
 =  \varphi((a_1 + a_2) + (b_1+b_2)i + (c_1+c_2)j + (d_1+d_2)k)
 =  \begin{bmatrix} a_1+a_2 & b_1+b_2 & c_1+c_2 & d_1+d_2 \\ -b_1-b_2 & a_1+a_2 & -d_1-d_2 & c_1+c_2 \\ -c_1-c_2 & d_1+d_2 & a_1+a_2 & -b_1-b_2 \\ -d_1-d_2 & -c_1-c_2 & b_1+b_2 & a_1+a_2 \end{bmatrix}
 =  \begin{bmatrix} a_1 & b_1 & c_1 & d_1 \\ -b_1 & a_1 & -d_1 & c_1 \\ -c_1 & d_1 & a_1 & -b_1 \\ -d_1 & -c_1 & b_1 & a_1 \end{bmatrix} + \begin{bmatrix} a_2 & b_2 & c_2 & d_2 \\ -b_2 & a_2 & -d_2 & c_2 \\ -c_2 & d_2 & a_2 & -b_2 \\ -d_2 & -c_2 & b_2 & a_2 \end{bmatrix}
 =  \varphi(a_1 + b_1i + c_1j + d_1k) + \varphi(a_2 + b_2i + c_2j + d_2k)
 =  \varphi(\alpha) + \varphi(\beta)
\varphi(\alpha\beta)  =  \varphi((a_1 + b_1i + c_1j + d_1k)(a_2 + b_2i + c_2j + d_2k))
 =  \varphi((a_1a_2 - b_1b_2 - c_1c_2 - d_1d_2) + (a_1b_2 + b_1a_2 + c_1d_2 - d_1c_2)i + (a_1c_2 - b_1d_2 + c_1a_2 + d_1b_2)j + (a_1d_2 + b_1c_2 - c_1b_2 + d_1a_2)k)
 =  \begin{bmatrix} a_1a_2 - b_1b_2 - c_1c_2 - d_1d_2 & a_1b_2 + b_1a_2 + c_1d_2 - d_1c_2 & a_1c_2 - b_1d_2 + c_1a_2 + d_1b_2 & a_1d_2 + b_1c_2 - c_1b_2 + d_1a_2 \\ -a_1b_2 - b_1a_2 - c_1d_2 + d_1c_2 & a_1a_2 - b_1b_2 - c_1c_2 - d_1d_2 & -a_1d_2 - b_1c_2 + c_1b_2 - d_1a_2 & a_1c_2 - b_1d_2 + c_1a_2 + d_1b_2 \\ -a_1c_2 + b_1d_2 - c_1a_2 - d_1b_2 & a_1d_2 + b_1c_2 - c_1b_2 + d_1a_2 & a_1a_2 - b_1b_2 - c_1c_2 - d_1d_2 & -a_1b_2 - b_1a_2 - c_1d_2 + d_1c_2 \\ -a_1d_2 - b_1c_2 + c_1b_2 - d_1a_2 & -a_1c_2 + b_1d_2 - c_1a_2 - d_1b_2 & a_1b_2 + b_1a_2 + c_1d_2 - d_1c_2 & a_1a_2 - b_1b_2 - c_1c_2 - d_1d_2 \end{bmatrix}
 =  \begin{bmatrix} a_1 & b_1 & c_1 & d_1 \\ -b_1 & a_1 & -d_1 & c_1 \\ -c_1 & d_1 & a_1 & -b_1 \\ -d_1 & -c_1 & b_1 & a_1 \end{bmatrix} \cdot \begin{bmatrix} a_2 & b_2 & c_2 & d_2 \\ -b_2 & a_2 & -d_2 & c_2 \\ -c_2 & d_2 & a_2 & -b_2 \\ -d_2 & -c_2 & b_2 & a_2 \end{bmatrix}
 =  \varphi(a_1 + b_1i + c_1j + d_1k) \cdot \varphi(a_2 + b_2i + c_2j + d_2k)
 =  \varphi(\alpha) \cdot \varphi(\beta)

Thus \varphi is a ring homomorphism.

Suppose now that \alpha = a+bi+cj+dk \in \mathsf{ker}\ \varphi; clearly, then, we have a = b = c= d = 0, so that \alpha = 0. Thus \varphi is injective.

By Proposition 5 in the text, \mathsf{im}\ \varphi is a subring of M_4(\mathbb{R}) which is isomorphic to \mathbb{H}.
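The verification above is mechanical and can be checked by machine. The sketch below encodes \varphi and the quaternion product (as written out in the computation of \varphi(\alpha\beta)) and confirms the defining relations and the homomorphism property on random integer quaternions:

```python
import random

def phi(a, b, c, d):
    """The matrix assigned to a + bi + cj + dk."""
    return [[ a,  b,  c,  d],
            [-b,  a, -d,  c],
            [-c,  d,  a, -b],
            [-d, -c,  b,  a]]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def quat_mul(p, q):
    """Quaternion multiplication on coefficient 4-tuples (a, b, c, d)."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

I, J, K = phi(0, 1, 0, 0), phi(0, 0, 1, 0), phi(0, 0, 0, 1)
NEG1 = phi(-1, 0, 0, 0)

# the defining relations i^2 = j^2 = k^2 = -1 and ij = k, jk = i, ki = j
assert matmul(I, I) == NEG1 and matmul(J, J) == NEG1 and matmul(K, K) == NEG1
assert matmul(I, J) == K and matmul(J, K) == I and matmul(K, I) == J

# phi(alpha * beta) = phi(alpha) * phi(beta) on random integer quaternions
random.seed(0)
for _ in range(100):
    p = tuple(random.randint(-9, 9) for _ in range(4))
    q = tuple(random.randint(-9, 9) for _ in range(4))
    assert matmul(phi(*p), phi(*q)) == phi(*quat_mul(p, q))
```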