Tag Archives: tensor product

When the tensor product of finite field extensions is a field

Let K_1 and K_2 be finite extensions of a field F contained in a field K. Prove that the F-algebra K_1 \otimes_F K_2 is a field if and only if [K_1K_2 : F] = [K_1:F][K_2:F].


First, define \psi : K_1 \times K_2 \rightarrow K_1K_2 by (a,b) \mapsto ab. Clearly \psi is F-bilinear, and so induces an F-module homomorphism \Psi : K_1 \otimes_F K_2 \rightarrow K_1K_2. In fact \Psi is an F-algebra homomorphism. Using Proposition 21 in D&F, if A = \{\alpha_i\} and B = \{\beta_j\} are bases of K_1 and K_2 over F, then AB = \{\alpha_i\beta_j\} spans K_1K_2 over F. In particular, \Psi is surjective.

Suppose K_1 \otimes_F K_2 is a field. Since \Psi(1 \otimes 1) = 1, \Psi is nonzero, so \mathsf{ker}\ \Psi is a proper ideal of the field K_1 \otimes_F K_2 and must be trivial. Thus \Psi is an isomorphism of F-algebras, and in particular an isomorphism of fields. Using Proposition 21 on page 421 of D&F, K_1 \otimes_F K_2 has dimension [K_1:F][K_2:F] over F, and so [K_1K_2 : F] = [K_1:F][K_2:F] as desired.

Conversely, suppose [K_1K_2 : F] = [K_1:F][K_2:F]. That is, K_1 \otimes_F K_2 and K_1K_2 have the same (finite) dimension as F-vector spaces. Since \Psi is a surjective linear transformation between F-vector spaces of equal finite dimension, \Psi is injective (Corollary 9 on page 413 of D&F), and so K_1 \otimes_F K_2 and K_1K_2 are isomorphic as F-algebras, hence as rings, and so K_1 \otimes_F K_2 is a field.
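When K_1 = F(\alpha) is simple with minimal polynomial m(x), we have K_1 \otimes_F K_2 \cong K_2[x]/(m(x)), so the tensor product is a field precisely when m(x) stays irreducible over K_2. A quick computational sanity check of this criterion (a sketch using SymPy's factor_list with its extension keyword; the example fields are my choice, not from the text):

```python
from sympy import symbols, factor_list, I, sqrt

x = symbols('x')
m = x**2 + 1  # minimal polynomial of i over Q

# Q(i) (x)_Q Q(sqrt 2) = Q(sqrt 2)[x]/(x^2+1): x^2+1 stays irreducible over
# Q(sqrt 2), matching [Q(i,sqrt 2):Q] = 4 = [Q(i):Q][Q(sqrt 2):Q], a field.
_, factors_sqrt2 = factor_list(m, extension=sqrt(2))
assert len(factors_sqrt2) == 1

# Q(i) (x)_Q Q(i) = Q(i)[x]/(x^2+1): x^2+1 splits as (x-i)(x+i) over Q(i),
# so the tensor product has zero divisors and is not a field.
_, factors_i = factor_list(m, extension=I)
assert len(factors_i) == 2
```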

A fact about the rank of a module over an integral domain

Let R be an integral domain with quotient field F and let M be any (left, unital) R-module. Prove that the rank of M equals the dimension of F \otimes_R M over F.


Recall that the rank of a module over a domain is the maximal number of R-linearly independent elements.

Suppose B \subseteq M is an R-linearly independent set, and let B^\prime \subseteq F \otimes_R M consist of the tensors 1 \otimes b for b \in B. Suppose \sum \alpha_i (1 \otimes b_i) = 0 for some \alpha_i \in F. Multiplying by a common denominator d \in R (nonzero), we have \sum r_i (1 \otimes b_i) = 0, where r_i = d\alpha_i \in R. Now \sum 1 \otimes r_ib_i = 0, and we have 1 \otimes \sum r_ib_i = 0. By a previous exercise, there exists a nonzero r \in R such that \sum rr_ib_i = 0. Since B is linearly independent, we have rr_i = 0 for each i, and since r \neq 0 and R is a domain, r_i = 0 for each i. Finally, since r_i = d\alpha_i with d \neq 0, we have \alpha_i = 0 for each i. So B^\prime is F-linearly independent in F \otimes_R M. In particular, we have \mathsf{rank}_R(M) \leq \mathsf{dim}_F(F \otimes_R M).

Now suppose \{b_i^\prime\} \subseteq F \otimes_R M is a finite F-linearly independent set, with b_i^\prime = \sum_j \alpha_{i,j} \otimes m_{i,j} and \alpha_{i,j} = r_{i,j}/s_{i,j}. ‘Clearing denominators’ by multiplying b_i^\prime by the nonzero scalar \prod_j s_{i,j}, we obtain a second linearly independent set of the same cardinality whose elements are of the form 1 \otimes m_i for some m_i \in M. Suppose there exist r_i \in R such that \sum r_im_i = 0; then \sum r_i(1 \otimes m_i) = 1 \otimes \sum r_im_i = 0, and since the 1 \otimes m_i are F-linearly independent, we have r_i = 0 for each i. So the m_i are R-linearly independent in M. Thus \mathsf{rank}_R(M) \geq \mathsf{dim}_F(F \otimes_R M).
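As a quick illustration (a standard example, not from the text), take R = \mathbb{Z}, F = \mathbb{Q}, and M = \mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z}:

```latex
\mathbb{Q} \otimes_{\mathbb{Z}} \left( \mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z} \right)
  \cong \left( \mathbb{Q} \otimes_{\mathbb{Z}} \mathbb{Z} \right) \oplus \left( \mathbb{Q} \otimes_{\mathbb{Z}} \mathbb{Z}/2\mathbb{Z} \right)
  \cong \mathbb{Q} \oplus 0
  \cong \mathbb{Q},
```

since in \mathbb{Q} \otimes_{\mathbb{Z}} \mathbb{Z}/2\mathbb{Z} every simple tensor satisfies q \otimes m = (q/2) \otimes 2m = 0. So \mathsf{dim}_F(F \otimes_R M) = 1, matching \mathsf{rank}_R(M) = 1: the element (1,0) is \mathbb{Z}-linearly independent, while any two elements (a,u) and (b,v) satisfy the relation 2b(a,u) - 2a(b,v) = 0 (and if a = b = 0, both elements are torsion).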

A fact about the Alt map on a tensor power

Let F be a field in which k! is a unit and let V be an F-vector space. Recall that S_k acts on the tensor power \mathcal{T}^k(V) by permuting the components, and that \mathsf{Alt}_k is defined on \mathcal{T}^k(V) by \mathsf{Alt}_k(z) = \frac{1}{k!} \sum_{\sigma \in S_k} \epsilon(\sigma) \sigma z. Prove that \mathsf{im}\ \mathsf{Alt}_k is the unique largest subspace of \mathcal{T}^k(V) on which each permutation \sigma \in S_k acts as multiplication by \epsilon(\sigma).


Suppose z \in \mathcal{T}^k(V) such that \sigma z = \epsilon(\sigma) z for all \sigma \in S_k. Then \mathsf{Alt}_k(z) = \frac{1}{k!} \sum_{\sigma \in S_k} \epsilon(\sigma) \sigma z = \frac{1}{k!} \sum_{\sigma \in S_k} \epsilon(\sigma) \epsilon(\sigma) z = \frac{1}{k!} \sum_{\sigma \in S_k} z = z, so that z \in \mathsf{im}\ \mathsf{Alt}_k.

In particular, any subspace of \mathcal{T}^k(V) upon which every permutation \sigma \in S_k acts as scalar multiplication by \epsilon(\sigma) is contained in \mathsf{im}\ \mathsf{Alt}_k.

Note also that each \sigma \in S_k acts on \mathsf{im}\ \mathsf{Alt}_k as multiplication by \epsilon(\sigma): for any w \in \mathcal{T}^k(V), reindexing the sum by \tau \mapsto \sigma\tau gives \sigma \cdot \mathsf{Alt}_k(w) = \frac{1}{k!} \sum_{\tau \in S_k} \epsilon(\tau) (\sigma\tau) w = \epsilon(\sigma) \frac{1}{k!} \sum_{\tau \in S_k} \epsilon(\sigma\tau) (\sigma\tau) w = \epsilon(\sigma) \mathsf{Alt}_k(w). Thus \mathsf{im}\ \mathsf{Alt}_k is the unique largest such subspace.
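For k = 2 this can be checked numerically: identifying \mathcal{T}^2(V) with square matrices (here V = \mathbb{R}^3, an illustrative choice), the transposition (1\ 2) acts by transposing, and \mathsf{Alt}_2(A) = \frac{1}{2}(A - A^\mathsf{T}). A small NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
A = rng.standard_normal((3, 3))  # a 2-tensor on V = R^3, written as a matrix

# Alt_2 = (1/2!) * sum over S_2 of sign(sigma)*sigma, i.e. (id - transpose)/2
alt_A = (A - A.T) / 2

# The nontrivial transposition acts on im Alt_2 as multiplication by its sign -1:
assert np.allclose(alt_A.T, -alt_A)

# Alt_2 fixes its image pointwise, so im Alt_2 is exactly the fixed subspace:
assert np.allclose((alt_A - alt_A.T) / 2, alt_A)
```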

Exterior powers of subdomains of a field of fractions

Let R be an integral domain, and let F be its field of fractions.

  1. Considering F as an R-module in the usual way, prove that \bigwedge^2(F) = 0.
  2. Let I \subseteq F be a submodule. Prove that \bigwedge^k(I) is torsion for all k \geq 2.
  3. Exhibit an integral domain R and an R-submodule I \subseteq F such that \bigwedge^k(I) \neq 0 for all k \geq 0.

Let \frac{a}{b} \otimes \frac{c}{d} \in \mathcal{T}^2(F) be a simple tensor. Note that \frac{a}{b} \otimes \frac{c}{d} = \frac{ad}{bd} \otimes \frac{cb}{bd} = abcd(\frac{1}{bd} \otimes \frac{1}{bd}) \in \mathcal{A}^2(F). In particular, we have \frac{a}{b} \wedge \frac{c}{d} = 0 in \bigwedge^2(F). Since the simple tensors generate \mathcal{T}^2(F), it follows that \bigwedge^2(F) = 0.

Suppose \frac{a_1}{b_1} \wedge \frac{a_2}{b_2} \wedge \cdots \wedge \frac{a_k}{b_k} \in \bigwedge^k(I) is nonzero; then the a_i are nonzero (if some a_i = 0, that entry, and hence the wedge, is zero), and certainly the b_i are nonzero. Now a_1a_2b_1b_2 \neq 0 in R, and distributing a_2b_1 into the first slot and a_1b_2 into the second, a_1a_2b_1b_2(\frac{a_1}{b_1} \wedge \frac{a_2}{b_2} \wedge \cdots \wedge \frac{a_k}{b_k}) = a_1a_2 \wedge a_1a_2 \wedge \frac{a_3}{b_3} \wedge \cdots \wedge \frac{a_k}{b_k} = 0, as the first two entries are equal. So every simple wedge in \bigwedge^k(I) is torsion; since a finite sum of torsion elements over a domain is torsion (multiply the annihilating scalars together), \bigwedge^k(I) is torsion as an R-module.

Now consider R = \mathbb{Z}[x_1,\ldots,x_n], and let I = (x_1,\ldots,x_n). We claim that if \sum \alpha_ix_i = \sum \beta_ix_i \in I, then h_i = \alpha_i - \beta_i \in I for each i. To see this, fix j and note that (\alpha_j - \beta_j)x_j = \sum_{i \neq j} (\beta_i - \alpha_i)x_i. Every monomial on the right hand side is divisible by some x_i with i \neq j, while every monomial of (\alpha_j - \beta_j)x_j is divisible by x_j. If \alpha_j - \beta_j had a nonzero constant term c, the left hand side would contain the monomial cx_j, which is divisible by no x_i with i \neq j- a contradiction. So \alpha_j - \beta_j has zero constant term; that is, h_j = \alpha_j - \beta_j \in I.

Now consider the elements of I as “column vectors”- that is, write \sum \alpha_i x_i as [\alpha_1\ \alpha_2\ \ldots\ \alpha_n]^\mathsf{T}. Note that this does not give a unique representation of the elements of I. However, if two column vectors A and B represent the same element of I, then by the claim above A = B+H for some column vector H whose entries are in I.

Now write the elements of \prod^n I as n \times n matrices over R, one column for each factor. Let us consider the determinant of such a matrix, as an element of R, reduced mod I. This is an alternating R-multilinear map on \prod^n I; we claim that it is well-defined despite our nonunique identification of matrices over R with elements of \prod^n I. To that end, suppose matrices A and B represent the same element of \prod^n I; then, column by column, we have a matrix H with entries in I such that A = B+H. Consider the determinant of both sides mod I. Using the permutation expansion of the determinant, \mathsf{det}(B+H) = \sum_{\sigma \in S_n} \epsilon(\sigma) \prod_i (\beta_{\sigma(i),i} + h_{\sigma(i),i}). Expanding each product, every term involving some h_{\sigma(i),i} lies in I, and so goes to zero in the quotient R/I. So in fact \mathsf{det}(A) = \mathsf{det}(B+H) \equiv \mathsf{det}(B) \mod\ I; thus \mathsf{det} : \prod^n I \rightarrow R/I is a well-defined alternating multilinear map. Since \mathsf{det}(x_1, \ldots, x_n) = 1 (the representing matrix being the identity) is nonzero in R/I, this map is nontrivial, and so \bigwedge^n(I) \neq 0. Taking instead the determinant of the upper-left k \times k submatrix gives, in the same way, a nontrivial alternating k-multilinear map on \prod^k I for each 1 \leq k \leq n; passing to countably many indeterminates, R = \mathbb{Z}[x_1, x_2, \ldots], then yields \bigwedge^k(I) \neq 0 for all k \geq 0.

Exhibit a nontrivial alternating bilinear map from a tensor square

Let R = \mathbb{Z}[x,y] and I = (x,y).

  1. Prove that if ax + by = a^\prime x + b^\prime y in R, then there exists h \in R such that a^\prime = a + yh and b^\prime = b - xh.
  2. Prove that the map \varphi(ax+by,cx+dy) = ad-bc \mod (x,y) is a well-defined, alternating, bilinear map I \times I \rightarrow R/I.

Note that (a-a^\prime)x = (b^\prime - b)y. Since x and y are irreducible (hence prime) in the UFD R, we have that y divides a - a^\prime and x divides b^\prime - b. Say a - a^\prime = yh and b^\prime - b = xh^\prime. Substituting, xyh = xyh^\prime, and since R is a domain, h = h^\prime. Thus a^\prime = a - yh and b^\prime = b + xh (replacing h by -h gives the signs in the statement).

Suppose now that a_1x + b_1y = a_2x + b_2y and c_1x + d_1y = c_2x + d_2y. Then there exist h,g \in R such that a_2 = a_1 - yh, b_2 = b_1 + xh, c_2 = c_1 - yg, and d_2 = d_1 + xg. Expanding, a_2d_2 - b_2c_2 = a_1d_1 - b_1c_1 + x(a_1g - c_1h) + y(b_1g - d_1h) \equiv a_1d_1 - b_1c_1 \mod (x,y). Thus \varphi is well defined. Note that \varphi((a_1+a_2)x + (b_1+b_2)y, cx+dy) = (a_1+a_2)d - (b_1+b_2)c = (a_1d - b_1c) + (a_2d - b_2c) = \varphi(a_1x+b_1y,cx+dy) + \varphi(a_2x+b_2y,cx+dy), so that \varphi is additive in the first argument; similarly it is additive in the second argument. Moreover, \varphi(r(ax+by),cx+dy) = \varphi(arx+bry,cx+dy) = ard-brc = \varphi(ax+by,r(cx+dy)). Thus \varphi is R-bilinear. Since \varphi(ax+by,ax+by) = ab-ab = 0, \varphi is alternating. Finally, note that \varphi(x,y) = 1 \cdot 1 - 0 \cdot 0 = 1. Thus there exists a nontrivial alternating R-bilinear map on I \times I.
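The well-definedness computation can be verified symbolically. The following sketch (using SymPy, with symbol names of my choosing) checks that changing representatives by h and g only changes ad - bc by an element of (x, y):

```python
from sympy import symbols, expand, Poly

x, y, a1, b1, c1, d1, h, g = symbols('x y a1 b1 c1 d1 h g')

# Alternative representatives of the same elements of I = (x, y), as in part (1)
a2, b2 = a1 - y*h, b1 + x*h
c2, d2 = c1 - y*g, d1 + x*g

diff = expand((a2*d2 - b2*c2) - (a1*d1 - b1*c1))

# Every monomial of the difference involves x or y, so diff lies in (x, y)
exponents = Poly(diff, x, y).monoms()
assert all(i + j > 0 for (i, j) in exponents)
```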

A fact about 2-tensors in the tensor algebra of the abelian group ZZ

Compute the image of the map \mathsf{Sym}_2 on \mathbb{Z} \otimes_\mathbb{Z} \mathbb{Z}. Conclude that the symmetric tensor 1 \otimes 1 is not in the image of \mathsf{Sym}_2.


Recall that \mathsf{Sym}_2 : \mathcal{T}^2(M) \rightarrow \mathcal{T}^2(M) is given by \mathsf{Sym}_2(z) = \sum_{\sigma \in S_2} \sigma \cdot z, where the action of S_n on \mathcal{T}^n(M) permutes the entries of a tensor.

Let a \otimes b \in \mathcal{T}^2(\mathbb{Z}). Since a \otimes b = ab(1 \otimes 1) = b \otimes a, we have \mathsf{Sym}_2(a \otimes b) = a \otimes b + b \otimes a = 2(a \otimes b). As \mathsf{Sym}_2 is additive and every element of \mathcal{T}^2(\mathbb{Z}) is a finite sum of simple tensors, \mathsf{im}\ \mathsf{Sym}_2 = \{2z \ |\ z \in \mathcal{T}^2(\mathbb{Z})\} = \{2k(1 \otimes 1)\ |\ k \in \mathbb{Z}\}.

Suppose now that 1 \otimes 1 = \mathsf{Sym}_2(z) = 2z for some z \in \mathcal{T}^2(\mathbb{Z}). Consider the map \psi : \mathbb{Z} \times \mathbb{Z} \rightarrow \mathbb{Z}/(2) given by (a,b) \mapsto \overline{ab}. This map is clearly \mathbb{Z}-bilinear, and so induces an abelian group homomorphism \overline{\psi} : \mathcal{T}^2(\mathbb{Z}) \rightarrow \mathbb{Z}/(2) by the universal property of tensor products. Note that \overline{\psi}(1 \otimes 1) = \overline{1}, while \overline{\psi}(2z) = \overline{0}- a contradiction. Thus 1 \otimes 1 is not in the image of \mathsf{Sym}_2.
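Under the isomorphism \mathbb{Z} \otimes_\mathbb{Z} \mathbb{Z} \cong \mathbb{Z} sending a \otimes b to ab, the map \overline{\psi} is just reduction mod 2, \mathsf{Sym}_2 becomes multiplication by 2, and 1 \otimes 1 corresponds to 1. A toy sketch of the obstruction (the identification is illustrative, not from the text):

```python
# Identify Z (x) Z with Z via a (x) b |-> a*b; then Sym_2(a (x) b) |-> 2ab.
def tensor(a, b):
    return a * b

def sym2(a, b):
    return 2 * tensor(a, b)  # Sym_2(a (x) b) = a (x) b + b (x) a

def psibar(n):
    return n % 2  # the induced map to Z/2Z

assert psibar(tensor(1, 1)) == 1  # 1 (x) 1 maps to 1 in Z/2Z
# everything in im Sym_2 maps to 0, so 1 (x) 1 is not in the image
assert all(psibar(sym2(a, b)) == 0 for a in range(-5, 6) for b in range(-5, 6))
```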

The matrix of an extension of scalars

Let K be a field and let F \subseteq K be a subfield. Let \psi : V \rightarrow W be a linear transformation of finite dimensional vector spaces over F.

  1. Prove that 1 \otimes \psi : K \otimes_F V \rightarrow K \otimes_F W is a K-linear transformation.
  2. Fix bases B and E of V and W, respectively. Compare the matrices of \psi and 1 \otimes \psi with respect to these bases.

Note that since K is a (K,F)-bimodule, 1 \otimes \psi is a K-module homomorphism- that is, a K-linear transformation.

Let B = \{v_i\} and E = \{w_i\}. We claim that B^\prime = \{ 1 \otimes v_i \} and E^\prime = \{1 \otimes w_i\} are bases of K \otimes_F V and K \otimes_F W, respectively, as K vector spaces. In fact, we showed this in a previous exercise. (See part (1).)

Suppose \psi(v_j) = \sum a_{i,j} w_i. Then (1 \otimes \psi)(1 \otimes v_j) = \sum a_{i,j} (1 \otimes w_i). Thus the columns of M^E_B(\psi) and M^{E^\prime}_{B^\prime}(1 \otimes \psi) agree, and we have M^E_B(\psi) = M^{E^\prime}_{B^\prime}(1 \otimes \psi).
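Concretely, extending scalars from F = \mathbb{R} to K = \mathbb{C} leaves the matrix untouched while allowing complex scalars. A small NumPy sketch (the matrix A and vector v are illustrative choices):

```python
import numpy as np

# psi : V -> W between F = R vector spaces, with matrix A in fixed bases
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 4.0]])  # a 2x3 real matrix

# After extending scalars to K = C, 1 (x) psi acts on C^3 by the SAME matrix,
# and is now K-linear: it commutes with complex scalars.
v = np.array([1 + 2j, -1j, 3 + 0j])
c = 2 - 3j
assert np.allclose(A @ (c * v), c * (A @ v))
```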

A module is flat if and only if the tensor product of 1 with any injective homomorphism from a finitely generated module is injective

Let R be a ring and let A be a unital right R-module. Prove that A is flat if and only if for all injective left R-module homomorphisms \psi : L \rightarrow M where L is finitely generated, the map 1 \otimes \psi : A \otimes_R L \rightarrow A \otimes_R M is injective.


Certainly if A is flat, then every such map 1 \otimes \psi is injective.

Suppose now that A is a right R-module such that for all injective left R-module homomorphisms \psi : L \rightarrow M with L finitely generated, 1 \otimes \psi is injective. Let \psi : L \rightarrow M be an arbitrary injective module homomorphism, and suppose (1 \otimes \psi)(\sum a_i \otimes \ell_i) = 0. Now there exists a finitely generated submodule L^\prime \subseteq L containing all of the \ell_i- for example, the submodule generated by the \ell_i. The restriction \psi^\prime : L^\prime \rightarrow M of \psi is injective, and L^\prime is finitely generated, so by our hypothesis 1 \otimes \psi^\prime : A \otimes_R L^\prime \rightarrow A \otimes_R M is injective. Regarding \sum a_i \otimes \ell_i as an element of A \otimes_R L^\prime, we have (1 \otimes \psi^\prime)(\sum a_i \otimes \ell_i) = (1 \otimes \psi)(\sum a_i \otimes \ell_i) = 0, so \sum a_i \otimes \ell_i = 0 in A \otimes_R L^\prime, hence also in A \otimes_R L (being the image under the map induced by the inclusion L^\prime \subseteq L). In particular, \mathsf{ker}\ 1 \otimes \psi = 0, so that 1 \otimes \psi is injective.

Thus A is flat.

The change of base of a flat module is flat

Let R and S be rings with 1 and let \theta : R \rightarrow S be a ring homomorphism such that \theta(1) = 1. Make S into a left R-module by r \cdot s = \theta(r)s; in fact, S is then an (R,S)-bimodule. Let M be a right, unital R-module. Show that if M is flat as an R-module, then M \otimes_R S is flat as a right S-module.


Let \psi : L \rightarrow N be an injective homomorphism of left S-modules. Note that S is free as a right S-module, hence flat, so that 1 \otimes \psi : S \otimes_S L \rightarrow S \otimes_S N is injective. Now we claim that 1 \otimes \psi is a homomorphism of left R-modules. To see this, let r \in R and let s \otimes \ell be a simple tensor in S \otimes_S L. Now (1 \otimes \psi)(r \cdot (s \otimes \ell)) = (1 \otimes \psi)(rs \otimes \ell) = rs \otimes \psi(\ell) = r \cdot (s \otimes \psi(\ell)) = r \cdot (1 \otimes \psi)(s \otimes \ell). Since M is flat as a right R-module, 1 \otimes (1 \otimes \psi) : M \otimes_R (S \otimes_S L) \rightarrow M \otimes_R (S \otimes_S N) is injective. Via the canonical isomorphisms, 1 \otimes \psi : (M \otimes_R S) \otimes_S L \rightarrow (M \otimes_R S) \otimes_S N is injective. Thus M \otimes_R S is flat.

Over a commutative ring, the tensor product of two flat modules is flat

Let R be a commutative ring and let M and N be R-modules. Show that if M and N are flat over R, then M \otimes_R N is flat over R.


This follows by the same argument as the previous exercise, since both M and N can naturally be considered (R,R)-bimodules: if \psi : L \rightarrow L^\prime is an injective R-module homomorphism, then 1 \otimes \psi : N \otimes_R L \rightarrow N \otimes_R L^\prime is injective since N is flat, and then 1 \otimes (1 \otimes \psi) is injective since M is flat. Via the canonical isomorphism M \otimes_R (N \otimes_R L) \cong (M \otimes_R N) \otimes_R L, this says precisely that 1 \otimes \psi : (M \otimes_R N) \otimes_R L \rightarrow (M \otimes_R N) \otimes_R L^\prime is injective. Thus M \otimes_R N is flat over R.