Every semigroup is isomorphic to a multiplicative subsemigroup of some ring

Show that every semigroup is isomorphic to a multiplicative subsemigroup of some ring. Exhibit a semigroup which is not isomorphic to the (entire) multiplicative semigroup of any ring.


Let S be a semigroup and let R be a ring. Let R[S] = \{ r : S \rightarrow R\ |\ r(s) = 0\ \mathrm{for\ all\ but\ finitely\ many}\ s \in S\}. That is, R[S] is the set of all functions S \rightarrow R which vanish everywhere outside a finite subset of S.

Given f,g \in R[S], we define f+g and f \cdot g as follows.

  1. (f+g)(s) = f(s) + g(s)
  2. (f \cdot g)(s) = \sum_{tu = s} f(t)g(u)

In the definition of \cdot, if the summation is empty, then (f \cdot g)(s) = 0 as usual.

We claim that (R[S], +, \cdot) is a ring.

  1. (+ is associative) For all s \in S, we have ((f+g)+h)(s) = (f+g)(s) + h(s) = f(s) + g(s) + h(s) = f(s) + (g+h)(s) = (f+(g+h))(s). So (f+g)+h = f+(g+h).
  2. (A +-identity exists) Define 0 : S \rightarrow R by 0(s) = 0_R for all s \in S. Now (f+0)(s) = f(s) + 0(s) = f(s) + 0 = f(s), so that f+0 = f. Similarly, 0 + f = f.
  3. (Every element has a +-inverse) Given f, define \overline{f} : S \rightarrow R by \overline{f}(s) = -f(s). Then (f + \overline{f})(s) = f(s) + \overline{f}(s) = f(s) - f(s) = 0 = 0(s). So f + \overline{f} = 0. Similarly, \overline{f} + f = 0. We will denote the additive inverse of f by -f.
  4. (+ is commutative) We have (f+g)(s) = f(s) + g(s) = g(s) + f(s) = (g+f)(s).
  5. (\cdot is associative)
    ((f \cdot g) \cdot h)(s)  =  \displaystyle\sum_{tu = s} (f \cdot g)(t) h(u)
     =  \displaystyle\sum_{tu = s} \left(\displaystyle\sum_{vw = t} f(v)g(w)\right) h(u)
     =  \displaystyle\sum_{tu = s} \displaystyle\sum_{vw = t} f(v)g(w)h(u)
     =  \displaystyle\sum_{vwu = s} f(v)g(w)h(u)
     =  \displaystyle\sum_{vt = s} \displaystyle\sum_{wu = t} f(v) g(w) h(u)
     =  \displaystyle\sum_{vt = s} f(v) \left( \displaystyle\sum_{wu = t} g(w) h(u) \right)
     =  \displaystyle\sum_{vt = s} f(v) (g \cdot h)(t)
     =  (f \cdot (g \cdot h))(s)

    So f \cdot (g \cdot h) = (f \cdot g) \cdot h.

  6. (\cdot distributes over + from the left)
    (f \cdot (g + h))(s)  =  \displaystyle\sum_{tu = s} f(t) (g+h)(u)
     =  \displaystyle\sum_{tu = s} f(t)(g(u) + h(u))
     =  \displaystyle\sum_{tu = s} \left( f(t) g(u) + f(t) h(u) \right)
     =  \displaystyle\sum_{tu = s} f(t) g(u) + \displaystyle\sum_{tu = s} f(t)h(u)
     =  (f \cdot g)(s) + (f \cdot h)(s)
     =  ((f \cdot g) + (f \cdot h))(s)

    So f \cdot (g + h) = (f \cdot g) + (f \cdot h)

  7. (\cdot distributes over + from the right) Similar to the proof for left distributivity.
  8. (If S has a left identity, then R[S] has a left identity) Suppose R has a 1 and that e \in S is a left identity. Define \overline{e} : S \rightarrow R by \overline{e}(e) = 1 and \overline{e}(s) = 0 if s \neq e. Now (\overline{e} \cdot f)(s) = \sum_{tu = s} \overline{e}(t) f(u). If t \neq e, then \overline{e}(t) = 0, and if t = e, then eu = u, so that u = s. (The pair (e,s) is certainly in the index set of the summation, since es = s.) So this sum is equal to f(s). Hence \overline{e} \cdot f = f.
  9. (If S has a right identity, then R[S] has a right identity) Similar to the proof for left identity.

So R[S] is a ring. Rings of this type are called semigroup rings, and their elements are typically written as ‘polynomials’ in the elements of S with coefficients from R. Addition and multiplication are then carried out as for polynomials – multiply the semigroup elements (preserving their order) and the coefficients, then collect like terms.

Now take R to be a ring with 1 \neq 0 – say R = \mathbb{Z}. Given s \in S, we define \varphi_s : S \rightarrow R by \varphi_s(t) = 1 if t = s and 0 otherwise. We claim that the map \Phi : S \rightarrow R[S] given by s \mapsto \varphi_s is an injective semigroup homomorphism. Indeed, \Phi(st)(a) = \varphi_{st}(a) = 1 if a = st and 0 otherwise. Now (\varphi_s \cdot \varphi_t)(a) = \sum_{uv = a} \varphi_s(u) \varphi_t(v). If u \neq s, then \varphi_s(u) = 0, and if v \neq t, then \varphi_t(v) = 0. So (\varphi_s \cdot \varphi_t)(a) = 1 if a = st and 0 otherwise. Hence \Phi(st) = \Phi(s) \cdot \Phi(t). Moreover, if s \neq t then \varphi_s \neq \varphi_t (since 1 \neq 0 in R), so \Phi is injective.

So every semigroup can be embedded in the multiplicative semigroup of a ring. However, not every semigroup is isomorphic to the full multiplicative semigroup of some ring, as we now argue. Note that the multiplicative semigroup of any ring has a zero element – namely 0, since 0 \cdot r = r \cdot 0 = 0 for all r. So any semigroup without a zero cannot be the multiplicative semigroup of a ring. For example, a group with more than one element – such as \mathbb{Z}/(2) under addition – has no zero, since zs = z forces s to be the identity.
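As a sanity check (illustrative only, not part of the exercise), here is a small Python sketch of R[S] with R = \mathbb{Z} and S the two-element left-zero semigroup (st = s for all s, t), verifying that \Phi is a homomorphism and that \cdot is associative on sample elements:

```python
# Sketch: the semigroup ring Z[S] for the "left zero" semigroup
# S = {'a', 'b'} with product s*t = s. Elements of Z[S] are dicts
# mapping semigroup elements to integer coefficients; a missing
# key means the coefficient is 0.

S = ['a', 'b']

def op(s, t):          # semigroup operation: left-zero product
    return s

def add(f, g):
    return {s: f.get(s, 0) + g.get(s, 0) for s in S}

def mul(f, g):
    # (f*g)(s) = sum over tu = s of f(t) g(u)
    out = {s: 0 for s in S}
    for t in S:
        for u in S:
            out[op(t, u)] += f.get(t, 0) * g.get(u, 0)
    return out

def phi(s):            # the embedding Phi : S -> Z[S]
    return {t: (1 if t == s else 0) for t in S}

# Phi is a semigroup homomorphism: Phi(st) = Phi(s) * Phi(t)
for s in S:
    for t in S:
        assert mul(phi(s), phi(t)) == phi(op(s, t))

# Multiplication is associative on some sample elements
f, g, h = {'a': 2, 'b': -1}, {'a': 3}, {'b': 5}
assert mul(mul(f, g), h) == mul(f, mul(g, h))
```

The dict representation mirrors the finite-support condition directly: only finitely many keys carry nonzero coefficients.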

Finitely generated modules over R are precisely the module-homomorphic images of Rⁿ

Let R be a ring. Prove that M is a finitely generated module over R if and only if M is a module-homomorphic image of R^n for some natural number n.


Suppose first that M is a finitely generated R-module – say by A = \{a_1,\ldots,a_n\}. Let e_i denote the ith standard basis vector in R^n, and define \varphi : R^n \rightarrow M by e_i \mapsto a_i, and extend linearly. Certainly then \varphi is a surjective module homomorphism.

Conversely, suppose \varphi : R^n \rightarrow M is a surjective module homomorphism, and for each e_i let a_i = \varphi(e_i). Since \varphi is surjective, and since the e_i generate R^n, the set A = \{a_1, \ldots, a_n\} generates M over R as desired.
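As a concrete illustration (a sketch with example data, not part of the exercise): over R = \mathbb{Z}, the module M = \mathbb{Z}/(6) is generated by A = \{2, 3\}, and the corresponding map \varphi : \mathbb{Z}^2 \rightarrow M sending e_i \mapsto a_i is surjective. A quick check in Python:

```python
# Sketch: M = Z/6 as a homomorphic image of Z^2 via the
# generators gens = [2, 3]; phi sends e_i to gens[i] and
# extends linearly.

gens = [2, 3]

def phi(c):
    # phi(c1, c2) = c1*2 + c2*3 (mod 6)
    return sum(ci * ai for ci, ai in zip(c, gens)) % 6

# phi is surjective: every element of Z/6 is hit by some (c1, c2)
image = {phi((c1, c2)) for c1 in range(6) for c2 in range(6)}
assert image == set(range(6))
```

Restricting the coefficients to range(6) suffices here because phi only depends on them mod 6.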

If R is a Noetherian ring, then Rⁿ is a Noetherian R-module

Let R be a Noetherian ring. Prove that R^n, as an R-module in the usual way, is also Noetherian.


Recall that a ring is Noetherian if every ideal is finitely generated, and a module is Noetherian if every submodule is finitely generated. (That is, a ring is Noetherian (as a ring) if it is Noetherian (as a module) over itself.)

We will proceed by induction on n.

For the base case, n = 1, R^1 is certainly Noetherian as a module over R.

For the inductive step, suppose R^n is Noetherian. Now let M \subseteq R^{n+1} be a submodule; our goal is to show that M must be finitely generated. To that end, let A = \{r \in R \ |\ (m_i)_0 = r\ \mathrm{for\ some}\ (m_i) \in M \}. That is, A is the collection (in R) of all zeroth coordinates of elements in M.

We claim that A \subseteq R is an ideal. If a,b \in A, then there exist elements (m_i) and (n_i) in M such that m_0 = a and n_0 = b. Now since M \subseteq R^{n+1} is a submodule, we have (m_i) + r(n_i) \in M for all r \in R, so that a+rb \in A. We clearly have 0 \in A, so that by the submodule criterion, A is an ideal of R.

In particular, since R is Noetherian, the ideal A is finitely generated – say by \{a_0, a_1, \ldots, a_k\}. For each i, let \alpha_i be an element of M whose zeroth coordinate is a_i. Now let m = (m_0,\ldots,m_n) \in M. Then m_0 = \sum c_i a_i for some c_i \in R, and so m - \sum c_i \alpha_i = (0,t_1,\ldots,t_n) is an element of M whose zeroth coordinate is 0. In particular, we have M = (\alpha_0,\ldots,\alpha_k) + B, where B = M \cap (0 \times R^n). (We showed the (\subseteq) direction, and the (\supseteq) inclusion is clear.) Now M \cap (0 \times R^n) is a submodule of 0 \times R^n \cong_R R^n, which is Noetherian by the induction hypothesis. So B is finitely generated as an R-module, and thus M is finitely generated over R.

Since M was an arbitrary submodule of R^{n+1}, R^{n+1} is Noetherian as a module.
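To make the key step concrete, here is a small Python sanity check (illustrative only, with example data) using the Noetherian ring R = \mathbb{Z}/(12): for a submodule M \subseteq R^2, the set A of zeroth coordinates of elements of M is indeed an ideal of R.

```python
# Sketch: R = Z/12, M = submodule of R^2 generated by (2, 1)
# and (0, 3); A = set of zeroth coordinates of elements of M.

n = 12
g1, g2 = (2, 1), (0, 3)

# All R-linear combinations c1*g1 + c2*g2 (coefficients mod 12)
M = {((c1 * g1[0] + c2 * g2[0]) % n, (c1 * g1[1] + c2 * g2[1]) % n)
     for c1 in range(n) for c2 in range(n)}

A = {m[0] for m in M}              # zeroth coordinates of M

# A is an ideal: closed under a + r*b for all r in R
assert all((a + r * b) % n in A for a in A for b in A for r in range(n))
assert A == {(2 * k) % n for k in range(n)}   # here A = (2)
```

Since R is commutative with 1, enumerating linear combinations of the generators yields all of M.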

The quotient of a product is module isomorphic to the product of quotients

Let R be a ring, let \{A_i\}_{i=1}^n be a finite family of (left, unital) R-modules, and let B_i \subseteq A_i be a submodule for each i. Prove that (\prod A_i)/(\prod B_i) \cong_R \prod A_i/B_i.


We did this previously. D&F, why repeat an exercise?

Representativewise multiplication of cosets of an ideal is well-defined

Let R be a ring and let I \subseteq R be an ideal. Show that if a_1 - b_1 \in I and a_2 - b_2 \in I, then a_1a_2 - b_1b_2 \in I.


Since I is an ideal, a_1a_2 - b_1a_2 \in I and b_1a_2 - b_1b_2 \in I. So a_1a_2 - b_1a_2 + b_1a_2 - b_1b_2 = a_1a_2 - b_1b_2 \in I as desired.
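A quick numerical sanity check (illustrative, with R = \mathbb{Z} and I = (5)):

```python
# Sketch: in Z with I = (5), if a1 - b1 and a2 - b2 lie in I,
# then so does a1*a2 - b1*b2.

def in_I(x, m=5):                 # membership in the ideal (5)
    return x % m == 0

a1, b1 = 17, 2                    # a1 - b1 = 15, in (5)
a2, b2 = 9, -1                    # a2 - b2 = 10, in (5)
assert in_I(a1 - b1) and in_I(a2 - b2)
assert in_I(a1 * a2 - b1 * b2)    # 153 - (-2) = 155 = 5 * 31
```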

Facts about sums of ideals

Let R be a commutative ring with 1, and let A,B,C \subseteq R be ideals. Recall that A+B = \{a+b \ |\ a \in A, b \in B\}. Prove the following.

  1. A+B is an ideal
  2. A,B \subseteq A+B
  3. If A,B \subseteq C then A+B \subseteq C
  4. A+B = (A,B)
  5. If (A,B) = (1), then there exist \alpha \in A and \beta \in B such that \alpha+\beta = 1.
  6. If (A,B) = (1) and BC \subseteq A, then C \subseteq A.

Suppose a_1+b_1,a_2+b_2 \in A+B, and let r \in R. Then (a_1+b_1) + r(a_2+b_2) = (a_1+ra_2) + (b_1+rb_2) \in A+B since A and B are ideals. Moreover, 0 = 0+0 \in A+B. By the submodule criterion, A+B \subseteq R is an ideal.

For all a \in A, a = a+0 \in A+B. So A \subseteq A+B, and similarly B \subseteq A+B.

Suppose A, B \subseteq C for some ideal C. Since C is an ideal, it is closed under sums, so that a+b \in C for all a \in A and b \in B. Thus A+B \subseteq C.

Note that A, B \subseteq (A,B), so that A+B \subseteq (A,B) by the previous point. Now let x \in (A,B); by definition, x = \sum r_ic_i for some r_i \in R and c_i \in A \cup B. Collecting terms in A and B, we have x = \alpha + \beta for some \alpha \in A and \beta \in B. Thus A+B = (A,B).

Suppose (A,B) = (1). By the previous point, A+B = (1), and in particular 1 = \alpha+\beta for some \alpha \in A and \beta \in B.

Suppose (A,B) = (1). By the previous point, there exist \alpha \in A and \beta \in B such that \alpha+\beta = 1. Now let \gamma \in C. We have \gamma = \gamma(\alpha+\beta) = \gamma\alpha + \gamma\beta. Since BC \subseteq A, \gamma\beta \in A, and so \gamma \in A. Thus C \subseteq A.
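The last two points can be checked numerically in R = \mathbb{Z} (an illustrative sketch; ext_gcd below is just the standard extended Euclidean algorithm, written out here as a helper):

```python
# Sketch: A = (4), B = (9) in Z. Since gcd(4, 9) = 1 we have
# (A, B) = (1), and the extended Euclidean algorithm produces
# alpha in A, beta in B with alpha + beta = 1.

from math import gcd

def ext_gcd(a, b):
    # returns (g, x, y) with a*x + b*y == g == gcd(a, b)
    if b == 0:
        return (a, 1, 0)
    g, x, y = ext_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

a, b = 4, 9
g, x, y = ext_gcd(a, b)
assert g == gcd(a, b) == 1
alpha, beta = a * x, b * y        # alpha in (4), beta in (9)
assert alpha + beta == 1

# Item 6 with C = (8): BC = (72) is contained in A = (4), and
# gamma = gamma*alpha + gamma*beta lands in A for gamma in C
gamma = 8
assert (gamma * alpha) % a == 0 and (gamma * beta) % a == 0
assert gamma % a == 0             # C = (8) is contained in A = (4)
```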

Basic properties of ideal products

Let R be a commutative ring with 1 and let A,B,C \subseteq R be ideals. Prove the following. (1) A(BC) = (AB)C, (2) AB = BA, (3) If a \in A and b \in B then ab \in AB, and (4) AB \subseteq A.


Let a \in A and b \in B. Certainly then by our equivalent characterization of ideal products, ab \in AB.

By this previous exercise, if z \in A(BC) then z = \sum_i a_i (\sum_j b_{i,j}c_{i,j}) = \sum_i \sum_j a_i b_{i,j} c_{i,j}. Each a_ib_{i,j} is in AB, so that (a_ib_{i,j})c_{i,j} \in (AB)C. So A(BC) \subseteq (AB)C. The reverse inclusion is similar.

Now let \sum a_ib_i \in AB be arbitrary. Since A is an ideal, each a_ib_i is in A, and so \sum a_ib_i \in A. Thus AB \subseteq A.

Since R is commutative, AB = \{\sum_{i=1}^n a_ib_i \ |\ n \in \mathbb{N}, a_i \in A, b_i \in B\} = \{\sum_{i=1}^n b_ia_i \ |\ n \in \mathbb{N}, a_i \in A, b_i \in B\} = BA.
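These facts can be verified computationally in a small example (an illustrative sketch with R = \mathbb{Z}/(30), A = (2), B = (3)):

```python
# Sketch: ideal products in R = Z/30 with A = (2), B = (3).
# AB is computed as all finite sums of products ab.

n = 30

def ideal(g):
    return {(g * r) % n for r in range(n)}

A, B = ideal(2), ideal(3)

def product(I, J):
    prods = {(x * y) % n for x in I for y in J}
    out = set(prods) | {0}
    # close under addition to capture all finite sums
    while True:
        new = {(p + q) % n for p in out for q in prods} - out
        if not new:
            return out
        out |= new

AB = product(A, B)
assert AB == product(B, A)            # AB = BA
assert AB <= A and AB <= B            # AB is contained in A (and B)
assert all((a * b) % n in AB for a in A for b in B)
assert AB == ideal(6)                 # here AB = (6)
```

Working in the finite ring \mathbb{Z}/(30) keeps the closure computation finite.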

An equivalent characterization of ideal products

In TAN, we defined the product of (finitely generated) ideals I = (A) and J = (B) to be I \star J = (ab \ |\ a \in A, b \in B). We can also define an ideal product IJ = \{\sum_T x_iy_i \ |\ T\ \mathrm{finite}, x_i \in I, y_i \in J\}. Prove that IJ = I \star J.


We did this previously.

The ideal product is well-defined

In TAN, the product of two finitely generated ideals (A) and (B) was defined to be (ab \ |\ a \in A, b \in B). Argue that this is a well-defined operator on the set of ideals of a fixed ring R.


We show that (ab \ |\ a \in A, b \in B) = (xy \ |\ x \in I, y \in J); the right-hand side depends only on the ideals I and J, not on the chosen generating sets A and B, so the product is well-defined. If I = (A) and J = (B), then each x \in I is an R-linear combination of the a_i \in A, and each y \in J a combination of the b_j \in B. Since R is a commutative ring, each xy is then an R-linear combination of products a_ib_j with a_i \in A and b_j \in B; that is, (xy \ |\ x \in I, y \in J) \subseteq (ab \ |\ a \in A, b \in B). The reverse inclusion follows from this previous exercise.

A more modern treatment would derive the generating set characterization of IJ from the more general definition.

In a ring with 1, an ideal which contains 1 is the entire ring

Let R be a ring with 1 and let I \subseteq R be an ideal. Show that if 1 \in I, then I = R.


Recall that if a \in R and b \in I, then ab \in I. Letting a be arbitrary and b = 1, then, we have R \subseteq I. Hence R = I.
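A tiny numerical illustration (a sketch with R = \mathbb{Z}/(12)): the ideal generated by 5 contains 1, since 5 \cdot 5 = 25 \equiv 1, and indeed it is all of R.

```python
# Sketch: in R = Z/12, the ideal I = (5) contains 1
# (5*5 = 25 = 1 mod 12), so I must be all of R.

n = 12
I = {(5 * r) % n for r in range(n)}
assert 1 in I                  # 5*5 % 12 == 1
assert I == set(range(n))      # hence I = R
```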