
Show that two sets of equations define the same subvariety of semigroups

Let E_1 = \{x = xyx\} and E_2 = \{x^2=x, xy = xzy\}. Show that these sets of equations define the same variety of semigroups. (That is, if S is a semigroup, then the equations in E_1 hold for all elements if and only if the equations in E_2 hold for all elements.)


Suppose S is a semigroup such that x = xyx for all x,y \in S. Let x,y,z \in S. Taking y = x and then y = x^2 in the identity, we have x = xxx and x = xx^2x, so that x = xxxx = (xxx)x = xx. So x = x^2 for all x. Moreover, since yx = (yx)z(yx) = yxzyx, we have xy = (xy)(xy) = x(yx)y = x(yxzyx)y = (xyx)z(yxy) = xzy for all x,y,z.

Now suppose S is a semigroup such that for all x,y,z \in S, x = x^2 and xy = xzy. In particular, for all x,y \in S, we have x = xx = xyx.
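As a sanity check, here is a small brute-force verification in Haskell (purely illustrative, not part of the argument): it enumerates every associative binary operation on the three-element set \{0,1,2\} and confirms that the equations of E_1 hold exactly when those of E_2 do. The helper names (els, tables, assoc, holdsE1, holdsE2) are my own.

    -- check that E1 and E2 cut out the same semigroups on {0,1,2}
    import Control.Monad (replicateM)

    els :: [Int]
    els = [0, 1, 2]

    -- all binary operations on els, encoded as 9-entry lookup tables
    tables :: [Int -> Int -> Int]
    tables = [ \x y -> t !! (x * 3 + y) | t <- replicateM 9 els ]

    assoc :: (Int -> Int -> Int) -> Bool
    assoc op = and [ op (op x y) z == op x (op y z) | x <- els, y <- els, z <- els ]

    holdsE1, holdsE2 :: (Int -> Int -> Int) -> Bool
    holdsE1 op = and [ x == op (op x y) x | x <- els, y <- els ]
    holdsE2 op = and [ op x x == x | x <- els ]
              && and [ op x y == op (op x z) y | x <- els, y <- els, z <- els ]

    main :: IO ()
    main = print $ and [ holdsE1 op == holdsE2 op | op <- tables, assoc op ]

Running this should print True, in agreement with the proof above.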

Exhibit a semigroup which has a minimal but no least proper congruence

Exhibit a semigroup S which has a minimal proper congruence, but no least proper congruence.


Let S = \{a,b,0\} be a three-element zero semigroup. As we showed in this previous exercise, every equivalence relation on S is a congruence.

The equivalences \varepsilon_1 = \{\{a,0\},\{b\}\} and \varepsilon_2 = \{\{b,0\},\{a\}\} (abusing notation; we give the classes rather than the pairs) are certainly minimal proper congruences. But \varepsilon_1 \cap \varepsilon_2 = \Delta, and a least proper congruence would have to be contained in both \varepsilon_1 and \varepsilon_2, hence in \Delta. So S has no least proper congruence.
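For concreteness, here is a small computational sketch of these claims in Haskell; the encoding (0 for the zero element, 1 for a, 2 for b) and the helper names are mine.

    els :: [Int]
    els = [0, 1, 2]            -- 0 is the zero element, 1 = a, 2 = b

    mult :: Int -> Int -> Int
    mult _ _ = 0               -- zero semigroup: every product is 0

    -- an equivalence relation, given by its partition into classes
    related :: [[Int]] -> Int -> Int -> Bool
    related p x y = any (\c -> x `elem` c && y `elem` c) p

    isCongruence :: [[Int]] -> Bool
    isCongruence p = and [ related p (mult x z) (mult y z) && related p (mult z x) (mult z y)
                         | x <- els, y <- els, z <- els, related p x y ]

    -- all five partitions of a three-element set
    partitions :: [[[Int]]]
    partitions = [ [[0],[1],[2]], [[0,1],[2]], [[0,2],[1]], [[1,2],[0]], [[0,1,2]] ]

    main :: IO ()
    main = do
      print (all isCongruence partitions)     -- True: every equivalence is a congruence
      print (and [ (related [[0,1],[2]] x y && related [[0,2],[1]] x y) == (x == y)
                 | x <- els, y <- els ])      -- True: the two congruences meet in the diagonal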

Transitivity of subdirect products

Let S be a semigroup. Suppose S is a subdirect product of the family \{S_a\}_A, and that for each a \in A, S_a is a subdirect product of the family \{S_{b,a}\}_{B_a}. Show that S is a subdirect product of the family \{S_{b,a}\}_{B_a,A}. (Roughly speaking, subdirect productness is transitive.)


By definition, we have injective semigroup homomorphisms \varphi : S \rightarrow \prod_A S_a and \psi_a : S_a \rightarrow \prod_{B_a} S_{b,a} which are surjective in each component. Now for each a \in A and b \in B_a, \zeta_{b,a} = \pi_{b,a} \circ \psi_a \circ \pi_a \circ \varphi : S \rightarrow S_{b,a} is a surjective homomorphism (being the composite of the surjective homomorphisms \pi_a \circ \varphi and \pi_{b,a} \circ \psi_a).

Consider the induced family of congruences \sigma_{b,a} = \mathsf{ker}\ \zeta_{b,a}. Let x,y \in S be distinct. Since \varphi is injective, there exists a \in A such that (\pi_a \circ \varphi)(x) \neq (\pi_a \circ \varphi)(y), and since \psi_a is injective, there then exists b \in B_a such that \zeta_{b,a}(x) \neq \zeta_{b,a}(y). So the congruences \sigma_{b,a} separate the elements of S. By Proposition II.1.4, S is a subdirect product of the family S/\sigma_{b,a}, and each S/\sigma_{b,a} is naturally isomorphic to S_{b,a} by the First Isomorphism Theorem. So S is a subdirect product of the S_{b,a}.

An equivalent of subdirect irreducibility for semigroups

Let S be a semigroup. Prove that S is a subdirect product of the family of semigroups \{S_a\}_A indexed by A if and only if for every a \in A, there is a surjective semigroup homomorphism \varphi_a : S \rightarrow S_a such that the family of induced congruences (that is, the kernels of the \varphi_a) separate the elements of S.


First, suppose \psi : S \rightarrow \prod_A S_a is a subdirect product; then by definition the composites \varphi_a = \pi_a \circ \psi : S \rightarrow S_a are surjective semigroup homomorphisms. The family of kernels of the \varphi_a separate the elements of S by Proposition II.1.4.

Conversely, if we have such a family of homomorphisms, then by Proposition II.1.4, S is a subdirect product of the family \{S/\mathsf{ker}\ \varphi_a\}_A. By the First Isomorphism Theorem for semigroups (proved here), S/\mathsf{ker}\ \varphi_a \cong S_a, so that S is a subdirect product of the S_a.

Equivalent characterizations of subdirect irreducibility

Let S be a nontrivial semigroup. Show that the following are equivalent.

  1. S is subdirectly irreducible.
  2. The set of proper congruences on S is closed under arbitrary intersections.
  3. S has a \subseteq-least proper congruence.

We will follow the strategy (1) \Rightarrow (2) \Rightarrow (3) \Rightarrow (1). Note that the hypothesis that S is nontrivial is needed to ensure that the set \mathsf{Q}(S) of proper (that is, nonidentity) congruences on S is nonempty; indeed, the universal congruence S \times S is then proper.

(1) \Rightarrow (2): Suppose S is a subdirectly irreducible semigroup. Let Q \subseteq \mathsf{Q}(S) be a nonempty set of proper congruences on S, and consider the product \prod_{\sigma \in Q} S/\sigma.

Suppose \bigcap Q = \Delta. (That is, suppose the intersection over Q is the identity congruence.) By II.1.3, the family of congruences in Q separate S. By II.1.4, S is a subdirect product of \{S/\sigma\}_Q via the map s \mapsto ([s]_\sigma). Since S is subdirectly irreducible, for some congruence \sigma \in Q the map s \mapsto [s]_\sigma is an isomorphism. Clearly then \sigma = \Delta, contradicting the properness of \sigma. So \bigcap Q \supsetneq \Delta; that is, \bigcap Q is again a proper congruence.

(2) \Rightarrow (3): Let \mathsf{Q}(S) be the set of all proper (i.e. nonidentity) congruences on S. By our hypothesis, \tau = \bigcap \mathsf{Q}(S) is again a proper congruence; that is, \tau \in \mathsf{Q}(S). Certainly if \sigma \in \mathsf{Q}(S) then \tau \subseteq \sigma, so that \tau is \subseteq-least among the elements of \mathsf{Q}(S).

(3) \Rightarrow (1): Suppose S has a \subseteq-least proper congruence \tau. Suppose further that \varphi : S \rightarrow \prod_B S_\beta is a subdirect product; that is, \varphi is injective and each \pi_\beta \circ \varphi is surjective. By II.1.4, the family of congruences \mathsf{ker}(\pi_\beta \circ \varphi) separate the elements of S, and by II.1.3, \bigcap_\beta \mathsf{ker}(\pi_\beta \circ \varphi) = \Delta. If every \mathsf{ker}(\pi_\beta \circ \varphi) were proper, then each would contain \tau, and so their intersection would contain \tau \supsetneq \Delta, a contradiction. So \mathsf{ker}(\pi_\beta \circ \varphi) = \Delta for some \beta; that is, \pi_\beta \circ \varphi : S \rightarrow S_\beta is injective as well as surjective, hence an isomorphism. So S is subdirectly irreducible.

Characterize the units and zero divisors in a multiplicative semigroup of square matrices over a field

Let F be a field, let n \geq 1, and let S = \mathsf{Mat}_n(F) denote the multiplicative semigroup of n \times n matrices over F. Characterize the units and zero divisors in S. Does S have a kernel?


In this previous exercise, we showed that a matrix is invertible if and only if it is row equivalent to the identity matrix. In this previous exercise, we showed that a matrix is row equivalent to the identity if and only if it has nonzero determinant (remember, this is over a field). So a matrix is invertible if and only if it has nonzero determinant.

Now suppose a matrix A has determinant 0. Then as a linear transformation, A is not injective. So there exists a nonzero column matrix x such that Ax = 0. Then A[x|\cdots|x] = 0, and in particular A is a zero divisor.

That is, every matrix is either a unit or a zero divisor, and to distinguish the two cases, we need only look at the determinant.
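For a concrete illustration (not part of the proof), take the singular matrix A = [[1,2],[2,4]] over \mathbb{Q}. The column vector x = (2,-1)^T satisfies Ax = 0, and the nonzero matrix [x|x] is a right annihilator of A. A quick check in Haskell, with ad hoc names:

    import Data.List (transpose)

    -- naive matrix product; a matrix is a list of rows
    matMul :: [[Integer]] -> [[Integer]] -> [[Integer]]
    matMul p q = [ [ sum (zipWith (*) row col) | col <- transpose q ] | row <- p ]

    a, b :: [[Integer]]
    a = [[1, 2], [2, 4]]       -- det a = 0, so a is not a unit
    b = [[2, 2], [-1, -1]]     -- the matrix [x|x] with x = (2,-1)

    main :: IO ()
    main = print (matMul a b)  -- [[0,0],[0,0]], so a is a zero divisor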

Note that S has a zero, namely the zero matrix, and that \{0\} is itself a two-sided ideal. Every ideal in S contains zero, so the kernel of S (i.e. the intersection of all two-sided ideals) is nonempty; in fact it is exactly \{0\}. The more interesting question is what the least nonzero ideal of S looks like, which we work out next.

Suppose J \subseteq S is an ideal. Note that if J contains a unit A, then for all B \in S we have BA^{-1}A = B \in J, so that J = S. So every proper ideal of S is contained in the set Z of matrices with determinant 0.

Let J \subseteq S be an ideal, and suppose there exists a nonzero matrix A \in J. Suppose the (h,k) entry of A is nonzero. By left- and right- multiplying by appropriate permutation matrices, we can construct matrices A_{i,j} whose (i,j) entry is nonzero for all 1 \leq i,j \leq n. By this previous exercise, left- and right-multiplying by appropriate matrices having only one nonzero entry, we see that J contains matrices B_{i,j} having a nonzero entry in (i,j) and 0 elsewhere for any choice of (i,j). Finally, multiplying by an appropriate scalar matrix \alpha I, in fact J contains every matrix of the form E_{h,k} having a 1 in entry (h,k) and 0 elsewhere for all 1 \leq h,k \leq n. In particular, J contains the ideal generated by such matrices.

So every nonzero ideal of S contains the ideal generated by the matrices E_{h,k}; in other words, this ideal is the least nonzero ideal of S. In fact, since each E_{h,k} is row- and column-equivalent to E_{1,1}, the least nonzero ideal of S is the principal ideal generated by the matrix E_1 having a 1 in entry (1,1) and 0 elsewhere.

Recall that every square matrix over F is row- and column-equivalent to a diagonal matrix, and that two matrices A and B are row- and column-equivalent precisely when B = PAQ for some invertible matrices P and Q. Since J(E_1) is an ideal and P and Q are invertible, A \in J(E_1) if and only if any diagonal matrix to which A is row- and column-equivalent is in J(E_1). Moreover, we can assume that these diagonal matrices have only 1 or 0 on the main diagonal.

We claim that a diagonal matrix which has more than one nonzero entry on the main diagonal is not in J(E_1). To see this, recall that J(E_1) = E_1 \cup SE_1 \cup E_1S \cup SE_1S. If D is a diagonal matrix with more than one nonzero entry on the main diagonal, then certainly D \neq E_1. As we saw in this previous exercise, every element of SE_1 and E_1S has at most one nonzero entry on the main diagonal. So it remains to be seen that D \notin SE_1S. Suppose A = [a_{i,j}] and B = [b_{i,j}] and consider AE_1B = [\sum_{k,\ell} a_{i,k}e_{k,\ell}b_{\ell,j}] = [a_{i,1}b_{1,j}]. If this matrix is diagonal with a nonzero diagonal entry a_{i,1}b_{1,i}, then a_{i,1} and b_{1,i} are nonzero; since the off-diagonal entries a_{i,1}b_{1,j} (for j \neq i) and a_{t,1}b_{1,i} (for t \neq i) must vanish, we get b_{1,j} = 0 for all j \neq i and a_{t,1} = 0 for all t \neq i. Hence every other diagonal entry a_{t,1}b_{1,t} is zero, and the matrix has at most one nonzero entry on the main diagonal.

So A is in the least nonzero ideal of S if and only if it is row- and column-equivalent to a diagonal matrix with at most one nonzero entry; that is, if and only if A has rank at most 1.

Every semigroup is isomorphic to a multiplicative subsemigroup of some ring

Show that every semigroup is isomorphic to a multiplicative subsemigroup of some ring. Exhibit a semigroup which is not isomorphic to the (entire) multiplicative semigroup of any ring.


Let S be a semigroup and let R be a ring with identity 1 \neq 0 (for instance, R = \mathbb{Z}). Let R[S] = \{ r : S \rightarrow R\ |\ r_s = 0\ \mathrm{for\ all\ but\ finitely\ many}\ s \in S\}. That is, R[S] is the set of all functions S \rightarrow R which vanish everywhere except for a finite subset of S.

Given f,g \in R[S], we define f+g and f \cdot g as follows.

  1. (f+g)(s) = f(s) + g(s)
  2. (f \cdot g)(s) = \sum_{tu = s} f(t)g(u)

In the definition of \cdot, if the summation is empty, then (f \cdot g)(s) = 0 as usual. Note that the sum is always finite, since only pairs (t,u) with f(t) \neq 0 and g(u) \neq 0 contribute and f and g have finite support; for the same reason f \cdot g (and clearly also f+g) again has finite support, so both operations take values in R[S].

We claim that (R[S], +, \cdot) is a ring.

  1. (+ is associative) For all s \in S, we have ((f+g)+h)(s) = (f+g)(s) + h(s) = f(s) + g(s) + h(s) = f(s) + (g+h)(s) = (f+(g+h))(s). So (f+g)+h = f+(g+h).
  2. (A +-identity exists) Define 0 : S \rightarrow R by 0(s) = 0_R for all s \in S. Now (f+0)(s) = f(s) + 0(s) = f(s) + 0 = f(s), so that f+0 = f. Similarly, 0 + f = f.
  3. (Every element has a +-inverse) Given f, define \overline{f} : S \rightarrow R by \overline{f}(s) = -f(s). Then (f + \overline{f})(s) = f(s) + \overline{f}(s) = f(s) - f(s) = 0 = 0(s). So f + \overline{f} = 0. Similarly, \overline{f} + f = 0. We will denote the additive inverse of f by -f.
  4. (+ is commutative) We have (f+g)(s) = f(s) + g(s) = g(s) + f(s) = (g+f)(s).
  5. (\cdot is associative)
    ((f \cdot g) \cdot h)(s)  =  \displaystyle\sum_{tu = s} (f \cdot g)(t) h(u)
     =  \displaystyle\sum_{tu = s} \left(\displaystyle\sum_{vw = t} f(v)g(w)\right) h(u)
     =  \displaystyle\sum_{tu = s} \displaystyle\sum_{vw = t} f(v)g(w)h(u)
     =  \displaystyle\sum_{vwu = s} f(v)g(w)h(u)
     =  \displaystyle\sum_{vt = s} \displaystyle\sum_{wu = t} f(v) g(w) h(u)
     =  \displaystyle\sum_{vt = s} f(v) \left( \displaystyle\sum_{wu = t} g(w) h(u) \right)
     =  \displaystyle\sum_{vt = s} f(v) (g \cdot h)(t)
     =  (f \cdot (g \cdot h))(s)

    So f \cdot (g \cdot h) = (f \cdot g) \cdot h.

  6. (\cdot distributes over + from the left)
    (f \cdot (g + h))(s)  =  \displaystyle\sum_{tu = s} f(t) (g+h)(u)
     =  \displaystyle\sum_{tu = s} f(t)(g(u) + h(u))
     =  \displaystyle\sum_{tu = s} \left( f(t) g(u) + f(t) h(u) \right)
     =  \displaystyle\sum_{tu = s} f(t) g(u) + \displaystyle\sum_{tu = s} f(t)h(u)
     =  (f \cdot g)(s) + (f \cdot h)(s)
     =  ((f \cdot g) + (f \cdot h))(s)

    So f \cdot (g + h) = (f \cdot g) + (f \cdot h)

  7. (\cdot distributes over + from the right) Similar to the proof for left distributivity.
  8. (If S has a left identity, then R[S] has a left identity) If e \in S is a left identity, define \overline{e} : S \rightarrow R by \overline{e}(e) = 1 and \overline{e}(s) = 0 if s \neq e. Now (\overline{e} \cdot f)(s) = \sum_{tu = s} \overline{e}(t) f(u). If t \neq e, then \overline{e}(t) = 0, and if t = e, then u = s. (The pair (e,s) is certainly in the index set of the summation.) So this is equal to f(s). Hence \overline{e} \cdot f = f.
  9. (If S has a right identity, then R[S] has a right identity) Similar to the proof for left identity.

So R[S] is a ring. Rings of this type are called semigroup rings, and their elements are typically written as ‘polynomials’ in the elements of S with coefficients from R. Addition and multiplication are then carried out as usual for polynomials: multiply the semigroup elements (preserving their order) and the corresponding coefficients, then collect like terms.

Given s \in S, we define \varphi_s : S \rightarrow R by \varphi_s(t) = 1 if t = s and 0 otherwise. We claim that the map \Phi : S \rightarrow R[S] given by s \mapsto \varphi_s is an injective semigroup homomorphism. Indeed, \Phi(st)(a) = \varphi_{st}(a) = 1 if a = st and 0 otherwise. Now (\varphi_s \cdot \varphi_t)(a) = \sum_{uv = a} \varphi_s(u) \varphi_t(v). If u \neq s, then \varphi_s(u) = 0, and if v \neq t, then \varphi_t(v) = 0. So (\varphi_s \cdot \varphi_t)(a) = 1 if a = st and 0 otherwise. Hence \Phi(st) = \Phi(s) \cdot \Phi(t). Moreover, if s \neq t, then \varphi_s(s) = 1 \neq 0 = \varphi_t(s), so \Phi(s) \neq \Phi(t); thus \Phi is injective.
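The construction can also be played with directly. The following Haskell sketch (my own encoding, not from the text) takes R = \mathbb{Z} and, for S, the free semigroup of nonempty strings under concatenation; an element of R[S] is stored as a Map from words to their nonzero coefficients, and the convolution sum is computed by running over the two finite supports.

    import qualified Data.Map.Strict as M

    type Element = M.Map String Integer        -- a finitely supported function S -> R

    add :: Element -> Element -> Element
    add f g = M.filter (/= 0) (M.unionWith (+) f g)

    -- (f . g)(s) = sum of f(t) g(u) over the factorizations s = t ++ u
    mul :: Element -> Element -> Element
    mul f g = M.filter (/= 0) $ M.fromListWith (+)
      [ (t ++ u, c * d) | (t, c) <- M.toList f, (u, d) <- M.toList g ]

    -- the embedding s |-> phi_s described above
    phi :: String -> Element
    phi s = M.singleton s 1

    main :: IO ()
    main = do
      print (mul (phi "x") (phi "y") == phi "xy")       -- True: Phi(st) = Phi(s)Phi(t)
      print (mul (add (phi "x") (phi "y")) (phi "z"))   -- the 'polynomial' xz + yz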

So every semigroup can be embedded in the multiplicative semigroup of a ring. However, not every semigroup is isomorphic to the full multiplicative semigroup of some ring, as we now argue. Note that the multiplicative semigroup of any ring has a zero, namely the ring's zero element. So any semigroup without a zero cannot be the multiplicative semigroup of a ring; for instance, a nontrivial group G has no zero element (if zg = z for all g, then cancellation forces g to be the identity), so G is not the multiplicative semigroup of any ring.

Exhibit a 0-simple semigroup

Give an example of a 0-simple semigroup.


Recall that a semigroup S with zero is called 0-simple if S^2 \neq 0 and S has no ideals other than 0 and S.

Let G be a group, and consider the semigroup S = G_0. (Recall that if T is a semigroup, T_0 is the semigroup obtained by attaching a new element 0 to T which acts like a zero.) So S is a semigroup with zero. Certainly S^2 \neq 0, since for instance e \cdot e = e \neq 0. Now let J \subseteq S be an ideal. If J \neq 0, then there exists an element u \in J such that u \neq 0. Now Gu = G (since G is a group). So G = Gu \subseteq SJ \subseteq J, and certainly 0 \in J. So J = S. Thus S = G_0 is 0-simple.

The order of elements in a free semigroup

Let X be a set and let S = \mathsf{F}(X) denote the free semigroup on X. Find the orders of elements in S. Does S have a kernel?


To answer this question the Right Way, we really need to have a more rigorous definition for \mathsf{F}(X). To accomplish this, we will introduce ‘lists’. The next several paragraphs borrow heavily from the book “Set Theory” by Monk and from the paper “Functional Programming with Bananas, Lenses, Envelopes and Barbed Wire” by Meijer, Fokkinga, and Paterson (primarily from the latter, with a bit of influence from the former). Both of those publications are very good.

This post was originally written for an experimental blog I started writing a while ago but quickly abandoned, so the style is a bit different.

One of the most basic “data structures” we use is the list; that is, a finite sequence of elements from some set. In this post we will give a precise definition for lists and prove some basic facts about them. The definition of list that we will use might seem strange, but it lends itself well to recursive proofs and to implementation in software. This will allow us to implement our proofs in a computer program, reasonably confident that the program does as we expect.

Definition: Let A be a set. We define a function L_A on \mathbb{N} recursively as follows: L_A(0) = \{\emptyset\} and L_A(k+1) = A \times L_A(k). Then we define the set of lists in A to be the set \mathcal{L}(A) = \bigcup_{k \in \mathbb{N}} L_A(k).

Note that for all sets A, \emptyset \in \mathcal{L}(A). We will call this the empty list and denote it by \varnothing. By definition, every element of \mathcal{L}(A) is either \varnothing or of the form (a,\ell) for some a \in A and \ell \in \mathcal{L}(A).

Theorem (Basic facts about \mathcal{L}(A)):

  1. L_A(k) \neq \emptyset for all nonzero k \in \mathbb{N} if and only if A \neq \emptyset.
  2. If A is nonempty and \ell \in \mathcal{L}(A), then there exists a unique k \in \mathbb{N} such that \ell \in L_A(k). We call this unique k the length of \ell and denote it \mathsf{len}(\ell).
  3. If \ell \in \mathcal{L}(A) with \mathsf{len}(\ell) = k+1, then \ell = (a,\ell^\prime) for some a \in A and \ell^\prime \in \mathcal{L}(A) with \mathsf{len}(\ell^\prime) = k.

Proof:

  1. (\Rightarrow) Since in particular L_A(1) = A \times \{\varnothing\} \neq \emptyset, we have A \neq \emptyset. (\Leftarrow) Suppose to the contrary, and choose k minimal such that L_A(k+1) is empty. Then A \times L_A(k) is empty, and since A is nonempty, L_A(k) is empty; this contradicts either L_A(0) = \{\varnothing\} (if k = 0) or the minimality of k (if k > 0).
  2. Let S \subseteq \mathcal{L}(A) be the set of all lists which are not contained in a unique L_A(k). Suppose S is not empty. Note that each \ell \in S is contained in a smallest L_A(k), say L_A(k_\ell). Now let k \in \mathbb{N} be the least element among the k_\ell, and choose \ell \in S with k_\ell = k. Then we also have \ell \in L_A(n) for some n > k. If k = 0, then \ell = \varnothing since L_A(0) = \{\varnothing\}. On the other hand, n = m+1 for some m, so \ell \in A \times L_A(m); this is a contradiction because no element of A \times L_A(m) is the empty set. Suppose now that k \neq 0; then k = t+1 and n = m+1, and \ell = (a,\ell_1) where \ell_1 \in L_A(t) \cap L_A(m) with t \neq m. Then \ell_1 \in S and k_{\ell_1} \leq t < k, contradicting the minimality of k. So S is empty, and every list lies in a unique L_A(k).
  3. This follows from the definition of L_A(k+1). \square

Note that by definition, every list has finite length. Elements of \mathcal{L}(A) are ordered pairs which may be nested on the right; for instance, (1,\varnothing), (2,(1,\varnothing)), and (4,(2,(1,\varnothing))) are lists. To cut down on the number of parentheses we have to write, we will frequently use bracket notation as a shorthand. For instance, [1] means (1,\varnothing) and [4,2,1] means (4,(2,(1,\varnothing))).
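The definition is easy to transcribe into code. The following Haskell sketch (my naming, purely illustrative) mirrors it: a list over a is either the empty list or a pair consisting of a head and a shorter list.

    data List a = Empty | Cons a (List a)
      deriving (Show, Eq)

    -- the length function, following the recursive structure of the definition
    len :: List a -> Integer
    len Empty      = 0
    len (Cons _ l) = 1 + len l

    -- the list written [4,2,1] in the bracket notation above
    example :: List Integer
    example = Cons 4 (Cons 2 (Cons 1 Empty))

    main :: IO ()
    main = print (len example)   -- 3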

One of the more useful things we can do with a list is deconstruct it. Functions that do this in a certain well-behaved way are called catamorphisms, and they have a nice uniqueness property.

Theorem (List catamorphisms): Let A and B be sets with A nonempty, let e \in B, and let \oplus : A \times B \rightarrow B be a function. Then there exists a unique function h : \mathcal{L}(A) \rightarrow B such that h(\varnothing) = e and h(a,\ell) = a \oplus h(\ell). We will denote this function by (\!|\oplus,e|\!), and call functions of this kind list catamorphisms.

Proof:
(Existence) Define a function h_k : L_A(k) \rightarrow B recursively as follows: h_0(\varnothing) = e and h_{k+1}(a,\ell) = a \oplus h_k(\ell). Since the L_A(k) are pairwise disjoint (every list has a unique length), h = \bigcup_{k \in \mathbb{N}} h_k is a function from \mathcal{L}(A) to B. Moreover, we have h(\varnothing) = h_0(\varnothing) = e, and if (a,\ell) \in L_A(k+1), then h(a,\ell) = h_{k+1}(a,\ell) = a \oplus h_k(\ell) = a \oplus h(\ell). (Uniqueness) Suppose \theta : \mathcal{L}(A) \rightarrow B is a function such that \theta(\varnothing) = e and \theta(a,\ell) = a \oplus \theta(\ell). We prove that \theta = h by induction on length. Certainly \theta(\varnothing) = e = h(\varnothing). Now suppose that for all lists \ell of length k we have \theta(\ell) = h(\ell), and let \tau = (a,\ell) have length k+1. Then \theta(a,\ell) = a \oplus \theta(\ell) = a \oplus h(\ell) = h(a,\ell). Thus h is unique. \square

A catamorphism takes a binary operator and an element of B, and returns a function which takes lists over A and deconstructs them into a single element of B using the operator. More literally, a catamorphism takes a list and replaces each comma with the operator \oplus and the empty list with the fixed element e. This may seem strange at first, but as we will see, several useful functions which take lists as input can be written as a catamorphism or a chain of catamorphisms composed together. One advantage of this is that certain functional transformation rules can be used to rewrite a given function to be “more efficient”, whatever that means. Two basic rules are called the Fusion law and the Split law; roughly speaking, the Fusion law says when the composition of a function and a catamorphism is again a catamorphism, and the Split law says that catamorphisms interact with direct products in a nice way.
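In Haskell, where the analogous built-in function on ordinary lists is foldr, the catamorphism (\!|\oplus,e|\!) can be written directly against the List type sketched earlier; the definitions below are illustrative.

    data List a = Empty | Cons a (List a)

    -- the catamorphism (|op, e|): replace Cons by op and Empty by e
    cata :: (a -> b -> b) -> b -> List a -> b
    cata _  e Empty      = e
    cata op e (Cons a l) = a `op` cata op e l

    -- two familiar functions written as catamorphisms
    len :: List a -> Integer
    len = cata (\_ n -> n + 1) 0

    total :: List Integer -> Integer
    total = cata (+) 0

    main :: IO ()
    main = print (len l, total l)   -- (3,7)
      where l = Cons 4 (Cons 2 (Cons 1 Empty))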

Theorem:

  1. (Fusion law) Let A, B, and C be nonempty sets, let e_B \in B and e_C \in C, and let \oplus : A \times B \rightarrow B and \otimes : A \times C \rightarrow C. Finally, let f : B \rightarrow C be a function. Then f \circ (\!|\oplus,e_B|\!) = (\!|\otimes,e_C|\!) if and only if f(e_B) = e_C and for all a \in A and b \in B such that b \in \mathsf{im}\ (\!|\oplus,e_B|\!), f(a \oplus b) = a \otimes f(b).
  2. (Split law) Let A, B, and C be nonempty sets, let e_B \in B and e_C \in C, and let \oplus : A \times B \rightarrow B and \otimes : A \times C \rightarrow C. Define \star : A \times (B \times C) \rightarrow B \times C by a \star (b,c) = (a \oplus b, a \otimes c), and let \pi_1 and \pi_2 denote the coordinate projections. Then \pi_1 \circ (\!|\star,(e_B,e_C)|\!) = (\!|\oplus,e_B|\!) and \pi_2 \circ (\!|\star,(e_B,e_C)|\!) = (\!|\otimes,e_C|\!).

Proof:

  1. (\Rightarrow) Note that e_C = (\!|\otimes,e_C|\!)(\varnothing) = (f \circ (\!|\oplus,e_B|\!))(\varnothing) = f((\!|\oplus,e_B|\!)(\varnothing)) = f(e_B). Moreover, if a \in A and b = (\!|\oplus,e_B|\!)(\ell), then f(a \oplus b) = f(a \oplus (\!|\oplus,e_B|\!)(\ell)) = f((\!|\oplus,e_B|\!)(a,\ell)) = (f \circ (\!|\oplus,e_B|\!))(a,\ell) = (\!|\otimes,e_C|\!)(a,\ell) = a \otimes (\!|\otimes,e_C|\!)(\ell) = a \otimes (f \circ (\!|\oplus,e_B|\!))(\ell) = a \otimes f((\!|\oplus,e_B|\!)(\ell)) = a \otimes f(b). (\Leftarrow) Note that (f \circ (\!|\oplus,e_B|\!))(\varnothing) = f((\!|\oplus,e_B|\!)(\varnothing)) = f(e_B) = e_C and that (f \circ (\!|\oplus,e_B|\!))(a,\ell) = f((\!|\oplus,e_B|\!)(a,\ell)) = f(a \oplus (\!|\oplus,e_B|\!)(\ell)) = a \otimes f((\!|\oplus,e_B|\!)(\ell)) = a \otimes (f \circ (\!|\oplus,e_B|\!))(\ell). By the uniqueness of catamorphisms, f \circ (\!|\oplus,e_B|\!) = (\!|\otimes,e_C|\!).
  2. We proceed by induction on the length of the input \ell. If \ell = \varnothing, then (\!|\star,(e_B,e_C)|\!)(\varnothing) = (e_B,e_C) = ((\!|\oplus,e_B|\!)(\varnothing), (\!|\otimes,e_C|\!)(\varnothing)). Suppose now that the result holds for all lists of length k, and suppose \ell = (a, \ell^\prime) has length k+1. Then (\!|\star,(e_B,e_C)|\!)(a,\ell^\prime) = a \star (\!|\star,(e_B,e_C)|\!)(\ell^\prime) = a \star ((\!|\oplus,e_B|\!)(\ell^\prime), (\!|\otimes,e_C|\!)(\ell^\prime)) = (a \oplus (\!|\oplus,e_B|\!)(\ell^\prime), a \otimes (\!|\otimes,e_C|\!)(\ell^\prime)) = ((\!|\oplus,e_B|\!)(a,\ell^\prime), (\!|\otimes,e_C|\!)(a,\ell^\prime)). \square

The fusion and split laws for catamorphisms will be useful both for proving theorems about functions defined in terms of catamorphisms (as we will see) and for developing efficient programs for performing explicit computations.
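For instance, the Split law is what justifies computing the length and the sum of a list of integers in a single traversal. A small check, using the same illustrative List and cata as before:

    data List a = Empty | Cons a (List a)

    cata :: (a -> b -> b) -> b -> List a -> b
    cata _  e Empty      = e
    cata op e (Cons a l) = a `op` cata op e l

    -- step functions for length and sum, and their combination via the star operator
    stepLen, stepSum :: Integer -> Integer -> Integer
    stepLen _ n = n + 1
    stepSum = (+)

    star :: Integer -> (Integer, Integer) -> (Integer, Integer)
    star a (b, c) = (stepLen a b, stepSum a c)

    main :: IO ()
    main = do
      let l = Cons 4 (Cons 2 (Cons 1 Empty))
      print (cata star (0, 0) l)                   -- (3,7), one traversal
      print (cata stepLen 0 l, cata stepSum 0 l)   -- (3,7), two traversals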

Definition: (List Concatenation) Let A be a nonempty set. We define a binary operator \ddagger : \mathcal{L}(A) \times \mathcal{L}(A) \rightarrow \mathcal{L}(A) recursively as follows: \varnothing \ddagger \ell = \ell and (a,\ell^\prime) \ddagger \ell = (a, \ell^\prime \ddagger \ell). This operator is called concatenation.

Lemma: \ddagger is associative. Proof: Let (\alpha,\beta,\gamma) be a triple of lists in \mathcal{L}(A). We proceed by induction on the length of \alpha. For the base case, we have (\varnothing \ddagger \beta) \ddagger \gamma = \beta \ddagger \gamma = \varnothing \ddagger (\beta \ddagger \gamma). For the inductive step, suppose \ddagger is associative on all triples (\alpha^\prime,\beta,\gamma) where \alpha^\prime has length k, and that \alpha has length k+1. Now ((a,\alpha^\prime) \ddagger \beta) \ddagger \gamma = (a, \alpha^\prime \ddagger \beta) \ddagger \gamma = (a, (\alpha^\prime \ddagger \beta) \ddagger \gamma) = (a, \alpha^\prime \ddagger (\beta \ddagger \gamma)) = (a,\alpha^\prime) \ddagger (\beta \ddagger \gamma). So \ddagger is associative. \square

Lemma: \varnothing \ddagger \alpha = \alpha \ddagger \varnothing = \alpha for all lists \alpha. Proof: \varnothing \ddagger \alpha = \alpha by definition. To see the second equality, we use induction on the length of \alpha. Certainly \varnothing \ddagger \varnothing = \varnothing. If the result holds for \alpha^\prime, then (a,\alpha^\prime) \ddagger \varnothing = (a,\alpha^\prime \ddagger \varnothing) = (a,\alpha^\prime) as desired. \square
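Continuing the illustrative Haskell sketch, concatenation transcribes directly. Note also that for a fixed right argument \beta, the map \alpha \mapsto \alpha \ddagger \beta is exactly the catamorphism (\!|\mathrm{Cons},\beta|\!); the check below is on sample lists only.

    data List a = Empty | Cons a (List a)
      deriving (Show, Eq)

    cata :: (a -> b -> b) -> b -> List a -> b
    cata _  e Empty      = e
    cata op e (Cons a l) = a `op` cata op e l

    -- concatenation, exactly as in the definition above
    cat :: List a -> List a -> List a
    cat Empty      l = l
    cat (Cons a k) l = Cons a (cat k l)

    -- the same operation, written as a catamorphism in its left argument
    cat' :: List a -> List a -> List a
    cat' alpha beta = cata Cons beta alpha

    main :: IO ()
    main = do
      let a = Cons 1 (Cons 2 Empty) :: List Integer
          b = Cons 3 Empty
      print (cat a b == cat' a b)                   -- True
      print (cat (cat a b) a == cat a (cat b a))    -- True: associativity on one example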

So if X is nonempty, the set \mathcal {L}(X) with the operator \ddagger is a semigroup, and in fact is a monoid with identity element \varnothing. We define the free monoid on X to be \mathsf{F}_1(X) = (\mathcal{L}(X), \ddagger).

Lemma: \mathsf{len}(\alpha \ddagger \beta) = \mathsf{len}(\alpha) + \mathsf{len}(\beta). Proof: By induction on the length of \alpha. Certainly \mathsf{len}(\varnothing \ddagger \beta) = \mathsf{len}(\beta) = \mathsf{len}(\varnothing) + \mathsf{len}(\beta). If the result holds for \alpha^\prime, then \mathsf{len}((a,\alpha^\prime) \ddagger \beta) = \mathsf{len}((a, \alpha^\prime \ddagger \beta)) = 1 + \mathsf{len}(\alpha^\prime \ddagger \beta) = 1 + \mathsf{len}(\alpha^\prime) + \mathsf{len}(\beta) = \mathsf{len}((a,\alpha^\prime)) + \mathsf{len}(\beta). \square

Lemma: In \mathsf{F}_1(X), if \alpha \ddagger \beta = \varnothing then \alpha = \beta = \varnothing. Proof: Comparing lengths, \mathsf{len}(\alpha) + \mathsf{len}(\beta) = 0, so \mathsf{len}(\alpha) = \mathsf{len}(\beta) = 0 and \alpha = \beta = \varnothing. \square

By this lemma, the set \mathsf{F}(X) = \mathsf{F}_1(X) \setminus \{\varnothing\} of nonempty lists is closed under \ddagger, and so is a subsemigroup. We call \mathsf{F}(X) the free semigroup on X.

We are finally in a position to answer the original problem: what are the orders of elements in \mathsf{F}(X)? Since every element of \mathsf{F}(X) has positive length, and since length is additive over products, the powers of any element \alpha \in \mathsf{F}(X) have monotonically increasing length. That is, the subsemigroup generated by \alpha is infinite. So every element of \mathsf{F}(X) has infinite order.

Next we consider the kernel of \mathsf{F}(X). Note that for any word \alpha of length k, every element of the two-sided ideal J(\alpha) has length at least k. As a consequence, for any element \alpha \in \mathsf{F}(X) of length k, there is an ideal in which every element has length at least k+1 (for instance, the set of all words of length at least k+1), and this ideal does not contain \alpha. In particular, the intersection of all the ideals in \mathsf{F}(X) is empty. So this semigroup has no kernel.

Exhibit a semigroup of transformations having no idempotents

Give an example of a semigroup of transformations having no idempotents.


We saw in this previous exercise that every finite semigroup contains an idempotent element. So any example we find must be infinite.

Given k \in \mathbb{N}^+, define \varphi_k : \mathbb{N}^+ \rightarrow \mathbb{N}^+ by \varphi_k(a) = a+k. Let S \subseteq \mathsf{T}(\mathbb{N}^+) be the subset S = \{\varphi_k\ |\ k \in \mathbb{N}^+\}. We claim that S is a subsemigroup of the full semigroup of transformations on \mathbb{N}^+; indeed, for all k,\ell \in \mathbb{N}^+ and a \in \mathbb{N}^+, we have (\varphi_k \circ \varphi_\ell)(a) = \varphi_k(\varphi_\ell(a)) = \varphi_k(a + \ell) = a+\ell+k = \varphi_{k+\ell}(a), so that \varphi_k \circ \varphi_\ell = \varphi_{k+\ell} and S is closed under composition. So S is a semigroup of transformations.

If \varphi_k is idempotent, then a+k = \varphi_k(a) = (\varphi_k \circ \varphi_k)(a) = a+2k for all a \in \mathbb{N}^+, and so k = 2k. This equation has no solutions in \mathbb{N}^+, so that S does not contain an idempotent.
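A quick spot-check of the composition law and the absence of idempotents on a few sample values (purely illustrative; it does not replace the argument above):

    phi :: Integer -> Integer -> Integer
    phi k a = a + k            -- the translation map phi_k

    main :: IO ()
    main = do
      -- phi_2 . phi_3 agrees with phi_5 on sample points
      print (and [ (phi 2 . phi 3) a == phi 5 a | a <- [1 .. 20] ])   -- True
      -- an idempotent phi_k would need k == 2 * k; no positive k qualifies
      print [ k | k <- [1 .. 20], k == 2 * k ]                        -- []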