Euclidean rings

Over the previous two chapters, we covered the Euclidean Algorithm, which allows you to compute the greatest common divisor of two integers. We also encountered a new example of a ring, namely polynomials, and noticed that polynomials and integers support a very similar kind of division.

In this chapter we will see how to generalise the Euclidean Algorithm and discuss the resulting structure, which is called a euclidean ring.

Divisors, again

Instead of working in \(\mathbb{Z}\) we will now work in an arbitrary integral domain \(R\). The first thing we will want to do is generalise our definition of “divisor”; fortunately this is easy:

Let \(a, b \in R\). We say that \(a\) divides \(b\) if there exists some \(q \in R\) such that \(aq = b\).

In fact it’s exactly the same definition, except with \(\mathbb{Z}\) replaced by \(R\). The definition of “common divisors” also generalises immediately with no extra effort required. However, it’s less obvious how to define a greatest common divisor, since we might not have a good way of ordering elements of an arbitrary integral domain. We address this as follows:

Let \(a, b, d \in R\) and suppose \(d \mid a\) and also \(d \mid b\), that is, \(d\) is a common divisor of \(a\) and \(b\). We say that \(d\) is a greatest common divisor of \(a\) and \(b\) if for any other common divisor \(d'\) of \(a\) and \(b\), we have that \(d' \mid d\).

Note that we have started saying “a greatest common divisor” rather than “the greatest common divisor”; this is because greatest common divisors are no longer guaranteed to be unique. For example, in the previous chapter we saw that a greatest common divisor of \(462\) and \(1071\) was \(21\). In this setting we would also consider \(-21\) to be a greatest common divisor of these two numbers.
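To make this concrete, here is a small Python check (the helper `divisors` is purely illustrative): every common divisor of \(462\) and \(1071\) divides both \(21\) and \(-21\), so both of them satisfy the definition above.

```python
def divisors(n):
    """All integer divisors of n, positive and negative."""
    n = abs(n)
    positive = [d for d in range(1, n + 1) if n % d == 0]
    return positive + [-d for d in positive]

# The common divisors of 462 and 1071 are exactly the divisors of 21.
common = set(divisors(462)) & set(divisors(1071))

# Every common divisor divides 21, and also divides -21, so both 21
# and -21 are "greatest common divisors" in the sense defined above.
assert all(21 % d == 0 for d in common)
assert all(-21 % d == 0 for d in common)
```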

I wouldn’t blame you if, at this point, you said this was a nonsense definition, because it’s not yet clear that “greatest” really means anything. Don’t worry — we will clear this all up shortly.

Generalising the Euclidean Algorithm

In the last chapter we saw two key ideas which the Euclidean Algorithm relies on to work:

  1. For \(a, b, d \in \mathbb{Z}\), if \(d \mid a\) and \(d \mid b\), then \(d \mid ma + nb\) for any \(m, n \in \mathbb{Z}\).
  2. Remainders keep getting smaller, and are guaranteed to eventually reach \(0\).

It’s very pleasing to see that the first of these immediately generalises from \(\mathbb{Z}\) to an arbitrary integral domain. We can even use the exact same proof as we did in the case of \(\mathbb{Z}\)!

However, we can’t generalise the second idea if all we have is an integral domain — we need something a little stronger.

Let \(R\) be an integral domain. A euclidean function is a function \(f : R \setminus \{ 0 \} \rightarrow \mathbb{N}\) satisfying:

  • For \(a\) and \(b\) in \(R\), with \(b \neq 0\), there exist \(q\) and \(r\) in \(R\) such that \(a = bq + r\) and either \(r = 0\) or \(f(r) < f(b)\).
  • For all nonzero \(a\) and \(b\) in \(R\), we have \(f(a) \leq f(ab)\).

A euclidean ring, or euclidean domain, is then defined as an integral domain which can be endowed with a euclidean function.


On notation: if \(A\) and \(B\) are sets then their difference is defined as

\[A \setminus B = \{\, x \in A \,|\, x \notin B \,\}\]

that is, the elements of \(A\) which are not in \(B\). So if \(R\) is a ring, then the set \(R \setminus \{0\}\) consists of all elements of \(R\) except \(0\). Essentially, what we are doing here is saying that for a euclidean function \(f\), the result of applying \(f\) to \(0\) need not be defined.

The idea of this definition is that it allows us to say when “remainders keep getting smaller” still holds in a more general setting. If we perform the Euclidean Algorithm on a pair of elements \(a, b\) from an arbitrary euclidean ring, then the first remainder \(r_1\) will be smaller than \(b\) in the sense that \(f(r_1) < f(b)\), and the second remainder \(r_2\) will be smaller than \(r_1\) in the sense that \(f(r_2) < f(r_1)\), and so on until we reach \(0\).

So now we’re done — we can use what is essentially the same argument as in the case of integers to show that our GCD algorithm actually works with any euclidean ring!
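To illustrate, here is a sketch of the generalised algorithm in Python. The loop is identical for every euclidean ring; only the division-with-remainder step needs to be supplied. The name `euclidean_gcd` and the idea of passing the division step in as a parameter are illustrative choices, not anything from the text.

```python
def euclidean_gcd(a, b, divmod_fn):
    """Generic Euclidean Algorithm. Works in any euclidean ring, given
    a division function returning (q, r) with r = 0 or f(r) < f(b)."""
    while b != 0:
        _, r = divmod_fn(a, b)  # only the remainder is needed
        a, b = b, r             # f(r) < f(b), so this must terminate
    return a

# Instantiated for the integers using Python's built-in divmod:
assert euclidean_gcd(462, 1071, divmod) == 21
```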

We still need to verify that integers and polynomials do actually form euclidean rings, though, and to do this we need to find euclidean functions for each of them.

Cast your mind back one more time to the theorem about integer division:


If we have two integers \(a, b\), with \(b > 0\), then there always exists a pair of integers \(q\) and \(r\) such that \(a = qb + r\), and \(0 \leq r < b\).

This is almost in the right form for us to show that \(\mathbb{Z}\) is a euclidean ring, but not quite. In particular we need to be able to deal with any nonzero \(b\), not just positive \(b\).

We can address this with the absolute value function. For any \(x \in \mathbb{R}\), the absolute value of \(x\) is defined as:

\[\begin{split}\lvert x \rvert = \begin{cases} x & \mathrm{if} \; x \geq 0 \\ -x & \mathrm{if} \; x < 0 \end{cases}\end{split}\]

Note that for any real number \(x\), the absolute value of \(x\) is always nonnegative. Also, again for any real number \(x\), note that \(\lvert x \rvert = \lvert -x \rvert\) (check this if you need to).

We can prove that the absolute value function is a euclidean function on \(\mathbb{Z}\) by cases. First, we will cover the case where \(b > 0\), and then we will cover the case where \(b < 0\).

In the case where \(b > 0,\) we already know that we can find an appropriate \(r\) with \(0 \leq r < b;\) since \(r\) and \(b\) are both nonnegative, we must have \(r = \lvert r \rvert\) and \(b = \lvert b \rvert,\) so \(\lvert r \rvert < \lvert b \rvert\) as required. In the case where \(b < 0,\) we know that \(-b\) is positive, so we can divide \(a\) by \(-b\) and get a \(q\) and \(r\) satisfying \(a = q(-b) + r\) and \(0 \leq r < -b\). Rearranging a little, we can write \(a = (-q)b + r,\) showing that our quotient is \(-q\) and our remainder \(r\). We also have that \(r < -b,\) so \(\lvert r \rvert < \lvert -b \rvert,\) and of course \(\lvert -b \rvert = \lvert b \rvert,\) so again \(\lvert r \rvert < \lvert b \rvert\) as required.
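The case split above can be sketched in Python. The function name `int_divmod` is just illustrative; Python’s built-in `divmod` already returns a remainder satisfying \(0 \leq r < b\) when \(b > 0\), so only the \(b < 0\) case needs the extra step from the proof.

```python
def int_divmod(a, b):
    """Division with remainder for any nonzero b, with 0 <= r < |b|,
    mirroring the case split in the text."""
    if b > 0:
        q, r = divmod(a, b)   # Python already gives 0 <= r < b here
    else:
        q, r = divmod(a, -b)  # divide by -b, which is positive ...
        q = -q                # ... then a = (-q)b + r, as in the proof
    return q, r

q, r = int_divmod(7, -3)
assert 7 == q * (-3) + r and 0 <= r < 3  # q = -2, r = 1
```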

Exercise 11.1. Complete the proof that the absolute value function is a euclidean function on \(\mathbb{Z}\) by showing that \(\lvert a \rvert \leq \lvert ab \rvert\) for all nonzero \(a, b \in \mathbb{Z}\).

Polynomials are a bit easier, since our polynomial division theorem is already in the correct form; looking back at the theorem from the previous chapter:


Let \(F\) be a field, and let \(a, b \in F[x]\), with \(b \neq 0\). Then, there exist \(q, r \in F[x]\) such that \(a = qb + r\), and either \(\deg(r) < \deg(b)\), or \(r = 0\).

So the degree function satisfies the first condition for being a euclidean function.

Exercise 11.2. Complete the proof that the degree function is a euclidean function on \(F[x]\) by showing that \(\deg(a) \leq \deg(ab)\) for all nonzero \(a, b \in F[x]\). Hint: can you find an expression for \(\deg(ab)\) in terms of \(\deg(a)\) and \(\deg(b)\)?

So the degree function is a euclidean function on \(F[x]\), and therefore polynomial rings over fields are indeed euclidean rings.
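As a sketch of how this division works in practice, here is an illustrative implementation over \(\mathbb{Q}\) in Python, representing a polynomial as its list of coefficients with the constant term first. The names are hypothetical, and \(b\) is assumed to be nonzero, just as in the theorem.

```python
from fractions import Fraction

def strip(p):
    """Remove zero leading coefficients (stored at the end of the list)."""
    while p and p[-1] == 0:
        p.pop()
    return p

def poly_divmod(a, b):
    """Polynomial division over Q. Coefficient lists, constant term first.
    Returns (q, r) with a = q*b + r and r = [] or deg(r) < deg(b)."""
    b = strip([Fraction(c) for c in b])  # b must be nonzero
    r = strip([Fraction(c) for c in a])
    q = [Fraction(0)] * max(len(r) - len(b) + 1, 1)
    while len(r) >= len(b):
        shift = len(r) - len(b)
        coeff = r[-1] / b[-1]          # divide the leading coefficients
        q[shift] = coeff
        for i, c in enumerate(b):      # subtract coeff * x^shift * b
            r[shift + i] -= coeff * c
        strip(r)                       # the leading term cancels
    return strip(q), r

# (x^2 - 1) divided by (x - 1) gives q = x + 1, r = 0
q, r = poly_divmod([-1, 0, 1], [-1, 1])
assert q == [1, 1] and r == []
```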

There’s one more example of a euclidean ring which we should mention, and that is any field. Of course, in a field, you can always divide exactly, so this isn’t the most interesting example of a euclidean ring — but it’s good to be aware of nonetheless.

Let \(F\) be a field, and suppose \(f : F \setminus \{ 0 \} \rightarrow \mathbb{N}\) is a euclidean function. Recall the second condition for being a euclidean function, which is that for all nonzero \(a, b \in F\), we have that \(f(a) \leq f(ab)\). Let \(x\) be any element of \(F\). If we set \(a = 1\) and \(b = x\) then we see that \(f(1) \leq f(x)\). Also, since \(F\) is a field, \(x\) must have a multiplicative inverse, \(x^{-1}\). So if we set \(a = x\) and \(b = x^{-1}\) we see that \(f(x) \leq f(1)\). The only way that both of these things can be true is if \(f(x) = f(1)\), that is, \(f\) is constant: it always gives us back the same thing, no matter what we put in.

Now, we look back to the first condition, which says that for all nonzero \(a, b \in F\), there exist \(q, r \in F\) such that \(a = qb + r\) and either \(r = 0\) or \(f(r) < f(b)\). However, since \(f\) is constant, there is no pair of elements \(r, b \in F\) such that \(f(r) < f(b)\). What this means is that whenever we divide two elements, we must always hit the \(r = 0\) case, i.e. we must always have \(q = ab^{-1}\) and \(r = 0\).

Therefore, whenever we try to run our GCD algorithm, it always terminates immediately. In fact every single element of a field (apart from \(0\)) is a “greatest common divisor” of any pair of elements (I put “greatest common divisor” in quotes here, because in this context it breaks down, and doesn’t really mean anything any more). But we have established an interesting fact nonetheless: for any field, the only option for a euclidean function is a constant function, which means that no field can have any euclidean ring structure other than this rather uninteresting one.
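We can see this in miniature over \(\mathbb{Q}\), using Python’s `Fraction` type: division in a field is exact, so the remainder is always \(0\) and the algorithm stops after a single division step. (The name `field_divmod` is just illustrative.)

```python
from fractions import Fraction

def field_divmod(a, b):
    """In a field, division is exact: q = a * b^(-1), r = 0."""
    return a / b, Fraction(0)

# Every division hits the r = 0 case immediately:
a, b = Fraction(3, 4), Fraction(5, 7)
q, r = field_divmod(a, b)
assert a == q * b + r and r == 0
```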


The answer to the question “what is a euclidean ring?” is, of course, the definition; there’s no substitute for it, since that is what a euclidean ring is. However, it’s often useful to have an intuitive understanding to go along with the actual definition of what something is, and allowing you to develop this intuition has been my aim in these last three chapters. My intuitive understanding of a euclidean ring is a ring which behaves “a bit like the integers”, in the sense that

  • elements can be divided to give a quotient and a remainder,
  • any pair of elements has at least one greatest common divisor, in the sense that any other common divisor divides a GCD,
  • it has a euclidean function which tells you the “size” of an element (and this sense of “size” is exactly the same as the sense of “greatest” in “greatest common divisor”), and
  • the Euclidean Algorithm can be used to find a GCD of any two elements of the ring.