Polynomials

It is time to meet yet another example of a ring.

A polynomial is a finite collection of terms, all added together, where each term is formed of a product of two things: the coefficient, and the variable, usually \(x\), raised to a non-negative integer power. For example, the following are all polynomials:

\[\begin{split}&3 \\ &x \\ &3x + 2 \\ &5x^2 + 2x + 4 \\ &x^3 + 1\end{split}\]

Looking again at the example \(5x^2 + 2x + 4,\) we see that it has three terms: namely, \(5x^2, 2x,\) and \(4\). The coefficients of these terms are \(5, 2,\) and \(4\) respectively.

The degree of a polynomial is the highest power of \(x\) appearing. So for example, the degree of \(5x^2 + 2x + 4\) is \(2\), and the degree of \(x^3 + 1\) is \(3\). Notationally, we write \(\deg(p)\) for the degree of a polynomial \(p\), so e.g. \(\deg(5x^2 + 2x + 4) = 2\).

The coefficient of the term with the highest power is also important and therefore has a special name: it is called the leading coefficient. For example, the leading coefficient of the polynomial \(5x^2 + 2x + 4\) is \(5\). A monic polynomial is a polynomial whose leading coefficient is \(1\).
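To make these definitions concrete, here is a minimal sketch in Python, using an assumed (not standard) representation of a polynomial as a list of coefficients indexed by power of \(x\), so that \(5x^2 + 2x + 4\) is `[4, 2, 5]`:

```python
# A polynomial is represented as a list of coefficients indexed by power
# of x: 5x^2 + 2x + 4 is [4, 2, 5], and x^3 + 1 is [1, 0, 0, 1].

def degree(p):
    """Index of the highest nonzero coefficient.

    As a convention here, the degree of the zero polynomial is left
    undefined, so we raise an error for it.
    """
    for i in range(len(p) - 1, -1, -1):
        if p[i] != 0:
            return i
    raise ValueError("degree of the zero polynomial is undefined")

def leading_coefficient(p):
    """Coefficient of the term with the highest power."""
    return p[degree(p)]

def is_monic(p):
    """A monic polynomial has leading coefficient 1."""
    return leading_coefficient(p) == 1

print(degree([4, 2, 5]))               # 2
print(leading_coefficient([4, 2, 5]))  # 5
print(is_monic([1, 0, 0, 1]))          # True  (x^3 + 1)
```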

Polynomial addition and multiplication both work how you would expect. To add two polynomials together, we simply add together the coefficients of matching pairs of terms. So for example, we add together the coefficients of \(x\) to obtain the coefficient of \(x\) in the result, we add the coefficients of \(x^2\) to obtain the coefficient of \(x^2\) in the result, and so on. For example:

\[\begin{split}(5x^2 + 2x + 4) + (3x + 2) &= (5+0)x^2 + (2+3)x + (4+2) \\ &= 5x^2 + 5x + 6\end{split}\]

To multiply polynomials of just one term, we multiply the coefficients and add the powers. For example, \((6x)(7x^2) = (6 \times 7)x^{1 + 2} = 42x^3\). To multiply polynomials with more than one term, we make use of distributivity to break them down into sums of products of single terms, and then combine them. For example:

\[\begin{split}&(5x^2 + 2x + 4)(3x + 2) \\ &= 5x^2(3x + 2) + 2x(3x + 2) + 4(3x + 2) \\ &= (5x^2)(3x) + (5x^2)(2) + (2x)(3x) + (2x)(2) + 4(3x) + 4(2) \\ &= 15x^3 + 10x^2 + 6x^2 + 4x + 12x + 8 \\ &= 15x^3 + 16x^2 + 16x + 8\end{split}\]
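The addition and multiplication rules above can be sketched in Python, again representing a polynomial as a list of coefficients indexed by power of \(x\) (an assumed convention for illustration):

```python
# A sketch of polynomial arithmetic: 5x^2 + 2x + 4 is [4, 2, 5].

def poly_add(p, q):
    """Add matching coefficients, padding the shorter list with zeros."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def poly_mul(p, q):
    """Multiply every pair of terms and collect: (a x^i)(b x^j) = ab x^(i+j)."""
    result = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            result[i + j] += a * b
    return result

# (5x^2 + 2x + 4) + (3x + 2) = 5x^2 + 5x + 6
print(poly_add([4, 2, 5], [2, 3]))  # [6, 5, 5]

# (5x^2 + 2x + 4)(3x + 2) = 15x^3 + 16x^2 + 16x + 8
print(poly_mul([4, 2, 5], [2, 3]))  # [8, 16, 16, 15]
```

Note how `poly_mul` is just distributivity made mechanical: each pass of the inner loop handles one of the single-term products from the worked example above.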

In the examples we have seen so far, the coefficients have come from \(\mathbb{R}\). However, we can choose coefficients from any ring. We denote the set of polynomials with coefficients in some ring \(R\) by \(R[x]\).

So, if we let \(R\) be any ring, then \(R[x]\) is a ring too; the additive and multiplicative identities in \(R[x]\) are the constant polynomials \(0_R\) and \(1_R\) respectively. Notice that if \(R\) is a commutative ring, then so is \(R[x]\); this follows from how multiplication is defined in \(R[x]\).

Here are some polynomials in the ring \(\mathbb{Z}_3[x]\):

\[\begin{split}&\overline{2} \\ &x^3 + \overline{2} \\ &\overline{2}x^2 + x + \overline{1}\end{split}\]

Note that we usually don’t bother writing down the coefficient if it is the multiplicative identity; the second example there could also have been written \(\overline{1}x^3 + \overline{2}\).

We have already seen that \(R\) being a commutative ring implies that \(R[x]\) is a commutative ring. Another similar result is that if \(R\) has no zero-divisors then neither does \(R[x]\). To see this, suppose \(p, q \in R[x],\) with \(p \neq 0, q \neq 0\). A nonzero polynomial necessarily has a nonzero leading coefficient, so if the leading terms of \(p\) and \(q\) are \(ax^m\) and \(bx^n\) respectively, then \(a\) and \(b\) are both nonzero. Now, the coefficient of \(x^{m+n}\) in \(pq\) is exactly \(ab\), which is nonzero because \(R\) has no zero-divisors. So \(pq\) has at least one nonzero coefficient, i.e. \(pq \neq 0\), and we are done.

We can neatly wrap all of this up by simply saying that if \(R\) is an integral domain then so is \(R[x]\).
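As a quick sanity check of both directions, here is a sketch of multiplication in \(\mathbb{Z}_m[x]\), with coefficient lists indexed by power of \(x\) and every coefficient reduced mod \(m\) (an assumed representation, as before). In \(\mathbb{Z}_3\), which has no zero-divisors, products of nonzero polynomials stay nonzero; \(\mathbb{Z}_6\), however, does have zero-divisors (\(\overline{2} \cdot \overline{3} = \overline{0}\)), and they lift to \(\mathbb{Z}_6[x]\):

```python
# Polynomial multiplication with coefficients in Z_m: multiply and add
# coefficients as usual, then reduce each one mod m.

def poly_mul_mod(p, q, m):
    """Multiply coefficient lists (indexed by power of x) over Z_m."""
    result = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            result[i + j] = (result[i + j] + a * b) % m
    return result

# In Z_3[x]: (x + 2)(x + 2) = x^2 + 4x + 4 = x^2 + x + 1, which is nonzero.
print(poly_mul_mod([2, 1], [2, 1], 3))  # [1, 1, 1]

# In Z_6[x]: (2x)(3x) = 6x^2 = 0, so two nonzero polynomials multiply
# to zero -- Z_6[x] inherits zero-divisors from Z_6.
print(poly_mul_mod([0, 2], [0, 3], 6))  # [0, 0, 0]
```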

Polynomial division

Consider the ring of polynomials with coefficients in some integral domain \(R\). Let \(a, b \in R[x]\), with \(b \neq 0\) and \(b\) monic. Then there exist \(q, r \in R[x]\) such that \(a = qb + r\), and either \(\deg(r) < \deg(b)\) or \(r = 0\).

Depending on your philosophy and outlook, it might or might not be a problem that the following proof of this result is non-constructive, i.e. it proves that \(q\) and \(r\) exist, but it doesn’t give you an algorithm for finding them. It’s also a little trickier than many of the proofs we’ve seen so far, so don’t worry if you can’t quite get your head around it straight away. We won’t go on to do anything that requires understanding this proof; we really just want to make sure we’re aware of the result.

Anyway, to prove this result, we start by choosing a polynomial \(q\) which makes the degree of \(a - qb\) as small as possible. If some choice of \(q\) gives \(a - qb = 0\), we can simply take \(r = 0\) and we are done; so suppose from now on that \(a - qb \neq 0\) for every choice of \(q\). It is then always possible to find a \(q\) minimising \(\deg(a - qb)\), because the degrees that arise form a nonempty set of nonnegative integers, and any nonempty set of nonnegative integers is guaranteed to have a smallest element.

Let \(s = \deg(a - qb)\), and let \(c\) be the leading coefficient of \(a - qb\). So the leading term of \(a - qb\) is \(cx^s\). Also, let \(d = \deg(b)\).

Now, suppose that \(s \geq d\), and consider the polynomial \(a - (q + cx^{s-d})b = a - qb - (cx^{s-d})b\). Since the leading term of \(b\) is \(x^d\) (by assumption), the leading term of \((cx^{s-d})b\) is \(cx^s\). Therefore, when we subtract \((cx^{s-d})b\) from \(a - qb\), the \(x^s\) terms cancel and the polynomial we are left with has degree no higher than \(s-1\). This is a contradiction: we chose \(q\) to minimise the degree of \(a - qb\), but here we have another polynomial \(q + cx^{s-d}\), for which \(a - (q + cx^{s-d})b\) gives us a smaller degree still.

Because we have reached a contradiction, we can deduce that \(s < d\), i.e. \(\deg(a - qb) < \deg(b)\). Therefore, we can define \(r = a - qb\), and we are done: \(a = qb + r\) by construction, and also either \(r = 0\) or \(\deg(r) < \deg(b)\).

If we want to allow division by any nonzero polynomial, not just monic polynomials, we need to impose one additional requirement: that \(R\) is a field. In this case we can divide coefficients exactly, so if we want to divide a polynomial \(a\) by another polynomial \(b\), we can multiply \(b\) by the multiplicative inverse of its leading coefficient to make it monic.

Note

For example, in \(\mathbb{R}[x]\), we can multiply the polynomial \(2x + 1\) by \(\frac{1}{2}\) to give \(x + \frac{1}{2}\), which is monic. Note that we could not have done this if we were working in \(\mathbb{Z}[x]\), since \(\frac{1}{2} \notin \mathbb{Z}\).

Let \(c\) be the leading coefficient of \(b\), so that \(c^{-1}b\) is monic. Now we can use the previous result to divide \(a\) by \(c^{-1}b\), which tells us that there are \(q\) and \(r\) such that \(a = q(c^{-1}b) + r\), with either \(\deg(r) < \deg(b)\) or \(r = 0\). With a small shift in perspective we can now say that we have divided \(a\) by \(b\): since \(a = q(c^{-1}b) + r = (c^{-1}q)b + r\), the quotient is \(c^{-1}q\) and the remainder is \(r\).

So the final form of our polynomial division theorem is as follows.

Let \(F\) be a field, and let \(a, b \in F[x]\), with \(b \neq 0\). Then there exist \(q, r \in F[x]\) such that \(a = qb + r\), and either \(\deg(r) < \deg(b)\) or \(r = 0\).
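The proof above is non-constructive, but the familiar long-division algorithm does compute \(q\) and \(r\). Here is a sketch over the field \(\mathbb{Q}\), using exact rational arithmetic and the same assumed list-of-coefficients representation as before: it repeatedly cancels the leading term of the remainder, just as in the proof.

```python
# A sketch of polynomial division over Q. A polynomial is a list of
# coefficients indexed by power of x: 15x^3 + 16x^2 + 16x + 8 is
# [8, 16, 16, 15], and the zero polynomial is the empty list.
from fractions import Fraction

def poly_divmod(a, b):
    """Return (q, r) with a = q*b + r and either r = [] or deg(r) < deg(b)."""
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    while b and b[-1] == 0:          # trim so b[-1] is the leading coefficient
        b.pop()
    if not b:
        raise ZeroDivisionError("division by the zero polynomial")
    q = [Fraction(0)] * max(len(a) - len(b) + 1, 1)
    r = a[:]
    while True:
        while r and r[-1] == 0:      # trim so r[-1] is the leading coefficient
            r.pop()
        if len(r) < len(b):          # r = 0 or deg(r) < deg(b): we are done
            break
        c = r[-1] / b[-1]            # divide leading coefficients (needs a field)
        shift = len(r) - len(b)
        q[shift] += c
        for i, bc in enumerate(b):   # subtract (c x^shift) * b from r,
            r[i + shift] -= c * bc   # cancelling r's leading term
    return q, r

# (15x^3 + 16x^2 + 16x + 8) / (3x + 2) = 5x^2 + 2x + 4, remainder 0
q, r = poly_divmod([8, 16, 16, 15], [2, 3])
print(q, r)
```

Each pass of the outer loop strictly reduces the degree of the remainder, which mirrors the minimising argument in the proof and guarantees termination.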

The important thing to notice is that this theorem bears a strong resemblance to the theorem regarding integer division which we saw in the previous chapter. So now we might ask: is there a generalisation which can unify these two concepts? The answer is of course yes: it’s called a Euclidean ring.