Solving Polynomials
A well-known theorem (if not by name) is the Abel–Ruffini theorem, which states that there is no general algebraic expression in radicals for the roots of polynomials of degree higher than 4.
A not-so-well-known fact is that for any polynomial p(x), it is possible to find (with exact arithmetic) a set of ranges each containing exactly one root of p(x). One such algorithm is due to James Victor Uspensky in 1948.
Uspensky's algorithm requires all roots to have multiplicity 1; p(x) can be trivially replaced with p(x)/gcd(p(x), p'(x)) to eliminate duplicate roots, if necessary. The algorithm relies on Descartes' rule of signs, which states that the number of positive solutions (counting multiplicities) to a polynomial p(x) = 0 is equal to v - 2k, where v is the number of sign changes between consecutive non-zero coefficients of p(x), and k is a non-negative integer.
This clearly implies that if v is 0 or 1, then the polynomial must have exactly 0 or 1 roots in (0, infinity), respectively. Otherwise, we can transform the polynomial and try again. Expanding p(x + 1), we can test its sign changes to learn about the roots in (1, infinity). Expanding (x + 1)^n p(1/(x + 1)), where n is the degree of the polynomial p(x), we can similarly test to learn about the roots in (0, 1). If p(1) = 0, then 1 is a root of p(x).
If we don't get conclusive information from one of the tests (i.e. v > 1 for p(x + 1) or for (x + 1)^n p(1/(x + 1))), apply the same two transformations (x -> x + 1 and x -> 1/(x + 1)) to the offending polynomial(s). Uspensky proves that this recursive procedure always terminates for square-free polynomials. A reasonable bound on the recursion depth can also be shown.
From a CS perspective, the two transformations are easy to implement. If p(x) = a_0 + a_1 x + ... + a_n x^n, then

    p(x + 1) = sum_{i=0}^{n} ( sum_{j=i}^{n} C(j, i) a_j ) x^i

    (x + 1)^n p(1/(x + 1)) = sum_{i=0}^{n} ( sum_{j=0}^{n-i} C(n - j, i) a_j ) x^i

where C(j, i) is the binomial coefficient "j choose i".
Another CS-related point is that one of the resulting intervals may have an infinite upper bound. Cauchy provides a general upper bound on the magnitude of a polynomial's roots, 1 + max(|a_0/a_n|, ..., |a_{n-1}/a_n|), which may be used to produce a finite upper bound.
A C implementation of this algorithm as a root-solver in my ray-tracer Dimension can be seen here.