Polynomial Approximations to Continuous Functions

When I was taking undergrad real analysis, the professor, Ali Tahzibi, was teaching us about the Stone-Weierstrass theorem, and he asked my class: let $e_n(f) = \inf_{p \in \mathbb{R}[x],\, \deg(p) \le n} \|f - p\|_\infty$, where $\|\cdot\|_\infty$ is the supremum norm. The theorem says $e_n(f) \to 0$, but can the convergence be arbitrarily slow? He offered one million bucks for the solution, instantly grabbing my attention.

You can scarcely do mathematical analysis without polynomial approximations, so the Stone-Weierstrass theorem is foundational, and if it were possible to improve its rate of convergence, that would be a major result. Initially, it seemed impossible to construct a function so pathological that even $e_n(f) > \frac{1}{\log(n)}$, and the question was asking for a lot more than that. There are upper bounds on $e_n(f)$ for large classes of functions, including Hölder continuous ones, or those whose modulus of continuity is good enough. In particular, even famously pathological functions, such as Weierstrass's example of a continuous nowhere-differentiable function, or (with probability 1) a Wiener process, have upper bounds on $e_n(f)$.

I pondered the question for a few days, and finally found a counterexample proving that the answer is no, by relating approximations to the number of roots of a polynomial. Precisely, for any sequence $(a_n)_n$ of positive real numbers converging to 0, there is a function $f : [0,1] \to \mathbb{R}$ such that $e_n(f) > a_n$ for all $n$. He encouraged me to submit it to the American Mathematical Monthly for publication, which I did, and it was accepted a week later. I hope all my publications throughout my career follow a similar Blitzkrieg timeframe.

The construction of the counterexample is brutally simple. The key insight is that to force a polynomial approximant to have a high degree, you must force it to have a lot of roots. We can do this by making $f$ oscillate around 0, forcing a good-enough approximant to do the same. Fix a sequence $(a_n)_n$ of numbers decreasing to 0, and let $g : [0,1] \to \mathbb{R}$ be an increasing continuous function such that $g(0) = 0$ and

$$f(x) = \begin{cases} 0 & x = 0 \\ g(x)\cos\left(\frac{\pi}{x}\right) & x \neq 0, \end{cases}$$

where $g\left(\frac{1}{n+2}\right) > a_n$ for all $n$, which can be achieved by defining $g$ to be affine on each interval $\left(\frac{1}{k+1}, \frac{1}{k}\right)$.
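To make this concrete, here is a quick numerical sketch of the construction, with the illustrative choice $a_n = \frac{1}{n+1}$ (any positive sequence decreasing to 0 works; the node values $g\left(\frac{1}{m}\right) = 2a_{m-2}$ and the cutoff of 200 nodes are my own choices for the demo, not part of the proof):

```python
import numpy as np

# Illustrative (assumed) target sequence decreasing to 0.
def a(n):
    return 1.0 / (n + 1)

# Piecewise-affine increasing g with nodes at x = 1/m, chosen so that
# g(1/(n+2)) = 2*a(n) > a(n) for every n this finite demo can reach.
M = 200  # number of nodes; an implementation choice, not part of the proof
nodes_x = [0.0] + [1.0 / m for m in range(M, 1, -1)] + [1.0]
nodes_y = [0.0] + [2.0 * a(m - 2) for m in range(M, 1, -1)] + [3.0]

def g(x):
    return np.interp(x, nodes_x, nodes_y)

def f(x):
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)  # f(0) = 0 by definition
    nz = x > 0
    out[nz] = g(x[nz]) * np.cos(np.pi / x[nz])
    return out

# Sanity checks: f(1/k) = g(1/k) * (-1)^k, and g(1/(n+2)) > a_n.
for k in range(2, 50):
    assert np.isclose(f(np.array([1.0 / k]))[0], g(1.0 / k) * (-1) ** k)
for n in range(0, M - 2):
    assert g(1.0 / (n + 2)) > a(n)
```

The sign of $f$ at the points $\frac{1}{k}$ alternates, which is exactly what the argument below exploits.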

A polynomial approximation $p$ satisfying $\|f - p\|_\infty < a_n$ must have a root in each of the intervals $\left(\frac{1}{n+2}, \frac{1}{n+1}\right), \ldots, \left(\frac{1}{2}, 1\right)$, i.e. at least $n+1$ roots, so its degree is greater than $n$. Indeed, for each positive integer $k \le n+2$, we have $f\left(\frac{1}{k}\right) = g\left(\frac{1}{k}\right)\cos(\pi k) = g\left(\frac{1}{k}\right)(-1)^k$, and

$$\left|f\left(\tfrac{1}{k}\right) - p\left(\tfrac{1}{k}\right)\right| < a_n < g\left(\tfrac{1}{n+2}\right) \le g\left(\tfrac{1}{k}\right) = \left|f\left(\tfrac{1}{k}\right)\right|,$$

so $p\left(\frac{1}{k}\right)$ and $f\left(\frac{1}{k}\right)$ have the same sign. Since the sign of $f\left(\frac{1}{k}\right)$ alternates with $k$, so does the sign of $p\left(\frac{1}{k}\right)$, and the intermediate value theorem gives $p$ a root in each interval $\left(\frac{1}{k+1}, \frac{1}{k}\right)$.
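The lower bound can also be watched in action numerically. The sketch below assumes $a_n = \frac{1}{n+1}$ and, for brevity, takes $g(x) = 2\sqrt{x}$ (continuous, increasing, $g(0) = 0$, and $g\left(\frac{1}{n+2}\right) = \frac{2}{\sqrt{n+2}} > a_n$); both choices are mine, for illustration. A degree-$n$ Chebyshev least-squares fit stands in for a good degree-$n$ approximant, and even its error stays above $a_n$, as the theorem demands of any degree-$n$ polynomial:

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

# Illustrative (assumed) instance: a_n = 1/(n+1) and g(x) = 2*sqrt(x).
def a(n):
    return 1.0 / (n + 1)

def f(x):
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)  # f(0) = 0
    nz = x > 0
    out[nz] = 2.0 * np.sqrt(x[nz]) * np.cos(np.pi / x[nz])
    return out

xs = np.linspace(0.0, 1.0, 50001)
ys = f(xs)

# The fit only upper-bounds the best achievable error, but the theorem says
# even the *best* degree-n polynomial has sup error above a_n, so in
# particular this fit must.
for n in [5, 10, 20, 40]:
    p = Chebyshev.fit(xs, ys, deg=n)
    err = np.max(np.abs(ys - p(xs)))
    assert err > a(n), (n, err, a(n))
    print(f"n={n:3d}  sup|f-p| ~ {err:.4f}  >  a_n = {a(n):.4f}")
```

With this particular $g$, the proof actually yields $e_n(f) \ge g\left(\frac{1}{n+2}\right) = \frac{2}{\sqrt{n+2}}$, which is why the observed errors sit comfortably above $a_n$.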

In the literature, there is an equivalent construction [1]. It is achieved by letting $f = \sum_{k=1}^{\infty} (a_k - a_{k+1}) T_{3^k}$, where $T_k(x) = \cos(k \arccos(x))$ is a Chebyshev polynomial. Using Chebyshev polynomials as a basis is very useful for determining approximation-theoretic properties, but unfortunately not much else. The construction as a lacunary series further obscures the properties of $f$. I think my construction gives more insight into why convergence is slow.
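For comparison, here is a sketch of the lacunary construction, truncated at $3^6$ terms and with the illustrative (assumed) choice $a_k = \frac{1}{k}$. It checks the telescoping property that makes the series work: dropping everything past the $T_{3^m}$ term leaves a sup-norm error equal to the tail sum $a_{m+1}$ (minus the part lost to the demo's truncation at $K$), attained at $x = 1$ where every $T_j$ equals 1. Note that numpy's `Chebyshev` class works on $[-1, 1]$:

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

a = lambda k: 1.0 / k  # illustrative (assumed) sequence decreasing to 0
K = 6                  # truncate the infinite series at 3^6 = 729 terms

# T_{3^k} is encoded as a coefficient vector with a single entry at index 3^k.
coeffs = np.zeros(3 ** K + 1)
for k in range(1, K + 1):
    coeffs[3 ** k] = a(k) - a(k + 1)  # nonnegative, telescoping coefficients
f = Chebyshev(coeffs)

# Each tail term is bounded by its coefficient and equals it at x = 1, so the
# sup error of the degree-3^m truncation telescopes to a_{m+1} - a_{K+1}.
xs = np.linspace(-1.0, 1.0, 20001)
for m in range(1, K):
    trunc = Chebyshev(coeffs[: 3 ** m + 1])
    err = np.max(np.abs(f(xs) - trunc(xs)))
    tail = a(m + 1) - a(K + 1)  # finite-K version of the tail sum
    assert abs(err - tail) < 1e-9
```

Here the error of the best degree-$3^m$ approximation is read straight off the coefficients, which is exactly the sense in which the Chebyshev basis is convenient for approximation theory and opaque for everything else.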

I think this is a pretty fun result, and I'm grateful to Professor Tahzibi for suggesting it.

[1] Cheney, E. W. (2000). Introduction to Approximation Theory. Providence, RI: American Mathematical Society.