# The Theorems of Schauder and Peano

In the text on Brouwer’s Fixed Point Theorem, I had confidently stated that Schauder’s Theorem follows from it with less effort and that one may easily conclude things like Peano’s Theorem from there. As a matter of fact, things lie considerably deeper than I had naively thought. If sound proofs are to be given, there is technical work to be done in many instances. The ideas themselves are not tough, but the sheer number of steps to be taken and the methodological machinery cannot be neglected. Let’s see.

We shall follow the lines of Heuser’s books (both this one and this one), as we did before, to collect the ingredients for a proof of Schauder’s Fixed Point Theorem. It involves a statement about convex sets, on which we will focus first, followed by an excursion on approximation in normed vector spaces. We shall also need the theorem named after Arzelà and Ascoli, which offers a basic glimpse into the ways of thinking of functional analysis. All of this will allow us to prove Schauder’s Theorem in a rather strong flavour. For the conclusion, we split our path: we give both Heuser’s treatment of Peano’s Theorem in the spirit of functional analysis and Walter’s more elementary approach (which, however, also makes use of the Theorem of Arzelà-Ascoli).

Remember that a set $K$ is called convex if for any $x,y\in K$ and for any $\alpha\in[0,1]$, we have $\alpha x + (1-\alpha)y\in K$. This formalizes the intuition that the line segment connecting $x$ and $y$ is contained in $K$ as well.

Lemma (on convex sets): Let $E$ be a normed space and let $x_1,\ldots,x_n\in E$. Let

$\displaystyle\mathrm{conv}(x_1,\ldots,x_n) = \bigcap_{\substack{K\subset E~\mathrm{ convex}\\\{x_1,\ldots,x_n\}\subset K}} K$

denote the convex hull. Then,

$\displaystyle \mathrm{conv}(x_1,\ldots,x_n) = \left\{v\in E\colon v=\sum_{i=1}^n\lambda_ix_i\text{ with }\sum_{i=1}^n\lambda_i=1; \lambda_i\geq0\right\},\qquad(\diamondsuit)$

and $\mathrm{conv}(x_1,\ldots,x_n)$ is compact.

Proof: Let us first prove the representation of the convex hull. For the “$\subset$“-direction, we will show that the set on the right-hand side of $(\diamondsuit)$ is convex (it clearly contains $x_1,\ldots,x_n$, so the intersection defining the convex hull is contained in it). Let $x = \sum_{i=1}^n\lambda_ix_i$ and $y=\sum_{i=1}^n\mu_ix_i$, with $\sum\lambda_i=\sum\mu_i=1$ and $\lambda_i,\mu_i\geq0$. Let $\alpha\in[0,1]$, then

$\displaystyle \alpha x + (1-\alpha)y = \sum_{i=1}^n\bigl(\alpha\lambda_i+(1-\alpha)\mu_i\bigr)x_i,$

where $\sum_{i=1}^n\bigl(\alpha\lambda_i+(1-\alpha)\mu_i\bigr) = \alpha+(1-\alpha) = 1$. Hence, $\alpha x+(1-\alpha)y$ is part of the set on the right-hand side of $(\diamondsuit)$ .

We now turn to the “$\supset$“-direction. Let $y_1,\ldots, y_m\in\mathrm{conv}(x_1,\ldots,x_n)$. We show that $\sum_{j=1}^m\mu_jy_j\in\mathrm{conv}(x_1,\ldots,x_n)$ if $\sum_{j=1}^m\mu_j = 1$ and $\mu_j\geq0$. That means any point that has a representation as on the right-hand side of $(\diamondsuit)$ must be in $\mathrm{conv}(x_1,\ldots,x_n)$ (take $y_j=x_j$). This is clear for $m=1$. For $m>1$, we may assume $\mu_m<1$ (otherwise we are back in the case $m=1$) and take $C:=\sum_{j=1}^{m-1}\mu_j>0$ together with $\tilde\mu_j:=\frac{\mu_j}C$ to see

$\displaystyle \begin{aligned} \sum_{j=1}^m\mu_jy_j &= \sum_{j=1}^{m-1}\mu_jy_j + \mu_my_m\\ &= C\sum_{j=1}^{m-1}\frac{\mu_j}{C}y_j + \mu_my_m\\ &= C\sum_{j=1}^{m-1}\tilde\mu_j y_j + (1-C)y_m. \end{aligned}$

Note that $\mu_m = \sum_{j=1}^m\mu_j - \sum_{j=1}^{m-1}\mu_j = 1-C$. By induction,

$\displaystyle \sum_{j=1}^{m-1}\tilde\mu_jy_j\in\mathrm{conv}(x_1,\ldots,x_n),$

since $\sum_{j=1}^{m-1}\tilde\mu_j = \frac1C\sum_{j=1}^{m-1}\mu_j = 1.$ As $\mathrm{conv}(x_1,\ldots,x_n)$ is convex (an intersection of convex sets is convex), we conclude

$\displaystyle\sum_{j=1}^m\mu_jy_j\in\mathrm{conv}(x_1,\ldots,x_n).$

Finally, we shall prove compactness. Let $(y_k)_k\subset\mathrm{conv}(x_1,\ldots,x_n)$, with a representation $y_k = \sum_{i=1}^n\lambda_i^{(k)}x_i$. The sequences $(\lambda_i^{(k)})_k$ are contained in $[0,1]$ and hence have convergent subsequences. Choosing subsequences $n$ times (once for each $i=1,\ldots,n$), we find a subsequence $(\lambda_i^{(k_\ell)})_\ell$ that converges to some $\lambda_i$ for each $i=1,\ldots,n$. Besides,

$\displaystyle\sum_{i=1}^n\lambda_i = \sum_{i=1}^n\lim_{\ell\to\infty}\lambda_i^{(k_\ell)} = \lim_{\ell\to\infty}1 = 1.$

This yields

$\displaystyle\lim_{\ell\to\infty}y_{k_\ell} = \lim_{\ell\to\infty}\sum_{j=1}^n\lambda_j^{(k_\ell)}x_j = \sum_{j=1}^n\lambda_jx_j\in\mathrm{conv}(x_1,\ldots,x_n).$

q.e.d.
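
The computation in the “$\subset$“-direction is easy to play with numerically. A minimal sketch (the points and weights below are arbitrary choices, not from the text): a convex combination of two convex combinations of the $x_i$ is again a convex combination of the $x_i$, with the weights derived in the proof.

```python
import numpy as np

# Three fixed points x_1, x_2, x_3 in the plane (arbitrary choice).
pts = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 2.0]])

def convex_combination(weights, points):
    """Return sum_i lambda_i * x_i for weights >= 0 summing to 1."""
    weights = np.asarray(weights)
    assert np.all(weights >= 0) and np.isclose(weights.sum(), 1.0)
    return weights @ points

# Two elements of conv(x_1, x_2, x_3) ...
lam = np.array([0.5, 0.3, 0.2])
mu = np.array([0.1, 0.1, 0.8])
x = convex_combination(lam, pts)
y = convex_combination(mu, pts)

# ... and a convex combination of them.
alpha = 0.25
z = alpha * x + (1 - alpha) * y

# The proof's point: z is again a convex combination of the x_i,
# with weights alpha*lambda_i + (1 - alpha)*mu_i, which sum to 1.
new_weights = alpha * lam + (1 - alpha) * mu
assert np.isclose(new_weights.sum(), 1.0)
assert np.allclose(z, convex_combination(new_weights, pts))
```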

We will now prove a result that extends Brouwer’s Fixed Point Theorem to a more general setting. This is the one that I had skimmed earlier, believing it consisted only of standard arguments; in principle, this is true. But let’s have a closer look at it and how these standard arguments work together.

Theorem (on fixed points in real convex sets): Let $\emptyset\neq K\subset\mathbb{R}^p$ be convex, compact, and let $f:K\to K$ be continuous. Then $f$ has a fixed point.

Proof: 0th step. As $K$ is compact, it is bounded and thus there is some $r>0$ such that $K\subset B_r(0)$; let us take $B_r(0)$ to be the closed ball here, so that Brouwer’s Fixed Point Theorem is applicable to it later on.

1st step. Let us construct the best approximation of some $x\in B_r(0)$ within $K$; that means we look for a $z\in K$ with $\left|x-z\right| = \inf_{y\in K}\left|x-y\right|$.

Taking $\gamma:=\inf_{y\in K}\left|x-y\right|$, there is a sequence $(z_n)_n\subset K$ with $\lim_{n\to\infty}\left|x-z_n\right| = \gamma$.

We wish to prove that $(z_n)_n$ is a Cauchy sequence. From the basic properties of any scalar product, we find

$\displaystyle \begin{aligned} \left|u+v\right|^2+\left|u-v\right|^2 &= \left\langle u+v,u+v\right\rangle + \left\langle u-v,u-v\right\rangle \\ &= \left|u\right|^2 + \left|v\right|^2 + 2\left\langle u,v\right\rangle + \left|u\right|^2 + \left|v\right|^2 - 2\left\langle u,v\right\rangle \\ &= 2 \left|u\right|^2 + 2 \left|v\right|^2. \end{aligned}$

In our case, this shows

$\displaystyle \begin{aligned} \left|z_n-z_m\right|^2 &= \left|(z_n-x)-(z_m-x)\right|^2 \\ &= 2\left|z_n-x\right|^2 + 2\left|z_m-x\right|^2- \left|(z_m+z_n)-2x\right|^2 \\ &= 2\left|z_n-x\right|^2 + 2\left|z_m-x\right|^2 - 4\left|\frac{z_m+z_n}2-x\right|^2. \end{aligned}$

Since $K$ is convex, $\frac12z_n+\frac12z_m\in K$. Therefore,

$\displaystyle\left|\frac{z_m+z_n}2-x\right|\geq\gamma.$

Thus,

$\displaystyle \begin{aligned} \lim_{n,m\to\infty}\left|z_n-z_m\right|^2&\leq 2\lim_{n\to\infty}\left|z_n-x\right|^2 + 2\lim_{m\to\infty}\left|z_m-x\right|^2 - 4\gamma^2 \\ &= 2\gamma^2+2\gamma^2-4\gamma^2 = 0. \end{aligned}$

Therefore, $(z_n)_n$ is a Cauchy sequence, having a limit $y$, say. As $K$ is closed, $y\in K$. In total, we have seen (noting that the absolute value is continuous)

$\displaystyle\gamma = \lim_{n\to\infty}\left|z_n-x\right| = \left|y-x\right|.$

$y$ is the best approximation to $x$ within $K$.

2nd step. The best approximation is unique.

If there were two of them, $u$ and $v$, say, then $\left|x-u\right| = \left|x-v\right| = \gamma$. If we consider the sequence $(z_n)_n$ that alternates between $u$ and $v$, we’d find $\left|x-z_n\right| = \gamma$ for all $n$, and hence $(z_n)_n$ is a Cauchy sequence by what we found in step 1. Therefore $(z_n)_n$ must be convergent, which implies $u=v$.

3rd step. The mapping $A:B_r(0) \to K$ that takes $x$ to its best approximation is continuous.

Let $(x_n)_n\subset B_r(0)$ with $\lim_{n\to\infty}x_n =: x$. Let $\varepsilon>0$. For sufficiently large $n$, we have

$\displaystyle \gamma_n :=\inf_{y\in K}\left|x_n-y\right| \leq \inf_{y\in K}\bigl(\left|x_n-x\right|+\left|x-y\right|\bigr) < \varepsilon + \inf_{y\in K}\left|x-y\right| = \varepsilon + \gamma.$

Besides,

$\displaystyle \gamma\leq\left|x - A(x_n)\right|\leq\left|x_n-A(x_n)\right| + \left|x_n-x\right| = \gamma_n + \left|x_n-x\right| < \gamma_n+\varepsilon.$

These inequalities give us

$\displaystyle \gamma\leq\left|x-A(x_n)\right| < 2\varepsilon + \gamma,\qquad\text{ which means }\lim_{n\to\infty}\left|x-A(x_n)\right| = \gamma.$

Hence, any accumulation point of $\bigl(A(x_n)\bigr)_n$ (at least one exists, $K$ being compact) is a best approximation to $x$. As the best approximation is unique, the whole sequence converges to it, and we have shown that $A$ is sequentially continuous.

4th step. The quest for the fixed point.

The mapping $f\circ A:B_r(0)\to K\subset B_r(0)$ is continuous. By Brouwer’s Fixed Point Theorem, $f\circ A$ has a fixed point: there is some $w\in B_r(0)$ with $f\bigl(A(w)\bigr) = w$. As $f$ only takes images in $K$, we must have $w\in K$. By construction, for points in $K$, the mapping $A$ does not do anything: hence

$\displaystyle w = f\bigl(A(w)\bigr) = f(w).$

q.e.d.
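
The construction in steps 1 to 3 is quite concrete. A minimal numerical sketch (the set $K=[0,1]^2$ and the test point are arbitrary choices): for a box, the best approximation is coordinate-wise clipping, and a random sample of $K$ confirms the minimizing property.

```python
import numpy as np

# K = [0,1]^2 is compact and convex; for a box, the best approximation
# (the mapping A of step 3) is coordinate-wise clipping.
def A(x):
    return np.clip(x, 0.0, 1.0)

x = np.array([1.7, -0.4])  # a point outside K
z = A(x)

# No sampled point of K comes closer to x than z does (step 1: z attains
# the infimum; step 2: it is the unique minimizer).
rng = np.random.default_rng(1)
samples = rng.random((100_000, 2))  # uniform sample of K
dists = np.linalg.norm(samples - x, axis=1)
assert np.linalg.norm(z - x) <= dists.min() + 1e-12

# A leaves points of K untouched (this is what step 4 exploits).
assert np.allclose(A(z), z)
```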

Corollary (on fixed points in convex sets of normed spaces): Let $E$ be a normed vector space and $x_1,\ldots,x_n\in E$, let $\emptyset\neq K\subset\mathrm{span}(x_1,\ldots,x_n)$ be convex and compact, and let $f:K\to K$ be continuous. Then $f$ has a fixed point.

Proof: Let us choose a basis for $\mathrm{span}(x_1,\ldots,x_n)$ from among the $x_1,\ldots,x_n$. We take w.l.o.g. $x_1,\ldots,x_p$ for a certain $p\leq n$. Then any $y\in\mathrm{span}(x_1,\ldots,x_n)$ has a unique representation $y=\sum_{j=1}^p\beta_jx_j$, and the mapping

$\displaystyle A:\mathrm{span}(x_1,\ldots,x_n)\to\mathbb{R}^p,\qquad y\mapsto(\beta_1,\ldots,\beta_p)$

is a bijection. As all norms on $\mathbb{R}^p$ are equivalent, convergence issues are not affected by this bijection. Hence, the theorem and its proof work out in the setting of this corollary, too. q.e.d.

Note that this Corollary may deal with an infinite-dimensional space $E$; however, we make use of a finite-dimensional subspace only. This will become relevant in Schauder’s Theorem as well.

Theorem (Arzelà 1895, Ascoli 1884): Let $X\subset\mathbb{R}^d$ be compact and let $\mathcal F$ be a family of continuous real-valued functions on $X$ which satisfies two properties:

• it is pointwise bounded: for any $x\in X$, there is some $M(x)\in\mathbb{R}$ with $\left|f(x)\right|\leq M(x)$, for all $f\in\mathcal F$.
• it is equicontinuous: for any $\varepsilon>0$ there is some $\delta>0$ such that for any $x,y\in X$ with $\left|x-y\right|<\delta$ we have $\left|f(x)-f(y)\right|<\varepsilon$, for all $f\in\mathcal{F}$.

Then, $\mathcal F$ is relatively compact, that means every sequence in $\mathcal F$ has a uniformly convergent subsequence.

Note that we do not demand the limit of the convergent subsequence to be contained in $\mathcal F$; that would mean compactness, instead of relative compactness.

Proof: 1st step. We get hold of a countable dense subset of $X$.

For our immediate uses of the theorem, it would suffice to choose $\mathbb{Q}^d\cap X$, since we will take $X$ to be intervals and there will not be any need for more exotic applications. However, to present something a little more general, have a look at the sets $\bigl(U_{1/k}(x)\bigr)_{x\in X}$. For fixed $k$, this is a covering of $X$ and, by compactness, finitely many of them suffice to cover $X$; let $M_k:=\{x_{k1},\ldots,x_{kn_k}\}$ be the corresponding centres. The set $M:=\bigcup_{k=1}^\infty M_k$ is countable. By construction, for any $x_0\in X$ and any $\varepsilon>0$, we can pick $k$ with $\frac1k<\varepsilon$ and find some point $y\in M_k\subset M$ with $\left|x_0-y\right|<\varepsilon$. Therefore, $M$ is dense in $X$.

2nd step. We construct a certain subsequence to a given sequence $(f_n)_n\subset\mathcal F$.

This step is at the heart of the Arzelà-Ascoli-Theorem, with a diagonal argument to make it work. Let us enumerate the set $M$ from step 1 as $\{x_1,x_2,\ldots\}$.

As $\mathcal F$ is pointwise bounded, the sequence $\bigl(f_n(x_1)\bigr)_n\subset\mathbb{R}$ is bounded as well. By Bolzano-Weierstrass, it has a convergent subsequence that we will call $\bigl(f_{1,n}(x_1)\bigr)_n$.

If we evaluate this new sequence in $x_2$, we arrive at $\bigl(f_{1,n}(x_2)\bigr)_n\subset\mathbb{R}$, which is bounded as well. Again, we find a convergent subsequence, now called $\bigl(f_{2,n}(x_2)\bigr)_n$. As $(f_{2,n})_n$ is a subsequence of $(f_{1,n})_n$, it converges in $x_1$ as well.

We continue this scheme and we find an array of sequences like this

$\displaystyle \begin{matrix} f_{11} & f_{12} & f_{13} & \cdots \\ f_{21} & f_{22} & f_{23} & \cdots \\ f_{31} & f_{32} & f_{33} & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{matrix}$

where each row is a subsequence of the row above. Row $k$ is convergent in the point $x_k$ by Bolzano-Weierstrass and convergent in the points $x_1,\ldots,x_{k-1}$ by construction.

Now, consider the diagonal sequence $(f_{nn})_n$. For every $k$, the tail $(f_{nn})_{n\geq k}$ is a subsequence of row $k$, hence converges in $x_k$; as $k$ was arbitrary, $(f_{nn})_n$ converges in any point of $M$.

3rd step. Our subsequence of the 2nd step converges uniformly on $X$. We will use equicontinuity to expand the convergence from $M$ to the whole of $X$.

As $\mathcal F$ is equicontinuous, we will find for any $\varepsilon>0$ some $\delta>0$ with $\left|f_{nn}(x)-f_{nn}(y)\right| < \frac\varepsilon3$, for all $n\in\mathbb{N}$, as long as $\left|x-y\right|<2\delta$. Since $X$ is compact, there are some points $y_1,\ldots,y_p\in X$ with $X\subset\bigcup_{j=1}^pU_\delta(y_j)$. And as $M$ is dense in $X$, we can find some $\xi_j\in U_\delta(y_j)\cap M$ for any $j=1,\ldots,p$.

Let $x\in U_\delta(y_j)$, then

$\displaystyle \left|x-\xi_j\right| \leq \left|x-y_j\right|+\left|y_j-\xi_j\right| < 2\delta,$

which shows

$\displaystyle \left| f_{nn}(x)-f_{nn}(\xi_j)\right| < \frac\varepsilon 3\qquad\text{ for any }n\in\mathbb{N}\text{ and }x\in X\cap U_\delta(y_j).$

We have already seen that $(f_{nn})_n$ is convergent on $M$, and hence (convergent sequences are Cauchy-sequences)

$\displaystyle \left|f_{nn}(\xi_j) - f_{mm}(\xi_j)\right| < \frac\varepsilon 3\qquad \text{ for sufficiently large }m,n\text{ and for any }j=1,\ldots,p.$

Now, let $x\in X$, no longer restricted. Then, there is some $j=1,\ldots,p$ such that $x\in U_\delta(y_j)$, and

$\displaystyle \begin{aligned} \left|f_{nn}(x)-f_{mm}(x)\right| &\leq \left| f_{nn}(x)-f_{nn}(\xi_j)\right| + \left|f_{nn}(\xi_j) - f_{mm}(\xi_j)\right| + \left|f_{mm}(\xi_j) - f_{mm}(x)\right| \\ &< \frac\varepsilon3+\frac\varepsilon3+\frac\varepsilon3 = \varepsilon. \end{aligned}$

Thus, $\left\|f_{nn}-f_{mm}\right\|_\infty < \varepsilon$ for sufficiently large $n,m$. Hence $(f_{nn})_n$ is a Cauchy sequence with respect to $\left\|\cdot\right\|_\infty$ and, since $\mathcal{C}(X)$ is complete, uniformly convergent. q.e.d.
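
To get a feeling for the two hypotheses, here is a small numerical sketch (the two families are arbitrary textbook-style choices): the shifted sines are uniformly Lipschitz and hence equicontinuous, while the powers $x^n$ on $[0,1]$ are not; indeed, the powers converge pointwise to a discontinuous limit, so no subsequence can converge uniformly.

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 1001)
delta = 1e-3

# Equicontinuous family: f_a(x) = sin(x + a). By the mean value theorem,
# |f_a(x) - f_a(y)| <= |x - y|, uniformly in the parameter a.
shifts = np.linspace(0.0, 2 * np.pi, 50)
osc_sines = max(abs(np.sin(x + a) - np.sin(x + delta + a))
                for a in shifts for x in xs)
assert osc_sines <= delta

# Not equicontinuous: f_n(x) = x^n. Over the same fixed gap delta, the
# oscillation near x = 1 approaches 1 instead of going to 0.
osc_powers = max(1.0 ** n - (1.0 - delta) ** n for n in range(1, 20001))
assert osc_powers > 0.99
```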

This was our last stepping stone towards Schauder’s Theorem. Let’s see what we can do.

Theorem (Schauder, 1930): Let $E$ be a normed vector space, let $\emptyset\neq K\subset E$ be convex and closed, and let $f:K\to K$ be continuous with $f(K)$ relatively compact. Then $f$ has a fixed point.

Proof: 1st step. As $f(K)$ is relatively compact, its closure is compact. We construct a finite approximating subset of $\overline{f(K)}$.

Let $\varepsilon>0$. There are finitely many points $x_1,\ldots,x_m\in\overline{f(K)}$ with $\overline{f(K)}\subset\bigcup_{j=1}^mU_\varepsilon(x_j)$. In particular, for any $x\in f(K)$, there is some $j=1,\ldots,m$ with $\left|x_j-x\right| < \varepsilon$. For $j=1,\ldots,m$, let us consider the functions, defined for $x\in f(K)$,

$\displaystyle \varphi_j(x):=\mathbf{1}_{\left|x_j-x\right|<\varepsilon}\bigl(\varepsilon-\left|x-x_j\right|\bigr).$

Each $\varphi_j$ is continuous, since $\varphi_j(x)=\max\bigl(0,\varepsilon-\left|x-x_j\right|\bigr)$, and as $\overline{f(K)}$ is covered by these $U_\varepsilon$,

$\displaystyle \varphi(x)=\sum_{j=1}^m\varphi_j(x) > 0.$

This allows $\psi_j(x):=\frac{\varphi_j(x)}{\varphi(x)}$ to be well-defined, and by construction $\psi(x)=\sum_{j=1}^m\psi_j(x) = 1$. Hence, the function $g:f(K)\to\mathrm{conv}(x_1,\ldots,x_m)$

$\displaystyle g(x):=\sum_{j=1}^m\psi_j(x) x_j$

is continuous (the Lemma on convex sets tells us that this actually maps into the convex hull). Now, let $x\in f(K)$. We find

$\displaystyle g(x)-x = \sum_{j=1}^m\psi_j(x)x_j - x = \sum_{j=1}^m\psi_j(x)\bigl(x_j-x\bigr) = \sum_{\substack{j=1\\\left|x_j-x\right|<\varepsilon}}^m\psi_j(x)\bigl(x_j-x\bigr),$

and therefore, for any $x\in f(K)$,

$\displaystyle \left|g(x)-x\right| \leq \sum_{\substack{j=1\\\left|x_j-x\right|<\varepsilon}}^m\psi_j(x)\left|x_j-x\right| < \varepsilon\sum_{j=1}^m\psi_j(x) = \varepsilon.$

This shows that $g$ approximates the identity uniformly on $f(K)$, while taking its values in $\mathrm{conv}(x_1,\ldots,x_m)$. Note that $g$ depends on the choice of $\varepsilon$.
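
This Schauder projection $g$ is very concrete and can be sketched numerically. Below, the role of $f(K)$ is played by a sample of the unit circle, and the $x_j$ are a coarse subsample forming an $\varepsilon$-net (all choices arbitrary):

```python
import numpy as np

eps = 0.3

# A sample cloud standing in for f(K): points on the unit circle.
ts = np.linspace(0.0, 2 * np.pi, 400)
cloud = np.column_stack([np.cos(ts), np.sin(ts)])

# The x_1, ..., x_m: every 10th point is a coarse eps-net of the cloud.
net = cloud[::10]

def g(x):
    """phi_j(x) = max(0, eps - |x - x_j|), normalized to a partition of
    unity psi_j; then g(x) = sum_j psi_j(x) * x_j."""
    phi = np.maximum(0.0, eps - np.linalg.norm(net - x, axis=1))
    return (phi / phi.sum()) @ net

# The estimate of step 1: |g(x) - x| < eps for every x of the cloud,
# although g takes values in the convex hull of just 40 points.
errors = [np.linalg.norm(g(x) - x) for x in cloud]
assert max(errors) < eps
```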

2nd step. Reference to the Theorem on fixed points in convex sets and approximation of the fixed point.

We set $h:=g\circ f$, which is a continuous mapping

$\displaystyle h:K\to\mathrm{conv}(x_1,\ldots,x_m)\subset K.$

We can restrict it to $\mathrm{conv}(x_1,\ldots,x_m)$ and then re-name it $\tilde h$ (the convex hull lies in $K$, since $K$ is convex and closed and $x_j\in\overline{f(K)}\subset K$). By the Lemma on convex sets, $\mathrm{conv}(x_1,\ldots,x_m)$ is compact, it lies in the finite-dimensional space $\mathrm{span}(x_1,\ldots,x_m)$, and by the Corollary on fixed points in convex sets of normed spaces, $\tilde h$ has a fixed point $z$:

$z = \tilde h(z) = g\bigl(f(z)\bigr)\qquad\text{ for some }z\in\mathrm{conv}(x_1,\ldots,x_m).$

Therefore,

$\displaystyle \left|f(z)-z\right| = \left|f(z)-g\bigl(f(z)\bigr)\right| < \varepsilon.$

Note that $z$ depends on $g$ and hence on $\varepsilon$.

3rd step. Construction of the fixed point.

For any $n\in\mathbb{N}$, by step 2 applied with $\varepsilon=\frac1n$, we find some $z_n\in\mathrm{conv}\bigl(x_1^{(n)},\ldots,x_{m(n)}^{(n)}\bigr)\subset K$ with

$\displaystyle\left|f(z_n)-z_n\right| < \frac1n.$

As $f(K)$ is relatively compact, the sequence $\bigl(f(z_n)\bigr)_n$ has a convergent subsequence: there is some $\tilde z\in\overline{f(K)}$ with $\tilde z=\lim_{k\to\infty}f(z_{n_k})$. As $K$ is closed, we get $\tilde z\in \overline{f(K)}\subset\overline K = K$. Now,

$\displaystyle\left|z_{n_k} - \tilde z\right| \leq \left|z_{n_k} - f(z_{n_k})\right| + \left|f(z_{n_k}) - \tilde z\right| < \frac1{n_k} + \left|f(z_{n_k}) - \tilde z\right|,$

where both terms on the right-hand side tend to $0$ as $k\to\infty$; hence $\tilde z = \lim_{k\to\infty}z_{n_k}$. Since $f$ is continuous, we arrive at

$\displaystyle f(\tilde z) = \lim_{k\to\infty}f(z_{n_k}) = \tilde z$. q.e.d.

It is apparent that Schauder’s Theorem already has very general conditions that are tough to weaken further. Obviously, the Theorem becomes false if $f$ is not continuous. If $K$ were not closed, we’d get the counter-example $f:(0,1)\to(0,1)$, $x\mapsto x^2$, which has no fixed point. If $K$ were not convex, we’d get the counter-example $f:\partial B_1(0)\to \partial B_1(0)$, $e^{it} \mapsto e^{i(t+\pi)}$. It is hard to give a counter-example if $f(K)$ is not relatively compact – in fact, I would be interested to hear of any such counter-example or of the generalization of Schauder’s Theorem to such cases. Which is the most general such fixed point theorem?

Now, we are able to harvest the ideas of all this work and apply it to differential equations. Usually, in courses on ordinary differential equations, the famous Picard-Lindelöf-Theorem is proved, which states that for well-behaved functions $f$ (meaning that they satisfy a Lipschitz-condition), the initial-value problem

$y'(x) = f\bigl(x,y(x)\bigr),\qquad y(x_0) = y_0,$

has a unique solution. This is a powerful theorem which simplifies large parts of the theory of differential equations. However, a little more holds true: it suffices that $f$ is continuous to guarantee a solution; uniqueness, however, is lost in general. While in many applications one can assume continuity of $f$ without remorse (especially in physics), a Lipschitz-condition is much harder to justify. This is not to diminish the usefulness of Picard and Lindelöf, as any model has assumptions to be justified – the Lipschitz-condition is just one of them (if one even bothers to demand a proper justification of existence and uniqueness – sometimes this would seem obvious from the start).
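
A standard example of this loss of uniqueness, checked numerically below (the right-hand side $f(x,y)=2\sqrt{|y|}$ is an arbitrary textbook choice): it is continuous but not Lipschitz near $y=0$, and the initial value problem with $y(0)=0$ has at least the two solutions $y\equiv0$ and $y=x^2$.

```python
import numpy as np

def f(x, y):
    # continuous, but not Lipschitz in y at y = 0
    return 2.0 * np.sqrt(np.abs(y))

xs = np.linspace(0.0, 1.0, 101)

# candidate 1: y = 0 has y' = 0 = f(x, 0) everywhere
assert np.allclose(0.0 * xs, f(xs, 0.0 * xs))

# candidate 2: y = x^2 has y' = 2x = 2*sqrt(x^2) = f(x, x^2) for x >= 0
assert np.allclose(2.0 * xs, f(xs, xs ** 2))
```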

Let us have a look at what Peano told us:

Theorem (Peano, 1886/1890): Let $f:R\to\mathbb{R}$ be continuous, where

$\displaystyle R:=\bigl\{(x,y)\in\mathbb{R}^2\colon \left|x-x_0\right| \leq a, \left|y-y_0\right|\leq b\bigr\},$

let $M:=\max_{(x,y)\in R}\left|f(x,y)\right|$, $\alpha := \min\bigl(a, \frac bM\bigr)$.

Then, the initial value problem $y'(x)= f\bigl(x,y(x)\bigr)$, $y(x_0)=y_0$, has a solution on the interval $[x_0-\alpha, x_0+\alpha]$.

Concerning the interval on which we claim the solution to exist, have a look at how such a solution $y$ might behave: as we vary $x$, the solution may “leave” $R$ either through the vertical bounds (to the left/right) or through the horizontal bounds (up/down). A solution $y$ can at most have a slope of $\pm M$, and thus, if it leaves through the horizontal bounds, this will happen at $x_0\pm\frac bM$ at the earliest. If it doesn’t leave there, it will exist until $x_0\pm a$. Of course, it might exist even beyond that, but we have only demanded $f$ to be defined up to there. A little more formally, the mean value theorem tells us

$\displaystyle \left|y(x)-y_0\right| = \left|y'(\xi)\right|\left|x-x_0\right| = \left|f\bigl(\xi,y(\xi)\bigr)\right|\left|x-x_0\right| \leq M\left|x-x_0\right| \leq M\alpha\leq b.$

This guarantees that the graph of the solution $y$ stays inside $R$, where $f$ is defined.
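
For a concrete feeling of the quantities involved, a small sketch (the right-hand side and the rectangle are arbitrary choices): for $f(x,y)=x^2+y^2$ on $R=[-1,1]\times[-1,1]$ with $x_0=y_0=0$, the maximum is $M=2$, so the guaranteed interval has radius $\alpha=\min(1,\frac12)=\frac12$, shorter than $a=1$.

```python
import numpy as np

a, b = 1.0, 1.0                       # half-widths of the rectangle R
xs = np.linspace(-a, a, 201)          # grids containing the corners
ys = np.linspace(-b, b, 201)
X, Y = np.meshgrid(xs, ys)

M = np.max(np.abs(X ** 2 + Y ** 2))   # attained at the corners: M = 2
alpha = min(a, b / M)                 # radius of the guaranteed interval

assert np.isclose(M, 2.0)
assert np.isclose(alpha, 0.5)
```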

Proof: 0th step. To simplify notation, let us set

$\displaystyle \begin{aligned} J&:=[x_0-\alpha, x_0+\alpha],\\ \mathcal{C}(J)&:=\bigl\{f:J\to\mathbb{R}\text{ continuous}\bigr\},\\ K&:=\bigl\{y\in\mathcal{C}(J)\colon \left|y(x)-y_0\right|\leq b\text{ for any }x\in J\bigr\}. \end{aligned}$

1st step. We twist the problem to another equivalent shape, making it more accessible to our tools.

First, let $y(x)$ be a solution to the initial value problem on a sub-interval $I\subset J$. Then, for any $x\in I$,

$\displaystyle y'(x) = f\bigl(x,y(x)\bigr),\quad y(x_0)=y_0,$

and hence

$\displaystyle y(x) = y_0 + \int_{x_0}^x f\bigl(t,y(t)\bigr)dt.\qquad (\heartsuit)$

On the other hand, if we start from this equation and suppose that it holds for any $x\in I$ for a continuous $y$, then, by the fundamental theorem of calculus, $y$ must be differentiable with $y'(x) = f\bigl(x,y(x)\bigr)$ and $y(x_0)=y_0$.

We have seen that a function $y$ solves the initial value problem on $J$ if and only if it satisfies the equation $(\heartsuit)$ on $J$.

2nd step. We try to give a representation of the problem as a fixed-point-problem.

Let us consider the mapping

$\displaystyle A:K\to\mathcal{C}(J),~ y\mapsto y_0 + \int_{x_0}^{\,\cdot\,} f\bigl(t,y(t)\bigr)dt.$

This is an operator: we plug in a continuous function and get a continuous function back. In particular, and to make it even more painfully obvious,

$\displaystyle (Ay)(x) = y_0+\int_{x_0}^xf\bigl(t,y(t)\bigr)dt,\qquad\text{for any }x\in J$.

Therefore, by step 1, $y$ is a solution to the initial value problem if and only if it is a fixed point of $A$, meaning $Ay=y$.

3rd step. We show that $A$ maps $K$ to itself. We have defined $A$ only on $K$, so let $y\in K$ and $x\in J$; then:

$\displaystyle \left|(Ay)(x)-y_0\right| = \left|\int_{x_0}^x f\bigl(t,y(t)\bigr) dt\right| \leq M\left|x-x_0\right| \leq \alpha M\leq b.$

The second-to-last inequality follows from $x\in J$, the last one from the definition of $\alpha$.

This shows that $Ay\in K$.
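
The operator $A$ is straightforward to discretize. Our proof does not iterate $A$ (Schauder does the work), but for a Lipschitz right-hand side the iterates of $A$ do converge to the fixed point (this is the Picard-Lindelöf situation); a sketch with the arbitrary choice $y'=y$, $y(0)=1$, whose exact solution is $e^x$:

```python
import numpy as np

x0, y0 = 0.0, 1.0
xs = np.linspace(x0, 0.5, 501)    # the interval J, discretized

def f(x, y):
    return y                      # right-hand side; exact solution: exp(x)

def A(y_vals):
    """(Ay)(x) = y0 + int_{x0}^x f(t, y(t)) dt, the integral approximated
    by the cumulative trapezoidal rule on the grid."""
    integrand = f(xs, y_vals)
    steps = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(xs)
    return y0 + np.concatenate(([0.0], np.cumsum(steps)))

y = np.full_like(xs, y0)          # start with the constant function y0 (in K)
for _ in range(30):               # iterate the operator
    y = A(y)

# the discrete fixed point reproduces exp(x) up to quadrature error
assert np.max(np.abs(y - np.exp(xs))) < 1e-6
```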

4th step. $K\neq\emptyset$ is obvious, as the constant function $y_0$ is in $K$.

5th step. $K$ is convex. Let $f,g\in K$ and let $\beta\in[0,1]$. Then, for any $x\in J$,

$\displaystyle \begin{aligned} \left|(1-\beta)f(x)+\beta g(x) - y_0\right| &= \left|(1-\beta)\bigl(f(x)-y_0\bigr) + \beta\bigl(g(x)-y_0\bigr)\right| \\ &\leq (1-\beta)\left|f(x)-y_0\right| + \beta\left|g(x)-y_0\right| \\ &\leq (1-\beta)b+\beta b = b. \end{aligned}$

This proves $(1-\beta)f+\beta g\in K$.

6th step. $K$ is a closed set in $\mathcal{C}(J)$, where we use the topology of uniform convergence.

Consider a sequence $(f_n)_n\subset K$ which converges uniformly to some $f\in\mathcal{C}(J)$ (the uniform limit of continuous functions is continuous, so the limit belongs to $\mathcal{C}(J)$ indeed). Then, for any $x\in J$,

$\displaystyle\left|f(x)-y_0\right| = \left|\lim_{n\to\infty}f_n(x)-y_0\right| = \lim_{n\to\infty}\left|f_n(x)-y_0\right| \leq \lim_{n\to\infty} b = b.$

This shows that $f\in K$.

7th step. Using the topology of uniform convergence, the mapping $A:K\to K$ is continuous.

Let $\varepsilon>0$. The function $f$ is continuous on the compact set $R$ and hence uniformly continuous. Therefore, there is some $\delta>0$ such that for all $(t,u),(t,v)\in R$ with $\left|u-v\right|<\delta$,

$\displaystyle \left|f(t,u)-f(t,v)\right|<\frac\varepsilon\alpha.$

Now, let $y,z\in K$ with $\left\|y-z\right\|_\infty < \delta$. Then we have just seen that for any $t\in J$

$\displaystyle \left|f\bigl(t,y(t)\bigr)-f\bigl(t,z(t)\bigr)\right| < \frac\varepsilon\alpha.$

That yields

$\displaystyle \begin{aligned} \left|(Ay)(x)-(Az)(x)\right| &= \left|\int_{x_0}^xf\bigl(t,y(t)\bigr)dt - \int_{x_0}^xf\bigl(t,z(t)\bigr)dt\right| \\ &\leq \left|x-x_0\right|\frac\varepsilon\alpha\\ &\leq\varepsilon. \end{aligned}$

In particular,

$\displaystyle\left\|Ay-Az\right\|_\infty \leq \varepsilon.$

8th step. The set $A(K)\subset K$ is relatively compact. Note that $A(K)$ is a set of continuous functions.

Let $y\in K$ and let $x,x_1,x_2\in J$. Then, every function of $A(K)$ is pointwise bounded (even uniformly so), since

$\displaystyle \left|(Ay)(x)\right| = \left|y_0+\int_{x_0}^xf\bigl(t,y(t)\bigr)dt\right|\leq \left|y_0\right|+\left|x-x_0\right|M \leq \left|y_0\right|+\alpha M.$

Besides, $A(K)$ is equicontinuous, because of

$\displaystyle \begin{aligned} \left|(Ay)(x_1)-(Ay)(x_2)\right| &= \left|\int_{x_0}^{x_1}f\bigl(t,y(t)\bigr)dt - \int_{x_0}^{x_2}f\bigl(t,y(t)\bigr)dt\right| \\ &= \left|\int_{x_1}^{x_2}f\bigl(t,y(t)\bigr) dt\right| \\ &\leq \left|x_1-x_2\right| M. \end{aligned}$

Arzelà and Ascoli now tell us that any sequence in $A(K)$ has a uniformly convergent subsequence.

9th and final step. From Schauder’s Fixed Point Theorem and steps 3 to 8, $A$ has a fixed point in $K$. From step 2, the initial value problem has a solution. q.e.d.

There was a lot of technical work that we needed only to invoke Schauder’s Theorem. Some of this could have been avoided if we had a more elementary proof of Peano’s Theorem. Such a proof exists; however, some of our machinery is still needed, so the proof cannot honestly be called elementary. In some way, it matches our procedure given above, but not everything is needed in such a fine manner. Let’s have a short look at it; this is taken from Walter’s book.

Proof (Peano’s Theorem in a more elementary fashion): We proceed in two parts. First, we shall prove the weaker statement that if $f$ is continuous and bounded on the (non-compact) set $[x_0,x_0+a]\times\mathbb{R}$, then there is a solution to the initial-value problem on $[x_0,x_0+a]$. Afterwards, we extend this to our compact set $R$. We won’t deal with extending the solution to the left of $x_0$, as it’s neither important nor difficult. In the previous proof we didn’t need to bother about this.

1st step. Let us define a function on $[x_0,x_0+a]$ using some parameter $\alpha\in(0,a]$ by

$\displaystyle z_\alpha(x) = y_0\mathbf{1}_{x\leq x_0} + \left(y_0+\int_{x_0}^xf\bigl(t,z_\alpha(t-\alpha)\bigr)dt\right)\mathbf{1}_{x>x_0}.$

This is well-defined, since on the sub-interval $[x_0+k\alpha, x_0+(k+1)\alpha)$ we have $t-\alpha \in[x_0+(k-1)\alpha, x_0+k\alpha)$, and thus $z_\alpha(t-\alpha)$ has been recursively defined; hence $z_\alpha$ is defined as well.

Let us denote $\mathcal{F}:=\bigl\{z_\alpha\colon\alpha\in(0,a]\bigr\}\subset\mathcal{C}\bigl([x_0,x_0+a]\bigr).$
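
The construction of $z_\alpha$ is an algorithm: sweeping from left to right, the delayed value $z_\alpha(t-\alpha)$ is always already known. A sketch with the arbitrary choices $f(x,y)=y$, $y_0=1$ on $[0,1]$ and a simple left rectangle rule for the integral; the undelayed problem has the exact solution $e^x$, to which $z_\alpha$ comes close for small $\alpha$:

```python
import numpy as np

x0, y0, a = 0.0, 1.0, 1.0

def f(t, y):
    return y                        # test right-hand side

def z(alpha, h=1e-3):
    """z(x) = y0 for x <= x0 and z(x) = y0 + int_{x0}^x f(t, z(t-alpha)) dt
    for x > x0, computed on a grid with the left rectangle rule."""
    n = int(round(a / h))
    lag = int(round(alpha / h))
    zs = np.empty(n + 1)
    zs[0] = y0
    for i in range(n):
        z_delayed = y0 if i < lag else zs[i - lag]   # z(t - alpha)
        zs[i + 1] = zs[i] + h * f(x0 + i * h, z_delayed)
    return zs

zs = z(alpha=0.01)
xs = np.linspace(x0, x0 + a, len(zs))

# for small alpha, z_alpha already approximates the solution exp(x)
assert np.max(np.abs(zs - np.exp(xs))) < 0.05
```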

2nd step. $\mathcal F$ is equicontinuous. Let $\varepsilon > 0$ and let $x_1,x_2\in[x_0,x_0+a]$. Then we get, as $f$ is bounded by some $M$,

$\displaystyle \left|z_\alpha(x_1)-z_\alpha(x_2)\right| = \left|\int_{x_1}^{x_2} f\bigl(t, z_\alpha(t-\alpha)\bigr)dt\right| \leq \left|x_2-x_1\right|M,$

which doesn’t depend on $\alpha$, $x_1$ or $x_2$ (only on their distance). Hence, if $\left|x_1-x_2\right|<\delta = \frac\varepsilon M$,

$\displaystyle \left|z_\alpha(x_1)-z_\alpha(x_2)\right| \leq \varepsilon.$

3rd step. $\mathcal F$ is pointwise bounded. This is obvious from $\left|z_\alpha(x)\right|\leq\left|y_0\right|+aM$, which doesn’t depend on $\alpha$ or $x$.

4th step. We determine a solution to the initial-value problem.

From steps 2 and 3 and from Arzelà-Ascoli, we know that the sequence $(z_{1/n})_n\subset\mathcal{F}$ has a uniformly convergent subsequence $(z_{1/n_k})_k$. Let us denote its limit by $y$; it is defined for all $x\in[x_0,x_0+a]$. This allows us to get

$\displaystyle \begin{aligned} \left|z_{1/n_k}\left(t-\frac1{n_k}\right) - y(t)\right| &\leq \left| z_{1/n_k}\left(t-\frac1{n_k}\right) - z_{1/n_k}(t)\right| + \left|z_{1/n_k}(t) - y(t)\right| \\ &\leq \frac M{n_k} + \varepsilon\\ &< \overline\delta, \end{aligned}$

for any $t\in[x_0,x_0+a]$ and for sufficiently large $k$. It should be clear what we intend to say with $\overline\delta$ (let’s bring in a little sloppiness here, shall we). Since $f$ is continuous in its second component, this proves

$\displaystyle \left|f\left(t, z_{1/n_k}\left(t-\frac1{n_k}\right)\right) - f\bigl(t,y(t)\bigr)\right| < \overline\varepsilon\qquad\text{for any }t\in[x_0,x_0+a].$

Hence, as every participant here converges uniformly,

$\displaystyle \begin{aligned} y(x) &= \lim_{k\to\infty}z_{1/n_k}(x) \\ &= \lim_{k\to\infty}\left(y_0 + \int_{x_0}^x f\left(t, z_{1/n_k}\left(t-\frac1{n_k}\right)\right)dt\right)\\ &= y_0 + \int_{x_0}^x \lim_{k\to\infty} f\left(t, z_{1/n_k}\left(t-\frac1{n_k}\right)\right) dt\\ &= y_0 + \int_{x_0}^x f\bigl(t, y(t)\bigr) dt. \end{aligned}$

This shows that

$y'(x) = f\bigl(x, y(x)\bigr),\qquad y(x_0) = y_0.$

5th step. Extension to the general case: Let $f$ be defined on the compact rectangle $R$.

We give a continuation of $f$ beyond $[y_0-b,y_0+b]$ for all $x\in[x_0-a, x_0+a]$ via

$\displaystyle \tilde f(x,y) = \begin{cases}f(x, y_0-b), &\text{for }y < y_0-b,\\ f(x,y), &\text{for }(x,y)\in R,\\ f(x,y_0+b), &\text{for }y>y_0+b.\end{cases}$

Obviously, $\tilde f$ is continuous and bounded (by the same $M$ as before). From the first part of the proof, $y' = \tilde f(x, y)$, $y(x_0)=y_0$, has a solution on $[x_0, x_0+a]$. For $\left|x-x_0\right|\leq\frac bM$, we get

$\displaystyle \begin{aligned} \left|y(x) - y_0\right| &\leq\left|y'(\xi)\right|\left|x-x_0\right| \\ &= \left|\tilde f\bigl(\xi,y(\xi)\bigr)\right|\left|x-x_0\right| \\ &\leq M\left|x-x_0\right|\leq b. \end{aligned}$

Therefore, the solution stays inside $[y_0-b,y_0+b]$, where $\tilde f$ coincides with $f$; hence it solves $y'=f(x,y)$ there, and the solution is guaranteed to exist for $\left|x-x_0\right|\leq\alpha = \min\bigl(a,\frac bM\bigr)$. q.e.d.
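
The continuation in step 5 is a one-liner in code; a minimal sketch (the right-hand side is an arbitrary continuous choice):

```python
import numpy as np

x0, y0, a, b = 0.0, 0.0, 1.0, 1.0

def f(x, y):
    return x + y ** 2        # any continuous f on R (arbitrary choice)

def f_tilde(x, y):
    """Step 5's continuation: freeze f in y outside [y0-b, y0+b]."""
    return f(x, np.clip(y, y0 - b, y0 + b))

# f_tilde agrees with f on R ...
assert f_tilde(0.5, 0.5) == f(0.5, 0.5)
# ... and outside R it takes the boundary values, so it stays bounded:
assert f_tilde(0.5, 100.0) == f(0.5, y0 + b)
assert f_tilde(0.5, -100.0) == f(0.5, y0 - b)
```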

As a sort of last-minute addendum, I have stumbled upon two articles from the 1970s that shed some more light on the issue of elementary proofs of Peano’s Theorem which completely avoid the technicalities of Schauder’s Theorem and of Arzelà-Ascoli. One is called “There is an Elementary Proof of Peano’s Existence Theorem” by Wolfgang Walter (the author of the book we cited earlier; Amer. Math. Monthly 78 (1971), 170–173), the other is “On Elementary Proofs of Peano’s Existence Theorems” by Johann Walter (Amer. Math. Monthly 80 (1973), 282–286). The question of whether Arzelà-Ascoli can be avoided is answered positively by both papers: they give proofs of Peano’s Theorem which only use standard calculus methods. Basically, they employ the Euler polygon method to construct a solution of the initial value problem. However, again, the proofs are not constructive. Besides, “elementary” is not to be confused with “easy”: Peano’s Theorem is still nothing that lies directly on the surface of things. A brief look at the second of those articles (to the best of my knowledge, the identical names of the authors are a coincidence) raises hope that this proof is actually not too hard – it should be understandable with a lot less effort than the proof via Schauder’s Theorem that we gave above in full detail; remember that Schauder’s Theorem itself required many non-standard theorems on its way. The elementary proof will only work for one-dimensional differential equations, but we bothered only with those anyway; it uses monotonicity of its approximating sequence, which is only applicable in the real numbers. On the plus side, the proof explicitly constructs a solution via the Euler method.

The papers also shed some light on the history of Peano’s Theorem and the quest for its proof (together with some rather unusual disagreement on whether an earlier proof is valid or not; some interesting lines to read in passing). This should be enough on this matter for now. If the interest holds up (which is, to this extent, rather unlikely), we’ll return to it. But not for now.


# What is a Brachistochrone?

The cycloid was a core object of mathematical studies during the development of calculus and, before that, for geometry. It arises as a special case of curves in astronomy and it has been used as a challenge for competitors during the rise of the analytic method.

Let us have a look at its definition first, and at what it describes heuristically. The cycloid is the planar curve parametrized via

$\displaystyle\begin{pmatrix}x(t)\\y(t)\end{pmatrix} = \begin{pmatrix}r(t-\sin t)\\r(1-\cos t)\end{pmatrix},\qquad t\in[0,2\pi], \quad r>0.$

It is the curve of a point on the periphery of a circle that is rolled along the $x$-axis. The parametrization follows like this:

The circle rolls along the $x$-axis without slipping, therefore the angle $\sphericalangle PMD = t$. As $\sin t = \frac{PD}{r}$, we get $BA = r\sin t$. Now, the $x$-component of $P$ is

$\displaystyle x = OB = OA-BA = rt-r\sin t = r(t-\sin t).$

For the $y$-component of $P$, we have $BP = AD = AM-DM = r- r\cos t$, and thus

$\displaystyle y = BP = r(1-\cos t).$
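As a quick sanity check of this parametrization, one can verify numerically that after half a turn ($t=\pi$) the point sits on top of the circle, and that after a full turn ($t=2\pi$) it has returned to the axis, having advanced by the circumference:

```python
import math

# A point on the periphery of a circle of radius r rolling along the x-axis
# traces the cycloid (x, y) = (r(t - sin t), r(1 - cos t)).
def cycloid(t, r=1.0):
    return r * (t - math.sin(t)), r * (1 - math.cos(t))

# Half a turn: the point sits on top of the circle, at (pi*r, 2r).
x_top, y_top = cycloid(math.pi)

# Full turn: the point is back on the axis, shifted by the circumference 2*pi*r.
x_end, y_end = cycloid(2 * math.pi)
```

Both checks come out exactly as the rolling picture predicts.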

The cycloid can be considered as a special case of the epicycloid. Those were a matter of interest in the pre-Kepler era, when astronomers tried to explain the motion of the planets in the night sky. As they admitted only perfect circles as orbits (as opposed to the ellipses that they actually are), and as they postulated the Earth to be in the center of all those orbits, it was tricky to account for the observed variations in arc speed and the loops that the planets sometimes take. The solution was to imagine the planets circling around Earth, but on this circle sat the center of another circle, on which the planets moved. Thus, the planets travelled on an epicycle; and sometimes one of those wasn’t enough (“saving the phenomena”).

A first simplification of this theory came from Copernicus who dropped the assumption that the Earth would be in the center of all things, but who didn’t get rid of the perfect circles. For the final resolution, the world had to wait for Kepler and Tycho Brahe. But anyway, that’s not why we’re here.

The cycloid is the curve along which a point of mass travels the quickest if it just slides along it, drawn by gravitation only (friction disregarded); in Greek, such a curve is called a “brachistochrone”. It is remarkable that this quickest path is not the shortest path – there are quicker ways than a straight line. There is some sort of trade-off between gaining speed quickly and keeping the path sufficiently short. We will look at two different approaches to prove that the cycloid can do this trick.

Another remarkable property of the cycloid is being the “tautochrone”: points of mass placed anywhere on this curve will reach its lowest point in exactly the same amount of time. Points that are farther away will gain more speed in order to close the distance. This is a highly interesting property for building a pendulum: no matter how big the amplitude, the frequency will always be the same. This, in turn, is the core feature of an exact clock, which was a sort of holy grail for scientists during the 17th century (not just for ship navigation). This property was discovered by Huygens, who had no calculus methods at his disposal for this (his solution is hidden in quite cumbersome geometry).

More on this curve and some very nice experiments may be seen in this YouTube video from the highly interesting channel Vsauce. I especially love the excitement of both guys when they actually see these properties of the cycloid curve in action.

The brachistochrone problem was posed by Johann Bernoulli in a journal as a challenge to the most enlightened mathematicians of the world (“acutissimis mathematicis qui toto orbe florent”). We will see his very elegant approach right below. His brother Jacob found a more general approach, but his train of thought is much more cumbersome – we will see a modernized simplification of this later. Both brothers engaged in an unfriendly competition by posing problems like this one to each other, always hoping for each other’s errors to gloat over. In retrospect, both of them advanced the applications of calculus as it was conceived; note, however, that very many of the things named after Bernoulli (Bernoulli numbers, the Bernoulli distribution, the Law of Large Numbers) have come from Jacob, not from Johann. The other enlightened mathematicians of the time also found the solution, particularly Leibniz and Newton, who both are said to have found it in a matter of a few hours, and both of whom appreciated the beauty of the problem.

Now, let’s see how Johann came to his solution. We will look at some physical properties first.

The Speed Lemma: Consider a point of mass $m$ that travels without friction along any sort of curve in $\mathbb{R}^2$, the only force acting on it being gravitation. Let $g$ be the gravitational acceleration at the Earth’s surface. Then, when it has descended through height $h$, its speed is $v = \sqrt{2gh}$.

Proof: As physics tells us, the sum of kinetic and potential energy is constant. One may prove this mathematically by doing very basic integration and thinking of Newton’s second axiom (the one with force, mass and acceleration); we won’t go into this. Now, the kinetic energy is $\frac12mv^2$ (for physicists that’s the definition, for mathematicians that’s an easy lemma), while the potential energy is $mgh$. We set our zero-level such that the potential energy vanishes when the point of mass has descended through height $h$. By our set-up, the point of mass has no speed in the beginning and hence no kinetic energy. We have found

$\displaystyle 0 + mgh = E_{\mathrm{kin}}^{\mathrm{start}} + E_{\mathrm{pot}}^{\mathrm{start}} = E_{\mathrm{kin}}^{\mathrm{end}} + E_{\mathrm{pot}}^{\mathrm{end}} = \frac12mv^2 + 0,$

which means

$\displaystyle v^2 = 2gh,$

which was to be shown. q.e.d.

One might wonder if there is some problem here in that the speed formula does not depend on the kind of curve the point of mass moves on. Indeed, without friction there is no problem. One can argue in an entirely different way, decomposing the gravitational force into a force directed along the (derivative of the) curve and a normal force orthogonal to it. This decomposed force is of course smaller than gravitation and hence imparts less acceleration to our point. In turn, one can compute the time it takes the point to travel down to height $h$, and the speed that it has gained by then. As physics is consistent in itself (surprise!), we arrive at the same result that we gained via kinetic and potential energy. Not being a physicist, I can’t tell with certainty if this connection just stems from a little proof that I didn’t see, or if this is some sort of recognition that the world actually behaves responsibly and rationally. I won’t even start to question this here.

The Time Lemma: Consider the same setting as in the Speed Lemma. On top of that, let the curve on which our point travels be given by a differentiable function $y = f(x)$. Let the point travel from $(0,0)$ to $(b,d)$. The time it takes for this is

$\displaystyle T = \int_0^b\sqrt{\frac{1+(f'(x))^2}{2gf(x)}}\mathrm{d}x.$

Proof (by a little hand-waving): Consider any point $(x,y)$ on the curve, with $x\in(0,b)$. The infinitesimal time our point of mass spends in $(x,y)$ is

$\displaystyle \mathrm{d}t = \frac{\mathrm{d}s}{v(x)} = \frac{\sqrt{(\mathrm{d}x)^2 + (\mathrm{d}y)^2}}{\sqrt{2gf(x)}} = \sqrt{\frac{1+(\mathrm{d}y/\mathrm{d}x)^2}{2gf(x)}}\mathrm{d}x.$

Taking a leap of faith and integrating this (which is supposed to amount to the sum of all such infinitesimal times) gives

$\displaystyle T = \int_0^b \sqrt{\frac{1+(f'(x))^2}{2gf(x)}}\mathrm{d}x.$

In a post on physical interpretations of mathematics, a little physical computation can’t be too wrong now, can it. q.e.d.
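To see the Time Lemma in action, here is a small numerical sketch (the values $g=9.81$ and the endpoints $(0,0)$ and $(\pi,2)$, with $y$ measured downwards, are purely illustrative assumptions): the straight line between the two points takes longer than the cycloid arch through them, whose descent time $\pi\sqrt{r/g}$ follows from the parametric form of the time integral.

```python
import math

g = 9.81  # assumed gravitational acceleration; y is measured downwards

# Time Lemma: T = integral from 0 to b of sqrt((1 + f'(x)^2) / (2 g f(x))) dx.
def travel_time(f, fprime, b, n=200_000):
    """Midpoint rule; the integrand is singular at x = 0, but integrably so."""
    h = b / n
    return sum(
        math.sqrt((1 + fprime((i + 0.5) * h) ** 2) / (2 * g * f((i + 0.5) * h))) * h
        for i in range(n)
    )

# Straight line from (0,0) to (pi, 2): f(x) = (2/pi) x.
T_line = travel_time(lambda x: 2 / math.pi * x, lambda x: 2 / math.pi, math.pi)

# The cycloid with r = 1 passes through the same endpoints at t = pi; along it
# ds / v = sqrt(r/g) dt, so the descent takes T = pi * sqrt(r/g) exactly.
T_cycloid = math.pi * math.sqrt(1.0 / g)
```

Despite being longer, the cycloid path is traversed in about $1.00$ seconds, against about $1.19$ seconds for the straight line.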

The Reflection Principle: Consider a ray of light travelling in $\mathbb{R}^2$ from point $(0,h_1)$ to point $(a,h_2)$, being reflected somewhere on the $x$-axis. The resulting angles of reflection $\alpha$ and $\beta$ are equal.

Proof: The underlying physical principle is to choose the line of minimal length for the reflection. A mathematician would put this as an axiom, a physicist will consider this granted by the way that nature behaves. Let’s go with it: the length of the chosen path is, as long as the ray of light is reflected in the point $(x,0)$,

$\displaystyle L(x) = \sqrt{x^2+h_1^2} + \sqrt{(x-a)^2+h_2^2},$

hence we look for some $x$ with

$\displaystyle 0 = L'(x) = \frac{x}{\sqrt{x^2+h_1^2}} + \frac{x-a}{\sqrt{(x-a)^2+h_2^2}} = \sin\alpha - \sin\beta.$

As $\alpha, \beta\in(0,\frac\pi2)$ for obvious reasons, this is the assertion. q.e.d.

The Refraction Lemma: Consider a ray of light changing the medium in which it travels. Let the speeds of light in those media be $c_1$ and $c_2$. The resulting angles of refraction have a constant proportion:

$\displaystyle\frac{\sin\alpha}{\sin\beta} = \frac{c_1}{c_2}.$

Proof: Now, the speed of light gets relevant and the physical principle is to find the path of minimal time. By the basic laws on time and speed we get

$\displaystyle T(x) = \frac{\sqrt{x^2+h_1^2}}{c_1} + \frac{\sqrt{(x-a)^2+h_2^2}}{c_2}$

and we look for some $x$ with

$\displaystyle 0 = T'(x) = \frac1{c_1}\frac{x}{\sqrt{x^2+h_1^2}} + \frac1{c_2}\frac{x-a}{\sqrt{(x-a)^2+h_2^2}} = \frac1{c_1}\sin\alpha - \frac1{c_2}\sin\beta.$

Setting $T'(x)=0$ yields $\frac{\sin\alpha}{c_1} = \frac{\sin\beta}{c_2}$, which is the assertion. q.e.d.
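Numerically, one can check this law of refraction by minimizing the travel time $T(x)$ directly; a sketch with hypothetical values for the heights and speeds (ternary search applies since $T$ is a sum of convex functions and hence convex):

```python
import math

# Hypothetical set-up: light travels from (0, h1) to (a, -h2), crossing the
# x-axis at (x, 0), with speed c1 above the axis and c2 below it.
h1, h2, a, c1, c2 = 1.0, 1.0, 2.0, 1.0, 0.75

def T(x):
    return math.hypot(x, h1) / c1 + math.hypot(x - a, h2) / c2

# T is convex, so ternary search finds the time-minimizing crossing point.
lo, hi = 0.0, a
for _ in range(200):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if T(m1) < T(m2):
        hi = m2
    else:
        lo = m1
x = (lo + hi) / 2

sin_alpha = x / math.hypot(x, h1)
sin_beta = (a - x) / math.hypot(a - x, h2)
# At the minimizer, sin(alpha) / sin(beta) equals c1 / c2, as the lemma states.
```

With the slower medium below ($c_2 < c_1$), the minimizer bends the ray towards the normal, exactly as the constant proportion predicts.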

We have all ingredients to follow Johann Bernoulli’s idea to find the brachistochrone now. The basic question is, what is the quickest path for a point of mass to take, if it is to travel from one point in the plane, $(0,0)$ say, to another one $(b,d)$? Johann’s ingenious idea was to compare this to the path that a ray of light will take – as we have postulated, the ray of light will choose the quickest path as well. The acceleration may stem from gravitation or the path may result from the change of the media, but the aim is the same; as Bernoulli wrote: “who would deny us to replace one approach by the other?”

Hence, let us consider a “continuous” change of media, for instance by taking a limit of ever finer layers of media for the ray of light to traverse. As the Refraction Lemma showed, we will get a constant quotient $\frac{\sin\alpha}{v}$. By the Speed Lemma, our point of mass has gained the speed $v=\sqrt{2gy}$ once it has arrived at level $y$, being accelerated by gravitation only.

Now, using the designations of the following picture (note that $\beta=\frac\pi2-\alpha$),

\begin{aligned} \displaystyle \sin\alpha = \cos\beta = \sqrt{\frac{\cos^2\beta}{\cos^2\beta+\sin^2\beta}} = \sqrt{\frac1{1+\tan^2\beta}} = \sqrt{\frac1{1+(\frac{\mathrm{d}y}{\mathrm{d}x})^2}}. \end{aligned}

In particular,

$\displaystyle\sin\alpha = \frac{1}{\sqrt{1+(y')^2}}.$

As $\frac{\sin\alpha}{v}$ is constant, we find the differential equation

$\displaystyle \frac{1}{\sqrt{1+(y')^2}} = k\cdot v = k\sqrt{2gy},$

or equivalently,

$\displaystyle y' = \sqrt{\frac{1}{k^2\sqrt{2gy}^2}-1} = \sqrt{\frac{1-2gk^2y}{2gk^2y}} = \sqrt{\frac{\frac{1}{2gk^2} - y}{y}}.$

By setting $a:=\frac{1}{2gk^2}$ and by separation of variables,

$\displaystyle x+C = \int\sqrt{\frac{y}{a-y}} \mathrm{d}y.$

Then, we substitute $y(s):=a\sin^2s$, yielding $\frac{\mathrm{d}y}{\mathrm{d}s} = 2a\sin s \cos s$,

\displaystyle \begin{aligned} \int\sqrt{\frac{a\sin^2 s}{a-a\sin^2s}}2a\sin s\cos s \mathrm{d}s &= 2a\int\sqrt{\frac{\sin^2s}{1-\sin^2s}}\sin s\cos s \mathrm{d}s \\ &= 2a\int\sin^2s\mathrm{d}s. \end{aligned}

This integral can be readily solved via partial integration:

$\displaystyle \int\sin^2s\mathrm{d}s = -\sin s\cos s + \int\cos^2s\mathrm{d}s = -\sin s\cos s + s - \int\sin^2s\mathrm{d}s,$

meaning

$\displaystyle \int\sin^2s\mathrm{d}s = \frac12(s-\sin s\cos s).\qquad(\spadesuit)$

Altogether we have found (note that we do not re-substitute $y$ for $s$, since we are not interested in a parametrization like $x=x(y)$)

$\displaystyle x+C = 2a\frac12(s-\sin s\cos s) = \frac a2\bigl(2s-\sin(2s)\bigr).$

As we can set our coordinates such that $x(0)=0$ (the point will begin its voyage in $(0,0)$), we get $C=0$. This shows

\displaystyle \begin{aligned} x(s) &= \frac a2\bigl(2s-\sin(2s)\bigr),\\ y(s) &= a\sin^2s = \frac a2\bigl((1-\cos^2s)+\sin^2s\bigr) = \frac a2\bigl(1-\cos(2s)\bigr). \end{aligned}

Setting $r:=\frac a2$ and $t:=2s$, we retrieve the standard parametrization of the cycloid:

$\displaystyle x(t) = r(t-\sin t),\qquad y(t) = r(1-\cos t).$

The brachistochrone must be a cycloid.
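As a numerical cross-check of Johann’s derivation, the quotient $\frac{\sin\alpha}{v} = \frac{1}{\sqrt{1+(y')^2}\sqrt{2gy}}$ is indeed constant along the cycloid; the slope of the curve as a graph is $y'=\frac{\sin t}{1-\cos t}$ at parameter $t$ (the values of $g$ and $r$ are arbitrary choices for the check):

```python
import math

# Along the cycloid x = r(t - sin t), y = r(1 - cos t), the slope of the graph
# is y'(x) = sin t / (1 - cos t); check that sin(alpha) / v is constant.
g, r = 9.81, 1.0

def snell_quotient(t):
    dydx = math.sin(t) / (1 - math.cos(t))
    y = r * (1 - math.cos(t))
    return 1 / (math.sqrt(1 + dydx ** 2) * math.sqrt(2 * g * y))

values = [snell_quotient(t) for t in (0.5, 1.0, 1.5, 2.0, 2.5)]
# All values agree; analytically each equals 1 / (2 * sqrt(g * r)).
```

That the common value is $\frac{1}{2\sqrt{gr}}$ follows from $y'=\cot\frac t2$ and $y = 2r\sin^2\frac t2$.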

But now for a completely different approach. The brachistochrone can also be found via calculus of variations, which is considerably harder, from a technical point of view, than what we did above. On the other hand, these techniques can be applied to a much broader spectrum of problems. We can only sketch many of the issues here.

Historically, the brachistochrone problem has been the start to calculus of variations. Jacob Bernoulli solved the problem with methods like this, much more general but much less elegant than his brother.

At the core is the observation that we wish to minimize a functional

$\displaystyle J(f) = \int_a^bF\bigl(t, f(t), f'(t)\bigr) \mathrm{d}t,$

over the set $M:=\{f:[a,b]\to\mathbb{R}\colon f(a)=c, f(b)=d, f\in\mathcal{C}^2\}$. Is there some $g\in M$ such that $J(g)\leq J(f)$ for all $f\in M$?

We consider the function $F$ to be defined as $F(t,y,p)$. The inputs $y$ and $p$ will play the roles of the solution function and its derivative, respectively.

Notice that we restrict ourselves already to smooth functions $f\in\mathcal{C}^2$. From a physical point of view, there is no reason why the brachistochrone shouldn’t be just continuous. However, tougher mathematics would be necessary to track down this one.

If the space $M$ is well-behaved, usual compactness arguments tell us that there is a minimum. But it is much harder to pinpoint.

Theorem (Euler-Lagrange; tiny special case): A necessary condition for a $\mathcal C^2$-function $g$ to be a solution to the minimization problem is

$\displaystyle \frac{\partial}{\partial y}F\bigl(t,g(t),g'(t)\bigr) - \frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial}{\partial p}F\bigl(t,g(t),g'(t)\bigr) = 0.$

In its expanded form, this is (dropping the arguments for reasons of better legibility)

$\displaystyle \frac{\partial F}{\partial y}- \frac{\partial^2F}{\partial t\partial p}-\frac{\partial^2F}{\partial y\partial p}\cdot g'(t)-\frac{\partial^2F}{\partial p^2}\cdot g''(t) = 0.$

Proof: Let $g$ be the minimum and let $\eta\in\mathcal{C}^2$ with $\eta(a)=\eta(b)=0$. We then consider

$\displaystyle \varphi(\varepsilon):=\int_a^bF\bigl(t, g(t)+\varepsilon\eta(t), g'(t)+\varepsilon\eta'(t)\bigr) \mathrm{d}t.$

Since we chose everything to be well-behaved, $\varphi$ will be differentiable. As $g$ minimizes the functional $J$, $\varphi(0) = J(g) \leq \varphi(\varepsilon)$, and hence $\varphi'(0) = 0$. Note that the derivative is a $\frac{\mathrm{d}}{\mathrm{d}\varepsilon}$ here. For the function $g$, the derivative means $\frac{\mathrm{d}}{\mathrm{d}t}$.

Now, let us compute this (calculemus!)

\displaystyle \begin{aligned} \varphi'(\varepsilon) &= \int_a^b\frac{\mathrm{d}}{\mathrm{d}\varepsilon} F\bigl(t, g(t)+\varepsilon\eta(t), g'(t)+\varepsilon\eta'(t)\bigr)\mathrm{d}t \\ &= \int_a^b \left[\frac{\partial}{\partial y}F\bigl(t, g(t)+\varepsilon\eta(t), g'(t) + \varepsilon \eta'(t)\bigr)\eta(t) +\right.\\ &\hphantom{=}\qquad\left.+ \frac{\partial}{\partial p}F\bigl(t, g(t)+\varepsilon\eta(t), g'(t)+\varepsilon\eta'(t)\bigr)\eta'(t) \right]\mathrm{d}t \end{aligned}

Integration by parts yields, together with the fact that $\eta(a) = \eta(b) = 0$,

\displaystyle \begin{aligned} \int_a^b \frac{\partial}{\partial p}F(t,y,p)\eta'(t) \mathrm{d}t &= \left[\frac{\partial}{\partial p}F(t,y,p)\eta(t)\right]_a^b - \int_a^b\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial}{\partial p}F(t,y,p)\eta(t) \mathrm{d}t \\ &= -\int_a^b\eta(t)\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial}{\partial p}F(t,y,p) \mathrm{d}t. \end{aligned}

We conclude

\displaystyle \begin{aligned} \varphi'(\varepsilon) &= \int_a^b \left[\frac{\partial}{\partial y}F\bigl(t, g(t)+\varepsilon\eta(t), g'(t)+\varepsilon\eta'(t)\bigr)\eta(t) \right.\\ &\hphantom{=}\qquad\left.- \eta(t)\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial}{\partial p} F\bigl(t, g(t)+\varepsilon\eta(t), g'(t)+\varepsilon\eta'(t)\bigr) \right]\mathrm{d}t\\ &= \int_a^b\eta(t)\left[\frac{\partial}{\partial y}F\bigl(t, g(t)+\varepsilon\eta(t), g'(t)+\varepsilon\eta'(t)\bigr)\right.\\ &\hphantom{=}\qquad\left. - \frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial}{\partial p} F\bigl(t, g(t)+\varepsilon\eta(t), g'(t)+\varepsilon\eta'(t)\bigr)\right]\mathrm{d}t. \end{aligned}

This expression must vanish, as we demand $\varphi'(0)=0$ if $g$ is supposed to be a solution to the minimization problem. We have an arbitrary function $\eta$ involved, so the expression in brackets will have to vanish entirely. Formally, one can see this by contradiction: if in some point $t_0$ the bracket expression did not vanish, we could choose some interval $[t_1,t_2]\subset [a,b]$ on which it doesn’t vanish at all (it is continuous, after all) and on which, w.l.o.g., it is positive. On this interval we set $\eta(t):=(t-t_1)^4(t-t_2)^4$, and $\eta:=0$ outside of it; this $\eta$ is still $\mathcal{C}^2$. Then the integrand is strictly positive on $(t_1,t_2)$ and vanishes outside – a contradiction to $\varphi'(0)=0$.

For $\varepsilon=0$, the statement follows. We have thus proved the Euler-Lagrange equation in this particular case. q.e.d.

Notice that we didn’t speak about sufficient conditions. That would overstretch this text by far – let’s ignore this.

The Simplification Lemma: In the special case, when $F$ only depends on $y$ and $p$, and not directly on its first argument $t$, the Euler-Lagrange equation will simplify to the condition

$\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}\left(F\bigl(g(t), g'(t)\bigr)-g'(t)\frac{\partial}{\partial p}F\bigl(g(t),g'(t)\bigr)\right) = 0.$

Proof: This follows by a straight-forward computation:

\begin{aligned} \displaystyle &\hphantom{=}\frac{\mathrm{d}}{\mathrm{d}t}\left(F\bigl(g(t),g'(t)\bigr)-g'(t)\frac{\partial}{\partial p}F\bigl(g(t),g'(t)\bigr)\right) \\ &\stackrel{(\circ)}{=} \frac{\partial}{\partial y}F\bigl(g(t),g'(t)\bigr)g'(t) + \frac{\partial}{\partial p}F\bigl(g(t),g'(t)\bigr)g''(t) +\\ &\hphantom{=}\quad- g''(t)\frac{\partial}{\partial p}F\bigl(g(t),g'(t)\bigr)-g'(t)\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial}{\partial p}F\bigl(g(t),g'(t)\bigr) \\ &\stackrel{\hphantom{(\ast)}}{=} \frac{\partial}{\partial y}F\bigl(g(t),g'(t)\bigr)g'(t) - g'(t)\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial}{\partial p}F\bigl(g(t),g'(t)\bigr)\\ &\stackrel{(\ast)}{=} \left(\frac{\partial^2}{\partial y\partial p}F\bigl(g(t),g'(t)\bigr) g'(t) + \frac{\partial^2}{\partial p^2}F\bigl(g(t),g'(t)\bigr)g''(t)\right) g'(t) +\\ &\hphantom{=}\quad - g'(t)\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial}{\partial p}F\bigl(g(t),g'(t)\bigr)\\ &\stackrel{(\circ)}{=} \frac{\partial^2}{\partial y\partial p}F\bigl(g(t),g'(t)\bigr) \bigl(g'(t)\bigr)^2 + \frac{\partial^2}{\partial p^2}F\bigl(g(t),g'(t)\bigr)g'(t)g''(t) +\\ &\hphantom{=}\quad - g'(t)\left(\frac{\partial^2}{\partial y\partial p}F\bigl(g(t),g'(t)\bigr)g'(t) + \frac{\partial^2}{\partial p^2}F\bigl(g(t),g'(t)\bigr)g''(t)\right)\\ &\stackrel{\hphantom{(\ast)}}{=} \frac{\partial^2}{\partial y\partial p}F\bigl(g(t),g'(t)\bigr) \bigl(g'(t)\bigr)^2 + \frac{\partial^2}{\partial p^2}F\bigl(g(t),g'(t)\bigr)g'(t)g''(t) +\\ &\hphantom{=}\quad - \frac{\partial^2}{\partial y\partial p}F\bigl(g(t),g'(t)\bigr)\bigl(g'(t)\bigr)^2 - \frac{\partial^2}{\partial p^2}F\bigl(g(t),g'(t)\bigr)g'(t)g''(t)\\ &\stackrel{\hphantom{(\ast)}}{=} 0. \end{aligned}

We have used the expanded form of the Euler-Lagrange equation in $(\ast)$, together with the chain rule and the fact that in the present special case $\frac{\partial}{\partial t}F=0$; in $(\circ)$ we have used the chain rule all by itself. Throughout, we have used that $g$ solves the Euler-Lagrange equation and is thus what gets plugged into $F$. q.e.d.

Now that we have the ingredients, let’s try and find the brachistochrone by calculus of variations. By the Time Lemma, we want to minimize the expression

$\displaystyle \int_0^b F\bigl(f(x), f'(x)\bigr)\mathrm{d}x,\qquad F(y,p) = \sqrt{\frac{1+p^2}{2gy}}.$

By the Simplification Lemma, any solution $\varphi$ will have

\displaystyle \begin{aligned} c &= \sqrt{\frac{1+(\varphi'(x))^2}{2g\varphi(x)}} - \varphi'(x)\frac{\varphi'(x)}{2g\varphi(x)}\sqrt{\frac{2g\varphi(x)}{1+(\varphi'(x))^2}}\\ &= \sqrt{\frac{1+(\varphi'(x))^2}{2g\varphi(x)}}\left(1-\frac{(\varphi'(x))^2}{1+(\varphi'(x))^2}\right)\\ &= \sqrt{\frac{1+(\varphi'(x))^2}{2g\varphi(x)}}\frac1{1+(\varphi'(x))^2}\\ &= \frac1{\sqrt{2g\varphi(x)}\sqrt{1+(\varphi'(x))^2}}, \end{aligned}

which means

$\displaystyle \varphi(x)\left(1+\bigl(\varphi'(x)\bigr)^2\right) = \frac1{2gc^2} =: C\qquad\qquad(\clubsuit)$
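For the cycloid found in the first approach, the left-hand side of $(\clubsuit)$ is indeed constant; a quick numerical check (with $r$ an arbitrary assumed value, and the slope of the graph being $y'=\frac{\sin t}{1-\cos t}$ at parameter $t$):

```python
import math

# For the cycloid x = r(t - sin t), y = r(1 - cos t) from the first approach,
# the left-hand side of (club) is constant: y * (1 + y'^2) = 2r.
r = 1.0

def club_lhs(t):
    y = r * (1 - math.cos(t))
    dydx = math.sin(t) / (1 - math.cos(t))  # slope of the graph at parameter t
    return y * (1 + dydx ** 2)

values = [club_lhs(t) for t in (0.3, 1.0, 2.0, 3.0)]
# Each value equals 2r, the constant C of (club).
```

The computation below retrieves exactly this: the constant $C$ turns out to be twice the radius of the rolling circle.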

$\varphi$ will be a solution depending on $x$. On the other hand, we look for a parametrization of a curve in $\mathbb{R}^2$, hence we try to find both functions $x(t)$ and $y(t)$, that are connected via $\varphi(x) = \varphi\bigl(x(t)\bigr) = y(t)$. We set, by divine insight,

$\displaystyle y(t) = C\frac{1-\cos t}{2} = C\sin^2\frac t2.$

The chain rule then says $\frac{\mathrm{d}}{\mathrm{d}t}\varphi\bigl(x(t)\bigr) = \frac{\mathrm{d}}{\mathrm{d}x}\varphi(x)\frac{\mathrm{d}}{\mathrm{d}t}x(t)$, and hence

$\displaystyle \frac{\mathrm{d}x}{\mathrm{d}t} = \frac{\frac{\mathrm{d}}{\mathrm{d}t}\varphi\bigl(x(t)\bigr)}{\frac{\mathrm{d}}{\mathrm{d}x}\varphi(x)} = \frac{\frac{\mathrm{d}}{\mathrm{d}t}y(t)}{\frac{\mathrm{d}}{\mathrm{d}x}\varphi(x)}.$

By $(\clubsuit)$,

\displaystyle \begin{aligned} \frac{\mathrm{d}}{\mathrm{d}x}\varphi(x) = \frac{\mathrm{d}}{\mathrm{d}x}\varphi\bigl(x(t)\bigr) &= \sqrt{\frac{C}{\varphi(x(t))} - 1} \\ &= \sqrt{\frac{C}{C\sin^2\frac t2}-1} \\ &= \sqrt{\frac{1-\sin^2\frac t2}{\sin^2\frac t2}} \\ &= \cot\frac t2. \end{aligned}

Altogether,

\displaystyle \begin{aligned} \frac{\mathrm{d}x}{\mathrm{d}t} = \frac{\frac{\mathrm{d}}{\mathrm{d}t}y(t)}{\frac{\mathrm{d}}{\mathrm{d}x}\varphi(x)} &= \frac{C2\frac12\sin\frac t2\cos\frac t2}{\cot\frac t2} \\ &= \frac{C\sin\frac t2\cos\frac t2}{\cos\frac t2}\sin\frac t2 \\ &= C\sin^2\frac t2. \end{aligned}

We have almost integrated this one before, in $(\spadesuit)$; the substitution $s(t)=\frac t2$ yields

\displaystyle \begin{aligned} x(t) = C\int\sin^2\frac t2 \mathrm{d}t = 2C\int\sin^2(s)\mathrm{d}s &= 2C\frac{s-\sin s\cos s}{2} \\ &= C\left[s-\frac12\sin(2s)\right] \\ &= \frac C2(t-\sin t). \end{aligned}

This shows that any solution to the minimization problem must look like

$\displaystyle\begin{pmatrix}x(t)\\y(t)\end{pmatrix} = \begin{pmatrix}\frac C2(t-\sin t)\\\frac C2(1-\cos t)\end{pmatrix},$

and is hence a cycloid. What we haven’t proved is that it actually is a solution to the minimization problem – we spoke neither about the sufficient condition for Euler-Lagrange, nor about regularity of our set $M$, and we considered only $\mathcal C^2$-functions in the first place (I won’t even go into the physical hand-waving). But anyway, the little tricks and the big machinery of technique make both approaches really insightful and interesting. This makes it a good place to end.

# Brouwer’s Fixed Point Theorem

Recently, we have concluded the text on the Transformation formula, remarking that it was a tool in an elementary proof of Brouwer’s Fixed Point Theorem. Let’s have a closer look at that.

Brouwer’s Fixed Point Theorem is at the core of many insights in topology and functional analysis. Like many other powerful theorems, it can be stated and understood very easily; the proof, however, is quite deep. The conclusions that are drawn from it are considered deeper still. As we shall see, Brouwer’s theorem can be shown in an elementary fashion, where the Transformation Formula, the Inverse Function Theorem and Weierstrass’ Approximation Theorem are the toughest stepping stones; note that we have given a perfectly elementary proof of Weierstrass’ Theorem before. This makes Brouwer’s theorem accessible to undergraduate calculus students (even though, of course, these stepping stones already mean bringing the big guns to the fight). The downside is that the proof, though elementary, is quite long. The undergraduate student needs to exercise some patience.

Theorem (Brouwer, 1910): Let $K:=\{x\in\mathbb{R}^p\colon \left|x\right|\leq1\}$ be the compact unit ball, and let $f:K\to K$ be continuous. Then $f$ has a fixed point, i.e. there is some $x\in K$ such that $f(x)=x$.

There are many generalizations of the Theorem, considering more complex sets instead of $K$, and taking place in infinite-dimensional spaces. We shall get back to that later. First, we shall look at a perfectly trivial and then at a slightly less trivial special case.

For $p=1$, the statement asks to find a fixed point for the continuous mapping $f:[0,1]\to[0,1]$. W.l.o.g. we have shrunk the set $K$ to $[0,1]$ instead of $[-1,1]$ to avoid some useless notational difficulty. This is a standard exercise on the intermediate value theorem with the function $g(x):=f(x)-x$. Either $f(1)=1$ is the fixed point, or else $f(1)<1$, meaning $g(1)<0$ and $g(0)=f(0)\geq0$. As $g$ is continuous, it must have a zero $\xi$ by the intermediate value theorem, meaning $0=g(\xi) = f(\xi)-\xi$ and hence $f(\xi)=\xi$. q.e.d. ($p=1$)
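For $p=1$ the argument even becomes constructive if one runs a bisection on $g(x)=f(x)-x$; a minimal sketch (the sample function $\cos$ is just a hypothetical illustration of a continuous self-map of $[0,1]$):

```python
import math

# Bisection on g(x) = f(x) - x for a continuous f: [0,1] -> [0,1].
# Invariant: g(lo) >= 0 and g(hi) <= 0, so a zero lies in [lo, hi] by the
# intermediate value theorem.
def fixed_point(f, tol=1e-12):
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) - mid >= 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Example: cos maps [0,1] into [cos 1, 1], a subset of [0,1]; its fixed point
# is the so-called Dottie number 0.739085...
x = fixed_point(math.cos)
```

For $p\geq2$, of course, no such simple dichotomy is available – which is precisely why the rest of this text is needed.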

For $p=2$, things are still easy to see, even though a little less trivial. This is an application of homotopy theory (even though one doesn’t need to know much about it). The proof is by contradiction, however. We will show an auxiliary statement first: there is no continuous mapping $h:K\to\partial K$ which is the identity on $\partial K$, i.e. $h(x)=x$ for $x\in\partial K$. If there were, we would set

$\displaystyle H(t,s):=h(se^{it}), \qquad t\in[0,2\pi], s\in[0,1].$

$H$ is a homotopy of the constant curve $h(0) = H(t,0)$ to the circle $e^{it} = h(e^{it}) = H(t,1)$. This means, we can continuously transform the constant curve to the circle. This is a contradiction, as the winding number of the constant is $0$, but the winding number of the circle is $1$. There can be no such $h$.

Now, we turn to the proof of the actual statement of Brouwer’s Theorem. If $f$ had no fixed point, we could define a continuous mapping as follows: let $x\in K$, and consider the line through $x$ and $f(x)$ (which is well-defined by assumption). This line crosses $\partial K$ in the point $h(x)$; actually there are two such points, and we shall use the one that is closer to $x$ itself. Clearly, $h(x)=x$ for $x\in\partial K$. By the auxiliary statement, there is no such $h$, and the assumption fails: $f$ must have a fixed point. q.e.d. ($p=2$)

For the general proof, we shall follow the lines of Heuser, who presented this elementary approach in the 1970s and made it accessible in the second volume of his book on calculus. It is interesting that most of the standard literature for undergraduate students shies away from any proof of Brouwer’s theorem. Often, the theorem is stated without proof and some conclusions and applications are drawn from it. Sometimes a proof via differential forms is given (such as in Königsberger’s book, where it is somewhat downgraded to an exercise after the proof of Stokes’ Theorem), which I wouldn’t call elementary because of the theory that needs to be developed first. The same holds for proofs using homology groups and the like (even though this is one of the simplest ways to prove the auxiliary statement given above – it was done in my topology class, but it is by no means elementary).

A little downside is the non-constructiveness of the proof we are about to give. It is a proof by contradiction and it won’t give any indication on how to find the fixed point. For many applications, even the existence of a fixed point is already a gift (think of Peano’s theorem on the existence of solutions to a differential equation, for instance). On the other hand, there are constructive proofs as well, a fact that is quite in the spirit of Brouwer.

In some way, the basic structure of the following proof is similar to the proof that we gave for the case $p=2$. We will apply the same reasoning that concluded the proof of the special case (after the auxiliary statement); we will just add a little more formality to show that the mapping in question is actually continuous and well-defined. The trickier part in higher dimensions is the counterpart of the auxiliary statement from which the contradiction followed. That auxiliary statement involved the non-existence of a certain continuous mapping, called a retraction: for a subset $A$ of a topological space $X$, $f:X\to A$ is called a retraction of $X$ to $A$ if $f(x)=x$ for all $x\in A$. We found that there is no retraction from $K$ to $\partial K$. As a matter of fact, Brouwer’s Fixed Point Theorem and the non-existence of such a retraction are equivalent (we’ll get back to that at the end).

The basic structure of the proof is like this:

• we reduce the problem to polynomials, so we only have to deal with those functions instead of a general continuous $f$;
• we formalize the geometric intuition that came across in the special case $p=2$ (this step is in essence identical to what we did above): based on the assumption that Brouwer’s Theorem is wrong, we define a mapping quite similar to a retraction of $K$ to $\partial K$;
• we show that this almost-retraction is locally bijective;
• we find, via the Transformation Formula, a contradiction: there can be no retraction and there must be a fixed point.

Steps 3 and 4 are the tricky part. They may be replaced by some other argument that yields a contradiction (homology theory, for instance), but we’ll stick to the elementary parts. Let’s go.

Lemma (The polynomial simplification): It will suffice to show Brouwer’s Fixed Point Theorem for those functions $f:K\to K$, whose components are polynomials on $K$ and which have $f(K)\subset\mathring K$.

Proof: Let $f:K\to K$ be continuous, with components $f = (f_1,\ldots,f_p)$, each of which takes the arguments $x_1,\ldots,x_p$. By Weierstrass’ Approximation Theorem, for any $\varepsilon>0$ there are polynomials $p_k^\varepsilon$ such that $\left|f_k(x)-p_k^\varepsilon(x)\right| < \varepsilon$, $k=1,\ldots,p$, for any $x\in K$. In particular, there are polynomials $\varphi_{k,n}$ such that

$\displaystyle \left|f_k(x)-\varphi_{k,n}(x)\right| < \frac{1}{\sqrt{p}n}\qquad\text{for any }x\in K.$

If we define the function $\varphi_n:=(\varphi_{1,n},\ldots,\varphi_{p,n})$ which maps $K$ to $\mathbb{R}^p$, we get

\displaystyle \begin{aligned} \left|f(x)-\varphi_n(x)\right|^2 &= \sum_{k=1}^p\left|f_k(x)-\varphi_{k,n}(x)\right|^2 \\ &< \frac{p}{pn^2} \\ &= \frac1{n^2},\qquad\text{for any }x\in K \end{aligned}

and in particular $\varphi_n\to f$ uniformly in $K$.

Besides,

$\displaystyle \left|\varphi_n(x)\right|\leq\left|\varphi_n(x)-f(x)\right| + \left|f(x)\right| < \frac1n + \left|f(x)\right| \leq \frac1n + 1 =:\alpha_n.$

This allows us to set

$\displaystyle \psi_n(x) = \frac{\varphi_n(x)}{\alpha_n}$.

This function also converges uniformly to $f$, as for any $x\in K$,

\displaystyle \begin{aligned} \left|\psi_n(x)-f(x)\right| &= \left|\frac{\varphi_n(x)}{\alpha_n} - f(x)\right| \\ &= \frac1{\alpha_n}\left|\varphi_n(x)-\alpha_nf(x)\right|\\ &\leq \frac1{\alpha_n}\left|\varphi_n(x)-f(x)\right| + \frac1{\alpha_n}\left|f(x)\right|\left|1-\alpha_n\right|\\ &< \frac1n + \left|f(x)\right|\frac1n\\ &\leq \frac2n < \varepsilon \qquad\text{for }n\gg0, \end{aligned}

since $\alpha_n = 1+\frac1n > 1$, $\left|1-\alpha_n\right| = \frac1n$ and $\left|f(x)\right|\leq1$.

Finally, for $x\in K$, by construction, $\left|\varphi_n(x)\right|\leq\alpha_n$, and so $\left|\psi_n(x)\right| = \frac{\left|\varphi_n(x)\right|}{\alpha_n} < 1$, which means that $\psi_n:K\to\mathring K$.

The point of this lemma is that if we have shown Brouwer’s Fixed Point Theorem for every such function $\psi_n:K\to\mathring K$ whose components are polynomials, we will have proved it for the general continuous function $f$. This can be seen as follows:

As we suppose Brouwer’s Theorem was true for the $\psi_n$, there would be a sequence $(x_n)\subset K$ with $\psi_n(x_n) = x_n$. As $K$ is (sequentially) compact, there is a convergent subsequence $(x_{n_j})\subset(x_n)$, and $\lim_jx_{n_j} = x_0\in K$. For sufficiently large $j$, we see

$\displaystyle \left|\psi_{n_j}(x_{n_j})-f(x_0)\right| \leq\left|\psi_{n_j}(x_{n_j})-f(x_{n_j})\right| + \left|f(x_{n_j})-f(x_0)\right| < \frac\varepsilon2 + \frac\varepsilon2.$

The first bound follows from the fact that $\psi_{n_j}\to f$ uniformly, the second bound is the continuity of $f$ itself, with the fact that $x_{n_j}\to x_0$. In particular,

$\displaystyle x_0 = \lim_{j} x_{n_j} = \lim_{j} \psi_{n_j}(x_{n_j}) = f(x_0).$

So, $f$ has the fixed point $x_0$, which proves Brouwer’s Theorem.

In effect, it suffices to deal with functions like the $\psi_n$ for the rest of this text. q.e.d.

Slogan (The geometric intuition): If Brouwer’s Fixed Point Theorem is wrong, then there is “almost” a retraction of $K$ to $\partial K$.

Or, rephrased as a proper lemma:

Lemma: Let $f$ be polynomially simplified as in the previous lemma, and assume $x\neq f(x)$ for any $x\in K$. Then we can construct a continuously differentiable function $g_t:K\to K$, $t\in[0,1]$, with $g_t(x)=x$ for $x\in\partial K$. This function is given via

$\displaystyle g_t(x) =x + t\lambda(x)\bigl(x-f(x)\bigr),$

$\displaystyle \lambda(x)=\frac{-x\cdot\bigl(x-f(x)\bigr)+\sqrt{\left(x\cdot\bigl(x-f(x)\bigr)\right)^2+\bigl(1-\left|x\right|^2\bigr)\left|x-f(x)\right|^2}}{\left|x-f(x)\right|^2}.$

For fixed $x$, the mapping $t\mapsto g_t(x)$ traces the straight line from $x$ to the boundary $\partial K$, along the line through $x$ and $f(x)$; $\lambda(x)$ is the parameter in the straight line that defines the intersection with $\partial K$.
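The formula for $\lambda(x)$ can be sanity-checked numerically: for points $x$ in the ball and a hypothetical map $f$ into the interior (here $f(x)=0.3\tanh(x)$, chosen only for illustration), the point $x+\lambda(x)\bigl(x-f(x)\bigr)$ should land on the unit sphere, and $\lambda(x)$ should vanish on the boundary. A sketch, assuming numpy:

```python
import numpy as np

def lam(x, fx):
    """Larger root lambda of |x + lambda (x - f(x))| = 1."""
    u = x - fx
    b = np.dot(x, u)
    disc = b**2 + (1.0 - np.dot(x, x)) * np.dot(u, u)
    return (-b + np.sqrt(disc)) / np.dot(u, u)

f = lambda x: 0.3 * np.tanh(x)   # hypothetical continuous map into the open ball

rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.uniform(-1.0, 1.0, size=3)
    x *= rng.uniform(0.0, 1.0) / np.linalg.norm(x)   # random point with |x| < 1
    l = lam(x, f(x))
    g1 = x + l * (x - f(x))                          # should lie on the sphere
    assert abs(np.linalg.norm(g1) - 1.0) < 1e-12 and l >= 0.0

x = np.array([1.0, 0.0, 0.0])                        # boundary point
assert abs(lam(x, f(x))) < 1e-12                     # lambda vanishes on |x| = 1
```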

Proof: As we suppose that Brouwer’s Fixed Point Theorem is wrong, the continuous function $\left|x-f(x)\right|$ is positive for any $x\in K$. Because of continuity, for every $y\in \partial K$, there is some $\varepsilon = \varepsilon(y) > 0$, such that still $\left|x-f(x)\right|>0$ in the neighborhood $U_{\varepsilon(y)}(y)$.

Here, we have been in technical need of a continuation of $f$ beyond $K$. As $f$ is only defined on $K$ itself, we might take $f(x):=f\bigl(\frac{x}{\left|x\right|}\bigr)$ for $\left|x\right|>1$. We still have $\left|f(x)\right| < 1$ and $f(x)\neq x$, which means that we don’t get contradictions to our assumptions on $f$. Let’s not dwell on this for longer than necessary.

On the compact set $\partial K$, finitely many of the neighborhoods $U_{\varepsilon(y)}(y)$ will suffice to cover $\partial K$. One of them will have a minimal radius. We shall set $\delta = \min_y\varepsilon(y) +1$, to find: there is an open set $U = U_\delta(0)\supset K$ with $\left|x-f(x)\right| >0$ for all $x\in U$.

Let us define for any $x\in U$

$\displaystyle d(x):=\frac{\left(x\cdot\bigl(x-f(x)\bigr)\right)^2+\bigl(1-\left|x\right|^2\bigr)\left|x-f(x)\right|^2}{\left|x-f(x)\right|^4}$.

It is well-defined by assumption. We distinguish three cases:

$a)$ $\left|x\right|<1$: Then $1-\left|x\right|^2>0$ and the numerator of $d(x)$ is positive.

$b)$ $\left|x\right|=1$: Then the numerator of $d(x)$ is

$\displaystyle \left(x\cdot\bigl(x-f(x)\bigr)\right)^2 = \bigl(x\cdot x - x\cdot f(x)\bigr)^2 = \bigl(\left|x\right|^2-x\cdot f(x)\bigr)^2 = \bigl(1-x\cdot f(x)\bigr)^2,$

where by Cauchy-Schwarz and by assumption on $f$, we get

$\displaystyle x\cdot f(x) \leq \left|x\right|\left|f(x)\right| = \left|f(x)\right| < 1\qquad (\spadesuit).$

In particular, the numerator of $d(x)$ is strictly positive.

$c)$ $\left|x\right|>1$: This case is not relevant for what’s to come.

We have seen that $d(x)>0$ for all $\left|x\right|\leq 1$. Since $d$ is continuous, a compactness argument similar to the one above shows that there is some $V = V_{\delta'}(0)\supset K$ with $d(x)>0$ for all $x\in V$. Replacing $\delta'$ by $\min(\delta,\delta')$ if necessary, we find: $d$ is positive and well-defined on $V$.

The reason why we have looked at $d$ is not clear yet. Let us grasp at some geometry first. Let $x\in V$ and $\Gamma_x = \left\{x+\lambda\bigl(x-f(x)\bigr)\colon\lambda\in\mathbb{R}\right\}$ the straight line through $x$ and $f(x)$. If we look for the intersection of $\Gamma_x$ with $\partial K$, we solve the equation

$\displaystyle\left|x+\lambda\bigl(x-f(x)\bigr)\right| = 1.$

The intersection “closer to” $x$ is denoted by some $\lambda>0$.

This equation comes down to

\displaystyle \begin{aligned} && \left(x+\lambda\bigl(x-f(x)\bigr)\right) \cdot \left(x+\lambda\bigl(x-f(x)\bigr)\right) &=1 \\ &\iff& \left|x\right|^2 + 2\lambda x\cdot\bigl(x-f(x)\bigr) + \lambda^2\left|x-f(x)\right|^2 &=1\\ &\iff& \lambda^2\left|x-f(x)\right|^2 + 2\lambda x\cdot\bigl(x-f(x)\bigr) &= 1-\left|x\right|^2\\ &\iff& \left(\lambda+\frac{x\cdot\bigl(x-f(x)\bigr)}{\left|x-f(x)\right|^2}\right)^2 &= \frac{1-\left|x\right|^2}{\left|x-f(x)\right|^2} + \left(\frac{x\cdot\bigl(x-f(x)\bigr)}{\left|x-f(x)\right|^2}\right)^2 \\ &\iff& \left(\lambda+\frac{x\cdot\bigl(x-f(x)\bigr)}{\left|x-f(x)\right|^2}\right)^2 &= \frac{\bigl(1-\left|x\right|^2\bigr)\left|x-f(x)\right|^2+\left(x\cdot\bigl(x-f(x)\bigr)\right)^2}{\left|x-f(x)\right|^4} \\ &\iff& \left(\lambda+\frac{x\cdot\bigl(x-f(x)\bigr)}{\left|x-f(x)\right|^2}\right)^2 &= d(x). \end{aligned}

As $x\in V$, $d(x)>0$, and hence there are two real solutions to the last displayed equation. Let $\lambda(x)$ be the larger one (to get the intersection with $\partial K$ closer to $x$), then we find

\displaystyle \begin{aligned} \lambda(x) &= \sqrt{d(x)} - \frac{x\cdot\bigl(x-f(x)\bigr)}{\left|x-f(x)\right|^2}\\ &= \frac{-x\cdot\bigl(x-f(x)\bigr)+\sqrt{\left(x\cdot\bigl(x-f(x)\bigr)\right)^2+\bigl(1-\left|x\right|^2\bigr)\left|x-f(x)\right|^2}}{\left|x-f(x)\right|^2}. \end{aligned}

By construction,

$\displaystyle \left|x+\lambda(x)\bigl(x-f(x)\bigr)\right| = 1,\qquad\text{for all }x\in V.\qquad(\clubsuit)$

Let us define

$\displaystyle g_t(x) = x+t\lambda(x)\bigl(x-f(x)\bigr),\qquad t\in[0,1],~x\in V.$

This is (at least) a continuously differentiable function, as we simplified $f$ to be a polynomial and the denominator in $\lambda(x)$ is bounded away from $0$. Trivially and by construction, $g_0(x)=x$ and $\left|g_1(x)\right| = 1$ for all $x\in V$.

For $\left|x\right|<1$ and $t<1$, we have

\displaystyle \begin{aligned} \left|x+t\lambda(x)\bigl(x-f(x)\bigr)\right| &\stackrel{\hphantom{(\clubsuit)}}{=} \left|\bigl(t+(1-t)\bigr)x + t\lambda(x)\bigl(x-f(x)\bigr)\right|\\ &\stackrel{\hphantom{(\clubsuit)}}{=}\left|t\left(x+\lambda(x)\bigl(x-f(x)\bigr)\right)+(1-t)x\right|\\ &\stackrel{\hphantom{(\clubsuit)}}{\leq} t\left|x+\lambda(x)\bigl(x-f(x)\bigr)\right|+(1-t)\left|x\right|\\ &\stackrel{(\clubsuit)}{=} t+(1-t)\left|x\right|\\ &\stackrel{\hphantom{(\clubsuit)}}{<} t+(1-t) = 1\qquad (\heartsuit). \end{aligned}

Hence, $\left|g_t(x)\right|<1$ for $\left|x\right|<1$ and $t\in[0,1)$. This means $g_t(\mathring K)\subset\mathring K$ for $t<1$.

For $\left|x\right|=1$, we find (notice that $x\cdot\bigl(x-f(x)\bigr)>0$ for $\left|x\right|=1$, by $(\spadesuit)$).

\displaystyle \begin{aligned} \lambda(x) &= \frac{-x\cdot\bigl(x-f(x)\bigr)+\sqrt{\left(x\cdot\bigl(x-f(x)\bigr)\right)^2}}{\left|x-f(x)\right|^2} \\ &= \frac{-x\cdot\bigl(x-f(x)\bigr)+x\cdot\bigl(x-f(x)\bigr)}{\left|x-f(x)\right|^2} = 0. \end{aligned}

This is geometrically entirely obvious: $\lambda(x)$ parametrizes the distance from $x$ to the intersection with $\partial K$; if $x\in\partial K$, this parameter is of course $0$.

We have seen that $g_t(x)=x$ for $\left|x\right|=1$ for any $t\in[0,1]$. Hence, $g_t(\partial K)=\partial K$ for all $t$. $g_t$ is almost a retraction, $g_1$ actually is a retraction. q.e.d.

Note how tricky the general formality gets, compared to the more compact and descriptive proof that we gave in the special case $p=2$. The arguments of the lemma and in the special case are identical.

Lemma (The bijection): Let $\hat K$ be a closed ball around $0$, $K\subset\hat K\subset V$. The function $g_t$ is a bijection on $\hat K$, for $t\geq0$ sufficiently small.

Proof: We first show that $g_t$ is injective. Let us define $h(x):=\lambda(x)\bigl(x-f(x)\bigr)$, for reasons of legibility. As we saw above, $h$ is (at least) continuously differentiable. We have

$\displaystyle g_t(x) = x+th(x),\qquad g_t'(x)=\mathrm{Id}+th'(x).$

As $\hat K$ is compact, $h'$ is bounded by $\left|h'(x)\right|\leq C$, say. By enlarging $C$ if necessary, we can take $C\geq1$. Now let $x,y\in\hat K$ with $g_t(x)=g_t(y)$. That means $x+th(x)=y+th(y)$ and so, by the mean value theorem,

$\displaystyle \left|x-y\right| = t\left|h(x)-h(y)\right|\leq tC\left|x-y\right|.$

By setting $\varepsilon:=\frac1C$ and taking $t\in[0,\varepsilon)$, we get $\left|x-y\right| = 0$. $g_t$ is injective for $t<\varepsilon$.

Our arguments also proved $\left|th'(x)\right| < 1$. Let us briefly look at the convergent Neumann series $\sum_{k=0}^\infty\bigl(-th'(x)\bigr)^k$, having the limit $s$, say. We find

$\displaystyle -sth'(x) = \sum_{k=0}^\infty\bigl(-th'(x)\bigr)^{k+1} = s-\mathrm{Id},$

which tells us

$\displaystyle \mathrm{Id} = s+s\cdot th'(x) = s\bigl(\mathrm{Id}+th'(x)\bigr).$

In particular, $g_t'(x) = \mathrm{Id}+th'(x)$ is invertible, with the inverse $s$. Therefore, $\det g_t'(x)\neq0$. Since this determinant is a continuous function of $t$, and $\det g_0'(x) = \det\mathrm{Id} = 1$, we have found

$\displaystyle \det g_t'(x) > 0 \text{ for any }t\in[0,\varepsilon),~x\in\hat K$.
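The Neumann-series mechanism behind this invertibility argument is easy to test numerically: for any matrix $A$ of operator norm less than $1$, the series $\sum_{k\geq0}A^k$ inverts $\mathrm{Id}-A$. A sketch, assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
A *= 0.5 / np.linalg.norm(A, 2)          # scale so the operator norm is 1/2

# Partial sums of the Neumann series sum_k A^k
S, P = np.eye(5), np.eye(5)
for _ in range(100):
    P = P @ A
    S = S + P

# S inverts Id - A, hence det(Id - A) != 0
assert np.allclose(S @ (np.eye(5) - A), np.eye(5), atol=1e-12)
```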

Now, let us show that $g_t$ is surjective. As $\det g_t'(x)$ never vanishes on $\hat K$, $g_t$ is an open mapping (by an argument involving the inverse function theorem; $g_t$ can be inverted locally in any point, hence no point can be a boundary point of the image). This means that $g_t(\mathring K)$ is open.

Let $z\in K$ with $z\notin g_t(\mathring K)$; this makes $z$ the test case for non-surjectivity. Let $y\in g_t(\mathring K)$; there is some such $y$ due to $(\heartsuit)$. The straight line between $y$ and $z$ is

$\displaystyle \overline{yz}:=\left\{(1-\lambda)y+\lambda z\colon \lambda\in[0,1]\right\}.$

As $g_t$ is continuous, there must be some point $v\in\partial g_t(\mathring K)\cap\overline{yz}$; we have to leave the set $g_t(\mathring K)$ somewhere. Let us walk the line until we do, and then set

$\displaystyle v=(1-\lambda_0)y+\lambda_0z,\qquad\text{with }\lambda_0=\sup\left\{\lambda\in[0,1]\colon\overline{y;(1-\lambda)y+\lambda z}\subset g_t(\mathring K)\right\}.$

Now, continuous images of compact sets remain compact: $g_t(K)$ is compact and hence closed. Therefore, we can conclude

$\displaystyle g_t(\mathring K)\subset g_t(K)\quad\implies\quad \overline{g_t(\mathring K)}\subset g_t(K)\quad\implies\quad v\in\overline{g_t(\mathring K)}\subset g_t(K).$

This means that there is some $u\in K$ such that $v=g_t(u)$. As $g_t(\mathring K)$ is open, $u\in\partial K$ (since otherwise, $v\notin\partial g_t(\mathring K)$ which contradicts the construction). Therefore, $\left|u\right|=1$, and since $g_t$ is almost a retraction, $g_t(u)=u$. Now,

$\displaystyle v=g_t(u) = u \quad\implies\quad v\in\partial K.$

But by construction, $v$ is a point between $z\in K$ and $y\in g_t(\mathring K)$; however, $y\notin\partial K$, since $g_t(\mathring K)$ is open. Due to the convexity of $K$, we have no choice but $z\in\partial K$, and by retraction again, $g_t(z)=z$. In particular, $z\in g_t(\partial K)$.

We have shown that if $z\notin g_t(\mathring K)$, then $z\in g_t(\partial K)$. In particular, $z\in g_t(K)$ for any $z\in K$. $g_t$ is surjective. q.e.d.

Lemma (The Integral Application): The real-valued function

$\displaystyle V(t)=\int_K\det g_t'(x)dx$

is a polynomial and satisfies $V(1)>0$.

Proof: We have already seen in the previous lemma that $\det g_t'(x)>0$ for $x\in\mathring{\hat K}$ and $t<\varepsilon$, and that $g_t$ is injective there. This allows us to apply the transformation formula to the integral:

$\displaystyle V(t) = \int_{g_t(K)}1dx.$

As $g_t$ is surjective for such small $t$, we have $g_t(K) = K$, and therefore

$\displaystyle V(t) = \int_K1dx = \mu(K).$

In particular, this no longer depends on $t$, which implies $V(t)>0$ for any $t<\varepsilon$.

By the Leibniz representation of the determinant, $\det g_t'(x)$ is a polynomial in $t$, and therefore, so is $V(t)$. The identity theorem shows that $V$ is constant altogether: in particular $V(1)=V(0)>0$. q.e.d.

Now we can readily conclude the proof of Brouwer’s Fixed Point Theorem, and we do it in a rather unexpected way. After the construction of $g_t$, we had found $\left|g_1(x)\right|=1$ for all $x\in V$. Let us write this in its components and take a partial derivative ($j=1,\ldots,p$)

\displaystyle \begin{aligned} &&1 &= \sum_{k=1}^p\bigl(g_{1,k}(x)\bigr)^2\\ &\implies& 0 &= \frac\partial{\partial x_j}\sum_{k=1}^p\bigl(g_{1,k}(x)\bigr)^2 = \sum_{k=1}^p2\frac{\partial g_{1,k}(x)}{\partial x_j}g_{1,k}(x) \end{aligned}

This last line is a homogeneous system of linear equations, which we might also write as

$\displaystyle \begin{pmatrix}\frac{\partial g_{1,1}(x)}{\partial x_1}&\cdots &\frac{\partial g_{1,p}(x)}{\partial x_1}\\ \vdots&\ddots&\vdots\\ \frac{\partial g_{1,1}(x)}{\partial x_p}&\cdots&\frac{\partial g_{1,p}(x)}{\partial x_p}\end{pmatrix} \begin{pmatrix}\xi_1\\\vdots\\\xi_p\end{pmatrix} = 0,$

and our computation has shown that the vector $\bigl(g_{1,1}(x),\ldots,g_{1,p}(x)\bigr)$ is a solution. But the vector $0$ is a solution as well. These solutions are different because of $\left|g_1(x)\right| = 1$. If a system of linear equations has two different solutions, it must be singular (it is not injective), and the determinant of the linear system vanishes:

$\displaystyle 0 = \det \begin{pmatrix}\frac{\partial g_{1,1}(x)}{\partial x_1}&\cdots &\frac{\partial g_{1,p}(x)}{\partial x_1}\\ \vdots&\ddots&\vdots\\ \frac{\partial g_{1,1}(x)}{\partial x_p}&\cdots&\frac{\partial g_{1,p}(x)}{\partial x_p}\end{pmatrix} = \det g_1'(x).$

This means

$\displaystyle 0 = \int_K\det g_1'(x)dx = V(1) > 0$.

A contradiction, which stems from the underlying assumption that Brouwer’s Fixed Point Theorem was wrong. The Theorem is thus proved. q.e.d.
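The linear-algebra step at the heart of this contradiction, namely that a continuously differentiable map $g$ with $\left|g(x)\right|\equiv1$ must have a singular Jacobian, can be observed numerically; we use the normalization map $x\mapsto x/\left|x\right|$ as a stand-in example (numpy assumed):

```python
import numpy as np

def jacobian_of_normalization(x):
    """Jacobian of g(x) = x/|x|, a map with |g(x)| = 1 everywhere."""
    r = np.linalg.norm(x)
    # d/dx_j (x_k / r) = delta_kj / r - x_k x_j / r^3
    return np.eye(len(x)) / r - np.outer(x, x) / r**3

x = np.array([0.3, -0.7, 0.5])
J = jacobian_of_normalization(x)
g = x / np.linalg.norm(x)

# g itself solves the homogeneous system, hence the determinant vanishes
assert np.allclose(J.T @ g, 0.0, atol=1e-12)
assert abs(np.linalg.det(J)) < 1e-12
```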

Let us make some concluding remarks. Our proof made essential use of the fact that if there is a retraction, Brouwer’s Theorem must be wrong (this is where we got our contradiction in the end: the retraction cannot exist). The proof may also be run the other way round. If we had proved Brouwer’s Theorem without reference to retractions (this is how Elstrodt does it), one can conclude that there is no retraction from $K$ to $\partial K$ as follows: if there were a retraction $g:K\to\partial K$, we could consider the mapping $-g$. It is, in particular, a continuous mapping of $K$ to itself, but it cannot have a fixed point: any fixed point $x=-g(x)$ would satisfy $\left|x\right| = \left|g(x)\right| = 1$, hence $g(x)=x$ and thus $x=-x$, i.e. $x=0$, contradicting $\left|x\right|=1$. This contradicts Brouwer’s Theorem, so no retraction can exist.

Brouwer’s Theorem, as we have stated it here, is not yet ready to drink. For many applications, the set $K$ is too much of a restriction. It turns out, however, that the hardest work has been done. A little approximation argument (which in the end amounts to continuous projections) allows us to formulate, for instance:

• Let $C\subset\mathbb{R}^p$ be convex, compact and $C\neq\emptyset$. Let $f:C\to C$ be continuous. Then $f$ has a fixed point.
• Let $E$ be a normed space, $K\subset E$ convex and $\emptyset\neq C\subset K$ compact. Let $f:K\to C$ be continuous. Then $f$ has a fixed point.
• Let $E$ be a normed space, $K\subset E$ convex and $K\neq\emptyset$. Let $f:K\to K$ be continuous. Let either $K$ be compact, or $K$ closed and bounded and $f(K)$ relatively compact. Then $f$ has a fixed point.

The last two statements are called Schauder’s Fixed Point Theorems, which may often be applied in functional analysis, or are famously used for proofs of Peano’s Theorem in differential equations. But at the core of all of them is Brouwer’s Theorem. This seems like a good place to end.

# What does the determinant have to do with transformed measures?

Let us consider transformations of the space $\mathbb{R}^p$. How does Lebesgue measure change by this transformation? And how do integrals change? The general case is answered by Jacobi’s formula for integration by substitution. We will start out slowly and only look at how the measure of sets is transformed by linear mappings.

It is folklore in the basic courses on linear algebra, when the determinant of a matrix is introduced, to convey the notion of the volume of the parallelepiped spanned by the column vectors of the matrix. The following theorem shows why this folklore is true; this of course is based on the axiomatic description of the determinant which encodes the notion of size already. But coming from pure axiomatic reasoning, we can connect the axioms of determinant theory to their actual meaning in measure theory.

First, remember the definition of the pushforward measure. Let $X$ and $Y$ be measurable spaces, and $f:X\to Y$ a measurable mapping (i.e. preimages under $f$ of measurable sets are measurable; we shall not deal with finer details of measure theory here). Let $\mu$ be a measure on $X$. Then we define a measure on $Y$ in the natural – what would $\mu$ do? – way:

$\displaystyle \mu_f(A) := \mu\bigl(f^{-1}(A)\bigr).$

In what follows, $X = Y = \mathbb{R}^p$ and $\mu = \lambda$ the Lebesgue measure.

Theorem (Transformation of Measure): Let $f$ be a bijective linear mapping and let $A\subset\mathbb{R}^p$ be a measurable set. Then, the pushforward measure satisfies:

$\displaystyle \lambda_f(A) = \left|\det f\right|^{-1}\lambda(A)\qquad\text{for any Borel set }A\in\mathcal{B}^p.$
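Before the lemmas, a quick Monte-Carlo sanity check of the statement for a hypothetical linear map on $\mathbb{R}^2$ and $A=[0,1]^2$ (numpy assumed; the sampling box $[-0.5,0.5]\times[0,0.5]$ contains $f^{-1}(A)$):

```python
import numpy as np

f = np.array([[2.0, 1.0],
              [0.0, 3.0]])                # det f = 6

# lambda_f(A) = lambda(f^{-1}(A)); estimate the area of f^{-1}([0,1]^2)
# by sampling uniformly in a box containing it (area 0.5).
rng = np.random.default_rng(2)
pts = rng.uniform([-0.5, 0.0], [0.5, 0.5], size=(200_000, 2))
images = pts @ f.T                        # apply f to each sample point
inside = np.all((images >= 0.0) & (images <= 1.0), axis=1)
estimate = 0.5 * inside.mean()

# Theorem: lambda_f(A) = |det f|^{-1} lambda(A) = 1/6
assert abs(estimate - 1.0 / 6.0) < 0.01
```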

We will start with several Lemmas.

Lemma (The Translation-Lemma): Lebesgue measure is invariant under translations.

Proof: Let $c,d\in\mathbb{R}^p$ with $c\leq d$ component-wise. Let $t_a$ be the shift by the vector $a\in\mathbb{R}^p$, i.e. $t_a(v) = v+a$ and $t_a(A) = \{x+a\in\mathbb{R}^p\colon x\in A\}$. Then,

$\displaystyle t_a^{-1}\bigl((c,d]\bigr) = (c-a,d-a],$

where the interval is meant as the cartesian product of the component-intervals. For the Lebesgue-measure, we get

$\displaystyle \lambda_{t_a}\bigl((c,d]\bigr) = \lambda\bigl((c-a,d-a]\bigr) = \prod_{i=1}^p\bigl((d_i-a_i)-(c_i-a_i)\bigr) = \prod_{i=1}^p(d_i-c_i) = \lambda\bigl((c,d]\bigr).$

The measures $\lambda$, $\lambda_{t_a}$ hence agree on the rectangles (open on their left-hand sides), i.e. on a semi-ring generating the $\sigma$-algebra $\mathcal{B}^p$. With the usual arguments (which might as well involve $\cap$-stable Dynkin-systems, for instance), we find that the measures agree on the whole $\mathcal{B}^p$.

q.e.d.

Lemma (The constant-multiple-Lemma): Let $\mu$ be a translation-invariant measure on $\mathcal{B}^p$, and $\alpha:=\mu\bigl([0,1]^p\bigr) < \infty$. Then $\mu(A) = \alpha\lambda(A)$ for any $A\in\mathcal{B}^p$.

Note that the Lemma only holds for measures that are finite on $[0,1]^p$. For instance, the counting measure is translation-invariant, but it is not a multiple of Lebesgue measure.

Proof: We divide the set $(0,1]^p$ via a rectangular grid of sidelengths $\frac1{n_i}$, $i=1,\ldots,p$:

$\displaystyle (0,1]^p = \bigcup_{\stackrel{k_j=0,\ldots,n_j-1}{j=1,\ldots,p}}\left(\times_{i=1}^p\left(0,\frac1{n_i}\right] + \left(\frac{k_1}{n_1},\ldots,\frac{k_p}{n_p}\right)\right).$

On the right-hand side there are $\prod_{i=1}^pn_i$ sets which have the same measure (by translation-invariance). Hence,

$\displaystyle \mu\bigl((0,1]^p\bigr) = \mu\left(\bigcup\cdots\right) = n_1\cdots n_p \cdot \mu\left(\times_{i=1}^p \left(0,\frac1{n_i}\right]\right).$

Here, we distinguish three cases.

Case 1: $\mu\bigl((0,1]^p\bigr) = 1$. Then,

$\displaystyle\mu\left(\times_{i=1}^p (0,\frac1{n_i}]\right) = \prod_{i=1}^p\frac1{n_i} = \lambda\left(\times_{i=1}^p (0,\frac1{n_i}]\right).$

By choosing appropriate grids and further translations, we get that $\mu(I) = \lambda(I)$ for any rectangle $I$ with rational bounds. Via the usual arguments, $\mu=\lambda$ on the whole of $\mathcal{B}^p$.

Case 2: $\mu\bigl((0,1]^p\bigr) \neq 1$ and $>0$. By assumption, however, this measure is finite. Setting $\gamma = \mu\bigl((0,1]^p\bigr)$, we can look at the measure $\gamma^{-1}\mu$, which of course has $\gamma^{-1}\mu\bigl((0,1]^p\bigr) = 1$. By Case 1, $\gamma^{-1}\mu = \lambda$.

Case 3: $\mu\bigl((0,1]^p\bigr) = 0$. Then, using translation invariance again,

$\displaystyle \mu(\mathbb{R}^p) = \mu\bigl(\bigcup_{z\in\mathbb{Z}^p}((0,1]^p+z)\bigr) = \sum_{z\in\mathbb{Z}^p}\mu\bigl((0,1]^p\bigr) = 0.$

By monotonicity, we then get $\mu(A) = 0$ for all $A\in\mathcal{B}^p$.

That means, in all cases, $\mu$ is equal to a constant multiple of $\lambda$, the constant being the measure of $(0,1]^p$. That is not quite what we intended, as we wish the constant multiple to be the measure of the compact set $[0,1]^p$.

Remember our setting $\alpha:=\mu\bigl([0,1]^p\bigr)$ and $\gamma := \mu\bigl((0,1]^p\bigr)$. Let $A\in\mathcal{B}^p$. We distinguish another two cases:

Case $a)$ $\alpha = 0$. By monotonicity, $\gamma = 0$ and Case 3 applies: $\mu(A) = 0 = \alpha\lambda(A)$.

Case $b)$ $\alpha > 0$. By monotonicity and translation-invariance,

$\displaystyle \alpha \leq \mu\bigl((-1,1]^p\bigr) = 2^p\gamma,$

meaning $\gamma\geq\frac{\alpha}{2^p}$. Therefore, as $\alpha>0$, we get $\gamma>0$, and by Case 1, $\frac1\gamma\mu(A) = \lambda(A)$. In particular,

$\displaystyle \frac\alpha\gamma = \frac1\gamma\mu\bigl([0,1]^p\bigr) = \lambda\bigl([0,1]^p\bigr) = 1,$

and so, $\alpha = \gamma$, meaning $\mu(A) = \gamma\lambda(A) = \alpha\lambda(A)$.

q.e.d.

Proof (of the Theorem on Transformation of Measure). We will first show that the measure $\lambda_f$ is invariant under translations.

We find, using the Translation-Lemma in $(\ast)$, and the linearity of $f$ before that,

\displaystyle \begin{aligned} \lambda_{t_a\circ f}(A) = \lambda_f(A-a) &\stackrel{\hphantom{(\ast)}}{=} \lambda\bigl(f^{-1}(A-a)\bigr) \\ &\stackrel{\hphantom{(\ast)}}{=} \lambda\bigl(f^{-1}(A) - f^{-1}(a)\bigr) \\ &\stackrel{(\ast)}{=} \lambda\bigl(f^{-1}(A)\bigr) \\ &\stackrel{\hphantom{(\ast)}}{=} \lambda_f(A), \end{aligned}

which means that $\lambda_f$ is indeed invariant under translations.

As $[0,1]^p$ is compact, so is $f^{-1}\bigl([0,1]^p\bigr)$ – remember that continuous images of compact sets are compact (here, the continuous mapping is $f^{-1}$). In particular, $f^{-1}\bigl([0,1]^p\bigr)$ is bounded, and thus has finite Lebesgue measure.

We set $c(f) := \lambda_f\bigl([0,1]^p\bigr)$. By the constant-multiple-Lemma, $\lambda_f$ is a multiple of Lebesgue measure: we must have

$\displaystyle \lambda_f(A) = c(f)\lambda(A)\text{ for all }A\in\mathcal{B}^p.\qquad (\spadesuit)$

We only have left to prove that $c(f) = \left|\det f\right|^{-1}$. To do this, there may be two directions to follow. We first give the way that is laid out in Elstrodt’s book (which we are basically following in this whole post). Later, we shall give the more folklore-way of concluding this proof.

We consider more and more general fashions of the invertible linear mapping $f$.

Step 1: Let $f$ be orthogonal. Then, for the unit ball $B_1(0)$,

$\displaystyle c(f)\lambda\bigl(B_1(0)\bigr) \stackrel{(\spadesuit)}{=} \lambda_f\bigl(B_1(0)\bigr) = \lambda\left(f^{-1}\bigl(B_1(0)\bigr)\right) = \lambda\bigl(B_1(0)\bigr).$

This means that $c(f) = 1 = \left|\det f\right|^{-1}$.

This step shows for the first time how the properties of a determinant encode the notion of size already: we have only used the basic lemmas on orthogonal matrices (they leave distances unchanged and hence the ball $B_1(0)$ doesn’t transform; besides, their inverse is their adjoint) and on determinants (they don’t react to orthogonal matrices because of their multiplicative property and because they don’t care for adjoints).

Step 2: Let $f$ have a representation as a diagonal matrix (using the standard basis of $\mathbb{R}^p$). Let us assume w.l.o.g. that $f = \mathrm{diag}(d_1,\ldots,d_p)$ with $d_i>0$. The case of $d_i<0$ is only notationally cumbersome. We get

$\displaystyle c(f) = \lambda_f\bigl([0,1]^p\bigr) = \lambda\left(f^{-1}\bigl([0,1]^p\bigr)\right) = \lambda\bigl(\times_{i=1}^p[0,d_i^{-1}]\bigr) = \prod_{i=1}^pd_i^{-1} = \left|\det f\right|^{-1}.$

Again, the basic lemmas on determinants already make use of the notion of size without actually saying so. Here, it is the computation of the determinant by multiplication of the diagonal.

Step 3: Let $f$ be linear and invertible, and let $f^\ast$ be its adjoint. Then $f^\ast f$ is non-negative definite (since for $x\in\mathbb{R}^p$, $x^\ast(f^\ast f)x = (fx)^\ast(fx) = \left\|fx\right\|^2\geq0$). By the Principal Axis Theorem, there is some orthogonal matrix $v$ and some diagonal matrix with non-negative entries $d$ with $f^\ast f = vd^2v^\ast$. As $f$ was invertible, no entry of $d$ may vanish here (since then, its determinant would vanish and in particular, $f$ would no longer be invertible). Now, we set

$\displaystyle w:=d^{-1}v^\ast f^\ast,$

which is orthogonal because of

$\displaystyle ww^\ast = d^{-1} v^\ast f^\ast fv d^{-1} = d^{-1}v^\ast (vd^2v^\ast)v d^{-1} = d^{-1}(v^\ast v)d^2(v^\ast v)d^{-1} = d^{-1}d^2d^{-1} = \mathrm{id}.$

As $f = w^\ast dv^\ast$, and hence $f^{-1} = vd^{-1}w$, we see from Step 1

\displaystyle \begin{aligned} c(f) = \lambda_f\bigl([0,1]^p\bigr) &= \lambda\left(vd^{-1}w\bigl([0,1]^p\bigr)\right) \\ &= \lambda\left(d^{-1}w\bigl([0,1]^p\bigr)\right)\\ &= \left|\det d\right|^{-1}\lambda\left(w\bigl([0,1]^p\bigr)\right) \\ &= \left|\det d\right|^{-1}\lambda\bigl([0,1]^p\bigr) \\ &= \left|\det f\right|^{-1}\lambda\bigl([0,1]^p\bigr) \\ &= \left|\det f\right|^{-1}, \end{aligned}

by the multiplicative property of determinants again ($\left|\det f\right| = \left|\det d\right|$, since the orthogonal factors have determinant $\pm1$).
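The decomposition used in Step 3 can be reproduced numerically for a sample invertible matrix: build $d$ and $v$ from the eigendecomposition of $f^\ast f$, set $w = d^{-1}v^\ast f^\ast$, and check that $w$ is orthogonal, that $f$ is recovered as $w^\ast d v^\ast$, and that $\left|\det f\right| = \det d$. A sketch, assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(3)
f = rng.standard_normal((4, 4)) + 4.0 * np.eye(4)   # generically invertible

# f* f is symmetric positive definite: f* f = v d^2 v*
eigvals, v = np.linalg.eigh(f.T @ f)
d = np.diag(np.sqrt(eigvals))

w = np.linalg.inv(d) @ v.T @ f.T       # w := d^{-1} v* f*

assert np.allclose(w @ w.T, np.eye(4), atol=1e-10)           # w is orthogonal
assert np.allclose(w.T @ d @ v.T, f, atol=1e-10)             # f = w* d v*
assert np.isclose(abs(np.linalg.det(f)), np.linalg.det(d))   # |det f| = det d
```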

q.e.d.(Theorem)

As an encore, we show another way to conclude in the Theorem, once all the Lemmas are shown and applied. This is the more folklore way alluded to in the proof, making use of the fact that any invertible matrix is the product of elementary matrices (and, of course, making use of the multiplicative property of determinants). Hence, we only consider those.

Because Step 2 of the proof already dealt with diagonal matrices, we only have to look at shear-matrices like $E_{ij}(r) := \bigl(\delta_{kl}+r\delta_{ik}\delta_{jl}\bigr)_{k,l=1,\ldots,p}$. They are the identity matrix with the (off-diagonal) entry $r$ in row $i$ and column $j$. One readily finds $\bigl(E_{ij}(r)\bigr)^{-1} = E_{ij}(-r)$, and $\det E_{ij}(r) = 1$. Any vector $v\in[0,1]^p$ is mapped to

$\displaystyle E_{ij}(r)(v_1,\ldots,v_i,\ldots, v_p)^t = (v_1,\ldots,v_i+rv_j,\ldots,v_p)^t.$

This gives

\displaystyle \begin{aligned}\lambda_{E_{ij}(r)}\bigl([0,1]^p\bigr) &= \lambda\left(E_{ij}(-r)\bigl([0,1]^p\bigr)\right) \\ &= \lambda\left(\left\{x\in\mathbb{R}^p\colon x=(v_1,\ldots,v_i-rv_j,\ldots,v_p), v_k\in[0,1]\right\}\right). \end{aligned}

This is a parallelogram that may be covered by $n$ rectangles as follows: we fix the dimension $i$ and one other dimension to set a rectangle of height $\frac1n$, width $1+\frac rn$ (all other dimension-widths = 1; see the image for an illustration). Implicitly, we have demanded that $p\geq2$ here; but $p=1$ is uninteresting for the proof, as there are too few invertible linear mappings in $\mathbb{R}^1$.

By monotonicity, this yields

$\lambda_{E_{ij}(r)}\bigl([0,1]^p\bigr) \leq n\frac1n\left(1+\frac{r}{n}\right) = 1+\frac rn\xrightarrow{n\to\infty}1.$

On the other hand, this parallelogram itself covers the rectangles of width $1-\frac rn$, and a similar computation shows that in the limit $\lambda_{E_{ij}(r)}\bigl([0,1]^p\bigr)\geq1$.

In particular: $\lambda_{E_{ij}(r)}\bigl([0,1]^p\bigr) = 1 = \left|\det E_{ij}(r)\right|^{-1}$.
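The stated properties of the shear matrices can be verified directly (numpy assumed):

```python
import numpy as np

def shear(p, i, j, r):
    """E_ij(r): the identity with off-diagonal entry r in row i, column j."""
    E = np.eye(p)
    E[i, j] = r
    return E

E = shear(4, 0, 2, 1.7)
assert np.isclose(np.linalg.det(E), 1.0)                      # det E_ij(r) = 1
assert np.allclose(np.linalg.inv(E), shear(4, 0, 2, -1.7))    # inverse is E_ij(-r)

v = np.array([1.0, 2.0, 3.0, 4.0])
assert np.allclose(E @ v, [1.0 + 1.7 * 3.0, 2.0, 3.0, 4.0])   # v_i -> v_i + r v_j
```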

q.e.d. (Theorem encore)

Proving the multidimensional transformation formula for integration by substitution is considerably more difficult than in one dimension, where it basically amounts to reading the chain rule in reverse. Let us state the formula here first:

Theorem (The Transformation Formula, Jacobi): Let $U,V\subset \mathbb{R}^p$ be open sets and let $\Phi:U\to V$ be a $\mathcal{C}^1$-diffeomorphism (i.e. $\Phi^{-1}$ exists and both $\Phi$ and $\Phi^{-1}$ are $\mathcal{C}^1$-functions). Let $f:V\to\mathbb{R}$ be measurable (and non-negative or integrable, so that the integrals exist). Then, $f\circ\Phi:U\to\mathbb{R}$ is measurable and

$\displaystyle\int_V f(t)dt = \int_U f\bigl(\Phi(s)\bigr)\left|\det\Phi'(s)\right|ds.$

At the core of the proof is the Theorem on Transformation of Measure that we have proved above. The idea is to approximate $\Phi$ by linear mappings, which locally transform the Lebesgue measure underlying the integral and yield the determinant in each point as correction factor. The technical difficulty is to show that this approximation does no harm for the evaluation of the integral.
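A numerical sanity check of the formula, with the hypothetical diffeomorphism $\Phi(s_1,s_2)=(e^{s_1},s_2)$ mapping $U=(0,1)^2$ onto $V=(1,e)\times(0,1)$, $f(t)=t_1t_2$, and both sides evaluated by a midpoint rule (numpy assumed):

```python
import numpy as np

n = 400
# Midpoint grid on U = (0,1)^2
s1, s2 = np.meshgrid((np.arange(n) + 0.5) / n, (np.arange(n) + 0.5) / n)

# Right-hand side: integral over U of f(Phi(s)) |det Phi'(s)| ds,
# with Phi(s) = (exp(s1), s2) and det Phi'(s) = exp(s1)
rhs = np.mean(np.exp(s1) * s2 * np.exp(s1))

# Left-hand side: integral over V = (1,e) x (0,1) of t1 t2 dt
t1 = 1.0 + (np.e - 1.0) * (np.arange(n) + 0.5) / n
t2 = (np.arange(n) + 0.5) / n
lhs = (np.e - 1.0) * np.mean(np.outer(t1, t2))

exact = (np.e**2 - 1.0) / 4.0     # closed form of both integrals
assert abs(rhs - exact) < 1e-3 and abs(lhs - exact) < 1e-3
```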

We will need a lemma first, which carries most of the weight of the proof.

The Preparatory Lemma: Let $U,V\subset \mathbb{R}^p$ be open sets and let $\Phi:U\to V$ be a $\mathcal{C}^1$-diffeomorphism. If $X\subset U$ is a Borel set, then so is $\Phi(X)\subset V$, and

$\displaystyle \lambda\bigl(\Phi(X)\bigr)\leq\int_X\left|\det\Phi'(s)\right|ds.$

Proof: Without loss of generality, we can assume that $\Phi$, $\Phi'$ and $(\Phi')^{-1}$ are defined on a compact set $K\supset U$. We consider, for instance, the sets

$\displaystyle U_k:=\left\{ x\in U\colon \left|x\right| < k \text{ and } \mathrm{dist}(x,\partial U) > \frac1k\right\}.$

The $U_k$ are open and bounded, $\overline U_k$ is hence compact, and there is a chain $U_k\subset\overline U_k\subset U_{k+1}\subset\cdots$ for all $k$, with $U=\bigcup_kU_k$. To each $U_k$ there is, hence, a compact superset on which $\Phi$, $\Phi'$ and $(\Phi')^{-1}$ are defined. Now, if we can prove the statement of the Preparatory Lemma on $X_k := X\cap U_k$, it will also be true on $X=\lim_kX_k$ by the monotone convergence theorem.

As we can consider all relevant functions to be defined on compact sets, and as they are continuous (and even more) by assumption, they are readily found to be uniformly continuous and bounded.

It is obvious that $\Phi(X)$ will be a Borel set, as $\Phi^{-1}$ is continuous.

Let us prove that the Preparatory Lemma holds for rectangles $I$ with rational endpoints, being contained in $U$.

There is some $r>0$ such that for any $a\in I$, $B_r(a)\subset U$. By continuity, there is a finite constant $M$ with

$\displaystyle M:=\sup_{t\in I}\left\|\bigl(\Phi'(t)\bigr)^{-1}\right\|,$

and, for any given $\varepsilon>0$, uniform continuity allows $r$ to be chosen small enough such that even

$\displaystyle \sup_{x\in B_r(a)}\left\|\Phi'(x)-\Phi'(a)\right\|\leq\frac{\varepsilon}{M\sqrt p} \text{ for every }a\in I.$

With this $r$, we may now sub-divide our rectangle $I$ into disjoint cubes $I_k$ of side-length $d$ such that $d<\frac{r}{\sqrt p}$. In what follows, we shall sometimes need to consider the closure $\overline I_k$ for some of the estimates, but we shall not make the proper distinction for reasons of legibility.

For any given $b\in I_k$, every other point $c$ of $I_k$ may at most have distance $d$ in each of its components, which ensures

$\displaystyle\left\|b-c\right\|^2 \leq \sum_{i=1}^pd^2 = pd^2 < r^2.$

This, in turn, means, $I_k\subset B_r(b)$ (and $B_r(b)\subset U$ has been clear because of the construction of $r$).

Now, in every of the cubes $I_k$, we choose the point $a_k\in I_k$ with

$\displaystyle\left|\det\Phi'(a_k)\right| = \min_{t\in I_k}\left|\det\Phi'(t)\right|,$

and we define the linear mapping

$\displaystyle \Phi_k:=\Phi'(a_k)$.

Remember that for convex sets $A$, differentiable mappings $h:A\to\mathbb{R}^p$, and points $x,y\in A$, the mean value theorem shows

$\displaystyle \left\|h(x)-h(y)\right\|\leq\left\|x-y\right\|\sup_{\lambda\in[0,1]}\left\|h'\bigl(x+\lambda(y-x)\bigr)\right\|.$
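This mean value inequality can be checked numerically for a sample differentiable map (hypothetical, chosen only for illustration; numpy assumed, with the supremum discretized along the segment):

```python
import numpy as np

def h(x):
    return np.array([np.sin(x[0]) + x[1]**2, np.cos(x[1])])

def h_prime(x):
    """Jacobian of h."""
    return np.array([[np.cos(x[0]), 2.0 * x[1]],
                     [0.0, -np.sin(x[1])]])

x = np.array([0.2, 0.4])
y = np.array([0.9, -0.3])

# Supremum of the operator norm of h' along the segment from x to y
sup_norm = max(np.linalg.norm(h_prime(x + lam * (y - x)), 2)
               for lam in np.linspace(0.0, 1.0, 200))

assert np.linalg.norm(h(x) - h(y)) <= np.linalg.norm(x - y) * sup_norm
```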

Let $a\in I_k$ be a given point in one of the cubes. We apply the mean value theorem to the mapping $h(x):=\Phi(x)-\Phi_k(x)$, which is certainly differentiable, to $y:=a_k$, and to the convex set $A:=B_r(a)$:

\displaystyle \begin{aligned} \left\|h(x)-h(y)\right\|&\leq\left\|x-y\right\|\sup_{\lambda\in[0,1]}\left\|h'\bigl(x+\lambda(y-x)\bigr)\right\|\\ \left\|\Phi(x)-\Phi_k(x)-\Phi(a_k)+\Phi_k(a_k)\right\| & \leq \left\|x-a_k\right\|\sup_{\lambda\in[0,1]}\left\|\Phi'\bigl(x+\lambda(a_k-x)\bigr)-\Phi'(a_k)\right\|\\ \left\|\Phi(x)-\Phi(a_k)-\Phi_k(x-a_k)\right\| &< \left\|x-a_k\right\| \frac{\varepsilon}{M\sqrt p}\qquad (\clubsuit). \end{aligned}

Note that as $a_k\in I_k\subset B_r(a)$, we have $x+\lambda(a_k-x)\in B_r(a)$ by convexity, and hence the estimate from uniform continuity is applicable. Note beyond that, that $\Phi_k$ is the linear mapping $\Phi'(a_k)$, and the derivative of a linear mapping is the linear mapping itself.

Now, $\left\|x-a_k\right\|< d\sqrt p$, as both points are contained in $I_k$, and hence $(\clubsuit)$ shows

\displaystyle \begin{aligned} \Phi(I_k) &\subset \Phi(a_k)+\Phi_k(I_k-a_k)+B_{\frac{\varepsilon}{M\sqrt p}d\sqrt p}(0) \\ &\subset \Phi(a_k)+\Phi_k(I_k-a_k)+B_{\frac{d\varepsilon}{M}}(0). \end{aligned}

By continuity (and hence boundedness) of $\Phi'$, we also have

$\displaystyle \left\|(\Phi_k)^{-1}(x)\right\|\leq\left\|\bigl(\Phi'(a_k)\bigr)^{-1}\right\|\left\|x\right\|\leq M \left\|x\right\|$,

which means $B_{\frac{d\varepsilon}{M}}(0) = \Phi_k\left(\Phi_k^{-1}\bigl(B_{\frac{d\varepsilon}{M}}(0)\bigr)\right) \subset \Phi_k\bigl(B_{d\varepsilon}(0)\bigr)$.

Hence:

$\displaystyle \Phi(I_k) \subset \Phi(a_k) + \Phi_k\bigl(I_k-a_k+B_{d\varepsilon}(0)\bigr).$

Why all this work? We want to bound the measure of the set $\Phi(I_k)$, and we can get it now: the shift $\Phi(a_k)$ is unimportant by translation invariance. And the set $I_k-a_k+B_{d\varepsilon}(0)$ is contained in a cube of side-length $d+2d\varepsilon$. As promised, we have approximated the mapping $\Phi$ by a linear mapping $\Phi_k$ on a small set, and the transformed set has become only slightly bigger. By the Theorem on Transformation of Measure, this shows

\displaystyle \begin{aligned} \lambda\bigl(\Phi(I_k)\bigr) &\leq \lambda\left(\Phi_k\bigl(I_k-a_k+B_{d\varepsilon}(0)\bigr)\right) \\ &=\left|\det\Phi_k\right|\lambda\bigl(I_k-a_k+B_{d\varepsilon}(0)\bigr)\\ &\leq \left|\det\Phi_k\right|d^p(1+2\varepsilon)^p \\ &= \left|\det\Phi_k\right|(1+2\varepsilon)^p\lambda(I_k). \end{aligned}

Summing over all the cubes $I_k$ into which the rectangle $I$ was subdivided (remember that $\Phi$ is a diffeomorphism, so disjoint sets are mapped to disjoint sets; besides, $a_k$ has been chosen as a point in $I_k$ where $\left|\det\Phi'\right|$ is smallest), we find

\displaystyle \begin{aligned} \lambda\bigl(\Phi(I)\bigr) &\leq (1+2\varepsilon)^p\sum_{k=1}^n\left|\det \Phi_k\right|\lambda(I_k) \\ &= (1+2\varepsilon)^p\sum_{k=1}^n\left|\det \Phi'(a_k)\right|\lambda(I_k)\\ &= (1+2\varepsilon)^p\sum_{k=1}^n\int_{I_k}\left|\det\Phi'(a_k)\right|ds\\ &\leq (1+2\varepsilon)^p\int_I\left|\det\Phi'(s)\right|ds. \end{aligned}

Letting $\varepsilon\to0$ leads to ever finer subdivisions $I_k$ and, in the limit, to the conclusion. The Preparatory Lemma holds for rectangles.

Now, let $X\subset U$ be any Borel set, and let $\varepsilon>0$. We cover $X$ by disjoint (rational) rectangles $R_k\subset U$, such that $\lambda\bigl(\bigcup R_k \setminus X\bigr)<\varepsilon$. Then,

\displaystyle \begin{aligned} \lambda\bigl(\Phi(X)\bigr) &\leq \sum_{k=1}^\infty \lambda\bigl(\Phi(R_k)\bigr)\\ &\leq\sum_{k=1}^\infty\int_{R_k}\left|\det \Phi'(s)\right|ds\\ &= \int_{\bigcup R_k}\left| \det\Phi'(s)\right| ds\\ &= \int_X\left| \det\Phi'(s)\right| ds + \int_{\bigcup R_k\setminus X}\left|\det\Phi'(s)\right|ds\\ &\leq \int_X\left| \det\Phi'(s)\right| ds + M\lambda\left(\bigcup R_k\setminus X\right)\\ &\leq \int_X\left| \det\Phi'(s)\right| ds + M\varepsilon. \end{aligned}

If we let $\varepsilon\to0$, we see $\lambda\bigl(\Phi(X)\bigr)\leq\int_X\bigl|\det\Phi'(s)\bigr|ds$.

q.e.d. (The Preparatory Lemma)

We didn’t use the full generality that may be possible here: we restricted ourselves to the Borel sets instead of the larger class of Lebesgue-measurable sets. We shall skip the technical details linked to this topic, and switch immediately to the

Proof of Jacobi’s Transformation Formula: We can focus on non-negative functions $f$ without loss of generality (take the positive and the negative part separately, if needed). By the Preparatory Lemma, we already have

\displaystyle\begin{aligned} \int_{\Phi(U)}\mathbf{1}_{\Phi(X)}(s)ds &= \int_{V}\mathbf{1}_{\Phi(X)}(s)ds\\ &= \int_{\Phi(X)}ds\\ &= \lambda\bigl(\Phi(X)\bigr)\\ &\leq \int_X\left|\det\Phi'(s)\right|ds\\ &= \int_U\mathbf{1}_X(s)\left|\det\Phi'(s)\right|ds\\ &= \int_U\mathbf{1}_{\Phi(X)}\bigl(\Phi(s)\bigr)\left|\det\Phi'(s)\right|ds, \end{aligned}

which proves the inequality

$\displaystyle \int_{\Phi(U)}f(t)dt \leq \int_U f\bigl(\Phi(s)\bigr)\left|\det\Phi'(s)\right|ds,$

for indicator functions $f = \mathbf{1}_{\Phi(X)}$. By the usual arguments (linearity of the integral, monotone convergence), this also holds for any non-negative measurable function $f$. To prove the Transformation Formula completely, we apply this inequality to the transformation $\Phi^{-1}$ and the function $g(s):=f\bigl(\Phi(s)\bigr)\left|\det\Phi'(s)\right|$:

\displaystyle \begin{aligned} \int_Uf\bigl(\Phi(s)\bigr)\left|\det\Phi'(s)\right|ds &= \int_{\Phi^{-1}(V)}g(t)dt\\ &\leq \int_Vg\bigl(\Phi^{-1}(t)\bigr)\left|\det(\Phi^{-1})'(t)\right|dt\\ &=\int_{\Phi(U)}f\Bigl(\Phi\bigl(\Phi^{-1}(t)\bigr)\Bigr)\left|\det\Phi'\bigl(\Phi^{-1}(t)\bigr)\right|\left|\det(\Phi^{-1})'(t)\right|dt\\ &=\int_{\Phi(U)}f(t)dt, \end{aligned}

since the chain rule yields $\Phi'\bigl(\Phi^{-1}(t)\bigr)\,(\Phi^{-1})'(t) = \bigl(\Phi\circ\Phi^{-1}\bigr)'(t) = \mathrm{id}$. This means that the reverse inequality also holds. The Theorem is proved.

q.e.d. (Theorem)
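Before moving on, the formula can be spot-checked numerically in one dimension. The diffeomorphism $\Phi(s)=s^3+s$ on $U=(0,1)$ and the function $f(t)=t^2$ below are arbitrary illustrative choices, and a plain trapezoid rule stands in for the integrals:

```python
import math

# Numerical sanity check of the transformation formula in one dimension,
# for the illustrative diffeomorphism Φ(s) = s³ + s on U = (0, 1):
#   ∫_{Φ(U)} f(t) dt  =  ∫_U f(Φ(s)) |Φ'(s)| ds,   here with f(t) = t².

def trapezoid(g, a, b, n=200_000):
    h = (b - a) / n
    s = 0.5 * (g(a) + g(b))
    for k in range(1, n):
        s += g(a + k * h)
    return s * h

phi = lambda s: s**3 + s       # strictly increasing, hence a diffeomorphism onto (0, 2)
dphi = lambda s: 3 * s**2 + 1  # Φ'(s) > 0, so |det Φ'| = Φ'
f = lambda t: t**2

lhs = trapezoid(f, 0.0, 2.0)                              # ∫_0^2 t² dt = 8/3
rhs = trapezoid(lambda s: f(phi(s)) * dphi(s), 0.0, 1.0)  # transformed integral
print(lhs, rhs)
```

Both quadratures agree with the exact value $8/3$ up to the discretization error of the trapezoid rule.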

There may be other, yet more intricate, proofs of this Theorem. We shall not give any of them here, but the rather mysterious-looking way in which the determinant pops up in the transformation formula is not the only way to look at it. There is a proof by induction, given in Heuser’s book, where the determinant just appears from the inductive step. However, there is little geometric intuition in this proof, and it is by no means easier than what we did above (as it makes heavy use of the theorem on implicit functions). Similar things may be said about the rather functional-analytic proof in Königsberger’s book (who concludes the transformation formula via step functions converging in the $L^1$-norm; he finds the determinant pretty much in the same way that we did).

Let us harvest a little of the hard work we did on the Transformation Formula. The most common example is the integral of the standard normal distribution, which amounts to the evaluation of

$\displaystyle \int_{-\infty}^\infty e^{-\frac12x^2}dx.$

This can happen via the transformation to polar coordinates:

$\Phi:(0,\infty)\times(0,2\pi)\to\mathbb{R}^2,\qquad (r,\varphi)\mapsto (r\cos \varphi, r\sin\varphi).$

For this transformation, whose image is all of $\mathbb{R}^2$ except for a set of measure $0$, we find

$\Phi'(r,\varphi) = \begin{pmatrix}\cos\varphi&-r\sin\varphi\\\sin\varphi&\hphantom{-}r\cos\varphi\end{pmatrix},\qquad \det\Phi'(r,\varphi) = r.$
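As a small sanity check, the Jacobian determinant of the polar-coordinate map can be recovered by central finite differences; the sample point and step size below are arbitrary:

```python
import math

# Finite-difference check that det Φ'(r, φ) = r for the polar-coordinate
# map Φ(r, φ) = (r cos φ, r sin φ).

def phi(r, p):
    return (r * math.cos(p), r * math.sin(p))

def jacobian_det(r, p, h=1e-5):
    # central differences for the columns ∂Φ/∂r and ∂Φ/∂φ of the Jacobian
    dxr = (phi(r + h, p)[0] - phi(r - h, p)[0]) / (2 * h)
    dyr = (phi(r + h, p)[1] - phi(r - h, p)[1]) / (2 * h)
    dxp = (phi(r, p + h)[0] - phi(r, p - h)[0]) / (2 * h)
    dyp = (phi(r, p + h)[1] - phi(r, p - h)[1]) / (2 * h)
    return dxr * dyp - dyr * dxp

print(jacobian_det(1.7, 0.9))   # ≈ 1.7, matching det Φ'(r, φ) = r
```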

From the Transformation Formula we now get

\displaystyle \begin{aligned} \left(\int_{-\infty}^\infty e^{-\frac12x^2}dx\right)^2 &= \int_{\mathbb{R}^2}\exp\left(-\frac12x^2-\frac12y^2\right)dxdy\\ &= \int_{\Phi\left((0,\infty)\times(0,2\pi)\right)}\exp\left(-\frac12x^2-\frac12y^2\right)dxdy\\ &= \int_{(0,\infty)\times(0,2\pi)}\exp\left(-\frac12r^2\cos^2(\varphi)-\frac12r^2\sin^2(\varphi)\right)\left|\det\Phi'(r,\varphi)\right|drd\varphi\\ &= \int_{(0,\infty)\times(0,2\pi)}\exp\left(-\frac12r^2\right)rdrd\varphi\\ &= \int_0^\infty\exp\left(-\frac12r^2\right)rdr\int_0^{2\pi}d\varphi \\ &= 2\pi \left[-\exp\left(-\frac12r^2\right)\right]_0^\infty\\ &= 2\pi \left(1-0\right)\\ &= 2\pi. \end{aligned}

In particular, $\int_{-\infty}^\infty e^{-\frac12x^2}dx=\sqrt{2\pi}$. One of the very basic results in probability theory.
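The value $\sqrt{2\pi}$ is easy to confirm numerically; truncating the domain at $\pm10$ and using a plain trapezoid rule (both arbitrary choices) suffices:

```python
import math

# Numerical check of ∫_{-∞}^{∞} e^{-x²/2} dx = √(2π). Truncating at ±10
# is harmless: the remaining tail mass lies far below double precision.

def gauss_integral(a=-10.0, b=10.0, n=100_000):
    h = (b - a) / n
    s = 0.5 * (math.exp(-a * a / 2) + math.exp(-b * b / 2))
    for k in range(1, n):
        x = a + k * h
        s += math.exp(-x * x / 2)
    return s * h

print(gauss_integral(), math.sqrt(2 * math.pi))
```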

Another little gem that follows from the Transformation Formula are the Fresnel integrals

$\displaystyle \int_{0}^\infty \cos(x^2)dx = \int_{0}^\infty\sin(x^2)dx = \sqrt{\frac{\pi}{8}}.$

They follow from the same basic trick given above for the standard normal density, but as other methods for deriving this result involve even trickier uses of similarly hard techniques (the Residue Theorem, for instance, as given in Remmert’s book), we shall give the proof of this here:

Consider

$\displaystyle F(t)=\int_{0}^\infty e^{-tx^2}\cos(x^2)dx\qquad\text{and}\qquad G(t)=\int_{0}^\infty e^{-tx^2}\sin(x^2)dx.$

Then, the trigonometric identity $\cos(a+b) = \cos a \cos b - \sin a\sin b$ tells us

\displaystyle \begin{aligned} \bigl(F(t)\bigr)^2 - \bigl(G(t)\bigr)^2 &= \int_0^\infty\int_0^\infty e^{-t(x^2+y^2)}\cos(x^2)\cos(y^2)dxdy - \int_0^\infty\int_0^\infty e^{-t(x^2+y^2)}\sin(x^2)\sin(y^2)dxdy\\ &= \int_0^\infty\int_0^\infty e^{-t(x^2+y^2)}\cos(x^2+y^2)dxdy\\ &= \int_0^\infty\int_0^{\frac\pi2}e^{-tr^2}\cos(r^2)rd\varphi dr\\ &= \frac\pi2 \int_0^\infty e^{-tu}\cos u\frac12du. \end{aligned}

This integral can be evaluated by parts to show

$\displaystyle \int_0^\infty e^{-tu}\cos udu\left(1+\frac1{t^2}\right) = \frac1t,$

which means

$\displaystyle \bigl(F(t)\bigr)^2 - \bigl(G(t)\bigr)^2 = \frac\pi4\int_0^\infty e^{-tu}\cos u\,du = \frac\pi4\frac t{t^2+1}.$
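As a quick numerical check of this Laplace integral (and of its companion with $\sin$, used further below), with the sample value $t=0.7$ and the truncation point chosen arbitrarily:

```python
import math

# Numerical check of the two Laplace integrals at the sample value t = 0.7:
#   ∫_0^∞ e^{-tu} cos u du = t/(1+t²),   ∫_0^∞ e^{-tu} sin u du = 1/(1+t²).
# Truncation at u = 60 leaves a tail of order e^{-42}, which is negligible.

def trapezoid(g, a, b, n=200_000):
    h = (b - a) / n
    s = 0.5 * (g(a) + g(b))
    for k in range(1, n):
        s += g(a + k * h)
    return s * h

t = 0.7
cos_int = trapezoid(lambda u: math.exp(-t * u) * math.cos(u), 0.0, 60.0)
sin_int = trapezoid(lambda u: math.exp(-t * u) * math.sin(u), 0.0, 60.0)
print(cos_int, t / (1 + t * t))
print(sin_int, 1 / (1 + t * t))
```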

Then we consider the product $F(t)G(t)$ and use the identity $\sin(a+b) = \cos a\sin b + \cos b\sin a$, as well as the symmetry of the integrand and integration by parts, to get

\displaystyle \begin{aligned} F(t)G(t) &= \int_0^\infty\int_0^\infty e^{-t(x^2+y^2)}\bigl(\cos(x^2)\sin(y^2)\bigr)dxdy\\ &=\int_0^\infty\int_0^ye^{-t(x^2+y^2)}\sin(x^2+y^2)dxdy\\ &=\int_0^\infty\int_{\frac\pi4}^{\frac\pi2}e^{-tr^2}\sin(r^2)rd\varphi dr\\ &=\frac\pi4\int_0^\infty e^{-tr^2}\sin(r^2)r dr\\ &=\frac\pi4\int_0^\infty e^{-tu}\sin u\frac12du\\ &=\frac\pi8\frac1{1+t^2}. \end{aligned}

We thus find by the dominated convergence theorem

\displaystyle \begin{aligned} \left(\int_0^\infty\cos x^2dx\right)^2-\left(\int_0^\infty\sin x^2dx\right)^2 &= \left(\int_0^\infty\lim_{t\downarrow0}e^{-tx^2}\cos x^2dx\right)^2-\left(\int_0^\infty\lim_{t\downarrow0}e^{-tx^2}\sin x^2dx\right)^2 \\ &=\lim_{t\downarrow0}\left(\bigl(F(t)\bigr)^2-\bigl(G(t)\bigr)^2\right)\\ &=\lim_{t\downarrow0}\frac\pi4\frac{t}{t^2+1}\\ &=0, \end{aligned}

and

\displaystyle \begin{aligned} \left(\int_0^\infty\cos x^2dx\right)^2 &= \left(\int_0^\infty\cos x^2dx\right)\left(\int_0^\infty\sin x^2dx\right)\\ &=\left(\int_0^\infty\lim_{t\downarrow0}e^{-tx^2}\cos x^2dx\right)\left(\int_0^\infty\lim_{t\downarrow0}e^{-tx^2}\sin x^2dx\right)\\ &=\lim_{t\downarrow0}F(t)G(t)\\ &=\lim_{t\downarrow0}\frac\pi8\frac1{1+t^2}\\ &=\frac\pi8. \end{aligned}

One can easily see that both integrals must be positive, and the first computation then gives

$\int_0^\infty\cos x^2dx = \int_0^\infty\sin x^2dx,$

while the second computation shows that both integrals have the value $\sqrt{\frac\pi8}$.

q.e.d. (Fresnel integrals)
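Both identities from the proof can be verified numerically; the sample value $t=1$ and the truncation point of the quadrature are arbitrary choices:

```python
import math

# Numerical check, at the sample value t = 1, of the two identities
#   F(t)² − G(t)² = (π/4)·t/(1+t²)   and   F(t)·G(t) = (π/8)/(1+t²),
# where F(t) = ∫_0^∞ e^{-tx²} cos(x²) dx and G(t) = ∫_0^∞ e^{-tx²} sin(x²) dx.
# Truncation at x = 8 leaves a tail of order e^{-64}, which is negligible.

def trapezoid(g, a, b, n=200_000):
    h = (b - a) / n
    s = 0.5 * (g(a) + g(b))
    for k in range(1, n):
        s += g(a + k * h)
    return s * h

t = 1.0
F = trapezoid(lambda x: math.exp(-t * x * x) * math.cos(x * x), 0.0, 8.0)
G = trapezoid(lambda x: math.exp(-t * x * x) * math.sin(x * x), 0.0, 8.0)
print(F * F - G * G, math.pi / 4 * t / (1 + t * t))
print(F * G, math.pi / 8 / (1 + t * t))
```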

Even Brouwer’s Fixed Point Theorem may be concluded from the Transformation Formula (amongst a bunch of other theorems, none of which is actually as deep as this one, though). This is worthy of a separate text, mind you.

# Johann Kepler and how planets move

Kepler’s Laws of planetary motion follow very smoothly from Newton’s Law of Gravitation. Very little tough mathematics is needed for the proofs; it can actually be done with ordinary differential calculus and some knowledge of path integration.

Of course, from a historical point of view, these laws appeared in reverse order. Kepler had neither Newton’s Law at his disposal, nor sufficient use of the calculus machinery that we have today. Instead, his way of coming up with his Laws included years of hard work on the astronomical tables compiled by Tycho Brahe; Kepler was unable to make astronomical observations himself, even with his self-invented telescope, as he had poor eyesight all his life. Later, Newton could rely on Kepler’s results to find inspiration for his Law of Gravitation: indeed, unless he could relate his results to Kepler’s laws, he knew his results to be incomplete. Together with his many other achievements (for instance, Kepler was the first to state Simpson’s rule, which is accordingly sometimes called “Kepler’s barrel rule”), this makes him one of the most interesting minds of the early modern era. His interest in the planets’ orbits was rooted in astrology and the construction of horoscopes; his observations and deductions led him to drop Copernicus’ assumption that the orbits were circles and, later, the idea that platonic solids were involved. Both Copernicus and Kepler revolutionized the thoughts on space itself: Copernicus by showing how much more easily the orbits can be described when Earth is allowed to move itself (no more epicycles and the like), but Kepler made it even simpler by dropping circles altogether.

To consider how hard the problem of finding the planets’ orbits is, consider that we have plenty of observational data of the planets; however, the data contain two unknowns: the orbit of Earth and the orbit of the planet. On top of that, our observations only tell us the angle under which the planet is observed; we don’t learn anything about the distances involved (unless we have Kepler’s Third Law at our disposal). In a very insightful talk for a wider audience, Terry Tao has explained some of Kepler’s ideas on this, especially how Kepler dealt with the orbit of Mars, which had been, for several reasons, the trickiest of the orbits in the models that preceded Kepler. Tao mentions that Einstein valued the finding of Kepler’s Laws as one of the most shining moments in the history of human curiosity. “And if Einstein calls you a genius, you are really doing well.”

From these Laws and from what Newton and his successors achieved, many things can be inferred that are impossible to measure directly. For instance, the mass of the Sun and of all planets can be computed from here, once the gravitational constant is known (which is tricky to pinpoint, actually). Voltaire is quoted with the sentence, regarding Newton’s achievements but fitting Kepler just as well, that the insights gained “semblaient n’être pas faites pour l’esprit humain” (“seemed not to be made for the human mind”).

To give just a tiny bit of contrast, we mention that Kepler also had erroneous thoughts that show how deeply he was still rooted in ancient ideas of harmonics and aesthetics. For instance, Kepler tried to prove why the Solar system had exactly six planets (or, to rephrase a little more accurately in his way of thinking: why God had found pleasure in creating exactly six planets). For some time, he believed that the reason was related to the fact that there are exactly five platonic solids, which define the structure of the six orbits around the Sun. Those ideas were also related to the integer harmonies of a vibrating string, as the planets were supposed to move in a harmonical way themselves. Of course, in those days the observations were limited up to Saturn, as the outer planets (and dwarf planets) cannot be found by eyesight or with the telescopes at Kepler’s disposal; all such ideas were doomed to be incomplete. However, his quest for harmonics in the Solar system led him in the end to his Third Law. On another note, Kepler was mistaken in the deduction of his First Law, since he lacked the deep knowledge about integration that would only be developed decades later; luckily, his mistake cancels out with another mistake later on: “Es ist schon atemberaubend, wie sich bei Keplers Rechnungen letztlich doch alles fügt.” (“It is stunning how everything in Kepler’s computations adds up in the end”; have a look at Sonar’s highly readable and interesting book on the history of calculus for this).

In what follows, we shall show how Kepler’s Laws can be proved, assuming Newton’s Law of Gravitation, in a purely mathematical fashion. There will be no heuristics from physics or astronomy, only axiomatic mathematical deduction that mostly works without any intuition from the applications (though we will look at motivations for why some definitions are made the way they are).

As a nice aside, we can look at the mathematical descriptions of the conic sections on which the First Law relies. But here again, there’s no connection made to why these curves are called this way.

Let us state Kepler’s Laws here first.

Kepler’s First Law of Planetary Motion: Planets orbit in ellipses, with the Sun as one of the foci.

Kepler’s Second Law of Planetary Motion: A planet sweeps out equal areas in equal times.

Kepler’s Third Law of Planetary Motion: The square of the period of an orbit is proportional to the cube of its semi-major axis.

Let us prove these Laws by following the account of Königsberger’s book on calculus. Many calculus books deal with Kepler’s Laws in a similar axiomatic fashion, yet we stick to this account as it appears to be the neatest one without conjuring up too much physics.

We shall give a couple of technical lemmas first.

The Triangle-Lemma: The triangle marked by the points $(x_1,y_1)$, $(x_2,y_2)$, $(x_3,y_3)$ has area $\displaystyle \frac12\left|\mathrm{det}\begin{pmatrix}1&x_1&y_1\\1&x_2&y_2\\1&x_3&y_3\end{pmatrix}\right|$.

Proof: The triangle together with the $x$-axis marks the trapezoids:

$P_{13}$ with the points $(x_1,y_1)$, $(x_3, y_3)$, $(x_3,0)$, $(x_1,0)$;

$P_{23}$ with the points $(x_2,y_2)$, $(x_3, y_3)$, $(x_3,0)$, $(x_2,0)$;

$P_{12}$ with the points $(x_1,y_1)$, $(x_2, y_2)$, $(x_2,0)$, $(x_1,0)$.

Thus, the area of the triangle is:

$F_{T} = P_{13}+P_{23}-P_{12}$.

The sign represents the situation given in the figure. For other triangles, another permutation of the signs may be necessary, but there will always be exactly one negative sign. Other permutations of the sign only represent a re-numbering of the points and therefore a change of sign in the determinant given in the statement. As we put absolute values to our statement, we avoid any difficulties of this kind.

As each of the trapezoids has two of its vertices on the $x$-axis, we find

\begin{aligned} F_{T} &= \frac{y_1+y_3}{2}(x_3-x_1)+\frac{y_2+y_3}{2}(x_2-x_3)-\frac{y_1+y_2}{2}(x_2-x_1)\\ &= \frac12\left(x_1\bigl((y_1+y_2)-(y_1+y_3)\bigr) + x_2\bigl((y_2+y_3)-(y_1+y_2)\bigr) + x_3\bigl((y_1+y_3)-(y_2+y_3)\bigr)\right)\\ &= \frac12\left(x_1\bigl((y_1-y_3) + (y_2 - y_1)\bigr) + x_2(y_3-y_1) + x_3(y_1-y_2)\right)\\ &= \frac12\left((x_2-x_1)(y_3-y_1) - (x_3-x_1)(y_2-y_1)\right)\\ &= \frac12\mathrm{det}\begin{pmatrix}x_2-x_1&y_2-y_1\\x_3-x_1&y_3-y_1\end{pmatrix}\\ &= \frac12\mathrm{det}\begin{pmatrix}1&x_1&y_1\\0&x_2-x_1&y_2-y_1\\0&x_3-x_1&y_3-y_1\end{pmatrix}\\ &= \frac12\mathrm{det}\begin{pmatrix}1&x_1&y_1\\1&x_2&y_2\\1&x_3&y_3\end{pmatrix}. \end{aligned}

Q.e.d.
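The determinant formula is easy to test against sample triangles; the triangles below and the comparison with the classical shoelace formula are arbitrary, illustrative choices:

```python
# Checking the determinant formula from the Triangle-Lemma against the
# classical shoelace formula on arbitrary sample triangles.

def triangle_area_det(p1, p2, p3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # expand det [[1, x1, y1], [1, x2, y2], [1, x3, y3]] along the first column
    det = (x2 * y3 - x3 * y2) - (x1 * y3 - x3 * y1) + (x1 * y2 - x2 * y1)
    return abs(det) / 2

def shoelace(p1, p2, p3):
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

print(triangle_area_det((0, 0), (3, 0), (0, 4)))   # 6.0: right triangle with legs 3 and 4
print(triangle_area_det((1, 1), (4, 2), (2, 5)))   # agrees with the shoelace formula
```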

Lemma (Leibniz’ sector formula): Let $\gamma:[a,b]\to\mathbb{R}^2$ be a continuously differentiable path, $\gamma(t) = \begin{pmatrix}x(t)\\y(t)\end{pmatrix}$. Then the line segment from $0$ to the points of the path sweeps the area $\displaystyle\frac12\int_a^b(x\dot y - y\dot x)dt$.

Note that we have used Newton’s notation for derivatives. One might also write the integral as $\displaystyle\int_a^b\bigl(x(t)y'(t)-y(t)x'(t)\bigr)dt$.

Proof: Let us clarify first, what we understand by “sweeping” line segments. Consider the path $\gamma$ given in the image.

As this path is not closed (it’s not a contour), it doesn’t contain an area. But if we take the origin into account, we can define an area that is related to where the path is:

Now, pick a partition of $[a,b]$, such as $\{a=t_0,t_1,\ldots,t_{n-1},b=t_n\}$ and make a polygon of the partition and the origin – the corresponding triangles form an area that approximates the area bounded by $\gamma$, as above.

As the partition gets finer, we expect that the polygon-area converges to the $\gamma$-area. And this is where the definition originates, of the area that is swept by a path:

For any $\varepsilon>0$ there shall be $\delta>0$, such that for every partition $\{t_0,\ldots,t_n\}$ of $[a,b]$ that is finer than $\delta$, we get

$\displaystyle \left|\sum_{i=0}^{n-1}F(T_i) - \frac12\int_a^b(x\dot y-y\dot x)dt\right| < \varepsilon.\qquad(\ast)$

Here, $F(T_i)$ is the area of the triangle bounded by $\gamma(t_i)$, $\gamma(t_{i+1})$ and the origin. By the Triangle-Lemma, its (signed) area is $\displaystyle\frac12\mathrm{det}\begin{pmatrix}1&0&0\\1&x_i&y_i\\1&x_{i+1}&y_{i+1}\end{pmatrix}$.

Because the orientation of the $T_i$ might be of importance, $F(T_i)$ keeps its sign in what follows (imagine, for instance, a path that traverses a line segment once from left to right and once from right to left; in total, no area is covered).

Now, let us prove that $(\ast)$ is true.

As $\dot x$ and $\dot y$ are continuous, choose $L=\max\left(\max_{[a,b]}\left|\dot x(t)\right|, \max_{[a,b]}\left|\dot y(t)\right|\right)$ and take $\delta = \frac{\varepsilon}{2L^2(b-a)}$. Take a partition $\{t_0,\ldots,t_n\}$ with $t_0=a$, $t_n=b$ and $\left|t_{k+1}-t_k\right| < \delta$ for $k=0,\ldots,n-1$. Then, for any such $k$,

\begin{aligned} F(T_k) &= \frac12\mathrm{det}\begin{pmatrix}1&0&0\\1&x_k&y_k\\1&x_{k+1}&y_{k+1}\end{pmatrix}\\ &=\frac12(x_ky_{k+1}-x_{k+1}y_k)\\ &= \frac12\Bigl(x_k\bigl(y_k+(y_{k+1}-y_k)\bigr) - y_k\bigl(x_k+(x_{k+1}-x_k)\bigr)\Bigr)\\ &=\frac12\bigl(x_k(y_{k+1}-y_k)-y_k(x_{k+1}-x_k)\bigr)\\ &=\frac12\left(x_k\int_{t_k}^{t_{k+1}}\dot y(t)dt - y_k\int_{t_k}^{t_{k+1}}\dot x(t)dt\right)\\ &=\frac12\int_{t_k}^{t_{k+1}}\bigl(x_k\dot y(t)-y_k\dot x(t)\bigr)dt. \end{aligned}

This yields, using the mean value theorem,

\begin{aligned} \left|2F(T_k)-\int_{t_k}^{t_{k+1}}(x\dot y-y\dot x)dt\right| &= \left|\int_{t_k}^{t_{k+1}}(x_k\dot y-y_k\dot x-x\dot y+y\dot x)dt\right|\\ &\leq \left|\int_{t_k}^{t_{k+1}}(x_k-x)\dot ydt\right| + \left|\int_{t_k}^{t_{k+1}}(y-y_k)\dot xdt\right|\\ &\leq\int_{t_k}^{t_{k+1}}\max_{[a,b]}\left|\dot x(t)\right|\left|{t_k-t}\right|\left|\dot y(t)\right|dt + \\ &\hphantom{=}+\int_{t_k}^{t_{k+1}}\max_{[a,b]}\left|\dot y(t)\right|\left|{t_k-t}\right|\left|\dot x(t)\right|dt\\ &\leq L^2\left|t_{k+1}-t_k\right|(t_{k+1}-t_k) + L^2\left|t_{k+1}-t_k\right|(t_{k+1}-t_k)\\ &< 2L^2\delta(t_{k+1}-t_k). \end{aligned}

We conclude

$\displaystyle\left|2\sum_{k=0}^{n-1}F(T_k) - \int_a^b(x\dot y - y\dot x)dt\right| < 2L^2\delta(b-a) = \varepsilon.$

Q.e.d.
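The polygon approximation from the proof can be watched converging for a concrete path, say a full circle of radius $R$ (an arbitrary choice), whose swept area should be $\pi R^2$:

```python
import math

# The polygon approximation from the proof, for a full circle of radius R:
# the signed triangle areas ½(x_k y_{k+1} − x_{k+1} y_k) should sum to πR².

def swept_area(R=2.0, n=100_000):
    total = 0.0
    for k in range(n):
        t0 = 2 * math.pi * k / n
        t1 = 2 * math.pi * (k + 1) / n
        x0, y0 = R * math.cos(t0), R * math.sin(t0)
        x1, y1 = R * math.cos(t1), R * math.sin(t1)
        total += 0.5 * (x0 * y1 - x1 * y0)   # F(T_k) from the Triangle-Lemma
    return total

print(swept_area(), math.pi * 2.0**2)
```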

One might as well prove this by applying Green’s Theorem, but in this case it just gets less elementary.

The $\times$-Lemma: Let $a,b\in\mathbb{R}^3$. We set

$\displaystyle a\times b := \begin{pmatrix}a_2b_3-a_3b_2\\a_3b_1-a_1b_3\\a_1b_2-a_2b_1\end{pmatrix}$.

Obviously, this is linear both in $a$ and in $b$. We have

$(i)\quad\displaystyle \left\langle (a\times b),c\right\rangle = \mathrm{det}(a,b,c).$

$(ii)\quad a\times b$ is orthogonal to $a$ and to $b$.

$(iii)\quad\displaystyle (a\times b)\times c = -\left\langle b,c\right\rangle a + \left\langle a,c\right\rangle b$ (Grassmann’s equation).

$(iv)\quad\displaystyle (a\times b)^{\cdot} = \dot a\times b+a\times\dot b$.

Proof: For $(i)$:

\begin{aligned} \left\langle (a\times b),c\right\rangle &= \left\langle\begin{pmatrix}a_2b_3-a_3b_2\\a_3b_1-a_1b_3\\a_1b_2-a_2b_1\end{pmatrix},\begin{pmatrix}c_1\\c_2\\c_3\end{pmatrix}\right\rangle\\ &=c_1(a_2b_3-a_3b_2)+c_2(a_3b_1-a_1b_3)+c_3(a_1b_2-a_2b_1)\\ &=c_1\mathrm{det}\begin{pmatrix}a_2&b_2\\a_3&b_3\end{pmatrix}-c_2\mathrm{det}\begin{pmatrix}a_1&b_1\\a_3&b_3\end{pmatrix}+c_3\mathrm{det}\begin{pmatrix}a_1&b_1\\a_2&b_2\end{pmatrix}\\ &=\mathrm{det}\begin{pmatrix}a_1&b_1&c_1\\a_2&b_2&c_2\\a_3&b_3&c_3\end{pmatrix}\\ &=\mathrm{det}(a,b,c). \end{aligned}

For $(ii)$: $\left\langle a\times b, a\right\rangle = \mathrm{det}(a,b,a)=0$ and $\left\langle a\times b,b\right\rangle = \mathrm{det}(a,b,b)=0$.

For $(iii)$:

\begin{aligned} (a\times b)\times c &=\begin{pmatrix}a_2b_3-a_3b_2\\a_3b_1-a_1b_3\\a_1b_2-a_2b_1\end{pmatrix}\times\begin{pmatrix}c_1\\c_2\\c_3\end{pmatrix}\\ &=\begin{pmatrix}(a_3b_1-a_1b_3)c_3-(a_1b_2-a_2b_1)c_2\\(a_1b_2-a_2b_1)c_1-(a_2b_3-a_3b_2)c_3\\(a_2b_3-a_3b_2)c_2-(a_3b_1-a_1b_3)c_1\end{pmatrix}\\ &=\begin{pmatrix}a_3b_1c_3-a_1b_3c_3-a_1b_2c_2+a_2b_1c_2\\a_1b_2c_1-a_2b_1c_1-a_2b_3c_3+a_3b_2c_3\\a_2b_3c_2-a_3b_2c_2-a_3b_1c_1+a_1b_3c_1\end{pmatrix}\\ &=\begin{pmatrix}a_1b_1c_1+a_2b_1c_2+a_3b_1c_3-a_1b_1c_1-a_1b_2c_2-a_1b_3c_3\\a_1b_2c_1+a_2b_2c_2+a_3b_2c_3-a_2b_1c_1-a_2b_2c_2-a_2b_3c_3\\a_1b_3c_1+a_2b_3c_2+a_3b_3c_3-a_3b_1c_1-a_3b_2c_2-a_3b_3c_3\end{pmatrix}\\ &=(a_1c_1+a_2c_2+a_3c_3)\begin{pmatrix}b_1\\b_2\\b_3\end{pmatrix} - (b_1c_1+b_2c_2+b_3c_3)\begin{pmatrix}a_1\\a_2\\a_3\end{pmatrix}\\ &=\left\langle a,c\right\rangle b - \left\langle b,c\right\rangle a. \end{aligned}

For $(iv)$:

\begin{aligned} \bigl(a(t)\times b(t)\bigr)^\cdot &= \begin{pmatrix}a_2(t)b_3(t)-a_3(t)b_2(t)\\a_3(t)b_1(t)-a_1(t)b_3(t)\\a_1(t)b_2(t)-a_2(t)b_1(t)\end{pmatrix}^\cdot\\ &=\begin{pmatrix}\dot a_2b_3+a_2\dot b_3-\dot a_3b_2-a_3\dot b_2\\\dot a_3b_1+a_3\dot b_1-\dot a_1b_3-a_1\dot b_3\\\dot a_1b_2+a_1\dot b_2-\dot a_2b_1-a_2\dot b_1\end{pmatrix}\\ &=\begin{pmatrix}\dot a_2b_3-\dot a_3b_2\\\dot a_3b_1-\dot a_1b_3\\\dot a_1b_2-\dot a_2b_1\end{pmatrix}+\begin{pmatrix}a_2\dot b_3-a_3\dot b_2\\a_3\dot b_1-a_1\dot b_3\\ a_1\dot b_2-a_2\dot b_1\end{pmatrix} \\ &=\dot a\times b + a\times \dot b \end{aligned}

Q.e.d.
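The identities $(i)$–$(iii)$ lend themselves to a quick numerical spot check; the sample vectors below are arbitrary:

```python
# The cross product as defined above, with a spot check of properties
# (i)-(iii) on arbitrary sample vectors a, b, c.

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def det3(a, b, c):
    # rule of Sarrus for the matrix with columns a, b, c
    return (a[0] * b[1] * c[2] + b[0] * c[1] * a[2] + c[0] * a[1] * b[2]
            - c[0] * b[1] * a[2] - a[0] * c[1] * b[2] - b[0] * a[1] * c[2])

a, b, c = (1.0, 2.0, 3.0), (-2.0, 0.5, 4.0), (3.0, -1.0, 2.0)

print(dot(cross(a, b), c), det3(a, b, c))        # (i): both values agree
print(dot(cross(a, b), a), dot(cross(a, b), b))  # (ii): both are 0
grassmann = tuple(dot(a, c) * bi - dot(b, c) * ai for ai, bi in zip(a, b))
print(cross(cross(a, b), c), grassmann)          # (iii): the tuples agree
```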

Now, let us look at conic sections and define them mathematically. We will not be interested in what these things have to do with cones – as stated at the beginning: pure mathematics here.

We are going to work in $\mathbb{R}^2$ here. Let $F$ be a point (the so-called focal point) and $l$ be a line (the so-called directrix), and the distance of $F$ and $l$ shall be some $p>0$. We are looking for all those points in $\mathbb{R}^2$ for which the distance to $F$ and the distance to $l$ are proportional – formally: For any point $(\xi,\eta)^t\in\mathbb{R}^2$, we set

$r=\mathrm{dist}\bigl(F,(\xi,\eta)^t\bigr)\qquad\text{and}\qquad d = \mathrm{dist}\bigl(l, (\xi,\eta)^t\bigr),$

and for $\varepsilon > 0$ we demand

$\displaystyle \frac{r}{d}=\varepsilon.$

For simplicity, we will put $F$ into the origin of our coordinate system, and $l$ parallel to one of the axes, as in the figure. In particular, $r^2 = \xi^2+\eta^2$ and $d = p+\xi$. Our equation for the interesting points thus becomes:

\begin{aligned} && r^2&=\varepsilon^2d^2\\ &\iff& \xi^2+\eta^2 &= \varepsilon^2p^2+2\varepsilon^2p\xi+\varepsilon^2\xi^2\\ &\iff&\xi^2(1-\varepsilon^2) &= \varepsilon^2p^2-\eta^2+2\varepsilon^2p\xi. \end{aligned}

Let us distinguish the following cases:

Case $\varepsilon = 1$. Then we set $x := \xi+\frac p2$, $y := \eta$ and find

\begin{aligned} &&\xi^2 (1-\varepsilon^2) &= \varepsilon^2p^2-\eta^2 + 2\varepsilon^2 p \xi \\ &\iff& 0 &= p^2-y^2+2p\left(x-\frac p2\right)\\ &\iff& 0 &= p^2-y^2+2px-p^2\\ &\iff& y^2 &= 2px. \end{aligned}

We see that the interesting points lie on a parabola which is open to the right (by choosing other coordinate systems, of course, any other parabola will appear; in some way, this is its normal form).

Case $\varepsilon < 1$. Here we set $x:=\xi-\frac{p\varepsilon^2}{1-\varepsilon^2}$, $y:=\eta$. Then we get

\begin{aligned} &&\xi^2 (1-\varepsilon^2) &= \varepsilon^2p^2-\eta^2 + 2\varepsilon^2 p \xi \\ &\iff &\left(x+\frac{p\varepsilon^2}{1-\varepsilon^2}\right)^2(1-\varepsilon^2) &= \varepsilon^2p^2-y^2+2\varepsilon^2p\left(x+\frac{p\varepsilon^2}{1-\varepsilon^2}\right)\\ &\iff& x^2(1-\varepsilon^2)+2xp\varepsilon^2 + \frac{p^2\varepsilon^4}{1-\varepsilon^2} &= \varepsilon^2p^2-y^2+2\varepsilon^2px+\frac{2\varepsilon^4p^2}{1-\varepsilon^2}\\ &\iff& x^2(1-\varepsilon^2) + y^2 &= \varepsilon^2p^2+\frac{\varepsilon^4p^2}{1-\varepsilon^2}\\ &\iff& x^2(1-\varepsilon^2) + y^2 &= \frac{(1-\varepsilon^2)\varepsilon^2p^2+\varepsilon^4p^2}{1-\varepsilon^2}\\ &\iff& x^2(1-\varepsilon^2) + y^2 &= \frac{\varepsilon^2p^2}{1-\varepsilon^2}\\ &\iff& \frac{x^2(1-\varepsilon^2)^2}{\varepsilon^2p^2} + \frac{y^2(1-\varepsilon^2)}{\varepsilon^2p^2} &= 1\\ &\iff& \frac{x^2}{a^2} + \frac{y^2}{b^2} &= 1, \end{aligned}

with $a = \frac{\varepsilon p}{1-\varepsilon^2}$ and $b = \frac{\varepsilon p}{\sqrt{1-\varepsilon^2}}$.

We have found that the interesting points lie on an ellipse.

Case $\varepsilon > 1$. This is exactly the same as the case $\varepsilon<1$, except for the last step. We cannot set $b$ as before, since $1-\varepsilon^2<0$ has no real square root. Thus, we use $b = \frac{\varepsilon p}{\sqrt{\varepsilon^2-1}}$, and the resulting negative sign is placed in the final equation:

$\displaystyle \frac{x^2}{a^2}-\frac{y^2}{b^2} = 1.$

This is a hyperbola.

To conclude this part, we give the general representation of conic sections in polar coordinates. From the figure given above, we see $d = p+r\cos\varphi$ and so

$\displaystyle r = \varepsilon d = \varepsilon p + \varepsilon r\cos\varphi,$

which means

$\displaystyle r = \frac{\varepsilon p}{1-\varepsilon\cos\varphi}.$

That yields the representation in polar coordinates (depending only on the parameters and on the variable $\varphi$):

$\displaystyle re^{i\varphi} = \frac{\varepsilon p}{1-\varepsilon\cos\varphi}e^{i\varphi}.$
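The polar representation can be checked against the ellipse normal form derived above; the parameters $\varepsilon=\frac12$, $p=1$ and the sample angles below are arbitrary choices:

```python
import math

# Spot check: for ε = 0.5, p = 1 the points with r(φ) = εp/(1 − ε cos φ)
# should lie on the ellipse x²/a² + y²/b² = 1 after the shift
# x = ξ − pε²/(1−ε²) derived in the case ε < 1.

eps, p = 0.5, 1.0
a = eps * p / (1 - eps**2)
b = eps * p / math.sqrt(1 - eps**2)
shift = p * eps**2 / (1 - eps**2)

devs = []
for k in range(12):                      # twelve arbitrary sample angles
    phi = 2 * math.pi * (k + 0.3) / 12
    r = eps * p / (1 - eps * math.cos(phi))
    x = r * math.cos(phi) - shift        # ξ = r cos φ, shifted to center the ellipse
    y = r * math.sin(phi)                # η = r sin φ
    devs.append(abs(x**2 / a**2 + y**2 / b**2 - 1))
print(max(devs))   # ≈ 0: all sample points lie on the ellipse
```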

Now, let us turn to our base for the proofs of Kepler’s Laws: Newton’s Law of Gravitation. Let $m$ be the mass of a planet, $M$ the mass of the Sun, $\gamma$ a real constant (the gravitational constant), and let $x(t)$ be a path (the planet’s orbit). By Newton’s Law we have the differential equation

$\displaystyle m\ddot x = -\gamma Mm\frac{x}{\left\|x\right\|^3}.$

On the left-hand side, there’s the definition of force as mass multiplied by acceleration. On the right-hand side is Newton’s Law stating the gravitational force between Sun and planet.

We define the vector-valued functions of $t$ (note that $x$ depends on $t$):

$\displaystyle J = x\times m\dot x\qquad\text{and}\qquad A=\frac1{\gamma Mm}J\times\dot x+\frac{x}{\left\|x\right\|}.$

($J$ is the angular momentum, $A$ is an axis; but our math doesn’t care for either of those names or intentions).

The $A$$J$-Lemma: As functions of $t$, $A$ and $J$ are constant.

Proof: Let us look at $J$ first. Using the fact that by definition $a\times a = 0$, and Newton’s Law of Gravitation, we get

\begin{aligned} \dot J \underset{\times\text{-Lemma}}{\overset{(iv)}{=}} (x\times m\dot x)^\cdot &= \dot x \times m\dot x + x\times m\ddot x\\ &= \dot x \times m\dot x + x\times\left(-\gamma Mm\frac{x}{\left\|x\right\|^3}\right)\\ &= m(\dot x\times \dot x) - \frac{\gamma Mm}{\left\|x\right\|^3}(x\times x)\\ &= 0 - 0 = 0. \end{aligned}

Now, for $A$, we will need a side-result first.

\begin{aligned} \frac{d}{dt}\frac{1}{\left\|x\right\|} &= \frac{d}{dt}\bigl(x_1^2(t)+x_2^2(t)+x_3^2(t)\bigr)^{-1/2}\\ &= -\frac12\bigl(x_1^2(t)+x_2^2(t)+x_3^2(t)\bigr)^{-3/2}\bigl(2x_1(t)\dot x_1(t)+2x_2(t)\dot x_2(t)+2x_3(t)\dot x_3(t)\bigr)\\ &= -\frac{\left\langle x,\dot x\right\rangle}{\left\|x\right\|^3}. \end{aligned}

This yields

\begin{aligned} \dot A &\underset{\hphantom{\times\text{-Lemma}}}{=} \left(\frac{1}{\gamma Mm}J\times \dot x + \frac{x}{\left\|x\right\|}\right)^\cdot \\ &\underset{\times\text{-Lemma}}{\overset{(iv)}{=}} \frac{1}{\gamma Mm}\bigl(\dot J\times\dot x + J\times\ddot x\bigr) + \left(\frac{1}{\left\|x\right\|}\dot x-\frac{\left\langle x,\dot x\right\rangle}{\left\|x\right\|^3}x\right)\\ &\underset{\hphantom{\times\text{-Lemma}}}{=} \frac1{\gamma Mm}\left(0+J\times\left(-\gamma M\frac{x}{\left\|x\right\|^3}\right)\right) + \left(\frac{1}{\left\|x\right\|}\dot x-\frac{\left\langle x,\dot x\right\rangle}{\left\|x\right\|^3}x\right)\\ &\underset{\hphantom{\times\text{-Lemma}}}{=}-\frac1m\left((x\times m\dot x)\times \frac{x}{\left\|x\right\|^3}\right) + \left(\frac{1}{\left\|x\right\|}\dot x-\frac{\left\langle x,\dot x\right\rangle}{\left\|x\right\|^3}x\right)\\ &\underset{\hphantom{\times\text{-Lemma}}}{=}-\left((x\times \dot x)\times \frac{x}{\left\|x\right\|^3}\right) + \left(\frac{1}{\left\|x\right\|}\dot x-\frac{\left\langle x,\dot x\right\rangle}{\left\|x\right\|^3}x\right)\\ &\underset{\times\text{-Lemma}}{\overset{(iii)}{=}} -\left\langle x,\frac{x}{\left\|x\right\|^3}\right\rangle \dot x + \left\langle \dot x,\frac{x}{\left\|x\right\|^3}\right\rangle x + \frac{1}{\left\|x\right\|}\dot x-\frac{\left\langle x,\dot x\right\rangle}{\left\|x\right\|^3}x \\ &\underset{\hphantom{\times\text{-Lemma}}}{=} -\frac1{\left\|x\right\|^3}\left\langle x,x\right\rangle \dot x + \frac1{\left\|x\right\|^3}\left\langle \dot x, x\right\rangle x + \frac1{\left\|x\right\|}\dot x-\frac1{\left\|x\right\|^3}\left\langle x,\dot x\right\rangle x\\ &\underset{\hphantom{\times\text{-Lemma}}}{=} -\frac1{\left\|x\right\|^3}\left\|x\right\|^2\dot x + \frac1{\left\|x\right\|}\dot x\\ &\underset{\hphantom{\times\text{-Lemma}}}{=} 0. \end{aligned}

Q.e.d.

Now, we have all ingredients to prove Kepler’s Laws. We conclude axiomatically by assuming that planetary motion is governed by Newton’s Law of Gravitation (the differential equation given above).

Let’s start with

Theorem (Kepler’s First Law of Planetary Motion): Planets orbit in ellipses, with the Sun as one of the foci.

Proof: Let $x(t)$ denote the orbit of a planet around the Sun. By the $A$-$J$-Lemma, $J$ is constant. By definition of $J = x\times m\dot x$ and by $(ii)$ of the $\times$-Lemma, both $x$ and $\dot x$ are orthogonal to $J$; the orbit is therefore located in a two-dimensional plane. Let us introduce polar coordinates in this plane, with the Sun at the origin and with polar axis along $A$ (this works, as $A$ lies in the same plane as well: $A\bot J$ by the definition of $A$).

Now let $\varphi(t)$ be the angle between $x(t)$ and $A$, and set $\varepsilon := \left\|A\right\|$. This means

$\displaystyle \cos\varphi(t) = \frac{\left\langle x(t),A\right\rangle}{\left\|x(t)\right\|\left\|A\right\|},$

and so

$\displaystyle \left\langle A,x(t)\right\rangle = \varepsilon \left\|x(t)\right\|\cos\varphi(t).$

By definition of $A$, we have

\begin{aligned} \left\langle A,x(t)\right\rangle &\underset{\hphantom{\times\text{-Lemma}}}{=} \left\langle\frac{1}{\gamma Mm}J\times\dot x + \frac{x}{\left\|x\right\|}, x(t)\right\rangle\\ &\underset{\hphantom{\times\text{-Lemma}}}{=} \frac{1}{\gamma Mm}\left\langle J\times \dot x, x\right\rangle + \frac{1}{\left\|x\right\|}\left\langle x,x\right\rangle\\ &\underset{\times\text{-Lemma}}{\overset{(i)}{=}} \frac{1}{\gamma Mm}\mathrm{det}(J,\dot x,x)+\frac{\left\| x\right\|^2}{\left\|x\right\|}\\ &\underset{\hphantom{\times\text{-Lemma}}}{=} (-1)^3\frac{1}{\gamma Mm}\mathrm{det}(x,\dot x,J) + \left\|x(t)\right\|\\ &\underset{\times\text{-Lemma}}{\overset{(i)}{=}} -\frac1{\gamma Mm}\left\langle x\times \dot x,J\right\rangle + \left\|x(t)\right\|\\ &\underset{\hphantom{\times\text{-Lemma}}}{=} -\frac1{\gamma Mm^2}\left\langle x\times m\dot x, J\right\rangle + \left\|x(t)\right\|\\ &\underset{\hphantom{\times\text{-Lemma}}}{=} -\frac1{\gamma Mm^2}\left\langle J,J\right\rangle + \left\|x(t)\right\|\\ &\underset{\hphantom{\times\text{-Lemma}}}{=} const. + \left\|x(t)\right\|. \end{aligned}

Now, if $A=0$, then we have found

$\displaystyle \left\|x(t)\right\| = \frac1{\gamma Mm^2}\left\|J\right\|^2,$

which means that the planet moves on a circular orbit.

If $A\neq0$, then we conclude

\begin{aligned} &&\varepsilon \left\|x(t)\right\|\cos\varphi(t) &= -\frac1{\gamma Mm^2}\left\|J\right\|^2+\left\|x(t)\right\|\\ &\implies&\frac1{\gamma Mm^2}\left\|J\right\|^2&=\bigl(1-\varepsilon\cos\varphi(t)\bigr)\left\|x(t)\right\|\\ &\implies&\left\|x(t)\right\|&=\frac{\frac{\left\|J\right\|^2}{\gamma Mm^2}}{1-\varepsilon\cos\varphi(t)} = \frac{\varepsilon\frac{\left\|J\right\|^2}{\gamma Mm^2\left\|A\right\|}}{1-\varepsilon\cos\varphi(t)} = \frac{\varepsilon p}{1-\varepsilon\cos\varphi(t)}, \end{aligned}

with $p$ defined in the obvious fashion to make the last equation work.

Therefore, the planet moves on a conic section, with focus in the Sun. As the planet’s orbits are bounded, we have proved that it must follow an ellipse. Q.e.d.

Theorem (Kepler’s Second Law of Planetary Motion): A planet sweeps out equal areas in equal times.

Proof: We use Cartesian coordinates in $\mathbb{R}^3$, such that $e_1$ is parallel to $A$ and $e_3$ is parallel to $J$. Then $x(t)$ lies in the plane $\mathrm{span}(e_1,e_2)$ and the Sun sits at the origin. In particular, $x_3(t) = 0$ for all $t$ by the proof of the First Law. Then,

$\displaystyle \frac1m J = x\times \dot x = \begin{pmatrix}x_1\\x_2\\0\end{pmatrix}\times \begin{pmatrix}\dot x_1\\\dot x_2\\ 0\end{pmatrix} = \begin{pmatrix}0\\0\\x_1\dot x_2-x_2\dot x_1\end{pmatrix}.$

By Leibniz’ sector formula, the line segment joining the Sun and the planet sweeps out, between times $t_1$ and $t_2$, the area

$\displaystyle \frac12\left|\int_{t_1}^{t_2}(x_1\dot x_2-x_2\dot x_1)dt\right| = \frac1{2m}\left\|J\right\|(t_2-t_1).$

This area only depends on the difference of times, as stated. Q.e.d.

Theorem (Kepler’s Third Law of Planetary Motion): The square of the period of an orbit is proportional to the cube of its semi-major axis.

Proof: By Leibniz’ sector formula (used similarly to the proof of the Second Law), the area contained in the planet’s entire orbit is

$\displaystyle \frac12\left|\int_0^T (x_1\dot x_2-x_2\dot x_1)dt\right| = \frac1{2m}\left\|J\right\| T,$

where $T$ is the time taken for a full orbit around the Sun. By the First Law, this orbit is an ellipse, the area of which may be computed as follows. An ellipse may be parametrized in Cartesian coordinates as

$\begin{pmatrix}x(t)\\y(t)\end{pmatrix}=\begin{pmatrix}a\cos t\\b\sin t\end{pmatrix},$

with $a$ and $b$ real constants (the larger one is called the semi-major axis). This is actually an ellipse because of

$\displaystyle \frac{x^2}{a^2}+\frac{y^2}{b^2} = \frac{a^2\cos^2 t}{a^2}+\frac{b^2\sin^2t}{b^2} = 1.$

From the notations about normal forms of conic sections, we find $b = \frac{\varepsilon p}{\sqrt{1-\varepsilon^2}} = a\sqrt{1-\varepsilon^2}$, which implies that $a>b$ (as $0<\varepsilon<1$ for a proper ellipse). Now, the area of the ellipse is, by Leibniz’ sector formula again,

\begin{aligned} \frac12\left|\int_0^{2\pi} (x_1\dot x_2-x_2\dot x_1)dt\right| &= \frac12\left|\int_0^{2\pi}\bigl(a \cos t \cdot b \cos t-b\sin t \cdot a (-\sin t)\bigr)dt\right| \\ &= \frac12 ab\int_0^{2\pi}dt \\ &= \pi ab. \end{aligned}

Both representations of the area covered by the orbit now yield

$\displaystyle T \frac1{2m}\left\|J\right\| = \pi a^2\sqrt{1-\varepsilon^2},$

and so, using the definition of $p$ obtained in the proof of the First Law,

\begin{aligned} \displaystyle T^2 &= \frac{4m^2}{\left\|J\right\|^2}\pi^2a^4(1-\varepsilon^2) \\ &= \frac{4\pi^2 m^2a^3}{\left\|J\right\|^2}a(1-\varepsilon^2) \\ &= \frac{4\pi^2m^2a^3}{\left\|J\right\|^2}\frac{\varepsilon p}{1-\varepsilon^2}(1-\varepsilon^2) \\ &= \frac{4\pi^2m^2a^3}{\left\|J\right\|^2}\varepsilon\frac{\left\|J\right\|^2}{\gamma Mm^2\left\|A\right\|} \\ &= \frac{4\pi^2}{\gamma M}a^3. \end{aligned}

The constant $\frac{4\pi^2}{\gamma M}$ depends only on the mass of the Sun, not on the planet, and is therefore the same for every planet travelling around the Sun. Q.e.d.
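Though no substitute for the proofs, the conservation of $J$ and $A$ can be watched numerically. The sketch below integrates Newton's equation $\ddot x = -\gamma M x/\left\|x\right\|^3$ with a classical Runge-Kutta scheme and checks that both vectors stay put; the values $\gamma M = m = 1$, the initial data and the step size are made-up toy choices.

```python
import numpy as np

# Toy units: gamma*M = m = 1 (assumed values, for illustration only).
gamma_M, m = 1.0, 1.0

def accel(x):
    # Newton's law of gravitation: x'' = -gamma*M * x / |x|^3
    return -gamma_M * x / np.linalg.norm(x) ** 3

def rk4_step(x, v, dt):
    # one classical Runge-Kutta step for the system (x', v') = (v, accel(x))
    k1x, k1v = v, accel(x)
    k2x, k2v = v + dt / 2 * k1v, accel(x + dt / 2 * k1x)
    k3x, k3v = v + dt / 2 * k2v, accel(x + dt / 2 * k2x)
    k4x, k4v = v + dt * k3v, accel(x + dt * k3x)
    return (x + dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x),
            v + dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v))

def J(x, v):
    return np.cross(x, m * v)                       # angular momentum

def A(x, v):
    return np.cross(J(x, v), v) / (gamma_M * m) + x / np.linalg.norm(x)

x, v = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.2, 0.0])  # a bound orbit
J0, A0 = J(x, v), A(x, v)
for _ in range(5000):
    x, v = rk4_step(x, v, 2e-3)
drift_J = np.linalg.norm(J(x, v) - J0)
drift_A = np.linalg.norm(A(x, v) - A0)
print(drift_J, drift_A)   # both tiny
```

With these initial data one gets $\varepsilon = \left\|A\right\| = 0.44$, so the toy orbit is a genuine (non-circular) ellipse.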

Let us conclude with a brief remark on how beautiful and elegant those Laws are – made my day.

# On numerical integration and convergence issues

The methods for numerical integration are of extreme practical interest for any mathematician. In any number of applications one is obliged to find the value of some definite integral. One of the simplest examples is the evaluation of probability distributions; a slightly less obvious one is the solution of differential equations.

Especially since the advent of cheap but high-performing computing power, the possibilities have grown immensely. Things that were unattainable some decades ago are routine by today’s standards. This is why I found it very necessary to make myself familiar with how one can find definite integrals numerically (basically the same motivation for why I wanted to know about numerical algorithms for finding eigenvalues – and there is a nice connection between the two fields, as we shall see).

At first, I had wanted to compile some of the basic ideas for numerical integration here. However, I dived deeper into one particular aspect of the theory and decided to cut the overview short in order to present something a little less known on the convergence of quadrature methods. But to begin with, a little review of what every mathematician should learn in a basic course on numerical analysis.

Numerical integration is sometimes called quadrature, as one tries to find the area of a given shape (the “area under the curve”) and hence a square with the same area – similar to what one does in “squaring the circle”, which is called “quadratura circuli” in Latin. It’s just that we don’t have to be restricted to ruler and compass anymore.

Basics

Of course, the natural way of numerical integration is to evaluate the function $f$ (to fix notation here) at certain points and make a weighted average of the values. This is related to the Riemann-definition of integrability, where you use sums of the form $\sum_{i=1}^nf(\xi_i)(x_{i}-x_{i-1})$ for $\xi_i\in(x_{i-1},x_{i})$ and let the grid $x_i$ get finer. This is a starting point for what we can achieve, but the first question is, how can we choose the $\xi_i$ smartly?

Let’s not delve into this too deeply, but the short answer from Riemann-integration is: do as you please (as long as $f$ is continuous). In the end, it will converge. The most common first choice is the appropriately-called midpoint-rule (here’s a picture). If you don’t know anything further about $f$, why not. But one will soon find the trapezoidal rule (picture) and there is no general rule which one of them will be better in any instance. Let’s just write those down for further reference:

Midpoint-rule: $\int_a^b f(t)dt \approx (b-a)f(\frac{a+b}2)$,

Trapezoidal rule: $\int_a^b f(t)dt \approx \frac{b-a}{2}\bigl(f(a)+f(b)\bigr)$.

A certain mixture of those two rules can be found in

Simpson’s rule: $\int_a^b f(t)dt \approx \frac{b-a}{6}\bigl(f(a)+4f(\frac{a+b}2)+f(b)\bigr)$.

All these rules are fit to integrate polynomials up to a certain degree exactly (this feature is called the degree of exactness of the quadrature rule): the midpoint and trapezoidal rules are exact for linear polynomials, while Simpson’s rule can even integrate cubic polynomials exactly.
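The degrees of exactness are easy to check empirically: feed the monomials $t^k$ into each rule and compare with the exact value $\int_a^b t^k\,dt$. A minimal sketch (the interval $[0,1]$ and the tolerance are arbitrary choices):

```python
# The three elementary rules, as stated above.
def midpoint(f, a, b):
    return (b - a) * f((a + b) / 2)

def trapezoid(f, a, b):
    return (b - a) / 2 * (f(a) + f(b))

def simpson(f, a, b):
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

def degree_of_exactness(rule, a=0.0, b=1.0, tol=1e-12):
    # increase k until the rule fails to integrate t^k exactly
    k = 0
    while abs(rule(lambda t: t**k, a, b) - (b**(k + 1) - a**(k + 1)) / (k + 1)) < tol:
        k += 1
    return k - 1   # the highest degree still integrated exactly

print(degree_of_exactness(midpoint),    # 1
      degree_of_exactness(trapezoid),   # 1
      degree_of_exactness(simpson))     # 3
```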

Each of these rules can be constructed by interpolation of $f$ with low-degree polynomials: one chooses the unique polynomial $p$ with $p(t)=f(t)$ in those $t$ that come up in the formulae given above and chooses constants (“weights”) such that $\int_a^b p(t)dt = \sum_{i=1}^n w_ip(t_i)$. Note that this will, in general, no longer be a Riemann sum. You will find that for $p$ of degree $0$, $1$ or $2$ (interpolating at one, two or three points), you end up with the three formulae given above.

Now, if you wish to be a little better in your interpolation, you can subdivide the interval $[a,b]$ a little further and use any of those rules on the smaller intervals (interpolating repeatedly) to get the compound rules:

– Midpoint-rule: $\int_a^b f(t)dt \approx \frac{b-a}{n}\sum_{i=1}^nf(\frac{x_{i-1}+x_{i}}2)$ with $x_k = a+k\frac{b-a}{n}$,

– Trapezoidal rule: $\int_a^b f(t)dt \approx \frac{b-a}{n}\sum_{i=1}^n\frac{f(x_{i-1})+f(x_i)}{2} = \frac{b-a}{n}\left(\frac12f(a)+\sum_{i=1}^{n-1}f(x_i)+\frac12f(b)\right)$ with $x_k = a+k\frac{b-a}{n}$,

– Simpson’s rule: $\int_a^b f(t)dt \approx \frac{b-a}{6n}\bigl(f(a) + 2\sum_{i=1}^{n-1} f(x_i) + 4\sum_{i=1}^nf(\frac{x_{i-1}+x_{i}}{2}) + f(b)\bigr)$ with $x_k = a+k\frac{b-a}{n}$.

Note that we have a constant grid-width here. This doesn’t have to be the case in general, as we shall see. Besides, note that in Simpson’s rule we have three evaluations of $f$ in each of the sub-intervals, and only the interior grid points $x_k$ are used twice (once as the left-hand endpoint, once as the right-hand endpoint). One might also define it slightly differently, with overlapping sub-intervals, so that each evaluation of $f$ is used twice (once as the midpoint, the other time as a boundary point), except for $f(a)$ and $f(b)$.
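The three compound rules can be sketched in a few lines (equidistant grid with $n$ sub-intervals; the test function $\sin$ on $[0,\pi]$ is an arbitrary choice):

```python
import math

def compound_midpoint(f, a, b, n):
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def compound_trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, n)) + f(b) / 2)

def compound_simpson(f, a, b, n):
    h = (b - a) / n
    return h / 6 * (f(a) + f(b)
                    + 2 * sum(f(a + i * h) for i in range(1, n))
                    + 4 * sum(f(a + (i + 0.5) * h) for i in range(n)))

# exact value of the integral of sin over [0, pi] is 2; Simpson's error
# decays like h^4, midpoint and trapezoid only like h^2
print(abs(compound_simpson(math.sin, 0.0, math.pi, 100) - 2.0))
```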

Now, any basic text on numerical analysis and Wikipedia will tell you everything there is to know about the error that you get from these quadrature formulae. Let’s not go there, except for some small remarks. First, each of these rules has an error that will diminish with more sub-divisions of $[a,b]$. The error will depend on how smooth $f$ is, in terms of the norm $\left\|f^{(k)}\right\|_\infty$ for some $k\geq1$. Therefore, to know about the error, you will need to conjure some information on the derivatives of $f$, which will rarely be possible in applications. However, if $f$ is “only” continuous, you may find yourself in a bad position for error estimation: the quadrature rule might be just looking at the unusual points to evaluate $f$, which are not typical of the overall behaviour:

(here’s an example going with the picture: take $f$ to be some positive constant $c$, except for a tiny area around the evaluation, where it shall vanish; then $\int_a^bf(t)dt \approx c(b-a) \neq 0 = \sum_{i=1}^n w_if(x_i)$ – though not impossible, it’s harder to do this the smoother $f$ will be, and the derivatives of $f$ will blow up, giving a worse estimate). This effect has been called “resonance” by Pólya.

Apart from ad hoc methods for the error estimation there is the method of Peano kernels which can be applied to bound quadrature formulae (and other linear functionals). Let there just be mentioned that this method requires rather little effort to yield error bounds. I wouldn’t quite call it elegant, possibly since I lack the background in functional analysis, but it gets the job nicely done.

Next natural question: which $n$ should we use? Short answer: the more the better, though many evaluations of $f$ are “expensive” (especially in applications from the sciences and economics where you might even be unable to get more measurements). Apart from the compound rules, which are used quite often in practice, one might think that interpolation using polynomials of higher degree should work out better and give finer error estimates. The quadrature stemming from this idea is called the Newton-Cotes-formula (of which our previous rules are special cases). In principle, this is sound reasoning, but here’s the showstopper: interpolating polynomials need not be similar to the function that they interpolate, and hence the quadrature runs wild. The most famous example is Runge’s phenomenon: for $f(x)=\frac{1}{1+x^2}$ on $[-5,5]$, the sequence of polynomials interpolating on an equidistant grid does not converge pointwise, which can be explained by the poles at $x = \pm i$ that stay invisible while working on the real numbers only. In particular, an interpolating polynomial of sufficiently high degree won’t tell us anything useful about the behaviour of $f$.
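Runge's phenomenon is easy to reproduce. The sketch below uses the equivalent rescaled example $f(x)=\frac1{1+25x^2}$ on $[-1,1]$ and Newton's divided differences for the interpolation; grid sizes are arbitrary choices.

```python
import numpy as np

# Runge's function, rescaled to [-1, 1]
f = lambda x: 1.0 / (1.0 + 25.0 * x**2)

def interp_error(n):
    # interpolate f at n+1 equidistant nodes and return the sup-norm error
    nodes = np.linspace(-1.0, 1.0, n + 1)
    c = f(nodes).copy()
    for j in range(1, n + 1):                 # divided differences, in place
        c[j:] = (c[j:] - c[j - 1:-1]) / (nodes[j:] - nodes[:-j])
    xs = np.linspace(-1.0, 1.0, 1001)
    p = np.full_like(xs, c[-1])               # Horner scheme in Newton form
    for j in range(n - 1, -1, -1):
        p = p * (xs - nodes[j]) + c[j]
    return np.max(np.abs(p - f(xs)))

# the error grows with the degree instead of shrinking
print(interp_error(5), interp_error(10), interp_error(20))
```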

For numerical applications, the next problem is that the weights in Newton-Cotes-formulae become negative when $n\geq 9$ (that means, if you use a polynomial of higher degree to interpolate $f$ on $[a,b]$). In effect, you will encounter cancellation in your computation from subtracting rather large numbers – there won’t be any guarantee of proper results anymore, even though in theory your method might work fine on your given function. Quadrature formulae with positive weights are called for.

Before the discussion of convergence of quadrature formulae, we will still look at what methods are best possible in terms of the degree of exactness. It’s the Gaussian integration (many practical things are named after Gauß, aren’t they?) which allows for exact integration of polynomials up to degree $2n-1$ from $n$ evaluations.

But first, why will we be content with a degree of exactness $2n-1$, why not go further? Well, there is always at least one polynomial of degree $2n$ that can’t be exactly integrated by a quadrature formula on $n$ evaluations. Let’s fix some quadrature formula, it will look like $\sum_{i=1}^nw_if(x_i)$ with weights $w_i$ and evaluations in $x_i$, and let’s look at the polynomial $\omega^2(x) = \prod_{i=1}^n(x-x_i)^2$ which has its zeroes at the points of evaluation of the quadrature formula. Then, we find

$\displaystyle \sum_{i=1}^nw_i\omega^2(x_i) = 0 < \int_a^b \omega^2(t)dt.$

Hence, there is always a polynomial of degree $2n$ that can’t be integrated exactly, and therefore $2n-1$ is the best we could hope to achieve. Now, how do we get there? Up to now, the quadrature formulae had used some given $x_i$ (mostly because those $x_i$ were the natural choice or because we can’t always choose the $x_i$ at liberty, if they stem from experimental measurements, say) and the $w_i$ were computed accordingly to make polynomials up to degree $n-1$ be integrated exactly. But we can of course choose the $x_i$ wisely as well: they act as additional degrees of freedom, and then we’ll have $2n$ clever choices to make – this seems to indicate that we can achieve the degree of exactness $2n-1$. But can we? And how?

Let’s assume that there is some quadrature formula which has this degree of exactness, and let’s look at what it will do to the polynomials $\omega p$, where $p$ is any polynomial of degree less than $n$. Restricting ourselves to the interval $[a,b] = [-1,1]$ for simplicity, we should find

$\displaystyle 0 = \sum_{i=1}^nw_i\omega(x_i)p(x_i) = \int_{-1}^1\omega(t)p(t)dt.$

That shows that $p$ would be orthogonal to $\omega$, no matter what $p$ actually was (as the $L^2$-space is a Hilbert space, of course, with integration as its scalar product). From the abstract theory of Hilbert spaces we know that there are orthonormal bases of polynomials. For our setting here, we are led to use the Legendre polynomials as $\omega$; there are other orthonormal polynomials which will then use another weight function inside the integral – more on that later.

We have seen that the zeroes of the Legendre polynomials should be used to evaluate $f$. But how to find those? Here’s the pretty thing I alluded to above. There’s a recursion formula for the Legendre polynomials $P_n$ which can be shown by induction:

$\displaystyle \beta_{n+1}P_{n+1}(x) = xP_n(x) - \beta_nP_{n-1}(x)$, with $\displaystyle P_0\equiv\frac1{\sqrt2}$ and $\displaystyle P_1(x) = \sqrt{\frac32}\,x$,

and using $\beta_n = \frac{n}{\sqrt{(2n-1)(2n+1)}}$. This recursion may be re-written as a vector-matrix-shape:

$\displaystyle\begin{bmatrix}0&\beta_1&&&&\\\beta_1&0&\beta_2&&&\\&\beta_2&0&\beta_3&&\\&&\ddots&\ddots&\ddots&\\&&&\beta_{n-2}&0&\beta_{n-1}\\&&&&\beta_{n-1}&0\end{bmatrix}\begin{bmatrix}P_0(x)\\P_1(x)\\P_2(x)\\\vdots\\P_{n-2}(x)\\P_{n-1}(x)\end{bmatrix}= x\begin{bmatrix}P_0(x)\\P_1(x)\\P_2(x)\\\vdots\\P_{n-2}(x)\\P_{n-1}(x)\end{bmatrix}-\begin{bmatrix}0\\0\\0\\\vdots\\0\\\beta_nP_n(x)\end{bmatrix}$

The last vector drops out for the zeroes of $P_n$, and then we have an eigenvalue-problem. In particular: find the eigenvalues of the matrix on the left-hand-side to find the zeroes of $P_n$. That’s some rather unexpected connection – I found it marvellous when I first saw this.

Computing eigenvalues is a tricky thing to do, as everybody knows, and getting eigenvectors is even harder. It turns out that you’ll need the eigenvectors to compute the proper weights $w_i$ for the quadrature formula, and many intelligent people have come up with ingenious algorithms to do so, such that we can have the proper values for the maximum degree of exactness that we have yearned for.
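This eigenvalue route (it goes back to Golub and Welsch) can be sketched in a few lines, assuming numpy's symmetric eigensolver does the hard part. The factor $2$ below is $\int_{-1}^1 1\,dt$; the weight formula via the first components of the normalized eigenvectors is exactly the ingredient those ingenious algorithms provide.

```python
import numpy as np

def gauss_legendre(n):
    # Jacobi matrix built from the recursion coefficients
    # beta_k = k / sqrt((2k-1)(2k+1))
    k = np.arange(1, n)
    beta = k / np.sqrt((2 * k - 1.0) * (2 * k + 1.0))
    T = np.diag(beta, 1) + np.diag(beta, -1)       # symmetric tridiagonal
    nodes, vecs = np.linalg.eigh(T)                # eigenvalues = zeros of P_n
    weights = 2.0 * vecs[0, :] ** 2                # 2 = integral of 1 on [-1,1]
    return nodes, weights

nodes, weights = gauss_legendre(5)
# five nodes integrate polynomials up to degree 2*5-1 = 9 exactly:
print(sum(w * x**8 for x, w in zip(nodes, weights)))   # close to 2/9
```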

This shall suffice here. We can understand that these so-called Gauss-Legendre-formulae can’t be expressed in a closed form, but they yield satisfying results in practice. Of course, when there was no cheap computer power available, those formulae were more of an academic exercise, as you couldn’t easily evaluate any given function $f$ in those crude points $x_i$ (even though these $x_i$ came in tables) and you can’t re-use those evaluations when you take other values for $n$.

A numerical advantage is that the weights will be positive for the Gaussian quadrature, which is easy to see, as a matter of fact: with the Lagrange-polynomials $\ell_{ni}$, that are used for explicit interpolation with polynomials and that have $\ell_{ni}(x_j) = \delta_{ij}$ for the evaluation points, we find

$\displaystyle w_i = \sum_{j=1}^nw_j\ell_{ni}^2(x_j) = \int_{-1}^1\ell_{ni}^2(t)dt > 0.$

As an aside: there are some variants of Gauss-Legendre that may come in useful in some applications. For instance there is the Gauss-Chebyshev-formula which uses the weight function $\frac1{\sqrt{1-x^2}}$ in the integration (not in the quadrature – weights and weight functions are not related directly). This means, if your function $f$ is of the shape $f(x) = \frac{g(x)}{\sqrt{1-x^2}}$, use Gauss-Chebyshev on the function $g$, which may be easier or more stable; in that case, other $x_i$ and $w_i$ are necessary, of course. Similar things hold for Gauss-Laguerre (weight $e^{-x}$, the integral running on $[a,b)=[0,\infty)$, which makes it possible to integrate on infinite intervals, being a non-trivial task at a first glance) and Gauss-Hermite (weight $e^{-x^2}$, $(a,b)=(-\infty,\infty)$). Other variants have certain fixed points of evaluation, which certainly reduces the degree of exactness, but which might help in solving differential equations (when you always evaluate in the point $x_1=a$, you save one evaluation as this is your starting-point of your multi-step method; Gauss-Radau does this, for instance, and similarly Gauss-Lobatto with $x_1=a$ and $x_n=b$ fixed).

Obvious linear transformations will enable us to use different values for $a$ and $b$ than the standard ones for which the various orthonormal polynomials are constructed. Let’s ignore this here.

Those were pretty much the basic facts that any mathematician should know or at least should have heard once, without much proof stated here. Now for a little more specialized material. Who guarantees that the quadrature gets better with an increasing number of evaluations? Evaluations are expensive, generally speaking, and maybe they don’t even do any good (as was the case with the Newton-Cotes-method – or was it?). It seems that Stieltjes first came up with this question and gave partial answers in 1884, while Pólya answered it rather exhaustively a couple of decades later, in 1933. We rely on Pólya’s original article “Über die Konvergenz von Quadraturverfahren” (Math. Zeitschr. 37, 1933, 264–286).

Convergence

A quadrature method is, for all intents and purposes that are to come, a triangular scheme of weights $w_{ij}$ and evaluation points $x_{ij}$ like

$\displaystyle \begin{matrix}w_{11}\hphantom{,}\\w_{21}, &w_{22}\hphantom{,}\\w_{31},&w_{32},&w_{33}\\\vdots\\w_{n1},&w_{n2},&\ldots&w_{nn}&\\\vdots&\end{matrix}\hspace{2em}$ and $\hspace{2em}\displaystyle \begin{matrix}x_{11}\hphantom{,}\\x_{21}, &x_{22}\hphantom{,}\\x_{31},&x_{32},&x_{33}\\\vdots\\x_{n1},&x_{n2},&\ldots&x_{nn}&\\\vdots&\end{matrix}$

For any $n$, we need $a\leq x_{n1}<x_{n2}<\cdots<x_{nn}\leq b$. Of course we shall combine those triangular schemes like this

$\displaystyle Q_n[f] := \sum_{i=1}^nw_{ni}f(x_{ni})$

and hold up our hope that $\lim_{n\to\infty} Q_n[f] = \int_a^bf(t)dt$. If so, the quadrature method converges for the function $f$. Of course, $f$ needs to be at least Riemann-integrable (we cannot deal with general Lebesgue-integrable functions here, as no quadrature method can converge for every Lebesgue-integrable function: there are only countably many evaluation points, on which $f$ may be altered at will without effect on its integral).

It is clear that the Newton-Cotes-formulae fit in this definition. We can choose the $x_{ij}$ at liberty and the $w_{ij}$ follow from the demand that in stage $n$ polynomials up to degree $n-1$ are integrated exactly. In particular, the Newton-Cotes-formulae converge for any polynomial by definition.

But this raises the question: what do the $x_{ij}$ and $w_{ij}$ need to satisfy in order for the quadrature method to converge on a given set of functions (all polynomials, all continuous or even all integrable functions)? If you switch to a larger set, what are the additional necessary assumptions?

For polynomials, the case is quite easy. We look for $\lim_{n\to\infty}\sum_{i=1}^nw_{ni}x_{ni}^k = \frac{1}{k+1}(b^{k+1}-a^{k+1})$ for any $k\geq0$, as every polynomial is a linear combination of the $x^k$ and both integration and quadrature are linear functionals. No big deal, and for $k=0$ in particular we find the important assumption $\lim_{n\to\infty}\sum_{i=1}^nw_{ni} = b-a$.

Pólya’s result that we shall examine in a little more detail is this

Theorem: The quadrature method converges for every continuous function, if and only if it converges for every polynomial and there is a constant $\Gamma$ such that $\sup_n\sum_{i=1}^n\left|w_{ni}\right| < \Gamma$.

One direction of the proof is easy. Let $\sup_n\sum_{i=1}^n\left|w_{ni}\right| < \Gamma$ and let $f$ be any continuous function. We show that the method converges for $f$. We know by Weierstrass’ Approximation Theorem that there are uniformly close polynomials to $f$: for every $\varepsilon>0$ we can get a polynomial $p$ with $\left\|f - p\right\|_\infty<\varepsilon$. Thus,

\displaystyle \begin{aligned} \left|\int_a^bf(t)dt - Q_n[f]\right| &\leq \int_a^b\left|f(t)-p(t)\right|dt + \left|\int_a^bp(t)dt - Q_n[p]\right| + \sum_{i=1}^n\left|w_{ni}\right|\left|p(x_{ni})-f(x_{ni})\right| \\ &\leq \varepsilon(b-a) + \varepsilon + \varepsilon\Gamma. \end{aligned}

Here, we used that $p$ was chosen with $\left\|f-p\right\|_\infty<\varepsilon$ and that $Q_n$ converges for the polynomial $p$, so the middle term is smaller than $\varepsilon$ for $n$ sufficiently large. This shows that $Q_n[f]$ converges to $\int_a^bf(t)dt$.

The other direction of the proof is a little harder. We assume that there is no constant $\Gamma$ such that $\sup_n\sum_{i=1}^n\left|w_{ni}\right| < \Gamma$ – therefore $\limsup_n\sum_{i=1}^n\left|w_{ni}\right| = \infty$. We will construct a continuous function $f$ for which $Q_n[f]$ diverges.

This will involve the “resonance” that I alluded to earlier: choose a function that reacts as badly as possible on the evaluation points. For fixed $k$ take $g_k$ such that $\left\|g_k\right\|_\infty = 1$ and $Q_k[g_k] = \sum_{i=1}^k\left|w_{ki}\right|$.

This ensures that $Q_k[g_k]$ cannot possibly take a larger value on any function that itself is bounded by $1$. Such $g_k$ may be constructed, for instance, via $g_k(x_{kj}) = \mathrm{sgn} w_{kj}$ and linear connections between the evaluation points.
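A toy illustration of such a resonance function: for any rule $Q[g]=\sum_i w_ig(x_i)$, choosing $g(x_i)=\mathrm{sgn}\,w_i$ (extended piecewise linearly, so $\left\|g\right\|_\infty=1$) attains $Q[g]=\sum_i|w_i|$. The nodes and weights below are made-up values, some negative on purpose.

```python
import numpy as np

# hypothetical quadrature rule: nodes and weights invented for illustration
x = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
w = np.array([0.4, -0.3, 0.8, -0.2, 0.3])

# the resonance function only matters at the nodes: g(x_i) = sgn(w_i)
g_at_nodes = np.sign(w)
Q_g = np.dot(w, g_at_nodes)                  # equals sum of |w_i|
print(Q_g, np.sum(np.abs(w)))                # both close to 2.0
```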

Now, one of two possible cases may occur: either there is some $k$ with $\limsup_n\left|Q_n[g_k]\right|=\infty$, or that expression stays bounded uniformly in $k$. In the first case, we have already found the continuous function which makes the quadrature method diverge and we are done. So, let us look at the second case only.

Let us inductively construct a subsequence of the $g_k$ as follows. Let $g_{k_1} = g_1$. If we have got $g_{k_{m-1}}$, set

$\displaystyle M_{m-1}:= \sup_n\left|Q_n\left[\sum_{j=1}^{m-1}g_{k_j}3^{-j}\right]\right|.$

As we have already decided we’re in the second case, $M_{m-1}$ will be finite. Now, we wish to choose the next part $g_{k_m}$ of the subsequence, and we ensure both $k_m>k_{m-1}$ and (as $\limsup_n\sum_{i=1}^n\left|w_{ni}\right| = \infty$, we may find some index to make this sum of weights as large as we like)

$\displaystyle \sum_{i=1}^{k_m}\left|w_{k_mi}\right| > 3^m 2(M_{m-1} + m).$

With the subsequence $g_{k_m}$ at our disposal, let’s define the function

$\displaystyle f(x):=\sum_{j=1}^\infty 3^{-j}g_{k_j}(x).$

This is a normally convergent series, as each of the $g_{k_j}$ is bounded by $1$ and thus for sufficiently large $n$:

$\displaystyle\left\|\sum_{j=n}^\infty 3^{-j}g_{k_j}\right\|_\infty\leq\sum_{j=n}^\infty 3^{-j} = \frac32\,3^{-n}<\varepsilon.$

As a uniformly convergent limit of continuous functions, $f$ is continuous. Next, we invoke linearity of quadrature methods again and have a closer look at what the $Q_n$ do to $f$:

\displaystyle \begin{aligned} Q_{k_m}[f] &= Q_{k_m}\left[\sum_{j=1}^{m-1}g_{k_j}3^{-j}\right] + 3^{-m}Q_{k_m}\left[g_{k_m}\right] + Q_{k_m}\left[\sum_{j=m+1}^\infty g_{k_j}3^{-j}\right]\\ & > -M_{m-1} + 3^{-m}\sum_{j=1}^{k_m}\left|w_{k_mj}\right| - \sum_{\ell=1}^{k_m}\left|w_{k_m\ell}\right|\sum_{j=m+1}^\infty 1\cdot 3^{-j}, \end{aligned}

by the definition of $M_{m-1}$ and by the construction of $g_{k_m}$. Then,

\displaystyle \begin{aligned} Q_{k_m}[f] & > -M_{m-1} + 3^{-m}\sum_{j=1}^{k_m}\left|w_{k_mj}\right| - 3^{-m-1}\sum_{\ell=1}^{k_m}\left|w_{k_m\ell}\right|\cdot\frac1{1-\frac13}\\ & = -M_{m-1} + \frac123^{-m}\sum_{j=1}^{k_m}\left|w_{k_mj}\right|\\ & > m, \end{aligned}

which may explain the strange bound that we used in the construction of the subsequence.

Letting $m\to\infty$, we see that the quadrature method diverges for $f$. The entire construction hinges on the assumption that $\limsup_n\sum_{i=1}^n\left|w_{ni}\right| = \infty$; in particular, the condition given in the theorem is necessary for the convergence of the quadrature method.

The theorem is proved. Q.e.d.

The condition for the convergence on Riemann-integrable functions is rather technical and we won’t dwell on this for too long. The basic ideas of resonance are similar, and in applications there is often good reason to assume that your function is continuous (as opposed to differentiable, which you usually can’t know). We’ll content ourselves with the remark that a quadrature method converges for every Riemann-integrable function if and only if it converges for every continuous function and $\lim_n\sum_{x_{ni}\in I_n}\left|w_{ni}\right|=0$ for every sequence $(I_n)$ of finite unions of intervals whose total length converges to $0$ (the sum running over those nodes $x_{ni}$ that lie in $I_n$).

A little more interesting is the following counter-example: There is an analytic function for which the Newton-Cotes-formulae on equidistant grids diverge.

Analytic functions are about the most well-behaved functions you may imagine, and even there the Newton-Cotes-method won’t do much good (as long as they’re computed on equidistant grids). Of course, by the theorem we proved, we can already see why: the weights are not bounded and the basic problem is what happened in Runge’s counterexample; the interpolating polynomials don’t have anything to do with the function $f$ that we wish to integrate. As Ouspensky put it:

La formule de Cotes perd tout valeur pratique tant que le nombre des ordonnées devient considérable. (Cotes’s formula loses all practical value once the number of ordinates becomes considerable.)

However, the problem is not inherent in Newton-Cotes: if you choose the points of evaluation wisely (as zeroes of Legendre polynomials, or the like, usually clustering them around the endpoints of the interval), the sequence will converge by the theorem that we have proved: the quadrature is then Gaussian, its weights are positive and integrate the constant function $1$ properly (which even gives $\sum_{i=1}^n w_{ni} = b-a$ and so $b-a =: \Gamma <\infty$) – hence, it must converge for any continuous (and in particular: analytic) function. The message is: avoid negative weights in quadrature methods and choose wisely whatever you can choose; not only will you avoid trouble in numerical computations, but you may also keep nice convergence properties.

Let us at least sketch the proof of this counter-example.

Let us switch the notation very slightly and fix it like this: take $x_{ni} = \frac in$ for $i=0,\ldots,n$, as we try to approximate $\int_0^1f(t)dt$. Then, $Q_n[f] = \sum_{i=0}^nw_{ni}f(\frac in)$.

We’ll need two preliminaries. First: Asymptotics on the quadrature weights. Second: An estimate for the interpolating polynomial.

First preliminary: Asymptotics on the quadrature weights. As we use Newton-Cotes, the idea is to interpolate and then integrate the corresponding polynomial. The interpolating polynomial can be expressed via the Lagrange polynomials $\ell_{ni}(x) := \prod_{j=0, j\neq i}^n\frac{x-x_{nj}}{x_{ni}-x_{nj}}$ and our idea narrows down to

$\displaystyle \int_0^1f(t)dt\approx \int_0^1p(t)dt = \int_0^1\sum_{i=0}^nf(x_{ni})\ell_{ni}(t)dt = \sum_{i=0}^nf(x_{ni})\int_0^1\ell_{ni}(t)dt.$

On the other hand, $\int_0^1f(t)dt\approx Q_n[f] = \sum_{i=0}^nf(x_{ni})w_{ni}$. Hence, the weights $w_{ni}$ must equal $\int_0^1\ell_{ni}(t)dt$, or to write it down for once

$\displaystyle w_{nk} = \int_0^1\frac{nt\cdot (nt-1)\cdots (nt-k+1)\cdot (nt-k-1)\cdots (nt-n)}{k(k-1)\cdots 1\cdot(-1)\cdots(k-n)}dt.$

The really troublesome part is an asymptotic formula for the growth of these $w_{nk}$. Pólya uses all kinds of neat ideas from the theory of the gamma function, far-sighted variable transformations and a lot of computation. This was first done by Ouspensky (“Sur les valeurs asymptotiques des coefficients de Cotes”; Bull. Amer. Math. Soc. 31, 1925, 145–156), who concluded with the quote given above that the weights for Newton-Cotes get larger and larger in absolute value and therefore become increasingly useless. The result (which was at least a little generalized and simplified by Pólya) goes like this:

$\displaystyle w_{nk} = -\frac{1}{n(\log n)^2}\binom{n}{k}\left(\frac{(-1)^k}{k}+\frac{(-1)^{n-k}}{n-k}\right)(1+\eta_{nk})$

with $\lim_n\sup_k\eta_{nk}=0$. In particular, a close inspection shows the special case

$\displaystyle w_{n,\frac n2} > 2^n n^{-3}$

for even $n$. Let’s keep this guy in mind.
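The weights $w_{nk} = \int_0^1\ell_{nk}(t)dt$ can be computed exactly with rational arithmetic. The following sketch (function names are my own choosing) shows the two facts we rely on: the weights of a Newton-Cotes rule on $[0,1]$ always sum to $1$, and negative weights show up already at $n=8$.

```python
from fractions import Fraction

def newton_cotes_weights(n):
    """Closed Newton-Cotes weights w_{n0}, ..., w_{nn} on [0, 1] for the
    equidistant nodes i/n, computed exactly as the integrals of the
    Lagrange basis polynomials."""
    nodes = [Fraction(i, n) for i in range(n + 1)]
    weights = []
    for k in range(n + 1):
        # numerator polynomial prod_{j != k} (x - x_j), coefficients lowest degree first
        coeffs = [Fraction(1)]
        denom = Fraction(1)
        for j in range(n + 1):
            if j == k:
                continue
            out = [Fraction(0)] * (len(coeffs) + 1)
            for i, c in enumerate(coeffs):
                out[i + 1] += c           # the x * c part
                out[i] -= nodes[j] * c    # the -x_j * c part
            coeffs = out
            denom *= nodes[k] - nodes[j]
        # integrate term by term over [0, 1]: int x^i dx = 1/(i+1)
        weights.append(sum(c / (i + 1) for i, c in enumerate(coeffs)) / denom)
    return weights

for n in (4, 8):
    w = newton_cotes_weights(n)
    print(n, sum(w), min(w))
```

Since everything is a `Fraction`, the sums come out exactly $1$, and the sign of the smallest weight is not a rounding artefact.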

Second preliminary: An estimate for the interpolating polynomial. As we wish to say something about analytic functions, we can invoke the pretty results of complex analysis. This can tell us about a representation of the interpolating polynomial. In what follows, $\omega(x) = \prod_{j=0}^n(x-x_{nj})$ is, again, the polynomial with zeroes at the points of evaluation. Of course, in the complex plane, interpolation works with the Lagrange polynomials as well:

$\displaystyle p(x) = \sum_{i=0}^nf(x_{ni})\prod_{j=0, j\neq i}^n\frac{x-x_{nj}}{x_{ni}-x_{nj}}.$

Some close inspections show:

\displaystyle \begin{aligned} p(x) &= \sum_{i=0}^nf(x_{ni})\frac{\prod_{j=0, j\neq i}^n(x-x_{nj})}{\prod_{j=0, j\neq i}^n(x_{ni}-x_{nj})}\\ &= \sum_{i=0}^nf(x_{ni})\frac{\prod_{j=0}^n(x-x_{nj})}{(x-x_{ni})\sum_{k=0}^n\prod_{\ell=0, \ell\neq k}^n(x_{ni}-x_{n\ell})}\\ &= \sum_{i=0}^n\frac{\omega(x)f(x_{ni})}{(x-x_{ni})\omega'(x_{ni})}. \end{aligned}

For the sake of seeing how to invoke the residue theorem, we set $F(t) := \frac{\omega(x)-\omega(t)}{x-t}f(t)$ and $G(t):=\omega(t)$. This leads to

$\displaystyle p(x) = \sum_{i=0}^n\frac{F(x_{ni})}{G'(x_{ni})} = \sum_{i=0}^n\mathrm{res}_{x_{ni}}\left(\frac{F}{G}\right).$

The last equation followed from a little divine intuition – something we can’t avoid in this argument; in order to drop this intuition, we would have needed to start with an eerie expression that would have come from nowhere. So, how does it follow? The function $\frac1G$ has simple poles at the $x_{ni}$, while $F$ does not (the apparent singularities at the $x_{ni}$ cancel in the numerator, leaving a polynomial without further zeroes at the $x_{ni}$). The Laurent expansion of $\frac FG$ starts with $a_{-1}(x-x_{ni})^{-1}+\cdots$, $a_{-1}$ being the residue. Multiplying by $x-x_{ni}$, using the Taylor expansion of $G$ and taking the limit gives

\displaystyle \begin{aligned} \mathrm{res}_{x_{ni}}\left(\frac{F}{G}\right) = \lim_{x\to x_{ni}}(x-x_{ni})\frac{F(x)}{G(x)} &= \lim_{x\to x_{ni}}(x-x_{ni})\frac{F(x)}{(x-x_{ni})G'(x_{ni})+\cdots} \\ &= \lim_{x\to x_{ni}}\frac{F(x)}{G'(x_{ni})+\cdots} = \frac{F(x_{ni})}{G'(x_{ni})}. \end{aligned}

The argument looks slightly butchered as we read it the wrong way around – but it works nonetheless. Now the residue theorem comes into play; we apply it on any reasonable domain $D$ that contains the $x_{ni}$ in its interior:

\displaystyle \begin{aligned} p(x) = \sum_{i=0}^n\mathrm{res}_{x_{ni}}\left(\frac{F}{G}\right)&=\sum_{i=0}^n\mathrm{res}_{x_{ni}}\left(\frac{\bigl(\omega(x)-\omega(\zeta)\bigr)f(\zeta)}{(x-\zeta)\omega(\zeta)}\right)\\ &= \frac1{2\pi i}\int_{\partial D}\frac{\omega(x)-\omega(\zeta)}{(x-\zeta)\omega(\zeta)}f(\zeta)d\zeta \end{aligned}

Re-writing this last expression with the help of Cauchy’s integral formula shows

\displaystyle \begin{aligned}p(x) &= \frac1{2\pi i}\int_{\partial D}\frac{\omega(x)f(\zeta)}{(x-\zeta)\omega(\zeta)}d\zeta - \frac1{2\pi i}\int_{\partial D}\frac{f(\zeta)}{x-\zeta}d\zeta \\ &= \frac1{2\pi i}\int_{\partial D}\frac{\omega(x)f(\zeta)}{(x-\zeta)\omega(\zeta)}d\zeta + f(x). \end{aligned}

Now, consider the domain $D$ in the complex plane bounded by the two half-circles of radius $1$ around $z=0$ and $z=1$, connected by straight lines above and below the interval $[0,1]$. Let $f$ be analytic in this domain $D$, bounded there by some constant $M$, and interpolated by the polynomial $p$ in the points $x_{n0},\ldots,x_{nn}$, which are located in the interval $[0,1]$. For $\zeta$ on the boundary of $D$, we are far away from the interval $[0,1]$ and all $x_{ni}$, in the sense that $\left|\omega(\zeta)\right|\geq1$ and $\left|x-\zeta\right|\geq1$, while for $x$ in $[0,1]$ we have $\left|\omega(x)\right|<1$. Hence,

$\displaystyle \left|p(x)-f(x)\right|\leq \left|\frac1{2\pi i}\int_{\partial D}\frac{\omega(x)f(\zeta)}{(x-\zeta)\omega(\zeta)}d\zeta\right| \leq \frac1{2\pi}(2+2\pi)M.$

Therefore

$\displaystyle \left|\int_0^1p(t)dt\right|\leq\left|\int_0^1p(t)-f(t)dt\right|+\left|\int_0^1f(t)dt\right|\leq \frac{\pi+1}{\pi}M+M\leq 3M.$

Let’s keep this one in mind as well.

From all of these preliminaries, let us state the promised counter-example: consider the function

$\displaystyle f(x) := \sum_{k=4}^\infty a^{k!}\frac{\sin(k! \pi x)}{-\cos(\pi x)},$

with $a\in(\frac12, 1)$, and where the denominator and the starting index $k=4$ are chosen to ensure that the singularity at $x=\frac12$ is removable by a positive constant. This function is actually analytic, being the uniform limit of analytic functions (the pretty complex analysis again; here we need $a<1$ for the summands to diminish quickly enough). Besides, it is periodic, and one can use this to find its maximum value on the horizontal bounds of the domain: by complex analysis (yet again!), the maximum must be located on the boundary, and if it were on the vertical bounds, we could (by periodicity) shift the domain without losing any information and then find the maximum in the interior of the shifted domain – a contradiction. Therefore, any summand of which $f$ is constructed may be bounded inside the domain by

$\displaystyle \left|a^{k!}\frac{\sin(k! \pi x)}{-\cos(\pi x)}\right| \leq Ca^{k!}\exp\bigl(k!\pi \left|\mathrm{Im}(x)\right|\bigr) \leq Ca^{k!}\exp(k!\pi),$

where the constant $C$ helps to encode the minimum of the cosine on the lines $\mathrm{Im}(z)=\pm1$ (it won’t vanish anywhere near these lines).

Now that we have bounded each analytic summand of our function $f$, our second preliminary gives (with $p$ now interpolating the summand in question)

$\displaystyle \left|\int_0^1 p(t)dt\right| < 3Ca^{k!}\exp(k!\pi)$

and we shall absorb the factor $3$ into the constant $C$ without further notice. On top of that, we are about to apply a Newton-Cotes method; by construction, the quadrature $Q_n$ applied to a function is exactly the integral of its interpolating polynomial:

$\displaystyle \left|\sum_{i=0}^nw_{ni}a^{k!}\frac{\sin(k! \pi \frac in)}{-\cos(\pi \frac in)}\right| = \left|\int_0^1 p(t)dt\right| < Ca^{k!}\exp(k!\pi).$

Now we see

\displaystyle \begin{aligned}Q_{n!}[f] &= Q_{n!}\left[\sum_{k=n}^\infty a^{k!}\frac{\sin(k!\pi x)}{-\cos(\pi x)}\right]+Q_{n!}\left[\sum_{k=4}^{n-1} a^{k!}\frac{\sin(k!\pi x)}{-\cos(\pi x)}\right]\\ &\geq Q_{n!}\left[\sum_{k=n}^\infty a^{k!}\frac{\sin(k!\pi x)}{-\cos(\pi x)}\right]-\sum_{k=4}^{n-1}Ca^{k!}\exp(k!\pi)\\ &= w_{n!, \frac{n!}{2}}\sum_{k=n}^\infty k! a^{k!} -\sum_{k=4}^{n-1}Ca^{k!}\exp(k!\pi), \end{aligned}

since in the case $k\geq n$, $\sin(k!\pi x)$ vanishes at every relevant node $x_{n!,i}=\frac i{n!}$ (as $k!\pi\frac i{n!}$ is then an integer multiple of $\pi$), so all terms in $Q_{n!}$ vanish except the one at $x_{n!,\frac{n!}2}=\frac12$, where the denominator vanishes as well and the singularity is removable by taking the value $k!$.
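The removable singularity is easy to check numerically; a quick sketch for $k=4$ (so $k!=24$), evaluating the ratio just next to $x=\frac12$:

```python
import math

k_fact = math.factorial(4)      # k = 4, so k! = 24
for h in (1e-3, 1e-5):
    x = 0.5 + h
    ratio = math.sin(k_fact * math.pi * x) / (-math.cos(math.pi * x))
    print(h, ratio)             # approaches k! = 24 as h -> 0
```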

This is the time to take our first preliminary out of the drawer:

\displaystyle \begin{aligned} Q_{n!}[f] &> \frac{2^{n!}}{(n!)^3} n! a^{n!} - (n-4)C(ae^\pi)^{(n-1)!}\\ &= \frac{(2a)^{n!}}{(n!)^2}\left(1-\bigl(n!\bigr)^2(n-4)C\left(\frac{ae^\pi}{(2a)^n}\right)^{(n-1)!}\right)\end{aligned}.

As $a>\frac12$, the term in the large brackets converges to $1$: since $2a>1$, the fraction $\frac{ae^\pi}{(2a)^n}$ is eventually smaller than $1$, and its $(n-1)!$-th power tends to $0$ fast enough to beat the factor $(n!)^2(n-4)C$ in front of it. Hence the entire term diverges. In particular, the Newton-Cotes method diverges for $f$. That’s what we wanted to show. Oof. q.e.d.
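To see the divergence of the leading term $\frac{(2a)^{n!}}{(n!)^2}$ concretely, one can look at its logarithm, $n!\log(2a)-2\log(n!)$, for a sample value $a=\frac34$; a small sketch:

```python
import math

a = 0.75                                  # any a in (1/2, 1) will do
# log of (2a)^{n!} / (n!)^2, using lgamma(n + 1) = log(n!)
log_terms = [math.factorial(n) * math.log(2 * a) - 2 * math.lgamma(n + 1)
             for n in (3, 5, 7, 9)]
print(log_terms)                          # rapidly increasing
```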

So, we have found necessary and sufficient conditions for quadrature methods to converge, and we dealt with a striking counter-example: a plausible method on a reasonably nice class of functions which we had to dismiss. However, whenever we spoke about convergence, we didn’t deal with the speed of convergence – and we couldn’t have. As Davis and Rabinowitz have rather soberly put it: Nothing positive can be said about the degree of convergence. Lipow and Stenger (“How slowly can quadrature formulas converge?“; Math. Comp. 26, 1972, 917-922) have shown that all quadrature methods converge arbitrarily slowly on the space of continuous functions: for every sequence $\varepsilon_n$ converging to $0$, there is a continuous function $f$ such that $\int_0^1f(t)dt - Q_n[f] \geq \varepsilon_n$ for all $n$.

Before closing this chapter, let’s note that in practical applications it’s hard to know whether your approximation to the integral is sufficiently accurate. The convergence theorems don’t speak about the speed of convergence, and the error estimates deal with sup-norms of derivatives. Most applications use adaptive methods (which we haven’t covered here) to have a closer look wherever the function might become erratic, and in the end the methods are applied with a little hope in the back of the users’ minds that stable methods with a high degree of exactness will do sufficiently well. There can be no certainty, however. But still, it’s some pretty mathematics to spend your time with.

# On approximating functions – Part Two

Last time, we proved the Stone-Weierstrass Theorem on how to find approximating functions to any continuous function on a compact interval, using only a certain algebra of “easy” functions – such as polynomials.

Now that the hard, abstract work is done, we shall note that there are other, less abstract ways to prove Weierstrass’ Approximation Theorem. I am unaware of a less abstract method for Stone-Weierstrass; in particular, the method given above is non-constructive. You could go through most of the steps explicitly, but the way that we used compactness is not constructive in general – how do you choose the finite covering, when you only know that one exists?

Let us therefore look at Bernstein Polynomials, which are associated with a given function $f$ on $[0,1]$:

$\displaystyle f_n(x) = \sum_{k=0}^n\binom{n}{k}f\left(\frac kn\right)x^k(1-x)^{n-k}.$

One can prove:

Proposition: Let $f:[0,1]\to\mathbb{R}$ be continuous. Then the Bernstein Polynomials converge uniformly to $f$.

Proof: Since $f$ is continuous on a compact interval, it is also uniformly continuous. Let $\varepsilon>0$. Then, there is some $\delta>0$ with $\left|f(x)-f(y)\right|<\varepsilon$ if $\left|x-y\right|<\delta$. Besides, for any $x\in[0,1]$,

$\displaystyle \left|f(x)-f_n(x)\right| \leq \sum_{k=0}^n\binom{n}{k}\left|f(x)-f\left(\frac kn\right)\right|x^k(1-x)^{n-k},$

where we have used the standard fact that

$\displaystyle\sum_{k=0}^n\binom nk x^k(1-x)^{n-k} = \bigl(x+(1-x)\bigr)^n = 1,$

which will (if used again and together with uniform continuity) yield

$\displaystyle \left|f(x)-f_n(x)\right| \leq \sum_{k=0}^n\binom{n}{k}\left|f(x)-f\left(\frac kn\right)\right|x^k(1-x)^{n-k}\mathbf{1}_{\left|x-\frac kn\right|\geq\delta} + \varepsilon.$

So, we have split our problem into two parts: in the first part, approximating $f$ well is easy, because an interpolation point for $f_n$ is close by. In the second part, the interpolation point is far away, meaning:

$\displaystyle \left|x-\frac kn\right|\geq\delta\quad\iff\quad 1 \leq \frac{\bigl(x-\frac kn\bigr)^2}{\delta^2}.$

On top of that, by continuity, $f$ will be bounded by a constant $M$, say. This gives:

\begin{aligned} \left|f(x)-f_n(x)\right| &\leq \sum_{k=0}^n\binom{n}{k}\left|f(x)-f\left(\frac kn\right)\right| x^k(1-x)^{n-k}\frac{\bigl(x-\frac kn\bigr)^2}{\delta^2} + \varepsilon \\ &\leq \frac{2M}{\delta^2}\sum_{k=0}^n\binom{n}{k}\left(x-\frac kn\right)^2 x^k(1-x)^{n-k} + \varepsilon. \end{aligned}

As far as ideas are concerned, we’re done. All that is left to do is to shove terms and binomial coefficients from left to right and to find out that they behave properly. We won’t do that here, as it’s not particularly hard. One will find:

$\displaystyle\left |f(x)-f_n(x)\right| \leq \frac{2M}{\delta^2}\left(x^2-2x^2+x^2+\frac{x(1-x)}{n}\right) + \varepsilon\leq \frac{2M}{\delta^2}\frac1{4n}+\varepsilon < 2\varepsilon,$

if $n$ is chosen sufficiently large. q.e.d.
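The proposition is easy to watch in action; a small sketch (assuming the test function $f(x)=\left|x-\frac12\right|$, which is continuous but not smooth) evaluates the sup-error of $f_n$ on a grid:

```python
from math import comb

def bernstein(f, n, x):
    """Evaluate the n-th Bernstein polynomial of f at x in [0, 1]."""
    return sum(comb(n, k) * f(k / n) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

def f(x):
    return abs(x - 0.5)      # continuous, but not differentiable at 1/2

grid = [i / 200 for i in range(201)]
errors = {n: max(abs(f(x) - bernstein(f, n, x)) for x in grid)
          for n in (10, 100, 1000)}
print(errors)
```

The errors shrink only slowly (roughly like $n^{-1/2}$ for this kink), which matches the remark above that nothing is claimed about the speed of convergence.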

There is a variant of the proof which avoids some of these messier calculations that we didn’t pull through above. Well, this variant doesn’t avoid the calculations, it rather shoves them away to another concept: the Weak Law of Large Numbers. We rely on Klenke’s book on Probability for this (but the idea is due to Bernstein).

Alternative Proof: Let $x\in[0,1]$, and let $X_1,\ldots,X_n$ be independent random variables with a Bernoulli distribution with parameter $x$ (the random experiment of throwing one coin whose chance of landing on heads is $x$). Then, the sum $S_n = \sum_{i=1}^nX_i$ has a binomial distribution $B_{n,x}$. Therefore,

$\displaystyle E\left[f\left(\frac{S_n}n\right)\right] = \sum_{k=0}^nf\left(\frac kn\right)P[S_n=k] = f_n(x).$

We find, again with $M = \left\|f\right\|_\infty$,

$\displaystyle \left|f\left(\frac{S_n}n\right) - f(x)\right| \leq \varepsilon + 2M \mathbf{1}_{|\frac{S_n}n-x|\geq\delta}$

and hence

$\displaystyle \left|f_n(x)-f(x)\right| \leq E\left[\left|f\left(\frac{S_n}n\right)-f(x)\right|\right] \leq \varepsilon + 2MP\left[\left|\frac{S_n}{n}-x\right|\geq\delta\right].$

By the Chebyshev Inequality (which is just shy of being the Weak Law of Large Numbers itself), since $E[\frac{S_n}n] = x$ and $\mathrm{Var}[\frac{S_n}n] = \frac1{n^2}nx(1-x)\leq\frac1{4n}$, we have

$\displaystyle P\left(\left|\frac{S_n}n-x\right|\geq\delta \right) = P\left(\left|\frac{S_n}n-E\left[\frac{S_n}n\right]\right|\geq\delta\right) \leq \mathrm{Var}\left[\frac{S_n}n\right]\delta^{-2} = \frac{1}{4n\delta^2}.$

Hence,

$\displaystyle \left|f_n(x)-f(x)\right| \leq \varepsilon + \frac{2M}{4\delta^2n} < 2\varepsilon,$

if $n$ is taken large enough. q.e.d.
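The Chebyshev step can be illustrated by simulation; a sketch with arbitrarily chosen parameters $x=0.3$, $\delta=0.15$, $n=100$, comparing the empirical deviation frequency against the bound $\frac1{4n\delta^2}$:

```python
import random

random.seed(1)
x, delta, n, trials = 0.3, 0.15, 100, 5000   # arbitrary illustration values
# count how often the empirical mean S_n / n deviates from x by at least delta
hits = sum(
    abs(sum(random.random() < x for _ in range(n)) / n - x) >= delta
    for _ in range(trials)
)
bound = 1 / (4 * n * delta**2)
print(hits / trials, bound)
```

As expected, the observed frequency stays (far) below the Chebyshev bound; the bound is crude, but crude is all the proof needs.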

These were two more direct ways of proving the classical Weierstrass Approximation Theorem. This is the route taken in Werner’s book on Functional Analysis, and in Körner’s treatment of Fourier Series. On the pro side, it’s a direct and simple proof. The downside, however, is that you won’t get the stronger theorems given at the start of this text; it doesn’t even say anything about trigonometric polynomials.

But, there’s a slight hope: actually, in the non-probabilistic approach, we only needed to prove that Bernstein Polynomials can deal well with the monomials $1$, $x$ and $x^2$ (that’s the part that we have avoided, since there is some computation to be done). Maybe one can generalize this one – and one can: this is Korovkin’s approach.
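The computation we avoided boils down to the known identities $B_n[1]=1$, $B_n[t](x)=x$ and $B_n[t^2](x)=x^2+\frac{x(1-x)}n$ (the last one follows from the second moment of the binomial distribution); a quick numerical check, with the Bernstein operator written out naively:

```python
from math import comb

def bernstein(f, n, x):
    return sum(comb(n, k) * f(k / n) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

n, x = 20, 0.3                                  # arbitrary sample values
print(bernstein(lambda t: 1.0, n, x))           # reproduces the constant 1
print(bernstein(lambda t: t, n, x))             # reproduces x exactly
print(bernstein(lambda t: t * t, n, x))         # x^2 plus the error x(1-x)/n
```

Only the test function $t^2$ picks up an error term, and that term vanishes uniformly like $\frac1n$ – exactly the hypothesis Korovkin’s theorem asks for.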

Theorem (Korovkin): Let $T_n:\mathcal{C}_{\mathbb{R}}(K)\to\mathcal{C}_{\mathbb{R}}(K)$ be a sequence of positive linear operators, and let $Q\subset \mathcal{C}_{\mathbb{R}}(K)$ be a test set that satisfies certain conditions on positivity and on its zero set.

If for any $q\in Q$, we have $\lim_{n\to\infty}\left\|T_nq-q\right\|_\infty=0$, then we also have $\lim_{n\to\infty}\left\|T_nf-f\right\|_\infty=0$ for all $f\in\mathcal{C}_{\mathbb{R}}(K)$.

Remember that an operator is just a function on a function space (plug a function in, get a function out). It is linear by the usual definition of that term, and by positive one means that $T_nf\geq0$ whenever $f\geq0$.

Korovkin tells us that you only need to approximate certain functions well with a cleverly chosen operator, to make the classical Weierstrass Theorem work. This is what can be achieved by Bernstein Polynomials (they are a sequence of operators which only use finitely many values of a function) and the test set $Q=\{1, x, x^2\}$. You can also prove the theorem on trigonometric polynomials with the operators $T_n$ chosen via the Fejér kernel and the test set $Q=\{1, \cos, \sin\}$. We don’t delve into this here, especially not into the test set $Q$; let’s just say that there is one, and in the given examples you can’t make it smaller. In the detailed proof, there is some computation required and you can actually learn many things about Fourier series (so go into it; for instance in the books by Werner on Functional Analysis, or in Hämmerlin’s book on Numerics – however, to place a quote by Gauss in here: Most readers may well wish in some places for greater clarity in the presentation).

We won’t prove that here, because for the main applications there is not much to gain on top of what we already have. But the approach has a certain elegance to it which is especially appreciated by functional analysts. I am not aware of generalizations of its applicability; I’d be very interested to hear about them.

As a final alternate proof of the full real version of the Stone-Weierstrass Theorem (not just Weierstrass’ Theorem), we shall look at an elementary approach suggested by Brosowski and Deutsch in 1981. There is no dealing with convergent series, operators and the like; the proof only hinges on the definitions of compactness and continuity and on three easily proved ideas: closed subsets of compact spaces are compact, continuous positive functions have a positive minimum on compact sets, and the Bernoulli inequality $(1+h)^n\geq 1+nh$ for any $n\in\mathbb{N}$ and $h\geq-1$. We will give two lemmas before we are able to prove the real version of Stone-Weierstrass.

The proof will actually show a little bit more, since we need not suppose that $K\subset\mathbb{R}$. In fact, Stone-Weierstrass holds for any real-valued function on a compact topological space (it’s just that we didn’t make a fuss about that earlier on – most applications deal with functions on $\mathbb{R}$, as a matter of fact).

The pointwise nearly indicator Lemma: Let $x_0\in K$ and let $U$ be a neighborhood of $x_0$. Then, for any $\varepsilon>0$ there is some neighborhood $V(x_0)$ of $x_0$ with $V\subset U$ and some function $g\in\mathcal{A}$ such that

• $0\leq g(x)\leq 1$ for $x\in K$,
• $g(x)<\varepsilon$ for $x\in V(x_0)$,
• $g(x)>1-\varepsilon$ for $x\in K\setminus U$.

The lemma got its name because $g$ serves as an indicator function (or characteristic function for the non-probabilists out there) for the set $K\setminus U$. Of course, the algebra $\mathcal A$ is not rich enough for a proper indicator function, since that wouldn’t be continuous. But the algebra approximates this indicator of $K\setminus U$, and we can guarantee the function $g$ to nearly vanish in some neighborhood of the given point $x_0$. We’ll improve on that soon.

Proof: First of all, we need to find a function that takes its values only in the interval $[0,1]$ and which already has a controllably small value at $x_0$. What easier choice for that small value than $0$?

For any $y\in K\setminus U$, by separation, there is some function $g_y\in\mathcal{A}$ with $g_y(y)\neq g_y(x_0)$. Besides, the function $h_y(x) = g_y(x)-g_y(x_0)$ is in $\mathcal{A}$ and of course $h_y(y)\neq h_y(x_0) = 0$. Even the function $p_y(x) = \frac1{\left\|h_y^2\right\|_\infty}h_y^2(x)$ is in $\mathcal{A}$, we have $p_y(x_0) = 0$, $p_y(y)>0$ and $0\leq p_y\leq 1$.

That was the first important step. $p_y$ vanishes at $x_0$ and it doesn’t vanish in some point outside of $U$. Now, we need to push the “doesn’t vanish” part to make our function take values near $1$, uniformly outside of $U$.

Consider the set $W(y)=\{x\in K : p_y(x)>0\}$. This is a neighborhood of $y$, hence its name. We can look at any of those $W(y)$, whenever $y\in K\setminus U$. Now, $K\setminus U$ is compact (it’s a closed subset of a compact space), and hence finitely many $W(y)$ will suffice to cover $K\setminus U$, let’s say $K\setminus U \subset\bigcup_{k=1}^m W(y_k)$. The function $p(x) = \frac1m\sum_{k=1}^mp_{y_k}(x)$ is in $\mathcal{A}$, we still have $0\leq p\leq1$, $p(x_0)=0$ and $p(x)>0$ for any $x\in K\setminus U$.

Because of compactness, $p$ must take a positive minimum on the compact set $K\setminus U$, let’s call it $\delta$. Then, for this $\delta\in(0,1]$, we look at the set $V=\{x\in K : p(x)<\frac\delta2\}$. Obviously, $V\subset U$ and $V$ is a neighborhood of $x_0$.

Let us look at the integer $k = \left\lfloor \frac1\delta\right\rfloor+1$, the smallest integer with $k\delta>1$. Thus, $k-1\leq\frac1\delta$ and $k\leq\frac{1+\delta}{\delta} \leq \frac2\delta$. In total, $1< k\delta \leq 2$. We can therefore, for every $n\in\mathbb{N}$, define the functions $q_n(x) = \bigl(1-p^n(x)\bigr)^{k^n}$, which, of course, are in $\mathcal{A}$, which have $0\leq q_n\leq 1$ and which have $q_n(x_0)=1$.

Let $x\in V$. Then, $kp(x)< \frac{k\delta}2\leq 1$ and Bernoulli’s inequality (used with the exponent $k^n$ and $h:= -p^n(x)\geq -1$) shows $q_n(x) \geq 1-k^np^n(x)\geq 1-\left(\frac{k\delta}2\right)^n$. Therefore, $\lim_{n\to\infty}q_n(x) = 1$, and this limit is uniform in $x\in V$.

Let $x\in K\setminus U$. Then, $kp(x)\geq k\delta> 1$ and Bernoulli’s inequality shows

\begin{aligned} q_n(x) &= \frac{1}{k^np^n(x)}\bigl(1-p^n(x)\bigr)^{k^n} k^np^n(x) \\ &\leq \frac1{k^np^n(x)}\bigl(1-p^n(x)\bigr)^{k^n}\bigl(1+k^np^n(x)\bigr)\\ &\leq \frac{1}{k^np^n(x)}\bigl(1-p^n(x)\bigr)^{k^n}\bigl(1+p^n(x)\bigr)^{k^n} \\ &= \frac{1}{k^np^n(x)}\bigl(1-p^{2n}(x)\bigr)^{k^n} \\ &\leq \frac{1}{k^n\delta^n}, \end{aligned}

and this converges to $0$ uniformly.

In particular, for any $\varepsilon>0$, we can find some $n$ large enough such that $q_n\leq\varepsilon$ on $K\setminus U$ and $q_n\geq 1-\varepsilon$ on $V$. Now, to conclude the proof, pick $V(x_0) = V$ (remember that $V$ was a neighborhood of $x_0$ by construction) and $g(x) = 1-q_n(x)$. q.e.d.
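The construction is explicit enough to run. A toy instance (all concrete choices here are my own: $K=[0,1]$, $x_0=\frac12$, $U=(0.3, 0.7)$, and $p(x)=4(x-\frac12)^2$ standing in for the algebra element built above, which gives $\delta=0.16$ and $k=7$):

```python
def g(x, n, k=7):
    # p(x) = 4(x - 1/2)^2 vanishes at x0 = 1/2, is positive elsewhere on [0, 1],
    # and its minimum on K \ U = [0, 0.3] u [0.7, 1] is delta = 0.16;
    # k = 7 is the smallest integer with k * delta > 1.
    p = 4.0 * (x - 0.5) ** 2
    return 1.0 - (1.0 - p ** n) ** (k ** n)

n = 6
print(g(0.5, n), g(0.55, n))    # tiny on the neighborhood V of x0
print(g(0.2, n))                # close to 1 on K \ U
```

Already for $n=6$ the function $g=1-q_n$ behaves as the lemma promises: nearly $0$ near $x_0$, nearly $1$ outside $U$.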

This one felt quite chore-ish. It was a little messy and technical, but note the necessary tools: no operator theory, no convergence of series, just stuff from high school calculus and the very basics of the theory of compact sets. That would seem like a value in and of itself.

The setwise nearly indicator Lemma: Let $A,B\subset K$ disjoint and closed. Let $\varepsilon\in(0,1)$. Then, there is some $g\in\mathcal{A}$ with

• $0\leq g(x)\leq 1$ for $x\in K$,
• $g(x)<\varepsilon$ for $x\in A$,
• $g(x)>1-\varepsilon$ for $x\in B$.

Obviously, this lemma improves on the first one, we have nearly an indicator function on $B$ which nearly vanishes on $A$. In this respect, it resembles Urysohn’s Lemma from elementary topology, but Urysohn isn’t restricted to the algebra $\mathcal A$ and can hence do even a little better. Never mind.

Proof: This is going to be a lot quicker: you know the drill. Let $\varepsilon>0$. Let $U=K\setminus B$ and let $y\in A$. We can choose neighborhoods $V(y)$ as in the previous Lemma. Of course, since $A$ is compact, finitely many of those $V(y)$ suffice to cover $A$, let’s say $A\subset\bigcup_{k=1}^m V(y_k)$. For each of the $V(y_k)$, there is a function $g_k\in\mathcal{A}$ such that $0\leq g_k\leq 1$, $g_k(x)>1-\frac\varepsilon m$ for all $x\in K\setminus U = B$ and $g_k(x)<\frac\varepsilon m$ for all $x\in V(y_k)$.

The function $g(x) = \prod_{k=1}^mg_k(x)$ will now do the trick: Of course, we have $0\leq g\leq1$.

Let $x\in A\subset\bigcup_{k=1}^mV(y_k)$, in particular $x\in V(y_{n_0})$, say. Then

$\displaystyle g(x) = \prod_{k=1}^mg_k(x)= g_{n_0}(x)\prod_{k=1, k\neq n_0}^mg_k(x) \leq \frac\varepsilon m\prod_{k=1, k\neq n_0}^m1 \leq\varepsilon.$

Now, let $x\in B$. Then Bernoulli tells us that $g(x)=\prod_{k=1}^mg_k(x)\geq\left(1-\frac\varepsilon m\right)^m \geq 1-\varepsilon$. Note that Bernoulli requires $\frac\varepsilon m\leq 1$ here, which is automatic since we assumed $\varepsilon\in(0,1)$ in the assertion.

That’s all we needed to show. q.e.d.
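The Bernoulli step $\left(1-\frac\varepsilon m\right)^m\geq1-\varepsilon$ is easy to sanity-check for a sample $\varepsilon$:

```python
eps = 0.2                                            # any eps in (0, 1)
values = {m: (1 - eps / m) ** m for m in (1, 2, 5, 50)}
print(values)    # every value stays >= 1 - eps, as Bernoulli promises
```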

Note that the notion of compactness has been used slightly differently here than in the very first proof we gave for Stone-Weierstrass. There, we had the Approximation Lemma, which used continuity and direct neighborhoods on which we could approximate $f$ adequately. Here, we have decomposed the underlying set $K$ using the level sets of $f$. In both cases, compactness tells us that finitely many sets suffice; the nature of those sets is ever so slightly different. The math, however, is not.

Now, let’s harvest the ideas:

Proof of the Stone-Weierstrass, real version:

Let $f\in\mathcal{C}_{\mathbb{R}}(K)$ and let $\varepsilon>0$. Without loss of generality, we may assume that $f\geq 0$, since we might as well replace it by $f+\left\|f\right\|_\infty$. Besides, $\varepsilon <\frac13$ is not much of a restriction.

Let $n\in\mathbb{N}$ be such that $(n-1)\varepsilon>\left\|f\right\|_\infty$. Let, for $j=0,\ldots,n$, $A_j$ and $B_j$ be the sets

$\displaystyle A_j = \left\{x\in K : f(x)\leq \left(j-\frac13\right)\varepsilon\right\}$

$\displaystyle B_j = \left\{x\in K : f(x)\geq \left(j+\frac13\right)\varepsilon\right\}.$

Obviously, those sets are disjoint for fixed $j$, and the $A_j$ are increasing to $A_n = K$, while the $B_j$ are decreasing to $B_n = \emptyset$. The setwise nearly indicator Lemma then says that there is some $g_j\in\mathcal{A}$ with $0\leq g_j\leq 1$, $g_j(x) < \frac\varepsilon n$ for $x\in A_j$ and $g_j(x) > 1-\frac{\varepsilon}{n}$ for $x\in B_j$.

That’s all well and good – but how will this help us to prove Stone-Weierstrass? Imagine the space $K$ decomposed like an onion into the $A_j$. The functions $g_j$ will have very small values on the $A_j$, but large values outside, in the $B_j$. We will combine these small and large values properly, to have similar values as the function $f$ in every ring of this onion. Note that the values of $f$ are closely attached to the $g_j$ – by the very definition of $A_j$ and $B_j$.

Now consider the function $g(x) = \varepsilon\sum_{j=0}^n g_j(x)$, which of course is in $\mathcal{A}$. Let $x\in K$; then there is some $j\geq 1$ with $x\in A_j\setminus A_{j-1}$ (note that $A_0=\emptyset$, since $f\geq0$), and so

$\displaystyle \left(j-\frac43\right)\varepsilon < f(x) \leq \left(j-\frac13\right)\varepsilon$,

and of course $g_i(x) < \frac\varepsilon n$ for all $i\geq j$. Besides, our fixed $x\in K$ is in $B_i$ for $i\leq j-2$. Therefore, $g_i(x)\geq 1-\frac\varepsilon n$. All of this tells us (remember that $\varepsilon<\frac13$)

\begin{aligned} g(x) = \varepsilon\sum_{k=0}^{j-1}g_k(x) + \varepsilon\sum_{k=j}^ng_k(x)&\leq j\varepsilon + \varepsilon(n-j+1)\frac{\varepsilon}{n} \\ &\leq j\varepsilon + \varepsilon^2 < j\varepsilon + \frac\varepsilon3 = \left(j+\frac13\right)\varepsilon. \end{aligned}

Now, if $j\geq2$,

\begin{aligned} g(x) \geq \varepsilon\sum_{k=0}^{j-2}g_k(x) &\geq (j-1)\varepsilon \left(1-\frac\varepsilon n\right) \\ &= (j-1)\varepsilon - \left(\frac{j-1}{n}\right)\varepsilon^2 > (j-1)\varepsilon - \varepsilon^2 > \left(j-\frac43\right)\varepsilon. \end{aligned}

Note that, for $j = 1$, the bound $g(x) > \bigl(j-\frac43\bigr)\varepsilon$ holds trivially, since the right-hand side is negative while $g\geq0$.

To conclude, we wish to show something about $\left|f(x)-g(x)\right|$. Let us consider the case $f(x)\geq g(x)$ first:

$\displaystyle\left|f(x)-g(x)\right| = f(x)-g(x) \leq \left(j-\frac13\right)\varepsilon - \left(j-\frac43\right)\varepsilon = \varepsilon.$

The other case is $f(x) < g(x)$:

$\displaystyle\left|f(x)-g(x)\right| = g(x) - f(x) \leq \left(j+\frac13\right)\varepsilon - \left((j-2)+\frac13\right)\varepsilon = 2\varepsilon.$

This shows that, in every case, $f$ and $g$ differ by at most $2\varepsilon$, uniformly on $K$. q.e.d.
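The onion construction can be mimicked numerically if one replaces each $g_j$ by the exact indicator of $\{f>(j-\frac13)\varepsilon\}$, an idealization that the algebra elements only approximate; the combined sum $\varepsilon\sum_j g_j$ then tracks $f$ within $2\varepsilon$, as in the proof. A sketch with a test function of my own choosing:

```python
import math

eps = 0.05

def f(x):
    return x * x * (2 - x)   # any continuous f >= 0 on [0, 1]; here sup f = 1

n = math.ceil(1.0 / eps) + 2             # ensures (n - 1) * eps > sup f

def g(x):
    # idealized g_j: the exact indicator of the set {f > (j - 1/3) * eps}
    return eps * sum(1 for j in range(n + 1) if f(x) > (j - 1 / 3) * eps)

err = max(abs(f(i / 1000) - g(i / 1000)) for i in range(1001))
print(err)
```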

To conclude, let us mention a little question that is related to the classical Weierstrass Approximation Theorem. If we have all monomials $x^n$ at our disposal, we can uniformly approximate any continuous function; but what if some monomials are missing? With how few of them can the Approximation Theorem still hold? The surprisingly neat answer is given in Müntz’ Theorem (sometimes called the Müntz-Szász Theorem for the sake of more signs on top of the letters):

Theorem (Müntz): Let $\lambda_k$ be a strictly increasing sequence of natural numbers with $\lambda_1>0$. Then, we have

$\displaystyle \overline{\mathrm{span}\{1, x^{\lambda_1},x^{\lambda_2},\ldots,\}} = \mathcal{C}_{\mathbb{R}}([0,1]) \qquad\iff\qquad \sum_{k=1}^\infty\lambda_k^{-1}=\infty.$

The span is meant as the linear space generated by the given monomials, i.e. the polynomials that can be constructed using only those monomials. Hence, the Weierstrass Theorem can still hold even if you omit infinitely many monomials. However, you will always need the monomial $x^0=1$, since otherwise all your approximating functions would satisfy $f(0)=0$.

The proof of that theorem is not particularly easy, especially the part where you show that the series must diverge if you can approximate any function. For the other direction, there is a very elementary proof that we shall give here because of its inherent beauty:

Proof of Müntz’ Theorem, $\Leftarrow$-part:

Let $\sum_{k=1}^\infty\lambda_k^{-1}=\infty$.

Let $q\in\mathbb{N}$ be such that $q\notin\{\lambda_1,\lambda_2,\ldots\}$. Without loss of generality, we assume that $q$ is smaller than any of the $\lambda_k$: just take away those $\lambda_k$ which are smaller than $q$ and re-enumerate the $\lambda_k$ starting from $k=1$; the condition $\sum_{k=1}^\infty\lambda_k^{-1}=\infty$ will then still hold.

We will show that $x^q\in\overline{\mathrm{span}\{x^{\lambda_1},x^{\lambda_2},\ldots\}}$. This will imply that all monomials $x^n$ are in the closure of that span, and if we have that, Weierstrass tells us that we can approximate any continuous function.

Note that we don’t need the constant function $1$ to approximate $x^q$, since for our purposes the relevant functions all have $f(0)=0$.

Now consider the function

$\displaystyle Q_n(x) := x^q-\sum_{k=1}^na_{k,n}x^{\lambda_k}$

with certain $a_{k,n}\in\mathbb{R}$. Then, first of all, $Q_0(x) = x^q$, and for a suitable choice of the $a_{k,n}$, we find the recursion

$\displaystyle Q_n(x) = (\lambda_n-q)x^{\lambda_n}\int_x^1Q_{n-1}(t)t^{-1-\lambda_n}dt.$

Let’s prove this by induction:

\begin{aligned} (\lambda_n-q)x^{\lambda_n}\int_x^1Q_{n-1}(t)t^{-1-\lambda_n}dt &= (\lambda_n-q)x^{\lambda_n}\int_x^1\left(t^q-\sum_{k=1}^{n-1}a_{k,n-1}t^{\lambda_k}\right)t^{-1-\lambda_n}dt\\ &=(\lambda_n-q)x^{\lambda_n}\int_x^1 t^{q-1-\lambda_n}-\sum_{k=1}^{n-1}a_{k,n-1}t^{\lambda_k-1-\lambda_n}dt\\ &=(\lambda_n-q)x^{\lambda_n}\left(\left[\frac1{q-\lambda_n}t^{q-\lambda_n}\right]_x^1 - \sum_{k=1}^{n-1}a_{k,n-1}\left[\frac1{\lambda_k-\lambda_n}t^{\lambda_k-\lambda_n}\right]_x^1\right)\\ &=(\lambda_n-q)x^{\lambda_n}\left(\frac1{q-\lambda_n}(1-x^{q-\lambda_n}) \vphantom{\sum_{k=1}^n}\right.\\ &\hphantom{=}-\left.\sum_{k=1}^{n-1}a_{k,n-1}\frac1{\lambda_k-\lambda_n}(1-x^{\lambda_k-\lambda_n})\right)\\ &=x^{\lambda_n}(x^{q-\lambda_n}-1)-(\lambda_n-q)\sum_{k=1}^{n-1}\frac{a_{k,n-1}}{\lambda_k-\lambda_n}(x^{\lambda_n}-x^{\lambda_k})\\ &=x^q-x^{\lambda_n}-(\lambda_n-q)\sum_{k=1}^{n-1}\frac{a_{k,n-1}}{\lambda_k-\lambda_n}(x^{\lambda_n}-x^{\lambda_k})\\ &=: x^q - \sum_{k=1}^na_{k,n}x^{\lambda_k}. \end{aligned}

Obviously, one could read off a recursion defining the $a_{k,n}$ from the last line, or try to give a closed-formula expression; but it doesn’t seem to be useful for the problem at hand.

Now, we have $\left\|Q_0\right\|_\infty = 1$, and

\begin{aligned} \left\|Q_n\right\|_\infty &= \left\|(\lambda_n-q)x^{\lambda_n}\int_x^1Q_{n-1}(t)t^{-1-\lambda_n}dt\right\|_\infty \\ &\leq \left|\lambda_n-q\right|\left\|Q_{n-1}\right\|_\infty\left\|x^{\lambda_n}\int_x^1t^{-1-\lambda_n}dt\right\|_\infty\\ &=\left|\lambda_n-q\right|\left\|Q_{n-1}\right\|_\infty\left\|\frac1{\lambda_n}\bigl(1-x^{\lambda_n}\bigr)\right\|_\infty\\ &\leq\left|\lambda_n-q\right|\left\|Q_{n-1}\right\|_\infty\frac1{\lambda_n} =\left|1-\frac q{\lambda_n}\right|\left\|Q_{n-1}\right\|_\infty. \end{aligned}

This shows

$\displaystyle \left\|Q_n\right\|_\infty\leq\prod_{k=1}^n\left|1-\frac q{\lambda_k}\right|,$

and hence

$\displaystyle \lim_{n\to\infty}\left\|Q_n\right\|_\infty \leq \prod_{k=1}^\infty\left|1-\frac q{\lambda_k}\right|$.

But the infinite product is $0$: we have $0<\frac q{\lambda_k}<1$ by assumption, and by the basic inequality $1-x\leq e^{-x}$ for any $x>0$, we get

$\displaystyle \prod_{k=1}^\infty\left|1-\frac q{\lambda_k}\right| \leq \prod_{k=1}^\infty\exp\left(-\frac q{\lambda_k}\right) = \exp\left(-\sum_{k=1}^\infty \frac q{\lambda_k}\right) = 0.$
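The dichotomy between divergent and convergent $\sum_k\lambda_k^{-1}$ is directly visible in the product bound $\prod_k\left|1-\frac q{\lambda_k}\right|$; a small sketch comparing all integers above $q$ (divergent sum) with the powers of $2$ above $q$ (convergent sum):

```python
def product_bound(q, lambdas):
    """Partial product of |1 - q / lambda_k| over the given exponents."""
    p = 1.0
    for lam in lambdas:
        p *= abs(1.0 - q / lam)
    return p

q = 3
# divergent sum of reciprocals (all integers above q): the product collapses to 0
dense = product_bound(q, range(q + 1, 2000))
# convergent sum (powers of 2 above q): the product stays bounded away from 0
sparse = product_bound(q, [2 ** k for k in range(2, 20)])
print(dense, sparse)
```

In the dense case the product even telescopes to $\frac{6}{(N-2)(N-1)N}$, so the numerical collapse is no accident.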

Hence, $Q_n$ vanishes uniformly as $n\to\infty$, and so $x^q$ is approximated uniformly by polynomials built only from the allowed monomials $x^{\lambda_k}$. This concludes the proof. q.e.d.
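The decay of the partial products can be illustrated numerically. Here is a minimal sketch; the exponents $\lambda_k=2k$ and the target $q=1$ are hypothetical example data, chosen so that $\sum_k q/\lambda_k$ diverges like the harmonic series:

```python
import math

# Hypothetical example data: lambda_k = 2k, q = 1, so 0 < q/lambda_k < 1
# and the sum of q/lambda_k diverges (harmonic-type divergence).
q = 1.0
n_terms = 2000

product = 1.0
exponent_sum = 0.0
for k in range(1, n_terms + 1):
    lam = 2.0 * k
    product *= abs(1.0 - q / lam)   # partial product of |1 - q/lambda_k|
    exponent_sum += q / lam         # partial sum of q/lambda_k

bound = math.exp(-exponent_sum)     # the bound exp(-sum q/lambda_k)
print(product, bound)
```

After $2000$ factors the product already sits below the bound $\exp(-\sum q/\lambda_k)$, and both tend to $0$ as more factors are taken.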

At first, I was excited because I thought one could reverse this idea to show the other direction of Müntz’ Theorem as well. However, it doesn’t work because of the second-to-last inequality given above: if the product is finite, that doesn’t say anything about how well (or how badly) $x^q$ is approximated. The proof of the reverse direction is harder, and I am only aware of proofs using deeper ideas from Fourier Analysis (that’s historically the first proof) or the Hahn-Banach Theorem (as in Rudin’s book).

But the elementary idea given here is actually very neat, even though it doesn’t, to the best of my knowledge, solve any real-world problems. A very pretty academic finger-exercise. And this should be a good place to stop.

# On approximating functions – Part One

Let us have a closer look at the Stone-Weierstrass-Theorem. It is a generalization of Weierstrass’ classical approximation theorem on polynomials which stands at the base of approximation theory. Stone took Weierstrass’ classical statement and rendered it more abstractly, so it can be used for a number of things in proofs about continuous functions. In practice, I have rarely seen it used in its pure form – maybe because it lacks a direct statement about the approximation error, and because you can base more powerful statements upon it.

Let us state the Stone-Weierstrass-Theorem in its pure form first, and its two most useful corollaries next, before we turn to different methods to prove them and to a deeper rendition of their significance.

We shall denote by $\mathcal{C}(K)$ the set of continuous complex-valued functions on the set $K$, and by $\mathcal{C}_{\mathbb{R}}(K)$ the set of real-valued continuous functions on the set $K$.

Remember that an algebra $\mathcal{A}$ over a field $F$ is a set which satisfies the following conditions:

• for any $f\in \mathcal{A}$ and any $\alpha\in F$, $\alpha f\in \mathcal{A}$, and
• for any $f, g\in \mathcal{A}$, $f+g\in \mathcal{A}$ and $f\cdot g\in \mathcal{A}$.

Also, we shall call a family $\mathcal{A}$ of functions on the set $X$ separating, if for any $x, y\in X$ with $x\neq y$, there is some $f\in \mathcal{A}$ such that $f(x)\neq f(y)$.

Stone-Weierstrass-Theorem (real version): Let $K\subset\mathbb{R}$ be a compact set and let $\mathcal{A}\subset\mathcal{C}_{\mathbb{R}}(K)$ be a separating algebra with the constant function $1\in\mathcal{A}$.

Then, $\mathcal{A}$ is dense in $\mathcal{C}_{\mathbb{R}}(K)$ in the topology of uniform convergence.

Equivalently, $\overline{\mathcal{A}} = \mathcal{C}_{\mathbb{R}}(K)$. Equivalently, for every $\varepsilon>0$ and every $f\in\mathcal{C}_{\mathbb{R}}(K)$, there is some $g\in\mathcal{A}$ with $\left\|f-g\right\|_\infty<\varepsilon$.

Stone-Weierstrass-Theorem (complex version): Let $K\subset\mathbb{C}$ be a compact set and let $\mathcal{A}\subset\mathcal{C}(K)$ be a separating algebra with the constant function $1\in\mathcal{A}$, which is closed under conjugation: for any $g\in\mathcal{A}$, also $\overline g\in\mathcal{A}$.

Then, $\mathcal{A}$ is dense in $\mathcal{C}(K)$ in the topology of uniform convergence.

Corollary: Weierstrass’ classical Theorem: For every continuous real-valued function $f$ on an interval $[a,b]$ there is a sequence of polynomials $p_n$ which converges uniformly to $f$ on $[a,b]$.

Corollary: On trigonometric polynomials: For every continuous $2\pi$-periodic real-valued function $f$ there is a sequence of trigonometric polynomials $\sum_{j=0}^n \alpha_j\cos(jx)+\beta_j\sin(jx)$ which converges uniformly to $f$.

In this second corollary, the objects don’t look like polynomials at all, but in the proof we will give a hint on why the series can be called that. It has to do with the representation $(e^{iz})^j$.

The conditions in the theorem are quite natural. You need some sort of richness in your set of approximating functions, and this is achieved by the separation: if there were two points $x\neq y$ and you couldn’t find a separating $f\in\mathcal A$, then how could you properly approximate any given continuous function which takes different values at $x$ and $y$? You could never, well, separate the values with what the algebra provides you. Hence, this is obviously a necessary condition. As Hewitt/Stromberg point out nicely, this obviously necessary condition is also sufficient in the real case – that accounts for some of the beauty and elegance of the theorem.

In the complex version, there is some sort of extra-richness that we must provide for the theorem to hold. We obviously can’t do without that: Let us think of the family of polynomials on the unit disk in $\mathbb{C}$. They are an algebra, alright, they contain the constant functions, and they are separating (even the identity polynomial $f(x)=x$ can do that). But they are holomorphic as well, and the uniform limit on a compact set of holomorphic functions will be holomorphic again (this follows very naturally from Cauchy’s Integral Formula) – but certainly, there are continuous functions on the unit disk which are not holomorphic, so there needs to be an extra assumption in the Stone-Weierstrass-Theorem. And it turns out that the missing assumption can already be met by demanding that complex conjugates be contained in $\mathcal{A}$ as well. Let us show this, why not:

Proof of the complex version basing on the real version

The real and imaginary part of any $g\in\mathcal{A}$ can be represented as

$\displaystyle\mathrm{Re}g=\frac{g+\overline g}{2}$

and

$\displaystyle\mathrm{Im}g=\frac{g-\overline g}{2i}$,

which means that $\mathrm{Re}g, \mathrm{Im}g \in \mathcal{A}$, by assumption. Moreover, if $x, y\in K$ and $x\neq y$, then by separation there is some $g\in\mathcal{A}$ with $g(x)\neq g(y)$, and that means at least $\mathrm{Re}g(x)\neq\mathrm{Re}g(y)$ or $\mathrm{Im}g(x)\neq\mathrm{Im}g(y)$. Since $\mathrm{Im}g = \mathrm{Re}(-ig)$ and $-ig\in\mathcal{A}$, the imaginary parts are themselves real parts of functions in $\mathcal{A}$; hence the family of real parts of functions in $\mathcal{A}$ is a separating algebra and meets the conditions of the real version of the theorem. The theorem yields that this family is dense in $\mathcal{C}_{\mathbb{R}}(K)$.

Hence, for any $f\in\mathcal{C}(K)$ and for any $\varepsilon>0$, there are some $g_0, g_1\in\mathcal{A}$ such that

$\displaystyle\left\|\mathrm{Re}f - g_0\right\|_\infty < \frac\varepsilon2$

and

$\displaystyle\left\|\mathrm{Im}f - g_1\right\|_\infty < \frac\varepsilon2$.

Thus,

\begin{aligned}\displaystyle\left\|f - (g_0+ig_1)\right\|_\infty &= \left\|\mathrm{Re}f - g_0 + i(\mathrm{Im}f - g_1)\right\|_\infty \\ &\leq \left\|\mathrm{Re}f - g_0\right\|_\infty + \left\|\mathrm{Im}f - g_1\right\|_\infty \\ &< \varepsilon. \end{aligned}

Since $g_0+ig_1\in\mathcal{A}$, the complex version is proved. q.e.d.

While we’re at it, let’s give the proofs for the corollaries now.

Proof of Weierstrass’ classical Theorem

That’s rather simple: the set of polynomials is obviously an algebra, and it’s separating because you can find the identity polynomial $g(x)=x$ in it. The conditions of the theorem are met – the corollary follows. q.e.d.

Proof of the statement on trigonometric polynomials

This follows from the complex version. Let us look at some $2\pi$-periodic function $f:\mathbb{R}\to\mathbb{R}$. The domain of $f$ is not compact, but all that matters is the compact interval $[0,2\pi]$.

Now, Weierstrass’ classical Theorem (the first corollary) already allows us to approximate $f$ uniformly on $[0,2\pi]$ by ordinary polynomials: for any $\varepsilon>0$, there are some $n$ and $c_j$ with

$\displaystyle\left\|f(z) - \sum_{j=0}^nc_jz^j\right\|_\infty < \varepsilon.$

But we are interested in trigonometric polynomials, and those we get from the complex version. Consider the bijection $z\mapsto e^{iz}$ from $[0,2\pi)$ onto the compact complex set $S^1$, and take $g(e^{iz}):=f(z)$, which is well-defined as a map $g:S^1\to\mathbb{R}$ precisely because $f$ is $2\pi$-periodic. We apply the complex version to this function $g$, using the approximating functions $\sum_{j=-n}^nc_je^{ijz}$ (in order to show the resemblance with polynomials, they can also be written as $\sum_{j=-n}^nc_j(e^{iz})^j$). They are an algebra containing the constant functions, they are separating because the identity function $w\mapsto w$ on $S^1$ is among them, and the extra assumption on complex conjugation is met because of $\overline{e^{ijz}} = e^{-ijz}$. The complex version thus applies and we can find some appropriate $n$ and $c_j$ with

$\displaystyle\left\|g(e^{iz}) - \sum_{j=-n}^nc_je^{ijz}\right\|_\infty < \varepsilon.$

Now, we can represent $c_j = a_j+ib_j$ with $a_j,b_j\in\mathbb{R}$, and $e^{ijz} = \cos(jz)+i\sin(jz)$, which yields

$\displaystyle\left\|g(e^{iz}) - \sum_{j=-n}^n\bigl(a_j\cos(jz)-b_j\sin(jz)\bigr) - i\sum_{j=-n}^n\bigl(a_j\sin(jz)+b_j\cos(jz)\bigr)\right\|_\infty < \varepsilon,$

but we had chosen $g(e^{iz})=f(z)$, which is real by assumption (no imaginary part); and discarding the imaginary part of the approximation can only decrease the error, since $\left|f-A\right|\leq\left|f-A-iB\right|$ pointwise for real-valued $f$, $A$ and $B$. So:

$\displaystyle\left\|f(z) - \sum_{j=-n}^n\bigl(a_j\cos(jz)-b_j\sin(jz)\bigr)\right\|_\infty < \varepsilon.$

Finally, we employ the relations $\cos(-x) = \cos(x)$ and $\sin(-x) = -\sin(x)$ to find

$\displaystyle\left\|f(z) - a_0 - \sum_{j=1}^n \bigl((a_j+a_{-j}) \cos(jz) - (b_j-b_{-j})\sin(jz)\bigr)\right\|_\infty < \varepsilon.$

By appropriate definitions of the coefficients $\alpha_j$ and $\beta_j$, we have proved the corollary:

$\displaystyle\left\|f(z) - \sum_{j=0}^n \bigl(\alpha_j\cos(jz)+\beta_j\sin(jz)\bigr)\right\|_\infty < \varepsilon.$ q.e.d.

In a way, that was surprisingly non-smooth for the proof of a corollary. But the Fourier series people are thankful for a proof like that. We shall see a sketch of another proof of that later, but this one here does without explicit computations of integrals and fuzzing around with sine-cosine-identities.
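As a numerical illustration of the corollary (not of its proof), one can damp the Fourier partial sums of a concrete continuous function à la Fejér, which is known to give uniform convergence. A sketch, using as hypothetical example the closed-form Fourier coefficients of $\left|\sin z\right|$:

```python
import math

# Known closed-form Fourier series:
# |sin z| = 2/pi - (4/pi) * sum_{k>=1} cos(2kz) / (4k^2 - 1).
# We damp the harmonics with Fejér weights (1 - j/(N+1)) for uniform convergence.
def fejer_abs_sin(N):
    def s(z):
        total = 2 / math.pi
        for k in range(1, N // 2 + 1):
            damping = 1 - 2 * k / (N + 1)   # Fejér weight of the 2k-th harmonic
            total -= damping * (4 / math.pi) * math.cos(2 * k * z) / (4 * k * k - 1)
        return total
    return s

zs = [2 * math.pi * i / 400 for i in range(400)]  # grid over one period
errs = []
for N in (8, 32, 128):
    s = fejer_abs_sin(N)
    errs.append(max(abs(abs(math.sin(z)) - s(z)) for z in zs))
print(errs)
```

The computed sup-norm errors shrink as the degree grows, exactly as the corollary predicts for this continuous (but non-smooth) periodic function.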

Up until now, everything has hinged on the real version of the Stone-Weierstrass-Theorem. In order to give a self-contained proof of this, we are going to need several lemmas and a little work. We take the proof from Königsberger’s book on Calculus (it’s very similar to the proof given in Heuser’s book). Let us start with the

The square root Lemma: Define the generalized binomial coefficient via $\binom{\alpha}{n}:=\frac{\alpha(\alpha-1)\cdots(\alpha-n+1)}{n!}$. Then we have the generalized binomial formula $(1+x)^\alpha=\sum_{k=0}^\infty\binom{\alpha}{k}x^k$ for any $\left|x\right|<1$.

For $\alpha>0$, uniform and absolute convergence even holds for any $\left|x\right|\leq1$.

In particular, for $\alpha=\frac12$ and any $x\in[-1,1]$,

$\displaystyle\sqrt{1+x} = \sum_{k=0}^\infty\binom{\frac12}{k}x^k$.

Proof: One part is easy, if we apply the appropriate machinery. Since $(1+x)^\alpha$ is holomorphic with a possible singularity at $x=-1$, we can find its absolutely convergent power series in the point $x=0$ with a radius of convergence at least $1$:

$\displaystyle (1+x)^{\alpha} = \sum_{k=0}^\infty \left.\frac{d^k}{dx^k}(1+x)^{\alpha}\right|_{x=0} \frac1{k!}x^k,\qquad\left|x\right|<1.$

But the derivatives in $x=0$ are easily found to be $\left.\frac{d^k}{dx^k}(1+x)^{\alpha}\right|_{x=0} = \prod_{j=0}^{k-1}(\alpha-j)$, and dividing by $k!$ yields precisely $\binom{\alpha}{k}$. This proves the series representation.

This can also be shown in an elementary way, as seen in Hewitt/Stromberg.

We still need to prove something about convergence on the boundary of the circle for $\alpha>0$. Let us consider the series $\sum_{k=0}^\infty\left|\binom{\alpha}k\right|$. We have, for $k\geq\alpha$:

$\displaystyle\frac{\left|\binom{\alpha}{k+1}\right|}{\left|\binom{\alpha}k\right|} = \frac{\left|\alpha-k\right|}{k+1} = \frac{k-\alpha}{k+1}.$

In particular, for those large $k$:

$\displaystyle k\left|\binom{\alpha}k\right|-(k+1)\left|\binom{\alpha}{k+1}\right| = \alpha \left|\binom{\alpha}k\right| > 0$

The sequence $k\left|\binom{\alpha}k\right|$ is thus eventually decreasing in $k$; it’s bounded below by $0$ and therefore has a limit $\gamma\geq0$. Now look at the telescoping series

$\displaystyle \sum_{k=0}^\infty \left(k\left|\binom{\alpha}k\right| - (k+1)\left|\binom{\alpha}{k+1}\right|\right),$

whose $p$-th partial sum is $-(p+1)\left|\binom{\alpha}{p+1}\right|$, which converges to $-\gamma$ (however, only the first few terms of it are negative, which gives the negative limit). So, the telescoping series converges, and we find

\begin{aligned}\displaystyle \sum_{k=0}^\infty\left|\binom{\alpha}k\right| &= \sum_{k=0}^{\left\lfloor\alpha\right\rfloor}\left|\binom{\alpha}k\right| + \sum_{k=\left\lfloor\alpha\right\rfloor+1}^\infty\left|\binom{\alpha}k\right| \\ &= C_1 + \frac1\alpha\sum_{k=\left\lfloor\alpha\right\rfloor+1}^\infty\left(k\left|\binom{\alpha}k\right| - (k+1)\left|\binom{\alpha}{k+1}\right|\right) \\ &= C_1 + C_2. \end{aligned}

This shows that the series $\sum_{k=0}^\infty\left|\binom{\alpha}k\right|$ is convergent. For $x\in[-1,1]$, we are actually interested in

$\displaystyle \left|\sum_{k=0}^\infty\binom{\alpha}{k}x^k\right|\leq \sum_{k=0}^\infty\left|\binom{\alpha}{k}\right|,$

which is convergent. By the Weierstrass M-test, the series thus converges absolutely and uniformly on the compact interval $[-1,1]$.

The fact that $(1+x)^\alpha = \sum_{k=0}^\infty\binom{\alpha}{k}x^k$ even for $x=-1$ and $x=1$ follows by continuity on $[-1,1]$ and by equality of both sides on $(-1,1)$. q.e.d.
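The lemma invites a quick numerical check. The sketch below sums $\sum_k\binom{1/2}{k}x^k$ via the recurrence $\binom{\alpha}{k+1}=\binom{\alpha}{k}\frac{\alpha-k}{k+1}$, including the delicate endpoints $x=\pm1$; the truncation order $4000$ is an arbitrary choice:

```python
import math

def binom_series_sqrt(x, n):
    """Partial sum sum_{k=0}^n binom(1/2, k) x^k, coefficients via the
    recurrence binom(a, k+1) = binom(a, k) * (a - k) / (k + 1)."""
    c, total = 1.0, 0.0
    for k in range(n + 1):
        total += c * x**k
        c *= (0.5 - k) / (k + 1)
    return total

# Convergence on all of [-1, 1], endpoints included (alpha = 1/2 > 0).
errors = {x: abs(binom_series_sqrt(x, 4000) - math.sqrt(1 + x))
          for x in (-1.0, -0.5, 0.3, 1.0)}
print(errors)
```

The endpoint $x=-1$ converges slowest: there all tail terms carry the same sign, while inside $(-1,1)$ the geometric decay of $x^k$ takes over.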

The Closure Lemma: Let $\mathcal A$ be an algebra of continuous functions on a compact set and $\overline{\mathcal A}$ its closure in the topology of uniform convergence. Then, for any $f,g\in\overline{\mathcal A}$:

$f+g$, $fg$, $\left|f\right|$, $\max(f,g)$ and $\min(f,g)$ are in $\overline{\mathcal A}$.

Proof: Let $\varepsilon>0$. There are $p,q\in\mathcal A$ with $\left\|f-p\right\|_\infty<\varepsilon$ and $\left\|g-q\right\|_\infty<\varepsilon$. Therefore,

$\displaystyle\left\|f+g-(p+q)\right\|_\infty \leq \left\|f-p\right\|_\infty + \left\|g-q\right\|_\infty < 2\varepsilon,$

and

\begin{aligned} \displaystyle\left\|fg-pq\right\|_\infty &= \left\|fg-fq+fq-pq\right\|_\infty \\ &\leq \left\|f-p\right\|_\infty\left\|q\right\|_\infty + \left\|f\right\|_\infty\left\|g-q\right\|_\infty \\ &\leq \left\|f-p\right\|_\infty\bigl(\left\|g\right\|_\infty+\left\|g-q\right\|_\infty\bigr) + \left\|f\right\|_\infty\left\|g-q\right\|_\infty\\ &\leq \varepsilon\bigl(\left\|g\right\|_\infty + \varepsilon + \left\|f\right\|_\infty\bigr) < C\varepsilon. \end{aligned}

The right-hand side can be made arbitrarily small, which proves the first two statements.

To deal with $\left|f\right|$, we employ the compactness of the set: $f$ is bounded. We may assume $f\neq0$ (otherwise there is nothing to show) and consider the function $\phi = \frac{f}{\left\|f\right\|_\infty}$ taking values in $[-1,1]$. Since $\overline{\mathcal A}$ is closed under multiplication by scalars and $f\in\overline{\mathcal A}$, we get $\phi\in\overline{\mathcal A}$. By the Square root Lemma,

$\displaystyle\left|\phi(x)\right| = \sqrt{\left|\phi(x)\right|^2} = \sqrt{1+\bigl(\phi^2(x)-1\bigr)} = \sum_{k=0}^\infty\binom{\frac12}{k}\bigl(\phi^2(x)-1\bigr)^k,$

with an absolutely and uniformly convergent series representation. There is some $N_0$ with

$\displaystyle \left\|\left|\phi(x)\right| - \sum_{k=0}^{N_0}\binom{\frac12}{k}\bigl(\phi^2(x)-1\bigr)^k\right\|_\infty < \frac\varepsilon2.$

By the arguments given above, together with induction, one sees that the partial sum up to $N_0$ belongs to $\overline{\mathcal A}$, if $\phi$ does. Hence, there is some $r\in\mathcal A$ with

$\displaystyle\left\|r - \sum_{k=0}^{N_0}\binom{\frac12}{k}\bigl(\phi^2(x)-1\bigr)^k\right\|_\infty < \frac\varepsilon2.$

This yields

\begin{aligned} \displaystyle \bigl\|\left|\phi(x)\right| - r\bigr\|_\infty &\leq \left\|\left|\phi(x)\right| - \sum_{k=0}^{N_0}\binom{\frac12}{k}\bigl(\phi^2(x)-1\bigr)^k\right\|_\infty + \left\|\sum_{k=0}^{N_0}\binom{\frac12}{k}\bigl(\phi^2(x)-1\bigr)^k - r\right\|_\infty \\ &< \frac\varepsilon2+\frac\varepsilon2 = \varepsilon. \end{aligned}

Thus, $\left|\phi\right|\in\overline{\mathcal A}$, and therefore $\left|f\right|\in\overline{\mathcal A}$.

Finally, we apply the equalities

$\displaystyle \max(f,g) = \frac12\bigl(f+g+\left|f-g\right|\bigr),\qquad \min(f,g) = \frac12\bigl(f+g-\left|f-g\right|\bigr),$

to show the final statements. q.e.d.
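The device used for $\left|f\right|$ can be watched in action: the sketch below approximates $\left|x\right|$ on $[-1,1]$ by the truncated binomial series of $\sqrt{1+(x^2-1)}$, exactly as in the proof (the truncation orders are arbitrary choices):

```python
def abs_via_series(x, n):
    # |x| = sqrt(1 + (x^2 - 1)) = sum_k binom(1/2, k) (x^2 - 1)^k on [-1, 1]
    c, t, total = 1.0, 1.0, 0.0
    u = x * x - 1.0
    for k in range(n + 1):
        total += c * t
        c *= (0.5 - k) / (k + 1)   # next generalized binomial coefficient
        t *= u                      # next power of (x^2 - 1)
    return total

xs = [-1 + i / 100 for i in range(201)]  # grid on [-1, 1]
err_n200 = max(abs(abs(x) - abs_via_series(x, 200)) for x in xs)
err_n2000 = max(abs(abs(x) - abs_via_series(x, 2000)) for x in xs)
print(err_n200, err_n2000)
```

The worst error sits at $x=0$, where the kink of $\left|x\right|$ lives, and it shrinks as more terms are taken – a polynomial approximation of $\left|x\right|$, which is precisely what the Closure Lemma needs.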

Note that we need $\overline{\mathcal{A}}$ to find these functions, even if $f,g\in\mathcal{A}$. The algebra $\mathcal{A}$ itself is not rich enough, in general, to contain the functions in the statement of the Closure Lemma. Besides, we have made frequent use of $\overline{\overline{\mathcal{A}}}=\overline{\mathcal{A}}$, which is at the heart of why the Closure Lemma works.

The Approximation Lemma: Let $\mathcal A\subset\mathcal{C}_{\mathbb{R}}(K)$ be a separating algebra containing the constant function $1$, and let $f\in\mathcal{C}_{\mathbb{R}}(K)$, $x\in K$, $\varepsilon>0$. Then, there is some $q_x\in\overline{\mathcal A}$ with the properties

$q_x(x) = f(x)$ and $q_x(y) \leq f(y)+\varepsilon$ for any $y\in K$.

Proof: First of all, for any $z\in K$ and any prescribed values $a, b\in\mathbb{R}$, there is some function $h_z\in\mathcal A$ with $h_z(x)=a$ and $h_z(z) = b$. Take

$\displaystyle h_z(y) = (b-a)\frac{g(y)-g(x)}{g(z)-g(x)}+a$

with an appropriate $g\in\mathcal{A}$ for instance (separation allows for this to be well-defined: pick $g$ such that $g(z)\neq g(x)$). In particular, one can choose $a=f(x)$, $b=f(z)$, which makes $h_z$ coincide with $f$ at least in the points $x$ and $z$.

Since $h_z$ and $f$ are continuous, there is some interval $I_z$ around $z$, with $h_z(y)\leq f(y)+\varepsilon$ for all $y\in I_z$. Now, any point $z\in K$ is at least contained in the interval $I_z$, and by compactness, finitely many of those intervals suffice to cover $K$. Say, $K\subset\bigcup_{i=1}^nI_{z_i}$. Then consider

$\displaystyle q_x:=\min(h_{z_1},\ldots,h_{z_n}),$

which is in $\overline{\mathcal A}$ by the Closure Lemma. Besides, $q_x(x) = \min\bigl(h_{z_1}(x),\ldots,h_{z_n}(x)\bigr) = \min\bigl(f(x),\ldots,f(x)\bigr) = f(x)$ by construction, and for every $y\in K$, there is some $j$, such that $y\in I_{z_j}$, and so $q_x(y) = \min\bigl(h_{z_1}(y),\ldots,h_{z_n}(y)\bigr) \leq h_{z_j}(y) \leq f(y)+\varepsilon$. q.e.d.
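The interpolation $h_z$ is simple enough to spell out concretely. A minimal sketch with hypothetical choices ($K=[0,1]$, $g$ the identity as separating function, $f(t)=t^2$, anchor points $x=0.2$ and $z=0.8$):

```python
# A concrete instance of the two-point interpolation h_z from the proof.
f = lambda t: t * t
g = lambda t: t                      # any g with g(z) != g(x) works; separation guarantees one
x, z = 0.2, 0.8
a, b = f(x), f(z)                    # prescribe the values of f at x and z

def h_z(y):
    return (b - a) * (g(y) - g(x)) / (g(z) - g(x)) + a

print(h_z(x), h_z(z))                # h_z coincides with f at x and z
```

Between the anchor points, $h_z$ is of course free to deviate from $f$; the lemma only needs the one-sided bound $h_z\leq f+\varepsilon$ near $z$, which continuity delivers.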

Proof of the Stone-Weierstrass-Theorem, real version: Let $f\in\mathcal{C}_{\mathbb{R}}(K)$ and $\varepsilon>0$ be given. For any $x\in K$, choose some $q_x\in\overline{\mathcal A}$ as in the Approximation Lemma (in particular: $q_x(x)=f(x)$). Then, by continuity, pick an open interval $U_x$ around $x$, such that for any of the $y\in U_x\cap K$,

$\displaystyle q_x(y) \geq f(y)-\frac\varepsilon2.$

By compactness, finitely many of the $U_{x_i}$ suffice to cover $K$. Consider

$\displaystyle g:=\max(q_{x_1},\ldots,q_{x_n}),$

which is in $\overline{\mathcal A}$ by the Closure Lemma. For any $z\in K$, we find some $i$ with $z\in U_{x_i}$, and thus

$\displaystyle g(z) = \max\bigl(q_{x_1}(z),\ldots,q_{x_n}(z)\bigr) \geq q_{x_i}(z) \geq f(z)-\frac\varepsilon2.$

By the Approximation Lemma, however,

$\displaystyle g(z) = \max\bigl(q_{x_1}(z),\ldots,q_{x_n}(z)\bigr) \leq \max\bigl(f(z)+\varepsilon,\ldots,f(z)+\varepsilon\bigr) = f(z)+\varepsilon.$

Now, since $g\in\overline{\mathcal A}$, we can pick some $p\in\mathcal A$ with $\left\|g-p\right\|_\infty<\varepsilon$. Then, we have found

$\displaystyle\left\|f-p\right\|_\infty\leq\left\|f-g\right\|_\infty+\left\|g-p\right\|_\infty < \varepsilon+\varepsilon = 2\varepsilon$.

The Stone-Weierstrass-Theorem is proved. q.e.d.

A slightly different proof is given in Hewitt/Stromberg. The idea is about the same, but the compactness argument works on the image of $f$, w.l.o.g. the interval $[0,1]$. Here, we have used the compactness of the definition space $K$. Of course, Hewitt/Stromberg can’t do without the compactness of $K$, since they need that $f$ takes its maximum and minimum value. The downside of their approach is that they need to iterate their argument, so their approximating function is actually a series of those max-min-functions that we have used. But there are not too many differences in the proofs, so let’s just stick to that. I have actually found the proof given above a little slicker; but I have come to admire the book by Hewitt/Stromberg for its clarity and style. There are several nice things hidden inside there to come back to.

For now, we’ll stop. But there’s some more backstory about the Stone-Weierstrass-Theorem to come soon.

# Math is important

There have been many instances when people told me about how much they disliked math at school – especially when I speak highly of my studies and that I do abstract math frequently even today. On top of that, I have talked to many students (either B.Sc. students or teachers-to-be) who speak similarly about math at university-level. And finally, I know several math teachers who chime in to that tune as well. Among the most usual comments that I get are:

• math at school was so boring and dry, it doesn’t have anything to do with what happens in real life
• it would be sufficient if I had just learned the basic computations (addition, subtraction, multiplication and division) and to deal with percentages – that’s all I’ll ever need
• math at school was so hard, I never understood why I should have come up with these particular tricks
• the exercises at the university were so hard, I never managed any one of them on my own
• it’s no use to learn so many hard topics in math that I will never use in my later work (and, oh boy, I had to go to so freaking weird lectures)

Here’s my random rant about why none of these comments is valid – or why any of these comments would apply just the same to any other high school subject. I just had to collect those arguments once in a single place, to get them off my heart. And for further reference. None of these arguments is originally mine, which might just show how deep this anti-math feeling is rooted.

Before I start, let me make something clear: I do not intend to say that the whole universe of mathematics is interesting in and of itself and that everybody should make themselves acquainted with it just for the sake of it. There is a lot of mathematics in which I have lost my interest as time went by (or in which I never had any interest in the first place). On top of that, most of what I have to say applies accordingly to other sciences or topics: I myself would never have become much of a biologist or theatre critic. In what follows, I will rather talk about things that should be standard knowledge to anybody who considers themselves an educated person: things that are part of what mankind has achieved, be it the Mona Lisa, be it Romeo and Juliet or be it Newton’s Law of Gravitation. In some way, many people tend to say that the first two are “must-knows” but the last one is not. You can live your life perfectly at ease if you’re unaware of Newton’s Law of Gravitation, but if you don’t recognize the Mona Lisa or have never seen it, some would think you’ve been living with eyes closed – how’s that? A painting may be easier to approach than a theorem, but both require some sort of education to properly grasp them and both will be (aesthetically) pleasing to the educated beholder. And there also is really deep beauty in maths, there is elegance, there are unexpected connections. But it is hard work, no doubt about that.

On top of that, there are many people who work in some shade of a scientific profession. That not only applies to scientists but also to engineers, programmers, actuaries, and – yes – especially (math) teachers. All these people must possess a certain way of thinking to do their job properly: an analytic, logical train of thought which allows them to cut through a problem and to make their reasoning clear to other people (it hardly matters whether you have to explain your work to your boss or to a bunch of children). Besides, these people have learnt to actually tackle a problem and not run away if they don’t understand it at first sight. In professional practice, this trained way of thinking is what counts – it is what employers look for in scientists: they have learnt to think and they can’t be scared away by some hard and unsolved problem. This way of thinking is taught by considering mathematical problems, but the actual skill is generally not mathematical. If the way of logical thinking could just as well be taught by acquiring a truck driver’s licence, then employers would also look for truck drivers, why not.

So, this is one of the first points that I’d like to make: in learning or studying math, you’re supposed to learn to think. The fact that you learn math is just incidental, if you will. You’re supposed to be suspicious about what people just tell you without proof – you’re supposed to think for yourself and to convince yourself of the truth. In social studies, in theology, in psychology, the teacher might shoot down any of your questions with a phrase like “I’m right because that’s the general opinion and you’re not yet aware of it”, but in mathematics and in the sciences any student is able to convince the teacher of mistakes that they’ve made. I am aware that I’m over-simplifying here, since there is no such thing as an absolute truth in psychology and social sciences (theology is somewhat tricky in this respect) and there is a living culture of discussions – those professions will teach you a different, albeit similarly important way of thinking. Discussions are equally important in math, but in the end there is only one answer (apart from very special cases that usually aren’t taught) and the students are supposed to learn arguments towards that answer. In philosophy, no-one needs to be convinced by arguments since no-one is wrong; maybe different or on shaky ground, but not wrong. My point is not to diminish the non-mathematical sciences, my exact point is that you can learn argumentation and discussion on topics that actually have universal truth hidden inside them. This is how you can uncover false arguments, how to fix arguments gone wrong, how to think twice at crucial points – and how to wonder about unconvincing statements other people may make (and how to question these statements).

Another issue made about math is that it is considered hard. Well, that’s right. Math is hard and exercises are hard. But if your mind is to be trained to a purely logical way of thinking, then how else would you do the training? If you want to run a marathon, you also need to do your training (and yes, of course, running a marathon is hard as well). On top of that: problems at work are harder than the math exercises. Applications of mathematics are even harder than abstract math, since the abstraction is gone. There are no proper circles in the real world with a circumference of $2\pi r$. In applications you always have to account for measurement errors and lacking data quality – and you have to stop dealing with those inaccuracies at some point because you couldn’t possibly get all of them right (but at least you should give an estimate then about how big your error is). Your math exercises are supposed to give indications about what can and what cannot be done, and how you can deal with that; about using math successfully, and about discussing with and convincing others by your logic. And without training, you can’t do anything properly. Learning is when you do things that you can’t do yet.

Now, some people will say they don’t have to deal with circles or anything similar in their everyday life, so this example doesn’t apply. Maybe so. But then they have to deal with something else and time and again something changes. People are not robots who receive their programming once and then do the same thing day after day – and every time something changes, you will have to adapt and to think about that. It might be useful to be able to think about that properly: what am I supposed to do now, how do I do that, how can I optimize that (i.e. have less trouble or effort with it). This doesn’t need to be mathematical thinking – it’s just that math and exercises may have taught you how to deal efficiently with “new things that you don’t yet understand”.

The other point of view to this is: math is universal because it’s abstract. Some things and techniques have been invented for a special purpose, but when you take the special situation away from it, the technique can be used more generally. I won’t start the long line of examples here, it would probably just distract from my point. Abstraction is hard, but it unveils the underlying principles and once you’ve understood those, many things will just fall into your hands like that. Or, to phrase it differently: you should make things as simple as possible – but not any simpler. Applications are not simple because they may be concrete – in this respect, math is simple because it is not concrete. And to give a quote from one of my high school math teachers: “The task of mathematics is to make life as simple as possible.”

About math exercises, some complain that, sometimes, tricks are required and you can’t come up with the trick in the first place. That’s right. But no-one expects you to invent a mesmerizing trick on your own. You’re supposed to look at the exercise and think about it, try to solve it. If you have spent some time with the exercise, sometimes you will come up with a solution (and sometimes a neater solution uses a dirty trick that you didn’t see). But there will always be exercises that you can’t solve. That’s not a problem. When you are given the solution, you are supposed to think about that, too. You wouldn’t understand it properly, if you hadn’t thought about the question in the first place – and you will understand the trick, so it will be part of your own quiver of tricks from now on. If there is a similar exercise later, you can use the trick since you’ve been taught it now. And, yes, there are many tricks. It takes experience and effort to know which can be used in a given situation and which can’t. This, in turn, enables you to deal with more complex problems that you couldn’t tackle without that quiver of tricks. That would bring us back to the training-argument from above: you can’t run a marathon, if you just walk around your house once a week. If you want to become better in something, this will take effort and time.

Much of what I said about some profession or science as a given example can be replaced by something else and still keep the same meaning. I actually intend to say that a broad education is essential for anything an educated person does – you need to be really good in your special subject, but you also need to try and see the whole picture (or understand why you can’t see it). It’s rather my own point of view that made this posting start with mathematics. It actually applies to most other educated professions I can think of, and many people shy away from these “alien” educated professions when a discussion touches on them. I myself am often confronted with the mathematical angst of many people, which is what made me write this text.

Seeing the bigger picture (whatever your personal profession and specialty is) will also help you in another respect. You always need to know a little more than you actually have to use. Thus, you will know the boundaries of what you’re doing (and the shortcuts that you may have taken). It will be immensely helpful to have an understanding of why the world works as it does, what the principles are – that doesn’t mean you have to properly understand each and every bit of it, your time is finite of course, but the basics should be standard knowledge. That makes it possible to appreciate the really hard stuff (e.g. it’s not an easy thing to encrypt your online banking access data; it’s not easy to write a song that will rank top in the charts; it’s not easy to understand why everyone actually puts any trust in paper-money instead of gold, bread or shells).

Especially teachers must know more than their students, and they must know more than what they actually teach their students. On top of that, teachers need to know how to deal with children and how to keep them awake and attentive in their classes. That’s why this job is so hard, after all.

Similar reasoning applies to languages. I can hardly overstate the need to speak foreign languages. You will understand the culture of a foreign country much better (if at all) if you can make yourself familiar with its language. And that also applies to dead languages such as Latin or Ancient Greek. German educator Wilhelm von Humboldt once said “Having learnt Greek can be no less useful to a carpenter than making tables is useful to a professor”. You learn to think analytically by learning the grammar and you learn about another culture. How can that be entirely useless, even though no-one will have a conversation with you in Latin? Of course, your time is finite and you’d rather deal with something else – but apart from very legitimate indulgent things like watching some football game or spending time with your friends: is a soap-opera really the better way to kill your time as opposed to learning a new language, playing chess, doing a painting or thinking about a science problem? What do you want to do with your time?

Finally, one paragraph about the boundaries of math. Mathematics is not a description of the world. Mathematics is a set of assumptions (called axioms) which are taken at their face value because no-one doubts them and you can’t prove anything without having a starting point. From those axioms, things are proven by logical conclusions. The construction is powerful enough that people have drawn useful and sensible conclusions from it to describe the world. But mathematics is not the world. It is a model (albeit quite a good one) and there are boundaries. Dealing with mathematics allows you to find out where those boundaries are and where your model assumptions break down (do they break down in your application?). You mustn’t use math without a sufficiently deep knowledge, and this is why economists, psychologists and many others have to deal with statistics, even though their computers will tell them about the significance of their test statistics. Still, you have to know the limits and deal with them.

That’s why people are supposed to deal with math.

# Eigenvalues and how to find them

Recently, I have taken some time to look into numerical eigenvalue algorithms. This is a subject that has interested me for some time now, since it is not regularly taught in basic lectures on numerics, and there are many instances where you actually need to compute eigenvalues. Of course, any reasonable math software will have built-in algorithms that do the trick for you – but time and again, I’d like to know what makes them tick. So how do they work?

In what follows, I refer to chapter 3 of the basic-ish book by Hämmerlin on Numerical Mathematics. It is not very well known, and its notation is somewhat intricate, but at least it started at roughly the same level where I stood. For more advanced ideas, the book by Stoer and Bulirsch might be more applicable, or so I’m told – however, I haven’t had a closer look at it yet.

To fix notation, we look at some real or complex $n\times n$ matrix $A = (a_{ij})_{i,j=1,\ldots,n}$, and we are interested in its eigenvalues $\lambda_i$ with corresponding eigenvectors $v_i$, $i=1,\ldots,n$. We denote the $n$-dimensional identity matrix by $I$.

As it turns out, these eigenvalue algorithms are rather intricate beings. There are different ones for slightly different purposes, and if you are interested in eigenvectors as well – that’s almost always the bottleneck. So, unless you have a great package of built-in algorithms (such as LAPACK, which is the foundation of what R and Matlab do in their eigenvalue functions), you should really think twice before having a look at eigenvectors.

The most basic algorithm is the Power Method, which will yield the largest eigenvalue and its corresponding eigenvector (large in the sense of absolute value). It is based upon the eigen-decomposition of vectors: if there are $n$ eigenvalues $\lambda_i$ and a basis of corresponding eigenvectors $x_i$, then we can expand any initial vector as $v_0 = \sum_{i=1}^n c_i x_i$ and obtain $A^k v_0 = \sum_{i=1}^n c_i\lambda_i^k x_i$. Here, the largest eigenvalue, $\lambda_1$ say, will dominate the others and you can extract its value from the iterates $A^k v_0$. The iterates will also give you the corresponding eigenvector – that is, if the eigenspace is one-dimensional. If there are other eigenvalues that have the same absolute value as $\lambda_1$, things get rather messy. Manageable, yes, but it will not be as nice as before (here, Hämmerlin goes into some detail on the convergence issues, especially of the eigenvectors). Related to this problem is the concept of deflation, where your $n$-dimensional space splits into lower-dimensional ones to deal with the eigenvalue problem in lower dimensions only (this can sometimes happen, depending on the exact structure of the matrix). This will improve the feasibility of all the algorithms discussed here, but we won’t go into this much deeper.
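To make this concrete, here is a minimal sketch of the power method in Python/NumPy. The function name and the Rayleigh-quotient stopping rule are my own choices, not taken from Hämmerlin, and the sketch assumes the dominant eigenvalue is unique – none of the messy equal-modulus cases are handled:

```python
import numpy as np

def power_method(A, n_iter=500, tol=1e-12, seed=0):
    """Approximate the dominant eigenvalue/eigenvector of A by power iteration.

    Assumes the eigenvalue of largest absolute value is unique; otherwise
    the iterates need not settle down (the messy cases discussed above).
    """
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    lam = 0.0
    for _ in range(n_iter):
        w = A @ v
        lam_new = v @ w              # Rayleigh quotient as eigenvalue estimate
        v = w / np.linalg.norm(w)    # normalize to avoid over-/underflow
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam_new, v
```

For the symmetric matrix $\begin{pmatrix}2&1\\1&2\end{pmatrix}$ with eigenvalues $3$ and $1$, this converges quickly to $3$ and the eigenvector direction $(1,1)/\sqrt2$.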

This algorithm may be what you need if you are only interested in the largest eigenvalue. This might happen in physics or when dealing with the spectral norm, but in my own applications in statistics, I usually have to know all of the eigenvalues. Thankfully, there are more possibilities. For instance, there are tailored algorithms that work particularly well on tridiagonal matrices (which may arise in problems on ODEs, for instance). Those can be based on a non-explicit computation of the characteristic polynomial – you mustn’t evaluate that polynomial straightforwardly via its coefficients, since you will get all kinds of rubbish for your answer. The problem is highly unstable from that point of view. But a recursive evaluation will do just fine, because the structure of the problem can be broken down to an evaluation of the characteristic polynomial and its derivative at any given point, thus allowing for Newton’s method to take over.
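For a symmetric tridiagonal matrix with diagonal $a_1,\ldots,a_n$ and off-diagonal $b_1,\ldots,b_{n-1}$, the recursion in question is $p_0 = 1$, $p_1(\lambda) = a_1-\lambda$ and $p_k(\lambda) = (a_k-\lambda)p_{k-1}(\lambda) - b_{k-1}^2 p_{k-2}(\lambda)$, giving $p_n(\lambda) = \det(T-\lambda I)$; the derivative can be carried along in the same loop. A sketch in plain Python (function names mine):

```python
def char_poly_and_deriv(a, b, lam):
    """Evaluate det(T - lam*I) and its lambda-derivative for the symmetric
    tridiagonal matrix T with diagonal a and off-diagonal b, via the
    three-term recurrence -- never expanding the polynomial explicitly."""
    p_prev, p = 1.0, a[0] - lam          # p_0 and p_1
    dp_prev, dp = 0.0, -1.0              # their derivatives in lambda
    for k in range(1, len(a)):
        p_new = (a[k] - lam) * p - b[k-1] ** 2 * p_prev
        dp_new = -p + (a[k] - lam) * dp - b[k-1] ** 2 * dp_prev
        p_prev, p = p, p_new
        dp_prev, dp = dp, dp_new
    return p, dp

def newton_eigenvalue(a, b, lam0, n_iter=50, tol=1e-12):
    """Newton's method on the characteristic polynomial, started at lam0."""
    lam = lam0
    for _ in range(n_iter):
        p, dp = char_poly_and_deriv(a, b, lam)
        step = p / dp
        lam -= step
        if abs(step) < tol:
            break
    return lam
```

Started near a rough eigenvalue guess (the Gershgorin disks below provide such guesses), Newton’s method then homes in on an exact eigenvalue.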

Statisticians will use eigenvalue algorithms for real symmetric matrices (or suspected covariance matrices, if you will), which have the additional property of yielding $n$ real eigenvalues (remember the principal axis theorem?). It will be interesting to find out, for statisticians at least, if all the eigenvalues are positive. Here, a clever use of orthogonal transformations, the Givens rotations, will be useful: they iteratively tone down the off-diagonal entries of your matrix (this iterative application is called the Jacobi algorithm). In the limit, you can read the eigenvalues off the diagonal. The proof of this fact is not too hard, but we won’t give it here. In practice, the question is when to stop your iterations.
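A naive sketch of the Jacobi algorithm for a real symmetric matrix might look as follows. It is deliberately simple: it rebuilds the full rotation matrix in every step instead of updating only the two affected rows and columns, and the stopping rule is crude – this is an illustration of the idea, not how a library would do it:

```python
import numpy as np

def jacobi_eigenvalues(A, tol=1e-10, max_rotations=10000):
    """Classical Jacobi algorithm: repeatedly apply a Givens rotation that
    annihilates the currently largest off-diagonal entry of the symmetric
    matrix A. The eigenvalues appear on the diagonal in the limit."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for _ in range(max_rotations):
        off = np.abs(A - np.diag(np.diag(A)))
        p, q = np.unravel_index(np.argmax(off), off.shape)
        if off[p, q] < tol:          # off-diagonal part small enough: stop
            break
        # rotation angle chosen so that the (p, q) entry becomes zero
        theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
        c, s = np.cos(theta), np.sin(theta)
        G = np.eye(n)
        G[p, p] = G[q, q] = c
        G[p, q], G[q, p] = s, -s
        A = G.T @ A @ G              # orthogonal similarity transformation
    return np.sort(np.diag(A))
```

Each rotation zeroes one off-diagonal entry (possibly reviving previously zeroed ones a little), but the total off-diagonal mass shrinks in every step, which is the core of the convergence proof alluded to above.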

For the Givens rotations, as well as for the Newton method, approximate information about the location of the eigenvalues would be helpful. Surprisingly, something can be said without much effort: the Gershgorin disks. All eigenvalues $\lambda$ of the matrix $A$ must lie in the union of the disks $\left\{z\in\mathbb{C} : \left|z-a_{kk}\right|\leq r_k\right\}$, where the radii are $r_k=\sum_{i=1, i\neq k}^n \left|a_{ki}\right|$.

The proof is remarkably simple: the eigenvalue equation $Ax=\lambda x$ can be written componentwise as $(\lambda-a_{kk})x_k=\sum_{i=1, i\neq k}^n a_{ki}x_i$. One of the indices $k$ must be such that $\left|x_k\right|=\left\|x\right\|_\infty$, and then $\left|\lambda-a_{kk}\right| \leq\left|\sum_{i=1, i\neq k}^na_{ki}\frac{x_i}{x_k}\right| \leq\sum_{i=1, i\neq k}^n\left|a_{ki}\right|$, since $\left|x_i/x_k\right|\leq 1$. q.e.d.
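Computing the disks is only a couple of lines; the following sketch (function name mine) returns the center/radius pairs so they can be intersected with whatever the current iteration knows:

```python
import numpy as np

def gershgorin_disks(A):
    """Return (center, radius) pairs of the Gershgorin disks of A:
    center a_kk, radius the sum of |a_ki| over the rest of row k."""
    A = np.asarray(A)
    radii = np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))
    return list(zip(np.diag(A), radii))
```

Every eigenvalue then lies in at least one of the returned disks, which is exactly the statement proved above.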

These Gershgorin disks can tell you when your iterative algorithm has successfully split two eigenvalues. It will be rather inconclusive with multiple zeros of the characteristic polynomial. But, as I mentioned above, those tricky situations will be troublesome for all of the algorithms.

Another fancy idea is an advanced power method. If you apply it to $A^{-1}$ instead of $A$, it will give you the smallest eigenvalue. And you might even use it on $(A-c I)^{-1}$ to find the eigenvalue closest to $c\in\mathbb{C}$. Gershgorin will tell you some proper choices of $c$ to start with. On top of that, the power method gives you the corresponding eigenvectors, as I mentioned above. That’s something that the Jacobi method will not provide – but of course you can use the power method on an eigenvalue approximation that you have gathered from one of the other algorithms. The bad thing is that you will need to apply the power method about $n$ times for this, and you might not even have the proper choices of $c$ yet, so this will take even more effort. If you’re unlucky, you still have to distinguish multiple eigenvalues – it’s a hard problem with many nasty special cases.
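A sketch of this shifted inverse iteration follows (names and stopping rule are mine). It assumes $c$ is not itself an eigenvalue, so $A-cI$ is invertible; note that one never forms the inverse explicitly, but solves a linear system in each step:

```python
import numpy as np

def shifted_inverse_iteration(A, c, n_iter=100, tol=1e-12, seed=0):
    """Power method applied to (A - c*I)^{-1}: converges to the eigenvalue
    of A closest to the shift c (assuming that eigenvalue is unique).
    Each multiplication by the inverse is realized as a linear solve."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(n)
    v /= np.linalg.norm(v)
    lam = None
    for _ in range(n_iter):
        w = np.linalg.solve(A - c * np.eye(n), v)
        v = w / np.linalg.norm(w)
        lam_new = v @ A @ v          # Rayleigh quotient w.r.t. the original A
        if lam is not None and abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam_new
```

The closer $c$ sits to the target eigenvalue, the larger the dominant eigenvalue of $(A-cI)^{-1}$ compared to the rest, and the faster the convergence – which is why good Gershgorin-based choices of $c$ matter.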

The most famous algorithm, and probably the most widely used one, is the QR algorithm. It unites ideas from the power method and the Givens rotation (when you dig down deeply, that is), it is quite fast and highly stable. But still, this algorithm doesn’t make a tough problem easy. My own implementation in R works just fine on usual matrices, but it struggles with several problems of convergence in special cases and it uses quite some memory. But then again, I didn’t really try properly… The basic idea lies in the QR-decomposition of the matrix $A = QR$, which can also be used for solving linear equations and whatnot. As the names should suggest, $R$ is an upper triangular matrix, while $Q$ is orthogonal. Then, one will compute the new matrix $A_{i+1} := R_iQ_i$ from $A_i = Q_iR_i$, and iterate. One can show that all the $A_i$ have the same eigenvalues (they differ by orthogonal similarity transformations only) and that the iterates will converge to some upper triangular matrix from which you can read off the eigenvalues.
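The basic, unshifted iteration is only a few lines if one leans on a library QR-decomposition (here NumPy’s; the symmetric test case keeps everything real, so the limit is actually diagonal). A hedged sketch, far from the safeguarded variants real libraries use:

```python
import numpy as np

def qr_algorithm(A, n_iter=500):
    """Unshifted QR iteration: factor A_i = Q_i R_i, set A_{i+1} = R_i Q_i.
    Every A_{i+1} = Q_i^T A_i Q_i is orthogonally similar to A_i, so the
    eigenvalues are preserved; for nice matrices the iterates converge to
    (quasi-)upper triangular form with the eigenvalues on the diagonal."""
    A = np.array(A, dtype=float)
    for _ in range(n_iter):
        Q, R = np.linalg.qr(A)
        A = R @ Q
    return np.sort(np.diag(A))
```

Note that $R_iQ_i = Q_i^T(Q_iR_i)Q_i = Q_i^TA_iQ_i$, which is the one-line reason why all iterates share their eigenvalues.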

The proof is somewhat intricate – already in the basic case that was proved by Wilkinson in the 1960s. That is why we will not discuss it here – but most of the books I have consulted on this give the same proof, and only in the special case of symmetric real matrices with some additional technical assumptions that you couldn’t check in the first place. The proof hinges on many QR-decompositions of various matrices performed in a row, and on the uniqueness of this decomposition.

It is rather tricky to decide when the algorithm should stop. You cannot expect the elements under the diagonal to vanish, since complex eigenvalues will not show up on the diagonal (how could they? the iteration stays real if you start from a real matrix) but form $2\times2$ blocks from which you can still read off the eigenvalues. But then, fluctuations on the first subdiagonal can either come from two conjugate complex eigenvalues or from the algorithm not having properly converged yet. I am not aware of a way to decide what happens during runtime. But, apparently, sophisticated methods have been found to cope with this in the usual implementations.

In principle, the algorithm works in the general case as well, and it works quite fast to give all the eigenvalues at the same time. The decomposition itself will take about $n^3$ operations in each step of the iterations, which is on the faster side of eigenvalue algorithms (the power method takes $n^2$ operations for the multiplications, but you’ll need to do the iterations with $n$ different matrices at least). People improve on the convergence by using the QR-decomposition of the matrix $A_i - rI$, where $r$ is the so-called shift and is often chosen as the lower right element of $A_i$ (so that the corresponding entry of $A_i - rI$ becomes $0$). Depending on how deep you like to delve into this, you can also use more sophisticated shifts than that one. I am afraid I didn’t get why this shifting should work, but, in practice, on many matrices it does wonders for the speed of convergence.
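A sketch of the shifted iteration with the simple choice $r = (A_i)_{nn}$ follows, combined with naive deflation once the last row has converged (symmetric input assumed so that everything stays real; names, tolerances and the deflation strategy are mine, not the safeguarded machinery of an actual library):

```python
import numpy as np

def qr_with_rayleigh_shift(A, tol=1e-12, max_iter=1000):
    """QR iteration with shift r = A[n-1, n-1]: factor A_i - r*I = Q R and
    set A_{i+1} = R Q + r*I (again an orthogonal similarity of A_i).
    Once the last row has converged, its diagonal entry is an eigenvalue
    and the problem is deflated to the leading (n-1) x (n-1) block."""
    A = np.array(A, dtype=float)
    eigs = []
    while A.shape[0] > 1:
        for _ in range(max_iter):
            n = A.shape[0]
            r = A[-1, -1]
            Q, R = np.linalg.qr(A - r * np.eye(n))
            A = R @ Q + r * np.eye(n)
            if np.abs(A[-1, :-1]).max() < tol:
                break
        eigs.append(A[-1, -1])
        A = A[:-1, :-1]              # deflate the converged eigenvalue away
    eigs.append(A[0, 0])
    return np.sort(eigs)
```

On generic symmetric matrices this shift makes the last subdiagonal entry collapse within a handful of iterations per eigenvalue, which is the speed-up the paragraph above describes.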

Finally, note that Wikipedia has a list of eigenvalue algorithms that goes beyond what we have covered here. It is yet another question whether this is an exhaustive list in any sense.

In the end, I have come to a much deeper appreciation of these algorithms. As I said earlier, you can’t get anywhere with the definition of an eigenvalue alone; you need more sophisticated algorithms for this problem. And these algorithms may be trivial to implement on easy matrices, but to make them work on “random” matrices is already very hard work – who guarantees that you found all the eigenvalues, and that they are reasonably exact? And that doesn’t include any thoughts about why these algorithms should work. In all of them, there is really hard work to be done, not only for the very many special cases that arise in your applications. All of this came as a surprise to me when I first thought about it – after all, eigenvalues are a very basic, very natural and rather easy concept in linear algebra. By now, I have found and appreciated that, for the numerical side of the problem, many intelligent people have spent huge amounts of time and effort. Sometimes, the gap between theory and practice is really large.