The Theorems of Schauder and Peano

In the text on Brouwer’s Fixed Point Theorem, I had confidently stated that Schauder’s Theorem follows from it with little effort and that one may easily conclude things like Peano’s Theorem from there. As a matter of fact, things run considerably deeper than I had naively thought. If sound proofs are to be given, there is technical work to be done in many places. The ideas themselves are not hard, but the sheer number of steps to be taken and the methodological machinery cannot be neglected. Let’s see.

We shall follow the lines of Heuser’s books, as we did before, to collect the ingredients for a proof of Schauder’s Fixed Point Theorem. It involves a statement about convex sets, on which we will focus first, followed by an excursion on approximation in normed vector spaces. We shall also need the theorem named after Arzelà and Ascoli, which gives a basic glimpse into the ways of thinking of functional analysis. All of this will allow us to prove Schauder’s Theorem in a rather strong flavour. For the conclusion, we split our path: we give both Heuser’s treatment of Peano’s Theorem in the spirit of functional analysis and Walter’s more elementary approach (which, however, also makes use of the Theorem of Arzelà-Ascoli).

 

Remember that a set K is called convex if for any x,y\in K and for any \alpha\in[0,1], we have \alpha x + (1-\alpha)y\in K. This formalizes the intuition that the line segment connecting x and y is contained in K as well.

 

Lemma (on convex sets): Let E be a normed space and let x_1,\ldots,x_n\in E. Let

\displaystyle\mathrm{conv}(x_1,\ldots,x_n) = \bigcap_{\substack{K\subset E~\mathrm{ convex}\\\{x_1,\ldots,x_n\}\subset K}} K

be the convex hull. Then,

\displaystyle \mathrm{conv}(x_1,\ldots,x_n) = \left\{v\in E\colon v=\sum_{i=1}^n\lambda_ix_i\text{ with }\sum_{i=1}^n\lambda_i=1; \lambda_i\geq0\right\},\qquad(\diamondsuit)

and \mathrm{conv}(x_1,\ldots,x_n) is compact.

 

Proof: Let us first prove the representation of the convex hull. For the “\subset“-direction, we will show that the set on the right-hand side of (\diamondsuit) is convex and contains x_1,\ldots,x_n. Let x = \sum_{i=1}^n\lambda_ix_i and y=\sum_{i=1}^n\mu_ix_i, with \sum\lambda_i=\sum\mu_i=1 and \lambda_i,\mu_i\geq0. Let \alpha\in[0,1], then

\displaystyle \alpha x + (1-\alpha)y = \sum_{i=1}^n\bigl(\alpha\lambda_i+(1-\alpha)\mu_i\bigr)x_i,

where \sum_{i=1}^n\bigl(\alpha\lambda_i+(1-\alpha)\mu_i\bigr) = \alpha+(1-\alpha) = 1 and all these coefficients are \geq0. Hence, \alpha x+(1-\alpha)y belongs to the set on the right-hand side of (\diamondsuit), which is therefore convex. It also contains each x_i (choose \lambda_i=1 and all other weights 0), so the convex hull, being the intersection of all convex sets containing x_1,\ldots,x_n, is contained in it.

We now turn to the “\supset“-direction. Let y_1,\ldots, y_m\in\mathrm{conv}(x_1,\ldots,x_n). We show that \sum_{j=1}^m\mu_jy_j\in\mathrm{conv}(x_1,\ldots,x_n) if \sum_{j=1}^m\mu_j = 1 and \mu_j\geq0. Applied with y_j=x_j, this means that any point that has a representation as on the right-hand side of (\diamondsuit) must be in \mathrm{conv}(x_1,\ldots,x_n). The claim is clear for m=1. For m>1, we take C:=\sum_{j=1}^{m-1}\mu_j (we may assume C>0, for otherwise the combination reduces to y_m) and \tilde\mu_j:=\frac{\mu_j}C to see

\displaystyle    \begin{aligned}    \sum_{j=1}^m\mu_jy_j &= \sum_{j=1}^{m-1}\mu_jy_j + \mu_my_m\\    &= C\sum_{j=1}^{m-1}\frac{\mu_j}{C}y_j + \mu_my_m\\    &= C\sum_{j=1}^{m-1}\tilde\mu_j y_j + (1-C)y_m.    \end{aligned}

Note that \mu_m = \sum_{j=1}^m\mu_j - \sum_{j=1}^{m-1}\mu_j = 1-C. By induction,

\displaystyle \sum_{j=1}^{m-1}\tilde\mu_jy_j\in\mathrm{conv}(x_1,\ldots,x_n),

since \sum_{j=1}^{m-1}\tilde\mu_j = \frac1C\sum_{j=1}^{m-1}\mu_j = 1. As \mathrm{conv}(x_1,\ldots,x_n) is an intersection of convex sets and hence itself convex, combining this point with y_m using the weights C and 1-C keeps us inside. Hence,

\displaystyle\sum_{j=1}^m\mu_jy_j\in\mathrm{conv}(x_1,\ldots,x_n).

Finally, we shall prove compactness. Let (y_k)_k\subset\mathrm{conv}(x_1,\ldots,x_n), with a representation y_k = \sum_{i=1}^n\lambda_i^{(k)}x_i as in (\diamondsuit). The sequences (\lambda_i^{(k)})_k are contained in [0,1] and hence have convergent subsequences. Choosing subsequences n times (once for each i=1,\ldots,n), we find a common subsequence (\lambda_i^{(k_\ell)})_\ell that converges to some \lambda_i for each i=1,\ldots,n. Besides,

\displaystyle\sum_{i=1}^n\lambda_i = \sum_{i=1}^n\lim_{\ell\to\infty}\lambda_i^{(k_\ell)} = \lim_{\ell\to\infty}1 = 1.

This yields

\displaystyle\lim_{\ell\to\infty}y_{k_\ell} = \lim_{\ell\to\infty}\sum_{i=1}^n\lambda_i^{(k_\ell)}x_i = \sum_{i=1}^n\lambda_ix_i\in\mathrm{conv}(x_1,\ldots,x_n).

q.e.d.
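
The induction in the “\supset“-direction is really just a composition of weights. Here is a minimal numpy sketch of that bookkeeping (the points and weights are randomly generated and purely illustrative): a convex combination of convex combinations of x_1,\ldots,x_n is again a convex combination of the x_i.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 4, 3
x = rng.normal(size=(n, 2))            # the points x_1,...,x_n in R^2

lam = rng.random(size=(m, n))          # row j: weights of y_j
lam /= lam.sum(axis=1, keepdims=True)  # rows sum to 1, entries >= 0
y = lam @ x                            # y_j = sum_i lambda_{ji} x_i

mu = rng.random(m)
mu /= mu.sum()                         # weights of the outer combination

composite = mu @ lam                   # induced weights on x_1,...,x_n
assert np.all(composite >= 0) and np.isclose(composite.sum(), 1.0)
assert np.allclose(mu @ y, composite @ x)   # same point, expressed over the x_i
```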

 

We will now prove a result that extends Brouwer’s Fixed Point Theorem to a more general setting. This is the one that I had skimmed earlier, believing it to consist only of standard arguments; in principle, this is true. But let’s have a closer look at it and at how these standard arguments work together.

 

Theorem (on fixed points in real convex sets): Let \emptyset\neq K\subset\mathbb{R}^p be convex, compact, and let f:K\to K be continuous. Then f has a fixed point.

Proof: 0th step. As K is compact, it is bounded and thus there is some r>0 such that K\subset B_r(0); here we take B_r(0) to be the closed ball, so that Brouwer’s Fixed Point Theorem applies to it later.

1st step. Let us construct the best approximation of some x\in B_r(0) within K; that means we look for a z\in K with \left|x-z\right| = \inf_{y\in K}\left|x-y\right|.

Taking \gamma:=\inf_{y\in K}\left|x-y\right|, there is a sequence (z_n)_n\subset K with \lim_{n\to\infty}\left|x-z_n\right| = \gamma.

We wish to prove that (z_n)_n is a Cauchy sequence. From the basic properties of any scalar product, we find the parallelogram law:

\displaystyle    \begin{aligned}    \left|u+v\right|^2+\left|u-v\right|^2 &= \left\langle u+v,u+v\right\rangle + \left\langle u-v,u-v\right\rangle \\    &= \left|u\right|^2 + \left|v\right|^2 + 2\left\langle u,v\right\rangle + \left|u\right|^2 + \left|v\right|^2 - 2\left\langle u,v\right\rangle \\    &= 2 \left|u\right|^2 + 2 \left|v\right|^2.    \end{aligned}

In our case, this shows

\displaystyle    \begin{aligned}    \left|z_n-z_m\right|^2 &= \left|(z_n-x)-(z_m-x)\right|^2 \\    &= 2\left|z_n-x\right|^2 + 2\left|z_m-x\right|^2- \left|(z_m+z_n)-2x\right|^2 \\    &= 2\left|z_n-x\right|^2 + 2\left|z_m-x\right|^2 - 4\left|\frac{z_m+z_n}2-x\right|^2.    \end{aligned}

Since K is convex, \frac12z_n+\frac12z_m\in K. Therefore,

\displaystyle\left|\frac{z_m+z_n}2-x\right|\geq\gamma.

Thus,

\displaystyle    \begin{aligned}    \lim_{n,m\to\infty}\left|z_n-z_m\right|^2&\leq 2\lim_{n\to\infty}\left|z_n-x\right|^2 + 2\lim_{m\to\infty}\left|z_m-x\right|^2 - 4\gamma^2 \\    &= 2\gamma^2+2\gamma^2-4\gamma^2 = 0.    \end{aligned}

Therefore, (z_n)_n is a Cauchy sequence, having a limit y, say. As K is closed, y\in K. In total, we have seen (noting that the absolute value is continuous)

\displaystyle\gamma = \lim_{n\to\infty}\left|z_n-x\right| = \left|y-x\right|.

y is the best approximation to x within K.

2nd step. The best approximation is unique.

If there were two of them, u and v, say, then \left|x-u\right| = \left|x-v\right| = \gamma. If we consider the sequence (z_n)_n that alternates between u and v, we’d find \left|x-z_n\right| = \gamma for all n, and hence (z_n)_n is a Cauchy sequence by the computation in step 1. Therefore (z_n)_n must be convergent, which implies u=v.

3rd step. The mapping A:B_r(0) \to K that takes x to its best approximation is continuous.

Let (x_n)_n\subset B_r(0) with \lim_{n\to\infty}x_n =: x. Let \varepsilon>0. For sufficiently large n, we have

\displaystyle \gamma_n :=\inf_{y\in K}\left|x_n-y\right| \leq \inf_{y\in K}\bigl(\left|x_n-x\right|+\left|x-y\right|\bigr) < \varepsilon + \inf_{y\in K}\left|x-y\right| = \varepsilon + \gamma.

Besides,

\displaystyle \gamma\leq\left|x - A(x_n)\right|\leq\left|x_n-A(x_n)\right| + \left|x_n-x\right| = \gamma_n + \left|x_n-x\right| < \gamma_n+\varepsilon.

These inequalities give us

\displaystyle \gamma\leq\left|x-A(x_n)\right| < 2\varepsilon + \gamma,\qquad\text{ which means }\lim_{n\to\infty}\left|x-A(x_n)\right| = \gamma.

The computation in step 1 shows that any sequence (w_n)_n\subset K with \left|x-w_n\right|\to\gamma is a Cauchy sequence; applied to w_n:=A(x_n), this shows that \lim_{n\to\infty}A(x_n) exists and is the best approximation to x. As the best approximation is unique, \lim_{n\to\infty}A(x_n)=A(x), and we have shown that A is sequentially continuous.

4th step. The quest for the fixed point.

The mapping f\circ A:B_r(0)\to K\subset B_r(0) is continuous. By Brouwer’s Fixed Point Theorem, f\circ A has a fixed point: there is some w\in B_r(0) with f\bigl(A(w)\bigr) = w. As f only takes values in K, we must have w\in K. By construction, A is the identity on K: hence

\displaystyle w = f\bigl(A(w)\bigr) = f(w).

q.e.d.
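
Steps 1 to 3 above construct the metric projection onto K. For the special case K=\mathrm{conv}(x_1,\ldots,x_n), one can compute it numerically by minimizing the distance over the weights from (\diamondsuit). A small sketch along these lines (the vertices and the use of scipy’s SLSQP solver are my own illustrative choices, not part of the proof):

```python
import numpy as np
from scipy.optimize import minimize

# Best-approximation map A of steps 1-3, sketched for the special case
# K = conv(x_1,...,x_n) in R^2 (the vertices below are hypothetical).
x_pts = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]])

def best_approximation(x):
    """Return argmin_{z in K} |x - z| by optimizing over simplex weights."""
    n = len(x_pts)
    objective = lambda lam: np.sum((x - lam @ x_pts) ** 2)   # squared distance
    constraints = [{"type": "eq", "fun": lambda lam: lam.sum() - 1.0}]
    res = minimize(objective, np.full(n, 1.0 / n), method="SLSQP",
                   bounds=[(0.0, 1.0)] * n, constraints=constraints)
    return res.x @ x_pts

# The nearest point of the triangle K to (2, 2) is (1, 1):
print(best_approximation(np.array([2.0, 2.0])))
```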

 

Corollary (on fixed points in convex sets of normed spaces): Let E be a normed vector space and x_1,\ldots,x_n\in E, let \emptyset\neq K\subset\mathrm{span}(x_1,\ldots,x_n) be convex and compact, and let f:K\to K be continuous. Then f has a fixed point.

Proof: Let us choose a basis for \mathrm{span}(x_1,\ldots,x_n) from among the x_1,\ldots,x_n. We take w.l.o.g. x_1,\ldots,x_p for a certain p\leq n. Then any y\in\mathrm{span}(x_1,\ldots,x_n) has a unique representation as y=\sum_{j=1}^p\beta_jx_j, and the mapping

\displaystyle A:\mathrm{span}(x_1,\ldots,x_n)\to\mathbb{R}^p,\qquad y\mapsto(\beta_1,\ldots,\beta_p)

is a linear bijection. As all norms on \mathbb{R}^p are equivalent (in particular the one carried over from E via A), convergence, compactness and continuity are not affected by this bijection. Hence, the theorem and its proof work out in the setting of this corollary, too. q.e.d.

 

Note that this Corollary may deal with an infinite-dimensional space; however, only a finite-dimensional subspace of it is actually used. This will become relevant in Schauder’s Theorem as well.

 

Theorem (Arzelà 1895, Ascoli 1884): Let X\subset\mathbb{R}^d be compact, and let \mathcal F be a family of continuous real-valued functions on X which satisfies two properties:

  • it is pointwise bounded: for any x\in X, there is some M(x)\in\mathbb{R} with \left|f(x)\right|\leq M(x), for all f\in\mathcal F.
  • it is equicontinuous: for any \varepsilon>0 there is some \delta>0 such that for any x,y\in X with \left|x-y\right|<\delta we have \left|f(x)-f(y)\right|<\varepsilon, for all f\in\mathcal{F}.

Then, \mathcal F is relatively compact; that means every sequence in \mathcal F has a uniformly convergent subsequence.

 

Note that we do not demand the limit of the convergent subsequence to be contained in \mathcal F; that would mean compact instead of relatively compact.

 

Proof: 1st step. We get hold of a countable dense subset of X.

For our immediate uses of the theorem, it would suffice to choose \mathbb{Q}^d\cap X, since we will take X to be intervals and there will not be any need for more exotic applications. However, to show something a little more general, have a look at the sets \bigl(U_{1/k}(x)\bigr)_{x\in X}. For each k, this is a covering of X, and finitely many of them suffice to cover X; let us collect their centers in M_k:=\{x_{k1},\ldots,x_{kn_k}\}. The set M:=\bigcup_{k=1}^\infty M_k is countable. By construction, for any x_0\in X and any \varepsilon>0, we can find some point y\in M with \left|x_0-y\right|<\varepsilon. Therefore, M is dense in X.

2nd step. We construct a certain subsequence to a given sequence (f_n)_n\subset\mathcal F.

This step is at the heart of the Arzelà-Ascoli-Theorem, with a diagonal argument to make it work. Let us enumerate the set M from step 1 as \{x_1,x_2,\ldots\}.

As \mathcal F is pointwise bounded, the sequence \bigl(f_n(x_1)\bigr)_n\subset\mathbb{R} is bounded as well. By Bolzano-Weierstrass, it has a convergent subsequence that we will call \bigl(f_{1,n}(x_1)\bigr)_n.

If we evaluate this new sequence in x_2, we arrive at \bigl(f_{1,n}(x_2)\bigr)_n\subset\mathbb{R}, which is bounded as well. Again, we find a convergent subsequence that is now called \bigl(f_{2,n}(x_2)\bigr)_n. As this is a subsequence of \bigl(f_{1,n}(x_1)\bigr)_n, it converges in x_1 as well.

We continue this scheme and we find an array of sequences like this

f_{11} f_{12} f_{13} \cdots
f_{21} f_{22} f_{23} \cdots
f_{31} f_{32} f_{33} \cdots
\vdots \vdots \vdots \ddots

 

where each row is a subsequence of the row above. Row k is convergent in the point x_k by Bolzano-Weierstrass and convergent in the points x_1,\ldots,x_{k-1} by construction.

Now, consider the diagonal sequence (f_{nn})_n. From its k-th term onwards it is a subsequence of row k, and hence it converges in every point of M.

3rd step. Our subsequence of the 2nd step converges uniformly on X. We will use equicontinuity to expand the convergence from M to the whole of X.

As \mathcal F is equicontinuous, we will find for any \varepsilon>0 some \delta>0 with \left|f_{nn}(x)-f_{nn}(y)\right| < \frac\varepsilon3, for all n\in\mathbb{N}, as long as \left|x-y\right|<2\delta. Since X is compact, there are some points y_1,\ldots,y_p\in X with X\subset\bigcup_{j=1}^pU_\delta(y_j). And as M is dense in X, we can find some \xi_j\in U_\delta(y_j)\cap M for any j=1,\ldots,p.

Let x\in U_\delta(y_j), then

\displaystyle \left|x-\xi_j\right| \leq \left|x-y_j\right|+\left|y_j-\xi_j\right| < 2\delta,

which shows

\displaystyle \left| f_{nn}(x)-f_{nn}(\xi_j)\right| < \frac\varepsilon 3\qquad\text{ for any }n\in\mathbb{N}\text{ and }x\in X\cap U_\delta(y_j).

We have already seen that (f_{nn})_n is convergent on M, and hence (convergent sequences are Cauchy-sequences)

\displaystyle \left|f_{nn}(\xi_j) - f_{mm}(\xi_j)\right| < \frac\varepsilon 3\qquad \text{ for sufficiently large }m,n\text{ and for any }j=1,\ldots,p.

Now, let x\in X, no longer restricted. Then, there is some j=1,\ldots,p such that x\in U_\delta(y_j), and

\displaystyle    \begin{aligned}    \left|f_{nn}(x)-f_{mm}(x)\right| &\leq \left| f_{nn}(x)-f_{nn}(\xi_j)\right| + \left|f_{nn}(\xi_j) - f_{mm}(\xi_j)\right| + \left|f_{mm}(\xi_j) - f_{mm}(x)\right| \\    &< \frac\varepsilon3+\frac\varepsilon3+\frac\varepsilon3 = \varepsilon.    \end{aligned}

Thus, \left\|f_{nn}-f_{mm}\right\|_\infty < \varepsilon for sufficiently large n,m. The sequence (f_{nn})_n is therefore a Cauchy sequence with respect to uniform convergence and hence convergent (the space of continuous functions on X with the sup-norm is complete). q.e.d.
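
To see the diagonal scheme in motion, here is a small numerical sketch. The family f_n(x)=\sin(x+n) is pointwise bounded and equicontinuous, and the Bolzano-Weierstrass step is imitated by bisecting the value range and keeping a half that still contains most of the terms (the family, the grid standing in for M, and the cut-offs are all my own illustrative choices):

```python
import numpy as np

f = lambda x, n: np.sin(x + n)      # pointwise bounded, equicontinuous family

def refine(ns, xk, halvings=10):
    """One Bolzano-Weierstrass step: thin out the index set ns so that the
    values f(xk, n), n in ns, cluster in an ever smaller interval.  Keeping
    the fuller half mimics 'a half containing infinitely many terms'."""
    vals = f(xk, ns)
    lo, hi = vals.min(), vals.max()
    for _ in range(halvings):
        if len(ns) < 8:             # a finite simulation runs out of terms
            break
        mid = (lo + hi) / 2
        left = vals <= mid
        if left.sum() >= (~left).sum():
            ns, vals, hi = ns[left], vals[left], mid
        else:
            ns, vals, lo = ns[~left], vals[~left], mid
    return ns

grid = np.linspace(0.0, 1.0, 4)     # stand-in for the dense set {x_1, x_2, ...}
rows = np.arange(1, 1_000_001)      # indices n of the full sequence (f_n)_n
diagonal = []
for k, xk in enumerate(grid):
    rows = refine(rows, xk)         # the next row of the array
    diagonal.append(rows[min(k, len(rows) - 1)])
print(diagonal)                     # indices of the diagonal sequence f_{nn}
```

In the infinite setting every row stays infinite and the k-th diagonal term is taken from the k-th row; the finite simulation merely runs out of terms eventually.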

 

This was our last stepping stone towards Schauder’s Theorem. Let’s see what we can do.

 

Theorem (Schauder, 1930): Let E be a normed vector space, let \emptyset\neq K\subset E be convex and closed, and let f:K\to K be continuous with f(K) relatively compact. Then f has a fixed point.

 

Proof: 1st step. As f(K) is relatively compact, its closure is compact. We construct a finite approximating subset of \overline{f(K)}.

Let \varepsilon>0. There are finitely many points x_1,\ldots,x_m\in\overline{f(K)} with \overline{f(K)}\subset\bigcup_{j=1}^mU_\varepsilon(x_j). In particular, for any x\in f(K), there is some j=1,\ldots,m with \left|x_j-x\right| < \varepsilon. Let us consider, for x\in f(K), the functions

\displaystyle \varphi_j(x):=\mathbf{1}_{\left|x_j-x\right|<\varepsilon}\bigl(\varepsilon-\left|x-x_j\right|\bigr).

It is obviously continuous, and as \overline{f(K)} is covered by these U_\varepsilon,

\displaystyle \varphi(x)=\sum_{j=1}^m\varphi_j(x) > 0.

This allows \psi_j(x):=\frac{\varphi_j(x)}{\varphi(x)} to be well-defined, and by construction \psi(x)=\sum_{j=1}^m\psi_j(x) = 1. Hence, the function g:f(K)\to\mathrm{conv}(x_1,\ldots,x_m)

\displaystyle g(x):=\sum_{j=1}^m\psi_j(x) x_j

is continuous (the Lemma on convex sets tells us that this actually maps into the convex hull). Now, let x\in f(K). We find

\displaystyle g(x)-x = \sum_{j=1}^m\psi_j(x)x_j - x = \sum_{j=1}^m\psi_j(x)\bigl(x_j-x\bigr) = \sum_{\substack{j=1\\\left|x_j-x\right|<\varepsilon}}^m\psi_j(x)\bigl(x_j-x\bigr),

and therefore, for any x\in f(K),

\displaystyle \left|g(x)-x\right| \leq \sum_{\substack{j=1\\\left|x_j-x\right|<\varepsilon}}^m\psi_j(x)\left|x_j-x\right| < \varepsilon\sum_{j=1}^m\psi_j(x) = \varepsilon.

This shows that g uniformly approximates the identity on f(K). Note that g maps f(K) into \mathrm{conv}(x_1,\ldots,x_m)\subset K (the x_j lie in the closed convex set K), and that g depends on the choice of \varepsilon.
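
This finite-dimensional approximation g is sometimes called the Schauder projection, and it is concrete enough to be typed in directly. A small sketch (the \varepsilon-net below is hypothetical; in the proof it comes from the covering of \overline{f(K)}):

```python
import numpy as np

# The Schauder projection g of step 1.  The eps-net x_pts is hypothetical.
eps = 0.5
x_pts = np.array([[0.0, 0.0], [0.4, 0.0], [0.8, 0.0], [0.4, 0.4]])

def g(x):
    dists = np.linalg.norm(x_pts - x, axis=1)
    phi = np.where(dists < eps, eps - dists, 0.0)  # the tent functions phi_j(x)
    return (phi / phi.sum()) @ x_pts               # psi_j = phi_j/phi, summing to 1

# x must lie within eps of some net point, as every point of f(K) does:
x = np.array([0.5, 0.1])
print(np.linalg.norm(g(x) - x) < eps)              # True: |g(x) - x| < eps
```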

2nd step. Reference to the Theorem on fixed points in convex sets and approximation of the fixed point.

We set h:=g\circ f, which is a continuous mapping

\displaystyle h:K\to\mathrm{conv}(x_1,\ldots,x_m)\subset K.

We can restrict it to \mathrm{conv}(x_1,\ldots,x_m) and then re-name it \tilde h.  By the Lemma on convex sets, \mathrm{conv}(x_1,\ldots,x_m) is compact, it lies in the finite-dimensional space \mathrm{span}(x_1,\ldots,x_m), and by the Corollary on fixed points in convex sets of normed spaces, \tilde h has a fixed point z:

z = \tilde h(z) = g\bigl(f(z)\bigr)\qquad\text{ for some }z\in\mathrm{conv}(x_1,\ldots,x_m).

Therefore,

\displaystyle \left|f(z)-z\right| = \left|f(z)-g\bigl(f(z)\bigr)\right| < \varepsilon.

Note that z depends on g and hence on \varepsilon.

3rd step. Construction of the fixed point.

For any n\in\mathbb{N}, by step 2 (applied with \varepsilon=\frac1n), we find some z_n\in\mathrm{conv}(x_1^{(n)},\ldots,x_{m(n)}^{(n)})\subset K with

\displaystyle\left|f(z_n)-z_n\right| < \frac1n.

As f(K) is relatively compact, the sequence \bigl(f(z_n)\bigr)_n has a convergent subsequence: there is some \tilde z\in\overline{f(K)} with \tilde z=\lim_{k\to\infty}f(z_{n_k}). As K is closed, we get \tilde z\in \overline{f(K)}\subset\overline K = K. Now,

\displaystyle\left|z_{n_k} - \tilde z\right| \leq \left|z_{n_k} - f(z_{n_k})\right| + \left|f(z_{n_k}) - \tilde z\right| < \frac1{n_k} + \varepsilon_{n_k},\qquad\text{where }\varepsilon_{n_k}:=\left|f(z_{n_k})-\tilde z\right|\to0,

which means that z_{n_k} and \tilde z get arbitrarily close: \tilde z = \lim_{k\to\infty}z_{n_k}. Since f is continuous, we arrive at

\displaystyle f(\tilde z) = \lim_{k\to\infty}f(z_{n_k}) = \tilde z.

q.e.d.

 

It is apparent that Schauder’s Theorem already has very general conditions that are tough to weaken further. Obviously the Theorem becomes false if f is not continuous. If K were not required to be closed, we’d get the counter-example of f:(0,1)\to(0,1), x\mapsto x^2, which doesn’t have any fixed point. If K were not required to be convex, we’d get the counter-example of f:\partial B_1(0)\to \partial B_1(0), e^{it} \mapsto e^{i(t+\pi)}. It is hard to give a counter-example if f(K) is not relatively compact – in fact I would be interested to hear of any such counter-example or of the generalization of Schauder’s Theorem to such cases. Which is the most general such fixed point theorem?

 

Now, we are able to harvest the ideas of all this work and apply it to differential equations. Usually, in courses on ordinary differential equations, the famous Picard-Lindelöf-Theorem is proved, which states that for well-behaved functions f (meaning that they satisfy a Lipschitz-condition), the initial-value problem

y'(x) = f\bigl(x,y(x)\bigr),\qquad y(x_0) = y_0,

has a unique solution. This is a powerful theorem which simplifies the entire theory of differential equations. However, a little more holds true: continuity of f already suffices to guarantee a solution; uniqueness, however, is lost in general. While in many applications one can assume continuity of f without remorse (especially in physics), a Lipschitz-condition is much harder to justify. This is not to diminish the usefulness of Picard and Lindelöf, as any model has assumptions to be justified – the Lipschitz-condition is just one of them (if one even bothers to demand a proper justification of existence and uniqueness – sometimes this would seem obvious from the start).

Let us have a look at what Peano told us:

 

Theorem (Peano, 1886/1890): Let f:R\to\mathbb{R} be continuous, where

\displaystyle R:=\bigl\{(x,y)\in\mathbb{R}^2\colon \left|x-x_0\right| \leq a, \left|y-y_0\right|\leq b\bigr\},

let M:=\max_{(x,y)\in R}\left|f(x,y)\right|, \alpha := \min\bigl(a, \frac bM\bigr).

Then, the initial value problem y'(x)= f\bigl(x,y(x)\bigr), y(x_0)=y_0, has a solution on the interval [x_0-\alpha, x_0+\alpha].

 

Concerning the interval on which we claim the solution to exist, have a look at how such a solution y might behave: as we vary x, the solution may “leave” R either through the vertical bounds (to the left/right) or through the horizontal bounds (up/down). A solution y can at most have a slope of \pm M, and thus, if it leaves through the horizontal bounds, this can happen at x_0\pm\frac bM at the earliest. If it doesn’t leave there, it will exist until x_0\pm a. Of course, it might exist even further, but we have only demanded f to be defined up to there. A little more formally, the mean value theorem tells us

\displaystyle \left|y(x)-y_0\right| = \left|y'(\xi)\right|\left|x-x_0\right| = \left|f\bigl(\xi,y(\xi)\bigr)\right|\left|x-x_0\right| \leq M\alpha\leq b.

This guarantees that the graph of the solution y stays inside R, where f is defined.

 

Proof: 0th step. To simplify notation, let us set

\displaystyle    \begin{aligned}    J&:=[x_0-\alpha, x_0+\alpha],\\    \mathcal{C}(J)&:=\bigl\{f:J\to\mathbb{R}\text{ continuous}\bigr\},\\    K&:=\bigl\{y\in\mathcal{C}(J)\colon \left|y(x)-y_0\right|\leq b\text{ for any }x\in J\bigr\}.    \end{aligned}

1st step. We twist the problem to another equivalent shape, making it more accessible to our tools.

First, let y(x) be a solution to the initial value problem on a sub-interval I\subset J. Then, for any x\in I,

\displaystyle y'(x) = f\bigl(x,y(x)\bigr),\quad y(x_0)=y_0,

and hence

\displaystyle y(x) = y_0 + \int_{x_0}^x f\bigl(t,y(t)\bigr)dt.\qquad (\heartsuit)

On the other hand, if we start from this equation and suppose that it holds for any x\in I, then y must be differentiable (by the fundamental theorem of calculus, since the integrand is continuous) with y'(x) = f\bigl(x,y(x)\bigr) and y(x_0)=y_0.

We have seen that a function y solves the initial value problem on J if and only if it satisfies the equation (\heartsuit) on J.

2nd step. We try to give a representation of the problem as a fixed-point-problem.

Let us consider the mapping

\displaystyle A:K\to\mathcal{C}(J),~ y\mapsto y_0 + \int_{x_0}^{\,\cdot\,} f\bigl(t,y(t)\bigr)dt.

This is an operator into which we plug a continuous function and from which we get a continuous function back. In particular, and to make it even more painfully obvious,

\displaystyle (Ay)(x) = y_0+\int_{x_0}^xf\bigl(t,y(t)\bigr)dt,\qquad\text{for any }x\in J.

Therefore, y is a solution to the initial value problem if and only if it is a fixed point of A, meaning Ay=y.

3rd step. We show that A maps K to itself. We have defined A only on K, so let y\in K and x\in J; then:

\displaystyle \left|(Ay)(x)-y_0\right| = \left|\int_{x_0}^x f\bigl(t,y(t)\bigr) dt\right| \leq M\left|x-x_0\right| \leq \alpha M\leq b.

The second-to-last inequality follows from x\in J, the last one from the definition of \alpha.

This shows that Ay\in K.

4th step. K\neq\emptyset is obvious, as the constant function y_0 is in K.

5th step. K is convex. Let u,v\in K and let \beta\in[0,1]. Then, for any x\in J,

\displaystyle    \begin{aligned}    \left|(1-\beta)u(x)+\beta v(x) - y_0\right| &= \left|(1-\beta)\bigl(u(x)-y_0\bigr) + \beta\bigl(v(x)-y_0\bigr)\right| \\    &\leq (1-\beta)\left|u(x)-y_0\right| + \beta\left|v(x)-y_0\right| \\    &\leq (1-\beta)b+\beta b = b.    \end{aligned}

This proves (1-\beta)u+\beta v\in K.

6th step. K is a closed set in \mathcal{C}(J), where we use the topology of uniform convergence.

Consider a sequence (y_n)_n\subset K which converges uniformly to some y\in\mathcal{C}(J). (Remember that \mathcal{C}(J) with the sup-norm is complete, which is why uniform convergence is the right notion here.) Then, for any x\in J,

\displaystyle\left|y(x)-y_0\right| = \left|\lim_{n\to\infty}y_n(x)-y_0\right| = \lim_{n\to\infty}\left|y_n(x)-y_0\right| \leq \lim_{n\to\infty} b = b.

This shows that y\in K.

7th step. Using the topology of uniform convergence, the mapping A:K\to K is continuous.

Let \varepsilon>0. The function f is continuous on the compact set R and hence uniformly continuous. Therefore, there is some \delta>0 such that for \left|u-v\right|<\delta,

\displaystyle \left|f(t,u)-f(t,v)\right|<\frac\varepsilon\alpha.

Now, let y,z\in K with \left\|y-z\right\|_\infty < \delta. Then we have just seen that for any t\in J

\displaystyle \left|f\bigl(t,y(t)\bigr)-f\bigl(t,z(t)\bigr)\right| < \frac\varepsilon\alpha.

That yields

\displaystyle    \begin{aligned}    \left|(Ay)(x)-(Az)(x)\right| &= \left|\int_{x_0}^xf\bigl(t,y(t)\bigr)dt - \int_{x_0}^xf\bigl(t,z(t)\bigr)dt\right| \\    &< \left|x-x_0\right|\frac\varepsilon\alpha\\    &\leq\varepsilon.    \end{aligned}

In particular,

\displaystyle\left\|Ay-Az\right\|_\infty < \varepsilon.

8th step. The set A(K)\subset K is relatively compact. Note that A(K) is a set of continuous functions.

Let y\in K and let x,x_1,x_2\in J. Then, every function of A(K) is bounded (uniformly, in fact), since

\displaystyle \left|(Ay)(x)\right| = \left|y_0+\int_{x_0}^xf\bigl(t,y(t)\bigr)dt\right|\leq \left|y_0\right|+\left|x-x_0\right|M \leq \left|y_0\right|+\alpha M.

Besides, A(K) is equicontinuous, because of

\displaystyle    \begin{aligned}    \left|(Ay)(x_1)-(Ay)(x_2)\right| &= \left|\int_{x_0}^{x_1}f\bigl(t,y(t)\bigr)dt - \int_{x_0}^{x_2}f\bigl(t,y(t)\bigr)dt\right| \\    &= \left|\int_{x_1}^{x_2}f\bigl(t,y(t)\bigr) dt\right| \\    &\leq \left|x_1-x_2\right| M.    \end{aligned}

Arzelà and Ascoli now tell us that any sequence in A(K) has a uniformly convergent subsequence.

9th and final step. From Schauder’s Fixed Point Theorem and steps 3 to 8, A has a fixed point in K. From step 2, the initial value problem has a solution. q.e.d.
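
Before moving on, let us see the operator A of step 2 in action. The proof via Schauder is non-constructive, so iterating A is not guaranteed to converge in general; a small numerical sketch (hypothetical right-hand side, simple trapezoidal quadrature, all choices mine) nevertheless illustrates the fixed-point formulation:

```python
import numpy as np

# The operator A from step 2 on a grid.  Iterating A happens to converge
# here because this particular f is even Lipschitz; Peano itself gives no
# such guarantee.
x0, y0, alpha = 0.0, 1.0, 1.0
xs = np.linspace(x0, x0 + alpha, 1001)

f = lambda x, y: np.sin(x * y)                     # continuous, Peano applies

def A(y):
    integrand = f(xs, y)
    increments = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(xs)
    return y0 + np.concatenate(([0.0], np.cumsum(increments)))

y = np.full_like(xs, y0)                           # start with the constant function y_0
for _ in range(30):
    y = A(y)
print(np.max(np.abs(A(y) - y)))                    # sup-norm residual ||Ay - y||, tiny
```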

 

 

A lot of technical work went into merely being able to invoke Schauder’s Theorem. Some of this could have been avoided if we had a more elementary proof of Peano’s Theorem that bypasses Schauder. Such a proof exists; however, some of our machinery is still needed – the proof cannot honestly be called elementary. In some way, the proof matches our procedure given above, but not everything is needed in such a fine manner. Let’s have a short look at it; this is taken from Walter’s book.

 

Proof (Peano’s Theorem in a more elementary fashion): We proceed in two parts. First, we shall prove the weaker statement that if f is continuous and bounded on the (non-compact) set [x_0,x_0+a]\times\mathbb{R}, then there is a solution to the initial-value problem on [x_0,x_0+a]. Afterwards, we extend this to our compact set R. We won’t deal with extending the solution to the left of x_0, as it’s neither important nor difficult. In the previous proof we didn’t need to bother about this.

1st step. Let us define a function on [x_0,x_0+a] using some parameter \alpha\in(0,a] by

\displaystyle z_\alpha(x) = y_0\mathbf{1}_{x\leq x_0} + \left(y_0+\int_{x_0}^xf\bigl(t,z_\alpha(t-\alpha)\bigr)dt\right)\mathbf{1}_{x>x_0}.

This is well-defined, since on the sub-interval [x_0+k\alpha, x_0+(k+1)\alpha) we have t-\alpha \in[x_0+(k-1)\alpha, x_0+k\alpha), and thus z_\alpha(t-\alpha) has been recursively defined; hence z_\alpha is defined as well.

Let us denote \mathcal{F}:=\bigl\{z_\alpha\colon\alpha\in(0,a]\bigr\}\subset\mathcal{C}\bigl([x_0,x_0+a]\bigr).

2nd step. \mathcal F is equicontinuous. Let \varepsilon > 0 and let x_1,x_2\in[x_0,x_0+a]. Then we get, as f is bounded by some M,

\displaystyle \left|z_\alpha(x_1)-z_\alpha(x_2)\right| = \left|\int_{x_1}^{x_2} f\bigl(t, z_\alpha(t-\alpha)\bigr)dt\right| \leq \left|x_2-x_1\right|M,

which doesn’t depend on \alpha, x_1 or x_2 (only on their distance). Hence, if \left|x_1-x_2\right|<\delta = \frac\varepsilon M,

\displaystyle \left|z_\alpha(x_1)-z_\alpha(x_2)\right| \leq \varepsilon.

3rd step. \mathcal F is pointwise bounded. This is obvious from \left|z_\alpha(x)\right|\leq\left|y_0\right|+aM, which doesn’t depend on \alpha or x (so \mathcal F is even uniformly bounded).

4th step. We determine a solution to the initial-value problem.

From steps 2 and 3 and from Arzelà-Ascoli, we know that the sequence (z_{1/n})_n\subset\mathcal{F} has a uniformly convergent subsequence. Let us denote its limit by y(x), which is defined for all x\in[x_0,x_0+a]. This allows us to get

\displaystyle    \begin{aligned}    \left|z_{1/n_k}\left(t-\frac1{n_k}\right) - y(t)\right| &\leq \left| z_{1/n_k}\left(t-\frac1{n_k}\right) - z_{1/n_k}(t)\right| + \left|z_{1/n_k}(t) - y(t)\right| \\    &\leq \frac M{n_k} + \varepsilon\\    &< \overline\delta,    \end{aligned}

for any t\in[x_0,x_0+a] and for sufficiently large k. It should be clear what we intend to say with \overline\delta (let’s bring in a little sloppiness here, shall we). Since f is uniformly continuous in its second component on the compact range of values involved, this proves

\displaystyle \left|f\left(t, z_{1/n_k}\left(t-\frac1{n_k}\right)\right) - f\bigl(t,y(t)\bigr)\right| < \overline\varepsilon\qquad\text{for any }t\in[x_0,x_0+a].

Hence, as every participant here converges uniformly,

\displaystyle    \begin{aligned}    y(x) &= \lim_{k\to\infty}z_{1/n_k}(x) \\    &= \lim_{k\to\infty}\left(y_0 + \int_{x_0}^x f\left(t, z_{1/n_k}\left(t-\frac1{n_k}\right)\right)dt\right)\\    &= y_0 + \int_{x_0}^x \lim_{k\to\infty} f\left(t, z_{1/n_k}\left(t-\frac1{n_k}\right)\right) dt\\    &= y_0 + \int_{x_0}^x f\bigl(t, y(t)\bigr) dt.    \end{aligned}

This shows that

y'(x) = f\bigl(x, y(x)\bigr),\qquad y(x_0) = y_0.

5th step. Extension to the general case: Let f be defined on the compact rectangle R.

We give a continuation of f beyond [y_0-b,y_0+b] for all x\in[x_0-a, x_0+a] via

\displaystyle \tilde f(x,y) = \begin{cases}f(x, y_0-b) ,&\text{for }y < y_0-b\\ f(x,y), &\text{for }(x,y)\in R\\ f(x,y_0+b),&\text{for }y>y_0+b\end{cases}

Obviously, \tilde f is continuous and bounded. By the first part of the proof, y' = \tilde f(x, y) has a solution y on [x_0, x_0+a]. For \left|x-x_0\right|\leq\frac bM, we get

\displaystyle    \begin{aligned}    \left|y(x) - y_0\right| &\leq\left|y'(\xi)\right|\left|x-x_0\right| \\    &= \left|\tilde f\bigl(\xi,y(\xi)\bigr)\right|\left|x-x_0\right| \\    &\leq M\left|x-x_0\right|\leq b    \end{aligned}

Therefore, the solution y stays inside R for \left|x-x_0\right|\leq\frac bM, where \tilde f and f coincide; thus it solves y'=f(x,y) there, and the solution is guaranteed to exist for \left|x-x_0\right|\leq\alpha = \min\bigl(a,\frac bM\bigr). q.e.d.
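
The delayed construction of step 1 is easy to simulate, since z_\alpha only ever looks back a distance \alpha. A short sketch computing z_{1/n} for a few n (the right-hand side f and the quadrature are my own illustrative choices):

```python
import numpy as np

# The delayed construction z_alpha from step 1, marched forward on a grid.
x0, y0, a = 0.0, 0.0, 1.0
f = lambda x, y: 2.0 * np.sqrt(np.abs(y)) + 1.0   # continuous, non-Lipschitz at y = 0

def z(alpha, h=1e-3):
    xs = np.arange(x0, x0 + a + h, h)
    zs = np.empty_like(xs)
    zs[0] = y0
    lag = int(round(alpha / h))                    # grid version of the shift t - alpha
    for i in range(1, len(xs)):
        t = xs[i - 1]
        z_del = y0 if i - 1 < lag else zs[i - 1 - lag]   # z_alpha(t - alpha), = y0 left of x0
        zs[i] = zs[i - 1] + h * f(t, z_del)        # accumulate the integral
    return zs

for n in (2, 8, 32, 128):                          # z_{1/n}(x_0 + a) settles down
    print(n, z(1.0 / n)[-1])
```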

 

As a sort of last-minute addendum, I have stumbled upon two articles from the 1970s that shed some more light on the issue of elementary proofs of Peano’s Theorem which completely avoid the technicalities of Schauder’s Theorem and of Arzelà-Ascoli. One is called “There is an Elementary Proof of Peano’s Existence Theorem” by Wolfgang Walter (the author of the book we cited earlier; Amer. Math. Monthly 78, 1971, 170-173), the other is “On Elementary Proofs of Peano’s Existence Theorems” by Johann Walter (Amer. Math. Monthly 80, 1973, 282-286). The question of whether Arzelà-Ascoli can be avoided is answered positively by both papers: they give proofs of Peano’s Theorem which only use standard calculus methods. Basically, they employ the Euler polygon method to construct a solution of the initial value problem. Still, “elementary” is not to be confused with “easy”; Peano’s Theorem is nothing that lies directly on the surface of things. A brief look at the second of those articles (to the best of my knowledge the identical surnames of the authors are a coincidence) raises hope that this proof is actually not too hard – it should be understandable with a lot less effort than the proof via Schauder’s Theorem that we gave above in full detail; remember that Schauder’s Theorem itself required many non-standard theorems on its way. The elementary proof only works for one-dimensional differential equations, but we bothered only with those anyway; it uses monotonicity of its approximating sequence, which is only applicable in the real numbers. On the plus side, the proof explicitly constructs a solution via the Euler method.

The papers also shed some light on the history of Peano’s Theorem and the quest for its proof (together with some rather unusual disagreement on whether an earlier proof is valid or not; some interesting lines to read in passing). This should be enough on this matter for now. If the interest holds up (which is, to this extent, rather unlikely), we’ll return to it. But not for now.


The Athenian Democracy

In my naive imagination, ancient Greece was one of the more pleasant epochs of world history. After having spent quite some time on wars, intrigues and ambivalent personalities, I wanted to read about high culture, about the emergence of philosophy and of democracy. All of that existed in ancient Greece; but the reading nevertheless thoroughly cured me of my naivety.

In Linda Günther’s textbook, cultural matters take up rather little room: the theatre, the Olympic Games and the philosophers are mentioned, but hardly examined in depth. The book is rather a political history of events, which therefore also, and especially, covers everything I had wanted to close my eyes to when looking for a more edifying read. And of course all of that belongs in a comprehensive work on Greek antiquity: the Greek city-states (poleis) continually waged war on one another, forged and broke alliances, while their statesmen gathered majorities, whipped up the popular assembly and extorted tribute from their allies. As great as the cultural achievements of this epoch are, its political entanglements are not one bit lesser than those of other times.

The tales of heroes likewise take up relatively little room (those from actual history, and even more so those from Homer’s myths). Folklore knows many stories from the Persian Wars: of the Battle of Marathon („νενικεκαμεν“), of the Battle of Thermopylae (“Come and take them” – „Μολον λαβη“; “Wanderer, if you come to Sparta, tell them there that you have seen us lying here, as the law commanded” – „O ξειν αγγελειν Λακεδαιμονιοις οτι τηδε κειμεθα τοις κεινον πημασι πειθομενοι“) – in short, of how the allied Greeks successfully defended themselves against the overwhelming Persians. The Persian Wars are covered in appropriate detail, but not excessively so. A much stronger focus lies on the Peloponnesian War, which truly shook up the classical world of states.

Apart from Athens and Sparta, the sources on most of the other poleis are rather meagre. Of course, the classical dominance of Athens is partly responsible for its standing in the foreground of tradition and historiography; it was quite similar in the Roman Empire (Tacitus, for instance, takes practically no interest in events in the provinces, focusing on Rome and the imperial family instead). It is a merit of Günther’s account, therefore, that it seeks the contrast with the remaining poleis and discusses the state of the sources extensively. Naturally, without reliable sources one can only extrapolate from the sources at hand – so there is a constant danger of projecting the conditions of Athens and Sparta onto the whole of Greece. The danger is obvious, especially since those two were so fundamentally different. How meagre the sources are in principle can be seen from what historians rely on in some matters: a raise of the jurors’ pay from 2 to 3 obols, for instance, is inferred from a passing remark in a contemporary comedy. There are evidently no files, no written decree, no reception in the historiography – only this comedy, from which the fact is implicitly deduced. From a modern point of view (the “ink-blotting century” is itself some time ago now) an impressive thing – for the half-educated layman, of course 🙂

The sheer number of poleis adds to the difficulty of understanding; it is one of the peculiarities of the geography of Greece that so many small, nearly self-sufficient city-states formed there, and that they never permanently merged into larger units. The additional complexity caused by the many acting persons runs like a red thread through Greek history – time and again there is a moment of recognition, when a familiar name can be fitted into the overall picture. But very often the whole remains an amorphous mass of names, which sometimes even recur for different persons.

The development of democracy in Athens was in a certain sense a special path, though not a completely unique one. After the overthrow of the tyrants around 510 BC, sole rule – the tyrannis – was indeed renounced (though it should be noted that this term carried no negative connotation at the time). The reforms of Draco, Solon and finally Cleisthenes established the power of the popular assembly and reduced the influence of the nobility. This, however, is still a phase with an altogether questionable source situation; much of it is only the reception and redaction from the perspective of the 4th century BC.

The period of Athens’ hegemony, between the Persian War of 490 BC and the Peloponnesian War (431-404 BC), is considered the golden age of Greek antiquity. The events of this phase are sufficiently well known not to need recounting here (even if many exciting stories hide in the details, making it worth reading about again and again). One must mention the splendid and powerful development of Athens under Pericles, with the magnificent Acropolis, the Parthenon and the “long walls” – paid for, however, by the exploitation of the allies in the Delian League, which is what made these buildings possible in the first place. The dark sides, full of great-power fantasies, the oppression of the formally equal allies and the instigation of wars, must not be missing here either. When the Peloponnesian War ends, Athens’ supremacy is gone, and the city only narrowly escapes destruction by the victorious Spartans. Realpolitik will have played the decisive role here – Sparta trying to prevent a power vacuum – rather than any heroic remembrance of Athens as the leading power of the Persian Wars (especially since the supreme command back then had lain with Sparta, not with Athens).

Democracy during this classical phase did exist and indeed had some traits of the modern form of government of the same name. However, only full citizens were entitled to vote – not women, slaves or immigrants. Most offices were filled by lot, not by election; eventually, allowances were even paid for attending the popular assembly. Under Pericles in particular, the democracy is to be understood as a kind of “guided popular rule”, since demagogues – Pericles among them – knew how to win the popular assembly over to their side and thus to act almost as democratically legitimized sole rulers. This point of view, however, is still entirely alien to antiquity. During the Peloponnesian War, the decisions of the popular assembly become increasingly erratic; the military defeat is partly explained by the fact that victorious generals were executed for religious offences (and not just once – Athens repeatedly deposed and executed its entire military leadership). But even under the crushing experience of devastating defeat, democracy remained the default, only briefly interrupted by tyrannical or chaotic episodes – Athenian democracy proved to be very stable.

After the Peloponnesian War begins a phase that makes for rather depressing reading from a political point of view. It is a confusing tangle of alliances forming and dissolving; there no longer is a bipolar world of states as before. The threat from the Persian Empire is noticeably weaker than before, even though the Persians remain powerful and are able, for example, to dictate the King’s Peace. After an intermezzo with Thebes as the Greek hegemonic power, Macedonia rises from the margin of the Greek world (it had been considered a borderline question whether the Macedonians were to be counted as barbarians with no command of Greek or not). This rise crystallizes in Alexander the Great.

The power and the success of Alexander the Great are downright fascinating to behold – both appear completely inexplicable. With all the ambivalences that were his as well, he has in retrospect become an almost mythical figure. His early death contributes to this, having spared him the task of holding his immeasurably large empire together himself – who knows whether he could have succeeded. Things turned out differently; the Wars of the Diadochi are, again by contrast, a very depressing read. The appeal, however, lies in the fact that Greek culture permeated such vast spaces and exerted its influence for centuries. In the end, all the Diadochi empires crumble one after another, many of them finally falling to the Roman Empire – a story that is likewise equally fascinating and depressing.

For centuries, the reception of everything Greek was the style-setting, foundational object of all learned occupation. Philosophy and science became unquestioned authorities until, from the Renaissance onwards, they were critically developed further; in the 19th century in particular, classical art was imitated, scientific rigour was revived (above all in the axiomatic method of mathematics, but not only there), and instruction in ancient Greek at secondary schools became a central element of the educational canon. Given all this reception, it seems paradoxical that the democratic movement apparently hardly drew on the original Greek models at all. On the contrary, the 19th century showed a great fixation on the existing monarchies and the divine right of kings; and where democracy was the form of government, it was understood as a further development of ancient democracy rather than as a return to it: ideas of a genuinely guided democracy, not ruled by mass sentiment, can be found for instance in the US constitution with its indirect election of the president. Possibly the fear of sliding off into anarchy and chaos was decisive here, as it had manifested itself in Athens during the Peloponnesian War, or during the late phase of the Roman Republic. Indeed, the ancient political philosophers such as Plato and Cicero advocated oligarchy rather than democracy. A new insight for me was that democracy in Athens was restored after the Peloponnesian War and functioned as in its best times; it was in fact the Macedonians who abolished democracy in Athens, in order to guarantee its integration into Alexander’s empire: to the Macedonians, democracy counted as the root cause of the Athenians’ striving for freedom and of the revolts against the hegemonic power.

Beyond all that, elements of direct democracy such as the popular assembly are of course only practicable in small city-states like Athens or Rome, which leads to the modern design of representative democracy. Conversely, modern democracies are not immune to demagoguery either – perhaps quite the contrary. But the additional stability that can be observed today also stems from the firmly established civil service (which in Germany arose roughly during the Prussian reforms), since public posts are no longer filled by lot. The flip side is a kind of caste system. For all its weaknesses, though, democracy in its present form is a working form of government. It can bear being questioned, defended and improved, and bear that nobody takes it for granted. The abysses that lead away from it can be seen clearly, many times over, in history.

What is a Brachistochrone?

The cycloid was a core object of mathematical study during the development of calculus and, before that, of geometry. It arises as a special case of the curves considered in astronomy, and it served as a challenge posed to competitors during the rise of the analytic method.

Let us have a look at its definition first, and at what it describes heuristically. The cycloid is the planar curve parametrized via

\displaystyle\begin{pmatrix}x(t)\\y(t)\end{pmatrix} = \begin{pmatrix}r(t-\sin t)\\r(1-\cos t)\end{pmatrix},\qquad t\in[0,2\pi], \quad r>0.

It is the curve traced by a point on the periphery of a circle that is rolled along the x-axis. The parametrization follows like this:

The circle rolls along the x-axis without slipping; after turning by the angle t, the rolled-off arc gives OA = rt and \sphericalangle PMD = t. As \sin t = \frac{PD}{r}, we get BA = r\sin t. Now, the x-component of P is

\displaystyle x = OB = OA-BA = rt-r\sin t = r(t-\sin t).

For the y-component of P, we have BP = AD = AM-DM = r- r\cos t, and thus

\displaystyle y = BP = r(1-\cos t).

The cycloid can be considered as a special case of the epicycloid. Those were a matter of interest in the pre-Kepler era, when astronomers tried to explain the motion of the planets in the night sky. As they admitted only perfect circles as orbits (as opposed to the ellipses that they actually are), and as they postulated the Earth to be in the center of all those orbits, it was tricky to explain away the observed variations in arc speed and the loops that the planets sometimes take. The solution was to imagine the planets circling around Earth, but with the center of another circle travelling on this first circle, and the planets moving on that second circle. Thus, the planets travelled on an epicycle; and sometimes one of those wasn’t enough (“saving the phenomena”).

A first simplification of this theory came from Copernicus who dropped the assumption that the Earth would be in the center of all things, but who didn’t get rid of the perfect circles. For the final resolution, the world had to wait for Kepler and Tycho Brahe. But anyway, that’s not why we’re here.

The cycloid is the curve on which a point of mass will travel the quickest if it just rolls along it, drawn by gravitation only (friction being disregarded); from the Greek, such a curve is called the “brachistochrone“. It is remarkable that this quickest path is not the shortest path – there are quicker ways than a straight line. There is some sort of trade-off between gaining speed quickly and keeping the path sufficiently short. We will look at two different approaches to prove that the cycloid can do this trick.

Another remarkable property of the cycloid is being the “tautochrone“: if points of mass are placed anywhere on this curve, they will travel to its lowest point in exactly the same amount of time. Points that are farther away will gain more speed in order to close the distance. This is a highly interesting property for building a pendulum: no matter how big the amplitude, the frequency will always be the same. This, in turn, is the core feature of an exact clock, which was a sort of holy grail for scientists during the 17th century (not just for ship navigation). This property was found by Huygens, who did not yet have calculus methods at his disposal (his solution is hidden in quite cumbersome geometry).

More on this curve and some very nice experiments may be seen in a YouTube video from the highly interesting channel Vsauce. I especially love the excitement of both guys when they actually see these properties of the cycloid curve in action.

The brachistochrone problem was posed by Johann Bernoulli in a journal as a quest for the most enlightened mathematicians of the world (“acutissimis mathematicis qui toto orbe florent“). We will see his very elegant approach right below. His brother Jacob found a more general approach, but his train of thought is much more cumbersome – we will see a modernized simplification of it later. Both brothers engaged in an unfriendly competition by posing problems like this one to each other, always hoping for each other’s errors to gloat over. In retrospect, both of them advanced the applications of calculus as it was being conceived; note, however, that very many of the things named after Bernoulli (Bernoulli numbers, the Bernoulli distribution, the Law of Large Numbers) have come from Jacob, not from Johann. But the other enlightened mathematicians of the time also arrived at the solution, particularly Leibniz and Newton, who are both said to have found it within a few hours, and both of whom appreciated the beauty of the problem.

Now, let’s see how Johann came to his solution. We will look at some physical properties first.

 

The Speed Lemma: Consider a point of mass m that travels without friction along any sort of curve in \mathbb{R}^2, the only force acting on it being gravitation. Let g be the gravitational acceleration. Then, when it has travelled down a height h, its speed is v = \sqrt{2gh}.

Proof: As physics tells us, the sum of kinetic and potential energy is constant. One may prove this mathematically by doing very basic integration and thinking of Newton’s second axiom (the one with force, mass and acceleration); we won’t go into this. Now, the kinetic energy is \frac12mv^2 (for physicists that’s the definition, for mathematicians that’s an easy lemma), while the potential energy is mgh. We set our zero-level such that the potential energy vanishes once the point of mass has travelled down the height h. By our set-up, the point of mass has no speed in the beginning and hence no kinetic energy. We have found

\displaystyle  0 + mgh = E_{\mathrm{kin}}^{\mathrm{start}} + E_{\mathrm{pot}}^{\mathrm{start}} = E_{\mathrm{kin}}^{\mathrm{end}} + E_{\mathrm{pot}}^{\mathrm{end}} = \frac12mv^2 + 0,

which means

\displaystyle v^2 = 2gh,

which was to be shown. q.e.d.

 

One might wonder whether there is some problem here: the speed formula does not depend on the kind of curve that the point of mass moves on. Indeed, without friction there is no problem. One can argue in an entirely different way, by decomposing the gravitational force into a force directed along the (derivative of the) curve and a normal force orthogonal to it. This decomposed force is of course smaller than the full gravitation and hence imparts less acceleration to our point. In turn, one can compute the time it takes the point to travel down to height h, and the speed that it has gained by then. As physics is consistent in itself (surprise!), we arrive at the same result that we gained via kinetic and potential energy. Not being a physicist, I can’t tell with certainty if this connection just stems from a little proof that I didn’t see, or if this is some sort of recognition that the world actually behaves responsibly and rationally. I won’t even start to question this here.

 

The Time Lemma: Consider the same setting as in the Speed Lemma. On top of that, let the curve on which our point travels be given by a differentiable function y = f(x), with y measured downwards, so that f(x)>0 is the height fallen at x. Let the point travel from (0,0) to (b,d). The time it takes for this is

\displaystyle T = \int_0^b\sqrt{\frac{1+(f'(x))^2}{2gf(x)}}\mathrm{d}x.

 

Proof (by a little hand-waving): Consider any point (x,y) on the curve, with x\in(0,b). The infinitesimal time our point of mass spends in (x,y) is

\displaystyle \mathrm{d}t = \frac{\mathrm{d}s}{v(x)} = \frac{\sqrt{(\mathrm{d}x)^2 + (\mathrm{d}y)^2}}{\sqrt{2gf(x)}} = \sqrt{\frac{1+(\mathrm{d}y/\mathrm{d}x)^2}{2gf(x)}}\mathrm{d}x.

Taking a leap of faith and integrating this (which is supposed to amount to the sum of all such infinitesimal times) gives

\displaystyle T = \int_0^b \sqrt{\frac{1+(f'(x))^2}{2gf(x)}}\mathrm{d}x.

In a post on physical interpretations of mathematics, a little physical computation can’t be too wrong now, can it. q.e.d.
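
The Time Lemma invites a numerical experiment. The following sketch compares the straight line with the cycloid between the (arbitrarily chosen) endpoints (0,0) and (\pi,2); the closed-form travel time of the cycloid used below is a small side computation, spelled out in the comments, not something derived in the text:

```python
import numpy as np
from scipy.integrate import quad

# Travel times from (0,0) to (b,d) = (pi, 2), y measured downwards as in the
# Time Lemma; the endpoints are an arbitrary illustrative choice.
g, b, d = 9.81, np.pi, 2.0

# Straight line: f(x) = (d/b) x; the integrand has an integrable singularity
# at x = 0, which quad copes with.
m = d / b
T_line, _ = quad(lambda x: np.sqrt((1 + m**2) / (2 * g * m * x)), 0.0, b)

# Cycloid x = r(t - sin t), y = r(1 - cos t): here ds = 2 r sin(t/2) dt and
# v = sqrt(2 g y) = 2 sqrt(g r) sin(t/2), so ds/v = sqrt(r/g) dt and the time
# from t = 0 to t = pi is simply pi * sqrt(r/g).  With r = 1 the cycloid ends
# exactly at (pi, 2).
r = 1.0
T_cycloid = np.pi * np.sqrt(r / g)

print(T_line, T_cycloid)   # roughly 1.19 s versus 1.00 s: the cycloid wins
```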

 

The Reflection Principle: Consider a ray of light travelling in \mathbb{R}^2 from point (0,h_1) to point (a,h_2), being reflected somewhere on the x-axis. The resulting angles of reflection \alpha and \beta are equal.

Proof: The underlying physical principle is that the reflected ray follows the path of minimal length. A mathematician would take this as an axiom; a physicist will consider it granted by the way nature behaves. Let’s go with it: if the ray of light is reflected at the point (x,0), the length of the chosen path is

\displaystyle L(x) = \sqrt{x^2+h_1^2} + \sqrt{(x-a)^2+h_2^2},

hence we look for some x with

\displaystyle 0 = L'(x) = \frac{x}{\sqrt{x^2+h_1^2}} + \frac{x-a}{\sqrt{(x-a)^2+h_2^2}} = \sin\alpha - \sin\beta.

As \alpha, \beta\in(0,\frac\pi2) for obvious reasons, and the sine is injective there, this gives \alpha=\beta. q.e.d.

 

The Refraction Lemma: Consider a ray of light changing the medium in which it travels. Let the speeds of light in those media be c_1 and c_2. The resulting angles of refraction have a constant proportion:

\displaystyle\frac{\sin\alpha}{\sin\beta} = \frac{c_1}{c_2}.

Proof: Now the speed of light becomes relevant, and the physical principle (Fermat’s principle) is to find the path of minimal time. By the basic laws relating time and speed, we get

\displaystyle T(x) = \frac{\sqrt{x^2+h_1^2}}{c_1} + \frac{\sqrt{(x-a)^2+h_2^2}}{c_2}

and we look for some x with

\displaystyle 0 = T'(x) = \frac1{c_1}\frac{x}{\sqrt{x^2+h_1^2}} + \frac1{c_2}\frac{x-a}{\sqrt{(x-a)^2+h_2^2}} = \frac1{c_1}\sin\alpha - \frac1{c_2}\sin\beta.

The lemma is proved. q.e.d.

 

We now have all ingredients to follow Johann Bernoulli’s idea for finding the brachistochrone. The basic question is: what is the quickest path for a point of mass to take, if it is to travel from one point in the plane, (0,0) say, to another one, (b,d)? Johann’s ingenious idea was to compare this to the path that a ray of light would take – as we have postulated, the ray of light chooses the quickest path as well. The acceleration may stem from gravitation, or the path may result from the change of media, but the aim is the same; as Bernoulli wrote: “who would deny us to replace one approach by the other?”

Hence, let us consider a “continuous” change of media, for instance as a limit of ever finer layers of media for the ray of light to traverse. As the Refraction Lemma showed, the quotient \frac{\sin\alpha}{v} stays constant across all layers. By the Speed Lemma, our point of mass has gained the speed v=\sqrt{2gy} once it has arrived at level y, being accelerated by gravitation only.
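
As an aside, this layered picture can be simulated directly: stack thin horizontal layers, let the speed at depth y be \sqrt{2gy}, and keep \sin\alpha/v constant as the Refraction Lemma dictates. A minimal sketch (the constant k and the layer thickness are arbitrary illustrative choices; \alpha is measured against the vertical, so \mathrm{d}x=\tan\alpha\,\mathrm{d}y):

```python
import numpy as np

g = 9.81
k = 0.1                              # the constant value of sin(alpha)/v
y_max = 1.0 / (2 * g * k**2)         # depth at which sin(alpha) would reach 1
dy = 1e-4                            # layer thickness

ys = np.arange(dy, y_max, dy)        # the layers
sin_a = k * np.sqrt(2 * g * ys)      # Snell: sin(alpha) = k * v in each layer
x_end = np.sum(dy * sin_a / np.sqrt(1.0 - sin_a**2))   # dx = tan(alpha) dy

# The path bottoms out at depth y_max; for the cycloid with 2r = y_max the
# lowest point sits at x = pi * r, and indeed:
print(x_end, np.pi * y_max / 2)
```

So much for the simulation; the calculus below makes this limit exact.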

Now, using the designations of the following picture (note that \beta=\frac\pi2-\alpha),

\begin{aligned}    \displaystyle \sin\alpha = \cos\beta = \sqrt{\frac{\cos^2\beta}{\cos^2\beta+\sin^2\beta}} = \sqrt{\frac1{1+\tan^2\beta}} = \sqrt{\frac1{1+(\frac{\mathrm{d}y}{\mathrm{d}x})^2}}.    \end{aligned}

In particular,

\displaystyle\sin\alpha = \frac{1}{\sqrt{1+(y')^2}}.

As \frac{\sin\alpha}{v} is constant, we find the differential equation

\displaystyle \frac{1}{\sqrt{1+(y')^2}} = k\cdot v = k\sqrt{2gy},

or equivalently,

\displaystyle y' = \sqrt{\frac{1}{2gk^2y}-1} = \sqrt{\frac{1-2gk^2y}{2gk^2y}} = \sqrt{\frac{\frac{1}{2gk^2} - y}{y}}.

By setting a:=\frac{1}{2gk^2} and by separation of variables,

\displaystyle x+C = \int\sqrt{\frac{y}{a-y}} \mathrm{d}y.

Then, we substitute y(s):=a\sin^2s, yielding \frac{\mathrm{d}y}{\mathrm{d}s} = 2a\sin s \cos s,

\displaystyle    \begin{aligned}    \int\sqrt{\frac{a\sin^2 s}{a-a\sin^2s}}2a\sin s\cos s \mathrm{d}s &= 2a\int\sqrt{\frac{\sin^2s}{1-\sin^2s}}\sin s\cos s \mathrm{d}s \\    &= 2a\int\sin^2s\mathrm{d}s.    \end{aligned}

This integral can be readily solved via partial integration:

\displaystyle \int\sin^2s\mathrm{d}s = -\sin s\cos s + \int\cos^2s\mathrm{d}s = -\sin s\cos s + s - \int\sin^2s\mathrm{d}s,

meaning

\displaystyle \int\sin^2s\mathrm{d}s = \frac12(s-\sin s\cos s).\qquad(\spadesuit)

Altogether we have found (note that we do not re-substitute y for s, since we are not interested in a parametrization like x=x(y))

\displaystyle x+C = 2a\frac12(s-\sin s\cos s) = \frac a2\bigl(2s-\sin(2s)\bigr).

As we can set our coordinates such that x(0)=0 (the point will begin its voyage in (0,0)), we get C=0. This shows

\displaystyle    \begin{aligned}    x(s) &=  \frac a2\bigl(2s-\sin(2s)\bigr),\\    y(s) &= a\sin^2s = \frac a2\bigl((1-\cos^2s)+\sin^2s\bigr) = \frac a2\bigl(1-\cos(2s)\bigr).    \end{aligned}

Setting r:=\frac a2 and t:=2s, we retrieve the standard parametrization of the cycloid:

\displaystyle x(t) = r(t-\sin t),\qquad y(t) = r(1-\cos t).

The brachistochrone must be a cycloid.
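
As a quick sanity check, one can verify numerically that this parametrization indeed solves the differential equation y' = \sqrt{\frac{a-y}{y}} we started from (a = 2 is an arbitrary illustrative value):

```python
import numpy as np

# Check that the cycloid parametrization solves y' = sqrt((a - y)/y) on the
# first arc.
a = 2.0
s = np.linspace(0.1, np.pi / 2 - 0.1, 100)

dy_ds = 2 * a * np.sin(s) * np.cos(s)        # derivative of y(s) = a sin^2 s
dx_ds = a * (1 - np.cos(2 * s))              # derivative of x(s) = a/2 (2s - sin 2s)
dy_dx = dy_ds / dx_ds                        # chain rule: y'(x) = y'(s) / x'(s)

rhs = np.sqrt((a - a * np.sin(s)**2) / (a * np.sin(s)**2))
print(np.max(np.abs(dy_dx - rhs)))           # ~ 1e-16: both sides agree
```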

 

But now for a completely different approach. The brachistochrone can also be found via calculus of variations, which is considerably harder, from a technical point of view, than what we did above. On the other hand, these techniques can be applied to a much broader spectrum of problems. We can only sketch many of the issues here.

Historically, the brachistochrone problem was the starting point of the calculus of variations. Jacob Bernoulli solved the problem with methods like this – much more general, but much less elegant, than his brother’s.

At the core is the observation that we wish to minimize a functional

\displaystyle J(f) = \int_a^bF\bigl(t, f(t), f'(t)\bigr) \mathrm{d}t,

over the set M:=\{f:[a,b]\to\mathbb{R}\colon f(a)=c, f(b)=d, f\in\mathcal{C}^2\}. Is there some g\in M such that J(g)\leq J(f) for all f\in M?

We consider the function F as a function of three arguments, written F(t,y,p). The inputs y and p will play the roles of the solution function and its derivative, respectively.

Notice that we restrict ourselves already to smooth functions f\in\mathcal{C}^2. From a physical point of view, there is no reason why the brachistochrone shouldn’t be just continuous. However, tougher mathematics would be necessary to track down this one.

If the space M is well-behaved, usual compactness arguments tell us that there is a minimum. But it is much harder to pinpoint.

 

Theorem (Euler-Lagrange; tiny special case): A necessary condition for a \mathcal C^2-function g to be a solution to the minimization problem is

\displaystyle \frac{\partial}{\partial y}F\bigl(t,g(t),g'(t)\bigr) - \frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial}{\partial p}F\bigl(t,g(t),g'(t)\bigr) = 0.

In its expanded form, this is (dropping the arguments for reasons of better legibility)

\displaystyle \frac{\partial F}{\partial y}- \frac{\partial^2F}{\partial t\partial p}-\frac{\partial^2F}{\partial y\partial p}\cdot g'(t)-\frac{\partial^2F}{\partial p^2}\cdot g''(t) = 0.

 

Proof: Let g be the minimum and let \eta\in\mathcal{C}^2 with \eta(a)=\eta(b)=0. We then consider

\displaystyle \varphi(\varepsilon):=\int_a^bF\bigl(t, g(t)+\varepsilon\eta(t), g'(t)+\varepsilon\eta'(t)\bigr) \mathrm{d}t.

Since we chose everything to be well-behaved, \varphi will be differentiable. As g minimizes the functional J, we have \varphi(0) = J(g) \leq \varphi(\varepsilon) for all \varepsilon; the differentiable function \varphi thus has a minimum at \varepsilon=0, and hence \varphi'(0) = 0. Note that the derivative here is a \frac{\mathrm{d}}{\mathrm{d}\varepsilon}; for the function g, the prime still means \frac{\mathrm{d}}{\mathrm{d}t}.

Now, let us compute this (calculemus!)

\displaystyle    \begin{aligned}    \varphi'(\varepsilon) &= \int_a^b\frac{\mathrm{d}}{\mathrm{d}\varepsilon} F\bigl(t, g(t)+\varepsilon\eta(t), g'(t)+\varepsilon\eta'(t)\bigr)\mathrm{d}t \\    &= \int_a^b \left[\frac{\partial}{\partial y}F\bigl(t, g(t)+\varepsilon\eta(t), g'(t) + \varepsilon \eta'(t)\bigr)\eta(t) +\right.\\    &\hphantom{=}\qquad\left.+ \frac{\partial}{\partial p}F\bigl(t, g(t)+\varepsilon\eta(t), g'(t)+\varepsilon\eta'(t)\bigr)\eta'(t) \right]\mathrm{d}t    \end{aligned}

Integration by parts yields, together with the fact that \eta(a) = \eta(b) = 0,

\displaystyle    \begin{aligned}    \int_a^b \frac{\partial}{\partial p}F(t,y,p)\eta'(t) \mathrm{d}t &= \left[\frac{\partial}{\partial p}F(t,y,p)\eta(t)\right]_a^b - \int_a^b\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial}{\partial p}F(t,y,p)\eta(t) \mathrm{d}t \\    &= -\int_a^b\eta(t)\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial}{\partial p}F(t,y,p) \mathrm{d}t.    \end{aligned}

We conclude

\displaystyle    \begin{aligned}    \varphi'(\varepsilon) &= \int_a^b \left[\frac{\partial}{\partial y}F\bigl(t, g(t)+\varepsilon\eta(t), g'(t)+\varepsilon\eta'(t)\bigr)\eta(t) \right.\\    &\hphantom{=}\qquad\left.- \eta(t)\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial}{\partial p} F\bigl(t, g(t)+\varepsilon\eta(t), g'(t)+\varepsilon\eta'(t)\bigr) \right]\mathrm{d}t\\    &= \int_a^b\eta(t)\left[\frac{\partial}{\partial y}F\bigl(t, g(t)+\varepsilon\eta(t), g'(t)+\varepsilon\eta'(t)\bigr)\right.\\    &\hphantom{=}\qquad\left. - \frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial}{\partial p} F\bigl(t, g(t)+\varepsilon\eta(t), g'(t)+\varepsilon\eta'(t)\bigr)\right]\mathrm{d}t.    \end{aligned}

This expression must vanish for \varepsilon=0, as we demand \varphi'(0)=0 if g is a solution to the minimization problem. Since the function \eta is arbitrary, the expression in brackets will have to vanish entirely. Formally, one can see this by contradiction: if the bracketed expression did not vanish at some point t_0, then by continuity there would be some interval [t_1,t_2]\subset [a,b] on which it does not vanish at all and hence keeps a fixed sign. Setting \eta(t):=(t-t_1)^4(t-t_2)^4 on this interval and \eta:=0 outside (the fourth powers make \eta a \mathcal{C}^2-function), the integrand keeps a fixed sign on [t_1,t_2] and vanishes elsewhere, so the integral cannot be 0. Contradiction to \varphi'(0)=0.

Evaluating at \varepsilon=0, the statement follows. We have thus proved the Euler-Lagrange equation in this particular case. q.e.d.

 

Notice that we didn’t speak about sufficient conditions. That would stretch this text too far – let’s ignore it.
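
To see the machinery at work on the simplest possible example (a standard sanity check, not needed for the sequel): take the arc-length functional, i.e. F(t,y,p)=\sqrt{1+p^2}. Here \frac{\partial F}{\partial y}=0, so the Euler-Lagrange equation reduces to

\displaystyle \frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial}{\partial p}F\bigl(t,g(t),g'(t)\bigr) = \frac{\mathrm{d}}{\mathrm{d}t}\frac{g'(t)}{\sqrt{1+\bigl(g'(t)\bigr)^2}} = 0.

Hence \frac{g'}{\sqrt{1+(g')^2}} is constant; since p\mapsto\frac{p}{\sqrt{1+p^2}} is strictly increasing, g' itself is constant, and the only candidates for minimizers are straight lines – just what the intuition about the shortest connection between two points demands.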

 

The Simplification Lemma: In the special case where F depends only on y and p, and not directly on its first argument t, the Euler-Lagrange equation simplifies to the condition

\displaystyle\frac{\mathrm{d}}{\mathrm{d}t}\left(F\bigl(g(t), g'(t)\bigr)-g'(t)\frac{\partial}{\partial p}F\bigl(g(t),g'(t)\bigr)\right) = 0.

 

Proof: This follows by a straightforward computation:

\begin{aligned}    \displaystyle    &\hphantom{=}\frac{\mathrm{d}}{\mathrm{d}t}\left(F\bigl(g(t),g'(t)\bigr)-g'(t)\frac{\partial}{\partial p}F\bigl(g(t),g'(t)\bigr)\right) \\    &\stackrel{(\circ)}{=} \frac{\partial}{\partial y}F\bigl(g(t),g'(t)\bigr)g'(t) + \frac{\partial}{\partial p}F\bigl(g(t),g'(t)\bigr)g''(t) +\\    &\hphantom{=}\quad- g''(t)\frac{\partial}{\partial p}F\bigl(g(t),g'(t)\bigr)-g'(t)\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial}{\partial p}F\bigl(g(t),g'(t)\bigr) \\    &\stackrel{\hphantom{(\ast)}}{=} \frac{\partial}{\partial y}F\bigl(g(t),g'(t)\bigr)g'(t) - g'(t)\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial}{\partial p}F\bigl(g(t),g'(t)\bigr)\\    &\stackrel{(\ast)}{=} \left(\frac{\partial^2}{\partial y\partial p}F\bigl(g(t),g'(t)\bigr) g'(t) + \frac{\partial^2}{\partial p^2}F\bigl(g(t),g'(t)\bigr)g''(t)\right) g'(t) +\\    &\hphantom{=}\quad - g'(t)\frac{\mathrm{d}}{\mathrm{d}t}\frac{\partial}{\partial p}F\bigl(g(t),g'(t)\bigr)\\    &\stackrel{(\circ)}{=} \frac{\partial^2}{\partial y\partial p}F\bigl(g(t),g'(t)\bigr) \bigl(g'(t)\bigr)^2 + \frac{\partial^2}{\partial p^2}F\bigl(g(t),g'(t)\bigr)g'(t)g''(t) +\\    &\hphantom{=}\quad - g'(t)\left(\frac{\partial^2}{\partial y\partial p}F\bigl(g(t),g'(t)\bigr)g'(t) + \frac{\partial^2}{\partial p^2}F\bigl(g(t),g'(t)\bigr)g''(t)\right)\\    &\stackrel{\hphantom{(\ast)}}{=} \frac{\partial^2}{\partial y\partial p}F\bigl(g(t),g'(t)\bigr) \bigl(g'(t)\bigr)^2 + \frac{\partial^2}{\partial p^2}F\bigl(g(t),g'(t)\bigr)g'(t)g''(t) +\\    &\hphantom{=}\quad - \frac{\partial^2}{\partial y\partial p}F\bigl(g(t),g'(t)\bigr)\bigl(g'(t)\bigr)^2 - \frac{\partial^2}{\partial p^2}F\bigl(g(t),g'(t)\bigr)g'(t)g''(t)\\    &\stackrel{\hphantom{(\ast)}}{=} 0.    \end{aligned}

In (\ast), we have used the expanded form of the Euler-Lagrange equation, together with the fact that in the present special case \frac{\partial}{\partial t}F=0; in (\circ), we have used the chain rule all by itself. Throughout, we have used that g is a solution to the Euler-Lagrange equation and is the function plugged into F. q.e.d.

 

Now that we have the ingredients, let’s try and find the brachistochrone by calculus of variations. By the Time Lemma, we want to minimize the expression

\displaystyle \int_0^b F\bigl(f(x), f'(x)\bigr)\mathrm{d}x,\qquad F(y,p) = \sqrt{\frac{1+p^2}{2gy}}.

By the Simplification Lemma, any solution \varphi will satisfy

\displaystyle    \begin{aligned}    c &= \sqrt{\frac{1+(\varphi'(x))^2}{2g\varphi(x)}} - \varphi'(x)\frac{\varphi'(x)}{2g\varphi(x)}\sqrt{\frac{2g\varphi(x)}{1+(\varphi'(x))^2}}\\    &= \sqrt{\frac{1+(\varphi'(x))^2}{2g\varphi(x)}}\left(1-\frac{(\varphi'(x))^2}{1+(\varphi'(x))^2}\right)\\    &= \sqrt{\frac{1+(\varphi'(x))^2}{2g\varphi(x)}}\frac1{1+(\varphi'(x))^2}\\    &= \frac1{\sqrt{2g\varphi(x)}\sqrt{1+(\varphi'(x))^2}},    \end{aligned}

which means

\displaystyle \varphi(x)\left(1+\bigl(\varphi'(x)\bigr)^2\right) = \frac1{2gc^2} =: C\qquad\qquad(\clubsuit)

Here, \varphi is a solution depending on x. On the other hand, we look for a parametrization of a curve in \mathbb{R}^2; hence we try to find both functions x(t) and y(t), which are connected via \varphi(x) = \varphi\bigl(x(t)\bigr) = y(t). We set, by divine insight,

\displaystyle y(t) = C\frac{1-\cos t}{2} = C\sin^2\frac t2.

The chain rule then says \frac{\mathrm{d}}{\mathrm{d}t}\varphi\bigl(x(t)\bigr) = \frac{\mathrm{d}}{\mathrm{d}x}\varphi(x)\frac{\mathrm{d}}{\mathrm{d}t}x(t), and hence

\displaystyle \frac{\mathrm{d}x}{\mathrm{d}t} = \frac{\frac{\mathrm{d}}{\mathrm{d}t}\varphi\bigl(x(t)\bigr)}{\frac{\mathrm{d}}{\mathrm{d}x}\varphi(x)} = \frac{\frac{\mathrm{d}}{\mathrm{d}t}y(t)}{\frac{\mathrm{d}}{\mathrm{d}x}\varphi(x)}.

By (\clubsuit),

\displaystyle    \begin{aligned}    \frac{\mathrm{d}}{\mathrm{d}x}\varphi(x) = \frac{\mathrm{d}}{\mathrm{d}x}\varphi\bigl(x(t)\bigr) &= \sqrt{\frac{C}{\varphi(x(t))} - 1} \\    &= \sqrt{\frac{C}{C\sin^2\frac t2}-1} \\    &= \sqrt{\frac{1-\sin^2\frac t2}{\sin^2\frac t2}} \\    &= \cot\frac t2.    \end{aligned}

Altogether,

\displaystyle    \begin{aligned}    \frac{\mathrm{d}x}{\mathrm{d}t} = \frac{\frac{\mathrm{d}}{\mathrm{d}t}y(t)}{\frac{\mathrm{d}}{\mathrm{d}x}\varphi(x)} &= \frac{C\sin\frac t2\cos\frac t2}{\cot\frac t2} \\    &= C\sin\frac t2\cos\frac t2\,\frac{\sin\frac t2}{\cos\frac t2} \\    &= C\sin^2\frac t2.    \end{aligned}

We have almost integrated this one before, in (\spadesuit); the substitution s(t)=\frac t2 yields

\displaystyle    \begin{aligned}    x(t) = C\int\sin^2\frac t2 \mathrm{d}t = 2C\int\sin^2(s)\mathrm{d}s &= 2C\frac{s-\sin s\cos s}{2} \\    &= C\left[s-\frac12\sin(2s)\right] \\    &= \frac C2(t-\sin t).    \end{aligned}

This shows that any solution to the minimization problem must look like

\displaystyle\begin{pmatrix}x(t)\\y(t)\end{pmatrix} = \begin{pmatrix}\frac C2(t-\sin t)\\\frac C2(1-\cos t)\end{pmatrix},

and is hence a cycloid. What we haven’t proved is that it actually is a solution to the minimization problem – we didn’t speak about sufficient conditions for Euler-Lagrange, nor about regularity of our set M, and we only considered \mathcal C^2-functions in the first place (I won’t even go into the physical hand-waving). But anyway, the little tricks and the big technical machinery make both approaches really insightful and interesting. This makes it a good place to end.

Der Sommer des Jahrhunderts

A few years ago, a trend started of writing entire books about single years. These books are meant to commemorate the “round birthdays” of great events, say the 100th anniversary of the First World War, the 200th anniversary of the Congress of Vienna, or the 50th anniversary of the student revolution. The book 1913 – Der Sommer des Jahrhunderts by Florian Illies fits into this series. It differs, however, in that it is not a historical survey. On the contrary, it is a loosely assembled kaleidoscope of feuilletonistic episodes from the year 1913 – a year that is essentially only of relevance because of our background knowledge about the year that followed.

Amid the hype around 2013, I bought this book out of curiosity, and I did not regret it at the time. Now I have read it again on a whim, and I am vividly reminded of the saying: a book that one has read only a single time, one has read either once too often or once too rarely. Here, the latter is the case: a lot of little pearls are hidden in this book that only really reveal themselves upon repeated reading – lege, lege, relege et invenies. And it is entertaining, too. The brevity of its countless episodes somewhat tempts one to read it in many very short sittings. That is possible, but not wise. The kaleidoscope unfolds its splendour by displaying its whole diversity – and that is only possible by perceiving all the many episodes side by side.

Illies writes with all the literary methods that a novel would put at his disposal. Besides his various recurring characters, he also employs all kinds of stylistic devices and picks up themes across episodes – for instance the quotation “The rest is silence” from the academic dispute between Freud and C.G. Jung, which breaks out at the beginning of the year and can never be patched up again. The sentence “The rest is silence” keeps reappearing afterwards, abruptly but never inappropriately, and thus spans an arc across many scenes and across the whole year. Indeed, the book is called a novel in some reviews, but it quite rightly does not carry this self-designation, and it was listed among the non-fiction books in the Spiegel bestseller list.

It can hardly come as a surprise that the book seems to be taken straight from the feuilleton: its author Illies was head of the feuilleton section of the FAZ for years. He had already collected observations similar to those about the year before the outbreak of the First World War, from a different perspective, about the time of his own youth: in his first book Generation Golf.

Illies designs a panorama of the year 1913, ordered by months and with recurring protagonists. He takes his basic framework from true events, which he embellishes slightly and complements with sparing invention. In this way, pretty character studies emerge, for instance of Franz Kafka, Ernst Jünger and Sigmund Freud, in whose footsteps Illies follows. He switches back and forth between the personalities who were already famous then, like Freud, Einstein or Albert Schweitzer, and he also includes people who were still completely unknown in 1913 and would only later become relevant for the course of the world: besides Kafka, for example, Hitler and Stalin. About the latter two, Illies invents the episode that they might have met on a walk through Vienna in January 1913 – what is beyond dispute is the fact that the two never again came as close to one another as in that month.

Splendid is the close accompaniment of Kafka through the year, made possible above all by his extensive correspondence with his fiancée Felice Bauer. Promptly, when the two meet in person for a weekend, nothing more can be said about them, since no letters are being written at that moment. But apart from these few days, a 360° view emerges of a mercilessly neurotic and insecure Kafka, who even in his marriage proposal lists page after page of reasons why Felice should under no circumstances marry him (which, indeed, she did not).

In general, a disproportionate number of the book’s actors move in artistic circles; the German Emperor makes an appearance, but his Chancellor does not (not that this would be a loss). This is owed to the feuilleton character of the book, but it demands a certain amount of Wikipedia time from the merely generally educated reader who does not happen to know the art history of Cubism and Futurism in depth. Details about the literary Mann family (in which Thomas is just beginning the Zauberberg and Heinrich has just finished the Untertan), about the Viennese poet Georg Trakl, or about the teacher James Joyce (who is encouraged in Trieste to write his Ulysses, which he indeed takes up the following year) are also easier to digest with a certain foundation in Wikipedia.

A beautiful self-reflexive moment of the book is the remark that in this very year 1913 the creator of the Kulturfahrplan is born. In tabular form, Illies’ work could be poured into the Kulturfahrplan, and with a little literary embellishment the Kulturfahrplan would correspond to Illies’ book. A truly aesthetic fixed point, for my taste.

Mostly, Illies maintains a strict contemporary perspective. What happens after 1913 is unknown to his actors and is mostly blanked out by him as well. Now and then, however, he breaks out of this corset, sometimes with a wink, sometimes prophetically. He thereby manages to point out that the year is only of interest today in contrast to the First World War (surely the year 1912 would have had more spectacular events to offer – but its distance to the World War is greater, and the book surely gains precisely from the insignificance and everydayness of much of its content).

Occasionally, Illies takes the bird’s eye view and looks, detached from all episodes, at the four centres of modernity (Paris, Berlin, Munich and Vienna) and their different views of the world. Another time, he quotes the art critic Meier-Graefe and spins from it the apt (and almost breathless) insight: “‘At the name Picasso, the historian of the future will pause and note: here it came to a stop.’ The end. Unimaginable that after Cubism’s shattering of form things could go on once more. The great author, perhaps the most fiery stylist of art criticism of the century, a master of narrating the ‘development’ of art, quite soberly sees it now arrived at its end. There, where we today see its beginning.”

Over time, the kaleidoscope of 1913 assembles itself into a complete picture, a panorama of the epoch. The time before the First World War is highly ambivalent, and that is what makes its appeal from today’s perspective: both highly modern and backward-looking; both morally strictly conservative and testing and crossing every boundary. Quite rightly, Illies does not conjure up the “evening glow” that is so readily invoked in hindsight: from the contemporaries’ point of view, the end of their world through the war was not foreseeable – on the contrary. A great war was considered increasingly improbable; the world and the economy were interwoven and interconnected almost as in our present time. Culture strode from highlight to highlight, the Fin de Siècle was over, the many art movements pressed ahead and became ever more abstract.

Illies does not want to let a certain mood of doom escape him, though. He quotes the end-of-the-world scenarios that C.G. Jung dreams of, and he cites a contemporary novella describing a tense duel, “sensitive and fine-skinned like a fruit ripened on the southern slope” – out of which he turns 1913 into the year “on the southern slope of history”. As if he wanted to be able to see the downfall on the horizon that had to remain invisible to the contemporaries. But he does not have to search long for latent depression among his artists; in the face of ever onward-storming modernity, that sensitivity comes quite naturally to them. And indeed, some contemporaries were weary of their surroundings – in whatever shape, they wanted a change.

This shall suffice for a rough impression. Everything else can only be experienced through the book itself – and one must experience this book, so as to actually immerse oneself in the year 1913 (or in what Illies has made of it through his selection and his point of view). A book that will be worth reading again after a little distance.

Brouwer’s Fixed Point Theorem

Recently, we have concluded the text on the Transformation formula, remarking that it was a tool in an elementary proof of Brouwer’s Fixed Point Theorem. Let’s have a closer look at that.

Brouwer’s Fixed Point Theorem is at the core of many insights in topology and functional analysis. Like many other powerful theorems, it can be stated and understood very easily; the proof, however, is quite deep. In particular, the conclusions that are drawn from it are considered even deeper. As we shall see, Brouwer’s theorem can be shown in an elementary fashion, where the Transformation Formula, the Inverse Function Theorem and Weierstrass’ Approximation Theorem are the toughest stepping stones; note that we have given a perfectly elementary proof of Weierstrass’ Theorem before. This makes Brouwer’s theorem accessible to undergraduate calculus students (even though, of course, these stepping stones already mean bringing the big guns to the fight). The downside is that the proof, even though elementary, is quite long. The undergraduate student needs to exercise some patience.

 

Theorem (Brouwer, 1910): Let K:=\{x\in\mathbb{R}^p\colon \left|x\right|\leq1\} be the compact unit ball, and let f:K\to K be continuous. Then f has a fixed point, i.e. there is some x\in K such that f(x)=x.

 

There are many generalizations of the Theorem, considering more complex sets instead of K, and taking place in infinite-dimensional spaces. We shall get back to that later. First, we shall look at a perfectly trivial and then a slightly less trivial special case.

 

For p=1, the statement asks to find a fixed point for the continuous mapping f:[0,1]\to[0,1]. W.l.o.g. we have shrunk the set K to [0,1] instead of [-1,1] to avoid some useless notational difficulty. This is a standard exercise on the intermediate value theorem with the function g(x):=f(x)-x. Either f(1)=1 is a fixed point, or else f(1)<1, meaning g(1)<0, while g(0)=f(0)\geq0. As g is continuous, the intermediate value theorem yields a zero of g: 0=g(\xi) = f(\xi)-\xi, and hence f(\xi)=\xi. q.e.d. (p=1)
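
For those who like to see this argument run, here is a minimal numerical sketch of the p=1 case (the choice f=\cos is an arbitrary example; it maps [0,1] into [\cos 1,1]\subset[0,1]). We bisect on the sign of g(x)=f(x)-x, exactly as in the proof above.

import math

def fixed_point_bisection(f, a=0.0, b=1.0, tol=1e-12):
    # invariant from the proof: g(a) >= 0 and g(b) <= 0 for g(x) = f(x) - x
    g = lambda x: f(x) - x
    while b - a > tol:
        m = 0.5 * (a + b)
        if g(m) > 0:
            a = m          # the zero of g lies to the right of m
        else:
            b = m          # the zero of g lies to the left of m (or at m)
    return 0.5 * (a + b)

xi = fixed_point_bisection(math.cos)
print(xi, math.cos(xi))    # both are approximately 0.7390851332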

 

For p=2, things are still easy to see, even though a little less trivial. This is an application of homotopy theory (even though one doesn’t need to know much about it). The proof is by contradiction, however. We will show an auxiliary statement first: there is no continuous mapping h:K\to\partial K which is the identity on \partial K, i.e. h(x)=x for x\in\partial K. If there were such an h, we would set

\displaystyle H(t,s):=h(se^{it}), \qquad t\in[0,2\pi], s\in[0,1].

H is a homotopy of the constant curve h(0) = H(t,0) to the circle e^{it} = h(e^{it}) = H(t,1). This means, we can continuously transform the constant curve to the circle. This is a contradiction, as the winding number of the constant is 0, but the winding number of the circle is 1. There can be no such h.

Now, we turn to the proof of the actual statement of Brouwer’s Theorem: If f had no fixed point, we could define a continuous mapping as follows: let x\in K, and consider the line through x and f(x) (which is well-defined by assumption). This line crosses \partial K in the point h(x); actually, there are two such points, and we shall use the one that is closer to x itself. Clearly, h(x)=x for x\in\partial K. By the auxiliary statement, there is no such h and the assumption fails. f must have a fixed point. q.e.d. (p=2)

 

For the general proof, we shall follow the lines of Heuser, who found this elementary fashion in the 1970s and made it accessible in the second volume of his book on calculus. It is interesting that most of the standard literature for undergraduate students shies away from any proof of Brouwer’s theorem. Often, the theorem is stated without proof, and then some conclusions and applications are drawn from it. Sometimes, a proof via differential forms is given (such as in Königsberger’s book, where it is somewhat downgraded to an exercise after the proof of Stokes’ Theorem), which I wouldn’t call elementary because of the theory that needs to be developed first. The same holds for proofs using homology groups and the like (even though this is one of the simplest fashions to prove the auxiliary statement given above – it was done in my topology class, but this is by no means elementary).

A little downside is the non-constructiveness of the proof we are about to give. It is a proof by contradiction, and it won’t give any indication of how to find the fixed point. For many applications, even the existence of a fixed point is already a gift (think of Peano’s theorem on the existence of solutions to a differential equation, for instance). On the other hand, there are constructive proofs as well, a fact that is quite in the spirit of Brouwer.

In some way, the basic structure of the following proof is similar to the proof that we gave for the case p=2. We will apply the same reasoning that concluded the proof for the special case (after the auxiliary statement); we will just add a little more formality to show that the mapping g is actually continuous and well-defined. The trickier part in higher dimensions is the counterpart of the auxiliary statement from which the contradiction followed. That auxiliary statement involved the non-existence of a certain continuous mapping called a retraction: for a subset A of a topological space X, f:X\to A is called a retraction of X to A if f(x)=x for all x\in A. We have found that there is no retraction from K to \partial K. As a matter of fact, Brouwer’s Fixed Point Theorem and the non-existence of a retraction are equivalent (we’ll get back to that at the end).

The basic structure of the proof is like this:

  • we reduce the problem to polynomials, so we only have to deal with those functions instead of a general continuous f;
  • we formalize the geometric intuition that came across in the special case p=2 (this step is in essence identical to what we did above): based on the assumption that Brouwer’s Theorem is wrong, we define a mapping quite similar to a retraction of K to \partial K;
  • we show that this almost-retraction is locally bijective;
  • we find, via the Transformation Formula, a contradiction: there can be no retraction and there must be a fixed point.

Steps 3 and 4 are the tricky part. They may be replaced by some other argument that yields a contradiction (homology theory, for instance), but we’ll stick to the elementary parts. Let’s go.

 

Lemma (The polynomial simplification): It will suffice to show Brouwer’s Fixed Point Theorem for those functions f:K\to K, whose components are polynomials on K and which have f(K)\subset\mathring K.

 

Proof: Let f:K\to K be continuous, with components f = (f_1,\ldots,f_p), each of which takes the arguments x_1,\ldots,x_p. By Weierstrass’ Approximation Theorem, for any \varepsilon>0 there are polynomials p_k^\varepsilon such that \left|f_k(x)-p_k^\varepsilon(x)\right| < \varepsilon, k=1,\ldots,p, for any x\in K. In particular, there are polynomials \varphi_{k,n} such that

\displaystyle \left|f_k(x)-\varphi_{k,n}(x)\right| < \frac{1}{\sqrt{p}n}\qquad\text{for any }x\in K.

If we define the function \varphi_n:=(\varphi_{1,n},\ldots,\varphi_{p,n}) which maps K to \mathbb{R}^p, we get

\displaystyle    \begin{aligned}    \left|f(x)-\varphi_n(x)\right|^2 &= \sum_{k=1}^p\left|f_k(x)-\varphi_{k,n}(x)\right|^2 \\    &< \frac{p}{pn^2} \\    &= \frac1{n^2},\qquad\text{for any }x\in K    \end{aligned}

and in particular \varphi_n\to f uniformly on K.

Besides,

\displaystyle \left|\varphi_n(x)\right|\leq\left|\varphi_n(x)-f(x)\right| + \left|f(x)\right| < \frac1n + \left|f(x)\right| \leq \frac1n + 1 =:\alpha_n.

This allows us to set

\displaystyle \psi_n(x) = \frac{\varphi_n(x)}{\alpha_n}.

This function also converges uniformly to f, as for any x\in K,

\displaystyle    \begin{aligned}    \left|\psi_n(x)-f(x)\right| &= \left|\frac{\varphi_n(x)}{\alpha_n} - f(x)\right| \\    &= \frac1{\alpha_n}\left|\varphi_n(x)-\alpha_nf(x)\right|\\    &\leq \frac1{\alpha_n}\left|\varphi_n(x)-f(x)\right| + \frac1{\alpha_n}\left|f(x)-\alpha_nf(x)\right|\\    &< \frac1{\alpha_n}\frac1n + \frac1{\alpha_n}\left|f(x)\right|\left|1-\alpha_n\right|\\    &\leq \frac1n + \frac1n\left|f(x)\right| \leq \frac2n\\    &< \varepsilon \qquad\text{for }n\gg0,    \end{aligned}

where we have used \alpha_n>1, \left|1-\alpha_n\right| = \frac1n and \left|f(x)\right|\leq1.

Finally, for x\in K, by construction, \left|\varphi_n(x)\right|<\alpha_n, and so \left|\psi_n(x)\right| = \frac{\left|\varphi_n(x)\right|}{\alpha_n} < 1, which means that \psi_n:K\to\mathring K.

The point of this lemma is that if we had shown Brouwer’s Fixed Point Theorem for every such function \psi_n:K\to\mathring K whose components are polynomials, we would have proved it for the general continuous function f. This can be seen as follows:

Supposing Brouwer’s Theorem were true for the \psi_n, there would be a sequence (x_n)\subset K with \psi_n(x_n) = x_n. As K is (sequentially) compact, there is a convergent subsequence (x_{n_j})\subset(x_n), with \lim_jx_{n_j} = x_0\in K. For sufficiently large j, we see

\displaystyle \left|\psi_{n_j}(x_{n_j})-f(x_0)\right| \leq\left|\psi_{n_j}(x_{n_j})-f(x_{n_j})\right| + \left|f(x_{n_j})-f(x_0)\right| < \frac\varepsilon2 + \frac\varepsilon2.

The first bound follows from the fact that \psi_{n_j}\to f uniformly, the second bound is the continuity of f itself, with the fact that x_{n_j}\to x_0. In particular,

\displaystyle x_0 = \lim_{j} x_{n_j} = \lim_{j} \psi_{n_j}(x_{n_j}) = f(x_0).

So, f has the fixed point x_0, which proves Brouwer’s Theorem.

In effect, it suffices to deal with functions like the \psi_n for the rest of this text. q.e.d.

 

Slogan (The geometric intuition): If Brouwer’s Fixed Point Theorem is wrong, then there is “almost” a retraction of K to \partial K.

Or, rephrased as a proper lemma:

Lemma: For f being polynomially simplified as in the previous lemma, assuming x\neq f(x) for any x\in K, we can construct a continuously differentiable function g_t:K\to K, t\in[0,1], with g_t(x)=x for x\in\partial K. This function is given via

\displaystyle g_t(x) =x + t\lambda(x)\bigl(x-f(x)\bigr),

\displaystyle \lambda(x)=\frac{-x\cdot\bigl(x-f(x)\bigr)+\sqrt{\left(x\cdot\bigl(x-f(x)\bigr)\right)^2+\bigl(1-\left|x\right|^2\bigr)\left|x-f(x)\right|^2}}{\left|x-f(x)\right|^2}.

The mapping t\mapsto g_t(x) walks along the straight line through f(x) and x, from x itself (at t=0) to the boundary \partial K (at t=1). \lambda(x) is the parameter in the straight line that defines the intersection with \partial K.

 

Proof: As we suppose Brouwer’s Fixed Point Theorem to be wrong, the continuous function \left|x-f(x)\right| is positive for any x\in K. Because of continuity, for every y\in \partial K, there is some \varepsilon = \varepsilon(y) > 0 such that still \left|x-f(x)\right|>0 on the neighborhood U_{\varepsilon(y)}(y).

Here, we are in technical need of a continuation of f beyond K. As f is only defined on K itself, we might take f(x):=f\bigl(\frac{x}{\left|x\right|}\bigr) for \left|x\right|>1. We still have \left|f(x)\right| < 1 and f(x)\neq x (the latter since \left|f(x)\right|<1<\left|x\right|), which means that we don’t get contradictions to our assumptions on f. Let’s not dwell on this for longer than necessary.

On the compact set \partial K, finitely many of the neighborhoods suffice to cover \partial K; choose a finite cover by the shrunk neighborhoods U_{\varepsilon(y_i)/2}(y_i) and set \delta := 1 + \frac12\min_i\varepsilon(y_i). Any x with 1<\left|x\right|<\delta then satisfies \bigl|x-\frac{x}{\left|x\right|}\bigr| = \left|x\right|-1 < \frac12\min_i\varepsilon(y_i); since \frac{x}{\left|x\right|}\in U_{\varepsilon(y_i)/2}(y_i) for some i, we get x\in U_{\varepsilon(y_i)}(y_i). We find: there is an open set U = U_\delta(0)\supset K with \left|x-f(x)\right| >0 for all x\in U.

Let us define for any x\in U

\displaystyle d(x):=\frac{\left(x\cdot\bigl(x-f(x)\bigr)\right)^2+\bigl(1-\left|x\right|^2\bigr)\left|x-f(x)\right|^2}{\left|x-f(x)\right|^4}.

It is well-defined by assumption. We distinguish three cases:

 

a) \left|x\right|<1: Then 1-\left|x\right|^2>0 and the numerator of d(x) is positive.

b) \left|x\right|=1: Then the numerator of d(x) is

\displaystyle \left(x\cdot\bigl(x-f(x)\bigr)\right)^2 = \bigl(x\cdot x - x\cdot f(x)\bigr)^2 = \bigl(\left|x\right|^2-x\cdot f(x)\bigr)^2 = \bigl(1-x\cdot f(x)\bigr)^2,

where by Cauchy-Schwarz and by assumption on f, we get

\displaystyle x\cdot f(x) \leq \left|x\right|\left|f(x)\right| = \left|f(x)\right| < 1\qquad (\spadesuit).

In particular, the numerator of d(x) is strictly positive.

c) \left|x\right|>1: This case is not relevant for what’s to come.

 

We have seen that d(x)>0 for all \left|x\right|\leq 1. Since d is continuous, a compactness argument similar to the one above shows that there is some V = V_{\delta'}(0)\supset K with d(x)>0 for all x\in V. Replacing \delta' by \min(\delta,\delta') if necessary, we find: d is positive and well-defined on V.

The reason why we have looked at d is not clear yet. Let us grasp at some geometry first. Let x\in V and \Gamma_x = \left\{x+\lambda\bigl(x-f(x)\bigr)\colon\lambda\in\mathbb{R}\right\} the straight line through x and f(x). If we look for the intersection of \Gamma_x with \partial K, we solve the equation

\displaystyle\left|x+\lambda\bigl(x-f(x)\bigr)\right| = 1.

The intersection “closer to” x will correspond to some \lambda\geq0.

This equation comes down to

\displaystyle    \begin{aligned}    && \left(x+\lambda\bigl(x-f(x)\bigr)\right) \cdot \left(x+\lambda\bigl(x-f(x)\bigr)\right) &=1 \\    &\iff& \left|x\right|^2 + 2\lambda x\cdot\bigl(x-f(x)\bigr) + \lambda^2\left|x-f(x)\right|^2 &=1\\    &\iff& \lambda^2\left|x-f(x)\right|^2 + 2\lambda x\cdot\bigl(x-f(x)\bigr) &= 1-\left|x\right|^2\\    &\iff& \left(\lambda+\frac{x\cdot\bigl(x-f(x)\bigr)}{\left|x-f(x)\right|^2}\right)^2 &= \frac{1-\left|x\right|^2}{\left|x-f(x)\right|^2} + \left(\frac{x\cdot\bigl(x-f(x)\bigr)}{\left|x-f(x)\right|^2}\right)^2 \\    &\iff& \left(\lambda+\frac{x\cdot\bigl(x-f(x)\bigr)}{\left|x-f(x)\right|^2}\right)^2 &= \frac{\bigl(1-\left|x\right|^2\bigr)\left|x-f(x)\right|^2+\left(x\cdot\bigl(x-f(x)\bigr)\right)^2}{\left|x-f(x)\right|^4} \\    &\iff& \left(\lambda+\frac{x\cdot\bigl(x-f(x)\bigr)}{\left|x-f(x)\right|^2}\right)^2 &= d(x).    \end{aligned}

As x\in V, d(x)>0, and hence there are two real solutions to the last displayed equation. Let \lambda(x) be the larger one (to get the intersection with \partial K closer to x), then we find

\displaystyle    \begin{aligned}    \lambda(x) &= \sqrt{d(x)} - \frac{x\cdot\bigl(x-f(x)\bigr)}{\left|x-f(x)\right|^2}\\    &= \frac{-x\cdot\bigl(x-f(x)\bigr)+\sqrt{\left(x\cdot\bigl(x-f(x)\bigr)\right)^2+\bigl(1-\left|x\right|^2\bigr)\left|x-f(x)\right|^2}}{\left|x-f(x)\right|^2}.    \end{aligned}

By construction,

\displaystyle \left|x+\lambda(x)\bigl(x-f(x)\bigr)\right| = 1,\qquad\text{for all }x\in V.\qquad(\clubsuit)

Let us define

\displaystyle g_t(x) = x+t\lambda(x)\bigl(x-f(x)\bigr),\qquad t\in[0,1],~x\in V.

This is (at least) a continuously differentiable function, as we simplified f to be a polynomial and the denominator in \lambda(x) is bounded away from 0. Trivially and by construction, g_0(x)=x and \left|g_1(x)\right| = 1 for all x\in V.

For \left|x\right|<1 and t<1, we have

\displaystyle    \begin{aligned}    \left|x+t\lambda(x)\bigl(x-f(x)\bigr)\right| &\stackrel{\hphantom{(\clubsuit)}}{=} \left|\bigl(t+(1-t)\bigr)x + t\lambda(x)\bigl(x-f(x)\bigr)\right|\\    &\stackrel{\hphantom{(\clubsuit)}}{=}\left|t\left(x+\lambda(x)\bigl(x-f(x)\bigr)\right)+(1-t)x\right|\\    &\stackrel{\hphantom{(\clubsuit)}}{\leq} t\left|x+\lambda(x)\bigl(x-f(x)\bigr)\right|+(1-t)\left|x\right|\\    &\stackrel{(\clubsuit)}{=} t+(1-t)\left|x\right|\\    &\stackrel{\hphantom{(\clubsuit)}}{<} t+(1-t) = 1\qquad (\heartsuit).    \end{aligned}

Hence, \left|g_t(x)\right|<1 for \left|x\right|<1 and t\in[0,1). This means g_t(\mathring K)\subset\mathring K for t<1.

For \left|x\right|=1, we find (notice that x\cdot\bigl(x-f(x)\bigr)>0 for \left|x\right|=1, by (\spadesuit)).

\displaystyle    \begin{aligned}    \lambda(x) &= \frac{-x\cdot\bigl(x-f(x)\bigr)+\sqrt{\left(x\cdot\bigl(x-f(x)\bigr)\right)^2}}{\left|x-f(x)\right|^2} \\    &= \frac{-x\cdot\bigl(x-f(x)\bigr)+x\cdot\bigl(x-f(x)\bigr)}{\left|x-f(x)\right|^2} = 0.    \end{aligned}

This is geometrically entirely obvious: \lambda(x) parametrizes the way from x to the intersection with \partial K; if x\in\partial K, then x itself is that intersection point, and \lambda(x)=0.

We have seen that g_t(x)=x for \left|x\right|=1 for any t\in[0,1]. Hence, g_t(\partial K)=\partial K for all t. g_t is almost a retraction, g_1 actually is a retraction. q.e.d.
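
By Brouwer’s Theorem, the hypothesis of the lemma can never actually be satisfied – but \lambda(x) is defined whenever x\neq f(x), so the geometry of (\clubsuit) and the boundary behaviour can at least be checked numerically. A minimal sketch (the constant map f\equiv(0.2,\,0.1) is a hypothetical choice; any f with f(K)\subset\mathring K would do at points x\neq f(x)):

import numpy as np

rng = np.random.default_rng(0)

def f(x):
    return np.array([0.2, 0.1])       # constant map into the interior of K

def lam(x):
    # the formula for lambda(x) from the lemma, written with b = x.(x - f(x))
    d = x - f(x)
    b = x @ d
    disc = b ** 2 + (1 - x @ x) * (d @ d)
    return (-b + np.sqrt(disc)) / (d @ d)

for _ in range(5):                    # random points of K
    x = rng.uniform(-1.0, 1.0, size=2)
    x *= rng.uniform(0.0, 1.0) / np.linalg.norm(x)
    print(np.linalg.norm(x + lam(x) * (x - f(x))))   # always 1.0, as in (♣)

x = np.array([0.0, 1.0])              # a boundary point
print(lam(x))                         # 0.0 up to rounding: g_t(x) = x there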

 

Note how tricky the general formality gets, compared to the more compact and descriptive proof that we gave in the special case p=2. The arguments of the lemma and of the special case are identical.

 

Lemma (The bijection): Let \hat K be a closed ball around 0 with K\subset\hat K\subset V. For t\geq0 sufficiently small, the function g_t is injective on \hat K and maps K bijectively onto K.

 

Proof: We first show that g_t is injective. Let us define h(x):=\lambda(x)\bigl(x-f(x)\bigr), for reasons of legibility. As we saw above, h is (at least) continuously differentiable. We have

\displaystyle g_t(x) = x+th(x),\qquad g_t'(x)=\mathrm{Id}+th'(x).

As \hat K is compact, h' is bounded by \left|h'(x)\right|\leq C, say. By enlarging C if necessary, we can take C\geq1. Now let x,y\in\hat K with g_t(x)=g_t(y). That means x+th(x)=y+th(y) and so, by the mean value theorem,

\displaystyle \left|x-y\right| = t\left|h(x)-h(y)\right|\leq tC\left|x-y\right|.

By setting \varepsilon:=\frac1C and taking t\in[0,\varepsilon), we get \left|x-y\right| = 0. g_t is injective for t<\varepsilon.

Our arguments also proved \left|th'(x)\right| < 1. Let us briefly look at the convergent Neumann series \sum_{k=0}^\infty\bigl(-th'(x)\bigr)^k, having the limit s, say. We find

\displaystyle -s\,th'(x) = \sum_{k=0}^\infty\bigl(-th'(x)\bigr)^{k+1} = s-\mathrm{Id},

which tells us

\displaystyle \mathrm{Id} = s+s\cdot th'(x) = s\bigl(\mathrm{Id}+th'(x)\bigr).

In particular, g_t'(x) = \mathrm{Id}+th'(x) is invertible, with the inverse s. Therefore, \det g_t'(x)\neq0. Since this determinant is a continuous function of t, and \det g_0'(x) = \det\mathrm{Id} = 1, we have found

\displaystyle \det g_t'(x) > 0 \text{ for any }t\in[0,\varepsilon),~x\in\hat K.

Now, let us show that g_t is surjective. As \det g_t'(x) never vanishes on \hat K, g_t is an open mapping (by an argument involving the inverse function theorem; g_t can be inverted locally in any point, hence no point can be a boundary point of the image). This means that g_t(\mathring K) is open.

Let z\in K with z\notin g_t(\mathring K); this makes z the test case for non-surjectivity. Let y\in g_t(\mathring K); there is some such y due to (\heartsuit). The straight line between y and z is

\displaystyle \overline{yz}:=\left\{(1-\lambda)y+\lambda z\colon \lambda\in[0,1]\right\}.

As g_t is continuous, there must be some point v\in\partial g_t(\mathring K)\cap\overline{yz}; we have to leave the set g_t(\mathring K) somewhere. Let us walk the line until we do, and then set

\displaystyle v=(1-\lambda_0)y+\lambda_0z,\qquad\text{with }\lambda_0=\sup\left\{\lambda\in[0,1]\colon\overline{y;(1-\lambda)y+\lambda z}\subset g_t(\mathring K)\right\}.

Now, continuous images of compact sets remain compact: g_t(K) is compact and hence closed. Therefore, we can conclude

\displaystyle g_t(\mathring K)\subset g_t(K)\quad\implies\quad \overline{g_t(\mathring K)}\subset g_t(K)\quad\implies\quad v\in\overline{g_t(\mathring K)}\subset g_t(K).

This means that there is some u\in K such that v=g_t(u). As g_t(\mathring K) is open, u\in\partial K (since otherwise, v\notin\partial g_t(\mathring K) which contradicts the construction). Therefore, \left|u\right|=1, and since g_t is almost a retraction, g_t(u)=u. Now,

\displaystyle v=g_t(u) = u \quad\implies\quad v\in\partial K.

But by construction, v is a point between z\in K and y\in g_t(\mathring K); moreover, y\in\mathring K, since g_t(\mathring K)\subset\mathring K by (\heartsuit). Due to the convexity of K, the segment from the interior point y to z lies in \mathring K except possibly for its endpoint z; as v\in\partial K lies on this segment, we have no choice but v=z\in\partial K, and by the retraction property again, g_t(z)=z. In particular, z\in g_t(\partial K).

We have shown that if z\notin g_t(\mathring K), then z\in g_t(\partial K). In particular, z\in g_t(K) for any z\in K. g_t is surjective. q.e.d.
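
The Neumann-series step deserves a tiny illustration of its own. A sketch (the matrix size 4 and the norm bound 0.4 are arbitrary choices; A plays the role of -th'(x), so that \sum_k A^k inverts \mathrm{Id}-A):

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
A *= 0.4 / np.linalg.norm(A, 2)       # enforce spectral norm < 1, like th'(x)

S, term = np.eye(4), np.eye(4)
for _ in range(200):                  # partial sums of sum_k A^k
    term = term @ A
    S += term

print(np.allclose(S @ (np.eye(4) - A), np.eye(4)))   # True: S = (Id - A)^{-1}
print(np.linalg.det(np.eye(4) - A) != 0)             # True: det never vanishes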

 

Lemma (The Integral Application): The real-valued function

\displaystyle V(t)=\int_K\det g_t'(x)dx

is a polynomial and satisfies V(1)>0.

 

Proof: We have already seen in the previous lemma that \det g_t'(x)>0 for x\in\mathring{\hat K} and t<\varepsilon. This fact allows us to apply the transformation formula to the integral:

\displaystyle V(t) = \int_{g_t(K)}1dx.

As g_t maps K onto K, provided t is this small, g_t(K) = K, and therefore

\displaystyle V(t) = \int_K1dx = \mu(K).

In particular, this no longer depends on t, which implies V(t)>0 for any t<\varepsilon.

By the Leibniz formula for the determinant, \det g_t'(x) is a polynomial in t, and therefore, so is V(t). The identity theorem for polynomials shows that V is constant altogether: in particular, V(1)=V(0)>0. q.e.d.

 

Now we can readily conclude the proof of Brouwer’s Fixed Point Theorem, and we do it in a rather unexpected way. After the construction of g_t, we had found \left|g_1(x)\right|=1 for all x\in V. Let us write this in its components and take a partial derivative (j=1,\ldots,p)

\displaystyle    \begin{aligned}    &&1 &= \sum_{k=1}^p\bigl(g_{1,k}(x)\bigr)^2\\    &\implies& 0 &= \frac\partial{\partial x_j}\sum_{k=1}^p\bigl(g_{1,k}(x)\bigr)^2 = \sum_{k=1}^p2\frac{\partial g_{1,k}(x)}{\partial x_j}g_{1,k}(x)    \end{aligned}

This last line is a homogeneous system of linear equations that we might also write like this:

\displaystyle \begin{pmatrix}\frac{\partial g_{1,1}(x)}{\partial x_1}&\cdots &\frac{\partial g_{1,p}(x)}{\partial x_1}\\ \vdots&&\vdots\\ \frac{\partial g_{1,1}(x)}{\partial x_p}&\cdots&\frac{\partial g_{1,p}(x)}{\partial x_p}\end{pmatrix} \begin{pmatrix}\xi_1\\\vdots\\\xi_p\end{pmatrix} = 0,

and our computation has shown that the vector \bigl(g_{1,1}(x),\ldots,g_{1,p}(x)\bigr) is a solution. But the vector 0 is a solution as well. These solutions are different because of \left|g_1(x)\right| = 1. If a system of linear equations has two different solutions, it must be singular (it is not injective), and the determinant of the linear system vanishes:

\displaystyle 0 = \det \begin{pmatrix}\frac{\partial g_{1,1}(x)}{\partial x_1}&\cdots &\frac{\partial g_{1,p}(x)}{\partial x_1}\\ \vdots&&\vdots\\ \frac{\partial g_{1,1}(x)}{\partial x_p}&\cdots&\frac{\partial g_{1,p}(x)}{\partial x_p}\end{pmatrix} = \det g_1'(x).

This means

\displaystyle 0 = \int_K\det g_1'(x)dx = V(1) > 0.

A contradiction, which stems from the basic assumption that Brouwer’s Fixed Point Theorem was wrong. The Theorem is thus proved. q.e.d.

 

Let us make some concluding remarks. Our proof made vivid use of the fact that if there is a retraction, Brouwer’s Theorem must be wrong (this is where we got our contradiction in the end: the retraction cannot exist). The proof may also be run the other way round. If we had proved Brouwer’s Theorem without reference to retractions (this is how Elstrodt does it), one can conclude that there is no retraction from K to \partial K as follows: if there were a retraction g:K\to\partial K, we could consider the mapping -g. It is, in particular, a continuous mapping of K to itself, but it does not have any fixed point: a fixed point would satisfy x=-g(x)\in\partial K, hence g(x)=x and thus x=-x, i.e. x=0\notin\partial K – a contradiction to Brouwer’s Theorem.

 

Brouwer’s Theorem, as we have stated it here, is not yet ready to drink. For many applications, the set K is too much of a restriction. It turns out, however, that the hardest work has been done. Some little approximation argument (which in the end amounts to continuous projections) allows us to formulate, for instance:

  • Let C\subset\mathbb{R}^p be convex, compact and C\neq\emptyset. Let f:C\to C be continuous. Then f has a fixed point.
  • Let E be a normed space, K\subset E convex and \emptyset\neq C\subset K compact. Let f:K\to C be continuous. Then f has a fixed point.
  • Let E be a normed space, K\subset E convex and K\neq\emptyset. Let f:K\to K be continuous. Let either K be compact or K bounded and f(K) relatively compact. Then f has a fixed point.

The last two statements are called Schauder’s Fixed Point Theorems; they are often applied in functional analysis and famously used for proofs of Peano’s Theorem in differential equations. But at the core of all of them is Brouwer’s Theorem. This seems like a good place to end.

From the Literature of the Weimar Republic

In a wave of belletristic reading, I have read various works by German authors from the time of the Weimar Republic over the past months. This did not actually come about by plan; the authors, the styles, and also my lasting impressions are extremely different. The unifying bond of their time of origin only became clear to me in hindsight.

 

At irregular intervals, but again and again and with great pleasure, I read Erich Kästner’s Fabian. It is Kästner’s strongest novel and his best-known work for adults. This is not meant to belittle his (partly magnificent) children’s books, but their aspiration and their effect are different ones and shall be left aside here. From my school days, I knew the originally published version of Fabian, a version already defused in some aspects, which was nevertheless banned and burned by Goebbels. A few months ago, I acquired the original version, which is a little more drastic at the beginning of the text but which, apart from one chapter, shows only editorial changes compared to the familiar text. Kästner himself remarked in later prefaces that he had wanted to give the text the title “Der Gang vor die Hunde” (“Going to the Dogs”). How accurate this statement is may be left open; one may suspect that this idea only arose in hindsight.

The eponymous main character Jakob Fabian sees himself as a moralist who, in the time of the economic crisis, understands the world less and less and perceives it as ever more absurd. He maintains various short-lived acquaintances with women, from some of whom he turns away in disgust; the only one in which he finds real potential for love falls apart when the woman, Cornelia, turns to a rich man whom she does not love, but with whom she no longer has to fear economic hardship. Fabian’s friend Labude is one of his few anchors, even though the two do not agree in their judgement of the world: here Fabian, the fatalist resigned to his fate, who takes the world as it is because he believes he cannot change it (“der Faustschlag blieb stumm”); there Labude, the romantic, who believes he must change the world and its people towards reason. When Labude, because of an overdone, bad joke, takes his own life out of hopelessness, Fabian is, for the only time in the book, carried away to deep, honest emotions. Almost always, Fabian remains a pure observer, even when he finds himself in a nightly exchange of gunfire between a communist and a National Socialist and brings both opponents to the hospital in the same taxi. Politically, Fabian behaves, in keeping with his general attitude, not extremely, in an orientation he would probably call “decent”. This shows in his disrespect for nationalist monuments in Berlin (which he mocks during a bus ride – a scene that had still been struck from the first edition), or in his restrained confrontation with an old schoolmate towards the end of the novel. And like almost all of Kästner’s heroes, Fabian is a mama’s boy: let me just mention the touching episode of his mother’s visit to Berlin, and the two of them secretly slipping each other 20 marks.

The book constantly oscillates between biting comedy and a sharp portrait of the morals of its time; it is always a deliberate exaggeration of the circumstances, and yet it keeps a highly matter-of-fact, sober tone. At the end, Fabian drowns trying to save a child from a canal – the child itself swims ashore. In the chapter heading, Kästner addresses the audience directly: “Learn to swim!” – in the figurative sense: do not behave like the observing, overwhelmed Fabian, but deal with the world as it presents itself, and change it. From this very personal message of Kästner’s, which runs through the whole book, the novel draws its strength; to a certain degree, Kästner has immortalized himself here, not only through the various autobiographical elements (Fabian, like Kästner, is an advertising copywriter in Berlin; Kästner, like Labude, holds a doctorate in German literature with a focus on Lessing and his time), but precisely through a message that must have been an affair of the heart for him at the time.

An entertaining book that can be read again and again with profit. The humour may have become a bit dusty, but the story has not; it still touches, and it keeps inviting engagement.

 

About Brecht, practically everything has been said. Out of a feeling of curiosity, I have revisited some of his plays, which I likewise knew from my school days. As different as Mutter Courage, Leben des Galilei and Der gute Mensch von Sezuan are in their storytelling, they resemble each other in that the reader (or theatregoer) is meant to be taught. Quite in the spirit of Schiller, these – like many of Brecht’s dramas – are didactic plays. Schiller’s speech “Die Schaubühne als eine moralische Anstalt betrachtet” (“The Stage Considered as a Moral Institution”) had already emphasized, at the end of the 18th century, that the audience should be educated by a visit to the theatre – only secondarily is the theatre to serve entertainment; ideally, the audience is enlightened and enabled to ask the right questions. Brecht follows precisely this maxim, and in the Gute Mensch he pushes it to the very extreme by letting the play end openly:

We stand disappointed ourselves and see, dismayed, / the curtain closed and all the questions open. […] Dear audience, go, find your own ending! / There must be a good one – must, must, must!

One may argue about whether there still is an unenlightened theatre audience (meaning: whether people in need of enlightenment still go to the theatre at all), or whether the concept of the didactic play has fallen out of time. Alternatively, the enlightenment of a receptive audience rather takes place in other contexts, say in not-too-crude cabaret and in the high-quality press – again provided that an audience in need of enlightenment is present there at all (the discourse on recognizing one’s own need for enlightenment shall not be led here). In any case, Brecht’s didactic plays appear so transparent that a moral improvement of the audience can hardly be expected from them. The denounced grievances also seem to be of less concern in today’s world – all the more so since the alternative of communism occasionally pointed to by Brecht (in the Gute Mensch) is no longer one, and the horrors of a great war are so much greater than at the time Mutter Courage was written.

Because of their transparency, the didactic plays are, of course, still suitable as school reading, and as such they are still often read today. Deep character studies can be drawn from them, and for their understanding only the background of a classical humanistic secondary education is needed (in contrast, say, to some other modern literature from the same historical environment). Whether these didactic plays are still fit to actually teach may be left open. Some aphorisms appear considerably more beautiful and more instructive, as in the Geschichten vom Herrn Keuner (“But Herr K., you have not changed at all!” – and K. turned pale.) or the sentence aimed at a Third World War:

Great Carthage waged three wars: it was still powerful after the first, still habitable after the second. It was no longer to be found after the third.

Such things are quite rare in the didactic plays mentioned.

The manner of instruction is very different in the three plays that I have now read anew. The Gute Mensch speaks of the terrible reality of capitalism; the Leben des Galilei of the freedom and responsibility of science and of the unfreedom within the Church; Mutter Courage of the horrors of war and of how differently people deal with morality in war and in peace (visible especially in Courage’s three children, even more than in Courage herself). I actually skipped Brecht’s great stage success, the Dreigroschenoper, out of sheer exhaustion. My recollection of it, however, is quite analogous to the other plays and can be pinned down to the biting description of the brothel as a “bourgeois idyll” – a classic case in which “show, don’t tell” would have been more appropriate. But from the Dreigroschenoper I have kept the beautiful line “Erst kommt das Fressen, dann kommt die Moral” (“grub comes first, then morality”) – a true, if drastic, word indeed.

However unspectacular the contents of the plays may be, they do give the actors great opportunities for a brilliant portrayal of the title characters. Especially the role of Shen Te demands an acting performance of a complexity that will rarely be achieved.

The most beautifully composed scene in the three plays is the dressing scene of the Pope in the Leben des Galilei. The Pope himself is a scientist, but during the dressing he slips into the robe (that is: into the role) of the Pope and must therefore set different priorities than Galilei. A beautifully subtle way of showing the constraints within which the characters move: show, don’t tell.

On the whole, one has to agree with the impression Marcel Reich-Ranicki expressed in the series “Lauter schwierige Patienten”: Brecht’s plays will not withstand time. They are still read in school and performed in the theatres, but they have lost topicality and relevance. What may survive of Brecht is the poetry – his songs.

Going into further detail about the plays does not seem appropriate, given their circulation precisely as school reading. About Brecht, everything (and more than that) has been said.

 

Likewise out of sheer curiosity, I engaged with Hans Fallada’s novel Kleiner Mann – was nun?. Paradoxically, this quite touching story of a young married couple in modest circumstances (they call each other “Junge” and “Lämmchen”; they will only ever refer to their small child as “Murkel”) did not strike a nerve with me in the slightest. Yes – the social decline of the young family is terrible and cruel. Yes – the economic conditions in the late Weimar Republic were ghastly and hard. It is, altogether, an epoch with which I have engaged long and extensively and which continues to fascinate me. And yet I remained strangely untouched by this narrative, this (then) piece of the present. On the contrary, while reading I repeatedly felt terribly embarrassed for Lämmchen: a person so unworldly and overcomplicated in many things, who at the same time does not come from overly sheltered circumstances and still copes so badly with the world (the dreadful scene in which she tries to cook a pea soup). At least she undergoes a development in the course of the story and deals with the situation more resolutely than her husband. The ending is open, but it becomes clear that, despite all their poverty, the two preserve their love.

In fact, this attempt with Hans Fallada has not moved me to read his other famous novel, Jeder stirbt für sich allein. For all its touching moments and its description of reality, I was neither gripped nor impressed by Kleiner Mann – was nun? I simply did not find the spark that could have leapt from this author to me.

 

Finally, my tour through the literature of the 1920s ended with an excursion to Hermann Hesse’s Narziß und Goldmund. A novel could hardly be more different from the books discussed so far. Entirely apolitical, completely removed from the (then) present, it tells the story of two medieval monks. They are two utterly different characters who follow utterly different designs for life: the abstract thinker and ascetic Narziß, who spends his life in the monastery, and the aesthete Goldmund, who devotes himself to the world of the senses, to art and to women, and roams the world. The two form a close friendship in their youth, and each profits in his view of the world from the other. This friendship outlasts Goldmund’s years of wandering, until, after decades, they meet again by chance. An aesthetically uncommonly beautiful book, whose composition is clear and yet never obtrusive. One has to look past the florid language a little and work one’s way in; still, this language is clear and, above all, no end in itself (I am thinking of dreadful phrases in Thomas Mann).

Somewhat strange within this narrative are the mystical motifs, for instance the image of Goldmund’s mother, whom none of the characters knows, and which drives Goldmund’s search for beauty in art. At the same time, Goldmund’s actions remain comprehensible and clear almost at every moment, even for the rather cerebral reader. He is driven by the search for beauty and, consequently, for inspiration – the master from whom he learns wood carving remains, on the one hand, his artistic role model, and on the other a spectre, at the prospect of being just as bound to one place, one profession, one workshop.

Through Narziß und Goldmund I got a rough feeling for what was meant when a historical survey of the writers of the Weimar Republic said: “Hermann Hesse oscillates between ethics and aesthetics.” I cannot really grasp this assessment yet, but an inkling is dawning of where this sentence was aiming. In contrast to Brecht, I will return to Hesse one day. The person Hesse, too, may be worth a closer look, if only because of the complex background of his work, but also because of his impact, say through the Steppenwolf or the Glasperlenspiel.

 

This excursion has shown me once more the fascination of the Weimar Republic, which produced such a broadly diversified artistic landscape. At the same time, this is also a testament to the fascination of the German Empire, whose humanistic education and culture laid the very foundations for such an artistic explosion in the 1920s. Of the works mentioned, all but Narziß und Goldmund engage with the complex and ever more complex problems of their present (and even Hesse did not move in a vacuum but was influenced both by his role models and, quite palpably, by his surroundings). It is impressive to read how this world was processed artistically. An era to which I shall often return.

What does the determinant have to do with transformed measures?

Let us consider transformations of the space \mathbb{R}^p. How does Lebesgue measure change under such a transformation? And how do integrals change? The general case is answered by Jacobi’s formula for integration by substitution. We will start out slowly and only look at how the measure of sets is transformed under linear mappings.

It is folklore in basic courses on linear algebra, when the determinant of a matrix is introduced, to convey it as the size of the parallelepiped spanned by the column vectors of the matrix. The following theorem shows why this folklore is true; of course, this rests on the axiomatic description of the determinant, which encodes the notion of size already. But coming from pure axiomatic reasoning, we can connect the axioms of determinant theory to their actual meaning in measure theory.

First, remember the definition of the pushforward measure. Let X and Y be measurable spaces, and f:X\to Y a measurable mapping (i.e. preimages of measurable sets under f are measurable; we shall not deal with finer details of measure theory here). Let \mu be a measure on X. Then we define a measure on Y in the natural – what would \mu do? – way:

\displaystyle \mu_f(A) := \mu\bigl(f^{-1}(A)\bigr).

In what follows, X = Y = \mathbb{R}^p and \mu = \lambda the Lebesgue measure.

Theorem (Transformation of Measure): Let f:\mathbb{R}^p\to\mathbb{R}^p be a bijective linear mapping. Then, the pushforward measure satisfies:

\displaystyle \lambda_f(A) = \left|\det f\right|^{-1}\lambda(A)\qquad\text{for any Borel set }A\in\mathcal{B}^p.
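
Before we turn to the proof, here is a small numerical sanity check of the statement – a hedged sketch in Python, where the matrix f and the bounding box are arbitrary illustrative choices of mine, not part of the theorem:

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.array([[2.0, 1.0],
              [0.0, 3.0]])          # an invertible linear map of R^2, det f = 6

# lambda_f([0,1]^2) = lambda(f^{-1}([0,1]^2)); estimate the right-hand side
# by uniform sampling from a box that contains f^{-1}([0,1]^2).
box = 2.0
samples = rng.uniform(-box, box, size=(10**6, 2))
images = samples @ f.T               # f applied to each sample point
inside = np.all((images >= 0.0) & (images <= 1.0), axis=1)
estimate = inside.mean() * (2.0 * box) ** 2

print(estimate, 1.0 / abs(np.linalg.det(f)))   # both close to 1/6
```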

 

We will start with several Lemmas.

Lemma (The Translation-Lemma): Lebesgue measure is invariant under translations.

Proof: Let c,d\in\mathbb{R}^p with c\leq d component-wise. Let t_a be the shift by the vector a\in\mathbb{R}^p, i.e. t_a(v) = v+a and t_a(A) = \{x+a\in\mathbb{R}^p\colon x\in A\}. Then,

\displaystyle t_a^{-1}\bigl((c,d]\bigr) = (c-a,d-a],

where the interval is meant as the cartesian product of the component-intervals. For the Lebesgue-measure, we get

\displaystyle \lambda_{t_a}\bigl((c,d]\bigr) = \lambda\bigl((c-a,d-a]\bigr) = \prod_{i=1}^p\bigl((d_i-a_i)-(c_i-a_i)\bigr) = \prod_{i=1}^p(d_i-c_i) = \lambda\bigl((c,d]\bigr).

The measures \lambda, \lambda_{t_a} hence agree on the rectangles (open on their left-hand sides), i.e. on a semi-ring generating the \sigma-algebra \mathcal{B}^p. With the usual arguments (which might as well involve \cap-stable Dynkin-systems, for instance), we find that the measures agree on the whole \mathcal{B}^p.

q.e.d.

 

Lemma (The constant-multiple-Lemma): Let \mu be a translation-invariant measure on \mathcal{B}^p, and \alpha:=\mu\bigl([0,1]^p\bigr) < \infty. Then \mu(A) = \alpha\lambda(A) for any A\in\mathcal{B}^p.

 

Note that the Lemma requires \mu\bigl([0,1]^p\bigr) to be finite. For instance, the counting measure is translation-invariant, but it is not a multiple of Lebesgue measure.

Proof: We divide the set (0,1]^p via a rectangular grid of sidelengths \frac1{n_i}, i=1,\ldots,p:

\displaystyle (0,1]^p = \bigcup_{\stackrel{k_j=0,\ldots,n_j-1}{j=1,\ldots,p}}\left(\times_{i=1}^p\left(0,\frac1{n_i}\right] + \left(\frac{k_1}{n_1},\ldots,\frac{k_p}{n_p}\right)\right).

On the right-hand side there are \prod_{i=1}^pn_i sets which have the same measure (by translation-invariance). Hence,

\displaystyle \mu\bigl((0,1]^p\bigr) = \mu\left(\bigcup\cdots\right) = n_1\cdots n_p \cdot \mu\left(\times_{i=1}^p \left(0,\frac1{n_i}\right]\right).

Here, we distinguish three cases.

 

Case 1: \mu\bigl((0,1]^p\bigr) = 1. Then,

\displaystyle\mu\left(\times_{i=1}^p (0,\frac1{n_i}]\right) = \prod_{i=1}^p\frac1{n_i} = \lambda\left(\times_{i=1}^p (0,\frac1{n_i}]\right).

By choosing appropriate grids and further translations, we get that \mu(I) = \lambda(I) for any rectangle I with rational bounds. Via the usual arguments, \mu=\lambda on the whole of \mathcal{B}^p.

Case 2: 0 < \mu\bigl((0,1]^p\bigr) \neq 1. By assumption, this measure is finite. Setting \gamma = \mu\bigl((0,1]^p\bigr), we can look at the measure \gamma^{-1}\mu, which of course has \gamma^{-1}\mu\bigl((0,1]^p\bigr) = 1. By Case 1, \gamma^{-1}\mu = \lambda.

Case 3: \mu\bigl((0,1]^p\bigr) = 0. Then, using translation invariance again,

\displaystyle \mu(\mathbb{R}^p) = \mu\bigl(\bigcup_{z\in\mathbb{Z}^p}((0,1]^p+z)\bigr) = \sum_{z\in\mathbb{Z}^p}\mu\bigl((0,1]^p\bigr) = 0.

Again, we get \mu(A) = 0 for all A\in\mathcal{B}^p.

 

That means, in all cases, \mu is equal to a constant multiple of \lambda, the constant being the measure of (0,1]^p. That is not quite what we intended, as we wish the constant multiple to be the measure of the compact set [0,1]^p.

Remember our setting \alpha:=\mu\bigl([0,1]^p\bigr) and \gamma := \mu\bigl((0,1]^p\bigr). Let A\in\mathcal{B}^p. We distinguish another two cases:

 

Case a) \alpha = 0. By monotonicity, \gamma = 0 and Case 3 applies: \mu(A) = 0 = \alpha\lambda(A).

Case b) \alpha > 0. By monotonicity and translation-invariance,

\displaystyle \alpha \leq \mu\bigl((-1,1]^p\bigr) = 2^p\gamma,

meaning \gamma\geq\frac{\alpha}{2^p}. Therefore, as \alpha>0, we get \gamma>0, and by Cases 1 and 2, \frac1\gamma\mu(A) = \lambda(A). In particular,

\displaystyle \frac\alpha\gamma = \frac1\gamma\mu\bigl([0,1]^p\bigr) = \lambda\bigl([0,1]^p\bigr) = 1,

and so, \alpha = \gamma, meaning \mu(A) = \gamma\lambda(A) = \alpha\lambda(A).

q.e.d.

 

Proof (of the Theorem on Transformation of Measure). We will first show that the measure \lambda_f is invariant under translations.

We find, using the Translation-Lemma in (\ast), and the linearity of f before that,

\displaystyle    \begin{aligned}    \lambda_{t_a\circ f}(A) = \lambda_f(A-a) &\stackrel{\hphantom{(\ast)}}{=} \lambda\bigl(f^{-1}(A-a)\bigr) \\    &\stackrel{\hphantom{(\ast)}}{=} \lambda\bigl(f^{-1}(A) - f^{-1}(a)\bigr) \\    &\stackrel{(\ast)}{=} \lambda\bigl(f^{-1}(A)\bigr) \\    &\stackrel{\hphantom{(\ast)}}{=} \lambda_f(A),    \end{aligned}

which means that \lambda_f is indeed invariant under translations.

As [0,1]^p is compact, so is f^{-1}\bigl([0,1]^p\bigr) – remember that continuous images of compact sets are compact (here, the continuous mapping is f^{-1}). In particular, f^{-1}\bigl([0,1]^p\bigr) is bounded, and thus has finite Lebesgue measure.

We set c(f) := \lambda_f\bigl([0,1]^p\bigr). By the constant-multiple-Lemma, \lambda_f is a multiple of Lebesgue measure: we must have

\displaystyle \lambda_f(A) = c(f)\lambda(A)\text{ for all }A\in\mathcal{B}^p.\qquad (\spadesuit)

It only remains to prove that c(f) = \left|\det f\right|^{-1}. There are two routes one may follow here. We first give the way that is laid out in Elstrodt’s book (which we are basically following in this whole post); later, we shall give the more folklore way of concluding this proof.

We consider increasingly general forms of the invertible linear mapping f.

Step 1: Let f be orthogonal. Then, for the unit ball B_1(0),

\displaystyle c(f)\lambda\bigl(B_1(0)\bigr) \stackrel{(\spadesuit)}{=} \lambda_f\bigl(B_1(0)\bigr) = \lambda\left(f^{-1}\bigl(B_1(0)\bigr)\right) = \lambda\bigl(B_1(0)\bigr).

This means that c(f) = 1 = \left|\det f\right|^{-1}.

This step shows for the first time how the properties of the determinant encode the notion of size already: we have only used the basic lemmas on orthogonal matrices (they leave distances unchanged, hence the ball B_1(0) is not transformed; besides, their inverse is their adjoint) and on determinants (orthogonal matrices have determinant \pm1, by the multiplicative property and because determinants do not care about adjoints).

Step 2: Let f have a representation as a diagonal matrix (with respect to the standard basis of \mathbb{R}^p). Let us assume w.l.o.g. that f = \mathrm{diag}(d_1,\ldots,d_p) with d_i>0; the case of d_i<0 is only notationally cumbersome. We get

\displaystyle c(f) = \lambda_f\bigl([0,1]^p\bigr) = \lambda\left(f^{-1}\bigl([0,1]^p\bigr)\right) = \lambda\bigl(\times_{i=1}^p[0,d_i^{-1}]\bigr) = \prod_{i=1}^pd_i^{-1} = \left|\det f\right|^{-1}.

Again, the basic lemmas on determinants already make use of the notion of size without actually saying so. Here, it is the computation of the determinant by multiplication of the diagonal.

Step 3: Let f be linear and invertible, and let f^\ast be its adjoint. Then f^\ast f is non-negative definite (since for x\in\mathbb{R}^p, x^\ast(f^\ast f)x = (fx)^\ast(fx) = \left\|fx\right\|^2\geq0). By the Principal Axis Theorem, there is some orthogonal matrix v and some diagonal matrix with non-negative entries d with f^\ast f = vd^2v^\ast. As f was invertible, no entry of d may vanish here (since then, its determinant would vanish and in particular, f would no longer be invertible). Now, we set

\displaystyle w:=d^{-1}v^\ast f^\ast,

which is orthogonal because of

\displaystyle ww^\ast = d^{-1} v^\ast f^\ast fv d^{-1} = d^{-1}v^\ast (vd^2v^\ast)v d^{-1} = d^{-1}(v^\ast v)d^2(v^\ast v)d^{-1} = d^{-1}d^2d^{-1} = \mathrm{id}.

As f = w^\ast dv^\ast and hence f^{-1} = vd^{-1}w, we see from Steps 1 and 2

\displaystyle    \begin{aligned}    c(f) = \lambda_f\bigl([0,1]^p\bigr) &= \lambda\left(vd^{-1}w\bigl([0,1]^p\bigr)\right) \\    &= \lambda\left(d^{-1}w\bigl([0,1]^p\bigr)\right)\\    &= \left|\det d\right|^{-1}\lambda\left(w\bigl([0,1]^p\bigr)\right) \\    &= \left|\det d\right|^{-1}\lambda\bigl([0,1]^p\bigr) \\    &= \left|\det f\right|^{-1}\lambda\bigl([0,1]^p\bigr) \\    &= \left|\det f\right|^{-1},    \end{aligned}

by the multiplicative property of determinants again (\left|\det f\right| = \left|\det d\right|).

q.e.d.(Theorem)

 

As an encore, we show another way to conclude the Theorem, once all the Lemmas are shown and applied. This is the more folklore way alluded to in the proof, making use of the fact that any invertible matrix is a product of elementary matrices (and, of course, of the multiplicative property of determinants). Hence, we only need to consider those.

Because Step 2 of the proof already dealt with diagonal matrices, and permutation matrices are orthogonal and thus covered by Step 1, we only have to look at shear-matrices like E_{ij}(r) := \bigl(\delta_{kl}+r\delta_{ik}\delta_{jl}\bigr)_{k,l=1,\ldots,p}. They are the identity matrix with the (off-diagonal) entry r in row i and column j. One readily finds \bigl(E_{ij}(r)\bigr)^{-1} = E_{ij}(-r), and \det E_{ij}(r) = 1. Any vector v\in[0,1]^p is mapped to

\displaystyle E_{ij}(r)(v_1,\ldots,v_i,\ldots, v_p)^t = (v_1,\ldots,v_i+rv_j,\ldots,v_p)^t.

This gives

\displaystyle    \begin{aligned}\lambda_{E_{ij}(r)}\bigl([0,1]^p\bigr) &= \lambda\left(E_{ij}(-r)\bigl([0,1]^p\bigr)\right) \\    &= \lambda\left(\left\{x\in\mathbb{R}^p\colon x=(v_1,\ldots,v_i-rv_j,\ldots,v_p), v_k\in[0,1]\right\}\right).    \end{aligned}

This is a parallelogram that may be covered by n rectangles as follows: we fix the dimension i and one other dimension to set rectangles of height \frac1n and width 1+\frac rn, taking r>0 for notational ease (all other dimension-widths = 1; see the image for an illustration). Implicitly, we have demanded that p\geq2 here; but p=1 is uninteresting for the proof, as every invertible linear mapping of \mathbb{R}^1 is already diagonal.

By monotonicity, this yields

\lambda_{E_{ij}(r)}\bigl([0,1]^p\bigr) \leq n\frac1n\left(1+\frac{r}{n}\right) = 1+\frac rn\xrightarrow{n\to\infty}1.

On the other hand, this parallelogram itself covers the rectangles of width 1-\frac rn, and a similar computation shows that in the limit \lambda_{E_{ij}(r)}\bigl([0,1]^p\bigr)\geq1.

In particular: \lambda_{E_{ij}(r)}\bigl([0,1]^p\bigr) = 1 = \left|\det E_{ij}(r)\right|^{-1}.

q.e.d. (Theorem encore)

 

Proving the multidimensional transformation formula for integration by substitution is considerably more difficult than in one dimension, where it basically amounts to reading the chain rule in reverse. Let us state the formula here first:

Theorem (The Transformation Formula, Jacobi): Let U,V\subset \mathbb{R}^p be open sets and let \Phi:U\to V be a \mathcal{C}^1-diffeomorphism (i.e. \Phi^{-1} exists and both \Phi and \Phi^{-1} are \mathcal{C}^1-functions). Let f:V\to\mathbb{R} be measurable. Then, f\circ\Phi:U\to\mathbb{R} is measurable and

\displaystyle\int_V f(t)dt = \int_U f\bigl(\Phi(s)\bigr)\left|\det\Phi'(s)\right|ds.
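
Before the proof, a quick plausibility check – a hedged sketch where the diffeomorphism \Phi(u,v) = (u, v+u^2) and the integrand are illustrative assumptions of mine, not taken from any of the books cited here:

```python
import numpy as np
from scipy.integrate import dblquad

# Phi(u, v) = (u, v + u^2) on U = (0,1)^2 has |det Phi'(u, v)| = 1, and
# V = Phi(U) = {(x, y): 0 < x < 1, x^2 < y < 1 + x^2}.
f = lambda x, y: x * y

# Left-hand side: integral of f over V (dblquad's integrand takes the inner
# variable first).
lhs, _ = dblquad(lambda y, x: f(x, y), 0, 1, lambda x: x**2, lambda x: 1 + x**2)

# Right-hand side: integral of f(Phi(s)) |det Phi'(s)| over U.
rhs, _ = dblquad(lambda v, u: f(u, v + u**2) * 1.0, 0, 1, 0, 1)

print(lhs, rhs)   # both ~ 0.5
```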

 

At the core of the proof is the Theorem on Transformation of Measure that we have proved above. The idea is to approximate \Phi by linear mappings, which locally transform the Lebesgue measure underlying the integral and yield the determinant in each point as correction factor. The technical difficulty is to show that this approximation does no harm for the evaluation of the integral.

We will need a lemma first, which carries most of the weight of the proof.

 

The Preparatory Lemma: Let U,V\subset \mathbb{R}^p be open sets and let \Phi:U\to V be a \mathcal{C}^1-diffeomorphism. If X\subset U is a Borel set, then so is \Phi(X)\subset V, and

\displaystyle \lambda\bigl(\Phi(X)\bigr)\leq\int_X\left|\det\Phi'(s)\right|ds.

Proof: Without loss of generality, we can assume that \Phi, \Phi' and (\Phi')^{-1} are defined on a compact set K\supset U. We consider, for instance, the sets

\displaystyle U_k:=\left\{ x\in U\colon \left|x\right|<k, \mathrm{dist}\bigl(x,U^\complement\bigr)>\frac1k\right\}.

The U_k are open and bounded, \overline U_k is hence compact, and there is a chain U_k\subset\overline U_k\subset U_{k+1}\subset\cdots for all k, with U=\bigcup_kU_k. To each U_k there is, hence, a compact superset on which \Phi, \Phi' and (\Phi')^{-1} are defined. Now, if we can prove the statement of the Preparatory Lemma on X_k := X\cap U_k, it will also be true on X=\lim_kX_k by the monotone convergence theorem.

As we can consider all relevant functions to be defined on compact sets, and as they are continuous (and even more) by assumption, they are readily found to be uniformly continuous and bounded.

It is obvious that \Phi(X) will be a Borel set, as \Phi^{-1} is continuous.

 

Let us prove that the Preparatory Lemma holds for rectangles I with rational endpoints that are contained in U.

There is some r>0 such that for any a\in I, B_r(a)\subset U. By continuity, there is a finite constant M with

\displaystyle M:=\sup_{t\in I}\left\|\bigl(\Phi'(t)\bigr)^{-1}\right\|,

and, given \varepsilon>0, uniform continuity allows us to choose r small enough that even

\displaystyle \sup_{x\in B_r(a)}\left\|\Phi'(x)-\Phi'(a)\right\|\leq\frac{\varepsilon}{M\sqrt p} \text{ for every }a\in I.

With this r, we may now sub-divide our rectangle I into disjoint cubes I_k of side-length d such that d<\frac{r}{\sqrt p}. In what follows, we shall sometimes need to consider the closure \overline I_k for some of the estimates, but we shall not make the proper distinction for reasons of legibility.

For any given b\in I_k, every other point c of I_k may at most have distance d in each of its components, which ensures

\displaystyle\left\|b-c\right\|^2 \leq \sum_{i=1}^pd^2 = pd^2 < r^2.

This, in turn, means I_k\subset B_r(b) (and B_r(b)\subset U has been clear because of the construction of r).

Now, in each of the cubes I_k, we choose the point a_k\in I_k with

\displaystyle\left|\det\Phi'(a_k)\right| = \min_{t\in I_k}\left|\det\Phi'(t)\right|,

and we define the linear mapping

\displaystyle \Phi_k:=\Phi'(a_k).

Remember that for convex sets A, differentiable mappings h:A\to\mathbb{R}^p, and points x,y\in A, the mean value theorem shows

\displaystyle \left\|h(x)-h(y)\right\|\leq\left\|x-y\right\|\sup_{\lambda\in[0,1]}\left\|h'\bigl(x+\lambda(y-x)\bigr)\right\|.

Let a\in I_k be a given point in one of the cubes. We apply the mean value theorem to the mapping h(x):=\Phi(x)-\Phi_k(x), which is certainly differentiable, to y:=a_k, and to the convex set A:=B_r(a):

\displaystyle    \begin{aligned}    \left\|h(x)-h(y)\right\|&\leq\left\|x-y\right\|\sup_{\lambda\in[0,1]}\left\|h'\bigl(x+\lambda(y-x)\bigr)\right\|\\    \left\|\Phi(x)-\Phi_k(x)-\Phi(a_k)+\Phi_k(a_k)\right\| & \leq \left\|x-a_k\right\|\sup_{\lambda\in[0,1]}\left\|\Phi'\bigl(x+\lambda(a_k-x)\bigr)-\Phi'(a_k)\right\|\\    \left\|\Phi(x)-\Phi(a_k)-\Phi_k(x-a_k)\right\| &< \left\|x-a_k\right\| \frac{\varepsilon}{M\sqrt p}\qquad (\clubsuit).    \end{aligned}

Note that, as a_k\in I_k\subset B_r(a), we have x+\lambda(a_k-x)\in B_r(a) by convexity, and hence the estimate from uniform continuity is applicable. Note, beyond that, that \Phi_k is the linear mapping \Phi'(a_k), and the derivative of a linear mapping is the linear mapping itself.

Now, \left\|x-a_k\right\|< d\sqrt p, as both points are contained in I_k, and hence (\clubsuit) shows

\displaystyle    \begin{aligned}    \Phi(I_k) &\subset \Phi(a_k)+\Phi_k(I_k-a_k)+B_{\frac{\varepsilon}{M\sqrt p}d\sqrt p}(0) \\    &\subset \Phi(a_k)+\Phi_k(I_k-a_k)+B_{\frac{d\varepsilon}{M}}(0).    \end{aligned}

By continuity (and hence boundedness) of (\Phi')^{-1}, we also have

\displaystyle \left\|(\Phi_k)^{-1}(x)\right\|\leq\left\|\bigl(\Phi'(a_k)\bigr)^{-1}\right\|\left\|x\right\|\leq M \left\|x\right\|,

which means B_{\frac{d\varepsilon}{M}}(0) = \Phi_k\left(\Phi_k^{-1}\bigl(B_{\frac{d\varepsilon}{M}}(0)\bigr)\right) \subset \Phi_k\bigl(B_{d\varepsilon}(0)\bigr).

Hence:

\displaystyle \Phi(I_k) \subset \Phi(a_k) + \Phi_k\bigl(I_k-a_k+B_{d\varepsilon}(0)\bigr).

Why all this work? We want to bound the measure of the set \Phi(I_k), and we can get it now: the shift \Phi(a_k) is unimportant by translation invariance. And the set I_k-a_k+B_{d\varepsilon}(0) is contained in a cube of side-length d+2d\varepsilon. As promised, we have approximated the mapping \Phi by a linear mapping \Phi_k on a small set, and the transformed set has become only slightly bigger. By the Theorem on Transformation of Measure, this shows

\displaystyle    \begin{aligned}    \lambda\bigl(\Phi(I_k)\bigr) &\leq \lambda\left(\Phi_k\bigl(I_k-a_k+B_{d\varepsilon}(0)\bigr)\right) \\    &=\left|\det\Phi_k\right|\lambda\bigl(I_k-a_k+B_{d\varepsilon}(0)\bigr)\\    &\leq \left|\det\Phi_k\right|d^p(1+2\varepsilon)^p \\    &= \left|\det\Phi_k\right|(1+2\varepsilon)^p\lambda(I_k).    \end{aligned}

Summing over all the cubes I_k of which the rectangle I is comprised (remember that \Phi is a diffeomorphism, so disjoint sets are kept disjoint; besides, a_k has been chosen as the point of I_k where \left|\det\Phi'\right| is smallest), we get

\displaystyle    \begin{aligned}    \lambda\bigl(\Phi(I)\bigr) &\leq (1+2\varepsilon)^p\sum_{k=1}^n\left|\det \Phi_k\right|\lambda(I_k) \\    &= (1+2\varepsilon)^p\sum_{k=1}^n\left|\det \Phi'(a_k)\right|\lambda(I_k)\\    &= (1+2\varepsilon)^p\sum_{k=1}^n\int_{I_k}\left|\det\Phi'(a_k)\right|ds\\    &\leq (1+2\varepsilon)^p\int_I\left|\det\Phi'(s)\right|ds.    \end{aligned}

Letting \varepsilon\to0 leads to finer subdivisions I_k and, in the limit, to the conclusion. The Preparatory Lemma holds for rectangles.

 

Now, let X\subset U be any Borel set, and let \varepsilon>0. We cover X by disjoint (rational) rectangles R_k\subset U such that \lambda\bigl(\bigcup R_k \setminus X\bigr)<\varepsilon. Then, with M now denoting a bound of \left|\det\Phi'\right| (finite by the compactness considerations above),

\displaystyle    \begin{aligned}    \lambda\bigl(\Phi(X)\bigr) &\leq \sum_{k=1}^\infty \lambda\bigl(\Phi(R_k)\bigr)\\    &\leq\sum_{k=1}^\infty\int_{R_k}\left|\det \Phi'(s)\right|ds\\    &= \int_{\bigcup R_k}\left| \det\Phi'(s)\right| ds\\    &= \int_X\left| \det\Phi'(s)\right| ds + \int_{\bigcup R_k\setminus X}\left|\det\Phi'(s)\right|ds\\    &\leq \int_X\left| \det\Phi'(s)\right| ds + M\lambda\left(\bigcup R_k\setminus X\right)\\    &\leq \int_X\left| \det\Phi'(s)\right| ds + M\varepsilon.    \end{aligned}

If we let \varepsilon\to0, we see \lambda\bigl(\Phi(X)\bigr)\leq\int_X\bigl|\det\Phi'(s)\bigr|ds.

q.e.d. (The Preparatory Lemma)

 

We didn’t use the full generality that may be possible here: we restricted ourselves to the Borel sets instead of the larger class of Lebesgue-measurable sets. We shall skip the technical details that are linked to this topic, and switch immediately to the

Proof of Jacobi’s Transformation Formula: We can focus on non-negative functions f without loss of generality (take the positive and the negative part separately, if needed). By the Preparatory Lemma, we already have

\displaystyle\begin{aligned}    \int_{\Phi(U)}\mathbf{1}_{\Phi(X)}(s)ds &= \int_{V}\mathbf{1}_{\Phi(X)}(s)ds\\    &= \int_{\Phi(X)}ds\\    &= \lambda\bigl(\Phi(X)\bigr)\\    &\leq \int_X\left|\det\Phi'(s)\right|ds\\    &= \int_U\mathbf{1}_X(s)\left|\det\Phi'(s)\right|ds\\    &= \int_U\mathbf{1}_{\Phi(X)}\bigl(\Phi(s)\bigr)\left|\det\Phi'(s)\right|ds,    \end{aligned}

which proves the inequality

\displaystyle \int_{\Phi(U)}f(t)dt \leq \int_U f\bigl(\Phi(s)\bigr)\left|\det\Phi'(s)\right|ds,

for indicator functions f = \mathbf{1}_{\Phi(X)}. By the usual arguments (linearity of the integral, monotone convergence), this also holds for any non-negative measurable function f. To prove the Transformation Formula completely, we apply this inequality to the transformation \Phi^{-1} and the function g(s):=f\bigl(\Phi(s)\bigr)\left|\det\Phi'(s)\right|:

\displaystyle    \begin{aligned}    \int_Uf\bigl(\Phi(s)\bigr)\left|\det\Phi'(s)\right|ds &= \int_{\Phi^{-1}(V)}g(t)dt\\    &\leq \int_Vg\bigl(\Phi^{-1}(t)\bigr)\left|\det(\Phi^{-1})'(t)\right|dt\\    &=\int_{\Phi(U)}f\Bigl(\Phi\bigl(\Phi^{-1}(t)\bigr)\Bigr)\left|\det\Phi'\bigl(\Phi^{-1}(t)\bigr)\right|\left|\det(\Phi^{-1})'(t)\right|dt\\    &=\int_{\Phi(U)}f(t)dt,    \end{aligned}

since the chain rule yields \Phi'\bigl(\Phi^{-1}(t)\bigr)\,(\Phi^{-1})'(t) = \bigl(\Phi\circ\Phi^{-1}\bigr)'(t) = \mathrm{id}. This means that the reverse inequality also holds. The Theorem is proved.

q.e.d. (Theorem)

 

There are other, yet more intricate, proofs of this Theorem. We shall not give any of them here, but the rather mysterious-looking way in which the determinant pops up in the transformation formula is not the only way to look at it. There is a proof by induction, given in Heuser’s book, where the determinant just appears from the inductive step. However, there is little geometric intuition in this proof, and it is by no means easier than what we did above (as it makes heavy use of the theorem on implicit functions). Similar things may be said about the rather functional-analytic proof in Königsberger’s book (he concludes the transformation formula via step functions converging in the L^1-norm, and he finds the determinant in pretty much the same way that we did).

 

Let us harvest a little of the hard work we did on the Transformation Formula. The most common example is the integral of the standard normal density, which amounts to the evaluation of

\displaystyle \int_{-\infty}^\infty e^{-\frac12x^2}dx.

This can happen via the transformation to polar coordinates:

\Phi:(0,\infty)\times(0,2\pi)\to\mathbb{R}^2,\qquad (r,\varphi)\mapsto (r\cos \varphi, r\sin\varphi).

For this transformation, whose image is all of \mathbb{R}^2 except for a set of measure 0, we find

\Phi'(r,\varphi) = \begin{pmatrix}\cos\varphi&-r\sin\varphi\\\sin\varphi&\hphantom{-}r\cos\varphi\end{pmatrix},\qquad \det\Phi'(r,\varphi) = r.

From the Transformation Formula we now get

\displaystyle    \begin{aligned}    \left(\int_{-\infty}^\infty e^{-\frac12x^2}dx\right)^2 &= \int_{\mathbb{R}^2}\exp\left(-\frac12x^2-\frac12y^2\right)dxdy\\    &= \int_{\Phi\left((0,\infty)\times(0,2\pi)\right)}\exp\left(-\frac12x^2-\frac12y^2\right)dxdy\\    &= \int_{(0,\infty)\times(0,2\pi)}\exp\left(-\frac12r^2\cos^2(\varphi)-\frac12r^2\sin^2(\varphi)\right)\left|\det\Phi'(r,\varphi)\right|drd\varphi\\    &= \int_{(0,\infty)\times(0,2\pi)}\exp\left(-\frac12r^2\right)rdrd\varphi\\    &= \int_0^\infty\exp\left(-\frac12r^2\right)rdr\int_0^{2\pi}d\varphi \\    &= 2\pi \left[-\exp\left(-\frac12r^2\right)\right]_0^\infty\\    &= 2\pi \left(1-0\right)\\    &= 2\pi.    \end{aligned}

In particular, \int_{-\infty}^\infty e^{-\frac12x^2}dx=\sqrt{2\pi} – one of the very basic results in probability theory.
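
If one wants to double-check this numerically, a few lines suffice (a sketch; quad with its default tolerances is just a convenient choice):

```python
import numpy as np
from scipy.integrate import quad

# Numerical confirmation of the Gaussian integral computed above.
val, _ = quad(lambda x: np.exp(-0.5 * x**2), -np.inf, np.inf)
print(val, np.sqrt(2 * np.pi))   # both ~ 2.5066
```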

 

Another little gem that follows from the Transformation Formula is the pair of Fresnel integrals

\displaystyle \int_{0}^\infty \cos(x^2)dx = \int_{0}^\infty\sin(x^2)dx = \sqrt{\frac{\pi}{8}}.

 

They follow from the same basic trick given above for the standard normal density, but as other methods for deriving this result involve even trickier uses of similarly hard techniques (the Residue Theorem, for instance, as given in Remmert’s book), we shall give the proof of this here:

Consider

\displaystyle F(t)=\int_{0}^\infty e^{-tx^2}\cos(x^2)dx\qquad\text{and}\qquad G(t)=\int_{0}^\infty e^{-tx^2}\sin(x^2)dx.

Then, the trigonometric identity \cos(a+b) = \cos a \cos b - \sin a\sin b tells us

\displaystyle    \begin{aligned}    \bigl(F(t)\bigr)^2 - \bigl(G(t)\bigr)^2 &= \int_0^\infty\int_0^\infty e^{-t(x^2+y^2)}\cos(x^2)\cos(y^2)dxdy - \int_0^\infty\int_0^\infty e^{-t(x^2+y^2)}\sin(x^2)\sin(y^2)dxdy\\    &= \int_0^\infty\int_0^\infty e^{-t(x^2+y^2)}\cos(x^2+y^2)dxdy\\    &= \int_0^\infty\int_0^{\frac\pi2}e^{-tr^2}\cos(r^2)\,r\,d\varphi\, dr\\    &= \frac\pi2\int_0^\infty e^{-tr^2}\cos(r^2)\,r\,dr\\    &= \frac\pi4 \int_0^\infty e^{-tu}\cos u\,du,    \end{aligned}

where we have substituted u = r^2 in the last step. This integral can be evaluated by parts twice to show

\displaystyle \left(1+\frac1{t^2}\right)\int_0^\infty e^{-tu}\cos u\,du = \frac1t,

which means

\displaystyle \bigl(F(t)\bigr)^2 - \bigl(G(t)\bigr)^2 = \frac\pi4\int_0^\infty e^{-tu}\cos u\,du = \frac\pi4\frac t{t^2+1}.

Then we consider the product F(t)G(t) and use the identity \sin(a+b) = \cos a\sin b + \cos b\sin a, as well as the symmetry of the integrand (splitting the quadrant along the diagonal x=y, which in polar coordinates is \varphi=\frac\pi4), the same substitution u=r^2, and another integration by parts, to get

\displaystyle    \begin{aligned}    F(t)G(t) &= \int_0^\infty\int_0^\infty e^{-t(x^2+y^2)}\cos(x^2)\sin(y^2)dxdy\\    &=\int_0^\infty\int_0^ye^{-t(x^2+y^2)}\sin(x^2+y^2)dxdy\\    &=\int_0^\infty\int_{\frac\pi4}^{\frac\pi2}e^{-tr^2}\sin(r^2)\,r\,d\varphi\, dr\\    &=\frac\pi4\int_0^\infty e^{-tr^2}\sin(r^2)\,r\, dr\\    &=\frac\pi8\int_0^\infty e^{-tu}\sin u\, du\\    &=\frac\pi8\frac1{1+t^2}.    \end{aligned}

We thus find by the dominated convergence theorem

\displaystyle    \begin{aligned}    \left(\int_0^\infty\cos x^2dx\right)^2-\left(\int_0^\infty\sin x^2dx\right)^2 &= \left(\int_0^\infty\lim_{t\downarrow0}e^{-tx^2}\cos x^2dx\right)^2-\left(\int_0^\infty\lim_{t\downarrow0}e^{-tx^2}\sin x^2dx\right)^2 \\    &=\lim_{t\downarrow0}\left(\bigl(F(t)\bigr)^2-\bigl(G(t)\bigr)^2\right)\\    &=\lim_{t\downarrow0}\frac\pi4\frac{t}{t^2+1}\\    &=0,    \end{aligned}

and

\displaystyle    \begin{aligned}    \left(\int_0^\infty\cos x^2dx\right)^2 &= \left(\int_0^\infty\cos x^2dx\right)\left(\int_0^\infty\sin x^2dx\right)\\    &=\left(\int_0^\infty\lim_{t\downarrow0}e^{-tx^2}\cos x^2dx\right)\left(\int_0^\infty\lim_{t\downarrow0}e^{-tx^2}\sin x^2dx\right)\\    &=\lim_{t\downarrow0}F(t)G(t)\\    &=\lim_{t\downarrow0}\frac\pi8\frac1{1+t^2}\\    &=\frac\pi8.    \end{aligned}

One can check that both integrals must be positive, so the first computation gives

\int_0^\infty\cos x^2dx = \int_0^\infty\sin x^2dx,

and the second computation shows that their common value is \sqrt{\frac\pi8}.

q.e.d. (Fresnel integrals)
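
For the sceptical reader: for fixed t>0 the damped integrals F(t) and G(t) converge absolutely, so the two closed forms derived along the way can be checked by straightforward quadrature (a hedged sketch; the choice t=1 is arbitrary):

```python
import numpy as np
from scipy.integrate import quad

t = 1.0
F, _ = quad(lambda x: np.exp(-t * x**2) * np.cos(x**2), 0, np.inf)
G, _ = quad(lambda x: np.exp(-t * x**2) * np.sin(x**2), 0, np.inf)

print(F**2 - G**2, np.pi / 4 * t / (t**2 + 1))   # both ~ 0.3927
print(F * G,       np.pi / 8 / (1 + t**2))       # both ~ 0.1963
```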

 

Even Brouwer’s Fixed Point Theorem may be concluded from the Transformation Formula (amongst a bunch of other theorems, none of which is actually as deep as this one, though). This is worthy of a separate text, mind you.

Well done, youtube-algorithm…

In the past few weeks I have encountered a youtube channel that offers several nice ideas about art and about how certain artistic ideas work. Apparently, youtube (which is to say: Google) knows enough about my musical preferences to present me a video called “Lord of the Rings: How music elevates story”, which de-constructs the musical leitmotifs in the score of Peter Jackson’s Lord of the Rings trilogy:

In particular, I had recognized the different themes in the score by myself, but the way they were intertwined had stayed hidden from my conscious thinking. A very nice clip that most certainly made my day – and in particular, I found out that the nerdwriter-channel had plenty of other very insightful ideas to discover: understanding Picasso or Edvard Munch (he didn’t just paint The Scream, you know, even though it is his very most famous piece, and rightly so), looking deeper into Bob Dylan’s lyrics, analyzing the speeches and tweets of President Donald Trump, something on how great directors work, as in Sherlock, or Saving Private Ryan, even about the Beatles’ cover of Sgt. Pepper’s Lonely Hearts Club Band (which would tell me little new, but I wouldn’t expect that in a non-specialist video). A very nice channel, challenging me to think deeply about many different topics, both old and new to me.

A slight introspective.

There are basic differences between the categories in which this blog is organized. On the one hand, there is maths, often rather technical and detailed, highlighting some certain aspect of a topic that has caught my interest. Usually, those texts are written immediately after I have spent enough time with this particular topic – as I often switch the topics of interest and as I tend to forget the technical details pretty fast. As a matter of fact, this is exactly why there are these mathematical texts: I can get amazed by remembering the details when I re-read the blog, and especially when I have understood fundamental things about something, I wish to keep some record and some hint of how things work.

On the other hand, there are the deeper texts on literature, like book reviews or texts on topics that I have spent a lot of time with (usually a lot more than with the particular maths topics: those switch faster). In some respects, I find those texts not only deep and long, but also somewhat more mature. I have taken a lot of time to develop an understanding of these things, and even if someone were to disagree with me, I would feel ready for discussion, even years later. This is why my stack of topics to be covered changes rather slowly. I have a text on Sebastian Haffner in the back of my mind, which still needs to be written and which can’t be written in just half an hour of leisure time; the same goes for a text on the TV series Sherlock, which itself may still be work in progress anyway. I was tempted to start a text on ancient Greece and democracy (today compared to the past), but I didn’t feel I had any definite view yet – this topic needs some more maturity. Time and again, texts like the one on The Beatles just drop out of my head and appear here, which then strikes me as a nice event, as I have managed to put down my present view of some aspect that I have spent very much time with.

And then, there are the posts like this one. Somewhat short, lacking the really deep insights, but in some way serving as a blog in the original sense of the word: as my “web-log”. They deal with topics that sparked my immediate interest, they sometimes deal with my failure to put my insights in writing, they sometimes even focus on my personal situation of sorts. I am quite aware of the problem that this post doesn’t bring any benefit to humanity, and that this kind of self-centredness is one of the tombstones of modern media. But if my deeper insights still have to mature, then so be it. Let us stick to Gauss: Pauca sed matura. And, as the phases come and go, my lack of spare time has returned and makes it harder to actually let my insights mature as much as I want them to. But, on the up-side, new music by Judith Holofernes is about to show up, I encountered the most amazing unplugged music by the wonderfully talented Taylor Swift, I’ve had a look at the 4th season of Sherlock, and I retrieved a great book of short stories by Andreas Eschbach. All this made my day, several times over.

Maybe, in some way, having thought about the structure of what can be found here has been quite an insight for myself. Considering where this post started, that’s not nothing.

Johann Kepler and how planets move

Kepler’s Laws of planetary motion follow very smoothly from Newton’s Law of Gravitation. Very little tough mathematics is needed for the proofs; they can actually be carried out with ordinary differential calculus and some knowledge of path integration.

Of course, from a historical point of view, these laws appeared in the reverse order. Kepler had neither Newton’s Law at his disposal, nor sufficient command of the calculus machinery that we have today. Instead, his way of arriving at his Laws involved years of hard work on the astronomical tables compiled by Tycho Brahe; Kepler was unable to make astronomical observations himself, even with his self-invented telescope, as he was poor-sighted all his life. Later, Newton could rely on Kepler’s results to find inspiration for his Law of Gravitation: indeed, as long as he could not relate his results to Kepler’s laws, he knew them to be incomplete. Together with his many other achievements (for instance, Kepler was the first to state Simpson’s rule, which is accordingly sometimes called “Kepler’s barrel rule”), this makes him one of the most interesting minds of the early modern era. His interest in the planets’ orbits was rooted in astrology and the construction of horoscopes; his observations and deductions led him to drop Copernicus’ assumption that the orbits were circles and, later, the idea that platonic solids were involved. Both Copernicus and Kepler revolutionized the view of space itself: Copernicus by showing how much more easily the orbits can be described when Earth is allowed to move (no more epicycles and the like), and Kepler by making it easier still and dropping circles altogether.

To appreciate how hard the problem of finding the planets’ orbits is, consider that we have plenty of observational data on the planets, but the data contain two unknowns: the orbit of Earth and the orbit of the planet. On top of that, our observations only tell us the angle under which the planet is observed; we learn nothing about the distances involved (unless we have Kepler’s third law at our disposal). In a very insightful talk for a wider audience, Terry Tao has explained some of Kepler’s ideas on this, especially how Kepler dealt with the orbit of Mars, which had been, for several reasons, the trickiest of the orbits in the models that preceded Kepler. Tao mentions that Einstein considered the finding of Kepler’s Laws one of the most shining moments in the history of human curiosity. “And if Einstein calls you a genius, you are really doing well.”

From these Laws and from what Newton and his successors achieved, many things can be inferred that are impossible to measure directly. For instance, the masses of the Sun and of all the planets can be computed from here, once the gravitational constant is known (which is tricky to pinpoint, actually). Voltaire is quoted with the sentence, regarding Newton’s achievements but fitting Kepler just as well, that the insights gained “semblaient n’être pas faites pour l’esprit humain” (“seemed not to be made for the human mind”).

To give just a tiny bit of contrast, we mention that Kepler also had erroneous thoughts which show how deeply he was still rooted in ancient ideas of harmony and aesthetics. For instance, Kepler tried to prove why the Solar system had exactly six planets (or, to rephrase a little more accurately to his thinking: why God had found pleasure in creating exactly six planets). For some time, he believed that the reason was related to the fact that there are exactly five platonic solids, which were to define the structure of the six orbits around the Sun. These ideas were also related to the integer harmonies of a vibrating string, as the planets themselves were supposed to move in a harmonic way. Of course, in those days observations reached only as far as Saturn, since the outer planets (and dwarf planets) cannot be found by eyesight or with the telescopes at Kepler’s disposal; all such ideas were doomed to be incomplete. However, his quest for harmony in the Solar system led him, in the end, to his Third Law. On another account, Kepler was mistaken in the deduction of his First Law, since he lacked the deep knowledge about integration that would only be developed decades later; luckily, his mistake cancels out with another mistake later on: “Es ist schon atemberaubend, wie sich bei Keplers Rechnungen letztlich doch alles fügt.” (“It is stunning how everything in Kepler’s computations adds up in the end”; have a look at Sonar’s highly readable and interesting book on the history of calculus for this).

In what follows, we shall show how Kepler’s Laws can be proved, assuming Newton’s Law of Gravitation, in a purely mathematical fashion. There will be no heuristics from physics or astronomy, only the axiomatic mathematical deduction that mostly works without any intuition from the applications (though we will look at motivations for why some definitions are made the way they are).

As a nice aside, we can look at the mathematical descriptions of the conic sections on which the First Law relies. But here again, there is no connection to why these curves are named the way they are.

Let us state Kepler’s Laws here first.

Kepler’s First Law of Planetary Motion: Planets orbit in ellipses, with the Sun as one of the foci.

Kepler’s Second Law of Planetary Motion: A planet sweeps out equal areas in equal times.

Kepler’s Third Law of Planetary Motion: The square of the period of an orbit is proportional to the cube of its semi-major axis.

Let us prove these Laws by following the account of Königsberger’s book on calculus. Many calculus books deal with Kepler’s Laws in a similar axiomatic fashion, yet we stick to this account as it appears to be the neatest one that does not conjure up too much physics.

We shall give a couple of technical lemmas first.

 

The Triangle-Lemma: The triangle marked by the points (x_1,y_1), (x_2,y_2), (x_3,y_3) has area \displaystyle \frac12\left|\mathrm{det}\begin{pmatrix}1&x_1&y_1\\1&x_2&y_2\\1&x_3&y_3\end{pmatrix}\right|.

Proof: The triangle, together with the x-axis, marks out three trapezoids:

P_{13} with the points (x_1,y_1), (x_3, y_3), (x_3,0), (x_1,0);

P_{23} with the points (x_2,y_2), (x_3, y_3), (x_3,0), (x_2,0);

P_{12} with the points (x_1,y_1), (x_2, y_2), (x_2,0), (x_1,0).

[Figure: the triangle and the three trapezoids P_{13}, P_{23}, P_{12}]

Thus, the area of the triangle is:

F_{T} = P_{13}+P_{23}-P_{12}.

The sign represents the situation given in the figure. For other triangles, another permutation of the signs may be necessary, but there will always be exactly one negative sign. Other permutations of the sign only represent a re-numbering of the points and therefore a change of sign in the determinant given in the statement. As we put absolute values to our statement, we avoid any difficulties of this kind.

As each of the trapezoids has two of its vertices on the x-axis, we find

\begin{aligned}    F_{T} &= \frac{y_1+y_3}{2}(x_3-x_1)+\frac{y_2+y_3}{2}(x_2-x_3)-\frac{y_1+y_2}{2}(x_2-x_1)\\    &= \frac12\left(x_1\bigl((y_1+y_2)-(y_1+y_3)\bigr) + x_2\bigl((y_2+y_3)-(y_1+y_2)\bigr) + x_3\bigl((y_1+y_3)-(y_2+y_3)\bigr)\right)\\    &= \frac12\left(x_1\bigl((y_1-y_3) + (y_2 - y_1)\bigr) + x_2(y_3-y_1) + x_3(y_1-y_2)\right)\\    &= \frac12\left((x_2-x_1)(y_3-y_1) - (x_3-x_1)(y_2-y_1)\right)\\    &= \frac12\mathrm{det}\begin{pmatrix}x_2-x_1&y_2-y_1\\x_3-x_1&y_3-y_1\end{pmatrix}\\    &= \frac12\mathrm{det}\begin{pmatrix}1&x_1&y_1\\0&x_2-x_1&y_2-y_1\\0&x_3-x_1&y_3-y_1\end{pmatrix}\\    &= \frac12\mathrm{det}\begin{pmatrix}1&x_1&y_1\\1&x_2&y_2\\1&x_3&y_3\end{pmatrix}.    \end{aligned}

Q.e.d.
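
A quick numerical spot-check of the Triangle-Lemma against the elementary cross-product formula 2F_T = \left|(B-A)\times(C-A)\right| (a sketch; the random triangle is an arbitrary choice of mine):

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, C = rng.uniform(-5, 5, size=(3, 2))

# The determinant formula of the Triangle-Lemma.
det_area = 0.5 * abs(np.linalg.det(np.array([[1, A[0], A[1]],
                                             [1, B[0], B[1]],
                                             [1, C[0], C[1]]])))
# The z-component of the cross product of the edge vectors.
u, v = B - A, C - A
cross_area = 0.5 * abs(u[0] * v[1] - u[1] * v[0])

print(det_area, cross_area)   # identical up to rounding
```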

 

Lemma (Leibniz’ sector formula): Let \gamma:[a,b]\to\mathbb{R}^2 be a continuously differentiable path, \gamma(t) = \begin{pmatrix}x(t)\\y(t)\end{pmatrix}. Then the line segment from 0 to the points of the path sweeps the area \displaystyle\frac12\int_a^b(x\dot y - y\dot x)dt.

Note that we have used Newton’s notation for derivatives. One might also write the integral as \displaystyle\int_a^b\bigl(x(t)y'(t)-y(t)x'(t)\bigr)dt.

Proof: Let us first clarify what we understand by “sweeping” line segments. Consider the path \gamma given in the image.

[Figure: a non-closed path \gamma]

As this path is not closed (it’s not a contour), it doesn’t enclose an area. But if we take the origin into account, we can define an area that is related to where the path is:

[Figure: the area swept by the line segments joining the origin to \gamma]

Now, pick a partition of [a,b], such as \{a=t_0,t_1,\ldots,t_{n-1},b=t_n\} and make a polygon of the partition and the origin – the corresponding triangles form an area that approximates the area bounded by \gamma, as above.

[Figure: the polygonal approximation of the swept area]

As the partition gets finer, we expect the polygon-area to converge to the \gamma-area. And this is how the definition of the area swept by a path originates:

For any \varepsilon>0 there shall be \delta>0, such that for every partition \{t_0,\ldots,t_n\} of [a,b] that is finer than \delta, we get

\displaystyle \left|\sum_{i=0}^{n-1}F(T_i) - \frac12\int_a^b(x\dot y-y\dot x)dt\right| < \varepsilon.\qquad(\ast)

Here, F(T_i) is the area of the triangle bounded by \gamma(t_i), \gamma(t_{i+1}) and the origin; we abbreviate x_i = x(t_i), y_i = y(t_i). By the Triangle-Lemma, this area is \displaystyle\frac12\mathrm{det}\begin{pmatrix}1&0&0\\1&x_i&y_i\\1&x_{i+1}&y_{i+1}\end{pmatrix}.

Because the orientation of the T_i might be of importance, F(T_i) may keep its sign in what follows (Imagine, for instance, a path that traverses a line segment once from left to right and once from right to left; in total, no area is covered).

Now, let us prove that (\ast) is true.

As \dot x and \dot y are continuous, choose L=\max\left(\max_{[a,b]}\left|\dot x(t)\right|, \max_{[a,b]}\left|\dot y(t)\right|\right) and take \delta = \frac{\varepsilon}{2L^2(b-a)}. Take a partition \{t_0,\ldots,t_n\} with t_0=a, t_n=b and \left|t_{k+1}-t_k\right| < \delta for k=0,\ldots,n-1. Then, for any such k,

\begin{aligned}    F(T_k) &= \frac12\mathrm{det}\begin{pmatrix}1&0&0\\1&x_k&y_k\\1&x_{k+1}&y_{k+1}\end{pmatrix}\\    &=\frac12(x_ky_{k+1}-x_{k+1}y_k)\\    &= \frac12\Bigl(x_k\bigl(y_k+(y_{k+1}-y_k)\bigr) - y_k\bigl(x_k+(x_{k+1}-x_k)\bigr)\Bigr)\\    &=\frac12\bigl(x_k(y_{k+1}-y_k)-y_k(x_{k+1}-x_k)\bigr)\\    &=\frac12\left(x_k\int_{t_k}^{t_{k+1}}\dot y(t)dt - y_k\int_{t_k}^{t_{k+1}}\dot x(t)dt\right)\\    &=\frac12\int_{t_k}^{t_{k+1}}\bigl(x_k\dot y(t)-y_k\dot x(t)\bigr)dt.    \end{aligned}

This yields, using the mean value theorem,

\begin{aligned}    \left|2F(T_k)-\int_{t_k}^{t_{k+1}}(x\dot y-y\dot x)dt\right| &= \left|\int_{t_k}^{t_{k+1}}(x_k\dot y-y_k\dot x-x\dot y+y\dot x)dt\right|\\    &\leq \left|\int_{t_k}^{t_{k+1}}(x_k-x)\dot ydt\right| + \left|\int_{t_k}^{t_{k+1}}(y-y_k)\dot xdt\right|\\    &\leq\int_{t_k}^{t_{k+1}}\max_{[a,b]}\left|\dot x(t)\right|\left|{t_k-t}\right|\left|\dot y(t)\right|dt + \\    &\hphantom{=}+\int_{t_k}^{t_{k+1}}\max_{[a,b]}\left|\dot y(t)\right|\left|{t_k-t}\right|\left|\dot x(t)\right|dt\\    &\leq L^2\left|t_{k+1}-t_k\right|(t_{k+1}-t_k) + L^2\left|t_{k+1}-t_k\right|(t_{k+1}-t_k)\\    &< 2L^2\delta(t_{k+1}-t_k).    \end{aligned}

We conclude

\displaystyle\left|2\sum_{k=0}^{n-1}F(T_k) - \int_a^b(x\dot y - y\dot x)dt\right| < 2L^2\delta(b-a) = \varepsilon.

Q.e.d.

One might as well prove this by applying Green’s Theorem, but in this case it just gets less elementary.
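
For illustration, here is the sector formula evaluated numerically on the unit circle, where the swept area must be \pi (a hedged sketch using a simple trapezoidal sum, my own choice of quadrature):

```python
import numpy as np

# gamma(t) = (cos t, sin t) on [0, 2*pi].
t = np.linspace(0, 2 * np.pi, 100001)
x, y = np.cos(t), np.sin(t)
xdot, ydot = -np.sin(t), np.cos(t)

integrand = x * ydot - y * xdot                  # identically 1 on the circle
area = 0.5 * np.sum((integrand[:-1] + integrand[1:]) / 2 * np.diff(t))
print(area, np.pi)   # both ~ 3.14159
```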

 

The \times-Lemma: Let a,b\in\mathbb{R}^3. We set

\displaystyle a\times b := \begin{pmatrix}a_2b_3-a_3b_2\\a_3b_1-a_1b_3\\a_1b_2-a_2b_1\end{pmatrix}.

Obviously, this is linear both in a and in b. We have

(i)\quad\displaystyle \left\langle (a\times b),c\right\rangle = \mathrm{det}(a,b,c).

(ii)\quad a\times b is orthogonal to a and to b.

(iii)\quad\displaystyle (a\times b)\times c = -\left\langle b,c\right\rangle a + \left\langle a,c\right\rangle b (Grassmann’s equation).

(iv)\quad\displaystyle (a\times b)^{\cdot} = \dot a\times b+a\times\dot b.

Proof: For (i):

\begin{aligned}    \left\langle (a\times b),c\right\rangle &= \left\langle\begin{pmatrix}a_2b_3-a_3b_2\\a_3b_1-a_1b_3\\a_1b_2-a_2b_1\end{pmatrix},\begin{pmatrix}c_1\\c_2\\c_3\end{pmatrix}\right\rangle\\    &=c_1(a_2b_3-a_3b_2)+c_2(a_3b_1-a_1b_3)+c_3(a_1b_2-a_2b_1)\\    &=c_1\mathrm{det}\begin{pmatrix}a_2&b_2\\a_3&b_3\end{pmatrix}-c_2\mathrm{det}\begin{pmatrix}a_1&b_1\\a_3&b_3\end{pmatrix}+c_3\mathrm{det}\begin{pmatrix}a_1&b_1\\a_2&b_2\end{pmatrix}\\    &=\mathrm{det}\begin{pmatrix}a_1&b_1&c_1\\a_2&b_2&c_2\\a_3&b_3&c_3\end{pmatrix}\\    &=\mathrm{det}(a,b,c).    \end{aligned}

For (ii): \left\langle a\times b, a\right\rangle = \mathrm{det}(a,b,a)=0 and \left\langle a\times b,b\right\rangle = \mathrm{det}(a,b,b)=0.

For (iii):

\begin{aligned}    (a\times b)\times c &=\begin{pmatrix}a_2b_3-a_3b_2\\a_3b_1-a_1b_3\\a_1b_2-a_2b_1\end{pmatrix}\times\begin{pmatrix}c_1\\c_2\\c_3\end{pmatrix}\\    &=\begin{pmatrix}(a_3b_1-a_1b_3)c_3-(a_1b_2-a_2b_1)c_2\\(a_1b_2-a_2b_1)c_1-(a_2b_3-a_3b_2)c_3\\(a_2b_3-a_3b_2)c_2-(a_3b_1-a_1b_3)c_1\end{pmatrix}\\    &=\begin{pmatrix}a_3b_1c_3-a_1b_3c_3-a_1b_2c_2+a_2b_1c_2\\a_1b_2c_1-a_2b_1c_1-a_2b_3c_3+a_3b_2c_3\\a_2b_3c_2-a_3b_2c_2-a_3b_1c_1+a_1b_3c_1\end{pmatrix}\\    &=\begin{pmatrix}a_1b_1c_1+a_2b_1c_2+a_3b_1c_3-a_1b_1c_1-a_1b_2c_2-a_1b_3c_3\\a_1b_2c_1+a_2b_2c_2+a_3b_2c_3-a_2b_1c_1-a_2b_2c_2-a_2b_3c_3\\a_1b_3c_1+a_2b_3c_2+a_3b_3c_3-a_3b_1c_1-a_3b_2c_2-a_3b_3c_3\end{pmatrix}\\    &=(a_1c_1+a_2c_2+a_3c_3)\begin{pmatrix}b_1\\b_2\\b_3\end{pmatrix} - (b_1c_1+b_2c_2+b_3c_3)\begin{pmatrix}a_1\\a_2\\a_3\end{pmatrix}\\    &=\left\langle a,c\right\rangle b - \left\langle b,c\right\rangle a.    \end{aligned}

For (iv):

\begin{aligned}    \bigl(a(t)\times b(t)\bigr)^\cdot &= \begin{pmatrix}a_2(t)b_3(t)-a_3(t)b_2(t)\\a_3(t)b_1(t)-a_1(t)b_3(t)\\a_1(t)b_2(t)-a_2(t)b_1(t)\end{pmatrix}^\cdot\\    &=\begin{pmatrix}\dot a_2b_3+a_2\dot b_3-\dot a_3b_2-a_3\dot b_2\\\dot a_3b_1+a_3\dot b_1-\dot a_1b_3-a_1\dot b_3\\\dot a_1b_2+a_1\dot b_2-\dot a_2b_1-a_2\dot b_1\end{pmatrix}\\    &=\begin{pmatrix}\dot a_2b_3-\dot a_3b_2\\\dot a_3b_1-\dot a_1b_3\\\dot a_1b_2-\dot a_2b_1\end{pmatrix}+\begin{pmatrix}a_2\dot b_3-a_3\dot b_2\\a_3\dot b_1-a_1\dot b_3\\ a_1\dot b_2-a_2\dot b_1\end{pmatrix} \\    &=\dot a\times b + a\times \dot b    \end{aligned}

Q.e.d.
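
Identities like Grassmann’s equation are easy to spot-check numerically (a sketch; the random vectors are an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(2)
a, b, c = rng.standard_normal((3, 3))

# Grassmann's equation (iii): (a x b) x c = -<b,c> a + <a,c> b.
lhs = np.cross(np.cross(a, b), c)
rhs = -np.dot(b, c) * a + np.dot(a, c) * b
print(np.allclose(lhs, rhs))   # True
```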

 

Now, let us look at conic sections and define them mathematically. We will not be interested in what these things have to do with cones – as stated at the beginning: pure mathematics here.

We are going to work in \mathbb{R}^2 here. Let F be a point (the so-called focal point) and l be a line (the so-called directrix), and the distance of F and l shall be some p>0. We are looking for all those points in \mathbb{R}^2 for which the distance to F and the distance to l are proportional – formally: For any point (\xi,\eta)^t\in\mathbb{R}^2, we set

r=\mathrm{dist}\bigl(F,(\xi,\eta)^t\bigr)\qquad\text{and}\qquad d = \mathrm{dist}\bigl(l, (\xi,\eta)^t\bigr),

and for \varepsilon > 0 we demand

\displaystyle \frac{r}{d}=\varepsilon.

For simplicity, we will put F into the origin of our coordinate system, and l parallel to one of the axes, as in the figure. In particular, r^2 = \xi^2+\eta^2 and d = p+\xi. Our equation for the interesting points thus becomes:

\begin{aligned}    && r^2&=\varepsilon^2d^2\\    &\iff& \xi^2+\eta^2 &= \varepsilon^2p^2+2\varepsilon^2p\xi+\varepsilon^2\xi^2\\    &\iff&\xi^2(1-\varepsilon^2) &= \varepsilon^2p^2-\eta^2+2\varepsilon^2p\xi.    \end{aligned}

[Figure: the focal point F, the directrix l, and the distances r and d]

Let us distinguish the following cases:

Case \varepsilon = 1. Then we set x := \xi+\frac p2, y := \eta and find

\begin{aligned}    &&\xi^2 (1-\varepsilon^2) &= \varepsilon^2p^2-\eta^2 + 2\varepsilon^2 p \xi \\    &\iff& 0 &= p^2-y^2+2p\left(x-\frac p2\right)\\    &\iff& 0 &= p^2-y^2+2px-p^2\\    &\iff& y^2 &= 2px.    \end{aligned}

We see that the interesting points lie on a parabola which is open to the right (by choosing other coordinate systems, of course, any other parabola will appear; in some way, this is its normal form).

Case \varepsilon < 1. Here we set x:=\xi-\frac{p\varepsilon^2}{1-\varepsilon^2}, y:=\eta. Then we get

\begin{aligned}    &&\xi^2 (1-\varepsilon^2) &= \varepsilon^2p^2-\eta^2 + 2\varepsilon^2 p \xi \\    &\iff &\left(x+\frac{p\varepsilon^2}{1-\varepsilon^2}\right)^2(1-\varepsilon^2) &= \varepsilon^2p^2-y^2+2\varepsilon^2p\left(x+\frac{p\varepsilon^2}{1-\varepsilon^2}\right)\\    &\iff& x^2(1-\varepsilon^2)+2xp\varepsilon^2 + \frac{p^2\varepsilon^4}{1-\varepsilon^2} &= \varepsilon^2p^2-y^2+2\varepsilon^2px+\frac{2\varepsilon^4p^2}{1-\varepsilon^2}\\    &\iff& x^2(1-\varepsilon^2) + y^2 &= \varepsilon^2p^2+\frac{\varepsilon^4p^2}{1-\varepsilon^2}\\    &\iff& x^2(1-\varepsilon^2) + y^2 &= \frac{(1-\varepsilon^2)\varepsilon^2p^2+\varepsilon^4p^2}{1-\varepsilon^2}\\    &\iff& x^2(1-\varepsilon^2) + y^2 &= \frac{\varepsilon^2p^2}{1-\varepsilon^2}\\    &\iff& \frac{x^2(1-\varepsilon^2)^2}{\varepsilon^2p^2} + \frac{y^2(1-\varepsilon^2)}{\varepsilon^2p^2} &= 1\\    &\iff& \frac{x^2}{a^2} + \frac{y^2}{b^2} &= 1,    \end{aligned}

with a = \frac{\varepsilon p}{1-\varepsilon^2} and b = \frac{\varepsilon p}{\sqrt{1-\varepsilon^2}}.

We have found that the interesting points lie on an ellipse.

Case \varepsilon > 1. This is exactly the same as the case \varepsilon<1, except for the last step. We mustn’t set b as before, since 1-\varepsilon^2<0 and we cannot get a real square root of this. Thus, we use b = \frac{\varepsilon p}{\sqrt{\varepsilon^2-1}} and the resulting negative sign is placed in the final equation:

\displaystyle \frac{x^2}{a^2}-\frac{y^2}{b^2} = 1.

This is a hyperbola.

To conclude this part, we give the general representation of conic sections in polar coordinates. From the figure given above, we see d = p+r\cos\varphi and so

\displaystyle r = \varepsilon d = \varepsilon p + \varepsilon r\cos\varphi,

which means

\displaystyle r = \frac{\varepsilon p}{1-\varepsilon\cos\varphi}.

That yields the polar representation (depending only on the parameters and on the variable \varphi):

\displaystyle re^{i\varphi} = \frac{\varepsilon p}{1-\varepsilon\cos\varphi}e^{i\varphi}.
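
As a small check that this polar representation really parametrizes the ellipse from the case \varepsilon<1, one can sample it and plug the (shifted) points into the normal form (a hedged sketch; the values of \varepsilon and p are arbitrary illustrative choices):

```python
import numpy as np

eps, p = 0.5, 1.0
a = eps * p / (1 - eps**2)
b = eps * p / np.sqrt(1 - eps**2)

phi = np.linspace(0, 2 * np.pi, 12, endpoint=False)
r = eps * p / (1 - eps * np.cos(phi))
# Shift by the centre offset p*eps^2/(1 - eps^2), as in the case eps < 1 above.
x = r * np.cos(phi) - p * eps**2 / (1 - eps**2)
y = r * np.sin(phi)

print(x**2 / a**2 + y**2 / b**2)   # all entries ~ 1
```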

 

Now, let us turn to the basis for our proofs of Kepler’s Laws: Newton’s Law of Gravitation. Let m be the mass of a planet, M the mass of the Sun, \gamma a real constant (the gravitational constant), and let x(t) be a path (the planet’s orbit). By Newton’s Law, we have the differential equation

\displaystyle m\ddot x = -\gamma Mm\frac{x}{\left\|x\right\|^3}.

On the left-hand side, there’s the definition of force as mass multiplied by acceleration. On the right-hand side is Newton’s Law stating the gravitational force between Sun and planet.

We define the vector-valued functions of t (note that x depends on t):

\displaystyle J = x\times m\dot x\qquad\text{and}\qquad A=\frac1{\gamma Mm}J\times\dot x+\frac{x}{\left\|x\right\|}.

(J is the angular momentum, A is an axis; but our math doesn’t care for either of those names or intentions).

The AJ-Lemma: As functions of t, A and J are constant.

Proof: Let us look at J first. Using the fact that by definition a\times a = 0, and Newton’s Law of Gravitation, we get

\begin{aligned}    \dot J \underset{\times\text{-Lemma}}{\overset{(iv)}{=}} (x\times m\dot x)^\cdot &= \dot x \times m\dot x + x\times m\ddot x\\    &= \dot x \times m\dot x + x\times\left(-\gamma Mm\frac{x}{\left\|x\right\|^3}\right)\\    &= m(\dot x\times \dot x) - \frac{\gamma Mm}{\left\|x\right\|^3}(x\times x)\\    &= 0 - 0 = 0.    \end{aligned}

Now, for A, we will need a side-result first.

\begin{aligned}    \frac{d}{dt}\frac{1}{\left\|x\right\|} &= \frac{d}{dt}\bigl(x_1^2(t)+x_2^2(t)+x_3^2(t)\bigr)^{-1/2}\\    &= -\frac12\bigl(x_1^2(t)+x_2^2(t)+x_3^2(t)\bigr)^{-3/2}\bigl(2x_1(t)\dot x_1(t)+2x_2(t)\dot x_2(t)+2x_3(t)\dot x_3(t)\bigr)\\    &= -\frac{\left\langle x,\dot x\right\rangle}{\left\|x\right\|^3}.    \end{aligned}

This yields

\begin{aligned}    \dot A &\underset{\hphantom{\times\text{-Lemma}}}{=} \left(\frac{1}{\gamma Mm}J\times \dot x + \frac{x}{\left\|x\right\|}\right)^\cdot \\    &\underset{\times\text{-Lemma}}{\overset{(iv)}{=}} \frac{1}{\gamma Mm}\bigl(\dot J\times\dot x + J\times\ddot x\bigr) + \left(\frac{1}{\left\|x\right\|}\dot x-\frac{\left\langle x,\dot x\right\rangle}{\left\|x\right\|^3}x\right)\\    &\underset{\hphantom{\times\text{-Lemma}}}{=} \frac1{\gamma Mm}\left(0+J\times\left(-\gamma M\frac{x}{\left\|x\right\|^3}\right)\right) + \left(\frac{1}{\left\|x\right\|}\dot x-\frac{\left\langle x,\dot x\right\rangle}{\left\|x\right\|^3}x\right)\\    &\underset{\hphantom{\times\text{-Lemma}}}{=}-\frac1m\left((x\times m\dot x)\times \frac{x}{\left\|x\right\|^3}\right) + \left(\frac{1}{\left\|x\right\|}\dot x-\frac{\left\langle x,\dot x\right\rangle}{\left\|x\right\|^3}x\right)\\    &\underset{\hphantom{\times\text{-Lemma}}}{=}-\left((x\times \dot x)\times \frac{x}{\left\|x\right\|^3}\right) + \left(\frac{1}{\left\|x\right\|}\dot x-\frac{\left\langle x,\dot x\right\rangle}{\left\|x\right\|^3}x\right)\\    &\underset{\times\text{-Lemma}}{\overset{(iii)}{=}} -\left\langle x,\frac{x}{\left\|x\right\|^3}\right\rangle \dot x + \left\langle \dot x,\frac{x}{\left\|x\right\|^3}\right\rangle x + \frac{1}{\left\|x\right\|}\dot x-\frac{\left\langle x,\dot x\right\rangle}{\left\|x\right\|^3}x \\    &\underset{\hphantom{\times\text{-Lemma}}}{=} -\frac1{\left\|x\right\|^3}\left\langle x,x\right\rangle \dot x + \frac1{\left\|x\right\|^3}\left\langle \dot x, x\right\rangle x + \frac1{\left\|x\right\|}\dot x-\frac1{\left\|x\right\|^3}\left\langle x,\dot x\right\rangle x\\    &\underset{\hphantom{\times\text{-Lemma}}}{=} -\frac1{\left\|x\right\|^3}\left\|x\right\|^2\dot x + \frac1{\left\|x\right\|}\dot x\\    &\underset{\hphantom{\times\text{-Lemma}}}{=} 0.    \end{aligned}

Q.e.d.
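
The AJ-Lemma can also be watched at work numerically: integrating Newton’s differential equation and evaluating J and A along the trajectory, both should stay constant up to solver error (a hedged sketch in units with \gamma M = m = 1, an assumed normalisation; the initial data are arbitrary):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, s):
    # Newton's law x'' = -x/|x|^3 as a first-order system (position, velocity).
    x, v = s[:3], s[3:]
    return np.concatenate([v, -x / np.linalg.norm(x)**3])

s0 = np.array([1.0, 0.0, 0.0, 0.0, 0.8, 0.0])   # initial position and velocity
sol = solve_ivp(rhs, (0, 20), s0, dense_output=True, rtol=1e-10, atol=1e-12)

for t in (0.0, 10.0, 20.0):
    x, v = sol.sol(t)[:3], sol.sol(t)[3:]
    J = np.cross(x, v)                           # angular momentum (m = 1)
    A = np.cross(J, v) + x / np.linalg.norm(x)   # the axis vector from above
    print(t, J, A)   # J and A agree at all times, up to solver error
```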

 

Now, we have all ingredients to prove Kepler’s Laws. We conclude axiomatically by assuming that planetary motion is governed by Newton’s Law of Gravitation (the differential equation given above).

Let’s start with

Theorem (Kepler’s First Law of Planetary Motion): Planets orbit in ellipses, with the Sun as one of the foci.

Proof: Let x(t) denote the orbit of a planet around the Sun. By the AJ-Lemma, J is constant. By definition of J = x\times m\dot x and by (ii) of the \times-Lemma, both x and \dot x are orthogonal to J; the orbit is therefore located in a two-dimensional plane. Let us introduce polar coordinates in this plane, with the Sun in the origin, and with axis A (this works, as A is located in the same plane as well: A\bot J by the definition of A).

[Figure: the orbital plane with the axis A and the angle \varphi(t)]

Now let \varphi(t) be the angle between x(t) and A, and set \varepsilon := \left\|A\right\|. This means

\displaystyle \cos\varphi(t) = \frac{\left\langle x(t),A\right\rangle}{\left\|x\right\|\left\|A\right\|},

and so

\displaystyle \left\langle A,x(t)\right\rangle = \varepsilon \left\|x\right\|\cos\varphi(t).

By definition of A, we have

\begin{aligned}    \left\langle A,x(t)\right\rangle &\underset{\hphantom{\times\text{-Lemma}}}{=} \left\langle\frac{1}{\gamma Mm}J\times\dot x + \frac{x}{\left\|x\right\|}, x(t)\right\rangle\\    &\underset{\hphantom{\times\text{-Lemma}}}{=} \frac{1}{\gamma Mm}\left\langle J\times \dot x, x\right\rangle + \frac{1}{\left\|x\right\|}\left\langle x,x\right\rangle\\    &\underset{\times\text{-Lemma}}{\overset{(i)}{=}} \frac{1}{\gamma Mm}\mathrm{det}(J,\dot x,x)+\frac{\left\| x\right\|^2}{\left\|x\right\|}\\    &\underset{\hphantom{\times\text{-Lemma}}}{=} (-1)^3\frac{1}{\gamma Mm}\mathrm{det}(x,\dot x,J) + \left\|x(t)\right\|\\    &\underset{\times\text{-Lemma}}{\overset{(i)}{=}} -\frac1{\gamma Mm}\left\langle x\times \dot x,J\right\rangle + \left\|x(t)\right\|\\    &\underset{\hphantom{\times\text{-Lemma}}}{=} -\frac1{\gamma Mm^2}\left\langle x\times m\dot x, J\right\rangle + \left\|x(t)\right\|\\    &\underset{\hphantom{\times\text{-Lemma}}}{=} -\frac1{\gamma Mm^2}\left\langle J,J\right\rangle + \left\|x(t)\right\|\\    &\underset{\hphantom{\times\text{-Lemma}}}{=} const. + \left\|x(t)\right\|.    \end{aligned}

Now, if A=0, then we have found

\displaystyle \left\|x(t)\right\| = \frac1{\gamma Mm^2}\left\|J\right\|^2,

which means that the planet moves on a circular orbit.

If A\neq0, then we conclude

\begin{aligned}    &&\varepsilon \left\|x(t)\right\|\cos\varphi(t) &= -\frac1{\gamma Mm^2}\left\|J\right\|^2+\left\|x(t)\right\|\\    &\implies&\frac1{\gamma Mm^2}\left\|J\right\|^2&=\bigl(1-\varepsilon\cos\varphi(t)\bigr)\left\|x(t)\right\|\\    &\implies&\left\|x(t)\right\|&=\frac{\frac{\left\|J\right\|^2}{\gamma Mm^2}}{1-\varepsilon\cos\varphi(t)} = \frac{\varepsilon\frac{\left\|J\right\|^2}{\gamma Mm^2\left\|A\right\|}}{1-\varepsilon\cos\varphi(t)} = \frac{\varepsilon p}{1-\varepsilon\cos\varphi(t)},    \end{aligned}

with p defined in the obvious fashion to make the last equation work.

Therefore, the planet moves on a conic section, with focus in the Sun. As the planet’s orbits are bounded, we have proved that it must follow an ellipse. Q.e.d.

Theorem (Kepler’s Second Law of Planetary Motion): A planet sweeps out equal areas in equal times.

Proof: We use cartesian coordinates in \mathbb{R}^3, such that e_1 is parallel to A and e_3 is parallel to J. Then x(t) lies in the plane \mathrm{span}(e_1,e_2) and the Sun sits at the origin. In particular, x_3(t) = 0 for all t by the proof of the First Law. Then,

\displaystyle \frac1m J = x\times \dot x = \begin{pmatrix}x_1\\x_2\\0\end{pmatrix}\times \begin{pmatrix}\dot x_1\\\dot x_2\\ 0\end{pmatrix} = \begin{pmatrix}0\\0\\x_1\dot x_2-x_2\dot x_1\end{pmatrix}.

By Leibniz’ sector formula, the line segment between times t_1 and t_2 sweeps the area

\displaystyle \frac12\left|\int_{t_1}^{t_2}(x_1\dot x_2-x_2\dot x_1)dt\right| = \frac1{2m}\left\|J\right\|(t_2-t_1).

This area only depends on the difference of times, as stated. Q.e.d.
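
Numerically, the Second Law looks as follows: sampling a simulated orbit and evaluating the sector formula over two disjoint time windows of equal length gives the same swept area (a hedged sketch, again in the normalised units \gamma M = m = 1 from above):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, s):
    # Planar Kepler problem; a 2D state suffices here.
    x, v = s[:2], s[2:]
    return np.concatenate([v, -x / np.linalg.norm(x)**3])

sol = solve_ivp(rhs, (0, 4), np.array([1.0, 0.0, 0.0, 0.8]),
                dense_output=True, rtol=1e-10, atol=1e-12)

def swept_area(t1, t2, n=20001):
    # Leibniz' sector formula, evaluated with a trapezoidal sum.
    t = np.linspace(t1, t2, n)
    x, y, vx, vy = sol.sol(t)
    integrand = x * vy - y * vx
    return 0.5 * np.sum((integrand[:-1] + integrand[1:]) / 2 * np.diff(t))

print(swept_area(0.0, 1.0), swept_area(2.5, 3.5))   # both ~ 0.4 = |J|/(2m)
```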

Theorem (Kepler’s Third Law of Planetary Motion): The square of the period of an orbit is proportional to the cube of its semi-major axis.

Proof: By Leibniz’ sector formula (used similarly to the proof of the Second Law), the area contained in the planet’s entire orbit is

\displaystyle \frac12\left|\int_0^T (x_1\dot x_2-x_2\dot x_1)dt\right| = \frac1{2m}\left\|J\right\| T,

where T is the time taken for a full orbit around the Sun. By the First Law, this orbit is an ellipse, the area of which may be computed as follows: The cartesian coordinates of an ellipse are

\begin{pmatrix}x(t)\\y(t)\end{pmatrix}=\begin{pmatrix}a\cos t\\b\sin t\end{pmatrix},

with a and b real constants (the larger one is called the semi-major axis). This is actually an ellipse because of

\displaystyle \frac{x^2}{a^2}+\frac{y^2}{b^2} = \frac{a^2\cos^2 t}{a^2}+\frac{b^2\sin^2t}{b^2} = 1.

From the notation on normal forms of conic sections, we find b = \frac{\varepsilon p}{\sqrt{1-\varepsilon^2}} = a\sqrt{1-\varepsilon^2}, which implies that a>b (as \varepsilon > 0 here). Now, the area of the ellipse is, by Leibniz’ sector formula again,

\begin{aligned}    \frac12\left|\int_0^{2\pi} (x_1\dot x_2-x_2\dot x_1)dt\right| &= \frac12\left|\int_0^{2\pi}\bigl(a \cos t \cdot b \cos t-b\sin t \cdot a (-\sin t)\bigr)dt\right| \\    &= \frac12 ab\int_0^{2\pi}dt \\    &= \pi ab.    \end{aligned}

Both representations of the area covered by the orbit now yield

\displaystyle T \frac1{2m}\left\|J\right\| = \pi a^2\sqrt{1-\varepsilon^2},

and so, using the definition of p obtained in the proof of the First Law,

\begin{aligned}    \displaystyle T^2 &= \frac{4m^2}{\left\|J\right\|^2}\pi^2a^4(1-\varepsilon^2) \\    &= \frac{4\pi^2 m^2a^3}{\left\|J\right\|^2}a(1-\varepsilon^2) \\    &= \frac{4\pi^2m^2a^3}{\left\|J\right\|^2}\frac{\varepsilon p}{1-\varepsilon^2}(1-\varepsilon^2) \\    &= \frac{4\pi^2m^2a^3}{\left\|J\right\|^2}\varepsilon\frac{\left\|J\right\|^2}{\gamma Mm^2\left\|A\right\|} \\    &= \frac{4\pi^2}{\gamma M}a^3.    \end{aligned}

The constant \frac{4\pi^2}{\gamma M} is the same for every planet travelling around the Sun, which proves the asserted proportionality. Q.e.d.
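
And the Third Law can be checked on the same simulated orbit: reading off the period T as the first return of the state to its initial value, and the semi-major axis a from the extreme distances to the origin, T^2 and \frac{4\pi^2}{\gamma M}a^3 should coincide (a hedged sketch in the normalised units \gamma M = m = 1 from above):

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, s):
    x, v = s[:2], s[2:]
    return np.concatenate([v, -x / np.linalg.norm(x)**3])

s0 = np.array([1.0, 0.0, 0.0, 0.8])
t = np.linspace(0, 6, 600001)
sol = solve_ivp(rhs, (0, 6), s0, t_eval=t, rtol=1e-10, atol=1e-12)

r = np.linalg.norm(sol.y[:2], axis=0)
a = (r.min() + r.max()) / 2                      # semi-major axis

# The first return to the initial state lies in (2, 6) for these data.
dist = np.linalg.norm(sol.y - s0[:, None], axis=0)
window = (t > 2.0) & (t < 6.0)
T = t[window][np.argmin(dist[window])]

print(T**2, 4 * np.pi**2 * a**3)   # both ~ 15.7
```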

Let us conclude with a brief remark on how beautiful and elegant these Laws are – made my day.