## Mathematics Category Archive

Below you will find the most recent posts tagged “Mathematics”, arranged in reverse chronological order.


I recently gave a talk about different visualizations of modular forms, including many new visualizations that I have been developing and making. I have continued to develop these images, and I now have a proposal for new visualizations for modular forms in the LMFDB.

To see a current visualization, look at this modular form page. The image from that page (as it is currently) looks like this.

This is a plot on a disk model. To make sense of this plot, I note that the real axis in the upper-half-plane model is the circumference of the circle, and the imaginary axis in the upper-half-plane model is the vertical diameter of the circle. In particular, $z = 0$ is the bottom of the circle, $z = i$ is the center of the circle, and $z = \infty$ is the top of the circle. The magnitude is currently displayed — the big blue region is where the magnitude is very small. In a neighborhood of the blue blob, there are a few bands of color that are meaningful — but then things change too quickly and the graph becomes a graph of noise.

I propose one of the following alternatives. I maintain the same badge and model for the space, but I change what is plotted and what colors to use. Also, I plot them larger so that we can get a good look at them; for the LMFDB they would probably be produced at the same (small) size.

I have made three plots with contours. They are all morally the same, except for the underlying colorscheme. The “default” sage colorscheme leads to the following plot.

The good thing is that it’s visually striking. But I recently learned that this colorscheme is hated, and it’s widely thought to be a poor choice in almost every situation.

A little while ago, matplotlib added two colorschemes designed to fix the problems with the default colorscheme. (sage's defaults lag behind here — matplotlib itself has since changed its default). This is one of them, called *twilight*.

I’ve also prepared these plots without the contours, and I think they’re quite nice as well.

First *jet.*

Then *twilight*. At the talk I recently gave, this was the favorite — but I hadn't yet implemented the contour-plots above for non-default colorschemes.

Then *viridis*. (I'm still not serious about this one — but I think it's pretty).

## Note on other Possibilities

There are other possibilities, such as perhaps plotting on a portion of the upper half-plane instead of a disk-model. I describe a few of these possibilities and give examples in the notes from my last talk. I should note that I can now produce contour-type plots there as well, though I haven’t done that.

For fun, here is the default colorscheme, but rotated. This came about accidentally (as did so many other plots in this excursion), but I think it highlights how odd jet is.

This concludes my proposal. I am collecting opinions. If you are struck by an idea or an opinion and would like to share it with me, please let me know, email me, or leave a comment below.

Posted in LMFDB, Mathematics, sage, sagemath
Tagged complex_plot, modular forms, plots, sage, sagemath, visualization

Today, I’ll be at Bowdoin College giving a talk on visualizing modular forms. This is a talk about the actual process and choices involved in illustrating a modular form; it’s not about what little lies one might hold in their head in order to form some mental image of a modular form.^{1}

This is a talk heavily inspired by the ICERM semester program on Illustrating Mathematics (currently wrapping up). In particular, I draw on^{2} conversations with Frank Farris (about using color to highlight desired features), Elias Wegert (about using logarithmically scaling contours), Ed Harriss (about the choice of colorscheme), and Brendan Hassett (about overall design choices).

There are very many pictures in the talk!

Here are the slides for the talk.

I wrote a few different complex-plotting routines for this project. At their core, they are based on sage’s complex_plot. There are two major variants that I use.

The first (currently called "ccomplex_plot". Not a good name) overwrites how sage handles lightness in complex_plot in order to produce "contours" at spots where the magnitude is a power of two. These contours are actually a sudden jump in brightness.
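For anyone curious what that looks like in practice, here is a minimal sketch of the brightness-jump idea (my own illustration, not the `ccomplex_plot` code itself; the constants are arbitrary):

```
import numpy as np

def contour_lightness(magnitude, base=0.55, span=0.35):
    """Lightness that ramps up between consecutive powers of two and then
    drops sharply, so the drops trace the level sets |f(z)| = 2^n."""
    with np.errstate(divide="ignore", invalid="ignore"):
        frac = np.log2(magnitude) % 1.0  # fractional part resets at each power of two
    return base + span * frac            # the reset is the sudden jump in brightness
```

Multiplying the RGB values of an ordinary phase plot by a lightness like this produces bands whose sharp edges are the contours.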

The second (currently called “raw_complex_plot”, also not a good name) is even less formal. It vectorizes the computation and produces an object containing the magnitude and argument information for each pixel to be drawn. It then uses numpy and matplotlib to convert these magnitudes and phases into RGB colors according to a matplotlib-compatible colormap.
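As a rough sketch of what that second variant does (my own reconstruction, not the `raw_complex_plot` code; the colormap choice and the test function are arbitrary):

```
import numpy as np
import matplotlib.pyplot as plt

def colorize(magnitude, argument, cmap_name="twilight"):
    """Convert per-pixel magnitude/argument arrays into an RGB image."""
    cmap = plt.get_cmap(cmap_name)                          # any matplotlib-compatible colormap
    rgb = cmap((argument + np.pi) / (2 * np.pi))[..., :3]   # argument -> color, drop alpha
    with np.errstate(divide="ignore", invalid="ignore"):
        lightness = 0.55 + 0.35 * (np.log2(magnitude) % 1.0)  # contours at powers of two
    return np.clip(rgb * lightness[..., None], 0, 1)

# Example with an arbitrary rational function (not a modular form).
x = np.linspace(-2, 2, 800)
z = x[None, :] + 1j * x[:, None]
w = (z**2 - 1) / (z**2 + z + 1)
plt.imshow(colorize(np.abs(w), np.angle(w)), origin="lower", extent=(-2, 2, -2, 2))
plt.show()
```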

I am happy to send either of these pieces of code to anyone who wants to see them, but they are very much written for my own use at the moment. I intend to improve them for general use later, after I’ve experimented further.

In addition, I generated all the images for this talk in a single sagemath jupyter notebook (with the two .spyx cython dependencies I allude to above). This is also available here. (Note that using a service like nbviewer or nbconvert to view or convert it to html might be a reasonable idea).

As a final note, I’ll add that I mistyped several times in the preparation of the images for this talk. Included below are a few of the interesting-looking mistakes. The first two resulted from incorrectly applied conformal mappings, while the third came from incorrectly applied color correction.

Posted in Expository, Math.NT, Mathematics, sage, sagemath
Tagged matplotlib, modular form, sage, visualization

Inspired by the images and ideas of Elias Wegert, I thought it might be interesting to attempt to implement a version of his colorizing technique for complex functions in sage. The purpose is ultimately to revisit how one plots modular forms in the LMFDB (see lmfdb.org and click around to see various plots — some are good, others are less good).

The challenge of plotting a function from $\mathbb{C} \longrightarrow \mathbb{C}$ is that the graph is naturally 4-dimensional, and we are very bad at visualizing 4d things. In fact, we want to use only 2d to visualize it.

A complex number $z = re^{i \theta}$ is determined by the magnitude ($r$) and the argument ($\theta$). Thus one typical approach to represent the value taken by a function $f$ at a point $z$ is to represent the magnitude of $f(z)$ in terms of the brightness, and to represent the argument in terms of color.

For example, the typical complex space would then look like the following.
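To make this concrete, here is a tiny illustration of that recipe for the identity map $f(z) = z$ (my own sketch, not the sage implementation discussed in this post; the brightness compression $|z|/(1+|z|)$ is just one arbitrary choice):

```
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import hsv_to_rgb

# Color the plane by f(z) = z: hue encodes arg(z), brightness encodes |z|.
x = np.linspace(-2, 2, 600)
z = x[None, :] + 1j * x[:, None]

hue = (np.angle(z) + np.pi) / (2 * np.pi)   # argument -> position on the color wheel
value = np.abs(z) / (1 + np.abs(z))         # magnitude -> brightness in [0, 1), dark at 0
plt.imshow(hsv_to_rgb(np.stack([hue, np.ones_like(hue), value], axis=-1)),
           origin="lower", extent=(-2, 2, -2, 2))
plt.show()
```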

$\DeclareMathOperator{\SL}{SL}$ $\DeclareMathOperator{\MT}{MT}$ After the positive feedback from the Maine-Quebec Number Theory conference, I have taken some time to write (and slightly strengthen) these results.

We study the general theory of Dirichlet series $D(s) = \sum_{n \geq 1} a(n) n^{-s}$ and the associated summatory function of the coefficients, $A(x) = \sum_{n \leq x}' a(n)$ (where the prime over the summation means the last term is to be multiplied by $1/2$ if $x$ is an integer). For convenience, we will suppose that the coefficients $a(n)$ are real, that not all $a(n)$ are zero, that each Dirichlet series converges in some half-plane, and that each Dirichlet series has meromorphic continuation to $\mathbb{C}$. Perron's formula (or more generally, the forward and inverse Mellin transforms) shows that $D(s)$ and $A(x)$ are duals and satisfy \begin{equation}\label{eq:basic_duality} \frac{D(s)}{s} = \int_1^\infty \frac{A(x)}{x^{s+1}} dx, \quad A(x) = \frac{1}{2 \pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} \frac{D(s)}{s} x^s ds \end{equation} for an appropriate choice of $\sigma$.

Many results in analytic number theory take the form of showing that $A(x) = \MT(x) + E(x)$ for a “Main Term” $\MT(x)$ and an “Error Term” $E(x)$. Roughly speaking, the terms in the main term $\MT(x)$ correspond to poles from $D(s)$, while $E(x)$ is hard to understand. Upper bounds for the error term give bounds for how much $A(x)$ can deviate from the expected size, and thus describe the regularity in the distribution of the coefficients ${a(n)}$. In this article, we investigate lower bounds for the error term, corresponding to *irregularity in the distribution* of the coefficients.

To get the best understanding of the error terms, it is often necessary to work with smoothed sums $A_v(x) = \sum_{n \geq 1} a(n) v(n/x)$ for a weight function $v(\cdot)$. In this article, we consider *nice* weight functions, i.e. weight functions with good behavior and whose Mellin transforms have good behavior. For almost all applications, it suffices to consider weight functions $v(x)$ that are piecewise smooth on the positive real numbers, and which take values halfway between jump discontinuities.

For a weight function $v(\cdot)$, denote its Mellin transform by \begin{equation} V(s) = \int_0^\infty v(x)x^{s} \frac{dx}{x}. \end{equation} Then we can study the more general dual family \begin{equation}\label{eq:general_duality} D(s) V(s) = \int_1^\infty \frac{A_v(x)}{x^{s+1}} dx, \quad A_v(x) = \frac{1}{2 \pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} D(s) V(s) x^s ds. \end{equation}
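For concreteness (an illustration of my own, not a line from the note): the exponential weight $v(x) = e^{-x}$ has Mellin transform $V(s) = \int_0^\infty e^{-x} x^{s} \frac{dx}{x} = \Gamma(s)$, so that \begin{equation} A_v(x) = \sum_{n \geq 1} a(n) e^{-n/x} = \frac{1}{2 \pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} D(s) \Gamma(s) x^s ds, \end{equation} and the rapid decay of $\Gamma(s)$ in vertical strips is one reason this particular weight appears so often (including in the explicit asymptotic below).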

We prove two results governing the irregularity of distribution of weighted sums. Firstly, we prove that a non-real pole of $D(s)V(s)$ guarantees an oscillatory error term for $A_v(x)$.

Suppose $D(s)V(s)$ has a pole at $s = \sigma_0 + it_0$ with $t_0 \neq 0$ of order $r$. Let $\MT(x)$ be the sum of the residues of $D(s)V(s)X^s$ at all real poles $s = \sigma$ with $\sigma \geq \sigma_0$. Then \begin{equation} \sum_{n \geq 1} a(n) v(\tfrac{n}{x}) - \MT(x) = \Omega_\pm\big( x^{\sigma_0} \log^{r-1} x\big). \end{equation}

Here and below, we use the notation $f(x) = \Omega_+ g(x)$ to mean that there is a constant $k > 0$ such that $\limsup f(x)/\lvert g(x) \rvert > k$ and $f(x) = \Omega_- g(x)$ to mean that $\liminf f(x)/\lvert g(x) \rvert < -k$. When both are true, we write $f(x) = \Omega_\pm g(x)$. This means that $f(x)$ is at least as positive as $\lvert g(x) \rvert$ and at least as negative as $-\lvert g(x) \rvert$ infinitely often.

Suppose $D(s)V(s)$ has at least one non-real pole, and that the supremum of the real parts of the non-real poles of $D(s)V(s)$ is $\sigma_0$. Let $\MT(x)$ be the sum of the residues of $D(s)V(s)X^s$ at all real poles $s = \sigma$ with $\sigma \geq \sigma_0$. Then for any $\epsilon > 0$, \begin{equation} \sum_{n \geq 1} a(n) v(\tfrac{n}{x}) - \MT(x) = \Omega_\pm( x^{\sigma_0 - \epsilon} ). \end{equation}

The idea at the core of these theorems is old, and was first noticed during the investigation of the error term in the prime number theorem. To prove them, we generalize proofs given in Chapter 5 of Ingham's Distribution of Prime Numbers (originally published in 1932, but recently republished). There, Ingham proves that $\psi(x) - x = \Omega_\pm(x^{\Theta - \epsilon})$ and $\psi(x) - x = \Omega_\pm(x^{1/2})$, where $\psi(x) = \sum_{p^n \leq x} \log p$ is Chebyshev's second function and $\Theta \geq \frac{1}{2}$ is the supremum of the real parts of the non-trivial zeros of $\zeta(s)$. (Peter Humphries let me know that chapter 15 of Montgomery and Vaughan's text also has these. This text might be more readily available and perhaps in more modern notation. In fact, I have a copy — but I suppose I either never got to chapter 15 or didn't have it nicely digested when I needed it).

Infinite lines of poorly understood poles appear regularly while studying shifted convolution series of the shape \begin{equation} D(s) = \sum_{n \geq 1} \frac{a(n) a(n \pm h)}{n^s} \end{equation} for a fixed $h$. When $a(n)$ denotes the (non-normalized) coefficients of a weight $k$ cuspidal Hecke eigenform on a congruence subgroup of $\SL(2, \mathbb{Z})$, for instance, meromorphic continuation can be gotten for the shifted convolution series $D(s)$ through spectral expansion in terms of Maass forms and Eisenstein series, and the Maass forms contribute infinite lines of poles.

Explicit asymptotics take the form \begin{equation} \sum_{n \geq 1} a(n)a(n-h) e^{-n/X} = \sum_j C_j X^{\frac{1}{2} + \sigma_j + it_j} \log^m X \end{equation} where neither the residues nor the imaginary parts $it_j$ are well-understood. Might it be possible for these infinitely many rapidly oscillating terms to experience massive cancellation for all $X$? The theorems above prove that this is not possible.

In this case, applying Theorem 2 with the Perron-weight \begin{equation} v(x) = \begin{cases} 1 & x < 1 \\ \frac{1}{2} & x = 1 \\ 0 & x > 1 \end{cases} \end{equation} shows that \begin{equation} \sideset{}{'}\sum_{n \leq X} \frac{a(n)a(n-h)}{n^{k-1}} = \Omega_\pm(\sqrt X). \end{equation} Similarly, Theorem 1 shows that \begin{equation} \sideset{}{'}\sum_{n \leq X} \frac{a(n)a(n-h)}{n^{k-1}} = \Omega_\pm(X^{\frac{1}{2} + \Theta - \epsilon}), \end{equation} where $\Theta < 7/64$ is the supremum of the deviations to Selberg's Eigenvalue Conjecture (sometimes called the non-arithmetic Ramanujan Conjecture).
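(For orientation, a standard computation rather than anything from the paper: this Perron weight has Mellin transform $V(s) = \int_0^1 x^{s-1} dx = 1/s$ for $\mathrm{Re}(s) > 0$, so the general duality \eqref{eq:general_duality} specializes to \eqref{eq:basic_duality}, and the relevant poles of $D(s)V(s)$ are just those of $D(s)/s$.)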

More generally, these shifted convolution series appear when studying the sizes of sums of coefficients of modular forms. A few years ago, Hulse, Kuan, Walker, and I began an investigation of the Dirichlet series whose coefficients are $\lvert A(n) \rvert^2$ (where $A(n)$ is the sum of the first $n$ coefficients of a modular form), which we showed has meromorphic continuation to $\mathbb{C}$. The behavior of the infinite lines of poles in the discrete spectrum played an important role in the analysis, but we did not then understand how they affected the resulting asymptotics. I plan on revisiting that work, and more, with these new results in mind.

The proofs of these results will soon appear on the arXiv.

Posted in Math.NT, Mathematics
Tagged dirichlet integral, dirichlet series, omega, spectral poles

Today I will be giving a talk at the Maine-Quebec Number Theory conference. Each year that I attend this conference, I marvel at how friendly and inviting an environment it is — I highly recommend checking the conference out (and perhaps modelling other conferences after it).

The theme of my talk is spectral poles and their contribution towards asymptotics (especially of error terms). I describe a few problems in which spectral poles appear in asymptotics. Unlike the nice simple cases where a single pole (or possibly a few poles) appear, in these cases infinite lines of poles appear.

For a bit over a year, I have encountered these and not known what to make of them. Could you have the pathological case that residues of these poles generically cancel? Could they combine to be larger than expected? How do we make sense of them?

The resolution came only very recently.^{1}

I will later write a dedicated note to this new idea (involving Dirichlet integrals and Landau’s theorem in this context), but for now — here are the slides for my talk.

Posted in Expository, Math.NT, Mathematics
Tagged dirichlet integral, dirichlet series, error term, gauss circle problem

insidious (adjective)

1. a. Having a gradual and cumulative effect

   b. of a disease: developing so gradually as to be well established before becoming apparent

2. a. awaiting a chance to entrap

   b. harmful but enticing

— Merriam-Webster Dictionary

In early topics in mathematics, one can often approach a topic from a combination of intuition and first principles in order to deduce the desired results. In later topics, it becomes necessary to repeatedly sharpen intuition while taking advantage of the insights of the many mathematicians who came before — one sees much further by standing on the shoulders of giants. Somewhere in the middle, it becomes necessary to accept the idea that there are topics and ideas that are not at all obvious. They might appear to have been plucked out of thin air. And this is a conceptual boundary.

In my experience, calculus is often the class where students primarily confront the idea that it is necessary to take advantage of the good ideas of the past. It sneaks up. The main ideas of calculus are intuitive — local rates of change can be approximated by slopes of secant lines and areas under curves can be approximated by sums of areas of boxes. That these are deeply connected is surprising.

To many students, Taylor’s Theorem is one of the first examples of a commonly-used result whose proof has some aspect which appears to have been plucked out of thin air.^{1} Learning Taylor’s Theorem in high school was one of the things that inspired me to begin to revisit calculus with an eye towards *why* each result was true.

I also began to try to prove the fundamental theorems of single and multivariable calculus with as little machinery as possible. High school me thought that topology was overcomplicated and unnecessary for something so intuitive as calculus.^{2}

This train of thought led to my previous note, on another proof of Taylor’s Theorem. That note is a simplified version of one of the first proofs I devised on my own.

Much less obviously, this train of thought also led to the paper on the mean value theorem written with Miles. Originally I had thought that “nice” functions should clearly have continuous choices for mean value abscissae, and I thought that this could be used to provide alternate proofs for some fundamental calculus theorems. It turns out that there are very nice functions that don’t have continuous choices for mean value abscissae, *and* that actually using that result to prove classical calculus results is often more technical than the typical proofs.

The flow of ideas is turbulent, highly nonlinear.

I used to think that developing extra rigor early on in my mathematical education was the right way to get to deeper ideas more quickly. There is a kernel of truth to this, as transitioning from pre-rigorous mathematics to rigorous mathematics is very important. But it is also necessary to transition to post-rigorous mathematics (and more generally, to choose one’s battles) in order to organize and communicate one’s thoughts.

In hindsight, I think now that I was focused on the wrong aspect. As a high school student, I had hoped to discover the obvious, clear, intuitive proofs of every result. Of course it is great to find these proofs when they exist, but it would have been better to grasp earlier that sometimes these proofs don’t exist. And rarely does actual research proceed so cleanly — it’s messy and uncertain and full of backtracking and random exploration.

Posted in Expository, Math.CA, Mathematics

In this note, we produce a proof of Taylor’s Theorem. As in many proofs of Taylor’s Theorem, we begin with a curious start and then follow our noses forward.

Is this a new proof? I think so. But I wouldn’t bet a lot of money on it. It’s certainly new to me.

Is this a groundbreaking proof? No, not at all. But it’s cute, and I like it.^{1}

We begin with the following simple observation. Suppose that $f$ is two times continuously differentiable. Then for any $t \neq 0$, we see that \begin{equation} f'(t) - f'(0) = \frac{f'(t) - f'(0)}{t} t. \end{equation} Integrating each side from $0$ to $x$, we find that \begin{equation} f(x) - f(0) - f'(0) x = \int_0^x \frac{f'(t) - f'(0)}{t} t \, dt. \end{equation} To interpret the integral on the right in a different way, we will use the mean value theorem for integrals.

**Mean Value Theorem for Integrals.** Suppose that $g$ and $h$ are continuous functions, and that $h$ doesn't change sign in $[0, x]$. Then there is a $c \in [0, x]$ such that \begin{equation} \int_0^x g(t) h(t) dt = g(c) \int_0^x h(t) dt. \end{equation}

Suppose without loss of generality that $h(t)$ is nonnegative. Since $g$ is continuous on $[0, x]$, it attains its minimum $m$ and maximum $M$ on this interval. Thus \begin{equation} m \int_0^x h(t) dt \leq \int_0^x g(t)h(t)dt \leq M \int_0^x h(t) dt. \end{equation} Let $I = \int_0^x h(t) dt$. If $I = 0$ (or equivalently, if $h(t) \equiv 0$), then the theorem is trivially true, so suppose instead that $I \neq 0$. Then \begin{equation} m \leq \frac{1}{I} \int_0^x g(t) h(t) dt \leq M. \end{equation} By the intermediate value theorem, $g(t)$ attains every value between $m$ and $M$, and thus there exists some $c$ such that \begin{equation} g(c) = \frac{1}{I} \int_0^x g(t) h(t) dt. \end{equation} Rearranging proves the theorem.

For this application, let $g(t) = (f'(t) - f'(0))/t$ for $t \neq 0$, and $g(0) = f''(0)$. The continuity of $g$ at $0$ is exactly the condition that $f''(0)$ exists. We also let $h(t) = t$.

For $x > 0$, it follows from the mean value theorem for integrals that there exists a $c \in [0, x]$ such that \begin{equation} \int_0^x \frac{f'(t) - f'(0)}{t} t dt = \frac{f'(c) - f'(0)}{c} \int_0^x t dt = \frac{f'(c) - f'(0)}{c} \frac{x^2}{2}. \end{equation} (Very similar reasoning applies for $x < 0$). Finally, by the mean value theorem (applied to $f'$), there exists a point $\xi \in (0, c)$ such that \begin{equation} f''(\xi) = \frac{f'(c) - f'(0)}{c}. \end{equation} Putting this together, we have proved that there is a $\xi \in (0, x)$ such that \begin{equation} f(x) - f(0) - f'(0) x = f''(\xi) \frac{x^2}{2}, \end{equation} which is one version of Taylor's Theorem with a linear approximating polynomial.

This approach generalizes. Suppose $f$ is a $(k+1)$ times continuously differentiable function, and begin with the trivial observation that \begin{equation} f^{(k)}(t) - f^{(k)}(0) = \frac{f^{(k)}(t) - f^{(k)}(0)}{t} t. \end{equation} Iteratively integrate $k$ times: first from $0$ to $t_1$, then from $0$ to $t_2$, and so on, with the $k$th interval being from $0$ to $t_k = x$.

Then the left hand side becomes \begin{equation} f(x) - \sum_{n = 0}^k f^{(n)}(0)\frac{x^n}{n!}, \end{equation} the difference between $f$ and its degree $k$ Taylor polynomial. The right hand side is

\begin{equation}\label{eq:only}\underbrace{\int_0^{t_k = x} \cdots \int_0^{t_1}}_{k \text{ times}} \frac{f^{(k)}(t) - f^{(k)}(0)}{t} t \, dt \, dt_1 \cdots dt_{k-1}.\end{equation}

To handle this, we note the following variant of the mean value theorem for integrals.

**Mean value theorem for iterated integrals.** Suppose that $g$ and $h$ are continuous functions, and that $h$ doesn't change sign in $[0, x]$. Then there is a $c \in [0, x]$ such that \begin{equation} \underbrace{\int_0^{t_k=x} \cdots \int_0^{t_1}}_{k \; \text{times}} g(t) h(t) \, dt \, dt_1 \cdots dt_{k-1} = g(c) \underbrace{\int_0^{t_k=x} \cdots \int_0^{t_1}}_{k \; \text{times}} h(t) \, dt \, dt_1 \cdots dt_{k-1}. \end{equation}

In fact, this can be proved in almost exactly the same way as in the single-integral version, so we do not repeat the proof.

With this theorem, there is a $c \in [0, x]$ such that \eqref{eq:only} can be written as \begin{equation} \frac{f^{(k)}(c) - f^{(k)}(0)}{c} \underbrace{\int_0^{t_k = x} \cdots \int_0^{t_1}}_{k \; \text{times}} t \, dt \, dt_1 \cdots dt_{k-1}. \end{equation} By the mean value theorem, the factor in front of the integrals can be written as $f^{(k+1)}(\xi)$ for some $\xi \in (0, x)$. The integrals can be directly evaluated to be $x^{k+1}/(k+1)!$.

Thus overall, we find that \begin{equation} f(x) = \sum_{n = 0}^k f^{(n)}(0) \frac{x^n}{n!} + f^{(k+1)}(\xi) \frac{x^{k+1}}{(k+1)!} \end{equation} for some $\xi \in (0, x)$. Thus we have proved Taylor's Theorem (with Lagrange's error bound).
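As a quick sanity check (my own addition, not part of the original argument), one can solve numerically for the $\xi$ promised by this identity in a concrete case, say $f = \exp$, $k = 2$, $x = 1$, and confirm that it lands in $(0, x)$:

```
import math

# f = exp, so f^(n)(0) = 1 for all n and f^(k+1)(xi) = exp(xi).
k, x = 2, 1.0
taylor_part = sum(x**n / math.factorial(n) for n in range(k + 1))
remainder = math.exp(x) - taylor_part
# Solve exp(xi) * x^(k+1) / (k+1)! = remainder for xi.
xi = math.log(remainder * math.factorial(k + 1) / x**(k + 1))
print(xi, 0 < xi < x)  # approximately 0.2698, True
```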

In my previous note, I described some of the main ideas behind the paper “When are there continuous choices for the mean value abscissa?” that I wrote joint with Miles Wheeler. In this note, I discuss the process behind generating the functions and figures in our paper.

Our figures came in two steps: we first need to choose which functions to plot; then we need to figure out how to graphically solve their general mean value abscissa problem.

Afterwards, we can decide how to plot these functions *well*.

The first goal is to find the right functions to plot. From the discussion in our paper, this amounts to specifying certain local conditions of the function. And for a first pass, we only used these prescribed local conditions.

The idea is this: to study solutions to the mean value problem, we look at the zeroes of the function $$ F(b, c) = \frac{f(b) - f(a)}{b - a} - f'(c). $$ When $F(b, c) = 0$, we see that $c$ is a mean value abscissa for $f$ on the interval $(a, b)$.

By the implicit function theorem, we can solve for $c$ as a function of $b$ around a given solution $(b_0, c_0)$ if $F_c(b_0, c_0) \neq 0$. For this particular function, $F_c(b_0, c_0) = -f''(c_0)$.

More generally, it turns out that the order of vanishing of $f'$ at $b_0$ and $c_0$ governs the local behaviour of solutions in a neighborhood of $(b_0, c_0)$.

To make figures, we thus need to make functions with prescribed orders of vanishing of $f'$ at points $b_0$ and $c_0$, where $c_0$ is itself a mean value abscissa for the interval $(a_0, b_0)$.

Without loss of generality, it suffices to consider the case when $f(a_0) = f(b_0) = 0$, as otherwise we can study the function $$ g(x) = f(x) - \left( \frac{f(b_0) - f(a_0)}{b_0 - a_0}(x - a_0) + f(a_0) \right), $$ which has $g(a_0) = g(b_0) = 0$, and those triples $(a, b, c)$ which solve this for $f$ also solve this for $g$.

And for consistency, we made the arbitrary decisions to have $a_0 = 0$, $b_0 = 3$, and $c_0 = 1$. This decision simplified many of the plotting decisions, as the important points were always $0$, $1$, and $3$.

Thus the first task is to be able to generate functions $f$ such that:

- $f(0) = 0$,
- $f(3) = 0$,
- $f'(1) = 0$ (so that $1$ is a mean value abscissa),
- $f'(x)$ has prescribed order of vanishing at $1$, and
- $f'(x)$ has prescribed order of vanishing at $3$.

These conditions can all be met by an appropriate interpolating polynomial. As we are setting conditions on both $f$ and its derivatives at multiple points, this amounts to the fundamental problem in *Hermite interpolation*. Alternatively, this amounts to using Taylor’s theorem at multiple points and then using the Chinese Remainder Theorem over $\mathbb{Z}[x]$ to combine these polynomials together.

There are clever ways of solving this, but this task is so small that it doesn’t require cleverness. In fact, this is one of the laziest solutions we could think of. We know that given $n$ Hermite conditions, there is a unique polynomial of degree $n – 1$ that interpolates these conditions. Thus we

- determine the degree of the polynomial,
- create a degree $n-1$ polynomial with variable coefficients in sympy,
- have sympy symbolically compute the relations the coefficients must satisfy,
- ask sympy to solve this symbolic system of equations.

In code, this looks like

```
import sympy
from sympy.abc import X, B, C, D  # Establish our variable names

def interpolate(conds):
    """
    Finds the polynomial of minimal degree that solves the given Hermite conditions.

    conds is a list of the form
        [(x1, r1, v1), (x2, r2, v2), ...]
    where the polynomial p is to satisfy p^(r_1) (x_1) = v_1, and so on.
    """
    # the degree will be one less than the number of conditions
    n = len(conds)
    # generate a symbol for each coefficient
    A = [sympy.Symbol("a[%d]" % i) for i in range(n)]
    # generate the desired polynomial symbolically
    P = sum([A[i] * X**i for i in range(n)])
    # generate the equations the polynomial must satisfy
    #
    # for each (x, r, v), sympy evaluates the rth derivative of P wrt X,
    # substitutes x in for X, and requires that this equals v.
    EQNS = [sympy.diff(P, X, r).subs(X, x) - v for x, r, v in conds]
    # solve this system for the coefficients A[i]
    SOLN = sympy.solve(EQNS, A)
    return P.subs(SOLN)
```

We note that we use the convention that a sympy symbol for something is capitalized. For example, we think of the polynomial as being represented by $$ p(x) = a(0) + a(1)x + a(2)x^2 + \cdots + a(n)x^n. $$ In sympy variables, we think of this as `P = A[0] + A[1] * X + A[2] * X**2 + ... + A[n] * X**n`.

With this code, we can ask for the unique degree 1 polynomial which is $1$ at $1$, and whose first derivative is $2$ at $1$.

```
> interpolate([(1, 0, 1), (1, 1, 2)])
2*X - 1
```

Indeed, $2x - 1$ is this polynomial.

We have now produced a minimal Hermite solver. But there is a major downside: the unique polynomial exhibiting the necessary behaviours we required is essentially never a good didactic example. We don’t just want plots — we want beautiful, simple plots.

We add two conditions for additional control, and hopefully for additional simplicity of the resulting plot.

Firstly, we added the additional constraint that $f(1) = 1$. This is a minor condition, but it prescribes a small value, so now at least all three points of interest will fit within a $[0, 3] \times [0, 3]$ box.

Secondly, we also allow the choice of the value of the first nonvanishing derivatives at $1$ and $3$. In reality, we treat these as parameters to change the shape of the resulting graph. Roughly speaking, if the order of vanishing of $f(x) - f(1)$ at $1$ is $k$, then near $1$ the approximation $f(x) \approx f(1) + f^{(k)}(1)(x - 1)^k/k!$ holds. Morally, the larger the value of the derivative, the more the graph will resemble $(x - 1)^k$ near that point.

In code, we implemented this by making functions that will add the necessary Hermite conditions to our input to `interpolate`.

```
# We fix the values of a0, b0, c0.
a0 = 0
b0 = 3
c0 = 1

# We require p(a0) = 0, p(b0) = 0, p(c0) = 1, p'(c0) = 0.
BASIC_CONDS = [(a0, 0, 0), (b0, 0, 0), (c0, 0, 1), (c0, 1, 0)]

def c_degen(n, residue):
    """
    Give Hermite conditions for order of vanishing at c0 equal to `n`, with
    first nonzero residue `residue`.

    NOTE: the order `n` is in terms of f', not of f. That is, this is the amount
    of additional degeneracy to add. This may be a source of off-by-one errors.
    """
    return [(c0, 1 + i, 0) for i in range(1, n + 1)] + [(c0, n + 2, residue)]

def b_degen(n, residue):
    """
    Give Hermite conditions for order of vanishing at b0 equal to `n`, with
    first nonzero residue `residue`.
    """
    return [(b0, i, 0) for i in range(1, n + 1)] + [(b0, n + 1, residue)]

def poly_with_degens(nc=0, nb=0, residue_c=3, residue_b=3):
    """
    Give unique polynomial with given degeneracies for this MVT problem.

    `nc` is the order of vanishing of f' at c0, with first nonzero residue `residue_c`.
    `nb` is the order of vanishing of f at b0, with first nonzero residue `residue_b`.
    """
    conds = BASIC_CONDS + c_degen(nc, residue_c) + b_degen(nb, residue_b)
    return interpolate(conds)
```

Then apparently the unique degree $5$ polynomial $f$ with $f(0) = f(3) = f'(1) = 0$, $f(1) = 1$, and $f''(1) = f'(3) = 3$ is given by

```
> poly_with_degens()
11*X**5/16 - 21*X**4/4 + 113*X**3/8 - 65*X**2/4 + 123*X/16
```

In principle, this is a great solution. And if you turn the knobs enough, you can get a really nice picture. But the problem with this system (and with many polynomial interpolation problems) is that when you add conditions, you can introduce many jagged peaks and sudden changes. These can behave somewhat unpredictably and chaotically — small changes in Hermite conditions can lead to drastic changes in resulting polynomial shape.

What we really want is for the interpolator to give a polynomial that doesn’t have sudden changes.

The problem: the polynomial can have really rapid changes that makes the plots look bad.

The solution: minimize the polynomial’s change.

That is, if $f$ is our polynomial, then its rate of change at $x$ is $f'(x)$. Our idea is to "minimize" the average size of the derivative $f'$ — this should help keep the function in frame. There are many ways to do this, but we want to choose one that fits into our scheme (so that it requires as little additional work as possible) but which works well.

We decide that we want to focus our graphs on the interval $(0, 4)$. Then we can measure the average size of the derivative $f'$ by (the square of) its L2 norm on $(0, 4)$: $$ L2(f) = \int_0^4 (f'(x))^2 dx. $$

We add an additional Hermite condition of the form `(pt, order, VAL)` and think of `VAL` as an unknown symbol. We arbitrarily decided to start with $pt = 2$ (so that now the behavior at each of the points $0, 1, 2, 3$ is being controlled in some way) and $order = 1$. The point itself doesn't matter very much, since we're going to minimize over the family of polynomials that interpolate the other Hermite conditions with one degree of freedom.

In other words, we are adding in the condition that $f'(2) = VAL$ for an unknown `VAL`.

We will have sympy compute the interpolating polynomial through its normal set of (explicit) conditions as well as the symbolic condition `(2, 1, VAL)`. Then $f = f(\mathrm{VAL}; x)$.

Then we have sympy compute the (symbolic) L2 norm of the derivative of this polynomial (still symbolic in `VAL`) over the interval $(0, 4)$, $$L2(\mathrm{VAL}) = \int_0^4 f'(\mathrm{VAL}; x)^2 dx.$$

Finally, to minimize the L2 norm, we have sympy compute the derivative of $L2(\mathrm{VAL})$ with respect to `VAL` and find the critical points, where the derivative is equal to $0$. We choose the first one to give our value of `VAL`.^{1}

In code, this looks like

```
def smoother_interpolate(conds, ctrl_point=2, order=1, interval=(0, 4)):
    """
    Find the polynomial of minimal degree that interpolates the Hermite
    conditions in `conds`, and whose behavior at `ctrl_point` minimizes the L2
    norm on `interval` of its derivative.
    """
    # Add the symbolic point to the conditions.
    # Recall that D is a sympy variable
    new_conds = conds + [(ctrl_point, order, D)]
    # Find the polynomial interpolating `new_conds`, symbolic in X *and* D
    P = interpolate(new_conds)
    # Compute L2 norm of the derivative on `interval`
    L2 = sympy.integrate(sympy.diff(P, X)**2, (X, *interval))
    # Take the first critical point of the L2 norm with respect to D
    SOLN = sympy.solve(sympy.diff(L2, D), D)[0]
    # Substitute the minimizing solution in for D and return
    return P.subs(D, SOLN)

def smoother_poly_with_degens(nc=0, nb=0, residue_c=3, residue_b=3):
    """
    Give unique polynomial with given degeneracies for this MVT problem whose
    derivative on (0, 4) has minimal L2 norm.

    `nc` is the order of vanishing of f' at c0, with first nonzero residue `residue_c`.
    `nb` is the order of vanishing of f at b0, with first nonzero residue `residue_b`.
    """
    conds = BASIC_CONDS + c_degen(nc, residue_c) + b_degen(nb, residue_b)
    return smoother_interpolate(conds)
```

Then apparently the degree $6$ polynomial $f$ with $f(0) = f(3) = f'(1) = 0$, $f(1) = 1$, and $f''(1) = f'(3) = 3$, and with minimal L2 derivative norm on $(0, 4)$, is given by

```
> smoother_poly_with_degens()
-9660585*X**6/33224848 + 27446837*X**5/8306212 - 232124001*X**4/16612424
+ 57105493*X**3/2076553 - 858703085*X**2/33224848 + 85590321*X/8306212
> sympy.N(smoother_poly_with_degens())
-0.290763858423069*X**6 + 3.30437472580762*X**5 - 13.9729157526921*X**4
+ 27.5001374874612*X**3 - 25.8452073279613*X**2 + 10.3043747258076*X
```

Is it much better? Let’s compute the L2 norms.

```
> interval = (0, 4)
> sympy.N(sympy.integrate(sympy.diff(poly_with_degens(), X)**2, (X, *interval)))
1865.15411706349
> sympy.N(sympy.integrate(sympy.diff(smoother_poly_with_degens(), X)**2, (X, *interval)))
41.1612799050325
```

That’s beautiful. And you know what’s better? Sympy did all the hard work.

For comparison, we can produce a basic plot using numpy and matplotlib.

```
import matplotlib.pyplot as plt
import numpy as np

def basic_plot(F, n=300):
    fig = plt.figure(figsize=(6, 2.5))
    ax = fig.add_subplot(1, 1, 1)
    b1d = np.linspace(-.5, 4.5, n)
    f = sympy.lambdify(X, F)(b1d)
    ax.plot(b1d, f, 'k')
    ax.set_aspect('equal')
    ax.grid(True)
    ax.set_xlim([-.5, 4.5])
    ax.set_ylim([-1, 5])
    ax.plot([0, c0, b0], [0, F.subs(X, c0), F.subs(X, b0)], 'ko')
    fig.savefig("basic_plot.pdf")
```

Then the plot of `poly_with_degens()` is given below.

The polynomial jumps upwards immediately and strongly for $x > 3$.

On the other hand, the plot of `smoother_poly_with_degens()` is given below.

This stays in frame between $0$ and $4$, as desired.

This was enough to generate the functions for our paper. Actually, the three functions (in a total of six plots) in figures 1, 2, and 5 in our paper were hand chosen and hand-crafted for didactic purposes: the first two functions are simply a cubic and a quadratic with certain points labelled. The last function was the non-analytic-but-smooth semi-pathological counterexample, and so cannot be created through polynomial interpolation.

But the four functions highlighting different degenerate conditions in figures 3 and 4 were each created using this L2-minimizing interpolation system.

In particular, the function in figure 3 is

`F3 = smoother_poly_with_degens(nc=1, residue_b=-3)`

which is one of the simplest L2 minimizing polynomials with the typical Hermite conditions, $f''(c_0) = 0$, and the opposite of the default sign of $f'(b_0)$.

The three functions in figure 4 are (from left to right)

```
F_bmin = smoother_poly_with_degens(nc=1, nb=1, residue_c=10, residue_b=10)
F_bzero = smoother_poly_with_degens(nc=1, nb=2, residue_c=-20, residue_b=20)
F_bmax = smoother_poly_with_degens(nc=1, nb=1, residue_c=20, residue_b=-10)
```

We chose much larger residues because the goal of the figure is to highlight how the local behavior at those points corresponds to the behavior of the mean value abscissae, and larger residues make those local behaviors more dominant.

Now that we can choose our functions, we want to figure out how to find all solutions of the mean value condition $$ F(b, c) = \frac{f(b) - f(a_0)}{b - a_0} - f'(c). $$ Here I write $a_0$ as it's fixed, while both $b$ and $c$ vary.

Our primary interest in these solutions is to facilitate graphical experimentation and exploration of the problem — we want these pictures to help build intuition and provide examples.

Although this may seem harder, it is actually a much simpler problem. The function $F(b, c)$ is continuous (and roughly as smooth as $f$ is).

Our general idea is a common approach for this sort of problem:

- Compute the values of $F(b, c)$ on a tight mesh (or grid) of points.
- Restrict attention to the domain where solutions are meaningful.
- Plot the *contour* of the $0$-level set.

Contours can be well-approximated from a tight mesh. In short, if there is a small positive number and a small negative number next to each other in the mesh of computed values, then necessarily $F(b, c) = 0$ between them. For a tight enough mesh, good plots can be made.
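Here is a toy version of that sign-change observation (my own illustration, separate from the plotting code below), in one dimension for simplicity:

```
import numpy as np

# F(t) = t**2 - 2 has a zero at sqrt(2); find it from sign changes on a mesh.
t = np.linspace(0, 3, 61)
values = t**2 - 2
changes = np.flatnonzero(np.sign(values[:-1]) != np.sign(values[1:]))
for i in changes:
    print(f"zero of F between t = {t[i]:.2f} and t = {t[i+1]:.2f}")
# matplotlib's contour() does the analogous bookkeeping on a 2D mesh.
```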

To solve this, we again have sympy create and compute the function for us. We use numpy to generate the mesh (and to vectorize the computations, although this isn’t particularly important in this application), and matplotlib to plot the resulting contour.

Before giving code, note that the symbol `F` in the sympy code below stands for what we have been mathematically referring to as $f$, and not $F$. This is a potential confusion from our sympy-capitalization convention. It is still necessary to have sympy compute $F$ from $f$.

In code, this looks like

```
import sympy
import numpy as np
import matplotlib.pyplot as plt

def abscissa_plot(F, n=300):
    # Compute the derivative of f
    DF = sympy.diff(F, X)
    # Define CAP_F --- "capital F"
    #
    # this is (f(b) - f(0))/(b - 0) - f'(c).
    CAP_F = (F.subs(X, B) - F.subs(X, 0)) / (B - 0) - DF.subs(X, C)
    # build the mesh
    b1d = np.linspace(-.5, 4.5, n)
    b2d, c2d = np.meshgrid(b1d, b1d)
    # compute CAP_F within the mesh
    cap_f_mesh = sympy.lambdify((B, C), CAP_F)(b2d, c2d)
    # restrict attention to below the diagonal --- we require c < b
    # (although the mask inequality looks reversed in this perspective)
    valid_cap_f_mesh = np.ma.array(cap_f_mesh, mask=c2d > b2d)
    # Set up plot basics
    fig = plt.figure(figsize=(6, 2.5))
    ax = fig.add_subplot(1, 1, 1)
    ax.set_aspect('equal')
    ax.grid(True)
    ax.set_xlim([-.5, 4.5])
    ax.set_ylim([-.5, 4.5])
    # plot the contour
    ax.contour(b2d, c2d, valid_cap_f_mesh, [0], colors='k')
    # plot a diagonal line representing the boundary
    ax.plot(b1d, b1d, 'k--')
    # plot the guaranteed point
    ax.plot(b0, c0, 'ko')
    fig.savefig("abscissa_plot.pdf")
```

Then plots of solutions to $F(b, c) = 0$ for our basic polynomials are given below: first for `poly_with_degens()`, and then for `smoother_poly_with_degens()`.

And for comparison, we can now create a (slightly worse looking) version of the plots in figure 3.

```
F3 = smoother_poly_with_degens(nc=1, residue_b=-3)
basic_plot(F3)
abscissa_plot(F3)
```

This produces the two plots

For comparison, a (slightly scaled) version of the actual figure appearing in the paper is

A copy of the code used in this note (and correspondingly the code used to generate the functions for the paper) is available on my github as an ipython notebook.

Posted in Expository, Math.CA, Mathematics, Programming, Python, sagemath
Tagged contour plot, implicit function theorem, matplotlib, mean value theorem, numpy, paper, plotting, scipy

Miles Wheeler and I have recently uploaded a paper to the arXiv called “When are there continuous choices for the mean value abscissa?”, which we have submitted to an expository journal. The underlying question is simple but nontrivial.

The mean value theorem of calculus states that, given a differentiable function $f$ on an interval $[a, b]$, then there exists a $c \in (a, b)$ such that

$$ \frac{f(b) - f(a)}{b - a} = f'(c).$$

We call $c$ the *mean value abscissa*.

Our question concerns potential behavior of this abscissa when we fix the left endpoint $a$ of the interval and vary $b$. For each $b$, there is at least one abscissa $c_b$ such that the mean value theorem holds with that abscissa. But generically there may be more than one choice of abscissa for each interval. When can we choose $c_b$ as a continuous function of $b$? That is, when can we write $c = c(b)$ such that

$$ \frac{f(b) - f(a)}{b - a} = f'(c(b))$$

for all $b$ in some interval?

We think of this as a continuous choice for the mean value abscissa.

This is a great question. It’s widely understandable — even to students with only one semester of calculus. Further it encourages a proper understanding of what a *function* is, as thinking of $c$ as potentially a function of $b$ is atypical and interesting.

But I also like this question because the answer is not as simple as you might think, and there are a few nice ideas that get to the answer.

Should you find yourself reading this without knowing the answer, I encourage you to consider it right now. Should continuous choices of abscissas exist? What if the function is really well-behaved? What if it’s smooth? Or analytic?

Let’s focus on the smooth question. Suppose that $f$ is smooth — that it is infinitely differentiable. These are a distinguished class of functions. But it turns out that being smooth is not sufficient: here is a counterexample.

In this figure, there are points $b$ arbitrarily near $b_0$ such that the secant line from $a_0$ to $b$ has positive slope, and points arbitrarily near $b_0$ such that the secant lines have negative slope. There are infinitely many mean value abscissae with $f'(c_0) = 0$, but all of them are either far from a point $c$ where $f'(c) > 0$ or far from a point $c$ where $f'(c) < 0$. And thus there is no continuous choice.

From a theorem oriented point of view, our main theorem is that if $f$ is analytic, then there is *always* a locally continuous choice. That is, for every interval $[a_0, b_0]$, there exists a mean value abscissa $c$ such that $c = c(b)$ for $b$ in some interval $B$ containing $b_0$.

But the purpose of this article isn't simply to prove this theorem. The purpose is to exposit how the ideas that are used to study this problem and to prove these results are fundamentally based only on a couple of central ideas covered in introductory single and multivariable calculus. All of this paper is completely accessible to a student having studied only single variable calculus (and who is willing to believe that partial derivatives exist and are a reasonable object).

We prove and use simple-but-nontrivial versions of the contraction mapping theorem, the implicit function theorem, and Morse's lemma. The implicit function theorem is enough to say that any abscissa $c_0$ such that $f''(c_0) \neq 0$ has a unique continuous extension. Thus immediately for "most" intervals on "most" reasonable functions, we answer in the affirmative. Morse's lemma allows us to say a bit more about the case when $f''(c_0) = 0$ but $f'''(c_0) \neq 0$. In this case there are either multiple continuous extensions or none. And a few small ingredients and the idea behind Morse's lemma, combined with the implicit function theorem again, are enough to prove the main result.

## Student projects

A calculus student looking for a project to dive into and sharpen their calculus skills could find ideas here to sink their teeth into. Beginning by understanding this paper is a great start.

A good motivating question would be to carry on one additional step, and to study explicitly the behavior of a function near a point where $f''(c_0) = f'''(c_0) = 0$, but $f^{(4)}(c_0) \neq 0$.

A slightly more open question that we lightly touch on (but leave largely implicit) is the inverse question: when can one find a mean value abscissa $c$ such that the right endpoint $b$ can be written as a continuous function $b(c)$ for some neighborhood $C$ containing the initial point $c_0$? Much of the analysis is the same, but figuring it out would require some attention.

A much deeper question is to consider the abscissa as a function of both the left endpoint $a$ and the right endpoint $b$. The guiding question here could be to decide when one can write the abscissa as a continuous function $c(a, b)$ in a neighborhood of $(a_0, b_0)$. I would be interested to see a graphical description of the possible shapes of these functions — I'm not quite sure what they might look like.

There is also a nice computational problem. In the paper, we include several plots of solution curves in $(b, c)$ space. But we did this with a meshed implicit function theorem solver. A computationally inclined student could devise an explicit way of constructing solutions. On the one hand, this is guaranteed to work, since one can apply contraction mappings explicitly to make the resulting function from the implicit function theorem explicit. But on the other hand, many (most?) applications of the implicit function theorem are in more complicated high dimensional spaces, whereas the situation in this paper is the smallest nontrivial example.

## Producing the graphs

We made 13 graphs in 5 figures for this article. These pictures were created using matplotlib. The data was created using numpy, scipy, and sympy from within the scipy/numpy python stack, and the actual creation was done interactively within a jupyter notebook. The actual notebook is available here (along with other relatively raw jupyter notebooks). The most complicated graph is this one.

This figure has graphs of three functions along the top. In each graph, the interval $[0, 3]$ is considered in the mean value theorem, and the point $c_0 = 1$ is a mean value abscissa. In each, we also have $f''(c_0) = 0$, and the point is that the behavior of $f''(b_0)$ has a large impact on the nature of the implicit functions. The three graphs along the bottom are in $(b, c)$ space and present all mean value abscissae for each $b$. This is not a function, but the local structure of the graphs is interesting and visually distinct.

The process of making these examples and making these figures is interesting in itself. We did not make these figures explicitly, but instead chose certain points and certain values of derivatives at those points, and used Hermite interpolation to find polynomials with those properties.^{1}

In the future I plan on writing a note on the creation of these figures.

Posted in Expository, Math.CA, Mathematics
Tagged Calculus, implicit function theorem, mean value theorem, paper, student project

The US House of Representatives has 435 voting members (and 6 non-voting members: one each from Washington DC, Puerto Rico, American Samoa, Guam, the Northern Mariana Islands, and the US Virgin Islands). Roughly speaking, the higher the population of a state is, the more representatives it should have.

But what does this really mean?

If we looked at the US Constitution to make this clear, we would find little help. The third clause of Article I, Section II of the Constitution says

Representatives and direct Taxes shall be apportioned among the several States which may be included within this Union, according to their respective Numbers … The number of Representatives shall not exceed one for every thirty thousand, but each state shall have at least one Representative.

This doesn’t give clarity.^{1} In fact, uncertainty surrounding proper apportionment of representatives led to the first presidential veto.

According to the 1790 Census, there were 3199415 free people and 694280 slaves in the United States.^{2}

When Congress sat to decide on apportionment in 1792, they initially computed the total (weighted) population of the United States to be 3199415 + (3/5)⋅694280 = 3615983. They noted that the Constitution says there should be no more than 1 representative for every 30000, so they divided the total population by 30000 and rounded down, getting 3615983/30000 ≈ 120.5.

Thus there were to be 120 representatives. If one takes each state and divides its population by 30000, one sees that the states should get the following numbers of representatives^{3}

```
State ideal rounded_down
Vermont 2.851 2
NewHampshire 4.727 4
Maine 3.218 3
Massachusetts 12.62 12
RhodeIsland 2.281 2
Connecticut 7.894 7
NewYork 11.05 11
NewJersey 5.985 5
Pennsylvania 14.42 14
Delaware 1.851 1
Maryland 9.283 9
Virginia 21.01 21
Kentucky 2.290 2
NorthCarolina 11.78 11
SouthCarolina 6.874 6
Georgia 2.361 2
```

But here is a problem: the total number of rounded down representatives is only 112. So there are 8 more representatives to give out. How did they decide which states should receive these additional representatives? They chose the 8 states with the largest fractional "ideal" parts:

- New Jersey (0.985)
- Connecticut (0.894)
- South Carolina (0.874)
- Vermont (0.851)
- Delaware (0.851)
- Massachusetts+Maine (0.838)
- North Carolina (0.78)
- New Hampshire (0.727)

(Maine was part of Massachusetts at the time, which is why I combine their fractional parts). Thus the original proposed apportionment gave each of these states one additional representative. Is this a reasonable conclusion?

Perhaps. But these 8 states each ended up having more than 1 representative for each 30000. Was this limit in the Constitution meant country-wide (so that 120 across the country is a fine number) or state-by-state (so that, for instance, Delaware, which had 59000 total population, should not be allowed to have more than 1 representative)?

There is the other problem that New Jersey, Connecticut, Vermont, New Hampshire, and Massachusetts were undoubtedly Northern states. Thus Southern representatives asked, *Is it not unfair that the fractional apportionment favours the North*?^{4}

Regardless of the exact reasoning, the Secretary of State Thomas Jefferson and Attorney General Edmund Randolph (both from Virginia) urged President Washington to veto the bill, and he did. This was the first use of the Presidential veto.

Afterwards, Congress got together and decided on starting with 33000 people per representative and ignoring fractional parts entirely. The exact method became known as the *Jefferson Method of Apportionment*, and was used in the US until 1830. The subtle part of the method involves deciding on the number 33000. In the US, the exact number of representatives sometimes changed from election to election. This number is closely related to the population-per-representative, but these were often chosen through political maneuvering as opposed to exact decision.
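To make the two schemes concrete, here is a small sketch (with made-up populations, not the 1792 data, and ignoring the rule that every state gets at least one seat):

```
from math import floor

def hamilton(populations, seats):
    """Largest-remainder (Hamilton) scheme: floor the ideal quotas, then give the
    leftover seats to the states with the largest fractional parts."""
    divisor = sum(populations.values()) / seats
    quotas = {s: p / divisor for s, p in populations.items()}
    alloc = {s: floor(q) for s, q in quotas.items()}
    leftover = seats - sum(alloc.values())
    for s in sorted(quotas, key=lambda s: quotas[s] - alloc[s], reverse=True)[:leftover]:
        alloc[s] += 1
    return alloc

def jefferson(populations, divisor):
    """Jefferson scheme: fix a divisor and ignore fractional parts entirely."""
    return {s: floor(p / divisor) for s, p in populations.items()}

pops = {"A": 727_000, "B": 96_000, "C": 96_000, "D": 81_000}  # made-up numbers
print(hamilton(pops, 10))        # {'A': 7, 'B': 1, 'C': 1, 'D': 1}
print(jefferson(pops, 90_000))   # {'A': 8, 'B': 1, 'C': 1, 'D': 0} -- the largest state gains
```

Jefferson's divisor method systematically favors the larger states, which is visible even in this toy example.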

As an aside, it’s interesting to note that this method of apportionment is widely used in the rest of the world, even though it was abandoned in the US.^{5} In fact, it is still used in Albania, Angola, Argentina, Armenia, Aruba, Austria, Belgium, Bolivia, Brazil, Bulgaria, Burundi, Cambodia, Cape Verde, Chile, Colombia, Croatia, the Czech Republic, Denmark, the Dominican Republic, East Timor, Ecuador, El Salvador, Estonia, Fiji, Finland, Guatemala, Hungary, Iceland, Israel, Japan, Kosovo, Luxembourg, Macedonia, Moldova, Monaco, Montenegro, Mozambique, Netherlands, Nicaragua, Northern Ireland, Paraguay, Peru, Poland, Portugal, Romania, San Marino, Scotland, Serbia, Slovenia, Spain, Switzerland, Turkey, Uruguay, Venezuela and Wales — as well as in many countries for election to the European Parliament.

At the core of different ideas for apportionment is fairness. How can we decide if an apportionment is fair?

We’ll consider this question in the context of the post-1911 United States — after the number of seats in the House of Representatives was established. This number was set at 433, but with the proviso that anticipated new states Arizona and New Mexico would each come with an additional seat.^{6}

So given that there are 435 seats to apportion, how might we decide if an apportionment is fair? Fundamentally, this should relate to the number of people each representative actually represents.

For example, in the 1792 apportionment, the single Delawaran representative was there to represent all 55000 of its population, while each of the two Rhode Island representatives corresponded to 34000 Rhode Islanders. Within the House of Representatives, it was as though the voice of each Delawaran only counted 61 percent as much as the voice of each Rhode Islander.^{7}

The number of people each representative actually represents is at the core of the notion of fairness — but even then, it's not obvious.

Suppose we enumerate the states, so that *S*_{i} refers to state *i*. We’ll also denote by *P*_{i} the population of state *i*, and we’ll let *R*_{i} denote the number of representatives allotted to state *i*.

In the ideal scenario, every representative would represent the exact same number of people. That is, we would have

$$\text{pop. per rep. in state } i = \frac{P_i}{R_i} = \frac{P_j}{R_j} = \text{pop. per rep. in state } j$$

for every pair of states *i* and *j*. But this won’t ever happen in practice.

Generally, we should expect $\frac{P_i}{R_i} \neq \frac{P_j}{R_j}$ for every pair of distinct states. If

$$ \frac{P_i}{R_i} > \frac{P_j}{R_j}, \tag{1} $$

then we can say that each representative in state *i* represents more people, and thus those people have a diluted vote.

There are lots of pairs of states. How do we actually measure these inequalities? This would make an excellent question in a statistics class (illustrating how one can answer the same question in different, equally reasonable ways) or even a civics class.

A few natural ideas emerge:

- We might try to minimize the differences of constituency size: $\left \lvert \frac{P_i}{R_i} - \frac{P_j}{R_j} \right \rvert$.
- We might try to minimize the differences in per capita representation: $\left \lvert \frac{R_i}{P_i} - \frac{R_j}{P_j} \right \rvert$.
- We might take overall size into account, and try to minimize both the relative constituency size and relative difference in per capita representation.

This last one needs a bit of explanation. Define the **relative difference** between two numbers *x* and *y* to be

$$ \frac{\lvert x - y \rvert}{\min(x, y)}. $$

Suppose that for a pair of states, we have that $(1)$ holds, i.e. that representatives in state *j* have smaller constituencies than in state *i* (and therefore people in state *j* have more powerful votes). Then the relative difference in constituency size is

$$ \frac{P_i/R_i - P_j/R_j}{P_j/R_j} = \frac{P_i/R_i}{P_j/R_j} - 1. $$

The relative difference in per capita representation is

$$ \frac{R_j/P_j - R_i/P_i}{R_i/P_i} = \frac{R_j/P_j}{R_i/P_i} - 1 = \frac{P_i/R_i}{P_j/R_j} - 1. $$

Thus these are the same! By accounting for differences in size by taking relative proportions, we see that minimizing relative difference in constituency size and minimizing relative difference in per capita representation are actually the same.

All three of these measures seem reasonable at first inspection. Unfortunately, they all give different apportionments (and all are different from Jefferson’s scheme — though to be fair, Jefferson’s scheme doesn’t seek to minimize inequality and there is no reason to think it should behave the same).

Each of these ideas leads to a different apportionment scheme, and in fact each has a name.

- Minimizing differences in constituency size is the *Dean* method.
- Minimizing differences in per capita representation is the *Webster* method.
- Minimizing relative differences between both constituency size and per capita representation is the *Hill* (or sometimes *Huntington-Hill*) method.

Further, each of these schemes has been used at some time in US history. Webster’s method was used immediately after the 1840 census, but for the 1850 census the original Alexander Hamilton scheme (the scheme vetoed by Washington in 1792) was used. In fact, the Apportionment Act of 1850 set the Hamilton method as the primary method, and this was nominally used until 1900.^{8} The Webster method was used again immediately after the 1910 census. Due to claims of incomplete and inaccurate census counts, no apportionment occurred based on the 1920 census.^{9}

In 1929 an automatic apportionment act was passed.^{10} In it, up to three different apportionment schemes would be provided to Congress after each census, based on a total of 435 seats:

- The apportionment that would come from whatever scheme was most recently used. (In 1930, this would be the Webster method).
- The apportionment that would come from the Webster method.
- The apportionment that would come from the newly introduced Hill method.

When reading congressional discussion from the time, it helps to know that Webster's method is sometimes called the *method of major fractions* and Hill's method is sometimes called the *method of equal proportions*. Further, in a letter written by Bliss, Brown, Eisenhart, and Pearl of the National Academy of Sciences, Hill's method was declared to be the recommendation of the Academy.^{11} From 1930 on, Hill's method has been used.

The Hamilton method led to a few paradoxes and highly counterintuitive behaviors that many representatives found disagreeable. In 1880, a paradox now called the *Alabama paradox* was noticed: while deciding how many members the House should have, it was found that if the House had 299 members, Alabama would have 8 representatives, but if the House had 300 members, Alabama would have 7 representatives — that is, making one *more* seat available led to Alabama receiving one *fewer* seat.

The problem is the fluctuating relationships between the many fractional parts of the ideal number of representatives per state (similar to those tallied in the table in the section **The Apportionment Act of 1792**).
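
To see the paradox in action, here is a minimal Hamilton-apportionment sketch in Python. The populations are entirely synthetic (chosen to exhibit the paradox), not the 1880 census data.

```python
from math import floor

def hamilton(populations, seats):
    """Hamilton (largest remainder) apportionment: give each state the floor of
    its quota, then hand leftover seats to the largest fractional remainders."""
    total = sum(populations.values())
    quotas = {s: seats * p / total for s, p in populations.items()}
    alloc = {s: floor(q) for s, q in quotas.items()}
    leftover = seats - sum(alloc.values())
    by_remainder = sorted(quotas, key=lambda s: quotas[s] - alloc[s], reverse=True)
    for s in by_remainder[:leftover]:
        alloc[s] += 1
    return alloc

# Synthetic populations chosen to exhibit the Alabama paradox.
pops = {"A": 6000, "B": 6000, "C": 2000}
print(hamilton(pops, 10))  # {'A': 4, 'B': 4, 'C': 2}
print(hamilton(pops, 11))  # {'A': 5, 'B': 5, 'C': 1} -- C loses a seat as the House grows
```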

Another paradox was discovered in 1900, known as the *Population paradox*. This is a scenario in which a state with a large population and rapid growth can lose a seat to a state with a small population and smaller population growth. In 1900, Virginia lost a seat to Maine, even though Virginia’s population was larger and growing much more rapidly.

In particular, in 1900, Virginia had 1854184 people and Maine had 694466 people, so Virginia had 2.67 times the population of Maine. In 1901, Virginia had 1873951 people and Maine had 699114 people, so Virginia had 2.68 times the number of people. And yet Hamilton apportionment would have given 10 seats to Virginia and 3 to Maine in 1900, but 9 to Virginia and 4 to Maine in 1901.

Central to this paradox is that even though Virginia was growing faster than Maine, the rest of the nation was growing faster still, and Virginia, being the larger state, lost proportionally more. But it's still paradoxical for a state to lose a representative to a second state that is both smaller in population and growing less rapidly each census.^{12}

The Hill method can be shown to suffer from neither the Alabama paradox nor the population paradox. That it avoids these paradoxical behaviors and that it minimizes a meaningful measure of inequality led to its adoption in the US.^{13}

Since 1930, the US has used the Hill method to apportion seats for the House of Representatives. But as described above, it may be hard to understand how to actually apply the Hill method. Recall that *P*_{i} is the population of state *i*, and *R*_{i} is the number of representatives allocated to state *i*. The Hill method seeks to minimize

$$

\frac{P_i/R_i - P_j/R_j}{P_j/R_j} = \frac{P_i/R_i}{P_j/R_j} - 1

$$

whenever *P*_{i}/*R*_{i} > *P*_{j}/*R*_{j}. Stated differently, the Hill method seeks to guarantee the smallest relative differences in constituency size.

We can work out a different way of understanding this apportionment that is easier to implement in practice.

Suppose that we have allocated all of the representatives to each state and state *j* has *R*_{j} representatives, and suppose that this allocation successfully minimizes relative differences in constituency size. Take two different states *i* and *j* with *P*_{i}/*R*_{i} > *P*_{j}/*R*_{j}. (If this isn’t possible then the allocation is perfect).

We can ask if it would be a good idea to move one representative from state *j* to state *i*, since state *j*‘s constituency sizes are smaller. This can be thought of as working with *R*_{i}′=*R*_{i} + 1 and *R*_{j}′=*R*_{j} − 1. If this transfer lessens the inequality then it should be made — but since we are supposing that the allocation successfully minimizes relative difference in constituency size, we must have that the inequality is at least as large. This necessarily means that *P*_{j}/*R*_{j}′>*P*_{i}/*R*_{i}′ (since otherwise the relative difference is strictly smaller) and

$$

\frac{P_j R_i'}{P_i R_j'} - 1 \geq \frac{P_i R_j}{P_j R_i} - 1

$$

(since the relative difference must be at least as large). This is equivalent to

$$

\frac{P_j(R_i+1)}{P_i(R_j-1)} \geq \frac{P_iR_j}{P_jR_i}

\iff

\frac{P_j^2}{(R_j-1)R_j} \geq \frac{P_i^2}{R_i(R_i+1)}.

$$

As every variable is positive, we can rewrite this as

$$

\frac{P_j}{\sqrt{(R_j - 1)R_j}} \geq \frac{P_i}{\sqrt{R_i(R_i+1)}}. \tag{2}

$$

We’ve shown that $(2)$ must hold whenever *P*_{i}/*R*_{i} > *P*_{j}/*R*_{j} in a system that minimizes relative difference in constituency size. But in fact it must hold for all pairs of states *i* and *j*.

Clearly it holds if *i* = *j* as the denominator on the left is strictly smaller.

If we are in the case when *P*_{j}/*R*_{j} > *P*_{i}/*R*_{i}, then we necessarily have the chain *P*_{j}/(*R*_{j} − 1)>*P*_{j}/*R*_{j} > *P*_{i}/*R*_{i} > *P*_{i}/(*R*_{i} + 1). Multiplying the inner and outer inequalities shows that $(2)$ holds trivially in this case.

This inequality shows that the greatest obstruction to being perfectly apportioned as per Hill’s method is the largest fraction

$$ \frac{P_i}{\sqrt{R_i(R_i+1)}} $$

being too large. (Some call this term the *Hill rank-index*).

This observation leads to the following iterative construction of a Hill apportionment. Initially, assign every state 1 representative (since by the Constitution, each state gets at least one representative). Then, given an apportionment for *n* seats, we can get an apportionment for *n* + 1 seats by assigning the additional seat to the state *i* which maximizes the Hill rank-index $P_i/\sqrt{R_i(R_i+1)}$.

Further, it can be shown that the resulting Hill apportionment is unique (except in the case of ties in the Hill rank-index, which are exceedingly rare in practice).

This is very quickly and easily implemented in code. In a later note, I will share the code I used to compute the various data for this note, as well as an implementation of Hill apportionment.
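
To give a flavor of this, here is a minimal sketch (not the code referenced above); the state names and populations are made up for illustration. A priority queue makes the seat-by-seat construction almost immediate.

```python
import heapq

def hill_apportion(populations, seats=435):
    """Sketch of Hill (Huntington-Hill) apportionment: start every state at one
    seat, then repeatedly award the next seat to the state with the largest
    rank-index P / sqrt(R * (R + 1))."""
    alloc = {state: 1 for state in populations}
    # heapq is a min-heap, so store negated rank-indices to pop the largest first.
    heap = [(-pop / (1 * 2) ** 0.5, state) for state, pop in populations.items()]
    heapq.heapify(heap)
    for _ in range(seats - len(populations)):
        _, state = heapq.heappop(heap)
        alloc[state] += 1
        r = alloc[state]
        heapq.heappush(heap, (-populations[state] / (r * (r + 1)) ** 0.5, state))
    return alloc

# Toy usage with made-up populations (not census figures).
print(hill_apportion({"A": 2_500_000, "B": 1_200_000, "C": 800_000}, seats=10))
# {'A': 5, 'B': 3, 'C': 2}
```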

Officially, Dean's method of apportionment has never been used. But it may have been used in 1870 without being described as such. On paper, Hamilton's method was in place and the size of the House was agreed to be 292. But the actual apportionment that occurred agreed with Dean's method, not Hamilton's method. Specifically, New York and Illinois were each given one fewer seat than Hamilton's method would have given, while New Hampshire and Florida were each given one additional seat.

There are many circumstances surrounding the 1870 census and apportionment that make this a particularly convoluted time. Firstly, the US had just experienced its Civil War, in which hundreds of thousands of people died and millions of others moved or were displaced. Animosity and Reconstruction were both in full swing. Secondly, the 14th Amendment was ratified in 1868, so that the populations of Southern states suddenly grew on paper as former slaves were finally allowed to be counted fully.

One might think that having two pairs of states swap a representative would be mostly inconsequential. But this difference (using Dean's method instead of the agreed-upon Hamilton method) changed the result of the 1876 Presidential election. In this election, Samuel Tilden won New York, while Rutherford B. Hayes won Illinois, New Hampshire, and Florida. As a result, Tilden received one fewer electoral vote and Hayes received one more, and the final electoral tally had Hayes winning with 185 votes to Tilden's 184.

There is still one further complicating factor that makes this yet more convoluted. The 1876 election is perhaps the most disputed presidential election in US history. In Florida, Louisiana, and South Carolina, each party reported that its candidate had won the state. Legitimacy was in question, and it's widely believed that a deal was struck between the Democratic and Republican parties (see wikipedia and 270 to win). As a result of this deal, the Republican candidate Rutherford B. Hayes would receive all disputed electoral votes and, in exchange, federal troops (which had been propping up Reconstruction efforts) would be withdrawn from the South. This marked the end of the Reconstruction period and allowed the rise of the Democratic Redeemers (and their rampant disenfranchisement of black voters) in the South.

Similar in consequence though not in controversy, the apportionment based on the 1990 census influenced the result of the 2000 presidential election between George W. Bush and Al Gore (the 2000 census was not complete before the election, so the election was held with the 1990 electoral college sizes). The modern Hill apportionment method was used, as it has been since 1930. But interestingly, if the originally proposed Hamilton method of 1792 had been used, the electoral college would have been tied at 269.^{14} If Jefferson's method had been used, then Gore would have won with 271 votes to Bush's 266.

These decisions have far-reaching consequences!

- Balinski, Michel L., and H. Peyton Young. Fair representation: meeting the ideal of one man, one vote. Brookings Institution Press, 2010.
- Balinski, Michel L., and H. Peyton Young. “The quota method of apportionment.” The American Mathematical Monthly 82.7 (1975): 701-730.
- Bliss, G. A., Brown, E. W., Eisenhart, L. P., & Pearl, R. (1929). Report to the President of the National Academy of Sciences. February, 9, 1015-1047.
- Crocker, R. House of Representatives Apportionment Formula: An Analysis of Proposals for Change and Their Impact on States. DIANE Publishing, 2011.
- Huntington, E. V. "The Apportionment of Representatives in Congress." Transactions of the American Mathematical Society 30 (1928): 85–110.
- Peskin, Allan. “Was there a Compromise of 1877.” The Journal of American History 60.1 (1973): 63-75.
- US Census Results
- US Constitution
- US Congressional Record, as collected at https://memory.loc.gov/ammem/amlaw/lwaclink.html
- George Washington’s collected papers, as archived at https://web.archive.org/web/20090124222206/http://gwpapers.virginia.edu/documents/presidential/veto.html
- Wikipedia on the Compromise of 1877, at https://en.wikipedia.org/wiki/Compromise_of_1877
- Wikipedia on Arthur Vandenberg, at https://en.wikipedia.org/wiki/Arthur_Vandenberg

Posted in Data, Expository, Mathematics, Politics, Story
Tagged apportionment, election, Hill apportionment
Leave a comment