## Mathematics Category Archive

Below you will find the most recent posts tagged “Mathematics”, arranged in reverse chronological order.

Posted in Mathematics
Leave a comment

Today I will be giving a talk at the Maine-Quebec Number Theory conference. Each year that I attend this conference, I marvel at how friendly and inviting an environment it is — I highly recommend checking the conference out (and perhaps modelling other conferences after it).

The theme of my talk is spectral poles and their contributions to asymptotics (especially of error terms). I describe a few problems in which spectral poles appear in asymptotics. Unlike the nice, simple cases where a single pole (or possibly a few poles) appears, in these cases infinite lines of poles appear.

For a bit over a year, I have encountered these and not known what to make of them. Could you have the pathological case that residues of these poles generically cancel? Could they combine to be larger than expected? How do we make sense of them?

The resolution came only very recently.^{1}

I will later write a dedicated note to this new idea (involving Dirichlet integrals and Landau’s theorem in this context), but for now — here are the slides for my talk.

Posted in Expository, Math.NT, Mathematics
Tagged dirichlet integral, dirichlet series, error term, gauss circle problem
2 Comments

**insidious** (adjective)

1. a. Having a gradual and cumulative effect
   b. of a disease: developing so gradually as to be well established before becoming apparent
2. a. awaiting a chance to entrap
   b. harmful but enticing

— Merriam-Webster Dictionary

In early topics in mathematics, one can often approach a topic from a combination of intuition and first principles in order to deduce the desired results. In later topics, it becomes necessary to repeatedly sharpen intuition while taking advantage of the insights of the many mathematicians who came before — one sees much further by standing on the shoulders of giants. Somewhere in the middle, it becomes necessary to accept the idea that there are topics and ideas that are not at all obvious. They might appear to have been plucked out of thin air. And this is a conceptual boundary.

In my experience, calculus is often the class where students primarily confront the idea that it is necessary to take advantage of the good ideas of the past. It sneaks up. The main ideas of calculus are intuitive — local rates of change can be approximated by slopes of secant lines and areas under curves can be approximated by sums of areas of boxes. That these are deeply connected is surprising.

To many students, Taylor’s Theorem is one of the first examples of a commonly-used result whose proof has some aspect which appears to have been plucked out of thin air.^{1} Learning Taylor’s Theorem in high school was one of the things that inspired me to begin to revisit calculus with an eye towards *why* each result was true.

I also began to try to prove the fundamental theorems of single and multivariable calculus with as little machinery as possible. High school me thought that topology was overcomplicated and unnecessary for something so intuitive as calculus.^{2}

This train of thought led to my previous note, on another proof of Taylor’s Theorem. That note is a simplified version of one of the first proofs I devised on my own.

Much less obviously, this train of thought also led to the paper on the mean value theorem written with Miles. Originally I had thought that “nice” functions should clearly have continuous choices for mean value abscissae, and I thought that this could be used to provide alternate proofs for some fundamental calculus theorems. It turns out that there are very nice functions that don’t have continuous choices for mean value abscissae, *and* that actually using that result to prove classical calculus results is often more technical than the typical proofs.

The flow of ideas is turbulent, highly nonlinear.

I used to think that developing extra rigor early on in my mathematical education was the right way to get to deeper ideas more quickly. There is a kernel of truth to this, as transitioning from pre-rigorous mathematics to rigorous mathematics is very important. But it is also necessary to transition to post-rigorous mathematics (and more generally, to choose one’s battles) in order to organize and communicate one’s thoughts.

In hindsight, I think now that I was focused on the wrong aspect. As a high school student, I had hoped to discover the obvious, clear, intuitive proofs of every result. Of course it is great to find these proofs when they exist, but it would have been better to grasp earlier that sometimes these proofs don’t exist. And rarely does actual research proceed so cleanly — it’s messy and uncertain and full of backtracking and random exploration.

Posted in Expository, Math.CA, Mathematics
Leave a comment

In this note, we produce a proof of Taylor’s Theorem. As in many proofs of Taylor’s Theorem, we begin with a curious start and then follow our noses forward.

Is this a new proof? I think so. But I wouldn’t bet a lot of money on it. It’s certainly new to me.

Is this a groundbreaking proof? No, not at all. But it’s cute, and I like it.^{1}

We begin with the following simple observation. Suppose that $f$ is twice continuously differentiable. Then for any $t \neq 0$, we see that \begin{equation} f'(t) - f'(0) = \frac{f'(t) - f'(0)}{t} t. \end{equation} Integrating each side from $0$ to $x$, we find that \begin{equation} f(x) - f(0) - f'(0) x = \int_0^x \frac{f'(t) - f'(0)}{t} t \, dt. \end{equation} To interpret the integral on the right in a different way, we will use the mean value theorem for integrals.

**Mean Value Theorem for Integrals.** Suppose that $g$ and $h$ are continuous functions, and that $h$ doesn’t change sign in $[0, x]$. Then there is a $c \in [0, x]$ such that \begin{equation} \int_0^x g(t) h(t) \, dt = g(c) \int_0^x h(t) \, dt. \end{equation}

Suppose without loss of generality that $h(t)$ is nonnegative. Since $g$ is continuous on $[0, x]$, it attains its minimum $m$ and maximum $M$ on this interval. Thus \begin{equation} m \int_0^x h(t) dt \leq \int_0^x g(t)h(t)dt \leq M \int_0^x h(t) dt. \end{equation} Let $I = \int_0^x h(t) dt$. If $I = 0$ (or equivalently, if $h(t) \equiv 0$), then the theorem is trivially true, so suppose instead that $I \neq 0$. Then \begin{equation} m \leq \frac{1}{I} \int_0^x g(t) h(t) dt \leq M. \end{equation} By the intermediate value theorem, $g(t)$ attains every value between $m$ and $M$, and thus there exists some $c$ such that \begin{equation} g(c) = \frac{1}{I} \int_0^x g(t) h(t) dt. \end{equation} Rearranging proves the theorem.
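
To make the statement concrete, here is a quick numeric check (my own illustration, not from the original post) using only the Python standard library: with $g(t) = \cos t$ and $h(t) = t$ on $[0, 1]$, we can locate the promised $c$ by bisection, since $\cos$ is decreasing there.

```python
import math

def riemann(f, a, b, n=10_000):
    """Midpoint Riemann sum approximating the integral of f over [a, b]."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

g = math.cos            # a continuous g
h = lambda t: t         # h(t) = t does not change sign on [0, 1]
a, b = 0.0, 1.0

lhs = riemann(lambda t: g(t) * h(t), a, b)   # integral of g*h
I = riemann(h, a, b)                          # integral of h
target = lhs / I                              # the theorem promises g(c) = target

# g = cos is decreasing on [0, 1], so bisect for the c with g(c) = target
lo, hi = a, b
for _ in range(60):
    mid = (lo + hi) / 2
    if g(mid) > target:
        lo = mid
    else:
        hi = mid
c = (lo + hi) / 2
print(c)  # a point in [0, 1] with g(c) * I equal to the integral of g*h
```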

For this application, let $g(t) = (f'(t) - f'(0))/t$ for $t \neq 0$, and $g(0) = f''(0)$. The continuity of $g$ at $0$ is exactly the condition that $f''(0)$ exists. We also let $h(t) = t$.

For $x > 0$, it follows from the mean value theorem for integrals that there exists a $c \in [0, x]$ such that \begin{equation} \int_0^x \frac{f'(t) - f'(0)}{t} t \, dt = \frac{f'(c) - f'(0)}{c} \int_0^x t \, dt = \frac{f'(c) - f'(0)}{c} \frac{x^2}{2}. \end{equation} (Very similar reasoning applies for $x < 0$.) Finally, by the mean value theorem (applied to $f'$), there exists a point $\xi \in (0, c)$ such that \begin{equation} f''(\xi) = \frac{f'(c) - f'(0)}{c}. \end{equation} Putting this together, we have proved that there is a $\xi \in (0, x)$ such that \begin{equation} f(x) - f(0) - f'(0) x = f''(\xi) \frac{x^2}{2}, \end{equation} which is one version of Taylor’s Theorem with a linear approximating polynomial.

This approach generalizes. Suppose $f$ is a $(k+1)$ times continuously differentiable function, and begin with the trivial observation that \begin{equation} f^{(k)}(t) - f^{(k)}(0) = \frac{f^{(k)}(t) - f^{(k)}(0)}{t} t. \end{equation} Iteratively integrate $k$ times: first from $0$ to $t_1$, then from $0$ to $t_2$, and so on, with the $k$th interval being from $0$ to $t_k = x$.

Then the left hand side becomes \begin{equation} f(x) - \sum_{n = 0}^k f^{(n)}(0)\frac{x^n}{n!}, \end{equation} the difference between $f$ and its degree $k$ Taylor polynomial. The right hand side is

\begin{equation}\label{eq:only}\underbrace{\int _0^{t_k = x} \cdots \int _0^{t _1}} _{k \text{ times}} \frac{f^{(k)}(t) - f^{(k)}(0)}{t} t \, dt \, dt _1 \cdots dt _{k-1}.\end{equation}

To handle this, we note the following variant of the mean value theorem for integrals.

**Mean value theorem for iterated integrals.** Suppose that $g$ and $h$ are continuous functions, and that $h$ doesn’t change sign in $[0, x]$. Then there is a $c \in [0, x]$ such that \begin{equation} \underbrace{\int_0^{t _k=x} \cdots \int _0^{t _1}} _{k \; \text{times}} g(t) h(t) \, dt \, dt _1 \cdots dt _{k-1} = g(c) \underbrace{\int _0^{t _k=x} \cdots \int _0^{t _1}} _{k \; \text{times}} h(t) \, dt \, dt _1 \cdots dt _{k-1}. \end{equation}

In fact, this can be proved in almost exactly the same way as in the single-integral version, so we do not repeat the proof.

With this theorem, there is a $c \in [0, x]$ such that \eqref{eq:only} can be written as \begin{equation} \frac{f^{(k)}(c) - f^{(k)}(0)}{c} \underbrace{\int _0^{t _k = x} \cdots \int _0^{t _1}} _{k \; \text{times}} t \, dt \, dt _1 \cdots dt _{k-1}. \end{equation} By the mean value theorem, the factor in front of the integrals can be written as $f^{(k+1)}(\xi)$ for some $\xi \in (0, x)$. The integrals can be directly evaluated to be $x^{k+1}/(k+1)!$.

Thus overall, we find that \begin{equation} f(x) = \sum_{n = 0}^k f^{(n)}(0) \frac{x^n}{n!} + f^{(k+1)}(\xi) \frac{x^{k+1}}{(k+1)!} \end{equation} for some $\xi \in (0, x)$. This proves Taylor’s Theorem (with Lagrange’s error bound).
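
As a sanity check (my own, not part of the original argument), one can confirm numerically that the guaranteed $\xi$ really lies in $(0, x)$. For $f = \exp$ every derivative is again $\exp$, so $\xi$ can be solved for explicitly from the remainder:

```python
import math

# Taylor's theorem with k = 3 at x = 1 for f = exp promises
#     exp(x) - T_3(x) = exp(xi) * x^4 / 4!   for some xi in (0, x).
x, k = 1.0, 3
T = sum(x**n / math.factorial(n) for n in range(k + 1))  # degree-k Taylor polynomial
R = math.exp(x) - T                                      # the remainder
xi = math.log(R * math.factorial(k + 1) / x**(k + 1))    # solve exp(xi) = R*(k+1)!/x^(k+1)
print(xi)  # lies strictly between 0 and x
```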

In my previous note, I described some of the main ideas behind the paper “When are there continuous choices for the mean value abscissa?” that I wrote jointly with Miles Wheeler. In this note, I discuss the process behind generating the functions and figures in our paper.

Generating our functions came in two steps: first we needed to choose which functions to plot; then we needed to figure out how to graphically solve the general mean value abscissa problem for each.

Afterwards, we can decide how to plot these functions *well*.

The first goal is to find the right functions to plot. From the discussion in our paper, this amounts to specifying certain local conditions of the function. And for a first pass, we only used these prescribed local conditions.

The idea is this: to study solutions to the mean value problem, we look at the zeroes of the function $$ F(b, c) = \frac{f(b) - f(a)}{b - a} - f'(c). $$ When $F(b, c) = 0$, we see that $c$ is a mean value abscissa for $f$ on the interval $(a, b)$.
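
As a concrete instance (an illustration of mine, not an example from the paper): take $f(x) = x^3$ with $a = 0$. Then $F(b, c) = b^2 - 3c^2$, so the zero set is the line $c = b/\sqrt{3}$, and each $b$ has an abscissa depending continuously on $b$.

```python
import math

f = lambda x: x ** 3
fp = lambda x: 3 * x ** 2     # f'
a = 0.0

for b in (0.5, 1.0, 2.0):
    c = b / math.sqrt(3)      # the claimed mean value abscissa
    F = (f(b) - f(a)) / (b - a) - fp(c)
    assert abs(F) < 1e-12 and a < c < b
print("all abscissae verified")
```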

By the implicit function theorem, we can solve for $c$ as a function of $b$ around a given solution $(b_0, c_0)$ if $F_c(b_0, c_0) \neq 0$. For this particular function, $F_c(b_0, c_0) = -f''(c_0)$.
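
This derivative is easy to confirm numerically (a quick check of my own, with $f = \sin$ as an arbitrary test function): a central difference of $F$ in $c$ should match $-f''(c_0)$.

```python
import math

f, fp, fpp = math.sin, math.cos, lambda x: -math.sin(x)  # f, f', f''
a = 0.0

def F(b, c):
    return (f(b) - f(a)) / (b - a) - fp(c)

b0, c0, h = 2.0, 1.0, 1e-6
dFdc = (F(b0, c0 + h) - F(b0, c0 - h)) / (2 * h)   # numerical F_c(b0, c0)
print(dFdc, -fpp(c0))  # the two values agree
```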

More generally, it turns out that the order of vanishing of $f’$ at $b_0$ and $c_0$ governs the local behaviour of solutions in a neighborhood of $(b_0, c_0)$.

To make figures, we thus need to make functions with prescribed orders of vanishing of $f’$ at points $b_0$ and $c_0$, where $c_0$ is itself a mean value abscissa for the interval $(a_0, b_0)$.

Without loss of generality, it suffices to consider the case when $f(a_0) = f(b_0) = 0$, as otherwise we can study the function $$ g(x) = f(x) - \left( \frac{f(b_0) - f(a_0)}{b_0 - a_0}(x - a_0) + f(a_0) \right), $$ which has $g(a_0) = g(b_0) = 0$, and those triples $(a, b, c)$ which solve the mean value problem for $f$ also solve it for $g$.

And for consistency, we made the arbitrary decisions to have $a_0 = 0$, $b_0 = 3$, and $c_0 = 1$. This decision simplified many of the plotting decisions, as the important points were always $0$, $1$, and $3$.

Thus the first task is to be able to generate functions $f$ such that:

- $f(0) = 0$,
- $f(3) = 0$,
- $f'(1) = 0$ (so that $1$ is a mean value abscissa),
- $f'(x)$ has prescribed order of vanishing at $1$, and
- $f'(x)$ has prescribed order of vanishing at $3$.

These conditions can all be met by an appropriate interpolating polynomial. As we are setting conditions on both $f$ and its derivatives at multiple points, this amounts to the fundamental problem in *Hermite interpolation*. Alternatively, this amounts to using Taylor’s theorem at multiple points and then using the Chinese Remainder Theorem over $\mathbb{Z}[x]$ to combine these polynomials together.

There are clever ways of solving this, but this task is so small that it doesn’t require cleverness. In fact, this is one of the laziest solutions we could think of. We know that given $n$ Hermite conditions, there is a unique polynomial of degree $n – 1$ that interpolates these conditions. Thus we

- determine the degree of the polynomial,
- create a degree $n-1$ polynomial with variable coefficients in sympy,
- have sympy symbolically compute the relations the coefficients must satisfy,
- ask sympy to solve this symbolic system of equations.

In code, this looks like

```
import sympy
from sympy.abc import X, B, C, D  # Establish our variable names

def interpolate(conds):
    """
    Finds the polynomial of minimal degree that solves the given Hermite conditions.

    conds is a list of the form
        [(x1, r1, v1), (x2, r2, v2), ...]
    where the polynomial p is to satisfy p^(r_1) (x_1) = v_1, and so on.
    """
    # the degree will be one less than the number of conditions
    n = len(conds)
    # generate a symbol for each coefficient
    A = [sympy.Symbol("a[%d]" % i) for i in range(n)]
    # generate the desired polynomial symbolically
    P = sum([A[i] * X**i for i in range(n)])
    # generate the equations the polynomial must satisfy
    #
    # for each (x, r, v), sympy evaluates the rth derivative of P wrt X,
    # substitutes x in for X, and requires that this equals v.
    EQNS = [sympy.diff(P, X, r).subs(X, x) - v for x, r, v in conds]
    # solve this system for the coefficients A[i]
    SOLN = sympy.solve(EQNS, A)
    return P.subs(SOLN)
```

We note that we use the convention that a sympy symbol for something is capitalized. For example, we think of the polynomial as being represented by $$

p(x) = a(0) + a(1)x + a(2)x^2 + \cdots + a(n)x^n.

$$ In sympy variables, we think of this as `P = A[0] + A[1] * X + A[2] * X**2 + ... + A[n] * X**n`.

With this code, we can ask for the unique degree 1 polynomial which is $1$ at $1$, and whose first derivative is $2$ at $1$.

```
> interpolate([(1, 0, 1), (1, 1, 2)])
2*X - 1
```

Indeed, $2x - 1$ is this polynomial.

We have now produced a minimal Hermite solver. But there is a major downside: the unique polynomial exhibiting the necessary behaviours we required is essentially never a good didactic example. We don’t just want plots — we want beautiful, simple plots.

We add two conditions for additional control, and hopefully for additional simplicity of the resulting plot.

Firstly, we added the additional constraint that $f(1) = 1$. This is a small change, but it prescribes the value at the remaining point of interest. So now at least all three points of interest will fit within a $[0, 3] \times [0, 3]$ box.

Secondly, we also allow the choice of the value of the first nonvanishing derivatives at $1$ and $3$. In reality, we treat these as parameters to change the shape of the resulting graph. Roughly speaking, if the order of vanishing of $f(x) - f(1)$ at $1$ is $k$, then near $1$ the approximation $f(x) - f(1) \approx f^{(k)}(1) (x-1)^k/k!$ holds. Morally, the larger the value of the derivative, the more the graph will resemble $(x-1)^k$ near that point.

In code, we implemented this by making functions that add the necessary Hermite conditions to our input to `interpolate`.

```
# We fix the values of a0, b0, c0.
a0 = 0
b0 = 3
c0 = 1

# We require p(a0) = 0, p(b0) = 0, p(c0) = 1, p'(c0) = 0.
BASIC_CONDS = [(a0, 0, 0), (b0, 0, 0), (c0, 0, 1), (c0, 1, 0)]

def c_degen(n, residue):
    """
    Give Hermite conditions for order of vanishing at c0 equal to `n`, with
    first nonzero residue `residue`.

    NOTE: the order `n` is in terms of f', not of f. That is, this is the amount
    of additional degeneracy to add. This may be a source of off-by-one errors.
    """
    return [(c0, 1 + i, 0) for i in range(1, n + 1)] + [(c0, n + 2, residue)]

def b_degen(n, residue):
    """
    Give Hermite conditions for order of vanishing at b0 equal to `n`, with
    first nonzero residue `residue`.
    """
    return [(b0, i, 0) for i in range(1, n + 1)] + [(b0, n + 1, residue)]

def poly_with_degens(nc=0, nb=0, residue_c=3, residue_b=3):
    """
    Give the unique polynomial with given degeneracies for this MVT problem.

    `nc` is the order of vanishing of f' at c0, with first nonzero residue `residue_c`.
    `nb` is the order of vanishing of f' at b0, with first nonzero residue `residue_b`.
    """
    conds = BASIC_CONDS + c_degen(nc, residue_c) + b_degen(nb, residue_b)
    return interpolate(conds)
```

Then apparently the unique degree $5$ polynomial $f$ with $f(0) = f(3) = f'(1) = 0$, $f(1) = 1$, and $f''(1) = f'(3) = 3$ is given by

```
> poly_with_degens()
11*X**5/16 - 21*X**4/4 + 113*X**3/8 - 65*X**2/4 + 123*X/16
```

In principle, this is a great solution. And if you turn the knobs enough, you can get a really nice picture. But the problem with this system (and with many polynomial interpolation problems) is that when you add conditions, you can introduce many jagged peaks and sudden changes. These can behave somewhat unpredictably and chaotically — small changes in Hermite conditions can lead to drastic changes in resulting polynomial shape.

What we really want is for the interpolator to give a polynomial that doesn’t have sudden changes.

The problem: the polynomial can have really rapid changes that makes the plots look bad.

The solution: minimize the polynomial’s change.

That is, if $f$ is our polynomial, then its rate of change at $x$ is $f'(x)$. Our idea is to “minimize” the average size of the derivative $f’$ — this should help keep the function in frame. There are many ways to do this, but we want to choose one that fits into our scheme (so that it requires as little additional work as possible) but which works well.

We decide that we want to focus our graphs on the interval $(0, 4)$. Then we can measure the average size of the derivative $f’$ by its L2 norm on $(0, 4)$: $$ L2(f) = \int_0^4 (f'(x))^2 dx. $$
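
For intuition about this measurement (a stdlib sketch of mine, separate from the sympy pipeline below): for $f(x) = x^2$ we have $f'(x) = 2x$, so $L2(f) = \int_0^4 4x^2 \, dx = 256/3$, which a midpoint Riemann sum reproduces.

```python
def l2_of_derivative(df, a=0.0, b=4.0, n=10_000):
    """Midpoint Riemann sum for the integral of df(x)^2 over [a, b]."""
    dx = (b - a) / n
    return sum(df(a + (i + 0.5) * dx) ** 2 for i in range(n)) * dx

val = l2_of_derivative(lambda x: 2 * x)   # f(x) = x^2, so f'(x) = 2x
print(val)  # close to 256/3
```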

We add an additional Hermite condition of the form `(pt, order, VAL)` and think of `VAL` as an unknown symbol. We arbitrarily decided to start with $pt = 2$ (so that now behavior at the points $0, 1, 2, 3$ is all being controlled in some way) and $order = 1$. The point itself doesn’t matter very much, since we’re going to minimize over the family of polynomials that interpolate the other Hermite conditions with one degree of freedom.

In other words, we are adding in the condition that $f'(2) = VAL$ for an unknown `VAL`.

We will have sympy compute the interpolating polynomial through its normal set of (explicit) conditions as well as the symbolic condition `(2, 1, VAL)`. Then $f = f(\mathrm{VAL}; x)$.

Then we have sympy compute the (symbolic) L2 norm of the derivative of this polynomial over the interval $(0, 4)$: $$L2(\mathrm{VAL}) = \int_0^4 f'(\mathrm{VAL}; x)^2 dx.$$

Finally, to minimize the L2 norm, we have sympy compute the derivative of $L2(\mathrm{VAL})$ with respect to `VAL` and find the critical points, where the derivative is equal to $0$. We choose the first one to give our value of `VAL`.^{1}

In code, this looks like

```
def smoother_interpolate(conds, ctrl_point=2, order=1, interval=(0, 4)):
    """
    Find the polynomial of minimal degree that interpolates the Hermite
    conditions in `conds`, and whose behavior at `ctrl_point` minimizes the L2
    norm on `interval` of its derivative.
    """
    # Add the symbolic point to the conditions.
    # Recall that D is a sympy variable
    new_conds = conds + [(ctrl_point, order, D)]
    # Find the polynomial interpolating `new_conds`, symbolic in X *and* D
    P = interpolate(new_conds)
    # Compute the L2 norm of the derivative on `interval`
    L2 = sympy.integrate(sympy.diff(P, X)**2, (X, *interval))
    # Take the first critical point of the L2 norm with respect to D
    SOLN = sympy.solve(sympy.diff(L2, D), D)[0]
    # Substitute the minimizing solution in for D and return
    return P.subs(D, SOLN)

def smoother_poly_with_degens(nc=0, nb=0, residue_c=3, residue_b=3):
    """
    Give the unique polynomial with given degeneracies for this MVT problem whose
    derivative on (0, 4) has minimal L2 norm.

    `nc` is the order of vanishing of f' at c0, with first nonzero residue `residue_c`.
    `nb` is the order of vanishing of f' at b0, with first nonzero residue `residue_b`.
    """
    conds = BASIC_CONDS + c_degen(nc, residue_c) + b_degen(nb, residue_b)
    return smoother_interpolate(conds)
```

Then apparently the degree $6$ polynomial $f$ with $f(0) = f(3) = f'(1) = 0$, $f(1) = 1$, and $f''(1) = f'(3) = 3$, and with minimal L2 derivative norm on $(0, 4)$, is given by

```
> smoother_poly_with_degens()
-9660585*X**6/33224848 + 27446837*X**5/8306212 - 232124001*X**4/16612424
+ 57105493*X**3/2076553 - 858703085*X**2/33224848 + 85590321*X/8306212
> sympy.N(smoother_poly_with_degens())
-0.290763858423069*X**6 + 3.30437472580762*X**5 - 13.9729157526921*X**4
+ 27.5001374874612*X**3 - 25.8452073279613*X**2 + 10.3043747258076*X
```

Is it much better? Let’s compute the L2 norms.

```
> interval = (0, 4)
> sympy.N(sympy.integrate(sympy.diff(poly_with_degens(), X)**2, (X, *interval)))
1865.15411706349
> sympy.N(sympy.integrate(sympy.diff(smoother_poly_with_degens(), X)**2, (X, *interval)))
41.1612799050325
```

That’s beautiful. And you know what’s better? Sympy did all the hard work.

For comparison, we can produce a basic plot using numpy and matplotlib.

```
import matplotlib.pyplot as plt
import numpy as np

def basic_plot(F, n=300):
    fig = plt.figure(figsize=(6, 2.5))
    ax = fig.add_subplot(1, 1, 1)
    b1d = np.linspace(-.5, 4.5, n)
    f = sympy.lambdify(X, F)(b1d)
    ax.plot(b1d, f, 'k')
    ax.set_aspect('equal')
    ax.grid(True)
    ax.set_xlim([-.5, 4.5])
    ax.set_ylim([-1, 5])
    ax.plot([0, c0, b0], [0, F.subs(X, c0), F.subs(X, b0)], 'ko')
    fig.savefig("basic_plot.pdf")
```

Then the plot of `poly_with_degens()` is given by

The polynomial jumps upwards immediately and strongly for $x > 3$.

On the other hand, the plot of `smoother_poly_with_degens()` is given by

This stays in frame between $0$ and $4$, as desired.

This was enough to generate the functions for our paper. Actually, the three functions (in a total of six plots) in figures 1, 2, and 5 in our paper were hand chosen and hand-crafted for didactic purposes: the first two functions are simply a cubic and a quadratic with certain points labelled. The last function was the non-analytic-but-smooth semi-pathological counterexample, and so cannot be created through polynomial interpolation.

But the four functions highlighting different degenerate conditions in figures 3 and 4 were each created using this L2-minimizing interpolation system.

In particular, the function in figure 3 is

`F3 = smoother_poly_with_degens(nc=1, residue_b=-3)`

which is one of the simplest L2 minimizing polynomials with the typical Hermite conditions, $f''(c_0) = 0$, and opposite-default sign of $f'(b_0)$.

The three functions in figure 4 are (from left to right)

```
F_bmin = smoother_poly_with_degens(nc=1, nb=1, residue_c=10, residue_b=10)
F_bzero = smoother_poly_with_degens(nc=1, nb=2, residue_c=-20, residue_b=20)
F_bmax = smoother_poly_with_degens(nc=1, nb=1, residue_c=20, residue_b=-10)
```

We chose much larger residues because the goal of the figure is to highlight how the local behavior at those points corresponds to the behavior of the mean value abscissae, and larger residues make those local behaviors more dominant.

Now that we can choose our functions, we want to figure out how to find all solutions of the mean value condition $F(b, c) = 0$, where $$ F(b, c) = \frac{f(b) - f(a_0)}{b - a_0} - f'(c). $$ Here I write $a_0$ as it’s fixed, while both $b$ and $c$ vary.

Our primary interest in these solutions is to facilitate graphical experimentation and exploration of the problem — we want these pictures to help build intuition and provide examples.

Although this may seem harder, it is actually a much simpler problem. The function $F(b, c)$ is continuous (and roughly as smooth as $f$ is).

Our general idea is a common approach for this sort of problem:

- Compute the values of $F(b, c)$ on a tight mesh (or grid) of points.
- Restrict attention to the domain where solutions are meaningful.
- Plot the *contour* of the $0$-level set.

Contours can be well-approximated from a tight mesh. In short, if there is a small positive number and a small negative number next to each other in the mesh of computed values, then necessarily $F(b, c) = 0$ between them. For a tight enough mesh, good plots can be made.
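
The sign-change principle itself can be seen in a tiny standalone sketch (mine, using only the standard library rather than the numpy/matplotlib machinery below): take $F(b, c) = b^2 + c^2 - 1$, whose $0$-level set is the unit circle, and scan a mesh for adjacent values of opposite sign.

```python
import math

F = lambda b, c: b * b + c * c - 1     # zero set: the unit circle
n = 200                                 # mesh resolution on [-2, 2] x [-2, 2]
h = 4 / n                               # mesh width
coords = [4 * i / n - 2 for i in range(n + 1)]

crossings = []
for c in coords:
    for i in range(n):
        b0, b1 = coords[i], coords[i + 1]
        if F(b0, c) * F(b1, c) < 0:     # opposite signs: a zero lies between
            crossings.append(((b0 + b1) / 2, c))

# every detected midpoint is within one mesh-width of the true zero set
assert crossings
assert all(abs(math.hypot(b, c) - 1) < h for b, c in crossings)
print(len(crossings), "crossings found near the circle")
```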

To solve this, we again have sympy create and compute the function for us. We use numpy to generate the mesh (and to vectorize the computations, although this isn’t particularly important in this application), and matplotlib to plot the resulting contour.

Before giving code, note that the symbol `F` in the sympy code below stands for what we have been mathematically referring to as $f$, and not $F$. This is a potential confusion from our sympy-capitalization convention. It is still necessary to have sympy compute $F$ from $f$.

In code, this looks like

```
import sympy
import numpy as np
import matplotlib.pyplot as plt

def abscissa_plot(F, n=300):
    # Compute the derivative of f
    DF = sympy.diff(F, X)
    # Define CAP_F --- "capital F"
    #
    # this is (f(b) - f(0))/(b - 0) - f'(c).
    CAP_F = (F.subs(X, B) - F.subs(X, 0)) / (B - 0) - DF.subs(X, C)
    # build the mesh
    b1d = np.linspace(-.5, 4.5, n)
    b2d, c2d = np.meshgrid(b1d, b1d)
    # compute CAP_F within the mesh
    cap_f_mesh = sympy.lambdify((B, C), CAP_F)(b2d, c2d)
    # restrict attention to below the diagonal --- we require c < b
    # (although the mask inequality looks reversed in this perspective)
    valid_cap_f_mesh = np.ma.array(cap_f_mesh, mask=c2d > b2d)
    # Set up plot basics
    fig = plt.figure(figsize=(6, 2.5))
    ax = fig.add_subplot(1, 1, 1)
    ax.set_aspect('equal')
    ax.grid(True)
    ax.set_xlim([-.5, 4.5])
    ax.set_ylim([-.5, 4.5])
    # plot the contour
    ax.contour(b2d, c2d, valid_cap_f_mesh, [0], colors='k')
    # plot a diagonal line representing the boundary
    ax.plot(b1d, b1d, 'k--')
    # plot the guaranteed point
    ax.plot(b0, c0, 'ko')
    fig.savefig("abscissa_plot.pdf")
```

Then plots of solutions to $F(b, c) = 0$ for our basic polynomials are given below, first for `poly_with_degens()` and then for `smoother_poly_with_degens()`.

And for comparison, we can now create a (slightly worse looking) version of the plots in figure 3.

```
F3 = smoother_poly_with_degens(nc=1, residue_b=-3)
basic_plot(F3)
abscissa_plot(F3)
```

This produces the two plots

For comparison, a (slightly scaled) version of the actual figure appearing in the paper is

A copy of the code used in this note (and correspondingly the code used to generate the functions for the paper) is available on my github as an ipython notebook.

Posted in Expository, Math.CA, Mathematics, Programming, Python, sagemath
Tagged contour plot, implicit function theorem, matplotlib, mean value theorem, numpy, paper, plotting, scipy
3 Comments

Miles Wheeler and I have recently uploaded a paper to the arXiv called “When are there continuous choices for the mean value abscissa?”, which we have submitted to an expository journal. The underlying question is simple but nontrivial.

The mean value theorem of calculus states that, given a differentiable function $f$ on an interval $[a, b]$, there exists a $c \in (a, b)$ such that

$$ \frac{f(b) - f(a)}{b - a} = f'(c).$$

We call $c$ the *mean value abscissa*.

Our question concerns potential behavior of this abscissa when we fix the left endpoint $a$ of the interval and vary $b$. For each $b$, there is at least one abscissa $c_b$ such that the mean value theorem holds with that abscissa. But generically there may be more than one choice of abscissa for each interval. When can we choose $c_b$ as a continuous function of $b$? That is, when can we write $c = c(b)$ such that

$$ \frac{f(b) - f(a)}{b - a} = f'(c(b))$$

for all $b$ in some interval?

We think of this as a continuous choice for the mean value abscissa.
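
For the very simplest functions the answer is plainly yes. For example (an illustration of mine, not drawn from the paper), for $f(x) = x^2$ the secant slope from $a$ to $b$ is $a + b$ while $f'(c) = 2c$, so the unique abscissa is the midpoint $c(b) = (a + b)/2$ — a continuous (indeed linear) choice:

```python
def abscissa(a, b):
    """Mean value abscissa for f(x) = x^2 on [a, b]: the midpoint."""
    return (a + b) / 2

a = 0.0
for b in (0.5, 1.0, 2.0, 3.0):
    c = abscissa(a, b)
    secant = (b**2 - a**2) / (b - a)
    assert abs(secant - 2 * c) < 1e-12   # f'(c) = 2c matches the secant slope
    assert a < c < b
print("midpoint works for every b")
```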

This is a great question. It’s widely understandable — even to students with only one semester of calculus. Further it encourages a proper understanding of what a *function* is, as thinking of $c$ as potentially a function of $b$ is atypical and interesting.

But I also like this question because the answer is not as simple as you might think, and there are a few nice ideas that get to the answer.

Should you find yourself reading this without knowing the answer, I encourage you to consider it right now. Should continuous choices of abscissas exist? What if the function is really well-behaved? What if it’s smooth? Or analytic?

Let’s focus on the smooth question. Suppose that $f$ is smooth — that it is infinitely differentiable. These are a distinguished class of functions. But it turns out that being smooth is not sufficient: here is a counterexample.

In this figure, there are points $b$ arbitrarily near $b_0$ such that the secant line from $a_0$ to $b$ has positive slope, and points arbitrarily near such that the secant lines have negative slope. There are infinitely many mean value abscissae with $f'(c_0) = 0$, but all of them are either far from a point $c$ where $f'(c) > 0$ or far from a point $c$ where $f'(c) < 0$. And thus there is no continuous choice.

From a theorem-oriented point of view, our main theorem is that if $f$ is analytic, then there is *always* a locally continuous choice. That is, for every interval $[a_0, b_0]$, there exists a mean value abscissa $c$ such that $c = c(b)$ for some interval $B$ containing $b_0$.

But the purpose of this article isn’t simply to prove this theorem. The purpose is to exposit how the ideas that are used to study this problem and to prove these results are fundamentally based only on a couple of central ideas covered in introductory single and multivariable calculus. All of this paper is completely accessible to a student having studied only single variable calculus (and who is willing to believe that partial derivatives exist and are reasonable objects).

We prove and use simple-but-nontrivial versions of the contraction mapping theorem, the implicit function theorem, and Morse’s lemma. The implicit function theorem is enough to say that any abscissa $c_0$ such that $f''(c_0) \neq 0$ has a unique continuous extension. Thus immediately, for “most” intervals on “most” reasonable functions, we answer in the affirmative. Morse’s lemma allows us to say a bit more about the case when $f''(c_0) = 0$ but $f'''(c_0) \neq 0$. In this case there are either multiple continuous extensions or none. And a few small ingredients and the idea behind Morse’s lemma, combined with the implicit function theorem again, are enough to prove the main result.
## Student projects

A calculus student looking for a project to dive into and sharpen their calculus skills could find ideas here to sink their teeth into. Beginning by understanding this paper is a great start.

A good motivating question would be to carry on one additional step, and to study explicitly the behavior of a function near a point where $f''(c_0) = f'''(c_0) = 0$, but $f^{(4)}(c_0) \neq 0$.

A slightly more open question that we lightly touch on (but leave largely implicit) is the inverse question: when can one find a mean value abscissa $c$ such that the right endpoint $b$ can be written as a continuous function $b(c)$ for some neighborhood $C$ containing the initial point $c_0$? Much of the analysis is the same, but figuring it out would require some attention.

A much deeper question is to consider the abscissa as a function of both the left endpoint $a$ and the right endpoint $b$. The guiding question here could be to decide when one can write the abscissa as a continuous function $c(a, b)$ in a neighborhood of $(a_0, b_0)$. I would be interested to see a graphical description of the possible shapes of these functions — I’m not quite sure what they might look like.

There is also a nice computational problem. In the paper, we include several plots of solution curves in $(b, c)$ space. But we did this with a meshed implicit function theorem solver. A computationally inclined student could devise an explicit way of constructing solutions. On the one hand, this is guaranteed to work, since one can apply contraction mappings explicitly to make the resulting function from the implicit function theorem explicit. But on the other hand, many (most?) applications of the implicit function theorem are in more complicated high dimensional spaces, whereas the situation in this paper is the smallest nontrivial example.

## Producing the graphs

We made 13 graphs in 5 figures for this article. These pictures were created using matplotlib.
The data was created using numpy, scipy, and sympy from within the scipy/numpy python stack, and the actual creation was done interactively within a jupyter notebook. The notebook is available here (along with other relatively raw jupyter notebooks). The most complicated graph is this one.

This figure has graphs of three functions along the top. In each graph, the interval $[0, 3]$ is considered in the mean value theorem, and the point $c_0 = 1$ is a mean value abscissa. In each, we also have $f''(c_0) = 0$, and the point is that the behavior of $f''(b_0)$ has a large impact on the nature of the implicit functions. The three graphs along the bottom are in $(b, c)$ space and present all mean value abscissae for each $b$. This is not the graph of a function, but the local structure of these graphs is interesting and visually distinct.

The process of making these examples and figures is interesting in itself. We did not construct these figures explicitly; instead we chose certain points and certain values of derivatives at those points, and used Hermite interpolation to find polynomials with exactly those values.^{1}
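As a minimal sketch of Hermite interpolation (my own illustration, not the notebook’s code), here is a cubic matching prescribed values and first derivatives at two points:

```python
def cubic_hermite(x0, y0, m0, x1, y1, m1):
    """Return the cubic p with p(x0)=y0, p'(x0)=m0, p(x1)=y1, p'(x1)=m1."""
    h = x1 - x0
    def p(x):
        t = (x - x0) / h  # rescale to [0, 1]
        # Standard cubic Hermite basis on [0, 1].
        h00 = 2*t**3 - 3*t**2 + 1
        h10 = t**3 - 2*t**2 + t
        h01 = -2*t**3 + 3*t**2
        h11 = t**3 - t**2
        return y0*h00 + h*m0*h10 + y1*h01 + h*m1*h11
    return p

# A ramp with zero slope at both ends: p(0)=0, p'(0)=0, p(1)=1, p'(1)=0.
p = cubic_hermite(0.0, 0.0, 0.0, 1.0, 1.0, 0.0)
```

Prescribing higher derivatives as well (as the figures required) works the same way with higher-degree polynomials.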

In the future I plan on writing a note on the creation of these figures.

Posted in Expository, Math.CA, Mathematics
Tagged Calculus, implicit function theorem, mean value theorem, paper, student project
Leave a comment

The US House of Representatives has 435 voting members (and 6 non-voting members: one each from Washington DC, Puerto Rico, American Samoa, Guam, the Northern Mariana Islands, and the US Virgin Islands). Roughly speaking, the higher the population of a state is, the more representatives it should have.

But what does this really mean?

If we looked at the US Constitution to make this clear, we would find little help. The third clause of Article I, Section II of the Constitution says

Representatives and direct Taxes shall be apportioned among the several States which may be included within this Union, according to their respective Numbers … The number of Representatives shall not exceed one for every thirty thousand, but each state shall have at least one Representative.

This doesn’t give clarity.^{1} In fact, uncertainty surrounding proper apportionment of representatives led to the first presidential veto.

According to the 1790 Census, there were 3199415 free people and 694280 slaves in the United States.^{2}

When Congress sat to decide on apportionment in 1792, they initially computed the total (weighted) population of the United States to be 3199415 + (3/5)⋅694280 = 3615983. They noted that the Constitution says there should be no more than 1 representative for every 30000, so they divided the total population by 30000 and rounded down, getting 3615983/30000 ≈ 120.5.

Thus there were to be 120 representatives. If one takes each state and divides their populations by 30000, one sees that the states should get the following numbers of representatives^{3}

```
State ideal rounded_down
Vermont 2.851 2
NewHampshire 4.727 4
Maine 3.218 3
Massachusetts 12.62 12
RhodeIsland 2.281 2
Connecticut 7.894 7
NewYork 11.05 11
NewJersey 5.985 5
Pennsylvania 14.42 14
Delaware 1.851 1
Maryland 9.283 9
Virginia 21.01 21
Kentucky 2.290 2
NorthCarolina 11.78 11
SouthCarolina 6.874 6
Georgia 2.361 2
```

But here is a problem: the total number of rounded-down representatives is only 112, so there are 8 more representatives to give out. How did they decide which states should receive these extra representatives? They chose the 8 states with the largest fractional parts in the “ideal” column:

- New Jersey (0.985)
- Connecticut (0.894)
- South Carolina (0.874)
- Vermont (0.851)
- Delaware (0.851)
- Massachusetts+Maine (0.838)
- North Carolina (0.78)
- New Hampshire (0.727)

(Maine was part of Massachusetts at the time, which is why I combine their fractional parts.) Thus the original proposed apportionment gave each of these states one additional representative. Is this a reasonable conclusion?
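Before weighing that question, the largest-remainder computation itself can be replayed in a few lines of Python (a sketch using the “ideal” quotas from the table above):

```python
from math import floor

# "Ideal" quotas from the table above (state population / 30000); the
# historical tally combined Massachusetts and Maine, but treating them
# separately gives the same final allotment.
quotas = {
    "Vermont": 2.851, "NewHampshire": 4.727, "Maine": 3.218,
    "Massachusetts": 12.62, "RhodeIsland": 2.281, "Connecticut": 7.894,
    "NewYork": 11.05, "NewJersey": 5.985, "Pennsylvania": 14.42,
    "Delaware": 1.851, "Maryland": 9.283, "Virginia": 21.01,
    "Kentucky": 2.290, "NorthCarolina": 11.78, "SouthCarolina": 6.874,
    "Georgia": 2.361,
}

def largest_remainder(quotas, house_size):
    """Round every quota down, then give the leftover seats to the
    states with the largest fractional parts (the Hamilton scheme)."""
    alloc = {s: floor(q) for s, q in quotas.items()}
    leftover = house_size - sum(alloc.values())
    for s in sorted(quotas, key=lambda s: quotas[s] - alloc[s], reverse=True)[:leftover]:
        alloc[s] += 1
    return alloc

rep = largest_remainder(quotas, 120)
```

Running this reproduces the 1792 proposal: 112 rounded-down seats plus one extra each for the eight states listed above.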

Perhaps. But these 8 states each ended up having more than 1 representative for each 30000. Was this limit in the Constitution meant country-wide (so that 120 across the country is a fine number) or state-by-state (so that, for instance, Delaware, which had 59000 total population, should not be allowed to have more than 1 representative)?

There is the other problem that New Jersey, Connecticut, Vermont, New Hampshire, and Massachusetts were undoubtedly Northern states. Thus Southern representatives asked, *Is it not unfair that the fractional apportionment favours the North*?^{4}

Regardless of the exact reasoning, Secretary of State Thomas Jefferson and Attorney General Edmund Randolph (both from Virginia) urged President Washington to veto the bill, and he did. This was the first use of the presidential veto.

Afterwards, Congress got together and decided to start with 33000 people per representative and to ignore fractional parts entirely. This became known as the *Jefferson method of apportionment*, and it was used in the US until 1830. The subtle part of the method is deciding on the number 33000. In the US, the exact number of representatives sometimes changed from election to election; this number is closely related to the population-per-representative, but these were often chosen through political maneuvering as opposed to principled calculation.
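A sketch of the rule (the state names and populations here are made up for illustration):

```python
def jefferson(populations, divisor):
    # Divide each population by the divisor and ignore the fractional part.
    return {state: pop // divisor for state, pop in populations.items()}

# Illustrative (not census) figures:
pops = {"A": 633_000, "B": 264_500, "C": 102_000}
seats = jefferson(pops, 33_000)  # {'A': 19, 'B': 8, 'C': 3}
```

The politics hides in the divisor: lowering it from 33000 hands out more seats, and the rounding-down favors large states.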

As an aside, it’s interesting to note that this method of apportionment is widely used in the rest of the world, even though it was abandoned in the US.^{5} In fact, it is still used in Albania, Angola, Argentina, Armenia, Aruba, Austria, Belgium, Bolivia, Brazil, Bulgaria, Burundi, Cambodia, Cape Verde, Chile, Colombia, Croatia, the Czech Republic, Denmark, the Dominican Republic, East Timor, Ecuador, El Salvador, Estonia, Fiji, Finland, Guatemala, Hungary, Iceland, Israel, Japan, Kosovo, Luxembourg, Macedonia, Moldova, Monaco, Montenegro, Mozambique, Netherlands, Nicaragua, Northern Ireland, Paraguay, Peru, Poland, Portugal, Romania, San Marino, Scotland, Serbia, Slovenia, Spain, Switzerland, Turkey, Uruguay, Venezuela and Wales — as well as in many countries for election to the European Parliament.

At the core of these different ideas for apportionment is fairness. How can we decide if an apportionment is fair?

We’ll consider this question in the context of the post-1911 United States — after the number of seats in the House of Representatives was established. This number was set at 433, but with the proviso that anticipated new states Arizona and New Mexico would each come with an additional seat.^{6}

So given that there are 435 seats to apportion, how might we decide if an apportionment is fair? Fundamentally, this should relate to the number of people each representative actually represents.

For example, in the 1792 apportionment, the single Delawaran representative was there to represent all 55000 of its (weighted) population, while each of the two Rhode Island representatives corresponded to about 34000 Rhode Islanders. Within the House of Representatives, it was as though the voice of each Delawaran counted only 61 percent as much as the voice of each Rhode Islander.^{7}

The number of people each representative actually represents is at the core of the notion of fairness — but even then, the right measure is not obvious.

Suppose we enumerate the states, so that *S*_{i} refers to state *i*. We’ll also denote by *P*_{i} the population of state *i*, and we’ll let *R*_{i} denote the number of representatives allotted to state *i*.

In the ideal scenario, every representative would represent the exact same number of people. That is, we would have

$$

\text{pop. per rep. in state } i = \frac{P_i}{R_i} = \frac{P_j}{R_j} = \text{pop. per rep. in state } j

$$

for every pair of states *i* and *j*. But this won’t ever happen in practice.

Generally, we should expect $\frac{P_i}{R_i} \neq \frac{P_j}{R_j}$ for every pair of distinct states. If

$$

\frac{P_i}{R_i} > \frac{P_j}{R_j}, \tag{1}

$$

then we can say that each representative in state *i* represents more people, and thus those people have a diluted vote.

There are lots of pairs of states. How do we actually measure these inequalities? This would make an excellent question in a statistics class (illustrating how one can answer the same question in different, equally reasonable ways) or even a civics class.

A few natural ideas emerge:

- We might try to minimize the differences in constituency size: $\left \lvert \frac{P_i}{R_i} - \frac{P_j}{R_j} \right \rvert$.
- We might try to minimize the differences in per capita representation: $\left \lvert \frac{R_i}{P_i} - \frac{R_j}{P_j} \right \rvert$.
- We might take overall size into account, and try to minimize both the relative difference in constituency size and the relative difference in per capita representation.

This last one needs a bit of explanation. Define the **relative difference** between two numbers *x* and *y* to be

$$

\frac{\lvert x - y \rvert}{\min(x, y)}.

$$

Suppose that for a pair of states, we have that $(1)$ holds, i.e. that representatives in state *j* have smaller constituencies than in state *i* (and therefore people in state *j* have more powerful votes). Then the relative difference in constituency size is

$$

\frac{P_i/R_i - P_j/R_j}{P_j/R_j} = \frac{P_i/R_i}{P_j/R_j} - 1.

$$

The relative difference in per capita representation is

$$

\frac{R_j/P_j - R_i/P_i}{R_i/P_i} = \frac{R_j/P_j}{R_i/P_i} - 1 =

\frac{P_i/R_i}{P_j/R_j} - 1.

$$

Thus these are the same! By accounting for differences in size by taking relative proportions, we see that minimizing relative difference in constituency size and minimizing relative difference in per capita representation are actually the same.
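The algebra is easy to sanity-check numerically; using approximate 1792-style figures for Delaware (state $i$) and Rhode Island (state $j$):

```python
# Approximate weighted 1792 figures: Delaware (i) and Rhode Island (j).
P_i, R_i = 55_000, 1
P_j, R_j = 68_000, 2

# Relative difference in constituency size...
rel_constituency = (P_i / R_i - P_j / R_j) / (P_j / R_j)
# ...equals the relative difference in per capita representation.
rel_per_capita = (R_j / P_j - R_i / P_i) / (R_i / P_i)

assert abs(rel_constituency - rel_per_capita) < 1e-12
```

Both come out to about 0.62: each Delawaran representative had a constituency roughly 62 percent larger, matching the dilution described above.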

All three of these measures seem reasonable at first inspection. Unfortunately, they all give different apportionments (and all are different from Jefferson’s scheme — though to be fair, Jefferson’s scheme doesn’t seek to minimize inequality and there is no reason to think it should behave the same).

Each of these ideas leads to a different apportionment scheme, and in fact each has a name.

- Minimizing differences in constituency size is the *Dean* method.
- Minimizing differences in per capita representation is the *Webster* method.
- Minimizing relative differences in both constituency size and per capita representation is the *Hill* (or sometimes *Huntington-Hill*) method.

Further, each of these schemes has been used at some time in US history. Webster’s method was used immediately after the 1840 census, but for the 1850 census the original Alexander Hamilton scheme (the scheme vetoed by Washington in 1792) was used. In fact, the Apportionment Act of 1850 set the Hamilton method as the primary method, and this was nominally used until 1900.^{8} The Webster method was used again immediately after the 1910 census. Due to claims of incomplete and inaccurate census counts, no apportionment occurred based on the 1920 census.^{9}

In 1929 an automatic apportionment act was passed.^{10} In it, up to three different apportionment schemes would be provided to Congress after each census, based on a total of 435 seats:

- The apportionment that would come from whatever scheme was most recently used. (In 1930, this would be the Webster method).
- The apportionment that would come from the Webster method.
- The apportionment that would come from the newly introduced Hill method.

If one reads congressional discussion from the time, it is helpful to know that Webster’s method is sometimes called the *method of major fractions* and Hill’s method is sometimes called the *method of equal proportions*. Further, in a letter written by Bliss, Brown, Eisenhart, and Pearl of the National Academy of Sciences, Hill’s method was declared to be the recommendation of the Academy.^{11} From 1930 on, Hill’s method has been used.

The Hamilton method led to a few paradoxes and highly counterintuitive behaviors that many representatives found disagreeable. In 1880, a paradox now called the *Alabama paradox* was discovered. While deciding on the size of the House, it was noticed that if the House had 299 members, Alabama would have 8 representatives, but if the House had 300 members, Alabama would have 7 representatives — that is, making one *more* seat available led to Alabama receiving one *fewer* seat.
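We can witness the paradox with a toy example (made-up populations, not the 1880 census data): under Hamilton’s largest-remainder rule, state C loses a seat when the house grows from 10 to 11.

```python
from math import floor

def hamilton(populations, house_size):
    """Hamilton (largest-remainder) apportionment."""
    total = sum(populations.values())
    quotas = {s: house_size * p / total for s, p in populations.items()}
    alloc = {s: floor(q) for s, q in quotas.items()}
    extras = house_size - sum(alloc.values())
    # Hand out remaining seats by largest fractional part.
    for s in sorted(quotas, key=lambda s: quotas[s] - alloc[s], reverse=True)[:extras]:
        alloc[s] += 1
    return alloc

# Made-up populations exhibiting the Alabama paradox:
pops = {"A": 6, "B": 6, "C": 2}
print(hamilton(pops, 10))  # {'A': 4, 'B': 4, 'C': 2}
print(hamilton(pops, 11))  # {'A': 5, 'B': 5, 'C': 1}
```

At 10 seats C has the largest fractional part and wins the spare seat; at 11 seats the fractional parts of A and B leapfrog C’s, and C drops back to its floor.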

The problem is the fluctuating relationships between the many fractional parts of the ideal number of representatives per state (similar to those tallied in the table in the section **The Apportionment Act of 1792**).

Another paradox was discovered in 1900, known as the *Population paradox*. This is a scenario in which a state with a large population and rapid growth can lose a seat to a state with a small population and smaller population growth. In 1900, Virginia lost a seat to Maine, even though Virginia’s population was larger and growing much more rapidly.

In particular, in 1900, Virginia had 1854184 people and Maine had 694466 people, so Virginia had 2.67 times the population of Maine. In 1901, Virginia had 1873951 people and Maine had 699114 people, so Virginia had 2.68 times the population of Maine. And yet Hamilton apportionment would have given 10 seats to Virginia and 3 to Maine in 1900, but 9 to Virginia and 4 to Maine in 1901.

Central to this paradox is that even though Virginia was growing faster than Maine, the rest of the nation was growing faster still, and proportionally Virginia lost more because it was a larger state. But it’s still paradoxical for a state to lose a representative to a second state that is both smaller in population and growing less rapidly between censuses.^{12}

The Hill method can be shown to not suffer from either the Alabama paradox or the Population paradox. That it doesn’t suffer from these paradoxical behaviours and that it seeks to minimize a meaningful measure of inequality led to its adoption in the US.^{13}

Since 1930, the US has used the Hill method to apportion seats for the House of Representatives. But as described above, it may be hard to understand how to actually apply the Hill method. Recall that *P*_{i} is the population of state *i*, and *R*_{i} is the number of representatives allocated to state *i*. The Hill method seeks to minimize

$$

\frac{P_i/R_i - P_j/R_j}{P_j/R_j} = \frac{P_i/R_i}{P_j/R_j} - 1

$$

whenever *P*_{i}/*R*_{i} > *P*_{j}/*R*_{j}. Stated differently, the Hill method seeks to guarantee the smallest relative differences in constituency size.

We can work out a different way of understanding this apportionment that is easier to implement in practice.

Suppose that we have allocated all of the representatives, with state $j$ receiving $R_j$ representatives, and suppose that this allocation successfully minimizes relative differences in constituency size. Take two different states $i$ and $j$ with $P_i/R_i > P_j/R_j$. (If no such pair exists, then the allocation is perfect.)

We can ask if it would be a good idea to move one representative from state $j$ to state $i$, since state $j$’s constituencies are smaller. This can be thought of as working with $R_i' = R_i + 1$ and $R_j' = R_j - 1$. If this transfer lessened the inequality then it would be made — but since we are supposing that the allocation minimizes the relative difference in constituency size, the relative difference after the transfer must be at least as large. This necessarily means that $P_j/R_j' > P_i/R_i'$ (since otherwise the relative difference would be strictly smaller) and

$$

\frac{P_j R_i'}{P_i R_j'} - 1 \geq \frac{P_i R_j}{P_j R_i} - 1

$$

(since the relative difference must be at least as large). This is equivalent to

$$

\frac{P_j(R_i+1)}{P_i(R_j-1)} \geq \frac{P_iR_j}{P_jR_i}

\iff

\frac{P_j^2}{(R_j-1)R_j} \geq \frac{P_i^2}{R_i(R_i+1)}.

$$

As every variable is positive, we can rewrite this as

$$

\frac{P_j}{\sqrt{(R_j - 1)R_j}} \geq \frac{P_i}{\sqrt{R_i(R_i+1)}}. \tag{2}

$$

We’ve shown that $(2)$ must hold whenever *P*_{i}/*R*_{i} > *P*_{j}/*R*_{j} in a system that minimizes relative difference in constituency size. But in fact it must hold for all pairs of states *i* and *j*.

It clearly holds if $i = j$, since then the denominator on the left, $\sqrt{(R_i - 1)R_i}$, is strictly smaller than the denominator on the right.

If we are in the case when $P_j/R_j > P_i/R_i$, then we necessarily have the chain $P_j/(R_j - 1) > P_j/R_j > P_i/R_i > P_i/(R_i + 1)$. Multiplying the inner and outer inequalities shows that $(2)$ holds trivially in this case.

This inequality shows that the greatest obstruction to being perfectly apportioned as per Hill’s method is the largest fraction

$$ \frac{P_i}{\sqrt{R_i(R_i+1)}} $$

being too large. (Some call this term the *Hill rank-index*).

This observation leads to the following iterative construction of a Hill apportionment. Initially, assign every state 1 representative (since by the Constitution, each state gets at least one representative). Then, given an apportionment for $n$ seats, we can get an apportionment for $n + 1$ seats by assigning the additional seat to whichever state $i$ maximizes the Hill rank-index $P_i/\sqrt{R_i(R_i+1)}$.

Further, it can be shown that the resulting Hill apportionment is unique (except in the case of ties in the Hill rank-index, which are exceedingly rare in practice).

This is very quickly and easily implemented in code. In a later note, I will share the code I used to compute the various data for this note, as well as an implementation of Hill apportionment.
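For instance, here is a minimal sketch of the iterative construction (with made-up populations), keeping the rank-indices in a priority queue:

```python
import heapq

def hill(populations, total_seats):
    """Iterative Huntington-Hill apportionment: every state starts with one
    seat, and each further seat goes to the state maximizing the Hill
    rank-index P / sqrt(R(R+1))."""
    alloc = {state: 1 for state in populations}
    # Max-heap (via negation) keyed on the current rank-index.
    heap = [(-pop / (1 * 2) ** 0.5, state) for state, pop in populations.items()]
    heapq.heapify(heap)
    for _ in range(total_seats - len(populations)):
        _, state = heapq.heappop(heap)
        alloc[state] += 1
        r = alloc[state]
        heapq.heappush(heap, (-populations[state] / (r * (r + 1)) ** 0.5, state))
    return alloc

# A toy example with made-up populations:
seats = hill({"A": 100, "B": 50, "C": 25}, 6)  # {'A': 3, 'B': 2, 'C': 1}
```

Each of the 435 seats costs one heap operation, so the full House is apportioned essentially instantly.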

Officially, Dean’s method of apportionment has never been used. But it was perhaps used in 1870 without being described. Officially, Hamilton’s method was in place and the size of the House was agreed to be 292. But the actual apportionment that occurred agreed with Dean’s method, not Hamilton’s method. Specifically, New York and Illinois were each given one fewer seat than Hamilton’s method would have given, while New Hampshire and Florida were given one additional seat each.

There are many circumstances surrounding the 1870 census and apportionment that make this a particularly convoluted time. Firstly, the US had just experienced its Civil War, where millions of people died and millions others moved or were displaced. Animosity and reconstruction were both in full swing. Secondly, the US passed the 14th amendment in 1868, so that suddenly the populations of Southern states grew as former slaves were finally allowed to be counted fully.

One might think that having two pairs of states swap a representative would be mostly inconsequential. But this difference — using Dean’s method instead of the agreed-on Hamilton method — changed the result of the 1876 Presidential election. In this election, Samuel Tilden won New York while Rutherford B. Hayes won Illinois, New Hampshire, and Florida. As a result, Tilden received one fewer electoral vote and Hayes received one additional electoral vote — and the total electoral voting in the end had Hayes win with 185 votes to Tilden’s 184.

There is still one further mitigating factor, however, that causes this to be yet more convoluted. The 1876 election is perhaps the most disputed presidential election. In Florida, Louisiana, and South Carolina, each party reported that its candidate had won the state. Legitimacy was in question, and it’s widely believed that a deal was struck between the Democratic and Republican parties (see wikipedia and 270 to win). As a result of this deal, the Republican candidate Rutherford B. Hayes would gain all disputed votes and remove federal troops (which had been propping up reconstructive efforts) from the South. This marked the end of the “Reconstruction” period, and allowed the rise of the Democratic Redeemers (and their rampant black voter disenfranchisement) in the South.

Similar in consequence though not in controversy, the apportionment of 1990 influenced the results of the 2000 presidential election between George W. Bush and Al Gore (the 2000 census was not complete before the election took place, so the election occurred with the 1990 electoral college sizes). The modern Hill apportionment method was used, as it has been since 1930. But interestingly, if the originally proposed Hamilton method of 1792 had been used, the electoral college would have been tied at 269.^{14} If Jefferson’s method had been used, then Gore would have won with 271 votes to Bush’s 266.

These decisions have far-reaching consequences!

- Balinski, Michel L., and H. Peyton Young. Fair representation: meeting the ideal of one man, one vote. Brookings Institution Press, 2010.
- Balinski, Michel L., and H. Peyton Young. “The quota method of apportionment.” The American Mathematical Monthly 82.7 (1975): 701-730.
- Bliss, G. A., Brown, E. W., Eisenhart, L. P., & Pearl, R. (1929). Report to the President of the National Academy of Sciences. February, 9, 1015-1047.
- Crocker, R. House of Representatives Apportionment Formula: An Analysis of Proposals for Change and Their Impact on States. DIANE Publishing, 2011.
- Huntington, The Apportionment of Representatives in Congress, Transactions of the American Mathematical Society 30 (1928), 85–110.
- Peskin, Allan. “Was there a Compromise of 1877.” The Journal of American History 60.1 (1973): 63-75.
- US Census Results
- US Constitution
- US Congressional Record, as collected at https://memory.loc.gov/ammem/amlaw/lwaclink.html
- George Washington’s collected papers, as archived at https://web.archive.org/web/20090124222206/http://gwpapers.virginia.edu/documents/presidential/veto.html
- Wikipedia on the Compromise of 1877, at https://en.wikipedia.org/wiki/Compromise_of_1877
- Wikipedia on Arthur Vandenberg, at https://en.wikipedia.org/wiki/Arthur_Vandenberg

Posted in Data, Expository, Mathematics, Politics, Story
Tagged apportionment, election, Hill apportionment
Leave a comment

Here are some notes for my talk **Finding Congruent Numbers, Arithmetic Progressions of Squares, and Triangles** (an invitation to analytic number theory), which I’m giving on Tuesday 26 February at Macalester College.

The slides for my talk are available here.

The overarching idea of the talk is to explore the deep relationship between

- right triangles with rational side lengths and area $n$,
- three-term arithmetic progressions of squares with common difference $n$, and
- rational points on the elliptic curve $Y^2 = X^3 - n^2 X$.

If one of these exist, then all three exist, and in fact there are one-to-one correspondences between each of them. Such an $n$ is called a **congruent number**.

By understanding this relationship, we also describe the ideas and results in the paper A Shifted Sum for the Congruent Number Problem, which I wrote jointly with Tom Hulse, Chan Ieong Kuan, and Alex Walker.

Towards the end of the talk, I say that in practice, the best way to decide if a (reasonably sized) number is congruent is through elliptic curves. Given a computer, we can investigate whether the number $n$ is congruent through a computer algebra system like sage.^{1}

For the rest of this note, I’ll describe how one can use sage to determine whether a number is congruent, and how to use sage to add points on elliptic curves to generate more triangles corresponding to a particular congruent number.

Firstly, one needs access to sage. It’s free to install, but it’s quite large. The easiest way to begin using sage immediately is to use cocalc.com, a free interface to sage (and other tools) that was created by William Stein, who also created sage.

In a sage session, we can create an elliptic curve through

```
> E6 = EllipticCurve([-36, 0])
> E6
Elliptic Curve defined by y^2 = x^3 - 36*x over Rational Field
```

More generally, to create the curve corresponding to whether or not $n$ is congruent, you can use

```
> n = 6 # (or anything you want)
> E = EllipticCurve([-n**2, 0])
```

We can ask sage whether our curve has many rational points by asking it to (try to) compute the rank.

```
> E6.rank()
1
```

If the rank is at least $1$, then there are infinitely many rational points on the curve and $n$ is a congruent number. If the rank is $0$, then $n$ is not congruent.^{2}

For the curve $Y^2 = X^3 - 36 X$ corresponding to whether $6$ is congruent, sage returns that the rank is $1$. We can ask sage to try to find a rational point on the elliptic curve through

```
> E6.point_search(10)
[(-3 : 9 : 1)]
```

The `10` in this code is a limit on the complexity of the point. The precise definition isn’t important — using $10$ is a reasonable limit for us.

We see that this outputs something. When sage examines the elliptic curve, it uses the equation $Y^2 Z = X^3 - 36 X Z^2$ — it turns out that in many cases, it’s easier to perform computations when every term is a polynomial of the same degree. The coordinates it gives us are of the form $(X : Y : Z)$, which looks a bit odd. We can ask sage to return just the $XY$-coordinates as well.

```
> Pt = E6.point_search(10)[0] # The [0] means to return the first element of the list
> Pt.xy()
(-3, 9)
```

In my talk, I describe a correspondence between points on elliptic curves and rational right triangles; there, it arises from a choice of coordinates. But what matters for us right now is that the correspondence taking a point $(x, y)$ on an elliptic curve to a triangle $(a, b, c)$ is given by

$$(x, y) \mapsto \Big( \frac{n^2-x^2}{y}, \frac{-2xn}{y}, \frac{n^2 + x^2}{y} \Big).$$

We can write a sage function to perform this map for us, through

```
> def pt_to_triangle(P):
      x, y = P.xy()
      return ((36 - x**2)/y, -2*x*6/y, (36 + x**2)/y)
> pt_to_triangle(Pt)
(3, 4, 5)
```

This returns the $(3, 4, 5)$ triangle!

Of course, we knew this triangle the whole time. But we can use sage to get more points. A very cool fact is that rational points on elliptic curves form a group under a sort of addition — we can add points on elliptic curves together and get more rational points. Sage is very happy to perform this addition for us, and we can then see what triangle results.

```
> Pt2 = Pt + Pt
> Pt2.xy()
(25/4, -35/8)
> pt_to_triangle(Pt2)
(7/10, 120/7, -1201/70)
```

Another rational triangle with area $6$ is the $(7/10, 120/7, 1201/70)$ triangle. (You might notice that sage returned a negative hypotenuse, but it’s the absolute values that matter for the area). After scaling this to an integer triangle, we get the integer right triangle $(49, 1200, 1201)$ (and we can check that the squarefree part of the area is $6$).
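We can double-check that integer triangle in plain Python:

```python
# The scaled triangle from the second point on the curve.
a, b, c = 49, 1200, 1201
assert a**2 + b**2 == c**2     # it really is a right triangle
area = a * b // 2              # 29400 = 6 * 70**2
assert area == 6 * 70**2       # squarefree part of the area is 6
```

Scaling an integer triangle multiplies its area by a square, so the squarefree part 6 is exactly what certifies that 6 is congruent.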

Let’s do one more.

```
> Pt3 = Pt + Pt + Pt
> Pt3.xy()
(-1587/1369, -321057/50653)
> pt_to_triangle(Pt3)
(-4653/851, -3404/1551, -7776485/1319901)
```

That’s a complicated triangle! It may be fun to experiment some more — the triangles rapidly become very, very complicated. In fact, it was very important to the main result of our paper that these triangles become so complicated so quickly!

Posted in Expository, Math.NT, Mathematics, Programming, sage, sagemath
Leave a comment

Today, I’m giving a talk on *Zeroes of L-functions associated to half-integral weight modular forms*, which includes some joint work with Li-Mei Lim and Tom Hulse, and which alludes to other joint work touched on previously with Jeff Hoffstein and Min Lee (and which perhaps should have been finished a few years ago).

Posted in Math.NT, Mathematics
Tagged half-integral weight modular form, l function, modular form, zeroes
Leave a comment

Last year, my coauthors Tom Hulse, Chan Ieong Kuan, and Alex Walker posted a paper to the arXiv called “Second Moments in the Generalized Gauss Circle Problem”. I’ve briefly described its contents before.

This paper has been accepted and will appear in Forum of Mathematics: Sigma.

This is the first time I’ve submitted to the Forum of Mathematics, and I must say that this has been a very good journal experience. One interesting aspect about FoM: Sigma is that they are immediate (gold) open access, and they don’t release in issues. Instead, articles become available (for free) from them once the submission process is done. I was reviewing a publication-proof of the paper yesterday, and they appear to be very quick with regards to editing. Perhaps the paper will appear before the end of the year.

An updated version (the version from before the handling of proofs at the journal, so there will be a number of mostly aesthetic differences with the published version) of the paper will appear on the arXiv on Monday 10 December.^{1}

There is one major addition to the paper that didn’t appear in the original preprint. At one of the referee’s suggestions, Chan and I wrote an appendix. The major content of this appendix concerns a technical detail about Rankin-Selberg convolutions.

If $f$ and $g$ are weight $k$ cusp forms on $\mathrm{SL}(2, \mathbb{Z})$ with expansions $$ f(z) = \sum_{n \geq 1} a(n) e(nz), \quad g(z) = \sum_{n \geq 1} b(n) e(nz), $$ then one can use a (real analytic) Eisenstein series $$ E(s, z) = \sum_{\gamma \in \mathrm{SL}(2, \mathbb{Z})_\infty \backslash \mathrm{SL}(2, \mathbb{Z})} \mathrm{Im}(\gamma z)^s $$ to recognize the Rankin-Selberg $L$-function \begin{equation}\label{RS} L(s, f \otimes g) := \zeta(s) \sum_{n \geq 1} \frac{a(n)b(n)}{n^{s + k - 1}} = h(s) \langle f g y^k, E(s, z) \rangle, \end{equation} where $h(s)$ is an easily-understandable function of $s$ and where $\langle \cdot, \cdot \rangle$ denotes the Petersson inner product.

When $f$ and $g$ are not cusp forms, or when $f$ and $g$ are modular with respect to a congruence subgroup of $\mathrm{SL}(2, \mathbb{Z})$, then there are adjustments that must be made to the typical construction of $L(s, f \otimes g)$.

When $f$ and $g$ are not cusp forms, then Zagier^{2} provided a way to recognize $L(s, f \otimes g)$ when $f$ and $g$ are modular on the full modular group $\mathrm{SL}(2, \mathbb{Z})$. And under certain conditions that he describes, he shows that one can still recognize $L(s, f \otimes g)$ as an inner product with an Eisenstein series as in \eqref{RS}.

In principle, his method of proof would apply for non-cuspidal forms defined on congruence subgroups, but in practice this becomes too annoying and bogged down with details to work with. Fortunately, in 2000, Gupta^{3} gave a different construction of $L(s, f \otimes g)$ that generalizes more readily to non-cuspidal forms on congruence subgroups. His construction is very convenient, and it shows that $L(s, f \otimes g)$ has all of the properties expected of it.

However Gupta does not show that there are certain conditions under which one can recognize $L(s, f \otimes g)$ as an inner product against an Eisenstein series.^{4} For this paper, we need to deal very explicitly and concretely with $L(s, \theta^2 \otimes \overline{\theta^2})$, which is formed from the modular form $\theta^2$, non-cuspidal on a congruence subgroup.

The Appendix to the paper can be thought of as an extension of Gupta’s paper: it uses Gupta’s ideas and techniques to prove a result analogous to \eqref{RS}. We then use this to get the explicit understanding necessary to tackle the Gauss Sphere problem.

There is more to this story. I’ll return to it in a later note.

I should say that there are many other revisions between the original preprint and the final one. These are mainly due to the extraordinary efforts of two Referees. One Referee was kind enough to give us approximately 10 pages of itemized suggestions and comments.

When I first opened these comments, I was a bit afraid. Having *so many comments* was daunting. But this Referee really took his or her time to point us in the right direction, and the resulting paper is vastly improved (and in many cases shortened, although the appendix has hidden the simplified arguments cut in length).

More broadly, the Referee acted as a sort of mentor with respect to my technical writing. I have a lot of opinions on technical writing,^{5} but this process changed and helped sharpen my ideas concerning good technical math writing.

I sometimes hear a lot of negativity about peer review, but this particular pair of Referees turned the publication process into an opportunity to learn about good mathematical exposition — I didn’t expect this.

I was also surprised by the infrastructure at the University of Warwick for handling a gold open access submission. As a gold open access journal, Forum of Math: Sigma has an author-pays model — or rather, the author’s institution pays. It took essentially no time at all for Warwick to arrange the payment (about 500 pounds).

This is a not-inconsequential amount of money, but it is much less than the 1500 dollars that PLoS One charges. The comparison with PLoS One is perhaps apt: PLoS is older, and perhaps paved the way for modern gold open access journals like FoM. PLoS was started by a group of established biologists and chemists, including a Nobel prize winner; FoM was started by a group of established mathematicians, including multiple Fields medalists.^{6}

I will certainly consider Forum of Mathematics in the future.

Posted in Expository, Math.NT, Mathematics, Warwick
Tagged gauss circle problem, l function, number theory, rankin-selberg convolution
Leave a comment

In my previous note, I looked at an amusing but inefficient way to compute the sum $$ \sum_{n \geq 1} \frac{\varphi(n)}{2^n - 1}$$ using Mellin and inverse Mellin transforms. This was great fun, but it required far more work than the straightforward approach offered by Lambert series.

However, Adam Harper suggested that there is a nice shortcut that we can use (although coming up with this shortcut requires either a lot of familiarity with Mellin transforms or knowledge of the answer).

In the Lambert series approach, one shows quickly that $$ \sum_{n \geq 1} \frac{\varphi(n)}{2^n - 1} = \sum_{n \geq 1} \frac{n}{2^n},$$ and then evaluates this last sum directly. For the Mellin transform approach, we might ask: do the two functions $$ f(x) = \sum_{n \geq 1} \frac{\varphi(n)}{2^{nx} - 1}$$ and $$ g(x) = \sum_{n \geq 1} \frac{n}{2^{nx}}$$ have the same Mellin transforms? From the previous note, we know that they have the same values at $1$.
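The first identity (and the common value $2$ of both sums) is easy to check numerically. Here is a minimal sketch in Python, where `phi` is a hypothetical helper computing Euler’s totient by direct count:

```python
from math import gcd

def phi(n):
    # Euler's totient, by direct count (fine for small n)
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

# Truncate both series at n = 60; the tails are smaller than 1e-15.
lhs = sum(phi(n) / (2 ** n - 1) for n in range(1, 61))
rhs = sum(n / 2 ** n for n in range(1, 61))
print(lhs, rhs)  # both are ≈ 2.0
```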

We also showed very quickly that $$ \mathcal{M} [f] = \frac{1}{(\log 2)^s} \Gamma(s) \zeta(s-1). $$ The more difficult parts of the previous note arose in the evaluation of the inverse Mellin transform at $x=1$.

Let us compute the Mellin transform of $g$. We find that $$ \begin{align}
\mathcal{M}[g] &= \sum_{n \geq 1} n \int_0^\infty \frac{1}{2^{nx}} x^s \frac{dx}{x} \notag \\
&= \sum_{n \geq 1} n \int_0^\infty \frac{1}{e^{nx \log 2}} x^s \frac{dx}{x} \notag \\
&= \sum_{n \geq 1} \frac{n}{(n \log 2)^s} \int_0^\infty x^s e^{-x} \frac{dx}{x} \notag \\
&= \frac{1}{(\log 2)^s} \zeta(s-1)\Gamma(s). \notag
\end{align}$$ To go from the second line to the third, we made the change of variables $x \mapsto x/(n \log 2)$, which turns the integral into precisely the definition of the Gamma function.

Thus we see that $$ \mathcal{M}[g] = \frac{1}{(\log 2)^s} \Gamma(s) \zeta(s-1) = \mathcal{M}[f],$$ and thus $f(x) = g(x)$. (Sufficiently “nice” functions with the same Mellin transform are themselves the same, exactly as with Fourier transforms.)

This shows that not only is $$ \sum_{n \geq 1} \frac{\varphi(n)}{2^n - 1} = \sum_{n \geq 1} \frac{n}{2^n},$$ but in fact $$ \sum_{n \geq 1} \frac{\varphi(n)}{2^{nx} - 1} = \sum_{n \geq 1} \frac{n}{2^{nx}}$$ for all $x > 1$.
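This stronger statement can also be checked numerically. Here is a quick sketch comparing truncations of the two series at a few values of $x > 1$ (the truncation point $N = 60$ is an arbitrary choice, and `phi` is again a hypothetical direct-count totient helper):

```python
from math import gcd

def phi(n):
    # Euler's totient, by direct count (fine for small n)
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def f(x, N=60):
    return sum(phi(n) / (2 ** (n * x) - 1) for n in range(1, N + 1))

def g(x, N=60):
    return sum(n / 2 ** (n * x) for n in range(1, N + 1))

for x in (1.5, 2.0, 3.0):
    print(x, f(x), g(x))  # f(x) and g(x) agree at each x
```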

I think that’s sort of slick.

Posted in Math.NT, Mathematics, Warwick
Tagged euler phi, Mellin Transform, number theory, sum evaluation
Leave a comment