Making Plots of Modular Forms


Inspired by the images and ideas of Elias Wegert, I thought it might be interesting to attempt to implement a version of his colorizing technique for complex functions in sage. The purpose is ultimately to revisit how one plots modular forms in the LMFDB (see lmfdb.org and click around to see various plots — some are good, others are less good).

 

The challenge with plotting a function from $\mathbb{C} \longrightarrow \mathbb{C}$ is that the graph is naturally 4-dimensional, and we are very bad at visualizing 4d objects. In fact, we want to use only 2d to visualize it.

A complex number $z = re^{i \theta}$ is determined by the magnitude ($r$) and the argument ($\theta$). Thus
one typical approach to represent the value taken by a function $f$ at a point $z$ is to represent the magnitude of $f(z)$ in terms of the brightness, and to represent the argument in terms of color.
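As a rough sketch of this coloring scheme (my own minimal numpy/matplotlib version, not the Sage implementation mentioned above; the function name and the particular brightness scaling are my own choices), one might write:

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import hsv_to_rgb

def domain_color(f, re=(-2, 2), im=(-2, 2), n=500):
    """Color the plane by f: argument -> hue, magnitude -> brightness."""
    x = np.linspace(re[0], re[1], n)
    y = np.linspace(im[0], im[1], n)
    z = x[None, :] + 1j * y[:, None]
    w = f(z)
    hue = (np.angle(w) / (2 * np.pi)) % 1.0        # argument determines the color
    value = 1.0 - 1.0 / (1.0 + np.abs(w)**0.3)     # magnitude determines the brightness
    rgb = hsv_to_rgb(np.dstack([hue, np.ones_like(hue), value]))
    plt.imshow(rgb, origin="lower", extent=(re[0], re[1], im[0], im[1]))
    plt.show()

domain_color(lambda z: z)    # the identity map colors the plane itself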

For example, the typical complex space would then look like the following.



Non-real poles and irregularity of distribution I

$\DeclareMathOperator{\SL}{SL}$ $\DeclareMathOperator{\MT}{MT}$ After the positive feedback from the Maine-Quebec Number Theory conference, I have taken some time to write up (and slightly strengthen) these results.

We study the general theory of Dirichlet series $D(s) = \sum_{n \geq 1} a(n) n^{-s}$ and the associated summatory function of the coefficients, $A(x) = \sum_{n \leq x}' a(n)$ (where the prime over the summation means the last term is to be multiplied by $1/2$ if $x$ is an integer). For convenience, we will suppose that the coefficients $a(n)$ are real, that not all $a(n)$ are zero, that each Dirichlet series converges in some half-plane, and that each Dirichlet series has meromorphic continuation to $\mathbb{C}$. Perron's formula (or more generally, the forward and inverse Mellin transforms) shows that $D(s)$ and $A(x)$ are duals and satisfy \begin{equation}\label{eq:basic_duality} \frac{D(s)}{s} = \int_1^\infty \frac{A(x)}{x^{s+1}} dx, \quad A(x) = \frac{1}{2 \pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} \frac{D(s)}{s} x^s ds \end{equation} for an appropriate choice of $\sigma$.

Many results in analytic number theory take the form of showing that $A(x) = \MT(x) + E(x)$ for a "Main Term" $\MT(x)$ and an "Error Term" $E(x)$. Roughly speaking, the terms in the main term $\MT(x)$ correspond to poles of $D(s)$, while $E(x)$ is hard to understand. Upper bounds for the error term give bounds for how much $A(x)$ can deviate from the expected size, and thus describe the regularity in the distribution of the coefficients $\{a(n)\}$. In this article, we investigate lower bounds for the error term, corresponding to irregularity in the distribution of the coefficients.

To get the best understanding of the error terms, it is often necessary to work with smoothed sums $A_v(x) = \sum_{n \geq 1} a(n) v(n/x)$ for a weight function $v(\cdot)$. In this article, we consider nice weight functions, i.e. weight functions with good behavior and whose Mellin transforms have good behavior. For almost all applications, it suffices to consider weight functions $v(x)$ that are piecewise smooth on the positive real numbers, and which take values halfway between jump discontinuities.

For a weight function $v(\cdot)$, denote its Mellin transform by \begin{equation} V(s) = \int_0^\infty v(x)x^{s} \frac{dx}{x}. \end{equation} Then we can study the more general dual family \begin{equation}\label{eq:general_duality} D(s) V(s) = \int_1^\infty \frac{A_v(x)}{x^{s+1}} dx, \quad A_v(x) = \frac{1}{2 \pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} D(s) V(s) x^s ds. \end{equation}
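As a quick concrete check (an aside of mine, not part of the original write-up), sympy confirms that the exponential weight $v(x) = e^{-x}$, which produces the smoothed sums $\sum a(n) e^{-n/X}$ appearing below, has Mellin transform $V(s) = \Gamma(s)$:

import sympy

x, s = sympy.symbols('x s', positive=True)
v = sympy.exp(-x)                                       # exponential weight
V = sympy.integrate(v * x**(s - 1), (x, 0, sympy.oo))   # its Mellin transform
print(V)                                                # gamma(s)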

We prove two results governing the irregularity of distribution of weighted sums. Firstly, we prove that a non-real pole of $D(s)V(s)$ guarantees an oscillatory error term for $A_v(x)$.

Theorem 1

Suppose $D(s)V(s)$ has a pole at $s = \sigma_0 + it_0$ with $t_0 \neq 0$ of order $r$. Let $\MT(x)$ be the sum of the residues of $D(s)V(s)x^s$ at all real poles $s = \sigma$ with $\sigma \geq \sigma_0$. Then \begin{equation} \sum_{n \geq 1} a(n) v(\tfrac{n}{x}) - \MT(x) = \Omega_\pm\big( x^{\sigma_0} \log^{r-1} x\big). \end{equation}


Here and below, we use the notation $f(x) = \Omega_+ g(x)$ to mean that there is a constant $k > 0$ such that $\limsup f(x)/\lvert g(x) \rvert > k$ and $f(x) = \Omega_- g(x)$ to mean that $\liminf f(x)/\lvert g(x) \rvert < -k$. When both are true, we write $f(x) = \Omega_\pm g(x)$. This means that $f(x)$ is at least as positive as $\lvert g(x) \rvert$ and at least as negative as $-\lvert g(x) \rvert$ infinitely often.

Theorem 2

Suppose $D(s)V(s)$ has at least one non-real pole, and that the supremum of the real parts of the non-real poles of $D(s)V(s)$ is $\sigma_0$. Let $\MT(x)$ be the sum of the residues of $D(s)V(s)x^s$ at all real poles $s = \sigma$ with $\sigma \geq \sigma_0$. Then for any $\epsilon > 0$, \begin{equation} \sum_{n \geq 1} a(n) v(\tfrac{n}{x}) - \MT(x) = \Omega_\pm( x^{\sigma_0 - \epsilon} ). \end{equation}


The idea at the core of these theorems is old, and was first noticed during the investigation of the error term in the prime number theorem. To prove them, we generalize proofs given in Chapter 5 of Ingham's Distribution of Prime Numbers (originally published in 1932, but recently republished). There, Ingham proves that $\psi(x) - x = \Omega_\pm(x^{\Theta - \epsilon})$ and $\psi(x) - x = \Omega_\pm(x^{1/2})$, where $\psi(x) = \sum_{p^n \leq x} \log p$ is Chebyshev's second function and $\Theta \geq \frac{1}{2}$ is the supremum of the real parts of the non-trivial zeros of $\zeta(s)$. (Peter Humphries let me know that chapter 15 of Montgomery and Vaughan's text also has these. This text might be more readily available and perhaps in more modern notation. In fact, I have a copy — but I suppose I either never got to chapter 15 or didn't have it nicely digested when I needed it).

Motivation and Application

Infinite lines of poorly understood poles appear regularly while studying shifted convolution series of the shape \begin{equation} D(s) = \sum_{n \geq 1} \frac{a(n) a(n \pm h)}{n^s} \end{equation} for a fixed $h$. When $a(n)$ denotes the (non-normalized) coefficients of a weight $k$ cuspidal Hecke eigenform on a congruence subgroup of $\SL(2, \mathbb{Z})$, for instance, meromorphic continuation can be obtained for the shifted convolution series $D(s)$ through spectral expansion in terms of Maass forms and Eisenstein series, and the Maass forms contribute infinite lines of poles.

Explicit asymptotics take the form \begin{equation} \sum_{n \geq 1} a(n)a(n-h) e^{-n/X} = \sum_j C_j X^{\frac{1}{2} + \sigma_j + it_j} \log^m X \end{equation} where neither the residues nor the imaginary parts $it_j$ are well-understood. Might it be possible for these infinitely many rapidly oscillating terms to experience massive cancellation for all $X$? The theorems above prove that this is not possible.

In this case, applying Theorem 1 with the Perron-weight \begin{equation} v(x) = \begin{cases} 1 & x < 1 \\ \frac{1}{2} & x = 1 \\ 0 & x > 1 \end{cases} \end{equation} shows that \begin{equation} \sideset{}{'}\sum_{n \leq X} \frac{a(n)a(n-h)}{n^{k-1}} = \Omega_\pm(\sqrt X). \end{equation} Similarly, Theorem 2 shows that \begin{equation} \sideset{}{'}\sum_{n \leq X} \frac{a(n)a(n-h)}{n^{k-1}} = \Omega_\pm(X^{\frac{1}{2} + \Theta - \epsilon}), \end{equation} where $\Theta < 7/64$ is the supremum of the deviations to Selberg's Eigenvalue Conjecture (sometimes called the non-arithmetic Ramanujan Conjecture).

More generally, these shifted convolution series appear when studying the sizes of sums of coefficients of modular forms. A few years ago, Hulse, Kuan, Walker, and I began an investigation of the Dirichlet series whose coefficients are $\lvert A(n) \rvert^2$ (where $A(n)$ is the sum of the first $n$ coefficients of a modular form), which we showed has meromorphic continuation to $\mathbb{C}$. The behavior of the infinite lines of poles in the discrete spectrum played an important role in the analysis, but we did not yet understand how they affected the resulting asymptotics. I plan on revisiting those results, and others, with these new theorems in mind.

Proofs

The proofs of these results will soon appear on the arXiv.


Notes from a talk at the Maine-Quebec Number Theory Conference

Today I will be giving a talk at the Maine-Quebec Number Theory conference. Each year that I attend this conference, I marvel at how friendly and inviting an environment it is — I highly recommend checking the conference out (and perhaps modelling other conferences after it).

My talk is about spectral poles and their contributions towards asymptotics (especially of error terms). I describe a few problems in which spectral poles appear in asymptotics. Unlike the nice simple cases where a single pole (or possibly a few poles) appears, in these cases infinite lines of poles appear.

For a bit over a year, I have encountered these and not known what to make of them. Could you have the pathological case that residues of these poles generically cancel? Could they combine to be larger than expected? How do we make sense of them?

The resolution came only very recently.1

I will later write a dedicated note to this new idea (involving Dirichlet integrals and Landau’s theorem in this context), but for now — here are the slides for my talk.


The Insidiousness of Mathematics

insidious (adjective)

1.
a. Having a gradual and cumulative effect
b. of a disease : developing so gradually as to be well established before becoming apparent

2.
a. awaiting a chance to entrap
b. harmful but enticing

— Merriam-Webster Dictionary

In early topics in mathematics, one can often approach a topic from a combination of intuition and first principles in order to deduce the desired results. In later topics, it becomes necessary to repeatedly sharpen intuition while taking advantage of the insights of the many mathematicians who came before — one sees much further by standing on the shoulders of giants. Somewhere in the middle, it becomes necessary to accept the idea that there are topics and ideas that are not at all obvious. They might appear to have been plucked out of thin air. And this is a conceptual boundary.

In my experience, calculus is often the class where students primarily confront the idea that it is necessary to take advantage of the good ideas of the past. It sneaks up. The main ideas of calculus are intuitive — local rates of change can be approximated by slopes of secant lines and areas under curves can be approximated by sums of areas of boxes. That these are deeply connected is surprising.

To many students, Taylor’s Theorem is one of the first examples of a commonly-used result whose proof has some aspect which appears to have been plucked out of thin air.1 Learning Taylor’s Theorem in high school was one of the things that inspired me to begin to revisit calculus with an eye towards why each result was true.

I also began to try to prove the fundamental theorems of single and multivariable calculus with as little machinery as possible. High school me thought that topology was overcomplicated and unnecessary for something so intuitive as calculus.2

This train of thought led to my previous note, on another proof of Taylor’s Theorem. That note is a simplified version of one of the first proofs I devised on my own.

Much less obviously, this train of thought also led to the paper on the mean value theorem written with Miles. Originally I had thought that “nice” functions should clearly have continuous choices for mean value abscissae, and I thought that this could be used to provide alternate proofs for some fundamental calculus theorems. It turns out that there are very nice functions that don’t have continuous choices for mean value abscissae, and that actually using that result to prove classical calculus results is often more technical than the typical proofs.

The flow of ideas is turbulent, highly nonlinear.

I used to think that developing extra rigor early on in my mathematical education was the right way to get to deeper ideas more quickly. There is a kernel of truth to this, as transitioning from pre-rigorous mathematics to rigorous mathematics is very important. But it is also necessary to transition to post-rigorous mathematics (and more generally, to choose one’s battles) in order to organize and communicate one’s thoughts.

In hindsight, I think now that I was focused on the wrong aspect. As a high school student, I had hoped to discover the obvious, clear, intuitive proofs of every result. Of course it is great to find these proofs when they exist, but it would have been better to grasp earlier that sometimes these proofs don’t exist. And rarely does actual research proceed so cleanly — it’s messy and uncertain and full of backtracking and random exploration.


Another proof of Taylor’s Theorem

In this note, we produce a proof of Taylor’s Theorem. As in many proofs of Taylor’s Theorem, we begin with a curious start and then follow our noses forward.

Is this a new proof? I think so. But I wouldn’t bet a lot of money on it. It’s certainly new to me.

Is this a groundbreaking proof? No, not at all. But it’s cute, and I like it.1

We begin with the following simple observation. Suppose that $f$ is two times continuously differentiable. Then for any $t \neq 0$, we see that \begin{equation} f'(t) - f'(0) = \frac{f'(t) - f'(0)}{t} t. \end{equation} Integrating each side from $0$ to $x$, we find that \begin{equation} f(x) - f(0) - f'(0) x = \int_0^x \frac{f'(t) - f'(0)}{t} t \, dt. \end{equation} To interpret the integral on the right in a different way, we will use the mean value theorem for integrals.

Mean Value Theorem for Integrals

Suppose that $g$ and $h$ are continuous functions, and that $h$ doesn’t change sign in $[0, x]$. Then there is a $c \in [0, x]$ such that \begin{equation} \int_0^x g(t) h(t) dt = g(c) \int_0^x h(t) dt. \end{equation}

Suppose without loss of generality that $h(t)$ is nonnegative. Since $g$ is continuous on $[0, x]$, it attains its minimum $m$ and maximum $M$ on this interval. Thus \begin{equation} m \int_0^x h(t) dt \leq \int_0^x g(t)h(t)dt \leq M \int_0^x h(t) dt. \end{equation} Let $I = \int_0^x h(t) dt$. If $I = 0$ (or equivalently, if $h(t) \equiv 0$), then the theorem is trivially true, so suppose instead that $I \neq 0$. Then \begin{equation} m \leq \frac{1}{I} \int_0^x g(t) h(t) dt \leq M. \end{equation} By the intermediate value theorem, $g(t)$ attains every value between $m$ and $M$, and thus there exists some $c$ such that \begin{equation} g(c) = \frac{1}{I} \int_0^x g(t) h(t) dt. \end{equation} Rearranging proves the theorem.

For this application, let $g(t) = (f'(t) - f'(0))/t$ for $t \neq 0$, and $g(0) = f''(0)$. The continuity of $g$ at $0$ is exactly the condition that $f''(0)$ exists. We also let $h(t) = t$.

For $x > 0$, it follows from the mean value theorem for integrals that there exists a $c \in [0, x]$ such that \begin{equation} \int_0^x \frac{f'(t) - f'(0)}{t} t \, dt = \frac{f'(c) - f'(0)}{c} \int_0^x t \, dt = \frac{f'(c) - f'(0)}{c} \frac{x^2}{2}. \end{equation} (Very similar reasoning applies for $x < 0$). Finally, by the mean value theorem (applied to $f'$), there exists a point $\xi \in (0, c)$ such that \begin{equation} f''(\xi) = \frac{f'(c) - f'(0)}{c}. \end{equation} Putting this together, we have proved that there is a $\xi \in (0, x)$ such that \begin{equation} f(x) - f(0) - f'(0) x = f''(\xi) \frac{x^2}{2}, \end{equation} which is one version of Taylor's Theorem with a linear approximating polynomial.

This approach generalizes. Suppose $f$ is a $(k+1)$ times continuously differentiable function, and begin with the trivial observation that \begin{equation} f^{(k)}(t) - f^{(k)}(0) = \frac{f^{(k)}(t) - f^{(k)}(0)}{t} t. \end{equation} Iteratively integrate $k$ times: first from $0$ to $t_1$, then from $0$ to $t_2$, and so on, with the $k$th interval being from $0$ to $t_k = x$.

Then the left hand side becomes \begin{equation} f(x) – \sum_{n = 0}^k f^{(n)}(0)\frac{x^n}{n!}, \end{equation} the difference between $f$ and its degree $k$ Taylor polynomial. The right hand side is
\begin{equation}\label{eq:only}\underbrace{\int_0^{t_k = x} \cdots \int_0^{t_1}}_{k \text{ times}} \frac{f^{(k)}(t) - f^{(k)}(0)}{t} t \, dt \, dt_1 \cdots dt_{k-1}.\end{equation}

To handle this, we note the following variant of the mean value theorem for integrals.

Mean value theorem for iterated integrals

Suppose that $g$ and $h$ are continuous functions, and that $h$ doesn't change sign in $[0, x]$. Then there is a $c \in [0, x]$ such that \begin{equation} \underbrace{\int_0^{t_k=x} \cdots \int_0^{t_1}}_{k \; \text{times}} g(t) h(t) \, dt \, dt_1 \cdots dt_{k-1} = g(c) \underbrace{\int_0^{t_k=x} \cdots \int_0^{t_1}}_{k \; \text{times}} h(t) \, dt \, dt_1 \cdots dt_{k-1}. \end{equation}

In fact, this can be proved in almost exactly the same way as in the single-integral version, so we do not repeat the proof.

With this theorem, there is a $c \in [0, x]$ such that \eqref{eq:only} can be written as \begin{equation} \frac{f^{(k)}(c) - f^{(k)}(0)}{c} \underbrace{\int_0^{t_k = x} \cdots \int_0^{t_1}}_{k \; \text{times}} t \, dt \, dt_1 \cdots dt_{k-1}. \end{equation} By the mean value theorem, the factor in front of the integrals can be written as $f^{(k+1)}(\xi)$ for some $\xi \in (0, x)$. The integrals can be directly evaluated to be $x^{k+1}/(k+1)!$.

Thus overall, we find that \begin{equation} f(x) = \sum_{n = 0}^{k} f^{(n)}(0) \frac{x^n}{n!} + f^{(k+1)}(\xi) \frac{x^{k+1}}{(k+1)!} \end{equation} for some $\xi \in (0, x)$. This proves Taylor's Theorem (with Lagrange's error bound).
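As a quick sanity check (an illustration of mine, not part of the original note), one can verify numerically that such a $\xi$ exists in a concrete case, say $f = \cos$ with $k = 2$:

import numpy as np
from scipy.optimize import brentq

f = np.cos
x = 1.5
taylor2 = 1 - x**2 / 2                     # degree 2 Taylor polynomial of cos at 0
remainder = f(x) - taylor2
# f'''(t) = sin(t), so we look for xi in (0, x) with sin(xi) * x**3 / 6 = remainder
xi = brentq(lambda t: np.sin(t) * x**3 / 6 - remainder, 1e-12, x)
print(xi, 0 < xi < x)                      # such a xi exists, as the theorem guarantees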


Email configuration for mutt on a webfaction server

I have email set up for my sites through webfaction. I have some number of mailboxes and some number of users, and a few users share the same mailboxes.

For a long time I used either a direct webmail or forwarded my site email to a different account, but I’m moving towards more email self-reliance.

A few minutes of searching didn’t tell me how to set up mutt on webfaction. Here is a minimal configuration for what I did.

I will assume that we are configuring email for user@mysite.com with mailbox MAILBOX, and where the password for that mailbox is MAILBOXPASSWORD. I will also assume that the user, mailbox, and password have already been set up. The missing step is to connect it to mutt.

My .muttrc looks like


# identity used on outgoing mail
set realname = "FIRST LAST"
set from = "user@mysite.com"
set use_from = yes
set edit_headers = yes

# credentials for the mailbox (IMAP login)
set imap_user = 'MAILBOX'
set imap_pass = 'MAILBOXPASSWORD'

# incoming mail: webfaction's IMAP server and the standard folders
set folder = "imaps://mail.webfaction.com:993"
set spoolfile = "+INBOX"
set record = "+sent"
set postponed = "+postponed"

# outgoing mail: webfaction's SMTP server
set smtp_url = "smtp://MAILBOX@smtp.webfaction.com:587/"
set smtp_pass = "MAILBOXPASSWORD"

# optional caching and ensure security
set header_cache = "~/.mutt/cache/headers"
set message_cachedir = "~/.mutt/cache/bodies"
set certificate_file = "~/.mutt/certificates"

set ssl_starttls=yes
set ssl_force_tls=yes

It’s not particularly complicated, but it wasn’t obvious to me at first either.


Choosing functions and generating figures for “When are there continuous choices for the mean value abscissa?”

In my previous note, I described some of the main ideas behind the paper “When are there continuous choices for the mean value abscissa?” that I wrote joint with Miles Wheeler. In this note, I discuss the process behind generating the functions and figures in our paper.

Generating the figures came in two steps: we first need to choose which functions to plot; then we need to figure out how to graphically solve their general mean value abscissa problem.

Afterwards, we can decide how to plot these functions well.

Choosing the right functions to plot

The first goal is to find the right functions to plot. From the discussion in our paper, this amounts to specifying certain local conditions of the function. And for a first pass, we only used these prescribed local conditions.

The idea is this: to study solutions to the mean value problem, we look at the zeroes of the function $$ F(b, c) = \frac{f(b) - f(a)}{b - a} - f'(c). $$ When $F(b, c) = 0$, we see that $c$ is a mean value abscissa for $f$ on the interval $(a, b)$.

By the implicit function theorem, we can solve for $c$ as a function of $b$ around a given solution $(b_0, c_0)$ if $F_c(b_0, c_0) \neq 0$. For this particular function, $F_c(b_0, c_0) = -f''(c_0)$.

More generally, it turns out that the order of vanishing of $f’$ at $b_0$ and $c_0$ governs the local behaviour of solutions in a neighborhood of $(b_0, c_0)$.

To make figures, we thus need to make functions with prescribed orders of vanishing of $f’$ at points $b_0$ and $c_0$, where $c_0$ is itself a mean value abscissa for the interval $(a_0, b_0)$.

Without loss of generality, it suffices to consider the case when $f(a_0) = f(b_0) = 0$, as otherwise we can study the function $$
g(x) = f(x) - \left( \frac{f(b_0) - f(a_0)}{b_0 - a_0}(x - a_0) + f(a_0) \right),
$$
which has $g(a_0) = g(b_0) = 0$, and the triples $(a, b, c)$ that solve the mean value problem for $f$ also solve it for $g$.

And for consistency, we made the arbitrary decisions to have $a_0 = 0$, $b_0 = 3$, and $c_0 = 1$. This decision simplified many of the plotting decisions, as the important points were always $0$, $1$, and $3$.

A first idea

Thus the first task is to be able to generate functions $f$ such that:

  1. $f(0) = 0$,
  2. $f(3) = 0$,
  3. $f'(1) = 0$ (so that $1$ is a mean value abscissa), and
  4. $f'(x)$ has prescribed order of vanishing at $1$, and
  5. $f'(x)$ has prescribed order of vanishing at $3$.

These conditions can all be met by an appropriate interpolating polynomial. As we are setting conditions on both $f$ and its derivatives at multiple points, this amounts to the fundamental problem in Hermite interpolation. Alternatively, this amounts to using Taylor's theorem at multiple points and then using the Chinese Remainder Theorem over $\mathbb{Q}[x]$ to combine these polynomials together.

There are clever ways of solving this, but this task is so small that it doesn't require cleverness. In fact, this is one of the laziest solutions we could think of. We know that given $n$ Hermite conditions, there is a unique polynomial of degree at most $n - 1$ that interpolates these conditions. Thus we

  1. determine the degree of the polynomial,
  2. create a degree $n-1$ polynomial with variable coefficients in sympy,
  3. have sympy symbolically compute the relations the coefficients must satisfy,
  4. ask sympy to solve this symbolic system of equations.

In code, this looks like

import sympy
from sympy.abc import X, B, C, D    # Establish our variable names
def interpolate(conds):
    """
    Finds the polynomial of minimal degree that solves the given Hermite conditions.

    conds is a list of the form
      [(x1, r1, v1), (x2, r2, v2), ...]
    where the polynomial p is to satisfy p^(r_1) (x_1) = v_1, and so on.
    """
    # the degree will be one less than the number of conditions
    n = len(conds)

    # generate a symbol for each coefficient
    A = [sympy.Symbol("a[%d]" % i) for i in range(n)]

    # generate the desired polynomial symbolically
    P = sum([A[i] * X**i for i in range(n)])

    # generate the equations the polynomial must satisfy
    #
    # for each (x, r, v), sympy evaluates the rth derivative of P wrt X,
    # substitutes x in for X, and requires that this equals v.
    EQNS = [sympy.diff(P, X, r).subs(X, x) - v for x, r, v in conds]

    # solve this system for the coefficients A[n]
    SOLN = sympy.solve(EQNS, A)

    return P.subs(SOLN)

We note that we use the convention that a sympy symbol for something is capitalized. For example, we think of the polynomial as being represented by $$
p(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n.
$$
In sympy variables, we think of this as

P = A[0] + A[1] * X + A[2] * X**2 + ... + A[n] * X**n.

With this code, we can ask for the unique degree 1 polynomial which is $1$ at $1$, and whose first derivative is $2$ at $1$.

> interpolate([(1, 0, 1), (1, 1, 2)])
2*X - 1

Indeed, $2x - 1$ is this polynomial.

Too rigid

We have now produced a minimal Hermite solver. But there is a major downside: the unique polynomial exhibiting the necessary behaviours we required is essentially never a good didactic example. We don’t just want plots — we want beautiful, simple plots.

Add knobs to turn

We add two conditions for additional control, and hopefully for additional simplicity of the resulting plot.

Firstly, we added the additional constraint that $f(1) = 1$. This is a small change, but it prescribes a specific value, so now at least all three points of interest fit within a $[0, 3] \times [0, 3]$ box.

Secondly, we also allow the choice of the value of the first nonvanishing derivatives at $1$ and $3$. In reality, we treat these as parameters to change the shape of the resulting graph. Roughly speaking, if the order of vanishing of $f(x) - f(1)$ is $k$ at $1$, then near $1$ we have the approximation $f(x) \approx f(1) + f^{(k)}(1)(x-1)^k/k!$. Morally, the larger the value of that derivative, the more the graph will resemble a scaled $(x-1)^k$ near that point.

In code, we implemented this by making functions that will add the necessary Hermite conditions to our input to interpolate.

# We fix the values of a0, b0, c0.
a0 = 0
b0 = 3
c0 = 1

# We require p(a0) = 0, p(b0) = 0, p(c0) = 1, p'(c0) = 0.
BASIC_CONDS = [(a0, 0, 0), (b0, 0, 0), (c0, 0, 1), (c0, 1, 0)]

def c_degen(n, residue):
    """
    Give Hermite conditions for order of vanishing at c0 equal to `n`, with
    first nonzero residue `residue`.

    NOTE: the order `n` is in terms of f', not of f. That is, this is the amount
    of additional degeneracy to add.  This may be a source of off-by-one errors.
    """
    return [(c0, 1 + i, 0) for i in range(1, n + 1)] + [(c0, n + 2, residue)]


def b_degen(n, residue):
    """
    Give Hermite conditions for order of vanishing at b0 equal to `n`, with
    first nonzero residue `residue`.
    """
    return [(b0, i, 0) for i in range(1, n + 1)] + [(b0, n + 1, residue)]

def poly_with_degens(nc=0, nb=0, residue_c=3, residue_b=3):
    """
    Give unique polynomial with given degeneracies for this MVT problem.

    `nc` is the order of vanishing of f' at c0, with first nonzero residue `residue_c`.
    `nb` is the order of vanishing of f' at b0, with first nonzero residue `residue_b`.
    """
    conds = BASIC_CONDS + c_degen(nc, residue_c) + b_degen(nb, residue_b)
    return interpolate(conds)

Then apparently the unique degree $5$ polynomial $f$ with $f(0) = f(3) = f'(1) = 0$, $f(1) = 1$, and $f''(1) = f'(3) = 3$ is given by

> poly_with_degens()
11*X**5/16 - 21*X**4/4 + 113*X**3/8 - 65*X**2/4 + 123*X/16

Too many knobs

In principle, this is a great solution. And if you turn the knobs enough, you can get a really nice picture. But the problem with this system (and with many polynomial interpolation problems) is that when you add conditions, you can introduce many jagged peaks and sudden changes. These can behave somewhat unpredictably and chaotically — small changes in Hermite conditions can lead to drastic changes in resulting polynomial shape.

What we really want is for the interpolator to give a polynomial that doesn’t have sudden changes.

Minimize change

The problem: the polynomial can have really rapid changes that makes the plots look bad.

The solution: minimize the polynomial’s change.

That is, if $f$ is our polynomial, then its rate of change at $x$ is $f'(x)$. Our idea is to “minimize” the average size of the derivative $f’$ — this should help keep the function in frame. There are many ways to do this, but we want to choose one that fits into our scheme (so that it requires as little additional work as possible) but which works well.

We decide that we want to focus our graphs on the interval $(0, 4)$. Then we can measure the average size of the derivative $f'$ by (the square of) its L2 norm on $(0, 4)$: $$ L2(f) = \int_0^4 (f'(x))^2 dx. $$

We add an additional Hermite condition of the form (pt, order, VAL) and think of VAL as an unknown symbol. We arbitrarily decided to start with $pt = 2$ (so that now the behavior at each of the points $0, 1, 2, 3$ is controlled in some way) and $order = 1$. The point itself doesn't matter very much, since we're going to minimize over the family of polynomials that interpolate the other Hermite conditions with one degree of freedom.

In other words, we are adding in the condition that $f'(2) = VAL$ for an unknown VAL.

We will have sympy compute the interpolating polynomial through its normal set of (explicit) conditions as well as the symbolic condition (2, 1, VAL). Then $f = f(\mathrm{VAL}; x)$.

Then we have sympy compute the (symbolic) L2 norm of the derivative of this polynomial with respect to VAL over the interval $(0, 4)$, $$L2(\mathrm{VAL}) = \int_0^4 f'(\mathrm{VAL}; x)^2 \, dx.$$

Finally, to minimize the L2 norm, we have sympy compute the derivative of $L2(\mathrm{VAL})$ with respect to VAL and find the critical points, where the derivative is equal to $0$. We choose the first one to give our value of VAL.1

In code, this looks like

def smoother_interpolate(conds, ctrl_point=2, order=1, interval=(0,4)):
    """
    Find the polynomial of minimal degree that interpolates the Hermite
    conditions in `conds`, and whose behavior at `ctrl_point` minimizes the L2
    norm on `interval` of its derivative.
    """
    # Add the symbolic point to the conditions.
    # Recall that D is a sympy variable
    new_conds = conds + [(ctrl_point, order, D)]

    # Find the polynomial interpolating `new_conds`, symbolic in X *and* D
    P = interpolate(new_conds)

    # Compute L2 norm of the derivative on `interval`
    L2 = sympy.integrate(sympy.diff(P, X)**2, (X, *interval))

    # Take the first critical point of the L2 norm with respect to D
    SOLN = sympy.solve(sympy.diff(L2, D), D)[0]

    # Substitute the minimizing solution in for D and return
    return P.subs(D, SOLN)


def smoother_poly_with_degens(nc=0, nb=0, residue_c=3, residue_b=3):
    """
    Give unique polynomial with given degeneracies for this MVT problem whose
    derivative on (0, 4) has minimal L2 norm.

    `nc` is the order of vanishing of f' at c0, with first nonzero residue `residue_c`.
    `nb` is the order of vanishing of f' at b0, with first nonzero residue `residue_b`.

    """
    conds = BASIC_CONDS + c_degen(nc, residue_c) + b_degen(nb, residue_b)
    return smoother_interpolate(conds)

Then apparently the degree $6$ polynomial $f$ with $f(0) = f(3) = f'(1) = 0$, $f(1) = 1$, and $f''(1) = f'(3) = 3$, and with minimal L2 derivative norm on $(0, 4)$, is given by

> smoother_poly_with_degens()
-9660585*X**6/33224848 + 27446837*X**5/8306212 - 232124001*X**4/16612424
  + 57105493*X**3/2076553 - 858703085*X**2/33224848 + 85590321*X/8306212

> sympy.N(smoother_poly_with_degens())
-0.290763858423069*X**6 + 3.30437472580762*X**5 - 13.9729157526921*X**4
  + 27.5001374874612*X**3 - 25.8452073279613*X**2 + 10.3043747258076*X

Is it much better? Let’s compute the L2 norms.

> interval = (0, 4)
> sympy.N(sympy.integrate(sympy.diff(poly_with_degens(), X)**2, (X, *interval)))
1865.15411706349

> sympy.N(sympy.integrate(sympy.diff(smoother_poly_with_degens(), X)**2, (X, *interval)))
41.1612799050325

That’s beautiful. And you know what’s better? Sympy did all the hard work.

For comparison, we can produce a basic plot using numpy and matplotlib.

import matplotlib.pyplot as plt
import numpy as np

def basic_plot(F, n=300):
    fig = plt.figure(figsize=(6, 2.5))
    ax = fig.add_subplot(1, 1, 1)
    b1d = np.linspace(-.5, 4.5, n)
    f = sympy.lambdify(X, F)(b1d)
    ax.plot(b1d,f,'k')
    ax.set_aspect('equal')
    ax.grid(True)
    ax.set_xlim([-.5, 4.5])
    ax.set_ylim([-1, 5])
    ax.plot([0, c0, b0],[0, F.subs(X,c0),F.subs(X,b0)],'ko')
    fig.savefig("basic_plot.pdf")

Then the plot of poly_with_degens() is given by

 

 

The polynomial jumps upwards immediately and strongly for $x > 3$.

On the other hand, the plot of smoother_poly_with_degens() is given by

This stays in frame between $0$ and $4$, as desired.

Choose data to highlight and make the functions

This was enough to generate the functions for our paper. Actually, the three functions (in a total of six plots) in figures 1, 2, and 5 in our paper were hand chosen and hand-crafted for didactic purposes: the first two functions are simply a cubic and a quadratic with certain points labelled. The last function was the non-analytic-but-smooth semi-pathological counterexample, and so cannot be created through polynomial interpolation.

But the four functions highlighting different degenerate conditions in figures 3 and 4 were each created using this L2-minimizing interpolation system.

In particular, the function in figure 3 is

F3 = smoother_poly_with_degens(nc=1, residue_b=-3)

which is one of the simplest L2 minimizing polynomials with the typical Hermite conditions, $f''(c_0) = 0$, and the opposite of the default sign for $f'(b_0)$.

The three functions in figure 4 are (from left to right)

F_bmin = smoother_poly_with_degens(nc=1, nb=1, residue_c=10, residue_b=10)
F_bzero = smoother_poly_with_degens(nc=1, nb=2, residue_c=-20, residue_b=20)
F_bmax = smoother_poly_with_degens(nc=1, nb=1, residue_c=20, residue_b=-10)

We chose much larger residues because the goal of the figure is to highlight how the local behavior at those points corresponds to the behavior of the mean value abscissae, and larger residues make those local behaviors more dominant.

Plotting all possible mean value abscissae

Now that we can choose our functions, we want to figure out how to find all solutions $(b, c)$ of the mean value condition $F(b, c) = 0$, where $$
F(b, c) = \frac{f(b) - f(a_0)}{b - a_0} - f'(c).
$$
Here I write $a_0$ as it’s fixed, while both $b$ and $c$ vary.

Our primary interest in these solutions is to facilitate graphical experimentation and exploration of the problem — we want these pictures to help build intuition and provide examples.

Although this may seem harder, it is actually a much simpler problem. The function $F(b, c)$ is continuous (and roughly as smooth as $f$ is).

Our general idea is a common approach for this sort of problem:

  1. Compute the values of $F(b, c)$ on a tight mesh (or grid) of points.
  2. Restrict attention to the domain where solutions are meaningful.
  3. Plot the contour of the $0$-level set.

Contours can be well-approximated from a tight mesh. In short, if a positive value and a negative value appear next to each other in the mesh of computed values, then (by continuity) $F(b, c) = 0$ somewhere between them. For a tight enough mesh, good plots can be made.

To solve this, we again have sympy create and compute the function for us. We use numpy to generate the mesh (and to vectorize the computations, although this isn’t particularly important in this application), and matplotlib to plot the resulting contour.

Before giving code, note that the symbol F in the sympy code below stands for what we have been mathematically referring to as $f$, and not $F$. This is a potential confusion from our sympy-capitalization convention. It is still necessary to have sympy compute $F$ from $f$.

In code, this looks like

import sympy
import numpy as np
import matplotlib.pyplot as plt

def abscissa_plot(F, n=300):
    # Compute the derivative of f
    DF = sympy.diff(F,X)

    # Define CAP_F --- "capital F"
    #
    # this is (f(b) - f(0))/(b - 0) - f'(c).
    CAP_F = (F.subs(X, B) - F.subs(X, 0)) / (B - 0) - DF.subs(X, C)

    # build the mesh
    b1d = np.linspace(-.5, 4.5, n)
    b2d, c2d = np.meshgrid(b1d, b1d)

    # compute CAP_F within the mesh
    cap_f_mesh = sympy.lambdify((B, C), CAP_F)(b2d, c2d)

    # restrict attention to below the diagonal --- we require c < b
    # (although the mask inequality looks reversed in this perspective)
    valid_cap_f_mesh = np.ma.array(cap_f_mesh, mask=c2d>b2d)

    # Set up plot basics
    fig = plt.figure(figsize=(6, 2.5))
    ax = fig.add_subplot(1, 1, 1)
    ax.set_aspect('equal')
    ax.grid(True)
    ax.set_xlim([-.5, 4.5])
    ax.set_ylim([-.5, 4.5])

    # plot the contour
    ax.contour(b2d, c2d, valid_cap_f_mesh, [0], colors='k')

    # plot a diagonal line representing the boundary
    ax.plot(b1d,b1d,'k--')

    # plot the guaranteed point
    ax.plot(b0,c0,'ko')

    fig.savefig("abscissa_plot.pdf")

Then plots of solutions to $F(b, c) = 0$ for our basic polynomials are given by

for poly_with_degens(), while for smoother_poly_with_degens() we get

And for comparison, we can now create a (slightly worse looking) version of the plots in figure 3.

F3 = smoother_poly_with_degens(nc=1, residue_b=-3)
basic_plot(F3)
abscissa_plot(F3)

This produces the two plots

For comparison, a (slightly scaled) version of the actual figure appearing in the paper is

 

Copy of the code

A copy of the code used in this note (and correspondingly the code used to generate the functions for the paper) is available on my github as an ipython notebook.


Paper: When are there continuous choices for the Mean Value Abscissa? with Miles Wheeler


Miles Wheeler and I have recently uploaded a paper to the arXiv called “When are there continuous choices for the mean value abscissa?”, which we have submitted to an expository journal. The underlying question is simple but nontrivial.

The mean value theorem of calculus states that, given a differentiable function $f$ on an interval $[a, b]$, then there exists a $c \in (a, b)$ such that
$$ \frac{f(b) - f(a)}{b - a} = f'(c).$$
We call $c$ the mean value abscissa.
Our question concerns potential behavior of this abscissa when we fix the left endpoint $a$ of the interval and vary $b$. For each $b$, there is at least one abscissa $c_b$ such that the mean value theorem holds with that abscissa. But generically there may be more than one choice of abscissa for each interval. When can we choose $c_b$ as a continuous function of $b$? That is, when can we write $c = c(b)$ such that
$$ \frac{f(b) - f(a)}{b - a} = f'(c(b))$$
for all $b$ in some interval?
We think of this as a continuous choice for the mean value abscissa.
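As a toy example (mine, not from the paper), for $f(x) = x^3$ with $a = 0$ the abscissa can be computed in closed form, and it is visibly a continuous function of $b$:

import sympy

x, b, c = sympy.symbols('x b c', positive=True)
f = x**3                      # example function, with left endpoint a = 0
secant_slope = (f.subs(x, b) - f.subs(x, 0)) / (b - 0)
print(sympy.solve(sympy.Eq(secant_slope, sympy.diff(f, x).subs(x, c)), c))
# [sqrt(3)*b/3], i.e. c(b) = b/sqrt(3) is a continuous choice of abscissa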

This is a great question. It’s widely understandable — even to students with only one semester of calculus. Further it encourages a proper understanding of what a function is, as thinking of $c$ as potentially a function of $b$ is atypical and interesting.

But I also like this question because the answer is not as simple as you might think, and there are a few nice ideas that get to the answer.

Should you find yourself reading this without knowing the answer, I encourage you to consider it right now. Should continuous choices of abscissas exist? What if the function is really well-behaved? What if it’s smooth? Or analytic?

Let’s focus on the smooth question. Suppose that $f$ is smooth — that it is infinitely differentiable. These are a distinguished class of functions. But it turns out that being smooth is not sufficient: here is a counterexample.

In this figure, there are points $b$ arbitrarily near $b_0$ such that the secant line from $a_0$ to $b$ has positive slope, and points arbitrarily near $b_0$ such that the secant lines have negative slope. There are infinitely many mean value abscissae with $f'(c_0) = 0$, but all of them are either far from a point $c$ where $f'(c) > 0$ or far from a point $c$ where $f'(c) < 0$. And thus there is no continuous choice.

From a theorem oriented point of view, our main theorem is that if $f$ is analytic, then there is always a locally continuous choice. That is, for every interval $[a_0, b_0]$, there exists a mean value abscissa $c$ such that $c = c(b)$ for some interval $B$ containing $b_0$.

But the purpose of this article isn't simply to prove this theorem. The purpose is to exposit how the ideas that are used to study this problem and to prove these results are fundamentally based only on a couple of central ideas covered in introductory single and multivariable calculus. All of this paper is completely accessible to a student having studied only single variable calculus (and who is willing to believe that partial derivatives exist and are reasonable objects).

We prove and use simple-but-nontrivial versions of the contraction mapping theorem, the implicit function theorem, and Morse's lemma. The implicit function theorem is enough to say that any abscissa $c_0$ such that $f''(c_0) \neq 0$ has a unique continuous extension. Thus immediately, for "most" intervals on "most" reasonable functions, we answer in the affirmative. Morse's lemma allows us to say a bit more about the case when $f''(c_0) = 0$ but $f'''(c_0) \neq 0$: in this case there are either multiple continuous extensions or none. A few small ingredients and the idea behind Morse's lemma, combined with the implicit function theorem again, are enough to prove the main result.

Student projects

A calculus student looking for a project to dive into and sharpen their calculus skills could find ideas here to sink their teeth into. Beginning by understanding this paper is a great start. A good motivating question would be to carry on one additional step, and to study explicitly the behavior of a function near a point where $f''(c_0) = f'''(c_0) = 0$ but $f^{(4)}(c_0) \neq 0$.

A slightly more open question that we lightly touch on (but leave largely implicit) is the inverse question: when can one find a mean value abscissa $c$ such that the right endpoint $b$ can be written as a continuous function $b(c)$ for some neighborhood $C$ containing the initial point $c_0$? Much of the analysis is the same, but figuring it out would require some attention.

A much deeper question is to consider the abscissa as a function of both the left endpoint $a$ and the right endpoint $b$. The guiding question here could be to decide when one can write the abscissa as a continuous function $c(a, b)$ in a neighborhood of $(a_0, b_0)$. I would be interested to see a graphical description of the possible shapes of these functions — I'm not quite sure what they might look like.

There is also a nice computational problem. In the paper, we include several plots of solution curves in $(b, c)$ space. But we did this with a meshed implicit function theorem solver. A computationally inclined student could devise an explicit way of constructing solutions. On the one hand, this is guaranteed to work, since one can apply contraction mappings explicitly to make the resulting function from the implicit function theorem explicit. On the other hand, many (most?) applications of the implicit function theorem are in more complicated high dimensional spaces, whereas the situation in this paper is the smallest nontrivial example.

Producing the graphs

We made 13 graphs in 5 figures for this article. These pictures were created using matplotlib. The data was created using numpy, scipy, and sympy from within the scipy/numpy python stack, and the actual creation was done interactively within a jupyter notebook. The actual notebook is available here (along with other relatively raw jupyter notebooks). The most complicated graph is this one.

This figure has graphs of three functions along the top. In each graph, the interval $[0, 3]$ is considered in the mean value theorem, and the point $c_0 = 1$ is a mean value abscissa. In each, we also have $f''(c_0) = 0$, and the point is that the behavior of $f''(b_0)$ has a large impact on the nature of the implicit functions. The three graphs along the bottom are in $(b, c)$ space and present all mean value abscissae for each $b$. This is not a function, but the local structure of the graphs is interesting and visually distinct.

The process of making these examples and making these figures is interesting in itself. We did not make these figures explicitly, but instead chose certain points and certain values of derivatives at those points, and used Hermite interpolation to find polynomials passing through those points.1

In the future I plan on writing a note on the creation of these figures.


How do we decide how many representatives there are for each state?

The US House of Representatives has 435 voting members (and 6 non-voting members: one each from Washington DC, Puerto Rico, American Samoa, Guam, the Northern Mariana Islands, and the US Virgin Islands). Roughly speaking, the higher the population of a state is, the more representatives it should have.

But what does this really mean?

If we looked at the US Constitution to make this clear, we would find little help. The third clause of Article I, Section II of the Constitution says

Representatives and direct Taxes shall be apportioned among the several States which may be included within this Union, according to their respective Numbers … The number of Representatives shall not exceed one for every thirty thousand, but each state shall have at least one Representative.

This doesn’t give clarity.1 In fact, uncertainty surrounding proper apportionment of representatives led to the first presidential veto.

The Apportionment Act of 1792

According to the 1790 Census, there were 3199415 free people and 694280 slaves in the United States.2

When Congress sat to decide on apportionment in 1792, they initially computed the total (weighted) population of the United States to be 3199415 + (3/5)⋅694280 = 3615983. They noted that the Constitution says there should be no more than 1 representative for every 30000, so they divided the total population by 30000 and rounded down, getting 3615983/30000 ≈ 120.5.

Thus there were to be 120 representatives. If one divides each state's population by 30000, one sees that the states should get the following numbers of representatives3

State          ideal    rounded_down
Vermont        2.851    2
NewHampshire   4.727    4
Maine          3.218    3
Massachusetts  12.62    12
RhodeIsland    2.281    2
Connecticut    7.894    7
NewYork        11.05    11
NewJersey      5.985    5
Pennsylvania   14.42    14
Delaware       1.851    1
Maryland       9.283    9
Virginia       21.01    21
Kentucky       2.290    2
NorthCarolina  11.78    11
SouthCarolina  6.874    6
Georgia        2.361    2

But here is a problem: the total number of rounded down representatives is only 112. So there are 8 more representatives to give out. How did they decide which to assign these representatives to? They chose the 8 states with the largest fractional “ideal” parts:

  1. New Jersey (0.985)
  2. Connecticut (0.894)
  3. South Carolina (0.874)
  4. Vermont (0.851)
  5. Delaware (0.851)
  6. Massachusetts+Maine (0.838)
  7. North Carolina (0.78)
  8. New Hampshire (0.727)

(Maine was part of Massachusetts at the time, which is why I combine their fractional parts). Thus the original proposed apportionment gave each of these states one additional representative. Is this a reasonable conclusion?

Perhaps. But these 8 states each ended up having more than 1 representative for each 30000. Was this limit in the Constitution meant country-wide (so that 120 across the country is a fine number) or state-by-state (so that, for instance, Delaware, which had 59000 total population, should not be allowed to have more than 1 representative)?

There is the other problem that New Jersey, Connecticut, Vermont, New Hampshire, and Massachusetts were undoubtedly Northern states. Thus Southern representatives asked, Is it not unfair that the fractional apportionment favours the North?4

Regardless of the exact reasoning, the Secretary of State Thomas Jefferson and Attorney General Edmund Randolph (both from Virginia) urged President Washington to veto the bill, and he did. This was the first use of the Presidential veto.

Afterwards, Congress got together and decided on starting with 33000 people per representative and ignoring fractional parts entirely. The exact method became known as the Jefferson Method of Apportionment, and was used in the US through the 1830 census. The subtle part of the method involves deciding on the number 33000. In the US, the exact number of representatives sometimes changed from election to election. This number is closely related to the population-per-representative, but these were often chosen through political maneuvering as opposed to exact decision.
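As a rough sketch (my own illustration, not from the original post; the function names and interfaces are mine), both procedures described above can be written in a few lines:

from math import floor

def hamilton_1792(populations, divisor=30000):
    """The vetoed 1792 scheme: fix a divisor, round each state's quota down,
    then hand the leftover seats to the states with the largest fractional parts."""
    total_seats = floor(sum(populations.values()) / divisor)    # 120 in 1792
    quotas = {state: pop / divisor for state, pop in populations.items()}
    seats = {state: floor(q) for state, q in quotas.items()}    # summed to 112 in 1792
    leftover = total_seats - sum(seats.values())                # the remaining 8 seats
    for state in sorted(quotas, key=lambda s: quotas[s] - seats[s], reverse=True)[:leftover]:
        seats[state] += 1
    return seats

def jefferson(populations, divisor=33000):
    """Jefferson's method: divide by a chosen divisor and ignore fractional parts
    entirely (with the constitutional minimum of one seat per state)."""
    return {state: max(1, floor(pop / divisor)) for state, pop in populations.items()}

In Jefferson's method, the divisor (33000 above) is the knob that gets tuned until the House ends up with the desired number of seats.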

As an aside, it’s interesting to note that this method of apportionment is widely used in the rest of the world, even though it was abandoned in the US.5 In fact, it is still used in Albania, Angola, Argentina, Armenia, Aruba, Austria, Belgium, Bolivia, Brazil, Bulgaria, Burundi, Cambodia, Cape Verde, Chile, Colombia, Croatia, the Czech Republic, Denmark, the Dominican Republic, East Timor, Ecuador, El Salvador, Estonia, Fiji, Finland, Guatemala, Hungary, Iceland, Israel, Japan, Kosovo, Luxembourg, Macedonia, Moldova, Monaco, Montenegro, Mozambique, Netherlands, Nicaragua, Northern Ireland, Paraguay, Peru, Poland, Portugal, Romania, San Marino, Scotland, Serbia, Slovenia, Spain, Switzerland, Turkey, Uruguay, Venezuela and Wales — as well as in many countries for election to the European Parliament.


Measuring the fairness of an apportionment method

At the core of different ideas for apportionment is fairness. How can we decide if an apportionment is fair?

We’ll consider this question in the context of the post-1911 United States — after the number of seats in the House of Representatives was established. This number was set at 433, but with the proviso that anticipated new states Arizona and New Mexico would each come with an additional seat.6

So given that there are 435 seats to apportion, how might we decide if an apportionment is fair? Fundamentally, this should relate to the number of people each representative actually represents.

For example, in the 1792 apportionment, the single Delawaran representative was there to represent all 55000 of its population, while each of the two Rhode Island representatives corresponded to 34000 Rhode Islanders. Within the House of Representatives, it was as though the voice of each Delawaran only counted 61 percent as much as the voice of each Rhode Islander.7

The number of people each representative actually represents is at the core of the notion of fairness — but even then, the right measure is not obvious.

Suppose we enumerate the states, so that $S_i$ refers to state $i$. We'll also denote by $P_i$ the population of state $i$, and we'll let $R_i$ denote the number of representatives allotted to state $i$.

In the ideal scenario, every representative would represent the exact same number of people. That is, we would have
$$\text{pop. per rep. in state i}
= \frac{P_i}{R_i}
= \frac{P_j}{R_j}
= \text{pop. per rep. in state j}$$

for every pair of states i and j. But this won’t ever happen in practice.

Generally, we should expect $\frac{P_i}{R_i} \neq \frac{P_j}{R_j}$ for every pair of distinct states. If
$$
\frac{P_i}{R_i} > \frac{P_j}{R_j}, \tag{1}
$$

then we can say that each representative in state i represents more people, and thus those people have a diluted vote.

Amounts of Inequality

There are lots of pairs of states. How do we actually measure these inequalities? This would make an excellent question in a statistics class (illustrating how one can answer the same question in different, equally reasonable ways) or even a civics class.

A few natural ideas emerge:

  • We might try to minimize the differences in constituency size: $\left \lvert \frac{P_i}{R_i} - \frac{P_j}{R_j} \right \rvert$.
  • We might try to minimize the differences in per capita representation: $\left \lvert \frac{R_i}{P_i} - \frac{R_j}{P_j} \right \rvert$.
  • We might take overall size into account, and try to minimize both the relative difference in constituency size and the relative difference in per capita representation.

This last one needs a bit of explanation. Define the relative difference between two numbers x and y to be
$$
\frac{\lvert x - y \rvert}{\min(x, y)}.
$$

Suppose that for a pair of states, we have that $(1)$ holds, i.e. that representatives in state j have smaller constituencies than in state i (and therefore people in state j have more powerful votes). Then the relative difference in constituency size is
$$
\frac{P_i/R_i - P_j/R_j}{P_j/R_j} = \frac{P_i/R_i}{P_j/R_j} - 1.
$$

The relative difference in per capita representation is
$$
\frac{R_j/P_j - R_i/P_i}{R_i/P_i} = \frac{R_j/P_j}{R_i/P_i} - 1 =
\frac{P_i/R_i}{P_j/R_j} - 1.
$$

Thus these are the same! By accounting for differences in size by taking relative proportions, we see that minimizing relative difference in constituency size and minimizing relative difference in per capita representation are actually the same.
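A quick numeric check (with made-up numbers of mine, loosely echoing the Delaware and Rhode Island example above):

# constituency sizes for two hypothetical states: P_i/R_i = 55000, P_j/R_j = 34000
Pi, Ri = 55000, 1
Pj, Rj = 34000, 1

rel_constituency = (Pi / Ri - Pj / Rj) / (Pj / Rj)   # relative difference in constituency size
rel_per_capita = (Rj / Pj - Ri / Pi) / (Ri / Pi)     # relative difference in per capita representation
print(rel_constituency, rel_per_capita)              # both print 0.6176..., as the algebra predicts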

All three of these measures seem reasonable at first inspection. Unfortunately, they all give different apportionments (and all are different from Jefferson’s scheme — though to be fair, Jefferson’s scheme doesn’t seek to minimize inequality and there is no reason to think it should behave the same).

Each of these ideas leads to a different apportionment scheme, and in fact each has a name.

  • Minimizing differences in constituency size is the Dean method.
  • Minimizing differences in per capita representation is the Webster method.
  • Minimizing relative differences between both constituency size and per capita representation is the Hill (or sometimes Huntington-Hill) method.

Further, each of these schemes has been used at some time in US history. Webster’s method was used immediately after the 1840 census, but for the 1850 census the original Alexander Hamilton scheme (the scheme vetoed by Washington in 1792) was used. In fact, the Apportionment Act of 1850 set the Hamilton method as the primary method, and this was nominally used until 1900.8 The Webster method was used again immediately after the 1910 census. Due to claims of incomplete and inaccurate census counts, no apportionment occurred based on the 1920 census.9

In 1929 an automatic apportionment act was passed.10 In it, up to three different apportionment schemes would be provided to Congress after each census, based on a total of 435 seats:

  1. The apportionment that would come from whatever scheme was most recently used. (In 1930, this would be the Webster method).
  2. The apportionment that would come from the Webster method.
  3. The apportionment that would come from the newly introduced Hill method.

If one reads congressional discussion from the time, then it will be good to note that Webster’s method is sometimes called the method of major fractions and Hill’s method is sometimes called the method of equal proportions. Further, in a letter written by Bliss, Brown, Eisenhart, and Pearl of the National Academy of Sciences, Hill’s method was declared to be the recommendation of the Academy.11 From 1930 on, Hill’s method has been used.

Why use the Hill method?

The Hamilton method led to a few paradoxes and highly counterintuitive behavior that many representatives found disagreeable. In 1880, a paradox now called the Alabama paradox was discovered. When deciding on the number of representatives that should be in the House, it was noted that if the House had 299 members, Alabama would have 8 representatives, but if the House had 300 members, Alabama would have 7 representatives — that is, making one more seat available led to Alabama receiving one fewer seat.

The problem is the fluctuating relationships between the many fractional parts of the ideal number of representatives per state (similar to those tallied in the table in the section The Apportionment Act of 1792).

Another paradox was discovered in 1900, known as the Population paradox. This is a scenario in which a state with a large population and rapid growth can lose a seat to a state with a small population and smaller population growth. In 1900, Virginia lost a seat to Maine, even though Virginia’s population was larger and growing much more rapidly.

In particular, in 1900, Virginia had 1854184 people and Maine had 694466 people, so Virginia had 2.67 times the population as Maine. In 1901, Virginia had 1873951 people and Maine had 699114 people, so Virginia had 2.68 times the number of people. And yet Hamilton apportionment would have given 10 seats to Virginia and 3 to Maine in 1900, but 9 to Virginia and 4 to Maine in 1901.

Central to this paradox is that even though Virginia was growing faster than Maine, the rest of the nation was growing faster still, and proportionally Virginia lost more because it was the larger state. But it is still paradoxical for a state to lose a representative to a second state that is both smaller in population and growing less rapidly each census.[12]

The Hill method can be shown to suffer from neither the Alabama paradox nor the Population paradox. That it avoids these paradoxical behaviors and that it seeks to minimize a meaningful measure of inequality led to its adoption in the US.[13]

Understanding the modern Hill method in practice

Since 1930, the US has used the Hill method to apportion seats for the House of Representatives. But as described above, it may be hard to understand how to actually apply the Hill method. Recall that $P_i$ is the population of state $i$, and $R_i$ is the number of representatives allocated to state $i$. The Hill method seeks to minimize
$$
\frac{P_i/R_i - P_j/R_j}{P_j/R_j} = \frac{P_i/R_i}{P_j/R_j} - 1
$$

whenever $P_i/R_i > P_j/R_j$. Stated differently, the Hill method seeks to guarantee the smallest relative differences in constituency size.
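
To make this quantity concrete, here is a minimal Python sketch (the function name and the toy populations are made up for illustration) that computes the largest pairwise relative difference in constituency size for a proposed allocation; this is the pairwise measure the Hill method tries to keep small:

```python
from itertools import combinations

def max_relative_difference(populations, reps):
    """Largest pairwise relative difference in constituency size P_i / R_i.

    `populations` and `reps` are dicts keyed by state name. The relative
    difference between positive quantities a and b is |a - b| / min(a, b),
    which equals (P_i/R_i)/(P_j/R_j) - 1 when P_i/R_i > P_j/R_j.
    """
    sizes = {state: populations[state] / reps[state] for state in populations}
    return max(
        abs(sizes[a] - sizes[b]) / min(sizes[a], sizes[b])
        for a, b in combinations(sizes, 2)
    )

# Toy example: three hypothetical states sharing six seats.
print(max_relative_difference(
    {"A": 500_000, "B": 300_000, "C": 200_000},
    {"A": 3, "B": 2, "C": 1},
))
```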

We can work out a different way of understanding this apportionment that is easier to implement in practice.

Suppose that we have allocated all of the representatives, so that each state $j$ has $R_j$ representatives, and suppose that this allocation successfully minimizes relative differences in constituency size. Take two different states $i$ and $j$ with $P_i/R_i > P_j/R_j$. (If no such pair of states exists, then the allocation is perfect.)

We can ask whether it would be a good idea to move one representative from state $j$ to state $i$, since state $j$'s constituencies are smaller. This can be thought of as working with $R_i' = R_i + 1$ and $R_j' = R_j - 1$. If this transfer lessens the inequality, then it should be made; but since we are supposing that the allocation already minimizes relative differences in constituency size, the inequality after the transfer must be at least as large. This necessarily means that $P_j/R_j' > P_i/R_i'$ (since otherwise the relative difference between these two states would be strictly smaller) and
$$
\frac{P_j R_i'}{P_i R_j'} - 1 \geq \frac{P_i R_j}{P_j R_i} - 1
$$

(since the relative difference must be at least as large). This is equivalent to
$$
\frac{P_j(R_i+1)}{P_i(R_j-1)} \geq \frac{P_iR_j}{P_jR_i}
\iff
\frac{P_j^2}{(R_j-1)R_j} \geq \frac{P_i^2}{R_i(R_i+1)}.
$$

As every variable is positive, we can rewrite this as
$$
\frac{P_j}{\sqrt{(R_j - 1)R_j}} \geq \frac{P_i}{\sqrt{R_i(R_i+1)}}. \tag{2}
$$

We’ve shown that $(2)$ must hold whenever $P_i/R_i > P_j/R_j$ in a system that minimizes relative differences in constituency size. But in fact it must hold for all pairs of states $i$ and $j$.

Clearly it holds if $i = j$, since then the denominator on the left is strictly smaller.

If we are in the case when $P_j/R_j > P_i/R_i$, then we necessarily have the chain $P_j/(R_j - 1) > P_j/R_j > P_i/R_i > P_i/(R_i + 1)$. Multiplying the outer inequality $P_j/(R_j - 1) > P_i/(R_i + 1)$ by the inner inequality $P_j/R_j > P_i/R_i$ and taking square roots shows that $(2)$ holds in this case as well.

This inequality shows that the greatest obstruction to being perfectly apportioned as per Hill’s method is the largest fraction
$$ \frac{P_i}{\sqrt{R_i(R_i+1)}} $$
being too large. (Some call this term the Hill rank-index.)

An iterative Hill apportionment

This observation leads to the following iterative construction of a Hill apportionment. Initially, assign every state 1 representative (since by the Constitution, each state gets at least one representative). Then, given an apportionment for $n$ seats, we can get an apportionment for $n + 1$ seats by assigning the additional seat to the state $i$ which maximizes the Hill rank-index $P_i/\sqrt{R_i(R_i+1)}$.

Further, it can be shown that there is a unique Hill apportionment (except in the case of ties in the Hill rank-index, which are exceedingly rare in practice), and the iterative construction above produces it.

This is very quickly and easily implemented in code. In a later note, I will share the code I used to compute the various data for this note, as well as an implementation of Hill apportionment.
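
As a preview, here is a minimal Python sketch of the iterative procedure described above (the function name, the heap-based bookkeeping, and the toy populations are illustrative assumptions, not the code referenced in this note):

```python
import heapq

def hill_apportionment(populations, total_seats=435):
    """Apportion `total_seats` seats by the iterative Hill (Huntington-Hill) method.

    `populations` maps state name -> population. Every state starts with one
    seat; each remaining seat goes to the state with the largest Hill
    rank-index P_i / sqrt(R_i * (R_i + 1)).
    """
    reps = {state: 1 for state in populations}
    # Max-heap via negated rank-indices; with R_i = 1 the index is P_i / sqrt(2).
    heap = [(-pop / (1 * 2) ** 0.5, state) for state, pop in populations.items()]
    heapq.heapify(heap)
    for _ in range(total_seats - len(populations)):
        _, state = heapq.heappop(heap)      # state with the largest rank-index
        reps[state] += 1
        r = reps[state]
        heapq.heappush(heap, (-populations[state] / (r * (r + 1)) ** 0.5, state))
    return reps

# Toy example: three hypothetical states sharing ten seats.
print(hill_apportionment({"A": 500_000, "B": 300_000, "C": 200_000}, total_seats=10))
```

Run with actual census populations and 435 seats, this procedure should reproduce the post-1930 apportionments (up to the exceedingly rare ties mentioned above).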

Additional notes: Consequences of the 1870 and 1990 Apportionments

The 1870 Apportionment

Officially, Dean’s method of apportionment has never been used. But it was perhaps used in 1870 without being described as such. Hamilton’s method was officially in place and the size of the House was agreed to be 292, but the actual apportionment that occurred agreed with Dean’s method, not Hamilton’s. Specifically, New York and Illinois were each given one fewer seat than Hamilton’s method would have given, while New Hampshire and Florida were each given one additional seat.

There are many circumstances surrounding the 1870 census and apportionment that make this a particularly convoluted time. Firstly, the US had just experienced its Civil War, in which millions of people died and millions of others moved or were displaced. Animosity and reconstruction were both in full swing. Secondly, the US ratified the 14th Amendment in 1868, so that suddenly the counted populations of Southern states grew as former slaves were finally counted fully.

One might think that having two pairs of states swap a representative would be mostly inconsequential. But this difference (using Dean’s method instead of the agreed-upon Hamilton method) changed the result of the 1876 presidential election. In this election, Samuel Tilden won New York while Rutherford B. Hayes won Illinois, New Hampshire, and Florida. As a result, Tilden received one fewer electoral vote and Hayes received one additional electoral vote, and the final electoral tally had Hayes winning with 185 votes to Tilden’s 184.

There is still one further complicating factor, however, that makes this yet more convoluted. The 1876 election is perhaps the most disputed presidential election. In Florida, Louisiana, and South Carolina, each party reported that its candidate had won the state. Legitimacy was in question, and it’s widely believed that a deal was struck between the Democratic and Republican parties (see wikipedia and 270 to win). As a result of this deal, the Republican candidate Rutherford B. Hayes would gain all disputed votes and remove federal troops (which had been propping up Reconstruction efforts) from the South. This marked the end of the “Reconstruction” period, and allowed the rise of the Democratic Redeemers (and their rampant black voter disenfranchisement) in the South.

The 1990 Apportionment

Similar in consequence though not in controversy, the apportionment of 1990 influenced the results of the 2000 presidential election between George W. Bush and Al Gore (the 2000 census was not complete before the election took place, so the election occurred with electoral college sizes based on the 1990 census). The modern Hill apportionment method was used, as it has been since 1930. But interestingly, if the originally proposed Hamilton method of 1792 had been used, the electoral college would have been tied at 269.[14] If Jefferson’s method had been used, then Gore would have won with 271 votes to Bush’s 266.

These decisions have far-reaching consequences!

Sources

  1. Balinski, Michel L., and H. Peyton Young. Fair representation: meeting the ideal of one man, one vote. Brookings Institution Press, 2010.
  2. Balinski, Michel L., and H. Peyton Young. “The quota method of apportionment.” The American Mathematical Monthly 82.7 (1975): 701-730.
  3. Bliss, G. A., Brown, E. W., Eisenhart, L. P., & Pearl, R. (1929). Report to the President of the National Academy of Sciences. February, 9, 1015-1047.
  4. Crocker, R. House of Representatives Apportionment Formula: An Analysis of Proposals for Change and Their Impact on States. DIANE Publishing, 2011.
  5. Huntington, Edward V. “The Apportionment of Representatives in Congress.” Transactions of the American Mathematical Society 30 (1928): 85–110.
  6. Peskin, Allan. “Was there a Compromise of 1877.” The Journal of American History 60.1 (1973): 63-75.
  7. US Census Results
  8. US Constitution
  9. US Congressional Record, as collected at https://memory.loc.gov/ammem/amlaw/lwaclink.html
  10. George Washington’s collected papers, as archived at https://web.archive.org/web/20090124222206/http://gwpapers.virginia.edu/documents/presidential/veto.html
  11. Wikipedia on the Compromise of 1877, at https://en.wikipedia.org/wiki/Compromise_of_1877
  12. Wikipedia on Arthur Vandenberg, at https://en.wikipedia.org/wiki/Arthur_Vandenberg
Posted in Data, Expository, Mathematics, Politics, Story

African clawed frog

In the early 1930s, Hillel Shapiro and Harry Zwarenstein, two South African researchers, discovered that injecting a pregnant woman’s urine into an African clawed frog (Xenopus laevis) caused the frog to ovulate within the next 18 hours. This became a common (and apparently reliable) pregnancy test until more modern pregnancy tests started to become available in the 1960s.

Behold the marvels of science! (Unless you’re a frog).

When I first heard this, I was both astounded and… astounded. How would you discover this? How many things were injected into how many animals before someone realized this would happen?

Sources

  • https://en.wikipedia.org/wiki/African_clawed_frog

  • Shapiro, Hillel; Zwarenstein, Harry (March 1935). “A test for the early diagnosis of pregnancy”. South African Medical Journal. 9: 202.

  • Shapiro, H. A.; Zwarenstein, H. (1934-05-19). “A Rapid Test for Pregnancy on Xenopus lævis”. Nature. 133 (3368): 762

Before frogs, there were mice

In 1928, early endocrinologist Bernhard Zondek and biologist Selmar Aschheim were studying hormones and human biology. As far as I can tell, they hypothesized that hormones associated with pregnancy might still be present in pregnant women’s urine. They decided to see if other animals would react to the presence of such a hormone, so they then went and collected the urine of pregnant women in order to… test their hypothesis.[1]

It turns out that they were right. The hormone human chorionic gonadotropin (hCG) is produced by the placenta shortly after a woman becomes pregnant, and this hormone is present in the urine of pregnant women. But as far as I can tell, hCG itself wasn’t identified until the 1950s, so there was still some guesswork going on. Nonetheless, detecting hCG is central to many home pregnancy tests today. Zondek and Aschheim developed a test (creatively referred to as the Aschheim-Zondek test[2]) that worked like this:

  1. Take a young female mouse between 3 and 5 weeks old. Actually, take about 5 mice, as one should expect that a few of the mice won’t survive long enough for the test to be complete.
  2. Inject urine into the bloodstream of each mouse three times a day for three days.
  3. Two days after the final injection, kill the surviving mice and dissect them.[3]
  4. If the ovaries are enlarged (i.e., 2-3 times normal size) and show red dots, then the urine comes from a pregnant woman. If the ovaries are merely enlarged but there are no red dots, then the woman isn’t pregnant.[4]

In a trial, this test was performed on 2000 different women and had a 98.9 percent successful identification rate.

From this perspective, it’s not as surprising that young biologists and doctors sought to inject pregnant women’s urine into various animals to see what would happen. In many ways, frogs were superior to mice, as one doesn’t need to kill the frog to determine whether the woman is pregnant.

Sources

  • Ettinger, G. H., G. L. M. Smith, and E. W. McHenry. “The Diagnosis of Pregnancy with the Aschheim-Zondek Test.” Canadian Medical Association Journal 24 (1931): 491–2.
  • Evans, Herbert, and Miriam Simpson. “Aschheim-Zondek Test for Pregnancy–Its Present Status.” California and Western Medicine 32 (1930): 145.

And rabbits too

Maurice Friedman, at the University of Pennsylvania, discovered that one could use rabbits instead of mice. (Aside from the animal, it’s essentially the same test).

Apparently this became a very common pregnancy test in the United States. A common misconception arose that the rabbit’s death indicated pregnancy, and people might say that “the rabbit died” to mean that they were pregnant.

But in fact, just like the mice, all rabbits used for these pregnancy tests died, as they were dissected.[5]

Sources

  • Friedman, M. H. (1939). The assay of gonadotropic extracts in the post-partum rabbit. Endocrinology, 24(5), 617-625.
Posted in Story