How do we decide how many representatives there are for each state?

The US House of Representatives has 435 voting members (and 6 non-voting members: one each from Washington DC, Puerto Rico, American Samoa, Guam, the Northern Mariana Islands, and the US Virgin Islands). Roughly speaking, the higher the population of a state is, the more representatives it should have.

But what does this really mean?

If we looked at the US Constitution to make this clear, we would find little help. The third clause of Article I, Section II of the Constitution says

Representatives and direct Taxes shall be apportioned among the several States which may be included within this Union, according to their respective Numbers … The number of Representatives shall not exceed one for every thirty thousand, but each state shall have at least one Representative.

This doesn’t give clarity.1 In fact, uncertainty surrounding proper apportionment of representatives led to the first presidential veto.

The Apportionment Act of 1792

According to the 1790 Census, there were 3199415 free people and 694280 slaves in the United States.2

When Congress sat to decide on apportionment in 1792, they initially computed the total (weighted) population of the United States to be 3199415 + (3/5)⋅694280 = 3615983. They noted that the Constitution says there should be no more than 1 representative for every 30000, so they divided the total population by 30000, getting 3615983/30000 ≈ 120.5, and rounded down.

Thus there were to be 120 representatives. If one takes each state and divides its population by 30000, one sees that the states should get the following numbers of representatives3

State           ideal    rounded down
Vermont         2.851    2
New Hampshire   4.727    4
Maine           3.218    3
Massachusetts   12.62    12
Rhode Island    2.281    2
Connecticut     7.894    7
New York        11.05    11
New Jersey      5.985    5
Pennsylvania    14.42    14
Delaware        1.851    1
Maryland        9.283    9
Virginia        21.01    21
Kentucky        2.290    2
North Carolina  11.78    11
South Carolina  6.874    6
Georgia         2.361    2

But here is a problem: the total number of rounded-down representatives is only 112. So there are 8 more representatives to give out. How did they decide which states to assign these additional representatives to? They chose the 8 states with the largest fractional “ideal” parts:

  1. New Jersey (0.985)
  2. Connecticut (0.894)
  3. South Carolina (0.874)
  4. Vermont (0.851)
  5. Delaware (0.851)
  6. Massachusetts+Maine (0.838)
  7. North Carolina (0.78)
  8. New Hampshire (0.727)

(Maine was part of Massachusetts at the time, which is why I combine their fractional parts). Thus the original proposed apportionment gave each of these states one additional representative. Is this a reasonable conclusion?

Perhaps. But these 8 states each ended up having more than 1 representative for each 30000. Was this limit in the Constitution meant country-wide (so that 120 across the country is a fine number) or state-by-state (so that, for instance, Delaware, which had 59000 total population, should not be allowed to have more than 1 representative)?

There is the other problem that New Jersey, Connecticut, Vermont, New Hampshire, and Massachusetts were undoubtedly Northern states. Thus Southern representatives asked, Is it not unfair that the fractional apportionment favours the North?4

Regardless of the exact reasoning, the Secretary of State Thomas Jefferson and Attorney General Edmund Randolph (both from Virginia) urged President Washington to veto the bill, and he did. This was the first use of the Presidential veto.

Afterwards, Congress got together and decided on starting with 33000 people per representative and ignoring fractional parts entirely. This method became known as the Jefferson Method of Apportionment, and it was used in the US through the 1830 census. The subtle part of the method involves deciding on the number 33000. In the US, the exact number of representatives sometimes changed from census to census. This number is closely related to the chosen population-per-representative, and both were often settled through political maneuvering as opposed to exact decision.
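
To make this concrete, here is a minimal sketch in python of a Jefferson-style divisor apportionment. The two state figures below are approximate weighted populations reconstructed from the “ideal” column in the table above (ideal share times 30000), not official census values.

def jefferson_apportion(populations, divisor=33000):
    """Jefferson's divisor method: one seat per full `divisor` people, with
    fractional parts ignored and the constitutional minimum of one seat."""
    return {state: max(1, pop // divisor) for state, pop in populations.items()}

# Approximate weighted 1790 populations: (ideal share from the table) * 30000.
print(jefferson_apportion({"Virginia": 630300, "Delaware": 55530}))
# {'Virginia': 19, 'Delaware': 1}

Sweeping over possible divisors (equivalently, over target House sizes) is exactly where the political maneuvering mentioned above enters.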

As an aside, it’s interesting to note that this method of apportionment is widely used in the rest of the world, even though it was abandoned in the US.5 In fact, it is still used in Albania, Angola, Argentina, Armenia, Aruba, Austria, Belgium, Bolivia, Brazil, Bulgaria, Burundi, Cambodia, Cape Verde, Chile, Colombia, Croatia, the Czech Republic, Denmark, the Dominican Republic, East Timor, Ecuador, El Salvador, Estonia, Fiji, Finland, Guatemala, Hungary, Iceland, Israel, Japan, Kosovo, Luxembourg, Macedonia, Moldova, Monaco, Montenegro, Mozambique, Netherlands, Nicaragua, Northern Ireland, Paraguay, Peru, Poland, Portugal, Romania, San Marino, Scotland, Serbia, Slovenia, Spain, Switzerland, Turkey, Uruguay, Venezuela and Wales — as well as in many countries for election to the European Parliament.


Measuring the fairness of an apportionment method

At the core of different ideas for apportionment is fairness. How can we decide if an apportionment is fair?

We’ll consider this question in the context of the post-1911 United States — after the number of seats in the House of Representatives was established. This number was set at 433, but with the proviso that anticipated new states Arizona and New Mexico would each come with an additional seat.6

So given that there are 435 seats to apportion, how might we decide if an apportionment is fair? Fundamentally, this should relate to the number of people each representative actually represents.

For example, in the 1792 apportionment, the single Delaware representative was there to represent all 55000 of the state’s (weighted) population, while each of the two Rhode Island representatives corresponded to about 34000 Rhode Islanders. Within the House of Representatives, it was as though the voice of each Delawarean only counted 61 percent as much as the voice of each Rhode Islander.7

The number of people each representative actually represents is at the core of the notion of fairness — but even then, it’s not obvious how to measure fairness with it.

Suppose we enumerate the states, so that $S_i$ refers to state $i$. We’ll also denote by $P_i$ the population of state $i$, and we’ll let $R_i$ denote the number of representatives allotted to state $i$.

In the ideal scenario, every representative would represent the exact same number of people. That is, we would have
$$\text{pop. per rep. in state i}
= \frac{P_i}{R_i}
= \frac{P_j}{R_j}
= \text{pop. per rep. in state j}$$

for every pair of states i and j. But this won’t ever happen in practice.

Generally, we should expect $\frac{P_i}{R_i} \neq \frac{P_j}{R_j}$ for every pair of distinct states. If
$$
\frac{P_i}{R_i} > \frac{P_j}{R_j}, \tag{1}
$$

then we can say that each representative in state i represents more people, and thus those people have a diluted vote.

Amounts of Inequality

There are lots of pairs of states. How do we actually measure these inequalities? This would make an excellent question in a statistics class (illustrating how one can answer the same question in different, equally reasonable ways) or even a civics class.

A few natural ideas emerge:

  • We might try to minimize the differences of constituency size: $\left \lvert \frac{P_i}{R_i} - \frac{P_j}{R_j} \right \rvert$.
  • We might try to minimize the differences in per capita representation: $\left \lvert \frac{R_i}{P_i} - \frac{R_j}{P_j} \right \rvert$.
  • We might take overall size into account, and try to minimize both the relative difference in constituency size and the relative difference in per capita representation.

This last one needs a bit of explanation. Define the relative difference between two numbers x and y to be
$$
\frac{\lvert x - y \rvert}{\min(x, y)}.
$$

Suppose that for a pair of states we have that $(1)$ holds, i.e. that representatives in state $j$ have smaller constituencies than those in state $i$ (and therefore people in state $j$ have more powerful votes). Then the relative difference in constituency size is
$$
\frac{P_i/R_i - P_j/R_j}{P_j/R_j} = \frac{P_i/R_i}{P_j/R_j} - 1.
$$

The relative difference in per capita representation is
$$
\frac{R_j/P_j - R_i/P_i}{R_i/P_i} = \frac{R_j/P_j}{R_i/P_i} - 1 =
\frac{P_i/R_i}{P_j/R_j} - 1.
$$

Thus these are the same! By accounting for differences in size by taking relative proportions, we see that minimizing relative difference in constituency size and minimizing relative difference in per capita representation are actually the same.
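
As a quick numeric sanity check of this identity, with made-up populations and seat counts:

def relative_difference(x, y):
    # The relative difference defined above: |x - y| / min(x, y).
    return abs(x - y) / min(x, y)

# Made-up figures: state i has larger constituencies than state j.
P_i, R_i = 1000000, 9
P_j, R_j = 400000, 4

constituency = relative_difference(P_i / R_i, P_j / R_j)
per_capita = relative_difference(R_i / P_i, R_j / P_j)
print(constituency, per_capita)   # both print 0.11111...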

All three of these measures seem reasonable at first inspection. Unfortunately, they all give different apportionments (and all are different from Jefferson’s scheme — though to be fair, Jefferson’s scheme doesn’t seek to minimize inequality and there is no reason to think it should behave the same).

Each of these ideas leads to a different apportionment scheme, and in fact each has a name.

  • Minimizing differences in constituency size is the Dean method.
  • Minimizing differences in per capita representation is the Webster method.
  • Minimizing relative differences between both constituency size and per capita representation is the Hill (or sometimes Huntington-Hill) method.

Further, each of these schemes has been used at some time in US history. Webster’s method was used immediately after the 1840 census, but for the 1850 census the original Alexander Hamilton scheme (the scheme vetoed by Washington in 1792) was used. In fact, the Apportionment Act of 1850 set the Hamilton method as the primary method, and this was nominally used until 1900.8 The Webster method was used again immediately after the 1910 census. Due to claims of incomplete and inaccurate census counts, no apportionment occurred based on the 1920 census.9

In 1929 an automatic apportionment act was passed.10 In it, up to three different apportionment schemes would be provided to Congress after each census, based on a total of 435 seats:

  1. The apportionment that would come from whatever scheme was most recently used. (In 1930, this would be the Webster method).
  2. The apportionment that would come from the Webster method.
  3. The apportionment that would come from the newly introduced Hill method.

If one reads congressional discussion from the time, then it will be good to note that Webster’s method is sometimes called the method of major fractions and Hill’s method is sometimes called the method of equal proportions. Further, in a letter written by Bliss, Brown, Eisenhart, and Pearl of the National Academy of Sciences, Hill’s method was declared to be the recommendation of the Academy.11 From 1930 on, Hill’s method has been used.

Why use the Hill method?

The Hamilton method led to a few paradoxes and highly counterintuitive behavior that many representatives found disagreeable. In 1880, a paradox now called the Alabama paradox was discovered. When deciding on the number of representatives that should be in the House, it was observed that if the House had 299 members, Alabama would have 8 representatives, but if the House had 300 members, Alabama would have 7 representatives — that is, making one more seat available led to Alabama receiving one fewer seat.

The problem is the fluctuating relationships between the many fractional parts of the ideal number of representatives per state (similar to those tallied in the table in the section The Apportionment Act of 1792).
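
To see this behavior concretely, here is a minimal sketch of Hamilton (largest-remainder) apportionment, together with a toy example (invented populations, not the historical Alabama figures) that exhibits the paradox.

def hamilton_apportion(populations, house_size):
    """Hamilton's method: give each state the floor of its ideal share, then
    hand the leftover seats out in order of largest fractional part."""
    total = sum(populations.values())
    ideal = {s: p * house_size / total for s, p in populations.items()}
    seats = {s: int(q) for s, q in ideal.items()}
    leftover = house_size - sum(seats.values())
    by_fraction = sorted(ideal, key=lambda s: ideal[s] - seats[s], reverse=True)
    for s in by_fraction[:leftover]:
        seats[s] += 1
    return seats

toy = {"A": 60, "B": 60, "C": 20}      # invented populations
print(hamilton_apportion(toy, 10))     # {'A': 4, 'B': 4, 'C': 2}
print(hamilton_apportion(toy, 11))     # {'A': 5, 'B': 5, 'C': 1} -- C loses a seat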

Another paradox was discovered in 1900, known as the Population paradox. This is a scenario in which a state with a large population and rapid growth can lose a seat to a state with a small population and smaller population growth. In 1900, Virginia lost a seat to Maine, even though Virginia’s population was larger and growing much more rapidly.

In particular, in 1900, Virginia had 1854184 people and Maine had 694466 people, so Virginia had 2.67 times the population of Maine. In 1901, Virginia had 1873951 people and Maine had 699114 people, so Virginia had 2.68 times the population of Maine. And yet Hamilton apportionment would have given 10 seats to Virginia and 3 to Maine in 1900, but 9 to Virginia and 4 to Maine in 1901.

Central to this paradox is that even though Virginia was growing faster than Maine, the rest of the nation was growing faster still, and proportionally Virginia lost more ground because it was the larger state. But it’s still paradoxical for a state to lose a representative to a second state that is both smaller in population and growing less rapidly from census to census.12

The Hill method can be shown to not suffer from either the Alabama paradox or the Population paradox. That it doesn’t suffer from these paradoxical behaviours and that it seeks to minimize a meaningful measure of inequality led to its adoption in the US.13

Understanding the modern Hill method in practice

Since 1930, the US has used the Hill method to apportion seats for the House of Representatives. But as described above, it may be hard to understand how to actually apply the Hill method. Recall that $P_i$ is the population of state $i$, and $R_i$ is the number of representatives allocated to state $i$. The Hill method seeks to minimize
$$
\frac{P_i/R_i - P_j/R_j}{P_j/R_j} = \frac{P_i/R_i}{P_j/R_j} - 1
$$

whenever $P_i/R_i > P_j/R_j$. Stated differently, the Hill method seeks to guarantee the smallest relative differences in constituency size.

We can work out a different way of understanding this apportionment that is easier to implement in practice.

Suppose that we have allocated all of the representatives, so that state $j$ has $R_j$ representatives, and suppose that this allocation successfully minimizes relative differences in constituency size. Take two different states $i$ and $j$ with $P_i/R_i > P_j/R_j$. (If this isn’t possible, then the allocation is perfect).

We can ask if it would be a good idea to move one representative from state $j$ to state $i$, since state $j$’s constituencies are smaller. This can be thought of as working with $R_i' = R_i + 1$ and $R_j' = R_j - 1$. If this transfer lessens the inequality then it should be made — but since we are supposing that the allocation already minimizes relative difference in constituency size, the inequality after the transfer must be at least as large. This necessarily means that $P_j/R_j' > P_i/R_i'$ (since otherwise the relative difference is strictly smaller) and
$$
\frac{P_j R_i'}{P_i R_j'} - 1 \geq \frac{P_i R_j}{P_j R_i} - 1
$$

(since the relative difference must be at least as large). This is equivalent to
$$
\frac{P_j(R_i+1)}{P_i(R_j-1)} \geq \frac{P_iR_j}{P_jR_i}
\iff
\frac{P_j^2}{(R_j-1)R_j} \geq \frac{P_i^2}{R_i(R_i+1)}.
$$

As every variable is positive, we can rewrite this as
$$
\frac{P_j}{\sqrt{(R_j - 1)R_j}} \geq \frac{P_i}{\sqrt{R_i(R_i+1)}}. \tag{2}
$$

We’ve shown that $(2)$ must hold whenever $P_i/R_i > P_j/R_j$ in a system that minimizes relative difference in constituency size. But in fact it must hold for all pairs of states $i$ and $j$.

Clearly $(2)$ holds if $i = j$, as then the denominator on the left is strictly smaller (so the left side is strictly larger).

If we are in the case when $P_j/R_j > P_i/R_i$, then we necessarily have the chain $P_j/(R_j - 1) > P_j/R_j > P_i/R_i > P_i/(R_i + 1)$. Multiplying the inner and outer inequalities shows that $(2)$ holds trivially in this case.

This inequality shows that the greatest obstruction to being perfectly apportioned as per Hill’s method is the largest fraction
$$ \frac{P_i}{\sqrt{R_i(R_i+1)}} $$
being too large. (Some call this term the Hill rank-index).

An iterative Hill apportionment

This observation leads to the following iterative construction of a Hill apportionment. Initially, assign every state 1 representative (since by the Constitution, each state gets at least one representative). Then, given an apportionment for $n$ seats, we can get an apportionment for $n + 1$ seats by assigning the additional seat to whichever state $i$ maximizes the Hill rank-index $P_i/\sqrt{R_i(R_i+1)}$.

Further, it can be shown that Hill’s method yields a unique apportionment (except in the case of ties in the Hill rank-index, which are exceedingly rare in practice), and that the iterative procedure above produces it.

This is very quickly and easily implemented in code. In a later note, I will share the code I used to compute the various data for this note, as well as an implementation of Hill apportionment.
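
For concreteness, here is a minimal sketch of that iterative procedure (a simple illustration, not the exact code used for the data in this note).

from math import sqrt

def hill_apportion(populations, house_size=435):
    """Iterative Hill (Huntington-Hill) apportionment: start each state with
    one seat, then repeatedly give the next seat to the state with the largest
    rank-index P / sqrt(R * (R + 1))."""
    seats = {state: 1 for state in populations}
    for _ in range(house_size - len(populations)):
        nxt = max(populations,
                  key=lambda s: populations[s] / sqrt(seats[s] * (seats[s] + 1)))
        seats[nxt] += 1
    return seats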

Additional notes: Consequences of the 1870 and 1990 Apportionments

The 1870 Apportionment

Officially, Dean’s method of apportionment has never been used. But it was perhaps used in 1870 without being described. Officially, Hamilton’s method was in place and the size of the House was agreed to be 292. But the actual apportionment that occurred agreed with Dean’s method, not Hamilton’s method. Specifically, New York and Illinois were each given one fewer seat than Hamilton’s method would have given, while New Hampshire and Florida were given one additional seat each.

There are many circumstances surrounding the 1870 census and apportionment that make this a particularly convoluted time. Firstly, the US had just experienced its Civil War, where millions of people died and millions others moved or were displaced. Animosity and reconstruction were both in full swing. Secondly, the US passed the 14th amendment in 1868, so that suddenly the populations of Southern states grew as former slaves were finally allowed to be counted fully.

One might think that having two pairs of states swap a representative would be mostly inconsequential. But this difference (using Dean’s method instead of the agreed-upon Hamilton method) changed the result of the 1876 Presidential election. In this election, Samuel Tilden won New York while Rutherford B. Hayes won Illinois, New Hampshire, and Florida. As a result, Tilden received one fewer electoral vote and Hayes received one additional electoral vote — and the total electoral voting in the end had Hayes winning with 185 votes to Tilden’s 184.

There is still one further complicating factor, however, that makes this yet more convoluted. The 1876 election is perhaps the most disputed presidential election. In Florida, Louisiana, and South Carolina, each party reported that its candidate had won the state. Legitimacy was in question, and it’s widely believed that a deal was struck between the Democratic and Republican parties (see wikipedia and 270 to win). As a result of this deal, the Republican candidate Rutherford B. Hayes would gain all disputed votes and remove federal troops (which had been propping up Reconstruction efforts) from the South. This marked the end of the Reconstruction period, and allowed the rise of the Democratic Redeemers (and their rampant black voter disenfranchisement) in the South.

The 1990 Apportionment

Similar in consequence though not in controversy, the apportionment of 1990 influenced the results of the 2000 presidential election between George W. Bush and Al Gore (the 2000 census was not complete before the election took place, so the election occurred with the 1990 electoral college sizes). The modern Hill apportionment method was used, as it has been since 1930. But interestingly, if the originally proposed Hamilton method of 1792 had been used, the electoral college would have been tied at 269 votes each.14 If Jefferson’s method had been used, then Gore would have won with 271 votes to Bush’s 266.

These decisions have far-reaching consequences!

Sources

  1. Balinski, Michel L., and H. Peyton Young. Fair representation: meeting the ideal of one man, one vote. Brookings Institution Press, 2010.
  2. Balinski, Michel L., and H. Peyton Young. “The quota method of apportionment.” The American Mathematical Monthly 82.7 (1975): 701-730.
  3. Bliss, G. A., Brown, E. W., Eisenhart, L. P., & Pearl, R. (1929). Report to the President of the National Academy of Sciences. February, 9, 1015-1047.
  4. Crocker, R. House of Representatives Apportionment Formula: An Analysis of Proposals for Change and Their Impact on States. DIANE Publishing, 2011.
  5. Huntington, The Apportionment of Representatives in Congress, Transactions of the American Mathematical Society 30 (1928), 85–110.
  6. Peskin, Allan. “Was there a Compromise of 1877.” The Journal of American History 60.1 (1973): 63-75.
  7. US Census Results
  8. US Constitution
  9. US Congressional Record, as collected at https://memory.loc.gov/ammem/amlaw/lwaclink.html
  10. George Washington’s collected papers, as archived at https://web.archive.org/web/20090124222206/http://gwpapers.virginia.edu/documents/presidential/veto.html
  11. Wikipedia on the Compromise of 1877, at https://en.wikipedia.org/wiki/Compromise_of_1877
  12. Wikipedia on Arthur Vandenberg, at https://en.wikipedia.org/wiki/Arthur_Vandenberg

African clawed frog

In the early 1930s, Hillel Shapiro and Harry Zwarenstein, two South African researchers, discovered that injecting a pregnant woman’s urine into an African clawed frog (Xenopus laevis) caused the frog to ovulate within the next 18 hours. This became a common (and apparently reliable) pregnancy test until more modern pregnancy tests started to become available in the 1960s.

Behold the marvels of science! (Unless you’re a frog).

When I first heard this, I was both astounded and… astounded. How would you discover this? How many things were injected into how many animals before someone realized this would happen?

Sources

  • https://en.wikipedia.org/wiki/African_clawed_frog

  • Shapiro, Hillel; Zwarenstein, Harry (March 1935). "A test for the early diagnosis of pregnancy". South African Medical Journal. 9: 202.

  • Shapiro, H. A.; Zwarenstein, H. (1934-05-19). “A Rapid Test for Pregnancy on Xenopus lævis”. Nature. 133 (3368): 762

Before frogs, there were mice

In 1928, the early endocrinologist Bernhard Zondek and the biologist Selmar Aschheim were studying hormones and human biology. As far as I can tell, they hypothesized that hormones associated with pregnancy might still be present in pregnant women’s urine. They decided to see if other animals would react to the presence of this hormone, so they then went and collected the urine of pregnant women in order to… test their hypothesis.1

It turns out that they were right. The hormone human chorionic gonadotropin (hCG) is produced by the placenta shortly after a woman becomes pregnant, and this hormone is present in the urine of pregnant women. But as far as I can tell, hCG itself wasn’t identified until the 50s — so there was still some guesswork going on. Nonetheless, detecting hCG is the basis of many home pregnancy tests today. Zondek and Aschheim developed a test (creatively referred to as the Aschheim-Zondek test2) that worked like this:

  1. Take a young female mouse between 3 and 5 weeks old. Actually, take about 5 mice, as one should expect that a few of the mice won’t survive long enough for the test to be complete.
  2. Inject urine into the bloodstream of each mouse three times a day for three days.
  3. Two days after the final injection, kill the surviving mice and dissect them.3
  4. If the ovaries are enlarged (i.e. 2-3 times normal size) and show red dots, then the urine comes from a pregnant woman. If the ovaries are merely enlarged, but there are no red dots, then the woman isn’t pregnant.4

In a trial, this test was performed on 2000 different women and had a 98.9 percent successful identification rate.

From this perspective, it’s not as surprising that young biologists and doctors sought to inject pregnant women’s urine into various animals to see what would happen. In many ways, frogs were superior to mice, as one doesn’t need to kill the frog to determine if the woman is pregnant.

Sources

  • Ettinger, G. H., G. L. M. Smith, and E. W. McHenry. “The Diagnosis of Pregnancy with the Aschheim-Zondek Test.” Canadian Medical Association Journal 24 (1931): 491–2.
  • Evans, Herbert, and Miriam Simpson. “Aschheim-Zondek Test for Pregnancy–Its Present Status.” California and Western Medicine 32 (1930): 145.

And rabbits too

Maurice Friedman, at the University of Pennsylvania, discovered that one could use rabbits instead of mice. (Aside from the animal, it’s essentially the same test).

Apparently this became a very common pregnancy test in the United States. A common misconception arose that the rabbit’s death indicated pregnancy. People might say that “the rabbit died” to mean that they were pregnant.

But in fact, just like mice, all rabbits used for these pregnancy tests died, as they were dissected.5

Sources

  • Friedman, M. H. (1939). The assay of gonadotropic extracts in the post-partum rabbit. Endocrinology, 24(5), 617-625.

Talk: Finding Congruent Numbers, Arithmetic Progressions of Squares, and Triangles

Here are some notes for my talk Finding Congruent Numbers, Arithmetic Progressions of Squares, and Triangles (an invitation to analytic number theory), which I’m giving on Tuesday 26 February at Macalester College.

The slides for my talk are available here.

The overarching idea of the talk is to explore the deep relationship between

  1. right triangles with rational side lengths and area $n$,
  2. three-term arithmetic progressions of squares with common difference $n$, and
  3. rational points on the elliptic curve $Y^2 = X^3 - n^2 X$.

If one of these exists, then all three exist, and in fact there are one-to-one correspondences between each of them. Such an $n$ is called a congruent number.

By understanding this relationship, we also describe the ideas and results in the paper A Shifted Sum for the Congruent Number Problem, which I wrote jointly with Tom Hulse, Chan Ieong Kuan, and Alex Walker.

Towards the end of the talk, I say that in practice, the best way to decide if a (reasonably sized) number is congruent is through elliptic curves. Given a computer, we can investigate whether the number $n$ is congruent through a computer algebra system like sage.1

For the rest of this note, I’ll describe how one can use sage to determine whether a number is congruent, and how to use sage to add points on elliptic curves to generate more triangles corresponding to a particular congruent number.

Firstly, one needs access to sage. It’s free to install, but it’s quite large. The easiest way to begin using sage immediately is to use cocalc.com,  a free interface to sage (and other tools) that was created by William Stein, who also created sage.

In a sage session, we can create an elliptic curve through


> E6 = EllipticCurve([-36, 0])
> E6
Elliptic Curve defined by y^2 = x^3 - 36*x over Rational Field

More generally, to create the curve corresponding to whether or not $n$ is congruent, you can use


> n = 6   # (or anything you want)
> E = EllipticCurve([-n**2, 0])

We can ask sage whether our curve has many rational points by asking it to (try to) compute the rank.


> E6.rank()
1

If the rank is at least $1$, then there are infinitely many rational points on the curve and $n$ is a congruent number. If the rank is $0$, then $n$ is not congruent.2

For the curve $Y^2 = X^3 - 36 X$ corresponding to whether $6$ is congruent, sage returns that the rank is $1$. We can ask sage to try to find a rational point on the elliptic curve through


> E6.point_search(10)
[(-3 : 9 : 1)]

The 10 in this code is a limit on the complexity of the point. The precise definition isn’t important — using $10$ is a reasonable limit for us.

We see that this outputs something. When sage examines the elliptic curve, it uses the equation $Y^2 Z = X^3 - 36 X Z^2$ — it turns out that in many cases, it’s easier to perform computations when every term is a polynomial of the same degree. The coordinates it’s giving us are of the form $(X : Y : Z)$, which looks a bit odd. We can ask sage to return just the XY coordinates as well.


> Pt = E6.point_search(10)[0]  # The [0] means to return the first element of the list
> Pt.xy()
(-3, 9)

In my talk, I describe a correspondence between points on elliptic curves and rational right triangles. In the talk, it arises as the choice of coordinates. But what matters for us right now is that the correspondence taking a point $(x, y)$ on an elliptic curve to a triangle $(a, b, c)$ is given by
$$(x, y) \mapsto \Big( \frac{n^2-x^2}{y}, \frac{-2 \cdot x \cdot n}{y}, \frac{n^2 + x^2}{y} \Big).$$

We can write a sage function to perform this map for us, through


> def pt_to_triangle(P):
    x, y = P.xy()
    return (36 - x**2)/y, (-2*x*6/y), (36+x**2)/y

> pt_to_triangle(Pt)
(3, 4, 5)

This returns the $(3, 4, 5)$ triangle!
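
The function above hard-codes $n = 6$. Under the same correspondence, a version for general $n$ (a small variation on the function above) is


> def pt_to_triangle_general(P, n):
    x, y = P.xy()
    return ((n**2 - x**2)/y, -2*x*n/y, (n**2 + x**2)/y)

> pt_to_triangle_general(Pt, 6)
(3, 4, 5)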

Of course, we knew this triangle the whole time. But we can use sage to get more points. A very cool fact is that rational points on elliptic curves form a group under a sort of addition — we can add points on elliptic curves together and get more rational points. Sage is very happy to perform this addition for us, and then to see what triangle results.


> Pt2 = Pt + Pt
> Pt2.xy()
(25/4, -35/8)
> pt_to_triangle(Pt2)
(7/10, 120/7, -1201/70)

Another rational triangle with area $6$ is the $(7/10, 120/7, 1201/70)$ triangle. (You might notice that sage returned a negative hypotenuse, but it’s the absolute values that matter for the area). After scaling this to an integer triangle, we get the integer right triangle $(49, 1200, 1201)$ (and we can check that the squarefree part of the area is $6$).

Let’s do one more.


> Pt3 = Pt + Pt + Pt
> Pt3.xy()
(-1587/1369, -321057/50653)
> pt_to_triangle(Pt3)
(-4653/851, -3404/1551, -7776485/1319901)

That’s a complicated triangle! It may be fun to experiment some more — the triangles rapidly become very, very complicated. In fact, it was very important to the main result of our paper that these triangles become so complicated so quickly!


Writing a Python Script to be Used in Vim

It is no secret that I love vim. I’ve written and helped maintain a variety of vim plugins, but I’ve mostly restricted my attention to direct use of vimscript or to calling external tools and redirecting their output to a vim buffer.

Only recently did I learn how easy it is to use python to interact directly with vim objects through vim’s python interface. And it was so easy that this felt a bit like writing a quick shell script to facilitate some immediate task — it allows quick and dirty work.

In this note, I’ll review how I recently wrote a quick python function (to be used in vim) to facilitate conversion of python f-strings (which I love) into formatting compatible with python 3.5 (which I also love, but which is admittedly a bit behind).

Of course the real reason is to remind future me just how easy this is.

Description of the Problem

In python 3.6+, one can use f-strings to format strings. This syntax looks like

x = 4
print(f"Python has at least {x} many ways to format strings now.")

I love them, they’re great. But they are not backportable since they are a change to fundamental syntax. Thus systems relying on previous versions of python (be it python2 or even python3.5) can’t parse python files containing f-strings.

The goal is this: given a line containing an f-string, write an equivalent line containing no f-string.

The formula I choose is simple: use .format() instead. Thus given

s = f"string with {some} {variables}"

we will convert this to

s = "string with {} {}".format(some, variables)

I will assume that the string takes place on one line for simplicity (and because in my application this was almost always true), but that there are any number of variables included in the line.

Background on python in vim

Some good insight is gained through reading :h python-vim. In fact, this is essentially the only documentation of the python-vim interface (though it is pretty good documentation — perhaps the best way to learn this interface is to read some plugins which leverage the python connection).

I will assume that vim has been compiled with python3 support (which can be checked by looking for +python3 in vim --version). Then the key idea is that one can use :py3 to call python functions. For example,

:py3 print("Hello")

Being restricted to one line is inconvenient, and it is far more useful to use the form

:py3 << EOF
def say_hello():
  print("Sup doc.")
EOF
:py3 say_hello()

In this way, we can define and use python functions. Right now, the output of these functions simply appears in the status lines at the bottom of the screen. This is of limited value. Often, we will really want to interact directly with the vim buffer. The key to doing this is the vim module for python.1

The vim module for python can be used through

import vim

# do vim things

This is what :h python-vim actually documents, and I defer to the list of methods there. The important thing is that it provides direct access to vim buffers.

Today, we will use only the direct access to the current line, vim.current.line.
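
As a tiny illustration (not part of the conversion function below), the following python, run through :py3, reads the current line and writes a modified version back:

import vim

# Read the line under the cursor and replace it in the buffer.
line = vim.current.line
vim.current.line = line.upper()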

A Solution

I very routinely open scratch buffers (i.e. buffers that I call scratch.tmp). Sometimes I jot momentary notes, or I edit vim macros in place, or I define vim functions for short-term use. I will assume that we have opened up a scratch buffer alongside our python file containing f-strings. (In fact, I usually open it in a vsplit). I also very routinely assign Q to do whatever convenient function I have written in my scratch buffer (or whatever macro I want to repeat most simply). Of course, these are my idiosyncratic habits — the ideas are easy enough to translate.

In our scratch buffer, we can write a vim function, which is internally a python function, and map the symbol Q to call it.

function! Convert()
py3 << EOF

import vim

import re
bracketed = re.compile("{.*?}")

def convert():
    line = vim.current.line
    single = line.find("'")
    double = line.find('"')
    if single == -1:
        sep = '"'
    elif double == -1 or single < double:
        sep = "'"
    else:
        sep = '"'

    # l sep inner sep post
    l, _, m = line.partition(sep)
    inner, _, post = m.partition(sep)

    #get rid of f
    l = l[:-1]

    ret = ""
    var_names = bracketed.findall(inner)
    var_names = list(map(lambda x: x[1:-1], var_names))

    ret += l + sep + inner + sep
    ret += ".format("
    for var in var_names:
        ret += "{}={}, ".format(var, var)
    ret = ret[:-2]
    ret += ")" + post
    vim.current.line = ret

convert()
EOF

endfunction

com! Conv call Convert()
nnoremap Q :Conv<CR>

To use, one sources the scratch buffer (either directly, like :source scratch.tmp, or through navigating to its split and using :so %). Then one can find a line containing an f-string and hit Q (or alternately use :Conv).

Why is this good?

Although nontrivial, writing a simple python script like this takes only a few minutes. And then converting f-strings became the least important part of my backporting project.2

And more broadly, this is such an easy process. Off-loading the mental burden of repeating annoying tasks is valuable. The most revelatory aspect of this experience for me is that it’s easy enough to allow throwaway functions for momentary ease.

In my recent application, I converted 482 f-strings using this function. A few of them were multiline strings and my (admittedly simple) function failed — and so I did those by hand. But it took me less than 10 minutes to write the function (maybe even less than 5, though my first function expected double quotes only). Overall, this was a great gain.

Keeping the Script

Having come this far, it’s worth considering how we might keep the script around if we wanted to.

One good approach is to separate the python from the vim (allowing, for instance, testing on the python itself) and have a thin vim wrapper. We’ll look at this in two possible levels of encapsulation.

  1. Separate the python from the vim.
  2. Make the function a plugin. (Presumably for slightly more meaningful functions than this one).

Separate the Files

Let’s separate the python from the vim. In this way we can approach the python as a proper little python script, merely waiting for a thin vim wrapper around it at the end.

We can separate the python out into the following file, convert_fstring.py.

# convert_fstring.py
import re
bracketed = re.compile("{.*?}")

def convert(line):
    single = line.find("'")
    double = line.find('"')
    if single == -1:
        sep = '"'
    elif double == -1 or single < double:
        sep = "'"
    else:
        sep = '"'

    # l sep inner sep post
    l, _, m = line.partition(sep)
    inner, _, post = m.partition(sep)

    # get rid of f
    l = l[:-1]

    ret = ""
    var_names = bracketed.findall(inner)
    var_names = list(map(lambda x: x[1:-1], var_names))

    ret += l + sep + inner + sep
    ret += ".format("
    for var in var_names:
        ret += "{}={}, ".format(var, var)
    ret = ret[:-2]
    ret += ")" + post
    return ret

if __name__ == "__main__":
    test_input = 'mystr = f"This {is} an {f}-string".reverse()'
    expected  = 'mystr = "This {is} an {f}-string".format(is=is, f=f).reverse()'
    assert expected == convert(test_input)

This is a raw python file. I included a mock unit-test (in case one was testing the function while writing it… I didn’t compose the function like this because I didn’t think about it, but in the future I probably will).

Now let’s write a thin vim wrapper around it.

" convert_string.vim

let s:script_dir = fnamemodify(resolve(expand('<sfile>:p')), ':h')

function! Convert()
py3 << EOF

import sys
import vim

script_dir = vim.eval('s:script_dir')
sys.path.insert(0, script_dir)

import convert_fstring

def convert():
  out = convert_fstring.convert(vim.current.line)
  vim.current.line = out

convert()
EOF

endfunction

com! Conv call Convert()
nnoremap Q :Conv<CR>

This is mostly pretty clear, with (I think) two exceptions concerning s:script_dir.

The problem comes from trying to import convert_fstring — it’s not in our PYTHONPATH, and the current working directory for python from within vim is a bit weird. It would be possible to hard-code the current directory into the PYTHONPATH, but I think it is more elegant to have convert_fstring.vim add the location of convert_fstring.py to python’s path.3

There are two additions of note that address this import. The first is the line

let s:script_dir = fnamemodify(resolve(expand('<sfile>:p')), ':h')

This sets a script-local variable (the prefix s:) containing the directory from which the script was sourced. (This is where we use the assumption that the vimfile is in the same directory as the python file. If they are in different directories relative to one another, then one must change the path addition to python appropriately).

The string <sfile> is populated automatically by vim to contain the filename of the sourcefile. Calling expand with the :p filemodifier returns the complete path of <sfile>. Calling resolve follows symlinks so that the path is absolute. And the final fnamemodify with the :h modifier returns the head, or rather the directory, of the file. Thus the result is to get an absolute path to the directory containing the script (and in this case also the python file).

Then in the python function, we add this location to the path through

script_dir = vim.eval('s:script_dir')
sys.path.insert(0, script_dir)

The first line has vim convert the vim variable into a python string, and the second inserts this directory into the PYTHONPATH.

I have a collection of loosely organized helper scripts in a (creatively named) ~/scripts directory. Some of these were written as pure python shellscripts that I’ve added some vim wrappers around.

Writing a Plugin

To reiterate, I consider this excessive for my sample application. But we’ve covered so many of the ideas that we should touch on it now.

Any plugin should follow Tim Pope’s Pathogen design model, with the ultimate goal of having a structure that looks like

myplugin/
  README.markdown
  LICENSE
  autoload/
    myplugin.vim
    ...
  plugin/
  ...

Pathogen (or Pathogen-compatible plugin managers, which as far as I know means all of them… like Vundle or vim-plug) will allow plugins to be installed in ~/.vim/bundle, and each plugin there will be added to vim’s rtp.4 Each plugin manager installs plugins (i.e. adds to the runtimepath) in its own way. For Vundle, I include an extra line in my .vimrc.

Suppose I want to package my script as a plugin called fstring_converter. Then I’ll create the directory structure

fstring_converter/
  ftplugin/
    python.vim
    convert_fstring.py

where python.vim is what I called convert_fstring.vim before. Placing fstring_converter in ~/.vim/bundle (or symlinking it, or installing it there through e.g. Vundle) will “install it”. Then opening a python file will load it, and one can either Conv a line or (as it’s currently written) use Q. (Although I think it’s a bit unkind and certainly unsustainable to create this mapping for users).

I note that one should generically use autoload/ when available, but I don’t touch that now.

As an additional note, vim does add some directories to the PYTHONPATH under the hood. For each directory in vim’s runtimepath, vim adds the subdirectory python3 (and also pythonx) to the python module search path. As a result, we could omit the explicit path addition to the python.vim (aka convert_fstring.vim) file if we organized the plugin like

fstring_converter/
  ftplugin/
    python.vim
  python3/
    convert_fstring.py

Regardless, this describes a few ways to make a vim plugin out of our python script.


Slides for a talk at JMM 2019

Today, I’m giving a talk on Zeroes of L-functions associated to half-integral weight modular forms, which includes some joint work with Li-Mei Lim and Tom Hulse, and which alludes to other joint work touched on previously with Jeff Hoffstein and Min Lee (and which perhaps should have been finished a few years ago).

Here are the slides for my talk.


Update to Second Moments in the Generalized Gauss Circle Problem

Last year, my coauthors Tom Hulse, Chan Ieong Kuan, and Alex Walker posted a paper to the arXiv called “Second Moments in the Generalized Gauss Circle Problem”. I’ve briefly described its contents before.

This paper has been accepted and will appear in Forum of Mathematics: Sigma.

This is the first time I’ve submitted to the Forum of Mathematics, and I must say that this has been a very good journal experience. One interesting aspect about FoM: Sigma is that they are immediate (gold) open access, and they don’t release in issues. Instead, articles become available (for free) from them once the submission process is done. I was reviewing a publication-proof of the paper yesterday, and they appear to be very quick with regards to editing. Perhaps the paper will appear before the end of the year.

An updated version (the version from before the handling of proofs at the journal, so there will be a number of mostly aesthetic differences with the published version) of the paper will appear on the arXiv on Monday 10 December.1

A new appendix has appeared

There is one major addition to the paper that didn’t appear in the original preprint. At one of the referee’s suggestions, Chan and I wrote an appendix. The major content of this appendix concerns a technical detail about Rankin-Selberg convolutions.

If $f$ and $g$ are weight $k$ cusp forms on $\mathrm{SL}(2, \mathbb{Z})$ with expansions $$ f(z) = \sum_{n \geq 1} a(n) e(nz), \quad g(z) = \sum_{n \geq 1} b(n) e(nz), $$ then one can use a (real analytic) Eisenstein series $$ E(s, z) = \sum_{\gamma \in \mathrm{SL}(2, \mathbb{Z})_\infty \backslash \mathrm{SL}(2, \mathbb{Z})} \mathrm{Im}(\gamma z)^s $$ to recognize the Rankin-Selberg $L$-function \begin{equation}\label{RS} L(s, f \otimes g) := \zeta(s) \sum_{n \geq 1} \frac{a(n)b(n)}{n^{s + k - 1}} = h(s) \langle f g y^k, E(s, z) \rangle, \end{equation} where $h(s)$ is an easily-understandable function of $s$ and where $\langle \cdot, \cdot \rangle$ denotes the Petersson inner product.

When $f$ and $g$ are not cusp forms, or when $f$ and $g$ are modular with respect to a congruence subgroup of $\mathrm{SL}(2, \mathbb{Z})$, then there are adjustments that must be made to the typical construction of $L(s, f \otimes g)$.

When $f$ and $g$ are not cusp forms, then Zagier2 provided a way to recognize $L(s, f \otimes g)$ when $f$ and $g$ are modular on the full modular group $\mathrm{SL}(2, \mathbb{Z})$. And under certain conditions that he describes, he shows that one can still recognize $L(s, f \otimes g)$ as an inner product with an Eisenstein series as in \eqref{RS}.

In principle, his method of proof would apply for non-cuspidal forms defined on congruence subgroups, but in practice this becomes too annoying and bogged down with details to work with. Fortunately, in 2000, Gupta3 gave a different construction of $L(s, f \otimes g)$ that generalizes more readily to non-cuspidal forms on congruence subgroups. His construction is very convenient, and it shows that $L(s, f \otimes g)$ has all of the properties expected of it.

However Gupta does not show that there are certain conditions under which one can recognize $L(s, f \otimes g)$ as an inner product against an Eisenstein series.4 For this paper, we need to deal very explicitly and concretely with $L(s, \theta^2 \otimes \overline{\theta^2})$, which is formed from the modular form $\theta^2$, non-cuspidal on a congruence subgroup.

The Appendix to the paper can be thought of as an extension of Gupta’s paper: it uses Gupta’s ideas and techniques to prove a result analogous to \eqref{RS}. We then use this to get the explicit understanding necessary to tackle the Gauss Sphere problem.

There is more to this story. I’ll return to it in a later note.

Other submission details for FoM: Sigma

I should say that there are many other revisions between the original preprint and the final one. These are mainly due to the extraordinary efforts of two Referees. One Referee was kind enough to give us approximately 10 pages of itemized suggestions and comments.

When I first opened these comments, I was a bit afraid. Having so many comments was daunting. But this Referee really took his or her time to point us in the right direction, and the resulting paper is vastly improved (and in many cases shortened, although the appendix has hidden the simplified arguments cut in length).

More broadly, the Referee acted as a sort of mentor with respect to my technical writing. I have a lot of opinions on technical writing,5 but this process changed and helped sharpen my ideas concerning good technical math writing.

I sometimes hear lots of negative aspects about peer review, but this particular pair of Referees turned the publication process into an opportunity to learn about good mathematical exposition — I didn’t expect this.

I was also surprised by the infrastructure that existed at the University of Warwick for handling a gold open access submission. As part of their open access funding, Forum of Math: Sigma has an author-pays model. Or rather, the author’s institution pays. It took essentially no time at all for Warwick to arrange the payment (about 500 pounds).

This is a not-inconsequential amount of money, but it is much less than the 1500 dollars that PLoS One charges. The comparison with PLoS One is perhaps apt. PLoS is older, and perhaps paved the way for modern gold open access journals like FoM. PLoS was started by a group of established biologists and chemists, including a Nobel prize winner; FoM was started by a group of established mathematicians, including multiple Fields medalists.6

I will certainly consider Forum of Mathematics in the future.


The wrong way to compute a sum: addendum

In my previous note, I looked at an amusing but inefficient way to compute the sum $$ \sum_{n \geq 1} \frac{\varphi(n)}{2^n - 1}$$ using Mellin and inverse Mellin transforms. This was great fun, but the amount of work required was more intense than the more straightforward approach offered immediately by using Lambert series.

However, Adam Harper suggested that there is a nice shortcut that we can use (although coming up with this shortcut requires either a lot of familiarity with Mellin transforms or knowledge of the answer).

In the Lambert series approach, one shows quickly that $$ \sum_{n \geq 1} \frac{\varphi(n)}{2^n - 1} = \sum_{n \geq 1} \frac{n}{2^n},$$ and then evaluates this last sum directly. For the Mellin transform approach, we might ask: do the two functions $$ f(x) = \sum_{n \geq 1} \frac{\varphi(n)}{2^{nx} - 1}$$ and $$ g(x) = \sum_{n \geq 1} \frac{n}{2^{nx}}$$ have the same Mellin transforms? From the previous note, we know that they have the same values at $1$.

We also showed very quickly that $$ \mathcal{M} [f] = \frac{1}{(\log 2)^s} \Gamma(s) \zeta(s-1). $$ The more difficult parts from the previous note arose in the evaluation of the inverse Mellin transform at $x=1$.

Let us compute the Mellin transform of $g$. We find that $$ \begin{align}
\mathcal{M}[g] &= \sum_{n \geq 1} n \int_0^\infty \frac{1}{2^{nx}} x^s \frac{dx}{x} \notag \\
&= \sum_{n \geq 1} n \int_0^\infty \frac{1}{e^{nx \log 2}} x^s \frac{dx}{x} \notag \\
&= \sum_{n \geq 1} \frac{n}{(n \log 2)^s} \int_0^\infty x^s e^{-x} \frac{dx}{x} \notag \\
&= \frac{1}{(\log 2)^s} \zeta(s-1)\Gamma(s). \notag
\end{align}$$ To go from the second line to the third line, we did the change of variables $x \mapsto x/(n \log 2)$, yielding an integral which is precisely the definition of the Gamma function.

Thus we see that $$ \mathcal{M}[g] = \frac{1}{(\log 2)^s} \Gamma(s) \zeta(s-1) = \mathcal{M}[f],$$ and thus $f(x) = g(x)$. (“Nice” functions with the same “nice” Mellin transforms are also the same, exactly as with Fourier transforms).

This shows that not only is $$ \sum_{n \geq 1} \frac{\varphi(n)}{2^n - 1} = \sum_{n \geq 1} \frac{n}{2^n},$$ but in fact $$ \sum_{n \geq 1} \frac{\varphi(n)}{2^{nx} - 1} = \sum_{n \geq 1} \frac{n}{2^{nx}}$$ for all $x > 1$.

I think that’s sort of slick.
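
As a quick numerical sanity check (independent of either derivation), one can compare truncations of the two series in python:

from math import gcd

def phi(n):
    # Euler phi by direct count -- fine for the small n used here.
    return sum(1 for k in range(1, n + 1) if gcd(n, k) == 1)

def lhs(x, terms=200):
    return sum(phi(n) / (2**(n*x) - 1) for n in range(1, terms + 1))

def rhs(x, terms=200):
    return sum(n / 2**(n*x) for n in range(1, terms + 1))

for x in (1, 1.5, 2):
    print(x, lhs(x), rhs(x))   # the two columns agree; at x = 1 both are 2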


The wrong way to compute a sum

At a recent colloquium at the University of Warwick, the fact that
\begin{equation}\label{question}
\sum_{n \geq 1} \frac{\varphi(n)}{2^n - 1} = 2
\end{equation}
came up. Although this was mentioned only in passing, John Cremona asked — how do you prove that?

It almost fails a heuristic check, as one can quickly check that
\begin{equation}\label{similar}
\sum_ {n \geq 1} \frac{n}{2^n} = 2,
\end{equation}
which is surprisingly similar to \eqref{question}. I wish I knew more examples of pairs with a similar flavor.

[Edit: Note that an addendum to this note has been added here. In it, we see that there is a way to shortcut the “hard part” of the long computation.]

The right way

Shortly afterwards, Adam Harper and Samir Siksek pointed out that this can be determined from Lambert series, and in fact that Hardy and Wright include a similar exercise in their book. This proof is delightful and short.

The idea is that, by expanding the denominator in power series, one has that
\begin{equation}
\sum_{n \geq 1} a(n) \frac{x^n}{1 - x^n} \notag
= \sum_ {n \geq 1} a(n) \sum_{m \geq 1} x^{mn}
= \sum_ {n \geq 1} \Big( \sum_{d \mid n} a(d) \Big) x^n,
\end{equation}
where the inner sum is a sum over the divisors of $n$. This all converges beautifully for $\lvert x \rvert < 1$.

Applied to \eqref{question}, we find that
\begin{equation}
\sum_{n \geq 1} \frac{\varphi(n)}{2^n - 1} \notag
= \sum_{n \geq 1} \varphi(n) \frac{2^{-n}}{1 - 2^{-n}}
= \sum_ {n \geq 1} 2^{-n} \sum_{d \mid n} \varphi(d),
\end{equation}
and as
\begin{equation}
\sum_ {d \mid n} \varphi(d) = n, \notag
\end{equation}
we see that \eqref{question} can be rewritten as \eqref{similar} after all, and thus both evaluate to $2$.

That’s a nice derivation using a series that I hadn’t come across before. But that’s not what this short note is about. This note is about evaluating \eqref{question} in a different way, arguably the wrong way. But it’s a wrong way that works out in a nice way that at least one person1 finds appealing.

The wrong way

We will use Mellin inversion — this is essentially Fourier inversion, but in a change of coordinates.

Let $f$ denote the function
\begin{equation}
f(x) = \frac{1}{2^x - 1}. \notag
\end{equation}
Denote by $f^ * $ the Mellin transform of $f$,
\begin{equation}
f^*(s) := \mathcal{M} [f(x)] (s) := \int_0^\infty f(x) x^s \frac{dx}{x}
= \frac{1}{(\log 2)^s} \Gamma(s)\zeta(s),\notag
\end{equation}
where $\Gamma(s)$ and $\zeta(s)$ are the Gamma function and the Riemann zeta function.2

For a general nice function $g(x)$, its Mellin transform satisfies
\begin{equation}
\mathcal{M}[g(nx)] (s)
= \int_0^\infty g(nx) x^s \frac{dx}{x}
= \frac{1}{n^s} \int_0^\infty g(x) x^s \frac{dx}{x}
= \frac{1}{n^s} g^*(s).\notag
\end{equation}
Further, the Mellin transform is linear. Thus
\begin{equation}\label{mellinbase}
\mathcal{M}[\sum_{n \geq 1} \varphi(n) f(nx)] (s)
= \sum_{n \geq 1} \frac{\varphi(n)}{n^s} f^*(s)
= \sum_ {n \geq 1} \frac{\varphi(n)}{n^s} \frac{\Gamma(s) \zeta(s)}{(\log 2)^s}.
\end{equation}

The Euler phi function $\varphi(n)$ is multiplicative and nice, and its Dirichlet series can be rewritten as
\begin{equation}
\sum_{n \geq 1} \frac{\varphi(n)}{n^s} \notag
= \frac{\zeta(s-1)}{\zeta(s)}.
\end{equation}
Thus the Mellin transform in \eqref{mellinbase} can be written as
\begin{equation}
\frac{1}{(\log 2)^s} \Gamma(s) \zeta(s-1). \notag
\end{equation}

By the fundamental theorem of Mellin inversion (which is analogous to Fourier inversion, but again in different coordinates), the inverse Mellin transform will return the original function. The inverse Mellin transform of a function $h(s)$ is defined to be
\begin{equation}
\mathcal{M}^{-1}[h(s)] (x) \notag
:=
\frac{1}{2\pi i} \int_{c - i \infty}^{c + i\infty} x^s h(s) ds,
\end{equation}
where $c$ is taken so that the integral converges beautifully, and the integral is over the vertical line with real part $c$. I’ll write $(c)$ as a shorthand for the limits of integration. Thus
\begin{equation}\label{mellininverse}
\sum_{n \geq 1} \frac{\varphi(n)}{2^{nx} - 1}
= \frac{1}{2\pi i} \int_ {(3)} \frac{1}{(\log 2)^s}
\Gamma(s) \zeta(s-1) x^{-s} ds.
\end{equation}

We can now describe the end goal: evaluate \eqref{mellininverse} at $x=1$, which will recover the value of the original sum in \eqref{question}.

How can we hope to do that? The idea is to shift the line of integration arbitrarily far to the left, pick up the infinitely many residues guaranteed by Cauchy’s residue theorem, and to recognize the infinite sum as a classical series.

The integrand has residues at $s = 2, 0, -2, -4, \ldots$, coming from the zeta function ($s = 2$) and the Gamma function (all the others). Note that there aren’t poles at negative odd integers $s$: the zeta function has zeroes at the negative even integers, so $\zeta(s-1)$ vanishes there and cancels the poles of $\Gamma(s)$.

Recall, $\zeta(s)$ has residue $1$ at $s = 1$ and $\Gamma(s)$ has residue $(-1)^n/{n!}$ at $s = -n$. Then shifting the line of integration and picking up all the residues reveals that
\begin{equation}
\sum_{n \geq 1} \frac{\varphi(n)}{2^{n} - 1} \notag
=\frac{1}{\log^2 2} + \zeta(-1) + \frac{\zeta(-3)}{2!} \log^2 2 +
\frac{\zeta(-5)}{4!} \log^4 2 + \cdots
\end{equation}

The zeta function at negative integers has a very well-known relation to the Bernoulli numbers,
\begin{equation}\label{zeta_bern}
\zeta(-n) = - \frac{B_{n+1}}{n+1},
\end{equation}
where Bernoulli numbers are the coefficients in the expansion
\begin{equation}\label{bern_gen}
\frac{t}{1 - e^{-t}} = \sum_{m \geq 0} B_m \frac{t^m}{m!}.
\end{equation}
Many general proofs for the values of $\zeta(2n)$ use this relation and the functional equation, as well as a computation of the Bernoulli numbers themselves. Another important aspect of Bernoulli numbers that is apparent through \eqref{zeta_bern} is that $B_{2n+1} = 0$ for $n \geq 1$, lining up with the trivial zeroes of the zeta function.

Translating the zeta values into Bernoulli numbers, we find that
\eqref{question} is equal to
\begin{align}
&\frac{1}{\log^2 2} - \frac{B_2}{2} - \frac{B_4}{2! \cdot 4} \log^2 2 -
\frac{B_6}{4! \cdot 6} \log^4 2 - \frac{B_8}{6! \cdot 8} \log^6 2 - \cdots \notag \\
&=
-\sum_{m \geq 0} (m-1) B_m \frac{(\log 2)^{m-2}}{m!}. \label{recog}
\end{align}
This last sum is excellent, and can be recognized.

For a general exponential generating series
\begin{equation}
F(t) = \sum_{m \geq 0} a(m) \frac{t^m}{m!},\notag
\end{equation}
we see that
\begin{equation}
\frac{d}{dt} \frac{1}{t} F(t) \notag
=\sum_{m \geq 0} (m-1) a(m) \frac{t^{m-2}}{m!}.
\end{equation}
Applying this to the series defining the Bernoulli numbers from \eqref{bern_gen}, we find that
\begin{equation}
\frac{d}{dt} \frac{1}{t} \frac{t}{1 - e^{-t}} \notag
= - \frac{e^{-t}}{(1 - e^{-t})^2},
\end{equation}
and also that
\begin{equation}
\frac{d}{dt} \frac{1}{t} \frac{t}{1 - e^{-t}} \notag
=\sum_{m \geq 0} (m-1) B_m \frac{t^{m-2}}{m!}.
\end{equation}
Up to sign, this is exactly the sum that appears in \eqref{recog}, with $t = \log 2$.
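
Before specializing to $t = \log 2$, it’s easy to spot-check this identity numerically at any point inside the radius of convergence (the series converges for $|t| < 2\pi$). A quick sketch using sympy’s Bernoulli numbers (only the even-indexed $B_m$ contribute: the $m = 1$ term carries a factor $m - 1 = 0$, and the higher odd Bernoulli numbers vanish):

from math import exp, factorial
from sympy import bernoulli

t = 0.3  # any 0 < t < 2*pi works here
closed_form = -exp(-t) / (1 - exp(-t))**2
series_value = sum((m - 1) * float(bernoulli(m)) * t**(m - 2) / factorial(m)
                   for m in range(0, 40))
print(closed_form, series_value)  # the two values should agree to many digits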

Putting this together, we find that
\begin{equation}
-\sum_{m \geq 0} (m-1) B_m \frac{(\log 2)^{m-2}}{m!} \notag
=\frac{e^{-\log 2}}{(1 - e^{-\log 2})^2}
= \frac{1/2}{(1/2)^2} = 2.
\end{equation}
Thus we find that \eqref{question} really is equal to $2$, as we had sought to show.
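
As a final numerical sanity check (not part of the proof), the original sum converges extremely quickly, so summing a few dozen terms in plain Python with a naive totient function should land essentially on $2$:

from math import gcd

def phi(n):
    # Euler's totient, computed naively; fine for the small n we need
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

print(sum(phi(n) / (2**n - 1) for n in range(1, 60)))  # prints a value very close to 2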


Using lcalc to compute half-integral weight L-functions

This is a brief note intended primarily for my collaborators interested in using Rubinstein’s lcalc to compute the values of half-integral weight $L$-functions.

We will be using lcalc through sage. Unfortunately, we are going to be using some functionality which sage doesn’t expose particularly nicely, so it will feel a bit silly. Nonetheless, using sage’s distribution will prevent us from needing to compile it on our own (and there are a few bugfixes present in sage’s version).

Some $L$-functions are built into lcalc, but not half-integral weight $L$-functions. So it will be necessary to create a datafile containing the data that lcalc will use to generate its approximations. In short, this datafile will describe the shape of the functional equation and give a list of coefficients for lcalc to use.

Building the datafile

It is assumed that the $L$-function is normalized in such a way that
$$\begin{equation}
\Lambda(s) = Q^s L(s) \prod_{j = 1}^{A} \Gamma(\gamma_j s + \lambda_j) = \omega \overline{\Lambda(1 - \overline{s})}.
\end{equation}$$
This involves normalizing the functional equation to be of shape $s \mapsto 1-s$. Also note that $Q$ will be given as a real number.

An annotated version of the datafile you should create looks like this

2                  # 2 means the Dirichlet coefficients are reals
0                  # 0 means the L-function isn't a "nice" one
10000              # 10000 coefficients will be provided
0                  # 0 means the coefficients are not periodic
1                  # num Gamma factors of form \Gamma(\gamma s + \lambda)
1                  # the \gamma in the Gamma factor
1.75 0             # \lambda in Gamma factor; complex valued, space delimited
0.318309886183790  # Q. In this case, 1/pi
1 0                # real and imaginary parts of omega, sign of func. eq.
0                  # number of poles
1.000000000000000  # a(1)
-1.78381067250408  # a(2)
...                # ...
-0.622124724090625 # a(10000)

If there is an error, lcalc will usually fail silently. (Bummer). Note that in practice, datafiles should only contain numbers and should not contain comments. This annotated version is for convenience, not for use.

You can find a copy of the datafile for the unique half-integral weight cusp form of weight $9/2$ on $\Gamma_0(4)$ here. This uses the first 10000 coefficients; it’s surely possible to use more, but this was the setup I first tested with.

Generating the coefficients for this example

In order to create datafiles for other cuspforms, it is necessary to compute the coefficients (presumably using magma or sage) and then to populate a datafile. A good exercise would be to recreate this datafile using sage-like methods.

One way to create this datafile is to explicitly create the q-expansion of the modular form, if we happen to know a convenient expression. For us, we happen to know that $f = \eta(2z)^{12} \theta(z)^{-3}$. Thus one way to create the coefficients is to do something like the following.

num_coeffs = 10**5 + 1
weight     = 9.0 / 2.0

R.<q> = PowerSeriesRing(ZZ)
theta_expansion = theta_qexp(num_coeffs)
# Note that qexp_eta omits the q^(1/24) factor
eta_expansion = qexp_eta(ZZ[['q']], num_coeffs + 1)
eta2_coeffs = []
for i in range(num_coeffs):
    if i % 2 == 1:
        eta2_coeffs.append(0)
    else:
        eta2_coeffs.append(eta_expansion[i//2])
eta2 = R(eta2_coeffs)
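# q * eta2^12 / theta^3 is the q-expansion of f = eta(2z)^12 * theta(z)^(-3),
# since eta(2z)^12 = q * prod_{n >= 1} (1 - q^(2n))^12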
g = q * ( (eta2)**4 / (theta_expansion) )**3

coefficients = g.list()[1:] # skip the 0 coeff
print(coefficients[:10])

normalized_coefficients = []
for idx, elem in enumerate(coefficients, 1):
    normalized_coeff = 1.0 * elem / (idx ** (.5 * (weight - 1)))
    normalized_coefficients.append(normalized_coeff)
print(normalized_coefficients[:10])
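
To turn these into an lcalc datafile, it should suffice to write out the header lines from the annotated template above, followed by one coefficient per line. Here is a minimal sketch, continuing from the snippet above; the header values are the ones for this particular weight $9/2$ example and the filename is just an example, so adjust both for other forms.

header_lines = [
    "2",                                # real Dirichlet coefficients
    "0",                                # not a "nice" L-function
    str(len(normalized_coefficients)),  # number of coefficients provided
    "0",                                # coefficients are not periodic
    "1",                                # one Gamma factor
    "1",                                # gamma
    "1.75 0",                           # lambda, real and imaginary parts
    "0.318309886183790",                # Q, here 1/pi
    "1 0",                              # omega, the sign of the functional equation
    "0",                                # number of poles
]

with open("g1_lcalcfile.txt", "w") as f:
    f.write("\n".join(header_lines) + "\n")
    for coeff in normalized_coefficients:
        f.write("{}\n".format(float(coeff)))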

Using lcalc now

Suppose that you have a datafile, called g1_lcalcfile.txt (for example). To use it from sage, you point lcalc at this file. This can be done through calls such as

# Computes L(0.5 + 0i, f)
lcalc('-v -x0.5 -y0 -Fg1_lcalcfile.txt')

# Computes L(s, f) from 0.5 to (2 + 7i) at 1000 equally spaced samples
lcalc('--value-line-segment -x0.5 -y0 -X2 -Y7 --number-samples=1000 -Fg1_lcalcfile.txt')

# See lcalc.help() for more on calling lcalc.

The key in these is to pass along the datafile through the -F argument.


Extra comparisons in Python’s timsort

Reading through the comp.lang.python mailing list, I saw an interesting question concerning the behavior of the default sorting algorithm in python. That question led to this post.

Python uses timsort, a clever hybrid sorting algorithm that borrows ideas from merge sort and (binary) insertion sort. A major idea in timsort is to use the structure of naturally occurring runs (consecutive elements in the list that are either monotone increasing or monotone decreasing) when sorting.

Let’s look at the following simple list.

10, 20, 5

A simple sorting algorithm is insertion sort, which just advances through the list and inserts each number into the correct spot. More explicitly, insertion sort would

  1. Start with the first element, 10. As a list with one element, it is correctly sorted tautologically.
  2. Now consider the second element, 20. We insert this into the correct position in the already-sorted list of previous elements. Here, that just means that we verify that 20 > 10, and now we have the sorted sublist consisting of 10, 20.
  3. Now consider the third element, 5. We want to insert this into the correct position in the already-sorted list of previous elements. A naively easy way to do this is to either scan the list from the right or from the left, and insert into the correct place. For example, scanning from the right would mean that we compare 5 to the last element of the sublist, 20. As 5 < 20, we shift left and compare 5 to 10. As 5 < 10, we shift left again. As there is nothing left to compare against, we insert 5 at the beginning, yielding the sorted list 5, 10, 20.

How many comparisons did this take? This took 20 > 10, 5 < 20, and 5 < 10. This is three comparisons in total.

We can see this programmatically as well. Here is one implementation of insertion_sort, as described above.

def insertion_sort(lst):
    '''
    Sorts `lst` in-place. Note that this changes `lst`.
    '''
    for index in range(1, len(lst)):
        current_value = lst[index]
        position = index
        while position > 0 and lst[position - 1] > current_value:
            lst[position] = lst[position - 1]
            position = position - 1
        lst[position] = current_value

Let’s also create a simple Number class, which behaves just like a regular number, except that it prints out every comparison made against it. This lets us count the comparisons by reading the output.

class Number:
    def __init__(self, value):
        self.value = value

    def __str__(self):
        return str(self.value)

    def __repr__(self):
        return self.__str__()

    def __lt__(self, other):
        if self.value < other.value:
            print("{} < {}".format(self, other))
            return True
        print("{} >= {}".format(self, other))
        return False

    def __eq__(self, other):
        if self.value == other.value:
            print("{} = {}".format(self, other))
            return True
        return False

    def __gt__(self, other):
        return not ((self == other) or (self < other))

    def __le__(self, other):
        return (self < other) or (self == other)

    def __ge__(self, other):
        return not (self < other)

    def __ne__(self, other):
        return not (self == other)

With this class and function, we can run

lst = [Number(10), Number(20), Number(5)]
insertion_sort(lst)
print(lst)

which will print

10 < 20
20 >= 5
10 >= 5
[5, 10, 20]

These are the three comparisons we were expecting to see.

Returning to python’s timsort — what happens if we call python’s default sorting method on this list? The code

lst = [Number(10), Number(20), Number(5)]
lst.sort()
print(lst)

prints

20 >= 10
5 < 20
5 < 20
5 < 10
[5, 10, 20]

There are four comparisons! And weirdly, the method checks that 5 < 20 twice in a row. What’s going on there?1

This doubled comparison was at the core of the thread on comp.lang.python. Why are there extra comparisons in cases like this?

Poking around the implementation of timsort taught me a little bit more about timsort.2

Timsort approaches this sorting task in the following way.

  1. First, timsort tries to identify how large the first run within the sequence is. So it keeps comparing terms until it finds one that is out of order. In this case, it compares 20 to 10 (finding that 20 > 10, and thus the run is increasing), and then compares 5 to 20 (finding that 5 < 20, and thus that 5 is not part of the same run as 10, 20). Now the run is identified, and there is one element left to incorporate.
  2. Next timsort tries to insert 5 into the already-sorted run. It is more correct to say that timsort attempts to do a binary insertion, since one knows already that the run is sorted.3 In this binary insertion, timsort will compare 5 with the middle of the already-sorted run 10, 20. But this is a list of length 2, so what is its middle element? It turns out that timsort takes the latter element, 20, in this case. As 5 < 20, timsort concludes that 5 should be inserted somewhere in the first half of the run 10, 20, and not in the second half.
  3. Of course, the first half consists entirely of 10. Thus the remaining comparison is to check that 5 < 10, and now the list is sorted.

We count4 all four of the comparisons. The doubled comparison is due to the two tasks of checking whether 5 is in the same run as 10, 20, and then of deciding through binary insertion where to place 5 in the smaller sublist of 10, 20.
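
To make these two phases concrete, here is a toy sketch of “find the first ascending run, then binary-insert the rest.” This is not CPython’s actual implementation (which also handles descending runs, minimum run lengths, galloping, and merging), but on this input it should reproduce the same four comparisons, including the doubled 5 < 20.

def find_run_then_binary_insert(lst):
    # Phase 1: find the length of the initial ascending run.
    run_len = 1
    while run_len < len(lst) and not (lst[run_len] < lst[run_len - 1]):
        run_len += 1

    # Phase 2: binary-insert each remaining element into the sorted prefix.
    for i in range(run_len, len(lst)):
        pivot = lst[i]
        lo, hi = 0, i
        while lo < hi:
            mid = (lo + hi) // 2  # for the prefix 10, 20 this picks the latter element
            if pivot < lst[mid]:
                hi = mid
            else:
                lo = mid + 1
        lst[lo + 1:i + 1] = lst[lo:i]  # shift the larger elements right
        lst[lo] = pivot                # and drop the pivot into place

lst = [Number(10), Number(20), Number(5)]
find_run_then_binary_insert(lst)
print(lst)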

Now that we’ve identified a doubled comparison, we might ask Why is it this way? Is this something that should change?

The short answer is it doesn’t really matter. A longer answer is that to apply this in general would cause additional comparisons to be made, since this applies in the case when the last element of the run agrees in value with the central value of the run (which may occur for longer lists if there are repeated values). Checking that this happens would probably either involve comparing the last element of the run with the central value (one extra comparison, so nothing is really saved anyway), or perhaps adding another data structure like a skip list (which seems sufficiently more complicated to not be worth the effort). Or it would only apply when sorting really short lists, in which case there isn’t much to worry about.

Learning a bit more about timsort made me realize that I could probably learn a lot by really understanding an implementation of timsort, or even a slightly simplified implementation. It’s a nice reminder that one can choose to optimize for certain situations or behaviors, and this might not cover all cases perfectly — and that’s ok.
