## Mathematics Category Archive

Below you will find the most recent posts tagged “Mathematics”, arranged in reverse chronological order.

The US House of Representatives has 435 voting members (and 6 non-voting members: one each from Washington DC, Puerto Rico, American Samoa, Guam, the Northern Mariana Islands, and the US Virgin Islands). Roughly speaking, the higher the population of a state is, the more representatives it should have.

But what does this really mean?

Looking to the US Constitution for clarification offers little help. The third clause of Article I, Section 2 of the Constitution says

> Representatives and direct Taxes shall be apportioned among the several States which may be included within this Union, according to their respective Numbers … The number of Representatives shall not exceed one for every thirty thousand, but each state shall have at least one Representative.

This doesn’t give clarity.^{1} In fact, uncertainty surrounding proper apportionment of representatives led to the first presidential veto.

According to the 1790 Census, there were 3199415 free people and 694280 slaves in the United States.^{2}

When Congress sat to decide on apportionment in 1792, they initially computed the total (weighted) population of the United States to be 3199415 + (3/5)⋅694280 = 3615983. They noted that the Constitution says there should be no more than 1 representative for every 30000, so they divided the total population by 30000, getting 3615983/30000 ≈ 120.5, and rounded down.

Thus there were to be 120 representatives. If one takes each state and divides its population by 30000, one sees that the states should get the following numbers of representatives^{3}

| State | Ideal | Rounded down |
| --- | --- | --- |
| Vermont | 2.851 | 2 |
| New Hampshire | 4.727 | 4 |
| Maine | 3.218 | 3 |
| Massachusetts | 12.62 | 12 |
| Rhode Island | 2.281 | 2 |
| Connecticut | 7.894 | 7 |
| New York | 11.05 | 11 |
| New Jersey | 5.985 | 5 |
| Pennsylvania | 14.42 | 14 |
| Delaware | 1.851 | 1 |
| Maryland | 9.283 | 9 |
| Virginia | 21.01 | 21 |
| Kentucky | 2.290 | 2 |
| North Carolina | 11.78 | 11 |
| South Carolina | 6.874 | 6 |
| Georgia | 2.361 | 2 |

But here is a problem: the rounded-down numbers total only 112, so there are 8 more representatives to give out. How did they decide which states should receive these additional representatives? They chose the 8 states with the largest fractional parts of their “ideal” numbers:

- New Jersey (0.985)
- Connecticut (0.894)
- South Carolina (0.874)
- Vermont (0.851)
- Delaware (0.851)
- Massachusetts+Maine (0.838)
- North Carolina (0.78)
- New Hampshire (0.727)

(Maine was part of Massachusetts at the time, which is why I combine their fractional parts.) Thus the originally proposed apportionment gave each of these states one additional representative. Is this a reasonable conclusion?
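
Before weighing that question, here is a minimal sketch in plain Python, purely for illustration, of this largest-remainders rule (the scheme now usually called Hamilton’s method). The `populations` argument is assumed to be a dictionary of weighted state populations.

```python
from math import floor

def hamilton_apportion(populations, divisor=30000):
    """Largest-remainders (Hamilton) apportionment, following the 1792 procedure:
    fix the House size from the total population, round each quota down, then give
    the leftover seats to the states with the largest fractional parts."""
    house_size = floor(sum(populations.values()) / divisor)   # 120 in 1792
    quotas = {state: pop / divisor for state, pop in populations.items()}
    seats = {state: floor(q) for state, q in quotas.items()}  # totals 112 in 1792
    leftover = house_size - sum(seats.values())               # 8 in 1792
    by_fraction = sorted(quotas, key=lambda s: quotas[s] - seats[s], reverse=True)
    for state in by_fraction[:leftover]:
        seats[state] += 1
    return seats
```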

Perhaps. But these 8 states each ended up having more than 1 representative for each 30000. Was this limit in the Constitution meant country-wide (so that 120 across the country is a fine number) or state-by-state (so that, for instance, Delaware, which had 59000 total population, should not be allowed to have more than 1 representative)?

There is the other problem that New Jersey, Connecticut, Vermont, New Hampshire, and Massachusetts were undoubtedly Northern states. Thus Southern representatives asked, *Is it not unfair that the fractional apportionment favours the North*?^{4}

Regardless of the exact reasoning, the Secretary of State Thomas Jefferson and Attorney General Edmund Randolph (both from Virginia) urged President Washington to veto the bill, and he did. This was the first use of the Presidential veto.

Afterwards, Congress reconvened and decided to start with 33000 people per representative and to ignore fractional parts entirely. This method became known as the *Jefferson Method of Apportionment*, and was used in the US until 1830. The subtle part of the method is choosing the divisor 33000: the total number of representatives sometimes changed from election to election, and although that total is closely tied to the population-per-representative, both were often chosen through political maneuvering rather than by any exact rule.
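
For comparison, here is a rough sketch of Jefferson’s divisor scheme, again in plain Python and only as an illustration. The second function shows the subtle step mentioned above: the divisor (33000 in 1792) is really a knob that controls the size of the House.

```python
def jefferson_apportion(populations, divisor=33000):
    """Jefferson's method: divide each state's population by the divisor and
    ignore the fractional part entirely (keeping the constitutional minimum of 1)."""
    return {state: max(1, pop // divisor) for state, pop in populations.items()}

def find_jefferson_divisor(populations, house_size):
    """Naive search for a divisor whose Jefferson apportionment fits a target
    House size; assumes house_size is at least the number of states."""
    divisor = sum(populations.values()) // house_size
    while sum(jefferson_apportion(populations, divisor).values()) > house_size:
        divisor += 1
    return divisor
```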

As an aside, it’s interesting to note that this method of apportionment is widely used in the rest of the world, even though it was abandoned in the US.^{5} In fact, it is still used in Albania, Angola, Argentina, Armenia, Aruba, Austria, Belgium, Bolivia, Brazil, Bulgaria, Burundi, Cambodia, Cape Verde, Chile, Colombia, Croatia, the Czech Republic, Denmark, the Dominican Republic, East Timor, Ecuador, El Salvador, Estonia, Fiji, Finland, Guatemala, Hungary, Iceland, Israel, Japan, Kosovo, Luxembourg, Macedonia, Moldova, Monaco, Montenegro, Mozambique, Netherlands, Nicaragua, Northern Ireland, Paraguay, Peru, Poland, Portugal, Romania, San Marino, Scotland, Serbia, Slovenia, Spain, Switzerland, Turkey, Uruguay, Venezuela and Wales — as well as in many countries for election to the European Parliament.

At the core of different ideas for apportionment is fairness. How can we decide if an apportionment is fair?

We’ll consider this question in the context of the post-1911 United States — after the number of seats in the House of Representatives was established. This number was set at 433, but with the proviso that anticipated new states Arizona and New Mexico would each come with an additional seat.^{6}

So given that there are 435 seats to apportion, how might we decide if an apportionment is fair? Fundamentally, this should relate to the number of people each representative actually represents.

For example, in the 1792 apportionment, Delaware’s single representative represented all 55000 of that state’s population, while each of Rhode Island’s two representatives corresponded to roughly 34000 Rhode Islanders. Within the House of Representatives, it was as though the voice of each Delawarean counted only 61 percent as much as the voice of each Rhode Islander.^{7}

The number of people each representative actually represents is at the core of the notion of fairness — but even then, how best to compare these numbers is not obvious.

Suppose we enumerate the states, so that *S*_{i} refers to state *i*. We’ll also denote by *P*_{i} the population of state *i*, and we’ll let *R*_{i} denote the number of representatives allotted to state *i*.

In the ideal scenario, every representative would represent the exact same number of people. That is, we would have

$$\text{pop. per rep. in state i}

= \frac{P_i}{R_i}

= \frac{P_j}{R_j}

= \text{pop. per rep. in state j}$$

for every pair of states *i* and *j*. But this won’t ever happen in practice.

Generally, we should expect $\frac{P_i}{R_i} \neq \frac{P_j}{R_j}$ for every pair of distinct states. If

$$

\frac{P_i}{R_i} > \frac{P_j}{R_j}, \tag{1}

$$

then we can say that each representative in state *i* represents more people, and thus those people have a diluted vote.

There are lots of pairs of states. How do we actually measure these inequalities? This would make an excellent question in a statistics class (illustrating how one can answer the same question in different, equally reasonable ways) or even a civics class.

A few natural ideas emerge:

- We might try to minimize the differences of constituency size: $\left \lvert \frac{P_i}{R_i} - \frac{P_j}{R_j} \right \rvert$.
- We might try to minimize the differences in per capita representation: $\left \lvert \frac{R_i}{P_i} - \frac{R_j}{P_j} \right \rvert$.
- We might take overall size into account, and try to minimize both the relative constituency size and relative difference in per capita representation.

This last one needs a bit of explanation. Define the **relative difference** between two numbers *x* and *y* to be

$$

\frac{\lvert x - y \rvert}{\min(x, y)}.

$$

Suppose that for a pair of states, we have that $(1)$ holds, i.e. that representatives in state *j* have smaller constituencies than in state *i* (and therefore people in state *j* have more powerful votes). Then the relative difference in constituency size is

$$

\frac{P_i/R_i - P_j/R_j}{P_j/R_j} = \frac{P_i/R_i}{P_j/R_j} - 1.

$$

The relative difference in per capita representation is

$$

\frac{R_j/P_j - R_i/P_i}{R_i/P_i} = \frac{R_j/P_j}{R_i/P_i} - 1 =

\frac{P_i/R_i}{P_j/R_j} - 1.

$$

Thus these are the same! By accounting for differences in size by taking relative proportions, we see that minimizing relative difference in constituency size and minimizing relative difference in per capita representation are actually the same.
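
As a quick numeric sanity check of this equivalence, with made-up numbers (these are not real census figures):

```python
def rel_diff(x, y):
    """Relative difference: |x - y| divided by the smaller of the two."""
    return abs(x - y) / min(x, y)

# Hypothetical data: state i has 1000000 people and 12 seats, state j has 600000 and 8.
Pi, Ri, Pj, Rj = 1000000, 12, 600000, 8

print(rel_diff(Pi / Ri, Pj / Rj))   # relative difference in constituency size
print(rel_diff(Ri / Pi, Rj / Pj))   # relative difference in per capita representation
# Both print 0.1111..., which is P_i * R_j / (P_j * R_i) - 1.
```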

All three of these measures seem reasonable at first inspection. Unfortunately, they all give different apportionments (and all are different from Jefferson’s scheme — though to be fair, Jefferson’s scheme doesn’t seek to minimize inequality and there is no reason to think it should behave the same).

Each of these ideas leads to a different apportionment scheme, and in fact each has a name.

- Minimizing differences in constituency size is the *Dean* method.
- Minimizing differences in per capita representation is the *Webster* method.
- Minimizing relative differences in both constituency size and per capita representation is the *Hill* (or sometimes *Huntington-Hill*) method.

Further, each of these schemes has been used at some time in US history. Webster’s method was used immediately after the 1840 census, but for the 1850 census the original Alexander Hamilton scheme (the scheme vetoed by Washington in 1792) was used. In fact, the Apportionment Act of 1850 set the Hamilton method as the primary method, and this was nominally used until 1900.^{8} The Webster method was used again immediately after the 1910 census. Due to claims of incomplete and inaccurate census counts, no apportionment occurred based on the 1920 census.^{9}

In 1929 an automatic apportionment act was passed.^{10} In it, up to three different apportionment schemes would be provided to Congress after each census, based on a total of 435 seats:

- The apportionment that would come from whatever scheme was most recently used. (In 1930, this would be the Webster method).
- The apportionment that would come from the Webster method.
- The apportionment that would come from the newly introduced Hill method.

If one reads congressional discussion from the time, then it will be good to note that Webster’s method is sometimes called the *method of major fractions* and Hill’s method is sometimes called the *method of equal proportions*. Further, in a letter written by Bliss, Brown, Eisenhart, and Pearl of the National Academy of Sciences, Hill’s method was declared to be the recommendation of the Academy.^{11} From 1930 on, Hill’s method has been used.

The Hamilton method led to a few paradoxes and highly counterintuitive behavior that many representatives found disagreeable. In 1880, a paradox now called the *Alabama paradox* was noted. When deciding on the number of representatives that should be in the House, it was noted that if the House had 299 members, Alabama would have 8 representatives. But if the House had 300 members, Alabama would have 7 representatives — that is, making one *more* seat available led to Alabama receiving one *fewer* seat.

The problem is the fluctuating relationships between the many fractional parts of the ideal number of representatives per state (similar to those tallied in the table in the section **The Apportionment Act of 1792**).

Another paradox was discovered in 1900, known as the *Population paradox*. This is a scenario in which a state with a large population and rapid growth can lose a seat to a state with a small population and smaller population growth. In 1900, Virginia lost a seat to Maine, even though Virginia’s population was larger and growing much more rapidly.

In particular, in 1900, Virginia had 1854184 people and Maine had 694466 people, so Virginia had 2.67 times the population of Maine. In 1901, Virginia had 1873951 people and Maine had 699114 people, so Virginia had 2.68 times the population of Maine. And yet Hamilton apportionment would have given 10 seats to Virginia and 3 to Maine in 1900, but 9 to Virginia and 4 to Maine in 1901.

Central to this paradox is that even though Virginia was growing faster than Maine, the rest of the nation was growing faster still, and proportionally Virginia lost more because it was a larger state. But it’s still paradoxical for a state to lose a representative to a second state that is both smaller in population and growing less rapidly each census.^{12}

The Hill method can be shown to not suffer from either the Alabama paradox or the Population paradox. That it doesn’t suffer from these paradoxical behaviours and that it seeks to minimize a meaningful measure of inequality led to its adoption in the US.^{13}

Since 1930, the US has used the Hill method to apportion seats for the House of Representatives. But as described above, it may be hard to understand how to actually apply the Hill method. Recall that *P*_{i} is the population of state *i*, and *R*_{i} is the number of representatives allocated to state *i*. The Hill method seeks to minimize

$$

\frac{P_i/R_i - P_j/R_j}{P_j/R_j} = \frac{P_i/R_i}{P_j/R_j} - 1

$$

whenever *P*_{i}/*R*_{i} > *P*_{j}/*R*_{j}. Stated differently, the Hill method seeks to guarantee the smallest relative differences in constituency size.

We can work out a different way of understanding this apportionment that is easier to implement in practice.

Suppose that we have allocated all of the representatives, so that state *j* receives *R*_{j} representatives, and suppose that this allocation successfully minimizes relative differences in constituency size. Take two different states *i* and *j* with *P*_{i}/*R*_{i} > *P*_{j}/*R*_{j}. (If no such pair exists, then the allocation is perfect.)

We can ask if it would be a good idea to move one representative from state *j* to state *i*, since state *j*‘s constituency sizes are smaller. This can be thought of as working with *R*_{i}′=*R*_{i} + 1 and *R*_{j}′=*R*_{j} − 1. If this transfer lessens the inequality then it should be made — but since we are supposing that the allocation successfully minimizes relative difference in constituency size, we must have that the inequality is at least as large. This necessarily means that *P*_{j}/*R*_{j}′>*P*_{i}/*R*_{i}′ (since otherwise the relative difference is strictly smaller) and

$$

\frac{P_jR_i'}{P_iR_j'} - 1 \geq \frac{P_iR_j}{P_jR_i} - 1

$$

(since the relative difference must be at least as large). This is equivalent to

$$

\frac{P_j(R_i+1)}{P_i(R_j-1)} \geq \frac{P_iR_j}{P_jR_i}

\iff

\frac{P_j^2}{(R_j-1)R_j} \geq \frac{P_i^2}{R_i(R_i+1)}.

$$

As every variable is positive, we can rewrite this as

$$

\frac{P_j}{\sqrt{(R_j - 1)R_j}} \geq \frac{P_i}{\sqrt{R_i(R_i+1)}}. \tag{2}

$$

We’ve shown that $(2)$ must hold whenever *P*_{i}/*R*_{i} > *P*_{j}/*R*_{j} in a system that minimizes relative difference in constituency size. But in fact it must hold for all pairs of states *i* and *j*.

Clearly it holds if *i* = *j* as the denominator on the left is strictly smaller.

If we are in the case when *P*_{j}/*R*_{j} > *P*_{i}/*R*_{i}, then we necessarily have the chain *P*_{j}/(*R*_{j} − 1)>*P*_{j}/*R*_{j} > *P*_{i}/*R*_{i} > *P*_{i}/(*R*_{i} + 1). Multiplying the inner and outer inequalities shows that $(2)$ holds trivially in this case.

This inequality shows that the greatest obstruction to a perfect Hill apportionment is the largest value of the fraction

$$ \frac{P_i}{\sqrt{R_i(R_i+1)}}. $$

(Some call this term the *Hill rank-index*).

This observation leads to the following iterative construction of a Hill apportionment. Initially, assign every state 1 representative (since by the Constitution, each state gets at least one representative). Then, given an apportionment for *n* seats, we can get an apportionment for *n* + 1 seats by assigning the additional seat to the state *i* which maximizes the Hill rank-index $P_i/\sqrt{R_i(R_i+1)}$.

Further, it can be shown that Hill’s method yields a unique apportionment (except in the case of ties in the Hill rank-index, which are exceedingly rare in practice).

This is very quickly and easily implemented in code. In a later note, I will share the code I used to compute the various data for this note, as well as an implementation of Hill apportionment.
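
Until that later note, here is a minimal sketch of the iterative construction just described; it is an illustration, not the code used for the data in this note.

```python
from math import sqrt

def hill_apportion(populations, house_size=435):
    """Huntington-Hill: give every state one seat, then repeatedly award the next
    seat to the state with the largest rank-index P / sqrt(R * (R + 1))."""
    seats = {state: 1 for state in populations}
    for _ in range(house_size - len(populations)):
        state = max(populations,
                    key=lambda s: populations[s] / sqrt(seats[s] * (seats[s] + 1)))
        seats[state] += 1
    return seats
```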

Officially, Dean’s method of apportionment has never been used. But it was perhaps used in 1870 without being described. Officially, Hamilton’s method was in place and the size of the House was agreed to be 292. But the actual apportionment that occurred agreed with Dean’s method, not Hamilton’s method. Specifically, New York and Illinois were each given one fewer seat than Hamilton’s method would have given, while New Hampshire and Florida were given one additional seat each.

There are many circumstances surrounding the 1870 census and apportionment that make this a particularly convoluted time. Firstly, the US had just experienced its Civil War, where millions of people died and millions others moved or were displaced. Animosity and reconstruction were both in full swing. Secondly, the US passed the 14th amendment in 1868, so that suddenly the populations of Southern states grew as former slaves were finally allowed to be counted fully.

One might think that having two pairs of states swap a representative would be mostly inconsequential. But this difference, using Dean’s method instead of the agreed-upon Hamilton method, changed the result of the 1876 Presidential election. In this election, Samuel Tilden won New York while Rutherford B. Hayes won Illinois, New Hampshire, and Florida. As a result, Tilden received one fewer electoral vote and Hayes received one additional electoral vote — and the total electoral voting in the end had Hayes win with 185 votes to Tilden’s 184.

There is one further complicating factor, however, that makes this yet more convoluted. The 1876 election is perhaps the most disputed presidential election. In Florida, Louisiana, and South Carolina, each party reported that its candidate had won the state. Legitimacy was in question, and it’s widely believed that a deal was struck between the Democratic and Republican parties (see wikipedia and 270 to win). As a result of this deal, the Republican candidate Rutherford B. Hayes would gain all disputed votes and remove federal troops (which had been propping up reconstructive efforts) from the South. This marked the end of the “Reconstruction” period, and allowed the rise of the Democratic Redeemers (and their rampant black voter disenfranchisement) in the South.

Similar in consequence though not in controversy, the apportionment of 1990 influenced the results of the 2000 presidential election between George W. Bush and Al Gore (since the 2000 census was not complete before the election took place, the election occurred with the 1990 electoral college sizes). The modern Hill apportionment method was used, as it has been since 1930. But interestingly, if the originally proposed Hamilton method of 1792 had been used, the electoral college would have been tied at 269.^{14} If Jefferson’s method had been used, then Gore would have won with 271 votes to Bush’s 266.

These decisions have far-reaching consequences!

- Balinski, Michel L., and H. Peyton Young. Fair representation: meeting the ideal of one man, one vote. Brookings Institution Press, 2010.
- Balinski, Michel L., and H. Peyton Young. “The quota method of apportionment.” The American Mathematical Monthly 82.7 (1975): 701-730.
- Bliss, G. A., Brown, E. W., Eisenhart, L. P., & Pearl, R. (1929). Report to the President of the National Academy of Sciences. February, 9, 1015-1047.
- Crocker, R. House of Representatives Apportionment Formula: An Analysis of Proposals for Change and Their Impact on States. DIANE Publishing, 2011.
- Huntington, E. V. “The Apportionment of Representatives in Congress.” Transactions of the American Mathematical Society 30 (1928): 85–110.
- Peskin, Allan. “Was there a Compromise of 1877.” The Journal of American History 60.1 (1973): 63-75.
- US Census Results
- US Constitution
- US Congressional Record, as collected at https://memory.loc.gov/ammem/amlaw/lwaclink.html
- George Washington’s collected papers, as archived at https://web.archive.org/web/20090124222206/http://gwpapers.virginia.edu/documents/presidential/veto.html
- Wikipedia on the Compromise of 1877, at https://en.wikipedia.org/wiki/Compromise_of_1877
- Wikipedia on Arthur Vandenberg, at https://en.wikipedia.org/wiki/Arthur_Vandenberg

Posted in Data, Expository, Mathematics, Politics, Story
Tagged apportionment, election, Hill apportionment

Here are some notes for my talk **Finding Congruent Numbers, Arithmetic Progressions of Squares, and Triangles** (an invitation to analytic number theory), which I’m giving on Tuesday 26 February at Macalester College.

The slides for my talk are available here.

The overarching idea of the talk is to explore the deep relationship between

- right triangles with rational side lengths and area $n$,
- three-term arithmetic progressions of squares with common difference $n$, and
- rational points on the elliptic curve $Y^2 = X^3 - n^2 X$.

If one of these exists, then all three exist, and in fact there are one-to-one correspondences between each of them. Such an $n$ is called a **congruent number**.

By understanding this relationship, we also describe the ideas and results in the paper A Shifted Sum for the Congruent Number Problem, which I wrote jointly with Tom Hulse, Chan Ieong Kuan, and Alex Walker.

Towards the end of the talk, I say that in practice, the best way to decide if a (reasonably sized) number is congruent is through elliptic curves. Given a computer, we can investigate whether the number $n$ is congruent through a computer algebra system like sage.^{1}

For the rest of this note, I’ll describe how one can use sage to determine whether a number is congruent, and how to use sage to add points on elliptic curves to generate more triangles corresponding to a particular congruent number.

Firstly, one needs access to sage. It’s free to install, but it’s quite large. The easiest way to begin using sage immediately is to use cocalc.com, a free interface to sage (and other tools) that was created by William Stein, who also created sage.

In a sage session, we can create an elliptic curve through

```
> E6 = EllipticCurve([-36, 0])
> E6
Elliptic Curve defined by y^2 = x^3 - 36*x over Rational Field
```

More generally, to create the curve corresponding to whether or not $n$ is congruent, you can use

```
> n = 6 # (or anything you want)
> E = EllipticCurve([-n**2, 0])
```

We can ask sage whether our curve has many rational points by asking it to (try to) compute the rank.

```
> E6.rank()
1
```

If the rank is at least $1$, then there are infinitely many rational points on the curve and $n$ is a congruent number. If the rank is $0$, then $n$ is not congruent.^{2}
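
For instance, one can let sage classify small numbers this way; a small loop along these lines should work, though for larger $n$ the rank computation can be slow or, in bad cases, inconclusive.

```
> for n in range(1, 16):
      E = EllipticCurve([-n**2, 0])
      print(n, "congruent" if E.rank() > 0 else "not congruent")
```

In this range, the congruent numbers turn out to be 5, 6, 7, 13, 14, and 15.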

For the curve $Y^2 = X^3 - 36 X$ corresponding to whether $6$ is congruent, sage returns that the rank is $1$. We can ask sage to try to find a rational point on the elliptic curve through

```
> E6.point_search(10)
[(-3 : 9 : 1)]
```

The `10` in this code is a limit on the complexity of the point. The precise definition isn’t important — using $10$ is a reasonable limit for us.

We see that this outputs something. When sage examines the elliptic curve, it uses the equation $Y^2 Z = X^3 - 36 X Z^2$ — it turns out that in many cases, it’s easier to perform computations when every term is a polynomial of the same degree. The coordinates it’s giving us are of the form $(X : Y : Z)$, which looks a bit odd. We can ask sage to return just the $XY$ coordinates as well.

```
> Pt = E6.point_search(10)[0] # The [0] means to return the first element of the list
> Pt.xy()
(-3, 9)
```

In my talk, I describe a correspondence between points on elliptic curves and rational right triangles. In the talk, it arises as the choice of coordinates. But what matters for us right now is that the correspondence taking a point $(x, y)$ on an elliptic curve to a triangle $(a, b, c)$ is given by

$$(x, y) \mapsto \Big( \frac{n^2-x^2}{y}, \frac{-2 \cdot x \cdot n}{y}, \frac{n^2 + x^2}{y} \Big).$$

We can write a sage function to perform this map for us, through

```
> def pt_to_triangle(P):
      x, y = P.xy()
      return (36 - x**2)/y, (-2*x*6/y), (36+x**2)/y
> pt_to_triangle(Pt)
(3, 4, 5)
```

This returns the $(3, 4, 5)$ triangle!

Of course, we knew this triangle the whole time. But we can use sage to get more points. A very cool fact is that rational points on elliptic curves form a group under a sort of addition — we can add points on elliptic curves together and get more rational points. Sage is very happy to perform this addition for us, and then to see what triangle results.

```
> Pt2 = Pt + Pt
> Pt2.xy()
(25/4, -35/8)
> pt_to_triangle(Pt2)
(7/10, 120/7, -1201/70)
```

Another rational triangle with area $6$ is the $(7/10, 120/7, 1201/70)$ triangle. (You might notice that sage returned a negative hypotenuse, but it’s the absolute values that matter for the area). After scaling this to an integer triangle, we get the integer right triangle $(49, 1200, 1201)$ (and we can check that the squarefree part of the area is $6$).
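
As a quick check of that last claim in sage (assuming the `squarefree_part` method on integers, which sage provides):

```
> a, b, c = 49, 1200, 1201
> a^2 + b^2 == c^2
True
> (a*b//2).squarefree_part()
6
```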

Let’s do one more.

```
> Pt3 = Pt + Pt + Pt
> Pt3.xy()
(-1587/1369, -321057/50653)
> pt_to_triangle(Pt3)
(-4653/851, -3404/1551, -7776485/1319901)
```

That’s a complicated triangle! It may be fun to experiment some more — the triangles rapidly become very, very complicated. In fact, it was very important to the main result of our paper that these triangles become so complicated so quickly!

Posted in Expository, Math.NT, Mathematics, Programming, sage, sagemath

Today, I’m giving a talk on *Zeroes of L-functions associated to half-integral weight modular forms*, which includes some joint work with Li-Mei Lim and Tom Hulse, and which alludes to other joint work touched on previously with Jeff Hoffstein and Min Lee (and which perhaps should have been finished a few years ago).

Posted in Math.NT, Mathematics
Tagged half-integral weight modular form, l function, modular form, zeroes

Last year, my coauthors Tom Hulse, Chan Ieong Kuan, and Alex Walker posted a paper to the arXiv called “Second Moments in the Generalized Gauss Circle Problem”. I’ve briefly described its contents before.

This paper has been accepted and will appear in Forum of Mathematics: Sigma.

This is the first time I’ve submitted to the Forum of Mathematics, and I must say that this has been a very good journal experience. One interesting aspect about FoM: Sigma is that they are immediate (gold) open access, and they don’t release in issues. Instead, articles become available (for free) from them once the submission process is done. I was reviewing a publication-proof of the paper yesterday, and they appear to be very quick with regards to editing. Perhaps the paper will appear before the end of the year.

An updated version (the version from before the handling of proofs at the journal, so there will be a number of mostly aesthetic differences with the published version) of the paper will appear on the arXiv on Monday 10 December.^{1}

There is one major addition to the paper that didn’t appear in the original preprint. At one of the referee’s suggestions, Chan and I wrote an appendix. The major content of this appendix concerns a technical detail about Rankin-Selberg convolutions.

If $f$ and $g$ are weight $k$ cusp forms on $\mathrm{SL}(2, \mathbb{Z})$ with expansions $$ f(z) = \sum_ {n \geq 1} a(n) e(nz), \quad g(z) = \sum_ {n \geq 1} b(n) e(nz), $$ then one can use a (real analytic) Eisenstein series $$ E(s, z) = \sum_ {\gamma \in \mathrm{SL}(2, \mathbb{Z})_ \infty \backslash \mathrm{SL}(2, \mathbb{Z})} \mathrm{Im}(\gamma z)^s $$ to recognize the Rankin-Selberg $L$-function \begin{equation}\label{RS} L(s, f \otimes g) := \zeta(s) \sum_ {n \geq 1} \frac{a(n)b(n)}{n^{s + k - 1}} = h(s) \langle f g y^k, E(s, z) \rangle, \end{equation} where $h(s)$ is an easily-understandable function of $s$ and where $\langle \cdot, \cdot \rangle$ denotes the Petersson inner product.

When $f$ and $g$ are not cusp forms, or when $f$ and $g$ are modular with respect to a congruence subgroup of $\mathrm{SL}(2, \mathbb{Z})$, then there are adjustments that must be made to the typical construction of $L(s, f \otimes g)$.

When $f$ and $g$ are not cusp forms, Zagier^{2} provided a way to recognize $L(s, f \otimes g)$ when $f$ and $g$ are modular on the full modular group $\mathrm{SL}(2, \mathbb{Z})$. And under certain conditions that he describes, he shows that one can still recognize $L(s, f \otimes g)$ as an inner product with an Eisenstein series as in \eqref{RS}.

In principle, his method of proof would apply for non-cuspidal forms defined on congruence subgroups, but in practice this becomes too annoying and bogged down with details to work with. Fortunately, in 2000, Gupta^{3} gave a different construction of $L(s, f \otimes g)$ that generalizes more readily to non-cuspidal forms on congruence subgroups. His construction is very convenient, and it shows that $L(s, f \otimes g)$ has all of the properties expected of it.

However, Gupta does not show that there are certain conditions under which one can recognize $L(s, f \otimes g)$ as an inner product against an Eisenstein series.^{4} For this paper, we need to deal very explicitly and concretely with $L(s, \theta^2 \otimes \overline{\theta^2})$, which is formed from the modular form $\theta^2$, non-cuspidal on a congruence subgroup.

The Appendix to the paper can be thought of as an extension of Gupta’s paper: it uses Gupta’s ideas and techniques to prove a result analogous to \eqref{RS}. We then use this to get the explicit understanding necessary to tackle the Gauss Sphere problem.

There is more to this story. I’ll return to it in a later note.

I should say that there are many other revisions between the original preprint and the final one. These are mainly due to the extraordinary efforts of two Referees. One Referee was kind enough to give us approximately 10 pages of itemized suggestions and comments.

When I first opened these comments, I was a bit afraid. Having *so many comments* was daunting. But this Referee really took his or her time to point us in the right direction, and the resulting paper is vastly improved (and in many cases shortened, although the new appendix hides just how much the arguments were simplified and cut).

More broadly, the Referee acted as a sort of mentor with respect to my technical writing. I have a lot of opinions on technical writing,^{5} but this process changed and helped sharpen my ideas concerning good technical math writing.

I sometimes hear lots of negative aspects about peer review, but this particular pair of Referees turned the publication process into an opportunity to learn about good mathematical exposition — I didn’t expect this.

I was also surprised by the infrastructure that existed at the University of Warwick for handling a gold open access submission. Forum of Math: Sigma funds its open access through an author-pays model. Or rather, the author’s institution pays. It took essentially no time at all for Warwick to arrange the payment (about 500 pounds).

This is a not-inconsequential amount of money, but it is much less than the 1500 dollars that PLoS One uses. The comparison with PLoS One is perhaps apt. PLoS is older, and perhaps paved the way for modern gold open access journals like FoM. PLoS was started by a group of established biologists and chemists, including a Nobel prize winner; FoM was started by a group of established mathematicians, including multiple Fields medalists.^{6}

I will certainly consider Forum of Mathematics in the future.

Posted in Expository, Math.NT, Mathematics, Warwick
Tagged gauss circle problem, l function, number theory, rankin-selberg convolution

In my previous note, I looked at an amusing but inefficient way to compute the sum $$ \sum_{n \geq 1} \frac{\varphi(n)}{2^n - 1}$$ using Mellin and inverse Mellin transforms. This was great fun, but the amount of work required was more intense than the more straightforward approach offered immediately by using Lambert series.

However, Adam Harper suggested that there is a nice shortcut that we can use (although coming up with this shortcut requires either a lot of familiarity with Mellin transforms or knowledge of the answer).

In the Lambert series approach, one shows quickly that $$ \sum_{n \geq 1} \frac{\varphi(n)}{2^n - 1} = \sum_{n \geq 1} \frac{n}{2^n},$$ and then evaluates this last sum directly. For the Mellin transform approach, we might ask: do the two functions $$ f(x) = \sum_{n \geq 1} \frac{\varphi(n)}{2^{nx} - 1}$$ and $$ g(x) = \sum_{n \geq 1} \frac{n}{2^{nx}}$$ have the same Mellin transforms? From the previous note, we know that they have the same values at $1$.

We also showed very quickly that $$ \mathcal{M} [f] = \frac{1}{(\log 2)^s} \Gamma(s) \zeta(s-1). $$ The more difficult parts from the previous note arose in the evaluation of the inverse Mellin transform at $x=1$.

Let us compute the Mellin transform of $g$. We find that $$ \begin{align}

\mathcal{M}[g] &= \sum_{n \geq 1} n \int_0^\infty \frac{1}{2^{nx}} x^s \frac{dx}{x} \notag \\

&= \sum_{n \geq 1} n \int_0^\infty \frac{1}{e^{nx \log 2}} x^s \frac{dx}{x} \notag \\

&= \sum_{n \geq 1} \frac{n}{(n \log 2)^s} \int_0^\infty x^s e^{-x} \frac{dx}{x} \notag \\

&= \frac{1}{(\log 2)^s} \zeta(s-1)\Gamma(s). \notag

\end{align}$$ To go from the second line to the third line, we did the change of variables $x \mapsto x/(n \log 2)$, yielding an integral which is precisely the definition of the Gamma function.

Thus we see that $$ \mathcal{M}[g] = \frac{1}{(\log 2)^s} \Gamma(s) \zeta(s-1) = \mathcal{M}[f],$$ and thus $f(x) = g(x)$. (“Nice” functions with the same “nice” Mellin transforms are also the same, exactly as with Fourier transforms).

This shows that not only is $$ \sum_{n \geq 1} \frac{\varphi(n)}{2^n - 1} = \sum_{n \geq 1} \frac{n}{2^n},$$ but in fact $$ \sum_{n \geq 1} \frac{\varphi(n)}{2^{nx} - 1} = \sum_{n \geq 1} \frac{n}{2^{nx}}$$ for all $x > 1$.

I think that’s sort of slick.
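
As an aside, one can numerically compare truncations of the two series. The following little plain-Python check (with a deliberately naive Euler phi, just for illustration) is not from the original computation:

```python
from math import gcd

def phi(n):
    """Naive Euler phi, fine for small n."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def lhs(x, terms=200):
    return sum(phi(n) / (2**(n * x) - 1) for n in range(1, terms))

def rhs(x, terms=200):
    return sum(n / 2**(n * x) for n in range(1, terms))

print(lhs(1.3), rhs(1.3))   # the truncations agree to (essentially) machine precision
```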

Posted in Math.NT, Mathematics, Warwick
Tagged euler phi, Mellin Transform, number theory, sum evaluation

At a recent colloquium at the University of Warwick, it was mentioned in passing that

\begin{equation}\label{question}

\sum_ {n \geq 1} \frac{\varphi(n)}{2^n - 1} = 2.

\end{equation}

John Cremona asked — *How do you prove that*?

It almost fails a heuristic check, as one can quickly check that

\begin{equation}\label{similar}

\sum_ {n \geq 1} \frac{n}{2^n} = 2,

\end{equation}

which is surprisingly similar to \eqref{question}. I wish I knew more examples of pairs with a similar flavor.

**[Edit:** Note that an addendum to this note has been added here. In it, we see that there is a way to shortcut the “hard part” of the long computation.**]**

Shortly afterwards, Adam Harper and Samir Siksek pointed out that this can be determined from Lambert series, and in fact that Hardy and Wright include a similar exercise in their book. This proof is delightful and short.

The idea is that, by expanding the denominator in power series, one has that

\begin{equation}

\sum_{n \geq 1} a(n) \frac{x^n}{1 - x^n} \notag

= \sum_ {n \geq 1} a(n) \sum_{m \geq 1} x^{mn}

= \sum_ {n \geq 1} \Big( \sum_{d \mid n} a(d) \Big) x^n,

\end{equation}

where the inner sum is a sum over the divisors of $n$. This all converges beautifully for $\lvert x \rvert < 1$.

Applied to \eqref{question}, we find that

\begin{equation}

\sum_{n \geq 1} \frac{\varphi(n)}{2^n - 1} \notag

= \sum_ {n \geq 1} \varphi(n) \frac{2^{-n}}{1 - 2^{-n}}

= \sum_ {n \geq 1} 2^{-n} \sum_{d \mid n} \varphi(d),

\end{equation}

and as

\begin{equation}

\sum_ {d \mid n} \varphi(d) = n, \notag

\end{equation}

we see that \eqref{question} can be rewritten as \eqref{similar} after all, and thus both evaluate to $2$.

That’s a nice derivation using a series that I hadn’t come across before. But that’s not what this short note is about. This note is about evaluating \eqref{question} in a different way, arguably the wrong way. But it’s a wrong way that works out in a nice way that at least one person^{1} finds appealing.

We will use Mellin inversion — this is essentially Fourier inversion, but in a change of coordinates.

Let $f$ denote the function

\begin{equation}

f(x) = \frac{1}{2^x - 1}. \notag

\end{equation}

Denote by $f^*$ the Mellin transform of $f$,

\begin{equation}

f^*(s) := \mathcal{M} [f(x)] (s) := \int_ 0^\infty f(x) x^s \frac{dx}{x}

= \frac{1}{(\log 2)^s} \Gamma(s)\zeta(s),\notag

\end{equation}

where $\Gamma(s)$ and $\zeta(s)$ are the Gamma function and Riemann zeta functions.^{2}

For a general nice function $g(x)$, its Mellin transform satisfies

\begin{equation}

\mathcal{M}[g(nx)] (s)

= \int_0^\infty g(nx) x^s \frac{dx}{x}

= \frac{1}{n^s} \int_0^\infty g(x) x^s \frac{dx}{x}

= \frac{1}{n^s} g^*(s).\notag

\end{equation}

Further, the Mellin transform is linear. Thus

\begin{equation}\label{mellinbase}

\mathcal{M}[\sum_{n \geq 1} \varphi(n) f(nx)] (s)

= \sum_ {n \geq 1} \frac{\varphi(n)}{n^s} f^*(s)

= \sum_ {n \geq 1} \frac{\varphi(n)}{n^s} \frac{\Gamma(s) \zeta(s)}{(\log 2)^s}.

\end{equation}

The Euler phi function $\varphi(n)$ is multiplicative and nice, and its Dirichlet series can be rewritten as

\begin{equation}

\sum_{n \geq 1} \frac{\varphi(n)}{n^s} \notag

= \frac{\zeta(s-1)}{\zeta(s)}.

\end{equation}

Thus the Mellin transform in \eqref{mellinbase} can be written as

\begin{equation}

\frac{1}{(\log 2)^s} \Gamma(s) \zeta(s-1). \notag

\end{equation}

By the fundamental theorem of Mellin inversion (which is analogous to Fourier inversion, but again in different coordinates), the inverse Mellin transform will return the original function. The inverse Mellin transform of a function $h(s)$ is defined to be

\begin{equation}

\mathcal{M}^{-1}[h(s)] (x) \notag

:=

\frac{1}{2\pi i} \int_ {c - i \infty}^{c + i\infty} x^s h(s) ds,

\end{equation}

where $c$ is taken so that the integral converges beautifully, and the integral is over the vertical line with real part $c$. I’ll write $(c)$ as a shorthand for the limits of integration. Thus

\begin{equation}\label{mellininverse}

\sum_{n \geq 1} \frac{\varphi(n)}{2^{nx} - 1}

= \frac{1}{2\pi i} \int_ {(3)} \frac{1}{(\log 2)^s}

\Gamma(s) \zeta(s-1) x^{-s} ds.

\end{equation}

We can now describe the end goal: evaluate \eqref{mellininverse} at $x=1$, which will recover the value of the original sum in \eqref{question}.

How can we hope to do that? The idea is to shift the line of integration arbitrarily far to the left, pick up the infinitely many residues guaranteed by Cauchy’s residue theorem, and to recognize the infinite sum as a classical series.

The integrand has poles at $s = 2, 0, -2, -4, \ldots$, coming from the zeta function ($s = 2$) and the Gamma function (all the others). Note that there are no poles at negative odd integers: the poles of the Gamma function there are cancelled, since $\zeta(s-1)$ vanishes when $s - 1$ is a negative even integer (the trivial zeroes of the zeta function).

Recall, $\zeta(s)$ has residue $1$ at $s = 1$ and $\Gamma(s)$ has residue $(-1)^n/{n!}$ at $s = -n$. Then shifting the line of integration and picking up all the residues reveals that

\begin{equation}

\sum_{n \geq 1} \frac{\varphi(n)}{2^{n} - 1} \notag

=\frac{1}{\log^2 2} + \zeta(-1) + \frac{\zeta(-3)}{2!} \log^2 2 +

\frac{\zeta(-5)}{4!} \log^4 2 + \cdots

\end{equation}

The zeta function at negative integers has a very well-known relation to the Bernoulli numbers,

\begin{equation}\label{zeta_bern}

\zeta(-n) = - \frac{B_ {n+1}}{n+1},

\end{equation}

where Bernoulli numbers are the coefficients in the expansion

\begin{equation}\label{bern_gen}

\frac{t}{1 - e^{-t}} = \sum_{m \geq 0} B_m \frac{t^m}{m!}.

\end{equation}

Many general proofs for the values of $\zeta(2n)$ use this relation and the functional equation, as well as a computation of the Bernoulli numbers themselves. Another important aspect of Bernoulli numbers that is apparent through \eqref{zeta_bern} is that $B_{2n+1} = 0$ for $n \geq 1$, lining up with the trivial zeroes of the zeta function.

Translating the zeta values into Bernoulli numbers, we find that \eqref{question} is equal to

\begin{align}

&\frac{1}{\log^2 2} - \frac{B_2}{2} - \frac{B_4}{2! \cdot 4} \log^2 2 -

\frac{B_6}{4! \cdot 6} \log^4 2 - \frac{B_8}{6! \cdot 8} \log^6 2 - \cdots \notag \\

&=

-\sum_{m \geq 0} (m-1) B_m \frac{(\log 2)^{m-2}}{m!}. \label{recog}

\end{align}

This last sum is excellent, and can be recognized.

For a general exponential generating series

\begin{equation}

F(t) = \sum_{m \geq 0} a(m) \frac{t^m}{m!},\notag

\end{equation}

we see that

\begin{equation}

\frac{d}{dt} \frac{1}{t} F(t) \notag

=\sum_{m \geq 0} (m-1) a(m) \frac{t^{m-2}}{m!}.

\end{equation}

Applying this to the series defining the Bernoulli numbers from \eqref{bern_gen}, we find that

\begin{equation}

\frac{d}{dt} \frac{1}{t} \frac{t}{1 - e^{-t}} \notag

=- \frac{e^{-t}}{(1 - e^{-t})^2},

\end{equation}

and also that

\begin{equation}

\frac{d}{dt} \frac{1}{t} \frac{t}{1 - e^{-t}} \notag

=\sum_{m \geq 0} (m-1) B_m \frac{(t)^{m-2}}{m!}.

\end{equation}

This is exactly the sum that appears in \eqref{recog}, with $t = \log 2$.

Putting this together, we find that

\begin{equation}

-\sum_{m \geq 0} (m-1) B_m \frac{(\log 2)^{m-2}}{m!} \notag

=\frac{e^{-\log 2}}{(1 - e^{-\log 2})^2}

= \frac{1/2}{(1/2)^2} = 2.

\end{equation}

Thus we find that \eqref{question} really is equal to $2$, as we had sought to show.
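
As a final sanity check, the Bernoulli series converges quickly, and one can watch it settle on $2$ numerically. Here is a small mpmath script, not part of the original argument:

```python
from mpmath import mp, bernoulli, log, mpf

mp.dps = 30
t = log(2)

total = mpf(0)
factorial = mpf(1)
for m in range(40):
    if m > 0:
        factorial *= m
    # B_1's sign convention is irrelevant here: its coefficient (m - 1) vanishes at m = 1.
    total += -(m - 1) * bernoulli(m) * t**(m - 2) / factorial

print(total)   # 2.0000... to the working precision
```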

Posted in Math.NT, Mathematics, Warwick
Tagged mellin inversion, number theory, zeta

This is a brief note intended primarily for my collaborators interested in using Rubinstein’s `lcalc` to compute the values of half-integral weight $L$-functions.

We will be using lcalc through sage. Unfortunately, we are going to be using some functionality which sage doesn’t expose particularly nicely, so it will feel a bit silly. Nonetheless, using sage’s distribution will prevent us from needing to compile it on our own (and there are a few bugfixes present in sage’s version).

Some $L$-functions are inbuilt into lcalc, but not half-integral weight $L$-functions. So it will be necessary to create a datafile containing the data that lcalc will use to generate its approximations. In short, this datafile will describe the shape of the functional equation and give a list of coefficients for lcalc to use.

It is assumed that the $L$-function is normalized in such a way that

$$\begin{equation}

\Lambda(s) = Q^s L(s) \prod_{j = 1}^{A} \Gamma(\gamma_j s + \lambda_j) = \omega \overline{\Lambda(1 - \overline{s})}.

\end{equation}$$

This involves normalizing the functional equation to be of shape $s \mapsto 1-s$. Also note that $Q$ will be given as a real number.

An annotated version of the datafile you should create looks like this

```
2 # 2 means the Dirichlet coefficients are reals
0 # 0 means the L-function isn't a "nice" one
10000 # 10000 coefficients will be provided
0 # 0 means the coefficients are not periodic
1 # num Gamma factors of form \Gamma(\gamma s + \lambda)
1 # the \gamma in the Gamma factor
1.75 0 # \lambda in Gamma factor; complex valued, space delimited
0.318309886183790 # Q. In this case, 1/pi
1 0 # real and imaginary parts of omega, sign of func. eq.
0 # number of poles
1.000000000000000 # a(1)
-1.78381067250408 # a(2)
... # ...
-0.622124724090625 # a(10000)
```

If there is an error, lcalc will usually fail silently. (Bummer). Note that in practice, **datafiles should only contain numbers and should not contain comments.** This annotated version is for convenience, not for use.

You can find a copy of the datafile for the unique half-integral weight cusp form of weight $9/2$ on $\Gamma_0(4)$ here. This uses the first 10000 coefficients — it’s surely possible to use more, but this was the test-setup that I first set up.

In order to create datafiles for other cuspforms, it is necessary to compute the coefficients (presumably using magma or sage) and then to populate a datafile. A good exercise would be to recreate this datafile using sage-like methods.

One way to create this datafile is to explicitly create the q-expansion of the modular form, if we happen to know a convenient expression. For us, we happen to know that $f = \eta(2z)^{12} \theta(z)^{-3}$. Thus one way to create the coefficients is to do something like the following.

```
num_coeffs = 10**5 + 1
weight = 9.0 / 2.0
R.<q> = PowerSeriesRing(ZZ)
theta_expansion = theta_qexp(num_coeffs)
# Note that qexp_eta omits the q^(1/24) factor
eta_expansion = qexp_eta(ZZ[['q']], num_coeffs + 1)
eta2_coeffs = []
for i in range(num_coeffs):
    if i % 2 == 1:
        eta2_coeffs.append(0)
    else:
        eta2_coeffs.append(eta_expansion[i//2])
eta2 = R(eta2_coeffs)
g = q * ( (eta2)**4 / (theta_expansion) )**3
coefficients = g.list()[1:] # skip the 0 coeff
print(coefficients[:10])
normalized_coefficients = []
for idx, elem in enumerate(coefficients, 1):
    normalized_coeff = 1.0 * elem / (idx ** (.5 * (weight - 1)))
    normalized_coefficients.append(normalized_coeff)
print(normalized_coefficients[:10])
```
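
To turn these normalized coefficients into a datafile of the shape described above, something like the following hypothetical helper works; the header values mirror the annotated example for this particular weight $9/2$ form.

```python
def write_lcalc_datafile(filename, coeffs, Q, gam=1.0, lam=1.75, omega=(1, 0)):
    """Write an lcalc datafile: a header describing the functional equation,
    then one Dirichlet coefficient per line."""
    with open(filename, "w") as f:
        f.write("2\n")                    # coefficients are real
        f.write("0\n")                    # not one of lcalc's built-in L-functions
        f.write("%d\n" % len(coeffs))     # number of coefficients provided
        f.write("0\n")                    # coefficients are not periodic
        f.write("1\n")                    # one Gamma factor
        f.write("%s\n" % gam)             # gamma in Gamma(gamma * s + lambda)
        f.write("%s 0\n" % lam)           # lambda (real and imaginary parts)
        f.write("%s\n" % Q)               # the Q from the functional equation
        f.write("%s %s\n" % omega)        # omega, the sign of the functional equation
        f.write("0\n")                    # number of poles
        for c in coeffs:
            f.write("%s\n" % c)

# For example: write_lcalc_datafile("g1_lcalcfile.txt", normalized_coefficients, 1/pi.n())
```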

Suppose that you have a datafile, called `g1_lcalcfile.txt` (for example). Then to use this from sage, you point lcalc within sage to this file. This can be done through calls such as

```
# Computes L(0.5 + 0i, f)
lcalc('-v -x0.5 -y0 -Fg1_lcalcfile.txt')
# Computes L(s, f) from 0.5 to (2 + 7i) at 1000 equally spaced samples
lcalc('--value-line-segment -x0.5 -y0 -X2 -Y7 --number-samples=1000 -Fg1_lcalcfile.txt')
# See lcalc.help() for more on calling lcalc.
```

The key in these is to pass along the datafile through the `-F` argument.

I recently attended Building Bridges 4, an automorphic forms summer school and workshop. A major goal of the conference is to foster communication and relationships between researchers from North America and Europe, especially junior researchers and graduate students.

It was a great conference, and definitely one of the better conferences that I’ve attended. What made it so good? For one thing, it was in Budapest, and I love Budapest. Many of the main topics were close to my heart, which is a big plus.

But what I think really set it apart was that there were lots of relatively short talks, and almost everyone attended almost every talk.^{1}

The amount of time allotted to a talk carries extreme power in deciding what sort of talk it will be. A typical hour-long seminar talk is long enough to give context, describe a line of research leading to a set of results, discuss how these results fit into the literature, and even to give a non-rushed description of how something is proved. Sometimes a good speaker will even distill a few major ideas and discuss how they are related. A long talk can have multiple major ideas (although just one presented very well can make a good talk too).

In comparison, 50, 40, and 30 minute talks require much more discipline. As the amount of time decreases, the number of ideas that can be inserted into a talk decreases. And this relationship is not linear! Thirty minutes is just about long enough to describe one idea pretty well, and to do anything more is very hard.^{2}

Something interesting happens for shorter talks. For 20 minute, 15 minute, and 10 minute talks, the limitation almost serves as a source of inspiration.^{3} Being forced to focus on what’s important is a powerful organizing force.

The median talk length was 20 minutes, which is a very comfortable number. This is long enough to state a result and give context. It’s also long enough to tempt speakers into describing methodology behind a proof, but not long enough to effectively teach someone how the proof works.

An extraordinary aspect of a 20 minute talk is also that it’s short enough to pay attention to, even if it’s only an okay talk. It is perhaps not a surprise to most conference goers that most talks are not so great. To be a skilled orator is to be exceptional.

At Building Bridges, I was introduced to math *speed talks*. These are two minute talks. I’ve seen many programming *lightning talks* (often used to plug a particular product or solution to a common programming problem), but these math *speed talks* were different.

People used their two minutes to introduce an idea, or a result. And they either chose to give the broadest possible context, or a singular idea in the proof.

People were talking about *real mathematics* in **two minutes**. And I loved it.

Simply having a task where you distill some real mathematics into a two minute coherent description is worthwhile. *What’s important? What do you really want to say? Why?*

Two minutes is so short that it feels silly. And silly means that it doesn’t feel dangerous or scary, and thus many people felt willing to give it a try. At Building Bridges, the organizers gamified the speed talks, so that getting the closest to 2 minutes was rewarded with applause and going over two minutes resulted in a buzzer going off. It was a game, and it was **fun**. It was encouraging.

I firmly support any activity that encourages people who normally don’t speak so much, especially students and junior researchers. You learn a lot by giving a talk, even if it’s only a two minute talk.^{4}

This conference had 19 (I think) speed talks over a three day stretch. They were given in clumps after the last regular talk each day. Since people were there for the big talk, everyone attended the speed talks. This is also important! In conferences like the Joint Math Meetings, where there might even be something like speed talks, it’s essentially impossible to pay attention since there are too many people in too many places and you never can step in the same river twice. Here, speed talks were given on the same stage as long talks, to the same audience, and with the same equipment.

Every conference should have speed talks. And they should be treated as first-class talks, with the exception that they are irrefutably silly.

Go forth and spread the speed talk gospel.

On 18 July 2018 I gave a talk at the 4th Building Bridges Automorphic Forms Workshop, which is hosted at the Renyi Institute in Budapest, Hungary this year. In this talk, I spoke about counting points on hyperboloids, with a certain focus on counting points on the three dimensional hyperboloid

$$\begin{equation} X^2 + Y^2 = Z^2 + h \end{equation}$$

for any fixed integer $h$.

I gave a similar talk at the 32nd Automorphic Forms Workshop in Tufts in March. I don’t say this during my talk, but a big reason for giving these talks is to continue to inspire me to finish the corresponding paper. (There are still a couple of rough edges that need some attention).

The methodology for the result relies on the spectral expansion of half-integral weight modular forms. This is unfriendly to those unfamiliar with the subject, and particularly mysterious to students. But there is a nice connection to a topic discussed by Arpad Toth during the previous week’s associated summer school.

Arpad sketched a proof of the spectral decomposition of holomorphic modular cusp forms on $\Gamma = \mathrm{SL}(2, \mathbb{Z})$. He showed that

$$\begin{equation} L^2(\Gamma \backslash \mathcal{H}) = \textrm{cuspidal} \oplus \textrm{Eisenstein}, \tag{1}

\end{equation}$$

where the *cuspidal* contribution comes from Maass forms and the *Eisenstein* contribution comes from line integrals against Eisenstein series.

The typical Eisenstein series $$\begin{equation} E(z, s) = \sum_{\gamma \in \Gamma_\infty \backslash \Gamma} \textrm{Im}(\gamma z)^s \end{equation}$$ only converges for $\mathrm{Re}(s) > 1$, and the initial decomposition in $(1)$ implicitly has $s$ in this range.

To write down the integrals appearing in the Eisenstein spectrum explicitly, one normally shifts the line of integration to $1/2$. As Arpad explained, classically this produces a pole at $s = 1$ (whose residue is the constant function).

In half-integral weight, the Eisenstein series has a pole at $s = 3/4$, with the standard theta function

$$\begin{equation} \theta(z) = \sum_{n \in \mathbb{Z}} e^{2 \pi i n^2 z} \end{equation}$$

as the residue. (More precisely, it’s a constant times $y^{1/4} \theta(z)$, or a related theta function for $\Gamma_0(N)$). I refer to this portion of the spectrum as *the residual spectrum*, since it comes from often-forgotten residues of Eisenstein series. Thus the spectral decomposition for half-integral weight objects is a bit more complicated than the normal case.

When giving talks involving half-integral weight spectral expansions to audiences including non-experts, I usually omit description of this. But for those who attended the summer school, it’s possible to at least recognize where these additional terms come from.

The slides for this talk are available here.

Posted in Expository, Math.NT, Mathematics
Tagged automorphic forms, BB18, Building Bridges, hyperboloid, mathematics

This is the final chapter in my series about the state of internet fora, and Math.SE and StackOverflow in particular. The previous chapters are Challenges Facing Community Cohesion and Ghosts of Forums Past. Unlike the previous entries, this also sits on Meta.Math.SE (and was posted there a week before here). (As I write this as a moderator of Math.SE, I refer to the Math.SE community as “we”, “us”, and “our” community).

A couple of weeks ago, there was a proposal on meta.Math.SE to introduce a third level of math site to the SE network. Many members of the Math.SE community have reacted very positively to this proposal, to the extent that even some of the moderators have considered throwing their weight behind it.

But a NoviceMathSE site *would be doomed to fail, and such a separation would not solve the underlying problems facing the site*.

To explain my point of view, we need to examine more closely the arguments in favor of NoviceMathSE.