
Update to Second Moments in the Generalized Gauss Circle Problem

Last year, my coauthors Tom Hulse, Chan Ieong Kuan, and Alex Walker posted a paper to the arXiv called “Second Moments in the Generalized Gauss Circle Problem”. I’ve briefly described its contents before.

This paper has been accepted and will appear in Forum of Mathematics: Sigma.

(Image: more randomized squares art.)

This is the first time I’ve submitted to the Forum of Mathematics, and I must say that this has been a very good journal experience. One interesting aspect of FoM: Sigma is that it is immediate (gold) open access, and it doesn’t release in issues. Instead, articles become available (for free) from the journal once the submission process is done. I was reviewing a publication proof of the paper yesterday, and they appear to be very quick with the editing. Perhaps the paper will appear before the end of the year.

An updated version of the paper (the version from before the journal’s handling of proofs, so there will be a number of mostly aesthetic differences from the published version) will appear on the arXiv on Monday 10 December.

A new appendix has appeared

There is one major addition to the paper that didn’t appear in the original preprint. At the suggestion of one of the referees, Chan and I wrote an appendix. The major content of this appendix concerns a technical detail about Rankin-Selberg convolutions.

If $f$ and $g$ are weight $k$ cusp forms on $\mathrm{SL}(2, \mathbb{Z})$ with expansions $$ f(z) = \sum_{n \geq 1} a(n) e(nz), \quad g(z) = \sum_{n \geq 1} b(n) e(nz), $$ then one can use a (real analytic) Eisenstein series $$ E(s, z) = \sum_{\gamma \in \mathrm{SL}(2, \mathbb{Z})_\infty \backslash \mathrm{SL}(2, \mathbb{Z})} \mathrm{Im}(\gamma z)^s $$ to recognize the Rankin-Selberg $L$-function \begin{equation}\label{RS} L(s, f \otimes g) := \zeta(s) \sum_{n \geq 1} \frac{a(n)b(n)}{n^{s + k - 1}} = h(s) \langle f g y^k, E(s, z) \rangle, \end{equation} where $h(s)$ is an easily-understandable function of $s$ and where $\langle \cdot, \cdot \rangle$ denotes the Petersson inner product.
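For orientation, identities of the shape \eqref{RS} come from the standard unfolding trick. Here is a sketch of that computation, with normalizations, conjugations, and convergence questions suppressed, so the constants and the zeta factor need not match the conventions used in the paper:
$$\begin{align}
\int_{\mathrm{SL}(2, \mathbb{Z}) \backslash \mathbb{H}} f(z) \overline{g(z)} y^k E(s, z) \frac{dx \, dy}{y^2}
&= \int_0^\infty \int_0^1 f(z) \overline{g(z)} y^{s + k} \frac{dx \, dy}{y^2} \notag \\
&= \int_0^\infty \sum_{n \geq 1} a(n) \overline{b(n)} e^{-4 \pi n y} y^{s + k - 2} \, dy \notag \\
&= \frac{\Gamma(s + k - 1)}{(4 \pi)^{s + k - 1}} \sum_{n \geq 1} \frac{a(n) \overline{b(n)}}{n^{s + k - 1}}. \notag
\end{align}$$
The first equality is the unfolding against $E(s, z)$, the second comes from integrating out $x$, and inverting the Gamma factor and the powers of $4\pi$ (together with the Eisenstein normalization responsible for the zeta factor) is, roughly, what produces the function $h(s)$.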

When $f$ and $g$ are not cusp forms, or when $f$ and $g$ are modular with respect to a congruence subgroup of $\mathrm{SL}(2, \mathbb{Z})$, then there are adjustments that must be made to the typical construction of $L(s, f \otimes g)$.

When $f$ and $g$ are not cusp forms but are still modular on the full modular group $\mathrm{SL}(2, \mathbb{Z})$, Zagier provided a way to recognize $L(s, f \otimes g)$. Under certain conditions that he describes, he shows that one can still recognize $L(s, f \otimes g)$ as an inner product with an Eisenstein series as in \eqref{RS}.

In principle, his method of proof would apply to non-cuspidal forms defined on congruence subgroups, but in practice it becomes too annoying and bogged down in details to work with. Fortunately, in 2000, Gupta gave a different construction of $L(s, f \otimes g)$ that generalizes more readily to non-cuspidal forms on congruence subgroups. His construction is very convenient, and it shows that $L(s, f \otimes g)$ has all of the properties expected of it.

However, Gupta does not show that there are conditions under which one can recognize $L(s, f \otimes g)$ as an inner product against an Eisenstein series. For this paper, we need to deal very explicitly and concretely with $L(s, \theta^2 \otimes \overline{\theta^2})$, which is formed from the modular form $\theta^2$, non-cuspidal on a congruence subgroup.

The Appendix to the paper can be thought of as an extension of Gupta’s paper: it uses Gupta’s ideas and techniques to prove a result analogous to \eqref{RS}. We then use this to get the explicit understanding necessary to tackle the Gauss Sphere problem.

There is more to this story. I’ll return to it in a later note.

Other submission details for FoM: Sigma

I should say that there are many other revisions between the original preprint and the final one. These are mainly due to the extraordinary efforts of two Referees. One Referee was kind enough to give us approximately 10 pages of itemized suggestions and comments.

When I first opened these comments, I was a bit afraid. Having so many comments was daunting. But this Referee really took his or her time to point us in the right direction, and the resulting paper is vastly improved (and in many places shortened, although the new appendix hides just how much the simplified arguments cut the length).

More broadly, the Referee acted as a sort of mentor with respect to my technical writing. I have a lot of opinions on technical writing, but this process changed and sharpened my ideas about good technical math writing.

I sometimes hear a lot of complaints about peer review, but this particular pair of Referees turned the publication process into an opportunity to learn about good mathematical exposition. I didn’t expect that.

I was also surprised by the infrastructure that existed at the University of Warwick for handling a gold open access submission. As part of its open access funding, Forum of Math: Sigma has an author-pays model. Or rather, the author’s institution pays. It took essentially no time at all for Warwick to arrange the payment (about 500 pounds).

This is a not-inconsequential amount of money, but it is much less than the 1500 dollars that PLoS One charges. The comparison with PLoS One is perhaps apt. PLoS is older, and perhaps paved the way for modern gold open access journals like FoM. PLoS was started by a group of established biologists and chemists, including a Nobel prize winner; FoM was started by a group of established mathematicians, including multiple Fields medalists.

I will certainly consider Forum of Mathematics in the future.


The wrong way to compute a sum: addendum

(Image: cellular automata from Rule 106, random initial configuration.)

In my previous note, I looked at an amusing but inefficient way to compute the sum $$ \sum_{n \geq 1} \frac{\varphi(n)}{2^n - 1}$$ using Mellin and inverse Mellin transforms. This was great fun, but the amount of work required was more intense than the more straightforward approach offered immediately by using Lambert series.

However, Adam Harper suggested that there is a nice shortcut that we can use (although coming up with this shortcut requires either a lot of familiarity with Mellin transforms or knowledge of the answer).

In the Lambert series approach, one shows quickly that $$ \sum_{n \geq 1} \frac{\varphi(n)}{2^n - 1} = \sum_{n \geq 1} \frac{n}{2^n},$$ and then evaluates this last sum directly. For the Mellin transform approach, we might ask: do the two functions $$ f(x) = \sum_{n \geq 1} \frac{\varphi(n)}{2^{nx} - 1}$$ and $$ g(x) = \sum_{n \geq 1} \frac{n}{2^{nx}}$$ have the same Mellin transforms? From the previous note, we know that they have the same values at $1$.

We also showed very quickly that $$ \mathcal{M} [f] = \frac{1}{(\log 2)^s} \Gamma(s) \zeta(s-1). $$ The more difficult parts of the previous note arose in the evaluation of the inverse Mellin transform at $x=1$.

Let us compute the Mellin transform of $g$. We find that $$ \begin{align}
\mathcal{M}[g] &= \sum_{n \geq 1} n \int_0^\infty \frac{1}{2^{nx}} x^s \frac{dx}{x} \notag \\
&= \sum_{n \geq 1} n \int_0^\infty \frac{1}{e^{nx \log 2}} x^s \frac{dx}{x} \notag \\
&= \sum_{n \geq 1} \frac{n}{(n \log 2)^s} \int_0^\infty x^s e^{-x} \frac{dx}{x} \notag \\
&= \frac{1}{(\log 2)^s} \zeta(s-1)\Gamma(s). \notag
\end{align}$$ To go from the second line to the third line, we did the change of variables $x \mapsto x/(n \log 2)$, yielding an integral which is precisely the definition of the Gamma function.
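As a quick numerical sanity check (not part of the argument), one can compare a numerical Mellin integral of $g$ against this closed form at a sample point. Here is a minimal sketch assuming mpmath; the sample point $s = 3$ and the precision are arbitrary choices, and $g$ is summed in closed form as $g(x) = 2^{-x}/(1 - 2^{-x})^2$.

from mpmath import mp, quad, gamma, zeta, log, exp, expm1, inf, mpf

mp.dps = 25
s = mpf(3)   # arbitrary sample point with real part > 2

def g(x):
    # g(x) = sum_{n >= 1} n * 2^(-n x) = 2^(-x) / (1 - 2^(-x))^2;
    # expm1(-x log 2) = 2^(-x) - 1 keeps the denominator accurate for tiny x
    q = exp(-x * log(2))                  # 2^(-x)
    return q / expm1(-x * log(2))**2      # (2^(-x) - 1)^2 == (1 - 2^(-x))^2

mellin = quad(lambda x: g(x) * x**(s - 1), [0, inf])   # int_0^infty g(x) x^(s-1) dx
closed = gamma(s) * zeta(s - 1) / log(2)**s

print(mellin, closed)   # both are approximately 9.879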

Thus we see that $$ \mathcal{M}[g] = \frac{1}{(\log 2)^s} \Gamma(s) \zeta(s-1) = \mathcal{M}[f],$$ and so $f(x) = g(x)$. (“Nice” functions with the same “nice” Mellin transform are the same function, exactly as with Fourier transforms.)

This shows that not only is $$ \sum_{n \geq 1} \frac{\varphi(n)}{2^n - 1} = \sum_{n \geq 1} \frac{n}{2^n},$$ but in fact $$ \sum_{n \geq 1} \frac{\varphi(n)}{2^{nx} - 1} = \sum_{n \geq 1} \frac{n}{2^{nx}}$$ for all $x > 1$.
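For what it’s worth, this identity is also easy to check numerically. A minimal sketch, assuming sympy for the totient; the cutoff $N = 200$ and the sample values of $x$ are just convenient choices.

from sympy import totient

def lhs(x, N=200):
    # partial sum of sum phi(n) / (2^{n x} - 1)
    return sum(int(totient(n)) / (2**(n*x) - 1) for n in range(1, N))

def rhs(x, N=200):
    # partial sum of sum n / 2^{n x}
    return sum(n / 2**(n*x) for n in range(1, N))

for x in (1, 1.5, 2, 3):
    print(x, lhs(x), rhs(x))   # the two columns agree; both equal 2.0 when x = 1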

I think that’s sort of slick.


The wrong way to compute a sum

At a recent colloquium at the University of Warwick, the fact that
\begin{equation}\label{question}
\sum_{n \geq 1} \frac{\varphi(n)}{2^n - 1} = 2
\end{equation}
was mentioned in passing, and John Cremona asked: how do you prove that?

It almost fails a heuristic check, as one can quickly check that
\begin{equation}\label{similar}
\sum_ {n \geq 1} \frac{n}{2^n} = 2,
\end{equation}
which is surprisingly similar to \eqref{question}. I wish I knew more examples of pairs with a similar flavor.

[Edit: Note that an addendum to this note has been added here. In it, we see that there is a way to shortcut the “hard part” of the long computation.]

The right way

Shortly afterwards, Adam Harper and Samir Siksek pointed out that this can be determined from Lambert series, and in fact that Hardy and Wright include a similar exercise in their book. This proof is delightful and short.

The idea is that, by expanding the denominator as a geometric series, one has that
\begin{equation}
\sum_{n \geq 1} a(n) \frac{x^n}{1 - x^n} \notag
= \sum_{n \geq 1} a(n) \sum_{m \geq 1} x^{mn}
= \sum_{n \geq 1} \Big( \sum_{d \mid n} a(d) \Big) x^n,
\end{equation}
where the inner sum is a sum over the divisors of $n$. This all converges beautifully for $\lvert x \rvert < 1$.

Applied to \eqref{question}, we find that
\begin{equation}
\sum_{n \geq 1} \frac{\varphi(n)}{2^n - 1} \notag
= \sum_{n \geq 1} \varphi(n) \frac{2^{-n}}{1 - 2^{-n}}
= \sum_{n \geq 1} 2^{-n} \sum_{d \mid n} \varphi(d),
\end{equation}
and as
\begin{equation}
\sum_ {d \mid n} \varphi(d) = n, \notag
\end{equation}
we see that \eqref{question} can be rewritten as \eqref{similar} after all, and thus both evaluate to $2$.
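Both the divisor-sum identity and the final value are easy to confirm by machine. Here is a minimal sketch, assuming sympy; the ranges are arbitrary truncations.

from sympy import totient, divisors

# sum_{d | n} phi(d) = n for (at least) the first few hundred n
assert all(sum(totient(d) for d in divisors(n)) == n for n in range(1, 500))

# partial sums of sum phi(n) / (2^n - 1) approach 2
partial = sum(int(totient(n)) / (2**n - 1) for n in range(1, 80))
print(partial)   # approximately 2.0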

The Lambert series derivation is nice, and it uses a series that I hadn’t come across before. But that’s not what this short note is about. This note is about evaluating \eqref{question} in a different way, arguably the wrong way. But it’s a wrong way that works out in a nice way that at least one person finds appealing.

The wrong way

We will use Mellin inversion; this is essentially Fourier inversion, but after a change of coordinates.

Let $f$ denote the function
\begin{equation}
f(x) = \frac{1}{2^x - 1}. \notag
\end{equation}
Denote by $f^*$ the Mellin transform of $f$,
\begin{equation}
f^*(s) := \mathcal{M} [f(x)] (s) := \int_0^\infty f(x) x^s \frac{dx}{x}
= \frac{1}{(\log 2)^s} \Gamma(s)\zeta(s),\notag
\end{equation}
where $\Gamma(s)$ and $\zeta(s)$ are the Gamma function and the Riemann zeta function. (To see this, expand $f(x) = \sum_{m \geq 1} 2^{-mx}$ and integrate term by term.)

For a general nice function $g(x)$, its Mellin transform satisfies
\begin{equation}
\mathcal{M}[g(nx)] (s)
= \int_0^\infty g(nx) x^s \frac{dx}{x}
= \frac{1}{n^s} \int_0^\infty g(x) x^s \frac{dx}{x}
= \frac{1}{n^s} g^*(s).\notag
\end{equation}
Further, the Mellin transform is linear. Thus
\begin{equation}\label{mellinbase}
\mathcal{M}\Big[\sum_{n \geq 1} \varphi(n) f(nx)\Big] (s)
= \sum_{n \geq 1} \frac{\varphi(n)}{n^s} f^*(s)
= \sum_{n \geq 1} \frac{\varphi(n)}{n^s} \frac{\Gamma(s) \zeta(s)}{(\log 2)^s}.
\end{equation}

The Euler phi function $\varphi(n)$ is multiplicative and nice, and its Dirichlet series can be rewritten as
\begin{equation}
\sum_{n \geq 1} \frac{\varphi(n)}{n^s} \notag
= \frac{\zeta(s-1)}{\zeta(s)}.
\end{equation}
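This Dirichlet series identity is also easy to sanity-check numerically at a sample point. A minimal sketch, assuming sympy and mpmath; the choice $s = 3$ and the truncation at $20000$ terms are arbitrary, so the agreement is only to a few decimal places.

from sympy import totient
from mpmath import zeta

s = 3
partial = sum(int(totient(n)) / n**s for n in range(1, 20000))
print(partial, zeta(s - 1) / zeta(s))   # both are approximately 1.368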
Thus the Mellin transform in \eqref{mellinbase} can be written as
\begin{equation}
\frac{1}{(\log 2)^s} \Gamma(s) \zeta(s-1). \notag
\end{equation}

By the fundamental theorem of Mellin inversion (which is analogous to Fourier inversion, but again in different coordinates), the inverse Mellin transform will return the original function. The inverse Mellin transform of a function $h(s)$ is defined to be
\begin{equation}
\mathcal{M}^{-1}[h(s)] (x) \notag
:=
\frac{1}{2\pi i} \int_{c - i \infty}^{c + i\infty} x^{-s} h(s) ds,
\end{equation}
where $c$ is taken so that the integral converges beautifully, and the integral is over the vertical line with real part $c$. I’ll write $(c)$ as a shorthand for the limits of integration. Thus
\begin{equation}\label{mellininverse}
\sum_{n \geq 1} \frac{\varphi(n)}{2^{nx} - 1}
= \frac{1}{2\pi i} \int_{(3)} \frac{1}{(\log 2)^s}
\Gamma(s) \zeta(s-1) x^{-s} ds.
\end{equation}

We can now describe the end goal: evaluate \eqref{mellininverse} at $x=1$, which will recover the value of the original sum in \eqref{question}.
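Before doing this analytically, here is a quick numerical sanity check (not part of the original argument): integrating \eqref{mellininverse} at $x = 1$ directly along the line $\mathrm{Re}(s) = 3$ does give a number very close to $2$. A sketch assuming mpmath; the truncation at $|\mathrm{Im}(s)| = 40$ is harmless because of the rapid decay of $\Gamma(s)$.

from mpmath import mp, quad, gamma, zeta, log, pi, mpc

mp.dps = 25

def integrand(t):
    # integrand along s = 3 + it; at x = 1 the factor x^{-s} is just 1
    s = mpc(3, t)
    return gamma(s) * zeta(s - 1) * log(2)**(-s)

# with s = 3 + it we have ds = i dt, so (1/2 pi i) int ... ds = (1/2 pi) int ... dt
value = quad(integrand, [-40, 40]) / (2 * pi)
print(value.real)   # approximately 2.0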

How can we hope to do that? The idea is to shift the line of integration arbitrarily far to the left, pick up the infinitely many residues guaranteed by Cauchy’s residue theorem, and to recognize the infinite sum as a classical series.

The integrand has poles at $s = 2, 0, -2, -4, \ldots$, coming from the zeta function ($s = 2$) and the Gamma function (all the others). Note that there aren’t poles at the negative odd integers, since $\zeta(s-1)$ vanishes there (the zeta function has zeroes at the negative even integers).

Recall that $\zeta(s)$ has residue $1$ at $s = 1$ and that $\Gamma(s)$ has residue $(-1)^n/n!$ at $s = -n$. Then shifting the line of integration and picking up all the residues reveals that
\begin{equation}
\sum_{n \geq 1} \frac{\varphi(n)}{2^{n} - 1} \notag
= \frac{1}{\log^2 2} + \zeta(-1) + \frac{\zeta(-3)}{2!} \log^2 2 +
\frac{\zeta(-5)}{4!} \log^4 2 + \cdots
\end{equation}
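Numerically, this series of residues converges to $2$ very quickly. A small sketch with mpmath, again just as a sanity check; the number of terms is an arbitrary cutoff.

from mpmath import mp, zeta, log, factorial

mp.dps = 25
t = log(2)

# 1/log^2 2 + zeta(-1) + sum_{j >= 1} zeta(-2j - 1) (log 2)^{2j} / (2j)!
total = 1 / t**2 + zeta(-1)
for j in range(1, 15):
    total += zeta(-2*j - 1) * t**(2*j) / factorial(2*j)
print(total)   # approximately 2.0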

The zeta function at negative integers has a very well-known relation to the Bernoulli numbers,
\begin{equation}\label{zeta_bern}
\zeta(-n) = - \frac{B_{n+1}}{n+1},
\end{equation}
where Bernoulli numbers are the coefficients in the expansion
\begin{equation}\label{bern_gen}
\frac{t}{1 - e^{-t}} = \sum_{m \geq 0} B_m \frac{t^m}{m!}.
\end{equation}
Many general proofs for the values of $\zeta(2n)$ use this relation and the functional equation, as well as a computation of the Bernoulli numbers themselves. Another important aspect of Bernoulli numbers that is apparent through \eqref{zeta_bern} is that $B_{2n+1} = 0$ for $n \geq 1$, lining up with the trivial zeroes of the zeta function.

Translating the zeta values into Bernoulli numbers, we find that
\eqref{question} is equal to
\begin{align}
&\frac{1}{\log^2 2} - \frac{B_2}{2} - \frac{B_4}{2! \cdot 4} \log^2 2 -
\frac{B_6}{4! \cdot 6} \log^4 2 - \frac{B_8}{6! \cdot 8} \log^6 2 - \cdots \notag \\
&=
-\sum_{m \geq 0} (m-1) B_m \frac{(\log 2)^{m-2}}{m!}. \label{recog}
\end{align}
This last sum is excellent, and can be recognized.

For a general exponential generating series
\begin{equation}
F(t) = \sum_{m \geq 0} a(m) \frac{t^m}{m!},\notag
\end{equation}
we see that
\begin{equation}
\frac{d}{dt} \Big( \frac{1}{t} F(t) \Big) \notag
= \sum_{m \geq 0} (m-1) a(m) \frac{t^{m-2}}{m!}.
\end{equation}
Applying this to the series defining the Bernoulli numbers from \eqref{bern_gen}, we find that
\begin{equation}
\frac{d}{dt} \Big( \frac{1}{t} \cdot \frac{t}{1 - e^{-t}} \Big) \notag
= - \frac{e^{-t}}{(1 - e^{-t})^2},
\end{equation}
and also that
\begin{equation}
\frac{d}{dt} \Big( \frac{1}{t} \cdot \frac{t}{1 - e^{-t}} \Big) \notag
= \sum_{m \geq 0} (m-1) B_m \frac{t^{m-2}}{m!}.
\end{equation}
Up to the minus sign in front, this is exactly the sum that appears in \eqref{recog}, with $t = \log 2$.

Putting this together, we find that
\begin{equation}
-\sum_{m \geq 0} (m-1) B_m \frac{(\log 2)^{m-2}}{m!} \notag
= \frac{e^{-\log 2}}{(1 - e^{-\log 2})^2}
= \frac{1/2}{(1/2)^2} = 2.
\end{equation}
Thus we find that \eqref{question} really is equal to $2$, as we had sought to show.
