Category Archives: Math.NT

Slides from a talk on Half Integral Weight Dirichlet Series

On Thursday, 18 March, I gave a talk on half-integral weight Dirichlet series at the Ole Miss number theory seminar.

This talk is a description of ongoing explicit computational experimentation with Mehmet Kiral, Tom Hulse, and Li-Mei Lim on various aspects of half-integral weight modular forms and their Dirichlet series.

These Dirichlet series behave like typical beautiful automorphic L-functions in many ways, but are very different in other ways.

The first third of the talk is largely about the “typical” story. The general definitions are abstractions designed around the objects that number theorists have been playing with, and we also briefly touch on some of these examples to have an image in mind.

The second third is mostly about how half-integral weight Dirichlet series aren’t quite as well-behaved as L-functions associated to GL(2) automorphic forms, but are sufficiently well-behaved to be comprehensible. Unlike the case of a full-integral weight modular form, there isn’t a canonical choice of “nice” forms to study, but we identify a particular set of forms with symmetric functional equations to study. There are several small details that could be considered here, and I largely ignore them for this talk. This is something that I hope to return to in the future.

In the final third of the talk, we examine the behavior and zeros of a handful of half-integral weight Dirichlet series. There are plots of zeros, including a plot of approximately the first 150k zeros of one particular form. These are also interesting, and I intend to investigate and describe them further on this site later.

The slides for this talk are available here.

Posted in Math.NT, Mathematics | Tagged , | Leave a comment

A balancing act in “Uniform bounds for lattice point counting”

I was recently examining a technical hurdle in my project on “Uniform bounds for lattice point counting and partial sums of zeta functions” with Takashi Taniguchi and Frank Thorne. There is a version on the arXiv, but it currently has a mistake in its handling of bounds for small $X$.

In this note, I describe an aspect of this paper that I found surprising. In fact, I’ve found it continually surprising, as I’ve reproven it to myself three times now, I think. By writing this here and in my note system, I hope to perhaps remember this better.

Landau’s Method

In this paper, we revisit an application of “Landau’s Method” to estimate partial sums of coefficients of Dirichlet series. We model this paper on an earlier application by Chandrasekharan and Narasimhan, except that we explicitly track the dependence on the several implicit constants and we prove these results uniformly for all partial sums, as opposed to only sufficiently large partial sums.

The only structure is that we have a Dirichlet series $\phi(s)$, some Gamma factors $\Delta(s)$, and a functional equation of the shape $$ \phi(s) \Delta(s) = \psi(s) \Delta(1-s). $$ This is relatively structureless, and correspondingly our attack is very general. We use some smoothed approximation to the sum of coefficients, shift lines of integration to pick up polar main terms, apply the functional equation and change variables to work with the dual, and then get some collection of error terms and error integrals.

It happens that it’s much easier to work with a $k$-Riesz smoothed approximation. That is, if $$
\phi(s) = \sum_{n \geq 1} \frac{a(n)}{\lambda_n^s}
$$
is our Dirichlet series, and we are interested in the partial sums $$
A_0(X) = \sum_{\lambda_n \leq X} a(n),
$$
then it happens to be easier to work with the smoothed approximations $$
A_k(X) = \frac{1}{\Gamma(k+1)}\sum_{\lambda_n \leq X} a(n) (X - \lambda_n)^k,
$$
and to somehow combine several of these smoothed sums together.
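As a toy illustration (my own, not from the paper), take $a(n) = 1$ and $\lambda_n = n$, so that $\phi(s) = \zeta(s)$; the $k$-Riesz smoothed sums can then be computed directly from the definition:

```python
from math import gamma

def riesz_sum(a, lam, X, k):
    """k-Riesz smoothed partial sum:
    A_k(X) = (1/Gamma(k+1)) * sum over lam_n <= X of a(n) (X - lam_n)^k."""
    return sum(an * (X - ln)**k for an, ln in zip(a, lam) if ln <= X) / gamma(k + 1)

# Toy Dirichlet series: a(n) = 1 and lambda_n = n, i.e. phi(s) = zeta(s)
N = 200
a = [1] * N
lam = list(range(1, N + 1))

# k = 0 recovers the sharp count A_0(X); larger k smooths the jumps in X
print(riesz_sum(a, lam, 10.5, 0))   # 10.0
print(riesz_sum(a, lam, 10.5, 1))   # 50.0, i.e. sum of (10.5 - n) over n <= 10
```

Note that $A_0(X)$ jumps at each $\lambda_n$ while $A_1(X)$ is continuous in $X$, which is one face of the improved behavior as $k$ grows.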

This smoothed sum is recognizable as $$
A_k(X) =
\frac{1}{2\pi i}\int_{c - i\infty}^{c + i\infty} \phi(s)
\frac{\Gamma(s)}{\Gamma(s + k + 1)} X^{s + k}ds
$$
for $c$ somewhere in the half-plane of convergence of the Dirichlet series. As $k$ gets large, these integrals become better behaved. In application, one takes $k$ sufficiently large to guarantee desired convergence properties.

The process of taking several of these smoothed approximations for large $k$, studying them through basic functional equation methods, and combinatorially combining these smoothed approximations via finite differencing to get good estimates for the sharp sum $A_0(X)$ is roughly what I think of as “Landau’s Method”.

Application and shape of the error

In our paper, as we apply Landau’s method, it becomes necessary to understand certain bounds coming from the dual Dirichlet series $$
\psi(s) = \sum_{n \geq 1} \frac{b(n)}{\mu_n^s}.
$$
Specifically, it works out that the (combinatorially finite differenced) difference between the $k$-smoothed sum $A_k(X)$ and its $k$-smoothed main term $S_k(X)$ can be written as $$
\Delta_y^k [A_k(X) - S_k(X)] = \sum_{n \geq 1}
\frac{b(n)}{\mu_n^{\delta + k}} \Delta_y^k I_k(\mu_n X),\tag{1}
$$
where $\Delta_y^k$ is a finite differencing operator that we should think of as a sum of several shifts of its input function.

More precisely, $\Delta_y F(X) := F(X + y) - F(X)$, and iterating gives $$
\Delta_y^k F(X) = \sum_{j = 0}^k (-1)^{k - j} {k \choose j} F(X + jy).
$$
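As a quick sanity check (mine, not from the paper), the binomial-sum formula agrees with literally iterating $\Delta_y$, and $\Delta_y^k$ applied to $X^k$ produces the constant $k!\,y^k$:

```python
from math import comb

def diff_k(F, y, k, X):
    """k-th forward difference via the binomial formula:
    Delta_y^k F(X) = sum_j (-1)^(k-j) C(k, j) F(X + j*y)."""
    return sum((-1)**(k - j) * comb(k, j) * F(X + j * y) for j in range(k + 1))

F = lambda X: X**3
y, k, X = 0.5, 3, 2.0

# Iterate the first difference Delta_y three times by hand
G = F
for _ in range(k):
    G = (lambda H: (lambda t: H(t + y) - H(t)))(G)

print(G(X), diff_k(F, y, k, X))   # both equal 3! * y^3 = 0.75
```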
The $I_k(\cdot)$ term on the right of $(1)$ is an inverse Mellin transform $$
I_k(t) = \frac{1}{2 \pi i} \int_{c - i\infty}^{c + i\infty}
\frac{\Gamma(\delta - s)}{\Gamma(k + 1 + \delta - s)}
\frac{\Delta(s)}{\Delta(\delta - s)} t^{\delta + k - s} ds.
$$
Good control for this inverse Mellin transform yields good control of the error for the overall approximation. Via the method of finite differencing, there are two basic choices: either bound $I_k(t)$ directly, or understand bounds for $(\mu_n y)^k I_k^{(k)}(t)$ for $t \approx \mu_n X$. Here, $I_k^{(k)}(t)$ means the $k$th derivative of $I_k(t)$.

Large input errors

In the classical application (as in the paper of CN), one mostly worries about these asymptotics as $t \to \infty$. In this region, $I_k(t)$ can be well-approximated by a $J$-Bessel function, which is sufficiently well understood for large argument to give good bounds. Similarly, $I_k^{(k)}(t)$ can be contour-shifted in a way that still ends up being well-approximated by $J$-Bessel functions.

The resulting bounds end up having the shape that $\Delta_y^k I_k(\mu_n X)$ is bounded by either

  • $(\mu_n X)^{\alpha + k(1 - \frac{1}{2A})}$, where $A$ is a fixed parameter that isn’t worth describing fully, and $\alpha$ is a bound coming from the direct bound of $I_k(t)$, or
  • $(\mu_n y)^k (\mu_n X)^\beta$, where $\beta$ is a bound coming from bounding $I_k^{(k)}(t)$.

In both, there is a certain $k$-dependence that comes from the $k$-th Riesz smoothing factors, either directly (from $(\mu_n y)^k$), or via its corresponding inverse Mellin transform (in the bound from $I_k(t)$). But these are the only aspects that depend on $k$.

At this point in the classical argument, one determines when one bound is better than the other, and this happens to be something that can be done exactly, and (surprisingly) independently of $k$. Using this pair of bounds and examining what comes out the other side gives the original result.

Small input errors

In our application, we also worry about the asymptotics as $t \to 0$. While it may still be true that $I_k$ can be approximated by a $J$-Bessel function, the “well-known” asymptotics for the $J$-Bessel function behave substantially worse for small argument. Thus different methods are necessary.

It turns out that $I_k$ can be approximated in a relatively trivial way for $t \leq 1$, so the only remaining hurdle is $I_k^{(k)}(t)$ as $t \to 0$.

We’ve proved a variety of different bounds that hold in slightly different circumstances. And for each sort of bound, the next steps would be the same as before: determine when each bound is better, bound by absolute values, sum together, and then choose the various parameters to best shape the final result.

But unlike before, the boundary between the regions where $I_k$ is best bounded directly or bounded via $I_k^{(k)}$ depends on $k$. Aside from choosing $k$ sufficiently large for convergence properties (which relate to the locations of poles and growth properties of the Dirichlet series and gamma factors), any sufficiently large $k$ would suffice.

Limiting behavior gives a heuristic region

After I step away from this paper and argument for a while and come back, I wonder about the right way to choose the balancing error. That is, I rework when to use bounds coming from studying $I_k(t)$ directly vs bounds coming from studying $I_k^{(k)}(t)$.

But it turns out that there is always a reasonable heuristic choice. Further, this heuristic gives the same choice of balancing as in the case when $t \to \infty$ (although this is not the source of the heuristic).

Carrying these bounds through still gives bounds for $\Delta_y^k I_k(\mu_n X)$ of the shape

  • $(\mu_n X)^{\alpha + k(1 - \frac{1}{2A})}$, where $A$ is a fixed parameter that isn’t worth describing fully, and $\alpha$ is a bound coming from the direct bound of $I_k(t)$, or
  • $(\mu_n y)^k (\mu_n X)^\beta$, where $\beta$ is a bound coming from bounding $I_k^{(k)}(t)$.

The actual bounds for $\alpha$ and $\beta$ will differ between the case of small $\mu_n X$ and large $\mu_n X$ ($J$-Bessel asymptotics for large, different contour shifting analysis for small), but in both cases it turns out that $\alpha$ and $\beta$ are independent of $k$.

This is relatively easy to see when bounding $I_k^{(k)}(t)$, as repeatedly differentiating under the integral shows essentially that $$
I_k^{(k)}(t) =
\frac{1}{2\pi i}
\int \frac{\Delta(s)}{(\delta - s)\Delta(\delta - s)}
t^{\delta - s} ds.
$$
(I’ll note that the contour does vary with $k$ in a certain way that doesn’t affect the shape of the result for $t \to 0$).

When balancing the error terms $(\mu_n X)^{\alpha + k(1 - \frac{1}{2A})}$ and $(\mu_n y)^k (\mu_n X)^\beta$, the heuristic comes from taking arbitrarily large $k$. As $k \to \infty$, the point where the two error terms balance is independent of $\alpha$ and $\beta$.
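Concretely (this is my own bookkeeping of the balancing), setting the two bounds equal and solving for $\mu_n y$ shows why: $$
(\mu_n y)^k (\mu_n X)^{\beta} = (\mu_n X)^{\alpha + k(1 - \frac{1}{2A})}
\iff
\mu_n y = (\mu_n X)^{1 - \frac{1}{2A} + \frac{\alpha - \beta}{k}},
$$
and the correction $(\alpha - \beta)/k$ in the exponent vanishes as $k \to \infty$, leaving a balancing point that doesn’t see $\alpha$ or $\beta$ at all.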

This reasoning applies to the case when $\mu_n X \to \infty$ as well, and gives the same point. Coincidentally, the actual $\alpha$ and $\beta$ values we proved for $\mu_n X \to \infty$ perfectly cancel in practice, so this limiting argument is not necessary — but it does still apply!

I suppose it might be possible to add another parameter to tune in the final result — a parameter measuring deviation from the heuristic, that can be refined for any particular error bound in a region of particular interest.

But we haven’t done that.

In fact, we were slightly lossy in how we bounded $I_k^{(k)}(t)$ as $t \to 0$, and (for complicated reasons that I’ll probably also forget and reprove to myself later) the heuristic choice assuming $k \sim \infty$ and our slightly lossy bound introduce the same order of imprecision to the final result.

More coming soon

We’re updating our preprint and will have that up soon. But as I’ve been thinking about this a lot recently, I realize there are a few other things I should note down. I intend to write more on this in the near future.

Posted in Math.NT, Mathematics | Tagged | Leave a comment

Slides from a talk at AIM

I’m currently at an AIM workshop on Arithmetic Statistics, Discrete Restriction, and Fourier Analysis. This morning (AIM time)/afternoon (USEast time), I’ll be giving a talk on Lattice points and sums of Fourier Coefficients of modular forms.

The theme of this talk is embodied in the statement that several lattice counting problems like the Gauss circle problem are essentially the same as very modular-form-heavy problems, sometimes strikingly similar and sometimes subtly different.

In this talk, I describe several recent adventures, successes and travails, in my studies of problems related to the Gauss circle problem and the task of producing better bounds for the sum of the first several coefficients of holomorphic cuspforms.

Here are the slides for my talk.

I’ll note that various parts of this talk have appeared in several previous talks of mine, but since it’s the pandemic era this is the first time much of this has appeared in slides.

Posted in Expository, Math.NT, Mathematics | Leave a comment

Slides from a talk on computing Maass forms

Yesterday, I gave a talk on various aspects of computing Maass cuspforms at Rutgers.

Here are the slides for my talk.

Unlike most other talks that I’ve given, this doesn’t center on past results that I’ve proved. Instead, this is a description of an ongoing project to figure out how to rigorously compute many Maass forms, implement this efficiently in code, and add this data to the LMFDB.

Posted in LMFDB, Math.NT, Mathematics | Tagged , , | Leave a comment

Talk on computing Maass forms

In a remarkable coincidence, I’m giving two talks on Maass forms today (after not giving any talks for 3 months). One of these was a chalk talk (or rather, a camera-on-pen-on-paper talk). My other talk can be found at https://davidlowryduda.com/static/Talks/ComputingMaass20/.

In this talk, I briefly describe how one goes about computing Maass forms for congruence subgroups of $\mathrm{SL}(2)$. This is a short and pointed exposition of ideas mostly found in papers of Hejhal and Fredrik Strömberg’s PhD thesis. More precise references are included at the end of the talk.

This amounts to a description of the idea of Hejhal’s algorithm on a congruence subgroup.

Side notes on revealjs

I decided to experiment a bit with this talk. This is not a TeX-Beamer talk (as is most common for math) — instead it’s a revealjs talk. I haven’t written a revealjs talk before, but it was surprisingly easy.

It took me more time than writing a beamer talk, most likely because I don’t have a good workflow with reveal and there were several times when I wanted to use nontrivial javascript capabilities. In particular, I wanted to have a few elements transition from one slide to the next (using the automatic transition capabilities).

At first, I had thought I would write in an intermediate markup format and then translate this into revealjs, but I quickly decided against that plan. The composition stage was a bit more annoying.

But I think the result is more appealing than a beamer talk, and it’s sufficiently interesting that I’ll revisit it later.

Posted in Expository, Math.NT, Mathematics | Tagged , , | Leave a comment

Notes from a talk at Dartmouth on the Fibonacci zeta function

I recently gave a talk “at Dartmouth”1. The focus of the talk was the (odd-indexed) Fibonacci zeta function:
$$ \sum_{n \geq 1} \frac{1}{F(2n-1)^s},$$
where $F(n)$ is the $n$th Fibonacci number. The theme is that the Fibonacci zeta function can be recognized as coming from an inner product of automorphic forms, and the continuation of the zeta function can be understood in terms of the spectral expansion of the associated automorphic forms.
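As a numerical aside (my own sketch, not part of the talk), the series converges very quickly in its region of absolute convergence, since the Fibonacci numbers grow geometrically:

```python
def fib(n):
    """n-th Fibonacci number, with F(1) = F(2) = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def odd_fib_zeta(s, terms=40):
    """Partial sum of sum_{n >= 1} 1 / F(2n - 1)^s."""
    return sum(1.0 / fib(2 * n - 1)**s for n in range(1, terms + 1))

# Since F(2n-1) grows like phi^(2n-1), about 40 terms already give
# essentially full double precision for s in the region of convergence
print(odd_fib_zeta(2.0))
```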

This is a talk from ongoing research. I do not yet understand “what’s really going on”. But within the talk I describe a few different generalizations; firstly, there is a generalization to other zeta functions that can be viewed as traces of units on quadratic number fields, and secondly there is a generalization to quadratic forms recognizing solutions to Pell’s equation.

I intend to describe additional ideas from this talk in the coming months, as I figure out how pieces fit together. But for now, here are the slides.

Posted in Expository, Math.NT, Mathematics | Tagged , , , , | Leave a comment

Pictures of equidistribution – the line

In my previous note, we considered equidistribution of rational points on the circle $X^2 + Y^2 = 2$. This is but one of a large family of equidistribution results that I’m not particularly familiar with.

This note is the first in a series of notes dedicated to exploring this type of equidistribution visually. In this note, we will investigate a simpler case — rational points on the line.


Posted in Expository, Math.AG, Math.NT, Mathematics | Tagged , , , , | Leave a comment

Points on X^2 + Y^2 = 2 equidistribute with respect to height

When you order rational points on the circle $X^2 + Y^2 = 2$ by height, these points equidistribute.

Stated differently, suppose that $I$ is an arc on the circle $X^2 + Y^2 = 2$. Then asymptotically, the number of rational points on the arc $I$ with height bounded by a number $H$ is equal to what you would expect if $\lvert I\rvert /(2\sqrt{2}\pi)$ of all points with height up to $H$ were on this arc. Here, $\lvert I\rvert /(2\sqrt{2}\pi)$ is the ratio of the arclength of the arc $I$ to the total circumference of the circle.

This only makes sense if we define the height of a rational point on the circle. Given a point $(a/c, b/c)$ (written in least terms) on the circle, we define the height of this point to be $c$.

In forthcoming work with my frequent collaborators Chan Ieong Kuan, Thomas Hulse, and Alexander Walker, we count three term arithmetic progressions of squares. If $C^2 - B^2 = B^2 - A^2$, then clearly $A^2 + C^2 = 2B^2$, and thus a 3AP of squares $(A^2, B^2, C^2)$ corresponds to the rational point $(A/B, C/B)$ on the circle $X^2 + Y^2 = 2$. We compare one of our results to what you would expect from equidistribution. From general principles, we expected such equidistribution to be true. But I wasn’t sure how to prove it.
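To make this correspondence concrete, here is a small script (a plain-Python sketch of mine, not our actual counting code) that enumerates primitive 3APs of squares and lists the corresponding rational points with their heights:

```python
from math import gcd, isqrt

# Enumerate primitive triples (A, C, B) with A^2 + C^2 = 2 B^2, i.e. 3APs of
# squares A^2, B^2, C^2, up to height B <= limit. Each gives the rational
# point (A/B, C/B) on X^2 + Y^2 = 2, with height B. We take A <= B; points
# with A > B come from swapping A and C.
limit = 50
points = []
for B in range(1, limit + 1):
    for A in range(1, B + 1):
        C2 = 2 * B * B - A * A
        C = isqrt(C2)
        if C * C == C2 and gcd(gcd(A, B), C) == 1:
            points.append((A, C, B))

print(points[:4])   # [(1, 1, 1), (1, 7, 5), (7, 17, 13), (7, 23, 17)]
```

Counting how many of these points land in a fixed arc, against the arclength proportion, is exactly the equidistribution statement above (for much larger height limits, of course).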

With helpful assistance from Noam Elkies, Emmanuel Peyre, and John Voight (who each immediately knew how to prove this), I learned how to prove this fact.

The rest of this note contains this proof.


Posted in Expository, Math.NT, Mathematics, sage, sagemath | Tagged , | 1 Comment

Notes behind a talk: visualizing modular forms

Today, I’ll be at Bowdoin College giving a talk on visualizing modular forms. This is a talk about the actual process and choices involved in illustrating a modular form; it’s not about what little lies one might hold in their head in order to form some mental image of a modular form.1

This is a talk heavily inspired by the ICERM semester program on Illustrating Mathematics (currently wrapping up). In particular, I draw on2 conversations with Frank Farris (about using color to highlight desired features), Elias Wegert (about using logarithmically scaling contours), Ed Harriss (about the choice of colorscheme), and Brendan Hassett (about overall design choices).

There are very many pictures in the talk!

Here are the slides for the talk.

I wrote a few different complex-plotting routines for this project. At their core, they are based on sage’s complex_plot. There are two major variants that I use.

The first (currently called “ccomplex_plot”, which is not a good name) overwrites how sage handles lightness in complex_plot in order to produce “contours” at spots where the magnitude is a power of two. These contours are actually a sudden jump in brightness.

The second (currently called “raw_complex_plot”, also not a good name) is even less formal. It vectorizes the computation and produces an object containing the magnitude and argument information for each pixel to be drawn. It then uses numpy and matplotlib to convert these magnitudes and phases into RGB colors according to a matplotlib-compatible colormap.
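For a flavor of this second approach, here is a self-contained numpy sketch (with a crude hue ramp standing in for a matplotlib colormap; this is not my actual routine) that colors each pixel by phase and adds a brightness jump at each power-of-two magnitude:

```python
import numpy as np

def phase_magnitude_to_rgb(z):
    """Sketch: color each pixel by arg(z), with a sudden brightness jump
    whenever |z| crosses a power of two (logarithmically spaced contours)."""
    hue = (np.angle(z) / (2 * np.pi)) % 1.0        # phase mapped into [0, 1)
    # simple hue -> RGB ramp (a crude stand-in for a matplotlib colormap)
    r = np.clip(np.abs(hue * 6 - 3) - 1, 0, 1)
    g = np.clip(2 - np.abs(hue * 6 - 2), 0, 1)
    b = np.clip(2 - np.abs(hue * 6 - 4), 0, 1)
    rgb = np.stack([r, g, b], axis=-1)
    # brightness grows with the fractional part of log2|z|, jumping at 2^k
    frac = np.log2(np.abs(z) + 1e-12) % 1.0
    return rgb * (0.6 + 0.4 * frac)[..., None]

x = np.linspace(-2, 2, 100)
z = x[None, :] + 1j * x[:, None]
img = phase_magnitude_to_rgb(z**2)   # an RGB array, ready for imshow
```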

I am happy to send either of these pieces of code to anyone who wants to see them, but they are very much written for my own use at the moment. I intend to improve them for general use later, after I’ve experimented further.

In addition, I generated all the images for this talk in a single sagemath jupyter notebook (with the two .spyx cython dependencies I allude to above). This is also available here. (Note that using a service like nbviewer or nbconvert to view or convert it to html might be a reasonable idea).

As a final note, I’ll add that I mistyped several times in the preparation of the images for this talk. Included below are a few of the interesting-looking mistakes. The first two resulted from incorrectly applied conformal mappings, while the third came from incorrectly applied color correction.

Posted in Expository, Math.NT, Mathematics, sage, sagemath | Tagged , , , | 2 Comments

Non-real poles and irregularity of distribution I

$\DeclareMathOperator{\SL}{SL}$ $\DeclareMathOperator{\MT}{MT}$After the positive feedback from the Maine-Quebec Number Theory conference, I have taken some time to write (and slightly strengthen) these results.

We study the general theory of Dirichlet series $D(s) = \sum_{n \geq 1} a(n) n^{-s}$ and the associated summatory function of the coefficients, $A(x) = \sum_{n \leq x}’ a(n)$ (where the prime over the summation means the last term is to be multiplied by $1/2$ if $x$ is an integer). For convenience, we will suppose that the coefficients $a(n)$ are real, that not all $a(n)$ are zero, that each Dirichlet series converges in some half-plane, and that each Dirichlet series has meromorphic continuation to $\mathbb{C}$. Perron’s formula (or more generally, the forward and inverse Mellin transforms) show that $D(s)$ and $A(x)$ are duals and satisfy \begin{equation}\label{eq:basic_duality} \frac{D(s)}{s} = \int_1^\infty \frac{A(x)}{x^{s+1}} dx, \quad A(x) = \frac{1}{2 \pi i} \int_{\sigma – i \infty}^{\sigma + i \infty} \frac{D(s)}{s} x^s ds \end{equation} for an appropriate choice of $\sigma$.

Many results in analytic number theory take the form of showing that $A(x) = \MT(x) + E(x)$ for a “Main Term” $\MT(x)$ and an “Error Term” $E(x)$. Roughly speaking, the terms in the main term $\MT(x)$ correspond to poles from $D(s)$, while $E(x)$ is hard to understand. Upper bounds for the error term give bounds for how much $A(x)$ can deviate from the expected size, and thus describe the regularity in the distribution of the coefficients $\{a(n)\}$. In this article, we investigate lower bounds for the error term, corresponding to irregularity in the distribution of the coefficients.

To get the best understanding of the error terms, it is often necessary to work with smoothed sums $A_v(x) = \sum_{n \geq 1} a(n) v(n/x)$ for a weight function $v(\cdot)$. In this article, we consider nice weight functions, i.e.\ weight functions with good behavior and whose Mellin transforms have good behavior. For almost all applications, it suffices to consider weight functions $v(x)$ that are piecewise smooth on the positive real numbers, and which take values halfway between jump discontinuities.

For a weight function $v(\cdot)$, denote its Mellin transform by \begin{equation} V(s) = \int_0^\infty v(x)x^{s} \frac{dx}{x}. \end{equation} Then we can study the more general dual family \begin{equation}\label{eq:general_duality} D(s) V(s) = \int_1^\infty \frac{A_v(x)}{x^{s+1}} dx, \quad A_v(x) = \frac{1}{2 \pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} D(s) V(s) x^s ds. \end{equation}
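A tiny numerical check (mine, for orientation) of the principle that poles of $D(s)V(s)$ produce the main terms: take $a(n) = 1$, so $D(s) = \zeta(s)$, and $v(x) = e^{-x}$, so $V(s) = \Gamma(s)$. The poles of $\zeta(s)\Gamma(s)x^s$ at $s = 1, 0, -1$ contribute $x$, $-\tfrac{1}{2}$, and $\tfrac{1}{12x}$ respectively:

```python
import math

# With a(n) = 1 and v(x) = e^{-x}, the smoothed sum is A_v(x) = sum e^{-n/x}.
# Poles of zeta(s)Gamma(s)x^s at s = 1, 0, -1 predict the main terms
# x - 1/2 + 1/(12x); the remainder decays like x^{-3} (next pole at s = -3,
# since zeta(-2) = 0 kills the s = -2 term).
for x in [5.0, 50.0]:
    A_v = sum(math.exp(-n / x) for n in range(1, 10000))
    prediction = x - 0.5 + 1 / (12 * x)
    print(x, A_v - prediction)
```

The printed differences shrink rapidly with $x$, exactly as the pole expansion predicts.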

We prove two results governing the irregularity of distribution of weighted sums. Firstly, we prove that a non-real pole of $D(s)V(s)$ guarantees an oscillatory error term for $A_v(x)$.

Theorem 1

Suppose $D(s)V(s)$ has a pole at $s = \sigma_0 + it_0$ with $t_0 \neq 0$ of order $r$. Let $\MT(x)$ be the sum of the residues of $D(s)V(s)x^s$ at all real poles $s = \sigma$ with $\sigma \geq \sigma_0$. Then \begin{equation} \sum_{n \geq 1} a(n) v(\tfrac{n}{x}) - \MT(x) = \Omega_\pm\big( x^{\sigma_0} \log^{r-1} x\big). \end{equation}


Here and below, we use the notation $f(x) = \Omega_+(g(x))$ to mean that there is a constant $k > 0$ such that $\limsup f(x)/\lvert g(x) \rvert > k$, and $f(x) = \Omega_-(g(x))$ to mean that $\liminf f(x)/\lvert g(x) \rvert < -k$. When both are true, we write $f(x) = \Omega_\pm(g(x))$. This means that $f(x)$ is at least as positive as $\lvert g(x) \rvert$ and at least as negative as $-\lvert g(x) \rvert$ infinitely often.

Theorem 2

Suppose $D(s)V(s)$ has at least one non-real pole, and that the supremum of the real parts of the non-real poles of $D(s)V(s)$ is $\sigma_0$. Let $\MT(x)$ be the sum of the residues of $D(s)V(s)x^s$ at all real poles $s = \sigma$ with $\sigma \geq \sigma_0$. Then for any $\epsilon > 0$, \begin{equation} \sum_{n \geq 1} a(n) v(\tfrac{n}{x}) - \MT(x) = \Omega_\pm( x^{\sigma_0 - \epsilon} ). \end{equation}


The idea at the core of these theorems is old, and was first noticed during the investigation of the error term in the prime number theorem. To prove them, we generalize proofs given in Chapter 5 of Ingham’s Distribution of Prime Numbers (originally published in 1932, but recently republished). There, Ingham proves that $\psi(x) – x = \Omega_\pm(x^{\Theta – \epsilon})$ and $\psi(x) – x = \Omega_\pm(x^{1/2})$, where $\psi(x) = \sum_{p^n \leq x} \log p$ is Chebyshev’s second function and $\Theta \geq \frac{1}{2}$ is the supremum of the real parts of the non-trivial zeros of $\zeta(s)$. (Peter Humphries let me know that chapter 15 of Montgomery and Vaughan’s text also has these. This text might be more readily available and perhaps in more modern notation. In fact, I have a copy — but I suppose I either never got to chapter 15 or didn’t have it nicely digested when I needed it).

Motivation and Application

Infinite lines of poorly understood poles appear regularly while studying shifted convolution series of the shape \begin{equation} D(s) = \sum_{n \geq 1} \frac{a(n) a(n \pm h)}{n^s} \end{equation} for a fixed $h$. When $a(n)$ denotes the (non-normalized) coefficients of a weight $k$ cuspidal Hecke eigenform on a congruence subgroup of $\SL(2, \mathbb{Z})$, for instance, meromorphic continuation can be gotten for the shifted convolution series $D(s)$ through spectral expansion in terms of Maass forms and Eisenstein series, and the Maass forms contribute infinite lines of poles.

Explicit asymptotics take the form \begin{equation} \sum_{n \geq 1} a(n)a(n-h) e^{-n/X} = \sum_j C_j X^{\frac{1}{2} + \sigma_j + it_j} \log^m X \end{equation} where neither the residues $C_j$ nor the imaginary parts $t_j$ are well understood. Might it be possible for these infinitely many rapidly oscillating terms to experience massive cancellation for all $X$? The theorems above prove that this is not possible.

In this case, applying Theorem 1 with the Perron-weight \begin{equation} v(x) = \begin{cases} 1 & x < 1 \\ \frac{1}{2} & x = 1 \\ 0 & x > 1 \end{cases} \end{equation} shows that \begin{equation} \sideset{}{'}\sum_{n \leq X} \frac{a(n)a(n-h)}{n^{k-1}} = \Omega_\pm(\sqrt X). \end{equation} Similarly, Theorem 2 shows that \begin{equation} \sideset{}{'}\sum_{n \leq X} \frac{a(n)a(n-h)}{n^{k-1}} = \Omega_\pm(X^{\frac{1}{2} + \Theta - \epsilon}), \end{equation} where $\Theta < 7/64$ is the supremum of the deviations to Selberg’s Eigenvalue Conjecture (sometimes called the non-arithmetic Ramanujan Conjecture).

More generally, these shifted convolution series appear when studying the sizes of sums of coefficients of modular forms. A few years ago, Hulse, Kuan, Walker, and I began an investigation of the Dirichlet series whose coefficients are $\lvert A(n) \rvert^2$ (where $A(n)$ is the sum of the first $n$ coefficients of a modular form), and showed that this series has meromorphic continuation to $\mathbb{C}$. The behavior of the infinite lines of poles in the discrete spectrum played an important role in the analysis, but we did not yet understand how they affected the resulting asymptotics. I plan on revisiting these results, and others, with these theorems in mind.

Proofs

The proofs of these results will soon appear on the arXiv.

Posted in Math.NT, Mathematics | Tagged , , , | Leave a comment