David Lowry-Duda



$\DeclareMathOperator{\SL}{SL}$ $\DeclareMathOperator{\MT}{MT}$

After the positive feedback from the Maine-Quebec Number Theory conference, I have taken some time to write up (and slightly strengthen) these results.

We study the general theory of Dirichlet series $D(s) = \sum_{n \geq 1} a(n) n^{-s}$ and the associated summatory function of the coefficients, $A(x) = \sum_{n \leq x}' a(n)$ (where the prime over the summation means the last term is to be multiplied by $1/2$ if $x$ is an integer). For convenience, we will suppose that the coefficients $a(n)$ are real, that not all $a(n)$ are zero, that each Dirichlet series converges in some half-plane, and that each Dirichlet series has meromorphic continuation to $\mathbb{C}$. Perron's formula (or more generally, the forward and inverse Mellin transforms) shows that $D(s)$ and $A(x)$ are duals and satisfy \begin{equation}\label{eq:basic_duality} \frac{D(s)}{s} = \int_1^\infty \frac{A(x)}{x^{s+1}} dx, \quad A(x) = \frac{1}{2 \pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} \frac{D(s)}{s} x^s ds \end{equation} for an appropriate choice of $\sigma$.
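As a quick numerical sanity check of the first identity (a small sketch; the helper `integral_side` is only for illustration), take $a(n) = 1$, so that $D(s) = \zeta(s)$ and $A(x) = \lfloor x \rfloor$ away from the integers. Since $A(x)$ is constant on each interval $[n, n+1)$, the integral can be evaluated piece by piece.

```python
# Numerically check  D(s)/s = \int_1^\infty A(x) x^{-s-1} dx  for a(n) = 1,
# so that D(s) = zeta(s) and A(x) = floor(x).  (The prime on the sum only
# changes A(x) at integers, which does not affect the integral.)
import mpmath

def integral_side(s, N=10**4):
    # On [n, n+1) we have A(x) = n, and the piece \int_n^{n+1} n x^{-s-1} dx
    # equals (n/s) * (n^{-s} - (n+1)^{-s}); sum these pieces for n up to N.
    total = mpmath.mpf(0)
    for n in range(1, N + 1):
        total += n * (mpmath.power(n, -s) - mpmath.power(n + 1, -s)) / s
    # The discarded tail is O(N^{1 - Re(s)}), negligible for the s used below.
    return total

s = mpmath.mpf("2.5")
print(integral_side(s))    # approximately 0.53659...
print(mpmath.zeta(s) / s)  # agrees to about six digits
```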

Many results in analytic number theory take the form of showing that $A(x) = \MT(x) + E(x)$ for a "Main Term" $\MT(x)$ and an "Error Term" $E(x)$. Roughly speaking, the terms in the main term $\MT(x)$ correspond to poles of $D(s)$, while $E(x)$ is much harder to understand. Upper bounds for the error term bound how much $A(x)$ can deviate from its expected size, and thus describe the regularity in the distribution of the coefficients $\{a(n)\}$. In this article, we investigate lower bounds for the error term, corresponding to irregularity in the distribution of the coefficients.
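For concreteness, consider the classical case $a(n) = d(n)$, the number of divisors of $n$, so that $D(s) = \zeta(s)^2$. The only pole of $D(s)$ is a double pole at $s = 1$, and the corresponding main term is the residue of $\zeta(s)^2 x^s / s$ at $s = 1$, \begin{equation} \MT(x) = x \log x + (2\gamma - 1) x \end{equation} (where $\gamma$ is the Euler–Mascheroni constant), which is the familiar main term in the Dirichlet divisor problem. Everything not captured by this residue belongs to the error term $E(x)$.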

To get the best understanding of the error terms, it is often necessary to work with smoothed sums $A_v(x) = \sum_{n \geq 1} a(n) v(n/x)$ for a weight function $v(\cdot)$. In this article, we consider nice weight functions, i.e. weight functions that are well behaved and whose Mellin transforms are well behaved. For almost all applications, it suffices to consider weight functions $v(x)$ that are piecewise smooth on the positive real numbers and that take the average of their left and right limits at any jump discontinuity.

For a weight function $v(\cdot)$, denote its Mellin transform by \begin{equation} V(s) = \int_0^\infty v(x)x^{s} \frac{dx}{x}. \end{equation} Then we can study the more general dual family \begin{equation}\label{eq:general_duality} D(s) V(s) = \int_1^\infty \frac{A_v(x)}{x^{s+1}} dx, \quad A_v(x) = \frac{1}{2 \pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} D(s) V(s) x^s ds. \end{equation}
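To make the smoothed duality concrete, here is a minimal numerical sketch (the function names below are only for illustration): take $a(n) = 1$ and the smooth weight $v(y) = e^{-y}$, so that $D(s) = \zeta(s)$ and $V(s) = \Gamma(s)$. Shifting the contour in the second formula past $s = 1$, $s = 0$, and $s = -1$ picks up the residues of $\zeta(s)\Gamma(s)x^s$, namely $x$, $\zeta(0) = -\tfrac{1}{2}$, and $\tfrac{1}{12x}$.

```python
# Compare the smoothed sum A_v(x) = sum_{n >= 1} exp(-n/x) against the main
# terms x - 1/2 + 1/(12x) coming from the residues of zeta(s) Gamma(s) x^s
# at s = 1, 0, and -1.
import math

def smoothed_sum(x, cutoff=50):
    # exp(-n/x) is negligible once n is much larger than x, so truncate there.
    return sum(math.exp(-n / x) for n in range(1, int(cutoff * x) + 1))

def main_terms(x):
    return x - 0.5 + 1.0 / (12.0 * x)

for x in (5.0, 20.0, 100.0):
    print(f"x = {x:6.1f}   A_v = {smoothed_sum(x):.10f}   MT = {main_terms(x):.10f}")
# The two columns agree up to O(x^{-3}), the size of the next residue (at s = -3).
```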

We prove two results governing the irregularity of distribution of weighted sums. Firstly, we prove that a non-real pole of $D(s)V(s)$ guarantees an oscillatory error term for $A_v(x)$.

Theorem 1

Suppose $D(s)V(s)$ has a pole of order $r$ at $s = \sigma_0 + it_0$ with $t_0 \neq 0$. Let $\MT(x)$ be the sum of the residues of $D(s)V(s)x^s$ at all real poles $s = \sigma$ with $\sigma \geq \sigma_0$. Then \begin{equation} \sum_{n \geq 1} a(n) v(\tfrac{n}{x}) - \MT(x) = \Omega_\pm\big( x^{\sigma_0} \log^{r-1} x\big). \end{equation}

Here and below, we use the notation $f(x) = \Omega_+(g(x))$ to mean that there is a constant $k > 0$ such that $\limsup f(x)/\lvert g(x) \rvert > k$, and $f(x) = \Omega_-(g(x))$ to mean that $\liminf f(x)/\lvert g(x) \rvert < -k$. When both are true, we write $f(x) = \Omega_\pm(g(x))$. In other words, $f(x) = \Omega_\pm(g(x))$ means that infinitely often $f(x) > k \lvert g(x) \rvert$ and infinitely often $f(x) < -k \lvert g(x) \rvert$.
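As a toy illustration of how a single pair of non-real poles produces such oscillation: if $D(s)V(s)$ had simple poles at $s = \tfrac{1}{2} \pm i$, each with residue $1$, those poles would contribute \begin{equation} x^{\frac{1}{2} + i} + x^{\frac{1}{2} - i} = 2 \sqrt{x} \cos(\log x) \end{equation} to $A_v(x)$, a term that exceeds $\sqrt{x}$ and drops below $-\sqrt{x}$ for arbitrarily large $x$. Theorem 1 (with $\sigma_0 = \tfrac{1}{2}$, $t_0 = 1$, and $r = 1$) says that this oscillation survives in the error term: it cannot be cancelled by the rest of the integral.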

Theorem 2

Suppose $D(s)V(s)$ has at least one non-real pole, and that the supremum of the real parts of the non-real poles of $D(s)V(s)$ is $\sigma_0$. Let $\MT(x)$ be the sum of the residues of $D(s)V(s)x^s$ at all real poles $s = \sigma$ with $\sigma \geq \sigma_0$. Then for any $\epsilon > 0$, \begin{equation} \sum_{n \geq 1} a(n) v(\tfrac{n}{x}) - \MT(x) = \Omega_\pm( x^{\sigma_0 - \epsilon} ). \end{equation}

The idea at the core of these theorems is old, and was first noticed during the investigation of the error term in the prime number theorem. To prove them, we generalize proofs given in Chapter 5 of Ingham's The Distribution of Prime Numbers (originally published in 1932, but recently republished). There, Ingham proves that $\psi(x) - x = \Omega_\pm(x^{\Theta - \epsilon})$ and $\psi(x) - x = \Omega_\pm(x^{1/2})$, where $\psi(x) = \sum_{p^n \leq x} \log p$ is Chebyshev's second function and $\Theta \geq \frac{1}{2}$ is the supremum of the real parts of the non-trivial zeros of $\zeta(s)$. (Peter Humphries let me know that Chapter 15 of Montgomery and Vaughan's text also contains these results. That text may be more readily available, and is perhaps written in more modern notation. In fact, I have a copy, but I suppose I either never got to Chapter 15 or hadn't digested it well when I needed it.)

Motivation and Application

Infinite lines of poorly understood poles appear regularly while studying shifted convolution series of the shape \begin{equation} D(s) = \sum_{n \geq 1} \frac{a(n) a(n \pm h)}{n^s} \end{equation} for a fixed $h$. When $a(n)$ denotes the (non-normalized) coefficients of a weight $k$ cuspidal Hecke eigenform on a congruence subgroup of $\SL(2, \mathbb{Z})$, for instance, the shifted convolution series $D(s)$ can be meromorphically continued through a spectral expansion in terms of Maass forms and Eisenstein series, and the Maass forms contribute infinite lines of poles.

Explicit asymptotics take the form \begin{equation} \sum_{n \geq 1} a(n)a(n-h) e^{-n/X} = \sum_j C_j X^{\frac{1}{2} + \sigma_j + it_j} \log^m X \end{equation} where neither the coefficients $C_j$ (coming from residues) nor the imaginary parts $t_j$ are well understood. Might it be possible for these infinitely many rapidly oscillating terms to experience massive cancellation for all $X$? The theorems above prove that this is not possible.

In this case, applying Theorem 1 with the Perron weight \begin{equation} v(x) = \begin{cases} 1 & x < 1 \\ \frac{1}{2} & x = 1 \\ 0 & x > 1 \end{cases} \end{equation} shows that \begin{equation} \sideset{}{'}\sum_{n \leq X} \frac{a(n)a(n-h)}{n^{k-1}} = \Omega_\pm(\sqrt X). \end{equation} Similarly, Theorem 2 shows that \begin{equation} \sideset{}{'}\sum_{n \leq X} \frac{a(n)a(n-h)}{n^{k-1}} = \Omega_\pm(X^{\frac{1}{2} + \Theta - \epsilon}), \end{equation} where $\Theta \leq 7/64$ is the supremum of the deviations to Selberg's Eigenvalue Conjecture (sometimes called the non-arithmetic Ramanujan Conjecture).
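To see how the general framework specializes here, note that the Mellin transform of this sharp cutoff is \begin{equation} V(s) = \int_0^1 x^{s} \frac{dx}{x} = \frac{1}{s} \qquad (\operatorname{Re} s > 0), \end{equation} so that $D(s)V(s) = D(s)/s$ and the smoothed duality reduces to the basic unsmoothed duality from the start of the article. The infinite lines of poles coming from the discrete spectrum are then exactly what Theorems 1 and 2 convert into the $\Omega_\pm$ statements above.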

More generally, these shifted convolution series appear when studying the sizes of sums of coefficients of modular forms. A few years ago, Hulse, Kuan, Walker, and I began an investigation of the Dirichlet series whose coefficients are $\lvert A(n) \rvert^2$ (where $A(n)$ is the sum of the first $n$ coefficients of a modular form), and we showed that it has meromorphic continuation to $\mathbb{C}$. The behavior of the infinite lines of poles in the discrete spectrum played an important role in the analysis, but at the time we did not understand how they affected the resulting asymptotics. I plan on revisiting that work, and related work, with these results in mind.

