# Tag Archives: dirichlet series

## Non-real poles and irregularity of distribution I

$\DeclareMathOperator{\SL}{SL}$ $\DeclareMathOperator{\MT}{MT}$After the positive feedback from the Maine-Quebec Number Theory conference, I have taken some time to write up (and slightly strengthen) these results.

We study the general theory of Dirichlet series $D(s) = \sum_{n \geq 1} a(n) n^{-s}$ and the associated summatory function of the coefficients, $A(x) = \sum_{n \leq x}' a(n)$ (where the prime over the summation means the last term is to be multiplied by $1/2$ if $x$ is an integer). For convenience, we will suppose that the coefficients $a(n)$ are real, that not all $a(n)$ are zero, that each Dirichlet series converges in some half-plane, and that each Dirichlet series has meromorphic continuation to $\mathbb{C}$. Perron's formula (or, more generally, the forward and inverse Mellin transforms) shows that $D(s)$ and $A(x)$ are duals and satisfy $$\label{eq:basic_duality} \frac{D(s)}{s} = \int_1^\infty \frac{A(x)}{x^{s+1}} dx, \quad A(x) = \frac{1}{2 \pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} \frac{D(s)}{s} x^s ds$$ for an appropriate choice of $\sigma$.
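As a quick numerical sanity check (my addition, not part of the original post), take $D(s) = \zeta(s)$, so that $a(n) = 1$ and $A(x) = \lfloor x \rfloor$ for non-integer $x$. At $s = 2$, the first identity predicts that the integral equals $\zeta(2)/2 = \pi^2/12$:

```python
import math

# For D(s) = zeta(s) we have a(n) = 1 and A(x) = floor(x) away from integers.
# The identity D(s)/s = \int_1^\infty A(x) x^{-s-1} dx at s = 2 predicts that
# the integral equals zeta(2)/2 = pi^2/12 = 0.82246...

def truncated_integral(s, N):
    """Compute int_1^N floor(x) x^(-s-1) dx exactly, one unit interval at a time."""
    total = 0.0
    for n in range(1, N):
        # On [n, n+1), floor(x) = n, and int_n^{n+1} x^{-s-1} dx = (n^{-s} - (n+1)^{-s})/s.
        total += n * (n**(-s) - (n + 1)**(-s)) / s
    return total

approx = truncated_integral(2, 100000)
print(approx, math.pi**2 / 12)
```

The truncation at $N = 10^5$ discards a tail of size roughly $1/N$, so the agreement is to several decimal places.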

Many results in analytic number theory take the form of showing that $A(x) = \MT(x) + E(x)$ for a “Main Term” $\MT(x)$ and an “Error Term” $E(x)$. Roughly speaking, the terms in the main term $\MT(x)$ correspond to poles of $D(s)$, while $E(x)$ is hard to understand. Upper bounds for the error term bound how much $A(x)$ can deviate from its expected size, and thus describe the regularity of the distribution of the coefficients $\{a(n)\}$. In this article, we investigate lower bounds for the error term, corresponding to irregularity in the distribution of the coefficients.

To get the best understanding of the error terms, it is often necessary to work with smoothed sums $A_v(x) = \sum_{n \geq 1} a(n) v(n/x)$ for a weight function $v(\cdot)$. In this article, we consider nice weight functions, i.e.\ weight functions with good behavior whose Mellin transforms also have good behavior. For almost all applications, it suffices to consider weight functions $v(x)$ that are piecewise smooth on the positive real numbers and that take values halfway between jump discontinuities.

For a weight function $v(\cdot)$, denote its Mellin transform by $$V(s) = \int_0^\infty v(x)x^{s} \frac{dx}{x}.$$ Then we can study the more general dual family $$\label{eq:general_duality} D(s) V(s) = \int_1^\infty \frac{A_v(x)}{x^{s+1}} dx, \quad A_v(x) = \frac{1}{2 \pi i} \int_{\sigma – i \infty}^{\sigma + i \infty} D(s) V(s) x^s ds.$$
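For instance, the smooth weight $v(x) = e^{-x}$ has Mellin transform $V(s) = \Gamma(s)$. Here is a short numerical check (a sketch of mine using only the standard library; `mellin_numeric` is a made-up helper name):

```python
import math

# The smooth weight v(x) = exp(-x) has Mellin transform V(s) = Gamma(s).
# We check this numerically at s = 3, where Gamma(3) = 2! = 2.

def mellin_numeric(v, s, upper=60.0, step=1e-3):
    """Approximate int_0^upper v(x) x^(s-1) dx with the trapezoid rule."""
    total = 0.0
    n_steps = int(upper / step)
    for i in range(n_steps):
        x0, x1 = i * step, (i + 1) * step
        f0 = v(x0) * x0**(s - 1) if x0 > 0 else 0.0  # guard x = 0 for s < 1
        f1 = v(x1) * x1**(s - 1)
        total += 0.5 * (f0 + f1) * step
    return total

approx = mellin_numeric(lambda x: math.exp(-x), 3)
print(approx, math.gamma(3))
```

The tail past $x = 60$ is smaller than $10^{-20}$, so the truncated integral matches $\Gamma(3) = 2$ to fine precision.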

We prove two results governing the irregularity of distribution of weighted sums. Firstly, we prove that a non-real pole of $D(s)V(s)$ guarantees an oscillatory error term for $A_v(x)$.

### Theorem 1

Suppose $D(s)V(s)$ has a pole at $s = \sigma_0 + it_0$ with $t_0 \neq 0$ of order $r$. Let $\MT(x)$ be the sum of the residues of $D(s)V(s)x^s$ at all real poles $s = \sigma$ with $\sigma \geq \sigma_0$. Then $$\sum_{n \geq 1} a(n) v(\tfrac{n}{x}) - \MT(x) = \Omega_\pm\big( x^{\sigma_0} \log^{r-1} x\big).$$

Here and below, we use the notation $f(x) = \Omega_+(g(x))$ to mean that there is a constant $k > 0$ such that $\limsup_{x \to \infty} f(x)/\lvert g(x) \rvert > k$, and $f(x) = \Omega_-(g(x))$ to mean that $\liminf_{x \to \infty} f(x)/\lvert g(x) \rvert < -k$. When both are true, we write $f(x) = \Omega_\pm(g(x))$. This means that $f(x)$ is at least as positive as $k\lvert g(x) \rvert$ and at least as negative as $-k\lvert g(x) \rvert$ infinitely often.
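As a concrete illustration (my example, not from the post): $f(x) = \sqrt{x}\,\sin(\log x)$ satisfies $f(x) = \Omega_\pm(\sqrt{x})$, since the normalized ratio $f(x)/\sqrt{x} = \sin(\log x)$ returns to values near $+1$ and $-1$ on every interval of multiplicative length $e^{2\pi}$:

```python
import math

# f(x) = sqrt(x) * sin(log x) is Omega_pm(sqrt(x)): the normalized ratio
# f(x)/sqrt(x) = sin(log x) keeps hitting values near +1 and near -1,
# no matter how large x becomes.
xs = [math.exp(t / 100.0) for t in range(2000)]   # x ranging over [1, e^20)
ratios = [math.sin(math.log(x)) for x in xs]
print(max(ratios), min(ratios))                   # close to +1 and -1
```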

### Theorem 2

Suppose $D(s)V(s)$ has at least one non-real pole, and that the supremum of the real parts of the non-real poles of $D(s)V(s)$ is $\sigma_0$. Let $\MT(x)$ be the sum of the residues of $D(s)V(s)x^s$ at all real poles $s = \sigma$ with $\sigma \geq \sigma_0$. Then for any $\epsilon > 0$, $$\sum_{n \geq 1} a(n) v(\tfrac{n}{x}) - \MT(x) = \Omega_\pm( x^{\sigma_0 - \epsilon} ).$$

The idea at the core of these theorems is old, and was first noticed during the investigation of the error term in the prime number theorem. To prove them, we generalize proofs given in Chapter 5 of Ingham's *The Distribution of Prime Numbers* (originally published in 1932, but recently republished). There, Ingham proves that $\psi(x) - x = \Omega_\pm(x^{\Theta - \epsilon})$ and $\psi(x) - x = \Omega_\pm(x^{1/2})$, where $\psi(x) = \sum_{p^n \leq x} \log p$ is Chebyshev's second function and $\Theta \geq \frac{1}{2}$ is the supremum of the real parts of the non-trivial zeros of $\zeta(s)$. (Peter Humphries let me know that Chapter 15 of Montgomery and Vaughan's text also covers these results. That text might be more readily available and is perhaps in more modern notation. In fact, I have a copy — but I suppose I either never got to Chapter 15 or didn't have it nicely digested when I needed it.)

## Motivation and Application

Infinite lines of poorly understood poles appear regularly in the study of shifted convolution series of the shape $$D(s) = \sum_{n \geq 1} \frac{a(n) a(n \pm h)}{n^s}$$ for a fixed $h$. When $a(n)$ denotes the (non-normalized) coefficients of a weight $k$ cuspidal Hecke eigenform on a congruence subgroup of $\SL(2, \mathbb{Z})$, for instance, meromorphic continuation of the shifted convolution series $D(s)$ can be obtained through spectral expansion in terms of Maass forms and Eisenstein series, and the Maass forms contribute infinite lines of poles.

Explicit asymptotics take the form $$\sum_{n \geq 1} a(n)a(n-h) e^{-n/X} = \sum_j C_j X^{\frac{1}{2} + \sigma_j + it_j} \log^m X$$ where neither the residues nor the imaginary parts $it_j$ are well-understood. Might it be possible for these infinitely many rapidly oscillating terms to experience massive cancellation for all $X$? The theorems above prove that this is not possible.

In this case, applying Theorem 1 with the Perron-weight $$v(x) = \begin{cases} 1 & x < 1 \\ \frac{1}{2} & x = 1 \\ 0 & x > 1 \end{cases}$$ shows that $$\sideset{}{'}\sum_{n \leq X} \frac{a(n)a(n-h)}{n^{k-1}} = \Omega_\pm(\sqrt X).$$ Similarly, Theorem 2 shows that $$\sideset{}{'}\sum_{n \leq X} \frac{a(n)a(n-h)}{n^{k-1}} = \Omega_\pm(X^{\frac{1}{2} + \Theta - \epsilon}),$$ where $\Theta \leq 7/64$ is the supremum of the deviations to Selberg's Eigenvalue Conjecture (sometimes called the non-arithmetic Ramanujan Conjecture).

More generally, these shifted convolution series appear when studying the sizes of sums of coefficients of modular forms. A few years ago, Hulse, Kuan, Walker, and I began an investigation of the Dirichlet series whose coefficients are $\lvert A(n) \rvert^2$, where $A(n)$ is the sum of the first $n$ coefficients of a modular form, and we showed that this series has meromorphic continuation to $\mathbb{C}$. The behavior of the infinite lines of poles in the discrete spectrum played an important role in the analysis, but we did not yet understand how they affected the resulting asymptotics. I plan on revisiting these results, and others, with these theorems in mind.

## Proofs

The proofs of these results will soon appear on the arXiv.

## Notes from a talk at the Maine-Quebec Number Theory Conference

Today I will be giving a talk at the Maine-Quebec Number Theory conference. Each year that I attend this conference, I marvel at how friendly and inviting an environment it is — I highly recommend checking the conference out (and perhaps modelling other conferences after it).

The theme of my talk is spectral poles and their contribution towards asymptotics (especially of error terms). I describe a few problems in which spectral poles appear in asymptotics. Unlike the nice simple cases where a single pole (or possibly a few poles) appears, in these cases infinite lines of poles appear.

For a bit over a year, I have encountered these and not known what to make of them. Could it happen, pathologically, that the residues of these poles generically cancel? Could they combine to be larger than expected? How do we make sense of them?

The resolution came only very recently.

I will later write a dedicated note to this new idea (involving Dirichlet integrals and Landau’s theorem in this context), but for now — here are the slides for my talk.


## A Notebook Preparing for a Talk at Quebec-Maine

This is a notebook containing a representative sample of the code I used to generate the results and pictures presented at the Quebec-Maine Number Theory Conference on 9 October 2016. It was written in a Jupyter Notebook using Sage 7.3, and later converted for presentation on this site.
There is a version of the notebook available on github. Alternatively, a static html version without WordPress formatting is available here. Finally, this notebook is also available in pdf form.
The slides for my talk are available here.

# Testing for a Generalized Conjecture on Iterated Sums of Coefficients of Cusp Forms

Let $f$ be a weight $k$ cusp form with Fourier expansion

$$f(z) = \sum_{n \geq 1} a(n) e(nz).$$

Deligne has shown that $a(n) \ll n^{\frac{k-1}{2} + \epsilon}$. It is conjectured that

$$S_f^1(X) := \sum_{m \leq X} a(m) \ll X^{\frac{k-1}{2} + \frac{1}{4} + \epsilon}.$$

It is known that this holds on average, and we recently showed that this holds on average in short intervals.
(See HKLDW1, HKLDW2, and HKLDW3 for details and an overview of work in this area).
This is particularly notable, as the resulting exponent is only 1/4 higher than that of a single coefficient.
This indicates extreme cancellation, far more than what is implied merely by the signs of $a(n)$ being random.
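For comparison, a quick simulation (my addition, not part of the original argument) shows the square-root cancellation one expects from purely random signs:

```python
import random

# If the normalized coefficients a(n)/n^((k-1)/2) merely had independent
# random signs, the partial sums of N terms would typically have size on
# the order of sqrt(N), which is much weaker than the conjectured 1/4 power.
random.seed(1)
N = 10**5
walk, peak = 0, 0
for _ in range(N):
    walk += random.choice([-1, 1])
    peak = max(peak, abs(walk))
print(peak)  # typically a small multiple of sqrt(N), and sqrt(10**5) = 316.2...
```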

It seems that we also have

$$\sum_{m \leq X} S_f^1(m) \ll X^{\frac{k-1}{2} + \frac{2}{4} + \epsilon}.$$

That is, the sum of sums seems to add in only an additional 1/4 exponent.
This is unexpected and a bit mysterious.

The purpose of this notebook is to explore this and higher conjectures.
Define the $j$th iterated sum as

$$S_f^j(X) := \sum_{m \leq X} S_f^{j-1} (m).$$

Then we numerically estimate bounds on the exponent $\delta(j)$ such that

$$S_f^j(X) \ll X^{\frac{k-1}{2} + \delta(j) + \epsilon}.$$
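One simple way to estimate such an exponent from data (a sketch; `estimate_exponent` is my own helper, not from the notebook) is to fit the slope of $\log \max_{m \leq n} \lvert S_f^j(m) \rvert$ against $\log n$ by least squares:

```python
import math

# Estimate a growth exponent alpha from data b(n) ~ n^alpha by fitting
# log(running max of |b(n)|) against log(n) with ordinary least squares.

def estimate_exponent(values):
    run_max, xs, ys = 0.0, [], []
    for n, b in enumerate(values, 1):
        run_max = max(run_max, abs(b))
        if run_max > 0:
            xs.append(math.log(n))
            ys.append(math.log(run_max))
    mean_x = sum(xs) / len(xs)
    mean_y = sum(ys) / len(ys)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x)**2 for x in xs)
    return cov / var

# Sanity check on data with known exponent 1.5:
slope = estimate_exponent([n**1.5 for n in range(1, 2001)])
print(slope)  # close to 1.5
```

Applied to the iterated sums $S_f^j(n)$ computed below, this gives numerical estimates for $\frac{k-1}{2} + \delta(j)$.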

In [1]:
# This was written in SageMath 7.3 through a Jupyter Notebook.

# sage plays strangely with ipython. This re-allows inline plotting
from IPython.display import display, Image


We first need a list of coefficients of one (or more) cusp forms.
For initial investigation, we begin with a list of 50,000 coefficients of the weight $12$ cusp form on $\text{SL}(2, \mathbb{Z})$, $\Delta(z)$, i.e. Ramanujan's delta function.
We will use the data associated to these 50,000 coefficients for pictorial investigation as well.

We will be performing some numerical investigation as well.
For this, we will use the first 2.5 million coefficients of $\Delta(z)$.

In [2]:
# Gather 10 coefficients for simple checking
check_10 = delta_qexp(11).coefficients()
print check_10

fiftyk_coeffs = delta_qexp(50000).coefficients()
print fiftyk_coeffs[:10] # these match expected

twomil_coeffs = delta_qexp(2500000).coefficients()
print twomil_coeffs[:10] # these also match expected

[1, -24, 252, -1472, 4830, -6048, -16744, 84480, -113643, -115920]
[1, -24, 252, -1472, 4830, -6048, -16744, 84480, -113643, -115920]
[1, -24, 252, -1472, 4830, -6048, -16744, 84480, -113643, -115920]

In [3]:
# Function which iterates partial sums from a list of coefficients

def partial_sum(baselist):
    ret_list = [baselist[0]]
    for b in baselist[1:]:
        ret_list.append(ret_list[-1] + b)
    return ret_list

print check_10
print partial_sum(check_10) # Should be the partial sums

[1, -24, 252, -1472, 4830, -6048, -16744, 84480, -113643, -115920]
[1, -23, 229, -1243, 3587, -2461, -19205, 65275, -48368, -164288]
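(An aside, not part of the original Sage notebook: in Python 3, the same running-sum step can be written with `itertools.accumulate`.)

```python
from itertools import accumulate

# Running sums via the standard library; this matches partial_sum above.
check_10 = [1, -24, 252, -1472, 4830, -6048, -16744, 84480, -113643, -115920]
print(list(accumulate(check_10)))
# [1, -23, 229, -1243, 3587, -2461, -19205, 65275, -48368, -164288]
```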

In [4]:
# Calculate the first 10 iterated partial sums
# We store them in a single list, sums_list
# the zeroth element of the list is the array of initial coefficients
# the first element is the array of first partial sums, S_f(n)
# the second element is the array of second iterated partial sums, S_f^2(n)

fiftyk_sums_list = []
fiftyk_sums_list.append(fiftyk_coeffs) # zeroth index contains coefficients
for j in range(10):                    # jth index contains jth iterate
    fiftyk_sums_list.append(partial_sum(fiftyk_sums_list[-1]))

print partial_sum(check_10)
print fiftyk_sums_list[1][:10]         # should match above

twomil_sums_list = []
twomil_sums_list.append(twomil_coeffs) # zeroth index contains coefficients
for j in range(10):                    # jth index contains jth iterate
    twomil_sums_list.append(partial_sum(twomil_sums_list[-1]))

print twomil_sums_list[1][:10]         # should match above

[1, -23, 229, -1243, 3587, -2461, -19205, 65275, -48368, -164288]
[1, -23, 229, -1243, 3587, -2461, -19205, 65275, -48368, -164288]
[1, -23, 229, -1243, 3587, -2461, -19205, 65275, -48368, -164288]


As is easily visible, the sums alternate in sign very rapidly.
For instance, we believe that the first partial sums should change sign about once every $X^{1/4}$ terms in the interval $[X, 2X]$.
In this exploration, we are interested in the sizes of the coefficients.
But in HKLDW3, we investigated some of the sign changes of the partial sums.

Now seems like a nice time to briefly look at the data we currently have.
What do the first 50 thousand coefficients look like?
We normalize them, getting $A(n) = a(n)/n^{5.5}$, and plot these normalized coefficients.

In [5]:
norm_list = []
for n,e in enumerate(fiftyk_coeffs, 1):
    normalized_element = 1.0 * e / (1.0 * n**(5.5))
    norm_list.append(normalized_element)
print norm_list[:10]

[1.00000000000000, -0.530330085889911, 0.598733612492945, -0.718750000000000, 0.691213333204735, -0.317526448138560, -0.376547696558964, 0.911504835123284, -0.641518061271148, -0.366571226366719]

In [6]:
# Make a quick display
normed_coeffs_plot = scatter_plot(zip(range(1,60000), norm_list), markersize=.02)
normed_coeffs_plot.save("normed_coeffs_plot.png")
display(Image("normed_coeffs_plot.png"))


Since some figures will be featuring prominently in the talk I’m giving at Quebec-Maine, let us make high-quality figures now.
