# Smooth Sums to Sharp Sums

In this note, I describe a combination of two smoothed integral transforms that has been very useful in my collaborations with Alex Walker, Chan Ieong Kuan, and Tom Hulse. I suspect that this particular technique was once very well-known. But we were not familiar with it, and so I describe it here.

In applications, this technique is somewhat more complicated. But to demonstrate it, I apply it here to reprove some classic bounds on $\text{GL}(2)$ $L$-functions.

This note is also available as a pdf. It was first written as a LaTeX document, and then modified to fit into WordPress through latex2jax.

## Introduction

Consider a Dirichlet series
$$\begin{equation} D(s) = \sum_{n \geq 1} \frac{a(n)}{n^s}. \notag \end{equation}$$
Suppose that this Dirichlet series converges absolutely for $\Re s > 1$, has meromorphic continuation to the complex plane, and satisfies a functional equation of shape
$$\begin{equation} \Lambda(s) := G(s) D(s) = \epsilon \Lambda(1-s), \notag \end{equation}$$
where $\lvert \epsilon \rvert = 1$ and $G(s)$ is a product of Gamma factors.

Dirichlet series are often used as a tool to study number theoretic functions with multiplicative properties. By studying the analytic properties of the Dirichlet series, one hopes to extract information about the coefficients $a(n)$. Much of the most interesting information carried by a Dirichlet series concerns its partial sums
$$\begin{equation} S(n) = \sum_{m \leq n} a(m). \notag \end{equation}$$
For example, the Gauss Circle and Dirichlet Divisor problems can both be stated as problems concerning sums of coefficients of Dirichlet series.

One can try to understand the partial sum directly by understanding the integral transform
$$\begin{equation} S(X) = \frac{1}{2\pi i} \int_{(2)} D(s) \frac{X^s}{s} ds, \notag \end{equation}$$
a Perron integral (valid for non-integer $X$). However, it is often challenging to understand this integral directly, as delicate issues concerning its convergence come into play.

Instead, one often tries to understand a smoothed sum of the form
$$\begin{equation} \sum_{m \geq 1} a(m) v(m) \notag \end{equation}$$
where $v$ is a smooth function that vanishes or decays extremely quickly for values of its argument larger than the cutoff. A large class of smoothed sums can be obtained by starting with a nicely behaved weight function $v(x)$ and taking its Mellin transform
$$\begin{equation} V(s) = \int_0^\infty v(x) x^s \frac{dx}{x}. \notag \end{equation}$$
Then Mellin inversion gives that
$$\begin{equation} \sum_{m \geq 1} a(m) v(m/X) = \frac{1}{2\pi i} \int_{(2)} D(s) X^s V(s) ds, \notag \end{equation}$$
as long as $v$ and $V$ are nice enough functions.
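As a quick numerical sanity check on this inversion (my own sketch, not part of the original note), take the classical Mellin pair $v(x) = \frac{1}{1+x}$, $V(s) = \frac{\pi}{\sin \pi s}$, valid for $0 < \Re s < 1$, and recover $v$ by discretizing the integral along the line $\Re s = \frac{1}{2}$:

```python
import cmath
import math

def mellin_invert(x, sigma=0.5, T=15.0, h=0.005):
    """Approximate v(x) = (1/2 pi i) int_(sigma) V(s) x^(-s) ds by a Riemann sum,
    where V(s) = pi / sin(pi s) is the Mellin transform of v(x) = 1/(1+x)."""
    n = int(2 * T / h)
    total = 0.0 + 0.0j
    for k in range(n):
        t = -T + (k + 0.5) * h                 # midpoint rule on [-T, T]
        s = complex(sigma, t)
        V = math.pi / cmath.sin(math.pi * s)   # decays like e^(-pi|t|), so T = 15 suffices
        total += V * x ** (-s) * h
    return (total / (2 * math.pi)).real        # ds = i dt cancels the i in 1/(2 pi i)
```

The smoothed sums below arise from exactly this kind of vertical-line integral, with $D(s)$ inserted alongside $V(s)$.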

In this note, we will use two smoothing integral transforms and corresponding smoothed sums. We will use one smooth function $v_1$ (which depends on another parameter $Y$) with the property that
$$\begin{equation} \sum_{m \geq 1} a(m) v_1(m/X) \approx \sum_{\lvert m - X \rvert < X/Y} a(m). \notag \end{equation}$$
And we will use another smooth function $v_2$ (which also depends on $Y$) with the property that
$$\begin{equation} \sum_{m \geq 1} a(m) v_2(m/X) = \sum_{m \leq X} a(m) + \sum_{X < m < X + X/Y} a(m) v_2(m/X). \notag \end{equation}$$
Further, as long as the coefficients $a(m)$ are nonnegative, it will be true that
$$\begin{equation} \sum_{X < m < X + X/Y} a(m) v_2(m/X) \ll \sum_{\lvert m - X \rvert < X/Y} a(m), \notag \end{equation}$$
which is exactly what $\sum a(m) v_1(m/X)$ estimates. Therefore
$$\begin{equation}\label{eq:overall_plan} \sum_{m \leq X} a(m) = \sum_{m \geq 1} a(m) v_2(m/X) + O\Big(\sum_{m \geq 1} a(m) v_1(m/X) \Big). \end{equation}$$

Hence sufficient understanding of $\sum a(m) v_1(m/X)$ and $\sum a(m) v_2(m/X)$ allows one to understand the sharp sum
$$\begin{equation} \sum_{m \leq X} a(m). \notag \end{equation}$$

## Two Smooth Cutoff Functions

Let us now introduce the two cutoff functions that we will use.

### Concentrating Integral

We use the Mellin transform
$$\begin{equation} \frac{1}{2\pi i} \int_{(2)} \exp \Big( \frac{\pi s^2}{Y^2} \Big) \frac{X^s}{Y} ds = \frac{1}{2\pi} \exp \Big( - \frac{Y^2 \log^2 X}{4\pi} \Big). \notag \end{equation}$$
Then
$$\begin{equation} \frac{1}{2\pi i} \int_{(2)} D(s) \exp \Big( \frac{\pi s^2}{Y^2} \Big) \frac{X^s}{Y} ds = \frac{1}{2\pi} \sum_{n \geq 1} a(n) \exp \Big( - \frac{Y^2 \log^2 (X/n)}{4\pi} \Big). \notag \end{equation}$$
For $n \in [X - X/Y, X + X/Y]$, the exponential damping term is essentially constant. However, for $n$ with $\lvert n - X \rvert > X/Y$, this term decays exponentially quickly. Therefore this integral is very nearly the sum over those $n$ with $\lvert n - X \rvert < X/Y$.

For this reason we sometimes call this transform a concentrating integral transform. All of the mass of the integral is concentrated in a small interval of width $X/Y$ around the point $X$.

Note that if $a(n)$ is nonnegative, then we have the trivial bound
$$\begin{equation} \sum_{\lvert n - X \rvert < X/Y} a(n) \ll \sum_{n \geq 1} a(n) \exp \Big( - \frac{Y^2 \log^2 (X/n)}{4\pi} \Big). \notag \end{equation}$$
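The concentration is easy to see numerically. The following sketch (mine, with the simplest choice $a(n) \equiv 1$) computes the concentrating weights and checks that the sharp window count is dominated by the weighted sum:

```python
import math

def weight(n, X, Y):
    """The concentrating kernel exp(-Y^2 log^2(X/n) / 4 pi)."""
    return math.exp(-Y**2 * math.log(X / n) ** 2 / (4 * math.pi))

X, Y = 1000.0, 50.0
N = 4000                                   # weights beyond here are negligible
window = [n for n in range(1, N + 1) if abs(n - X) < X / Y]
window_count = len(window)                 # sharp count over |n - X| < X/Y
weighted_sum = sum(weight(n, X, Y) for n in range(1, N + 1))
min_window_weight = min(weight(n, X, Y) for n in window)
# Inside the window the kernel is essentially constant (close to 1),
# so the sharp window count is dominated by the full weighted sum.
```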

As this identity is a bit less well known, we include a brief proof of this transform.

Write $X^s = e^{s\log X}$ and complete the square in the exponent. Since the integrand is entire and the integral is absolutely convergent, we may perform the change of variables $s \mapsto s - \frac{Y^2 \log X}{2\pi}$ and shift the line of integration back to the imaginary axis. This yields
$$\begin{equation} \frac{1}{2\pi i} \exp\left( - \frac{Y^2 \log^2 X}{4\pi}\right) \int_{(0)} e^{\pi s^2/Y^2} \frac{ds}{Y}. \notag \end{equation}$$
The change of variables $s \mapsto isY$ transforms the integral into the standard Gaussian, completing the proof.
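The identity can also be verified numerically; the sketch below (my own check, not part of the note) compares a discretized version of the left-hand integral along $\Re s = 2$ against the closed form:

```python
import cmath
import math

def lhs(X, Y, T=30.0, h=0.01):
    """Riemann sum for (1/2 pi i) int_(2) exp(pi s^2/Y^2) X^s / Y ds, with s = 2 + it."""
    n = int(2 * T / h)
    total = 0.0 + 0.0j
    for k in range(n):
        t = -T + (k + 0.5) * h              # midpoint rule; integrand decays like a Gaussian
        s = complex(2.0, t)
        total += cmath.exp(math.pi * s * s / Y**2) * X ** s / Y * h
    return (total / (2 * math.pi)).real     # ds = i dt cancels the i in 1/(2 pi i)

def rhs(X, Y):
    """Closed form (1/2 pi) exp(-Y^2 log^2 X / 4 pi)."""
    return math.exp(-Y**2 * math.log(X) ** 2 / (4 * math.pi)) / (2 * math.pi)
```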

### Bump and Decay Integral

For $X, Y > 0$, let $v_Y(X)$ denote a smooth non-negative function with maximum value $1$ satisfying

1. $v_Y(X) = 1$ for $X \leq 1$,
2. $v_Y(X) = 0$ for $X \geq 1 + \frac{1}{Y}$.

Let $V(s)$ denote the Mellin transform of $v_Y(X)$, given by
$$\begin{equation} V(s)=\int_0^\infty t^s v_Y(t) \frac{dt}{t}, \notag \end{equation}$$
defined for $\Re(s) > 0$. Through repeated applications of integration by parts, one can show that $V(s)$ satisfies the following properties:

1. $V(s) = \frac{1}{s} + O_s(\frac{1}{Y})$.
2. $V(s) = -\frac{1}{s}\int_1^{1 + \frac{1}{Y}}v_Y'(t)t^s dt$.
3. For all positive integers $m$, and with $s$ constrained to within a vertical strip where $\lvert s\rvert >\epsilon$, we have
$$\begin{equation} \label{vbound} V(s) \ll_\epsilon \frac{1}{Y}\left(\frac{Y}{1 + \lvert s \rvert}\right)^m. \end{equation}$$

Property $(3)$ above can be extended to real $m > 1$ through the Phragmén-Lindelöf principle.
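The note does not fix a specific $v_Y$; a standard construction uses the smooth transition built from $e^{-1/u}$. The sketch below (my own choice of bump function, for illustration only) implements this and numerically checks property $(1)$, via the identity $V(1) = 1 + \frac{1}{2Y}$ that holds exactly for this symmetric choice, together with the decay in property $(3)$:

```python
import math

def psi(u):
    """exp(-1/u) for u > 0, extended smoothly by 0."""
    return math.exp(-1.0 / u) if u > 0.0 else 0.0

def v(t, Y):
    """Smooth cutoff: 1 for t <= 1, 0 for t >= 1 + 1/Y, smooth in between."""
    u = Y * (t - 1.0)
    if u <= 0.0:
        return 1.0
    if u >= 1.0:
        return 0.0
    return psi(1.0 - u) / (psi(u) + psi(1.0 - u))

def V(s, Y, n=4000):
    """Mellin transform via property (2): V(s) = -(1/s) int v'(t) t^s dt,
    using a central difference for v' and the midpoint rule on [1, 1 + 1/Y]."""
    h = (1.0 / Y) / n
    d = 1e-6
    total = 0.0 + 0.0j
    for k in range(n):
        t = 1.0 + (k + 0.5) * h
        vp = (v(t + d, Y) - v(t - d, Y)) / (2.0 * d)
        total += vp * t ** s * h
    return -total / s

Y = 10.0
# Property (1): V(1) = 1 + O(1/Y); for this symmetric bump, V(1) = 1 + 1/(2Y).
# Property (3): |V(1/2 + it)| decays in |t|; e.g. |V(1/2 + 100i)| < 1/50.
```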

Then we have that
$$\begin{equation} \frac{1}{2\pi i} \int_{(2)} D(s) V(s) X^s ds = \sum_{n \leq X} a(n) + \sum_{X < n < X + X/Y} a(n) v_Y(n/X). \notag \end{equation}$$

In other words, the sharp sum $\sum_{n \leq X} a(n)$ is captured perfectly, and then there is an amount of smooth fuzz for an additional $X/Y$ terms. As long as the short sum of length $X/Y$ isn’t as large as the sum over the first $X$ terms, then this transform gives a good way of understanding the sharp sum.

When $a(n)$ is nonnegative, we have the trivial bound that
$$\begin{equation} \sum_{X < n < X + X/Y} a(n) v_Y(n/X) \ll \sum_{\lvert n - X \rvert < X/Y} a(n). \notag \end{equation}$$

### In Combination

We have the equality
\begin{align} \sum_{n \geq 1} a(n) v_Y(n/X) &= \sum_{n \leq X} a(n) + \sum_{X < n < X + X/Y} a(n) v_Y(n/X) \notag \\ &= \sum_{n \leq X} a(n) + O\Big( \sum_{\lvert n - X \rvert < X/Y} a(n) \Big) \notag \\ &= \sum_{n \leq X} a(n) + O\bigg( \sum_{n \geq 1} a(n) \exp \Big( - \frac{Y^2 \log^2 (X/n)}{4\pi} \Big)\bigg).\notag \end{align}
Rearranging, we have
$$\begin{equation} \sum_{n \leq X} a(n) = \sum_{n \geq 1} a(n) v_Y(n/X) + O\bigg( \sum_{n \geq 1} a(n) \exp \Big( - \frac{Y^2 \log^2 (X/n)}{4\pi} \Big)\bigg). \notag \end{equation}$$
In terms of integral transforms, we then have that
\begin{align} \sum_{n \leq X} a(n) &= \frac{1}{2\pi i} \int_{(2)} D(s) V(s) X^s ds \notag \\ &\quad + O \bigg( \frac{1}{2\pi i} \int_{(2)} D(s) \exp \Big( \frac{\pi s^2}{Y^2} \Big) \frac{X^s}{Y} ds \bigg). \notag \end{align}

Fortunately, the process of understanding these two integral transforms often boils down to the same fundamental task: determine how quickly Dirichlet series grow in vertical strips.
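As a toy illustration of the whole scheme (entirely my own, with $a(n) \equiv 1$, so that the sharp sum is just $\lfloor X \rfloor$), one can check on the sum side that the smoothed $v_Y$-sum misses the sharp sum by no more than the concentrated sum:

```python
import math

def v(t, Y):
    """A smooth cutoff equal to 1 for t <= 1 and 0 for t >= 1 + 1/Y
    (one concrete choice, built from the standard exp(-1/u) transition)."""
    u = Y * (t - 1.0)
    if u <= 0.0:
        return 1.0
    if u >= 1.0:
        return 0.0
    a, b = math.exp(-1.0 / u), math.exp(-1.0 / (1.0 - u))
    return b / (a + b)

X, Y, N = 1000.5, 10.0, 3000
sharp = sum(1 for n in range(1, N + 1) if n <= X)     # the sharp sum: floor(X)
smooth = sum(v(n / X, Y) for n in range(1, N + 1))    # bump-and-decay sum
conc = sum(math.exp(-Y**2 * math.log(X / n) ** 2 / (4 * math.pi))
           for n in range(1, N + 1))                  # concentrating sum
# The smoothed sum overshoots the sharp sum only by the fuzz on (X, X + X/Y),
# and that fuzz is dominated by the concentrating sum.
```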

## Application: Sums of Coefficients of $\text{GL}(2)$ Cusp Forms

Suppose that $f(z) = \sum_{n \geq 1} a(n) e(nz)$ is a $\text{GL}(2)$ holomorphic cusp form of weight $k$. We do not restrict $k$ to be an integer, and in fact $k$ might be any rational number as long as $k > 2$. Then the Rankin-Selberg convolution
$$\begin{equation} L(s, f \otimes \overline{f}) = \zeta(2s) \sum_{n \geq 1} \frac{\lvert a(n) \rvert^2}{n^{s + k - 1}} \notag \end{equation}$$
is an $L$-function satisfying a functional equation of shape
$$\begin{equation} \Lambda(s, f \otimes \overline{f}) := (2\pi)^{-2s} L(s, f \otimes \overline{f}) \Gamma(s) \Gamma(s + k - 1) = \epsilon \Lambda(1 - s, f\otimes \overline{f}), \notag \end{equation}$$
where $\lvert \epsilon \rvert = 1$ (and in fact the right hand side $L$-function may actually correspond to a related pair of forms $\widetilde{f} \otimes \overline{\widetilde{f}}$, though this does not affect the computations done here).

It is a classically interesting question to consider the sizes of the coefficients $a(n)$. The Ramanujan-Petersson conjecture states that $a(n) \ll n^{\frac{k-1}{2} + \epsilon}$. This conjecture is known for holomorphic forms of full integral weight on $\text{GL}(2)$, but that is a very deep and very technical result; in general, questions of this type are very hard.

Using nothing more than the functional equation and the pair of integral transforms, let us analyze the sizes of
$$\begin{equation} \sum_{n \leq X} \frac{\lvert a(n) \rvert^2}{n^{k-1}}. \notag \end{equation}$$
Note that the power $n^{k-1}$ serves to normalize the sum to be $1$ on average.

As described above, it is now apparent that
\begin{align} \sum_{n \leq X} \frac{\lvert a(n) \rvert^2}{n^{k-1}} &= \frac{1}{2\pi i} \int_{(2)} \frac{L(s, f \otimes \overline{f})}{\zeta(2s)} V(s) X^s ds \notag \\ &\quad + O \bigg( \frac{1}{2\pi i} \int_{(2)} \frac{L(s, f \otimes \overline{f})}{\zeta(2s)} \exp \Big( \frac{\pi s^2}{Y^2} \Big) \frac{X^s}{Y} ds \bigg). \notag \end{align}

We now seek to understand the two integral transforms. Due to the $\zeta(2s)^{-1}$ in the denominator, and due to the mysterious nature of the zeroes of the zeta function, it will only be possible to shift each line of integration to $\Re s = \frac{1}{2}$. Note that $L(s, f\otimes \overline{f})$ has a simple pole at $s = 1$ with a residue that I denote by $R$.

By the Phragmén-Lindelöf convexity principle, it is known from the functional equation that
$$\begin{equation} L(\frac{1}{2} + it, f \otimes \overline{f}) \ll (1 + \lvert t \rvert)^{1}. \notag \end{equation}$$
Then we have by Cauchy’s Theorem that
\begin{align} &\frac{1}{2\pi i} \int_{(2)} \frac{L(s, f\otimes \overline{f})}{\zeta(2s)} \exp \Big( \frac{\pi s^2}{Y^2} \Big) \frac{X^s}{Y} ds \notag \\ &\quad = \frac{RX e^{\pi/Y^2}}{Y\zeta(2)} + \frac{1}{2\pi i} \int_{(1/2)} \frac{L(s, f\otimes \overline{f})}{\zeta(2s)} \exp \Big( \frac{\pi s^2}{Y^2} \Big) \frac{X^s}{Y} ds. \notag \end{align}
The shifted integral can be written
$$\begin{equation}\label{eq:exp_shift1} \int_{-\infty}^\infty \frac{L(\frac{1}{2} + it, f \otimes \overline{f})}{\zeta(1 + 2it)} \exp \Big( \frac{\pi (\frac{1}{4} - t^2 + it)}{Y^2}\Big) \frac{X^{\frac{1}{2} + it}}{Y}dt. \end{equation}$$
It is known that
$$\begin{equation} \zeta(1 + 2it)^{-1} \ll \log (1 + \lvert t \rvert). \notag \end{equation}$$
Therefore, bounding by absolute values shows that \eqref{eq:exp_shift1} is bounded by
$$\begin{equation} \int_{-\infty}^\infty (1 + \lvert t \rvert)^{1 + \epsilon} e^{-t^2/Y^2} \frac{X^{\frac{1}{2}}}{Y}dt. \notag \end{equation}$$

Heuristically, the exponential factor restricts this to an integral over $t \in [-Y, Y]$, as the integrand decays exponentially outside this interval. We can make this precise by performing the change of variables $t \mapsto tY$. Then we have
$$\begin{equation} \int_{-\infty}^\infty (1 + \lvert tY \rvert)^{1 + \epsilon} e^{-t^2} X^{\frac{1}{2}} dt \ll X^{\frac{1}{2}} Y^{1+\epsilon}. \notag \end{equation}$$
In total, this means that
$$\begin{equation} \frac{1}{2\pi i} \int_{(2)} \frac{L(s, f\otimes \overline{f})}{\zeta(2s)} \exp \Big( \frac{\pi s^2}{Y^2} \Big) \frac{X^s}{Y} ds = \frac{RX e^{\pi/Y^2}}{Y\zeta(2)} + O(X^{\frac{1}{2}}Y^{1+\epsilon}). \notag \end{equation}$$

Working now with the other integral transform, Cauchy’s theorem gives
\begin{align} &\frac{1}{2\pi i} \int_{(2)} \frac{L(s, f\otimes \overline{f})}{\zeta(2s)} V(s) X^s ds \notag \\ &\quad = \frac{RX V(1)}{\zeta(2)} + \frac{1}{2\pi i} \int_{(1/2)} \frac{L(s, f\otimes \overline{f})}{\zeta(2s)} V(s)X^s ds. \notag \end{align}
The shifted integral can again be written
$$\begin{equation}\label{eq:exp_shift2} \int_{-\infty}^\infty \frac{L(\frac{1}{2} + it, f \otimes \overline{f})}{\zeta(1 + 2it)} V(\tfrac{1}{2} + it) X^{\frac{1}{2} + it} dt, \end{equation}$$
and, bounding \eqref{eq:exp_shift2} by absolute values as above, we get
$$\begin{equation} \int_{-\infty}^\infty (1 + \lvert t \rvert)^{1 + \epsilon} \lvert V(\tfrac{1}{2} + it) \rvert X^{\frac{1}{2}} dt \ll \int_{-\infty}^\infty (1 + \lvert t \rvert)^{1 + \epsilon} \frac{1}{Y} \bigg(\frac{Y}{1 + \lvert t \rvert}\bigg)^m X^{\frac{1}{2}} dt \notag \end{equation}$$
for any $m > 1$. In order to make the integral converge, we choose $m = 2 + 2\epsilon$, which shows that
$$\begin{equation} \int_{-\infty}^\infty (1 + \lvert t \rvert)^{1 + \epsilon} \lvert V(\tfrac{1}{2} + it) \rvert X^{\frac{1}{2}} dt \ll X^{\frac{1}{2}}Y^{1 + \epsilon}. \notag \end{equation}$$
Therefore, we have in total that
$$\begin{equation} \frac{1}{2\pi i} \int_{(2)} \frac{L(s, f\otimes \overline{f})}{\zeta(2s)} V(s) X^s ds = \frac{RX V(1)}{\zeta(2)} + O(X^{\frac{1}{2}}Y^{1 + \epsilon}). \notag \end{equation}$$

Notice that the $X$ and $Y$ bounds are exactly the same for the two separate integrals, and that the bounding process was essentially identical. Heuristically, this should usually be the case (although in practice one transform may offer some advantage over the other).

Now that we have estimated these two integrals, we can say that
$$\begin{equation} \sum_{n \leq X} \frac{\lvert a(n) \rvert^2}{n^{k-1}} = cX + O\big(\frac{X}{Y}\big) + O(X^{\frac{1}{2}}Y^{1+\epsilon}) \notag \end{equation}$$
for some computable constant $c$. The two error terms balance when
$$\begin{equation} \frac{X}{Y} = X^{\frac{1}{2}} Y^{1 + \epsilon} \implies Y \approx X^{\frac{1}{4}}, \notag \end{equation}$$
which gives
$$\begin{equation} \sum_{n \leq X} \frac{\lvert a(n) \rvert^2}{n^{k-1}} = cX + O(X^{\frac{3}{4} + \epsilon}). \notag \end{equation}$$
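As a concrete check on the shape of this asymptotic (my own illustration, not from the note), take $f = \Delta$, the weight $k = 12$ discriminant form, whose coefficients $\tau(n)$ can be computed from the product expansion $\Delta = q \prod_{n \geq 1} (1 - q^n)^{24}$:

```python
# tau(n) from Delta = q * prod_{n >= 1} (1 - q^n)^24: expand the product as a q-series.
N = 200
coeffs = [0] * N
coeffs[0] = 1                         # series for prod (1 - q^n)^24, up to q^(N-1)
for n in range(1, N):
    for _ in range(24):               # multiply by (1 - q^n) twenty-four times
        for k in range(N - 1, n - 1, -1):
            coeffs[k] -= coeffs[k - n]

def tau(m):
    """Ramanujan tau: the factor q in Delta shifts the expansion by one."""
    return coeffs[m - 1]

def S(X):
    """Normalized partial sum sum_{n <= X} tau(n)^2 / n^(k-1) with k = 12."""
    return sum(tau(n) ** 2 / n ** 11 for n in range(1, X + 1))
```

Printing $S(X)/X$ for increasing $X$ shows the ratio drifting toward a constant, consistent with the main term $cX$; the convergence is slow, with relative error on the order of $X^{-1/4}$.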