mixedmath

Explorations in math and programming
David Lowry-Duda



If $f$ is a weight $k$ holomorphic cuspform with expansion $f(z) = \sum_{n \geq 1} a(n) q^n$, then we expect the partial sums to satisfy \begin{equation}\label{eq:basic} S_f(X) := \sum_{n \leq X} a(n) \ll X^{\frac{k-1}{2} + \frac{1}{4} + \epsilon} \end{equation} for any $\epsilon > 0$. We do not know how to prove this, but this is true on average.

We expect this to also hold when $f$ is of half integral weight — though we are further from proving it. There is less written about the partial sums in the half integral weight case.

In this note, I want to investigate what can be said about the partial sums $S_f(X)$ using the most basic sorts of information available: the functional equation and fundamental results about the coefficients.

Landau's Method, via Chandrasekharan and Narasimhan

The approach I use here is entirely based on applying "an old method of Landau", where one chooses combinatorial mixings of smoothed partial sums to approximate the sums. (See Landau, Über die Anzahl der Gitterpunkte in gewissen Bereichen. Zweite Abhandlung, 1915; and Landau, Über die Anzahl der Gitterpunkte in gewissen Bereichen, 1912.) This was advanced by Chandrasekharan and Narasimhan. (See Chandrasekharan and Narasimhan, Functional equations with multiple gamma factors and the average order of arithmetical functions, 1962.)

In 2018, Takashi Taniguchi, Frank Thorne, and I revisited these arguments and made the primary application as uniform "in the shape of the functional equation" as possible in our paper Uniform bounds for lattice point counting and partial sums of zeta functions. (This is an arXiv link, but the published version is essentially unchanged from the arXiv version.) I'll refer to our paper as LDTT17, and the notation here is consistent with our paper (which is mostly consistent with the 1962 paper of Chandrasekharan and Narasimhan).

Applying this argument requires a distracting amount of notation. Specifically, we need

  1. Two Dirichlet series $\phi(s)$ and $\psi(s)$, denoted by \begin{equation*} \phi(s) = \sum_{n \geq 1} \frac{a(n)}{\lambda_n^s}, \qquad \psi(s) = \sum_{n \geq 1} \frac{b(n)}{\mu_n^s}, \end{equation*} where the sequences $\lambda_n$ and $\mu_n$ are strictly increasing sequences of real numbers tending to $\infty$. (In practice, they are $n$, possibly multiplied by a constant that encapsulates the conductor of the series).

  2. The Dirichlet series should satisfy a functional equation of the form \begin{equation*} \Delta(s)\phi(s) = \Delta(\delta - s) \psi(\delta - s) \end{equation*} for some $\delta > 0$ and collected gamma factors \begin{equation*} \Delta(s) = \prod_{\nu = 1}^N \Gamma(\alpha_\nu s + \beta_\nu). \end{equation*} Here, each $\alpha_\nu > 0$ and each $\beta_\nu \in \mathbb{C}$.

  3. In terms of the gamma factor, let $A$ denote $\sum_{\nu = 1}^N \alpha_\nu$. We require $A \geq 1$ here. In other places, many people call $2A$ the degree of the Dirichlet series.

  4. I assume that the Dirichlet series converge somewhere, and that there really is some meromorphic function that each Dirichlet series describes a part of. I do not bother to make this formal here — instead note that for Dirichlet series and $L$-functions of interest, this is true.

  5. Today I assume that the Dirichlet series have no poles. Thus this applies to Dirichlet series from holomorphic cusp forms, but not to standard Epstein zeta functions. This has the effect of removing statements involving main terms.

  6. Denote the partial sum by \begin{equation*} A_\phi(X) = \sum_{\lambda_n \leq X} a(n). \end{equation*}

  7. We require a bound on the partial sums of the coefficients of the dual Dirichlet series, which we take to be of the form \begin{equation*} \sum_{\mu_n \leq Z} \lvert b(n) \rvert \leq B_\psi(Z) = C_\psi Z^r \log^{r'}(C'_\psi Z) \end{equation*} for some positive constants $C_\psi, C'_\psi$, any $r' \geq 0$, and $r > \frac{\delta}{2} + \frac{1}{4A}$. (The latter requirement is for purely technical reasons, but it holds in our applications today.)

With this notation in place, we can now state the general theorem of LDTT17, Theorem 4.

With the above notation, we have \begin{equation}\label{eq:maintheorem} A_\phi(X) \ll \sum_{X \leq \lambda_n \leq X + O(y)} \lvert a(n) \rvert + X^{\frac{\delta}{2} - \frac{1}{4A}} z^{-\frac{\delta}{2} - \frac{1}{4A}} B_\psi(z), \end{equation} for every $\eta \geq - \frac{1}{2A}$, and where \begin{equation*} y = X^{1 - \frac{1}{2A} - \eta}, \qquad z = X^{2A\eta}. \end{equation*}

I remark that one can track what precisely the implicit constants depend on (indeed, this was the primary object of LDTT17), but I ignore that here.

In applications, one optimizes over $\eta$. It is now clear that there are two ingredients necessary to use this theorem: you need some understanding of the (absolute) partial sums of the coefficients (represented by the short sum $\sum_{X \leq \lambda_n \leq X + O(y)} \lvert a(n) \rvert$) and of the (absolute) partial sums of the dual coefficients (represented by $B_\psi(z)$).
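Since the optimization over $\eta$ is just a linear balancing of exponents, it is easy to automate. Here is a minimal sympy sketch (my own bookkeeping, not code from LDTT17): it assumes the short sum is bounded by $X^c y^d$ and the dual sum by $B_\psi(z) \ll z^r$, tracks only polynomial exponents, and drops all $\epsilon$'s and logarithms. The name `balance_eta` is ad hoc. I'll reuse it below for both the full and half integral weight computations.

```python
# A minimal exponent-balancing sketch (my bookkeeping, not from LDTT17):
# epsilons and log factors are ignored throughout.
from sympy import Rational, Symbol, simplify, solve

def balance_eta(A, delta, c, d, r):
    """Assume sum_{X < lambda_n <= X + O(y)} |a(n)| << X^c * y^d and
    B_psi(z) << z^r.  With y = X^(1 - 1/(2A) - eta) and z = X^(2A eta),
    return the balancing eta and the resulting exponent of X in A_phi(X)."""
    eta = Symbol('eta')
    half, quarter = Rational(1, 2), Rational(1, 4)
    # Exponents of X in the two terms of the theorem:
    short_term = c + d * (1 - half / A - eta)
    dual_term = delta / 2 - quarter / A + 2 * A * eta * (r - delta / 2 - quarter / A)
    eta_star = solve(short_term - dual_term, eta)[0]
    return eta_star, simplify(dual_term.subs(eta, eta_star))
```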

Applications to partial sums of cuspform coefficients

We now apply this to the partial sums of coefficients of

  1. full integral weight cuspforms, and
  2. half integral weight cuspforms.

For both, we use $f(z) = \sum_{n \geq 1} a(n) q^n$ and take the weight to be $k$. Note: this differs from some authors who use $k$ for full integral weight and $\kappa = k/2$ or $\kappa = \frac{k}{2} - 1$ or some other notation for half integral weight. I prefer unified notation.

Full Integral Weight

Notationally, we have $\phi(s) = N^{\frac{s}{2}} L(s, f)$, where $N$ is the conductor of the functional equation, and $L(s, f)$ is the standard (unweighted) $L$-function \begin{equation*} L(s, f) = \sum_{n \geq 1} \frac{a(n)}{n^s}. \end{equation*} Note: we include the conductor with the $n^{-s}$, getting $\lambda_n^{-s}$ instead. This has no effect on the asymptotics aside from making the implicit constants depend on the conductor. We ignore the conductor from now on.

The functional equation for $L(s, f)$ looks like \begin{equation*} N^{s/2} L(s, f) \Gamma(s) = \varepsilon N^{(k-s)/2} L(k-s, \widetilde{f}) \Gamma(k-s), \end{equation*} where $\widetilde{f}$ is the conjugate of $f$ and $\lvert \varepsilon\rvert = 1$. (We also allow the dual coefficients $b(n)$ to absorb $\varepsilon$, and we no longer consider it).

In terms of the notation above, we have $\delta = k$ and $A = 1$. To apply the theorem, we study short interval bounds for the (absolute values of the) coefficients $a(n)$ and bounds for the (absolute values of the) dual coefficients $b(n)$.

As we are dealing with full integral weight holomorphic cuspforms, we have Deligne's bound. This implies that \begin{equation*} \lvert a(n) \rvert \ll n^{\frac{k-1}{2} + \epsilon}. \end{equation*} Thus we have the trivial bound \begin{equation}\label{eq:an_full} \sum_{X < \lambda_n < X + O(y)} \lvert a(n) \rvert \ll X^{\frac{k-1}{2} + \epsilon} y \end{equation} as long as $y \ll X^{1 - \epsilon}$. For the dual sum, we can apply the trivial bound \begin{equation}\label{eq:bn_full} \sum_{\mu_n \leq Z} \lvert b(n) \rvert \ll Z^{\frac{k-1}{2} + \epsilon} Z = Z^{\frac{k+1}{2} + \epsilon}. \end{equation} Both of these bounds follow by bounding each summand by the largest summand in the range, and then multiplying by the number of summands.

Applying the theorem, we have \begin{align*} A_\phi(X) &\ll X^{\frac{k-1}{2} + \epsilon} y + X^{\frac{k}{2} - \frac{1}{4}} z^{-\frac{k}{2} - \frac{1}{4}} z^{\frac{k+1}{2} + \epsilon} \\ &\ll X^{\frac{k-1}{2} + 1 - \frac{1}{2} - \eta + \epsilon} + X^{\frac{k-1}{2} + \frac{1}{4}} z^{\frac{1}{4} + \epsilon} \\ &\ll X^{\frac{k-1}{2} + \frac{1}{2} - \eta + \epsilon} + X^{\frac{k-1}{2} + \frac{1}{4} + \frac{\eta}{2} + \epsilon}, \end{align*} which is balanced when $\eta = \frac{1}{6}$. This gives the bound \begin{equation*} A_\phi(X) \ll X^{\frac{k-1}{2} + \frac{1}{2} - \frac{1}{6} + \epsilon} = X^{\frac{k-1}{2} + \frac{1}{3} + \epsilon}. \end{equation*}
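As a sanity check, the same numbers fall out of the `balance_eta` sketch from above (with the same caveats: the $\epsilon$'s are dropped and this is only exponent bookkeeping).

```python
# Continuing the sketch above: full integral weight k, Deligne's bound for
# the short sum and the trivial bound for the dual sum (epsilons dropped).
from sympy import symbols

k = symbols('k')
eta_star, exponent = balance_eta(A=1, delta=k, c=(k - 1) / 2, d=1, r=(k + 1) / 2)
print(eta_star, exponent)   # 1/6 and k/2 - 1/6, i.e. (k-1)/2 + 1/3
```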

For later reference, we codify this bound.

For full integral weight $k$ and notation as above, $A_\phi(X) \ll X^{\frac{k-1}{2} + \frac{1}{3} + \epsilon}$ for any $\epsilon > 0$.

Remarks on improving this bound: morally, this is approximately an $\epsilon$ factor away from being state of the art. (Actually, working very hard, it is possible to save a fractional log power — but I focus on polynomial sized error terms today).

Hafner and Ivić (On sums of Fourier coefficients of cusp forms, 1989) showed that one can remove the $\epsilon$. They did this by improving \eqref{eq:an_full}, specifically by removing the $\epsilon$ there. And to do this, they used a clever argument applying multiplicativity of the Fourier coefficients (hence applying only to Hecke eigenforms, which isn't really a restriction).

It is easy to prove a version of the dual bound, \eqref{eq:bn_full}, without any epsilon factor. We use Cauchy-Schwarz and the Rankin-Selberg result (itself obtained by applying Landau's method, or CN, or LDTT17, to the Rankin-Selberg $L$-function)

\begin{equation} \sum_{n \leq Z} \lvert b(n) \rvert^2 = c Z^{k-1 + 1} + O(Z^{k-1 + \frac{3}{5}}) \end{equation} for an explicit but unimportant constant $c$. Then Cauchy-Schwarz implies that \begin{equation*} \sum_{n \leq Z} \lvert b(n) \rvert \ll \Big( \sum_{\mu_n \leq Z} \lvert b(n) \rvert^2 \Big)^{\frac{1}{2}} \Big( Z \Big)^{\frac{1}{2}} \ll Z^{\frac{k+1}{2}}. \end{equation*} Stated differently, Rankin-Selberg shows that on average there is no additional $\epsilon$ factor in the coefficient bound, and the dual sum we need to bound is long enough that only long averages matter.

After removing the $\epsilon$, this bound $B_\psi(Z)$ for the dual sum $\sum \lvert b(n) \rvert$ is essentially correct. Similarly, after removing the $\epsilon$, the bound for the nondual short sum is essentially correct. It would be very hard to improve the overall bound by attempting to sharpen the argument here via stronger versions of either \eqref{eq:an_full} or \eqref{eq:bn_full}. The reason is that the absolute values obscure any further cancellation.

It is natural to ask whether the absolute values really need to be there. In LDTT17, the absolute value in \eqref{eq:an_full} comes from Lemma 7. More precisely, the actual quantity to be bounded is \begin{equation*} \sum_{\nu = 0}^\ell (-1)^{\ell - \nu} {\ell \choose \nu} \sum_{\lambda_n \in (X, X + \nu y]} a(n) (X + \nu y - \lambda_n)^\ell, \end{equation*} for some possibly rather large $\ell$. In the paper, we bound this by $O_\ell (y^\ell \sum_{X \leq \lambda_n \leq X + O_\ell(y)} \lvert a(n) \rvert)$, but this is obviously potentially lossy. Observe that each of the summands requires short interval bounds of weighted sums of $a(n)$, which are in theory reasonably attainable. I think this is an untrodden line of research.
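To get a feel for how much can be lost, here is a toy numerical experiment. It is entirely illustrative and not the paper's argument: I take $\lambda_n = n$, replace the coefficients $a(n)$ by random signs (not actual cuspform data), and compare the alternating quantity above with the bound obtained by moving the absolute values inside. The name `alternating_sum` is ad hoc.

```python
# A toy experiment: lambda_n = n and random signs in place of real cuspform
# coefficients.  Compare the alternating quantity from Lemma 7 with the
# bound obtained by putting absolute values inside each summand.
import math
import random

def alternating_sum(a, X, y, ell):
    """Return (|exact alternating quantity|, triangle-inequality bound)."""
    exact, bound = 0, 0
    for nu in range(ell + 1):
        weights = [(n, (X + nu * y - n) ** ell) for n in range(X + 1, X + nu * y + 1)]
        inner = sum(a[n] * w for n, w in weights)
        inner_abs = sum(abs(a[n]) * w for n, w in weights)
        exact += (-1) ** (ell - nu) * math.comb(ell, nu) * inner
        bound += math.comb(ell, nu) * inner_abs
    return abs(exact), bound

random.seed(1)
X, y, ell = 10_000, 500, 3
a = [random.choice([-1, 1]) for _ in range(X + ell * y + 1)]
print(alternating_sum(a, X, y, ell))
# With signed coefficients the first entry is typically much smaller than the
# second: square-root-type cancellation that the absolute values discard.
```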

Half Integral Weight

Notationally we have essentially the same setup, except that now the "dual" object $\widetilde{f}$ might be (in general) a half integral weight modular form on a different space. (Namely, the character of $\widetilde{f}$ might be changed by a quadratic twist, and the level might be raised appropriately). In terms of the functional equation, this affects the conductor; but as with the full integral weight case this doesn't actually change anything.

In practice, we have a functionally identical functional equation with $\delta = k$ and $A = 1$. But we don't have Deligne's bound anymore; this isn't known for half integral weight coefficients. (We do have the same Rankin-Selberg bound). We have instead the much weaker bound (see Duke, Hyperbolic distribution problems and half-integral weight Maass forms, 1988; also Iwaniec, Fourier coefficients of modular forms of half-integral weight, 1987)

\begin{equation} \lvert a(n) \rvert \ll n^{\frac{k}{2} - \frac{2}{7} + \epsilon} \ll n^{\frac{k-1}{2} + \frac{3}{14} + \epsilon}. \end{equation} (This is usually written in the first form, but I prefer the latter as it shows that the bound is $3/14$ larger than what is probably true).

As with the full integral case, the Rankin-Selberg argument shows that \begin{equation} \sum_{n \leq Z} \lvert b(n) \rvert \ll Z^{\frac{k+1}{2}}. \end{equation} Now we ask: what can we say about the short sum \begin{equation} \sum_{X < \lambda_n < X + O(y)} \lvert a(n) \rvert? \end{equation}

There are two obvious ways to try to bound this sum. Let's try them both.

Individual Bound: The "trivial" bound, using the $3/14$ bound, is \begin{equation} \sum_{X < \lambda_n < X + O(y)} \lvert a(n) \rvert \ll X^{\frac{k-1}{2} + \frac{3}{14} + \epsilon} y. \end{equation} (This will hold whenever $y \ll X^{1 - \epsilon}$).

Long-Average Bound: We apply Cauchy-Schwarz and the Rankin-Selberg bound. Then we find that \begin{align*} \sum_{X < \lambda_n < X + O(y)} \lvert a(n) \rvert &\ll \Big( \sum_{X < \lambda_n < X + O(y)} \lvert a(n) \rvert^2 \Big)^{\frac{1}{2}} \Big( \sum_{X < \lambda_n < X + O(y)} 1 \Big)^{\frac{1}{2}} \\ &\ll \Big( \sum_{X < \lambda_n < X + O(y)} \lvert a(n) \rvert^2 \Big)^{\frac{1}{2}} y^{\frac{1}{2}}. \end{align*} We bound the first factor simply, through \begin{align*} \sum_{X < \lambda_n < X + O(y)} \lvert a(n) \rvert^2 &= \sum_{\lambda_n < X + O(y)} \lvert a(n) \rvert^2 - \sum_{\lambda_n < X} \lvert a(n) \rvert^2 \\ &= c(X + O(y))^{k} - cX^{k} + O((X + y)^{k - 1 + \frac{3}{5}}) \\ &= O(X^{k-1 + \frac{3}{5}}). \end{align*} In this last bound, I assume $y \ll \sqrt{X}$, which is true in our bounds. Inserting above, we have \begin{align*} \sum_{X < \lambda_n < X + O(y)} \lvert a(n) \rvert &\ll \Big( \sum_{X < \lambda_n < X + O(y)} \lvert a(n) \rvert^2 \Big)^{\frac{1}{2}} y^{\frac{1}{2}} \\ &\ll X^{\frac{k-1}{2} + \frac{3}{10}} y^{\frac{1}{2}}. \end{align*}
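The exponent bookkeeping behind that last step is tiny, but here it is written out as a check (my own sanity check; only the exponent beyond $X^{k-1}$ is tracked, and all constants are ignored).

```python
# Exponent bookkeeping for the long-average bound: track only the exponent
# beyond X^(k-1) and ignore all constants.  (Just a sanity check.)
from fractions import Fraction as F

rs_error   = F(3, 5)   # Rankin-Selberg error term O(X^(k-1+3/5))
main_terms = F(1, 2)   # c(X + O(y))^k - c X^k = O(X^(k-1) y) << X^(k-1+1/2) when y << X^(1/2)
assert main_terms <= rs_error   # the main terms are absorbed into the error term
print(rs_error / 2)             # 3/10: the exponent beyond (k-1)/2 after Cauchy-Schwarz
```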

Applying the Theorem

We now apply the theorem to both. Applying the individual $3/14$ estimate, the theorem gives \begin{align*} A_\phi(X) &\ll X^{\frac{k-1}{2} + \frac{3}{14} + \epsilon} y + X^{\frac{k-1}{2} + \frac{1}{4} + \frac{\eta}{2}} \\ &\ll X^{\frac{k-1}{2} + \frac{3}{14} + \epsilon + \frac{1}{2} - \eta} + X^{\frac{k-1}{2} + \frac{1}{4} + \frac{\eta}{2}}, \end{align*} which is balanced when $X^{\frac{1}{4} + \frac{3}{14}} = X^{\frac{3}{2} \eta}$, or when $\eta = \frac{13}{42}$. This gives the bound \begin{equation} A_\phi(X) \ll X^{\frac{k-1}{2} + \frac{1}{4} + \frac{13}{84} + \epsilon} \ll X^{\frac{k-1}{2} + 0.4047\ldots + \epsilon}. \end{equation} Applying the CS+RS estimate shows \begin{align*} A_\phi(X) &\ll X^{\frac{k-1}{2} + \frac{3}{10}} y^{\frac{1}{2}} + X^{\frac{k-1}{2} + \frac{1}{4} + \frac{\eta}{2}} \\ &\ll X^{\frac{k-1}{2} + \frac{3}{10} + \frac{1}{4} - \frac{\eta}{2}} + X^{\frac{k-1}{2} + \frac{1}{4} + \frac{\eta}{2}}, \end{align*} which is balanced when $\eta = \frac{3}{10}$ (which is less than $13/42$, so we know this bound is better). This gives the bound \begin{equation} A_\phi(X) \ll X^{\frac{k-1}{2} + \frac{1}{4} + \frac{3}{20}} \ll X^{\frac{k-1}{2} + \frac{2}{5}}. \end{equation}
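Running the `balance_eta` sketch from earlier on both inputs reproduces these exponents (same caveats as before: the $\epsilon$'s are dropped and the helper is my own bookkeeping).

```python
# Continuing the sketch above: half integral weight k, comparing the
# individual 3/14 bound against the Cauchy-Schwarz + Rankin-Selberg bound.
from sympy import Rational, symbols

k = symbols('k')
# Individual bound: short sum << X^((k-1)/2 + 3/14) * y.
print(balance_eta(1, k, (k - 1) / 2 + Rational(3, 14), 1, (k + 1) / 2))
# eta = 13/42, exponent (k-1)/2 + 17/42

# CS + RS bound: short sum << X^((k-1)/2 + 3/10) * y^(1/2).
print(balance_eta(1, k, (k - 1) / 2 + Rational(3, 10), Rational(1, 2), (k + 1) / 2))
# eta = 3/10, exponent (k-1)/2 + 2/5
```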

We codify this as well.

For half integral weight $k$ and notation as above, $A_\phi(X) \ll X^{\frac{k-1}{2} + \frac{2}{5}}$.

I'll end with two small remarks.

  1. The $3/14$ estimate for the individual coefficients is just barely too large in comparison to the Rankin-Selberg estimate; I only included the inferior individual-coefficient bound to indicate just how close it is.

  2. In contrast to the full integral weight case, there is hope of improving the overall bound by improving the short interval estimate for half integral weight forms. But in this application, we are studying short intervals of the form $[X, X + y]$ where $y \approx X^{\frac{1}{2} - \frac{3}{10}} = X^{\frac{1}{5}}$, which is very short.



Comments (1)
  1. 2024-01-18 David Lowry-Duda

    I'm experimenting with mirroring some commentary between Mastodon and here. This is from @davidlowryduda@mathstodon.xyz.

    In my first public note of the year, I comment on applying Landau's method to partial sums of full and half integral weight modular forms. This is where one uses a combination of smoothed sums $\sum_{n \leq X} a(n) (X - n)^k$ to find reasonable bounds for the sharp sum $\sum_{n \leq X} a(n)$.

    Implicitly, this note compares what one can do from my paper with Thorne and Taniguchi against my sequence of papers with Hulse, Kuan, and Walker — and identifies where there is hope to get better results in the future.

    For additional context: I've been interested in improving these bounds since 2016. I've now studied the primary bound of $\sum_{n \leq X} a(n)$ in several different ways, and they all produce the same bound. This hints at some true barrier to our understanding, but I don't actually understand what this is.