David Lowry-Duda



This is the second note in a series of notes on zeros of Dirichlet series, with an eye towards examining zeros of Dirichlet series not in the Selberg class. The first note in the series is here. $\renewcommand{\Re}{\operatorname{Re}}$

We focus on Dirichlet series in an extended Selberg class $\widetilde{S}$, written as \begin{equation}L(s) = \sum_{n \geq 1} \frac{a(n)}{n^s}. \end{equation}Each $L$ has a functional equation, normalized to be of the shape $s \mapsto 1 - s$, and satisfies the following properties:

  1. The coefficients $a(n)$ satisfy a Ramanujan–Petersson bound on average, meaning that $\sum_{n \leq N} \lvert a(n) \rvert^2 \ll N^{1 + \epsilon}$ for any $\epsilon > 0$.
  2. $L(s)$ has an analytic continuation to $\mathbb{C}$ as an entire function of finite order.
  3. $L(s)$ satisfies a functional equation of the form \begin{equation} \Lambda(s) := L(s) Q^s \prod_{\nu = 1}^N \Gamma( \alpha_\nu s + \beta_\nu)= \omega \overline{\Lambda(1 - \overline{s})}, \end{equation}where $Q$ and $\alpha_\nu$ are positive real numbers, $\beta_\nu$ are complex numbers with nonnegative real part, and $\lvert \omega \rvert = 1$.
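
A standard example to keep in mind: for a primitive, nonprincipal Dirichlet character $\chi$ modulo $q$, the $L$-function $L(s, \chi) = \sum_{n \geq 1} \chi(n) n^{-s}$ is entire of finite order, its coefficients are bounded, and it satisfies \begin{equation} \Lambda(s, \chi) := L(s, \chi) \Big( \frac{q}{\pi} \Big)^{s/2} \Gamma\Big( \frac{s + \mathfrak{a}}{2} \Big) = \omega_\chi \overline{\Lambda(1 - \overline{s}, \chi)}, \end{equation}where $\mathfrak{a} \in \{0, 1\}$ according to whether $\chi$ is even or odd, and $\lvert \omega_\chi \rvert = 1$. This is exactly the shape above, with $Q = (q/\pi)^{1/2}$ and a single gamma factor with $\alpha_1 = \tfrac{1}{2}$ and $\beta_1 = \mathfrak{a}/2$.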

In this note, we examine basic counting results on zeros and verify that the proofs apply to the extended Selberg class. (In short, they do.) We first show that the completed Dirichlet series $\Lambda$ attached to any $L \in \widetilde{S}$ is an entire function of order $1$; this is sufficient to guarantee many properties of the zeros of $L$.

Entire functions of order one

Recall that a holomorphic function $f(s)$ on $\mathbb{C}$ is called an integral function of finite order if there is a constant $A$ such that \begin{equation} \log \lvert f(s) \rvert \ll \lvert s \rvert^A \qquad (s \to \infty). \end{equation}The infimum of all numbers $A$ for which this inequality holds is called the order of $f$. For instance, polynomials have order $0$, $e^s$ has order $1$, and $\exp(s^2)$ has order $2$. Here and below, I use the convention that $F(x) \ll G(x)$ (as $x$ tends to some limit, here usually $\infty$) means that there is a constant $c$ such that $F(x) \leq c G(x)$. Note that I do not mean that $\lvert F(x) \rvert \leq c \lvert G(x) \rvert$; the notation is meant to apply to signed, real-valued functions such as $\log \lvert f(s) \rvert$.

The completed Dirichlet series $\Lambda(s)$ is an integral function of order $1$.

We assumed already that $L \in \widetilde{S}$ is an integral function of finite order. The content of this statement is that having a functional equation and a half-plane of absolute convergence is enough to guarantee that the order is $1$.

It is a consequence of the Phragmén–Lindelöf convexity principle that there is a finite constant $A$ such that $\lvert L(s) \rvert \ll (1 + \lvert s\rvert)^A$ for all $s$ with $\Re s \geq \frac{1}{2}$. (We computed this constant towards the end of the previous note, but we do not need to know the actual value.) Applying Stirling's formula to estimate the gamma factors then shows that \begin{equation} \log \lvert \Lambda(s) \rvert \ll \lvert s \rvert (\log \lvert s \rvert + 1), \end{equation}uniformly for all $s$ with $\Re s \geq \frac{1}{2}$. The functional equation implies the same bound for $\Re s \leq \frac{1}{2}$, and thus $\Lambda(s)$ is an integral function of order bounded above by $1$.

To see that the order is exactly $1$, we note that for real $s$ as $s \to \infty$, the Dirichlet series behaves like its first nonvanishing term $a(m)/m^s$, so $\log \lvert L(s) \rvert \ll s$; Stirling's formula applied to the gamma factors then gives $\log \lvert \Lambda(s) \rvert \asymp s \log s$ as $s \to \infty$. $\square$
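
As a quick numerical illustration (not needed for anything that follows), here is a short sketch checking this growth rate for the completed $L$-function of the odd character mod $4$. It assumes Python with mpmath (in particular its dirichlet routine); the specific character and gamma factor are just convenient choices for the demonstration.

```python
# Numerically check that log|Lambda(sigma)| grows like sigma*log(sigma)
# along the positive real axis, using Lambda(s) for the odd character mod 4.
from mpmath import mp, mpf, gamma, dirichlet, log, fabs, pi

mp.dps = 30  # working precision

def completed_L(s):
    """Lambda(s) = (4/pi)^((s+1)/2) * Gamma((s+1)/2) * L(s, chi_4)."""
    chi4 = [0, 1, 0, -1]  # the nonprincipal character mod 4
    return (4 / pi) ** ((s + 1) / 2) * gamma((s + 1) / 2) * dirichlet(s, chi4)

for sigma in [10, 20, 40, 80, 160]:
    growth = log(fabs(completed_L(mpf(sigma))))
    print(sigma, float(growth), float(growth / (sigma * log(sigma))))
# The final column creeps slowly upward toward 1/2 (the contribution of the
# single gamma factor), consistent with order exactly 1.
```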

It is a good exercise to examine how to show that the $L$-function associated to a holomorphic cuspform is an integral function of finite order. One way to do this is to examine the behavior of the Mellin transform \begin{equation} \Lambda(s, f) = \int_0^\infty f(iy) y^s \frac{dy}{y}, \end{equation}which converges absolutely for all $s$. Further, for $s$ in fixed vertical strips, one can show that $\Lambda(s, f)$ is bounded. Similar analysis holds for automorphic $L$-functions of higher degree.
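
To sketch one way this goes in the simplest setting (a level one cuspform $f$ of weight $k$; higher level works similarly with a little more bookkeeping), split the integral at $y = 1$ and apply the modularity relation $f(i/y) = (iy)^k f(iy)$ to the piece near $0$ to get \begin{equation} \Lambda(s, f) = \int_1^\infty f(iy) \big( y^s + i^k y^{k - s} \big) \frac{dy}{y}. \end{equation}Since $f(iy)$ decays exponentially as $y \to \infty$, this converges for every $s$ and is uniformly bounded in vertical strips, which gives the entire continuation and (with a touch more care as $\Re s \to \pm \infty$) the finite order statement.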

Completely classical arguments give elementary estimates for the number of zeros of integral functions of finite order. Before we give them, we should analyze the basic structure of the zeros.

The completed function $\Lambda(s) = Q^s L(s) \prod \Gamma(\alpha_\nu s + \beta_\nu)$ is entire, but the product of gamma functions has infinitely many poles. These poles must be cancelled by zeros of $L(s)$, and we call these zeros the trivial zeros of $L(s)$. Any remaining zero is called nontrivial. As each gamma factor is never $0$, this is equivalent to saying that the nontrivial zeros of $L(s)$ are precisely the zeros of $\Lambda(s)$.

In contrast to automorphic $L$-functions or $L$-functions in the standard Selberg class, we should generically expect nontrivial zeros off the critical line $\Re s = \frac{1}{2}$, and frequently even outside the critical strip. Nonetheless, there is always some strip containing all the (nontrivial) zeros.

For each $L \in \widetilde{S}$, there exist $A, B \in \mathbb{R}$ such that all nontrivial zeros of $L(s)$ are in the strip $A \leq \Re s \leq B$.

Suppose $a(m)$ is the first nonzero coefficient of $L(s)$ and consider \begin{equation}L_m(s) := \frac{m^s}{a(m)} L(s) = 1 + \frac{m^s}{a(m)} \sum_{n > m} \frac{a(n)}{n^s}. \end{equation}Clearly $L(s)$ and $L_m(s)$ have the same zeros. The Ramanujan–Petersson type bound $\sum_{n \leq N} \lvert a(n) \rvert^2 \ll N^{1 + \epsilon}$ guarantees that $L_m(s)$ converges absolutely for all $\Re s > 1$. It follows that there is a $B \geq 1$ such that for all $s$ with $\Re s > B$, we have that \begin{equation} \frac{m^{\Re s}}{\lvert a(m) \rvert} \sum_{n > m} \frac{\lvert a(n) \rvert}{n^{\Re s}}< \frac{1}{2}. \end{equation}For any $s$ with $\Re s > B$, we then trivially find that $\lvert L_m(s) \rvert > \frac{1}{2}$, and thus $L_m(s)$ has no zeros to the right of the line $\Re s = B$. A similar argument applies to the dual $L$-function on the other side of the functional equation, completing the proof. $\square$
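
For a toy version of this computation, suppose (much more strongly than the axioms require) that $m = 1$, $a(1) = 1$, and $\lvert a(n) \rvert \leq 1$ for all $n$. Then for real $\sigma \geq 3$, \begin{equation} \sum_{n \geq 2} \frac{\lvert a(n) \rvert}{n^{\sigma}} \leq \frac{1}{2^{\sigma}} + \int_2^\infty \frac{dx}{x^{\sigma}} = \frac{1}{2^{\sigma}} + \frac{2^{1 - \sigma}}{\sigma - 1} \leq \frac{1}{8} + \frac{1}{8} < \frac{1}{2}, \end{equation}so $B = 3$ would do. In general the admissible $B$ depends on $m$, on $a(m)$, and on the implied constant in the Ramanujan–Petersson bound.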

This is not a particularly good bound, but I don't know a general way to do much better when estimating the size of a strip containing the zeros.

Nonetheless, in general there is a strip containing the zeros and it is thus meaningful to count the number of nontrivial zeros up to height $T$ in the strip (as is typically done for $\zeta(s)$). To count the number of zeros in the strip, we'll actually count $N(T)$, the number of zeros (counted with multiplicity) of $L(s)$ in a disk of radius $T$ centered at the origin. This differs from the count of zeros of height up to $T$, but the difference will be dominated by other error terms.

We apply Jensen's Formula (see my previous note for a proof) to conclude two classical results on $N(T)$: we bound the number of zeros and the multiplicity of zeros.

Let $f:\mathbb{C} \longrightarrow \mathbb{C}$ be a holomorphic function in $\lvert s \rvert \leq R$ with no zeros on $\lvert s \rvert = R$, and such that $f(0) \neq 0$. Then \begin{equation} \frac{1}{2\pi} \int_0^{2\pi} \log \lvert f(Re^{i \theta}) \rvert d \theta= \log \lvert f(0) \rvert + \int_0^R \frac{n(r)}{r} dr, \end{equation}where $n(r)$ is the number of zeros of $f(s)$ inside the circle $\lvert s \rvert = r$.
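
Here is a small numerical sanity check of Jensen's Formula (assuming Python with mpmath); the polynomial and its zeros below are arbitrary choices, and the point is only that $\int_0^R n(r) r^{-1} \, dr$ collapses to $\sum_{\lvert \rho \rvert < R} \log(R/\lvert \rho \rvert)$ when the zeros are known.

```python
# Verify Jensen's formula numerically for a polynomial with known zeros.
from mpmath import mp, mpc, quad, exp, log, fabs, pi

mp.dps = 25

zeros = [mpc(0.3, 0.4), mpc(-0.7, 0.1), mpc(1.5, -2.0)]  # arbitrary choices
R = 2  # the zero at 1.5 - 2i lies outside |s| = R and should not contribute

def f(s):
    """A polynomial vanishing exactly at the chosen zeros, with f(0) != 0."""
    value = mpc(1)
    for rho in zeros:
        value *= (s - rho)
    return value

# Left-hand side: the average of log|f| over the circle |s| = R.
lhs = quad(lambda t: log(fabs(f(R * exp(mpc(0, 1) * t)))), [0, 2 * pi]) / (2 * pi)

# Right-hand side: log|f(0)| plus the integral of n(r)/r, which equals
# the sum of log(R/|rho|) over the zeros rho inside the circle.
rhs = log(fabs(f(0))) + sum(log(R / fabs(rho)) for rho in zeros if fabs(rho) < R)

print(lhs, rhs)  # the two values agree to the working precision
```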

Jensen's formula relates local information on the number and distribution of zeros to the size of the function. The content of these results is that functions of order 1 don't grow fast enough to have lots of zeros.

For $T \geq 2$, we have that \begin{align*}N(T) &\ll T \log T, \\ N(T+1) - N(T) &\ll \log T. \end{align*}

We prove these two claims separately in two different applications of Jensen's Formula.

The zeros of $\Lambda(s)$ are exactly the zeros of $L(s)$, with the exception of $O(T)$ trivial zeros that cancel poles from the product of Gamma functions in the functional equation. Thus it suffices to provide an upper bound for the number of zeros of $\Lambda(s)$ instead of considering $L(s)$ directly.

We apply Jensen's Formula with $f(s) = \Lambda(s)$, taking $R = 2T$, giving \begin{align}N(T) &\ll n(R/2) \log 2 = n(R/2) \int_{R/2}^R r^{-1} dr \leq \int_{R/2}^R \frac{n(r)}{r} dr \\&\ll \int_0^{2 \pi} \sup_{\lvert s \rvert = R} \log \lvert \Lambda(s) \rvert \; d\theta \ll \int_0^{2\pi} R \log R \; d\theta \ll R \log R \\&\ll T \log T. \end{align}We used the estimate $\log \lvert \Lambda(s) \rvert \ll \lvert s \rvert (\log \lvert s \rvert + 1)$ from the proof of $\Lambda(s)$ being integral of order $1$.

If by coincidence it happens that $\Lambda(0) = 0$, so that Jensen's Formula doesn't apply directly, we perturb slightly and choose $s_0$ near $0$ such that $\Lambda(s_0) \neq 0$ and apply Jensen's Formula to the function $f(s) = \Lambda(s + s_0)$.

This shows the first statement in the theorem. We now prove the second statement.

We again define $L_m(s)$ to handle the possible vanishing of the first coefficient. Suppose $a(m)$ is the first nonzero coefficient of $L(s)$ and define \begin{equation}L_m(s) = \frac{m^s}{a(m)} L(s) = 1 + \frac{m^s}{a(m)} \sum_{n > m} \frac{a(n)}{n^s}. \end{equation}There exists a constant $C \geq 1$ such that for any $s$ with $\Re s > C$, we have that \begin{equation} \frac{m^{\Re s}}{\lvert a(m) \rvert} \sum_{n > m} \frac{ \lvert a(n) \rvert}{n^{\Re s}}< \frac{1}{2}. \end{equation}We also fix $A$ and $B$ such that all zeros of $L_m(s)$ lie in the strip $A < \Re s < B$. After possibly reducing $A$ and increasing $C$, we assume that $C \geq B + 1 \geq A + 2$.

Consider $s = C + iT$ for some large $T$. Let $n(r)$ denote the number of zeros of $L_m(s)$ inside the circle centered at $C + iT$ with radius $r$. There is a radius $R = R(A, B, C)$ such that the disk centered at $C + iT$ with radius $R$ includes the rectangle $\{ s : A < \Re s < B, \; T \leq \Im s \leq T + 1 \}$. Note that this radius $R$ is independent of $T$. For $T$ sufficiently large, this circle doesn't include any trivial zero of $L_m(s)$. Without loss of generality, we assume $T$ is taken at least this large.

Let $N^+(T)$ count the number of zeros $\rho = \beta + i \gamma$ with $T > \gamma > 0$. Then necessarily we have \begin{equation}N^+(T + 1) - N^+(T) \leq n(R). \end{equation}Jensen's formula shows that \begin{equation} \int_0^{2R} \frac{n(r)}{r} dr= \frac{1}{2\pi} \int_0^{2\pi} \log \lvert L_m(C + iT + 2Re^{i \theta}) \rvert d\theta- \log \lvert L_m(C + iT) \rvert. \end{equation}As $\Re L_m(C + iT) \geq \frac{1}{2}$ and $L_m(C + iT)$ is uniformly bounded from above as $T$ ranges along the whole line, we see that $\big \lvert \log \lvert L_m(C + iT)\rvert \big \rvert$ is bounded above by a positive constant $K_1$.

Applying the convexity principle, we are guaranteed that there is a positive constant $K_2$ such that \begin{equation} \lvert L(s) \rvert \leq (2 + \lvert t \rvert)^{K_2} \end{equation}for all $s = \sigma + it$ with $C - 2R < \Re s < C + 2R$; since $\lvert m^s / a(m) \rvert$ is bounded in this strip, the same bound (with a possibly larger $K_2$) holds for $L_m(s)$. Thus $\log \lvert L_m(C + iT + 2R e^{i \theta}) \rvert \ll \log(2 + \lvert T \rvert)$, and thus \begin{equation} \int_0^{2R} \frac{n(r)}{r} dr \ll \log(2 + \lvert T \rvert) + K_1 \ll \log T. \end{equation}Finally we have that \begin{equation}n(R) \log 2 = n(R) \int_R^{2R} \frac{dr}{r} \leq \int_R^{2R} \frac{n(r)}{r} dr \leq \int_0^{2R} \frac{n(r)}{r} dr \ll \log T. \end{equation}An analogous argument applies to the similarly defined $N^-(T)$, completing the proof. $\square$
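
As a sanity check against the classical case, the corresponding counts for the Riemann zeta function behave exactly as the theorem predicts. ($\zeta(s)$ has a pole and so isn't literally in $\widetilde{S}$, but it is the model for these bounds.) The sketch below assumes Python with mpmath, whose nzeros routine counts zeros of $\zeta(s)$ up to a given height.

```python
# Compare zero counts for the Riemann zeta function against the bounds
# N(T) << T log T and N(T+1) - N(T) << log T from the theorem above.
from mpmath import mp, nzeros, log

mp.dps = 15

for T in [100, 500, 1000]:
    N_T = nzeros(T)                      # zeros with 0 < Im(s) <= T
    in_next_unit = nzeros(T + 1) - N_T   # zeros with T < Im(s) <= T + 1
    print(T, N_T, float(N_T / (T * log(T))), in_next_unit, float(log(T)))
# The ratio N(T) / (T log T) stays bounded (tending slowly toward 1/(2*pi)),
# and the count in the unit interval just above T is well below log(T).
```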

It is a good test of understanding to consider why the two applications of Jensen's formula to prove the last theorem were different. In the first, we applied Jensen's formula to $\Lambda(s)$ and got a bound $T \log T$ on the number of zeros. In the second, we applied Jensen's formula to the uncompleted $L(s)$ directly and got a much better bound.

The major difference comes in the horizontal extents of the disks considered. The uncompleted $L$-function $L(s)$ grows significantly as $\Re s \to -\infty$, complicating efforts to bound it. This is why we use $\Lambda(s)$, whose functional equation reflects it into itself.

Additional Remarks

I think that there will be two more notes in this series. In the next note, we'll begin to focus our attention more closely on the Dirichlet series associated to half-integral weight modular cuspforms.

