This is the third note in a series on zeros of Dirichlet series, written with an eye towards examining zeros of Dirichlet series not in the Selberg class. See also the first note and second note in this series.
We continue to study Dirichlet series in the extended Selberg class $\widetilde{S}$, which we write as \begin{equation*} L(s) = \sum_{n \geq 1} \frac{a(n)}{n^s}. \end{equation*} We recall that each such $L$ is assumed to be nontrivial, satisfies a bound of Ramanujan–Petersson type on average, has analytic continuation to an entire function of finite order, and satisfies a functional equation, normalized to be of shape $s \mapsto 1 - s$, of the form \begin{equation*} \Lambda(s) := L(s) Q^s \prod_{\nu = 1}^N \Gamma(\alpha_\nu s + \beta_\nu) = \omega \overline{\Lambda(1 - \overline{s})}. \end{equation*} It will be convenient to let $\Delta(s) = \prod \Gamma(\alpha_\nu s + \beta_\nu)$ denote the collected gamma factors.
Note that we do not assume that $L$ or $\Lambda$ has an Euler product.
In Zeros II we showed that being an integral function of finite order and having a functional equation is sufficient to guarantee that the order of the function is at most $1$ (and that being nontrivial is enough to show the order is exactly $1$).
The Weierstrass factorization theorem shows that every entire function can be represented as a product involving its zeros, and the Hadamard factorization theorem implies that this product is particularly nice for functions of finite order. When the order is $1$, Hadamard's theorem implies the following.
There exist constants $a$ and $b$ such that \begin{equation} \Lambda(s) = s^r e^{a + bs} \prod_{\substack{\rho \neq 0 \\ \Lambda(\rho) = 0}} (1 - \tfrac{s}{\rho}) e^{s / \rho}, \end{equation} where $r$ is the order of the zero of $\Lambda(s)$ at $s = 0$. Here, $\rho$ ranges over the zeros of $\Lambda(s)$ different from $0$. Further, we have that \begin{equation} -\frac{L'(s)}{L(s)} = \log Q + \frac{\Delta'(s)}{\Delta(s)} - \frac{r}{s} - b - \sum_{\rho \neq 0} \left( \frac{1}{s - \rho} + \frac{1}{\rho} \right). \end{equation}
This is essentially Theorem 5.6 in Analytic Number Theory by Iwaniec and Kowalski, applied here. There are minor cosmetic differences with their theorem based on whether we allow $\Lambda(s)$ to have a finite order pole at $s = 1$. The actual proof follows from statements of Hadamard's factorization theorem (see for example Chapter XI of Conway's Functions of One Complex Variable I) and then taking logarithmic derivatives.
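To spell out the logarithmic derivative step: the product representation gives \begin{equation*} \frac{\Lambda'(s)}{\Lambda(s)} = \frac{r}{s} + b + \sum_{\rho \neq 0} \left( \frac{1}{s - \rho} + \frac{1}{\rho} \right), \end{equation*} while the definition $\Lambda(s) = L(s) Q^s \Delta(s)$ gives \begin{equation*} \frac{\Lambda'(s)}{\Lambda(s)} = \frac{L'(s)}{L(s)} + \log Q + \frac{\Delta'(s)}{\Delta(s)}. \end{equation*} Comparing the two expressions and solving for $-L'(s)/L(s)$ yields the second formula above.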
Aesthetically, this is perhaps more fundamental than the basic counting results from the second note. The order of exposition in Conway's book on complex analysis shows how this type of crude zero estimate follows very naturally from considering the Hadamard factorization (and similar applications of Jensen's formula as shown previously).
But I note it explicitly here in order to contrast the tools available for Dirichlet series in the extended Selberg class with those available for $L$-functions with an Euler product.
For an $L$-function with Euler product \begin{equation*} L(s, g) := \sum_{n \geq 1} \frac{b(n)}{n^s} = \prod_p L_p(s, g), \end{equation*} the logarithmic derivative $L'(s, g) / L(s, g)$ is closely related to sums of the coefficients over primes. To be slightly more precise, write \begin{equation*} -\frac{L'(s, g)}{L(s, g)} = \sum_{n \geq 1} \frac{\Lambda_g(n)}{n^s}. \end{equation*} Then taking the logarithmic derivative of the Euler product and comparing coefficients shows that the $\Lambda_g(n)$ are supported on prime powers. Further, under standard assumptions on good behavior in the Euler product, one can show that \begin{equation*} \sum_{n \leq X} \Lambda_g(n) \approx \sum_{p \leq X} b(p) \log p, \end{equation*} or rather that the sum concentrates its mass on the prime-indexed terms.
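To make the prime power support concrete in the simplest case: for $\zeta(s) = \prod_p (1 - p^{-s})^{-1}$, the logarithmic derivative of a single Euler factor is \begin{equation*} -\frac{d}{ds} \log \left( 1 - p^{-s} \right)^{-1} = \frac{p^{-s} \log p}{1 - p^{-s}} = \sum_{k \geq 1} \frac{\log p}{p^{ks}}, \end{equation*} so that $\Lambda_\zeta(n) = \log p$ when $n = p^k$ is a prime power and $\Lambda_\zeta(n) = 0$ otherwise; this is the classical von Mangoldt function.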
For the Riemann zeta function $\zeta(s)$, the corresponding Hadamard factorization and a Perron integral show the explicit formula, stating that \begin{align*} \sum_{p^k \leq X} \log p = \sum_{n \leq X} \Lambda_{\zeta}(n) &= \frac{1}{2 \pi i} \int_{(2)} \left( - \frac{\zeta'(s)}{\zeta(s)} \right) \frac{X^s}{s} ds \\ &= X - \sum_{\substack{\rho \\ 0 < \mathrm{Re} \rho < 1 \\ \zeta(\rho) = 0}} \frac{X^\rho}{\rho} - \log(2 \pi) - \frac{1}{2} \log(1 - X^{-2}). \end{align*} At points of discontinuity, when $X$ is exactly a power of a prime, the sums are understood to represent the average of the values across the discontinuity. This broad type of result, linking zeros of a nice Dirichlet series to sums across primes, is essentially what gave birth to the field of analytic number theory; it is used in many analytic proofs of the prime number theorem.
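This formula is concrete enough to check numerically. The following sketch is mine and not part of the original argument; it assumes mpmath and sympy are installed, and the helper names `chebyshev_psi` and `psi_from_zeros` are just for illustration. It compares the prime power sum directly with the explicit formula truncated to the first hundred pairs of zeros; the two values should agree reasonably well, and the agreement improves as more zeros are included.

```python
# Numerical sanity check of the truncated explicit formula for psi(X).
# Illustrative sketch only; assumes mpmath and sympy are available.
from mpmath import mp, mpf, log, pi, zetazero
from sympy import primerange

mp.dps = 30

def chebyshev_psi(X):
    # Direct computation: sum of log p over prime powers p^k <= X.
    total = mpf(0)
    for p in primerange(2, int(X) + 1):
        pk = p
        while pk <= X:
            total += log(p)
            pk *= p
    return total

def psi_from_zeros(X, num_zeros=100):
    # Explicit formula, truncated to the first num_zeros pairs of zeros.
    X = mpf(X)
    total = X - log(2 * pi) - log(1 - X**(-2)) / 2
    for n in range(1, num_zeros + 1):
        rho = zetazero(n)                 # zero with positive imaginary part
        total -= 2 * (X**rho / rho).real  # rho and its conjugate together
    return total

X = 1000.5  # chosen away from prime powers to avoid the discontinuity
print(chebyshev_psi(X), psi_from_zeros(X))
```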
But for Dirichlet series without an Euler product, there is no fundamental relationship to primes. We can still define coefficients \begin{equation*} -\frac{L'(s)}{L(s)} = \sum_{n \geq 1} \frac{\beta(n)}{n^s}, \end{equation*} and the Hadamard factorization theorem above (combined with a Perron-type analysis) will relate the partial sums \begin{equation*} \sum_{n \leq X} \beta(n) \end{equation*} to the zeros of the completed Dirichlet series $\Lambda(s)$. But generically the Riemann Hypothesis for Dirichlet series without Euler products is false. (Further, as there is no Euler product, it is frequently true that there are nontrivial zeros $\rho$ inside the region of absolute convergence. And as shown in the first note in this series, this necessarily implies that there are infinitely many zeros in the region of absolute convergence.) And thus the corresponding "prime number theorem" is typically false. Here the corresponding prime number theorem, in its weakest form, would state that \begin{equation*} \sum_{n \leq X} \beta(n) = r X + o(X), \end{equation*} where $r$ is the order of a potential pole at $s = 1$ (assumed to be $0$ for the series considered in these notes).
I don't know of any examples of $L$-functions (other than $\zeta(s)$ and Dirichlet $L$-functions $L(s, \chi)$) in the Selberg class whose prime number theorem has independent arithmetic application. But for analytic purposes it is often true that studying logarithmic derivatives of completed $L$-functions via their Euler products is a powerful method of deriving bounds. These enhanced bounds are not available in general for Dirichlet series in the extended Selberg class.
Almost Lindelöf Hypothesis on Average
This note has mostly focused on how the lack of an Euler product makes Dirichlet series in the extended Selberg class suboptimal objects of study. But all is not lost.
For a Dirichlet series $L \in \widetilde{S}$, define \begin{equation*} \mu(\sigma; L) = \limsup_{\lvert t \rvert \to \infty} \frac{\log \lvert L(\sigma + it) \rvert}{\log \lvert t \rvert}, \end{equation*} which is essentially the exponent of polynomial growth in $t$ on the line $\mathrm{Re}\, s = \sigma$. See Theorem 5 and Theorem 7 in the first note in this series for earlier discussion. It's known that $\mu$ is a convex function of $\sigma$, that $\mu(\sigma; L) = 0$ for $\sigma > 1$, and that $\mu(\sigma; L)$ is governed strongly by the gamma factors for $\sigma < 0$. In the strip $0 \leq \sigma \leq 1$, the behavior is mysterious; the Phragmén–Lindelöf convexity principle gives the standard convexity bound.
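For a Dirichlet series of degree $d_L := 2 \sum_\nu \alpha_\nu$ (the degree appears again below), this standard convexity bound takes the explicit shape \begin{equation*} \mu(\sigma; L) \leq \frac{d_L}{2}(1 - \sigma) \qquad (0 \leq \sigma \leq 1), \end{equation*} obtained by interpolating between $\mu(\sigma; L) = 0$ for $\sigma > 1$ and the growth $\mu(\sigma; L) = d_L(\tfrac{1}{2} - \sigma)$ for $\sigma < 0$ coming from Stirling's formula and the functional equation; in particular $\mu(\tfrac{1}{2}; L) \leq \tfrac{d_L}{4}$.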
For $L$-functions in the Selberg class, it is conjectured that $\mu(\tfrac{1}{2}; L) = 0$, or rather that \begin{equation*} \lvert L(\tfrac{1}{2} + it) \rvert \ll (1 + \lvert t \rvert)^\epsilon \end{equation*} for all $\epsilon > 0$. This is the Lindelöf Hypothesis. It is known (see the discussion around Theorem 5.19 and Corollary 5.20 in Iwaniec and Kowalski's Analytic Number Theory) that the Riemann Hypothesis (together with the Ramanujan–Petersson conjecture) implies the Lindelöf Hypothesis. Since Lindelöf is a consequence of Riemann, we might hope that it would be easier to prove, but folklore opinion tends to suggest that the only hope to prove Lindelöf is to prove Riemann (this is suggested in Iwaniec and Kowalski as well).
We will show that some Dirichlet series in the extended Selberg class satisfy the Lindelöf Hypothesis on average. Let us first state this more formally. Define $\widetilde{\mu}(\sigma; L)$ to be the infimum of all $\alpha$ such that \begin{equation} \frac{1}{T} \int_{-T}^T \lvert L(\sigma + it) \rvert \, dt \ll_\epsilon (1 + T)^{\alpha + \epsilon} \end{equation} holds for every $\epsilon > 0$. Just as $\mu$ is convex, one can show that $\widetilde{\mu}$ is a convex function of $\sigma$ (see Theorem 5 and Theorem 8 of Theorems concerning mean values of analytic functions by Hardy, Ingham, and Pólya, Proceedings of the Royal Society of London, Series A, 1927). Clearly $\widetilde{\mu}(\sigma; L) \leq \mu(\sigma; L)$, as $\widetilde{\mu}$ is an averaged form of $\mu$.
In the classical theory of Dirichlet series, Potter (The mean values of certain Dirichlet series, I, by H. S. A. Potter, Proceedings of the London Mathematical Society, 1940) proved the following theorem.
Suppose the two functions \begin{equation*} A(s) = \sum_{n \geq 1} \frac{a(n)}{n^s}, \qquad B(s) = \sum_{n \geq 1} \frac{b(n)}{n^s} \end{equation*} are of finite order, converge in some half-plane, and have all of their singularities contained in a rectangle of finite area. Further, assume that \begin{equation*} \sum_{n \leq X} \lvert a(n) \rvert^2 \ll X^{\beta + \epsilon}, \qquad \sum_{n \leq X} \lvert b(n) \rvert^2 \ll X^{\beta + \epsilon} \end{equation*} as $X \to \infty$, and that $A(s)$ and $B(s)$ satisfy \begin{equation*} A(s) = h(s) B(1 - s) \end{equation*} where \begin{equation*} h(s) \asymp \lvert t \rvert^{\gamma(\alpha/2 - \sigma)} \end{equation*} uniformly in $\sigma$ (for $\sigma$ restricted to any finite interval) as $\lvert t \rvert \to \infty$, and where $\alpha, \beta, \gamma$ are nonnegative constants. Then \begin{equation*} \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^T \lvert A(\sigma + it) \rvert^2 \, dt = \sum_{n \geq 1} \frac{\lvert a(n) \rvert^2}{n^{2 \sigma}} \end{equation*} for $\sigma > \max\{\tfrac{\alpha}{2}, \tfrac{1}{2}(\beta + 1) - \tfrac{1}{\gamma}\}$.
The Ramanujan–Petersson conjecture on average implies that one can take $\beta = 1$ for $L \in \widetilde{S}$. Applying Stirling's formula to the gamma factors (as given in Lemma 6 of Note I) shows that we can take $\alpha = 1$ and $\gamma = d_L = 2 \sum_{\nu = 1}^N \alpha_\nu$ (the degree of the Dirichlet series). Inserting these into Potter's result gives the following corollary.
Let $L \in \widetilde{S}$. For $\sigma > \max\{ \tfrac{1}{2}, 1 - \tfrac{1}{d_L} \}$, we have \begin{equation*} \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^T \lvert L(\sigma + it) \rvert^2 dt = \sum_{n \geq 1} \frac{\lvert a(n) \rvert^2}{n^{2 \sigma}}. \end{equation*}
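The range of $\sigma$ here is exactly Potter's condition specialized to these parameters: with $\alpha = \beta = 1$ and $\gamma = d_L$, \begin{equation*} \max\left\{ \frac{\alpha}{2},\ \frac{\beta + 1}{2} - \frac{1}{\gamma} \right\} = \max\left\{ \frac{1}{2},\ 1 - \frac{1}{d_L} \right\}. \end{equation*}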
For a concrete example, if $L$ is the Dirichlet series associated to an (integral or half-integral weight) modular form on $\mathrm{GL}(2)$, then $d_L = 2$ and it follows that \begin{equation} \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^T \lvert L(\tfrac{1}{2} + \epsilon + it) \rvert^2 dt = \sum_{n \geq 1} \frac{\lvert a(n) \rvert^2}{n^{1 + 2\epsilon}} < \infty \end{equation} for any $\epsilon > 0$. A straightforward application of Cauchy–Schwarz shows that $\widetilde{\mu}(\tfrac{1}{2} + \epsilon; L) = 0$ for all $\epsilon > 0$, implying by convexity that $\widetilde{\mu}(\tfrac{1}{2}; L) = 0$. I find this sufficiently interesting to emphasize it as its own theorem.
Let $L \in \widetilde{S}$ be a Dirichlet series with degree $d_L = 2$. Then $\widetilde{\mu}(\tfrac{1}{2}; L) = 0$, i.e. $L$ satisfies the Lindelöf Hypothesis in the $t$-aspect on average.
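For completeness, the Cauchy–Schwarz step referenced above is simply \begin{equation*} \frac{1}{2T} \int_{-T}^T \lvert L(\sigma + it) \rvert \, dt \leq \left( \frac{1}{2T} \int_{-T}^T \lvert L(\sigma + it) \rvert^2 \, dt \right)^{1/2}, \end{equation*} so boundedness of the second moment average at $\sigma = \tfrac{1}{2} + \epsilon$ forces boundedness of the first moment average there, giving $\widetilde{\mu}(\tfrac{1}{2} + \epsilon; L) = 0$.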
The second thesis problem that I took on involved proving that the $L$-functions of a family of half-integral weight modular forms, twisted by characters $\chi_d$, also satisfy Lindelöf on average in the character-conductor aspect. I didn't finish that thesis problem and haven't ever written down those details (much to my advisor's chagrin), but the result is true.
I find this interesting because it suggests that Lindelöf might be true for a family of Dirichlet series even though the Riemann Hypothesis is false. This would necessarily imply that Lindelöf is weaker than Riemann, and also emphasize that Lindelöf might be provable without appealing to any arithmetic behavior coming from an Euler product.