mixedmath

Explorations in math and programming
David Lowry-Duda



I was recently examining a technical hurdle in my project on “Uniform bounds for lattice point counting and partial sums of zeta functions” with Takashi Taniguchi and Frank Thorne. There is a version on the arXiv, but it currently has a mistake in its handling of bounds for small $X$.

In this note, I describe an aspect of this paper that I found surprising. In fact, I’ve found it continually surprising, as I’ve now reproven it to myself three times, I think. By writing this here and in my note system, I hope to remember it better.

Landau’s Method

In this paper, we revisit an application of “Landau’s Method” to estimate partial sums of coefficients of Dirichlet series. We model this paper on an earlier application by Chandrasekharan and Narasimhan, except that we explicitly track the dependence on the several implicit constants, and we prove these results uniformly for all partial sums, as opposed to only sufficiently large partial sums.

The only structure is that we have a Dirichlet series $\phi(s)$, some Gamma factors $\Delta(s)$, and a functional equation of the shape $\phi(s)\Delta(s) = \psi(\delta - s)\Delta(\delta - s)$. This is relatively structureless, and correspondingly our attack is very general. We use some smoothed approximation to the sum of coefficients, shift lines of integration to pick up polar main terms, apply the functional equation and change variables so as to work with the dual, and then get some collection of error terms and error integrals.

It happens to be that it’s much easier to work with a $k$-Riesz smoothed approximation. That is, if $\phi(s) = \sum_{n \geq 1} a(n) \lambda_n^{-s}$ is our Dirichlet series, and we are interested in the partial sums $A_0(X) = \sum_{\lambda_n \leq X} a(n)$, then it happens to be easier to work with the smoothed approximations $$A_k(X) = \frac{1}{\Gamma(k+1)} \sum_{\lambda_n \leq X} a(n) (X - \lambda_n)^k,$$ and to somehow combine several of these smoothed sums together.
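To see concretely what the smoothing buys, here is a small numerical sketch of my own (not from the paper) in the toy case $\phi(s) = \zeta(s)$, so that $a(n) = 1$ and $\lambda_n = n$: the sharp sum $A_0$ jumps at each integer, while the $1$-Riesz smoothed sum $A_1$ is continuous there.

```python
from math import floor

def A0(X):
    # sharp partial sum for zeta: a(n) = 1, lambda_n = n
    return floor(X) if X >= 1 else 0

def A1(X):
    # 1-Riesz smoothing: (1/Gamma(2)) * sum_{n <= X} (X - n)
    return sum(X - n for n in range(1, floor(X) + 1))

eps = 1e-6
jump_sharp = A0(3 + eps) - A0(3 - eps)    # the sharp sum jumps by 1
jump_smooth = A1(3 + eps) - A1(3 - eps)   # the smoothed sum barely moves
print(jump_sharp, jump_smooth)
```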

This smoothed sum is recognizable as $$A_k(X) = \frac{1}{2\pi i} \int_{c - i\infty}^{c + i\infty} \phi(s) \frac{\Gamma(s)}{\Gamma(s + k + 1)} X^{s + k} \, ds$$ for $c$ somewhere in the half-plane of convergence of the Dirichlet series. As $k$ gets large, these integrals become better behaved. In application, one takes $k$ sufficiently large to guarantee desired convergence properties.
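As a sanity check on this contour representation (a numerical sketch of my own, with illustrative parameter choices), one can verify it for $\zeta(s)$ with $k = 2$, where $\Gamma(s)/\Gamma(s+3) = 1/(s(s+1)(s+2))$, by truncating the integral at a large height:

```python
from mpmath import mp, mpc, zeta, quad, linspace

mp.dps = 15
X, k, c, T = 10.5, 2, 2.0, 400

def integrand(t):
    # phi(s) Gamma(s)/Gamma(s + k + 1) X^(s + k) along s = c + it,
    # using Gamma(s)/Gamma(s + 3) = 1/(s(s+1)(s+2)) for k = 2
    s = mpc(c, t)
    return (zeta(s) * X**(s + k) / (s * (s + 1) * (s + 2))).real

# truncate at |Im s| = T and split into short pieces so the
# oscillation of X^(it) is resolved by the quadrature
integral = quad(integrand, linspace(-T, T, 161)) / (2 * mp.pi)

# direct evaluation of the 2-Riesz smoothed sum: a(n) = 1, lambda_n = n
direct = sum((X - n)**2 for n in range(1, int(X) + 1)) / 2
print(float(integral), float(direct))
```

The two printed values agree up to the truncation error, which the rapid decay $\Gamma(s)/\Gamma(s+3) \ll |t|^{-3}$ keeps small.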

The process of taking several of these smoothed approximations for large $k$ together, studying them through basic functional equation methods, and combinatorially combining these smoothed approximations via finite differencing to get good estimates for the sharp sum $A_0(X)$ is roughly what I think of as “Landau’s Method”.
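The combinatorial heart is a sandwich: with the normalization above, the $k$-th derivative of $A_k$ is $A_0$, so writing $\Delta_y^k A_k(X)$ as a $k$-fold integral of $A_0$ over $[0, y]^k$ gives, for nonnegative coefficients, $y^k A_0(X) \leq \Delta_y^k A_k(X) \leq y^k A_0(X + ky)$. A quick numerical check of my own (again in the toy case $a(n) = 1$, $\lambda_n = n$):

```python
from math import comb, floor, gamma

def A(k, X):
    # k-Riesz smoothed partial sum for a(n) = 1, lambda_n = n
    return sum((X - n)**k for n in range(1, floor(X) + 1)) / gamma(k + 1)

def finite_diff(F, k, y, X):
    # Delta_y^k F(X) = sum_j (-1)^(k - j) C(k, j) F(X + j y)
    return sum((-1)**(k - j) * comb(k, j) * F(X + j * y) for j in range(k + 1))

k, X, y = 3, 10.3, 0.7
D = finite_diff(lambda t: A(k, t), k, y, X)
# monotonicity sandwich: y^k A_0(X) <= Delta_y^k A_k(X) <= y^k A_0(X + k y)
lower, upper = y**k * A(0, X), y**k * A(0, X + k * y)
print(lower, D, upper)
```

Taking $y$ small then recovers $A_0(X)$ from the smoothed sums, which is the point of the finite differencing.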

Application and shape of the error

In our paper, as we apply Landau’s method, it becomes necessary to understand certain bounds coming from the dual Dirichlet series $\psi(s) = \sum_{n \geq 1} b(n) \mu_n^{-s}$. Specifically, it works out that the (combinatorially finite differenced) difference between the $k$-smoothed sum $A_k(X)$ and its $k$-smoothed main term $S_k(X)$ can be written as $$\Delta_y^k \left[ A_k(X) - S_k(X) \right] = \sum_{n \geq 1} \frac{b(n)}{\mu_n^{\delta + k}} \Delta_y^k I_k(\mu_n X), \tag{1}$$ where $\Delta_y^k$ is a finite differencing operator that we should think of as a sum of several shifts of its input function.

More precisely, $\Delta_y F(X) := F(X + y) - F(X)$, and iterating gives $$\Delta_y^k F(X) = \sum_{j=0}^{k} (-1)^{k-j} \binom{k}{j} F(X + jy).$$ The $I_k(\cdot)$ term on the right of (1) is an inverse Mellin transform $$I_k(t) = \frac{1}{2\pi i} \int_{c - i\infty}^{c + i\infty} \frac{\Gamma(\delta - s)}{\Gamma(k + 1 + \delta - s)} \frac{\Delta(s)}{\Delta(\delta - s)} t^{\delta + k - s} \, ds.$$ Good control for this inverse Mellin transform yields good control of the error for the overall approximation. Via the method of finite differencing, there are two basic choices: either bound $I_k(t)$ directly, or understand bounds for $(\mu_n y)^k I_k^{(k)}(t)$ for $t \approx \mu_n X$. Here, $I_k^{(k)}(t)$ means the $k$th derivative of $I_k(t)$.
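Two small facts about this operator are easy to check numerically (a sketch of my own): iterating $\Delta_y$ really does give the binomial-coefficient sum above, and $\Delta_y^k$ annihilates polynomials of degree less than $k$ while sending $X^k$ to $k!\,y^k$.

```python
from math import comb

def diff_binomial(F, k, y, X):
    # Delta_y^k F(X) via the binomial expansion
    return sum((-1)**(k - j) * comb(k, j) * F(X + j * y) for j in range(k + 1))

def diff_iterated(F, k, y, X):
    # apply Delta_y (a single forward difference) k times
    for _ in range(k):
        F = (lambda G: (lambda t: G(t + y) - G(t)))(F)
    return F(X)

k, y, X = 4, 0.5, 2.0
a = diff_iterated(lambda t: t**6, k, y, X)
b = diff_binomial(lambda t: t**6, k, y, X)   # same value as a

low = diff_binomial(lambda t: t**3, k, y, X)  # degree < k: annihilated
top = diff_binomial(lambda t: t**4, k, y, X)  # degree k: gives k! y^k
print(a, b, low, top)
```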

Large input errors

In the classical application (as in the paper of CN), one worries about this asymptotic mostly as $t \to \infty$. In this region, $I_k(t)$ can be well-approximated by a $J$-Bessel function, which is sufficiently well understood in large argument to give good bounds. Similarly, $I_k^{(k)}(t)$ can be contour-shifted in a way that still ends up being well-approximated by $J$-Bessel functions.
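For concreteness (my own numerical aside, not from the paper), the relevant large-argument behavior is the standard asymptotic $J_\nu(t) \approx \sqrt{2/(\pi t)}\,\cos(t - \nu\pi/2 - \pi/4)$, which is already quite accurate by moderate $t$:

```python
from mpmath import mp, besselj, sqrt, cos, pi

mp.dps = 15
nu, t = 2, 50.0
exact = besselj(nu, t)
# leading term of the standard large-argument asymptotic expansion;
# the error is O(t^(-3/2))
approx = sqrt(2 / (pi * t)) * cos(t - nu * pi / 2 - pi / 4)
print(float(exact), float(approx))
```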

The shape of the resulting bounds ends up being that $\Delta_y^k I_k(\mu_n X)$ is bounded by either
$$(\mu_n X)^{\alpha + k(1 - \frac{1}{2A})} \quad \text{or} \quad (\mu_n y)^k (\mu_n X)^{\beta}.$$

In both, there is a certain $k$-dependence that comes from the $k$-th Riesz smoothing factors, either directly (from $(\mu_n y)^k$), or via its corresponding inverse Mellin transform (in the bound from $I_k(t)$). But these are the only aspects that depend on $k$.

At this point in the classical argument, one determines when one bound is better than the other, and this happens to be something that can be done exactly, and (surprisingly) independently of $k$. Using this pair of bounds and examining what comes out the other side gives the original result.

Small input errors

In our application, we also worry about the asymptotic as $t \to 0$. While it may still be true that $I_k$ can be approximated by a $J$-Bessel function, the “well-known” asymptotics for the $J$-Bessel function behave substantially worse for small argument. Thus different methods are necessary.
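One way to see the breakdown (my own illustration): the standard large-argument asymptotic $J_\nu(t) \approx \sqrt{2/(\pi t)}\,\cos(t - \nu\pi/2 - \pi/4)$ works well at $t = 50$ but is wildly wrong at $t = 0.2$, where $J_\nu(t)$ actually behaves like $(t/2)^\nu / \Gamma(\nu + 1)$ and so tends to $0$, while the asymptotic formula blows up like $t^{-1/2}$.

```python
from mpmath import mp, besselj, sqrt, cos, pi, gamma

mp.dps = 15
nu = 2

def large_arg(t):
    # leading large-argument asymptotic for J_nu(t)
    return sqrt(2 / (pi * t)) * cos(t - nu * pi / 2 - pi / 4)

err_large = abs(besselj(nu, 50.0) - large_arg(50.0))  # tiny
err_small = abs(besselj(nu, 0.2) - large_arg(0.2))    # order 1: useless
# near zero the true leading behavior is (t/2)^nu / Gamma(nu + 1) instead
lead_small = (0.2 / 2)**nu / gamma(nu + 1)
print(float(err_large), float(err_small), float(lead_small))
```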

It turns out that $I_k$ can be approximated in a relatively trivial way for $t \leq 1$, so the only remaining hurdle is $I_k^{(k)}(t)$ as $t \to 0$.

We’ve proved a variety of different bounds that hold in slightly different circumstances. And for each sort of bound, the next steps would be the same as before: determine when each bound is better, bound by absolute values, sum together, and then choose the various parameters to best shape the final result.

But unlike before, the boundary between the regions where $I_k$ is best bounded directly or best bounded via $I_k^{(k)}$ depends on $k$. Aside from needing $k$ sufficiently large for convergence properties (which relate to the locations of poles and growth properties of the Dirichlet series and gamma factors), any sufficiently large $k$ would suffice.

Limiting behavior gives a heuristic region

After I step away from this paper and argument for a while and come back, I wonder about the right way to choose the balancing error. That is, I rework when to use bounds coming from studying $I_k(t)$ directly vs bounds coming from studying $I_k^{(k)}(t)$.

But it turns out that there is always a reasonable heuristic choice. Further, this heuristic gives the same choice of balancing as in the case when $t \to \infty$ (although this is not the source of the heuristic).

Making these bounds will still give bounds for $\Delta_y^k I_k(\mu_n X)$ of shape
$$(\mu_n X)^{\alpha + k(1 - \frac{1}{2A})} \quad \text{or} \quad (\mu_n y)^k (\mu_n X)^{\beta}.$$

The actual bounds for $\alpha$ and $\beta$ will differ between the case of small $\mu_n X$ and large $\mu_n X$ ($J$-Bessel asymptotics for large, different contour shifting analysis for small), but in both cases it turns out that $\alpha$ and $\beta$ are independent of $k$.

This is relatively easy to see when bounding $I_k^{(k)}(t)$, as repeatedly differentiating under the integral shows essentially that $$I_k^{(k)}(t) = \frac{1}{2\pi i} \int \frac{\Delta(s)}{(\delta - s)\Delta(\delta - s)} t^{\delta - s} \, ds.$$ (I’ll note that the contour does vary with $k$ in a certain way that doesn’t affect the shape of the result for $t \to 0$.)

When balancing the error terms $(\mu_n X)^{\alpha + k(1 - \frac{1}{2A})}$ and $(\mu_n y)^k (\mu_n X)^{\beta}$, the heuristic comes from taking arbitrarily large $k$. As $k \to \infty$, the point where the two error terms balance is independent of $\alpha$ and $\beta$.
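In symbols (a sketch of my own of this computation): writing $u = \log(\mu_n X)$ and $v = \log(\mu_n y)$, balancing means $(\alpha + k(1 - \tfrac{1}{2A}))u = kv + \beta u$, so $v = (1 - \tfrac{1}{2A})u + (\alpha - \beta)u/k$, and the dependence on $\alpha$ and $\beta$ dies like $1/k$:

```python
import sympy as sp

k, A, alpha, beta, u, v = sp.symbols('k A alpha beta u v', positive=True)

# balance the logs of the two error terms:
# (alpha + k(1 - 1/(2A))) u  =  k v + beta u
balance = sp.Eq((alpha + k * (1 - 1 / (2 * A))) * u, k * v + beta * u)
v_bal = sp.solve(balance, v)[0]

# as k -> oo, the balancing point forgets alpha and beta entirely
v_lim = sp.limit(v_bal, k, sp.oo)
# v_lim equals u*(1 - 1/(2A)), i.e. y = X (mu_n X)^(-1/(2A))
print(sp.simplify(v_lim))
```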

This reasoning applies to the case when $\mu_n X \to \infty$ as well, and gives the same point. Coincidentally, the actual $\alpha$ and $\beta$ values we proved for $\mu_n X \to \infty$ perfectly cancel in practice, so this limiting argument is not necessary — but it does still apply!

I suppose it might be possible to add another parameter to tune in the final result — a parameter measuring deviation from the heuristic, that can be refined for any particular error bound in a region of particular interest.

But we haven’t done that.

In fact, we were slightly lossy in how we bounded $I_k^{(k)}(t)$ as $t \to 0$, and (for complicated reasons that I’ll probably also forget and reprove to myself later) the heuristic choice assuming $k \to \infty$ and our slightly lossy bound introduce the same order of imprecision to the final result.

More coming soon

We’re updating our preprint and will have that up soon. But as I’ve been thinking about this a lot recently, I realize there are a few other things I should note down. I intend to write more on this in the near future.

