Category Archives: Math.NT

Paper Announcement: A Shifted Sum for the Congruent Number Problem

Tom Hulse, Chan Ieong Kuan, Alex Walker, and I have just uploaded a new paper to the arXiv titled A Shifted Sum for the Congruent Number Problem. In this charming, short paper, we investigate a particular sum of terms which are products of square-indicator functions and show that its asymptotics are deeply connected to congruent numbers. This note serves to describe and provide additional context for these results. (This note is also available as a pdf).

Congruent Numbers

We consider some triangles. There are many right triangles, such as the triangle with sides $(3, 4, 5)$ or the triangle with sides $(1, 1, \sqrt{2})$. We call a right triangle rational when all its side lengths are rational numbers. For illustration, $(3, 4, 5)$ is rational, while $(1, 1, \sqrt{2})$ is not. $\DeclareMathOperator{\sqfree}{sqfree}$

There is mythology surrounding rational right triangles. According to legend, the ancient Greeks, led both philosophically and mathematically by Pythagoras (who was the first person to call himself a philosopher and essentially the first to begin to distill and codify mathematics), believed all numbers and quantities were ratios of integers (rational). When a disciple of Pythagoras named Hippasus found that the side lengths of the right triangle $(1, 1, \sqrt{2})$ were not rational multiples of each other, the other followers of Pythagoras killed him by casting him overboard while at sea for having produced an element which contradicted the gods. (It is with some irony that we now attribute this as a simple consequence of the Pythagorean Theorem).

This mythology is uncertain, but what is certain is that even the ancient Greeks were interested in studying rational right triangles, and they began to investigate what we now call the Congruent Number Problem (CNP). By the year 972 the CNP appears in Arabic manuscripts in (essentially) its modern formulation. The CNP may be the oldest unresolved math problem.

We call a positive rational number $t$ congruent if there is a rational right triangle with area $t$. The triangle $(3,4,5)$ shows that $6 = 3 \cdot 4 / 2$ is congruent. The CNP is to describe all congruent numbers. Alternately, the CNP asks whether there is an algorithm to show definitively whether or not $t$ is a congruent number for any $t$.

We can reduce the problem to a statement about integers. If the rational number $t = p/q$ is the area of a triangle with legs $a$ and $b$, then the triangle with legs $aq$ and $bq$ has area $tq^2 = pq$. It follows that to every rational number there is an associated squarefree integer for which either both are congruent or neither are congruent. Further, if $t$ is congruent, then $ty^2$ and $t/y^2$ are congruent for any integer $y$.

We may also restrict to integer-sided triangles if we look for triangles whose area has squarefree part $t$. That is, if $t$ is the area of a triangle with rational legs $a/A$ and $b/B$, then $tA^2 B^2$ is the area of the triangle with integer legs $aB$ and $bA$.

It is in this form that we consider the CNP today.

Congruent Number Problem

Given a squarefree integer $t$, does there exist a triangle with integer side lengths such that the squarefree part of the area of the triangle is $t$?

We will write this description a lot, so for a triangle $T$ we introduce the notation
\begin{equation}
\sqfree(T) = \text{The squarefree part of the area of } T.
\end{equation}
For example, the area of the triangle $T = (6, 8, 10)$ is $24 = 6 \cdot 2^2$, and so $\sqfree(T) = 6$. We should expect this, as $T$ is exactly a doubled-in-size $(3,4,5)$ triangle, which also corresponds to the congruent number $6$. Note that this allows us to only consider primitive right triangles.
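To make this concrete, here is a minimal Python sketch (the helper name squarefree_part is ours, purely for illustration) that computes $\sqfree(T)$ from the legs of an integer right triangle by trial division.

def squarefree_part(n):
    # Return the squarefree part of the positive integer n by trial division.
    result = 1
    d = 2
    while d * d <= n:
        count = 0
        while n % d == 0:
            n //= d
            count += 1
        if count % 2 == 1:
            result *= d
        d += 1
    return result * n  # whatever remains is prime, hence squarefree

print(squarefree_part(3 * 4 // 2))   # 6: the (3, 4, 5) triangle has area 6
print(squarefree_part(6 * 8 // 2))   # 6: the (6, 8, 10) triangle has area 24 = 6 * 2^2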

Main Result

Let $\tau(n)$ denote the square-indicator function. That is, $\tau(n)$ is $1$ if $n$ is a square, and is $0$ otherwise. Then the main result of the paper is that the sum
\begin{equation}
S_t(X) := \sum_{m = 1}^X \sum_{n = 1}^X \tau(m-n)\tau(m)\tau(nt)\tau(m+n)
\end{equation}
is related to congruent numbers through the asymptotic
\begin{equation}
S_t(X) = C_t \sqrt X + O_t\Big( \log^{r/2} X\Big),
\end{equation}
where
\begin{equation}
C_t = \sum_{h_i \in \mathcal{H}(t)} \frac{1}{h_i}.
\end{equation}
Each $h_i$ is the hypotenuse of a primitive integer right triangle $T$ with $\sqfree(T) = t$. Each hypotenuse occurs in a pair of similar triangles $(a, b, h_i)$ and $(b, a, h_i)$; $\mathcal{H}(t)$ is the set of these hypotenuses, counting only one from each similar pair. The exponent $r$ in the error term is the rank of the elliptic curve
\begin{equation}
E_t(\mathbb{Q}): y^2 = x^3 - t^2 x.
\end{equation}

What this says is that $S_t(X)$ will have a main term if and only if $t$ is a congruent number, so that computing $S_t(X)$ for sufficiently large $X$ will show whether $t$ is congruent. (In fact, it’s easy to show that $S_t(X) \neq 0$ if and only if $t$ is congruent, so the added value here is the nature of the asymptotic).
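As a sanity check, $S_t(X)$ is simple enough to compute by brute force. Here is a short Python sketch (ours, not from the paper; we take $\tau(0) = 1$, since $0 = 0^2$ is a square):

from math import isqrt

def tau(n):
    # Square-indicator: 1 if n is a perfect square (including 0), else 0.
    if n < 0:
        return 0
    r = isqrt(n)
    return 1 if r * r == n else 0

def S(t, X):
    # Brute-force evaluation of the shifted sum S_t(X).
    return sum(tau(m - n) * tau(m) * tau(n * t) * tau(m + n)
               for m in range(1, X + 1) for n in range(1, X + 1))

print(S(6, 100))   # positive: (m, n) = (25, 24) encodes the progression 1, 25, 49
print(S(1, 100))   # 0: t = 1 is not a congruent number (a theorem of Fermat)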

We should be careful to note that this does not solve the CNP, since the error term depends in an inexplicit way on the desired number $t$. What this really means is that we do not have a good way of recognizing when the first nonzero term should occur in the double sum. We can only guarantee that for any $t$, understanding $S_t(X)$ for sufficiently large $X$ will allow one to understand whether $t$ is congruent or not.

Intuition and Methodology

There are four primary components to this result:

  1. There is a bijection between primitive integer right triangles $T$ with
    $\sqfree(T) = t$ and arithmetic progressions of squares $m^2 - tn^2, m^2,
    m^2 + tn^2$ (where each term is itself a square).
  2. There is a bijection between primitive integer right triangles $T$ with
    $\sqfree(T) = t$ and points on the elliptic curve $E_t(\mathbb{Q}): y^2 = x^3
    - t^2 x$ with $y \neq 0$.
  3. If the triangle $T$ corresponds to a point $P$ on the curve $E_t$, then
    the size of the hypotenuse of $T$ can be bounded below by $H(P)$, the
    (naive) height of the point on the elliptic curve.
  4. Néron (and perhaps Mordell, but I’m not quite fluent in the initial
    history of the theory of elliptic curves) proved strong (upper) bounds on
    the number of points on an elliptic curve up to a given height. (In fact,
    they proved asymptotics which are much stronger than we use).

In this paper, we use $(1)$ to relate triangles $T$ to the sum $S_t(X)$ and we use $(2)$ to relate these triangles to points on the elliptic curve. Tracking the exact nature of the hypotenuses through these bijections allows us to relate the sum to certain points on elliptic curves. In order to facilitate the tracking of these hypotenuses, we phrase these bijections in slightly different ways than have appeared in the literature. By $(3)$ and $(4)$, we can bound the number and size of the hypotenuses which appear in terms of numbers of points on the elliptic curve up to a certain height. Intuitively this is why the higher the rank of the elliptic curve (corresponding roughly to the existence of many more points on the curve), the worse the error term in our asymptotic.
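For instance, one classical normalization of the second bijection (written here for illustration; the paper phrases the bijections differently in order to track hypotenuses) sends a rational right triangle $(a, b, c)$ with area $t$ to the point $(x, y) = (t(a+c)/b, \; 2t^2(a+c)/b^2)$ on $E_t$:

from fractions import Fraction

def triangle_to_point(a, b, c):
    # Map a right triangle (a, b, c) with area t to a point on y^2 = x^3 - t^2 x.
    a, b, c = Fraction(a), Fraction(b), Fraction(c)
    t = a * b / 2
    x = t * (a + c) / b
    y = 2 * t**2 * (a + c) / b**2
    assert y**2 == x**3 - t**2 * x   # the point really lies on E_t
    return x, y

print(triangle_to_point(3, 4, 5))   # (12, 36) on y^2 = x^3 - 36 x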

I would further conjecture that the error term in our asymptotic is essentially best-possible, even though we have thrown away some information in our proof.

Additional Context

We are not the first to note either the bijection between triangles $T$ and arithmetic progressions of squares or between triangles $T$ and points on a particular elliptic curve. The first is surely an ancient observation, but I don’t know who first considered the relation to elliptic curves. But it’s certain that this was a fundamental aspect in Tunnell’s famous work A Classical Diophantine Problem and Modular Forms of Weight 3/2 from 1983, where he used the properties of the elliptic curve $E_t$ to relate the CNP to the Birch and Swinnerton-Dyer Conjecture.

One statement following from the Birch and Swinnerton-Dyer conjecture (BSD) is that if an elliptic curve $E$ has rank $r$, then the $L$-function $L(s, E)$ has a zero of order $r$ at $1$. The relation between lots of points on the curve and the existence of a zero is intuitive from the approximate relation that
\begin{equation}
L(1, E) \approx \lim_{X \to \infty} \prod_{p \leq X} \frac{p}{\#E(\mathbb{F}_p)},
\end{equation}
so if $E$ has lots and lots of points then we should expect the multiplicands to be very small.
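One can watch this heuristic numerically. The following rough sketch (naive point counting over $\mathbb{F}_p$; sympy is used only to generate primes; this is not a rigorous computation of $L(1, E)$) compares the partial products for a congruent and a non-congruent $t$:

from sympy import primerange

def count_points(t, p):
    # #E_t(F_p) for y^2 = x^3 - t^2 x, including the point at infinity.
    squares = {}
    for y in range(p):
        squares[y * y % p] = squares.get(y * y % p, 0) + 1
    return 1 + sum(squares.get((x**3 - t * t * x) % p, 0) for x in range(p))

def partial_product(t, X):
    # The partial product over good primes p <= X of p / #E_t(F_p).
    prod = 1.0
    for p in primerange(3, X + 1):
        if (2 * t) % p == 0:   # skip primes of bad reduction
            continue
        prod *= p / count_points(t, p)
    return prod

# t = 5 is congruent and t = 1 is not; the first product should drift toward
# zero while the second stays bounded away from it (the drift is very slow!).
print(partial_product(5, 1000), partial_product(1, 1000))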

On the other hand, the elliptic curve $E_t: y^2 = x^3 - t^2 x$ has the interesting property that any point with $y \neq 0$ generates a free (infinite) subgroup of the points on the curve. From the bijections alluded to above, a primitive integer right triangle $T$ with $\sqfree(T) = t$ corresponds to a point on $E_t$ with $y \neq 0$, and thus guarantees that there are lots of points on the curve. Tunnell showed that what I described as “lots of points” is actually enough points that $L(1, E)$ must be zero (assuming the relation between the rank of the curve and the value of $L(1, E)$ from BSD).

Tunnell proved that if BSD is true, then $L(1, E_t) = 0$ if and only if $t$ is a congruent number.

Yet for any elliptic curve we know how to compute $L(1, E)$ to guaranteed accuracy (for instance by using Dokchitser’s algorithm). Thus a corollary of Tunnell’s theorem is that BSD implies that there is an algorithm which can be used to determine definitively whether or not any particular integer $t$ is congruent.

This is the state of the art on the congruent number problem. Unfortunately, BSD (or even the weaker statement relating the vanishing of $L(1, E)$ to positive rank, which is all that Tunnell’s result for the CNP requires) is quite far from being proven.

In this context, the main result of this paper is not as effective at actually determining whether a number is congruent or not. But it does have the benefit of not relying on any unknown conjecture.

And there are some potential follow-up questions. The sum $S_t(X)$ appears as an integral transform of the multiple Dirichlet series
\begin{equation}
\sum_{m,n} \frac{\tau(m-n)\tau(m)\tau(nt)\tau(m+n)}{m^s n^w}
\approx
\sum_{m,n} \frac{r_1(m-n)r_1(m)r_1(nt)r_1(m+n)}{m^s n^w},
\end{equation}
where $r_1(n)$ is $1$ if $n = 0$ or $2$ if $n$ is a positive square, and $0$ otherwise. Then $r_1(n)$ appears as the Fourier coefficients of the half-integral weight standard theta function
\begin{equation}
\theta(z)
= \sum_{n \in \mathbb{Z}} e^{2 \pi i n^2 z}
= \sum_{n \geq 0} r_1(n) e^{2 \pi i n z},
\end{equation}
and $S_t(X)$ is a shifted convolution sum coming from some products of modular forms related to $\theta(z)$.

It may be possible to gain further understanding of the behavior of $S_t(X)$ (and therefore the congruent number problem) by studying the shifted convolution as coming from theta functions.

I would guess that there is a deep relation to Tunnell’s analysis in his 1983 paper, as in some sense he constructs appropriate products of three theta functions and uses them centrally in his proof. But I do not understand this relationship well enough yet to know whether it is possible to deepen our understanding of the CNP, BSD, or Tunnell’s proof. That is something to explore in the future.


Notes from a talk at Tufts, Automorphic Forms Workshop

On 19 March I gave a talk at the 32nd Automorphic Forms Workshop, which is hosted by Tufts this year. The content of the talk concerned counting points on hyperboloids, and in particular counting points on the three dimensional hyperboloid

$$\begin{equation}
X^2 + Y^2 = Z^2 + h
\end{equation}$$

for any fixed integer $h$. But thematically, I wanted to give another concrete example of using modular forms to compute some sort of arithmetic data, and to mention how the perhaps apparently unrelated topic of spectral theory appears even in such an arithmetic application.
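To emphasize how elementary the underlying question is, here is a naive Python sketch of the count (ours, for illustration; feasible only for small $R$):

from itertools import product
from math import isqrt

def hyperboloid_count(h, R):
    # Count integer points (X, Y, Z) on X^2 + Y^2 = Z^2 + h inside the
    # ball X^2 + Y^2 + Z^2 <= R, by brute force.
    B = isqrt(R)
    return sum(1 for X, Y, Z in product(range(-B, B + 1), repeat=3)
               if X * X + Y * Y == Z * Z + h and X * X + Y * Y + Z * Z <= R)

for R in [100, 400, 1600]:
    print(R, hyperboloid_count(1, R))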

Somehow, starting from counting points on $X^2 + Y^2 = Z^2 + h$ (which appears simple enough on its own that I could probably put this in front of an elementary number theory class and they would feel comfortable experimenting away on the topic), one gets to very scary-looking expressions like

$$\begin{equation}
\sum_{t_j}
\langle P_h^k, \mu_j \rangle
\langle \theta^2 \overline{\theta} y^{3/4}, \mu_j \rangle +
\sum_{\mathfrak{a}}\int_{(1/2)}
\langle P_h^k, E_h^k(\cdot, u) \rangle
\langle \theta^2 \overline{\theta} y^{3/4}, E_h^k(\cdot, u) \rangle du,
\end{equation}$$

which is full of lots of non-obvious symbols and is generically intimidating.

Part of the theme of this talk is to give a very direct idea of how one gets to the very complicated spectral expansion from the original lattice-counting problem. Stated differently, perhaps part of the theme is to describe a simple-looking nail and a scary-looking hammer, and show that the hammer actually works quite well in this case.

The slides for this talk are available here.


Slides from a talk at the Joint Math Meetings 2018

I’m in San Diego, and it’s charming here. (It’s certainly much nicer outside than the feet of snow in Boston. I’ve apparently brought some British rain with me, though).

Today I give a talk on counting lattice points on one-sheeted hyperboloids. These are the shapes described by
$$ X_1^2 + \cdots + X_{d-1}^2 = X_d^2 + h,$$
where $h > 0$ is a positive integer. The question is: how many lattice points $x$ are on such a hyperboloid with $| x |^2 \leq R$; or equivalently, how many lattice points are on such a hyperboloid and contained within a ball of radius $\sqrt R$ centered at the origin?

I describe my general approach of transforming this into a question about the behavior of modular forms, and then using spectral techniques from the theory of modular forms to understand this behavior. This becomes a question of understanding the shifted convolution Dirichlet series
$$ \sum_{n \geq 0} \frac{r_{d-1}(n+h)r_1(n)}{(2n + h)^s}.$$
Ultimately this comes from the modular form $\theta^{d-1}(z) \overline{\theta(z)}$, where
$$ \theta(z) = \sum_{m \in \mathbb{Z}} e^{2 \pi i m^2 z}.$$
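Concretely, grouping the points on the hyperboloid by $n = X_d^2$ shows that the count of points with $|x|^2 \leq R$ is $\sum_{2n + h \leq R} r_{d-1}(n+h) r_1(n)$, which is exactly where this Dirichlet series comes from. A small Python sketch of this bookkeeping in dimension $d = 3$ (ours, naive, purely for illustration):

from math import isqrt

def r(k, n):
    # r_k(n): ordered representations of n as a sum of k squares (naive recursion).
    if k == 0:
        return 1 if n == 0 else 0
    return sum(r(k - 1, n - m * m) for m in range(-isqrt(n), isqrt(n) + 1))

def count_via_coefficients(h, R):
    # Points on X^2 + Y^2 = Z^2 + h with X^2 + Y^2 + Z^2 <= R, grouped by
    # n = Z^2: each n contributes r_2(n + h) r_1(n) points, and |x|^2 = 2n + h.
    return sum(r(2, n + h) * r(1, n) for n in range((R - h) // 2 + 1))

print(count_via_coefficients(1, 100))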

Here are the slides for this talk. Note that this talk is based on chapter 5 of my thesis, and a preprint of this chapter will (hopefully) soon be ready for submission and appear on the arXiv.


A Short Note on Gaps Between Powers of Consecutive Primes

Introduction

The primary purpose of this note is to collect a few hitherto unnoticed or unpublished results concerning gaps between powers of consecutive primes. The study of gaps between primes has attracted many mathematicians and led to many deep realizations in number theory. The literature is full of conjectures, both open and resolved, concerning the nature of primes.

In a series of stunning developments, Zhang, Maynard, and Tao [1, 2] made the first major progress towards proving the prime $k$-tuple conjecture, and successfully proved the existence of infinitely many pairs of primes differing by a fixed number. As of now, the best known result is due to the massive collaborative Polymath8 project [3], which showed that there are infinitely many pairs of primes of the form $p, p+246$. In the excellent expository article [4], Granville describes the history and ideas leading to this breakthrough, and also discusses some of the potential impact of the results. This note should be thought of as a few more results following from the ideas of Zhang, Maynard, Tao, and the Polymath8 project.

Throughout, $p_n$ will refer to the $n$th prime number. In a paper [5], Andrica conjectured that
\begin{equation}\label{eq:Andrica_conj}
\sqrt{p_{n+1}} - \sqrt{p_n} < 1
\end{equation}
holds for all $n$. This conjecture, and related statements, is described in Guy’s Unsolved Problems in Number Theory [6]. It is quickly checked in sagemath that this holds for all primes up to $4.36 \cdot 10^{8}$:

# Sage version 8.0.rc1
# started with `sage -ipython`

# sage has pari/GP, which can generate primes super quickly
from sage.all import primes_first_n

# import izip since we'll be zipping a huge list, and sage uses python2 which has
# non-iterable zip by default
from itertools import izip

# The magic number 23150000 appears because pari/GP can't compute
# primes above 436273290 due to fixed precision arithmetic
ps = primes_first_n(23150000)    # This is every prime up to 436006979

# Verify Andrica's Conjecture for all prime pairs up to 436006979
gap = 0
for a,b in izip(ps[:-1], ps[1:]):
    if b**.5 - a**.5 > gap:
        A, B, gap = a, b, b**.5 - a**.5
        print(gap)
print("")
print(A)
print(B)

In approximately 20 seconds on my machine (so it would not be hard to go much higher, except that I would have to go beyond pari/GP to generate primes), this completes and prints out the following output.

0.317837245196
0.504017169931
0.670873479291

7
11


Thus the largest value of $\sqrt{p_{n+1}} - \sqrt{p_n}$ was merely $0.670\ldots$, and occurred for the gap between $7$ and $11$.

So it appears very likely that the conjecture is true. However it is also likely that new, novel ideas are necessary before the conjecture is decided.

Andrica’s Conjecture can also be stated in terms of prime gaps. Let $g_n = p_{n+1} - p_n$ be the gap between the $n$th prime and the $(n+1)$st prime. Then Andrica’s Conjecture is equivalent to the claim that $g_n < 2 \sqrt{p_n} + 1$ (square both sides of $\sqrt{p_{n+1}} < \sqrt{p_n} + 1$ to pass between the two statements). In this direction, the best known result is due to Baker, Harman, and Pintz [7], who show that $g_n \ll p_n^{0.525}$.

In 1985, Sandor [8] proved that
\begin{equation}\label{eq:Sandor}
\liminf_{n \to \infty} \sqrt[4]{p_n} (\sqrt{p_{n+1}} - \sqrt{p_n}) = 0.
\end{equation}
The close relation to Andrica’s Conjecture \eqref{eq:Andrica_conj} is clear. The first result of this note is to strengthen this result.

Theorem

Let $\alpha, \beta \geq 0$, and $\alpha + \beta < 1$. Then
\begin{equation}\label{eq:main}
\liminf_{n \to \infty} p_n^\beta (p_{n+1}^\alpha - p_n^\alpha) = 0.
\end{equation}

We prove this theorem below. Choosing $\alpha = \frac{1}{2}, \beta = \frac{1}{4}$ verifies Sandor’s result \eqref{eq:Sandor}. But choosing $\alpha = \frac{1}{2}, \beta = \frac{1}{2} - \epsilon$ for a small $\epsilon > 0$ gives stronger results.

This theorem leads naturally to the following conjecture.

Conjecture

For any $0 \leq \alpha < 1$, there exists a constant $C(\alpha)$ such that
\begin{equation}
p_{n+1}^\alpha - p_{n}^\alpha \leq C(\alpha)
\end{equation}
for all $n$.

A simple heuristic argument, given in the last section below, shows that this Conjecture follows from Cramer’s Conjecture.

It is interesting to note that there are generalizations of Andrica’s Conjecture. One can ask what the smallest $\gamma$ is such that
\begin{equation}
p_{n+1}^{\gamma} - p_n^{\gamma} = 1
\end{equation}
has a solution. This is known as the Smarandache Conjecture, and it is believed that the smallest such $\gamma$ is approximately
\begin{equation}
\gamma \approx 0.5671481302539\ldots
\end{equation}
The digits of this constant, sometimes called “the Smarandache constant,” are the contents of sequence A038458 on the OEIS. It is possible to generalize this question as well.
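A quick numerical illustration (our own sketch, using bisection): the minimum is believed to occur for the consecutive primes $113$ and $127$, so solving $127^\gamma - 113^\gamma = 1$ recovers the constant.

def solve_gamma(p, q, C=1.0):
    # Bisection for the gamma in (0, 1) with q^gamma - p^gamma = C;
    # the left-hand side is increasing in gamma, so bisection applies.
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if q**mid - p**mid < C:
            lo = mid
        else:
            hi = mid
    return lo

print(solve_gamma(113, 127))   # 0.5671481302...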

Open Question

For any fixed constant $C$, what is the smallest $\alpha = \alpha(C)$ such that
\begin{equation}
p_{n+1}^\alpha - p_n^\alpha = C
\end{equation}
has solutions? In particular, how does $\alpha(C)$ behave as a function of $C$?

This question does not seem to have been approached in any sort of generality, aside from the case when $C = 1$.

Proof of Theorem

The idea of the proof is very straightforward. We estimate \eqref{eq:main} across prime pairs $p, p+246$, relying on the recent proof from Polymath8 that infinitely many such primes exist.

Fix $\alpha, \beta \geq 0$ with $\alpha + \beta < 1$. Applying the mean value theorem of calculus on the function $x \mapsto x^\alpha$ shows that
\begin{align}
p^\beta \big( (p+246)^\alpha - p^\alpha \big) &= p^\beta \cdot 246 \alpha q^{\alpha - 1} \\
&\leq p^\beta \cdot 246 \alpha p^{\alpha - 1} = 246 \alpha p^{\alpha + \beta - 1}, \label{eq:bound}
\end{align}
for some $q \in [p, p+246]$. Passing to the inequality in the second line is done by realizing that $q^{\alpha - 1}$ is a decreasing function in $q$. As $\alpha + \beta - 1 < 0$, we see that \eqref{eq:bound} goes to zero as $p \to \infty$.

Therefore
\begin{equation}
\liminf_{n \to \infty} p_n^\beta (p_{n+1}^\alpha - p_n^\alpha) = 0,
\end{equation}
as was to be proved.

Further Heuristics

Cramer’s Conjecture states that there exists a constant $C$ such that for all sufficiently large $n$,
\begin{equation}
p_{n+1} - p_n < C(\log n)^2.
\end{equation}
Thus for a sufficiently large prime $p$, the subsequent prime is at most $p + C (\log p)^2$. Performing a similar estimation as above shows that
\begin{equation}
(p + C (\log p)^2)^\alpha - p^\alpha \leq C (\log p)^2 \alpha p^{\alpha - 1} =
C \alpha \frac{(\log p)^2}{p^{1 - \alpha}}.
\end{equation}
As the right hand side vanishes as $p \to \infty$, we see that it is natural to expect that the main Conjecture above is true. More generally, we should expect the following, stronger conjecture.

Conjecture’

For any $\alpha, \beta \geq 0$ with $\alpha + \beta < 1$, there exists a constant $C(\alpha, \beta)$ such that
\begin{equation}
p_n^\beta (p_{n+1}^\alpha - p_n^\alpha) \leq C(\alpha, \beta).
\end{equation}

Additional Notes

I wrote this note in between waiting in never-ending queues while I sort out my internet service and other mundane activities necessary upon moving to another country. I had just read some papers on the arXiv, and I noticed a paper which referred to unknown statuses concerning Andrica’s Conjecture. So then I sat down and wrote this up.

I am somewhat interested in qualitative information concerning the Open Question in the introduction, and I may return to this subject unless someone beats me to it.

This note is (mostly, minus the code) available as a pdf and will shortly appear on the arXiv. This was originally written in LaTeX and converted for display on this site using a set of tools I’ve written based around latex2jax, which is available on my github.


Sage Days 87 Demo: Interfacing between sage and the LMFDB

Interfacing sage and the LMFDB — a prototype

The lmfdb and sagemath are both great things, but they don’t currently talk to each other. Much of the lmfdb calls sage, but the lmfdb also includes vast amounts of data on $L$-functions and modular forms (hence the name) that is not accessible from within sage.
This is an example prototype of an interface to the lmfdb from sage. Keep in mind that this is a prototype and every aspect can change. But we hope to show what may be possible in the future. If you have requests, comments, or questions, please request/comment/ask either now, or at my email: david@lowryduda.com.

Note that this notebook is available on http://davidlowryduda.com or https://gist.github.com/davidlowryduda/deb1f88cc60b6e1243df8dd8f4601cde, and the code is available at https://github.com/davidlowryduda/sage2lmfdb

Let’s dive into an example.

In [1]:
# These names will change
from sage.all import *
import LMFDB2sage.elliptic_curves as lmfdb_ecurve
In [2]:
lmfdb_ecurve.search(rank=1)
Out[2]:
[Elliptic Curve defined by y^2 + x*y = x^3 - 887688*x - 321987008 over Rational Field,
 Elliptic Curve defined by y^2 + x*y + y = x^3 - x^2 + 10795*x - 97828 over Rational Field,
 Elliptic Curve defined by y^2 + x*y + y = x^3 - x^2 - 2294115305*x - 42292668425178 over Rational Field,
 Elliptic Curve defined by y^2 + x*y + y = x^3 - x^2 - 3170*x - 49318 over Rational Field,
 Elliptic Curve defined by y^2 + y = x^3 + 1050*x - 26469 over Rational Field,
 Elliptic Curve defined by y^2 + x*y = x^3 - x^2 - 1240542*x - 531472509 over Rational Field,
 Elliptic Curve defined by y^2 + y = x^3 - x^2 + 8100*x - 263219 over Rational Field,
 Elliptic Curve defined by y^2 + x*y = x^3 + 637*x - 68783 over Rational Field,
 Elliptic Curve defined by y^2 + y = x^3 + x^2 + 36*x - 380 over Rational Field,
 Elliptic Curve defined by y^2 + y = x^3 + x^2 - 2535*x - 49982 over Rational Field]
This returns 10 elliptic curves of rank 1. But these are a bit different than sage’s elliptic curves.

In [3]:
Es = lmfdb_ecurve.search(rank=1)
E = Es[0]
print(type(E))
<class 'LMFDB2sage.ell_lmfdb.EllipticCurve_rational_field_lmfdb_with_category'>
Note that the class of an elliptic curve is an lmfdb EllipticCurve. But don’t worry, this is a subclass of a normal elliptic curve. So we can call the normal things one might call on an elliptic curve.


In [4]:
# Try autocompleting the following. It has all the things!
print(dir(E))
['CPS_height_bound', 'CartesianProduct',
'Chow_form', 'Hom',
'Jacobian', 'Jacobian_matrix',
'Lambda', 'Np',
'S_integral_points', '_AlgebraicScheme__A',
'_AlgebraicScheme__divisor_group', '_AlgebraicScheme_subscheme__polys',
'_EllipticCurve_generic__ainvs', '_EllipticCurve_generic__b_invariants',
'_EllipticCurve_generic__base_ring', '_EllipticCurve_generic__discriminant',
'_EllipticCurve_generic__is_over_RationalField', '_EllipticCurve_generic__multiple_x_denominator',
'_EllipticCurve_generic__multiple_x_numerator', '_EllipticCurve_rational_field__conductor_pari',
'_EllipticCurve_rational_field__generalized_congruence_number', '_EllipticCurve_rational_field__generalized_modular_degree',
'_EllipticCurve_rational_field__gens', '_EllipticCurve_rational_field__modular_degree',
'_EllipticCurve_rational_field__np', '_EllipticCurve_rational_field__rank',
'_EllipticCurve_rational_field__regulator', '_EllipticCurve_rational_field__torsion_order',
'_Hom_', '__add__', '__cached_methods', '__call__',
'__class__', '__cmp__', '__contains__', '__delattr__',
'__dict__', '__dir__', '__div__', '__doc__',
'__eq__', '__format__', '__ge__', '__getattribute__',
'__getitem__', '__getstate__', '__gt__', '__hash__',
'__init__', '__le__', '__lt__', '__make_element_class__',
'__module__', '__mul__', '__ne__', '__new__',
'__nonzero__', '__pari__', '__pow__', '__pyx_vtable__',
'__rdiv__', '__reduce__', '__reduce_ex__', '__repr__',
'__rmul__', '__setattr__', '__setstate__', '__sizeof__',
'__str__', '__subclasshook__', '__temporarily_change_names', '__truediv__',
'__weakref__', '_abstract_element_class', '_adjust_heegner_index', '_an_element_',
'_ascii_art_', '_assign_names', '_axiom_', '_axiom_init_',
'_base', '_base_ring', '_base_scheme', '_best_affine_patch',
'_cache__point_homset', '_cache_an_element', '_cache_key', '_check_satisfies_equations',
'_cmp_', '_coerce_map_from_', '_coerce_map_via', '_coercions_used',
'_compute_gens', '_convert_map_from_', '_convert_method_name', '_defining_names',
'_defining_params_', '_doccls', '_element_constructor', '_element_constructor_',
'_element_constructor_from_element_class', '_element_init_pass_parent', '_factory_data', '_first_ngens',
'_forward_image', '_fricas_', '_fricas_init_', '_gap_',
'_gap_init_', '_generalized_congmod_numbers', '_generic_coerce_map', '_generic_convert_map',
'_get_action_', '_get_local_data', '_giac_', '_giac_init_',
'_gp_', '_gp_init_', '_heegner_best_tau', '_heegner_forms_list',
'_heegner_index_in_EK', '_homset', '_init_category_', '_initial_action_list',
'_initial_coerce_list', '_initial_convert_list', '_interface_', '_interface_init_',
'_interface_is_cached_', '_internal_coerce_map_from', '_internal_convert_map_from', '_introspect_coerce',
'_is_category_initialized', '_is_valid_homomorphism_', '_isoclass', '_json',
'_kash_', '_kash_init_', '_known_points', '_latex_',
'_lmfdb_label', '_lmfdb_regulator', '_macaulay2_', '_macaulay2_init_',
'_magma_init_', '_maple_', '_maple_init_', '_mathematica_',
'_mathematica_init_', '_maxima_', '_maxima_init_', '_maxima_lib_',
'_maxima_lib_init_', '_modsym', '_modular_symbol_normalize', '_morphism',
'_multiple_of_degree_of_isogeny_to_optimal_curve', '_multiple_x_denominator', '_multiple_x_numerator', '_names',
'_normalize_padic_lseries', '_octave_', '_octave_init_', '_p_primary_torsion_basis',
'_pari_', '_pari_init_', '_point', '_point_homset',
'_polymake_', '_polymake_init_', '_populate_coercion_lists_', '_r_init_',
'_reduce_model', '_reduce_point', '_reduction', '_refine_category_',
'_repr_', '_repr_option', '_repr_type', '_sage_',
'_scale_by_units', '_set_conductor', '_set_cremona_label', '_set_element_constructor',
'_set_gens', '_set_modular_degree', '_set_rank', '_set_torsion_order',
'_shortest_paths', '_singular_', '_singular_init_', '_symbolic_',
'_test_an_element', '_test_cardinality', '_test_category', '_test_elements',
'_test_elements_eq_reflexive', '_test_elements_eq_symmetric', '_test_elements_eq_transitive', '_test_elements_neq',
'_test_eq', '_test_new', '_test_not_implemented_methods', '_test_pickling',
'_test_some_elements', '_tester', '_torsion_bound', '_unicode_art_',
'_unset_category', '_unset_coercions_used', '_unset_embedding', 'a1',
'a2', 'a3', 'a4', 'a6',
'a_invariants', 'abelian_variety', 'affine_patch', 'ainvs',
'algebra', 'ambient_space', 'an', 'an_element',
'analytic_rank', 'analytic_rank_upper_bound', 'anlist', 'antilogarithm',
'ap', 'aplist', 'arithmetic_genus', 'automorphisms',
'b2', 'b4', 'b6', 'b8',
'b_invariants', 'base', 'base_extend', 'base_field',
'base_morphism', 'base_ring', 'base_scheme', 'c4',
'c6', 'c_invariants', 'cartesian_product', 'categories',
'category', 'change_ring', 'change_weierstrass_model', 'cm_discriminant',
'codimension', 'coerce', 'coerce_embedding', 'coerce_map_from',
'complement', 'conductor', 'congruence_number', 'construction',
'convert_map_from', 'coordinate_ring', 'count_points', 'cremona_label',
'database_attributes', 'database_curve', 'db', 'defining_ideal',
'defining_polynomial', 'defining_polynomials', 'degree', 'descend_to',
'dimension', 'dimension_absolute', 'dimension_relative', 'discriminant',
'division_field', 'division_polynomial', 'division_polynomial_0', 'divisor',
'divisor_group', 'divisor_of_function', 'dual', 'dump',
'dumps', 'element_class', 'elliptic_exponential', 'embedding_center',
'embedding_morphism', 'eval_modular_form', 'excellent_position', 'formal',
'formal_group', 'fundamental_group', 'galois_representation', 'gen',
'gens', 'gens_certain', 'gens_dict', 'gens_dict_recursive',
'genus', 'geometric_genus', 'get_action', 'global_integral_model',
'global_minimal_model', 'global_minimality_class', 'has_additive_reduction', 'has_bad_reduction',
'has_base', 'has_cm', 'has_coerce_map_from', 'has_global_minimal_model',
'has_good_reduction', 'has_good_reduction_outside_S', 'has_multiplicative_reduction', 'has_nonsplit_multiplicative_reduction',
'has_rational_cm', 'has_split_multiplicative_reduction', 'hasse_invariant', 'heegner_discriminants',
'heegner_discriminants_list', 'heegner_index', 'heegner_index_bound', 'heegner_point',
'heegner_point_height', 'heegner_sha_an', 'height', 'height_function',
'height_pairing_matrix', 'hom', 'hyperelliptic_polynomials', 'identity_morphism',
'inject_variables', 'integral_model', 'integral_points', 'integral_short_weierstrass_model',
'integral_weierstrass_model', 'integral_x_coords_in_interval', 'intersection', 'intersection_multiplicity',
'intersection_points', 'intersects_at', 'irreducible_components', 'is_atomic_repr',
'is_coercion_cached', 'is_complete_intersection', 'is_conversion_cached', 'is_exact',
'is_global_integral_model', 'is_global_minimal_model', 'is_good', 'is_integral',
'is_irreducible', 'is_isogenous', 'is_isomorphic', 'is_local_integral_model',
'is_minimal', 'is_on_curve', 'is_ordinary', 'is_ordinary_singularity',
'is_p_integral', 'is_p_minimal', 'is_parent_of', 'is_projective',
'is_quadratic_twist', 'is_quartic_twist', 'is_semistable', 'is_sextic_twist',
'is_singular', 'is_smooth', 'is_supersingular', 'is_transverse',
'is_x_coord', 'isogenies_prime_degree', 'isogeny', 'isogeny_class',
'isogeny_codomain', 'isogeny_degree', 'isogeny_graph', 'isomorphism_to',
'isomorphisms', 'j_invariant', 'kodaira_symbol', 'kodaira_type',
'kodaira_type_old', 'kolyvagin_point', 'label', 'latex_name',
'latex_variable_names', 'lift_x', 'lll_reduce', 'lmfdb_page',
'local_coordinates', 'local_data', 'local_integral_model', 'local_minimal_model',
'lseries', 'lseries_gross_zagier', 'manin_constant', 'matrix_of_frobenius',
'minimal_discriminant_ideal', 'minimal_model', 'minimal_quadratic_twist', 'mod5family',
'modular_degree', 'modular_form', 'modular_parametrization', 'modular_symbol',
'modular_symbol_numerical', 'modular_symbol_space', 'multiplication_by_m', 'multiplication_by_m_isogeny',
'multiplicity', 'mwrank', 'mwrank_curve', 'neighborhood',
'newform', 'ngens', 'non_minimal_primes', 'nth_iterate',
'objgen', 'objgens', 'optimal_curve', 'orbit',
'ordinary_model', 'ordinary_primes', 'padic_E2', 'padic_height',
'padic_height_pairing_matrix', 'padic_height_via_multiply', 'padic_lseries', 'padic_regulator',
'padic_sigma', 'padic_sigma_truncated', 'parent', 'pari_curve',
'pari_mincurve', 'period_lattice', 'plane_projection', 'plot',
'point', 'point_homset', 'point_search', 'point_set',
'pollack_stevens_modular_symbol', 'preimage', 'projection', 'prove_BSD',
'q_eigenform', 'q_expansion', 'quadratic_transform', 'quadratic_twist',
'quartic_twist', 'rank', 'rank_bound', 'rank_bounds',
'rational_parameterization', 'rational_points', 'real_components', 'reduce',
'reduction', 'register_action', 'register_coercion', 'register_conversion',
'register_embedding', 'regulator', 'regulator_of_points', 'rename',
'reset_name', 'root_number', 'rst_transform', 'satisfies_heegner_hypothesis',
'saturation', 'save', 'scale_curve', 'selmer_rank',
'sextic_twist', 'sha', 'short_weierstrass_model', 'silverman_height_bound',
'simon_two_descent', 'singular_points', 'singular_subscheme', 'some_elements',
'specialization', 'structure_morphism', 'supersingular_primes', 'tamagawa_exponent',
'tamagawa_number', 'tamagawa_number_old', 'tamagawa_numbers', 'tamagawa_product',
'tamagawa_product_bsd', 'tangents', 'tate_curve', 'three_selmer_rank',
'torsion_order', 'torsion_points', 'torsion_polynomial', 'torsion_subgroup',
'two_descent', 'two_descent_simon', 'two_division_polynomial', 'two_torsion_rank',
'union', 'variable_name', 'variable_names', 'weierstrass_p',
'weil_restriction', 'zeta_series']
All the things
This gives quick access to some data that is not stored within the LMFDB, but which is relatively quickly computable. For example,

In [5]:
E.defining_ideal()
Out[5]:
Ideal (-x^3 + x*y*z + y^2*z + 887688*x*z^2 + 321987008*z^3) of Multivariate Polynomial Ring in x, y, z over Rational Field
But one of the great powers is that there are some things which are computed and stored in the LMFDB, and not in sage. We can now immediately give many examples of curves with specified invariants, for instance curves with conductor $11050$ and torsion order $2$:

In [6]:
Es = lmfdb_ecurve.search(conductor=11050, torsion_order=2)
print("There are {} curves returned.".format(len(Es)))
E = Es[0]
print(E)
There are 10 curves returned.
Elliptic Curve defined by y^2 + x*y + y = x^3 - 3476*x - 79152 over Rational Field
And for these curves, the lmfdb contains data on their ranks, generators, regulators, and so on.

In [7]:
print(E.gens())
print(E.rank())
print(E.regulator())
[(-34 : 17 : 1)]
1
1.63852610029
In [8]:
res = []
%time for E in Es: res.append(E.gens()); res.append(E.rank()); res.append(E.regulator())
CPU times: user 971 ms, sys: 6.82 ms, total: 978 ms
Wall time: 978 ms
That’s pretty fast, and this is because all of this was pulled from the LMFDB when the curves were returned by the search() function.
In this case, elliptic curves over the rationals are only an okay example, as they’re really well studied and sage can compute much of the data very quickly. On the other hand, through the LMFDB there are millions of examples and corresponding data at one’s fingertips.

This is where we’re really looking for input.

Think of what you might want to have easy access to through an interface from sage to the LMFDB, and tell us. We’re actively seeking comments, suggestions, and requests. Elliptic curves over the rationals are a prototype, and the LMFDB has lots of (much more challenging to compute) data. There is data on the LMFDB that is simply not accessible from within sage.
email: david@lowryduda.com, or post an issue on https://github.com/LMFDB/lmfdb/issues

Now let’s describe what’s going on under the hood a little bit

There is an API for the LMFDB at http://beta.lmfdb.org/api/. This API is a bit green, and we will change certain aspects of it to behave better in the future. A call to the API looks like

http://beta.lmfdb.org/api/elliptic_curves/curves/?rank=i1&conductor=i11050

The result is a large mess of data, which can be exported as json and parsed.
But that’s hard, and the resulting data are not sage objects. They are just strings or ints, and these require time and thought to parse.
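To give the flavor, here is a stripped-down sketch of a raw call (illustrative only: raw_api_search is a hypothetical name, not part of the module, and the _format=json flag and the 'data' key are assumptions about the API's json export):

import json
from urllib.request import urlopen

def raw_api_search(**kwargs):
    # Build an API query (integers get the 'i' prefix, as in ?rank=i1) and parse it.
    base = "http://beta.lmfdb.org/api/elliptic_curves/curves/"
    query = "&".join("{}=i{}".format(k, v) for k, v in kwargs.items())
    with urlopen(base + "?_format=json&" + query) as response:
        return json.load(response)["data"]

print(len(raw_api_search(rank=1, conductor=11050)))   # 22, matching the call below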
So we created a module in sage that writes the API call and parses the output back into sage objects. The 22 curves given by the above API call are the same 22 curves returned by this call:

In [9]:
Es = lmfdb_ecurve.search(rank=1, conductor=11050, max_items=25)
print(len(Es))
E = Es[0]
22
The total functionality of this search function is visible from its current documentation.

In [10]:
# Execute this cell for the documentation
print(lmfdb_ecurve.search.__doc__)
    Search the LMFDB for an elliptic curve.

    Note that all inputs are optional, but at least one input is necessary.

    INPUT:

    -  ``label=l`` -- a string ``l`` representing a label in the LMFDB.

    -  ``degree=d`` -- an int ``d`` giving the minimum degree of a
       parameterization of the modular curve

    -  ``conductor=c`` -- an int ``c`` giving the conductor of the curve

    -  ``min_conductor=mc`` -- an int ``mc`` giving a lower bound on the
       conductor for desired curves

    -  ``max_conductor=mc`` -- an int ``mc`` giving an upper bound on the
       conductor for desired curves

    -  ``torsion_order=t`` -- an int ``t`` giving the order of the torsion
       subgroup of the curve

    -  ``rank=r`` -- an int ``r`` giving the rank of the curve

    -  ``regulator=f`` -- a float ``f`` giving the regulator of the curve

    -  ``max_items=m`` -- an int ``m`` (default: 10, max: 100) indicating the
       maximum number of results to return

    -  ``base_item=b`` -- an int ``b`` (default: 0) specifying where to start
       returning values from. The search will begin by returning the ``b``th
       curve. Combined with ``max_items`` to return data in chunks.

    -  ``sort=s`` -- a string ``s`` specifying what database field to sort the
       results on. See the LMFDB api for more info.

    EXAMPLES::

        sage: Es = search(conductor=11050, rank=2)
        [Elliptic Curve defined by y^2 + x*y = x^3 - x^2 - 442*x + 1716 over Rational Field, Elliptic Curve defined by y^2 + x*y = x^3 - x^2 + 1558*x + 11716 over Rational Field]
        sage: E = E[0]
        sage: E.conductor()
        11050
    
In [11]:
# So, for instance, one could perform the following search, finding a unique elliptic curve
lmfdb_ecurve.search(rank=2, torsion_order=3, degree=4608)
Out[11]:
[Elliptic Curve defined by y^2 + y = x^3 + x^2 - 5155*x + 140756 over Rational Field]

What if there are no curves?

If there are no curves satisfying the search criteria, then a message is displayed and that’s that. These searches may take a couple of seconds to complete.
For example, no elliptic curve in the database has rank 5.

In [12]:
lmfdb_ecurve.search(rank=5)
No fields were found satisfying input criteria.

How does one step through the data?

Right now, at most 100 curves are returned in a single API call. This is the limit even from directly querying the API. But one can pass in the argument base_item (the name will probably change… to skip? or perhaps to offset?) to start returning at the base_itemth element.

In [13]:
from pprint import pprint
pprint(lmfdb_ecurve.search(rank=1, max_items=3))              # The last item in this list
print('')
pprint(lmfdb_ecurve.search(rank=1, max_items=3, base_item=2)) # should be the first item in this list
[Elliptic Curve defined by y^2 + x*y = x^3 - 887688*x - 321987008 over Rational Field,
 Elliptic Curve defined by y^2 + x*y + y = x^3 - x^2 + 10795*x - 97828 over Rational Field,
 Elliptic Curve defined by y^2 + x*y + y = x^3 - x^2 - 2294115305*x - 42292668425178 over Rational Field]

[Elliptic Curve defined by y^2 + x*y + y = x^3 - x^2 - 2294115305*x - 42292668425178 over Rational Field,
 Elliptic Curve defined by y^2 + x*y + y = x^3 - x^2 - 3170*x - 49318 over Rational Field,
 Elliptic Curve defined by y^2 + y = x^3 + 1050*x - 26469 over Rational Field]
Included in the documentation is also a bit of hopefulness. Right now, the LMFDB API does not actually accept max_conductor or min_conductor (or arguments of that type). But it will sometime. (This introduces a few extra difficulties on the server side, and so it will take some extra time to decide how to do this).

In [14]:
lmfdb_ecurve.search(rank=1, min_conductor=500, max_conductor=10000)  # Not implemented
---------------------------------------------------------------------------
NotImplementedError                       Traceback (most recent call last)
<ipython-input-14-3d98f2cf7a13> in <module>()
----> 1 lmfdb_ecurve.search(rank=Integer(1), min_conductor=Integer(500), max_conductor=Integer(10000))  # Not implemented

/home/djlowry/Dropbox/EllipticCurve_LMFDB/LMFDB2sage/elliptic_curves.py in search(**kwargs)
     76             kwargs[item]
     77             raise NotImplementedError("This would be a great thing to have, " +
---> 78                 "but the LMFDB api does not yet provide this functionality.")
     79         except KeyError:
     80             pass

NotImplementedError: This would be a great thing to have, but the LMFDB api does not yet provide this functionality.
Our EllipticCurve_rational_field_lmfdb class constructs a sage elliptic curve from the json and overrides (some of) the default methods in sage if there is quicker data available on the LMFDB. In principle, this new object is just a sage object with some slightly different methods.
Generically, documentation and introspection on objects from this class should work. Much of sage’s documentation carries through directly.

In [15]:
print(E.gens.__doc__)
        Return generators for the Mordell-Weil group E(Q) *modulo*
        torsion.

        .. warning::

           If the program fails to give a provably correct result, it
           prints a warning message, but does not raise an
           exception. Use :meth:`~gens_certain` to find out if this
           warning message was printed.

        INPUT:

        - ``proof`` -- bool or None (default None), see
          ``proof.elliptic_curve`` or ``sage.structure.proof``

        - ``verbose`` - (default: None), if specified changes the
           verbosity of mwrank computations

        - ``rank1_search`` - (default: 10), if the curve has analytic
          rank 1, try to find a generator by a direct search up to
          this logarithmic height.  If this fails, the usual mwrank
          procedure is called.

        - algorithm -- one of the following:

          - ``'mwrank_shell'`` (default) -- call mwrank shell command

          - ``'mwrank_lib'`` -- call mwrank C library

        - ``only_use_mwrank`` -- bool (default True) if False, first
          attempts to use more naive, natively implemented methods

        - ``use_database`` -- bool (default True) if True, attempts to
          find curve and gens in the (optional) database

        - ``descent_second_limit`` -- (default: 12) used in 2-descent

        - ``sat_bound`` -- (default: 1000) bound on primes used in
          saturation.  If the computed bound on the index of the
          points found by two-descent in the Mordell-Weil group is
          greater than this, a warning message will be displayed.

        OUTPUT:

        - ``generators`` - list of generators for the Mordell-Weil
           group modulo torsion

        IMPLEMENTATION: Uses Cremona's mwrank C library.

        EXAMPLES::

            sage: E = EllipticCurve('389a')
            sage: E.gens()                 # random output
            [(-1 : 1 : 1), (0 : 0 : 1)]

        A non-integral example::

            sage: E = EllipticCurve([-3/8,-2/3])
            sage: E.gens() # random (up to sign)
            [(10/9 : 29/54 : 1)]

        A non-minimal example::

            sage: E = EllipticCurve('389a1')
            sage: E1 = E.change_weierstrass_model([1/20,0,0,0]); E1
            Elliptic Curve defined by y^2 + 8000*y = x^3 + 400*x^2 - 320000*x over Rational Field
            sage: E1.gens() # random (if database not used)
            [(-400 : 8000 : 1), (0 : -8000 : 1)]
        
Modified methods should have a note indicating that the data comes from the LMFDB, and then give sage’s documentation. This is not yet implemented. (So if you examine the current version, you can see some incomplete docstrings like regulator().)

In [16]:
print(E.regulator.__doc__)
        Return the regulator of the curve. This is taken from the lmfdb if available.

        NOTE:
            In later implementations, this docstring will probably include the
            docstring from sage's regular implementation. But that's not
            currently the case.
        

This concludes our demo of an interface between sage and the LMFDB.

Thank you, and if you have any questions, comments, or concerns, please find me/email me/raise an issue on LMFDB’s github.
[Image: xkcd’s “Automation”]


“Second Moments in the Generalized Gauss Circle Problem” (with T. Hulse, C. Ieong Kuan, and A. Walker)

This is joint work with Thomas Hulse, Chan Ieong Kuan, and Alexander Walker. This is a natural successor to our previous work (see their announcements: one, two, three) concerning bounds and asymptotics for sums of coefficients of modular forms.

We now have a variety of results concerning the behavior of the partial sums

$$ S_f(X) = \sum_{n \leq X} a(n) $$

where $f(z) = \sum_{n \geq 1} a(n) e(nz)$ is a GL(2) cuspform. The primary focus of our previous work was to understand the Dirichlet series

$$ D(s, S_f \times S_f) = \sum_{n \geq 1} \frac{S_f(n)^2}{n^s} $$

completely, give its meromorphic continuation to the plane (this was the major topic of the first paper in the series), and to perform classical complex analysis on this object in order to describe the behavior of $S_f(n)$ and $S_f(n)^2$ (this was done in the first paper, and was the major topic of the second paper of the series). One motivation for studying this type of problem is that bounds for $S_f(n)$ are analogous to understanding the error term in the lattice point discrepancy for circles.

That is, let $S_2(R)$ denote the number of lattice points in a circle of radius $\sqrt{R}$ centered at the origin. Then we expect that $S_2(R)$ is approximately the area of the circle, plus or minus some error term. We write this as

$$ S_2(R) = \pi R + P_2(R),$$

where $P_2(R)$ is the error term. We refer to $P_2(R)$ as the “lattice point discrepancy” — it describes the discrepancy between the number of lattice points in the circle and the area of the circle. Determining the size of $P_2(R)$ is a very famous problem called the Gauss circle problem, and it has been studied for over 200 years. We believe that $P_2(R) = O(R^{1/4 + \epsilon})$, but that is not known to be true.

The Gauss circle problem can be cast in the language of modular forms. Let $\theta(z)$ denote the standard Jacobi theta series,

$$ \theta(z) = \sum_{n \in \mathbb{Z}} e^{2\pi i n^2 z}.$$

Then

$$ \theta^2(z) = 1 + \sum_{n \geq 1} r_2(n) e^{2\pi i n z},$$

where $r_2(n)$ denotes the number of representations of $n$ as a sum of $2$ (positive or negative) squares. The function $\theta^2(z)$ is a modular form of weight $1$ on $\Gamma_0(4)$, but it is not a cuspform. However, the sum

$$ \sum_{n \leq R} r_2(n) = S_2(R),$$

and so the partial sums of the coefficients of $\theta^2(z)$ indicate the number of lattice points in the circle of radius $\sqrt R$. Thus $\theta^2(z)$ gives access to the Gauss circle problem.
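To see the discrepancy in action, here is a tiny Python sketch (naive counting, purely for illustration) of $S_2(R)$ and $P_2(R) = S_2(R) - \pi R$:

from math import isqrt, pi

def S2(R):
    # Number of integer lattice points in the disc x^2 + y^2 <= R.
    B = isqrt(R)
    return sum(1 for x in range(-B, B + 1) for y in range(-B, B + 1)
               if x * x + y * y <= R)

for R in [10, 100, 1000, 10000]:
    print(R, S2(R), S2(R) - pi * R)   # compare the last column with R^(1/4)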

More generally, one can consider the number of lattice points in a $k$-dimensional sphere of radius $\sqrt R$ centered at the origin, which should approximately be the volume of that sphere,

$$ S_k(R) = \mathrm{Vol}(B(\sqrt R)) + P_k(R) = \sum_{n \leq R} r_k(n),$$

giving a $k$-dimensional lattice point discrepancy. For large dimension $k$, one should expect that the circle method is sufficient to give good bounds and understanding of the size and error of $S_k(R)$. For $k \geq 5$, the true order of growth for $P_k(R)$ is known (up to constants).

It therefore happens that the small (meaning 2 or 3) dimensional cases are both the most interesting, given our predilection for 2 and 3 dimensional geometry, and the most enigmatic. For a variety of reasons, the three dimensional case is very challenging to understand, and is perhaps even more enigmatic than the two dimensional case.

Strong evidence for the conjectured size of the lattice point discrepancy  comes in the form of mean square estimates. By looking at the square, one doesn’t need to worry about oscillation from positive to negative values. And by averaging over many radii, one hopes to smooth out some of the individual bumps. These mean square estimates take the form

$$\begin{align}
\int_0^X P_2(t)^2 dt &= C X^{3/2} + O(X \log^2 X) \\
\int_0^X P_3(t)^2 dt &= C' X^2 \log X + O(X^2 \sqrt{\log X}).
\end{align}$$

These indicate that the average size of $P_2(R)$ is $R^{1/4}$, and that the average size of $P_3(R)$ is $R^{1/2}$. In the two dimensional case, notice that the error term in the mean square asymptotic has pretty significant separation. It has essentially a $\sqrt X$ power-savings over the main term. But in the three dimensional case, there is no power separation. Even with significant averaging, we are only just capable of distinguishing a main term at all.

It is also interesting, but for more complicated reasons, that the main term in the three dimensional case has a log term within it. This is unique to the three dimensional case. But that is a description for another time.

In a paper that we recently posted to the arxiv, we show that the Dirichlet series

$$ \sum_{n \geq 1} \frac{S_k(n)^2}{n^s} $$

and

$$ \sum_{n \geq 1} \frac{P_k(n)^2}{n^s} $$

for $k \geq 3$ have understandable meromorphic continuation to the plane. Of particular interest is the $k = 3$ case, of course. We then investigate smoothed and unsmoothed mean square results. In particular, we prove the following result.

Theorem

$$\begin{align} \int_0^\infty P_k(t)^2 e^{-t/X} dt &= C_3 X^2 \log X + C_4 X^{5/2} \\ &\quad + C_k X^{k-1} + O(X^{k-2}). \end{align}$$

In this statement, the term with $C_3$ only appears in dimension $3$, and the term with $C_4$ only appears in dimension $4$. This should really be thought of as saying that we understand the Laplace transform of the square of the lattice point discrepancy as well as can be desired.

We are also able to improve the sharp second mean in the dimension 3 case, showing in particular the following.

Theorem

There exists $\lambda > 0$ such that

$$\int_0^X P_3(t)^2 dt = C X^2 \log X + D X^2 + O(X^{2 - \lambda}).$$

We do not actually compute what we might take $\lambda$ to be, but we believe (informally) that $\lambda$ can be taken as $1/5$.

The major themes behind these new results are already present in the first paper in the series. The new ingredient involves handling the behavior of non-cuspforms at the cusps on the analytic side, and handling the apparent main terms (in this case, the volume of the ball) on the combinatorial side.

There is an additional difficulty that arises in the dimension 2 case which makes it distinct. But soon I will describe a different forthcoming work in that case.


Smooth Sums to Sharp Sums 1

In this note, I describe a combination of two smoothed integral transforms that has been very useful in my collaborations with Alex Walker, Chan Ieong Kuan, and Tom Hulse. I suspect that this particular technique was once very well-known. But we were not familiar with it, and so I describe it here.

In application, this is somewhat more complicated. But to show the technique, I apply it to reprove some classic bounds on $\text{GL}(2)$ $L$-functions.

This note is also available as a pdf. This was first written as a LaTeX document, and then modified to fit into wordpress through latex2jax.

Introduction

Consider a Dirichlet series
$$\begin{equation}
D(s) = \sum_{n \geq 1} \frac{a(n)}{n^s}. \notag
\end{equation}$$
Suppose that this Dirichlet series converges absolutely for $\Re s > 1$, has meromorphic continuation to the complex plane, and satisfies a functional equation of shape
$$\begin{equation}
\Lambda(s) := G(s) D(s) = \epsilon \Lambda(1-s), \notag
\end{equation}$$
where $\lvert \epsilon \rvert = 1$ and $G(s)$ is a product of Gamma factors.

Dirichlet series are often used as a tool to study number theoretic functions with multiplicative properties. By studying the analytic properties of the Dirichlet series, one hopes to extract information about the coefficients $a(n)$. Some of the most interesting information encoded within a Dirichlet series comes from its partial sums
$$\begin{equation}
S(n) = \sum_{m \leq n} a(m). \notag
\end{equation}$$
For example, the Gauss Circle and Dirichlet Divisor problems can both be stated as problems concerning sums of coefficients of Dirichlet series.

One can try to understand the partial sum directly by understanding the integral transform
$$\begin{equation}
S(X) = \frac{1}{2\pi i} \int_{(2)} D(s) \frac{X^s}{s} ds, \notag
\end{equation}$$
a Perron integral. However, it is often challenging to understand this integral, as delicate properties concerning the convergence of the integral often come into play.

Instead, one often tries to understand a smoothed sum of the form
$$\begin{equation}
\sum_{m \geq 1} a(m) v(m) \notag
\end{equation}$$
where $v(m)$ is a smooth function that vanishes or decays extremely quickly for values of $m$ larger than $X$. A large class of smoothed sums can be obtained by starting with a very nicely behaved weight function $v$ and taking its Mellin transform
$$\begin{equation}
V(s) = \int_0^\infty v(x) x^s \frac{dx}{x}. \notag
\end{equation}$$
Then Mellin inversion gives that
$$\begin{equation}
\sum_{m \geq 1} a(m) v(m/X) = \frac{1}{2\pi i} \int_{(2)} D(s) X^s V(s) ds, \notag
\end{equation}$$
as long as $v$ and $V$ are nice enough functions.

In this note, we will use two smoothing integral transforms and corresponding smoothed sums. We will use one smooth function $v_1$ (which depends on another parameter $Y$) with the property that
$$\begin{equation}
\sum_{m \geq 1} a(m) v_1(m/X) \approx \sum_{\lvert m - X \rvert < X/Y} a(m). \notag
\end{equation}$$
And we will use another smooth function $v_2$ (which also depends on $Y$) with the property that
$$\begin{equation}
\sum_{m \geq 1} a(m) v_2(m/X) = \sum_{m \leq X} a(m) + \sum_{X < m < X + X/Y} a(m) v_2(m/X). \notag
\end{equation}$$
Further, as long as the coefficients $a(m)$ are nonnegative, it will be true that
$$\begin{equation}
\sum_{X < m < X + X/Y} a(m) v_2(m/X) \ll \sum_{\lvert m - X \rvert < X/Y} a(m), \notag
\end{equation}$$
which is exactly what $\sum a(m) v_1(m/X)$ estimates. Therefore
$$\begin{equation}\label{eq:overall_plan}
\sum_{m \leq X} a(m) = \sum_{m \geq 1} a(m) v_2(m/X) + O\Big(\sum_{m \geq 1} a(m) v_1(m/X) \Big).
\end{equation}$$

Hence sufficient understanding of $\sum a(m) v_1(m/X)$ and $\sum a(m) v_2(m/X)$ allows one to understand the sharp sum
$$\begin{equation}
\sum_{m \leq X} a(m). \notag
\end{equation}$$
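As a toy illustration of sharp versus smoothed sums (our own sketch, with the divisor function $d(n)$ as coefficients and the simplest smoothing $v(x) = e^{-x}$, whose Mellin transform is $\Gamma(s)$):

import numpy as np

def divisor_counts(N):
    # d(n) for 0 <= n <= N via a sieve (d[0] is unused and stays 0).
    d = np.zeros(N + 1, dtype=np.int64)
    for m in range(1, N + 1):
        d[m::m] += 1
    return d

X, N = 10**4, 10**5                      # cutoff X; truncate the tail at N >> X
a = divisor_counts(N)
n = np.arange(N + 1)
sharp = a[: X + 1].sum()                 # the sharp sum of d(n) for n <= X
smooth = (a * np.exp(-n / X)).sum()      # the smoothed sum, weight e^{-n/X}
print(sharp, smooth)                     # both are of size roughly X log X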



Revealing zero in fully homomorphic encryption is a Bad Thing

When I was first learning number theory, cryptography seemed really fun and really practical. I thought elementary number theory was elegant, and that cryptography was an elegant application. As I continued to learn more about mathematics, and in particular modern mathematics, I began to realize that decades of instruction and improvement (and perhaps of more useful points of view) have simplified the presentation of elementary number theory, and that modern mathematics is less elegant in presentation.

Similarly, as I learned more about cryptography, I learned that though the basic ideas are very simple, their application is often very inelegant. For example, the basis of RSA follows immediately from Euler’s Theorem as learned while studying elementary number theory, or alternately from Lagrange’s Theorem as learned while studying group theory or abstract algebra. And further, these are very early topics in these two areas of study!

But a naive implementation of RSA is doomed (for that matter, many professional implementations have flaws too). Every now and then, a very clever expert comes up with a new attack on popular cryptosystems, generating new guidelines and recommendations. Some guidelines make intuitive sense [e.g. don’t use too small of an exponent for either the public or secret keys in RSA], but many are more complicated or designed to prevent more sophisticated attacks [especially side-channel attacks].

In the summer of 2013, I participated in the ICERM IdeaLab working towards more efficient homomorphic encryption. We were playing with existing homomorphic encryption schemes and trying to come up with new methods. One guideline that we followed is that an attacker should not be able to recognize an encryption of zero. This seems like a reasonable guideline, but I didn’t really understand why, until I was chatting with others at the 2017 Joint Mathematics Meetings in Atlanta.

It turns out that revealing zero isn’t just against generally sound advice. Revealing zero is a capital B capital T Bad Thing.

Basic Setup

For the rest of this note, I’ll try to explain some of this reasoning.

In a typical cryptosystem, the basic setup is as follows. Andrew has a message that he wants to send to Beatrice. So Andrew converts the message into a list of numbers $M$, and uses some sort of encryption function $E(\cdot)$ to encrypt $M$, forming a ciphertext $C$. We can represent this as $C = E(M)$. Andrew transmits $C$ to Beatrice. If an eavesdropper Eve happens to intercept $C$, it should be very hard for Eve to recover any information about the original message from $C$. But when Beatrice receives $C$, she uses a corresponding decryption function $D(\cdot)$ to decrypt $C$, $M = D(C)$.

Often, the encryption and decryption techniques are based on number theoretic or combinatorial primitives. Some of these have extra structure (or at least they do in basic implementations). For instance, the RSA cryptosystem involves a public exponent $e$, a public modulus $N$, and a private exponent $d$. Andrew encrypts the message $M$ by computing $C = E(M) \equiv M^e \bmod N$. Beatrice decrypts the message by computing $M = C^d \equiv M^{ed} \bmod N$.

Notice that in the RSA system, given two messages $M_1, M_2$ and corresponding ciphertexts $C_1, C_2$, we have that
\begin{equation}
E(M_1 M_2) \equiv (M_1 M_2)^e \equiv M_1^e M_2^e \equiv E(M_1) E(M_2) \pmod N. \notag
\end{equation}
The encryption function $E(\cdot)$ is a group homomorphism. This is an example of extra structure.
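
This multiplicativity is easy to witness directly. Here is a quick sketch with tiny toy parameters (real RSA moduli are enormous; these numbers are purely illustrative):

# Toy RSA: p = 61, q = 53, N = p*q, public exponent e = 17 (illustrative only).
N = 61 * 53
e = 17

def E(M):                        # textbook RSA encryption
    return pow(M, e, N)

M1, M2 = 42, 123
# E is multiplicative: encrypting a product matches the product of encryptions mod N
print(E(M1 * M2) == (E(M1) * E(M2)) % N)   # True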

A fully homomorphic cryptosystem has an encryption function $E(\cdot)$ satisfying both $E(M_1 + M_2) = E(M_1) + E(M_2)$ and $E(M_1M_2) = E(M_1)E(M_2)$ (or more generally an analogous pair of operations). That is, $E(\cdot)$ is a ring homomorphism.

This extra structure allows for (a lot of) extra utility. A fully homomorphic $E(\cdot)$ would allow one to perform meaningful operations on encrypted data, even though you can’t read the data itself. For example, a clinic could store (encrypted) medical information on an external server. A doctor or nurse could pull out a cellphone or tablet with relatively little computing power or memory and securely query the medical data. Fully homomorphic encryption would allow one to securely outsource data infrastructure.

A different usage model suggests a different mental model. So suppose Alice has sensitive data that she wants to store for use on EveCorp’s servers. Alice knows an encryption method $E(\cdot)$ and a decryption method $D(\cdot)$, while EveCorp only ever has mountains of ciphertexts, and cannot read the data [even though they have it].

Why revealing zero is a Bad Thing

Let us now consider some basic cryptographic attacks. We should assume that EveCorp has access to a long list of plaintext messages $M_i$ and their corresponding ciphertexts $C_i$. Not everything, but perhaps from small leaks or other avenues. Among the messages $M_i$ it is very likely that there are two messages $M_1, M_2$ which are relatively prime. Then an application of the Euclidean Algorithm gives a linear combination of $M_1$ and $M_2$ such that
\begin{equation}
M_1 x + M_2 y = 1 \notag
\end{equation}
for some integers $x,y$. Even though EveCorp doesn’t know the encryption method $E(\cdot)$, since we are assuming that they have access to the corresponding ciphertexts $C_1$ and $C_2$, EveCorp has access to an encryption of $1$ using the ring homomorphism properties:
\begin{equation}\label{eq:encryption_of_one}
E(1) = E(M_1 x + M_2 y) = x E(M_1) + y E(M_2) = x C_1 + y C_2.
\end{equation}
By multiplying $E(1)$ by $m$, EveCorp can produce an encryption of $m$ for any message $m$ of its choosing.

Now suppose that EveCorp can always recognize an encryption of $0$. Then EveCorp can mount a variety of attacks exposing information about the data it holds.

For example, EveCorp can test whether a particular message $m$ is contained in the encrypted dataset. First, EveCorp generates a ciphertext $C_m$ for $m$ by multiplying $E(1)$ by $m$, as in \eqref{eq:encryption_of_one}. Then for each ciphertext $C$ in the dataset, EveCorp computes $C - C_m$. If $m$ is contained in the dataset, then $C - C_m$ will be an encryption of $0$ for the $C$ corresponding to $m$. EveCorp recognizes this, and now knows that $m$ is in the data. To be more specific, perhaps a list of encrypted names of medical patients appears in the data, and EveCorp wants to see if JohnDoe is in that list. If they can recognize encryptions of $0$, then EveCorp can access this information.
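
The following sketch plays out this entire attack against a deliberately insecure toy scheme invented here just for illustration: encryption is $E(m) = m + rn$ for a secret modulus $n$ and random $r$, which is both additively and multiplicatively homomorphic, and the zero-recognition oracle is simulated using the secret key. (This is not a real homomorphic encryption scheme.)

# A toy, insecure ring-homomorphic "scheme": E(m) = m + r*n for a secret n.
import random

n = 1009                               # the secret modulus, known only to Alice

def E(m):
    return m + random.randrange(1, 10**6) * n

def recognizes_zero(c):                # the zero oracle EveCorp hypothetically has;
    return c % n == 0                  # simulated here using the secret key

# EveCorp knows two plaintext/ciphertext pairs with gcd(M1, M2) = 1
M1, M2 = 35, 18
C1, C2 = E(M1), E(M2)

def egcd(a, b):                        # extended Euclid: (g, x, y) with a*x + b*y = g
    if b == 0:
        return (a, 1, 0)
    g, x, y = egcd(b, a % b)
    return (g, y, x - (a // b) * y)

g, x, y = egcd(M1, M2)                 # x*M1 + y*M2 = 1
E1 = x * C1 + y * C2                   # an encryption of 1, as in the equation above
Cm = 42 * E1                           # an encryption of the chosen message m = 42

print(recognizes_zero(E(42) - Cm))     # True: 42 is detected in the "dataset"
print(recognizes_zero(E(7) - Cm))      # False

Note that this toy $E(\cdot)$ is already multivalued, since it adds a fresh random multiple of $n$ each time; that entropy is the subject of the remark below.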

And thus it is unacceptable for external entities to be able to consistently recognize encryptions of $0$.

Up to now, I’ve been a bit loose by saying “an encryption of zero” or “an encryption of $m$”. The reason for this is that to protect against recognition of encryptions of $0$, some entropy is added to the encryption function $E(\cdot)$, making it multivalued. So if we have a message $M$ and we encrypt it once to get $E(M)$, and we encrypt $M$ later and get $E'(M)$, it is often not true that $E(M) = E'(M)$, even though they are both encryptions of the same message. But these systems are designed so that $D(E(M)) = D(E'(M)) = M$, so that the added entropy doesn’t affect decryption.

This is a separate matter, and something that I will probably return to later.

Posted in Crypto, Math.NT, Mathematics, Programming

A Notebook Preparing for a Talk at Quebec-Maine

This is a notebook containing a representative sample of the code I used to generate the results and pictures presented at the Quebec-Maine Number Theory Conference on 9 October 2016. It was written in a Jupyter Notebook using Sage 7.3, and later converted for presentation on this site.
There is a version of the notebook available on github. Alternately, a static html version without WordPress formatting is available here. Finally, this notebook is also available in pdf form.
The slides for my talk are available here.

Testing for a Generalized Conjecture on Iterated Sums of Coefficients of Cusp Forms

Let $f$ be a weight $k$ cusp form with Fourier expansion

$$ f(z) = \sum_{n \geq 1} a(n) e(nz). $$

Deligne has shown that $a(n) \ll n^{\frac{k-1}{2} + \epsilon}$. It is conjectured that

$$ S_f^1(X) := \sum_{m \leq X} a(m) \ll X^{\frac{k-1}{2} + \frac{1}{4} + \epsilon}. $$

It is known that this holds on average, and we recently showed that this holds on average in short intervals.
(See HKLDW1, HKLDW2, and HKLDW3 for details and an overview of work in this area).
This is particularly notable, as the resulting exponent is only 1/4 higher than that of a single coefficient.
This indicates extreme cancellation, far more than what is implied merely by the signs of $a(n)$ being random.

It seems that we also have

$$ \sum_{m \leq X} S_f^1(m) \ll X^{\frac{k-1}{2} + \frac{2}{4} + \epsilon}. $$

That is, the sum of sums seems to add in only an additional 1/4 exponent.
This is unexpected and a bit mysterious.

The purpose of this notebook is to explore this and higher conjectures.
Define the $j$th iterated sum as

$$ S_f^j(X) := \sum_{m \leq X} S_f^{j-1} (m).$$

Then we numerically estimate bounds on the exponent $\delta(j)$ such that

$$ S_f^j(X) \ll X^{\frac{k-1}{2} + \delta(j) + \epsilon}. $$

In [1]:
# This was written in SageMath 7.3 through a Jupyter Notebook.
# Jupyter interfaces to sage by loading it as an extension
%load_ext sage

# sage plays strangely with ipython. This re-allows inline plotting
from IPython.display import display, Image

We first need a list of coefficients of one (or more) cusp forms.
For initial investigation, we begin with a list of 50,000 coefficients of the weight $12$ cusp form on $\text{SL}(2, \mathbb{Z})$, $\Delta(z)$, i.e. Ramanujan’s delta function.
We will use the data associated to these 50,000 coefficients for pictorial investigation as well.

We will also perform some numerical investigation.
For this, we will use the first 2.5 million coefficients of $\Delta(z)$.

In [2]:
# Gather 10 coefficients for simple checking
check_10 = delta_qexp(11).coefficients()
print check_10

fiftyk_coeffs = delta_qexp(50000).coefficients()
print fiftyk_coeffs[:10] # these match expected

twomil_coeffs = delta_qexp(2500000).coefficients()
print twomil_coeffs[:10] # these also match expected
[1, -24, 252, -1472, 4830, -6048, -16744, 84480, -113643, -115920]
[1, -24, 252, -1472, 4830, -6048, -16744, 84480, -113643, -115920]
[1, -24, 252, -1472, 4830, -6048, -16744, 84480, -113643, -115920]
In [3]:
# Function which iterates partial sums from a list of coefficients

def partial_sum(baselist):
    ret_list = [baselist[0]]
    for b in baselist[1:]:
        ret_list.append(ret_list[-1] + b)
    return ret_list

print check_10
print partial_sum(check_10) # Should be the partial sums
[1, -24, 252, -1472, 4830, -6048, -16744, 84480, -113643, -115920]
[1, -23, 229, -1243, 3587, -2461, -19205, 65275, -48368, -164288]
In [4]:
# Calculate the first 10 iterated partial sums
# We store them in a single list, `sums_list`
# the zeroth element of the list is the array of initial coefficients
# the first element is the array of first partial sums, S_f(n)
# the second element is the array of second iterated partial sums, S_f^2(n)

fiftyk_sums_list = []
fiftyk_sums_list.append(fiftyk_coeffs) # zeroth index contains coefficients
for j in range(10):                    # jth index contains jth iterate
    fiftyk_sums_list.append(partial_sum(fiftyk_sums_list[-1]))
    
print partial_sum(check_10)
print fiftyk_sums_list[1][:10]         # should match above
    
twomil_sums_list = []
twomil_sums_list.append(twomil_coeffs) # zeroth index contains coefficients
for j in range(10):                    # jth index contains jth iterate
    twomil_sums_list.append(partial_sum(twomil_sums_list[-1]))
    
print twomil_sums_list[1][:10]         # should match above
[1, -23, 229, -1243, 3587, -2461, -19205, 65275, -48368, -164288]
[1, -23, 229, -1243, 3587, -2461, -19205, 65275, -48368, -164288]
[1, -23, 229, -1243, 3587, -2461, -19205, 65275, -48368, -164288]
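
As an aside, one rough way to eyeball the exponent $\delta(j)$ from data like this (a sketch of my own devising here, not necessarily the method used later in this notebook) is to fit a least-squares line to $\log \lvert S_f^j(n) \rvert$ against $\log n$ at the running maxima, where the implied constant matters least.

# Rough exponent estimate: slope of log|S| against log n through the
# running maxima of |S_f^j(n)|; subtracting (k-1)/2 = 5.5 leaves delta(j).
import math

def estimate_delta(iterated_sums, weight_shift=5.5):
    peaks, record = [], 0
    for n, s in enumerate(iterated_sums, 1):
        if n > 100 and abs(s) > record:   # skip small-n noise
            record = abs(s)
            peaks.append((math.log(n), math.log(record)))
    mean_x = sum(px for px, _ in peaks) / len(peaks)
    mean_y = sum(py for _, py in peaks) / len(peaks)
    slope = sum((px - mean_x) * (py - mean_y) for px, py in peaks) \
          / sum((px - mean_x)**2 for px, _ in peaks)
    return slope - weight_shift

print(estimate_delta(fiftyk_sums_list[1]))   # conjecturally near 1/4 for j = 1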

As is easily visible, the sums alternate in sign very rapidly.
For instance, we believe that the first partial sums should change sign about once every $X^{1/4}$ terms in the interval $[X, 2X]$.
In this exploration, we are interested in the sizes of the coefficients.
But in HKLDW3, we investigated some of the sign changes of the partial sums.

Now seems like a nice time to briefly look at the data we currently have.
What do the first 50 thousand coefficients look like?
We normalize them, getting $A(n) = a(n)/n^{5.5}$, and plot these normalized coefficients.

In [5]:
norm_list = []
for n,e in enumerate(fiftyk_coeffs, 1):
    normalized_element = 1.0 * e / (1.0 * n**(5.5))
    norm_list.append(normalized_element)
print norm_list[:10]
[1.00000000000000, -0.530330085889911, 0.598733612492945, -0.718750000000000, 0.691213333204735, -0.317526448138560, -0.376547696558964, 0.911504835123284, -0.641518061271148, -0.366571226366719]
In [6]:
# Make a quick display
normed_coeffs_plot = scatter_plot(zip(range(1, len(norm_list) + 1), norm_list), markersize=.02)
normed_coeffs_plot.save("normed_coeffs_plot.png")
display(Image("normed_coeffs_plot.png"))

Since some figures will feature prominently in the talk I’m giving at Quebec-Maine, let us make high-quality figures now.


Posted in Math.NT, Mathematics, Open, Programming, sagemath

Paper: Sign Changes of Coefficients and Sums of Coefficients of Cusp Forms

This is joint work with Thomas Hulse, Chan Ieong Kuan, and Alex Walker, and is another sequel to our previous work. This is the third in a trio of papers, and completes an answer to a question posed by our advisor Jeff Hoffstein two years ago.

We have just uploaded a preprint to the arXiv giving conditions that guarantee that a sequence of numbers contains infinitely many sign changes. More generally, if the sequence consists of complex numbers, then we give conditions that guarantee sign changes in a generalized sense.

Let $\mathcal{W}(\theta_1, \theta_2) := \{ re^{i\theta} : r \geq 0, \theta \in [\theta_1, \theta_2] \}$ denote a wedge in the complex plane.

Suppose $\{a(n)\}$ is a sequence of complex numbers satisfying the following conditions:

  1. $a(n) \ll n^\alpha$,
  2. $\sum_{n \leq X} a(n) \ll X^\beta$,
  3. $\sum_{n \leq X} \lvert a(n) \rvert^2 = c_1 X^{\gamma_1} + O(X^{\eta_1})$,

where $\alpha, \beta, c_1, \gamma_1$, and $\eta_1$ are all real numbers $\geq 0$. Then for any $r$ satisfying $\max(\alpha+\beta, \eta_1) - (\gamma_1 - 1) < r < 1$, the sequence $\{a(n)\}$ has at least one term $a(n)$ with $n \in [X, X+X^r)$ lying outside any wedge $\mathcal{W}(\theta_1, \theta_2)$ with $0 \leq \theta_2 - \theta_1 < \pi$, for all sufficiently large $X$.

These wedges can be thought of as just slightly smaller than a half-plane. For a complex number to escape a half plane is analogous to a real number changing sign. So we should think of this result as guaranteeing a sort of sign change in intervals of width $X^r$ for all sufficiently large $X$.

The intuition behind this result is very straightforward. If the sum of coefficients is small while the sum of the squares of the coefficients are large, then the sum of coefficients must experience a lot of cancellation. The fact that we can get quantitative results on the number of sign changes is merely a task of bookkeeping.

Both the statement and proof are based on very similar criteria for sign changes when $\{a(n)\}$ is a sequence of real numbers first noticed by Ram Murty and Jaban Meher. However, if in addition it is known that

\begin{equation}
\sum_{n \leq X} (a(n))^2 = c_2 X^{\gamma_2} + O(X^{\eta_2}),
\end{equation}

and that $\max(\alpha+\beta, \eta_1, \eta_2) - (\max(\gamma_1, \gamma_2) - 1) < r < 1$, then generically both sequences $\{\text{Re}(a(n))\}$ and $\{\text{Im}(a(n))\}$ contain at least one sign change for some $n$ in $[X, X + X^r)$ for all sufficiently large $X$. In other words, we can detect sign changes for both the real and imaginary parts in intervals, which is a bit more special.

It is natural to ask for even more specific detection of sign changes. For instance, knowing specific information about the distribution of the arguments of $a(n)$ would be interesting, and very closely related to the Sato-Tate Conjectures. But we do not yet know how to investigate this distribution.

In practice, we often understand the various criteria for the application of these two sign changes results by investigating the Dirichlet series
\begin{align}
&\sum_{n \geq 1} \frac{a(n)}{n^s} \\
&\sum_{n \geq 1} \frac{S_f(n)}{n^s} \\
&\sum_{n \geq 1} \frac{\lvert S_f(n) \rvert^2}{n^s} \\
&\sum_{n \geq 1} \frac{S_f(n)^2}{n^s},
\end{align}
where
\begin{equation}
S_f(n) = \sum_{m \leq n} a(m).
\end{equation}

In the case of holomorphic cusp forms, the two previous joint projects with this group investigated exactly the Dirichlet series above. In the paper, we formulate some slightly more general criteria guaranteeing sign changes based directly on the analytic properties of the Dirichlet series involved.

In this paper, we apply our sign change results to our previous work to show that $S_f(n)$ changes sign in each interval $[X, X + X^{\frac{2}{3} + \epsilon})$ for sufficiently large $X$. Further, if there are coefficients with $\text{Im} a(n) \neq 0$, then the real and imaginary parts each change signs in those intervals.

We apply our sign change results to single coefficients of $\text{GL}(2)$ cusp forms (and specifically full integral weight holomorphic cusp forms, half-integral weight holomorphic cusp forms, and Maass forms). In large part these are minor improvements over folklore and what is known, except for the extension to complex coefficients.

We also apply our sign change results to single isolated coefficients $A(1,m)$ of $\text{GL}(3)$ Maass forms. This seems to be a novel result, and adds to the very sparse literature on sign changes of sequences associated to $\text{GL}(3)$ objects. Murty and Meher recently proved a general sign change result for $\text{GL}(n)$ objects which is similar in feel.

As a final application, we also consider sign changes of partial sums of $\nu$-normalized coefficients. Let
\begin{equation}
S_f^\nu(X) := \sum_{n \leq X} \frac{a(n)}{n^{\nu}}.
\end{equation}
As $\nu$ gets larger, the individual coefficients $a(n)n^{-\nu}$ become smaller. So one should expect the sign changes of $\{S_f^\nu(n)\}$ to change behaviour with $\nu$. And in particular, as $\nu$ gets very large, the number of sign changes of $S_f^\nu$ should decrease.

Interestingly, in the case of holomorphic cusp forms of weight $k$, we are able to show that there are sign changes of $S_f^\nu(n)$ in intervals even for normalizations $\nu$ a bit above $\nu = \frac{k-1}{2}$. This is particularly interesting as $a(n) \ll n^{\frac{k-1}{2} + \epsilon}$, so for $\nu > \frac{k-1}{2}$ the coefficients $a(n)n^{-\nu}$ are decreasing in size with $n$. We are able to show that when $\nu = \frac{k-1}{2} + \frac{1}{6} - \epsilon$, the sequence $\{S_f^\nu(n)\}$ has at least one sign change for $n$ in $[X, 2X)$ for all sufficiently large $X$.

It may help to consider a simpler example to understand why this is surprising. Consider the classic example of a sequence $b(n)$, where $b(n) = 1$ or $b(n) = -1$ randomly, with equal probability. Then the expected size of the partial sums of $b(n)$ up to $n$ is about $\sqrt{n}$. This is an example of square-root cancellation, and such behaviour is a common point of comparison. Similarly, the number of sign changes of the partial sums of $b(n)$ is also expected to be about $\sqrt{n}$.

Suppose now that $b(n) = \frac{\pm 1}{\sqrt{n}}$. If the first term is $1$, then it takes more than the second term being negative to make the overall sum negative. And if the first two terms are positive, then it takes more than the following three terms being negative to make the overall sum negative. So sign changes of the partial sums are much rarer. In fact, they’re exceedingly rare, and one might barely detect more than a dozen through computational experiment (although one should still expect infinitely many).
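
This rarity is easy to see in simulation. Here is a quick experiment (an illustration of the heuristic, not a computation from the paper) counting the sign changes of the partial sums of random $b(n) = \pm 1/\sqrt{n}$:

# Count sign changes of partial sums of b(n) = +-1/sqrt(n), with signs
# chosen uniformly at random.
import math, random

def count_sign_changes(N):
    s, last_sign, changes = 0.0, 0, 0
    for n in range(1, N + 1):
        s += random.choice((1.0, -1.0)) / math.sqrt(n)
        sign = (s > 0) - (s < 0)
        if sign != 0:
            if last_sign != 0 and sign != last_sign:
                changes += 1
            last_sign = sign
    return changes

print([count_sign_changes(1000) for _ in range(10)])  # often around a dozen or fewer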

This regularity, in spite of the decreasing size of the individual coefficients $a(n)n^{-\nu}$, suggests an interesting regularity in the sign changes of the individual $a(n)$. We do not know how to understand or measure this effect or its regularity, and for now it remains an entirely qualitative observation.

For more details and specific references, see the paper on the arXiv.

Posted in Math.NT, Mathematics