The purpose of this note is to describe the large effects of having no internet at my home for the last four weeks. I’m at my home about half the time, leading to the title.

I have become accustomed to having the internet at all times. I now see that many of my habits involved the internet. In the mornings and evenings, I would check HackerNews, longform, and reddit for interesting reads. Invariably there are more interesting-seeming things than I could read, and my *Checkout* bookmarks list is a growing, hundreds-of-items-long list of maybe-interesting stuff. In the in-between times throughout the day, I would check out a few of these bookmarks.

All in all, I would spend an enormous amount of time reading random interesting tidbits, even though much of this time was spread out in the “in-betweens” in my day.

When I didn’t have internet at my home, I had to fill all those “in-between” moments, as well as my waking and sleeping moments, with something else. Faced with the necessity of doing something, I filled most of these moments with reading books. Made out of paper. (The same sort of books whose sales are rising compared to ebooks, contrary to most predictions a few years ago).

I’d forgotten how much I enjoyed reading a book in large chunks, in very few sittings. I usually have an ebook on my phone that I read during commutes, and perhaps most of my idle reading over the last several years has been in 20 page increments. The key phrase here is “idle reading”. I now set aside time to “actively read”, in perhaps 100 page increments. Reading enables a “flow state” very similar to the sensation I get when mathing continuously, or programming continuously, for a long period of time. I not only read more, but I enjoy what I’m reading more.

As a youth, I would read all the time. Fun fact: at one time, I’d read almost every book in the Star Wars expanded universe. There were over a hundred, and they were all canon (before Disney paved over the universe to make room). I learned to love reading by reading science fiction, and the first novel I remember reading was a copy of Andre Norton’s “The Beastmaster” (… which is great. A part telepath part Navajo soldier moves to another planet. Then it’s a space western. What’s not to love?).

My primary source of books is the library at the University of Warwick. Whether through differences in continental taste or simply a case of different focus, the University Library doesn’t have many books in its fiction collection that I’ve been intending to read. I realize now that most of the nonfiction I read originates on the internet, while much of the fiction I read comes from books. Encouraged by a lack of alternatives, I picked up many more, and more varied, nonfiction books than I would otherwise have.

As an unexpected side effect, I found that I would also carefully download some of the articles I had identified as “interesting” a bit before I headed home from the office. Without internet, I read far more of my *Checkout* bookmarks than I did with internet. Weird. Correspondingly, I found that I spent a bit more time cutting down the false-positive rate — I used to bookmark almost anything that I thought might be interesting, but which I wasn’t going to read right then. Now I separated the wheat from the chaff, as harvesting wheat takes time. (Perhaps this is something I should do more often. I recognize that there are services and newsletters that promise to identify great material, but somehow none of them has worked better for my tastes than HackerNews or longform. And both of these have questionable signal to noise.)

The result is that I’ve goofed off reading probably about the same amount of time, but in fewer topics and at greater depth in each. It’s easy to jump from 10 page article to 10 page article online; when the medium is books, things come in larger chunks.

I *feel* more productive reading a book, even though I don’t actually attribute much to the difference. There may be something to the act of reading contiguously and continuously for long periods of time, though. This correlated with an overall increase in my “chunking” of tasks across continuous blocks of time, instead of loosely multitasking. I think this is for the better.

I now have internet at my flat. Some habits will slide back, but there are other new habits that I will keep. I’ll keep my bedroom computer-free. In the evening, this means I read books before I sleep. In the morning, this means I must leave and go to the other room before I waste any time on online whatevers. Both of these are good. And I’ll try to continue to chunk time.

To end, I’ll note what I read in the last month, along with a few notes about each.

From best to worst.

- The best fiction I read was *The Three Body Problem*, by Cixin Liu. I’d heard lots about this book. It’s Chinese scifi, and much of the story takes place against the backdrop of the Chinese cultural revolution… which I know embarrassingly little about. The moral and philosophical underpinnings of this book are interesting and atypical (to me). At its core are various groups of people who have lost faith in aspects of science, or humanity, or both. I was unprepared for the many (hundreds?) of pages of philosophizing in the book, but I understood why it was there. This aspect reminded me of the last half of *Anathem* by Stephenson (perhaps the best book I’ve read in the last few years), which also had many (also hundreds?) of pages of philosophizing. I love this book, I recommend it. And I note that I read it in four sittings. There are two more books completing a trilogy, and I will read them once I can get my hands on them. [No library within 50 miles of me has them. I did buy the first one, though. Perhaps I’ll buy the other two.]
- The second best was *The Lathe of Heaven*, by Ursula Le Guin. This is some classic fantasy, and is pretty mindbending. I think the feel of many of Ursula Le Guin’s books is very similar — there are many interesting ideas throughout, but the book deliberately loses coherence as the flow and fury of the plot reaches a climax. I like *The Lathe of Heaven* more than *The Wizard of Earthsea*, and about the same as *The Left Hand of Darkness*, also by Le Guin. I read this book in three sittings.
- I read three of the Witcher books, by Andrzej Sapkowski: *The Sword of Destiny*, *Blood of Elves*, and *Time of Contempt*. These are fun, not particularly deep reads. There is a taste of moral ambiguity that I like, as it’s different from what I normally find. On the other hand, Sapkowski often uses humor or ambiguity in place of a meaningful, coherent plot. *The Sword of Destiny* is a collection of short tales, and I think his short tales are better than his novels — entirely because one doesn’t need or expect coherence from short stories.

I’m currently reading *The Confusion* by Neal Stephenson, book two of the Baroque Cycle. Right now, I am exactly 1 page in.

I rank these from those I most enjoyed to those I least enjoyed.

- *How Equal Temperament Ruined Harmony*, by Duffin. This was described to me as an introduction to music theory [in fact, I noted it from a comment thread on hackernews somewhere], but really it is a treatise on the history of tuning and temperaments. It turns out that modern equal temperament suffers from many flaws that aren’t commonly taught. When I got back to the office after reading this book, I spent a good amount of time on youtube listening to songs in meantone tuning and just intonation. There is a difference! I read this book in 2 sittings — it’s short, pretty simple, and generally nice. However there are several long passages that are simply better to skip. Nonetheless I learned a lot.
- *A Random Walk down Wall Street*, by Burton Malkiel. I didn’t know too much about investing before reading this book. I wouldn’t actually say that I know too much after reading it either, but the book is about investing. I was warned that reading this book would make me think that the only way to really invest is to purchase index funds. And indeed, that is the overwhelming (and explicit) takeaway from the book. But I found the book surprisingly readable, and read it very quickly. I find that some of the analysis is biased towards long-term investing, even as a basis of comparison.
- *Guesstimation*, by Weinstein. Ok, perhaps it is not fair to say that one “reads” this book. It consists of many Fermi-style questions (how-many-golf-balls-does-it-take-to-fill-up-a-football-stadium type questions), followed by their analysis. So I read a question, sit down and do my own analysis, and then compare it against Weinstein’s. I was stunned at how often the analyses were tremendously similar and arrived at essentially the same order of magnitude. [But not always, that’s for sure. There are also lots of things that I estimate very, very poorly.] There’s a small subgenre of “popular mathematics for the reader who is willing to take out a pencil and paper” (which can’t have a big readership, but which I thoroughly enjoy), and this is a good book within that subgenre. I’m currently working through its sequel.
- *Nature’s Numbers*, by Ian Stewart. This is a pop math book. Ian Stewart is an emeritus professor at my university, so it seemed appropriate to read something of his. This is a surprisingly fast read (I read it in a single sitting). Stewart is known for writing approachable popular math accounts, and this fits.
- *The Structure of Scientific Revolutions*, by Thomas Kuhn. This is metascience. I read the first half of this book/essay very quickly, and I struggled through its second half. It came highly recommended to me, but I found the signal to noise ratio to be pretty low. It might be that I wasn’t very willing to navigate the careful treading around equivocation throughout. However, I think many of the ideas are good. I don’t know if someone has written a 30 page summary, but I think this may be possible — and a good alternative to the book/essay itself.

I’m now reading *Grit*, by Angela Duckworth. Another side effect of reading more is that I find myself reading one fiction, one non-fiction, and one “simple” book at the same time.

Written while on a bus without internet to Heathrow, minus the pictures (which were added at Heathrow).

The primary purpose of this note is to collect a few hitherto unnoticed or unpublished results concerning gaps between powers of consecutive primes. The study of gaps between primes has attracted many mathematicians and led to many deep realizations in number theory. The literature is full of conjectures, both open and closed, concerning the nature of primes.

In a series of stunning developments, Zhang, Maynard, and Tao^{1}^{2} made the first major progress towards proving the prime $k$-tuple conjecture, and successfully proved the existence of infinitely many pairs of primes differing by a fixed number. As of now, the best known result is due to the massive collaborative Polymath8 project,^{3} which showed that there are infinitely many pairs of primes of the form $p, p+246$. In an excellent expository article,^{4} Granville describes the history and ideas leading to this breakthrough, and also discusses some of the potential impact of the results. This note should be thought of as a few more results following from the ideas of Zhang, Maynard, Tao, and the Polymath8 project.

Throughout, $p_n$ will refer to the $n$th prime number. In a paper,^{5} Andrica conjectured that

\begin{equation}\label{eq:Andrica_conj}

\sqrt{p_{n+1}} - \sqrt{p_n} < 1

\end{equation}

holds for all $n$. This conjecture, and related statements, is described in Guy’s Unsolved Problems in Number Theory.^{6}

It is quickly checked in sagemath that this holds for primes up to $4.36 \cdot 10^{8}$:

```
# Sage version 8.0.rc1
# started with `sage -ipython`
# sage has pari/GP, which can generate primes super quickly
from sage.all import primes_first_n

# import izip since we'll be zipping a huge list, and sage uses python2,
# whose plain zip builds the whole list in memory
from itertools import izip

# The magic number 23150000 appears because pari/GP can't compute
# primes above 436273290 due to fixed precision arithmetic
ps = primes_first_n(23150000)  # This is every prime up to 436006979

# Verify Andrica's Conjecture for all consecutive prime pairs up to
# 436006979, printing each new record gap as it is found
gap = 0
for a, b in izip(ps[:-1], ps[1:]):
    if b**.5 - a**.5 > gap:
        A, B, gap = a, b, b**.5 - a**.5
        print(gap)
print("")
print(A)
print(B)
```

In approximately 20 seconds on my machine (so it would not be hard to go much higher, except that I would have to go beyond pari/GP to generate primes), this completes and prints the following output.

```
0.317837245196
0.504017169931
0.670873479291

7
11
```

Thus the largest value of $\sqrt{p_{n+1}} - \sqrt{p_n}$ was merely $0.670\ldots$, and occurred for the gap between $7$ and $11$.

So it appears very likely that the conjecture is true. However it is also likely that new, novel ideas are necessary before the conjecture is decided.

Andrica’s Conjecture can also be stated in terms of prime gaps. Let $g_n = p_{n+1} - p_n$ be the gap between the $n$th prime and the $(n+1)$st prime. Then Andrica’s Conjecture is equivalent to the claim that $g_n < 2 \sqrt{p_n} + 1$. In this direction, the best known result is due to Baker, Harman, and Pintz,^{7} who show that $g_n \ll p_n^{0.525}$.
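This equivalence is easy to check numerically. The sketch below (plain Python, independent of the sage code above) sieves the primes up to $10^5$ and confirms that the two formulations agree pairwise:

```python
# Check that the two formulations of Andrica's Conjecture agree:
# sqrt(q) - sqrt(p) < 1  iff  q - p < 2*sqrt(p) + 1, for consecutive primes p, q.
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

ps = primes_up_to(10 ** 5)
for p, q in zip(ps, ps[1:]):
    assert (q ** 0.5 - p ** 0.5 < 1) == (q - p < 2 * p ** 0.5 + 1)
print("both forms agree for all consecutive primes up to", ps[-1])
```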

In 1985, Sandor^{8} proved that \begin{equation}\label{eq:Sandor} \liminf_{n \to \infty} \sqrt[4]{p_n} (\sqrt{p_{n+1}} - \sqrt{p_n}) = 0. \end{equation} The close relation to Andrica’s Conjecture \eqref{eq:Andrica_conj} is clear. The first result of this note is to strengthen this result.

**Theorem.** Let $\alpha, \beta \geq 0$, and $\alpha + \beta < 1$. Then

\begin{equation}\label{eq:main}

\liminf_{n \to \infty} p_n^\beta (p_{n+1}^\alpha - p_n^\alpha) = 0.

\end{equation}

We prove this theorem below. Choosing $\alpha = \frac{1}{2}, \beta = \frac{1}{4}$ verifies Sandor’s result \eqref{eq:Sandor}. But choosing $\alpha = \frac{1}{2}, \beta = \frac{1}{2} - \epsilon$ for a small $\epsilon > 0$ gives stronger results.

This theorem leads naturally to the following conjecture.

**Conjecture.** For any $0 \leq \alpha < 1$, there exists a constant $C(\alpha)$ such that

\begin{equation}

p_{n+1}^\alpha - p_{n}^\alpha \leq C(\alpha)

\end{equation}

for all $n$.

A simple heuristic argument, given in the last section below, shows that this Conjecture follows from Cramer’s Conjecture.

It is interesting to note that there are generalizations of Andrica’s Conjecture. One can ask what the smallest $\gamma$ is such that

\begin{equation}

p_{n+1}^{\gamma} - p_n^{\gamma} = 1

\end{equation}

has a solution. This is known as the Smarandache Conjecture, and it is believed that the smallest such $\gamma$ is approximately

\begin{equation}

\gamma \approx 0.5671481302539\ldots

\end{equation}

The digits of this constant, sometimes called “the Smarandache constant,” are the contents of sequence A038458 on the OEIS. It is possible to generalize this question as well.
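As a quick illustration (a sketch, not a proof), the constant can be approximated by bisection using the gap between $113$ and $127$, which is believed to realize the extremal gap for this problem:

```python
# Approximate the Smarandache constant: the gamma solving
# 127^gamma - 113^gamma = 1, found by bisection. The pair (113, 127)
# is believed to give the extremal gap for this problem.
def smarandache_constant(lo=0.5, hi=0.6, iterations=100):
    f = lambda x: 127 ** x - 113 ** x - 1
    for _ in range(iterations):
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid  # root lies above mid
        else:
            hi = mid  # root lies at or below mid
    return (lo + hi) / 2

print(smarandache_constant())  # approximately 0.567148...
```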

**Open Question.** For any fixed constant $C$, what is the smallest $\alpha = \alpha(C)$ such that

\begin{equation}

p_{n+1}^\alpha - p_n^\alpha = C

\end{equation}

has solutions? In particular, how does $\alpha(C)$ behave as a function of $C$?

This question does not seem to have been approached in any sort of generality, aside from the case when $C = 1$.

The idea of the proof is very straightforward. We estimate \eqref{eq:main} across prime pairs $p, p+246$, relying on the recent proof from Polymath8 that infinitely many such primes exist.

Fix $\alpha, \beta \geq 0$ with $\alpha + \beta < 1$. Applying the mean value theorem of calculus to the function $x \mapsto x^\alpha$ shows that

\begin{align}

p^\beta \big( (p+246)^\alpha - p^\alpha \big) &= p^\beta \cdot 246 \alpha q^{\alpha - 1} \\

&\leq p^\beta \cdot 246 \alpha p^{\alpha - 1} = 246 \alpha p^{\alpha + \beta - 1}, \label{eq:bound}

\end{align}

for some $q \in [p, p+246]$. Passing to the inequality in the second line is done by noting that $q^{\alpha - 1}$ is a decreasing function of $q$. As $\alpha + \beta - 1 < 0$, we see that \eqref{eq:bound} goes to zero as $p \to \infty$.

Therefore

\begin{equation}

\liminf_{n \to \infty} p_n^\beta (p_{n+1}^\alpha - p_n^\alpha) = 0,

\end{equation}

as was to be proved.
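To get a feel for the rate of decay in \eqref{eq:bound}, here is a quick numerical sketch (with $p$ ranging over powers of ten rather than actual primes occurring in pairs $p, p+246$, purely for illustration):

```python
# Evaluate p^beta * ((p+246)^alpha - p^alpha) against the bound
# 246 * alpha * p^(alpha + beta - 1), which tends to 0 when alpha + beta < 1.
alpha, beta = 0.45, 0.45
values = []
for p in [10 ** 4, 10 ** 6, 10 ** 8, 10 ** 10, 10 ** 12]:
    value = p ** beta * ((p + 246) ** alpha - p ** alpha)
    bound = 246 * alpha * p ** (alpha + beta - 1)
    values.append(value)
    print(p, round(value, 3), round(bound, 3))

# The values decrease toward zero, as the bound predicts.
assert all(a > b for a, b in zip(values, values[1:]))
```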

Cramer’s Conjecture states that there exists a constant $C$ such that for all sufficiently large $n$,

\begin{equation}

p_{n+1} - p_n < C(\log n)^2.

\end{equation}

Thus for a sufficiently large prime $p$, the subsequent prime is at most $p + C (\log p)^2$. Performing a similar estimation as above shows that

\begin{equation}

(p + C (\log p)^2)^\alpha - p^\alpha \leq C (\log p)^2 \alpha p^{\alpha - 1} =

C \alpha \frac{(\log p)^2}{p^{1 - \alpha}}.

\end{equation}

As the right hand side vanishes as $p \to \infty$, we see that it is natural to expect that the main Conjecture above is true. More generally, we should expect the following, stronger conjecture.

**Conjecture′.** For any $\alpha, \beta \geq 0$ with $\alpha + \beta < 1$, there exists a constant $C(\alpha, \beta)$ such that

\begin{equation}

p_n^\beta (p_{n+1}^\alpha - p_n^\alpha) \leq C(\alpha, \beta).

\end{equation}

I wrote this note in between waiting in never-ending queues while sorting out my internet service and other mundane activities necessary upon moving to another country. I had just read some papers on the arXiv, and I noticed one which referred to the unknown status of Andrica’s Conjecture. So I sat down and wrote this up.

I am somewhat interested in qualitative information concerning the Open Question in the introduction, and I may return to this subject unless someone beats me to it.

This note is (mostly, minus the code) available as a pdf and will shortly appear on the arXiv. It was originally written in LaTeX and converted for display on this site using a set of tools I’ve written based around latex2jax, which is available on my github.

The lmfdb and sagemath are both great things, but they don’t currently talk to each other. Much of the lmfdb calls sage, but the lmfdb also includes vast amounts of data on $L$-functions and modular forms (hence the name) that is not accessible from within sage.

This is an example prototype of an interface to the lmfdb from sage. Keep in mind that this is **a prototype** and every aspect can change. But we hope to show what may be possible in the future. If you have requests, comments, or questions, **please request/comment/ask** either now, or at my email: `david@lowryduda.com`.

Note that this notebook is available on http://davidlowryduda.com or https://gist.github.com/davidlowryduda/deb1f88cc60b6e1243df8dd8f4601cde, and the code is available at https://github.com/davidlowryduda/sage2lmfdb

Let’s dive into an example.

In [1]:

```
# These names will change
from sage.all import *
import LMFDB2sage.elliptic_curves as lmfdb_ecurve
```

In [2]:

```
lmfdb_ecurve.search(rank=1)
```

Out[2]:

This returns 10 elliptic curves of rank 1. But these are a bit different than sage’s elliptic curves.

In [3]:

```
Es = lmfdb_ecurve.search(rank=1)
E = Es[0]
print(type(E))
```

Note that the class of an elliptic curve is an lmfdb `EllipticCurve`. But don’t worry, this is a subclass of a normal elliptic curve. So we can call the normal things one might call on an elliptic curve.


In [4]:

```
# Try autocompleting the following. It has all the things!
print(dir(E))
```

This gives quick access to some data that is not stored within the LMFDB, but which is relatively quickly computable. For example,

In [5]:

```
E.defining_ideal()
```

Out[5]:

But one of the great powers is that there are some things which are computed and stored in the LMFDB, and not in sage. We can now immediately give many examples of curves with a specified conductor and torsion order:

In [6]:

```
Es = lmfdb_ecurve.search(conductor=11050, torsion_order=2)
print("There are {} curves returned.".format(len(Es)))
E = Es[0]
print(E)
```

And for these curves, the lmfdb contains data on their ranks, generators, regulators, and so on.

In [7]:

```
print(E.gens())
print(E.rank())
print(E.regulator())
```

In [8]:

```
res = []
%time for E in Es: res.append(E.gens()); res.append(E.rank()); res.append(E.regulator())
```

That’s pretty fast, and this is because all of this was pulled from the LMFDB when the curves were returned by the `search()` function.

In this case, elliptic curves over the rationals are only an okay example, as they’re really well studied and sage can compute much of the data very quickly. On the other hand, through the LMFDB there are millions of examples and corresponding data at one’s fingertips.

### This is where we’re really looking for input.

Think of what you might want to have easy access to through an interface from sage to the LMFDB, and tell us. We’re actively seeking comments, suggestions, and requests. Elliptic curves over the rationals are a prototype, and the LMFDB has lots of (much more challenging to compute) data. There is data on the LMFDB that is simply not accessible from within sage.

**email: david@lowryduda.com, or post an issue on https://github.com/LMFDB/lmfdb/issues**

## Now let’s describe what’s going on under the hood a little bit

There is an API for the LMFDB at http://beta.lmfdb.org/api/. This API is a bit green, and we will change certain aspects of it to behave better in the future. A call to the API looks like

```
http://beta.lmfdb.org/api/elliptic_curves/curves/?rank=i1&conductor=i11050
```

The result is a large mess of data, which can be exported as json and parsed.

But that’s hard, and the resulting data are not sage objects. They are just strings or ints, and these require time *and thought* to parse.
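As a toy illustration of that parsing step (the label, field names, and coefficient values below are all made up for the example; the real response shape may differ):

```python
import json

# Parse a toy response assumed to have the same rough shape as an LMFDB
# API reply: a "data" list of records, each with string-encoded a-invariants.
sample = '{"data": [{"label": "11050.a1", "ainvs": ["1", "0", "1", "-334", "2368"]}]}'

def curves_from_response(text):
    records = json.loads(text)["data"]
    # Convert string coefficients to integers; the real module hands
    # these to sage's EllipticCurve constructor instead.
    return [(rec["label"], [int(a) for a in rec["ainvs"]]) for rec in records]

print(curves_from_response(sample))
```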

So we created a module in sage that writes the API call and parses the output back into sage objects. The 22 curves given by the above API call are the same 22 curves returned by this call:

In [9]:

```
Es = lmfdb_ecurve.search(rank=1, conductor=11050, max_items=25)
print(len(Es))
E = Es[0]
```

The total functionality of this search function is visible from its current documentation.

In [10]:

```
# Execute this cell for the documentation
print(lmfdb_ecurve.search.__doc__)
```

In [11]:

```
# So, for instance, one could perform the following search, finding a unique elliptic curve
lmfdb_ecurve.search(rank=2, torsion_order=3, degree=4608)
```

Out[11]:

If there are no curves satisfying the search criteria, then a message is displayed and that’s that. These searches may take a couple of seconds to complete.

For example, no elliptic curve in the database has rank 5.

In [12]:

```
lmfdb_ecurve.search(rank=5)
```

Right now, at most 100 curves are returned in a single API call. This is the limit even from directly querying the API. But one can pass in the argument `base_item` (the name will probably change… to `skip`? or perhaps to `offset`?) to start returning at the `base_item`th element.

In [13]:

```
from pprint import pprint
pprint(lmfdb_ecurve.search(rank=1, max_items=3)) # The last item in this list
print('')
pprint(lmfdb_ecurve.search(rank=1, max_items=3, base_item=2)) # should be the first item in this list
```

Included in the documentation is also a bit of hopefulness. Right now, the LMFDB API does not actually accept `max_conductor` or `min_conductor` (or arguments of that type). But it will sometime. (This introduces a few extra difficulties on the server side, and so it will take some extra time to decide how to do this.)

In [14]:

```
lmfdb_ecurve.search(rank=1, min_conductor=500, max_conductor=10000) # Not implemented
```

Our `EllipticCurve_rational_field_lmfdb` class constructs a sage elliptic curve from the json and overrides (some of) the default methods in sage if there is quicker data available on the LMFDB. In principle, this new object is just a sage object with some slightly different methods.

Generically, documentation and introspection on objects from this class should work. Much of sage’s documentation carries through directly.

In [15]:

```
print(E.gens.__doc__)
```

Modified methods should have a note indicating that the data comes from the LMFDB, and then give sage’s documentation. This is not yet implemented. (So if you examine the current version, you can see some incomplete docstrings like `regulator()`.)

In [16]:

```
print(E.regulator.__doc__)
```

Thank you, and if you have any questions, comments, or concerns, please find me/email me/raise an issue on LMFDB’s github.

We now have a variety of results concerning the behavior of the partial sums

$$ S_f(X) = \sum_{n \leq X} a(n) $$

where $f(z) = \sum_{n \geq 1} a(n) e(nz)$ is a GL(2) cuspform. The primary focus of our previous work was to understand the Dirichlet series

$$ D(s, S_f \times S_f) = \sum_{n \geq 1} \frac{S_f(n)^2}{n^s} $$

completely, to give its meromorphic continuation to the plane (this was the major topic of the first paper in the series), and to perform classical complex analysis on this object in order to describe the behavior of $S_f(n)$ and $S_f(n)^2$ (this was done in the first paper, and was the major topic of the second paper of the series). One motivation for studying this type of problem is that bounds for $S_f(n)$ are analogous to understanding the error term in the lattice point discrepancy for circles.

That is, let $S_2(R)$ denote the number of lattice points in a circle of radius $\sqrt{R}$ centered at the origin. Then we expect that $S_2(R)$ is approximately the area of the circle, plus or minus some error term. We write this as

$$ S_2(R) = \pi R + P_2(R),$$

where $P_2(R)$ is the error term. We refer to $P_2(R)$ as the “lattice point discrepancy” — it describes the discrepancy between the number of lattice points in the circle and the area of the circle. Determining the size of $P_2(R)$ is a very famous problem called the Gauss circle problem, and it has been studied for over 200 years. We believe that $P_2(R) = O(R^{1/4 + \epsilon})$, but that is not known to be true.

The Gauss circle problem can be cast in the language of modular forms. Let $\theta(z)$ denote the standard Jacobi theta series,

$$ \theta(z) = \sum_{n \in \mathbb{Z}} e^{2\pi i n^2 z}.$$

Then

$$ \theta^2(z) = 1 + \sum_{n \geq 1} r_2(n) e^{2\pi i n z},$$

where $r_2(n)$ denotes the number of representations of $n$ as a sum of $2$ (positive or negative) squares. The function $\theta^2(z)$ is a modular form of weight $1$ on $\Gamma_0(4)$, but it is not a cuspform. However, the sum

$$ \sum_{n \leq R} r_2(n) = S_2(R),$$

and so the partial sums of the coefficients of $\theta^2(z)$ indicate the number of lattice points in the circle of radius $\sqrt R$. Thus $\theta^2(z)$ gives access to the Gauss circle problem.
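As a concrete sanity check on this correspondence (a standalone Python sketch, not part of the papers), one can count lattice points by brute force and compare against the area $\pi R$:

```python
import math

# S_2(R) = #{(x, y) in Z^2 : x^2 + y^2 <= R}, counted directly,
# compared with the area pi * R of the circle of radius sqrt(R).
def S2(R):
    m = math.isqrt(R)
    return sum(1 for x in range(-m, m + 1)
                 for y in range(-m, m + 1)
                 if x * x + y * y <= R)

R = 10000
count = S2(R)
print(count, round(math.pi * R, 1), round(count - math.pi * R, 1))
```

Even at this modest radius, the discrepancy is tiny compared to the area, consistent with the conjectured $O(R^{1/4+\epsilon})$ bound.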

More generally, one can consider the number of lattice points in a $k$-dimensional sphere of radius $\sqrt R$ centered at the origin, which should approximately be the volume of that sphere,

$$ S_k(R) = \mathrm{Vol}(B(\sqrt R)) + P_k(R) = \sum_{n \leq R} r_k(n),$$

giving a $k$-dimensional lattice point discrepancy. For large dimension $k$, one should expect that the circle problem is sufficient to give good bounds and understanding of the size and error of $S_k(R)$. For $k \geq 5$, the true order of growth for $P_k(R)$ is known (up to constants).
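The same brute-force check works in three dimensions, comparing the count against the ball’s volume (again just an illustrative sketch):

```python
import math

# S_3(R) = #{(x, y, z) in Z^3 : x^2 + y^2 + z^2 <= R}, compared with the
# volume (4/3) * pi * R^(3/2) of the ball of radius sqrt(R).
def S3(R):
    m = math.isqrt(R)
    return sum(1 for x in range(-m, m + 1)
                 for y in range(-m, m + 1)
                 for z in range(-m, m + 1)
                 if x * x + y * y + z * z <= R)

for R in [100, 400, 1600]:
    count = S3(R)
    volume = 4 / 3 * math.pi * R ** 1.5
    print(R, count, round(count - volume, 1))
```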

It happens that the small (meaning 2 or 3) dimensional cases are both the most interesting, given our predilection for 2 and 3 dimensional geometry, and the most enigmatic. For a variety of reasons, the three dimensional case is very challenging to understand, and is perhaps even more enigmatic than the two dimensional case.

Strong evidence for the conjectured size of the lattice point discrepancy comes in the form of mean square estimates. By looking at the square, one doesn’t need to worry about oscillation from positive to negative values. And by averaging over many radii, one hopes to smooth out some of the individual bumps. These mean square estimates take the form

$$\begin{align}

\int_0^X P_2(t)^2 dt &= C X^{3/2} + O(X \log^2 X) \\

\int_0^X P_3(t)^2 dt &= C' X^2 \log X + O(X^2 \sqrt{\log X}).

\end{align}$$

These indicate that the average size of $P_2(R)$ is $R^{1/4}$, and that the average size of $P_3(R)$ is $R^{1/2}$. In the two dimensional case, notice that the error term in the mean square asymptotic has pretty significant separation: it has essentially a $\sqrt X$ power-savings over the main term. But in the three dimensional case, there is no power separation. Even with significant averaging, we are only just capable of distinguishing a main term at all.

It is also interesting, but for more complicated reasons, that the main term in the three dimensional case has a log term within it. This is unique to the three dimensional case. But that is a description for another time.

In a paper that we recently posted to the arxiv, we show that the Dirichlet series

$$ \sum_{n \geq 1} \frac{S_k(n)^2}{n^s} $$

and

$$ \sum_{n \geq 1} \frac{P_k(n)^2}{n^s} $$

for $k \geq 3$ have understandable meromorphic continuation to the plane. Of particular interest is the $k = 3$ case, of course. We then investigate smoothed and unsmoothed mean square results. In particular, we prove results including the following.

**Theorem.**

$$\begin{align} \int_0^\infty P_k(t)^2 e^{-t/X} dt &= C_3 X^2 \log X + C_4 X^{5/2} \\ &\quad + C_k X^{k-1} + O(X^{k-2}). \end{align}$$

In this statement, the term with $C_3$ only appears in dimension $3$, and the term with $C_4$ only appears in dimension $4$. This should really be thought of as saying that we understand the Laplace transform of the square of the lattice point discrepancy as well as can be desired.

We are also able to improve the sharp second mean in the dimension 3 case, showing in particular the following.

**Theorem.** There exists $\lambda > 0$ such that

$$\int_0^X P_3(t)^2 dt = C X^2 \log X + D X^2 + O(X^{2 - \lambda}).$$

We do not actually compute what we might take $\lambda$ to be, but we believe (informally) that $\lambda$ can be taken as $1/5$.

The major themes behind these new results are already present in the first paper in the series. The new ingredients involve handling the behavior of non-cuspforms at the cusps on the analytic side, and handling the apparent main terms (in this case, the volume of the ball) on the combinatorial side.

There is an additional difficulty that arises in the dimension 2 case which makes it distinct. But soon I will describe a different forthcoming work in that case.

Disclaimer: There are several greenhouse gasses, and lots of other things that we’re throwing wantonly into the environment. Considering them makes things incredibly complicated incredibly quickly, so I blithely ignore them in this note.

Such rapid changes have side effects, many of which lead to bad things. That’s why nearly 150 countries ratified the Paris Agreement on Climate Change.^{1} Even if we assume that all these countries will accomplish what they agreed to (which might be challenging for the US),^{2} most nations and advocacy groups are focusing on *increasing efficiency* and *reducing emissions.* These are good goals! But what about all the carbon that is already in the atmosphere?^{3}

You know what else is a problem? Obesity! How are we to solve all of these problems?

Looking at this (very unscientific) graph,^{4} we see that the red isn’t keeping up! Maybe we aren’t using the valuable resource of our own bodies enough! Fat has carbon in it — often over 20% by weight. What if we took advantage of our propensity to become propense? How fat would we need to get to balance last year’s carbon emissions?

That’s what we investigate here.

We need some data. It turns out that, despite knowing that we put *a lot* of carbon into the atmosphere, I don’t have any idea how much *a lot* actually is. Usually it’s given in nice, relatable terms that we’re supposed to be able to make sense of — like estimates on the number of degrees of warming to expect given a certain amount of emissions. So question number one: how much carbon do we put into the atmosphere?

This uses real data from the US Energy Information Administration (in the “International Energy Statistics” dataset). It shows the highest carbon contributors from the year 2014 (the year with the most recent complete data). All countries not explicitly displayed are included in “All Others.”

What does this tell us?^{5} The vertical bars are measured in terms of “Million Metric Tons of CO2”. In total, the world released 33716 MMTons CO2.^{6}

This unit, a million metric tons of CO2, is a bit hard to wrap my head around. Firstly, we should note that only 9195 MMTons of that is carbon, which is what we’re focusing on. To put this in proper perspective, that’s 2700 pounds per person alive today (or 1226 kilograms, for that crowd).^{7}
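A quick back-of-the-envelope check of these numbers (the emissions total is from the chart above; the 2014 world population of roughly 7.5 billion is my assumed round figure):

```python
# Back-of-the-envelope check of the per-person carbon figure.
# Assumptions: 33716 million metric tons of CO2 emitted in 2014 (from the
# text above), and a world population of about 7.5 billion (my round figure).
CO2_MMTONS = 33716
CARBON_FRACTION = 12 / 44          # mass fraction of carbon in CO2
POPULATION = 7.5e9
KG_PER_LB = 0.453592

carbon_mmtons = CO2_MMTONS * CARBON_FRACTION
carbon_kg = carbon_mmtons * 1e9    # 1 MMTon = 1e6 metric tons = 1e9 kg
kg_per_person = carbon_kg / POPULATION
lb_per_person = kg_per_person / KG_PER_LB

print(round(carbon_mmtons))        # ≈ 9195 MMTons of carbon
print(round(kg_per_person))        # ≈ 1226 kg per person
print(round(lb_per_person))        # ≈ 2703 lb per person
```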

So how fat would we need to get to balance one year of carbon emissions? If every man, woman, child, and elder gained a mere 2700 pounds (1226 kilograms!) of pure carbon, we would successfully sequester one year’s worth of carbon.

Unfortunately, that means about 13000 pounds (6000 kilograms) of fat, which is a bit much. So the chart really looks like this.

Wow. So this isn’t a reasonable carbon sequestration plan.^{8} We toss an **unbelievable** amount of carbon into the atmosphere. According to LiveScience, a fully grown T-Rex could weigh as much as 18000 pounds (8160 kilograms). If we assume that the overall body composition of a dinosaur is about the same as a human’s,^{9} so that roughly 20% of a T-Rex’s weight is carbon, then a fully grown T-Rex might have 3590 pounds of carbon within his or her body. This is approximately the same amount of carbon that corresponds to each man, woman, child, and elder’s carbon use in 2014.

That’s a weird thought. How much carbon did we pull out of the ground and burn in 2014? About the same as if every human dug up a fully grown T-Rex, burned it, and then resumed their normal lives.

A fully grown male African elephant can weigh as much as 6000 kilograms. So we might grasp the magnitude of this by thinking of every person unearthing a fully grown male African elephant each year. Alternately, although we can’t gain enough weight to sequester enough carbon, elephants can. We could initiate a policy where every human adopts and raises a new African elephant each year.

I think I’m starting to get a better idea of just how daunting a task large-scale carbon sequestration will actually be. 2700 pounds per person per year. Whoa. Let’s move away from fat, towards better ideas.

Following guidelines set by the US Forest Service for computing tree weight, a fully grown oak tree can weigh as much as 14 metric tons, with as much as 4 metric tons (8800 pounds) being carbon. Thus one fully grown oak tree can hold three people’s average yearly carbon emissions.

Instead of an elephant a year, every person could plant an oak tree every year. (Actually, it just takes one tree for every three people). If these trees never died and were able to grow to complete size, then this would also offset carbon emissions. Conversely, when we cut down and burn trees, they release lots and lots and lots of carbon.

Suppose we did this. So this year, we were to plant 2.5 billion oak trees. That’s one for every three people on Earth. According to Penn State’s Forestry Extension School, a healthy, mature, hardwood forest can have as many as 120 trees per acre. If all 2.5 billion trees were planted at this density together, then this would cover 32552 square miles. The area of South Carolina is 32020 square miles, so we could cover the entire state of South Carolina with newly planted oak trees.^{10}
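Checking the area arithmetic (tree density and tree counts as above; 640 acres per square mile is the standard conversion):

```python
# Check the forest-area figure: 2.5 billion oaks at 120 trees per acre.
TREES = 2.5e9
TREES_PER_ACRE = 120
ACRES_PER_SQ_MILE = 640

acres = TREES / TREES_PER_ACRE
sq_miles = acres / ACRES_PER_SQ_MILE
print(round(sq_miles))     # ≈ 32552 square miles, about the area of South Carolina

# One mature oak (~8800 lb of carbon) holds about three people's
# yearly emissions (~2700 lb each).
print(8800 / 2700)         # ≈ 3.26 people's emissions per tree
```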

Of course, oak trees are probably not the best choice for a carbon sequestration tree, and there are probably plants that, in optimal growth conditions, hold a much higher carbon per square mile concentration.^{11} Perhaps some trees are three times as effective (a Maryland per year), or maybe even ten times as effective (a Delaware per year).

But that is the magnitude of the effort. Now if you’ll excuse me, I’m going to go hug a tree.

]]>Here are the slides from my defense.

After the defense, I gave Jeff and Jill a poster of our family tree. I made this using data from Math Genealogy, which has so much data.

$$ \int_0^1 f(x) dx = F(1) - F(0). $$

The dream of latex2html5 is to be able to describe a diagram using the language of PSTricks inside LaTeX, throw in a bit of sugar to describe how interactivity should work on the web, and then render this to a beautiful svg using javascript.

Unfortunately, I did not try to make this work on WordPress (as WordPress is a bit finicky about how it interacts with javascript). So instead, I wrote a more detailed description of latex2html5, including some examples and some criticisms, on my non-WordPress website david.lowryduda.com.

]]>

The story began when (with Tom Hulse, Chan Ieong Kuan, and Alex Walker — and with helpful input from Mehmet Kiral, Jeff Hoffstein, and others) we introduced and studied the Dirichlet series

$$\begin{equation}

\sum_{n \geq 1} \frac{S(n)^2}{n^s}, \notag

\end{equation}$$

where $S(n)$ is a sum of the first $n$ Fourier coefficients of an automorphic form on $\text{GL}(2)$. We’ve done this successfully with a variety of automorphic forms, leading to new results for averages, short-interval averages, sign changes, and mean-square estimates of the error for several classical problems. Many of these papers and results have been discussed in other places on this site.

Ultimately, the problem becomes acquiring a sufficiently detailed understanding of the spectral behavior of various forms (or more correctly, of the behavior of the spectral expansion of a Poincare series against various forms).

We are continuing to research and study a variety of problems through this general approach.

The slides for this talk are available here.

]]>In application, this is somewhat more complicated. But to show the technique, I apply it to reprove some classic bounds on $\text{GL}(2)$ $L$-functions.

This note is also available as a pdf. It was first written as a LaTeX document, and then modified to fit into WordPress through latex2jax.

Consider a Dirichlet series

$$\begin{equation}

D(s) = \sum_{n \geq 1} \frac{a(n)}{n^s}. \notag

\end{equation}$$

Suppose that this Dirichlet series converges absolutely for $\Re s > 1$, has meromorphic continuation to the complex plane, and satisfies a functional equation of shape

$$\begin{equation}

\Lambda(s) := G(s) D(s) = \epsilon \Lambda(1-s), \notag

\end{equation}$$

where $\lvert \epsilon \rvert = 1$ and $G(s)$ is a product of Gamma factors.

Dirichlet series are often used as a tool to study number theoretic functions with multiplicative properties. By studying the analytic properties of the Dirichlet series, one hopes to extract information about the coefficients $a(n)$. Some of the most common interesting information within Dirichlet series comes from partial sums

$$\begin{equation}

S(n) = \sum_{m \leq n} a(m). \notag

\end{equation}$$

For example, the Gauss Circle and Dirichlet Divisor problems can both be stated as problems concerning sums of coefficients of Dirichlet series.

One can try to understand the partial sum directly by understanding the integral transform

$$\begin{equation}

S(n) = \frac{1}{2\pi i} \int_{(2)} D(s) \frac{X^s}{s} ds, \notag

\end{equation}$$

a Perron integral. However, it is often challenging to understand this integral, as delicate properties concerning the convergence of the integral often come into play.

Instead, one often tries to understand a smoothed sum of the form

$$\begin{equation}

\sum_{m \geq 1} a(m) v(m) \notag

\end{equation}$$

where $v(m)$ is a smooth function that vanishes or decays extremely quickly for values of $m$ larger than $n$. A large class of smoothed sums can be obtained by starting with a very nicely behaved weight function $v(m)$ and taking its Mellin transform

$$\begin{equation}

V(s) = \int_0^\infty v(x) x^s \frac{dx}{x}. \notag

\end{equation}$$

Then Mellin inversion gives that

$$\begin{equation}

\sum_{m \geq 1} a(m) v(m/X) = \frac{1}{2\pi i} \int_{(2)} D(s) X^s V(s) ds, \notag

\end{equation}$$

as long as $v$ and $V$ are nice enough functions.

In this note, we will use two smoothing integral transforms and corresponding smoothed sums. We will use one smooth function $v_1$ (which depends on another parameter $Y$) with the property that

$$\begin{equation}

\sum_{m \geq 1} a(m) v_1(m/X) \approx \sum_{\lvert m - X \rvert < X/Y} a(m). \notag

\end{equation}$$

And we will use another smooth function $v_2$ (which also depends on $Y$) with the property that

$$\begin{equation}

\sum_{m \geq 1} a(m) v_2(m/X) = \sum_{m \leq X} a(m) + \sum_{X < m < X + X/Y} a(m) v_2(m/X). \notag

\end{equation}$$

Further, as long as the coefficients $a(m)$ are nonnegative, it will be true that

$$\begin{equation}

\sum_{X < m < X + X/Y} a(m) v_2(m/X) \ll \sum_{\lvert m - X \rvert < X/Y} a(m), \notag

\end{equation}$$

which is exactly what $\sum a(m) v_1(m/X)$ estimates. Therefore

$$\begin{equation}\label{eq:overall_plan}

\sum_{m \leq X} a(m) = \sum_{m \geq 1} a(m) v_2(m/X) + O\Big(\sum_{m \geq 1} a(m) v_1(m/X) \Big).

\end{equation}$$

Hence sufficient understanding of $\sum a(m) v_1(m/X)$ and $\sum a(m) v_2(m/X)$ allows one to understand the sharp sum

$$\begin{equation}

\sum_{m \leq X} a(m). \notag

\end{equation}$$

Let us now introduce the two cutoff functions that we will use.

We use the Mellin transform

$$\begin{equation}

\frac{1}{2\pi i} \int_{(2)} \exp \Big( \frac{\pi s^2}{Y^2} \Big) \frac{X^s}{Y} ds = \frac{1}{2\pi} \exp \Big( - \frac{Y^2 \log^2 X}{4\pi} \Big). \notag

\end{equation}$$

Then

$$\begin{equation}

\frac{1}{2\pi i} \int_{(2)} D(s) \exp \Big( \frac{\pi s^2}{Y^2} \Big) \frac{X^s}{Y} ds = \frac{1}{2\pi} \sum_{n \geq 1} a(n) \exp \Big( - \frac{Y^2 \log^2 (X/n)}{4\pi} \Big). \notag

\end{equation}$$

For $n \in [X - X/Y, X + X/Y]$, the exponential damping term is essentially constant. However, for $n$ with $\lvert n - X \rvert > X/Y$, this term decays exponentially quickly. Therefore this integral is very nearly the sum over those $n$ with $\lvert n - X \rvert < X/Y$.

For this reason we sometimes call this transform a concentrating integral transform. All of the mass of the integral is concentrated in a small interval of width $X/Y$ around the point $X$.

Note that if $a(n)$ is nonnegative, then we have the trivial bound

$$\begin{equation}

\sum_{\lvert n - X \rvert < X/Y} a(n) \ll \sum_{n \geq 1} a(n) \exp \Big( - \frac{Y^2 \log^2 (X/n)}{4\pi} \Big). \notag

\end{equation}$$

As this is a bit less known, we include a brief proof of this transform.

Write $X^s = e^{s\log X}$ and complete the square in the exponents. Since the integrand is entire and the integral is absolutely convergent, we may perform a change of variables $s \mapsto s-Y^2 \log X/2\pi$ and shift the line of integration back to the imaginary axis. This yields

$$\begin{equation}

\frac{1}{2\pi i} \exp\left( - \frac{Y^2 \log^2 X}{4\pi}\right) \int_{(0)} e^{\pi s^2/Y^2} \frac{ds}{Y}. \notag

\end{equation}$$

The change of variables $s \mapsto isY$ transforms the integral into the standard Gaussian, completing the proof.
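This identity can also be checked numerically. Here is a quick sketch (the choice of line $\Re s = 2$, the truncation height, and the sample count are ad hoc choices of mine):

```python
# Numerically verify the concentrating transform identity
#   (1/2 pi i) int_(sigma) exp(pi s^2/Y^2) X^s / Y ds
#       = (1/2 pi) exp(-Y^2 log^2(X) / (4 pi)),
# by integrating along the vertical line Re(s) = sigma.
import numpy as np

def lhs(X, Y, sigma=2.0, T=60.0, N=400001):
    t = np.linspace(-T, T, N)
    s = sigma + 1j * t
    integrand = np.exp(np.pi * s**2 / Y**2) * np.exp(s * np.log(X)) / Y
    # ds = i dt, and the i cancels against the 1/(2 pi i) prefactor.
    dt = t[1] - t[0]
    return (integrand.sum() * dt).real / (2 * np.pi)

def rhs(X, Y):
    return np.exp(-Y**2 * np.log(X)**2 / (4 * np.pi)) / (2 * np.pi)

for X, Y in [(1.5, 3.0), (2.0, 5.0)]:
    assert abs(lhs(X, Y) - rhs(X, Y)) < 1e-6
```

The integrand decays like $e^{-\pi t^2/Y^2}$, so the truncated Riemann sum converges extremely quickly.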


For $X, Y > 0$, let $v_Y(X)$ denote a smooth non-negative function with maximum value $1$ satisfying

- $v_Y(X) = 1$ for $X \leq 1$,
- $v_Y(X) = 0$ for $X \geq 1 + \frac{1}{Y}$.

Let $V(s)$ denote the Mellin transform of $v_Y(X)$, given by

$$\begin{equation}

V(s)=\int_0^\infty t^s v_Y(t) \frac{dt}{t}, \notag

\end{equation}$$

when $\Re(s) > 0$. Through repeated applications of integration by parts, one can show that $V(s)$ satisfies the following properties:

- $V(s) = \frac{1}{s} + O_s(\frac{1}{Y})$.
- $V(s) = -\frac{1}{s}\int_1^{1 + \frac{1}{Y}}v'(t)t^s dt$.
- For all positive integers $m$, and with $s$ constrained to within a vertical strip where $\lvert s\rvert >\epsilon$, we have

$$\begin{equation} \label{vbound}

V(s) \ll_\epsilon \frac{1}{Y}\left(\frac{Y}{1 + \lvert s \rvert}\right)^m.

\end{equation}$$

Property $(3)$ above can be extended to real $m > 1$ through the Phragmén-Lindelöf principle.

Then we have that

$$\begin{equation}

\frac{1}{2\pi i} \int_{(2)} D(s) V(s) X^s ds = \sum_{n \leq X} a(n) + \sum_{X < n < X + X/Y} a(n) v_Y(n/X). \notag

\end{equation}$$

In other words, the sharp sum $\sum_{n \leq X} a(n)$ is captured perfectly, and then there is an amount of smooth fuzz for an additional $X/Y$ terms. As long as the short sum of length $X/Y$ isn’t as large as the sum over the first $X$ terms, then this transform gives a good way of understanding the sharp sum.
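For concreteness, here is one explicit choice of such a $v_Y$, built from the classic $e^{-1/x}$ bump (the particular transition function is my choice; any smooth transition works), together with a direct check of the decomposition for the toy coefficients $a(n) = 1$:

```python
import math

# One explicit v_Y: equal to 1 on [0, 1], equal to 0 on [1 + 1/Y, infinity),
# and smoothly interpolating in between via the classic exp(-1/x) bump.
def v(x, Y):
    if x <= 1:
        return 1.0
    if x >= 1 + 1 / Y:
        return 0.0
    t = (x - 1) * Y                  # rescale the transition region to (0, 1)
    a = math.exp(-1 / t)
    b = math.exp(-1 / (1 - t))
    return b / (a + b)               # smooth, decreasing from 1 to 0

# With a(n) = 1, the smoothed sum is the sharp sum floor(X) plus a small
# amount of "smooth fuzz" coming from the range X < n < X + X/Y.
X, Y = 1000, 10
total = sum(v(n / X, Y) for n in range(1, 2 * X))
sharp = X                            # floor(X), since X is an integer here
fuzz = total - sharp
assert 0 <= fuzz <= X / Y            # at most X/Y extra terms, each at most 1
```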

When $a(n)$ is nonnegative, we have the trivial bound that

$$\begin{equation}

\sum_{X < n < X + X/Y} a(n) v_Y(n/X) \ll \sum_{\lvert n - X \rvert < X/Y} a(n). \notag

\end{equation}$$

We have the equality

$$\begin{align}

\sum_{n \geq 1} a(n) v_Y(n/X) &= \sum_{n \leq X} a(n) + \sum_{X < n < X + X/Y} a(n) v_Y(n/X) \notag \\

&= \sum_{n \leq X} a(n) + O\Big( \sum_{\lvert n - X \rvert < X/Y} a(n) \Big) \notag \\

&= \sum_{n \leq X} a(n) + O\bigg( \sum_{n \geq 1} a(n) \exp \Big( - \frac{Y^2 \log^2 (X/n)}{4\pi} \Big)\bigg).\notag

\end{align}$$

Rearranging, we have

$$\begin{equation}

\sum_{n \leq X} a(n) = \sum_{n \geq 1} a(n) v_Y(n/X) + O\bigg( \sum_{n \geq 1} a(n) \exp \Big( - \frac{Y^2 \log^2 (X/n)}{4\pi} \Big)\bigg). \notag

\end{equation}$$

In terms of integral transforms, we then have that

$$\begin{align}

\sum_{n \leq X} a(n) &= \frac{1}{2\pi i} \int_{(2)} D(s) V(s) X^s ds \notag \ \\

&\quad + O \bigg( \frac{1}{2\pi i} \int_{(2)} D(s) \exp \Big( \frac{\pi s^2}{Y^2} \Big) \frac{X^s}{Y} ds \bigg). \notag

\end{align}$$

Fortunately, the process of understanding these two integral transforms often boils down to the same fundamental task: determine how quickly Dirichlet series grow in vertical strips.

Suppose that $f(z) = \sum_{n \geq 1} a(n) e(nz)$ is a $\text{GL}(2)$ holomorphic cusp form of weight $k$. We do not restrict $k$ to be an integer, and in fact $k$ might be any rational number as long as $k > 2$. Then the Rankin-Selberg convolution

$$\begin{equation}

L(s, f \otimes \overline{f}) = \zeta(2s) \sum_{n \geq 1} \frac{\lvert a(n) \rvert^2}{n^{s + k - 1}} \notag

\end{equation}$$

is an $L$-function satisfying a functional equation of shape

$$\begin{equation}

\Lambda(s, f \otimes \overline{f}) := (2\pi)^{-2s} L(s, f \otimes \overline{f}) \Gamma(s) \Gamma(s + k - 1) = \epsilon \Lambda(1 - s, f\otimes \overline{f}), \notag

\end{equation}$$

where $\lvert \epsilon \rvert = 1$ (and in fact the right hand side $L$-function may actually correspond to a related pair of forms $\widetilde{f} \otimes \overline{\widetilde{f}}$, though this does not affect the computations done here).

It is a classically interesting question to consider the sizes of the coefficients $a(n)$. The Ramanujan-Petersson conjecture states that $a(n) \ll n^{\frac{k-1}{2} + \epsilon}$. The Ramanujan-Petersson conjecture is known for holomorphic forms of full integral weight on $\text{GL}(2)$, but this is a very deep and very technical result. In general, this type of question is very deep, and very hard.

Using nothing more than the functional equation and the pair of integral transforms, let us analyze the sizes of

$$\begin{equation}

\sum_{n \leq X} \frac{\lvert a(n) \rvert^2}{n^{k-1}}. \notag

\end{equation}$$

Note that the power $n^{k-1}$ serves to normalize the sum to be $1$ on average.

As described above, it is now apparent that

$$\begin{align}

\sum_{n \leq X} \frac{\lvert a(n) \rvert^2}{n^{k-1}} &= \frac{1}{2\pi i} \int_{(2)} \frac{L(s, f \otimes \overline{f})}{\zeta(2s)} V(s) X^s ds \notag \ \\

&\quad + O \bigg( \frac{1}{2\pi i} \int_{(2)} \frac{L(s, f \otimes \overline{f})}{\zeta(2s)} \exp \Big( \frac{\pi s^2}{Y^2} \Big) \frac{X^s}{Y} ds \bigg). \notag

\end{align}$$

We now seek to understand the two integral transforms. Due to the $\zeta(2s)^{-1}$ in the denominator, and due to the mysterious nature of the zeroes of the zeta function, it will only be possible to shift each line of integration to $\Re s = \frac{1}{2}$. Note that $L(s, f\otimes \overline{f})$ has a simple pole at $s = 1$ with a residue that I denote by $R$.

By the Phragmén-Lindelöf Convexity principle, it is known from the functional equation that

$$\begin{equation}

L(\frac{1}{2} + it, f \otimes \overline{f}) \ll (1 + \lvert t \rvert)^{1}. \notag

\end{equation}$$

Then we have by Cauchy’s Theorem that

$$\begin{align}

&\frac{1}{2\pi i} \int_{(2)} \frac{L(s, f\otimes \overline{f})}{\zeta(2s)} \exp \Big( \frac{\pi s^2}{Y^2} \Big) \frac{X^s}{Y} ds \notag \ \\

&\quad = \frac{RX e^{\pi/Y^2}}{Y\zeta(2)} + \frac{1}{2\pi i} \int_{(1/2)} \frac{L(s, f\otimes \overline{f})}{\zeta(2s)} \exp \Big( \frac{\pi s^2}{Y^2} \Big) \frac{X^s}{Y} ds. \notag

\end{align}$$

The shifted integral can be written

$$\begin{equation}\label{eq:exp_shift1}

\int_{-\infty}^\infty \frac{L(\frac{1}{2} + it, f \otimes \overline{f})}{\zeta(1 + 2it)} \exp \Big( \frac{\pi (\frac{1}{4} - t^2 + it)}{Y^2}\Big) \frac{X^{\frac{1}{2} + it}}{Y}dt.

\end{equation}$$

It is known that

$$\begin{equation}

\zeta(1 + 2it)^{-1} \ll \log (1 + \lvert t \rvert). \notag

\end{equation}$$

Therefore, bounding by absolute values shows that \eqref{eq:exp_shift1} is bounded by

$$\begin{equation}

\int_{-\infty}^\infty (1 + \lvert t \rvert)^{1 + \epsilon} e^{-t^2/Y^2} \frac{X^{\frac{1}{2}}}{Y}dt. \notag

\end{equation}$$

Heuristically, the exponential decay causes this to be an integral over $t \in [-Y, Y]$, as outside this interval there is exponential decay. We can recognize this more formally by performing the change of variables $t \mapsto tY$. Then we have

$$\begin{equation}

\int_{-\infty}^\infty (1 + \lvert tY \rvert)^{1 + \epsilon} e^{-t^2} X^{\frac{1}{2}} dt \ll X^{\frac{1}{2}} Y^{1+\epsilon}. \notag

\end{equation}$$

In total, this means that

$$\begin{equation}

\frac{1}{2\pi i} \int_{(2)} \frac{L(s, f\otimes \overline{f})}{\zeta(2s)} \exp \Big( \frac{\pi s^2}{Y^2} \Big) \frac{X^s}{Y} ds = \frac{RX e^{\pi/Y^2}}{Y\zeta(2)} + O(X^{\frac{1}{2}}Y^{1+\epsilon}). \notag

\end{equation}$$

Working now with the other integral transform, Cauchy’s theorem gives

$$\begin{align}

&\frac{1}{2\pi i} \int_{(2)} \frac{L(s, f\otimes \overline{f})}{\zeta(2s)} V(s) X^s ds \notag \ \\

&\quad = \frac{RX V(1)}{\zeta(2)} + \frac{1}{2\pi i} \int_{(1/2)} \frac{L(s, f\otimes \overline{f})}{\zeta(2s)} V(s)X^s ds. \notag

\end{align}$$

The shifted integral can again be written

$$\begin{equation}\label{eq:exp_shift2}

\int_{-\infty}^\infty \frac{L(\frac{1}{2} + it, f \otimes \overline{f})}{\zeta(1 + 2it)} V(\tfrac{1}{2} + it) X^{\frac{1}{2} + it} dt,

\end{equation}$$

and, bounding \eqref{eq:exp_shift2} by absolute values as above, we get

$$\begin{equation}

\int_{-\infty}^\infty (1 + \lvert t \rvert)^{1 + \epsilon} \lvert V(\tfrac{1}{2} + it) \rvert X^{\frac{1}{2}} dt \ll \int_{-\infty}^\infty (1 + \lvert t \rvert)^{1 + \epsilon} \frac{1}{Y} \bigg(\frac{Y}{1 + \lvert t \rvert}\bigg)^m X^{\frac{1}{2}} dt \notag

\end{equation}$$

for any $m > 1$. In order to make the integral converge, we choose $m = 2 + 2\epsilon$, which shows that

$$\begin{equation}

\int_{-\infty}^\infty (1 + \lvert t \rvert)^{1 + \epsilon} \lvert V(\tfrac{1}{2} + it) \rvert X^{\frac{1}{2}} dt \ll X^{\frac{1}{2}}Y^{1 + \epsilon}. \notag

\end{equation}$$

Therefore, we have in total that

$$\begin{equation}

\frac{1}{2\pi i} \int_{(2)} \frac{L(s, f\otimes \overline{f})}{\zeta(2s)} V(s) X^s ds = \frac{RX V(1)}{\zeta(2)} + O(X^{\frac{1}{2}}Y^{1 + \epsilon}). \notag

\end{equation}$$

Notice that the $X$ and $Y$ bounds are exactly the same for the two separate integrals, and that the bounding process was essentially identical. Heuristically, this should generally be true (although in practice there may be some advantage to one over the other).

Now that we have estimated these two integrals, we can say that

$$\begin{equation}

\sum_{n \leq X} \frac{\lvert a(n) \rvert^2}{n^{k-1}} = cX + O\big(\frac{X}{Y}\big) + O(X^{\frac{1}{2}}Y^{1+\epsilon}) \notag

\end{equation}$$

for some computable constant $c$. This is optimized when

$$\begin{equation}

X^{\frac{1}{2}} = Y^{2 + \epsilon} \implies Y \approx X^{\frac{1}{4}}, \notag

\end{equation}$$

leading to

$$\begin{equation}

\sum_{n \leq X} \frac{\lvert a(n) \rvert^2}{n^{k-1}} = cX + O(X^{\frac{3}{4} + \epsilon}). \notag

\end{equation}$$

This isn’t the best possible or best-known result, but it came almost for free! (So one can’t complain too much). Smooth cutoffs, together with an understanding of polynomial growth in vertical strips, yield sharp cutoffs with a polynomial-savings error term.

It is possible to revisit this example and be slightly more clever in our application of this technique of comparing two smooth integral transforms. Some topics will be touched on again in a later note.

]]>- 2017 is a prime number. 2017 is the 306th prime. The 2017th prime is 17539.
- As 2011 is also prime, we call 2017 a sexy prime (primes differing by $6$).
- 2017 can be written as a sum of two squares,

$$ 2017 = 9^2 +44^2,$$

and this is the only way to write it as a sum of two squares.

- Similarly, 2017 appears as the hypotenuse of a primitive Pythagorean triangle,

$$ 2017^2 = 792^2 + 1855^2,$$

and this is the only such right triangle.

- 2017 is uniquely identified as the first odd prime that leaves a remainder of $2$ when divided by $5$, $13$, and $31$. That is,

$$ 2017 \equiv 2 \pmod {5, 13, 31}.$$

- In different bases,

$$ \begin{align} (2017)_{10} &= (2681)_9 = (3741)_8 = (5611)_7 = (13201)_6 \notag \\ &= (31032)_5 = (133201)_4 = (2202201)_3 = (11111100001)_2 \notag \end{align}$$

The base $2$ and base $3$ expressions are sort of nice, including repetition.
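These expansions are easy to verify; here is a small base-conversion helper (written just for this check):

```python
# Verify the base expansions of 2017 listed above.
def to_base(n, b):
    digits = ""
    while n:
        digits = str(n % b) + digits
        n //= b
    return digits or "0"

expected = {
    9: "2681", 8: "3741", 7: "5611", 6: "13201",
    5: "31032", 4: "133201", 3: "2202201", 2: "11111100001",
}
for base, rep in expected.items():
    assert to_base(2017, base) == rep
```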

$$\begin{array}{ll}

1 = 2\cdot 0 + 1^7 & 11 = 2 + 0! + 1 + 7 \\

2 = 2 + 0 \cdot 1 \cdot 7 & 12 = 20 - 1 - 7 = -2 + (0! + 1)\cdot 7 \\

3 = (20 + 1)/7 = 20 - 17 & 13 = 20 - 1 \cdot 7 \\

4 = -2 + 0 - 1 + 7 & 14 = 20 - (-1 + 7) \\

5 = -2 + 0\cdot 1 + 7 & 15 = -2 + 0 + 17 \\

6 = -2 + 0 + 1 + 7 & 16 = -(2^0) + 17 \\

7 = 2^0 - 1 + 7 & 17 = 2\cdot 0 + 17 \\

8 = 2 + 0 - 1 + 7 & 18 = 2^0 + 17 \\

9 = 2 + 0\cdot 1 + 7 & 19 = 2\cdot 0! + 17 \\

10 = 2 + 0 + 1 + 7 & 20 = 2 + 0! + 17.

\end{array}$$

In each expression, the digits $2, 0, 1, 7$ appear, in order, with basic mathematical symbols. I wonder what the first number is that can’t be nicely expressed (subjectively, of course)?

Now let’s look at less-common manipulations with numbers.

- The digit sum of $2017$ is $10$, which has digit sum $1$.
- Take $2017$ and its reverse, $7102$. The difference between these two numbers is $5085$. Repeating gives $720$. Continuing, we get

$$ 2017 \mapsto 5085 \mapsto 720 \mapsto 693 \mapsto 297 \mapsto 495 \mapsto 99 \mapsto 0.$$

So it takes seven iterations to hit $0$, where the iteration stabilizes.

- Take $2017$ and its reverse, $7102$. Add them. We get $9119$, a palindromic number. Continuing, we get

$$ \begin{align} 2017 &\mapsto 9119 \mapsto 18238 \mapsto 101519 \notag \\ &\mapsto 1016620 \mapsto 1282721 \mapsto 2555542 \mapsto 5011094 \mapsto 9912199. \notag \end{align}$$

It takes one map to get to the first palindrome, and then seven more maps to get to the next palindrome. Another five maps would yield the next palindrome.

- Rearrange the digits of $2017$ into decreasing order, $7210$, and subtract the digits in increasing order, $0127$. This gives $7083$. Repeating once gives $8352$. Repeating again gives $6174$, at which point the iteration stabilizes. This is called Kaprekar’s Constant.
- Consider Collatz: If $n$ is even, replace $n$ by $n/2$. Otherwise, replace $n$ by $3\cdot n + 1$. On $2017$, this gives

$$\begin{align}

2017 &\mapsto 6052 \mapsto 3026 \mapsto 1513 \mapsto 4540 \mapsto \notag \\

&\mapsto 2270 \mapsto 1135 \mapsto 3406 \mapsto 1703 \mapsto 5110 \mapsto \notag \\

&\mapsto 2555 \mapsto 7666 \mapsto 3833 \mapsto 11500 \mapsto 5750 \mapsto \notag \\

&\mapsto 2875 \mapsto 8626 \mapsto 4313 \mapsto 12940 \mapsto 6470 \mapsto \notag \\

&\mapsto 3235 \mapsto 9706 \mapsto 4853 \mapsto 14560 \mapsto 7280 \mapsto \notag \\

&\mapsto 3640 \mapsto 1820 \mapsto 910 \mapsto 455 \mapsto 1366 \mapsto \notag \\

&\mapsto 683 \mapsto 2050 \mapsto 1025 \mapsto 3076 \mapsto 1538 \mapsto \notag \\

&\mapsto 769 \mapsto 2308 \mapsto 1154 \mapsto 577 \mapsto 1732 \mapsto \notag \\

&\mapsto 866 \mapsto 433 \mapsto 1300 \mapsto 650 \mapsto 325 \mapsto \notag \\

&\mapsto 976 \mapsto 488 \mapsto 244 \mapsto 122 \mapsto 61 \mapsto \notag \\

&\mapsto 184 \mapsto 92 \mapsto 46 \mapsto 23 \mapsto 70 \mapsto \notag \\

&\mapsto 35 \mapsto 106 \mapsto 53 \mapsto 160 \mapsto 80 \mapsto \notag \\

&\mapsto 40 \mapsto 20 \mapsto 10 \mapsto 5 \mapsto 16 \mapsto \notag \\

&\mapsto 8 \mapsto 4 \mapsto 2 \mapsto 1 \notag

\end{align}$$

It takes $69$ steps to reach the seemingly inevitable $1$. This is much shorter than the $113$ steps necessary for $2016$ or the $113$ (yes, same number) steps necessary for $2018$.

- Consider the digits $2,1,7$ (in that order). To generate the next number, take the units digit of the product of the previous $3$. This yields

$$2,1,7,4,8,4,8,6,2,6,2,4,8,4,\ldots$$

This immediately jumps into a periodic pattern of length $8$, but $217$ is not part of the period. So this is preperiodic.

- Consider the digits $2,0,1,7$. To generate the next number, take the units digit of the sum of the previous $4$. This yields

$$ 2,0,1,7,0,8,6,1,5,0,2,8,\ldots, 2,0,1,7.$$

After 1560 steps, this produces $2,0,1,7$ again, yielding a cycle. Interestingly, the loops starting with $2018$ and $2019$ also repeat after $1560$ steps.

- Take the digits $2,0,1,7$, square them, and add the result. This gives $2^2 + 0^2 + 1^2 + 7^2 = 54$. Repeating, this gives

$$ \begin{align} 2017 &\mapsto 54 \mapsto 41 \mapsto 17 \mapsto 50 \mapsto 25 \mapsto 29 \notag \\ &\mapsto 85 \mapsto 89 \mapsto 145 \mapsto 42 \mapsto 20 \mapsto 4 \notag \\ &\mapsto 16 \mapsto 37 \mapsto 58 \mapsto 89\notag\end{align}$$

and then it reaches a cycle.

- Take the digits $2,0,1,7$, cube them, and add the result. This gives $352$. Repeating, we get $160$, and then $217$, and then $352$. This is a very tight loop.

- One can make $2017$ from determinants of basic matrices in a few ways. For instance,

$$ \begin{align}

\left \lvert \begin{pmatrix} 1&2&3 \\ 4&6&7 \\ 5&8&9 \end{pmatrix}\right \rvert &= 2, \qquad

\left \lvert \begin{pmatrix} 1&2&3 \\ 4&5&6 \\ 7&8&9 \end{pmatrix}\right \rvert &= 0\notag \\

\left \lvert \begin{pmatrix} 1&2&3 \\ 4&7&6 \\ 5&9&8 \end{pmatrix}\right \rvert &= 1 , \qquad

\left \lvert \begin{pmatrix} 1&2&3 \\ 4&5&7 \\ 6&8&9 \end{pmatrix}\right \rvert &= 7\notag

\end{align}$$

The matrix with determinant $0$ has the numbers $1$ through $9$ in the most obvious configuration. The other matrices are very close in configuration.

- Alternately,

$$ \begin{align}

\left \lvert \begin{pmatrix} 1&2&3 \\ 5&6&9 \\ 4&8&7 \end{pmatrix}\right \rvert &= 20 \notag \\

\left \lvert \begin{pmatrix} 1&2&3 \\ 6&8&9 \\ 5&7&4 \end{pmatrix}\right \rvert &= 17 \notag

\end{align}$$

So one can form $20$ and $17$ separately from determinants.

- One cannot make $2017$ from a determinant using the digits $1$ through $9$ (without repetition).
- If one uses the digits from the first $9$ primes, it is interesting that one can choose configurations with determinants equal to $2016$ or $2018$, but there is no such configuration with determinant equal to $2017$.
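Many of the iterative claims above are straightforward to verify by computer. Here is a small script checking a few of them (the Collatz counts below count every number in the chain, including the starting value, which appears to be the counting convention used above):

```python
# Check a few of the iterative facts about 2017.

def collatz_chain(n):
    chain = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        chain.append(n)
    return chain

# Chain lengths, counting every number including the start.
assert len(collatz_chain(2017)) == 69
assert len(collatz_chain(2016)) == 113
assert len(collatz_chain(2018)) == 113

# Kaprekar iteration: descending digits minus ascending digits.
def kaprekar(n):
    d = sorted(f"{n:04d}")
    return int("".join(reversed(d))) - int("".join(d))

assert kaprekar(2017) == 7083
assert kaprekar(7083) == 8352
assert kaprekar(8352) == 6174
assert kaprekar(6174) == 6174        # Kaprekar's Constant is a fixed point

# Sum of cubes of digits: 2017 -> 352 -> 160 -> 217 -> 352 -> ...
def cube_digits(n):
    return sum(int(c) ** 3 for c in str(n))

assert cube_digits(2017) == 352
assert cube_digits(352) == 160
assert cube_digits(160) == 217
assert cube_digits(217) == 352       # the very tight loop
```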