Author Archives: David Lowry-Duda

A Jupyter Notebook from a SageMath tutorial

I gave an introduction-to-sage tutorial at the University of Warwick Computational Group seminar today, 2 November 2017. Below is a conversion of the sage/jupyter notebook I based the rest of the tutorial on. I said many things which are not included in the notebook, and during the seminar we added a few additional examples and paid extra attention to a few particular calls. But for reference, the notebook is here.

The notebook itself (as a jupyter notebook) can be found and viewed on my github (link to jupyter notebook). When written, this notebook used a Sage 8.0.0.rc1 backend kernel and ran fine on the standard Sage 8.0 release, though I expect it to work fine with any recent official version of sage. The last cell requires an active notebook to be seen (or some way to export jupyter widgets to standalone javascript; this either doesn’t yet exist, or I am not aware of it).

I will also note that I converted the notebook for display on this website using jupyter’s nbconvert package. I have some CSS and syntax coloring set up that affects the display.

Good luck learning sage, and happy hacking.

Sage

Sage (also known as SageMath) is a general purpose computer algebra system written on top of the python language. In Mathematica, Magma, and Maple, one writes code in the mathematica-language, the magma-language, or the maple-language. Sage is python.

But no python background is necessary for the rest of today’s guided tutorial. The purpose of today’s tutorial is to give an indication of how one really uses sage, and of what might be available to you if you want to try it out.

I will spoil the surprise by telling you upfront the two main points I hope you’ll take away from this tutorial.

  1. With tab-completion and documentation, you can do many things in sage without ever having done them before.
  2. The ecosystem of libraries and functionality available in sage is tremendous, and (usually) pretty easy to use.

Lightning Preview

Let’s first get a small feel for sage by seeing some standard operations and what typical use looks like through a series of trivial, mostly unconnected examples.

In [1]:
# Fundamental manipulations work as you hope

2+3
Out[1]:
5

You can also subtract, multiply, divide, exponentiate…

>>> 3-2
1
>>> 2*3
6
>>> 2^3
8
>>> 2**3 # (also exponentiation)
8

There is an order of operations, but these things work pretty much as you want them to work. You might try out several different operations.
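As a small aside (my addition here, not in the original notebook): sage preparses integer literals, so arithmetic with whole numbers stays exact instead of falling back to python2’s floor division.

>>> 2/3 + 1/6
5/6
>>> (2/3).parent()
Rational Field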

Sage includes a lot of functionality, too. For instance,

In [2]:
factor(-1008)
Out[2]:
-1 * 2^4 * 3^2 * 7
In [3]:
list(factor(1008))
Out[3]:
[(2, 4), (3, 2), (7, 1)]

In the background, Sage is actually calling on pari/GP to do this factorization. Sage bundles lots of free and open source math software within it (which is why it’s so large), and provides a common access point. The great thing here is that you can often use sage without needing to know much pari/GP (or other software).
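In fact, the bundled systems remain directly accessible if you ever want them. As a small sketch (my addition; the output shown is the GP-style factorization matrix I would expect):

>>> pari(-1008).factor()
[-1, 1; 2, 4; 3, 2; 7, 1]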

Sage knows many functions and constants, and these are accessible.
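For example, here are a few calls I would expect to work in any recent sage (a small illustration of my own):

>>> N(pi, digits=20)
3.1415926535897932385
>>> sin(pi/3)
1/2*sqrt(3)
>>> log(8, 2)
3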

(more…)

Posted in Expository, Mathematics, sage, sagemath | Leave a comment

Having no internet for four half weeks isn’t necessarily all bad

I moved to the UK to begin a postdoc with John Cremona at the University of Warwick. And for the last four weeks, I have had no internet at my home. This wasn’t by choice — it’s due to the reluctance of my local gigantitelecom to press a button that says “begin internet service.” I could write more about that, but that’s not the purpose of this note.

The purpose of this note is to describe the large effects of having no internet at my home for the last four weeks. I’m at my home about half the time, leading to the title.

I don't talk about it, but I do know *exactly* where I can get others' wifi in my place. Sadly, no linksys.


I have become accustomed to having the internet at all times. I now see that many of my habits involved the internet. In the mornings and evenings, I would check HackerNews, longform, and reddit for interesting reads. Invariably there were more interesting-seeming things than I could read, and my Checkout bookmarks list is a growing, hundreds-of-items-long list of maybe-interesting stuff. In the in-between times throughout the day, I would check out a few of these bookmarks.

All in all, I would spend an enormous amount of time reading random interesting tidbits, even though much of this time was spread out in the “in-betweens” in my day.


I still remember modem sounds. This is a defining aspect of my generation. My postdoc advisor was telling me about the difference between 100 and 300 baud teletypes. Things change, you know?


When I didn’t have internet at my home, I had to fill all those “in-between” moments, as well as my waking and sleeping moments, with something else. Faced with the necessity of doing something, I filled most of these moments with reading books. Made out of paper. (The same sort of books whose sales are rising compared to ebooks, contrary to most predictions a few years ago).

I’d forgotten how much I enjoyed reading a book in large chunks, in very few sittings. I usually have an ebook on my phone that I read during commutes, and perhaps most of my idle reading over the last several years has been in 20 page increments. The key phrase here is “idle reading”. I now set aside time to “actively read”, in perhaps 100 page increments. Reading enables a “flow state” very similar to the sensation I get when mathing continuously, or programming continuously, for a long period of time. I not only read more, but I enjoy what I’m reading more.

As a youth, I would read all the time. Fun fact: at one time, I’d read almost every book in the Star Wars expanded universe. There were over a hundred, and they were all canon (before Disney paved over the universe to make room). I learned to love reading by reading science fiction, and the first novel I remember reading was a copy of Andre Norton’s “The Beastmaster” (… which is great. A part telepath part Navajo soldier moves to another planet. Then it’s a space western. What’s not to love?).

I have also been known to open hackernews, think there's nothing interesting on the front page, close the tab, and then go immediately to hackernews to see if there's something interesting. There isn't. This reminds me of opening the fridge, hoping for tastier food to have appeared since I last didn't put anything in there.


My primary source of books is the library at the University of Warwick. Whether through differences in continental taste or simply a case of different focus, the University Library doesn’t have many books in its fiction collection that I’ve been intending to read. I realize now that most of the nonfiction I read originates on the internet, while much of the fiction I read comes from books. Now, encouraged by a lack of alternatives, I picked up many more and varied nonfiction books than I would otherwise have.

As an unexpected side effect, I found that I would also carefully download some of the articles I identified as “interesting” a bit before I headed home from the office. Without internet, I read far more of my checkout bookmarks than I did with internet. Weird. Correspondingly, I found that I would spend a bit more time cutting down the false-positive rate — I used to bookmark almost anything that I thought might be interesting, but which I wasn’t going to read right then. Now I separated the wheat from the chaff, as harvesting wheat takes time. (Perhaps this is something I should do more often. I recognize that there are services or newsletters that promise to identify great materials, but somehow none of them have worked better for my tastes than hackernews or longform. But these both have questionable signal to noise.)

The result is that I’ve goofed off reading probably about the same amount of time, but in fewer topics and at greater depth in each. It’s easy to jump from 10 page article to 10 page article online; when the medium is books, things come in larger chunks.

I feel more productive reading a book, even though I don’t actually attribute much to the difference. There may be something to the act of reading contiguously and continuously for long periods of time, though. This correlated with an overall increase in my “chunking” of tasks across continuous blocks of time, instead of loosely multitasking. I think this is for the better.

I now have internet at my flat. Some habits will slide back, but there are other new habits that I will keep. I’ll keep my bedroom computer-free. In the evening, this means I read books before I sleep. In the morning, this means I must leave and go to the other room before I waste any time on online whatevers. Both of these are good. And I’ll try to continue to chunk time.

To end, I’ll note what I read in the last month, along with a few notes about each.

Fiction

From best to worst.

  • The best fiction I read was The Three Body Problem, by Cixin Liu. I’d heard lots about this book. It’s Chinese scifi, and much of the story takes place against the backdrop of the Chinese cultural revolution… which I know embarrassingly little about. The moral and philosophical underpinnings of this book are interesting and atypical (to me). At its core are various groups of people who have lost faith in aspects of science, or humanity, or both. I was unprepared for the many (hundreds?) of pages of philosophizing in the book, but I understood why it was there. This aspect reminded me of the last half of Anathem by Stephenson (perhaps the best book I’ve read in the last few years), which also had many (also hundreds?) of pages of philosophizing. I love this book, I recommend it. And I note that I read it in four sittings. There are two more books completing a trilogy, and I will read them once I can get my hands on them. [No library within 50 miles of me has them. I did buy the first one, though. Perhaps I’ll buy the other two.]
  • The second best was The Lathe of Heaven by Ursula Le Guin. This is some classic fantasy, and is pretty mindbending. I think the feel of many books of Ursula Le Guin is very similar — there are many interesting ideas throughout the book, but the book deliberately loses coherence as the flow and fury of the plot reaches a climax. I like The Lathe of Heaven more than A Wizard of Earthsea and about the same as The Left Hand of Darkness, also by Le Guin. I read this book in three sittings.
  • I read three of the Witcher books, by Andrzej Sapkowski. Namely, The Sword of Destiny, Blood of Elves, and Time of Contempt. These are fun, not particularly deep reads. There is a taste of moral ambiguity that I like as it’s different from what I normally find. On the other hand, Sapkowski often uses humor or ambiguity in place of a meaningful, coherent plot. The Sword of Destiny is a collection of short tales, and I think his short tales are better than his novels — entirely because one doesn’t need or expect coherence from short stories.

I’m currently reading The Confusion by Neal Stephenson, book two of the Baroque Cycle. Right now, I am exactly 1 page in.

Nonfiction

I rank these from those I most enjoyed to those I least enjoyed.

  • How Equal Temperament Ruined Harmony, by Duffin. This was told to me as an introduction to music theory [in fact, I noted this from a comment thread on hackernews somewhere], but really it is a treatise on the history of tuning and temperaments. It turns out that modern equal temperament suffers from many flaws that aren’t commonly taught. When I got back to the office after reading this book, I spent a good amount of time on youtube listening to songs in meantone tuning and just intonation. There is a difference! I read this book in 2 sittings — it’s short, pretty simple, and generally nice. However there are several long passages that are simply better to skip. Nonetheless I learned a lot.
  • A Random Walk down Wall Street, by Burton Malkiel. I didn’t know too much about investing before reading this book. I wouldn’t actually say that I know too much after reading it either, but the book is about investing. I was warned that reading this book would make me think that the only way to really invest is to purchase index funds. And indeed, that is the overwhelming (and explicit) takeaway from the book. But I found the book surprisingly readable, and read it very quickly. I find that some of the analysis is biased towards long-term investing even as a basis of comparison.
  • Guesstimation, by Weinstein. Ok, perhaps it is not fair to say that one “reads” this book. It consists of many Fermi-style questions (how many golf balls does it take to fill up a football stadium type questions), followed by their analysis. So I read a question and then sit down and do my own analysis. And then I compare it against Weinstein’s. I was stunned at how often the analyses were tremendously similar and got essentially the same order of magnitude at the end. [But not always, that’s for sure. There are also lots of things that I estimate very, very poorly]. There’s a small subgenre of “popular mathematics for the reader who is willing to take out a pencil and paper” (which can’t have a big readership, but which I thoroughly enjoy), and this is a good book within that subgenre. I’m currently working through its sequel.
  • Nature’s Numbers, by Ian Stewart. This is a pop math book. Ian Stewart is an emeritus professor at my university, so it seemed appropriate to read something of his. This is a surprisingly fast read (I read it in a single sitting). Stewart is known for writing approachable popular math accounts, and this fits.
  • The Structure of Scientific Revolutions, by Thomas Kuhn. This is metascience. I read the first half of this book/essay very quickly, and I struggled through its second half. This came highly recommended to me, but I found the signal to noise ratio to be pretty low. It might be that I wasn’t very willing to navigate the careful treading around equivocation throughout. However, I think many of the ideas are good. I don’t know if someone has written a 30 page summary, but I think this may be possible — and a good alternative to the book/essay itself.

I’m now reading Grit, by Angela Duckworth. Another side effect of reading more is that I find myself reading one fiction, one non-fiction, and one “simple” book at the same time.


Written while on a bus without internet to Heathrow, minus the pictures (which were added at Heathrow).

Posted in Book Review, Story | Leave a comment

A Short Note on Gaps Between Powers of Consecutive Primes

Introduction

The primary purpose of this note is to collect a few hitherto unnoticed or unpublished results concerning gaps between powers of consecutive primes. The study of gaps between primes has attracted many mathematicians and led to many deep realizations in number theory. The literature is full of conjectures, both open and closed, concerning the nature of primes.

In a series of stunning developments, Zhang, Maynard, and Tao [1, 2] made the first major progress towards proving the prime $k$-tuple conjecture, and successfully proved the existence of infinitely many pairs of primes differing by a fixed number. As of now, the best known result is due to the massive collaborative Polymath8 project [3], which showed that there are infinitely many pairs of consecutive primes differing by at most $246$. In the excellent expository article [4], Granville describes the history and ideas leading to this breakthrough, and also discusses some of the potential impact of the results. This note should be thought of as a few more results following from the ideas of Zhang, Maynard, Tao, and the Polymath8 project.

Throughout, $p_n$ will refer to the $n$th prime number. In a paper [5], Andrica conjectured that
\begin{equation}\label{eq:Andrica_conj}
\sqrt{p_{n+1}} - \sqrt{p_n} < 1
\end{equation}
holds for all $n$. This conjecture, and related statements, is described in Guy’s Unsolved Problems in Number Theory [6]. It is quickly checked that this holds for primes up to $4.26 \cdot 10^{8}$ in sagemath:

# Sage version 8.0.rc1
# started with `sage -ipython`

# sage has pari/GP, which can generate primes super quickly
from sage.all import primes_first_n

# import izip since we'll be zipping a huge list, and sage uses python2 which has
# non-iterable zip by default
from itertools import izip

# The magic number 23150000 appears because pari/GP can't compute
# primes above 436273290 due to fixed precision arithmetic
ps = primes_first_n(23150000)    # This is every prime up to 436006979

# Verify Andrica's Conjecture for all prime pairs up to 436006979
gap = 0
for a,b in izip(ps[:-1], ps[1:]):
    if b**.5 - a**.5 > gap:
        A, B, gap = a, b, b**.5 - a**.5
        print(gap)
print("")
print(A)
print(B)

In approximately 20 seconds on my machine (so it would not be hard to go much higher, except that I would have to go beyond pari/GP to generate primes), this completes and prints the following output.

0.317837245196
0.504017169931
0.670873479291

7
11


Thus the largest value of $\sqrt{p_{n+1}} - \sqrt{p_n}$ was merely $0.670\ldots$, and occurred on the gap between $7$ and $11$.

So it appears very likely that the conjecture is true. However it is also likely that new, novel ideas are necessary before the conjecture is decided.

Andrica’s Conjecture can also be stated in terms of prime gaps. Let $g_n = p_{n+1} - p_n$ be the gap between the $n$th prime and the $(n+1)$st prime. Then Andrica’s Conjecture is equivalent to the claim that $g_n < 2 \sqrt{p_n} + 1$. In this direction, the best known result is due to Baker, Harman, and Pintz [7], who show that $g_n \ll p_n^{0.525}$.
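To spell out the equivalence, note that
\begin{equation}
\sqrt{p_{n+1}} - \sqrt{p_n} < 1 \iff p_{n+1} < (\sqrt{p_n} + 1)^2 = p_n + 2\sqrt{p_n} + 1 \iff g_n < 2\sqrt{p_n} + 1.
\end{equation}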

In 1985, Sándor [8] proved that \begin{equation}\label{eq:Sandor} \liminf_{n \to \infty} \sqrt[4]{p_n} (\sqrt{p_{n+1}} - \sqrt{p_n}) = 0. \end{equation} The close relation to Andrica’s Conjecture \eqref{eq:Andrica_conj} is clear. The first result of this note is to strengthen this result.

Theorem

Let $\alpha, \beta \geq 0$, and $\alpha + \beta < 1$. Then
\begin{equation}\label{eq:main}
\liminf_{n \to \infty} p_n^\beta (p_{n+1}^\alpha - p_n^\alpha) = 0.
\end{equation}

We prove this theorem below. Choosing $\alpha = \frac{1}{2}, \beta = \frac{1}{4}$ verifies Sándor’s result \eqref{eq:Sandor}. But choosing $\alpha = \frac{1}{2}, \beta = \frac{1}{2} - \epsilon$ for a small $\epsilon > 0$ gives stronger results.

This theorem leads naturally to the following conjecture.

Conjecture

For any $0 \leq \alpha < 1$, there exists a constant $C(\alpha)$ such that
\begin{equation}
p_{n+1}^\alpha - p_{n}^\alpha \leq C(\alpha)
\end{equation}
for all $n$.

A simple heuristic argument, given in the last section below, shows that this Conjecture follows from Cramér’s Conjecture.

It is interesting to note that there are generalizations of Andrica’s Conjecture. One can ask what the smallest $\gamma$ is such that
\begin{equation}
p_{n+1}^{\gamma} - p_n^{\gamma} = 1
\end{equation}
has a solution. This is known as the Smarandache Conjecture, and it is believed that the smallest such $\gamma$ is approximately
\begin{equation}
\gamma \approx 0.5671481302539\ldots
\end{equation}
The digits of this constant, sometimes called “the Smarandache constant,” are the contents of sequence A038458 on the OEIS. It is possible to generalize this question as well.
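As a quick numerical experiment (my addition, not part of the original note), one can approximate this constant in sage. For consecutive primes $p < q$, the function $g \mapsto q^g - p^g$ is increasing, so the exponent solving $q^g - p^g = 1$ can be found by bisection; the Smarandache constant should then be the minimum over consecutive prime pairs, which I expect to occur at the pair $(113, 127)$.

# Approximate the Smarandache constant (an illustration, not from the
# original note). For each consecutive prime pair (p, q), solve
# q^g - p^g = 1 for g by bisection; g -> q^g - p^g is increasing, and
# q - p >= 1 guarantees a solution with g in [0, 1].
from sage.all import primes_first_n

def gamma_for_pair(p, q, iterations=60):
    lo, hi = 0.0, 1.0
    for _ in range(iterations):
        mid = (lo + hi) / 2
        if q**mid - p**mid < 1:
            lo = mid
        else:
            hi = mid
    return lo

ps = primes_first_n(10000)
print(min((gamma_for_pair(p, q), p, q) for p, q in zip(ps, ps[1:])))
# I expect (0.567148130..., 113, 127)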

Open Question

For any fixed constant $C$, what is the smallest $\alpha = \alpha(C)$ such that
\begin{equation}
p_{n+1}^\alpha - p_n^\alpha = C
\end{equation}
has solutions? In particular, how does $\alpha(C)$ behave as a function of $C$?

This question does not seem to have been approached in any sort of generality, aside from the case when $C = 1$.

Proof of Theorem

The idea of the proof is very straightforward. We estimate \eqref{eq:main} along prime pairs $p, q$ with $q - p \leq 246$, relying on the recent proof from the Polymath8 project that infinitely many such pairs exist.

Fix $\alpha, \beta \geq 0$ with $\alpha + \beta < 1$. Applying the mean value theorem to the function $x \mapsto x^\alpha$ shows that
\begin{align}
p^\beta \big( (p+246)^\alpha - p^\alpha \big) &= p^\beta \cdot 246 \alpha q^{\alpha - 1} \\
&\leq p^\beta \cdot 246 \alpha p^{\alpha - 1} = 246 \alpha p^{\alpha + \beta - 1}, \label{eq:bound}
\end{align}
for some $q \in [p, p+246]$. Passing to the inequality in the second line is done by realizing that $q^{\alpha - 1}$ is a decreasing function in $q$; and since $x \mapsto x^\alpha$ is increasing, the same bound applies to prime pairs with any gap at most $246$. As $\alpha + \beta - 1 < 0$, we see that \eqref{eq:bound} goes to zero as $p \to \infty$.

Therefore
\begin{equation}
\liminf_{n \to \infty} p_n^\beta (p_{n+1}^\alpha - p_n^\alpha) = 0,
\end{equation}
as was to be proved.

Further Heuristics

Cramér’s Conjecture states that there exists a constant $C$ such that for all sufficiently large $n$,
\begin{equation}
p_{n+1} - p_n < C(\log n)^2.
\end{equation}
Thus for a sufficiently large prime $p$, the subsequent prime is at most $p + C (\log p)^2$. Performing an estimation similar to the one above shows that
\begin{equation}
(p + C (\log p)^2)^\alpha - p^\alpha \leq C (\log p)^2 \alpha p^{\alpha - 1} =
C \alpha \frac{(\log p)^2}{p^{1 - \alpha}}.
\end{equation}
As the right hand side vanishes as $p \to \infty$, we see that it is natural to expect that the main Conjecture above is true. More generally, we should expect the following, stronger conjecture.

Conjecture’

For any $\alpha, \beta \geq 0$ with $\alpha + \beta < 1$, there exists a constant $C(\alpha, \beta)$ such that
\begin{equation}
p_n^\beta (p_{n+1}^\alpha - p_n^\alpha) \leq C(\alpha, \beta).
\end{equation}

Additional Notes

I wrote this note in between waiting in never-ending queues while I sort out my internet service and other mundane activities necessary upon moving to another country. I had just read some papers on the arXiv, and I noticed a paper which referred to the unknown status of Andrica’s Conjecture. So then I sat down and wrote this up.

I am somewhat interested in qualitative information concerning the Open Question in the introduction, and I may return to this subject unless someone beats me to it.

This note is (mostly, minus the code) available as a pdf and will shortly appear on the arXiv. This was originally written in LaTeX and converted for display on this site using a set of tools I’ve written based around latex2jax, which is available on my github.

Posted in Math.NT, Mathematics, sage | 1 Comment

Sage Days 87 Demo: Interfacing between sage and the LMFDB

Interfacing sage and the LMFDB — a prototype

The lmfdb and sagemath are both great things, but they don’t currently talk to each other. Much of the lmfdb calls sage, but the lmfdb also includes vast amounts of data on $L$-functions and modular forms (hence the name) that is not accessible from within sage.
This is an example prototype of an interface to the lmfdb from sage. Keep in mind that this is a prototype and every aspect can change. But we hope to show what may be possible in the future. If you have requests, comments, or questions, please request/comment/ask either now, or at my email: david@lowryduda.com.

Note that this notebook is available on http://davidlowryduda.com or https://gist.github.com/davidlowryduda/deb1f88cc60b6e1243df8dd8f4601cde, and the code is available at https://github.com/davidlowryduda/sage2lmfdb

Let’s dive into an example.

In [1]:
# These names will change
from sage.all import *
import LMFDB2sage.elliptic_curves as lmfdb_ecurve
In [2]:
lmfdb_ecurve.search(rank=1)
Out[2]:
[Elliptic Curve defined by y^2 + x*y = x^3 - 887688*x - 321987008 over Rational Field,
 Elliptic Curve defined by y^2 + x*y + y = x^3 - x^2 + 10795*x - 97828 over Rational Field,
 Elliptic Curve defined by y^2 + x*y + y = x^3 - x^2 - 2294115305*x - 42292668425178 over Rational Field,
 Elliptic Curve defined by y^2 + x*y + y = x^3 - x^2 - 3170*x - 49318 over Rational Field,
 Elliptic Curve defined by y^2 + y = x^3 + 1050*x - 26469 over Rational Field,
 Elliptic Curve defined by y^2 + x*y = x^3 - x^2 - 1240542*x - 531472509 over Rational Field,
 Elliptic Curve defined by y^2 + y = x^3 - x^2 + 8100*x - 263219 over Rational Field,
 Elliptic Curve defined by y^2 + x*y = x^3 + 637*x - 68783 over Rational Field,
 Elliptic Curve defined by y^2 + y = x^3 + x^2 + 36*x - 380 over Rational Field,
 Elliptic Curve defined by y^2 + y = x^3 + x^2 - 2535*x - 49982 over Rational Field]
This returns 10 elliptic curves of rank 1. But these are a bit different from sage’s elliptic curves.

In [3]:
Es = lmfdb_ecurve.search(rank=1)
E = Es[0]
print(type(E))
<class 'LMFDB2sage.ell_lmfdb.EllipticCurve_rational_field_lmfdb_with_category'>
Note that the class of an elliptic curve is an lmfdb EllipticCurve. But don’t worry, this is a subclass of a normal elliptic curve. So we can call the normal things one might call on an elliptic curve.


In [4]:
# Try autocompleting the following. It has all the things!
print(dir(E))
['CPS_height_bound', 'CartesianProduct',
'Chow_form', 'Hom',
'Jacobian', 'Jacobian_matrix',
'Lambda', 'Np',
'S_integral_points', '_AlgebraicScheme__A',
'_AlgebraicScheme__divisor_group', '_AlgebraicScheme_subscheme__polys',
'_EllipticCurve_generic__ainvs', '_EllipticCurve_generic__b_invariants',
'_EllipticCurve_generic__base_ring', '_EllipticCurve_generic__discriminant',
'_EllipticCurve_generic__is_over_RationalField', '_EllipticCurve_generic__multiple_x_denominator',
'_EllipticCurve_generic__multiple_x_numerator', '_EllipticCurve_rational_field__conductor_pari',
'_EllipticCurve_rational_field__generalized_congruence_number', '_EllipticCurve_rational_field__generalized_modular_degree',
'_EllipticCurve_rational_field__gens', '_EllipticCurve_rational_field__modular_degree',
'_EllipticCurve_rational_field__np', '_EllipticCurve_rational_field__rank',
'_EllipticCurve_rational_field__regulator', '_EllipticCurve_rational_field__torsion_order',
'_Hom_', '__add__', '__cached_methods', '__call__',
'__class__', '__cmp__', '__contains__', '__delattr__',
'__dict__', '__dir__', '__div__', '__doc__',
'__eq__', '__format__', '__ge__', '__getattribute__',
'__getitem__', '__getstate__', '__gt__', '__hash__',
'__init__', '__le__', '__lt__', '__make_element_class__',
'__module__', '__mul__', '__ne__', '__new__',
'__nonzero__', '__pari__', '__pow__', '__pyx_vtable__',
'__rdiv__', '__reduce__', '__reduce_ex__', '__repr__',
'__rmul__', '__setattr__', '__setstate__', '__sizeof__',
'__str__', '__subclasshook__', '__temporarily_change_names', '__truediv__',
'__weakref__', '_abstract_element_class', '_adjust_heegner_index', '_an_element_',
'_ascii_art_', '_assign_names', '_axiom_', '_axiom_init_',
'_base', '_base_ring', '_base_scheme', '_best_affine_patch',
'_cache__point_homset', '_cache_an_element', '_cache_key', '_check_satisfies_equations',
'_cmp_', '_coerce_map_from_', '_coerce_map_via', '_coercions_used',
'_compute_gens', '_convert_map_from_', '_convert_method_name', '_defining_names',
'_defining_params_', '_doccls', '_element_constructor', '_element_constructor_',
'_element_constructor_from_element_class', '_element_init_pass_parent', '_factory_data', '_first_ngens',
'_forward_image', '_fricas_', '_fricas_init_', '_gap_',
'_gap_init_', '_generalized_congmod_numbers', '_generic_coerce_map', '_generic_convert_map',
'_get_action_', '_get_local_data', '_giac_', '_giac_init_',
'_gp_', '_gp_init_', '_heegner_best_tau', '_heegner_forms_list',
'_heegner_index_in_EK', '_homset', '_init_category_', '_initial_action_list',
'_initial_coerce_list', '_initial_convert_list', '_interface_', '_interface_init_',
'_interface_is_cached_', '_internal_coerce_map_from', '_internal_convert_map_from', '_introspect_coerce',
'_is_category_initialized', '_is_valid_homomorphism_', '_isoclass', '_json',
'_kash_', '_kash_init_', '_known_points', '_latex_',
'_lmfdb_label', '_lmfdb_regulator', '_macaulay2_', '_macaulay2_init_',
'_magma_init_', '_maple_', '_maple_init_', '_mathematica_',
'_mathematica_init_', '_maxima_', '_maxima_init_', '_maxima_lib_',
'_maxima_lib_init_', '_modsym', '_modular_symbol_normalize', '_morphism',
'_multiple_of_degree_of_isogeny_to_optimal_curve', '_multiple_x_denominator', '_multiple_x_numerator', '_names',
'_normalize_padic_lseries', '_octave_', '_octave_init_', '_p_primary_torsion_basis',
'_pari_', '_pari_init_', '_point', '_point_homset',
'_polymake_', '_polymake_init_', '_populate_coercion_lists_', '_r_init_',
'_reduce_model', '_reduce_point', '_reduction', '_refine_category_',
'_repr_', '_repr_option', '_repr_type', '_sage_',
'_scale_by_units', '_set_conductor', '_set_cremona_label', '_set_element_constructor',
'_set_gens', '_set_modular_degree', '_set_rank', '_set_torsion_order',
'_shortest_paths', '_singular_', '_singular_init_', '_symbolic_',
'_test_an_element', '_test_cardinality', '_test_category', '_test_elements',
'_test_elements_eq_reflexive', '_test_elements_eq_symmetric', '_test_elements_eq_transitive', '_test_elements_neq',
'_test_eq', '_test_new', '_test_not_implemented_methods', '_test_pickling',
'_test_some_elements', '_tester', '_torsion_bound', '_unicode_art_',
'_unset_category', '_unset_coercions_used', '_unset_embedding', 'a1',
'a2', 'a3', 'a4', 'a6',
'a_invariants', 'abelian_variety', 'affine_patch', 'ainvs',
'algebra', 'ambient_space', 'an', 'an_element',
'analytic_rank', 'analytic_rank_upper_bound', 'anlist', 'antilogarithm',
'ap', 'aplist', 'arithmetic_genus', 'automorphisms',
'b2', 'b4', 'b6', 'b8',
'b_invariants', 'base', 'base_extend', 'base_field',
'base_morphism', 'base_ring', 'base_scheme', 'c4',
'c6', 'c_invariants', 'cartesian_product', 'categories',
'category', 'change_ring', 'change_weierstrass_model', 'cm_discriminant',
'codimension', 'coerce', 'coerce_embedding', 'coerce_map_from',
'complement', 'conductor', 'congruence_number', 'construction',
'convert_map_from', 'coordinate_ring', 'count_points', 'cremona_label',
'database_attributes', 'database_curve', 'db', 'defining_ideal',
'defining_polynomial', 'defining_polynomials', 'degree', 'descend_to',
'dimension', 'dimension_absolute', 'dimension_relative', 'discriminant',
'division_field', 'division_polynomial', 'division_polynomial_0', 'divisor',
'divisor_group', 'divisor_of_function', 'dual', 'dump',
'dumps', 'element_class', 'elliptic_exponential', 'embedding_center',
'embedding_morphism', 'eval_modular_form', 'excellent_position', 'formal',
'formal_group', 'fundamental_group', 'galois_representation', 'gen',
'gens', 'gens_certain', 'gens_dict', 'gens_dict_recursive',
'genus', 'geometric_genus', 'get_action', 'global_integral_model',
'global_minimal_model', 'global_minimality_class', 'has_additive_reduction', 'has_bad_reduction',
'has_base', 'has_cm', 'has_coerce_map_from', 'has_global_minimal_model',
'has_good_reduction', 'has_good_reduction_outside_S', 'has_multiplicative_reduction', 'has_nonsplit_multiplicative_reduction',
'has_rational_cm', 'has_split_multiplicative_reduction', 'hasse_invariant', 'heegner_discriminants',
'heegner_discriminants_list', 'heegner_index', 'heegner_index_bound', 'heegner_point',
'heegner_point_height', 'heegner_sha_an', 'height', 'height_function',
'height_pairing_matrix', 'hom', 'hyperelliptic_polynomials', 'identity_morphism',
'inject_variables', 'integral_model', 'integral_points', 'integral_short_weierstrass_model',
'integral_weierstrass_model', 'integral_x_coords_in_interval', 'intersection', 'intersection_multiplicity',
'intersection_points', 'intersects_at', 'irreducible_components', 'is_atomic_repr',
'is_coercion_cached', 'is_complete_intersection', 'is_conversion_cached', 'is_exact',
'is_global_integral_model', 'is_global_minimal_model', 'is_good', 'is_integral',
'is_irreducible', 'is_isogenous', 'is_isomorphic', 'is_local_integral_model',
'is_minimal', 'is_on_curve', 'is_ordinary', 'is_ordinary_singularity',
'is_p_integral', 'is_p_minimal', 'is_parent_of', 'is_projective',
'is_quadratic_twist', 'is_quartic_twist', 'is_semistable', 'is_sextic_twist',
'is_singular', 'is_smooth', 'is_supersingular', 'is_transverse',
'is_x_coord', 'isogenies_prime_degree', 'isogeny', 'isogeny_class',
'isogeny_codomain', 'isogeny_degree', 'isogeny_graph', 'isomorphism_to',
'isomorphisms', 'j_invariant', 'kodaira_symbol', 'kodaira_type',
'kodaira_type_old', 'kolyvagin_point', 'label', 'latex_name',
'latex_variable_names', 'lift_x', 'lll_reduce', 'lmfdb_page',
'local_coordinates', 'local_data', 'local_integral_model', 'local_minimal_model',
'lseries', 'lseries_gross_zagier', 'manin_constant', 'matrix_of_frobenius',
'minimal_discriminant_ideal', 'minimal_model', 'minimal_quadratic_twist', 'mod5family',
'modular_degree', 'modular_form', 'modular_parametrization', 'modular_symbol',
'modular_symbol_numerical', 'modular_symbol_space', 'multiplication_by_m', 'multiplication_by_m_isogeny',
'multiplicity', 'mwrank', 'mwrank_curve', 'neighborhood',
'newform', 'ngens', 'non_minimal_primes', 'nth_iterate',
'objgen', 'objgens', 'optimal_curve', 'orbit',
'ordinary_model', 'ordinary_primes', 'padic_E2', 'padic_height',
'padic_height_pairing_matrix', 'padic_height_via_multiply', 'padic_lseries', 'padic_regulator',
'padic_sigma', 'padic_sigma_truncated', 'parent', 'pari_curve',
'pari_mincurve', 'period_lattice', 'plane_projection', 'plot',
'point', 'point_homset', 'point_search', 'point_set',
'pollack_stevens_modular_symbol', 'preimage', 'projection', 'prove_BSD',
'q_eigenform', 'q_expansion', 'quadratic_transform', 'quadratic_twist',
'quartic_twist', 'rank', 'rank_bound', 'rank_bounds',
'rational_parameterization', 'rational_points', 'real_components', 'reduce',
'reduction', 'register_action', 'register_coercion', 'register_conversion',
'register_embedding', 'regulator', 'regulator_of_points', 'rename',
'reset_name', 'root_number', 'rst_transform', 'satisfies_heegner_hypothesis',
'saturation', 'save', 'scale_curve', 'selmer_rank',
'sextic_twist', 'sha', 'short_weierstrass_model', 'silverman_height_bound',
'simon_two_descent', 'singular_points', 'singular_subscheme', 'some_elements',
'specialization', 'structure_morphism', 'supersingular_primes', 'tamagawa_exponent',
'tamagawa_number', 'tamagawa_number_old', 'tamagawa_numbers', 'tamagawa_product',
'tamagawa_product_bsd', 'tangents', 'tate_curve', 'three_selmer_rank',
'torsion_order', 'torsion_points', 'torsion_polynomial', 'torsion_subgroup',
'two_descent', 'two_descent_simon', 'two_division_polynomial', 'two_torsion_rank',
'union', 'variable_name', 'variable_names', 'weierstrass_p',
'weil_restriction', 'zeta_series']
All the things
This gives quick access to some data that is not stored within the LMFDB, but which is relatively quickly computable. For example,

In [5]:
E.defining_ideal()
Out[5]:
Ideal (-x^3 + x*y*z + y^2*z + 887688*x*z^2 + 321987008*z^3) of Multivariate Polynomial Ring in x, y, z over Rational Field
But one of the great powers is that there are some things which are computed and stored in the LMFDB, and not in sage. We can now immediately give many examples of curves with prescribed conductor and torsion order:

In [6]:
Es = lmfdb_ecurve.search(conductor=11050, torsion_order=2)
print("There are {} curves returned.".format(len(Es)))
E = Es[0]
print(E)
There are 10 curves returned.
Elliptic Curve defined by y^2 + x*y + y = x^3 - 3476*x - 79152 over Rational Field
And for these curves, the lmfdb contains data on their ranks, generators, regulators, and so on.

In [7]:
print(E.gens())
print(E.rank())
print(E.regulator())
[(-34 : 17 : 1)]
1
1.63852610029
In [8]:
res = []
%time for E in Es: res.append(E.gens()); res.append(E.rank()); res.append(E.regulator())
CPU times: user 971 ms, sys: 6.82 ms, total: 978 ms
Wall time: 978 ms
That’s pretty fast, and this is because all of this was pulled from the LMFDB when the curves were returned by the search() function.
In this case, elliptic curves over the rationals are only an okay example, as they’re really well studied and sage can compute much of the data very quickly. On the other hand, through the LMFDB there are millions of examples and corresponding data at one’s fingertips.

This is where we’re really looking for input.

Think of what you might want to have easy access to through an interface from sage to the LMFDB, and tell us. We’re actively seeking comments, suggestions, and requests. Elliptic curves over the rationals are a prototype, and the LMFDB has lots of (much more challenging to compute) data. There is data on the LMFDB that is simply not accessible from within sage.
email: david@lowryduda.com, or post an issue on https://github.com/LMFDB/lmfdb/issues

Now let’s describe what’s going on under the hood a little bit

There is an API for the LMFDB at http://beta.lmfdb.org/api/. This API is a bit green, and we will change certain aspects of it to behave better in the future. A call to the API looks like

http://beta.lmfdb.org/api/elliptic_curves/curves/?rank=i1&conductor=i11050

The result is a large mess of data, which can be exported as json and parsed.
But that’s hard, and the resulting data are not sage objects. They are just strings or ints, and these require time and thought to parse.
So we created a module in sage that writes the API call and parses the output back into sage objects. The 22 curves given by the above API call are the same 22 curves returned by this call:

In [9]:
Es = lmfdb_ecurve.search(rank=1, conductor=11050, max_items=25)
print(len(Es))
E = Es[0]
22
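For the curious, the mechanics are roughly the following sketch. This is not the actual LMFDB2sage source, and the json field names ('data' and 'ainvs') are guesses for illustration; the 'i' prefix on each argument is the integer marker the API expects, as in the url above.

# A rough sketch of what the module does under the hood -- not the
# actual LMFDB2sage source. The json field names are assumptions.
import json
from urllib2 import urlopen   # python2, matching sage at the time
from sage.all import QQ, EllipticCurve

def naive_search(**kwargs):
    url = "http://beta.lmfdb.org/api/elliptic_curves/curves/?_format=json"
    for key, value in kwargs.items():
        url += "&{0}=i{1}".format(key, value)
    reply = json.load(urlopen(url))
    # assuming each record carries its a-invariants as a list of integers
    return [EllipticCurve(QQ, record['ainvs']) for record in reply['data']]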
The total functionality of this search function is visible from its current documentation.

In [10]:
# Execute this cell for the documentation
print(lmfdb_ecurve.search.__doc__)
    Search the LMFDB for an elliptic curve.

    Note that all inputs are optional, but at least one input is necessary.

    INPUT:

    -  ``label=l`` -- a string ``l`` representing a label in the LMFDB.

    -  ``degree=d`` -- an int ``d`` giving the minimum degree of a
       parameterization of the modular curve

    -  ``conductor=c`` -- an int ``c`` giving the conductor of the curve

    -  ``min_conductor=mc`` -- an int ``mc`` giving a lower bound on the
       conductor for desired curves

    -  ``max_conductor=mc`` -- an int ``mc`` giving an upper bound on the
       conductor for desired curves

    -  ``torsion_order=t`` -- an int ``t`` giving the order of the torsion
       subgroup of the curve

    -  ``rank=r`` -- an int ``r`` giving the rank of the curve

    -  ``regulator=f`` -- a float ``f`` giving the regulator of the curve

    -  ``max_items=m`` -- an int ``m`` (default: 10, max: 100) indicating the
       maximum number of results to return

    -  ``base_item=b`` -- an int ``b`` (default: 0) specifying where to start
       returning values from. The search will begin by returning the ``b``th
       curve. Combined with ``max_items`` to return data in chunks.

    -  ``sort=s`` -- a string ``s`` specifying what database field to sort the
       results on. See the LMFDB api for more info.

    EXAMPLES::

        sage: Es = search(conductor=11050, rank=2)
        [Elliptic Curve defined by y^2 + x*y = x^3 - x^2 - 442*x + 1716 over Rational Field, Elliptic Curve defined by y^2 + x*y = x^3 - x^2 + 1558*x + 11716 over Rational Field]
        sage: E = Es[0]
        sage: E.conductor()
        11050
    
In [11]:
# So, for instance, one could perform the following search, finding a unique elliptic curve
lmfdb_ecurve.search(rank=2, torsion_order=3, degree=4608)
Out[11]:
[Elliptic Curve defined by y^2 + y = x^3 + x^2 - 5155*x + 140756 over Rational Field]

What if there are no curves?

If there are no curves satisfying the search criteria, then a message is displayed and that’s that. These searches may take a couple of seconds to complete.
For example, no elliptic curve in the database has rank 5.

In [12]:
lmfdb_ecurve.search(rank=5)
No fields were found satisfying input criteria.

How does one step through the data?

Right now, at most 100 curves are returned in a single API call. This is the limit even from directly querying the API. But one can pass in the argument base_item (the name will probably change… to skip? or perhaps to offset?) to start returning at the base_itemth element.

In [13]:
from pprint import pprint
pprint(lmfdb_ecurve.search(rank=1, max_items=3))              # The last item in this list
print('')
pprint(lmfdb_ecurve.search(rank=1, max_items=3, base_item=2)) # should be the first item in this list
[Elliptic Curve defined by y^2 + x*y = x^3 - 887688*x - 321987008 over Rational Field,
 Elliptic Curve defined by y^2 + x*y + y = x^3 - x^2 + 10795*x - 97828 over Rational Field,
 Elliptic Curve defined by y^2 + x*y + y = x^3 - x^2 - 2294115305*x - 42292668425178 over Rational Field]

[Elliptic Curve defined by y^2 + x*y + y = x^3 - x^2 - 2294115305*x - 42292668425178 over Rational Field,
 Elliptic Curve defined by y^2 + x*y + y = x^3 - x^2 - 3170*x - 49318 over Rational Field,
 Elliptic Curve defined by y^2 + y = x^3 + 1050*x - 26469 over Rational Field]
Included in the documentation is also a bit of hopefulness. Right now, the LMFDB API does not actually accept max_conductor or min_conductor (or arguments of that type). But it will sometime. (This introduces a few extra difficulties on the server side, and so it will take some extra time to decide how to do this).

In [14]:
lmfdb_ecurve.search(rank=1, min_conductor=500, max_conductor=10000)  # Not implemented
---------------------------------------------------------------------------
NotImplementedError                       Traceback (most recent call last)
<ipython-input-14-3d98f2cf7a13> in <module>()
----> 1 lmfdb_ecurve.search(rank=Integer(1), min_conductor=Integer(500), max_conductor=Integer(10000))  # Not implemented

/home/djlowry/Dropbox/EllipticCurve_LMFDB/LMFDB2sage/elliptic_curves.py in search(**kwargs)
     76             kwargs[item]
     77             raise NotImplementedError("This would be a great thing to have, " +
---> 78                 "but the LMFDB api does not yet provide this functionality.")
     79         except KeyError:
     80             pass

NotImplementedError: This would be a great thing to have, but the LMFDB api does not yet provide this functionality.
Our EllipticCurve_rational_field_lmfdb class constructs a sage elliptic curve from the json and overrides (some of) the default methods in sage if there is quicker data available on the LMFDB. In principle, this new object is just a sage object with some slightly different methods.
Generically, documentation and introspection on objects from this class should work. Much of sage’s documentation carries through directly.
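The overriding pattern is roughly the following sketch (my paraphrase of the idea, not the actual LMFDB2sage source; names other than EllipticCurve_rational_field are illustrative):

# A sketch of the idea behind the subclass; the attribute and field
# names here are illustrative, not the real source.
from sage.schemes.elliptic_curves.ell_rational_field import EllipticCurve_rational_field

class EllipticCurve_rational_field_lmfdb(EllipticCurve_rational_field):
    def __init__(self, ainvs, lmfdb_data):
        self._lmfdb_data = lmfdb_data   # the parsed json record from the API
        EllipticCurve_rational_field.__init__(self, ainvs)

    def rank(self, *args, **kwargs):
        # prefer the stored LMFDB value over recomputing with mwrank
        if 'rank' in self._lmfdb_data:
            return self._lmfdb_data['rank']
        return EllipticCurve_rational_field.rank(self, *args, **kwargs)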

In [15]:
print(E.gens.__doc__)
        Return generators for the Mordell-Weil group E(Q) *modulo*
        torsion.

        .. warning::

           If the program fails to give a provably correct result, it
           prints a warning message, but does not raise an
           exception. Use :meth:`~gens_certain` to find out if this
           warning message was printed.

        INPUT:

        - ``proof`` -- bool or None (default None), see
          ``proof.elliptic_curve`` or ``sage.structure.proof``

        - ``verbose`` - (default: None), if specified changes the
           verbosity of mwrank computations

        - ``rank1_search`` - (default: 10), if the curve has analytic
          rank 1, try to find a generator by a direct search up to
          this logarithmic height.  If this fails, the usual mwrank
          procedure is called.

        - algorithm -- one of the following:

          - ``'mwrank_shell'`` (default) -- call mwrank shell command

          - ``'mwrank_lib'`` -- call mwrank C library

        - ``only_use_mwrank`` -- bool (default True) if False, first
          attempts to use more naive, natively implemented methods

        - ``use_database`` -- bool (default True) if True, attempts to
          find curve and gens in the (optional) database

        - ``descent_second_limit`` -- (default: 12) used in 2-descent

        - ``sat_bound`` -- (default: 1000) bound on primes used in
          saturation.  If the computed bound on the index of the
          points found by two-descent in the Mordell-Weil group is
          greater than this, a warning message will be displayed.

        OUTPUT:

        - ``generators`` - list of generators for the Mordell-Weil
           group modulo torsion

        IMPLEMENTATION: Uses Cremona's mwrank C library.

        EXAMPLES::

            sage: E = EllipticCurve('389a')
            sage: E.gens()                 # random output
            [(-1 : 1 : 1), (0 : 0 : 1)]

        A non-integral example::

            sage: E = EllipticCurve([-3/8,-2/3])
            sage: E.gens() # random (up to sign)
            [(10/9 : 29/54 : 1)]

        A non-minimal example::

            sage: E = EllipticCurve('389a1')
            sage: E1 = E.change_weierstrass_model([1/20,0,0,0]); E1
            Elliptic Curve defined by y^2 + 8000*y = x^3 + 400*x^2 - 320000*x over Rational Field
            sage: E1.gens() # random (if database not used)
            [(-400 : 8000 : 1), (0 : -8000 : 1)]
        
Modified methods should have a note indicating that the data comes from the LMFDB, and then give sage’s documentation. This is not yet implemented. (So if you examine the current version, you can see some incomplete docstrings like regulator().)

In [16]:
print(E.regulator.__doc__)
        Return the regulator of the curve. This is taken from the lmfdb if available.

        NOTE:
            In later implementations, this docstring will probably include the
            docstring from sage's regular implementation. But that's not
            currently the case.
        

This concludes our demo of an interface between sage and the LMFDB.

Thank you, and if you have any questions, comments, or concerns, please find me/email me/raise an issue on LMFDB’s github.
XKCD's automation

Posted in Expository, LMFDB, Math.NT, Mathematics, Programming, Python, sagemath | Leave a comment

“Second Moments in the Generalized Gauss Circle Problem” (with T. Hulse, C. Ieong Kuan, and A. Walker)

This is joint work with Thomas Hulse, Chan Ieong Kuan, and Alexander Walker. This is a natural successor to our previous work (see their announcements: one, two, three) concerning bounds and asymptotics for sums of coefficients of modular forms.

We now have a variety of results concerning the behavior of the partial sums

$$ S_f(X) = \sum_{n \leq X} a(n) $$

where $f(z) = \sum_{n \geq 1} a(n) e(nz)$ is a GL(2) cuspform. The primary focus of our previous work was to understand the Dirichlet series

$$ D(s, S_f \times S_f) = \sum_{n \geq 1} \frac{S_f(n)^2}{n^s} $$

completely, give its meromorphic continuation to the plane (this was the major topic of the first paper in the series), and to perform classical complex analysis on this object in order to describe the behavior of $S_f(n)$ and $S_f(n)^2$ (this was done in the first paper, and was the major topic of the second paper of the series). One motivation for studying this type of problem is that bounds for $S_f(n)$ are analogous to understanding the error term in the lattice point discrepancy for circles.

That is, let $S_2(R)$ denote the number of lattice points in a circle of radius $\sqrt{R}$ centered at the origin. Then we expect that $S_2(R)$ is approximately the area of the circle, plus or minus some error term. We write this as

$$ S_2(R) = \pi R + P_2(R),$$

where $P_2(R)$ is the error term. We refer to $P_2(R)$ as the “lattice point discrepancy” — it describes the discrepancy between the number of lattice points in the circle and the area of the circle. Determining the size of $P_2(R)$ is a very famous problem called the Gauss circle problem, and it has been studied for over 200 years. We believe that $P_2(R) = O(R^{1/4 + \epsilon})$, but that is not known to be true.
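For a feel for the sizes involved, both the count and the discrepancy are easy to compute directly. Here is a small sage sketch (my illustration, not from the paper):

# Count lattice points in the circle of radius sqrt(R) and compare
# against the area pi*R. (An illustration, not from the paper.)
from sage.all import isqrt, pi, N

def S2(R):
    # for each x, the points (x, y) in the circle have y^2 <= R - x^2,
    # giving 2*floor(sqrt(R - x^2)) + 1 choices of y
    m = isqrt(R)
    return sum(2*isqrt(R - x**2) + 1 for x in range(-m, m + 1))

R = 10**6
print(S2(R) - N(pi * R))   # P_2(R), tiny compared to the area ~3141592.65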

The Gauss circle problem can be cast in the language of modular forms. Let $\theta(z)$ denote the standard Jacobi theta series,

$$ \theta(z) = \sum_{n \in \mathbb{Z}} e^{2\pi i n^2 z}.$$

Then

$$ \theta^2(z) = 1 + \sum_{n \geq 1} r_2(n) e^{2\pi i n z},$$

where $r_2(n)$ denotes the number of representations of $n$ as a sum of $2$ (positive or negative) squares. The function $\theta^2(z)$ is a modular form of weight $1$ on $\Gamma_0(4)$, but it is not a cuspform. However, the sum

$$ \sum_{n \leq R} r_2(n) = S_2(R),$$

and so the partial sums of the coefficients of $\theta^2(z)$ indicate the number of lattice points in the circle of radius $\sqrt R$. Thus $\theta^2(z)$ gives access to the Gauss circle problem.

More generally, one can consider the number of lattice points in a $k$-dimensional sphere of radius $\sqrt R$ centered at the origin, which should approximately be the volume of that sphere,

$$ S_k(R) = \mathrm{Vol}(B(\sqrt R)) + P_k(R) = \sum_{n \leq R} r_k(n),$$

giving a $k$-dimensional lattice point discrepancy. For large dimension $k$, one should expect that the circle problem is sufficient to give good bounds and understanding of the size and error of $S_k(R)$. For $k \geq 5$, the true order of growth for $P_k(R)$ is known (up to constants).

It therefore happens that the small (meaning 2 or 3) dimensional cases are both the most interesting, given our predilection for 2 and 3 dimensional geometry, and the most enigmatic. For a variety of reasons, the three dimensional case is very challenging to understand, and is perhaps even more enigmatic than the two dimensional case.

Strong evidence for the conjectured size of the lattice point discrepancy comes in the form of mean square estimates. By looking at the square, one doesn’t need to worry about oscillation from positive to negative values. And by averaging over many radii, one hopes to smooth out some of the individual bumps. These mean square estimates take the form

$$\begin{align}
\int_0^X P_2(t)^2 dt &= C X^{3/2} + O(X \log^2 X) \\
\int_0^X P_3(t)^2 dt &= C' X^2 \log X + O(X^2 \sqrt{\log X}).
\end{align}$$

These indicate that the average size of $P_2(R)$ is $R^{1/4}$, and that the average size of $P_3(R)$ is $R^{1/2}$. In the two dimensional case, notice that the error term in the mean square asymptotic has pretty significant separation: it has essentially a $\sqrt X$ power-savings over the main term. But in the three dimensional case, there is no power separation. Even with significant averaging, we are only just capable of distinguishing a main term at all.

It is also interesting, but for more complicated reasons, that the main term in the three dimensional case has a log term within it. This is unique to the three dimensional case. But that is a description for another time.

In a paper that we recently posted to the arxiv, we show that the Dirichlet series

$$ \sum_{n \geq 1} \frac{S_k(n)^2}{n^s} $$

and

$$ \sum_{n \geq 1} \frac{P_k(n)^2}{n^s} $$

for $k \geq 3$ have understandable meromorphic continuation to the plane. Of particular interest is the $k = 3$ case, of course. We then investigate smoothed and unsmoothed mean square results. In particular, we prove the following result.

Theorem

$$\begin{align} \int_0^\infty P_k(t)^2 e^{-t/X} dt &= C_3 X^2 \log X + C_4 X^{5/2} \\ &\quad + C_k X^{k-1} + O(X^{k-2}). \end{align}$$

In this statement, the term with $C_3$ only appears in dimension $3$, and the term with $C_4$ only appears in dimension $4$. This should really be thought of as saying that we understand the Laplace transform of the square of the lattice point discrepancy as well as can be desired.

We are also able to improve the sharp second mean in the dimension 3 case, showing in particular the following.

Theorem

There exists $\lambda > 0$ such that

$$\int_0^X P_3(t)^2 dt = C X^2 \log X + D X^2 + O(X^{2 - \lambda}).$$

We do not actually compute what we might take $\lambda$ to be, but we believe (informally) that $\lambda$ can be taken as $1/5$.

The major themes behind these new results are already present in the first paper in the series. The new ingredient involves handling the behavior of non-cuspforms at the cusps on the analytic side, and handling the apparent main terms (in this case, the volume of the ball) on the combinatorial side.

There is an additional difficulty that arises in the dimension 2 case which makes it distinct. But soon I will describe a different forthcoming work in that case.

Posted in Math.NT, Mathematics | Leave a comment

How fat would we have to get to balance carbon emissions?

Let’s consider a ridiculous solution to a real problem. We’re unearthing tons of carbon, burning it, and releasing it into the atmosphere.

Disclaimer: There are several greenhouse gasses, and lots of other things that we’re throwing wantonly into the environment. Considering them makes things incredibly complicated incredibly quickly, so I blithely ignore them in this note.

Such rapid changes have side effects, many of which lead to bad things. That’s why nearly 150 countries ratified the Paris Agreement on Climate Change [1]. Even if we assume that all these countries will accomplish what they agreed to (which might be challenging for the US) [2], most nations and advocacy groups are focusing on increasing efficiency and reducing emissions. These are good goals! But what about all the carbon that is already in the atmosphere? [3]

You know what else is a problem? Obesity! How are we to solve all of these problems?

Looking at this (very unscientific) graph [4], we see that the red isn’t keeping up! Maybe we aren’t using the valuable resource of our own bodies enough! Fat has carbon in it — often over 20% by weight. What if we took advantage of our propensity to become propense? How fat would we need to get to balance last year’s carbon emissions?

That’s what we investigate here.

(more…)

Posted in Data, Mathematics, Story | 1 Comment

Slides from a Dissertation Defense

I just defended my dissertation. Thank you to Jeff, Jill, and Dinakar for being on my Defense Committee. In this talk, I discuss some of the ideas and follow-ups on my thesis. I’ll also take this moment to include the dedication in my thesis.

Here are the slides from my defense.

After the defense, I gave Jeff and Jill a poster of our family tree. I made this using data from Math Genealogy, which has so much data.


Posted in Mathematics | 2 Comments

Experimenting with latex2html5: PSTricks to HTML interactivity

I recently learned about latex2html5, a javascript library which allows one to write LaTeX and PSTricks to produce interactive objects on websites. At its core, it functions in a similar way to MathJax, which is what I use to generate mathematics on this (and my other) sites. As an example of MathJax, I can write the following.

$$ \int_0^1 f(x) dx = F(1) - F(0). $$

The dream of latex2html5 is to be able to describe a diagram using the language of PSTricks inside LaTeX, throw in a bit of sugar to describe how interactivity should work on the web, and then render this to a beautiful svg using javascript.

Unfortunately, I did not try to make this work on WordPress (as WordPress is a bit finicky about how it interacts with javascript). So instead, I wrote a more detailed description about latex2html5, including some examples and some criticisms, on my non-Wordpress website david.lowryduda.com.


Posted in Programming | Leave a comment

Slides from a Talk at the Dartmouth Number Theory Seminar

I recently gave a talk at the Dartmouth Number Theory Seminar (thank you Edgar for inviting me and to Edgar, Naomi, and John for being such good hosts). In this talk, I described the recent successes we’ve had on working with variants of the Gauss Circle Problem.

The story began when (with Tom Hulse, Chan Ieong Kuan, and Alex Walker — and with helpful input from Mehmet Kiral, Jeff Hoffstein, and others) we introduced and studied the Dirichlet series
$$\begin{equation}
\sum_{n \geq 1} \frac{S(n)^2}{n^s}, \notag
\end{equation}$$
where $S(n)$ is a sum of the first $n$ Fourier coefficients of an automorphic form on GL(2). We’ve done this successfully with a variety of automorphic forms, leading to new results for averages, short-interval averages, sign changes, and mean-square estimates of the error for several classical problems. Many of these papers and results have been discussed in other places on this site.

Ultimately, the problem becomes acquiring sufficiently detailed understandings of the spectral behavior of various forms (or more correctly, the behavior of the spectral expansion of a Poincare series against various forms).
We are continuing to research and study a variety of problems through this general approach.

The slides for this talk are available here.

Posted in Uncategorized | Leave a comment

Smooth Sums to Sharp Sums 1

In this note, I describe a combination of two smoothed integral transforms that has been very useful in my collaborations with Alex Walker, Chan Ieong Kuan, and Tom Hulse. I suspect that this particular technique was once very well-known. But we were not familiar with it, and so I describe it here.

In application, this is somewhat more complicated. But to show the technique, I apply it to reprove some classic bounds on $\text{GL}(2)$ $L$-functions.

This note is also available as a pdf. This was first written as a LaTeX document, and then modified to fit into wordpress through latex2jax.

Introduction

Consider a Dirichlet series
$$\begin{equation}
D(s) = \sum_{n \geq 1} \frac{a(n)}{n^s}. \notag
\end{equation}$$
Suppose that this Dirichlet series converges absolutely for $\Re s > 1$, has meromorphic continuation to the complex plane, and satisfies a functional equation of shape
$$\begin{equation}
\Lambda(s) := G(s) D(s) = \epsilon \Lambda(1-s), \notag
\end{equation}$$
where $\lvert \epsilon \rvert = 1$ and $G(s)$ is a product of Gamma factors.

Dirichlet series are often used as a tool to study number theoretic functions with multiplicative properties. By studying the analytic properties of the Dirichlet series, one hopes to extract information about the coefficients $a(n)$. Some of the most common interesting information within Dirichlet series comes from partial sums
$$\begin{equation}
S(n) = \sum_{m \leq n} a(m). \notag
\end{equation}$$
For example, the Gauss Circle and Dirichlet Divisor problems can both be stated as problems concerning sums of coefficients of Dirichlet series.
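To make one of these concrete: taking
$$\begin{equation}
D(s) = \zeta(s)^2 = \sum_{n \geq 1} \frac{d(n)}{n^s}, \notag
\end{equation}$$
where $d(n)$ is the number of divisors of $n$, the partial sums become $S(n) = \sum_{m \leq n} d(m)$, and the Dirichlet Divisor problem asks for the size of the error in the approximation $S(n) \approx n \log n + (2\gamma - 1)n$.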

One can try to understand the partial sum directly by understanding the integral transform
$$\begin{equation}
S(X) = \frac{1}{2\pi i} \int_{(2)} D(s) \frac{X^s}{s} ds, \notag
\end{equation}$$
a Perron integral. However, it is often challenging to understand this integral, as delicate properties concerning the convergence of the integral often come into play.

Instead, one often tries to understand a smoothed sum of the form
$$\begin{equation}
\sum_{m \geq 1} a(m) v(m) \notag
\end{equation}$$
where $v(m)$ is a smooth function that vanishes or decays extremely quickly for values of $m$ larger than $n$. A large class of smoothed sums can be obtained by starting with a very nicely behaved weight function $v$ and taking its Mellin transform
$$\begin{equation}
V(s) = \int_0^\infty v(x) x^s \frac{dx}{x}. \notag
\end{equation}$$
Then Mellin inversion gives that
$$\begin{equation}
\sum_{m \geq 1} a(m) v(m/X) = \frac{1}{2\pi i} \int_{(2)} D(s) X^s V(s) ds, \notag
\end{equation}$$
as long as $v$ and $V$ are nice enough functions.
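A classical instance makes the shape of this concrete (a standard fact, recorded here for convenience): taking $v(x) = e^{-x}$, the Mellin transform is $V(s) = \Gamma(s)$, and the identity above reads
$$\begin{equation}
\sum_{m \geq 1} a(m) e^{-m/X} = \frac{1}{2\pi i} \int_{(2)} D(s) \Gamma(s) X^s ds. \notag
\end{equation}$$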

In this note, we will use two smoothing integral transforms and corresponding smoothed sums. We will use one smooth function $v_1$ (which depends on another parameter $Y$) with the property that
$$\begin{equation}
\sum_{m \geq 1} a(m) v_1(m/X) \approx \sum_{\lvert m - X \rvert < X/Y} a(m). \notag
\end{equation}$$
And we will use another smooth function $v_2$ (which also depends on $Y$) with the property that
$$\begin{equation}
\sum_{m \geq 1} a(m) v_2(m/X) = \sum_{m \leq X} a(m) + \sum_{X < m < X + X/Y} a(m) v_2(m/X). \notag
\end{equation}$$
Further, as long as the coefficients $a(m)$ are nonnegative, it will be true that
$$\begin{equation}
\sum_{X < m < X + X/Y} a(m) v_2(m/X) \ll \sum_{\lvert m - X \rvert < X/Y} a(m), \notag
\end{equation}$$
which is exactly what $\sum a(m) v_1(m/X)$ estimates. Therefore
$$\begin{equation}\label{eq:overall_plan}
\sum_{m \leq X} a(m) = \sum_{m \geq 1} a(m) v_2(m/X) + O\Big(\sum_{m \geq 1} a(m) v_1(m/X) \Big).
\end{equation}$$

Hence sufficient understanding of $\sum a(m) v_1(m/X)$ and $\sum a(m) v_2(m/X)$ allows one to understand the sharp sum
$$\begin{equation}
\sum_{m \leq X} a(m). \notag
\end{equation}$$

(more…)

Posted in Expository, Math.NT, Mathematics | 3 Comments