# Interfacing sage and the LMFDB — a prototype

The lmfdb and sagemath are both great things, but they don’t currently talk to each other. Much of the lmfdb calls sage, but the lmfdb also includes vast amounts of data on $L$-functions and modular forms (hence the name) that is not accessible from within sage.
This is an example prototype of an interface to the lmfdb from sage. Keep in mind that this is a prototype and every aspect can change. But we hope to show what may be possible in the future. If you have requests, comments, or questions, please request/comment/ask either now, or at my email: david@lowryduda.com.

Note that this notebook is available on http://davidlowryduda.com or https://gist.github.com/davidlowryduda/deb1f88cc60b6e1243df8dd8f4601cde, and the code is available at https://github.com/davidlowryduda/sage2lmfdb

Let’s dive into an example.

In [1]:
# These names will change
from sage.all import *
import LMFDB2sage.elliptic_curves as lmfdb_ecurve

In [2]:
lmfdb_ecurve.search(rank=1)

Out[2]:
[Elliptic Curve defined by y^2 + x*y = x^3 - 887688*x - 321987008 over Rational Field,
Elliptic Curve defined by y^2 + x*y + y = x^3 - x^2 + 10795*x - 97828 over Rational Field,
Elliptic Curve defined by y^2 + x*y + y = x^3 - x^2 - 2294115305*x - 42292668425178 over Rational Field,
Elliptic Curve defined by y^2 + x*y + y = x^3 - x^2 - 3170*x - 49318 over Rational Field,
Elliptic Curve defined by y^2 + y = x^3 + 1050*x - 26469 over Rational Field,
Elliptic Curve defined by y^2 + x*y = x^3 - x^2 - 1240542*x - 531472509 over Rational Field,
Elliptic Curve defined by y^2 + y = x^3 - x^2 + 8100*x - 263219 over Rational Field,
Elliptic Curve defined by y^2 + x*y = x^3 + 637*x - 68783 over Rational Field,
Elliptic Curve defined by y^2 + y = x^3 + x^2 + 36*x - 380 over Rational Field,
Elliptic Curve defined by y^2 + y = x^3 + x^2 - 2535*x - 49982 over Rational Field]
This returns 10 elliptic curves of rank 1. But these are a bit different from sage’s usual elliptic curves.

In [3]:
Es = lmfdb_ecurve.search(rank=1)
E = Es[0]
print(type(E))

<class 'LMFDB2sage.ell_lmfdb.EllipticCurve_rational_field_lmfdb_with_category'>

Note that the class of this elliptic curve is an lmfdb EllipticCurve. But don’t worry: this is a subclass of a normal elliptic curve, so we can call the normal things one might call on an elliptic curve.


In [4]:
# Try autocompleting the following. It has all the things!
print(dir(E))

['CPS_height_bound', 'CartesianProduct',
'Chow_form', 'Hom',
'Jacobian', 'Jacobian_matrix',
'Lambda', 'Np',
'S_integral_points', '_AlgebraicScheme__A',
'_AlgebraicScheme__divisor_group', '_AlgebraicScheme_subscheme__polys',
'_EllipticCurve_generic__ainvs', '_EllipticCurve_generic__b_invariants',
'_EllipticCurve_generic__base_ring', '_EllipticCurve_generic__discriminant',
'_EllipticCurve_generic__is_over_RationalField', '_EllipticCurve_generic__multiple_x_denominator',
'_EllipticCurve_generic__multiple_x_numerator', '_EllipticCurve_rational_field__conductor_pari',
'_EllipticCurve_rational_field__generalized_congruence_number', '_EllipticCurve_rational_field__generalized_modular_degree',
'_EllipticCurve_rational_field__gens', '_EllipticCurve_rational_field__modular_degree',
'_EllipticCurve_rational_field__np', '_EllipticCurve_rational_field__rank',
'_EllipticCurve_rational_field__regulator', '_EllipticCurve_rational_field__torsion_order',
'__class__', '__cmp__', '__contains__', '__delattr__',
'__dict__', '__dir__', '__div__', '__doc__',
'__eq__', '__format__', '__ge__', '__getattribute__',
'__getitem__', '__getstate__', '__gt__', '__hash__',
'__init__', '__le__', '__lt__', '__make_element_class__',
'__module__', '__mul__', '__ne__', '__new__',
'__nonzero__', '__pari__', '__pow__', '__pyx_vtable__',
'__rdiv__', '__reduce__', '__reduce_ex__', '__repr__',
'__rmul__', '__setattr__', '__setstate__', '__sizeof__',
'__str__', '__subclasshook__', '__temporarily_change_names', '__truediv__',
'_ascii_art_', '_assign_names', '_axiom_', '_axiom_init_',
'_base', '_base_ring', '_base_scheme', '_best_affine_patch',
'_cache__point_homset', '_cache_an_element', '_cache_key', '_check_satisfies_equations',
'_cmp_', '_coerce_map_from_', '_coerce_map_via', '_coercions_used',
'_compute_gens', '_convert_map_from_', '_convert_method_name', '_defining_names',
'_defining_params_', '_doccls', '_element_constructor', '_element_constructor_',
'_element_constructor_from_element_class', '_element_init_pass_parent', '_factory_data', '_first_ngens',
'_forward_image', '_fricas_', '_fricas_init_', '_gap_',
'_gap_init_', '_generalized_congmod_numbers', '_generic_coerce_map', '_generic_convert_map',
'_get_action_', '_get_local_data', '_giac_', '_giac_init_',
'_gp_', '_gp_init_', '_heegner_best_tau', '_heegner_forms_list',
'_heegner_index_in_EK', '_homset', '_init_category_', '_initial_action_list',
'_initial_coerce_list', '_initial_convert_list', '_interface_', '_interface_init_',
'_interface_is_cached_', '_internal_coerce_map_from', '_internal_convert_map_from', '_introspect_coerce',
'_is_category_initialized', '_is_valid_homomorphism_', '_isoclass', '_json',
'_kash_', '_kash_init_', '_known_points', '_latex_',
'_lmfdb_label', '_lmfdb_regulator', '_macaulay2_', '_macaulay2_init_',
'_magma_init_', '_maple_', '_maple_init_', '_mathematica_',
'_mathematica_init_', '_maxima_', '_maxima_init_', '_maxima_lib_',
'_maxima_lib_init_', '_modsym', '_modular_symbol_normalize', '_morphism',
'_multiple_of_degree_of_isogeny_to_optimal_curve', '_multiple_x_denominator', '_multiple_x_numerator', '_names',
'_pari_', '_pari_init_', '_point', '_point_homset',
'_polymake_', '_polymake_init_', '_populate_coercion_lists_', '_r_init_',
'_reduce_model', '_reduce_point', '_reduction', '_refine_category_',
'_repr_', '_repr_option', '_repr_type', '_sage_',
'_scale_by_units', '_set_conductor', '_set_cremona_label', '_set_element_constructor',
'_set_gens', '_set_modular_degree', '_set_rank', '_set_torsion_order',
'_shortest_paths', '_singular_', '_singular_init_', '_symbolic_',
'_test_an_element', '_test_cardinality', '_test_category', '_test_elements',
'_test_elements_eq_reflexive', '_test_elements_eq_symmetric', '_test_elements_eq_transitive', '_test_elements_neq',
'_test_eq', '_test_new', '_test_not_implemented_methods', '_test_pickling',
'_test_some_elements', '_tester', '_torsion_bound', '_unicode_art_',
'_unset_category', '_unset_coercions_used', '_unset_embedding', 'a1',
'a2', 'a3', 'a4', 'a6',
'a_invariants', 'abelian_variety', 'affine_patch', 'ainvs',
'algebra', 'ambient_space', 'an', 'an_element',
'analytic_rank', 'analytic_rank_upper_bound', 'anlist', 'antilogarithm',
'ap', 'aplist', 'arithmetic_genus', 'automorphisms',
'b2', 'b4', 'b6', 'b8',
'b_invariants', 'base', 'base_extend', 'base_field',
'base_morphism', 'base_ring', 'base_scheme', 'c4',
'c6', 'c_invariants', 'cartesian_product', 'categories',
'category', 'change_ring', 'change_weierstrass_model', 'cm_discriminant',
'codimension', 'coerce', 'coerce_embedding', 'coerce_map_from',
'complement', 'conductor', 'congruence_number', 'construction',
'convert_map_from', 'coordinate_ring', 'count_points', 'cremona_label',
'database_attributes', 'database_curve', 'db', 'defining_ideal',
'defining_polynomial', 'defining_polynomials', 'degree', 'descend_to',
'dimension', 'dimension_absolute', 'dimension_relative', 'discriminant',
'division_field', 'division_polynomial', 'division_polynomial_0', 'divisor',
'divisor_group', 'divisor_of_function', 'dual', 'dump',
'dumps', 'element_class', 'elliptic_exponential', 'embedding_center',
'embedding_morphism', 'eval_modular_form', 'excellent_position', 'formal',
'formal_group', 'fundamental_group', 'galois_representation', 'gen',
'gens', 'gens_certain', 'gens_dict', 'gens_dict_recursive',
'genus', 'geometric_genus', 'get_action', 'global_integral_model',
'has_base', 'has_cm', 'has_coerce_map_from', 'has_global_minimal_model',
'has_good_reduction', 'has_good_reduction_outside_S', 'has_multiplicative_reduction', 'has_nonsplit_multiplicative_reduction',
'has_rational_cm', 'has_split_multiplicative_reduction', 'hasse_invariant', 'heegner_discriminants',
'heegner_discriminants_list', 'heegner_index', 'heegner_index_bound', 'heegner_point',
'heegner_point_height', 'heegner_sha_an', 'height', 'height_function',
'height_pairing_matrix', 'hom', 'hyperelliptic_polynomials', 'identity_morphism',
'inject_variables', 'integral_model', 'integral_points', 'integral_short_weierstrass_model',
'integral_weierstrass_model', 'integral_x_coords_in_interval', 'intersection', 'intersection_multiplicity',
'intersection_points', 'intersects_at', 'irreducible_components', 'is_atomic_repr',
'is_coercion_cached', 'is_complete_intersection', 'is_conversion_cached', 'is_exact',
'is_global_integral_model', 'is_global_minimal_model', 'is_good', 'is_integral',
'is_irreducible', 'is_isogenous', 'is_isomorphic', 'is_local_integral_model',
'is_minimal', 'is_on_curve', 'is_ordinary', 'is_ordinary_singularity',
'is_p_integral', 'is_p_minimal', 'is_parent_of', 'is_projective',
'is_singular', 'is_smooth', 'is_supersingular', 'is_transverse',
'is_x_coord', 'isogenies_prime_degree', 'isogeny', 'isogeny_class',
'isogeny_codomain', 'isogeny_degree', 'isogeny_graph', 'isomorphism_to',
'isomorphisms', 'j_invariant', 'kodaira_symbol', 'kodaira_type',
'kodaira_type_old', 'kolyvagin_point', 'label', 'latex_name',
'latex_variable_names', 'lift_x', 'lll_reduce', 'lmfdb_page',
'local_coordinates', 'local_data', 'local_integral_model', 'local_minimal_model',
'lseries', 'lseries_gross_zagier', 'manin_constant', 'matrix_of_frobenius',
'modular_degree', 'modular_form', 'modular_parametrization', 'modular_symbol',
'modular_symbol_numerical', 'modular_symbol_space', 'multiplication_by_m', 'multiplication_by_m_isogeny',
'multiplicity', 'mwrank', 'mwrank_curve', 'neighborhood',
'newform', 'ngens', 'non_minimal_primes', 'nth_iterate',
'objgen', 'objgens', 'optimal_curve', 'orbit',
'pari_mincurve', 'period_lattice', 'plane_projection', 'plot',
'point', 'point_homset', 'point_search', 'point_set',
'pollack_stevens_modular_symbol', 'preimage', 'projection', 'prove_BSD',
'quartic_twist', 'rank', 'rank_bound', 'rank_bounds',
'rational_parameterization', 'rational_points', 'real_components', 'reduce',
'reduction', 'register_action', 'register_coercion', 'register_conversion',
'register_embedding', 'regulator', 'regulator_of_points', 'rename',
'reset_name', 'root_number', 'rst_transform', 'satisfies_heegner_hypothesis',
'saturation', 'save', 'scale_curve', 'selmer_rank',
'sextic_twist', 'sha', 'short_weierstrass_model', 'silverman_height_bound',
'simon_two_descent', 'singular_points', 'singular_subscheme', 'some_elements',
'specialization', 'structure_morphism', 'supersingular_primes', 'tamagawa_exponent',
'tamagawa_number', 'tamagawa_number_old', 'tamagawa_numbers', 'tamagawa_product',
'tamagawa_product_bsd', 'tangents', 'tate_curve', 'three_selmer_rank',
'torsion_order', 'torsion_points', 'torsion_polynomial', 'torsion_subgroup',
'two_descent', 'two_descent_simon', 'two_division_polynomial', 'two_torsion_rank',
'union', 'variable_name', 'variable_names', 'weierstrass_p',
'weil_restriction', 'zeta_series']


This gives quick access to some data that is not stored within the LMFDB, but which is relatively quickly computable. For example,

In [5]:
E.defining_ideal()

Out[5]:
Ideal (-x^3 + x*y*z + y^2*z + 887688*x*z^2 + 321987008*z^3) of Multivariate Polynomial Ring in x, y, z over Rational Field
But one of the great powers is that there are some things which are computed and stored in the LMFDB, and not in sage. We can now immediately give many examples of curves with a prescribed conductor and torsion order:

In [6]:
Es = lmfdb_ecurve.search(conductor=11050, torsion_order=2)
print("There are {} curves returned.".format(len(Es)))
E = Es[0]
print(E)

There are 10 curves returned.
Elliptic Curve defined by y^2 + x*y + y = x^3 - 3476*x - 79152 over Rational Field

And for these curves, the lmfdb contains data on their ranks, generators, regulators, and so on.

In [7]:
print(E.gens())
print(E.rank())
print(E.regulator())

[(-34 : 17 : 1)]
1
1.63852610029

In [8]:
res = []
%time for E in Es: res.append(E.gens()); res.append(E.rank()); res.append(E.regulator())

CPU times: user 971 ms, sys: 6.82 ms, total: 978 ms
Wall time: 978 ms

That’s pretty fast, and this is because all of this was pulled from the LMFDB when the curves were returned by the search() function.
In this case, elliptic curves over the rationals are only an okay example, as they’re really well studied and sage can compute much of the data very quickly. On the other hand, through the LMFDB there are millions of examples and corresponding data at one’s fingertips.

### This is where we’re really looking for input.

Think of what you might want to have easy access to through an interface from sage to the LMFDB, and tell us. We’re actively seeking comments, suggestions, and requests. Elliptic curves over the rationals are a prototype, and the LMFDB has lots of (much more challenging to compute) data. There is data on the LMFDB that is simply not accessible from within sage.
email: david@lowryduda.com, or post an issue on https://github.com/LMFDB/lmfdb/issues

## Now let’s describe what’s going on under the hood a little bit

There is an API for the LMFDB at http://beta.lmfdb.org/api/. This API is a bit green, and we will change certain aspects of it to behave better in the future. A call to the API looks like

http://beta.lmfdb.org/api/elliptic_curves/curves/?rank=i1&conductor=i11050
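For illustration, here is a sketch of making that call by hand from Python. The `_format=json` parameter and the `data` key of the response are assumptions about the API’s current behavior rather than guarantees.

```python
# A sketch of querying the LMFDB API directly, without LMFDB2sage.
# CAUTION: `_format=json` and the 'data' key are assumptions about the
# (still-green) API, and may change.
import json
from urllib.request import urlopen

url = ("http://beta.lmfdb.org/api/elliptic_curves/curves/"
       "?rank=i1&conductor=i11050&_format=json")
with urlopen(url) as response:
    payload = json.loads(response.read().decode("utf-8"))

# Each record contains raw strings and ints; turning these back into
# sage objects is exactly the work that LMFDB2sage automates.
print(len(payload.get("data", [])))
```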



The result is a large mess of data, which can be exported as json and parsed.
But that’s hard, and the resulting data are not sage objects. They are just strings or ints, and these require time and thought to parse.
So we created a module in sage that writes the API call and parses the output back into sage objects. The 22 curves given by the above API call are the same 22 curves returned by this call:

In [9]:
Es = lmfdb_ecurve.search(rank=1, conductor=11050, max_items=25)
print(len(Es))
E = Es[0]

22

The total functionality of this search function is visible from its current documentation.

In [10]:
# Execute this cell for the documentation
print(lmfdb_ecurve.search.__doc__)

    Search the LMFDB for an elliptic curve.

Note that all inputs are optional, but at least one input is necessary.

INPUT:

-  label=l -- a string l representing a label in the LMFDB.

-  degree=d -- an int d giving the minimum degree of a
parameterization of the modular curve

-  conductor=c -- an int c giving the conductor of the curve

-  min_conductor=mc -- an int mc giving a lower bound on the
conductor for desired curves

-  max_conductor=mc -- an int mc giving an upper bound on the
conductor for desired curves

-  torsion_order=t -- an int t giving the order of the torsion
subgroup of the curve

-  rank=r -- an int r giving the rank of the curve

-  regulator=f -- a float f giving the regulator of the curve

-  max_items=m -- an int m (default: 10, max: 100) indicating the
maximum number of results to return

-  base_item=b -- an int b (default: 0) specifying where to start
returning values from. The search will begin by returning the bth
curve. Combined with max_items to return data in chunks.

-  sort=s -- a string s specifying what database field to sort the results by

EXAMPLES::

sage: Es = search(conductor=11050, rank=2)
[Elliptic Curve defined by y^2 + x*y = x^3 - x^2 - 442*x + 1716 over Rational Field, Elliptic Curve defined by y^2 + x*y = x^3 - x^2 + 1558*x + 11716 over Rational Field]
sage: E = Es[0]
sage: E.conductor()
11050


In [11]:
# So, for instance, one could perform the following search, finding a unique elliptic curve
lmfdb_ecurve.search(rank=2, torsion_order=3, degree=4608)

Out[11]:
[Elliptic Curve defined by y^2 + y = x^3 + x^2 - 5155*x + 140756 over Rational Field]

### What if there are no curves?

If there are no curves satisfying the search criteria, then a message is displayed and that’s that. These searches may take a couple of seconds to complete.
For example, no elliptic curve in the database has rank 5.

In [12]:
lmfdb_ecurve.search(rank=5)

No fields were found satisfying input criteria.


### How does one step through the data?

Right now, at most 100 curves are returned in a single API call. This is the limit even from directly querying the API. But one can pass in the argument base_item (the name will probably change… to skip? or perhaps to offset?) to start returning at the base_itemth element.

In [13]:
from pprint import pprint
pprint(lmfdb_ecurve.search(rank=1, max_items=3))              # The last item in this list
print('')
pprint(lmfdb_ecurve.search(rank=1, max_items=3, base_item=2)) # should be the first item in this list

[Elliptic Curve defined by y^2 + x*y = x^3 - 887688*x - 321987008 over Rational Field,
Elliptic Curve defined by y^2 + x*y + y = x^3 - x^2 + 10795*x - 97828 over Rational Field,
Elliptic Curve defined by y^2 + x*y + y = x^3 - x^2 - 2294115305*x - 42292668425178 over Rational Field]

[Elliptic Curve defined by y^2 + x*y + y = x^3 - x^2 - 2294115305*x - 42292668425178 over Rational Field,
Elliptic Curve defined by y^2 + x*y + y = x^3 - x^2 - 3170*x - 49318 over Rational Field,
Elliptic Curve defined by y^2 + y = x^3 + 1050*x - 26469 over Rational Field]
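Until the name settles, one can already page through a larger result set by hand. Here is a sketch using the prototype’s own names; it assumes that search() returns a short (or empty) list once the results are exhausted, which is not spelled out in the documentation above.

```python
# Collect all matching curves in chunks of 100 (the current per-call limit),
# assuming search() returns fewer items once the results run out.
def search_all(**kwargs):
    curves = []
    offset = 0
    while True:
        batch = lmfdb_ecurve.search(max_items=100, base_item=offset, **kwargs)
        if not batch:
            break
        curves.extend(batch)
        if len(batch) < 100:
            break
        offset += 100
    return curves
```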

Included in the documentation is also a bit of hopefulness. Right now, the LMFDB API does not actually accept max_conductor or min_conductor (or arguments of that type). But it will sometime. (This introduces a few extra difficulties on the server side, and so it will take some extra time to decide how to do this).

In [14]:
lmfdb_ecurve.search(rank=1, min_conductor=500, max_conductor=10000)  # Not implemented

---------------------------------------------------------------------------
NotImplementedError                       Traceback (most recent call last)
<ipython-input-14-3d98f2cf7a13> in <module>()
----> 1 lmfdb_ecurve.search(rank=Integer(1), min_conductor=Integer(500), max_conductor=Integer(10000))  # Not implemented

/home/djlowry/Dropbox/EllipticCurve_LMFDB/LMFDB2sage/elliptic_curves.py in search(**kwargs)
76             kwargs[item]
77             raise NotImplementedError("This would be a great thing to have, " +
---> 78                 "but the LMFDB api does not yet provide this functionality.")
79         except KeyError:
80             pass

NotImplementedError: This would be a great thing to have, but the LMFDB api does not yet provide this functionality.
Our EllipticCurve_rational_field_lmfdb class constructs a sage elliptic curve from the json and overrides (some of) the default methods in sage if there is quicker data available on the LMFDB. In principle, this new object is just a sage object with some slightly different methods.
Generically, documentation and introspection on objects from this class should work. Much of sage’s documentation carries through directly.
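Schematically, the override pattern looks something like the following. This is a simplified, hypothetical sketch and not the actual LMFDB2sage source; the constructor signature and the stored-data dictionary are assumptions made for illustration.

```python
# Hypothetical sketch of the subclass-and-override pattern described above.
from sage.schemes.elliptic_curves.ell_rational_field import (
    EllipticCurve_rational_field,
)

class EllipticCurve_rational_field_lmfdb(EllipticCurve_rational_field):
    def __init__(self, ainvs, lmfdb_data):
        self._lmfdb_data = lmfdb_data  # parsed json from the API
        EllipticCurve_rational_field.__init__(self, ainvs)

    def regulator(self, *args, **kwargs):
        # Prefer the value stored in the LMFDB; otherwise fall back to
        # sage's own (potentially slow) computation.
        if 'regulator' in self._lmfdb_data:
            return self._lmfdb_data['regulator']
        return EllipticCurve_rational_field.regulator(self, *args, **kwargs)
```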

In [15]:
print(E.gens.__doc__)

        Return generators for the Mordell-Weil group E(Q) *modulo*
torsion.

.. warning::

If the program fails to give a provably correct result, it
prints a warning message, but does not raise an
exception. Use :meth:`~gens_certain` to find out if this
warning message was printed.

INPUT:

- proof -- bool or None (default None), see
proof.elliptic_curve or sage.structure.proof

- verbose - (default: None), if specified changes the
verbosity of mwrank computations

- rank1_search - (default: 10), if the curve has analytic
rank 1, try to find a generator by a direct search up to
this logarithmic height.  If this fails, the usual mwrank
procedure is called.

- algorithm -- one of the following:

- 'mwrank_shell' (default) -- call mwrank shell command

- 'mwrank_lib' -- call mwrank C library

- only_use_mwrank -- bool (default True) if False, first
attempts to use more naive, natively implemented methods

- use_database -- bool (default True) if True, attempts to
find curve and gens in the (optional) database

- descent_second_limit -- (default: 12) used in 2-descent

- sat_bound -- (default: 1000) bound on primes used in
saturation.  If the computed bound on the index of the
points found by two-descent in the Mordell-Weil group is
greater than this, a warning message will be displayed.

OUTPUT:

- generators - list of generators for the Mordell-Weil
group modulo torsion

IMPLEMENTATION: Uses Cremona's mwrank C library.

EXAMPLES::

sage: E = EllipticCurve('389a')
sage: E.gens()                 # random output
[(-1 : 1 : 1), (0 : 0 : 1)]

A non-integral example::

sage: E = EllipticCurve([-3/8,-2/3])
sage: E.gens() # random (up to sign)
[(10/9 : 29/54 : 1)]

A non-minimal example::

sage: E = EllipticCurve('389a1')
sage: E1 = E.change_weierstrass_model([1/20,0,0,0]); E1
Elliptic Curve defined by y^2 + 8000*y = x^3 + 400*x^2 - 320000*x over Rational Field
sage: E1.gens() # random (if database not used)
[(-400 : 8000 : 1), (0 : -8000 : 1)]


Modified methods should have a note indicating that the data comes from the LMFDB, and then give sage’s documentation. This is not yet implemented. (So if you examine the current version, you can see some incomplete docstrings like regulator().)

In [16]:
print(E.regulator.__doc__)

        Return the regulator of the curve. This is taken from the lmfdb if available.

NOTE:
In later implementations, this docstring will probably include the
docstring from sage's regular implementation. But that's not
currently the case.



## This concludes our demo of an interface between sage and the LMFDB.

Thank you, and if you have any questions, comments, or concerns, please find me/email me/raise an issue on LMFDB’s github.

## “Second Moments in the Generalized Gauss Circle Problem” (with T. Hulse, C. Ieong Kuan, and A. Walker)

This is joint work with Thomas Hulse, Chan Ieong Kuan, and Alexander Walker. This is a natural successor to our previous work (see their announcements: one, two, three) concerning bounds and asymptotics for sums of coefficients of modular forms.

We now have a variety of results concerning the behavior of the partial sums

$$S_f(X) = \sum_{n \leq X} a(n)$$

where $f(z) = \sum_{n \geq 1} a(n) e(nz)$ is a GL(2) cuspform. The primary focus of our previous work was to understand the Dirichlet series

$$D(s, S_f \times S_f) = \sum_{n \geq 1} \frac{S_f(n)^2}{n^s}$$

completely, to give its meromorphic continuation to the plane (the major topic of the first paper in the series), and to perform classical complex analysis on this object in order to describe the behavior of $S_f(n)$ and $S_f(n)^2$ (begun in the first paper, and the major topic of the second). One motivation for studying this type of problem is that bounds for $S_f(n)$ are analogous to understanding the error term in the lattice point counting problem for circles.

That is, let $S_2(R)$ denote the number of lattice points in a circle of radius $\sqrt{R}$ centered at the origin. Then we expect that $S_2(R)$ is approximately the area of the circle, plus or minus some error term. We write this as

$$S_2(R) = \pi R + P_2(R),$$

where $P_2(R)$ is the error term. We refer to $P_2(R)$ as the “lattice point discrepancy” — it describes the discrepancy between the number of lattice points in the circle and the area of the circle. Determining the size of $P_2(R)$ is a very famous problem called the Gauss circle problem, and it has been studied for over 200 years. We believe that $P_2(R) = O(R^{1/4 + \epsilon})$, but that is not known to be true.
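As a purely illustrative check (this computation is not from the paper), one can count lattice points directly and watch the discrepancy hover at roughly the conjectured $R^{1/4}$ scale:

```python
# Count lattice points in a circle of radius sqrt(R), compare with area pi*R.
import math

def S2(R):
    # number of integer points (x, y) with x^2 + y^2 <= R
    r = math.isqrt(R)
    return sum(2 * math.isqrt(R - x * x) + 1 for x in range(-r, r + 1))

for R in (10**4, 10**6, 10**8):
    P2 = S2(R) - math.pi * R
    print(R, P2, P2 / R**0.25)  # the last column stays of moderate size
```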

The Gauss circle problem can be cast in the language of modular forms. Let $\theta(z)$ denote the standard Jacobi theta series,

$$\theta(z) = \sum_{n \in \mathbb{Z}} e^{2\pi i n^2 z}.$$

Then

$$\theta^2(z) = 1 + \sum_{n \geq 1} r_2(n) e^{2\pi i n z},$$

where $r_2(n)$ denotes the number of representations of $n$ as a sum of $2$ (positive or negative) squares. The function $\theta^2(z)$ is a modular form of weight $1$ on $\Gamma_0(4)$, but it is not a cuspform. However, the sum

$$\sum_{n \leq R} r_2(n) = S_2(R),$$

and so the partial sums of the coefficients of $\theta^2(z)$ indicate the number of lattice points in the circle of radius $\sqrt R$. Thus $\theta^2(z)$ gives access to the Gauss circle problem.

More generally, one can consider the number of lattice points in a $k$-dimensional sphere of radius $\sqrt R$ centered at the origin, which should approximately be the volume of that sphere,

$$S_k(R) = \mathrm{Vol}(B(\sqrt R)) + P_k(R) = \sum_{n \leq R} r_k(n),$$

giving a $k$-dimensional lattice point discrepancy. For large dimension $k$, one should expect that the circle problem is sufficient to give good bounds and understanding of the size and error of $S_k(R)$. For $k \geq 5$, the true order of growth for $P_k(R)$ is known (up to constants).

Therefore it happens that the small (meaning 2 or 3) dimensional cases are both the most interesting, given our predilection for 2 and 3 dimensional geometry, and the most enigmatic. For a variety of reasons, the three dimensional case is very challenging to understand, and is perhaps even more enigmatic than the two dimensional case.

Strong evidence for the conjectured size of the lattice point discrepancy  comes in the form of mean square estimates. By looking at the square, one doesn’t need to worry about oscillation from positive to negative values. And by averaging over many radii, one hopes to smooth out some of the individual bumps. These mean square estimates take the form

\begin{align} \int_0^X P_2(t)^2 dt &= C X^{3/2} + O(X \log^2 X) \\ \int_0^X P_3(t)^2 dt &= C' X^2 \log X + O(X^2 \sqrt{\log X}). \end{align}

These indicate that the average size of $P_2(R)$ is $R^{1/4}$, and that the average size of $P_3(R)$ is $R^{1/2}$. In the two dimensional case, notice that the error term in the mean square asymptotic has pretty significant separation: it has essentially a $\sqrt X$ power-savings over the main term. But in the three dimensional case, there is no power separation. Even with significant averaging, we are only just capable of distinguishing a main term at all.

It is also interesting, but for more complicated reasons, that the main term in the three dimensional case has a log term within it. This is unique to the three dimensional case. But that is a description for another time.

In a paper that we recently posted to the arxiv, we show that the Dirichlet series

$$\sum_{n \geq 1} \frac{S_k(n)^2}{n^s}$$

and

$$\sum_{n \geq 1} \frac{P_k(n)^2}{n^s}$$

for $k \geq 3$ have understandable meromorphic continuation to the plane. Of particular interest is the $k = 3$ case, of course. We then investigate smoothed and unsmoothed mean square results. In particular, we prove the result stated below.

Theorem

\begin{align} \int_0^\infty P_k(t)^2 e^{-t/X} dt &= C_3 X^2 \log X + C_4 X^{5/2} \\ &\quad + C_k X^{k-1} + O(X^{k-2}). \end{align}

In this statement, the term with $C_3$ only appears in dimension $3$, and the term with $C_4$ only appears in dimension $4$. This should really be thought of as saying that we understand the Laplace transform of the square of the lattice point discrepancy as well as can be desired.

We are also able to improve the sharp second mean in the dimension 3 case, showing in particular the following.

Theorem

There exists $\lambda > 0$ such that

$$\int_0^X P_3(t)^2 dt = C X^2 \log X + D X^2 + O(X^{2 - \lambda}).$$

We do not actually compute what we might take $\lambda$ to be, but we believe (informally) that $\lambda$ can be taken as $1/5$.

The major themes behind these new results are already present in the first paper in the series. The new ingredient involves handling the behavior of non-cuspforms at the cusps on the analytic side, and handling the apparent main terms (in this case, the volume of the ball) on the combinatorial side.

There is an additional difficulty that arises in the dimension 2 case which makes it distinct. But soon I will describe a different forthcoming work in that case.

## How fat would we have to get to balance carbon emissions?

Let’s consider a ridiculous solution to a real problem. We’re unearthing tons of carbon, burning it, and releasing it into the atmosphere.

Disclaimer: There are several greenhouse gasses, and lots of other things that we’re throwing wantonly into the environment. Considering them makes things incredibly complicated incredibly quickly, so I blithely ignore them in this note.

Such rapid changes have side effects, many of which lead to bad things. That’s why nearly 150 countries ratified the Paris Agreement on Climate Change. Even if we assume that all these countries will accomplish what they agreed to (which might be challenging for the US), most nations and advocacy groups are focusing on increasing efficiency and reducing emissions. These are good goals! But what about all the carbon that is already in the atmosphere?

You know what else is a problem? Obesity! How are we to solve all of these problems?

Looking at this (very unscientific) graph, we see that the red isn’t keeping up! Maybe we aren’t using the valuable resource of our own bodies enough! Fat has carbon in it — often over 20% by weight. What if we took advantage of our propensity to become propense? How fat would we need to get to balance last year’s carbon emissions?

That’s what we investigate here.


## Slides from a Dissertation Defense

I just defended my dissertation. Thank you to Jeff, Jill, and Dinakar for being on my Defense Committee. In this talk, I discuss some of the ideas and follow-ups on my thesis. I’ll also take this moment to include the dedication in my thesis.

Here are the slides from my defense.

After the defense, I gave Jeff and Jill a poster of our family tree. I made this using data from Math Genealogy, which has so much data.


## Smooth Sums to Sharp Sums 1

In this note, I describe a combination of two smoothed integral transforms that has been very useful in my collaborations with Alex Walker, Chan Ieong Kuan, and Tom Hulse. I suspect that this particular technique was once very well-known. But we were not familiar with it, and so I describe it here.

In application, this is somewhat more complicated. But to show the technique, I apply it to reprove some classic bounds on $\text{GL}(2)$ $L$-functions.

This note is also available as a pdf. This was first written as a LaTeX document, and then modified to fit into wordpress through latex2jax.

## Introduction

Consider a Dirichlet series
$$D(s) = \sum_{n \geq 1} \frac{a(n)}{n^s}. \notag$$
Suppose that this Dirichlet series converges absolutely for $\Re s > 1$, has meromorphic continuation to the complex plane, and satisfies a functional equation of shape
$$\Lambda(s) := G(s) D(s) = \epsilon \Lambda(1-s), \notag$$
where $\lvert \epsilon \rvert = 1$ and $G(s)$ is a product of Gamma factors.

Dirichlet series are often used as a tool to study number theoretic functions with multiplicative properties. By studying the analytic properties of the Dirichlet series, one hopes to extract information about the coefficients $a(n)$. Some of the most interesting information encoded in a Dirichlet series comes from its partial sums
$$S(n) = \sum_{m \leq n} a(m). \notag$$
For example, the Gauss Circle and Dirichlet Divisor problems can both be stated as problems concerning sums of coefficients of Dirichlet series.

One can try to understand the partial sum directly by understanding the integral transform
$$S(X) = \frac{1}{2\pi i} \int_{(2)} D(s) \frac{X^s}{s} ds, \notag$$
a Perron integral. However, it is often challenging to understand this integral, as delicate properties concerning the convergence of the integral often come into play.

Instead, one often tries to understand a smoothed sum of the form
$$\sum_{m \geq 1} a(m) v(m) \notag$$
where $v(m)$ is a smooth function that vanishes or decays extremely quickly for values of $m$ larger than $n$. A large class of smoothed sums can be obtained by starting with a very nicely behaved weight function $v$ and taking its Mellin transform
$$V(s) = \int_0^\infty v(x) x^s \frac{dx}{x}. \notag$$
Then Mellin inversion gives that
$$\sum_{m \geq 1} a(m) v(m/X) = \frac{1}{2\pi i} \int_{(2)} D(s) X^s V(s) ds, \notag$$
as long as $v$ and $V$ are nice enough functions.
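As a sanity check on this setup, the classical pair $v(x) = e^{-x}$, $V(s) = \Gamma(s)$ can be verified symbolically; here is a quick check using sympy (any computer algebra system would do):

```python
# Verify the classical Mellin pair: v(x) = exp(-x) has transform Gamma(s).
import sympy as sp

x = sp.symbols('x', positive=True)
s = sp.symbols('s', positive=True)
V = sp.integrate(sp.exp(-x) * x**(s - 1), (x, 0, sp.oo))
print(sp.simplify(V))  # gamma(s)
```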

In this note, we will use two smoothing integral transforms and corresponding smoothed sums. We will use one smooth function $v_1$ (which depends on another parameter $Y$) with the property that
$$\sum_{m \geq 1} a(m) v_1(m/X) \approx \sum_{\lvert m - X \rvert < X/Y} a(m). \notag$$
And we will use another smooth function $v_2$ (which also depends on $Y$) with the property that
$$\sum_{m \geq 1} a(m) v_2(m/X) = \sum_{m \leq X} a(m) + \sum_{X < m < X + X/Y} a(m) v_2(m/X). \notag$$
Further, as long as the coefficients $a(m)$ are nonnegative, it will be true that
$$\sum_{X < m < X + X/Y} a(m) v_2(m/X) \ll \sum_{\lvert m - X \rvert < X/Y} a(m), \notag$$
which is exactly what $\sum a(m) v_1(m/X)$ estimates. Therefore
$$\label{eq:overall_plan} \sum_{m \leq X} a(m) = \sum_{m \geq 1} a(m) v_2(m/X) + O\Big(\sum_{m \geq 1} a(m) v_1(m/X) \Big).$$

Hence sufficient understanding of $\sum a(m) v_1(m/X)$ and $\sum a(m) v_2(m/X)$ allows one to understand the sharp sum
$$\sum_{m \leq X} a(m). \notag$$
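To make the idea tangible, here is a toy numeric illustration with $a(m) = 1$, using $v(t) = e^{-t^k}$ with growing $k$ as a stand-in for a cutoff concentrating at $t = 1$. This is not the specific $v_1$, $v_2$ of this note, just the general phenomenon.

```python
# A smooth cutoff v(t) = exp(-t^k) approaches the sharp cutoff 1_{t <= 1}
# as k grows, so the smoothed sum approaches the sharp sum floor(X).
import math

X = 1000.0
sharp = math.floor(X)  # the sharp sum of a(m) = 1 over m <= X
for k in (2, 10, 50):
    smoothed = sum(math.exp(-((m / X) ** k)) for m in range(1, int(5 * X)))
    print(k, sharp, round(smoothed, 2))
```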


## 2017

While idly thinking on the way back from the office, and then more after dinner with my academic little brother Alex Walker and my future academic little sister-in-law Sara Schulz, we began to think about $2017$, the number.

## General Patterns

• 2017 is a prime number. 2017 is the 306th prime. The 2017th prime is 17539.
• As 2011 is also prime (and $2017 - 2011 = 6$), we call 2017 a sexy prime.
• 2017 can be written as a sum of two squares,
$$2017 = 9^2 + 44^2,$$
and this is the only way to write it as a sum of two squares.
• Similarly, 2017 appears as the hypotenuse of a primitive Pythagorean triangle,
$$2017^2 = 792^2 + 1855^2,$$
and this is the only such right triangle.
• 2017 is uniquely identified as the first odd prime that leaves a remainder of $2$ when divided by $5$, $13$, and $31$. That is,
$$2017 \equiv 2 \pmod {5, 13, 31}.$$
• In different bases,
\begin{align} (2017)_{10} &= (2681)_9 = (3741)_8 = (5611)_7 = (13201)_6 \notag \\ &= (31032)_5 = (133201)_4 = (2202201)_3 = (11111100001)_2 \notag \end{align}
The base $2$ and base $3$ expressions are sort of nice, including repetition.
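All of these facts are quick to verify directly, for instance:

```python
# Quick checks of a few of the facts above.
assert 2017 == 9**2 + 44**2
assert 2017**2 == 792**2 + 1855**2
assert all(2017 % p == 2 for p in (5, 13, 31))
assert bin(2017) == '0b11111100001'
print("all checks pass")
```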

## Revealing zero in fully homomorphic encryption is a Bad Thing

When I was first learning number theory, cryptography seemed really fun and really practical. I thought elementary number theory was elegant, and that cryptography was an elegant application. As I continued to learn more about mathematics, and in particular modern mathematics, I began to realize that decades of instruction and improvement (and perhaps of more useful points of view) have simplified the presentation of elementary number theory, and that modern mathematics is less elegant in presentation.

Similarly, as I learned more about cryptography, I learned that though the basic ideas are very simple, their application is often very inelegant. For example, the basis of RSA follows immediately from Euler’s Theorem as learned while studying elementary number theory, or alternately from Lagrange’s Theorem as learned while studying group theory or abstract algebra. And further, these are very early topics in these two areas of study!

But a naive implementation of RSA is doomed (For that matter, many professional implementations have their flaws too). Every now and then, a very clever expert comes up with a new attack on popular cryptosystems, generating new guidelines and recommendations. Some guidelines make intuitive sense [e.g. don’t use too small of an exponent for either the public or secret keys in RSA], but many are more complicated or designed to prevent more sophisticated attacks [especially side-channel attacks].

In the summer of 2013, I participated in the ICERM IdeaLab working towards more efficient homomorphic encryption. We were playing with existing homomorphic encryption schemes and trying to come up with new methods. One guideline that we followed is that an attacker should not be able to recognize an encryption of zero. This seems like a reasonable guideline, but I didn’t really understand why, until I was chatting with others at the 2017 Joint Mathematics Meetings in Atlanta.

It turns out that revealing zero isn’t just against generally sound advice. Revealing zero is a capital B capital T Bad Thing.

## Basic Setup

For the rest of this note, I’ll try to identify some of this reasoning.

In a typical cryptosystem, the basic setup is as follows. Andrew has a message that he wants to send to Beatrice. So Andrew converts the message into a list of numbers $M$, and uses some sort of encryption function $E(\cdot)$ to encrypt $M$, forming a ciphertext $C$. We can represent this as $C = E(M)$. Andrew transmits $C$ to Beatrice. If an eavesdropper Eve happens to intercept $C$, it should be very hard for Eve to recover any information about the original message from $C$. But when Beatrice receives $C$, she uses a corresponding decryption function $D(\cdot)$ to decrypt $C$, computing $M = D(C)$.

Often, the encryption and decryption techniques are based on number theoretic or combinatorial primitives. Some of these have extra structure (or at least they do with a basic implementation). For instance, the RSA cryptosystem involves a public exponent $e$, a public modulus $N$, and a private exponent $d$. Andrew encrypts the message $M$ by computing $C = E(M) \equiv M^e \bmod N$. Beatrice decrypts the message by computing $M = C^d \equiv M^{ed} \bmod N$.

Notice that in the RSA system, given two messages $M_1, M_2$ and corresponding ciphertexts $C_1, C_2$, we have that

$$E(M_1 M_2) \equiv (M_1 M_2)^e \equiv M_1^e M_2^e \equiv E(M_1) E(M_2) \pmod N. \notag$$

The encryption function $E(\cdot)$ is a group homomorphism. This is an example of extra structure.
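This multiplicativity is easy to see in toy form. The parameters below are illustrative only and wildly insecure.

```python
# Textbook RSA is multiplicatively homomorphic: E(M1*M2) = E(M1)*E(M2) mod N.
p, q = 61, 53
N = p * q                 # 3233
e, d = 17, 413            # e*d = 7021 = 9*780 + 1, where lcm(60, 52) = 780

def E(M):
    return pow(M, e, N)

def D(C):
    return pow(C, d, N)

M1, M2 = 42, 55
assert E(M1 * M2 % N) == E(M1) * E(M2) % N  # the homomorphic property
assert D(E(M1)) == M1 and D(E(M2)) == M2    # decryption inverts encryption
```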

A fully homomorphic cryptosystem has an encryption function $E(\cdot)$ satisfying both $E(M_1 + M_2) = E(M_1) + E(M_2)$ and $E(M_1M_2) = E(M_1)E(M_2)$ (or more generally an analogous pair of operations). That is, $E(\cdot)$ is a ring homomorphism.

This extra structure allows for (a lot of) extra utility. A fully homomorphic $E(\cdot)$ would allow one to perform meaningful operations on encrypted data, even though you can’t read the data itself. For example, a clinic could store (encrypted) medical information on an external server. A doctor or nurse could pull out a cellphone or tablet with relatively little computing power or memory and securely query the medical data. Fully homomorphic encryption would allow one to securely outsource data infrastructure.

A different usage model suggests that we use a different mental model. So suppose Alice has sensitive data that she wants to store for use on EveCorp’s servers. Alice knows an encryption method $E(\cdot)$ and a decryption method $D(\cdot)$, while EveCorp only ever has mountains of ciphertexts, and cannot read the data [even though they have it].

## Why revealing zero is a Bad Thing

Let us now consider some basic cryptographic attacks. We should assume that EveCorp has access to a long list of plaintext messages $M_i$ and their corresponding ciphertexts $C_i$. Not everything, but perhaps from small leaks or other avenues. Among the messages $M_i$ it is very likely that there are two messages $M_1, M_2$ which are relatively prime. Then an application of the Euclidean Algorithm gives a linear combination of $M_1$ and $M_2$ such that

$$M_1 x + M_2 y = 1 \notag$$

for some integers $x,y$. Even though EveCorp doesn’t know the encryption method $E(\cdot)$, since we are assuming that they have access to the corresponding ciphertexts $C_1$ and $C_2$, EveCorp has access to an encryption of $1$ using the ring homomorphism properties:
$$\label{eq:encryption_of_one} E(1) = E(M_1 x + M_2 y) = x E(M_1) + y E(M_2) = x C_1 + y C_2.$$
By multiplying $E(1)$ by $m$, EveCorp has access to a plaintext and encryption of $m$ for any message $m$.
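The number-theoretic step here is just the extended Euclidean algorithm; everything cryptographic is carried by the homomorphism. A minimal sketch (the messages are made-up stand-ins):

```python
# Find x, y with M1*x + M2*y = 1, possible whenever gcd(M1, M2) = 1. In an
# additively homomorphic scheme, x*C1 + y*C2 is then an encryption of 1.
def extended_gcd(a, b):
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

M1, M2 = 1234, 4321  # any coprime pair of leaked plaintexts
g, x, y = extended_gcd(M1, M2)
assert g == 1 and M1 * x + M2 * y == 1
# With the corresponding ciphertexts C1, C2, EveCorp would form x*C1 + y*C2.
```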

Now suppose that EveCorp can always recognize an encryption of $0$. Then EveCorp can mount a variety of attacks exposing information about the data it holds.

For example, EveCorp can test whether a particular message $m$ is contained in the encrypted dataset. First, EveCorp generates a ciphertext $C_m$ for $m$ by multiplying $E(1)$ by $m$, as in \eqref{eq:encryption_of_one}. Then for each ciphertext $C$ in the dataset, EveCorp computes $C - C_m$. If $m$ is contained in the dataset, then $C - C_m$ will be an encryption of $0$ for the $C$ corresponding to $m$. EveCorp recognizes this, and now knows that $m$ is in the data. To be more specific, perhaps a list of encrypted names of medical patients appears in the data, and EveCorp wants to see if JohnDoe is in that list. If they can recognize encryptions of $0$, then EveCorp can access this information.

And thus it is unacceptable for external entities to be able to consistently recognize encryptions of $0$.

Up to now, I’ve been a bit loose by saying “an encryption of zero” or “an encryption of $m$”. The reason for this is that to protect against recognition of encryptions of $0$, some entropy is added to the encryption function $E(\cdot)$, making it multivalued. So if we have a message $M$ and we encrypt it once to get $E(M)$, and we encrypt $M$ later and get $E'(M)$, it is often not true that $E(M) = E'(M)$, even though they are both encryptions of the same message. But these systems are designed so that it is true that $D(E(M)) = D(E'(M)) = M$, so that the entropy doesn’t matter.

This is a separate matter, and something that I will probably return to later.

## Math 100 Fall 2016: Concluding Remarks

It is that time of year. Classes are over. Campus is emptying. Soon it will be mostly emptiness, snow, and grad students (who of course never leave).

I like to take some time to reflect on the course. How did it go? What went well and what didn’t work out? And now that all the numbers are in, we can examine course trends and data.

Since numbers are direct and graphs are pretty, let’s look at the numbers first.

## Math 100 grades at a glance

Let’s get an understanding of the distribution of grades in the course, all at once.

These are classic box plots. The center line of each box denotes the median. The left and right ends of the box indicate the 1st and 3rd quartiles. As a quick reminder, the 1st quartile is the point where 25% of students received that grade or lower. The 3rd quartile is the point where 75% of students received that grade or lower. So within each box lies 50% of the course.

Each box has two arms (or “whiskers”) extending out, indicating the rest of the students’ grades. Points that are plotted separately are statistical outliers, meaning they lie more than $1.5 \cdot (Q_3 - Q_1)$ above $Q_3$ or below $Q_1$ (where $Q_1$ denotes the first quartile and $Q_3$ the third quartile).

Within each blob, you’ll notice an embedded box-and-whisker graph. The white dots indicate the medians, and the thicker black parts indicate the central 50% of the grades. The width of the colored blobs roughly indicates how many students scored within that region. [As an aside, each blob actually has the same area, so the area is a meaningful data point.]
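For the curious, plots like these are straightforward to produce. Here is a hedged sketch with synthetic data (not the actual course grades) using matplotlib:

```python
# Violin plots (the colored blobs) with embedded box-and-whisker plots,
# drawn on synthetic grade data.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
fake_grades = [rng.normal(mu, 8, 150).clip(0, 100) for mu in (72, 78, 81)]

fig, ax = plt.subplots()
ax.violinplot(fake_grades, showmedians=True)
ax.boxplot(fake_grades, widths=0.1)
ax.set_xticks([1, 2, 3])
ax.set_xticklabels(["Exam 1", "Exam 2", "Final"])
plt.show()
```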


## Computing $\pi$

This note was originally written in the context of my fall Math 100 class at Brown University. It is also available as a pdf note.

While investigating Taylor series, we proved that
$$\label{eq:base} \frac{\pi}{4} = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \frac{1}{9} - \cdots$$

Let’s remind ourselves how. Begin with the geometric series

$$\frac{1}{1 + x^2} = 1 - x^2 + x^4 - x^6 + x^8 - \cdots = \sum_{n = 0}^\infty (-1)^n x^{2n}. \notag$$

(We showed that this has interval of convergence $\lvert x \rvert < 1$). Integrating this geometric series yields

$$\int_0^x \frac{1}{1 + t^2} dt = x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \cdots = \sum_{n = 0}^\infty (-1)^n \frac{x^{2n+1}}{2n+1}. \notag$$

Note that this has interval of convergence $-1 < x \leq 1$.

We also recognize this integral as

$$\int_0^x \frac{1}{1 + t^2} dt = \text{arctan}(x), \notag$$

one of the common integrals arising from trigonometric substitution. Putting these together, we find that

$$\text{arctan}(x) = x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \cdots = \sum_{n = 0}^\infty (-1)^n \frac{x^{2n+1}}{2n+1}. \notag$$

As $x = 1$ is within the interval of convergence, we can substitute $x = 1$ into the series to find the representation

$$\text{arctan}(1) = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots = \sum_{n = 0}^\infty (-1)^n \frac{1}{2n+1}. \notag$$

Since $\text{arctan}(1) = \frac{\pi}{4}$, this gives the representation for $\pi/4$ given in \eqref{eq:base}.

However, since $x=1$ was at the very edge of the interval of convergence, this series converges very, very slowly. For instance, using the first $50$ terms gives the approximation

$$\pi \approx 3.121594652591011. \notag$$

The expansion of $\pi$ is actually

$$\pi = 3.141592653589793238462\ldots \notag$$

So the first $50$ terms of \eqref{eq:base} gives two digits of accuracy. That’s not very good.
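One can watch this slow convergence directly:

```python
# Partial sums of 4*(1 - 1/3 + 1/5 - ...): the error after N terms is
# roughly 1/N, so each additional digit costs about ten times the work.
for N in (50, 500, 5000):
    approx = 4 * sum((-1)**n / (2 * n + 1) for n in range(N))
    print(N, approx)
```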

I think it is very natural to ask: can we do better? This series converges slowly — can we find one that converges more quickly?

## Series Convergence Tests with Prototypical Examples

This is a note written for my Fall 2016 Math 100 class at Brown University. We are currently learning about various tests for determining whether series converge or diverge. In this note, we collect these tests together in a single document. We give a brief description of each test, some indicators of when each test would be good to use, and give a prototypical example for each. Note that we do not justify any of these tests here — we’ve discussed that extensively in class. [But if something is unclear, send me an email or head to my office hours]. This is here to remind us of the variety of the various tests of convergence.

A copy of just the statements of the tests, put together, can be found here. A pdf copy of this whole post can be found here.

In order, we discuss the following tests:

1. The $n$th term test, also called the basic divergence test
2. Recognizing an alternating series
3. Recognizing a geometric series
4. Recognizing a telescoping series
5. The Integral Test
6. P-series
7. Direct (or basic) comparison
8. Limit comparison
9. The ratio test
10. The root test

## The $n$th term test

### Statement

Suppose we are looking at $\sum_{n = 1}^\infty a_n$ and

$$\lim_{n \to \infty} a_n \neq 0. \notag$$

Then $\sum_{n = 1}^\infty a_n$ does not converge.

### When to use it

When applicable, the $n$th term test for divergence is usually the easiest and quickest way to confirm that a series diverges. When first considering a series, it’s a good idea to think about whether the terms go to zero or not. But remember that if the limit of the individual terms is zero, then it is necessary to think harder about whether the series converges or diverges.

### Example

Each of the series

$$\sum_{n = 1}^\infty \frac{n+1}{2n + 4}, \quad \sum_{n = 1}^\infty \cos n, \quad \sum_{n = 1}^\infty \sqrt{n} \notag$$

diverges, since in each case the terms do not tend to $0$.

## Recognizing alternating series

### Statement

Suppose $\sum_{n = 1}^\infty (-1)^n a_n$ is a series where

1. $a_n \geq 0$,
2. $a_n$ is decreasing, and
3. $\lim_{n \to \infty} a_n = 0$.

Then $\sum_{n = 1}^\infty (-1)^n a_n$ converges.

Stated differently, if the terms are alternating sign, decreasing in absolute size, and converging to zero, then the series converges.

### When to use it

The key is in the name — if the series is alternating, then this is the go-to idea of analysis. Note that if the terms of a series are alternating and decreasing, but the terms do not go to zero, then the series diverges by the $n$th term test.

### Example

Suppose we are looking at the series

$$\sum_{n = 1}^\infty \frac{(-1)^n}{\log(n+1)} = \frac{-1}{\log 2} + \frac{1}{\log 3} + \frac{-1}{\log 4} + \cdots \notag$$

The terms are alternating.
The sizes of the terms are $\frac{1}{\log (n+1)}$, and these are decreasing.
Finally,

$$\lim_{n \to \infty} \frac{1}{\log(n+1)} = 0. \notag$$

Thus the alternating series test applies and shows that this series converges.
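Numerically, the partial sums oscillate around the limit and close in very slowly:

```python
# Partial sums of sum (-1)^n / log(n+1): consecutive sums bracket the limit.
import math

s = 0.0
for n in range(1, 11):
    s += (-1)**n / math.log(n + 1)
    print(n, round(s, 6))
```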