Category Archives: sagemath

Choosing functions and generating figures for “When are there continuous choices for the mean value abscissa?”

In my previous note, I described some of the main ideas behind the paper “When are there continuous choices for the mean value abscissa?”, which I wrote jointly with Miles Wheeler. In this note, I discuss the process behind generating the functions and figures in our paper.

Generating our figures came in two steps: we first need to choose which functions to plot; then we need to figure out how to graphically solve the general mean value abscissa problem for each.

Afterwards, we can decide how to plot these functions well.

Choosing the right functions to plot

The first goal is to find the right functions to plot. From the discussion in our paper, this amounts to specifying certain local conditions on the function. And for a first pass, we only used these prescribed local conditions.

The idea is this: to study solutions to the mean value problem, we look at the zeroes of the function $$ F(b, c) = \frac{f(b) - f(a)}{b - a} - f'(c). $$ When $F(b, c) = 0$, we see that $c$ is a mean value abscissa for $f$ on the interval $(a, b)$.

By the implicit function theorem, we can solve for $c$ as a function of $b$ around a given solution $(b_0, c_0)$ if $F_c(b_0, c_0) \neq 0$. For this particular function, $F_c(b_0, c_0) = -f''(c_0)$.

More generally, it turns out that the order of vanishing of $f'$ at $b_0$ and $c_0$ governs the local behaviour of solutions in a neighborhood of $(b_0, c_0)$.

To make figures, we thus need to make functions with prescribed orders of vanishing of $f'$ at points $b_0$ and $c_0$, where $c_0$ is itself a mean value abscissa for the interval $(a_0, b_0)$.

Without loss of generality, it suffices to consider the case when $f(a_0) = f(b_0) = 0$, as otherwise we can study the function $$
g(x) = f(x) - \left( \frac{f(b_0) - f(a_0)}{b_0 - a_0}(x - a_0) + f(a_0) \right),
$$
which has $g(a_0) = g(b_0) = 0$, and the triples $(a, b, c)$ solving the mean value problem for $f$ are exactly those solving it for $g$.
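
This reduction is easy to perform symbolically. As a minimal sketch in sympy (the name shift_to_zero is ours for this note, not taken from the paper's code):

import sympy
from sympy.abc import X

def shift_to_zero(F, a0, b0):
    """Subtract the secant line through (a0, F(a0)) and (b0, F(b0)),
    returning G with G(a0) = G(b0) = 0."""
    Fa = F.subs(X, a0)
    Fb = F.subs(X, b0)
    SECANT = (Fb - Fa) / (b0 - a0) * (X - a0) + Fa
    return sympy.expand(F - SECANT)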

And for consistency, we made the arbitrary decision to have $a_0 = 0$, $b_0 = 3$, and $c_0 = 1$. This decision simplified many of the plotting decisions, as the important points were always $0$, $1$, and $3$.

A first idea

Thus the first task is to be able to generate functions $f$ such that:

  1. $f(0) = 0$,
  2. $f(3) = 0$,
  3. $f'(1) = 0$ (so that $1$ is a mean value abscissa), and
  4. $f'(x)$ has prescribed order of vanishing at $1$, and
  5. $f'(x)$ has prescribed order of vanishing at $3$.

These conditions can all be met by an appropriate interpolating polynomial. As we are setting conditions on both $f$ and its derivatives at multiple points, this is the fundamental problem of Hermite interpolation. Alternatively, this amounts to using Taylor's theorem at each point and then using the Chinese Remainder Theorem over $\mathbb{Q}[x]$ to combine these polynomials together.

There are clever ways of solving this, but this task is so small that it doesn’t require cleverness. In fact, this is one of the laziest solutions we could think of. We know that given $n$ Hermite conditions, there is a unique polynomial of degree $n – 1$ that interpolates these conditions. Thus we

  1. determine the degree of the polynomial,
  2. create a degree $n-1$ polynomial with variable coefficients in sympy,
  3. have sympy symbolically compute the relations the coefficients must satisfy,
  4. ask sympy to solve this symbolic system of equations.

In code, this looks like

import sympy
from sympy.abc import X, B, C, D    # Establish our variable names
def interpolate(conds):
    """
    Finds the polynomial of minimal degree that solves the given Hermite conditions.

    conds is a list of the form
      [(x1, r1, v1), (x2, r2, v2), ...]
    where the polynomial p is to satisfy p^(r_1) (x_1) = v_1, and so on.
    """
    # the degree will be one less than the number of conditions
    n = len(conds)

    # generate a symbol for each coefficient
    A = [sympy.Symbol("a[%d]" % i) for i in range(n)]

    # generate the desired polynomial symbolically
    P = sum([A[i] * X**i for i in range(n)])

    # generate the equations the polynomial must satisfy
    #
    # for each (x, r, v), sympy evaluates the rth derivative of P wrt X,
    # substitutes x in for X, and requires that this equals v.
    EQNS = [sympy.diff(P, X, r).subs(X, x) - v for x, r, v in conds]

    # solve this system for the coefficients A[i]
    SOLN = sympy.solve(EQNS, A)

    return P.subs(SOLN)

We note that we use the convention that a sympy symbol for something is capitalized. For example, we think of the polynomial as being represented by $$
p(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_{n-1} x^{n-1}.
$$
In sympy variables, we think of this as

P = A[0] + A[1] * X + A[2] * X**2 + ... + A[n-1] * X**(n-1).

With this code, we can ask for the unique degree 1 polynomial which is $1$ at $1$, and whose first derivative is $2$ at $1$.

> interpolate([(1, 0, 1), (1, 1, 2)])
2*X - 1

Indeed, $2x - 1$ is this polynomial.

Too rigid

We have now produced a minimal Hermite solver. But there is a major downside: the unique polynomial exhibiting the necessary behaviours we required is essentially never a good didactic example. We don’t just want plots — we want beautiful, simple plots.

Add knobs to turn

We add two conditions for additional control, and hopefully for additional simplicity of the resulting plot.

Firstly, we added the additional constraint that $f(1) = 1$. This is a small change, but it prescribes a small value at $c_0$, so now all three points of interest fit within a $[0, 3] \times [0, 3]$ box.

Secondly, we also allow the choice of the value of the first nonvanishing derivatives at $1$ and $3$. In reality, we treat these as parameters to change the shape of the resulting graph. Roughly speaking, if $f(x) - f(1)$ vanishes to order $k$ at $1$, then near $1$ we have the approximation $f(x) \approx f(1) + f^{(k)}(1)(x - 1)^k/k!$. Morally, the larger the value of that derivative, the more the graph will resemble a (scaled) $(x - 1)^k$ near that point.

In code, we implemented this by making functions that will add the necessary Hermite conditions to our input to interpolate.

# We fix the values of a0, b0, c0.
a0 = 0
b0 = 3
c0 = 1

# We require p(a0) = 0, p(b0) = 0, p(c0) = 1, p'(c0) = 0.
BASIC_CONDS = [(a0, 0, 0), (b0, 0, 0), (c0, 0, 1), (c0, 1, 0)]

def c_degen(n, residue):
    """
    Give Hermite conditions for order of vanishing at c0 equal to `n`, with
    first nonzero residue `residue`.

    NOTE: the order `n` is in terms of f', not of f. That is, this is the amount
    of additional degeneracy to add.  This may be a source of off-by-one errors.
    """
    return [(c0, 1 + i, 0) for i in range(1, n + 1)] + [(c0, n + 2, residue)]


def b_degen(n, residue):
    """
    Give Hermite conditions for order of vanishing at b0 equal to `n`, with
    first nonzero residue `residue`.
    """
    return [(b0, i, 0) for i in range(1, n + 1)] + [(b0, n + 1, residue)]

def poly_with_degens(nc=0, nb=0, residue_c=3, residue_b=3):
    """
    Give unique polynomial with given degeneracies for this MVT problem.

    `nc` is the order of vanishing of f' at c0, with first nonzero residue `residue_c`.
    `nb` is the order of vanishing of f' at b0, with first nonzero residue `residue_b`.
    """
    conds = BASIC_CONDS + c_degen(nc, residue_c) + b_degen(nb, residue_b)
    return interpolate(conds)

Then apparently the unique degree $5$ polynomial $f$ with $f(0) = f(3) = f'(1) = 0$, $f(1) = 1$, and $f''(1) = f'(3) = 3$ is given by

> poly_with_degens()
11*X**5/16 - 21*X**4/4 + 113*X**3/8 - 65*X**2/4 + 123*X/16

Too many knobs

In principle, this is a great solution. And if you turn the knobs enough, you can get a really nice picture. But the problem with this system (and with many polynomial interpolation problems) is that when you add conditions, you can introduce many jagged peaks and sudden changes. These can behave somewhat unpredictably and chaotically — small changes in Hermite conditions can lead to drastic changes in the shape of the resulting polynomial.

What we really want is for the interpolator to give a polynomial that doesn’t have sudden changes.

Minimize change

The problem: the polynomial can have really rapid changes that make the plots look bad.

The solution: minimize the polynomial’s change.

That is, if $f$ is our polynomial, then its rate of change at $x$ is $f'(x)$. Our idea is to “minimize” the average size of the derivative $f'$ — this should help keep the function in frame. There are many ways to do this, but we want to choose one that fits into our scheme (so that it requires as little additional work as possible) but which works well.

We decide that we want to focus our graphs on the interval $(0, 4)$. Then we can measure the average size of the derivative $f'$ by (the square of) its L2 norm on $(0, 4)$: $$ L2(f) = \int_0^4 (f'(x))^2 \, dx. $$

We add an additional Hermite condition of the form (pt, order, VAL) and think of VAL as an unknown symbol. We arbitrarily decided to start with $pt = 2$ (so that now the behavior at each of the points $0, 1, 2, 3$ is controlled in some way) and $order = 1$. The point itself doesn't matter very much, since we're going to minimize over the family of polynomials that interpolate the other Hermite conditions with one degree of freedom.

In other words, we are adding in the condition that $f'(2) = VAL$ for an unknown VAL.

We will have sympy compute the interpolating polynomial through its normal set of (explicit) conditions as well as the symbolic condition (2, 1, VAL). Then $f = f(\mathrm{VAL}; x)$.

Then we have sympy compute the (symbolic) L2 norm of the derivative of this polynomial over the interval $(0, 4)$, $$L2(\mathrm{VAL}) = \int_0^4 f'(\mathrm{VAL}; x)^2 \, dx.$$

Finally, to minimize the L2 norm, we have sympy compute the derivative of $L2(\mathrm{VAL})$ with respect to VAL and find the critical points, where this derivative equals $0$. Since $f'$ depends linearly on VAL, the norm $L2(\mathrm{VAL})$ is a quadratic in VAL with positive leading coefficient, so it has a unique critical point, which is its global minimum. We take this critical point as our value of VAL.

In code, this looks like

def smoother_interpolate(conds, ctrl_point=2, order=1, interval=(0,4)):
    """
    Find the polynomial of minimal degree that interpolates the Hermite
    conditions in `conds`, and whose behavior at `ctrl_point` minimizes the L2
    norm on `interval` of its derivative.
    """
    # Add the symbolic point to the conditions.
    # Recall that D is a sympy variable
    new_conds = conds + [(ctrl_point, order, D)]

    # Find the polynomial interpolating `new_conds`, symbolic in X *and* D
    P = interpolate(new_conds)

    # Compute L2 norm of the derivative on `interval`
    L2 = sympy.integrate(sympy.diff(P, X)**2, (X, *interval))

    # Take the first critical point of the L2 norm with respect to D
    SOLN = sympy.solve(sympy.diff(L2, D), D)[0]

    # Substitute the minimizing solution in for D and return
    return P.subs(D, SOLN)


def smoother_poly_with_degens(nc=0, nb=0, residue_c=3, residue_b=3):
    """
    Give unique polynomial with given degeneracies for this MVT problem whose
    derivative on (0, 4) has minimal L2 norm.

    `nc` is the order of vanishing of f' at c0, with first nonzero residue `residue_c`.
    `nb` is the order of vanishing of f' at b0, with first nonzero residue `residue_b`.

    """
    conds = BASIC_CONDS + c_degen(nc, residue_c) + b_degen(nb, residue_b)
    return smoother_interpolate(conds)

Then apparently the degree $6$ polynomial $f$ with $f(0) = f(3) = f'(1) = 0$, $f(1) = 1$, and $f''(1) = f'(3) = 3$, and with minimal L2 derivative norm on $(0, 4)$, is given by

> smoother_poly_with_degens()
-9660585*X**6/33224848 + 27446837*X**5/8306212 - 232124001*X**4/16612424
  + 57105493*X**3/2076553 - 858703085*X**2/33224848 + 85590321*X/8306212

> sympy.N(smoother_poly_with_degens())
-0.290763858423069*X**6 + 3.30437472580762*X**5 - 13.9729157526921*X**4
  + 27.5001374874612*X**3 - 25.8452073279613*X**2 + 10.3043747258076*X

Is it much better? Let’s compute the L2 norms.

> interval = (0, 4)
> sympy.N(sympy.integrate(sympy.diff(poly_with_degens(), X)**2, (X, *interval)))
1865.15411706349

> sympy.N(sympy.integrate(sympy.diff(smoother_poly_with_degens(), X)**2, (X, *interval)))
41.1612799050325

That’s beautiful. And you know what’s better? Sympy did all the hard work.

For comparison, we can produce a basic plot using numpy and matplotlib.

import matplotlib.pyplot as plt
import numpy as np

def basic_plot(F, n=300):
    fig = plt.figure(figsize=(6, 2.5))
    ax = fig.add_subplot(1, 1, 1)

    # evaluate F at n evenly spaced points across the window
    b1d = np.linspace(-.5, 4.5, n)
    f = sympy.lambdify(X, F)(b1d)

    ax.plot(b1d, f, 'k')
    ax.set_aspect('equal')
    ax.grid(True)
    ax.set_xlim([-.5, 4.5])
    ax.set_ylim([-1, 5])

    # mark the three points of interest: (a0, 0), (c0, f(c0)), (b0, f(b0))
    ax.plot([0, c0, b0], [0, F.subs(X, c0), F.subs(X, b0)], 'ko')

    fig.savefig("basic_plot.pdf")

Then the plot of poly_with_degens() is given by

The polynomial jumps upwards immediately and strongly for $x > 3$.

On the other hand, the plot of smoother_poly_with_degens() is given by

This stays in frame between $0$ and $4$, as desired.

Choose data to highlight and make the functions

This was enough to generate the functions for our paper. Actually, the three functions (in a total of six plots) in figures 1, 2, and 5 of our paper were hand-chosen and hand-crafted for didactic purposes: the first two functions are simply a cubic and a quadratic with certain points labelled. The last function was the non-analytic-but-smooth semi-pathological counterexample, which could not be created through polynomial interpolation.

But the four functions highlighting different degenerate conditions in figures 3 and 4 were each created using this L2-minimizing interpolation system.

In particular, the function in figure 3 is

F3 = smoother_poly_with_degens(nc=1, residue_b=-3)

which is one of the simplest L2-minimizing polynomials with the typical Hermite conditions, $f''(c_0) = 0$, and the opposite of the default sign for $f'(b_0)$.

The three functions in figure 4 are (from left to right)

F_bmin = smoother_poly_with_degens(nc=1, nb=1, residue_c=10, residue_b=10)
F_bzero = smoother_poly_with_degens(nc=1, nb=2, residue_c=-20, residue_b=20)
F_bmax = smoother_poly_with_degens(nc=1, nb=1, residue_c=20, residue_b=-10)

We chose much larger residues because the goal of the figure is to highlight how the local behavior at those points corresponds to the behavior of the mean value abscissae, and larger residues make those local behaviors more dominant.

Plotting all possible mean value abscissae

Now that we can choose our functions, we want to figure out how to find all solutions of the mean value condition $F(b, c) = 0$, where $$
F(b, c) = \frac{f(b) - f(a_0)}{b - a_0} - f'(c).
$$
Here I write $a_0$ as it's fixed, while both $b$ and $c$ vary.

Our primary interest in these solutions is to facilitate graphical experimentation and exploration of the problem — we want these pictures to help build intuition and provide examples.

Although this may seem harder, it is actually a much simpler problem. The function $F(b, c)$ is continuous (and roughly as smooth as $f$ is).

Our general idea is a common approach for this sort of problem:

  1. Compute the values of $F(b, c)$ on a tight mesh (or grid) of points.
  2. Restrict attention to the domain where solutions are meaningful.
  3. Plot the contour of the $0$-level set.

Contours can be well-approximated from a tight mesh. In short, if a positive value and a negative value appear next to each other in the mesh of computed values, then by the intermediate value theorem $F(b, c) = 0$ somewhere between them. For a tight enough mesh, good plots can be made.

To solve this, we again have sympy create and compute the function for us. We use numpy to generate the mesh (and to vectorize the computations, although this isn’t particularly important in this application), and matplotlib to plot the resulting contour.

Before giving code, note that the symbol F in the sympy code below stands for what we have been mathematically referring to as $f$, and not $F$. This is a potential confusion from our sympy-capitalization convention. It is still necessary to have sympy compute $F$ from $f$.

In code, this looks like

import sympy
import numpy as np
import matplotlib.pyplot as plt
from sympy.abc import X, B, C    # reuse our symbolic variable names

def abscissa_plot(F, n=300):
    # Compute the derivative of f
    DF = sympy.diff(F,X)

    # Define CAP_F --- "capital F"
    #
    # this is (f(b) - f(0))/(b - 0) - f'(c).
    CAP_F = (F.subs(X, B) - F.subs(X, 0)) / (B - 0) - DF.subs(X, C)

    # build the mesh
    b1d = np.linspace(-.5, 4.5, n)
    b2d, c2d = np.meshgrid(b1d, b1d)

    # compute CAP_F within the mesh
    cap_f_mesh = sympy.lambdify((B, C), CAP_F)(b2d, c2d)

    # restrict attention to below the diagonal --- we require c < b
    # (although the mask inequality looks reversed from this perspective)
    valid_cap_f_mesh = np.ma.array(cap_f_mesh, mask=c2d > b2d)

    # Set up plot basics
    fig = plt.figure(figsize=(6, 2.5))
    ax = fig.add_subplot(1, 1, 1)
    ax.set_aspect('equal')
    ax.grid(True)
    ax.set_xlim([-.5, 4.5])
    ax.set_ylim([-.5, 4.5])

    # plot the contour
    ax.contour(b2d, c2d, valid_cap_f_mesh, [0], colors='k')

    # plot a diagonal line representing the boundary
    ax.plot(b1d,b1d,'k--')

    # plot the guaranteed point
    ax.plot(b0,c0,'ko')

    fig.savefig("abscissa_plot.pdf")

Then plots of solutions to $F(b, c) = 0$ for our basic polynomials are given by

for poly_with_degens(), while for smoother_poly_with_degens() we get

And for comparison, we can now create a (slightly worse looking) version of the plots in figure 3.

F3 = smoother_poly_with_degens(nc=1, residue_b=-3)
basic_plot(F3)
abscissa_plot(F3)

This produces the two plots

For comparison, a (slightly scaled) version of the actual figure appearing in the paper is


Copy of the code

A copy of the code used in this note (and correspondingly the code used to generate the functions for the paper) is available on my github as an ipython notebook.

Posted in Expository, Math.CA, Mathematics, Programming, Python, sagemath | 3 Comments

Talk: Finding Congruent Numbers, Arithmetic Progressions of Squares, and Triangles

Here are some notes for my talk Finding Congruent Numbers, Arithmetic Progressions of Squares, and Triangles (an invitation to analytic number theory), which I’m giving on Tuesday 26 February at Macalester College.

The slides for my talk are available here.

The overarching idea of the talk is to explore the deep relationship between

  1. right triangles with rational side lengths and area $n$,
  2. three-term arithmetic progressions of squares with common difference $n$, and
  3. rational points on the elliptic curve $Y^2 = X^3 - n^2 X$.

If one of these exists, then all three exist, and in fact there are one-to-one correspondences between them. Such an $n$ is called a congruent number.

By understanding this relationship, we also describe the ideas and results in the paper A Shifted Sum for the Congruent Number Problem, which I wrote jointly with Tom Hulse, Chan Ieong Kuan, and Alex Walker.

Towards the end of the talk, I say that in practice, the best way to decide if a (reasonably sized) number is congruent is through elliptic curves. Given a computer, we can investigate whether the number $n$ is congruent through a computer algebra system like sage.

For the rest of this note, I’ll describe how one can use sage to determine whether a number is congruent, and how to use sage to add points on elliptic curves to generate more triangles corresponding to a particular congruent number.

Firstly, one needs access to sage. It's free to install, but it's quite large. The easiest way to begin using sage immediately is to use cocalc.com, a free interface to sage (and other tools) that was created by William Stein, who also created sage.

In a sage session, we can create an elliptic curve through


> E6 = EllipticCurve([-36, 0])
> E6
Elliptic Curve defined by y^2 = x^3 - 36*x over Rational Field

More generally, to create the curve corresponding to whether or not $n$ is congruent, you can use


> n = 6   # (or anything you want)
> E = EllipticCurve([-n**2, 0])

We can ask sage whether our curve has many rational points by asking it to (try to) compute the rank.


> E6.rank()
1

If the rank is at least $1$, then there are infinitely many rational points on the curve and $n$ is a congruent number. If the rank is $0$, then $n$ is not congruent.
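
For instance, $1$ is not a congruent number (a classical result of Fermat), and sage agrees: the corresponding curve $Y^2 = X^3 - X$ has rank $0$.

> E1 = EllipticCurve([-1, 0])
> E1.rank()
0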

For the curve $Y^2 = X^3 - 36 X$ corresponding to whether $6$ is congruent, sage returns that the rank is $1$. We can ask sage to try to find a rational point on the elliptic curve through


> E6.point_search(10)
[(-3 : 9 : 1)]

The 10 in this code is a limit on the complexity of the point. The precise definition isn’t important — using $10$ is a reasonable limit for us.

We see that this outputs something. When sage examines the elliptic curve, it uses the homogenized equation $Y^2 Z = X^3 - 36 X Z^2$ — it turns out that in many cases, it's easier to perform computations when every term is a polynomial of the same degree. The coordinates it's giving us are projective coordinates of the form $(X : Y : Z)$, which may look a bit odd. We can ask sage to return just the XY coordinates as well.


> Pt = E6.point_search(10)[0]  # The [0] means to return the first element of the list
> Pt.xy()
(-3, 9)

In my talk, I describe a correspondence between points on elliptic curves and rational right triangles. In the talk, it arises as the choice of coordinates. But what matters for us right now is that the correspondence taking a point $(x, y)$ on an elliptic curve to a triangle $(a, b, c)$ is given by
$$(x, y) \mapsto \Big( \frac{n^2-x^2}{y}, \frac{-2 \cdot x \cdot n}{y}, \frac{n^2 + x^2}{y} \Big).$$

We can write a sage function to perform this map for us, through


> def pt_to_triangle(P):
    x, y = P.xy()
    return (36 - x**2)/y, (-2*x*6/y), (36+x**2)/y

> pt_to_triangle(Pt)
(3, 4, 5)

This returns the $(3, 4, 5)$ triangle!
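
This also makes the correspondence with arithmetic progressions of squares concrete: for a rational right triangle $(a, b, c)$ with area $n$, the squares $((b-a)/2)^2$, $(c/2)^2$, and $((a+b)/2)^2$ form a three-term arithmetic progression with common difference $n$. We can check this in the same session (this check is an illustration for this note, not code from the talk):

> a, b, c = pt_to_triangle(Pt)
> ((c/2)**2 - ((b - a)/2)**2, ((a + b)/2)**2 - (c/2)**2)
(6, 6)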

Of course, we knew this triangle the whole time. But we can use sage to get more points. A very cool fact is that rational points on elliptic curves form a group under a sort of addition — we can add points on elliptic curves together and get more rational points. Sage is very happy to perform this addition for us, and then to see what triangle results.


> Pt2 = Pt + Pt
> Pt2.xy()
(25/4, -35/8)
> pt_to_triangle(Pt2)
(7/10, 120/7, -1201/70)

Another rational triangle with area $6$ is the $(7/10, 120/7, 1201/70)$ triangle. (You might notice that sage returned a negative hypotenuse, but it’s the absolute values that matter for the area). After scaling this to an integer triangle, we get the integer right triangle $(49, 1200, 1201)$ (and we can check that the squarefree part of the area is $6$).

Let’s do one more.


> Pt3 = Pt + Pt + Pt
> Pt3.xy()
(-1587/1369, -321057/50653)
> pt_to_triangle(Pt3)
(-4653/851, -3404/1551, -7776485/1319901)

That’s a complicated triangle! It may be fun to experiment some more — the triangles rapidly become very, very complicated. In fact, it was very important to the main result of our paper that these triangles become so complicated so quickly!

Posted in Expository, Math.NT, Mathematics, Programming, sage, sagemath | Leave a comment

Using lcalc to compute half-integral weight L-functions

This is a brief note intended primarily for my collaborators interested in using Rubinstein’s lcalc to compute the values of half-integral weight $L$-functions.

We will be using lcalc through sage. Unfortunately, we are going to be using some functionality which sage doesn’t expose particularly nicely, so it will feel a bit silly. Nonetheless, using sage’s distribution will prevent us from needing to compile it on our own (and there are a few bugfixes present in sage’s version).

Some $L$-functions are built into lcalc, but not half-integral weight $L$-functions. So it will be necessary to create a datafile containing the data that lcalc will use to generate its approximations. In short, this datafile will describe the shape of the functional equation and give a list of coefficients for lcalc to use.

Building the datafile

It is assumed that the $L$-function is normalized in such a way that
$$\begin{equation}
\Lambda(s) = Q^s L(s) \prod_{j = 1}^{A} \Gamma(\gamma_j s + \lambda_j) = \omega \overline{\Lambda(1 - \overline{s})}.
\end{equation}$$
This involves normalizing the functional equation to be of shape $s \mapsto 1-s$. Also note that $Q$ will be given as a real number.

An annotated version of the datafile you should create looks like this

2                  # 2 means the Dirichlet coefficients are reals
0                  # 0 means the L-function isn't a "nice" one
10000              # 10000 coefficients will be provided
0                  # 0 means the coefficients are not periodic
1                  # num Gamma factors of form \Gamma(\gamma s + \lambda)
1                  # the \gamma in the Gamma factor
1.75 0             # \lambda in Gamma factor; complex valued, space delimited
0.318309886183790  # Q. In this case, 1/pi
1 0                # real and imaginary parts of omega, sign of func. eq.
0                  # number of poles
1.000000000000000  # a(1)
-1.78381067250408  # a(2)
...                # ...
-0.622124724090625 # a(10000)

If there is an error, lcalc will usually fail silently. (Bummer). Note that in practice, datafiles should only contain numbers and should not contain comments. This annotated version is for convenience, not for use.

You can find a copy of the datafile for the unique weight $9/2$ cusp form on $\Gamma_0(4)$ here. This uses the first 10000 coefficients — it's surely possible to use more, but this was the initial test setup.

Generating the coefficients for this example

In order to create datafiles for other cuspforms, it is necessary to compute the coefficients (presumably using magma or sage) and then to populate a datafile. A good exercise would be to recreate this datafile using sage-like methods.

One way to create this datafile is to explicitly create the q-expansion of the modular form, provided we know a convenient expression for it. For us, we happen to know that $f = \eta(2z)^{12} \theta(z)^{-3}$. Thus one way to create the coefficients is to do something like the following.

num_coeffs = 10**5 + 1
weight     = 9.0 / 2.0

R.<q> = PowerSeriesRing(ZZ)
theta_expansion = theta_qexp(num_coeffs)
# Note that qexp_eta omits the q^(1/24) factor
eta_expansion = qexp_eta(ZZ[['q']], num_coeffs + 1)
eta2_coeffs = []
for i in range(num_coeffs):
    if i % 2 == 1:
        eta2_coeffs.append(0)
    else:
        eta2_coeffs.append(eta_expansion[i//2])
eta2 = R(eta2_coeffs)
g = q * ( (eta2)**4 / (theta_expansion) )**3

coefficients = g.list()[1:] # skip the 0 coeff
print(coefficients[:10])

normalized_coefficients = []
for idx, elem in enumerate(coefficients, 1):
    normalized_coeff = 1.0 * elem / (idx ** (.5 * (weight - 1)))
    normalized_coefficients.append(normalized_coeff)
print(normalized_coefficients[:10])
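
With the normalized coefficients in hand, we can write the datafile itself. Here is a sketch of one way to do so (the header values are copied from the annotated example above, and the filename matches the example below):

# header lines, following the annotated format described above
header = [
    "2",                  # coefficients are real
    "0",                  # not a "nice" L-function
    "10000",              # number of coefficients provided
    "0",                  # coefficients are not periodic
    "1",                  # one Gamma factor
    "1",                  # gamma = 1
    "1.75 0",             # lambda = 1.75 + 0i
    "0.318309886183790",  # Q = 1/pi
    "1 0",                # omega = 1, the sign of the functional equation
    "0",                  # no poles
]

with open("g1_lcalcfile.txt", "w") as datafile:
    for line in header:
        datafile.write(line + "\n")
    for coeff in normalized_coefficients[:10000]:
        datafile.write("{}\n".format(coeff))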

Using lcalc now

Suppose that you have a datafile, called g1_lcalcfile.txt (for example). Then to use this from sage, you point lcalc within sage to this file. This can be done through calls such as

# Computes L(0.5 + 0i, f)
lcalc('-v -x0.5 -y0 -Fg1_lcalcfile.txt')

# Computes L(s, f) from 0.5 to (2 + 7i) at 1000 equally spaced samples
lcalc('--value-line-segment -x0.5 -y0 -X2 -Y7 --number-samples=1000 -Fg1_lcalcfile.txt')

# See lcalc.help() for more on calling lcalc.

The key in these is to pass along the datafile through the -F argument.

Posted in Data, Mathematics, Programming, sage, sagemath | Leave a comment

Sage Days 87 Demo: Interfacing between sage and the LMFDB

Interfacing sage and the LMFDB — a prototype

The lmfdb and sagemath are both great things, but they don’t currently talk to each other. Much of the lmfdb calls sage, but the lmfdb also includes vast amounts of data on $L$-functions and modular forms (hence the name) that is not accessible from within sage.
This is an example prototype of an interface to the lmfdb from sage. Keep in mind that this is a prototype and every aspect can change. But we hope to show what may be possible in the future. If you have requests, comments, or questions, please request/comment/ask either now, or at my email: david@lowryduda.com.

Note that this notebook is available on http://davidlowryduda.com or https://gist.github.com/davidlowryduda/deb1f88cc60b6e1243df8dd8f4601cde, and the code is available at https://github.com/davidlowryduda/sage2lmfdb

Let’s dive into an example.

In [1]:
# These names will change
from sage.all import *
import LMFDB2sage.elliptic_curves as lmfdb_ecurve
In [2]:
lmfdb_ecurve.search(rank=1)
Out[2]:
[Elliptic Curve defined by y^2 + x*y = x^3 - 887688*x - 321987008 over Rational Field,
 Elliptic Curve defined by y^2 + x*y + y = x^3 - x^2 + 10795*x - 97828 over Rational Field,
 Elliptic Curve defined by y^2 + x*y + y = x^3 - x^2 - 2294115305*x - 42292668425178 over Rational Field,
 Elliptic Curve defined by y^2 + x*y + y = x^3 - x^2 - 3170*x - 49318 over Rational Field,
 Elliptic Curve defined by y^2 + y = x^3 + 1050*x - 26469 over Rational Field,
 Elliptic Curve defined by y^2 + x*y = x^3 - x^2 - 1240542*x - 531472509 over Rational Field,
 Elliptic Curve defined by y^2 + y = x^3 - x^2 + 8100*x - 263219 over Rational Field,
 Elliptic Curve defined by y^2 + x*y = x^3 + 637*x - 68783 over Rational Field,
 Elliptic Curve defined by y^2 + y = x^3 + x^2 + 36*x - 380 over Rational Field,
 Elliptic Curve defined by y^2 + y = x^3 + x^2 - 2535*x - 49982 over Rational Field]
This returns 10 elliptic curves of rank 1. But these are a bit different from sage's elliptic curves.

In [3]:
Es = lmfdb_ecurve.search(rank=1)
E = Es[0]
print(type(E))
<class 'LMFDB2sage.ell_lmfdb.EllipticCurve_rational_field_lmfdb_with_category'>
Note that the class of an elliptic curve is an lmfdb EllipticCurve. But don't worry, this is a subclass of a normal elliptic curve. So we can call the normal things one might call on an elliptic curve.

In [4]:
# Try autocompleting the following. It has all the things!
print(dir(E))
['CPS_height_bound', 'CartesianProduct',
'Chow_form', 'Hom',
'Jacobian', 'Jacobian_matrix',
'Lambda', 'Np',
'S_integral_points', '_AlgebraicScheme__A',
'_AlgebraicScheme__divisor_group', '_AlgebraicScheme_subscheme__polys',
'_EllipticCurve_generic__ainvs', '_EllipticCurve_generic__b_invariants',
'_EllipticCurve_generic__base_ring', '_EllipticCurve_generic__discriminant',
'_EllipticCurve_generic__is_over_RationalField', '_EllipticCurve_generic__multiple_x_denominator',
'_EllipticCurve_generic__multiple_x_numerator', '_EllipticCurve_rational_field__conductor_pari',
'_EllipticCurve_rational_field__generalized_congruence_number', '_EllipticCurve_rational_field__generalized_modular_degree',
'_EllipticCurve_rational_field__gens', '_EllipticCurve_rational_field__modular_degree',
'_EllipticCurve_rational_field__np', '_EllipticCurve_rational_field__rank',
'_EllipticCurve_rational_field__regulator', '_EllipticCurve_rational_field__torsion_order',
'_Hom_', '__add__', '__cached_methods', '__call__',
'__class__', '__cmp__', '__contains__', '__delattr__',
'__dict__', '__dir__', '__div__', '__doc__',
'__eq__', '__format__', '__ge__', '__getattribute__',
'__getitem__', '__getstate__', '__gt__', '__hash__',
'__init__', '__le__', '__lt__', '__make_element_class__',
'__module__', '__mul__', '__ne__', '__new__',
'__nonzero__', '__pari__', '__pow__', '__pyx_vtable__',
'__rdiv__', '__reduce__', '__reduce_ex__', '__repr__',
'__rmul__', '__setattr__', '__setstate__', '__sizeof__',
'__str__', '__subclasshook__', '__temporarily_change_names', '__truediv__',
'__weakref__', '_abstract_element_class', '_adjust_heegner_index', '_an_element_',
'_ascii_art_', '_assign_names', '_axiom_', '_axiom_init_',
'_base', '_base_ring', '_base_scheme', '_best_affine_patch',
'_cache__point_homset', '_cache_an_element', '_cache_key', '_check_satisfies_equations',
'_cmp_', '_coerce_map_from_', '_coerce_map_via', '_coercions_used',
'_compute_gens', '_convert_map_from_', '_convert_method_name', '_defining_names',
'_defining_params_', '_doccls', '_element_constructor', '_element_constructor_',
'_element_constructor_from_element_class', '_element_init_pass_parent', '_factory_data', '_first_ngens',
'_forward_image', '_fricas_', '_fricas_init_', '_gap_',
'_gap_init_', '_generalized_congmod_numbers', '_generic_coerce_map', '_generic_convert_map',
'_get_action_', '_get_local_data', '_giac_', '_giac_init_',
'_gp_', '_gp_init_', '_heegner_best_tau', '_heegner_forms_list',
'_heegner_index_in_EK', '_homset', '_init_category_', '_initial_action_list',
'_initial_coerce_list', '_initial_convert_list', '_interface_', '_interface_init_',
'_interface_is_cached_', '_internal_coerce_map_from', '_internal_convert_map_from', '_introspect_coerce',
'_is_category_initialized', '_is_valid_homomorphism_', '_isoclass', '_json',
'_kash_', '_kash_init_', '_known_points', '_latex_',
'_lmfdb_label', '_lmfdb_regulator', '_macaulay2_', '_macaulay2_init_',
'_magma_init_', '_maple_', '_maple_init_', '_mathematica_',
'_mathematica_init_', '_maxima_', '_maxima_init_', '_maxima_lib_',
'_maxima_lib_init_', '_modsym', '_modular_symbol_normalize', '_morphism',
'_multiple_of_degree_of_isogeny_to_optimal_curve', '_multiple_x_denominator', '_multiple_x_numerator', '_names',
'_normalize_padic_lseries', '_octave_', '_octave_init_', '_p_primary_torsion_basis',
'_pari_', '_pari_init_', '_point', '_point_homset',
'_polymake_', '_polymake_init_', '_populate_coercion_lists_', '_r_init_',
'_reduce_model', '_reduce_point', '_reduction', '_refine_category_',
'_repr_', '_repr_option', '_repr_type', '_sage_',
'_scale_by_units', '_set_conductor', '_set_cremona_label', '_set_element_constructor',
'_set_gens', '_set_modular_degree', '_set_rank', '_set_torsion_order',
'_shortest_paths', '_singular_', '_singular_init_', '_symbolic_',
'_test_an_element', '_test_cardinality', '_test_category', '_test_elements',
'_test_elements_eq_reflexive', '_test_elements_eq_symmetric', '_test_elements_eq_transitive', '_test_elements_neq',
'_test_eq', '_test_new', '_test_not_implemented_methods', '_test_pickling',
'_test_some_elements', '_tester', '_torsion_bound', '_unicode_art_',
'_unset_category', '_unset_coercions_used', '_unset_embedding', 'a1',
'a2', 'a3', 'a4', 'a6',
'a_invariants', 'abelian_variety', 'affine_patch', 'ainvs',
'algebra', 'ambient_space', 'an', 'an_element',
'analytic_rank', 'analytic_rank_upper_bound', 'anlist', 'antilogarithm',
'ap', 'aplist', 'arithmetic_genus', 'automorphisms',
'b2', 'b4', 'b6', 'b8',
'b_invariants', 'base', 'base_extend', 'base_field',
'base_morphism', 'base_ring', 'base_scheme', 'c4',
'c6', 'c_invariants', 'cartesian_product', 'categories',
'category', 'change_ring', 'change_weierstrass_model', 'cm_discriminant',
'codimension', 'coerce', 'coerce_embedding', 'coerce_map_from',
'complement', 'conductor', 'congruence_number', 'construction',
'convert_map_from', 'coordinate_ring', 'count_points', 'cremona_label',
'database_attributes', 'database_curve', 'db', 'defining_ideal',
'defining_polynomial', 'defining_polynomials', 'degree', 'descend_to',
'dimension', 'dimension_absolute', 'dimension_relative', 'discriminant',
'division_field', 'division_polynomial', 'division_polynomial_0', 'divisor',
'divisor_group', 'divisor_of_function', 'dual', 'dump',
'dumps', 'element_class', 'elliptic_exponential', 'embedding_center',
'embedding_morphism', 'eval_modular_form', 'excellent_position', 'formal',
'formal_group', 'fundamental_group', 'galois_representation', 'gen',
'gens', 'gens_certain', 'gens_dict', 'gens_dict_recursive',
'genus', 'geometric_genus', 'get_action', 'global_integral_model',
'global_minimal_model', 'global_minimality_class', 'has_additive_reduction', 'has_bad_reduction',
'has_base', 'has_cm', 'has_coerce_map_from', 'has_global_minimal_model',
'has_good_reduction', 'has_good_reduction_outside_S', 'has_multiplicative_reduction', 'has_nonsplit_multiplicative_reduction',
'has_rational_cm', 'has_split_multiplicative_reduction', 'hasse_invariant', 'heegner_discriminants',
'heegner_discriminants_list', 'heegner_index', 'heegner_index_bound', 'heegner_point',
'heegner_point_height', 'heegner_sha_an', 'height', 'height_function',
'height_pairing_matrix', 'hom', 'hyperelliptic_polynomials', 'identity_morphism',
'inject_variables', 'integral_model', 'integral_points', 'integral_short_weierstrass_model',
'integral_weierstrass_model', 'integral_x_coords_in_interval', 'intersection', 'intersection_multiplicity',
'intersection_points', 'intersects_at', 'irreducible_components', 'is_atomic_repr',
'is_coercion_cached', 'is_complete_intersection', 'is_conversion_cached', 'is_exact',
'is_global_integral_model', 'is_global_minimal_model', 'is_good', 'is_integral',
'is_irreducible', 'is_isogenous', 'is_isomorphic', 'is_local_integral_model',
'is_minimal', 'is_on_curve', 'is_ordinary', 'is_ordinary_singularity',
'is_p_integral', 'is_p_minimal', 'is_parent_of', 'is_projective',
'is_quadratic_twist', 'is_quartic_twist', 'is_semistable', 'is_sextic_twist',
'is_singular', 'is_smooth', 'is_supersingular', 'is_transverse',
'is_x_coord', 'isogenies_prime_degree', 'isogeny', 'isogeny_class',
'isogeny_codomain', 'isogeny_degree', 'isogeny_graph', 'isomorphism_to',
'isomorphisms', 'j_invariant', 'kodaira_symbol', 'kodaira_type',
'kodaira_type_old', 'kolyvagin_point', 'label', 'latex_name',
'latex_variable_names', 'lift_x', 'lll_reduce', 'lmfdb_page',
'local_coordinates', 'local_data', 'local_integral_model', 'local_minimal_model',
'lseries', 'lseries_gross_zagier', 'manin_constant', 'matrix_of_frobenius',
'minimal_discriminant_ideal', 'minimal_model', 'minimal_quadratic_twist', 'mod5family',
'modular_degree', 'modular_form', 'modular_parametrization', 'modular_symbol',
'modular_symbol_numerical', 'modular_symbol_space', 'multiplication_by_m', 'multiplication_by_m_isogeny',
'multiplicity', 'mwrank', 'mwrank_curve', 'neighborhood',
'newform', 'ngens', 'non_minimal_primes', 'nth_iterate',
'objgen', 'objgens', 'optimal_curve', 'orbit',
'ordinary_model', 'ordinary_primes', 'padic_E2', 'padic_height',
'padic_height_pairing_matrix', 'padic_height_via_multiply', 'padic_lseries', 'padic_regulator',
'padic_sigma', 'padic_sigma_truncated', 'parent', 'pari_curve',
'pari_mincurve', 'period_lattice', 'plane_projection', 'plot',
'point', 'point_homset', 'point_search', 'point_set',
'pollack_stevens_modular_symbol', 'preimage', 'projection', 'prove_BSD',
'q_eigenform', 'q_expansion', 'quadratic_transform', 'quadratic_twist',
'quartic_twist', 'rank', 'rank_bound', 'rank_bounds',
'rational_parameterization', 'rational_points', 'real_components', 'reduce',
'reduction', 'register_action', 'register_coercion', 'register_conversion',
'register_embedding', 'regulator', 'regulator_of_points', 'rename',
'reset_name', 'root_number', 'rst_transform', 'satisfies_heegner_hypothesis',
'saturation', 'save', 'scale_curve', 'selmer_rank',
'sextic_twist', 'sha', 'short_weierstrass_model', 'silverman_height_bound',
'simon_two_descent', 'singular_points', 'singular_subscheme', 'some_elements',
'specialization', 'structure_morphism', 'supersingular_primes', 'tamagawa_exponent',
'tamagawa_number', 'tamagawa_number_old', 'tamagawa_numbers', 'tamagawa_product',
'tamagawa_product_bsd', 'tangents', 'tate_curve', 'three_selmer_rank',
'torsion_order', 'torsion_points', 'torsion_polynomial', 'torsion_subgroup',
'two_descent', 'two_descent_simon', 'two_division_polynomial', 'two_torsion_rank',
'union', 'variable_name', 'variable_names', 'weierstrass_p',
'weil_restriction', 'zeta_series']
All the things
This gives quick access to some data that is not stored within the LMFDB, but which is relatively quickly computable. For example,

In [5]:
E.defining_ideal()
Out[5]:
Ideal (-x^3 + x*y*z + y^2*z + 887688*x*z^2 + 321987008*z^3) of Multivariate Polynomial Ring in x, y, z over Rational Field
But one of the great powers is that there are some things which are computed and stored in the LMFDB, and not in sage. We can now immediately give many examples of elliptic curves with prescribed invariants, such as conductor $11050$ and torsion order $2$:

In [6]:
Es = lmfdb_ecurve.search(conductor=11050, torsion_order=2)
print("There are {} curves returned.".format(len(Es)))
E = Es[0]
print(E)
There are 10 curves returned.
Elliptic Curve defined by y^2 + x*y + y = x^3 - 3476*x - 79152 over Rational Field
And for these curves, the lmfdb contains data on their ranks, generators, regulators, and so on.

In [7]:
print(E.gens())
print(E.rank())
print(E.regulator())
[(-34 : 17 : 1)]
1
1.63852610029
In [8]:
res = []
%time for E in Es: res.append(E.gens()); res.append(E.rank()); res.append(E.regulator())
CPU times: user 971 ms, sys: 6.82 ms, total: 978 ms
Wall time: 978 ms
That’s pretty fast, and this is because all of this was pulled from the LMFDB when the curves were returned by the search() function.
In this case, elliptic curves over the rationals are only an okay example, as they’re really well studied and sage can compute much of the data very quickly. On the other hand, through the LMFDB there are millions of examples and corresponding data at one’s fingertips.

This is where we’re really looking for input.

Think of what you might want to have easy access to through an interface from sage to the LMFDB, and tell us. We’re actively seeking comments, suggestions, and requests. Elliptic curves over the rationals are a prototype, and the LMFDB has lots of (much more challenging to compute) data. There is data on the LMFDB that is simply not accessible from within sage.
email: david@lowryduda.com, or post an issue on https://github.com/LMFDB/lmfdb/issues

Now let’s describe what’s going on under the hood a little bit

There is an API for the LMFDB at http://beta.lmfdb.org/api/. This API is a bit green, and we will change certain aspects of it to behave better in the future. A call to the API looks like

http://beta.lmfdb.org/api/elliptic_curves/curves/?rank=i1&conductor=i11050

The result is a large mess of data, which can be exported as json and parsed.
But that’s hard, and the resulting data are not sage objects. They are just strings or ints, and these require time and thought to parse.
So we created a module in sage that writes the API call and parses the output back into sage objects. The 22 curves given by the above API call are the same 22 curves returned by this call:

In [9]:
Es = lmfdb_ecurve.search(rank=1, conductor=11050, max_items=25)
print(len(Es))
E = Es[0]
22
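
To give a sense of what happens under the hood, the raw API call could also be made directly, outside of sage. The sketch below is an illustration only and is not part of LMFDB2sage; in particular, the _format=json parameter and the "data" field of the response are assumptions about the API's JSON export.

import requests

url = "http://beta.lmfdb.org/api/elliptic_curves/curves/"
# "i1" and "i11050" encode typed (integer) search values, as in the URL above
params = {"rank": "i1", "conductor": "i11050", "_format": "json"}
resp = requests.get(url, params=params)
records = resp.json()["data"]   # assumed: raw curve records as dicts
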
The total functionality of this search function is visible from its current documentation.

In [10]:
# Execute this cell for the documentation
print(lmfdb_ecurve.search.__doc__)
    Search the LMFDB for an elliptic curve.

    Note that all inputs are optional, but at least one input is necessary.

    INPUT:

    -  ``label=l`` -- a string ``l`` representing a label in the LMFDB.

    -  ``degree=d`` -- an int ``d`` giving the minimum degree of a
       parameterization of the modular curve

    -  ``conductor=c`` -- an int ``c`` giving the conductor of the curve

    -  ``min_conductor=mc`` -- an int ``mc`` giving a lower bound on the
       conductor for desired curves

    -  ``max_conductor=mc`` -- an int ``mc`` giving an upper bound on the
       conductor for desired curves

    -  ``torsion_order=t`` -- an int ``t`` giving the order of the torsion
       subgroup of the curve

    -  ``rank=r`` -- an int ``r`` giving the rank of the curve

    -  ``regulator=f`` -- a float ``f`` giving the regulator of the curve

    -  ``max_items=m`` -- an int ``m`` (default: 10, max: 100) indicating the
       maximum number of results to return

    -  ``base_item=b`` -- an int ``b`` (default: 0) specifying where to start
       returning values from. The search will begin by returning the ``b``th
       curve. Combined with ``max_items`` to return data in chunks.

    -  ``sort=s`` -- a string ``s`` specifying what database field to sort the
       results on. See the LMFDB api for more info.

    EXAMPLES::

        sage: Es = search(conductor=11050, rank=2)
        [Elliptic Curve defined by y^2 + x*y = x^3 - x^2 - 442*x + 1716 over Rational Field, Elliptic Curve defined by y^2 + x*y = x^3 - x^2 + 1558*x + 11716 over Rational Field]
        sage: E = E[0]
        sage: E.conductor()
        11050
    
In [11]:
# So, for instance, one could perform the following search, finding a unique elliptic curve
lmfdb_ecurve.search(rank=2, torsion_order=3, degree=4608)
Out[11]:
[Elliptic Curve defined by y^2 + y = x^3 + x^2 - 5155*x + 140756 over Rational Field]

What if there are no curves?

If there are no curves satisfying the search criteria, then a message is displayed and that’s that. These searches may take a couple of seconds to complete.
For example, no elliptic curve in the database has rank 5.

In [12]:
lmfdb_ecurve.search(rank=5)
No fields were found satisfying input criteria.

How does one step through the data?

Right now, at most 100 curves are returned in a single API call. This is the limit even from directly querying the API. But one can pass in the argument base_item (the name will probably change… to skip? or perhaps to offset?) to start returning at the base_itemth element.

In [13]:
from pprint import pprint
pprint(lmfdb_ecurve.search(rank=1, max_items=3))              # The last item in this list
print('')
pprint(lmfdb_ecurve.search(rank=1, max_items=3, base_item=2)) # should be the first item in this list
[Elliptic Curve defined by y^2 + x*y = x^3 - 887688*x - 321987008 over Rational Field,
 Elliptic Curve defined by y^2 + x*y + y = x^3 - x^2 + 10795*x - 97828 over Rational Field,
 Elliptic Curve defined by y^2 + x*y + y = x^3 - x^2 - 2294115305*x - 42292668425178 over Rational Field]

[Elliptic Curve defined by y^2 + x*y + y = x^3 - x^2 - 2294115305*x - 42292668425178 over Rational Field,
 Elliptic Curve defined by y^2 + x*y + y = x^3 - x^2 - 3170*x - 49318 over Rational Field,
 Elliptic Curve defined by y^2 + y = x^3 + 1050*x - 26469 over Rational Field]
Included in the documentation is also a bit of hopefulness. Right now, the LMFDB API does not actually accept max_conductor or min_conductor (or arguments of that type). But it will sometime. (This introduces a few extra difficulties on the server side, and so it will take some extra time to decide how to do this).

In [14]:
lmfdb_ecurve.search(rank=1, min_conductor=500, max_conductor=10000)  # Not implemented
---------------------------------------------------------------------------
NotImplementedError                       Traceback (most recent call last)
<ipython-input-14-3d98f2cf7a13> in <module>()
----> 1 lmfdb_ecurve.search(rank=Integer(1), min_conductor=Integer(500), max_conductor=Integer(10000))  # Not implemented

/home/djlowry/Dropbox/EllipticCurve_LMFDB/LMFDB2sage/elliptic_curves.py in search(**kwargs)
     76             kwargs[item]
     77             raise NotImplementedError("This would be a great thing to have, " +
---> 78                 "but the LMFDB api does not yet provide this functionality.")
     79         except KeyError:
     80             pass

NotImplementedError: This would be a great thing to have, but the LMFDB api does not yet provide this functionality.
Our EllipticCurve_rational_field_lmfdb class constructs a sage elliptic curve from the json and overrides (some of) the default methods in sage if there is quicker data available on the LMFDB. In principle, this new object is just a sage object with some slightly different methods.
Generically, documentation and introspection on objects from this class should work. Much of sage’s documentation carries through directly.

In [15]:
print(E.gens.__doc__)
        Return generators for the Mordell-Weil group E(Q) *modulo*
        torsion.

        .. warning::

           If the program fails to give a provably correct result, it
           prints a warning message, but does not raise an
           exception. Use :meth:`~gens_certain` to find out if this
           warning message was printed.

        INPUT:

        - ``proof`` -- bool or None (default None), see
          ``proof.elliptic_curve`` or ``sage.structure.proof``

        - ``verbose`` - (default: None), if specified changes the
           verbosity of mwrank computations

        - ``rank1_search`` - (default: 10), if the curve has analytic
          rank 1, try to find a generator by a direct search up to
          this logarithmic height.  If this fails, the usual mwrank
          procedure is called.

        - algorithm -- one of the following:

          - ``'mwrank_shell'`` (default) -- call mwrank shell command

          - ``'mwrank_lib'`` -- call mwrank C library

        - ``only_use_mwrank`` -- bool (default True) if False, first
          attempts to use more naive, natively implemented methods

        - ``use_database`` -- bool (default True) if True, attempts to
          find curve and gens in the (optional) database

        - ``descent_second_limit`` -- (default: 12) used in 2-descent

        - ``sat_bound`` -- (default: 1000) bound on primes used in
          saturation.  If the computed bound on the index of the
          points found by two-descent in the Mordell-Weil group is
          greater than this, a warning message will be displayed.

        OUTPUT:

        - ``generators`` - list of generators for the Mordell-Weil
           group modulo torsion

        IMPLEMENTATION: Uses Cremona's mwrank C library.

        EXAMPLES::

            sage: E = EllipticCurve('389a')
            sage: E.gens()                 # random output
            [(-1 : 1 : 1), (0 : 0 : 1)]

        A non-integral example::

            sage: E = EllipticCurve([-3/8,-2/3])
            sage: E.gens() # random (up to sign)
            [(10/9 : 29/54 : 1)]

        A non-minimal example::

            sage: E = EllipticCurve('389a1')
            sage: E1 = E.change_weierstrass_model([1/20,0,0,0]); E1
            Elliptic Curve defined by y^2 + 8000*y = x^3 + 400*x^2 - 320000*x over Rational Field
            sage: E1.gens() # random (if database not used)
            [(-400 : 8000 : 1), (0 : -8000 : 1)]
        
Modified methods should have a note indicating that the data comes from the LMFDB, and then give sage’s documentation. This is not yet implemented. (So if you examine the current version, you can see some incomplete docstrings like regulator().)

In [16]:
print(E.regulator.__doc__)
        Return the regulator of the curve. This is taken from the lmfdb if available.

        NOTE:
            In later implementations, this docstring will probably include the
            docstring from sage's regular implementation. But that's not
            currently the case.
        

This concludes our demo of an interface between sage and the LMFDB.

Thank you, and if you have any questions, comments, or concerns, please find me/email me/raise an issue on LMFDB’s github.
XKCD's automation

Posted in Expository, LMFDB, Math.NT, Mathematics, Programming, Python, sagemath | Leave a comment

A Notebook Preparing for a Talk at Quebec-Maine

This is a notebook containing a representative sample of the code I used to generate the results and pictures presented at the Quebec-Maine Number Theory Conference on 9 October 2016. It was written in a Jupyter Notebook using Sage 7.3, and later converted for presentation on this site.
There is a version of the notebook available on github. Alternately, a static html version without WordPress formatting is available here. Finally, this notebook is also available in pdf form.
The slides for my talk are available here.

Testing for a Generalized Conjecture on Iterated Sums of Coefficients of Cusp Forms

Let $f$ be a weight $k$ cusp form with Fourier expansion

$$ f(z) = \sum_{n \geq 1} a(n) e(nz). $$

Deligne has shown that $a(n) \ll n^{\frac{k-1}{2} + \epsilon}$. It is conjectured that

$$ S_f^1(X) := \sum_{m \leq X} a(m) \ll X^{\frac{k-1}{2} + \frac{1}{4} + \epsilon}. $$

It is known that this holds on average, and we recently showed that this holds on average in short intervals.
(See HKLDW1, HKLDW2, and HKLDW3 for details and an overview of work in this area).
This is particularly notable, as the resulting exponent is only 1/4 higher than that of a single coefficient.
This indicates extreme cancellation, far more than what is implied merely by the signs of $a(n)$ being random (random signs alone would suggest only square-root cancellation, i.e. an exponent of $\frac{k-1}{2} + \frac{1}{2}$).

It seems that we also have

$$ \sum_{m \leq X} S_f^1(m) \ll X^{\frac{k-1}{2} + \frac{2}{4} + \epsilon}. $$

That is, the sum of sums seems to add in only an additional 1/4 exponent.
This is unexpected and a bit mysterious.

The purpose of this notebook is to explore this and higher conjectures.
Define the $j$th iterated sum as

$$ S_f^j(X) := \sum_{m \leq X} S_f^{j-1} (m).$$

Then we numerically estimate bounds on the exponent $\delta(j)$ such that

$$ S_f^j(X) \ll X^{\frac{k-1}{2} + \delta(j) + \epsilon}. $$

In [1]:
# This was written in SageMath 7.3 through a Jupyter Notebook.
# Jupyter interfaces to sage by loading it as an extension
%load_ext sage

# sage plays strangely with ipython. This re-allows inline plotting
from IPython.display import display, Image

We first need a list of coefficients of one (or more) cusp forms.
For initial investigation, we begin with a list of 50,000 coefficients of the weight $12$ cusp form on $\text{SL}(2, \mathbb{Z})$, $\Delta(z)$, i.e. Ramanujan’s delta function.
We will use the data associated to the 50,000 coefficients for pictoral investigation as well.

We will be performing some numerical investigation as well.
For this, we will use the first 2.5 million coefficients of $\Delta(z)$.

In [2]:
# Gather 10 coefficients for simple checking
check_10 = delta_qexp(11).coefficients()
print check_10

fiftyk_coeffs = delta_qexp(50000).coefficients()
print fiftyk_coeffs[:10] # these match expected

twomil_coeffs = delta_qexp(2500000).coefficients()
print twomil_coeffs[:10] # these also match expected
[1, -24, 252, -1472, 4830, -6048, -16744, 84480, -113643, -115920]
[1, -24, 252, -1472, 4830, -6048, -16744, 84480, -113643, -115920]
[1, -24, 252, -1472, 4830, -6048, -16744, 84480, -113643, -115920]
In [3]:
# Function which iterates partial sums from a list of coefficients

def partial_sum(baselist):
    ret_list = [baselist[0]]
    for b in baselist[1:]:
        ret_list.append(ret_list[-1] + b)
    return ret_list

print check_10
print partial_sum(check_10) # Should be the partial sums
[1, -24, 252, -1472, 4830, -6048, -16744, 84480, -113643, -115920]
[1, -23, 229, -1243, 3587, -2461, -19205, 65275, -48368, -164288]
In [4]:
# Calculate the first 10 iterated partial sums
# We store them in a single list, `sums_list`
# the zeroth element of the list is the array of initial coefficients
# the first element is the array of first partial sums, S_f(n)
# the second element is the array of second iterated partial sums, S_f^2(n)

fiftyk_sums_list = []
fiftyk_sums_list.append(fiftyk_coeffs) # zeroth index contains coefficients
for j in range(10):                    # jth index contains jth iterate
    fiftyk_sums_list.append(partial_sum(fiftyk_sums_list[-1]))
    
print partial_sum(check_10)
print fiftyk_sums_list[1][:10]         # should match above
    
twomil_sums_list = []
twomil_sums_list.append(twomil_coeffs) # zeroth index contains coefficients
for j in range(10):                    # jth index contains jth iterate
    twomil_sums_list.append(partial_sum(twomil_sums_list[-1]))
    
print twomil_sums_list[1][:10]         # should match above
[1, -23, 229, -1243, 3587, -2461, -19205, 65275, -48368, -164288]
[1, -23, 229, -1243, 3587, -2461, -19205, 65275, -48368, -164288]
[1, -23, 229, -1243, 3587, -2461, -19205, 65275, -48368, -164288]
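
As a crude first numerical check (this quick estimate is an illustration for this note, not the analysis from the talk), we can read off an apparent exponent for each iterated sum by comparing $\log |S_f^j(X)|$ against $\log X$ at the last computed value and subtracting the central normalization $\frac{k-1}{2} = 5.5$.

import math

def apparent_delta(sums_list, j):
    # apparent exponent of the jth iterated sum at the endpoint;
    # assumes the final entry is nonzero
    X = len(sums_list[j])
    return math.log(abs(sums_list[j][-1])) / math.log(X) - 5.5

for j in range(1, 5):
    print j, apparent_delta(twomil_sums_list, j)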

As is easily visible, the sums alternate in sign very rapidly.
For instance, we believe that the first partial sums should change sign about once every $X^{1/4}$ terms in the interval $[X, 2X]$.
In this exploration, we are interested in the sizes of the coefficients.
But in HKLDW3, we investigated some of the sign changes of the partial sums.

Now seems like a nice time to briefly look at the data we currently have.
What do the first 50 thousand coefficients look like?
We normalize them, getting $A(n) = a(n)/n^{5.5}$, and plot these normalized coefficients.

In [5]:
norm_list = []
for n,e in enumerate(fiftyk_coeffs, 1):
    normalized_element = 1.0 * e / (1.0 * n**(5.5))
    norm_list.append(normalized_element)
print norm_list[:10]
[1.00000000000000, -0.530330085889911, 0.598733612492945, -0.718750000000000, 0.691213333204735, -0.317526448138560, -0.376547696558964, 0.911504835123284, -0.641518061271148, -0.366571226366719]
In [6]:
# Make a quick display
normed_coeffs_plot = scatter_plot(zip(range(1,60000), norm_list), markersize=.02)
normed_coeffs_plot.save("normed_coeffs_plot.png")
display(Image("normed_coeffs_plot.png"))

Since some figures will be featuring prominently in the talk I’m giving at Quebec-Maine, let us make high-quality figures now.


Posted in Math.NT, Mathematics, Open, Programming, sagemath | 1 Comment