Category Archives: Programming

Programming Masthead

I maintain the following programming projects:

LMFDB: (source), the code running the L-functions and Modular Forms website LMFDB. I’m one of the major contributors to the project, and the grant supporting my postdoc at the University of Warwick includes support for me to contribute to the LMFDB.

HNRSS: (source), a HackerNews RSS generator written in python. HNRSS periodically updates RSS feeds from the HN frontpage and best list. It also attempts to automatically summarize the link (if there is a link) and includes the top five comments, all to make it easier to determine whether it’s worth checking out.

LaTeX2Jax: (source), a tool to convert LaTeX documents to HTML with MathJax. This is a modification of the earlier MSE2WP, which converts Math.StackExchange flavored markdown to WordPress+MathJax compatible html. In particular, this is more general, and allows better control of the resulting html by exposing more CSS elements (that generically aren’t available on free WordPress setups). This is what is used for all math posts on this site.

MSE2WP: (source), a tool to convert Math.Stackexchange flavored markdown to WordPress+MathJax compatible html. This was once written for the Math.Stackexchange Community Blog. But as that blog is shutting down, there is much less of a purpose for this script. Note that this began as a modified version of latex2wp.

 

I actively contribute to:

vundle: (source), a minimalist but powerful vim plugin manager. Now that vim8 comes with a native plugin manager, there are fewer reasons to use vundle. But I find the vundle interface and code very straightforward. Users with lots of plugins, who frequently change plugins, or with complicated dependencies may want to consider vim-plug, which was inspired by vundle, but which takes a more sophisticated and complicated look at plugins and is often faster.

python-markdown2: (source), a fast and complete python implementation of markdown, with a few additional features.

 

And I generally support or have contributed to:

SageMath: (main site), a free and open source system of tools for mathematics. Some think of it as a free alternative to the “Big M’s” — Maple, Mathematica, Magma.

Matplotlib: (main site), a plotting library in python. Most of the static plots on this site were created using matplotlib.

crouton: (source), a tool for turning Chromebooks, which by default are very limited in capability, into hackable linux laptops. This lets you run Linux directly on the device at the same time as having ChromeOS installed. The only cost is that there is absolutely no physical security at all (and every once in a while a ChromeOS update comes around and breaks lots of things). It’s great!

 

Below, you can find my most recent posts tagged “Programming” on this site.

I will note the following posts which have received lots of positive feedback.

  1. A Notebook Preparing for a Talk at Quebec-Maine
  2. A Brief Notebook on Cryptography
  3. Computing pi with Tools from Calculus (which includes computational tidbits, though no actual programming).

Email configuration for mutt on a webfaction server

I have email set up for my sites through webfaction. I have some number of mailboxes and some number of users, and a few users share the same mailboxes.

For a long time I used either a direct webmail or forwarded my site email to a different account, but I’m moving towards more email self-reliance.

A few minutes of searching didn’t tell me how to set up mutt on webfaction. Here is the minimal configuration I ended up with.

I will assume that we are configuring email for user@mysite.com with mailbox MAILBOX, where the password for that mailbox is MAILBOXPASSWORD. I will also assume that the user, mailbox, and password have already been set up. The missing step is to connect it all to mutt.

My .muttrc looks like


set realname = "FIRST LAST"
set from = "user@mysite.com"
set use_from = yes
set edit_headers = yes

set imap_user = 'MAILBOX'
set imap_pass = 'MAILBOXPASSWORD'

set folder = "imaps://mail.webfaction.com:993"
set spoolfile = "+INBOX"
set record = "+sent"
set postponed = "+postponed"

set smtp_url = "smtp://MAILBOX@smtp.webfaction.com:587/"
set smtp_pass = "MAILBOXPASSWORD"

# optional caching and ensure security
set header_cache = "~/.mutt/cache/headers"
set message_cachedir = "~/.mutt/cache/bodies"
set certificate_file = "~/.mutt/certificates"

set ssl_starttls=yes
set ssl_force_tls=yes

It’s not particularly complicated, but it wasn’t obvious to me at first either.


Choosing functions and generating figures for “When are there continuous choices for the mean value abscissa?”

In my previous note, I described some of the main ideas behind the paper “When are there continuous choices for the mean value abscissa?” that I wrote joint with Miles Wheeler. In this note, I discuss the process behind generating the functions and figures in our paper.

Generating the figures came in two steps: we first needed to choose which functions to plot; then we needed to figure out how to graphically solve the general mean value abscissa problem for each.

Afterwards, we could decide how to plot these functions well.

Choosing the right functions to plot

The first goal is to find the right functions to plot. From the discussion in our paper, this amounts to specifying certain local conditions of the function. And for a first pass, we only used these prescribed local conditions.

The idea is this: to study solutions to the mean value problem, we look at the zeroes of the function $$ F(b, c) = \frac{f(b) - f(a)}{b - a} - f'(c). $$ When $F(b, c) = 0$, we see that $c$ is a mean value abscissa for $f$ on the interval $(a, b)$.

By the implicit function theorem, we can solve for $c$ as a function of $b$ around a given solution $(b_0, c_0)$ if $F_c(b_0, c_0) \neq 0$. For this particular function, $F_c(b_0, c_0) = -f''(c_0)$.
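
We can sanity-check this with sympy, treating $f$ as a generic symbolic function (this snippet is mine, not from the paper):

import sympy

a, b, c = sympy.symbols('a b c')
f = sympy.Function('f')

# F(b, c) = (f(b) - f(a))/(b - a) - f'(c)
F = (f(b) - f(a)) / (b - a) - sympy.diff(f(c), c)

# the partial derivative in c is -f''(c)
print(sympy.diff(F, c))   # prints -Derivative(f(c), (c, 2))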

More generally, it turns out that the order of vanishing of $f'$ at $b_0$ and $c_0$ governs the local behaviour of solutions in a neighborhood of $(b_0, c_0)$.

To make figures, we thus need to make functions with prescribed orders of vanishing of $f'$ at points $b_0$ and $c_0$, where $c_0$ is itself a mean value abscissa for the interval $(a_0, b_0)$.

Without loss of generality, it suffices to consider the case when $f(a_0) = f(b_0) = 0$, as otherwise we can study the function $$
g(x) = f(x) - \left( \frac{f(b_0) - f(a_0)}{b_0 - a_0}(x - a_0) + f(a_0) \right),
$$
which has $g(a_0) = g(b_0) = 0$, and the triples $(a, b, c)$ solving the mean value problem for $f$ are exactly those solving it for $g$.

And for consistency, we made the arbitrary decisions to have $a_0 = 0$, $b_0 = 3$, and $c_0 = 1$. This decision simplified many of the plotting decisions, as the important points were always $0$, $1$, and $3$.

A first idea

Thus the first task is to be able to generate functions $f$ such that:

  1. $f(0) = 0$,
  2. $f(3) = 0$,
  3. $f'(1) = 0$ (so that $1$ is a mean value abscissa),
  4. $f'(x)$ has prescribed order of vanishing at $1$, and
  5. $f'(x)$ has prescribed order of vanishing at $3$.

These conditions can all be met by an appropriate interpolating polynomial. As we are setting conditions on both $f$ and its derivatives at multiple points, this amounts to the fundamental problem in Hermite interpolation. Alternatively, this amounts to using Taylor’s theorem at each point and then using the Chinese Remainder Theorem over $\mathbb{Q}[x]$ to combine these polynomials together.

There are clever ways of solving this, but this task is so small that it doesn’t require cleverness. In fact, this is one of the laziest solutions we could think of. We know that given $n$ Hermite conditions, there is a unique polynomial of degree $n – 1$ that interpolates these conditions. Thus we

  1. determine the degree of the polynomial,
  2. create a degree $n-1$ polynomial with variable coefficients in sympy,
  3. have sympy symbolically compute the relations the coefficients must satisfy,
  4. ask sympy to solve this symbolic system of equations.

In code, this looks like

import sympy
from sympy.abc import X, B, C, D    # Establish our variable names
def interpolate(conds):
    """
    Finds the polynomial of minimal degree that solves the given Hermite conditions.

    conds is a list of the form
      [(x1, r1, v1), (x2, r2, v2), ...]
    where the polynomial p is to satisfy p^(r_1) (x_1) = v_1, and so on.
    """
    # the degree will be one less than the number of conditions
    n = len(conds)

    # generate a symbol for each coefficient
    A = [sympy.Symbol("a[%d]" % i) for i in range(n)]

    # generate the desired polynomial symbolically
    P = sum([A[i] * X**i for i in range(n)])

    # generate the equations the polynomial must satisfy
    #
    # for each (x, r, v), sympy evaluates the rth derivative of P wrt X,
    # substitutes x in for X, and requires that this equals v.
    EQNS = [sympy.diff(P, X, r).subs(X, x) - v for x, r, v in conds]

    # solve this system for the coefficients A[i]
    SOLN = sympy.solve(EQNS, A)

    return P.subs(SOLN)

We note that we use the convention that a sympy symbol for something is capitalized. For example, we think of the polynomial as being represented by $$
p(x) = a(0) + a(1)x + a(2)x^2 + \cdots + a(n)x^n.
$$
In sympy variables, we think of this as

P = A[0] + A[1] * X + A[2] * X**2 + ... + A[n] * X**n.

With this code, we can ask for the unique degree 1 polynomial which is $1$ at $1$, and whose first derivative is $2$ at $1$.

> interpolate([(1, 0, 1), (1, 1, 2)])
2*X - 1

Indeed, $2x - 1$ is this polynomial.

Too rigid

We have now produced a minimal Hermite solver. But there is a major downside: the unique polynomial exhibiting the necessary behaviours we required is essentially never a good didactic example. We don’t just want plots — we want beautiful, simple plots.

Add knobs to turn

We add two conditions for additional control, and hopefully for additional simplicity of the resulting plot.

Firstly, we added the additional constraint that $f(1) = 1$. This is a small change, but now all three points of interest fit within a $[0, 3] \times [0, 3]$ box.

Secondly, we also allow the choice of the value of the first nonvanishing derivatives at $1$ and $3$. In reality, we treat these as parameters to change the shape of the resulting graph. Roughly speaking, if the order of vanishing of $f(x) - f(1)$ at $1$ is $k$, then near $1$ we have the approximation $f(x) \approx f(1) + f^{(k)}(1)(x - 1)^k/k!$. Morally, the larger the value of the derivative, the more the graph will resemble $(x - 1)^k$ near that point.

In code, we implemented this by making functions that will add the necessary Hermite conditions to our input to interpolate.

# We fix the values of a0, b0, c0.
a0 = 0
b0 = 3
c0 = 1

# We require p(a0) = 0, p(b0) = 0, p(c0) = 1, p'(c0) = 0.
BASIC_CONDS = [(a0, 0, 0), (b0, 0, 0), (c0, 0, 1), (c0, 1, 0)]

def c_degen(n, residue):
    """
    Give Hermite conditions for order of vanishing at c0 equal to `n`, with
    first nonzero residue `residue`.

    NOTE: the order `n` is in terms of f', not of f. That is, this is the amount
    of additional degeneracy to add.  This may be a source of off-by-one errors.
    """
    return [(c0, 1 + i, 0) for i in range(1, n + 1)] + [(c0, n + 2, residue)]


def b_degen(n, residue):
    """
    Give Hermite conditions for order of vanishing at b0 equal to `n`, with
    first nonzero residue `residue`.
    """
    return [(b0, i, 0) for i in range(1, n + 1)] + [(b0, n + 1, residue)]

def poly_with_degens(nc=0, nb=0, residue_c=3, residue_b=3):
    """
    Give unique polynomial with given degeneracies for this MVT problem.

    `nc` is the order of vanishing of f' at c0, with first nonzero residue `residue_c`.
    `nb` is the order of vanishing of f' at b0, with first nonzero residue `residue_b`.
    """
    conds = BASIC_CONDS + c_degen(nc, residue_c) + b_degen(nb, residue_b)
    return interpolate(conds)

Then apparently the unique degree $5$ polynomial $f$ with $f(0) = f(3) = f'(1) = 0$, $f(1) = 1$, and $f''(1) = f'(3) = 3$ is given by

> poly_with_degens()
11*X**5/16 - 21*X**4/4 + 113*X**3/8 - 65*X**2/4 + 123*X/16

Too many knobs

In principle, this is a great solution. And if you turn the knobs enough, you can get a really nice picture. But the problem with this system (and with many polynomial interpolation problems) is that when you add conditions, you can introduce many jagged peaks and sudden changes. These can behave somewhat unpredictably and chaotically — small changes in Hermite conditions can lead to drastic changes in resulting polynomial shape.

What we really want is for the interpolator to give a polynomial that doesn’t have sudden changes.

Minimize change

The problem: the polynomial can have really rapid changes that makes the plots look bad.

The solution: minimize the polynomial’s change.

That is, if $f$ is our polynomial, then its rate of change at $x$ is $f'(x)$. Our idea is to “minimize” the average size of the derivative $f'$ — this should help keep the function in frame. There are many ways to do this, but we want to choose one that fits into our scheme (so that it requires as little additional work as possible) but which works well.

We decide that we want to focus our graphs on the interval $(0, 4)$. Then we can measure the average size of the derivative $f'$ by (the square of) its L2 norm on $(0, 4)$: $$ L2(f) = \int_0^4 (f'(x))^2 \, dx. $$ (Minimizing the squared norm and minimizing the norm are the same, so we work with the simpler integral.)
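
For instance (a toy example of mine, not from the paper), for $f(x) = x^2$ this measure is $\int_0^4 (2x)^2 \, dx = 256/3$:

import sympy
from sympy.abc import X

# the L2 measure of the derivative of X**2 on (0, 4)
print(sympy.integrate(sympy.diff(X**2, X)**2, (X, 0, 4)))   # prints 256/3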

We add an additional Hermite condition of the form (pt, order, VAL) and think of VAL as an unknown symbol. We arbitrarily decided to start with $pt = 2$ (so that now the behavior at each of the points $0, 1, 2, 3$ is controlled in some way) and $order = 1$. The point itself doesn’t matter very much, since we’re going to minimize over the family of polynomials that interpolate the other Hermite conditions with one degree of freedom.

In other words, we are adding in the condition that $f'(2) = VAL$ for an unknown VAL.

We will have sympy compute the interpolating polynomial through its normal set of (explicit) conditions as well as the symbolic condition (2, 1, VAL). Then $f = f(\mathrm{VAL}; x)$.

Then we have sympy compute the (symbolic) L2 norm of the derivative of this polynomial with respect to VAL over the interval $(0, 4)$, $$L2(\mathrm{VAL}) = \int_0^4 f'(\mathrm{VAL}; x)^2 \, dx.$$

Finally, to minimize the L2 norm, we have sympy compute the derivative of $L2(\mathrm{VAL})$ with respect to VAL and find the critical points, where the derivative is equal to $0$. We choose the first one to give our value of VAL.

In code, this looks like

def smoother_interpolate(conds, ctrl_point=2, order=1, interval=(0,4)):
    """
    Find the polynomial of minimal degree that interpolates the Hermite
    conditions in `conds`, and whose behavior at `ctrl_point` minimizes the L2
    norm on `interval` of its derivative.
    """
    # Add the symbolic point to the conditions.
    # Recall that D is a sympy variable
    new_conds = conds + [(ctrl_point, order, D)]

    # Find the polynomial interpolating `new_conds`, symbolic in X *and* D
    P = interpolate(new_conds)

    # Compute L2 norm of the derivative on `interval`
    L2 = sympy.integrate(sympy.diff(P, X)**2, (X, *interval))

    # Take the first critical point of the L2 norm with respect to D
    SOLN = sympy.solve(sympy.diff(L2, D), D)[0]

    # Substitute the minimizing solution in for D and return
    return P.subs(D, SOLN)


def smoother_poly_with_degens(nc=0, nb=0, residue_c=3, residue_b=3):
    """
    Give unique polynomial with given degeneracies for this MVT problem whose
    derivative on (0, 4) has minimal L2 norm.

    `nc` is the order of vanishing of f' at c0, with first nonzero residue `residue_c`.
    `nb` is the order of vanishing of f' at b0, with first nonzero residue `residue_b`.

    """
    conds = BASIC_CONDS + c_degen(nc, residue_c) + b_degen(nb, residue_b)
    return smoother_interpolate(conds)

Then apparently the degree $6$ polynomial $f$ with $f(0) = f(3) = f'(1) = 0$, $f(1) = 1$, and $f''(1) = f'(3) = 3$, and with minimal L2 derivative norm on $(0, 4)$, is given by

> smoother_poly_with_degens()
-9660585*X**6/33224848 + 27446837*X**5/8306212 - 232124001*X**4/16612424
  + 57105493*X**3/2076553 - 858703085*X**2/33224848 + 85590321*X/8306212

> sympy.N(smoother_poly_with_degens())
-0.290763858423069*X**6 + 3.30437472580762*X**5 - 13.9729157526921*X**4
  + 27.5001374874612*X**3 - 25.8452073279613*X**2 + 10.3043747258076*X

Is it much better? Let’s compute the L2 norms.

> interval = (0, 4)
> sympy.N(sympy.integrate(sympy.diff(poly_with_degens(), X)**2, (X, *interval)))
1865.15411706349

> sympy.N(sympy.integrate(sympy.diff(smoother_poly_with_degens(), X)**2, (X, *interval)))
41.1612799050325

That’s beautiful. And you know what’s better? Sympy did all the hard work.

For comparison, we can produce a basic plot using numpy and matplotlib.

import matplotlib.pyplot as plt
import numpy as np

def basic_plot(F, n=300):
    fig = plt.figure(figsize=(6, 2.5))
    ax = fig.add_subplot(1, 1, 1)

    # evaluate F at n points spanning (-0.5, 4.5)
    b1d = np.linspace(-.5, 4.5, n)
    f = sympy.lambdify(X, F)(b1d)

    ax.plot(b1d, f, 'k')
    ax.set_aspect('equal')
    ax.grid(True)
    ax.set_xlim([-.5, 4.5])
    ax.set_ylim([-1, 5])

    # mark the three points of interest: a0, c0, and b0
    ax.plot([0, c0, b0], [0, F.subs(X, c0), F.subs(X, b0)], 'ko')
    fig.savefig("basic_plot.pdf")

Then the plot of poly_with_degens() is given by

 

 

The polynomial jumps upwards immediately and strongly for $x > 3$.

On the other hand, the plot of smoother_poly_with_degens() is given by

This stays in frame between $0$ and $4$, as desired.

Choose data to highlight and make the functions

This was enough to generate the functions for our paper. Actually, the three functions (in a total of six plots) in figures 1, 2, and 5 of our paper were hand-crafted for didactic purposes: the first two functions are simply a cubic and a quadratic with certain points labelled. The last function was the non-analytic-but-smooth semi-pathological counterexample, and so cannot be created through polynomial interpolation.

But the four functions highlighting different degenerate conditions in figures 3 and 4 were each created using this L2-minimizing interpolation system.

In particular, the function in figure 3 is

F3 = smoother_poly_with_degens(nc=1, residue_b=-3)

which is one of the simplest L2-minimizing polynomials with the typical Hermite conditions, $f''(c_0) = 0$, and the opposite of the default sign for $f'(b_0)$.

The three functions in figure 4 are (from left to right)

F_bmin = smoother_poly_with_degens(nc=1, nb=1, residue_c=10, residue_b=10)
F_bzero = smoother_poly_with_degens(nc=1, nb=2, residue_c=-20, residue_b=20)
F_bmax = smoother_poly_with_degens(nc=1, nb=1, residue_c=20, residue_b=-10)

We chose much larger residues because the goal of the figure is to highlight how the local behavior at those points corresponds to the behavior of the mean value abscissae, and larger residues make those local behaviors more dominant.

Plotting all possible mean value abscissae

Now that we can choose our functions, we want to figure out how to find all solutions $(b, c)$ of the mean value condition $F(b, c) = 0$, where $$
F(b, c) = \frac{f(b) - f(a_0)}{b - a_0} - f'(c).
$$
Here I write $a_0$ as it’s fixed, while both $b$ and $c$ vary.

Our primary interest in these solutions is to facilitate graphical experimentation and exploration of the problem — we want these pictures to help build intuition and provide examples.

Although this may seem harder, it is actually a much simpler problem. The function $F(b, c)$ is continuous (and roughly as smooth as $f$ is).

Our general idea is a common approach for this sort of problem:

  1. Compute the values of $F(b, c)$ on a tight mesh (or grid) of points.
  2. Restrict attention to the domain where solutions are meaningful.
  3. Plot the contour of the $0$-level set.

Contours can be well-approximated from a tight mesh. In short, if two adjacent points in the mesh of computed values have opposite signs, then necessarily $F(b, c) = 0$ somewhere between them. For a tight enough mesh, good plots can be made.
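
To illustrate the sign-change principle in one dimension (a toy example of mine, not from the paper):

import numpy as np

# sampled values of some function along one row of the mesh
vals = np.array([3.0, 1.2, -0.4, -2.0, 0.5])

# indices where adjacent samples have opposite signs: a zero of the
# sampled function lies between each such pair of mesh points
crossings = np.where(np.diff(np.sign(vals)) != 0)[0]
print(crossings)   # prints [1 3]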

To solve this, we again have sympy create and compute the function for us. We use numpy to generate the mesh (and to vectorize the computations, although this isn’t particularly important in this application), and matplotlib to plot the resulting contour.

Before giving code, note that the symbol F in the sympy code below stands for what we have been mathematically referring to as $f$, and not $F$. This is a potential confusion from our sympy-capitalization convention. It is still necessary to have sympy compute $F$ from $f$.

In code, this looks like

import sympy
import numpy as np
import matplotlib.pyplot as plt

def abscissa_plot(F, n=300):
    # Compute the derivative of f
    DF = sympy.diff(F,X)

    # Define CAP_F --- "capital F"
    #
    # this is (f(b) - f(0))/(b - 0) - f'(c).
    CAP_F = (F.subs(X, B) - F.subs(X, 0)) / (B - 0) - DF.subs(X, C)

    # build the mesh
    b1d = np.linspace(-.5, 4.5, n)
    b2d, c2d = np.meshgrid(b1d, b1d)

    # compute CAP_F within the mesh
    cap_f_mesh = sympy.lambdify((B, C), CAP_F)(b2d, c2d)

    # restrict attention to below the diagonal --- we require c < b
    # (although the mask inequality looks reversed from this perspective)
    valid_cap_f_mesh = np.ma.array(cap_f_mesh, mask=c2d > b2d)

    # Set up plot basics
    fig = plt.figure(figsize=(6, 2.5))
    ax = fig.add_subplot(1, 1, 1)
    ax.set_aspect('equal')
    ax.grid(True)
    ax.set_xlim([-.5, 4.5])
    ax.set_ylim([-.5, 4.5])

    # plot the contour
    ax.contour(b2d, c2d, valid_cap_f_mesh, [0], colors='k')

    # plot a diagonal line representing the boundary c = b
    ax.plot(b1d, b1d, 'k--')

    # plot the guaranteed point (b0, c0)
    ax.plot(b0, c0, 'ko')

    fig.savefig("abscissa_plot.pdf")

Then plots of solutions to $F(b, c) = 0$ for our basic polynomials are given by

for poly_with_degens(), while for smoother_poly_with_degens() we get

And for comparison, we can now create a (slightly worse looking) version of the plots in figure 3.

F3 = smoother_poly_with_degens(nc=1, residue_b=-3)
basic_plot(F3)
abscissa_plot(F3)

This produces the two plots

For comparison, a (slightly scaled) version of the actual figure appearing in the paper is

 

Copy of the code

A copy of the code used in this note (and correspondingly the code used to generate the functions for the paper) is available on my github as an ipython notebook.


Talk: Finding Congruent Numbers, Arithmetic Progressions of Squares, and Triangles

Here are some notes for my talk Finding Congruent Numbers, Arithmetic Progressions of Squares, and Triangles (an invitation to analytic number theory), which I’m giving on Tuesday 26 February at Macalester College.

The slides for my talk are available here.

The overarching idea of the talk is to explore the deep relationship between

  1. right triangles with rational side lengths and area $n$,
  2. three-term arithmetic progressions of squares with common difference $n$, and
  3. rational points on the elliptic curve $Y^2 = X^3 - n^2 X$.

If one of these exists, then all three exist, and in fact there are one-to-one correspondences between them. Such an $n$ is called a congruent number.

By understanding this relationship, we also describe the ideas and results in the paper A Shifted Sum for the Congruent Number Problem, which I wrote jointly with Tom Hulse, Chan Ieong Kuan, and Alex Walker.

Towards the end of the talk, I say that in practice, the best way to decide if a (reasonably sized) number is congruent is through elliptic curves. Given a computer, we can investigate whether the number $n$ is congruent through a computer algebra system like sage.

For the rest of this note, I’ll describe how one can use sage to determine whether a number is congruent, and how to use sage to add points on elliptic curves to generate more triangles corresponding to a particular congruent number.

Firstly, one needs access to sage. It’s free to install, but it’s quite large. The easiest way to begin using sage immediately is to use cocalc.com, a free interface to sage (and other tools) that was created by William Stein, who also created sage.

In a sage session, we can create an elliptic curve through


> E6 = EllipticCurve([-36, 0])
> E6
Elliptic Curve defined by y^2 = x^3 - 36*x over Rational Field

More generally, to create the curve corresponding to whether or not $n$ is congruent, you can use


> n = 6   # (or anything you want)
> E = EllipticCurve([-n**2, 0])

We can ask sage whether our curve has many rational points by asking it to (try to) compute the rank.


> E6.rank()
1

If the rank is at least $1$, then there are infinitely many rational points on the curve and $n$ is a congruent number. If the rank is $0$, then $n$ is not congruent.
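
For example (an example of mine, not from the talk notes), $1$ is famously not a congruent number, and correspondingly the associated curve has rank $0$:

> E1 = EllipticCurve([-1, 0])   # the curve for n = 1
> E1.rank()
0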

For the curve $Y^2 = X^3 - 36 X$ corresponding to whether $6$ is congruent, sage returns that the rank is $1$. We can ask sage to try to find a rational point on the elliptic curve through


> E6.point_search(10)
[(-3 : 9 : 1)]

The 10 in this code is a limit on the complexity of the point. The precise definition isn’t important — using $10$ is a reasonable limit for us.

We see that this output something. When sage examines the elliptic curve, it uses the equation $Y^2 Z = X^3 - 36 X Z^2$ — it turns out that in many cases, it’s easier to perform computations when every term is a polynomial of the same degree. The coordinates it’s giving us are of the form $(X : Y : Z)$, which looks a bit odd. We can ask sage to return just the XY coordinates as well.


> Pt = E6.point_search(10)[0]  # The [0] means to return the first element of the list
> Pt.xy()
(-3, 9)

In my talk, I describe a correspondence between points on elliptic curves and rational right triangles. In the talk, it arises as the choice of coordinates. But what matters for us right now is that the correspondence taking a point $(x, y)$ on an elliptic curve to a triangle $(a, b, c)$ is given by
$$(x, y) \mapsto \Big( \frac{n^2 - x^2}{y}, \frac{-2 \cdot x \cdot n}{y}, \frac{n^2 + x^2}{y} \Big).$$

We can write a sage function to perform this map for us, through


> def pt_to_triangle(P):
    x, y = P.xy()
    # here n = 6, so n**2 = 36 and the middle term is -2*x*n/y
    return (36 - x**2)/y, (-2*x*6/y), (36 + x**2)/y

> pt_to_triangle(Pt)
(3, 4, 5)

This returns the $(3, 4, 5)$ triangle!
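
As a quick sanity check of the correspondences (my verification, not part of the talk), we can confirm that this triangle has area $6$ and that it produces a three-term arithmetic progression of squares with common difference $6$:

from fractions import Fraction

a, b, c = Fraction(3), Fraction(4), Fraction(5)
assert a**2 + b**2 == c**2               # a right triangle...
assert a * b / 2 == 6                    # ...with area 6

middle = (c / 2)**2                      # 25/4
assert middle - 6 == Fraction(1, 2)**2   # 1/4  = (1/2)^2
assert middle + 6 == Fraction(7, 2)**2   # 49/4 = (7/2)^2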

Of course, we knew this triangle the whole time. But we can use sage to get more points. A very cool fact is that rational points on elliptic curves form a group under a sort of addition — we can add points on elliptic curves together and get more rational points. Sage is very happy to perform this addition for us, and then to see what triangle results.


> Pt2 = Pt + Pt
> Pt2.xy()
(25/4, -35/8)
> pt_to_triangle(Pt2)
(7/10, 120/7, -1201/70)

Another rational triangle with area $6$ is the $(7/10, 120/7, 1201/70)$ triangle. (You might notice that sage returned a negative hypotenuse, but it’s the absolute values that matter for the area). After scaling this to an integer triangle, we get the integer right triangle $(49, 1200, 1201)$ (and we can check that the squarefree part of the area is $6$).

Let’s do one more.


> Pt3 = Pt + Pt + Pt
> Pt3.xy()
(-1587/1369, -321057/50653)
> pt_to_triangle(Pt3)
(-4653/851, -3404/1551, -7776485/1319901)

That’s a complicated triangle! It may be fun to experiment some more — the triangles rapidly become very, very complicated. In fact, it was very important to the main result of our paper that these triangles become so complicated so quickly!


Writing a Python Script to be Used in Vim

It is no secret that I love vim. I’ve written and helped maintain a variety of vim plugins, but I’ve mostly restricted my attention to direct use of vimscript or to calling external tools and redirecting their output to a vim buffer.

Only recently did I learn how easy it is to use python to interact directly with vim objects through vim’s python interface. And it was so easy that this felt a bit like writing a quick shell script to facilitate some immediate task — it allows quick and dirty work.

In this note, I’ll review how I recently wrote a quick python function (to be used in vim) to facilitate conversion of python f-strings (which I love) to python 3.5 (which I also love, but which is admittedly a bit behind).

Of course the real reason is to remind future me just how easy this is.

Description of the Problem

In python 3.6+, one can use f-strings to format strings. This syntax looks like

x = 4
print(f"Python has at least {x} many ways to format strings now.")

I love them, they’re great. But they are not backportable since they are a change to fundamental syntax. Thus systems relying on previous versions of python (be it python2 or even python3.5) can’t parse python files containing f-strings.

The goal is this: given a line containing an f-string, write an equivalent line containing no f-string.

The formula I choose is simple: use .format() instead. Thus given

s = f"string with {some} {variables}"

we will convert this to

s = "string with {} {}".format(some, variables)

I will assume that the string takes place on one line for simplicity (and because in my application this was almost always true), but that there are any number of variables included in the line.

Background on python in vim

Some good insight is gained through reading :h python-vim. In fact, this is essentially the only documentation on the python-vim interface (though it is pretty good documentation — perhaps the actual best source of learning this interface is to read some plugins which leverage the python connection).

I will assume that vim has been compiled with python3 support (which can be checked by looking for +python3 in vim --version). Then the key idea is that one can use :py3 to call python functions. For example,

:py3 print("Hello")

Being restricted to one line is inconvenient, and it is far more useful to use the form

:py3 << EOF
def say_hello():
  print("Sup doc.")
EOF
:py3 say_hello()

In this way, we can define and use python functions. Right now, the output of these functions simply appears in the status lines at the bottom of the screen. This is of limited value. Often, we will really want to interact directly with the vim buffer. The key to doing this is the vim module for python.

The vim module for python can be used through

import vim

# do vim things

This is what :h python-vim actually documents, and I defer to the list of methods there. The important thing is that it provides direct access to vim buffers.

Today, we will use only the direct access to the current line, vim.current.line.
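
For instance, a minimal round trip through the current line looks like this (a sketch of the general pattern, not yet the f-string converter):

:py3 << EOF
import vim

# read the line under the cursor, transform it, and write it back
line = vim.current.line
vim.current.line = line.rstrip()   # e.g. strip trailing whitespace
EOF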

A Solution

I very routinely open scratch buffers (i.e. buffers that I call scratch.tmp). Sometimes I jot down momentary notes, or I edit vim macros in place, or I define vim functions for short-term use. I will assume that we have opened up a scratch buffer and our python file withfstrings.py. (In fact, I usually open it in a vsplit). I also very routinely assign Q to do whatever convenient function I have written in my scratch buffer (or whatever macro I want to repeat most simply). Of course, these are my idiosyncratic habits — the ideas are easy enough to translate.

In our scratch buffer, we can write a vim function, which is internally a python function, and map the symbol Q to call it.

function! Convert()
py3 << EOF

import vim

import re
bracketed = re.compile("{.*?}")

def convert():
    line = vim.current.line
    single = line.find("'")
    double = line.find('"')
    if single == -1:
        sep = '"'
    elif double == -1 or single < double:
        sep = "'"
    else:
        sep = '"'

    # l sep inner sep post
    l, _, m = line.partition(sep)
    inner, _, post = m.partition(sep)

    # get rid of the leading f
    l = l[:-1]

    ret = ""
    var_names = bracketed.findall(inner)
    var_names = list(map(lambda x: x[1:-1], var_names))

    ret += l + sep + inner + sep
    ret += ".format("
    for var in var_names:
        ret += "{}={}, ".format(var, var)
    ret = ret[:-2]
    ret += ")" + post
    vim.current.line = ret

convert()
EOF

endfunction

com! Conv call Convert()
nnoremap Q :Conv<CR>

To use, one sources the scratch buffer (either directly, like :source scratch.tmp, or through navigating to its split and using :so %). Then one can find a line containing an f-string and hit Q (or alternately use :Conv).

Why is this good?

Although nontrivial, writing a simple python script like this takes only a few minutes. And then converting f-strings became the least important part of my backporting project.

And more broadly, this is such an easy process. Off-loading the mental burden of repeating annoying tasks is valuable. The most revelatory aspect of this experience for me is that it’s easy enough to allow throwaway functions for momentary ease.

In my recent application, I converted 482 f-strings using this function. A few of them were multiline strings and my (admittedly simple) function failed — and so I did those by hand. But it took me less than 10 minutes to write the function (maybe even less than 5, though my first function expected double quotes only). Overall, this was a great gain.

Keeping the Script

Having come this far, it’s worth considering how we might keep the script around if we wanted to.

One good approach is to separate the python from the vim (allowing, for instance, testing on the python itself) and have a thin vim wrapper. We’ll look at this in two possible levels of encapsulation.

  1. Separate the python from the vim.
  2. Make the function a plugin. (Presumably for slightly more meaningful functions than this one).

Separate the Files

Let’s separate the python from the vim. In this way we can approach the python as a proper little python script, merely waiting for a thin vim wrapper around it at the end.

We can separate the python out into the following file, convert_fstring.py.

# convert_fstring.py
import re
bracketed = re.compile("{.*?}")

def convert(line):
    single = line.find("'")
    double = line.find('"')
    if single == -1:
        sep = '"'
    elif double == -1 or single < double:
        sep = "'"
    else:
        sep = '"'

    # l sep inner sep post
    l, _, m = line.partition(sep)
    inner, _, post = m.partition(sep)

    # get rid of f
    l = l[:-1]

    ret = ""
    var_names = bracketed.findall(inner)
    var_names = list(map(lambda x: x[1:-1], var_names))

    ret += l + sep + inner + sep
    ret += ".format("
    for var in var_names:
        ret += "{}={}, ".format(var, var)
    ret = ret[:-2]
    ret += ")" + post
    return ret

if __name__ == "__main__":
    test_input = 'mystr = f"This {is} an {f}-string".reverse()'
    expected  = 'mystr = "This {is} an {f}-string".format(is=is, f=f).reverse()'
    assert expected == convert(test_input)

This is a raw python file. I included a mock unit-test (in case one was testing the function while writing it… I didn’t compose the function like this because I didn’t think about it, but in the future I probably will).

Now let’s write a thin vim wrapper around it.

" convert_string.vim

let s:script_dir = fnamemodify(resolve(expand('<sfile>:p')), ':h')

function! Convert()
py3 << EOF

import sys
import vim

script_dir = vim.eval('s:script_dir')
sys.path.insert(0, script_dir)

import convert_fstring

def convert():
  out = convert_fstring.convert(vim.current.line)
  vim.current.line = out

convert()
EOF

endfunction

com! Conv call Convert()
nnoremap Q :Conv<CR>

This is mostly pretty clear, with (I think) two exceptions concerning s:script_dir.

The problem comes from trying to import convert_fstring — it’s not in our PYTHONPATH, and the current working directory for python from within vim is a bit weird. It would be possible to directly code the current directory into the PYTHONPATH, but I think it is more elegant to have convert_fstring.vim add the location of convert_fstring.py to python’s path.

There are two additions of note that address this import. The first is the line

let s:script_dir = fnamemodify(resolve(expand('<sfile>:p')), ':h')

This sets a script-local variable (the prefix s:) containing the directory from which the script was sourced. (This is where we use the assumption that the vimfile is in the same directory as the python file. If they’re in different directories, then one must change the path addition appropriately).

The string <sfile> is populated automatically by vim to contain the filename of the sourcefile. Calling expand with the :p filemodifier returns the complete path of <sfile>. Calling resolve follows symlinks so that the path is absolute. And the final fnamemodify with the :h modifier returns the head, or rather the directory, of the file. Thus the result is to get an absolute path to the directory containing the script (and in this case also the python file).

Then in the python function, we add this location to the path through

script_dir = vim.eval('s:script_dir')
sys.path.insert(0, script_dir)

The first line has vim convert the vim variable into a python string, and the second inserts this directory into the PYTHONPATH.

I have a collection of loosely organized helper scripts in a (creatively named) ~/scripts directory. Some of these were written as pure python shellscripts that I’ve added some vim wrappers around.

Writing a Plugin

To reiterate, I consider this excessive for my sample application. But we’ve covered so many of the ideas that we should touch on it now.

Any plugin should follow Tim Pope’s Pathogen design model, with the ultimate goal of having a structure that looks like

myplugin/
  README.markdown
  LICENSE
  autoload/
    myplugin.vim
    ...
  plugin/
  ...

Pathogen (or Pathogen-compatible plugin managers, which as far as I know means all of them… like Vundle or vim-plug) will allow plugins to be installed in ~/.vim/bundle, and each plugin there will be added to vim’s rtp (the runtimepath). Each plugin manager installs plugins (i.e. adds to the runtimepath) in its own way. For Vundle, I include an extra line in my .vimrc.

Suppose I want to package my script as a plugin called fstring_converter. Then I’ll create the directory structure

fstring_converter/
  ftplugin/
    python.vim
    convert_fstring.py

where python.vim is what I called convert_fstring.vim before. Placing fstring_converter in ~/.vim/bundle (or symlinking it, or installing it there through e.g. Vundle) will “install it”. Then opening a python file will load it, and one can either Conv a line or (as it’s currently written) use Q. (Although I think it’s a bit unkind and certainly unsustainable to create this mapping for users).

I note that one should generically use autoload/ when available, but I don’t touch that now.

As an additional note, vim does add some directories to the PYTHONPATH under the hood. For each directory in vim’s runtimepath, vim adds the subdirectory python3 (and also pythonx) to the python module search path. As a result, we could omit the explicit path addition to the python.vim (aka convert_fstring.vim) file if we organized the plugin like

fstring_converter/
  ftplugin/
    python.vim
  python3/
    convert_fstring.py

Regardless, this describes a few ways to make a vim plugin out of our python script.


Using lcalc to compute half-integral weight L-functions

This is a brief note intended primarily for my collaborators interested in using Rubinstein’s lcalc to compute the values of half-integral weight $L$-functions.

We will be using lcalc through sage. Unfortunately, we are going to be using some functionality which sage doesn’t expose particularly nicely, so it will feel a bit silly. Nonetheless, using sage’s distribution will prevent us from needing to compile it on our own (and there are a few bugfixes present in sage’s version).

Some $L$-functions are built into lcalc, but not half-integral weight $L$-functions. So it will be necessary to create a datafile containing the data that lcalc will use to generate its approximations. In short, this datafile will describe the shape of the functional equation and give a list of coefficients for lcalc to use.

Building the datafile

It is assumed that the $L$-function is normalized in such a way that
$$\begin{equation}
\Lambda(s) = Q^s L(s) \prod_{j = 1}^{A} \Gamma(\gamma_j s + \lambda_j) = \omega \overline{\Lambda(1 - \overline{s})}.
\end{equation}$$
This involves normalizing the functional equation to be of shape $s \mapsto 1-s$. Also note that $Q$ will be given as a real number.

An annotated version of the datafile you should create looks like this

2                  # 2 means the Dirichlet coefficients are reals
0                  # 0 means the L-function isn't a "nice" one
10000              # 10000 coefficients will be provided
0                  # 0 means the coefficients are not periodic
1                  # num Gamma factors of form \Gamma(\gamma s + \lambda)
1                  # the \gamma in the Gamma factor
1.75 0             # \lambda in Gamma factor; complex valued, space delimited
0.318309886183790  # Q. In this case, 1/pi
1 0                # real and imaginary parts of omega, sign of func. eq.
0                  # number of poles
1.000000000000000  # a(1)
-1.78381067250408  # a(2)
...                # ...
-0.622124724090625 # a(10000)

If there is an error, lcalc will usually fail silently. (Bummer). Note that in practice, datafiles should only contain numbers and should not contain comments. This annotated version is for convenience, not for use.

You can find a copy of the datafile for the unique half-integral weight cusp form of weight $9/2$ on $\Gamma_0(4)$ here. This uses the first 10000 coefficients — it’s surely possible to use more, but this was the test-setup that I first set up.

Generating the coefficients for this example

In order to create datafiles for other cuspforms, it is necessary to compute the coefficients (presumably using magma or sage) and then to populate a datafile. A good exercise would be to recreate this datafile using sage-like methods.

One way to create this datafile is to explicitly create the q-expansion of the modular form, if we happen to know a convenient expression for it. For us, $f = \eta(2z)^{12} \theta(z)^{-3}$. Thus one way to create the coefficients is to do something like the following.

num_coeffs = 10**5 + 1
weight     = 9.0 / 2.0

R.<q> = PowerSeriesRing(ZZ)
theta_expansion = theta_qexp(num_coeffs)
# Note that qexp_eta omits the q^(1/24) factor
eta_expansion = qexp_eta(ZZ[['q']], num_coeffs + 1)
eta2_coeffs = []
for i in range(num_coeffs):
    if i % 2 == 1:
        eta2_coeffs.append(0)
    else:
        eta2_coeffs.append(eta_expansion[i//2])
eta2 = R(eta2_coeffs)
g = q * ( (eta2)**4 / (theta_expansion) )**3

coefficients = g.list()[1:] # skip the 0 coeff
print(coefficients[:10])

normalized_coefficients = []
for idx, elem in enumerate(coefficients, 1):
    normalized_coeff = 1.0 * elem / (idx ** (.5 * (weight - 1)))
    normalized_coefficients.append(normalized_coeff)
print(normalized_coefficients[:10])
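
The remaining step is to write these normalized coefficients, preceded by the header data described earlier, into the datafile. A minimal sketch of this (my code, with the header values taken from the annotated example above) might look like

header = [
    "2",                  # the Dirichlet coefficients are real
    "0",                  # the L-function isn't a "nice" one
    str(len(normalized_coefficients)),  # number of coefficients provided
    "0",                  # the coefficients are not periodic
    "1",                  # one Gamma factor
    "1",                  # gamma in the Gamma factor
    "1.75 0",             # lambda in the Gamma factor
    "0.318309886183790",  # Q, here 1/pi
    "1 0",                # omega, the sign of the functional equation
    "0",                  # number of poles
]

with open("g1_lcalcfile.txt", "w") as datafile:
    for line in header:
        datafile.write(line + "\n")
    for coeff in normalized_coefficients:
        datafile.write(str(coeff) + "\n")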

Using lcalc now

Suppose that you have a datafile, called g1_lcalcfile.txt (for example). Then to use this from sage, you point lcalc within sage to this file. This can be done through calls such as

# Computes L(0.5 + 0i, f)
lcalc('-v -x0.5 -y0 -Fg1_lcalcfile.txt')

# Computes L(s, f) from 0.5 to (2 + 7i) at 1000 equally spaced samples
lcalc('--value-line-segment -x0.5 -y0 -X2 -Y7 --number-samples=1000 -Fg1_lcalcfile.txt')

# See lcalc.help() for more on calling lcalc.

The key in these is to pass along the datafile through the -F argument.


Extra comparisons in Python’s timsort

Reading through the comp.lang.python mailing list, I saw an interesting question concerning the behavior of the default sorting algorithm in python. That question led to this post.

Python uses timsort, a clever hybrid sorting algorithm with ideas borrowed from merge sort and (binary) insertion sort. A major idea in timsort is to use the structure of naturally occurring runs (consecutive elements in the list that are either monotone increasing or monotone decreasing) when sorting.

Let’s look at the following simple list.

10, 20, 5

A simple sorting algorithm is insertion sort, which just advances through the list and inserts each number into the correct spot. More explicitly, insertion sort would

  1. Start with the first element, 10. As a list with one element, it is correctly sorted tautologically.
  2. Now consider the second element, 20. We insert this into the correct position in the already-sorted list of previous elements. Here, that just means that we verify that 20 > 10, and now we have the sorted sublist consisting of 10, 20.
  3. Now consider the third element, 5. We want to insert this into the correct position in the already-sorted list of previous elements. A naively easy way to do this is to either scan the list from the right or from the left, and insert into the correct place. For example, scanning from the right would mean that we compare 5 to the last element of the sublist, 20. As 5 < 20, we shift left and compare 5 to 10. As 5 < 10, we shift left again. As there is nothing left to compare against, we insert 5 at the beginning, yielding the sorted list 5, 10, 20.

How many comparisons did this take? This took 20 > 10, 5 < 20, and 5 < 10. This is three comparisons in total.

We can see this programmatically as well. Here is one implementation of insertion_sort, as described above.

def insertion_sort(lst):
    '''
    Sorts `lst` in-place. Note that this changes `lst`.
    '''
    for index in range(1, len(lst)):
        current_value = lst[index]
        position = index
        while position > 0 and lst[position - 1] > current_value:
            lst[position] = lst[position - 1]
            position = position - 1
        lst[position] = current_value

Let’s also create a simple Number class, which is just like a regular number, except that anytime a comparison is made it prints out the comparison. This will let us count the number of comparisons.

class Number:
    def __init__(self, value):
        self.value = value

    def __str__(self):
        return str(self.value)

    def __repr__(self):
        return self.__str__()

    def __lt__(self, other):
        if self.value < other.value:
            print("{} < {}".format(self, other))
            return True
        print("{} >= {}".format(self, other))
        return False

    def __eq__(self, other):
        if self.value == other.value:
            print("{} = {}".format(self, other))
            return True
        return False

    def __gt__(self, other):
        return not ((self == other) or (self < other))

    def __le__(self, other):
        return (self < other) or (self == other)

    def __ge__(self, other):
        return not (self < other)

    def __ne__(self, other):
        return not (self == other)

With this class and function, we can run

lst = [Number(10), Number(20), Number(5)]
insertion_sort(lst)
print(lst)

which will print

10 < 20
20 >= 5
10 >= 5
[5, 10, 20]

These are the three comparisons we were expecting to see.

Returning to python’s timsort — what happens if we call python’s default sorting method on this list? The code

lst = [Number(10), Number(20), Number(5)]
lst.sort()
print(lst)

prints

20 >= 10
5 < 20
5 < 20
5 < 10
[5, 10, 20]

There are four comparisons! And weirdly, the method checks that 5 < 20 twice in a row. What’s going on there?

At its heart, this was at the core of the thread on comp.lang.python. Why are there extra comparisons in cases like this?

Poking around the implementation of timsort taught me a little bit more about timsort.

Timsort approaches this sorting task in the following way.

  1. First, timsort tries to identify how large the first run within the sequence is. So it keeps comparing terms until it finds one that is out of order. In this case, it compares 20 to 10 (finding that 20 > 10, and thus the run is increasing), and then compares 5 to 20 (finding that 5 < 20, and thus that 5 is not part of the same run as 10, 20). Now the run is identified, and there is one element left to incorporate.
  2. Next timsort tries to insert 5 into the already-sorted run. It is more correct to say that timsort attempts to do a binary insertion, since one knows already that the run is sorted. In this binary insertion, timsort will compare 5 with the middle of the already-sorted run 10, 20. But this is a list of length 2, so what is its middle element? It turns out that timsort takes the latter element, 20, in this case. As 5 < 20, timsort concludes that 5 should be inserted somewhere in the first half of the run 10, 20, and not in the second half.
  3. Of course, the first half consists entirely of 10. Thus the remaining comparison is to check that 5 < 10, and now the list is sorted.

We count all four of the comparisons. The doubled comparison is due to the two tasks of checking whether 5 is in the same run as 10, 20, and then of deciding through binary insertion where to place 5 in the smaller sublist of 10, 20.
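
We can watch the binary insertion half of this on its own using the bisect module from the standard library (this demonstration is mine — bisect is not timsort, but it performs the same binary insertion described in step 2):

import bisect

lst = [Number(10), Number(20)]
bisect.insort(lst, Number(5))
# prints:
#   5 < 20    (5 compared against the "middle" of [10, 20], which is 20)
#   5 < 10
print(lst)   # [5, 10, 20]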

Now that we’ve identified a doubled comparison, we might ask Why is it this way? Is this something that should change?

The short answer is that it doesn’t really matter. A longer answer is that applying this in general would cause additional comparisons to be made, since the doubled comparison occurs only when the last element of the run agrees with the central value used in the binary insertion (which may occur for longer lists if there are repeated values). Detecting this would probably either involve comparing the last element of the run with the central value (one extra comparison, so nothing is really saved anyway), or perhaps adding another data structure like a skip list (which seems sufficiently more complicated to not be worth the effort). Or it would only apply when sorting really short lists, in which case there isn’t much to worry about.

Learning a bit more about timsort made me realize that I could probably learn a lot by really understanding an implementation of timsort, or even a slightly simplified implementation. It’s a nice reminder that one can choose to optimize for certain situations or behaviors, and this might not cover all cases perfectly — and that’s ok.


A bookmarklet to inject colorblind friendly CSS into Travis CI

In my previous post, I noted that the ability to see in color gave me an apparent superpower in quickly analyzing Travis CI and pytest logs.

I wondered: how hard is it to use colorblind friendly colors here?

I had in the back of my mind the thought of the next time I sit down and pair program with someone who is colorblind (which will definitely happen). Pair programming is largely about sharing experiences and ideas, and color disambiguation shouldn’t be a wedge.

I decided that loading customized CSS is the way to go. There are different ways to do this, but an easy method for quick replicability is to create a bookmarklet that adds CSS into the page. So, I did that.

You can get that bookmarklet here. (Due to very sensible security reasons, WordPress doesn’t want to allow me to provide a link which is actually a javascript function. So I make it available on a static, handwritten page).

Here’s how it works. A Travis log looks typically like this:

After clicking on the bookmarklet, it looks like

This is not beautiful, but it works and it’s very noticeable. Nonetheless, when the goal is just to be able to quickly recognize whether errors are occurring, or to recognize exceptional lines on a quick scroll-by, the black-text-on-white-box wins the standout crown.

The LMFDB uses pytest, which conveniently produces error summaries at the end of the test. (We used to use nosetest, and we hadn’t set it up to have nice summaries before transitioning to pytest). This bookmarklet will also affect the error summary, so that it now looks like

Again, I would say this is not beautiful, but definitely noticeable.


As an aside, I also looked through the variety of colorschemes that I have collected over the years. And it turns out that 100 percent of them are unkind to colorblind users, with the exception of the monotone or monochromatic schemes (which are equal in the Harrison Bergeron sense).

We should do better.


Seeing color shouldn’t feel like a superpower

In the last month, I have found myself pair programming with three different people. All three times involved working on the LMFDB. I rarely pair program outside a mentor-mentee or instructor-student situation.

This is fun. It’s fun seeing other people’s workflows. (In these cases, it happened to be that the other person was usually the one at the keyboard and typing, and I was backseat driving). I live in the terminal, subscribe to the Unix-is-my-IDE general philosophy: vim is my text editor; a mixture of makefiles, linters, and fifos with tmux perform automated building, testing, and linting; git for source control; and a medium-sized but consistently growing set of homegrown bash/python/c tools and scripts make it fun and work how I want.

I’m distinctly interested in seeing tools other people have made for their own workflows. Those scripts that aren’t polished, but get the work done. There is a whole world of git-hooks and aliases that amaze me.

But my recent encounters with pair programming exposed me to a totally different and unexpected experience: two of my programming partners were color blind.

At first, I didn’t think much of it. I had thought that you might set some colorblind-friendly colorschemes, and otherwise configure your way around it. But as is so often the case with accessibility problems, I underestimated both the number of challenges and the difficulty in solving them (lousy but true aside: most companies almost completely ignore problems with accessibility).

I first noticed differences while trying to fix bugs and review bugfixes in the LMFDB. We use Travis CI for automated testing, and we were examining a build that had failed. We brought up the Travis CI interface and scrolled through the log. I immediately pointed out the failure, since I saw something like this.

an image from Travis CI showing "FAIL" in red and "PASS" in green.
How do you know something failed? asks John, my partner for the day. Oh, it’s because the output is colored, isn’t it? I didn’t know. With the help of the color-blindness.com color-blindness simulator, I now see that John saw something like
an image from Travis CI that has been altered to appear with contrast as a red-green colorblind person might see it. Now "FAIL" and "PASS" appear in essentially the same shade.
With red-green colorblindness, there is essentially no difference in the shades of PASSED and FAILED. That’s sort of annoying.

We’d make a few changes, and then rerun some tests. Now we were running tests in a terminal, and the test logs are scrolling by. We’re chatting about emacs wizardry (or c++ magic, or compiler differences between gcc and clang, or something), and I point out that we can stop the tests since three tests have already failed.

He stared at me a bit dumbfoundedly. It was like I had superpowers. I could recognize failures without paying almost any attention, since flashes of red stand out.

But if you don’t recognize differences in color, how would you even know that the terminal outputs different colors for PASSED and FAILED? (We use pytest, which does). A quick look for different colorschemes led to frustration, as there are different sorts of colorblindness and no single solution that will work for everyone (and changing colorschemes is sort of annoying anyway).

I should say that the Travis team has made some accessibility improvements for colorblind users in the past. The build-passing and build-failing icons used to be little circles that were red or green, as shown here.

That means the build status was effectively invisible to colorblind users. After an issue was raised and discussed, they moved to the current green-checkmark-circle for passing and red-exed-circle for failing, which is a big improvement.

The colorscheme used for Travis CI’s online logs is based on the nord color palette, and there is no colorscheme-switching option. It’s a beautiful and well-researched theme for me, but not for everybody.

The colors on the page are controllable by CSS, but not in a uniform way that works on many sites. (Or at least, not to my knowledge. I would be interested if someone else knew more about this and knew a generic approach. The people I was pair-programming with didn’t have a good solution to this problem).

Should you really need to write your own solution to every colorblind accessibility problem?

In the next post, I’ll give a (lousy but functional) bookmarklet that injects CSS into the page to see Travis CI FAILs immediately.


Ghosts of Forums Past

This is the second in a miniseries of posts on internet fora, and Math.SE and StackOverflow in particular. In the previous entry in the miniseries, I described some of the common major problems facing community cohesion. I claimed that when communities get large, they tend to fracture, and the ratio of meaningful communication to noise plummets. To combat this tendency, communities use some mixture of core moderation, peer moderation, membership requirements, or creating subcommunities/splitting off into other communities.

In this chapter I focus more on Math.SE and StackOverflow. Math.SE is now experiencing growing pains and looking for solutions. But many users of Math.SE have little involvement in the rest of the StackExchange network and are mostly unaware of the fact that StackOverflow has already encountered and passed many of the same trials and tribulations (with varying degrees of success).

Thinking more broadly, many communities have faced these same challenges. Viewed from the point of view of the last chapter, it may appear that there are only a handful of tools a community might use to try to retain group cohesion. Yet it is possible to craft clever mixtures of these tools synergistically. The major reason the StackExchange model has succeeded where other fora have stalled (or failed outright) is through its innovations on the implementation of community cohesion strategies while still allowing essentially anyone to visit the site.

Imaginary Internet Points

Slashdot popularized the idea of associating imaginary internet points to different users. It was called karma. You got karma if other users rated your comments or submissions well, and lost karma if they rated your posts as poor. But perhaps most importantly, each user can set a threshold for minimum scores of content to see. Thus if people have reasonable thresholds and you post crap, then most people won’t even see it after it’s scored badly.



Challenges facing community cohesion (and Math.StackExchange in particular)

In about a month, Math.StackExchange will turn 8. Way back when I was an undergrad, I joined the site. This was 7 years ago, during the site’s first year.

Now with some perspective as a frequent contributor/user/moderator of various online newsgroups and fora, I want to take a moment to examine the current state of Math.SE.

To a certain extent, this is inspired by Joel Spolsky’s series of posts on StackOverflow (which he is currently writing and sending out). But this is also inspired by recent discussion on Meta.Math.SE. As I began to collect my thoughts to make a coherent reply, I realized that I have a lot of thoughts, and a lot to say.

So this is chapter one of a miniseries of writings on internet fora, and Math.SE and StackOverflow in particular.

