Author Archives: David Lowry-Duda

The wrong way to compute a sum: addendum

[Image: a cellular automaton generated by Rule 106 from a random initial configuration.]

In my previous note, I looked at an amusing but inefficient way to compute the sum $$ \sum_{n \geq 1} \frac{\varphi(n)}{2^n - 1}$$ using Mellin and inverse Mellin transforms. This was great fun, but it required considerably more work than the more straightforward approach offered immediately by Lambert series.

However, Adam Harper suggested that there is a nice shortcut that we can use (although coming up with this shortcut requires either a lot of familiarity with Mellin transforms or knowledge of the answer).

In the Lambert series approach, one shows quickly that $$ \sum_{n \geq 1} \frac{\varphi(n)}{2^n - 1} = \sum_{n \geq 1} \frac{n}{2^n},$$ and then evaluates this last sum directly. For the Mellin transform approach, we might ask: do the two functions $$ f(x) = \sum_{n \geq 1} \frac{\varphi(n)}{2^{nx} - 1}$$ and $$ g(x) = \sum_{n \geq 1} \frac{n}{2^{nx}}$$ have the same Mellin transforms? From the previous note, we know that they take the same value at $x = 1$.

We also showed very quickly that $$ \mathcal{M} [f] = \frac{1}{(\log 2)^s} \Gamma(s) \zeta(s-1). $$ The more difficult parts of the previous note arose in the evaluation of the inverse Mellin transform at $x=1$.

Let us compute the Mellin transform of $g$. We find that $$ \begin{align}
\mathcal{M}[g] &= \sum_{n \geq 1} n \int_0^\infty \frac{1}{2^{nx}} x^s \frac{dx}{x} \notag \\
&= \sum_{n \geq 1} n \int_0^\infty \frac{1}{e^{nx \log 2}} x^s \frac{dx}{x} \notag \\
&= \sum_{n \geq 1} \frac{n}{(n \log 2)^s} \int_0^\infty x^s e^{-x} \frac{dx}{x} \notag \\
&= \frac{1}{(\log 2)^s} \zeta(s-1)\Gamma(s). \notag
\end{align}$$ To go from the second line to the third line, we did the change of variables $x \mapsto x/(n \log 2)$, yielding an integral which is precisely the definition of the Gamma function.

Thus we see that $$ \mathcal{M}[g] = \frac{1}{(\log 2)^s} \Gamma(s) \zeta(s-1) = \mathcal{M}[f],$$ and thus $f(x) = g(x)$. (“Nice” functions with the same “nice” Mellin transforms are also the same, exactly as with Fourier transforms).

This shows that not only is $$ \sum_{n \geq 1} \frac{\varphi(n)}{2^n - 1} = \sum_{n \geq 1} \frac{n}{2^n},$$ but in fact $$ \sum_{n \geq 1} \frac{\varphi(n)}{2^{nx} - 1} = \sum_{n \geq 1} \frac{n}{2^{nx}}$$ for all $x > 1$.
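As a quick numerical sanity check (this is not part of the original argument), here is a short mpmath computation comparing truncations of the two series at a few values of $x$, including $x = 1$; both series converge rapidly, so modest truncations suffice.

from mpmath import mp, mpf, power
from sympy import totient

mp.dps = 30  # working precision

def f(x, terms=200):
    '''Truncation of sum_{n >= 1} phi(n) / (2^{nx} - 1).'''
    return sum(int(totient(n)) / (power(2, n * mpf(x)) - 1)
               for n in range(1, terms))

def g(x, terms=200):
    '''Truncation of sum_{n >= 1} n / 2^{nx}.'''
    return sum(n / power(2, n * mpf(x)) for n in range(1, terms))

for x in (1, 1.5, 2, 3):
    print(x, f(x), g(x))  # the two truncations agree to many digits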

I think that’s sort of slick.


The wrong way to compute a sum

At a recent colloquium at the University of Warwick, the fact that
\begin{equation}\label{question}
\sum_{n \geq 1} \frac{\varphi(n)}{2^n - 1} = 2
\end{equation}
came up in passing. John Cremona asked: how do you prove that?

It almost fails a heuristic check, as one can quickly check that
\begin{equation}\label{similar}
\sum_ {n \geq 1} \frac{n}{2^n} = 2,
\end{equation}
which is surprisingly similar to \eqref{question}. I wish I knew more examples of pairs with a similar flavor.

[Edit: Note that an addendum to this note has been added here. In it, we see that there is a way to shortcut the “hard part” of the long computation.]

The right way

Shortly afterwards, Adam Harper and Samir Siksek pointed out that this can be determined from Lambert series, and in fact that Hardy and Wright include a similar exercise in their book. This proof is delightful and short.

The idea is that, by expanding the denominator in a power series, one has that
\begin{equation}
\sum_{n \geq 1} a(n) \frac{x^n}{1 - x^n} \notag
= \sum_{n \geq 1} a(n) \sum_{m \geq 1} x^{mn}
= \sum_{n \geq 1} \Big( \sum_{d \mid n} a(d) \Big) x^n,
\end{equation}
where the inner sum is over the divisors of $n$. This all converges beautifully for $\lvert x \rvert < 1$.

Applied to \eqref{question}, we find that
\begin{equation}
\sum_{n \geq 1} \frac{\varphi(n)}{2^n - 1} \notag
= \sum_{n \geq 1} \varphi(n) \frac{2^{-n}}{1 - 2^{-n}}
= \sum_ {n \geq 1} 2^{-n} \sum_{d \mid n} \varphi(d),
\end{equation}
and as
\begin{equation}
\sum_ {d \mid n} \varphi(d) = n, \notag
\end{equation}
we see that \eqref{question} can be rewritten as \eqref{similar} after all, and thus both evaluate to $2$.
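As a quick numerical check (not in the original post), the partial sums of \eqref{question} do indeed approach $2$ rapidly; here is a short sympy computation using exact rational arithmetic.

from sympy import totient, Rational

# Partial sums of sum_{n >= 1} phi(n) / (2^n - 1), computed exactly
partial = Rational(0)
for n in range(1, 60):
    partial += Rational(totient(n), 2**n - 1)
print(float(partial))  # approximately 2, to many decimal places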

That’s a nice derivation using a series that I hadn’t come across before. But that’s not what this short note is about. This note is about evaluating \eqref{question} in a different way, arguably the wrong way. But it’s a wrong way that works out in a nice way that at least one person finds appealing.

The wrong way

We will use Mellin inversion — this is essentially Fourier inversion, but in a change of coordinates.

Let $f$ denote the function
\begin{equation}
f(x) = \frac{1}{2^x - 1}. \notag
\end{equation}
Denote by $f^*$ the Mellin transform of $f$,
\begin{equation}
f^*(s) := \mathcal{M} [f(x)] (s) := \int_0^\infty f(x) x^s \frac{dx}{x}
= \frac{1}{(\log 2)^s} \Gamma(s)\zeta(s),\notag
\end{equation}
where $\Gamma(s)$ and $\zeta(s)$ are the Gamma function and the Riemann zeta function.

For a general nice function $g(x)$, its Mellin transform satisfies
\begin{equation}
\mathcal{M}[g(nx)] (s)
= \int_0^\infty g(nx) x^s \frac{dx}{x}
= \frac{1}{n^s} \int_0^\infty g(x) x^s \frac{dx}{x}
= \frac{1}{n^s} g^*(s).\notag
\end{equation}
Further, the Mellin transform is linear. Thus
\begin{equation}\label{mellinbase}
\mathcal{M}\Big[\sum_{n \geq 1} \varphi(n) f(nx)\Big] (s)
= \sum_{n \geq 1} \frac{\varphi(n)}{n^s} f^*(s)
= \sum_{n \geq 1} \frac{\varphi(n)}{n^s} \frac{\Gamma(s) \zeta(s)}{(\log 2)^s}.
\end{equation}

The Euler phi function $\varphi(n)$ is multiplicative and nice, and its Dirichlet series can be rewritten as
\begin{equation}
\sum_{n \geq 1} \frac{\varphi(n)}{n^s} \notag
= \frac{\zeta(s-1)}{\zeta(s)}.
\end{equation}
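One quick way to see this identity (a standard computation, included here for completeness): since $\varphi = \mu * \mathrm{id}$ as a Dirichlet convolution, the Dirichlet series factors as
\begin{equation}
\sum_{n \geq 1} \frac{\varphi(n)}{n^s}
= \sum_{n \geq 1} \frac{1}{n^s} \sum_{d \mid n} \mu(d) \frac{n}{d}
= \Big( \sum_{d \geq 1} \frac{\mu(d)}{d^s} \Big) \Big( \sum_{m \geq 1} \frac{1}{m^{s-1}} \Big)
= \frac{\zeta(s-1)}{\zeta(s)}. \notag
\end{equation}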
Thus the Mellin transform in \eqref{mellinbase} can be written as
\begin{equation}
\frac{1}{(\log 2)^s} \Gamma(s) \zeta(s-1). \notag
\end{equation}

By the fundamental theorem of Mellin inversion (which is analogous to Fourier inversion, but again in different coordinates), the inverse Mellin transform will return the original function. The inverse Mellin transform of a function $h(s)$ is defined to be
\begin{equation}
\mathcal{M}^{-1}[h(s)] (x) \notag
:=
\frac{1}{2\pi i} \int_{c - i \infty}^{c + i\infty} x^s h(s) ds,
\end{equation}
where $c$ is taken so that the integral converges beautifully, and the integral is over the vertical line with real part $c$. I’ll write $(c)$ as a shorthand for the limits of integration. Thus
\begin{equation}\label{mellininverse}
\sum_{n \geq 1} \frac{\varphi(n)}{2^{nx} - 1}
= \frac{1}{2\pi i} \int_{(3)} \frac{1}{(\log 2)^s}
\Gamma(s) \zeta(s-1) x^{-s} ds.
\end{equation}

We can now describe the end goal: evaluate \eqref{mellininverse} at $x=1$, which will recover the value of the original sum in \eqref{question}.

How can we hope to do that? The idea is to shift the line of integration arbitrarily far to the left, to pick up the infinitely many residues along the way (via Cauchy’s residue theorem), and to recognize the resulting infinite sum as a classical series.

The integrand has poles at $s = 2, 0, -2, -4, \ldots$, coming from the zeta function (the pole at $s = 2$) and the Gamma function (all the others). Note that there are no poles at the negative odd integers: there $\zeta(s-1)$ is evaluated at negative even integers, where the zeta function has its trivial zeroes, and these zeroes cancel the corresponding poles of the Gamma function.

Recall that $\zeta(s)$ has a simple pole at $s = 1$ with residue $1$, and that $\Gamma(s)$ has residue $(-1)^n/n!$ at $s = -n$. Shifting the line of integration and picking up all the residues reveals that
\begin{equation}
\sum_{n \geq 1} \frac{\varphi(n)}{2^{n} - 1} \notag
=\frac{1}{\log^2 2} + \zeta(-1) + \frac{\zeta(-3)}{2!} \log^2 2 +
\frac{\zeta(-5)}{4!} \log^4 2 + \cdots
\end{equation}

The zeta function at negative integers has a very well-known relation to the Bernoulli numbers,
\begin{equation}\label{zeta_bern}
\zeta(-n) = - \frac{B_{n+1}}{n+1},
\end{equation}
where Bernoulli numbers are the coefficients in the expansion
\begin{equation}\label{bern_gen}
\frac{t}{1 - e^{-t}} = \sum_{m \geq 0} B_m \frac{t^m}{m!}.
\end{equation}
Many general proofs for the values of $\zeta(2n)$ use this relation and the functional equation, as well as a computation of the Bernoulli numbers themselves. Another important aspect of Bernoulli numbers that is apparent through \eqref{zeta_bern} is that $B_{2n+1} = 0$ for $n \geq 1$, lining up with the trivial zeroes of the zeta function.
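As a small aside (not in the original post), relation \eqref{zeta_bern} is easy to spot-check with sympy; for $n \geq 1$ it only involves $B_m$ with $m \geq 2$, so the check is insensitive to the sign convention used for $B_1$.

from sympy import zeta, bernoulli

# Spot-check zeta(-n) = -B_{n+1}/(n+1) for several n >= 1
for n in range(1, 9):
    lhs = zeta(-n)                    # sympy evaluates zeta exactly at negative integers
    rhs = -bernoulli(n + 1) / (n + 1)
    assert lhs == rhs, (n, lhs, rhs)
    print(n, lhs)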

Translating the zeta values into Bernoulli numbers, we find that
\eqref{question} is equal to
\begin{align}
&\frac{1}{\log^2 2} - \frac{B_2}{2} - \frac{B_4}{2! \cdot 4} \log^2 2 -
\frac{B_6}{4! \cdot 6} \log^4 2 - \frac{B_8}{6! \cdot 8} \log^6 2 - \cdots \notag \\
&=
-\sum_{m \geq 0} (m-1) B_m \frac{(\log 2)^{m-2}}{m!}. \label{recog}
\end{align}
This last sum is excellent, and can be recognized.

For a general exponential generating series
\begin{equation}
F(t) = \sum_{m \geq 0} a(m) \frac{t^m}{m!},\notag
\end{equation}
we see that
\begin{equation}
\frac{d}{dt} \frac{1}{t} F(t) \notag
=\sum_{m \geq 0} (m-1) a(m) \frac{t^{m-2}}{m!}.
\end{equation}
Applying this to the series defining the Bernoulli numbers from \eqref{bern_gen}, we find that
\begin{equation}
\frac{d}{dt} \frac{1}{t} \frac{t}{1 - e^{-t}} \notag
= - \frac{e^{-t}}{(1 - e^{-t})^2},
\end{equation}
and also that
\begin{equation}
\frac{d}{dt} \frac{1}{t} \frac{t}{1 - e^{-t}} \notag
=\sum_{m \geq 0} (m-1) B_m \frac{t^{m-2}}{m!}.
\end{equation}
Up to the overall sign, this is exactly the sum that appears in \eqref{recog}, with $t = \log 2$.

Putting this together, we find that
\begin{equation}
-\sum_{m \geq 0} (m-1) B_m \frac{(\log 2)^{m-2}}{m!} \notag
= \frac{e^{-\log 2}}{(1 - e^{-\log 2})^2}
= \frac{1/2}{(1/2)^2} = 2.
\end{equation}
Thus we find that \eqref{question} really is equal to $2$, as we had sought to show.
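As a final numerical sanity check (not in the original post), the Bernoulli series converges quickly, since $|B_{2k}| (\log 2)^{2k} / (2k)! \approx 2 (\log 2 / 2\pi)^{2k}$, and a few dozen terms computed with mpmath already give $2$ to high precision.

from mpmath import mp, bernoulli, factorial, log, power

mp.dps = 30

# -sum_{m >= 0} (m - 1) B_m (log 2)^{m - 2} / m!, truncated
t = log(2)
total = -sum((m - 1) * bernoulli(m) * power(t, m - 2) / factorial(m)
             for m in range(40))
print(total)  # 2.0000... to many digits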


Using lcalc to compute half-integral weight L-functions

This is a brief note intended primarily for my collaborators interested in using Rubinstein’s lcalc to compute the values of half-integral weight $L$-functions.

We will be using lcalc through sage. Unfortunately, we are going to be using some functionality which sage doesn’t expose particularly nicely, so it will feel a bit silly. Nonetheless, using sage’s distribution will prevent us from needing to compile it on our own (and there are a few bugfixes present in sage’s version).

Some $L$-functions are inbuilt into lcalc, but not half-integral weight $L$-functions. So it will be necessary to create a datafile containing the data that lcalc will use to generate its approximations. In short, this datafile will describe the shape of the functional equation and give a list of coefficients for lcalc to use.

Building the datafile

It is assumed that the $L$-function is normalized in such a way that
$$\begin{equation}
\Lambda(s) = Q^s L(s) \prod_{j = 1}^{A} \Gamma(\gamma_j s + \lambda_j) = \omega \overline{\Lambda(1 – \overline{s})}.
\end{equation}$$
This involves normalizing the functional equation to be of shape $s \mapsto 1-s$. Also note that $Q$ will be given as a real number.

An annotated version of the datafile you should create looks like this

2                  # 2 means the Dirichlet coefficients are reals
0                  # 0 means the L-function isn't a "nice" one
10000              # 10000 coefficients will be provided
0                  # 0 means the coefficients are not periodic
1                  # num Gamma factors of form \Gamma(\gamma s + \lambda)
1                  # the \gamma in the Gamma factor
1.75 0             # \lambda in Gamma factor; complex valued, space delimited
0.318309886183790  # Q. In this case, 1/pi
1 0                # real and imaginary parts of omega, sign of func. eq.
0                  # number of poles
1.000000000000000  # a(1)
-1.78381067250408  # a(2)
...                # ...
-0.622124724090625 # a(10000)

If there is an error, lcalc will usually fail silently. (Bummer). Note that in practice, datafiles should only contain numbers and should not contain comments. This annotated version is for convenience, not for use.

You can find a copy of the datafile for the unique half-integral weight cusp form of weight $9/2$ on $\Gamma_0(4)$ here. This uses the first 10000 coefficients — it’s surely possible to use more, but this was the test-setup that I first set up.
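To connect the annotated fields back to the functional equation above (this just reads off the example entries, so treat it as a sketch of the correspondence rather than a statement about lcalc internals): this datafile has $A = 1$, $\gamma_1 = 1$, $\lambda_1 = 1.75$, $Q = 1/\pi$, and $\omega = 1$, so the completed $L$-function being described is
$$\begin{equation}
\Lambda(s) = \left( \frac{1}{\pi} \right)^{s} \Gamma(s + 1.75) L(s) = \overline{\Lambda(1 - \overline{s})}.
\end{equation}$$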

Generating the coefficients for this example

In order to create datafiles for other cuspforms, it is necessary to compute the coefficients (presumably using magma or sage) and then to populate a datafile. A good exercise would be to recreate this datafile using sage-like methods.

One way to create this datafile is to explicitly create the q-expansion of the modular form, if we happen to know a convenient expression for it. For us, $f = \eta(2z)^{12} \theta(z)^{-3}$. Thus one way to create the coefficients is to do something like the following.

num_coeffs = 10**5 + 1
weight     = 9.0 / 2.0

R.<q> = PowerSeriesRing(ZZ)
theta_expansion = theta_qexp(num_coeffs)
# Note that qexp_eta omits the q^(1/24) factor
eta_expansion = qexp_eta(ZZ[['q']], num_coeffs + 1)
eta2_coeffs = []
for i in range(num_coeffs):
    if i % 2 == 1:
        eta2_coeffs.append(0)
    else:
        eta2_coeffs.append(eta_expansion[i//2])
eta2 = R(eta2_coeffs)
g = q * ( (eta2)**4 / (theta_expansion) )**3

coefficients = g.list()[1:] # skip the 0 coeff
print(coefficients[:10])

normalized_coefficients = []
for idx, elem in enumerate(coefficients, 1):
    normalized_coeff = 1.0 * elem / (idx ** (.5 * (weight - 1)))
    normalized_coefficients.append(normalized_coeff)
print(normalized_coefficients[:10])
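To finish the job, the normalized coefficients need to be written into a datafile with the header described above. Here is a minimal sketch of how that might look; the filename matches the one used below, the header values are copied from the annotated example for the weight $9/2$ form on $\Gamma_0(4)$, and `normalized_coefficients` is the list computed just above.

import math

def write_lcalc_datafile(filename, coeffs, Q, gamma=1, lambda_shift=1.75):
    '''
    Write an lcalc datafile in the format described above.

    `coeffs` are the already-normalized Dirichlet coefficients; `Q`, `gamma`,
    and `lambda_shift` describe the functional equation. This mirrors the
    annotated example above and is a sketch, not a general-purpose tool.
    '''
    with open(filename, "w") as outfile:
        outfile.write("2\n")                          # real Dirichlet coefficients
        outfile.write("0\n")                          # not one of lcalc's built-in types
        outfile.write("{}\n".format(len(coeffs)))     # number of coefficients provided
        outfile.write("0\n")                          # coefficients are not periodic
        outfile.write("1\n")                          # one Gamma factor
        outfile.write("{}\n".format(gamma))           # gamma in Gamma(gamma*s + lambda)
        outfile.write("{} 0\n".format(lambda_shift))  # lambda, real and imaginary parts
        outfile.write("{:.15f}\n".format(Q))          # Q in the functional equation
        outfile.write("1 0\n")                        # omega, the sign of the functional equation
        outfile.write("0\n")                          # number of poles
        for c in coeffs:
            outfile.write("{:.15f}\n".format(float(c)))

write_lcalc_datafile("g1_lcalcfile.txt", normalized_coefficients, 1.0 / math.pi)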

Using lcalc now

Suppose that you have a datafile, called g1_lcalcfile.txt (for example). Then to use this from sage, you point lcalc within sage to this file. This can be done through calls such as

# Computes L(0.5 + 0i, f)
lcalc('-v -x0.5 -y0 -Fg1_lcalcfile.txt')

# Computes L(s, f) from 0.5 to (2 + 7i) at 1000 equally spaced samples
lcalc('--value-line-segment -x0.5 -y0 -X2 -Y7 --number-samples=1000 -Fg1_lcalcfile.txt')

# See lcalc.help() for more on calling lcalc.

The key in these is to pass along the datafile through the -F argument.


Extra comparisons in Python’s timsort

Reading through the comp.lang.python mailing list, I saw an interesting question concerning the behavior of the default sorting algorithm in python. This led to this post.

Python uses timsort, a clever hybrid sorting algorithm that borrows ideas from merge sort and (binary) insertion sort. A major idea in timsort is to use the structure of naturally occurring runs (consecutive elements in the list that are either monotone increasing or monotone decreasing) when sorting.

Let’s look at the following simple list.

10, 20, 5

A simple sorting algorithm is insertion sort, which just advances through the list and inserts each number into the correct spot. More explicitly, insertion sort would

  1. Start with the first element, 10. As a list with one element, it is correctly sorted tautologically.
  2. Now consider the second element, 20. We insert this into the correct position in the already-sorted list of previous elements. Here, that just means that we verify that 20 > 10, and now we have the sorted sublist consisting of 10, 20.
  3. Now consider the third element, 5. We want to insert this into the correct position in the already-sorted list of previous elements. A naively easy way to do this is to either scan the list from the right or from the left, and insert into the correct place. For example, scanning from the right would mean that we compare 5 to the last element of the sublist, 20. As 5 < 20, we shift left and compare 5 to 10. As 5 < 10, we shift left again. As there is nothing left to compare against, we insert 5 at the beginning, yielding the sorted list 5, 10, 20.

How many comparisons did this take? We checked that 20 > 10, that 5 < 20, and that 5 < 10: three comparisons in total.

We can see this programmatically as well. Here is one implementation of insertion_sort, as described above.

def insertion_sort(lst):
    '''
    Sorts `lst` in-place. Note that this changes `lst`.
    '''
    for index in range(1, len(lst)):
        current_value = lst[index]
        position = index
        while position > 0 and lst[position - 1] > current_value:
            lst[position] = lst[position - 1]
            position = position - 1
        lst[position] = current_value

Let’s also create a simple Number class, which is just like a regular number, except that anytime a comparison is done it prints out the comparison. This will count the number of comparisons made for us.

class Number:
    def __init__(self, value):
        self.value = value

    def __str__(self):
        return str(self.value)

    def __repr__(self):
        return self.__str__()

    def __lt__(self, other):
        if self.value < other.value:
            print("{} < {}".format(self, other))
            return True
        print("{} >= {}".format(self, other))
        return False

    def __eq__(self, other):
        if self.value == other.value:
            print("{} = {}".format(self, other))
            return True
        return False

    def __gt__(self, other):
        return not ((self == other) or (self < other))

    def __le__(self, other):
        return (self < other) or (self == other)

    def __ge__(self, other):
        return not (self < other)

    def __ne__(self, other):
        return not (self == other)

With this class and function, we can run

lst = [Number(10), Number(20), Number(5)]
insertion_sort(lst)
print(lst)

which will print

10 < 20
20 >= 5
10 >= 5
[5, 10, 20]

These are the three comparisons we were expecting to see.

Returning to python’s timsort — what happens if we call python’s default sorting method on this list? The code

lst = [Number(10), Number(20), Number(5)]
lst.sort()
print(lst)

prints

20 >= 10
5 < 20
5 < 20
5 < 10
[5, 10, 20]

There are four comparisons! And weirdly, the method checks that 5 < 20 twice in a row. What’s going on there?

This was at the core of the thread on comp.lang.python: why are there extra comparisons in cases like this?

Poking around the implementation taught me a little bit more about timsort.

Timsort approaches this sorting task in the following way.

  1. First, timsort tries to identify how large the first run within the sequence is. So it keeps comparing terms until it finds one that is out of order. In this case, it compares 20 to 10 (finding that 20 > 10, and thus the run is increasing), and then compares 5 to 20 (finding that 5 < 20, and thus that 5 is not part of the same run as 10, 20). Now the run is identified, and there is one element left to incorporate.
  2. Next timsort tries to insert 5 into the already-sorted run. It is more correct to say that timsort attempts to do a binary insertion, since one knows already that the run is sorted. In this binary insertion, timsort will compare 5 with the middle of the already-sorted run 10, 20 (see the sketch just after this list). But this is a list of length 2, so what is its middle element? It turns out that timsort takes the latter element, 20, in this case. As 5 < 20, timsort concludes that 5 should be inserted somewhere in the first half of the run 10, 20, and not in the second half.
  3. Of course, the first half consists entirely of 10. Thus the remaining comparison is to check that 5 < 10, and now the list is sorted.
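Here is a small sketch of a binary insertion that picks its midpoint the same way (this is my own simplification for illustration, not CPython's actual C implementation). Run it with the Number class above to see the doubled comparison directly.

def binary_insert(run, value):
    '''
    Insert `value` into the already-sorted list `run` using binary search.

    With `hi` exclusive, the midpoint (lo + hi) // 2 of a length-two run is
    its latter element, matching the behavior described above.
    '''
    lo, hi = 0, len(run)
    while lo < hi:
        mid = (lo + hi) // 2
        if value < run[mid]:   # one comparison per halving step
            hi = mid
        else:
            lo = mid + 1
    run.insert(lo, value)

run = [Number(10), Number(20)]   # the already-identified run
binary_insert(run, Number(5))    # prints "5 < 20" and then "5 < 10"
print(run)                       # [5, 10, 20]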

This accounts for all four of the comparisons. The doubled comparison comes from the two separate tasks of checking whether 5 is in the same run as 10, 20, and then of deciding through binary insertion where to place 5 in the smaller sublist of 10, 20.

Now that we’ve identified a doubled comparison, we might ask: why is it this way? Is this something that should change?

The short answer is that it doesn’t really matter. A longer answer is that avoiding this duplication in general would itself cost additional comparisons: the saving only applies when the last element of the run agrees in value with the central element of the run (which, for longer lists, can also occur when there are repeated values). Detecting that this happens would probably either involve comparing the last element of the run with the central value (one extra comparison, so nothing is really saved anyway), or perhaps adding another data structure like a skip list (which seems sufficiently more complicated to not be worth the effort). Otherwise it would only apply when sorting really short lists, in which case there isn’t much to worry about.

Learning a bit more about timsort made me realize that I could probably learn a lot by really understanding an implementation of timsort, or even a slightly simplified implementation. It’s a nice reminder that one can choose to optimize for certain situations or behaviors, and this might not cover all cases perfectly — and that’s ok.


Speed talks should be at every conference

I recently attended Building Bridges 4, an automorphic forms summer school and workshop. A major goal of the conference is to foster communication and relationships between researchers from North America and Europe, especially junior researchers and graduate students.

It was a great conference, and definitely one of the better conferences that I’ve attended. What made it so good? For one thing, it was in Budapest, and I love Budapest. Many of the main topics were close to my heart, which is a big plus.

But what I think really set it apart was that there were lots of relatively short talks, and almost everyone attended almost every talk.

The amount of time allotted to a talk carries extreme power in deciding what sort of talk it will be. A typical hour-long seminar talk is long enough to give context, describe a line of research leading to a set of results, discuss how these results fit into the literature, and even to give a non-rushed description of how something is proved. Sometimes a good speaker will even distill a few major ideas and discuss how they are related. A long talk can have multiple major ideas (although just one presented very well can make a good talk too).

In comparison, 50, 40, and 30 minute talks require much more discipline. As the amount of time decreases, the number of ideas that can be inserted into a talk decreases. And this relationship is not linear! Thirty minutes is just about long enough to describe one idea pretty well, and to do anything more is very hard.

Something interesting happens for shorter talks. For 20 minute, 15 minute, and 10 minute talks, the limitation almost serves as a source of inspiration. Being forced to focus on what’s important is a powerful organizing force.

The median talk length was 20 minutes, which is a very comfortable number. This is long enough to state a result and give context. It’s also long enough to tempt speakers into describing methodology behind a proof, but not long enough to effectively teach someone how the proof works.

An extraordinary aspect of a 20 minute talk is also that it’s short enough to pay attention to, even if it’s only an okay talk. It is perhaps not a surprise to most conference goers that most talks are not so great. To be a skilled orator is to be exceptional.

At Building Bridges, I was introduced to math speed talks. These are two minute talks. I’ve seen many programming lightning talks (often used to plug a particular product or solution to a common programming problem), but these math speed talks were different.

People used their two minutes to introduce an idea, or a result. And they either chose to give the broadest possible context, or a singular idea in the proof.

People were talking about real mathematics in two minutes. And I loved it.

Simply having a task where you distill some real mathematics into a two minute coherent description is worthwhile. What’s important? What do you really want to say? Why?

Two minutes is so short that it feels silly. And silly means that it doesn’t feel dangerous or scary, and thus many people felt willing to give it a try. At Building Bridges, the organizers gamified the speed talks, so that getting the closest to 2 minutes was rewarded with applause and going over two minutes resulted in a buzzer going off. It was a game, and it was fun. It was encouraging.

I firmly support any activity that encourages people who normally don’t speak so much, especially students and junior researchers. You learn a lot by giving a talk, even if it’s only a two minute talk.

This conference had 19 (I think) speed talks over a three day stretch. They were given in clumps after the last regular talk each day. Since people were there for the big talk, everyone attended the speed talks. This is also important! In conferences like the Joint Math Meetings, where there might even be something like speed talks, it’s essentially impossible to pay attention since there are too many people in too many places and you never can step in the same river twice. Here, speed talks were given on the same stage as long talks, to the same audience, and with the same equipment.

Every conference should have speed talks. And they should be treated as first-class talks, with the exception that they are irrefutably silly.

Go forth and spread the speed talk gospel.


A bookmarklet to inject colorblind friendly CSS into Travis CI

In my previous post, I noted that the ability to see in color gave me an apparent superpower in quickly analyzing Travis CI and pytest logs.

I wondered: how hard is it to use colorblind friendly colors here?

I had in the back of my mind the thought of the next time I sit down and pair program with someone who is colorblind (which will definitely happen). Pair programming is largely about sharing experiences and ideas, and color disambiguation shouldn’t be a wedge.

I decided that loading customized CSS is the way to go. There are different ways to do this, but an easy method for quick replicability is to create a bookmarklet that adds CSS into the page. So, I did that.

You can get that bookmarklet here. (Due to very sensible security reasons, WordPress doesn’t want to allow me to provide a link which is actually a javascript function. So I make it available on a static, handwritten page).

Here’s how it works. A Travis log typically looks like this:

After clicking on the bookmarklet, it looks like

This is not beautiful, but it works and it’s very noticeable. Nonetheless, when the goal is just to be able to quickly recognize whether errors are occurring, or to recognize exceptional lines on a quick scroll-by, the black-text-on-white-box wins the standout crown.

The LMFDB uses pytest, which conveniently produces error summaries at the end of the test. (We used to use nosetest, and we hadn’t set it up to have nice summaries before transitioning to pytest). This bookmarklet will also affect the error summary, so that it now looks like

Again, I would say this is not beautiful, but definitely noticeable.


As an aside, I also looked through the variety of colorschemes that I have collected over the years. And it turns out that 100 percent of them are unkind to colorblind users, with the exception of the monotone or monochromatic schemes (which are equal in the Harrison Bergeron sense).

We should do better.


Seeing color shouldn’t feel like a superpower

In the last month, I have found myself pair programming with three different people. All three times involved working on the LMFDB. I rarely pair program outside a mentor-mentee or instructor-student situation.

This is fun. It’s fun seeing other people’s workflows. (In these cases, it happened to be that the other person was usually the one at the keyboard and typing, and I was backseat driving). I live in the terminal, subscribe to the Unix-is-my-IDE general philosophy: vim is my text editor; a mixture of makefiles, linters, and fifos with tmux perform automated building, testing, and linting; git for source control; and a medium-sized but consistently growing set of homegrown bash/python/c tools and scripts make it fun and work how I want.

I’m distinctly interested in seeing tools other people have made for their own workflows. Those scripts that aren’t polished, but get the work done. There is a whole world of git-hooks and aliases that amaze me.

But my recent encounters with pair programming exposed me to a totally different and unexpected experience: two of my programming partners were color blind.

At first, I didn’t think much of it. I had thought that you might set some colorblind-friendly colorschemes, and otherwise configure your way around it. But as is so often the case with accessibility problems, I underestimated both the number of challenges and the difficulty in solving them (lousy but true aside: most companies almost completely ignore problems with accessibility).

I first noticed differences while trying to fix bugs and review bugfixes in the LMFDB. We use Travis CI for automated testing, and we were examining a build that had failed. We brought up the Travis CI interface and scrolled through the log. I immediately pointed out the failure, since I saw something like this.

[An image from Travis CI showing "FAIL" in red and "PASS" in green.]
“How do you know something failed?” asks John, my partner for the day. “Oh, it’s because the output is colored, isn’t it? I didn’t know.” With the help of the color-blindness.com color-blindness simulator, I now see that John saw something like this:
[An image from Travis CI that has been altered to appear with contrast as a red-green colorblind person might see it. Now "FAIL" and "PASS" appear in essentially the same shade.]
With red-green colorblindness, there is essentially no difference in the shades of PASSED and FAILED. That’s sort of annoying.

We’d make a few changes, and then rerun some tests. Now we were running tests in a terminal, and the test logs were scrolling by. We were chatting about emacs wizardry (or c++ magic, or compiler differences between gcc and clang, or something), and I pointed out that we could stop the tests since three tests had already failed.

He stared at me, a bit dumbfounded. It was like I had superpowers. I could recognize failures while paying almost no attention, since flashes of red stand out.

But if you don’t recognize differences in color, how would you even know that the terminal outputs different colors for PASSED and FAILED? (We use pytest, which does). A quick look for different colorschemes led to frustration, as there are different sorts of colorblindness and no single solution that will work for everyone (and changing colorschemes is sort of annoying anyway).

I should say that the Travis team has made some accessibility improvements for colorblind users in the past. The build-passing and build-failing icons used to be little circles that were red or green, as shown here.

That means the build status was effectively invisible to colorblind users. After an issue was raised and discussed, they moved to the current green-checkmark-circle for passing and red-exed-circle for failing, which is a big improvement.

The colorscheme used for Travis CI’s online logs is based on the nord color palette, and there is no colorscheme-switching option. It’s a beautiful and well-researched theme for me, but not for everybody.

The colors on the page are controllable by CSS, but not in a uniform way that works on many sites. (Or at least, not to my knowledge. I would be interested if someone else knew more about this and knew a generic approach. The people I was pair-programming with didn’t have a good solution to this problem).

Should you really need to write your own solution to every colorblind accessibility problem?

In the next post, I’ll give a (lousy but functional) bookmarklet that injects CSS into the page to see Travis CI FAILs immediately.


Notes from a Talk at Building Bridges 4

On 18 July 2018 I gave a talk at the 4th Building Bridges Automorphic Forms Workshop, which is hosted at the Renyi Institute in Budapest, Hungary this year. In this talk, I spoke about counting points on hyperboloids, with a certain focus on counting points on the three dimensional hyperboloid

$$\begin{equation} X^2 + Y^2 = Z^2 + h \end{equation}$$

for any fixed integer $h$.

I gave a similar talk at the 32nd Automorphic Forms Workshop at Tufts in March. I don’t say this during my talk, but a big reason for giving these talks is to continue to inspire me to finish the corresponding paper. (There are still a couple of rough edges that need some attention).

The methodology for the result relies on the spectral expansion of half-integral weight modular forms. This is unfriendly to those unfamiliar with the subject, and particularly mysterious to students. But there is a nice connection to a topic discussed by Arpad Toth during the previous week’s associated summer school.

Arpad sketched a proof of the spectral decomposition of holomorphic modular cusp forms on $\Gamma = \mathrm{SL}(2, \mathbb{Z})$. He showed that
$$\begin{equation} L^2(\Gamma \backslash \mathcal{H}) = \textrm{cuspidal} \oplus \textrm{Eisenstein}, \tag{1}
\end{equation}$$
where the cuspidal contribution comes from Maass forms and the Eisenstein contribution comes from line integrals against Eisenstein series.

The typical Eisenstein series $$\begin{equation} E(z, s) = \sum_{\gamma \in \Gamma_\infty \backslash \Gamma} \textrm{Im}(\gamma z)^s \end{equation}$$ only converges for $\mathrm{Re}(s) > 1$, and the initial decomposition in $(1)$ implicitly has $s$ in this range.

To write down the integrals appearing in the Eisenstein spectrum explicitly, one normally shifts the line of integration to $1/2$. As Arpad explained, classically this produces a pole at $s = 1$ (which is the constant function).

In half-integral weight, the Eisenstein series has a pole at $s = 3/4$, with the standard theta function

$$\begin{equation} \theta(z) = \sum_{n \in \mathbb{Z}} e^{2 \pi i n^2 z} \end{equation}$$

as the residue. (More precisely, it’s a constant times $y^{1/4} \theta(z)$, or a related theta function for $\Gamma_0(N)$). I refer to this portion of the spectrum as the residual spectrum, since it comes from often-forgotten residues of Eisenstein series. Thus the spectral decomposition for half-integral weight objects is a bit more complicated than the normal case.
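Schematically, and suppressing the precise weights, levels, and normalizations (so this is only a mnemonic rather than a precise statement), the half-integral weight analogue of $(1)$ picks up this extra residual piece:

$$\begin{equation}
L^2 = \textrm{residual (theta)} \oplus \textrm{cuspidal} \oplus \textrm{Eisenstein}.
\end{equation}$$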

When giving talks involving half-integral weight spectral expansions to audiences including non-experts, I usually omit description of this. But for those who attended the summer school, it’s possible to at least recognize where these additional terms come from.

The slides for this talk are available here.


Splitting Easy MathSE Questions into a New NoviceMathSE Is a Bad Idea

(But there are some positive changes that can be made)

This is the final chapter in my series about the state of internet fora, and Math.SE and StackOverflow in particular. The previous chapters are Challenges Facing Community Cohesion and Ghosts of Forums Past. Unlike the previous entries, this also sits on Meta.Math.SE (and was posted there a week before here). (As I write this as a moderator of Math.SE, I refer to the Math.SE community as “we”, “us”, and “our” community).

A couple of weeks ago, there was a proposal on meta.Math.SE to introduce a third level of math site to the SE network. Many members of the MathSE community have reacted very positively to this proposal, to the extent that even some of the moderators have considered throwing their weight behind it.

But a NoviceMathSE site would be doomed to fail, and such a separation would not solve the underlying problems facing the site.

To explain my point of view, we need to examine more closely the arguments in favor of NoviceMathSE.



Ghosts of Forums Past

This is the second in a miniseries of posts on internet fora, and Math.SE and StackOverflow in particular. In the previous entry in the miniseries, I described some of the common major problems facing community cohesion. I claimed that when communities get large, they tend to fracture and the ratio of meaningful communication to noise plummets. To combat this tendency, communities use some mixture of core moderation, peer moderation, membership requirements, or creating subcommunities/splitting off into other communities.

In this chapter I focus more on Math.SE and StackOverflow. Math.SE is now experiencing growing pains and looking for solutions. But many users of Math.SE have little involvement in the rest of the StackExchange network and are mostly unaware of the fact that StackOverflow has already encountered and passed many of the same trials and tribulations (with varying degrees of success).

Thinking more broadly, many communities have faced these same challenges. Viewed from the point of view of the last chapter, it may appear that there are only a handful of tools a community might use to try to retain group cohesion. Yet it is possible to craft clever mixtures of these tools synergistically. The major reason the StackExchange model has succeeded where other fora have stalled (or failed outright) is its innovative implementation of community cohesion strategies while still allowing essentially anyone to visit the site.

Imaginary Internet Points

Slashdot popularized the idea of associating imaginary internet points to different users. It was called karma. You got karma if other users rated your comments or submissions well, and lost karma if they rated your posts as poor. But perhaps most importantly, each user can set a threshold for the minimum score of content to see. Thus if people have reasonable thresholds and you post crap, then most people won’t even see it after it’s scored badly.

