Tag Archives: Programming

A bookmarklet to inject colorblind friendly CSS into Travis CI

In my previous post, I noted that the ability to see in color gave me an apparent superpower in quickly analyzing Travis CI and pytest logs.

I wondered: how hard is it to use colorblind friendly colors here?

In the back of my mind was the thought of the next time I sit down to pair program with someone who is colorblind (which will definitely happen). Pair programming is largely about sharing experiences and ideas, and color disambiguation shouldn't be a wedge.

I decided that loading customized CSS is the way to go. There are different ways to do this, but an easy, quickly reproducible method is to create a bookmarklet that injects CSS into the page. So, I did that.

You can get that bookmarklet here. (For very sensible security reasons, WordPress doesn't allow me to provide a link that is actually a javascript function, so I make it available on a static, handwritten page.)1

Here’s how it works. A Travis log looks typically like this:

After clicking on the bookmarklet, it looks like

This is not beautiful, but it works and it's very noticeable. And when the goal is just to be able to quickly recognize whether errors are occurring, or to spot exceptional lines during a quick scroll-by, the black-text-on-white-box wins the standout crown.

The LMFDB uses pytest, which conveniently produces error summaries at the end of the test run. (We used to use nose, and we hadn't set it up to produce nice summaries before transitioning to pytest.) This bookmarklet also affects the error summary, so that it now looks like

Again, I would say this is not beautiful, but definitely noticeable.


As an aside, I also looked through the variety of colorschemes that I have collected over the years. And it turns out that 100 percent of them are unkind to colorblind users, with the exception of the monotone or monochromatic schemes (which are equal in the Harrison Bergeron sense).

We should do better.


Seeing color shouldn’t feel like a superpower

In the last month, I have found myself pair programming with three different people. All three times involved working on the LMFDB. I rarely pair program outside a mentor-mentee or instructor-student situation.1

This is fun. It's fun seeing other people's workflows. (In these cases, the other person was usually the one at the keyboard and typing, and I was backseat driving.) I live in the terminal and subscribe to the general Unix-is-my-IDE philosophy: vim is my text editor; a mixture of makefiles, linters, and fifos with tmux handles automated building, testing, and linting; git does source control; and a medium-sized but consistently growing set of homegrown bash/python/c tools and scripts makes it all fun and makes it work how I want.

I’m distinctly interested in seeing tools other people have made for their own workflows. Those scripts that aren’t polished, but get the work done. There is a whole world of git-hooks and aliases that amaze me.

But my recent encounters with pair programming exposed me to a totally different and unexpected experience: two of my programming partners were color blind.2

At first, I didn’t think much of it. I had thought that you might set some colorblind-friendly colorschemes, and otherwise configure your way around it. But as is so often the case with accessibility problems, I underestimated both the number of challenges and the difficulty in solving them (lousy but true aside: most companies almost completely ignore problems with accessibility).

I first noticed differences while trying to fix bugs and review bugfixes in the LMFDB. We use Travis CI for automated testing, and we were examining a build that had failed. We brought up the Travis CI interface and scrolled through the log. I immediately pointed out the failure, since I saw something like this.3

an image from Travis CI showing "FAIL" in red and "PASS" in green.
"How do you know something failed?" asks John, my partner for the day. "Oh, it's because the output is colored, isn't it? I didn't know." With the help of the color-blindness.com color-blindness simulator, I now see that John saw something like
an image from Travis CI that has been altered to simulate how a red-green colorblind person might see it. Now "FAIL" and "PASS" appear in essentially the same shade.
With red-green colorblindness, there is essentially no difference in the shades of PASSED and FAILED. That’s sort of annoying.

We'd make a few changes and then rerun some tests. Now we were running tests in a terminal, and the test logs were scrolling by. We were chatting about emacs wizardry (or c++ magic, or compiler differences between gcc and clang, or something), and I pointed out that we could stop the tests, since three tests had already failed.

He stared at me, a bit dumbfounded. It was like I had superpowers. I could recognize failures almost without paying any attention, since flashes of red stand out.

But if you don’t recognize differences in color, how would you even know that the terminal outputs different colors for PASSED and FAILED? (We use pytest, which does). A quick look for different colorschemes led to frustration, as there are different sorts of colorblindness and no single solution that will work for everyone (and changing colorschemes is sort of annoying anyway).4

I should say that the Travis team has made some accessibility improvements for colorblind users in the past. The build-passing and build-failing icons used to be little circles that were red or green, as shown here.

That means the build status was effectively invisible to colorblind users. After an issue was raised and discussed, they moved to the current green-checkmark-circle for passing and red-exed-circle for failing, which is a big improvement.

The colorscheme used for Travis CI's online logs is based on the nord color palette, and there is no colorscheme-switching option. It's a beautiful and well-researched theme for me, but not for everybody.

The colors on the page are controllable by CSS, but not in a uniform way that works on many sites. (Or at least, not to my knowledge. I would be interested if someone else knew more about this and knew a generic approach. The people I was pair-programming with didn’t have a good solution to this problem).

Should you really need to write your own solution to every colorblind accessibility problem?

In the next post, I’ll give a (lousy but functional) bookmarklet that injects CSS into the page to see Travis CI FAILs immediately.


Advent of Code: Day 4

This is a very short post in my collection working through this year’s Advent of Code challenges. Unlike the previous ones, this has no mathematical comments, as it was a very short exercise. This notebook is available in its original format on my github.

Day 4: High Entropy Passphrases

Given a list of strings, determine how many strings have no duplicate words.

This is a classic problem, and it’s particularly easy to solve this in python. Some might use collections.Counter, but I think it’s more straightforward to use sets.

The key idea is that the set of words in a sentence will not include duplicates. So if taking the set of a sentence reduces its length, then there was a duplicate word.

In [1]:
with open("input.txt", "r") as f:
    lines = f.readlines()
    
def count_lines_with_unique_words(lines):
    '''Count the lines whose words are all distinct.'''
    num_pass = 0
    for line in lines:
        s = line.split()
        if len(s) == len(set(s)):
            num_pass += 1
    return num_pass

count_lines_with_unique_words(lines)
Out[1]:
455

I think this is the first day where I would have had a shot at the leaderboard if I’d been gunning for it.

Part 2

Let’s add in another constraint. Determine how many strings have no duplicate words, even after anagramming. Thus the string

abc bac

is not valid, since the second word is an anagram of the first. There are many ways to tackle this as well, but I will handle anagrams by sorting the letters in each word first, and then running the bit from part 1 to identify repeated words.

In [2]:
with open("input.txt", "r") as f:
    lines = f.readlines()
    
# Sort the letters within each word, so that anagrams become identical strings.
sorted_lines = []
for line in lines:
    sorted_line = ' '.join([''.join(l) for l in map(sorted, line.split())])
    sorted_lines.append(sorted_line)

sorted_lines[:2]
    
Out[2]:
['bddjjow acimrv bcjjm anr flmmos fiosv',
 'bcmnoxy dfinyzz dgmp dfgioy hinrrv eeklpuu adgpw kqv']
In [3]:
count_lines_with_unique_words(sorted_lines)
Out[3]:
186

Advent of Code: Day 2

This is the second notebook in my posts on the Advent of Code challenges. This notebook in its original format can be found on my github.

Day 2: Corruption Checksum, part I

You are given a table of integers. Find the difference between the maximum and minimum of each row, and add these differences together.

There is not a lot to say about this challenge. The plan is to read the file linewise, compute the difference on each line, and sum them up.

In [1]:
with open("input.txt", "r") as f:
    lines = f.readlines()
lines[0]
Out[1]:
'5048\t177\t5280\t5058\t4504\t3805\t5735\t220\t4362\t1809\t1521\t230\t772\t1088\t178\t1794\n'
In [2]:
l = lines[0]
l = l.split()
l
Out[2]:
['5048',
 '177',
 '5280',
 '5058',
 '4504',
 '3805',
 '5735',
 '220',
 '4362',
 '1809',
 '1521',
 '230',
 '772',
 '1088',
 '178',
 '1794']
In [3]:
def max_minus_min(line):
    '''Compute the difference between the largest and smallest integer in a line'''
    line = list(map(int, line.split()))
    return max(line) - min(line)

def sum_differences(lines):
    '''Sum the value of `max_minus_min` for each line in `lines`'''
    return sum(max_minus_min(line) for line in lines)
In [4]:
testcase = ['5 1 9 5','7 5 3', '2 4 6 8']
assert sum_differences(testcase) == 18
In [5]:
sum_differences(lines)
Out[5]:
58975

Mathematical Interlude

In line with the first day’s challenge, I’m inclined to ask what we should “expect.” But what we should expect is not well-defined in this case. Let us rephrase the problem in a randomized sense.

Suppose we are given a table, $n$ lines long, where each line consists of $m$ elements, each of which is a uniformly randomly chosen integer from $1$ to $10$. We might ask for the expected value of this operation (summing the differences between the maximum and minimum of each row) on such a table. What should we expect?

As each line is independent of the others, and expectation is linear, we are really asking for the expected value across a single row (the total is just $n$ times that). So given $m$ integers uniformly randomly chosen from $1$ to $10$, what is the expected value of the maximum, and what is the expected value of the minimum?

 

Expected Minimum

Let's begin with the minimum. The minimum is $1$ unless all the integers are at least $2$. This has probability
$$ 1 - \left( \frac{9}{10} \right)^m = \frac{10^m - 9^m}{10^m}$$
of occurring. We rewrite it as the version on the right for reasons that will soon be clear.
The minimum is $2$ if all the integers are at least $2$ (which can occur in $9$ different ways for each integer), but not all the integers are at least $3$ (each integer has $8$ different ways of being at least $3$). Thus this has probability
$$ \frac{9^m - 8^m}{10^m}.$$
Continuing to do one more for posterity, the minimum is $3$ if all the integers are at least $3$ (each integer has $8$ different ways of being at least $3$), but not all integers are at least $4$ (each integer has $7$ different ways of being at least $4$). Thus this has probability

$$ \frac{8^m - 7^m}{10^m}.$$

And so on.

Recall that the expected value of a random variable is

$$ E[X] = \sum x_i P(X = x_i),$$

so the expected value of the minimum is

$$ \frac{1}{10^m} \big( 1(10^m - 9^m) + 2(9^m - 8^m) + 3(8^m - 7^m) + \cdots + 9(2^m - 1^m) + 10(1^m - 0^m)\big).$$

This simplifies nicely to

$$ \sum_{k = 1}^{10} \frac{k^m}{10^m}, $$

which one can see either by expanding the telescoping sum, or by noting that $E[\min] = \sum_{j = 1}^{10} P(\min \geq j)$ with $P(\min \geq j) = \left( \frac{11 - j}{10} \right)^m$.

Expected Maximum

The same style of thinking shows that the expected value of the maximum is

$$ \frac{1}{10^m} \big( 10(10^m - 9^m) + 9(9^m - 8^m) + 8(8^m - 7^m) + \cdots + 2(2^m - 1^m) + 1(1^m - 0^m)\big).$$

This simplifies to

$$ \frac{1}{10^m} \big( 10 \cdot 10^m - 9^m - 8^m - \cdots - 2^m - 1^m \big) = 10 - \sum_{k = 1}^{9} \frac{k^m}{10^m}.$$

Expected Difference

Subtracting, we find that the expected difference is

$$ 9 - 2\sum_{k = 1}^{9} \frac{k^m}{10^m}. $$

From this we can compute the expected difference for each list length $m$. It is good to note that as $m \to \infty$, the expected value tends to $9$. Does this make sense? Yes: when there are lots of values, we should expect one of them to be a $10$ and one to be a $1$. It's also pretty straightforward to see how to extend this to integers chosen from $1$ to $N$.
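As a quick sanity check, here is a minimal sketch that evaluates this formula (generalized to integers from $1$ to $N$) and compares it against a simulation; the function names are just illustrative, not part of the original notebook.

import random

def expected_difference(m, N=10):
    """Expected value of max - min for m integers drawn uniformly from 1..N,
    using the formula (N - 1) - 2 * sum_{k=1}^{N-1} (k/N)^m derived above."""
    return (N - 1) - 2 * sum((k / N) ** m for k in range(1, N))

def simulated_difference(m, N=10, trials=100000):
    """Monte Carlo estimate of the same expectation, as a sanity check."""
    total = 0
    for _ in range(trials):
        row = [random.randint(1, N) for _ in range(m)]
        total += max(row) - min(row)
    return total / trials

for m in (2, 4, 8, 16):
    print(m, round(expected_difference(m), 3), round(simulated_difference(m), 3))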

Looking at the data, it does not appear that the integers were randomly chosen. Instead, there are very many relatively small integers and some relatively large integers. So we shouldn't expect this toy analysis to accurately model this problem, since the distribution is definitely not uniformly random.
But we can try it out anyway.



Comments vs documentation in python

In my first programming class we learned python. It went fine (I thought), I got the idea, and I moved on (although I do now see that much of what we did was not ‘pythonic’).

But now that I'm returning to programming (mostly in python), I see that I did much of it wrong. One of my biggest surprises was how wrong I was about comments. Too many comments are terrible. Redundant comments make code harder to maintain. If the code is too complex to understand without comments, it's probably just bad code.
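A contrived illustration of the difference, just for the sake of example:

def count_passing(lines):
    # Redundant style: each comment just restates the code.
    n = 0  # set n to zero
    for line in lines:  # loop over the lines
        if "PASS" in line:  # check whether "PASS" is in the line
            n = n + 1  # add one to n
    return n  # return n

def count_passing_clearer(lines):
    """Count the lines marked as passing."""
    # No line-by-line comments needed: the name and docstring say what it does,
    # and the code is simple enough to show how.
    return sum(1 for line in lines if "PASS" in line)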

That's not so hard, right? You read some others' code, see their comment conventions, and move on. You sort of get into zen moments where the code becomes very clear and commentless, and there is bliss. But then we were at a puzzle competition, and someone wanted to use a piece of code I'd written. Sure, no problem! And the source was so beautifully clear that it hardly needed a single comment to understand.

But they didn't want to read the code. They wanted to use the code. Comments are very different from documentation. The realization struck me, and again I had it all wrong. In hindsight, it seems so obvious! I've programmed in Java; I know about javadoc. But no one had ever actually wanted to use my code before (and they wouldn't have been able to easily if they had)!

Enter pydoc and sphinx. These are tools that generate HTML API documentation from docstrings in the code itself. There is a cost: docstrings with specific formatting have to be written below each method or class. But it's a reasonable one, I think.
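For instance, a docstring in the reStructuredText field-list style that sphinx understands might look something like this (using a function like the one from the Advent of Code post above):

def max_minus_min(line):
    """Compute the difference between the largest and smallest integer in a line.

    :param line: a whitespace-separated string of integers.
    :type line: str
    :returns: the difference between the maximum and minimum integers in `line`.
    :rtype: int
    """
    values = list(map(int, line.split()))
    return max(values) - min(values)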

Pydoc ships with python, and is fine for single modules or small projects. But for large projects, you’ll need something more. In fact, even the documentation linked above for pydoc was generated with sphinx:

The pydoc documentation is 'Created using Sphinx 1.0.7.'

This isn’t to say that pydoc is bad. But I didn’t want to limit myself. Python uses sphinx, so I’ll give it a try too.

And thus I have (slightly excessively) added documentation to my solutions to Project Euler on my github. The current documentation looks alright, and will hopefully look better as I get the hang of it.

Full disclosure: this was originally going to be a post on setting up sphinx-generated documentation on GitHub Pages automatically. I got sidetracked; that will be the next post.
