Category Archives: Python

Choosing functions and generating figures for “When are there continuous choices for the mean value abscissa?”

In my previous note, I described some of the main ideas behind the paper “When are there continuous choices for the mean value abscissa?”, which I wrote jointly with Miles Wheeler. In this note, I discuss the process behind generating the functions and figures in our paper.

Our figures came in two steps: we first need to choose which functions to plot; then we need to figure out how to graphically solve the general mean value abscissa problem for them.

Afterwards, we can decide how to plot these functions well.

Choosing the right functions to plot

The first goal is to find the right functions to plot. From the discussion in our paper, this amounts to specifying certain local conditions of the function. And for a first pass, we only used these prescribed local conditions.

The idea is this: to study solutions to the mean value problem, we look at the zeroes of the function $$ F(b, c) = \frac{f(b) - f(a)}{b - a} - f'(c). $$ When $F(b, c) = 0$, we see that $c$ is a mean value abscissa for $f$ on the interval $(a, b)$.

By the implicit function theorem, we can solve for $c$ as a function of $b$ around a given solution $(b_0, c_0)$ if $F_c(b_0, c_0) \neq 0$. For this particular function, $F_c(b_0, c_0) = -f''(c_0)$.
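As a quick sanity check (a sketch of ours, not from the paper), we can have sympy confirm this computation for a generic $f$:

import sympy
from sympy.abc import a, b, c

f = sympy.Function('f')

# F(b, c) = (f(b) - f(a)) / (b - a) - f'(c)
F = (f(b) - f(a)) / (b - a) - sympy.diff(f(c), c)

# the c-derivative of F is -f''(c); sympy prints -Derivative(f(c), (c, 2))
print(sympy.diff(F, c))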

More generally, it turns out that the order of vanishing of $f'$ at $b_0$ and $c_0$ governs the local behaviour of solutions in a neighborhood of $(b_0, c_0)$.

To make figures, we thus need to make functions with prescribed orders of vanishing of $f'$ at points $b_0$ and $c_0$, where $c_0$ is itself a mean value abscissa for the interval $(a_0, b_0)$.

Without loss of generality, it suffices to consider the case when $f(a_0) = f(b_0) = 0$, as otherwise we can study the function $$
g(x) = f(x) - \left( \frac{f(b_0) - f(a_0)}{b_0 - a_0}(x - a_0) + f(a_0) \right),
$$
which has $g(a_0) = g(b_0) = 0$, and the triples $(a, b, c)$ that solve the mean value problem for $f$ are exactly those that solve it for $g$.
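This normalization is easy to carry out symbolically. Here is a small sketch (the helper name is ours) in the same sympy style as the code below:

import sympy
from sympy.abc import X

def normalize(f, a0, b0):
    """Subtract the secant line through (a0, f(a0)) and (b0, f(b0)),
    giving g with g(a0) = g(b0) = 0."""
    fa, fb = f.subs(X, a0), f.subs(X, b0)
    return sympy.expand(f - ((fb - fa) / (b0 - a0) * (X - a0) + fa))

For example, normalize(X**2, 0, 3) returns X**2 - 3*X, which vanishes at both endpoints.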

And for consistency, we made the arbitrary decisions to have $a_0 = 0$, $b_0 = 3$, and $c_0 = 1$. This decision simplified many of the plotting decisions, as the important points were always $0$, $1$, and $3$.

A first idea

Thus the first task is to be able to generate functions $f$ such that:

  1. $f(0) = 0$,
  2. $f(3) = 0$,
  3. $f'(1) = 0$ (so that $1$ is a mean value abscissa), and
  4. $f'(x)$ has prescribed order of vanishing at $1$, and
  5. $f'(x)$ has prescribed order of vanishing at $3$.

These conditions can all be met by an appropriate interpolating polynomial. As we are setting conditions on both $f$ and its derivatives at multiple points, this amounts to the fundamental problem in Hermite interpolation. Alternatively, this amounts to using Taylor’s theorem at multiple points and then using the Chinese Remainder Theorem over $\mathbb{R}[x]$ to combine these polynomials together.

There are clever ways of solving this, but the task is so small that it doesn’t require cleverness. In fact, what follows is one of the laziest solutions we could think of. We know that given $n$ Hermite conditions, there is a unique polynomial of degree $n - 1$ that interpolates these conditions. Thus we

  1. determine the degree of the polynomial,
  2. create a degree $n-1$ polynomial with variable coefficients in sympy,
  3. have sympy symbolically compute the relations the coefficients must satisfy,
  4. ask sympy to solve this symbolic system of equations.

In code, this looks like

import sympy
from sympy.abc import X, B, C, D    # Establish our variable names
def interpolate(conds):
    """
    Finds the polynomial of minimal degree that solves the given Hermite conditions.

    conds is a list of the form
      [(x1, r1, v1), (x2, r2, v2), ...]
    where the polynomial p is to satisfy p^(r_1) (x_1) = v_1, and so on.
    """
    # the degree will be one less than the number of conditions
    n = len(conds)

    # generate a symbol for each coefficient
    A = [sympy.Symbol("a[%d]" % i) for i in range(n)]

    # generate the desired polynomial symbolically
    P = sum([A[i] * X**i for i in range(n)])

    # generate the equations the polynomial must satisfy
    #
    # for each (x, r, v), sympy evaluates the rth derivative of P wrt X,
    # substitutes x in for X, and requires that this equals v.
    EQNS = [sympy.diff(P, X, r).subs(X, x) - v for x, r, v in conds]

    # solve this system for the coefficients A[0], ..., A[n-1]
    SOLN = sympy.solve(EQNS, A)

    return P.subs(SOLN)

We note that we use the convention that a sympy symbol for something is capitalized. For example, we think of the polynomial as being represented by $$
p(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_{n-1} x^{n-1},
$$
since $n$ conditions determine a polynomial of degree $n - 1$.
In sympy variables, we think of this as

P = A[0] + A[1] * X + A[2] * X**2 + ... + A[n-1] * X**(n-1)

With this code, we can ask for the unique degree 1 polynomial which is $1$ at $1$, and whose first derivative is $2$ at $1$.

> interpolate([(1, 0, 1), (1, 1, 2)])
2*X - 1

Indeed, $2x - 1$ is this polynomial.

Too rigid

We have now produced a minimal Hermite solver. But there is a major downside: the unique polynomial exhibiting the necessary behaviours we required is essentially never a good didactic example. We don’t just want plots — we want beautiful, simple plots.

Add knobs to turn

We add two conditions for additional control, and hopefully for additional simplicity of the resulting plot.

Firstly, we added the additional constraint that $f(1) = 1$. The value is arbitrary, but it’s a small prescribed value, so now all three points of interest fit within a $[0, 3] \times [0, 3]$ box.

Secondly, we also allow the choice of the value of the first nonvanishing derivatives at $1$ and $3$. In reality, we treat these as parameters to change the shape of the resulting graph. Roughly speaking, if the order of vanishing of $f(x) - f(1)$ at $1$ is $k$, then near $1$ we have the approximation $f(x) \approx f(1) + f^{(k)}(1)(x - 1)^k/k!$. Morally, the larger the value of that derivative, the more the graph will resemble a scaled $(x - 1)^k$ near that point.

In code, we implemented this by writing functions that add the necessary Hermite conditions to the input of interpolate.

# We fix the values of a0, b0, c0.
a0 = 0
b0 = 3
c0 = 1

# We require p(a0) = 0, p(b0) = 0, p(c0) = 1, p'(c0) = 0.
BASIC_CONDS = [(a0, 0, 0), (b0, 0, 0), (c0, 0, 1), (c0, 1, 0)]

def c_degen(n, residue):
    """
    Give Hermite conditions for order of vanishing at c0 equal to `n`, with
    first nonzero residue `residue`.

    NOTE: the order `n` is in terms of f', not of f. That is, this is the amount
    of additional degeneracy to add.  This may be a source of off-by-one errors.
    """
    return [(c0, 1 + i, 0) for i in range(1, n + 1)] + [(c0, n + 2, residue)]


def b_degen(n, residue):
    """
    Give Hermite conditions for order of vanishing at b0 equal to `n`, with
    first nonzero residue `residue`.
    """
    return [(b0, i, 0) for i in range(1, n + 1)] + [(b0, n + 1, residue)]

def poly_with_degens(nc=0, nb=0, residue_c=3, residue_b=3):
    """
    Give unique polynomial with given degeneracies for this MVT problem.

    `nc` is the order of vanishing of f' at c0, with first nonzero residue `residue_c`.
    `nb` is the order of vanishing of f' at b0, with first nonzero residue `residue_b`.
    """
    conds = BASIC_CONDS + c_degen(nc, residue_c) + b_degen(nb, residue_b)
    return interpolate(conds)

Then apparently the unique degree $5$ polynomial $f$ with $f(0) = f(3) = f'(1) = 0$, $f(1) = 1$, and $f''(1) = f'(3) = 3$ is given by

> poly_with_degens()
11*X**5/16 - 21*X**4/4 + 113*X**3/8 - 65*X**2/4 + 123*X/16
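It’s quick to double-check (using the definitions above) that this output satisfies each prescribed condition:

P = poly_with_degens()
assert P.subs(X, 0) == 0                     # f(0) = 0
assert P.subs(X, 3) == 0                     # f(3) = 0
assert P.subs(X, 1) == 1                     # f(1) = 1
assert sympy.diff(P, X).subs(X, 1) == 0      # f'(1) = 0
assert sympy.diff(P, X, 2).subs(X, 1) == 3   # f''(1) = 3
assert sympy.diff(P, X).subs(X, 3) == 3      # f'(3) = 3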

Too many knobs

In principle, this is a great solution. And if you turn the knobs enough, you can get a really nice picture. But the problem with this system (and with many polynomial interpolation problems) is that when you add conditions, you can introduce many jagged peaks and sudden changes. These can behave somewhat unpredictably and chaotically — small changes in Hermite conditions can lead to drastic changes in resulting polynomial shape.

What we really want is for the interpolator to give a polynomial that doesn’t have sudden changes.

Minimize change

The problem: the polynomial can have really rapid changes that makes the plots look bad.

The solution: minimize the polynomial’s change.

That is, if $f$ is our polynomial, then its rate of change at $x$ is $f'(x)$. Our idea is to “minimize” the average size of the derivative $f'$ — this should help keep the function in frame. There are many ways to do this, but we want to choose one that fits into our scheme (so that it requires as little additional work as possible) but which works well.

We decide that we want to focus our graphs on the interval $(0, 4)$. Then we can measure the average size of the derivative $f'$ by its L2 norm on $(0, 4)$: $$ L2(f) = \int_0^4 (f'(x))^2 \, dx. $$

We add an additional Hermite condition of the form (pt, order, VAL) and think of VAL as an unknown symbol. We arbitrarily decided to start with $pt = 2$ (so that the behavior at each of the points $0, 1, 2, 3$ is now controlled in some way) and $order = 1$. The point itself doesn’t matter very much, since we’re going to minimize over the family of polynomials that interpolate the other Hermite conditions with one degree of freedom.

In other words, we are adding in the condition that $f'(2) = VAL$ for an unknown VAL.

We will have sympy compute the interpolating polynomial through its normal set of (explicit) conditions as well as the symbolic condition (2, 1, VAL). Then $f = f(\mathrm{VAL}; x)$.

Then we have sympy compute the (symbolic) L2 norm of the $x$-derivative of this polynomial on the interval $(0, 4)$, as a function of VAL: $$L2(\mathrm{VAL}) = \int_0^4 f'(\mathrm{VAL}; x)^2 \, dx.$$

Finally, to minimize the L2 norm, we have sympy compute the derivative of $L2(\mathrm{VAL})$ with respect to VAL and find the critical points, where this derivative is equal to $0$. We choose the first one to give our value of VAL.1

In code, this looks like

def smoother_interpolate(conds, ctrl_point=2, order=1, interval=(0,4)):
    """
    Find the polynomial of minimal degree that interpolates the Hermite
    conditions in `conds`, and whose behavior at `ctrl_point` minimizes the L2
    norm on `interval` of its derivative.
    """
    # Add the symbolic point to the conditions.
    # Recall that D is a sympy variable
    new_conds = conds + [(ctrl_point, order, D)]

    # Find the polynomial interpolating `new_conds`, symbolic in X *and* D
    P = interpolate(new_conds)

    # Compute L2 norm of the derivative on `interval`
    L2 = sympy.integrate(sympy.diff(P, X)**2, (X, *interval))

    # Take the first critical point of the L2 norm with respect to D;
    # the norm is quadratic in D, so this critical point is the minimizer
    SOLN = sympy.solve(sympy.diff(L2, D), D)[0]

    # Substitute the minimizing solution in for D and return
    return P.subs(D, SOLN)


def smoother_poly_with_degens(nc=0, nb=0, residue_c=3, residue_b=3):
    """
    Give unique polynomial with given degeneracies for this MVT problem whose
    derivative on (0, 4) has minimal L2 norm.

    `nc` is the order of vanishing of f' at c0, with first nonzero residue `residue_c`.
    `nb` is the order of vanishing of f' at b0, with first nonzero residue `residue_b`.

    """
    conds = BASIC_CONDS + c_degen(nc, residue_c) + b_degen(nb, residue_b)
    return smoother_interpolate(conds)

Then apparently the degree $6$ polynomial $f$ with $f(0) = f(3) = f'(1) = 0$, $f(1) = 1$, $f''(1) = f'(3) = 3$, and minimal L2 derivative norm on $(0, 4)$ is given by

> smoother_poly_with_degens()
-9660585*X**6/33224848 + 27446837*X**5/8306212 - 232124001*X**4/16612424
  + 57105493*X**3/2076553 - 858703085*X**2/33224848 + 85590321*X/8306212

> sympy.N(smoother_poly_with_degens())
-0.290763858423069*X**6 + 3.30437472580762*X**5 - 13.9729157526921*X**4
  + 27.5001374874612*X**3 - 25.8452073279613*X**2 + 10.3043747258076*X

Is it much better? Let’s compute the L2 norms.

> interval = (0, 4)
> sympy.N(sympy.integrate(sympy.diff(poly_with_degens(), X)**2, (X, *interval)))
1865.15411706349

> sympy.N(sympy.integrate(sympy.diff(smoother_poly_with_degens(), X)**2, (X, *interval)))
41.1612799050325

That’s beautiful. And you know what’s better? Sympy did all the hard work.

For comparison, we can produce a basic plot using numpy and matplotlib.

import matplotlib.pyplot as plt
import numpy as np

def basic_plot(F, n=300):
    fig = plt.figure(figsize=(6, 2.5))
    ax = fig.add_subplot(1, 1, 1)
    b1d = np.linspace(-.5, 4.5, n)
    f = sympy.lambdify(X, F)(b1d)
    ax.plot(b1d, f, 'k')
    ax.set_aspect('equal')
    ax.grid(True)
    ax.set_xlim([-.5, 4.5])
    ax.set_ylim([-1, 5])
    # mark the three points of interest (a0, c0, b0)
    ax.plot([0, c0, b0], [0, F.subs(X, c0), F.subs(X, b0)], 'ko')
    fig.savefig("basic_plot.pdf")

Then the plot of poly_with_degens() is given by

[figure: basic plot of poly_with_degens()]

The polynomial jumps upwards immediately and strongly for $x > 3$.

On the other hand, the plot of smoother_poly_with_degens() is given by

[figure: basic plot of smoother_poly_with_degens()]

This stays in frame between $0$ and $4$, as desired.

Choose data to highlight and make the functions

This was enough to generate the functions for our paper. Actually, the three functions (in a total of six plots) in figures 1, 2, and 5 in our paper were hand chosen and hand-crafted for didactic purposes: the first two functions are simply a cubic and a quadratic with certain points labelled. The last function was the non-analytic-but-smooth semi-pathological counterexample, and so cannot be created through polynomial interpolation.

But the four functions highlighting different degenerate conditions in figures 3 and 4 were each created using this L2-minimizing interpolation system.

In particular, the function in figure 3 is

F3 = smoother_poly_with_degens(nc=1, residue_b=-3)

which is one of the simplest L2-minimizing polynomials with the typical Hermite conditions, $f''(c_0) = 0$, and the opposite of the default sign of $f'(b_0)$.

The three functions in figure 4 are (from left to right)

F_bmin = smoother_poly_with_degens(nc=1, nb=1, residue_c=10, residue_b=10)
F_bzero = smoother_poly_with_degens(nc=1, nb=2, residue_c=-20, residue_b=20)
F_bmax = smoother_poly_with_degens(nc=1, nb=1, residue_c=20, residue_b=-10)

We chose much larger residues because the goal of the figure is to highlight how the local behavior at those points corresponds to the behavior of the mean value abscissae, and larger residues make those local behaviors more dominant.

Plotting all possible mean value abscissae

Now that we can choose our functions, we want to figure out how to find all solutions of the mean value condition $F(b, c) = 0$, where $$
F(b, c) = \frac{f(b) - f(a_0)}{b - a_0} - f'(c).
$$
Here I write $a_0$ as it’s fixed, while both $b$ and $c$ vary.

Our primary interest in these solutions is to facilitate graphical experimentation and exploration of the problem — we want these pictures to help build intuition and provide examples.

Although this may seem harder, it is actually a much simpler problem. The function $F(b, c)$ is continuous (and roughly as smooth as $f$ is).

Our general idea is a common approach for this sort of problem:

  1. Compute the values of $F(b, c)$ on a tight mesh (or grid) of points.
  2. Restrict attention to the domain where solutions are meaningful.
  3. Plot the contour of the $0$-level set.

Contours can be well-approximated from a tight mesh. In short, if two adjacent points in the mesh have computed values of opposite signs, then by continuity $F(b, c) = 0$ somewhere between them. For a tight enough mesh, good plots can be made.
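The principle is easiest to see in one dimension. This small sketch (ours, for illustration) flags the zero crossings of a sampled function by looking for sign changes between neighboring values:

import numpy as np

xs = np.linspace(0, 4, 100)
vals = np.cos(xs)   # stand-in for a row of computed F values

# indices where adjacent samples have opposite signs
crossings = np.nonzero(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]
print(xs[crossings])   # approximately pi/2, the only zero of cos on (0, 4)

matplotlib’s contour routine performs the two-dimensional analogue of this, with linear interpolation between mesh points.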

To solve this, we again have sympy create and compute the function for us. We use numpy to generate the mesh (and to vectorize the computations, although this isn’t particularly important in this application), and matplotlib to plot the resulting contour.

Before giving code, note that the symbol F in the sympy code below stands for what we have been mathematically referring to as $f$, and not $F$. This is a potential confusion from our sympy-capitalization convention. It is still necessary to have sympy compute $F$ from $f$.

In code, this looks like

import sympy
import numpy as np
import matplotlib.pyplot as plt

def abscissa_plot(F, n=300):
    # Compute the derivative of f
    DF = sympy.diff(F, X)

    # Define CAP_F --- "capital F"
    #
    # this is (f(b) - f(0))/(b - 0) - f'(c).
    CAP_F = (F.subs(X, B) - F.subs(X, 0)) / (B - 0) - DF.subs(X, C)

    # build the mesh
    b1d = np.linspace(-.5, 4.5, n)
    b2d, c2d = np.meshgrid(b1d, b1d)

    # compute CAP_F within the mesh
    cap_f_mesh = sympy.lambdify((B, C), CAP_F)(b2d, c2d)

    # restrict attention to below the diagonal --- we require c < b
    # (although the mask inequality looks reversed in this perspective)
    valid_cap_f_mesh = np.ma.array(cap_f_mesh, mask=c2d > b2d)

    # Set up plot basics
    fig = plt.figure(figsize=(6, 2.5))
    ax = fig.add_subplot(1, 1, 1)
    ax.set_aspect('equal')
    ax.grid(True)
    ax.set_xlim([-.5, 4.5])
    ax.set_ylim([-.5, 4.5])

    # plot the contour
    ax.contour(b2d, c2d, valid_cap_f_mesh, [0], colors='k')

    # plot a diagonal line representing the boundary c = b
    ax.plot(b1d, b1d, 'k--')

    # plot the guaranteed point (b0, c0)
    ax.plot(b0, c0, 'ko')

    fig.savefig("abscissa_plot.pdf")

Then plots of solutions to $F(b, c) = 0$ for our basic polynomials are given by

[figure: abscissa plot for poly_with_degens()]

for poly_with_degens(), while for smoother_poly_with_degens() we get

[figure: abscissa plot for smoother_poly_with_degens()]

And for comparison, we can now create a (slightly worse looking) version of the plots in figure 3.

F3 = smoother_poly_with_degens(nc=1, residue_b=-3)
basic_plot(F3)
abscissa_plot(F3)

This produces the two plots

[figures: basic plot and abscissa plot for F3]

For comparison, a (slightly scaled) version of the actual figure appearing in the paper is

[figure: the corresponding figure from the paper]

Copy of the code

A copy of the code used in this note (and correspondingly the code used to generate the functions for the paper) is available on my github as an ipython notebook.

Posted in Expository, Math.CA, Mathematics, Programming, Python, sagemath | 3 Comments

Writing a Python Script to be Used in Vim

It is no secret that I love vim. I’ve written and helped maintain a variety of vim plugins, but I’ve mostly restricted my attention to direct use of vimscript or to calling external tools and redirecting their output to a vim buffer.

Only recently did I learn how easy it is to use python to interact directly with vim objects through vim’s python interface. And it was so easy that this felt a bit like writing a quick shell script to facilitate some immediate task — it allows quick and dirty work.

In this note, I’ll review how I recently wrote a quick python function (to be used in vim) to facilitate conversion of python f-strings (which I love) to python 3.5 (which I also love, but which is admittedly a bit behind).

Of course the real reason is to remind future me just how easy this is.

Description of the Problem

In python 3.6+, one can use f-strings to format strings. This syntax looks like

x = 4
print(f"Python has at least {x} many ways to format strings now.")

I love them, they’re great. But they are not backportable since they are a change to fundamental syntax. Thus systems relying on previous versions of python (be it python2 or even python3.5) can’t parse python files containing f-strings.

The goal is this: given a line containing an f-string, write an equivalent line containing no f-string.

The formula I choose is simple: use .format() instead. Thus given

s = f"string with {some} {variables}"

we will convert this to

s = "string with {} {}".format(some, variables)

I will assume that the string takes place on one line for simplicity (and because in my application this was almost always true), but allow any number of variables in the line.

Background on python in vim

Some good insight is gained through reading :h python-vim. In fact, this is essentially the only documentation on the python-vim interface (though it is pretty good documentation — perhaps the actual best way to learn this interface is to read some plugins which leverage the python connection).

I will assume that vim has been compiled with python3 support (which can be checked by looking for +python3 in vim --version). Then the key idea is that one can use :py3 to call python functions. For example,

:py3 print("Hello")

Being restricted to one line is inconvenient, and it is far more useful to use the form

:py3 << EOF
def say_hello():
  print("Sup doc.")
EOF
:py3 say_hello()

In this way, we can define and use python functions. Right now, the output of these functions simply appears in the status lines at the bottom of the screen. This is of limited value. Often, we will really want to interact directly with the vim buffer. The key to doing this is the vim module for python.1

The vim module for python can be used through

import vim

# do vim things

This is what :h python-vim actually documents, and I defer to the list of methods there. The important thing is that it provides direct access to vim buffers.

Today, we will use only the direct access to the current line, vim.current.line.
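As a minimal first example (a throwaway of ours, using only the documented vim.current.line attribute), here is a python function that reverses the current line in place:

import vim

def reverse_current_line():
    # vim.current.line is an ordinary python string; assigning
    # to it rewrites the line in the buffer
    vim.current.line = vim.current.line[::-1]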

A Solution

I very routinely open scratch buffers (i.e. buffers that I call scratch.tmp). Sometimes I take momentary notes, or I edit vim macros in place, or I define vim functions for short-term use. I will assume that we have opened up a scratch buffer next to our python file withfstrings.py. (In fact, I usually open it in a vsplit). I also very routinely assign Q to do whatever convenient function I have written in my scratch buffer (or whatever macro I want to repeat most simply). Of course, these are my idiosyncratic habits — the ideas are easy enough to translate.

In our scratch buffer, we can write a vim function, which is internally a python function, and map the symbol Q to call it.

function! Convert()
py3 << EOF

import vim

import re
bracketed = re.compile("{.*?}")

def convert():
    line = vim.current.line
    # determine which quote character delimits the f-string
    single = line.find("'")
    double = line.find('"')
    if single == -1:
        sep = '"'
    elif double == -1 or single < double:
        sep = "'"
    else:
        sep = '"'

    # l sep inner sep post
    l, _, m = line.partition(sep)
    inner, _, post = m.partition(sep)

    # get rid of the f before the opening quote
    l = l[:-1]

    ret = ""
    var_names = bracketed.findall(inner)
    var_names = list(map(lambda x: x[1:-1], var_names))

    ret += l + sep + inner + sep
    ret += ".format("
    for var in var_names:
        ret += "{}={}, ".format(var, var)
    ret = ret[:-2]  # drop the trailing ", " (assumes at least one variable)
    ret += ")" + post
    vim.current.line = ret

convert()
EOF

endfunction

com! Conv call Convert()
nnoremap Q :Conv<CR>

To use, one sources the scratch buffer (either directly, like :source scratch.tmp, or through navigating to its split and using :so %). Then one can find a line containing an f-string and hit Q (or alternately use :Conv).

Why is this good?

Although nontrivial, writing a simple python script like this takes only a few minutes. And then converting f-strings became the least important part of my backporting project.2

And more broadly, this is such an easy process. Off-loading the mental burden of repeating annoying tasks is valuable. The most revelatory aspect of this experience for me is that it’s easy enough to allow throwaway functions for momentary ease.

In my recent application, I converted 482 f-strings using this function. A few of them were multiline strings and my (admittedly simple) function failed — and so I did those by hand. But it took me less than 10 minutes to write the function (maybe even less than 5, though my first function expected double quotes only). Overall, this was a great gain.

Keeping the Script

Having come this far, it’s worth considering how we might keep the script around if we wanted to.

One good approach is to separate the python from the vim (allowing, for instance, testing on the python itself) and have a thin vim wrapper. We’ll look at this in two possible levels of encapsulation.

  1. Separate the python from the vim.
  2. Make the function a plugin. (Presumably for slightly more meaningful functions than this one).

Separate the Files

Let’s separate the python from the vim. In this way we can approach the python as a proper little python script, merely waiting for a thin vim wrapper around it at the end.

We can separate the python out into the following file, convert_fstring.py.

# convert_fstring.py
import re
bracketed = re.compile("{.*?}")

def convert(line):
    single = line.find("'")
    double = line.find('"')
    if single == -1:
        sep = '"'
    elif double == -1 or single < double:
        sep = "'"
    else:
        sep = '"'

    # l sep inner sep post
    l, _, m = line.partition(sep)
    inner, _, post = m.partition(sep)

    # get rid of f
    l = l[:-1]

    ret = ""
    var_names = bracketed.findall(inner)
    var_names = list(map(lambda x: x[1:-1], var_names))

    ret += l + sep + inner + sep
    ret += ".format("
    for var in var_names:
        ret += "{}={}, ".format(var, var)
    ret = ret[:-2]
    ret += ")" + post
    return ret

if __name__ == "__main__":
    test_input = 'mystr = f"This {is} an {f}-string".reverse()'
    expected  = 'mystr = "This {is} an {f}-string".format(is=is, f=f).reverse()'
    assert expected == convert(test_input)

This is a raw python file. I included a mock unit-test (in case one was testing the function while writing it… I didn’t compose the function like this because I didn’t think about it, but in the future I probably will).

Now let’s write a thin vim wrapper around it.

" convert_string.vim

let s:script_dir = fnamemodify(resolve(expand('<sfile>:p')), ':h')

function! Convert()
py3 << EOF

import sys
import vim

script_dir = vim.eval('s:script_dir')
sys.path.insert(0, script_dir)

import convert_fstring

def convert():
  out = convert_fstring.convert(vim.current.line)
  vim.current.line = out

convert()
EOF

endfunction

com! Conv call Convert()
nnoremap Q :Conv<CR>

This is mostly pretty clear, with (I think) two exceptions concerning s:script_dir.

The problem comes from trying to import convert_fstring — it’s not in our PYTHONPATH, and the current working directory for python from within vim is a bit unpredictable. It would be possible to hard-code the current directory into the PYTHONPATH, but I think it is more elegant to have convert_fstring.vim add the location of convert_fstring.py to python’s path.3

There are two additions of note that address this import. The first is the line

let s:script_dir = fnamemodify(resolve(expand('<sfile>:p')), ':h')

This sets a script-local variable (the prefix s:) containing the directory from which the script was sourced. (This is where we use the assumption that the vim file is in the same directory as the python file. If they’re in different directories, then one must change the path addition to python appropriately).

The string <sfile> is populated automatically by vim to contain the filename of the sourced file. Expanding '<sfile>:p' returns the complete path of <sfile>. Calling resolve follows symlinks so that the path is absolute. And the final fnamemodify with the :h modifier returns the head, or rather the directory, of the file. Thus the result is an absolute path to the directory containing the script (and in this case also the python file).

Then in the python function, we add this location to the path through

script_dir = vim.eval('s:script_dir')
sys.path.insert(0, script_dir)

The first line has vim convert the vim variable into a python string, and the second inserts this directory into the PYTHONPATH.

I have a collection of loosely organized helper scripts in a (creatively named) ~/scripts directory. Some of these were written as pure python shellscripts that I’ve added some vim wrappers around.

Writing a Plugin

To reiterate, I consider this excessive for my sample application. But we’ve covered so many of the ideas that we should touch on it now.

Any plugin should follow Tim Pope’s Pathogen design model, with the ultimate goal of having a structure that looks like

myplugin/
  README.markdown
  LICENSE
  autoload/
    myplugin.vim
    ...
  plugin/
  ...

Pathogen (or Pathogen-compatible plugin managers, which as far as I know means all of them… like Vundle or vim-plug) will allow plugins to be installed in ~/.vim/bundle, and each plugin there will be added to vim’s rtp.4 Each plugin manager installs plugins (i.e. adds to the runtimepath) in its own way. For Vundle, I include an extra line in my .vimrc.

Suppose I want to package my script as a plugin called fstring_converter. Then I’ll create the directory structure

fstring_converter/
  ftplugin/
    python.vim
    convert_fstring.py

where python.vim is what I called convert_fstring.vim before. Placing fstring_converter in ~/.vim/bundle (or symlinking it, or installing it there through e.g. Vundle) will “install it”. Then opening a python file will load it, and one can either Conv a line or (as it’s currently written) use Q. (Although I think it’s a bit unkind and certainly unsustainable to create this mapping for users).

I note that one should generically use autoload/ when available, but I don’t touch that now.

As an additional note, vim does add some directories to the PYTHONPATH under the hood. For each directory in vim’s runtimepath, vim adds the subdirectory python3 (and also pythonx) to the python module search path. As a result, we could omit the explicit path addition to the python.vim (aka convert_fstring.vim) file if we organized the plugin like

fstring_converter/
  ftplugin/
    python.vim
  python3/
    convert_fstring.py

Regardless, this describes a few ways to make a vim plugin out of our python script.

Posted in Programming, Python, vim | Leave a comment

Extra comparisons in Python’s timsort

Reading through the comp.lang.python mailing list, I saw an interesting question concerning the behavior of the default sorting algorithm in python. This led to this post.

Python uses timsort, a clever hybrid sorting algorithm with ideas borrowing from merge sort and (binary) insertion sort. A major idea in timsort is to use the structure of naturally occurring runs (consecutive elements in the list that are either monotone increasing or monotone decreasing) when sorting.
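To make runs concrete, here is a simplified sketch (ours, not CPython’s actual implementation) of how the length of an initial run might be found:

def initial_run_length(lst):
    """Length of the longest initial monotone run in lst."""
    if len(lst) < 2:
        return len(lst)
    i = 1
    if lst[1] >= lst[0]:    # nondecreasing run
        while i < len(lst) - 1 and lst[i + 1] >= lst[i]:
            i += 1
    else:                   # decreasing run
        while i < len(lst) - 1 and lst[i + 1] < lst[i]:
            i += 1
    return i + 1

assert initial_run_length([10, 20, 5]) == 2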

Let’s look at the following simple list.

10, 20, 5

A simple sorting algorithm is insertion sort, which just advances through the list and inserts each number into the correct spot. More explicitly, insertion sort would

  1. Start with the first element, 10. As a list with one element, it is correctly sorted tautologically.
  2. Now consider the second element, 20. We insert this into the correct position in the already-sorted list of previous elements. Here, that just means that we verify that 20 > 10, and now we have the sorted sublist consisting of 10, 20.
  3. Now consider the third element, 5. We want to insert this into the correct position in the already-sorted list of previous elements. A naively easy way to do this is to either scan the list from the right or from the left, and insert into the correct place. For example, scanning from the right would mean that we compare 5 to the last element of the sublist, 20. As 5 < 20, we shift left and compare 5 to 10. As 5 < 10, we shift left again. As there is nothing left to compare against, we insert 5 at the beginning, yielding the sorted list 5, 10, 20.

How many comparisons did this take? This took 20 > 10, 5 < 20, and 5 < 10. This is three comparisons in total.

We can see this programmatically as well. Here is one implementation of insertion_sort, as described above.

def insertion_sort(lst):
    '''
    Sorts `lst` in-place. Note that this changes `lst`.
    '''
    for index in range(1, len(lst)):
        current_value = lst[index]
        position = index
        while position > 0 and lst[position - 1] > current_value:
            lst[position] = lst[position - 1]
            position = position - 1
        lst[position] = current_value

Let’s also create a simple Number class, which is just like a regular number, except that anytime a comparison is done it prints out the comparison. This will let us count the number of comparisons made.

class Number:
    def __init__(self, value):
        self.value = value

    def __str__(self):
        return str(self.value)

    def __repr__(self):
        return self.__str__()

    def __lt__(self, other):
        if self.value < other.value:
            print("{} < {}".format(self, other))
            return True
        print("{} >= {}".format(self, other))
        return False

    def __eq__(self, other):
        if self.value == other.value:
            print("{} = {}".format(self, other))
            return True
        return False

    def __gt__(self, other):
        return not ((self == other) or (self < other))

    def __le__(self, other):
        return (self < other) or (self == other)

    def __ge__(self, other):
        return not (self < other)

    def __ne__(self, other):
        return not (self == other)

With this class and function, we can run

lst = [Number(10), Number(20), Number(5)]
insertion_sort(lst)
print(lst)

which will print

10 < 20
20 >= 5
10 >= 5
[5, 10, 20]

These are the three comparisons we were expecting to see.

Returning to python’s timsort — what happens if we call python’s default sorting method on this list? The code

lst = [Number(10), Number(20), Number(5)]
lst.sort()
print(lst)

prints

20 >= 10
5 < 20
5 < 20
5 < 10
[5, 10, 20]

There are four comparisons! And weirdly, the method checks that 5 < 20 twice in a row. What’s going on there?1

At its heart, this was at the core of the thread on comp.lang.python. Why are there extra comparisons in cases like this?

Poking around the implementation of timsort taught me a little bit more about timsort.2

Timsort approaches this sorting task in the following way.

  1. First, timsort tries to identify how large the first run within the sequence is. So it keeps comparing terms until it finds one that is out of order. In this case, it compares 20 to 10 (finding that 20 > 10, and thus the run is increasing), and then compares 5 to 20 (finding that 5 < 20, and thus that 5 is not part of the same run as 10, 20). Now the run is identified, and there is one element left to incorporate.
  2. Next timsort tries to insert 5 into the already-sorted run. It is more correct to say that timsort attempts to do a binary insertion, since one knows already that the run is sorted.3 In this binary insertion, timsort will compare 5 with the middle of the already-sorted run 10, 20. But this is a list of length 2, so what is its middle element? It turns out that timsort takes the latter element, 20, in this case. As 5 < 20, timsort concludes that 5 should be inserted somewhere in the first half of the run 10, 20, and not in the second half.
  3. Of course, the first half consists entirely of 10. Thus the remaining comparison is to check that 5 < 10, and now the list is sorted.

We count4 all four of the comparisons. The doubled comparison is due to the two tasks of checking whether 5 is in the same run as 10, 20, and then of deciding through binary insertion where to place 5 in the smaller sublist of 10, 20.

Now that we’ve identified a doubled comparison, we might ask Why is it this way? Is this something that should change?

The short answer is it doesn’t really matter. A longer answer is that to apply this in general would cause additional comparisons to be made, since this applies in the case when the last element of the run agrees in value with the central value of the run (which may occur for longer lists if there are repeated values). Checking that this happens would probably either involve comparing the last element of the run with the central value (one extra comparison, so nothing is really saved anyway), or perhaps adding another data structure like a skip list (which seems sufficiently more complicated to not be worth the effort). Or it would only apply when sorting really short lists, in which case there isn’t much to worry about.

Learning a bit more about timsort made me realize that I could probably learn a lot by really understanding an implementation of timsort, or even a slightly simplified implementation. It’s a nice reminder that one can choose to optimize for certain situations or behaviors, and this might not cover all cases perfectly — and that’s ok.

Posted in Programming, Python | Leave a comment

Cracking Codes with Python: A Book Review

How do you begin to learn a technical subject?

My first experience in “programming” was following a semi-tutorial on how to patch the Starcraft exe in order to make it understand replays from previous versions. I was about 10, and I cobbled together my understanding from internet mailing lists and chatrooms. The documentation was awful and the original description was flawed, and to make it worse, I didn’t know anything about any sort of programming yet. But I trawled these lists and chatroom logs and made it work, and learned a few things. Each time Starcraft was updated, the old replay system broke completely and it was necessary to make some changes, and I got pretty good at figuring out what changes were necessary and how to perform these patches.

On the other hand, my first formal experience in programming was taking a course at Georgia Tech many years later, in which a typical activity would revolve around an exciting topic like concatenating two strings or understanding object polymorphism. These were dry topics presented to us dryly, but I knew that I wanted to understand what was going on and so I suffered the straight-faced-ness of the class and used the course as an opportunity to build some technical depth.

Now I recognize that these two approaches cover most first experiences learning a technical subject: a motivated survey versus monographic study. At the heart of the distinction is a decision to view and alight on many topics (but not delving deeply in most) or to spend as much time as is necessary to understand completely each topic (and hence to not touch too many different topics). Each has their place, but each draws a very different crowd.

The book Cracking Codes with Python: An Introduction to Building and Breaking Ciphers by Al Sweigart1 is very much a motivated flight through various topics in programming and cryptography, and not at all a deep technical study of any individual topic. A more accurate (though admittedly less beckoning) title might be An Introduction to Programming Concepts Through Building and Breaking Ciphers in Python. The main goal is to promote programmatical thinking by exploring basic ciphers, and the medium happens to be python.

But ciphers are cool. And breaking them is cool. And if you think you might want to learn something about programming and you might want to learn something about ciphers, then this is a great book to read.

Sweigart has a knack for writing approachable descriptions of what’s going on without delving into too many details. In fact, in some sense Sweigart has already written this book before: his other books Automate the Boring Stuff with Python and Invent your own Computer Games with Python are similarly survey material using python as the medium, though with different motivating topics.

Each chapter of this book is centered around exploring a different aspect of a cipher, and introduces additional programming topics to do so. For example, one chapter introduces the classic Caesar cipher, as well as the “if”, “else”, and “elif” conditionals (and a few other python functions). Another chapter introduces brute-force breaking the Caesar cipher (as well as string formatting in python).

In each chapter, Sweigart begins by giving a high-level overview of the topics in that chapter, followed by python code which accomplishes the goal of the chapter, followed by a detailed description of what each block of code accomplishes. Readers get to see fully written code that does nontrivial things very quickly, but on the other hand the onus of code generation is entirely on the book and readers may have trouble adapting the concepts to other programming tasks. (But remember, this is more survey, less technical description). Further, Sweigart uses a number of good practices in his code that implicitly encourages good programming behaviors: modules are written with many well-named functions and well-named variables, and sufficiently modularly that earlier modules are imported and reused later.

But this book is not without faults. As with any survey material, one can always disagree on what topics are or are not included. The book covers five classical ciphers (Caesar, transposition, substitution, Vigenere, and affine) and one modern cipher (textbook-RSA), as well as the write-backwards cipher (to introduce python concepts) and the one-time-pad (presented oddly as a Vigenere cipher whose key is the same length as the message). For some unknown reason, Sweigart chooses to refer to RSA almost everywhere as “the public key cipher”, which I find both misleading (there are other public key ciphers) and giving insufficient attribution (the cipher is implemented in chapter 24, but “RSA” appears only once as a footnote in that chapter. Hopefully the reader was paying lots of attention, as otherwise it would be rather hard to find out more about it).

Further, the choice of python topics (and their order) is sometimes odd. In truth, this book is almost language agnostic, and one could very easily adapt the presentation to any other language, such as C.

In summary, this book is an excellent resource for the complete beginner who wants to learn something about programming and wants to learn something about ciphers. After reading this book, the reader will be a mid-beginner student of python (knee-deep is apt) and well-versed in classical ciphers. Should the reader feel inspired to learn more python, then he or she would probably feel comfortable diving into a tutorial or reference for their area of interest (like Full Stack Python if interested in web dev, or Python for Data Analysis if interested in data science). Or he or she might dive into a more complete monograph like Dive into Python or the monolithic Learn Python. Many fundamental topics (such as classes and objects, list comprehensions, data structures or algorithms) are not covered, and so “advanced” python resources would not be appropriate.

Further, should the reader feel inspired to learn more about cryptography, then I recommend that he or she consider Cryptanalysis by Gaines, which is a fun book aimed at diving deeper into classical pre-computer ciphers, or the slightly heavier but still fun Codebreakers by Kahn. For much further cryptography, it’s necessary to develop a bit of mathematical maturity, which is its own hurdle.

This book is not appropriate for everyone. An experienced python programmer could read this book in an hour, skipping the various descriptions of how python works along the way. An experienced programmer who doesn’t know python could similarly read this book in a lazy afternoon. Both would probably do better reading either a more advanced overview of either cryptography or python, based on what originally drew them to the book.

Posted in Book Review, Programming, Python | Leave a comment

Hosting a Flask App on WebFaction on a Non-root Domain

Since I came to Warwick, I’ve been working extensively on the LMFDB, which uses python, sage, flask, and mongodb at its core. Thus I’ve become very familiar with flask. Writing a simple flask application is very quick and easy. So I thought it would be a good idea to figure out how to deploy a flask app on the server which runs this website, which is currently at WebFaction.

In short, it was not too hard, and now the app is set up for use. (It’s not a public tool, so I won’t link to it).

But there were a few things that I had to figure out which I would quickly forget. Following the variety of information I found online, the only nontrivial aspect was configuring the site to run on a non-root domain (like davidlowryduda.com/subdomain instead of at davidlowryduda.com). I’m writing this so as to not need to figure this out again when I write and host more flask apps. (Which I’ll almost certainly do, as it’s so straightforward).

There are some uninteresting things one must do on WebFaction.

  1. Log into your account.
  2. Add a new application of type mod_wsgi (and the desired version of python, which is hopefully 3.6+).
  3. Add this application to the desired website and subdomain in the WebFaction control panel.

After this, WebFaction will set up a skeleton “Hello World” mod_wsgi application with many reasonable server setting defaults. The remainder of the setup is done on the server itself.

In ~/webapps/application_name there will now appear


apache2/    # Apache config files and bin
htdocs/     # Default location where Apache looks for the app

We won’t change that structure. In htdocs1 there is a file index.py, which is where apache expects to find a python wsgi application called application. We will place the flask app along this structure and point to it in htdocs/index.py.

Usually I will use a virtualenv here. So in ~/webapps/application_name, I will run something like virtualenv flask_app_venv and then activate it (out of habit I frequently source the flask_app_venv/bin/activate file directly). Then pip install flask and whatever other python modules are necessary for the application to run. We will configure the server to use this virtual environment to run the app in a moment.

Copy the flask app, so that the resulting structure looks something like

~/webapps/application_name:

- apache2/
- htdocs/
- flask_app_venv/
- flask_app/    # My flask app
  - config.py
  - libs/
  - main/
    - static/
    - templates/
    - __init__.py
    - views.py
    - models.py

I find it conceptually easiest to have flask_app/main/__init__.py directly contain the flask app, so that I can reference it by name in htdocs/index.py. The app can be made elsewhere (for instance, in a file like flask_app/main/app.py, which appears to be a common structure), but I assume that it is at least imported in __init__.py.

For example, __init__.py might look something like

# application_name/flask_app/main/__init__.py

# ... other import statements from project if necessary
from flask import Flask

app = Flask(__name__)
app.config.from_object('config')

# Importing the views for the rest of our site
# We do this here to avoid circular imports
# Note that I call it "main" where many call it "app"
from main import views

if __name__ == '__main__':
    app.run()

The Flask constructor returns exactly the sort of wsgi application that apache expects. With this structure, we can edit the htdocs/index.py file to look like

# application_name/htdocs/index.py

import sys

# append flask project files
sys.path.append('/home/username/webapps/application_name/my_flask_app/')

# launching our app
from main import app as application

Now the server knows the correct wsgi_application to serve.

We must configure it to use our python virtual environment (and we’ll add a few additional convenience pieces). We edit apache2/conf/httpd.conf (within ~/webapps/application_name) as follows. Near the top of the file, certain modules are loaded. Add in the alias module, so that the modules look something like


#... other modules
LoadModule wsgi_module       modules/mod_wsgi.so
LoadModule alias_module      modules/mod_alias.so  # <-- added

This allows us to alias the root of the site. Since all site functionality is routed through htdocs/index.py, we want to think of the root / as beginning with htdocs/index.py. At the end of the file, add


Alias / /home/username/webapps/application_name/htdocs/index.py/

We now set the virtual environment to be used properly. There will be a set of lines containing names like WSGIDaemonProcess and WSGIProcessGroup. We edit these to refer to the correct python. WebFaction will have configured WSGIDaemonProcess to point to a local version of python by setting the python-path. Remove that, making that line look like


WSGIDaemonProcess application_name processes=2 threads=12

(or similar). We set the python path below, adding the line


WSGIPythonHome /home/username/webapps/application_name/flask_app_venv

I believe that this could also actually be done by setting python-path in WSGIDaemonProcess, but I find this more aesthetically pleasing.

We must also modify the <Directory> section. Edit it to look like


<Directory /home/username/webapps/application_name/htdocs>
  AddHandler wsgi-script .py
  RewriteEngine On            # <-- added
  RewriteBase /               # <-- added
  WSGIScriptReloading On      # <-- added
</Directory>

It may very well be that I don't use the RewriteEngine at all, but if I do then this is where it's done. Script reloading is a nice convenience, especially while reloading and changing the app.

I note that it may be convenient to add an additional alias for static file hosting,


Alias /static/ /home/your_username/webapps/application_name/app/main/static/

though I have not used this so far. (I get the same functionality through controlling the flask views appropriately).

The rest of this file has been setup by WebFaction for us upon creating the wsgi application.

If the application is on a non-root domain...

If the application is to be run on a non-root domain, such as davidlowryduda.com/subdomain, then there is currently a problem. In flask, when using url getters like url_for, urls will be returned as though there is no subdomain. And thus all urls will be incorrect. It is necessary to alter provided urls in some way.

The way that worked for me was to insert a tiny bit of middleware in the wsgi_application. Alter htdocs/index.py to read

#application_name/htdocs/index.py

import sys

# append flask project files
sys.path.append('/home/username/webapps/application_name/my_flask_app/')

# subdomain url rerouting middleware
from webfaction_middleware import Middleware

from main import app

# set app through middleware
application = Middleware(app)

Now of course we need to write this middleware.

In application_name/flask_app, I create a file called webfaction_middleware.py, which reads

# application_name/flask_app/webfaction_middleware.py

class Middleware(object):  # python2 aware
  def __init__(self, app):
    self.app = app

  def __call__(self, environ, start_response):
    app_url = '/subdomain'
    if app_url != '/':
      environ['SCRIPT_NAME'] = app_url
    return self.app(environ, start_response)

I now have a template file in which I keep app_url = '/' so that I can forget this and not worry, but that is where the subdomain url is prepended. Note that the leading slash is necessary. When I first tried using this, I omitted the leading slash. The application worked sometimes, and horribly failed in some other places. Some urls were correctly constructed, but most were not. I didn’t try to figure out which ones were doomed to fail --- but it took me an embarrassingly long time to realize that prepending a slash solved all problems.

The magic names environ and start_response appear because the flask app is a wsgi application, and this is the generic api of wsgi applications.
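One can sanity-check the middleware without apache by calling it as a plain wsgi application against a dummy app and environ (the names here are ours, for illustration):

def dummy_app(environ, start_response):
    # stands in for the flask app: report the SCRIPT_NAME it received
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [environ['SCRIPT_NAME'].encode()]

wrapped = Middleware(dummy_app)
environ = {'SCRIPT_NAME': ''}
print(wrapped(environ, lambda status, headers: None))   # [b'/subdomain']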

Now it's ready

Restart the apache server (apache2/bin/restart) and go. Note that when incrementally making changes above, some changes can take a few minutes to fully propagate. It’s only doing it the first time which takes some thought.

Posted in Programming, Python | Leave a comment

Segregation, Gerrymandering, and Schelling’s Model

[This note is more about modeling some of the mathematics behind political events than politics themselves. And there are pretty pictures.]

Gerrymandering has become a recurring topic in the news. The Supreme Court of the US, as well as more state courts and supreme courts, is hearing multiple cases on partisan gerrymandering (all beginning with a case in Wisconsin).

Intuitively, it is clear that gerrymandering is bad. It allows politicians to choose their voters, instead of the other way around. And it allows the majority party to quash minority voices.

But how can one identify a gerrymandered map? To quote Justice Kennedy in his concurrence in the 2004 Supreme Court case Vieth v. Jubelirer:

When presented with a claim of injury from partisan gerrymandering, courts confront two obstacles. First is the lack of comprehensive and neutral principles for drawing electoral boundaries. No substantive definition of fairness in districting seems to command general assent. Second is the absence of rules to limit and confine judicial intervention. With uncertain limits, intervening courts–even when proceeding with best intentions–would risk assuming political, not legal, responsibility for a process that often produces ill will and distrust.

Later, he adds to the first obstacle, saying:

The object of districting is to establish “fair and effective representation for all citizens.” Reynolds v. Sims, 377 U.S. 533, 565—568 (1964). At first it might seem that courts could determine, by the exercise of their own judgment, whether political classifications are related to this object or instead burden representational rights. The lack, however, of any agreed upon model of fair and effective representation makes this analysis difficult to pursue.

From Justice Kennedy’s Concurrence emerges a theme — a “workable standard” of identifying gerrymandering would open up the possibility of limiting partisan gerrymandering through the courts. Indeed, at the core of the Wisconsin gerrymandering case is a proposed “workable standard”, based around the efficiency gap.
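The efficiency gap itself is easy to compute. A vote is “wasted” if it is cast for a losing candidate or is in excess of what the winner needed; the efficiency gap is the net difference in wasted votes as a fraction of all votes cast. Here is a sketch for two parties (the district data is invented for illustration):

def efficiency_gap(districts):
    """districts: a list of (votes_A, votes_B) pairs, one per district.
    Returns (wasted_A - wasted_B) / total_votes; negative values favor A."""
    wasted_a = wasted_b = total = 0
    for a, b in districts:
        total += a + b
        needed = (a + b) // 2 + 1    # votes needed to win outright
        if a > b:
            wasted_a += a - needed   # winner's surplus votes
            wasted_b += b            # all of the loser's votes
        else:
            wasted_b += b - needed
            wasted_a += a
    return (wasted_a - wasted_b) / total

# a toy three-district map: A wins two districts narrowly, B wins one hugely
print(efficiency_gap([(60, 40), (60, 40), (20, 80)]))   # about -0.237, favoring A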


Thomas Schelling and Segregation

In 1971, American economist Thomas Schelling (who later won the Nobel Prize in Economics in 2005) published Dynamic Models of Segregation (Journal of Mathematical Sociology, 1971, Vol 1, pp 143–186). He sought to understand why racial segregation in the United States seems so difficult to combat.

He introduced a simple model of segregation suggesting that even if each individual person doesn’t mind living with others of a different race, they might still choose to segregate themselves through mild preferences. As each individual makes these choices, overall segregation increases.

I write this post because I wondered what happens if we adapt Schelling’s model to instead model a state and its district voting map. In place of racial segregation, I consider political segregation. Supposing the district voting map does not change, I wondered how the efficiency gap will change over time as people further segregate themselves.

It seemed intuitive to me that political segregation (where people who had the same political beliefs stayed largely together and separated from those with different political beliefs) might correspond to more egregious cases of gerrymandering. But to my surprise, I was (mostly) wrong.

Let’s set up and see the model.

(more…)

Posted in Expository, Mathematics, Politics, Programming, Python | Leave a comment

Advent of Code: Day 4

This is a very short post in my collection working through this year’s Advent of Code challenges. Unlike the previous ones, this has no mathematical comments, as it was a very short exercise. This notebook is available in its original format on my github.

Day 4: High Entropy Passphrases

Given a list of strings, determine how many strings have no duplicate words.

This is a classic problem, and it’s particularly easy to solve this in python. Some might use collections.Counter, but I think it’s more straightforward to use sets.

The key idea is that the set of words in a sentence will not include duplicates. So if taking the set of a sentence reduces its length, then there was a duplicate word.

In [1]:
with open("input.txt", "r") as f:
    lines = f.readlines()
    
def count_lines_with_unique_words(lines):
    num_pass = 0
    for line in lines:
        s = line.split()
        if len(s) == len(set(s)):
            num_pass += 1
    return num_pass

count_lines_with_unique_words(lines)
Out[1]:
455

I think this is the first day where I would have had a shot at the leaderboard if I’d been gunning for it.

Part 2

Let’s add in another constraint. Determine how many strings have no duplicate words, even after anagramming. Thus the string

abc bac

is not valid, since the second word is an anagram of the first. There are many ways to tackle this as well, but I will handle anagrams by sorting the letters in each word first, and then running the bit from part 1 to identify repeated words.

In [2]:
with open("input.txt", "r") as f:
    lines = f.readlines()
    
sorted_lines = []
for line in lines:
    sorted_line = ' '.join([''.join(l) for l in map(sorted, line.split())])
    sorted_lines.append(sorted_line)

sorted_lines[:2]
    
Out[2]:
['bddjjow acimrv bcjjm anr flmmos fiosv',
 'bcmnoxy dfinyzz dgmp dfgioy hinrrv eeklpuu adgpw kqv']
In [3]:
count_lines_with_unique_words(sorted_lines)
Out[3]:
186

Advent of Code: Day 3

This is the third notebook in my posts on the Advent of Code challenges. The notebook in its original format can be found on my github.

Day 3: Spiral Memory

Numbers are arranged in a spiral

17  16  15  14  13
18   5   4   3  12
19   6   1   2  11
20   7   8   9  10
21  22  23---> ...

Given an integer n, what is its Manhattan Distance from the center (1) of the spiral? For instance, the distance of 3 is $2 = 1 + 1$, since it’s one space to the right and one space up from the center.

Here’s my idea. The bottom right corner of the $k$th layer is the integer $(2k+1)^2$, since that’s how many integers are contained within that square. The other three corners in that layer are $(2k+1)^2 - 2k$, $(2k+1)^2 - 4k$, and $(2k+1)^2 - 6k$. Finally, the closest spots on the $k$th layer to the origin are at distance $k$: these are the four “axis locations” halfway between the corners, at $(2k+1)^2 - k$, $(2k+1)^2 - 3k$, $(2k+1)^2 - 5k$, and $(2k+1)^2 - 7k$.

For instance when $k = 1$, the bottom right is $(2 + 1)^2 = 9$, and the four “axis locations” are $9 - 1$, $9 - 3$, $9 - 5$, and $9 - 7$. The “axis locations” are $k$ away, and the corners are $2k$ away.

So I will first find which layer the number is on. Then I’ll figure out which side it’s on, and then how far away it is from the nearest “axis location” or “corner”.

My given number happens to be 289326.

In [1]:
import math

def find_lowest_larger_odd_square(n):
    '''Return the smallest odd integer whose square is at least n.'''
    upper = math.ceil(n**.5)
    if upper % 2 == 0:
        upper += 1
    return upper
In [2]:
assert find_lowest_larger_odd_square(39) == 7
assert find_lowest_larger_odd_square(26) == 7
assert find_lowest_larger_odd_square(25) == 5
In [3]:
find_lowest_larger_odd_square(289326)
Out[3]:
539
In [4]:
539**2 - 289326
Out[4]:
1195

It happens that our integer is very close to an odd square.
The square is $539^2$, and its corner lies at distance $2 \cdot 269 = 538$ from the center.

Note that $539 = 2(269) + 1$, so our number is on the $269$th layer of the spiral.
The corner before $539^2$ is $539^2 - 538$, and the corner before that is $539^2 - 2\cdot 538 = 539^2 - 1076$.
This is the nearest corner to our number.
How far is our number from this corner?
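Putting the plan together, here is a minimal sketch of the whole distance computation, consolidating the reasoning above into a single function (my own summary, using find_lowest_larger_odd_square from above, not the code from the rest of the post):

def spiral_distance(n):
    '''Manhattan distance from the cell containing n to the center of the spiral.'''
    if n == 1:
        return 0
    side = find_lowest_larger_odd_square(n)  # n lies on the layer with side length 2k + 1
    k = (side - 1) // 2
    # the four "axis locations" on this layer are side**2 - k, -3k, -5k, -7k
    axis_locations = [side**2 - (2 * j + 1) * k for j in range(4)]
    # walk k steps out to the layer, then along it from the nearest axis location
    return k + min(abs(n - a) for a in axis_locations)

assert spiral_distance(12) == 3  # two right, one up in the spiral drawn above
assert spiral_distance(23) == 2  # two directly below the center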

(more…)


Advent of Code: Day 2

This is the second notebook in my posts on the Advent of Code challenges. This notebook in its original format can be found on my github.

Day 2: Corruption Checksum, part I

You are given a table of integers. Find the difference between the maximum and minimum of each row, and add these differences together.

There is not a lot to say about this challenge. The plan is to read the file linewise, compute the difference on each line, and sum them up.

In [1]:
with open("input.txt", "r") as f:
    lines = f.readlines()
lines[0]
Out[1]:
'5048\t177\t5280\t5058\t4504\t3805\t5735\t220\t4362\t1809\t1521\t230\t772\t1088\t178\t1794\n'
In [2]:
l = lines[0]
l = l.split()
l
Out[2]:
['5048',
 '177',
 '5280',
 '5058',
 '4504',
 '3805',
 '5735',
 '220',
 '4362',
 '1809',
 '1521',
 '230',
 '772',
 '1088',
 '178',
 '1794']
In [3]:
def max_minus_min(line):
    '''Compute the difference between the largest and smallest integer in a line'''
    line = list(map(int, line.split()))
    return max(line) - min(line)

def sum_differences(lines):
    '''Sum the value of `max_minus_min` for each line in `lines`'''
    return sum(max_minus_min(line) for line in lines)
In [4]:
testcase = ['5 1 9 5','7 5 3', '2 4 6 8']
assert sum_differences(testcase) == 18
In [5]:
sum_differences(lines)
Out[5]:
58975

Mathematical Interlude

In line with the first day’s challenge, I’m inclined to ask what we should “expect.” But what we should expect is not well-defined in this case. Let us rephrase the problem in a randomized sense.

Suppose we are given a table $n$ lines long, where each line consists of $m$ elements, each a uniformly randomly chosen integer from $1$ to $10$. We might ask for the expected value of this operation, summing the differences between the maxima and minima of each row, on this table. What should we expect?

As each line is independent of the others, we are really asking what is the expected value across a single row. So given $m$ integers uniformly randomly chosen from $1$ to $10$, what is the expected value of the maximum, and what is the expected value of the minimum?
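Before working these out exactly, a quick simulation gives a number to check against. This sketch is my own addition, not part of the original post:

import random

def simulate_expected_difference(m, trials=10**5):
    '''Monte Carlo estimate of E[max - min] for m uniform draws from 1 to 10.'''
    total = 0
    for _ in range(trials):
        row = [random.randint(1, 10) for _ in range(m)]
        total += max(row) - min(row)
    return total / trials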


Expected Minimum

Let’s begin with the minimum. The minimum is $1$ unless all the integers are at least $2$. So the minimum is $1$ with probability
$$ 1 - \left( \frac{9}{10} \right)^m = \frac{10^m - 9^m}{10^m}. $$
We rewrite it in the form on the right for reasons that will soon be clear.
The minimum is $2$ if all the integers are at least $2$ (each integer has $9$ ways of being at least $2$), but not all the integers are at least $3$ (each integer has $8$ ways of being at least $3$). Thus this has probability
$$ \frac{9^m - 8^m}{10^m}.$$
Continuing with one more for posterity, the minimum is $3$ if all the integers are at least $3$ (each integer has $8$ ways of being at least $3$), but not all are at least $4$ (each integer has $7$ ways of being at least $4$). Thus this has probability

$$ \frac{8^m - 7^m}{10^m}.$$

And so on.

Recall that the expected value of a random variable is

$$ E[X] = \sum x_i P(X = x_i),$$

so the expected value of the minimum is

$$ \frac{1}{10^m} \big( 1(10^m - 9^m) + 2(9^m - 8^m) + 3(8^m - 7^m) + \cdots + 9(2^m - 1^m) + 10(1^m - 0^m)\big).$$

This simplifies nicely: the sum telescopes, with each power $k^m$ for $1 \le k \le 10$ appearing with net coefficient $1$ (and $0^m = 0$), so the expected minimum is

$$ \sum_{k=1}^{10} \frac{k^m}{10^m}. $$

Expected Maximum

The same style of thinking shows that the expected value of the maximum is

$$ \frac{1}{10^m} \big( 10(10^m - 9^m) + 9(9^m - 8^m) + 8(8^m - 7^m) + \cdots + 2(2^m - 1^m) + 1(1^m - 0^m)\big).$$

This simplifies to

$$ \frac{1}{10^m} \big( 10 \cdot 10^m - 9^m - 8^m - \cdots - 2^m - 1^m \big) = 10 - \sum_{k=1}^{9} \frac{k^m}{10^m}.$$

Expected Difference

Subtracting, we find that the expected difference is

$$ 9 - 2\sum_{k=1}^{9} \frac{k^m}{10^m}. $$

From this we can compute the expected difference for each list length $m$. It is good to note that as $m \to \infty$, the expected difference tends to $9$. Does this make sense? Yes: when there are many values, we should expect at least one $10$ and at least one $1$ to appear. It’s also pretty straightforward to extend this to integers chosen from $1$ to $N$.
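For concreteness, here is a short computation of this expected difference using exact rational arithmetic; the only thing beyond the formula above is the generalization from $10$ to $N$ mentioned in the last sentence.

from fractions import Fraction

def expected_difference(m, N=10):
    '''Exact E[max - min] for m uniform draws from 1 to N: (N-1) - 2*sum((k/N)**m).'''
    return (N - 1) - 2 * sum(Fraction(k, N)**m for k in range(1, N))

# m = 1 gives 0, since a single entry is both the maximum and the minimum
for m in (1, 2, 4, 8, 16):
    print(m, float(expected_difference(m)))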

Looking at the data, it does not appear that the integers were randomly chosen: there are very many relatively small integers and some relatively large ones. So we shouldn’t expect this toy analysis to model the problem accurately, as the distribution is clearly not uniform.
But we can try it out anyway.

(more…)


Advent of Code: Day 1

I thoroughly enjoyed reading through Peter Norvig’s extraordinarily clean and nice solutions to last year’s Advent of Code challenge. Inspired by his literate programming style and the convenience of jupyter notebook demonstrations, I will look at several of this year’s challenges in my own jupyter notebooks.

My background and intentions aren’t the same as Peter Norvig’s: his expertise dwarfs mine. Moreover, timezones are not kind to those of us in the UK, so I won’t be competing for a position on the leaderboards. These are meant to be fun. And sometimes there are tidbits of math that want to come out of the challenges.

Enough of that. Let’s dive into the first day.

Day 1: Inverse Captcha, Part 1

Given a sequence of digits, find the sum of those digits which match the following digit. The sequence is presumed circular, so the first digit may match the last digit.

This is probably done fastest by looping through the sequence once.

In [1]:
with open('input.txt', 'r') as f:
    seq = f.read()
seq = seq.strip()
seq[:10]
Out[1]:
'1118313623'
In [2]:
def sum_matched_digits(s):
    "Sum of digits which match following digit, and first digit if it matches last digit"
    total = 0
    for a,b in zip(s, s[1:]+s[0]):
        if a == b:
            total += int(a)
    return total

They provide a few test cases against which we can check our method.

In [3]:
assert sum_matched_digits('1122') == 3
assert sum_matched_digits('1111') == 4
assert sum_matched_digits('1234') == 0
assert sum_matched_digits('91212129') == 9

For fun, here is a one-line version.
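My version is behind the cut, but a one-liner in this spirit might look like the following sketch (not necessarily the version in the full post):

def sum_matched_digits_oneline(s):
    # same circular pairing as above, collapsed into one generator expression
    return sum(int(a) for a, b in zip(s, s[1:] + s[0]) if a == b)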

(more…)
