Gerrymandering has become a recurring topic in the news. The Supreme Court of the US, along with a number of state courts, is hearing multiple cases on partisan gerrymandering (beginning with a case in Wisconsin).

Intuitively, it is clear that gerrymandering is bad. It allows politicians to choose their voters, instead of the other way around. And it allows the majority party to quash minority voices.

But how can one identify a gerrymandered map? To quote Justice Kennedy in his concurrence in the 2004 Supreme Court case Vieth v. Jubelirer:

When presented with a claim of injury from partisan gerrymandering, courts confront two obstacles. First is the lack of comprehensive and neutral principles for drawing electoral boundaries. No substantive definition of fairness in districting seems to command general assent. Second is the absence of rules to limit and confine judicial intervention. With uncertain limits, intervening courts–even when proceeding with best intentions–would risk assuming political, not legal, responsibility for a process that often produces ill will and distrust.

Later, he adds to the first obstacle, saying:

The object of districting is to establish “fair and effective representation for all citizens.” Reynolds v. Sims, 377 U.S. 533, 565—568 (1964). At first it might seem that courts could determine, by the exercise of their own judgment, whether political classifications are related to this object or instead burden representational rights. The lack, however, of any agreed upon model of fair and effective representation makes this analysis difficult to pursue.

From Justice Kennedy’s Concurrence emerges a theme — a “workable standard” of identifying gerrymandering would open up the possibility of limiting partisan gerrymandering through the courts. Indeed, at the core of the Wisconsin gerrymandering case is a proposed “workable standard”, based around the **efficiency gap.**

In 1971, American economist Thomas Schelling (who later won the Nobel Prize in Economics in 2005) published *Dynamic Models of Segregation* (Journal of Mathematical Sociology, 1971, Vol 1, pp 143–186). He sought to understand why racial segregation in the United States seems so difficult to combat.

He introduced a simple model of segregation suggesting that even if each individual person doesn’t mind living with others of a different race, they might still *choose* to segregate themselves through mild preferences. As each individual makes these choices, overall segregation increases.

I write this post because I wondered what happens if we adapt Schelling’s model to instead model a state and its district voting map. In place of racial segregation, I consider political segregation. Supposing the district voting map does not change, I wondered how the efficiency gap would change over time as people further segregate themselves.

It seemed intuitive to me that political segregation (where people who had the same political beliefs stayed largely together and separated from those with different political beliefs) might correspond to more egregious cases of gerrymandering. But to my surprise, I was (mostly) wrong.

Let’s set up and see the model.

Let us first set up Schelling’s model of segregation. Let us model a state as a grid, where each square on that grid represents a house. Initially ten percent of the houses are empty (White), and the remaining houses are randomly assigned to be either Red or Blue.

We suppose that each person wants a certain percentage ($p$) of their neighbors to be like themselves. By “neighbor”, we mean those in the adjacent squares. We will initially suppose that $p = 0.33$, which is a pretty mild condition. For instance, a person doesn’t mind if 66 percent of their neighbors are different, so long as there are a couple of similar people nearby.

At each step (which I’ll refer to as a year), if a person is unhappy (i.e. fewer than a fraction $p$ of their neighbors are like them), then they leave their house and move to a randomly chosen empty house. Notice that after moving, they may or may not be happy, and their move may cause other people to become happy or unhappy.
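The update rule is simple enough to sketch directly. Here is a minimal, loop-based version of one year; the helper name `frac_same` is my own for illustration, and the full, vectorized implementation appears at the end of this post.

```python
import numpy as np

rng = np.random.default_rng(42)
EMPTY, RED, BLUE = 0, 1, 2
p = 0.33  # required fraction of similar neighbors

# 20x20 state: 10% empty, the rest split between Red and Blue
grid = rng.choice([EMPTY, RED, BLUE], size=(20, 20), p=[0.1, 0.45, 0.45])
occupied_before = int((grid != EMPTY).sum())

def frac_same(grid, i, j):
    """Fraction of occupied adjacent squares matching cell (i, j)."""
    n, m = grid.shape
    same = total = 0
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            ni, nj = i + di, j + dj
            if (di, dj) != (0, 0) and 0 <= ni < n and 0 <= nj < m \
                    and grid[ni, nj] != EMPTY:
                total += 1
                same += int(grid[ni, nj] == grid[i, j])
    return same / total if total else 0

# One "year": every unhappy person moves to a random empty house.
unhappy = [(i, j) for i in range(20) for j in range(20)
           if grid[i, j] != EMPTY and frac_same(grid, i, j) < p]
for (i, j) in unhappy:
    empties = np.argwhere(grid == EMPTY)
    ti, tj = empties[rng.integers(len(empties))]
    grid[ti, tj], grid[i, j] = grid[i, j], EMPTY
```

Iterating this step year after year is what produces the maps below.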

We also introduce a measure of segregation. We define the segregation to be the value of

$$\text{segregation} = \frac{\sum \text{number of same neighbors}}{\sum \text{number of neighbors}},$$

summed across the houses in the grid, and where empty spaces aren’t the same as anything, and don’t count as a neighbor. Thus high segregation means that more people are surrounded only by people like them, and low segregation means people are very mixed. (Note also that 50 percent segregation is considered “very unsegregated”, as it means half your neighbors are the same and half are different).
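As a sanity check on this definition, here is a hand-checkable computation of the segregation measure on a tiny $2 \times 2$ grid. This sketch uses the same `correlate2d` trick as the full code at the end of the post, though with a non-wrapping boundary so the hand count is easy.

```python
import numpy as np
from scipy.signal import correlate2d

RED, BLUE = 1, 2
grid = np.array([[RED, RED],
                 [RED, BLUE]])  # no empty houses, to keep the arithmetic easy

kernel = np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]])  # the 8 adjacent squares, excluding the cell itself

red = (grid == RED)
occupied = (grid != 0)
num_red = correlate2d(red, kernel, mode='same')        # red neighbors per cell
num_neighbors = correlate2d(occupied, kernel, mode='same')
num_same = np.where(red, num_red, num_neighbors - num_red)

segregation = num_same.sum() / num_neighbors.sum()
print(segregation)  # 0.5: each Red cell has 2 of 3 similar neighbors; the Blue cell 0 of 3
```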

To get to a specific example, here is an instance of this model for $400$ spaces in a $20$ by $20$ grid.

Initially, there is a lot of randomness, and segregation is a low $0.50$. After one year, the state looks like:

After another year, it changes a bit more:

From year to year, the changes are small. But already significant segregation is occurring. The segregation measure is now at $0.63$. After another 10 years, we get the following picture:

That map appears extremely segregated, and now the segregation measure is $0.75$. Further, it didn’t even take very long!

Let’s look at a larger model. Here is a 200 by 200 grid. And since we’re working larger, suppose each square “neighbors” the nearest 24, so that a neighborhood around ‘o’ looks like

```
xxxxx
xxxxx
xxoxx
xxxxx
xxxxx
```

Then we get:

Initially this looks quite a bit like red, white, and blue static. Going forward a couple of years, we get

Let us now fast forward ten years.

And after another ten years…

As an example of the sorts of things that can happen, we present several 15 second animations of these systems with a variety of initial parameters. In each animation, 30 years pass — one each half second.

We first consider a set of cases where we hold everything fixed except for how large people consider their “neighborhood.” In the following, neighborhood sizes range from your nearest 8 neighbors (those which are 1 step away, counting diagonal steps) to those which are up to 5 steps away. The behavior is a bit different in each one. Note that in the really-large neighborhood case, there are only a few people moving at all.

We next consider the cases when people want to have neighborhoods which are at least 40 percent like themselves. That is, people want to cluster a bit more. The animations again span the differing neighborhood sizes.

These are stunning, and sort of beautiful. I am reminded of slime mold growth.

We now raise the percentage again, to $p = 0.5$. That is, people only feel comfortable in neighborhoods with at least 50 percent of occupants similar to themselves. This is actually the parameter that Schelling used in his paper and experiments.

The higher comfort threshold leads to much quicker convergence to extreme segregation. This is intuitive: strong individual preferences about similarity lead quickly to societal segregation.

We now increase the population to a million people. In the first animation, there is large segregation at the end, but it manifests as a sort of network of red and blue wispy fingers rather than big blobs. Partly this is because there is simply more room. But it’s also a manifestation of the fact that with small neighborhood sizes, one gets mostly local effects.

The second and third animations are pretty astounding to me. In these, people want 40 percent similarity with their neighbors, and their “neighborhoods” consist of everyone within four steps (including diagonal steps) of them. In the second, 55% of the population is Red and 45% is Blue. In the last one, a much larger majority is Red, and there are so few Blue people that they are all rapidly moving, trying to find some base where they feel comfortable. But they never find one.

We should take a moment to say what it is that we are actually trying to measure. Is this supposed to be a perfect model of actual behavior? No.

This note has been examining how some individual incentives, decisions, and perceptions of difference can lead collectively towards greater segregation. Although I have phrased this in terms of political party identification, this analysis is so abstract that it could be applied to any singular distinction.

We should also note that several causes of segregation are omitted from consideration. One is organized action (be it legal or illegal, in good faith or in bad). Another is the set of economic causes behind many separations, such as how the poor are separated from the rich, the unskilled from the skilled, the less educated from the more educated. These lead to separations in job, pastime, residence, and so on. And as political party affiliation correlates strongly with income, and income correlates strongly with where one lives, this is a major factor to omit.

I do not claim that these other sources of discrimination and segregation are less important, but only that I do not know how to model them. And instead I follow Schelling’s line of thought, whereby one looks to see to what extent we might expect individual action to lead to collective outcomes.

Given a Schelling model, we now adapt it to incorporate voting districts. Let us suppose that our grid is divided up into (regular rectangular) regions of voters. We will assume a totally polarized voter base, so that Red people always vote for the Red party and Blue people always vote for the Blue party. (This is a pretty strong assumption).

Before we describe exactly how we set up the model, let’s look at an example. Given a typical Schelling model, we separate it into (in this case, 10) districts.

Each of the 10 areas vote, giving some tallies. In this case, we have the following table which describes the results of this year’s vote. Districts are numbered from top left to bottom right, sequentially.

District | Blue Vote | Red Vote | Winner | Blue Wasted | Red Wasted | Net Wasted
---|---|---|---|---|---|---
0 | 18 | 14 | blue | 3 | 14 | -11
1 | 20 | 16 | blue | 3 | 16 | -13
2 | 19 | 15 | blue | 3 | 15 | -12
3 | 12 | 25 | red | 12 | 12 | 0
4 | 18 | 20 | red | 18 | 1 | 17
5 | 19 | 15 | blue | 3 | 15 | -12
6 | 21 | 13 | blue | 7 | 13 | -6
7 | 23 | 15 | blue | 7 | 15 | -8
8 | 22 | 15 | blue | 6 | 15 | -9
9 | 14 | 23 | red | 14 | 8 | 6

“Blue Wasted” refers to a wasted Blue vote (and similarly for Red). Wasted votes are the key quantity in the efficiency gap, and contribute toward the overall measure of gerrymandering.

A wasted vote is one that doesn’t contribute to winning an additional election. A vote can be wasted in two different ways. All votes for a losing candidate are wasted, since they didn’t contribute to a win. On the other hand, excess voting for a single candidate is also wasted.

So in District 0, the Blue candidate won and so all 14 Red votes are wasted. The Blue candidate only needed 15 votes to win, but received 18. So there are three excess Blue votes, which means that there are 3 Blue votes wasted.

I adopt the convention (for ease of summing up) that the net wasted votes is the number of Blue wasted votes minus the number of Red wasted votes. So if it is positive, more Blue votes were wasted than Red; and if it’s negative, more Red votes were wasted than Blue.
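These rules are straightforward to encode. The following helper (a sketch of my own, mirroring the convention just described) reproduces the District 0 numbers from the table:

```python
def wasted_votes(blue, red):
    """Return (blue_wasted, red_wasted, net_wasted) for one district.

    The winner needs one more vote than the loser; surplus winning votes
    and all losing votes are wasted.  Net is Blue wasted minus Red wasted.
    """
    if blue > red:
        blue_wasted, red_wasted = blue - red - 1, red
    elif red > blue:
        blue_wasted, red_wasted = blue, red - blue - 1
    else:
        blue_wasted, red_wasted = 0, 0
    return blue_wasted, red_wasted, blue_wasted - red_wasted

print(wasted_votes(18, 14))  # District 0: (3, 14, -11)
```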

With this example in mind, a rough description of gerrymandering in a competitive state is drawing district lines so that one party has many more wasted votes. In this example, there are 186 Blue voters and 171 Red voters, so it might be expected that approximately half of the winners would be Red and half would be Blue. But in fact there are 7 Blue winners and only 3 Red winners.

And a big reason why is that the overall net wasted number of votes is $-48$, which means that $48$ more Red votes than Blue votes did not contribute to a winning election.

So roughly, more wasted votes corresponds to more gerrymandering. The efficiency gap is defined to be

$$ \text{Efficiency Gap} = \frac{|\text{Net Wasted Votes}|}{\text{Number of Voters}}.$$

In this case, there are $48$ net wasted votes and $357$ voters, so the efficiency gap is $48/357 = 0.134$. This number, 13.4 percent, is very high. The proposed gap to raise flags in gerrymandering cases is 7 percent: any higher, and one should consider redrawing district lines.
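In code, the arithmetic for this example is just:

```python
# Net wasted votes per district, read off from the table above.
net_wasted_by_district = [-11, -13, -12, 0, 17, -12, -6, -8, -9, 6]
total_voters = 186 + 171  # Blue voters plus Red voters

efficiency_gap = abs(sum(net_wasted_by_district)) / total_voters
print(round(efficiency_gap, 3))  # 0.134
```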

The efficiency gap is extremely easy to compute, which is a definite plus. But whether it is a good indicator of gerrymandering is more complicated, and is one of the considerations in the Supreme Court case concerning gerrymandering in Wisconsin.

With this example in mind, we are now prepared to describe the model explicitly. The initial setup is the same as in Schelling’s model. A state is a rectangular grid, where each square on this grid represents a house. Unoccupied houses are White. If a Red person occupies a house, then the house is colored Red. If a Blue person occupies a house, then that house is colored Blue. Each year, all the Red people vote for the Red candidate, and all Blue people vote for the Blue candidate, and we can tally the results. We will then measure the efficiency gap.

At the same time, each year people may move as in Schelling’s model. A person is satisfied if at least $p$ percent of their neighbors are similar to them. We will again default to $33$ percent, and a person’s neighbors will be all those people in adjacent squares (or, for larger models, perhaps within two steps, or something of that flavor).

At each step, we can measure the segregation (which we know will increase from Schelling’s model) and the efficiency gap.

At last, we are prepared to investigate the relationship between segregation and the efficiency gap.

In our first simulation, there is an initial segregation of 51% and an initial efficiency gap of 6%. This is pictured below.

As we can see, an increase in segregation corresponds to an increase in the efficiency gap.

Let us now consider a second simulation. There are no parameters changed between this and the above simulation, aside from the chance placement of people.

An increase in segregation actually occurs with a decrease in efficiency gap. Further, if we stepped through year to year, we would see that as the state became more segregated, it also lowered its efficiency gap.

At least naively we should no longer expect increased segregation to correspond to an increase in the efficiency gap.

Let’s try a larger simulation. This one is 200 x 200, with 25 districts.

Again, segregation correlates negatively with the efficiency gap.

What if 55 percent of the population is blue? Does this imbalance lead to interesting simulations? We present two such simulations below.

In each of these simulations, there was an initially large efficiency gap. This is fundamentally caused by the relatively equidistributed Red minority, which essentially loses everywhere. We might say that the Red group begins in a *cracked* state. After 20 years, the efficiency gap falls, since segregation has the interesting side effect of relieving the Red people from their diffused state.

In fact, I ran a very large number of simulations with a variety of parameters, and generically, increased segregation tends to correspond to a decrease in the efficiency gap.

More segregation leads to a smaller efficiency gap. Why might this be?

I think one of the major reasons is evident in the last pair of simulations I presented above. Uniform segregation reduces the “cracking” gerrymandering technique. In *cracking*, one tries to divide a larger group into many smaller minorities by splitting them into many districts. This maximizes the number of wasted votes coming from lost elections (as opposed to wasted votes from *packing* lots of people into one district so that they over-win an election). Segregation produces clusters, and these clusters tend to win their local district’s election.
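A toy comparison makes this concrete. Below, the same statewide totals (hypothetical numbers of my choosing: 45 Blue and 60 Red voters) are arranged across three equal districts two ways, evenly diffused (“cracked”) versus clustered, and the clustered arrangement yields a far smaller efficiency gap.

```python
def net_wasted(blue, red):
    """Blue wasted minus Red wasted for one district (winner needs loser + 1)."""
    if blue > red:
        return (blue - red - 1) - red
    if red > blue:
        return blue - (red - blue - 1)
    return 0

# In both layouts there are 45 Blue and 60 Red voters statewide.
cracked = [(15, 20), (15, 20), (15, 20)]   # Blue diffused: loses everywhere
clustered = [(35, 0), (5, 30), (5, 30)]    # Blue clustered: wins one district

for name, districts in [("cracked", cracked), ("clustered", clustered)]:
    net = sum(net_wasted(b, r) for b, r in districts)
    voters = sum(b + r for b, r in districts)
    print(name, round(abs(net) / voters, 3))  # cracked 0.314, clustered 0.038
```

In the cracked layout, every single Blue vote is wasted; once Blue clusters, those votes win a local election instead, and the efficiency gap collapses.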

The few examples above where high segregation accompanied high efficiency gaps were those where the segregated clusters happened to be split by district lines.

I read many pieces from others while preparing this post. Though I don’t cite any of them explicitly, these works were essential for my preparation.

- *A formula goes to court: Partisan gerrymandering and the efficiency gap*, by Mira Bernstein and Moon Duchin. Available on the arXiv.
- *An impossibility theorem for gerrymandering*, by Boris Alexeev and Dustin Mixon. Available on the arXiv.
- *Flaws in the efficiency gap*, by Christopher Chambers, Alan Miller, and Joel Sobel. Available on Christopher Chambers’ site.
- *How the New Math of Gerrymandering Works*, by Nate Cohn and Quoctrung Bui. Available at the New York Times.
- *The Flaw in America’s ‘Holy Grail’ Against Gerrymandering*, by Sam Kean. Available at the Atlantic.
- *Dynamic Models of Segregation*, by Thomas Schelling. Journal of Mathematical Sociology, 1971, Vol 1, pp 143–186.

Below, I include the code I used to generate these simulations and images. This code, as well as much of the code I used to generate the particular data above, is available as a jupyter notebook on my GitHub. (But I would mention that, unlike some previous notebooks I’ve made available, this was really a working notebook and isn’t a final product in itself).

The heart of this code is based on code from Allen Downey, presented in his book “Think Complexity.” He generously released his code under the MIT license.

```
"""
Copyright (c) 2018 David Lowry-Duda
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
"""
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import animation
from matplotlib.colors import LinearSegmentedColormap
from scipy.signal import correlate2d
class Schelling:
    """A 2D grid of Schelling agents."""

    options = dict(mode='same', boundary='wrap')

    def __init__(self, n, m=None, p=0.5, empty_prob=0.1, red_prob=0.45, size=3):
        """
        Initialize grid with attributes.

        Args:
            n: (int) number of rows.
            m: (int) number of columns. If None, defaults to n.
            p: (float) ratio of neighbors that makes one feel comfortable.
            empty_prob: (float) probability an initial cell is empty.
            red_prob: (float) probability an initial cell is Red.
            size: (odd int) size of the neighborhood kernel.
        """
        self.p = p
        m = n if m is None else m
        EMPTY, RED, BLUE = 0, 1, 2
        choices = [EMPTY, RED, BLUE]
        probs = [empty_prob, red_prob, 1 - empty_prob - red_prob]
        self.array = np.random.choice(choices, (n, m), p=probs).astype(np.int8)
        self.kernel = self._make_kernel(size)

    def _make_kernel(self, size):
        """
        Construct a size*size adjacency kernel.

        Args:
            size: (int) size of kernel.

        Returns:
            np.array such as (for size=3)
                [[1,1,1],
                 [1,0,1],
                 [1,1,1]]
            In the size=n case, it's an n*n array of ones with a zero at
            the center.
        """
        pad = (size**2 - 1) // 2
        return np.array([1]*pad + [0] + [1]*pad).reshape((size, size))

    def count_neighbors(self):
        """
        Surveys neighbors of cells.

        Returns:
            The tuple (occupied, frac_red, frac_same), where

            occupied: logical array indicating occupied cells.
            frac_red: array with the fraction of red neighbors around each cell.
            frac_same: array with the fraction of similar neighbors.

        Note:
            Unoccupied cells do not count in neighbors or similarity.
        """
        a = self.array
        EMPTY, RED, BLUE = 0, 1, 2
        # These create np.arrays where each entry is True if the condition holds
        red = a == RED
        blue = a == BLUE
        occupied = a != EMPTY
        # count red neighbors and all neighbors
        num_red = correlate2d(red, self.kernel, **self.options)
        num_neighbors = correlate2d(occupied, self.kernel, **self.options)
        # compute fraction of similar neighbors
        frac_red = num_red / num_neighbors
        frac_blue = 1 - frac_red
        frac_same = np.where(red, frac_red, frac_blue)
        # no neighbors is considered the same as no similar neighbors
        frac_same[num_neighbors == 0] = 0
        frac_red[num_neighbors == 0] = 0
        # Unoccupied squares are not similar to anything
        frac_same[occupied == 0] = 0
        return occupied, frac_red, frac_same

    def segregation(self):
        """Computes the average fraction of similar neighbors."""
        occupied, _, frac_same = self.count_neighbors()
        return np.sum(frac_same) / np.sum(occupied)

    def step(self):
        """Executes one time step."""
        a = self.array
        # find the unhappy cells
        occupied, _, frac_same = self.count_neighbors()
        unhappy_locs = locs_where(occupied & (frac_same < self.p))
        # find the empty cells
        empty = a == 0
        num_empty = np.sum(empty)
        empty_locs = locs_where(empty)
        # shuffle the unhappy cells
        if len(unhappy_locs):
            np.random.shuffle(unhappy_locs)
        # for each unhappy cell, choose a random destination
        for source in unhappy_locs:
            i = np.random.randint(len(empty_locs))
            dest = tuple(empty_locs[i])
            # move
            a[dest] = a[tuple(source)]
            a[tuple(source)] = 0
            empty_locs[i] = source
        num_empty2 = np.sum(a == 0)
        assert num_empty == num_empty2
        return


def locs_where(condition):
    """
    Find cells where a logical array is True.

    Args:
        condition: (2D numpy logical array).

    Returns:
        Array with one set of coordinates per row indicating where
        condition was true.

    Example:
        If the input is (as np.array)
            [[1,0],
             [1,1]]
        then the output will be
            [[0,0],[1,0],[1,1]],
        which are the three locations of the nonzero (True) cells.
    """
    return np.transpose(np.nonzero(condition))


def make_cmap(color_dict, vmax=None, name='mycmap'):
    """
    Makes a custom color map.

    Args:
        color_dict: (dict) of form {number: color}.
        vmax: (float) high end of the range. If None, use max value
            from color_dict.
        name: (str) name for map.

    Returns:
        pyplot color map.
    """
    if vmax is None:
        vmax = max(color_dict.keys())
    colors = [(value/vmax, color) for value, color in color_dict.items()]
    cmap = LinearSegmentedColormap.from_list(name, colors)
    return cmap


class SchellingViewer:
    """Generates animated view of a Schelling array."""

    # colors from http://colorbrewer2.org/#type=diverging&scheme=RdYlBu&n=5
    colors = ['#fdae61', '#abd9e9', '#d7191c', '#ffffbf', '#2c7bb6']
    cmap = make_cmap({0: 'white', 1: colors[2], 2: colors[4]})
    options = dict(interpolation='none', alpha=0.8)

    def __init__(self, viewee):
        """
        Initialize.

        Args:
            viewee: (Schelling) object to view.
        """
        self.viewee = viewee
        self.im = None
        self.hlines = None
        self.vlines = None

    def step(self, iters=1):
        """Advances the viewee the given number of steps."""
        for i in range(iters):
            self.viewee.step()

    def draw(self, grid=False):
        """
        Draws the array, perhaps with a grid.

        Args:
            grid: (boolean) if True, draw grid lines.
        """
        self.draw_array(self.viewee.array)
        if grid:
            self.draw_grid()

    def draw_array(self, array=None, cmap=None, **kwds):
        """
        Draws the cells.

        Args:
            array: (2D np.array) Array to draw. If None, uses self.viewee.array.
            cmap: colormap to color array.
            **kwds: keywords are passed to plt.imshow as options.
        """
        # Note: we have to make a copy because some implementations
        # of step perform updates in place.
        if array is None:
            array = self.viewee.array
        a = array.copy()
        cmap = self.cmap if cmap is None else cmap
        n, m = a.shape
        plt.axis([0, m, 0, n])
        # Remove tickmarks
        plt.xticks([])
        plt.yticks([])
        options = self.options.copy()
        options['extent'] = [0, m, 0, n]
        options.update(kwds)
        self.im = plt.imshow(a, cmap, **options)

    def draw_grid(self):
        """Draws grid lines between cells."""
        a = self.viewee.array
        n, m = a.shape
        lw = 2 if m < 10 else 1
        options = dict(color='white', linewidth=lw)
        rows = np.arange(1, n)
        self.hlines = plt.hlines(rows, 0, m, **options)
        cols = np.arange(1, m)
        self.vlines = plt.vlines(cols, 0, n, **options)

    def animate(self, frames=20, interval=200, grid=False):
        """
        Creates an animation.

        Args:
            frames: (int) number of frames to draw.
            interval: (int) time between frames in ms.
            grid: (boolean) if True, include grid in drawings.
        """
        fig = plt.figure()
        self.draw(grid=grid)
        anim = animation.FuncAnimation(fig, self.animate_func,
                                       init_func=self.init_func,
                                       frames=frames, interval=interval)
        return anim

    def init_func(self):
        """Called at the beginning of an animation."""
        pass

    def animate_func(self, i):
        """Draws one frame of the animation."""
        if i > 0:
            self.step()
        a = self.viewee.array
        self.im.set_array(a)
        return (self.im,)
```

Then a typical Schelling object can be viewed through a call like

```
grid = Schelling(n=6)
viewer = SchellingViewer(grid)
viewer.draw(grid=True)
```

And here is the code for the district analysis, which sits on top of the Schelling class above.

```
class Districts(Schelling):
    """A 2D grid of Schelling agents organized into districts."""

    def __init__(self, n, m=None, p=0.5, rows=2,
                 cols=2, empty_prob=0.1, red_prob=0.45, size=3):
        """
        Initialize grid.

        Args:
            n: (int) number of rows in grid.
            m: (int) number of columns in grid. If None, defaults to n.
            p: (float) ratio of neighbors required to feel comfortable.
            rows: (int) number of rows of districts.
            cols: (int) number of columns of districts.
            empty_prob: (float) probability each initial cell is empty.
            red_prob: (float) probability each initial cell is Red.
            size: (odd int) size of neighborhood kernel.

        Note:
            `rows` must divide n, and `cols` must divide m.
            An exception is raised otherwise.
        """
        self.p = p
        self.n = n
        self.m = n if m is None else m
        self.rows = rows
        self.cols = cols
        self.schelling_grid = Schelling(n, m=self.m, p=p,
                                        empty_prob=empty_prob,
                                        red_prob=red_prob, size=size)
        self.array = self.schelling_grid.array
        self.kernel = self.schelling_grid.kernel
        self.row_mult = self.n // self.rows
        self.col_mult = self.m // self.cols
        try:
            assert self.row_mult * self.rows == self.n
            assert self.col_mult * self.cols == self.m
        except AssertionError:
            raise Exception("The number of rows and number of columns must"
                            " divide the size of the grid.")
        self.districts = self.make_districts()

    def make_districts(self, array=None):
        """Returns a list of np.arrays, one for each district."""
        if array is None:
            array = self.array
        # double slicing works thanks to numpy sugar
        return [array[self.row_mult*i: self.row_mult*(i+1),
                      self.col_mult*j: self.col_mult*(j+1)]
                for i in range(self.rows) for j in range(self.cols)]

    def votes(self, output=False):
        """Count votes in each district."""
        votes = dict()
        if output:
            print("Vote totals\n-----------\n")
        for num, district in enumerate(self.districts):
            IS_RED = 1
            IS_BLUE = 2
            votes[num] = {'red': list(district.flatten()).count(IS_RED),
                          'blue': list(district.flatten()).count(IS_BLUE)}
            if output:
                print("District {}:: Red vote: {}, Blue vote: {}".format(
                    num, votes[num]['red'], votes[num]['blue']))
        return votes

    def tally_votes(self, output=False):
        """Determine winners from votes in each district."""
        tallies = self.votes()
        if output:
            print("Tallying votes\n--------------\n")
        for num, district in enumerate(self.districts):
            dist_tally = tallies[num]
            dist_tally.update(self.determine_winner(dist_tally))
        return tallies

    def determine_winner(self, vote_tally):
        """
        Given a single district's vote_tally, determine the winner.

        Returns:
            A dictionary with the keys 'winner', 'red_wasted', and
            'blue_wasted', computed from the vote tally.
        """
        res = dict()
        if vote_tally['red'] > vote_tally['blue']:
            res['winner'] = 'red'
            res['red_wasted'] = vote_tally['red'] - vote_tally['blue'] - 1
            res['blue_wasted'] = vote_tally['blue']
        elif vote_tally['blue'] > vote_tally['red']:
            res['winner'] = 'blue'
            res['blue_wasted'] = vote_tally['blue'] - vote_tally['red'] - 1
            res['red_wasted'] = vote_tally['red']
        else:
            res['winner'] = 'tie'
            res['red_wasted'] = 0
            res['blue_wasted'] = 0
        return res

    def net_wasted_votes_by_district(self):
        """
        Compute net wasted votes in each district.

        Note:
            We adopt the convention that +1 wasted vote means a wasted blue
            vote, while -1 wasted vote means a wasted red vote.
        """
        res = dict()
        tallies = self.tally_votes()
        for num, district in enumerate(self.districts):
            res[num] = tallies[num]['blue_wasted'] - tallies[num]['red_wasted']
        return res

    def net_wasted_votes(self):
        """Sum net wasted votes across all districts."""
        wasted_by_dist = self.net_wasted_votes_by_district()
        return sum(wasted_by_dist.values())

    def efficiency_gap(self):
        """Compute |net wasted votes| / (number of voters)."""
        return abs(self.net_wasted_votes()) / np.sum(self.array != 0)

    def votes_to_md_table(self):
        """
        Output votes to a markdown table.

        This is a jupyter notebook convenience method.
        """
        vote_tally = self.tally_votes()
        ret = "|District|Blue Vote|Red Vote|Winner|Blue Wasted|Red Wasted|Net Wasted|\n"
        ret += "|-|-|-|-|-|-|-|\n"
        for district in range(len(vote_tally)):
            dist_res = vote_tally[district]
            bv = dist_res['blue']
            bw = dist_res['blue_wasted']
            rv = dist_res['red']
            rw = dist_res['red_wasted']
            nw = bw - rw
            winner = dist_res['winner']
            ret += "|{}|{}|{}|{}|{}|{}|{}|\n".format(
                district, bv, rv, winner, bw, rw, nw)
        return ret


class District_Viewer(SchellingViewer):
    """Viewer of Schelling District arrays."""

    def __init__(self, districts):
        super().__init__(districts.schelling_grid)
        self.row_multiplier = districts.row_mult
        self.col_multiplier = districts.col_mult

    def draw_grid(self):
        """Draws the district boundary lines."""
        a = self.viewee.array
        n, m = a.shape
        lw = 2 if m < 10 else 1
        options = dict(color='white', linewidth=lw)
        rows = self.row_multiplier * np.arange(1, n // self.row_multiplier)
        self.hlines = plt.hlines(rows, 0, m, **options)
        cols = self.col_multiplier * np.arange(1, m // self.col_multiplier)
        self.vlines = plt.vlines(cols, 0, n, **options)
```

The functionality is built on top of Schelling, above. Typical use would look like

```
dgrid = Districts(10, cols=5, p=.2)
viewer = District_Viewer(dgrid)
viewer.draw(grid=True)
dgrid.tally_votes()
```


There are a couple of different ways to take this story. The most common response I have seen is to blame the employee who accidentally triggered the alarm, and to forgive the Governor his error because who could have guessed that something like this would happen? The second most common response I see is a certain shock that the key mouthpiece of the Governor in this situation is apparently Twitter.

There is some merit to both of these lines of thought. Considering them in turn: it is pretty unfortunate that some employee triggered a state of hysteria by pressing an incorrect button (or something to that effect). We always hope that people with great responsibilities (such as those surrounding thermonuclear war) act with extreme caution.

So certainly some blame should be placed on the employee.

As for Twitter, I wonder whether a sarcasm filter has been watered down between the Governor’s initial remarks and my reading of them in Doug’s article for CNN. It seems likely to me that this comment is meant more as commentary on the status of Twitter as the President’s preferred medium of communicating with the People. It certainly seems unlikely to me that the Governor would both frequently use Twitter for important public messages *and* forget his Twitter credentials. Perhaps this is code for “I couldn’t get in touch with the person who manages my Twitter account” (because that person was hiding in a bunker?), but that’s not actually important.

When I first read about the false alarm in Hawaii and the follow-up stories, I was immediately reminded of a story I’d read on HackerNews and reddit about a junior software developer starting a job at a new company. Bright-eyed and bushy-tailed, the developer begins to set up her development environment and build some familiarity with the database. Not quite knowing better, she uses some credentials from the onboarding document given to her, and ultimately accidentally deletes the entire (actual, production) database.

The company immediately panics and blames her. It is her fault that she destroyed the database, and now the company has an enormous loss of data. They don’t have backups, they’re bringing in legal to assess damage, etc.

What is the moral of this cautionary Parable?

It is certainly NOT that one should blame the young developer.

The moral is that the system should not allow people who do not know any better to access (or delete) the production database, and further that there should be backups so that this sort of catastrophic incident cannot occur. Daily database backups and not including production database access credentials in onboarding documents are two steps in the right direction.

In a famous story from IBM,^{6} a junior developer makes a mistake that costs the company 10 million dollars. He walks into the office of Tom Watson, the CEO, expecting to get fired. “Fire you?” Mr Watson asked. “I just spent 10 million educating you.”

The system and culture should be crafted to

- prevent these mistakes,
- quickly correct these mistakes, and
- learn from errors to improve the system and culture.

Stories in the news have thus far focused on inadequate prevention, such as the widely circulated image of poor interface design (not to be confused with the earlier, even worse, version, which was apparently made up^{7}), or on inadequate ability to quickly correct these mistakes (such as this CNN article indicating that the Governor’s inability to tweet got in the way of quickly restoring peace of mind).

But what I’m interested in is: what will be learned from this mistake, and what changes to the system will be made? And slightly deeper, what led to the previous system?

US Pacific Command and the office of the Governor of Hawaii need to run a complete post-mortem to understand

- what led to this false alarm,
- what led to the nearly forty minutes between understanding there was a false alarm and disseminating this information, and
- what things should be done to address these issues.

Further, this information should be shared widely with the defense and alarm networks throughout the US. Surely Hawaii is not the only state with that (or a similar) setup in place. Can you not imagine this happening in some other state? Other nations might take this as inspiration to reflect on their own disaster-alert systems.

This is a huge opportunity to learn and improve. It may very well be that the poor employee continually makes ridiculous mistakes and should be let go, or it may be that the system demands too much concentration to operate without error, in which case the employee can help foolproof the system.

Unfortunately, due to the sensitive nature of this software and scenario, I don’t think that we’ll get to hear about the most important part: what is learned and changed. But that is the real lesson of this Parable for the Nuclear Age.

]]>Today I give a talk on counting lattice points on one-sheeted hyperboloids. These are the shapes described by

$$ X_1^2 + \cdots + X_{d-1}^2 = X_d^2 + h,$$

where $h > 0$ is a positive integer. The question is: how many lattice points $x$ are on such a hyperboloid with $| x |^2 \leq R$; or equivalently, how many lattice points are on such a hyperboloid and contained within a ball of radius $\sqrt R$ centered at the origin?

I describe my general approach of transforming this into a question about the behavior of modular forms, and then using spectral techniques from the theory of modular forms to understand this behavior. This becomes a question of understanding the shifted convolution Dirichlet series

$$ \sum_{n \geq 0} \frac{r_{d-1}(n+h)r_1(n)}{(2n + h)^s}.$$

Ultimately this comes from the modular form $\theta^{d-1}(z) \overline{\theta(z)}$, where

$$ \theta(z) = \sum_{m \in \mathbb{Z}} e^{2 \pi i m^2 z}.$$
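To sketch where the shifted convolution comes from (using only the expansions above): writing $z = x + iy$ and multiplying the Fourier expansions termwise gives

$$ \theta^{d-1}(z) \overline{\theta(z)} = \sum_{m \geq 0} \sum_{n \geq 0} r_{d-1}(m) r_1(n) e^{2 \pi i (m - n) x} e^{-2 \pi (m + n) y}. $$

Collecting the terms with $m - n = h$ isolates $\sum_{n \geq 0} r_{d-1}(n+h) r_1(n) e^{-2\pi (2n + h) y}$, and a Mellin transform in $y$ then produces, up to a Gamma factor, exactly the shifted convolution Dirichlet series above.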

Here are the slides for this talk. Note that this talk is based on chapter 5 of my thesis, and a preprint based on this chapter will (hopefully) soon appear on the arXiv.

]]>August 11, 1984: President Reagan is preparing for his weekly NPR radio address. The opening line of his address was to be

My fellow Americans, I’m pleased to tell you that today I signed legislation that will allow student religious groups to begin enjoying a right they’ve too long been denied — the freedom to meet in public high schools during nonschool hours, just as other student groups are allowed to do.^{1}

During the sound check, President Reagan joked

My fellow Americans, I’m pleased to tell you today that I’ve signed legislation that will outlaw Russia forever. We begin bombing in five minutes.

This was met with mild chuckles from the audio technicians, and it was never intentionally broadcast. But it was leaked, and reached the Russians shortly thereafter.

They were not amused.

The Soviet army was placed on alert once word of Reagan’s sound-check joke reached them. They dropped the alert later, presumably when the bombing didn’t begin. Over the next week, this gaffe drew a lot of attention. Here is NBC’s Tom Brokaw addressing “the joke heard round the world”.

The Pittsburgh Post-Gazette ran an article containing some of the Soviet responses five days later, on 16 August 1984.^{2} Similar articles ran in most major US newspapers that week, including the New York Times (which apparently retyped or OCR’d these statements, and these are now available on their site).

The major Russian papers Pravda and Izvestia, as well as the Soviet News Agency TASS, all decried the President’s remarks. Of particular note are two paragraphs from TASS. The first is reminiscent of many responses on Twitter today,

Tass is authorized to state that the Soviet Union deplores the U.S. President’s invective, unprecedentedly hostile toward the U.S.S.R. and dangerous to the cause of peace.

The second is a bit chilling, especially with modern context,

This conduct is incompatible with the high responsibility borne by leaders of states, particularly nuclear powers, for the destinies of their own peoples and for the destinies of mankind.

In 1984, an accidental microphone gaffe on behalf of the President led to public outcry both foreign and domestic; Soviet news outlets jumped on the opportunity to include additional propaganda^{3}. It is easy to confuse some of Donald Trump’s deliberate actions today with others’ mistakes. I hope that he knows what he is doing.

Given a list of strings, determine how many strings have no duplicate words.

This is a classic problem, and it’s particularly easy to solve in python. Some might use `collections.Counter`, but I think it’s more straightforward to use sets.

The key idea is that the set of words in a sentence will not include duplicates. So if taking the set of a sentence reduces its length, then there was a duplicate word.

In [1]:

```
with open("input.txt", "r") as f:
    lines = f.readlines()

def count_lines_with_unique_words(lines):
    num_pass = 0
    for line in lines:
        s = line.split()
        if len(s) == len(set(s)):
            num_pass += 1
    return num_pass

count_lines_with_unique_words(lines)
```

Out[1]:

I think this is the first day where I would have had a shot at the leaderboard if I’d been gunning for it.

Let’s add in another constraint. Determine how many strings have no duplicate words, even after anagramming. Thus the string

```
abc bac
```

is not valid, since the second word is an anagram of the first. There are many ways to tackle this as well, but I will handle anagrams by sorting the letters in each word first, and then running the bit from part 1 to identify repeated words.

In [2]:

```
with open("input.txt", "r") as f:
    lines = f.readlines()

sorted_lines = []
for line in lines:
    sorted_line = ' '.join([''.join(l) for l in map(sorted, line.split())])
    sorted_lines.append(sorted_line)
sorted_lines[:2]
```

Out[2]:

In [3]:

```
count_lines_with_unique_words(sorted_lines)
```

Out[3]:

Numbers are arranged in a spiral

```
17 16 15 14 13
18 5 4 3 12
19 6 1 2 11
20 7 8 9 10
21 22 23---> ...
```

Given an integer n, what is its Manhattan Distance from the center (1) of the spiral? For instance, the distance of 3 is $2 = 1 + 1$, since it’s one space to the right and one space up from the center.

Here’s my idea. The bottom right corner of the $k$th layer is the integer $(2k+1)^2$, since that’s how many integers are contained within that square. The other three corners in that layer are $(2k+1)^2 - 2k$, $(2k+1)^2 - 4k$, and $(2k+1)^2 - 6k$. Finally, the closest spot on the $k$th layer to the origin is at distance $k$: these are the four “axis locations” halfway between the corners, at $(2k+1)^2 - k$, $(2k+1)^2 - 3k$, $(2k+1)^2 - 5k$, and $(2k+1)^2 - 7k$.

For instance when $k = 1$, the bottom right is $(2 + 1)^2 = 9$, and the four “axis locations” are $9 - 1, 9 - 3, 9 - 5$, and $9 - 7$. The “axis locations” are $k$ away, and the corners are $2k$ away.

So I will first find which layer the number is on. Then I’ll figure out which side it’s on, and then how far away it is from the nearest “axis location” or “corner”.
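In fact, the layer arithmetic above determines the answer directly. As a sketch (rather than the mixture of code and hand computation that follows; the name `spiral_manhattan_distance` is my own), one might write:

```python
import math

def spiral_manhattan_distance(n):
    """Manhattan distance from square n to the center (1) of the spiral."""
    if n == 1:
        return 0
    # Smallest odd m with m^2 >= n; then n lies on layer k = (m-1)/2.
    m = math.ceil(math.sqrt(n))
    if m % 2 == 0:
        m += 1
    k = (m - 1) // 2
    side = 2 * k                # length of one side of the layer
    steps_back = m * m - n      # steps from the bottom-right corner back to n
    offset = steps_back % side  # position along a single side
    # k steps to reach the layer's "axis location", plus the offset from it
    return k + abs(offset - k)

# The example from the text: 3 is one space right and one space up.
spiral_manhattan_distance(3)  # 2
```

The last line is exactly the offset argument from the text: every square is $k$ steps out to its layer, plus however far it sits from the nearest axis location.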

My given number happens to be 289326.

In [1]:

```
import math

def find_lowest_larger_odd_square(n):
    upper = math.ceil(n**.5)
    if upper % 2 == 0:
        upper += 1
    return upper
```

In [2]:

```
assert find_lowest_larger_odd_square(39) == 7
assert find_lowest_larger_odd_square(26) == 7
assert find_lowest_larger_odd_square(25) == 5
```

In [3]:

```
find_lowest_larger_odd_square(289326)
```

Out[3]:

In [4]:

```
539**2 - 289326
```

Out[4]:

It happens to be that our integer is very close to an odd square.

The square is $539^2$, and that corner is at distance $538$ from the center.

Note that $539 = 2(269) + 1$, so this is the $269$th layer of the square.

The previous corner to $539^2$ is $539^2 - 538$, and the previous corner to that is $539^2 - 2\cdot 538 = 539^2 - 1076$.

This is the nearest corner.

How far is our integer from this corner?

In [5]:

`539**2 - 2*538 - 289326`

Out[5]:

In [6]:

```
538 - 119
```

Out[6]:

And so we solved the first part quickly with a mixture of code and hand computation.

In part two, the spiral has changed significantly. Build the spiral iteratively. Initially, start with 1. Then in the next square of the spiral, put in the integer that is the sum of the adjacent (including diagonal) numbers in the spiral. This spiral is

```
147 142 133 122 59
304 5 4 2 57
330 10 1 1 54
351 11 23 25 26
362 747 806---> ...
```

What is the first value that’s larger than 289326?

My plan is to construct this spiral. The central 1 will have coordinates (0,0), and the spiral will be stored in a dictionary whose key is the tuple of the location.

To construct the spiral, we note that the direction of adding goes in the pattern RULLDDRRRUUULLLLDDDD. The order is right, up, left, down: the number of times each direction is repeated goes in the sequence 1,1,2,2,3,3,4,4,….

In [7]:

```
spiral = {}
spiral[(0,0)] = 1
NEIGHBORS = [(1,0), (1,1), (0,1), (-1,1), (-1,0), (-1,-1), (0,-1), (1,-1)]
DIRECTION = [(1,0), (0,1), (-1,0), (0,-1)]  # Right Up Left Down

def spiral_until_at_least(n):
    spiral = {}  # Spiral dictionary
    spiral[(0,0)] = 1
    x, y = 0, 0
    steps_in_row = 1          # times spiral extends in same direction
    second_direction = False  # spiral extends in same direction twice: False if first leg, True if second
    nstep = 0                 # number of steps in current direction
    step_direction = 0        # index of direction in DIRECTION
    while True:
        dx, dy = DIRECTION[step_direction]
        x, y = x + dx, y + dy
        total = 0
        for neighbor in NEIGHBORS:
            nx, ny = neighbor
            if (x+nx, y+ny) in spiral:
                total += spiral[(x+nx, y+ny)]
        print("X: {}, Y:{}, Total:{}".format(x, y, total))
        if total > n:
            return total
        spiral[(x,y)] = total
        nstep += 1
        if nstep == steps_in_row:
            nstep = 0
            step_direction = (step_direction + 1) % 4
            if second_direction:
                second_direction = False
                steps_in_row += 1
            else:
                second_direction = True
```

In [8]:

```
spiral_until_at_least(55)
```

Out[8]:

In [9]:

```
spiral_until_at_least(289326)
```

Out[9]:

The sequence in part 2 grows really, really quickly. It starts 1, 1, 2, 4, 5, 10, 11, 23, …

Many mathematicians (recreational, amateur, and professional alike) often delight in properties of sequences of integers. And sometimes they put them in Sloane’s **Online Encyclopedia of Integer Sequences**, the OEIS. Miraculously, the sequence from part 2 appears in the OEIS.

It’s OEIS A141481.

But I’ve never seen this sequence before.

I wonder: how quickly does it grow? This is one of the most fundamental questions one can ask about a sequence.

Clearly it grows quickly — the entries are strictly increasing, and just after each corner they roughly double (since the adjacent entry and its diagonal neighbor are both counted, and they are roughly the same size).

But does this capture most of the growth?

In [10]:

```
spiral = {}
spiral[(0,0)] = 1
NEIGHBORS = [(1,0), (1,1), (0,1), (-1,1), (-1,0), (-1,-1), (0,-1), (1,-1)]
DIRECTION = [(1,0), (0,1), (-1,0), (0,-1)]  # Right Up Left Down
CORNERS = [1]

def spiral_until_at_least_print_corners(n):
    spiral = {}  # Spiral dictionary
    spiral[(0,0)] = 1
    x, y = 0, 0
    steps_in_row = 1          # times spiral extends in same direction
    second_direction = False  # spiral extends in same direction twice: False if first leg, True if second
    nstep = 0                 # number of steps in current direction
    step_direction = 0        # index of direction in DIRECTION
    while True:
        dx, dy = DIRECTION[step_direction]
        x, y = x + dx, y + dy
        total = 0
        for neighbor in NEIGHBORS:
            nx, ny = neighbor
            if (x+nx, y+ny) in spiral:
                total += spiral[(x+nx, y+ny)]
        if total > n:
            return total
        spiral[(x,y)] = total
        nstep += 1
        if nstep == steps_in_row:
            print("X: {}, Y:{}, Total:{}".format(x, y, total))
            CORNERS.append(total)
            nstep = 0
            step_direction = (step_direction + 1) % 4
            if second_direction:
                second_direction = False
                steps_in_row += 1
            else:
                second_direction = True
```

In [11]:

```
spiral_until_at_least_print_corners(10**15)
```

Out[11]:

In [12]:

```
CORNERS
```

Out[12]:

In [13]:

```
for a, b in zip(CORNERS, CORNERS[1:]):
    print(b/a)
```

You are given a table of integers. Find the difference between the maximum and minimum of each row, and add these differences together.

There is not a lot to say about this challenge. The plan is to read the file linewise, compute the difference on each line, and sum them up.

In [1]:

```
with open("input.txt", "r") as f:
    lines = f.readlines()
lines[0]
```

Out[1]:

In [2]:

```
l = lines[0]
l = l.split()
l
```

Out[2]:

In [3]:

```
def max_minus_min(line):
    '''Compute the difference between the largest and smallest integer in a line'''
    line = list(map(int, line.split()))
    return max(line) - min(line)

def sum_differences(lines):
    '''Sum the value of `max_minus_min` for each line in `lines`'''
    return sum(max_minus_min(line) for line in lines)
```

In [4]:

```
testcase = ['5 1 9 5','7 5 3', '2 4 6 8']
assert sum_differences(testcase) == 18
```

In [5]:

```
sum_differences(lines)
```

Out[5]:

In line with the first day’s challenge, I’m inclined to ask what we should “expect.” But what we should expect is not well-defined in this case. Let us rephrase the problem in a randomized sense.

Suppose we are given a table, $n$ lines long, where each line consists of $m$ elements, each a uniformly randomly chosen integer from $1$ to $10$. We might ask for the expected value of this operation, summing the differences between the maxima and minima of the rows, applied to this table. What should we expect?

As each line is independent of the others, we are really asking what is the expected value across a single row. So given $m$ integers uniformly randomly chosen from $1$ to $10$, what is the expected value of the maximum, and what is the expected value of the minimum?

Let’s begin with the minimum. The minimum is $1$ unless all the integers are greater than $1$. This has probability

$$ 1 - \left( \frac{9}{10} \right)^m = \frac{10^m - 9^m}{10^m}$$

of occurring. We rewrite it as the version on the right for reasons that will soon be clear.

The minimum is $2$ if all the integers are at least $2$ (which can occur in $9$ different ways for each integer), but not all the integers are at least $3$ (each integer has $8$ different ways of being at least $3$). Thus this has probability

$$ \frac{9^m - 8^m}{10^m}.$$

Continuing to do one more for posterity, the minimum is $3$ if all the integers are at least $3$ (each integer has $8$ different ways of being at least $3$), but not all integers are at least $4$ (each integer has $7$ different ways of being at least $4$). Thus this has probability

$$ \frac{8^m - 7^m}{10^m}.$$

And so on.

Recall that the expected value of a random variable is

$$ E[X] = \sum x_i P(X = x_i),$$

so the expected value of the minimum is

$$ \frac{1}{10^m} \big( 1(10^m - 9^m) + 2(9^m - 8^m) + 3(8^m - 7^m) + \cdots + 9(2^m - 1^m) + 10(1^m - 0^m)\big).$$

This simplifies nicely to

$$ \sum_{k = 1}^{10} \frac{k^m}{10^m}. $$

The same style of thinking shows that the expected value of the maximum is

$$ \frac{1}{10^m} \big( 10(10^m - 9^m) + 9(9^m - 8^m) + 8(8^m - 7^m) + \cdots + 2(2^m - 1^m) + 1(1^m - 0^m)\big).$$

This simplifies to

$$ \frac{1}{10^m} \big( 10 \cdot 10^m - 9^m - 8^m - \cdots - 2^m - 1^m \big) = 10 - \sum_{k = 1}^{9} \frac{k^m}{10^m}.$$

Subtracting, we find that the expected difference is

$$ 9 - 2\sum_{k=1}^{9} \frac{k^m}{10^m}. $$

From this we can compute the expectation for each list-length $m$. It is good to note that as $m \to \infty$, the expected value tends to $9$. Does this make sense? Yes: when there are lots of values, we should expect some value to be a $10$ and some to be a $1$. It’s also pretty straightforward to see how to extend this to integers from $1$ to $N$.
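As a quick sanity check on the formula, here is a small Monte Carlo simulation (the function names here are my own):

```python
import random

def expected_difference(m, N=10):
    # (N-1) - 2*sum_{k=1}^{N-1} (k/N)^m; with N=10 this is the formula above
    return (N - 1) - 2 * sum((k / N) ** m for k in range(1, N))

def simulated_difference(m, N=10, trials=100_000):
    # Average of max - min over many random rows of m entries from 1 to N
    total = 0
    for _ in range(trials):
        row = [random.randint(1, N) for _ in range(m)]
        total += max(row) - min(row)
    return total / trials

# With a single entry the max and min coincide, and the formula agrees.
expected_difference(1)  # 0.0
```

For moderate $m$ the simulated average lands within a few hundredths of the closed form, which is reassuring.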

Looking at the data, it does not appear that the integers were randomly chosen. Instead, there are very many relatively small integers and some relatively large integers. So we shouldn’t expect this toy analysis to accurately model this problem — the distribution is definitely not uniform random.

But we can try it out anyway.

In [6]:

```
# We see the table is 16 lines long
len(lines)
```

Out[6]:

In [7]:

```
# And a generic line is 16 numbers long
len(lines[0].split())
```

Out[7]:

In [8]:

```
total = 6999
for k in range(7000):
    total = total - 2 * (k/7000)**16
```

In [9]:

```
total
```

Out[9]:

The expected value of the table is $16$ times this.

In [10]:

```
16 * total
```

Out[10]:

In the table, each row has exactly one pair of integers in which one evenly divides the other. Find the sum of the quotients.

My plan is straightforward. For each line, go through the elements and determine whether each one divides, or is divisible by, a later element. Once we’ve found such a pair, we compute the quotient, and we add these quotients together.

In [11]:

```
def find_quotient_in_line(line):
    '''
    Finds a pair of integers in line in which one divides the other.
    Returns the (integer) quotient.
    '''
    line = list(map(int, line.split()))
    for i, elem in enumerate(line):
        for num in line[i+1:]:
            if elem % num == 0:
                return elem // num
            if num % elem == 0:
                return num // elem
    raise KeyError('No divisor relationship found in line.')

def sum_quotients(lines):
    '''Sum the value of `find_quotient_in_line` for each line in `lines`'''
    return sum(find_quotient_in_line(line) for line in lines)
```

In [12]:

```
testcase = ['5 9 2 8', '9 4 7 3', '3 8 6 5']
assert find_quotient_in_line(testcase[0]) == 4
assert sum_quotients(testcase) == 9
```

In [13]:

```
sum_quotients(lines)
```

Out[13]:

My background and intentions aren’t the same as Peter Norvig’s: his expertise dwarfs mine. And timezones are not kind to those of us in the UK, so I won’t be competing for a position on the leaderboards. These are meant to be fun. And sometimes there are tidbits of math that want to come out of the challenges.

Enough of that. Let’s dive into the first day.

In [1]:

```
with open('input.txt', 'r') as f:
    seq = f.read()
seq = seq.strip()
seq[:10]
```

Out[1]:

In [2]:

```
def sum_matched_digits(s):
    "Sum of digits which match following digit, and first digit if it matches last digit"
    total = 0
    for a, b in zip(s, s[1:] + s[0]):
        if a == b:
            total += int(a)
    return total
```

They provide a few test cases, which we use to check our method.

In [3]:

```
assert sum_matched_digits('1122') == 3
assert sum_matched_digits('1111') == 4
assert sum_matched_digits('1234') == 0
assert sum_matched_digits('91212129') == 9
```

For fun, here is a one-line version.

In [4]:

```
def sum_matched_digits_oneliner(s):
    return sum(int(a) if a == b else 0 for a, b in zip(s, s[1:] + s[0]))
```

In [5]:

```
assert sum_matched_digits_oneliner('1122') == 3
assert sum_matched_digits_oneliner('1111') == 4
assert sum_matched_digits_oneliner('1235') == 0
assert sum_matched_digits_oneliner('91212129') == 9
```

For more fun, this is a regex version.

In [6]:

```
import regex

def sum_matched_digits_regex(s):
    matches = map(int, regex.findall(r'(\d)\1', s, overlapped=True))
    total = sum(matches)
    if s[0] == s[-1]:
        total += int(s[0])
    return total
```

In [7]:

```
assert sum_matched_digits_regex('1122') == 3
assert sum_matched_digits_regex('1111') == 4
assert sum_matched_digits_regex('1235') == 0
assert sum_matched_digits_regex('91212129') == 9
```

Regardless of which one we use, we find the answer.

In [8]:

```
print(sum_matched_digits(seq))
print(sum_matched_digits_oneliner(seq))
print(sum_matched_digits_regex(seq))
```

I wonder: is there any sort of time difference between these?

In [9]:

```
%timeit sum_matched_digits(seq)
```

In [10]:

```
%timeit sum_matched_digits_oneliner(seq)
```

In [11]:

```
%timeit sum_matched_digits_regex(seq)
```

In [12]:

```
import random

randseq = ''
for i in range(10**7):
    randseq += str(random.randint(0, 9))
randseq[:10]
```

Out[12]:

In [13]:

```
%timeit -n5 sum_matched_digits(randseq)
```

In [14]:

```
%timeit -n5 sum_matched_digits_oneliner(randseq)
```

In [15]:

```
%timeit -n5 sum_matched_digits_regex(randseq)
```

In [16]:

```
sum_matched_digits(randseq)
```

Out[16]:

We can compute what we expect the value to be for a random string of digits. Assuming that each digit is randomly selected, each digit has probability $1/10$ of matching the subsequent digit. Thus the expected contribution from each digit is (its value) $\times \frac{1}{10}$. The digit itself is $0$ with probability $0.1$, $1$ with probability $0.1$, and so on. This becomes

$$ \sum_{d = 0}^{10 - 1} \frac{d}{10} \times \frac{1}{10} = \frac{10(10-1)}{2 \cdot 10^2} = \frac{9}{20} = 0.45.$$

If there are $n$ (random) digits, then we expect the sum of the digits which match the subsequent digit to be $0.45 n$.

In this case, there are $10^7$ digits, and we should expect the sum to be $0.45 \cdot 10^7 = 4.5 \cdot 10^6$. How close are we?

In [17]:

```
abs(sum_matched_digits(randseq) - 4.5 * 10**6)
```

Out[17]:

That’s really, really close. How does this apply to the Advent of Code Day 1 problem?

In [18]:

```
0.45 * len(seq)
```

Out[18]:

For the second part of the problem, we are tasked with finding the sum of those digits which match the digit halfway around the string. This only makes sense for even-length strings.

It’s easy enough to modify the loop to do this.

In [19]:

```
def sum_matched_digits_with_sep(s, sep):
    "Sum of digits which match the digit sep digits later"
    total = 0
    for a, b in zip(s, s[sep:] + s[:sep]):
        if a == b:
            total += int(a)
    return total
```

In [20]:

```
assert sum_matched_digits_with_sep('1212', 2) == 6
assert sum_matched_digits_with_sep('1221', 2) == 0
assert sum_matched_digits_with_sep('123425', 3) == 4
assert sum_matched_digits_with_sep('123123', 3) == 12
assert sum_matched_digits_with_sep('12131415', 4) == 4
```

In [21]:

```
sum_matched_digits_with_sep(seq, len(seq)//2)
```

Out[21]:

The one-liner can be similarly written. What about the regex?

We want to identify a digit, skip `sep - 1` digits, and then check to see if the subsequent digit matches.

In principle, we need to worry about wrapping around the string. But we notice that not wrapping around misses exactly half of the matches, so we just double the non-wrapped answer. This leads to the following.

In [22]:

```
import regex

def sum_matched_digits_with_sep_regex(s, sep):
    matches = map(int, regex.findall(r'(\d)\d{}\1'.format("{" + str(sep-1) + "}"), s, overlapped=True))
    total = 2 * sum(matches)
    return total
```

In [23]:

```
assert sum_matched_digits_with_sep_regex('1212', 2) == 6
assert sum_matched_digits_with_sep_regex('1221', 2) == 0
assert sum_matched_digits_with_sep_regex('123425', 3) == 4
assert sum_matched_digits_with_sep_regex('123123', 3) == 12
assert sum_matched_digits_with_sep_regex('12131415', 4) == 4
```

In [24]:

```
sum_matched_digits_with_sep_regex(seq, len(seq)//2)
```

Out[24]:

It is interesting to note that the expected value is the same as in the consecutive digit case. This is because the probability that two randomly chosen digits agree has nothing to do with the location of the digits. One random digit is as good as another.

I will instead note that a similar calculation as above shows that the expected value also depends on the base involved. We arrived at the value $n \times \frac{9}{20} = n \times \frac{10 - 1}{2 \cdot 10}$ for an $n$ digit number written in base $10$.

For an $n$ digit number written in base $b$, the expected value is

$$ n \cdot \frac{b-1}{2b}.$$

This increases as the base increases, and tends towards $n/2$.
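To double-check the base-$b$ claim numerically, here is a quick simulation (the function names are my own):

```python
import random

def expected_match_sum(n, b):
    # Expected sum of digits matching the digit sep places later, base b
    return n * (b - 1) / (2 * b)

def simulated_match_sum(n, b, sep, trials=2000):
    # Average the matched-digit sum over many random base-b digit strings
    total = 0
    for _ in range(trials):
        digits = [random.randrange(b) for _ in range(n)]
        total += sum(d for d, e in zip(digits, digits[sep:] + digits[:sep]) if d == e)
    return total / trials
```

For $n = 1000$ random base-$10$ digits the prediction is $450$, and the prediction is the same for every separation, exactly as argued above.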

The notebook itself (as a jupyter notebook) can be found and viewed on my github (link to jupyter notebook). When written, this notebook used a Sage 8.0.0.rc1 backend kernel and ran fine on the standard Sage 8.0 release, though I expect it to work fine with any recent official version of sage. The last cell requires an active notebook to be seen (or some way to export jupyter widgets to standalone javascript, which either doesn’t yet exist or I am not aware of it).

I will also note that I converted the notebook for display on this website using jupyter’s nbconvert package. I have some CSS and syntax coloring set up that affects the display.

Good luck learning sage, and happy hacking.

Sage (also known as SageMath) is a general purpose computer algebra system written on top of the python language. In Mathematica, Magma, and Maple, one writes code in the mathematica-language, the magma-language, or the maple-language. Sage is python.

But no python background is necessary for the rest of today’s guided tutorial. The purpose of today’s tutorial is to give an indication about how one really *uses* sage, and what might be available to you if you want to try it out.

I will spoil the surprise by telling you upfront the two main points I hope you’ll take away from this tutorial.

- With tab-completion and documentation, you can do many things in sage without ever having done them before.
- The ecosystem of libraries and functionality available in sage is tremendous, and (usually) pretty easy to use.

Let’s first get a small feel for sage by seeing some standard operations and what typical use looks like through a series of trivial, mostly unconnected examples.

In [1]:

```
# Fundamental manipulations work as you hope
2+3
```

Out[1]:

You can also subtract, multiply, divide, exponentiate…

```
>>> 3-2
1
>>> 2*3
6
>>> 2^3
8
>>> 2**3 # (also exponentiation)
8
```

There is an order of operations, but these things work pretty much as you want them to work. You might try out several different operations.

Sage includes a lot of functionality, too. For instance,

In [2]:

```
factor(-1008)
```

Out[2]:

In [3]:

```
list(factor(1008))
```

Out[3]:

Sage knows many functions and constants, and these are accessible.

In [4]:

```
sin(pi)
```

Out[4]:

In [5]:

```
exp(2)
```

Out[5]:

Sage tries to internally keep expressions in exact form. To present approximations, use `N()`.

In [6]:

```
N(exp(2))
```

Out[6]:

In [7]:

```
pi
```

Out[7]:

In [8]:

```
N(pi)
```

Out[8]:

You can ask for a number of digits in the approximation by giving a `digits` keyword to `N()`.

In [9]:

```
N(pi, digits=60)
```

Out[9]:

In [10]:

```
sqrt(2)
```

Out[10]:

In [11]:

```
sqrt(2)**2
```

Out[11]:

Of course, there are examples where floating point arithmetic gets in the way.

In sage/python, integers have unlimited digit length. Real precision arithmetic is a bit more complicated, which is why sage tries to keep exact representations internally. We don’t go into tracking digits of precision in sage, but it is usually possible to prescribe levels of precision.

The `range` function in python counts up to a given number, starting at 0.

In [12]:

```
range(16)
```

Out[12]:

In [13]:

```
A = matrix(4,4, range(16))
A
```

Out[13]:

In [14]:

```
B = matrix(4,4, range(-5, 11))
B
```

Out[14]:

In [15]:

```
A*B
```

Out[15]:

Functionality associated to an object is accessed by typing the object’s name, then a `.`, and then calling the function.

In [16]:

```
A.charpoly()
```

Out[16]:

There are some top-level functions as well.

In [17]:

```
factor(A.charpoly())
```

Out[17]:

Sometimes you start with an object, such as a matrix, and you wonder what you can do with it. Sage has very good tab-completion and introspection in its notebook interface.

Try typing

```
A.
```

and hit `<Tab>`. Sage should display a list of things it thinks it can do to the matrix A.

Note that on CoCalc or external servers, tab completion sometimes has a small delay.

In [ ]:

```
A.
```

Some of these are more meaningful than others, but you have a list of options. If you want to find out what an option does, like `A.eigenvalues()`, then type

```
A.eigenvalues?
```

and hit enter.

In [18]:

```
A.eigenvalues?
```

In [19]:

```
A.eigenvalues()
```

Out[19]:

If you’re really curious about what’s going on, you can type

```
A.eigenvalues??
```

which will also show you the implementation of that functionality. (You usually don’t need this).

In [ ]:

```
A.eigenvalues??
```

In [20]:

```
E = EllipticCurve([1,2,3,4,5])
E
```

Out[20]:

In [ ]:

```
# Tab complete me to see what's available
E.
```

In [21]:

```
E.conductor()
```

Out[21]:

In [22]:

```
E.rank()
```

Out[22]:

Sage knows about complex numbers as well. Use `i` or `I` to mean $\sqrt{-1}$.

In [23]:

```
(1+2*I) * (pi - sqrt(5)*I)
```

Out[23]:

In [24]:

```
c = 1/(sqrt(3)*I + 3/4 + sqrt(29)*2/3)
```

`c` is stored with perfect representations of square roots.

In [25]:

```
c
```

Out[25]:

But we can have sage give numerical estimates of objects by calling `N()` on them.

In [26]:

```
N(c)
```

Out[26]:

In [27]:

```
N(c, 20) # Keep 20 "bits" of information
```

Out[27]:

Use `latex(<object>)` to give a latex representation.

In [28]:

```
latex(c)
```

Out[28]:

In [29]:

```
latex(E)
```

Out[29]:

In [30]:

```
latex(A)
```

Out[30]:

You can have sage print the LaTeX version in the notebook by using `pretty_print`.

In [31]:

```
pretty_print(A)
```

In [32]:

```
H = DihedralGroup(6)
H.list()
```

Out[32]:

In [33]:

```
a = H[1]
a
```

Out[33]:

In [34]:

```
a.order()
```

Out[34]:

In [35]:

```
b = H[2]
b
```

Out[35]:

In [36]:

```
a*b
```

Out[36]:

In [37]:

```
for elem in H:
    if elem.order() == 2:
        print elem
```

In [38]:

```
# Or, in the "pythonic" way
elements_of_order_2 = [elem for elem in H if elem.order() == 2]
elements_of_order_2
```

Out[38]:

In [39]:

```
rand_elem = H.random_element()
rand_elem
```

Out[39]:

In [40]:

```
G_sub = H.subgroup([rand_elem])
G_sub
```

Out[40]:

In [41]:

```
# Explicitly using elements of a group
H("(1,2,3,4,5,6)") * H("(1,5)(2,4)")
```

Out[41]:

The real purpose of these exercises is to show you that it’s often possible to use tab-completion to quickly find out what is and isn’t possible to do within sage.

- What things does sage know about this subgroup? Can you find the cardinality of the subgroup? (Note that the subgroup is generated by a random element, and your subgroup might be different than your neighbor’s). Can you list all subgroups of the dihedral group in sage?
- Sage knows other groups as well. Create a symmetric group on 5 elements. What does sage know about that? Can you verify that S5 isn’t simple? Create some cyclic groups?

It’s pretty easy to work over different fields in Sage as well. Here are a few examples.

In [42]:

```
# It may be necessary to use `reset('x')` if x has otherwise been defined
K.<alpha> = NumberField(x**3 - 5)
```

In [43]:

```
K
```

Out[43]:

In [44]:

```
alpha
```

Out[44]:

In [45]:

```
alpha**3
```

Out[45]:

In [46]:

```
(alpha+1)**3
```

Out[46]:
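Under the hood, arithmetic in $K = \mathbb{Q}(\alpha)$ is just polynomial arithmetic modulo the relation $\alpha^3 = 5$. A minimal plain-Python sketch, with coefficient lists `[c0, c1, c2]` standing for $c_0 + c_1\alpha + c_2\alpha^2$ (the helper `nf_mul` is my own name, not a Sage function):

```python
def nf_mul(a, b, rel=5):
    # Multiply c0 + c1*alpha + c2*alpha^2 elements, using alpha^3 = rel.
    # (Integer coefficients here; fractions.Fraction works the same way.)
    prod = [0] * 5
    for i in range(3):
        for j in range(3):
            prod[i + j] += a[i] * b[j]
    prod[0] += rel * prod[3]  # reduce alpha^3 -> rel
    prod[1] += rel * prod[4]  # reduce alpha^4 -> rel * alpha
    return prod[:3]

one_plus_alpha = [1, 1, 0]
sq = nf_mul(one_plus_alpha, one_plus_alpha)
cube = nf_mul(sq, one_plus_alpha)
print(cube)  # [6, 3, 3], i.e. 3*alpha^2 + 3*alpha + 6
```

This matches the expansion $(\alpha+1)^3 = \alpha^3 + 3\alpha^2 + 3\alpha + 1 = 3\alpha^2 + 3\alpha + 6$ once $\alpha^3$ is replaced by 5.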

In [47]:

```
GF?
```

In [48]:

```
F7 = GF(7)
```

In [49]:

```
a = 2/5
a
```

Out[49]:

In [50]:

```
F7(a)
```

Out[50]:
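The arithmetic behind that coercion is ordinary modular arithmetic, which plain Python can check directly: $2/5$ in $\mathbf{F}_7$ means $2 \cdot 5^{-1} \bmod 7$. (`pow(b, -1, m)` computes a modular inverse in Python 3.8+.)

```python
p = 7
inv5 = pow(5, -1, p)   # modular inverse of 5 mod 7
print(inv5)            # 3, since 5 * 3 = 15 ≡ 1 (mod 7)
print((2 * inv5) % p)  # 6, the image of 2/5 in GF(7)
```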

In [51]:

```
var('x')
```

Out[51]:

In [52]:

```
eqn = x**3 + sqrt(2)*x + 5 == 0
a = solve(eqn, x)[0].rhs()
```

In [53]:

```
a
```

Out[53]:

In [54]:

```
latex(a)
```

Out[54]:

In [55]:

```
pretty_print(a)
```

In [56]:

```
# Also RR, CC
QQ
```

Out[56]:

In [57]:

```
K.<b> = QQ[a]
```

In [58]:

```
K
```

Out[58]:

In [59]:

```
a.minpoly()
```

Out[59]:

In [60]:

```
K.class_number()
```

Out[60]:

Sage tries to keep the same syntax even across different applications. Above, we factored a few integers. But we may also try to factor over a number field. You can factor 2 over the Gaussian integers by:

- Creating the Gaussian integers. The constructor `ZZ[I]` works.
- Getting the Gaussian integer 2 (which is programmatically different from the typical integer 2), by something like `ZZ[I](2)`.
- Calling `factor` on that 2.
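The factorization to expect, up to units, is $2 = (-i)(1+i)^2$. A quick sanity check of that identity with Python’s builtin `complex` type (just verifying the product, not doing any factoring):

```python
unit = -1j        # -i, a unit in the Gaussian integers
prime = 1 + 1j    # 1 + i, the Gaussian prime dividing 2

print(unit * prime**2)  # (2+0j)
```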

In [61]:

```
# Let's declare that we want x and y to mean symbolic variables
x = 1
y = 2
print(x+y)
reset('x')
reset('y')
var('x')
var('y')
print(x+y)
```

In [62]:

```
solve(x^2 + 3*x + 2, x)
```

Out[62]:
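For a quadratic like this one, the roots Sage returns ($x = -2$ and $x = -1$) agree with the quadratic formula, which is easy to reproduce in plain Python (the helper `real_roots` is my own name, not a Sage function):

```python
import math

def real_roots(a, b, c):
    # Real roots of a*x^2 + b*x + c = 0 via the quadratic formula.
    disc = b * b - 4 * a * c
    if disc < 0:
        return []
    root = math.sqrt(disc)
    return sorted([(-b - root) / (2 * a), (-b + root) / (2 * a)])

print(real_roots(1, 3, 2))  # [-2.0, -1.0]
```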

In [63]:

```
solve(x^2 + y*x + 2 == 0, x)
```

Out[63]:

In [64]:

```
# Nonlinear systems with complicated solutions can be solved as well
var('p,q')
eq1 = p+1==9
eq2 = q*y+p*x==-6
eq3 = q*y**2+p*x**2==24
s = solve([eq1, eq2, eq3, y==1], p,q,x,y)
s
```

Out[64]:

In [65]:

```
s[0]
```

Out[65]:

In [66]:

```
latex(s[0])
```

Out[66]:

$$\left[p = 8, q = \left(-26\right), x = \left(\frac{5}{2}\right), y = 1\right]$$

In [67]:

```
# We can also do some symbolic calculus
f = x**2 + 2*x + 1
f
```

Out[67]:

In [68]:

```
diff(f, x)
```

Out[68]:

In [69]:

```
integral(f, x)
```

Out[69]:

In [70]:

```
F = integral(f, x)
F(x=1)
```

Out[70]:
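The computation above is just the term-by-term power rule: integrating $x^2 + 2x + 1$ gives $\tfrac{x^3}{3} + x^2 + x$, which evaluates to $7/3$ at $x = 1$. A plain-Python version with exact rationals (coefficient lists, constant term first; both helper names are my own):

```python
from fractions import Fraction

def poly_integral(coeffs):
    # Antiderivative of sum(coeffs[k] * x^k), constant of integration 0.
    return [Fraction(0)] + [Fraction(c, k + 1) for k, c in enumerate(coeffs)]

def poly_eval(coeffs, x):
    return sum(c * Fraction(x) ** k for k, c in enumerate(coeffs))

f_coeffs = [1, 2, 1]         # x^2 + 2x + 1, constant term first
F = poly_integral(f_coeffs)  # x + x^2 + x^3/3
print(poly_eval(F, 1))       # 7/3
```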

In [71]:

```
diff(sin(x**3), x)
```

Out[71]:

In [72]:

```
# Compute the 4th derivative
diff(sin(x**3), x, 4)
```

Out[72]:

In [73]:

```
# We can try to foil sage by giving it a hard integral
integral(sin(x)/x, x)
```

Out[73]:

In [74]:

```
f = sin(x**2)
f
```

Out[74]:

In [75]:

```
# And sage can give Taylor expansions
f.taylor(x, 0, 20)
```

Out[75]:
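Substituting $u = x^2$ into the series for $\sin u$ gives $\sin(x^2) = \sum_k (-1)^k x^{4k+2}/(2k+1)!$, which is the expansion Sage prints. A small plain-Python check (my own sketch) that the truncated series agrees numerically with `math.sin` near 0:

```python
import math

def taylor_sin_x2(x, degree=20):
    # Partial sum of sin(x^2) = sum_k (-1)^k x^(4k+2) / (2k+1)!,
    # keeping terms of degree at most `degree`.
    total = 0.0
    k = 0
    while 4 * k + 2 <= degree:
        total += (-1) ** k * x ** (4 * k + 2) / math.factorial(2 * k + 1)
        k += 1
    return total

x0 = 0.5
print(taylor_sin_x2(x0), math.sin(x0 * x0))  # nearly identical near 0
```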

In [76]:

```
f(x,y)=y^2+1-x^3-x
contour_plot(f, (x,-pi,pi), (y,-pi,pi))
```

Out[76]:

In [77]:

```
contour_plot(f, (x,-pi,pi), (y,-pi,pi), colorbar=True, labels=True)
```

Out[77]:

In [78]:

```
# Implicit plots
f(x,y) = -x**3 + y**2 - y + x + 1
implicit_plot(f(x,y)==0,(x,0,2*pi),(y,-pi,pi))
```

Out[78]:

- Experiment with the above examples by trying out different functions and plots.
- Sage can do partial fractions for you as well. To do this, first define the function you want to split up. Suppose you call it `f`. Then use `f.partial_fraction(x)`. Try this out.
- Sage can also create 3d plots. Create one. Start by looking at the documentation for `plot3d`.

Of the various math software packages, sage+python provides my preferred plotting environment. I have used sage to create plots for notes, lectures, classes, experimentation, and publications. You can quickly create good-looking plots. For example, I used sage/python extensively in creating this note for my students on Taylor series, a classic “hard topic” that students have lots of questions about, at least in the US universities I’m familiar with. (To this day, about 1/6 of the traffic to my website is to that page.)

As a non-trivial example, I present the following interactive plot.

In [79]:

```
@interact
def g(f=sin(x), c=0, n=(1..30),
      xinterval=range_slider(-10, 10, 1, default=(-8,8), label="x-interval"),
      yinterval=range_slider(-50, 50, 1, default=(-3,3), label="y-interval")):
    x0 = c
    degree = n
    xmin, xmax = xinterval
    ymin, ymax = yinterval
    p = plot(f, xmin, xmax, thickness=4)
    dot = point((x0, f(x=x0)), pointsize=80, rgbcolor=(1,0,0))
    ft = f.taylor(x, x0, degree)
    pt = plot(ft, xmin, xmax, color='red', thickness=2, fill=f)
    show(dot + p + pt, ymin=ymin, ymax=ymax, xmin=xmin, xmax=xmax)
    html('$f(x)\;=\;%s$' % latex(f))
    html('$P_{%s}(x)\;=\;%s+R_{%s}(x)$' % (degree, latex(ft), degree))
```

There are a variety of tutorials and resources for learning more about sage. I list several here.

- Sage provides some tutorials of its own. These include its Guided Tour and the Standard Sage Tutorial. The Standard Sage Tutorial is designed to take 2-4 hours to work through, and afterwards you should have a pretty good sense of the Sage environment.
- The PREP Tutorials are a set of tutorials created in a program sponsored by the Mathematical Association of America, aimed at using Sage with university students. These tutorials are designed for people new both to Sage and to programming.

See also the main sage website.

For questions about specific things in sage, you can ask about these on StackOverflow or AskSage. You might also consider the sage-support or sage-edu mailing lists.

It isn’t necessary to know python to use sage, but a heavy sage user will benefit significantly from learning some python. Conversely, sage is very easy to use if you know python.

The purpose of this note is to describe the large effects of having no internet at my home for the last four weeks. I’m at my home about half the time, leading to the title.

I have become accustomed to having the internet at all times, and I now see that many of my habits involved it. In the mornings and evenings, I would check HackerNews, longform, and reddit for interesting reads. Invariably there were more interesting-seeming things than I would read, and my *Checkout* bookmarks list is a growing, hundreds-of-items-long collection of maybe-interesting stuff. In the in-between times throughout the day, I would check out a few of these bookmarks.

All in all, I would spend an enormous amount of time reading random interesting tidbits, even though much of this time was spread out in the “in-betweens” in my day.

When I didn’t have internet at my home, I had to fill all those “in-between” moments, as well as my waking and sleeping moments, with something else. Faced with the necessity of doing something, I filled most of these moments with reading books. Made out of paper. (The same sort of books whose sales are rising compared to ebooks, contrary to most predictions a few years ago).

I’d forgotten how much I enjoyed reading a book in large chunks, in very few sittings. I usually have an ebook on my phone that I read during commutes, and perhaps most of my idle reading over the last several years has been in 20 page increments. The key phrase here is “idle reading”. I now set aside time to “actively read”, in perhaps 100 page increments. Reading enables a “flow state” very similar to the sensation I get when mathing continuously, or programming continuously, for a long period of time. I not only read more, but I enjoy what I’m reading more.

As a youth, I would read all the time. Fun fact: at one time, I’d read almost every book in the Star Wars expanded universe. There were over a hundred, and they were all canon (before Disney paved over the universe to make room). I learned to love reading by reading science fiction, and the first novel I remember reading was a copy of Andre Norton’s “The Beastmaster” (… which is great. A part telepath part Navajo soldier moves to another planet. Then it’s a space western. What’s not to love?).

My primary source of books is the library at the University of Warwick. Whether through differences in continental taste or simply a difference of focus, the University Library doesn’t have many books in its fiction collection that I’ve been intending to read. I realize now that most of the nonfiction I read originates on the internet, while much of the fiction I read comes from books. Encouraged by a lack of alternatives, I have picked up many more, and more varied, nonfiction books than I otherwise would have.

As an unexpected side effect, I found that I would carefully download some of the articles I had identified as “interesting” a bit before I headed home from the office. Without internet, I read far more of my *checkout* bookmarks than I did with internet. Weird. Correspondingly, I found that I would spend a bit more time cutting down the false-positive rate: I used to bookmark almost anything that I thought might be interesting but which I wasn’t going to read right then. Now I separated the wheat from the chaff, since harvesting wheat takes time. (Perhaps this is something I should do more often. I recognize that there are services and newsletters that promise to identify great materials, but somehow none of them have matched my tastes better than hackernews or longform, and both of those have questionable signal-to-noise ratios.)

The result is that I’ve goofed off reading probably about the same amount of time, but in fewer topics and at greater depth in each. It’s easy to jump from 10-page article to 10-page article online; when the medium is books, things come in larger chunks.

I *feel* more productive reading a book, even though I don’t actually attribute much to the difference. There may be something to the act of reading contiguously and continuously for long periods of time, though. This correlated with an overall increase in my “chunking” of tasks across continuous blocks of time, instead of loosely multitasking. I think this is for the better.

I now have internet at my flat. Some habits will slide back, but there are other new habits that I will keep. I’ll keep my bedroom computer-free. In the evening, this means I read books before I sleep. In the morning, this means I must leave and go to the other room before I waste any time on online whatevers. Both of these are good. And I’ll try to continue to chunk time.

To end, I’ll note what I read in the last month, along with a few notes about each.

From best to worst.

- The best fiction I read was *The Three Body Problem*, by Cixin Liu. I’d heard lots about this book. It’s Chinese scifi, and much of the story takes place against the backdrop of the Chinese Cultural Revolution… which I know embarrassingly little about. The moral and philosophical underpinnings of this book are interesting and atypical (to me). At its core are various groups of people who have lost faith in aspects of science, or humanity, or both. I was unprepared for the many (hundreds?) of pages of philosophizing in the book, but I understood why it was there. This aspect reminded me of the last half of Anathem by Stephenson (perhaps the best book I’ve read in the last few years), which also had many (also hundreds?) of pages of philosophizing. I love this book, I recommend it. And I note that I read it in four sittings. There are two more books completing a trilogy, and I will read them once I can get my hands on them. [No library within 50 miles of me has them. I did buy the first one, though. Perhaps I’ll buy the other two.]
- The second best was *The Lathe of Heaven*, by Ursula Le Guin. This is some classic fantasy, and is pretty mindbending. I think the feel of many of Ursula Le Guin’s books is very similar: there are many interesting ideas throughout, but the book deliberately loses coherence as the flow and fury of the plot reaches a climax. I like *The Lathe of Heaven* more than *A Wizard of Earthsea* and about the same as *The Left Hand of Darkness*, also by Le Guin. I read this book in three sittings.
- I read three of the Witcher books, by Andrzej Sapkowski: *The Sword of Destiny*, *Blood of Elves*, and *The Time of Contempt*. These are fun, not particularly deep reads. There is a taste of moral ambiguity that I like, as it’s different from what I normally find. On the other hand, Sapkowski often uses humor or ambiguity in place of a meaningful, coherent plot. *The Sword of Destiny* is a collection of short tales, and I think his short tales are better than his novels, entirely because one doesn’t need or expect coherence from short stories.

I’m currently reading *The Confusion* by Neal Stephenson, book two of the Baroque Cycle. Right now, I am exactly 1 page in.

I rank these from those I most enjoyed to those I least enjoyed.

- *How Equal Temperament Ruined Harmony*, by Duffin. This was recommended to me as an introduction to music theory [in fact, I noted it from a comment thread on hackernews somewhere], but really it is a treatise on the history of tuning and temperaments. It turns out that modern equal temperament suffers from many flaws that aren’t commonly taught. When I got back to the office after reading this book, I spent a good amount of time on youtube listening to songs in meantone tuning and just intonation. There is a difference! I read this book in 2 sittings: it’s short, pretty simple, and generally nice. However, there are several long passages that are simply better to skip. Nonetheless I learned a lot.
- *A Random Walk Down Wall Street*, by Burton Malkiel. I didn’t know too much about investing before reading this book. I wouldn’t say that I know too much after reading it either, but the book is about investing. I was warned that reading it would make me think the only way to really invest is to purchase index funds; and indeed, that is the overwhelming (and explicit) takeaway from the book. But I found the book surprisingly readable, and read it very quickly. I find that some of the analysis is biased towards long-term investing, even as a basis of comparison.
- *Guesstimation*, by Weinstein. Ok, perhaps it is not fair to say that one “reads” this book. It consists of many Fermi-style questions (how many golf balls does it take to fill up a football stadium, that sort of question), followed by their analysis. So I would read a question, sit down and do my own analysis, and then compare it against Weinstein’s. I was stunned at how often the analyses were tremendously similar and arrived at essentially the same order of magnitude. [But not always, that’s for sure. There are also lots of things that I estimate very, very poorly.] There’s a small subgenre of “popular mathematics for the reader who is willing to take out a pencil and paper” (which can’t have a big readership, but which I thoroughly enjoy), and this is a good book within that subgenre. I’m currently working through its sequel.
- *Nature’s Numbers*, by Ian Stewart. This is a pop math book. Ian Stewart is an emeritus professor at my university, so it seemed appropriate to read something of his. This is a surprisingly fast read (I read it in a single sitting). Stewart is known for writing approachable popular math accounts, and this fits.
- *The Structure of Scientific Revolutions*, by Thomas Kuhn. This is metascience. I read the first half of this book/essay very quickly, and I struggled through its second half. It came highly recommended, but I found the signal-to-noise ratio to be pretty low. It might be that I wasn’t very willing to navigate the careful treading around equivocation throughout. However, I think many of the ideas are good. I don’t know if someone has written a 30-page summary, but I think this may be possible, and it would be a good alternative to the book/essay itself.

I’m now reading *Grit*, by Angela Duckworth. Another side effect of reading more is that I find myself reading one fiction, one non-fiction, and one “simple” book at the same time.

Written while on a bus without internet to Heathrow, minus the pictures (which were added at Heathrow).
