Category Archives: Expository

Update to Second Moments in the Generalized Gauss Circle Problem

Last year, my coauthors Tom Hulse, Chan Ieong Kuan, and Alex Walker posted a paper to the arXiv called “Second Moments in the Generalized Gauss Circle Problem”. I’ve briefly described its contents before.

This paper has been accepted and will appear in Forum of Mathematics: Sigma.

This is the first time I’ve submitted to the Forum of Mathematics, and I must say that this has been a very good journal experience. One interesting aspect of FoM: Sigma is that it is immediate (gold) open access, and it doesn’t release in issues. Instead, articles become available (for free) once the submission process is done. I was reviewing a publication-proof of the paper yesterday, and they appear to be very quick with regard to editing. Perhaps the paper will appear before the end of the year.

An updated version of the paper (the version from before the handling of proofs at the journal, so there will be a number of mostly aesthetic differences from the published version) will appear on the arXiv on Monday 10 December.

A new appendix has appeared

There is one major addition to the paper that didn’t appear in the original preprint. At one of the referee’s suggestions, Chan and I wrote an appendix. The major content of this appendix concerns a technical detail about Rankin-Selberg convolutions.

If $f$ and $g$ are weight $k$ cusp forms on $\mathrm{SL}(2, \mathbb{Z})$ with expansions $$ f(z) = \sum_{n \geq 1} a(n) e(nz), \quad g(z) = \sum_{n \geq 1} b(n) e(nz), $$ then one can use a (real analytic) Eisenstein series $$ E(s, z) = \sum_{\gamma \in \mathrm{SL}(2, \mathbb{Z})_\infty \backslash \mathrm{SL}(2, \mathbb{Z})} \mathrm{Im}(\gamma z)^s $$ to recognize the Rankin-Selberg $L$-function \begin{equation}\label{RS} L(s, f \otimes g) := \zeta(s) \sum_{n \geq 1} \frac{a(n)b(n)}{n^{s + k - 1}} = h(s) \langle f g y^k, E(s, z) \rangle, \end{equation} where $h(s)$ is an easily-understandable function of $s$ and where $\langle \cdot, \cdot \rangle$ denotes the Petersson inner product.

When $f$ and $g$ are not cusp forms, or when $f$ and $g$ are modular with respect to a congruence subgroup of $\mathrm{SL}(2, \mathbb{Z})$, then there are adjustments that must be made to the typical construction of $L(s, f \otimes g)$.

When $f$ and $g$ are not cusp forms but are modular on the full modular group $\mathrm{SL}(2, \mathbb{Z})$, Zagier provided a way to recognize $L(s, f \otimes g)$. Under certain conditions that he describes, he shows that one can still recognize $L(s, f \otimes g)$ as an inner product with an Eisenstein series as in \eqref{RS}.

In principle, his method of proof would apply to non-cuspidal forms defined on congruence subgroups, but in practice this becomes too annoying and bogged down with details to work with. Fortunately, in 2000, Gupta gave a different construction of $L(s, f \otimes g)$ that generalizes more readily to non-cuspidal forms on congruence subgroups. His construction is very convenient, and it shows that $L(s, f \otimes g)$ has all of the properties expected of it.

However, Gupta does not show that there are certain conditions under which one can recognize $L(s, f \otimes g)$ as an inner product against an Eisenstein series. For this paper, we need to deal very explicitly and concretely with $L(s, \theta^2 \otimes \overline{\theta^2})$, which is formed from the modular form $\theta^2$, non-cuspidal on a congruence subgroup.

The Appendix to the paper can be thought of as an extension of Gupta’s paper: it uses Gupta’s ideas and techniques to prove a result analogous to \eqref{RS}. We then use this to get the explicit understanding necessary to tackle the Gauss Sphere problem.

There is more to this story. I’ll return to it in a later note.

Other submission details for FoM: Sigma

I should say that there are many other revisions between the original preprint and the final one. These are mainly due to the extraordinary efforts of two Referees. One Referee was kind enough to give us approximately 10 pages of itemized suggestions and comments.

When I first opened these comments, I was a bit afraid. Having so many comments was daunting. But this Referee really took his or her time to point us in the right direction, and the resulting paper is vastly improved (and in many cases shortened, although the new appendix masks the overall reduction in length).

More broadly, the Referee acted as a sort of mentor with respect to my technical writing. I have a lot of opinions on technical writing, but this process changed and helped sharpen my ideas concerning good technical math writing.

I sometimes hear a lot of negativity about peer review, but this particular pair of Referees turned the publication process into an opportunity to learn about good mathematical exposition — I didn’t expect this.

I was also surprised by the infrastructure that existed at the University of Warwick for handling a gold open access submission. As part of their open access funding, Forum of Math: Sigma has an author-pays model. Or rather, the author’s institution pays. It took essentially no time at all for Warwick to arrange the payment (about 500 pounds).

This is a not-inconsequential amount of money, but it is much less than the 1500 dollars that PLoS One charges. The comparison with PLoS One is perhaps apt. PLoS is older, and perhaps paved the way for modern gold open access journals like FoM. PLoS was started by a group of established biologists and chemists, including a Nobel prize winner; FoM was started by a group of established mathematicians, including multiple Fields medalists.

I will certainly consider Forum of Mathematics in the future.


Notes from a Talk at Building Bridges 4

On 18 July 2018 I gave a talk at the 4th Building Bridges Automorphic Forms Workshop, which is hosted at the Renyi Institute in Budapest, Hungary this year. In this talk, I spoke about counting points on hyperboloids, with a certain focus on counting points on the three dimensional hyperboloid

$$\begin{equation} X^2 + Y^2 = Z^2 + h \end{equation}$$

for any fixed integer $h$.

I gave a similar talk at the 32nd Automorphic Forms Workshop at Tufts in March. I don’t say this during my talk, but a big reason for giving these talks is to continue to inspire me to finish the corresponding paper. (There are still a couple of rough edges that need some attention.)

The methodology for the result relies on the spectral expansion of half-integral weight modular forms. This is unfriendly to those unfamiliar with the subject, and particularly mysterious to students. But there is a nice connection to a topic discussed by Arpad Toth during the previous week’s associated summer school.

Arpad sketched a proof of the spectral decomposition of $L^2(\Gamma \backslash \mathcal{H})$ for $\Gamma = \mathrm{SL}(2, \mathbb{Z})$. He showed that
$$\begin{equation} L^2(\Gamma \backslash \mathcal{H}) = \textrm{cuspidal} \oplus \textrm{Eisenstein}, \tag{1}
\end{equation}$$
where the cuspidal contribution comes from Maass forms and the Eisenstein contribution comes from line integrals against Eisenstein series.

The typical Eisenstein series $$\begin{equation} E(z, s) = \sum_{\gamma \in \Gamma_\infty \backslash \Gamma} \mathrm{Im}(\gamma z)^s \end{equation}$$ only converges for $\mathrm{Re}(s) > 1$, and the initial decomposition in $(1)$ implicitly has $s$ in this range.

To write down the integrals appearing in the Eisenstein spectrum explicitly, one normally shifts the line of integration to $\mathrm{Re}(s) = 1/2$. As Arpad explained, classically this produces a pole at $s = 1$ (whose residue is the constant function).

In half-integral weight, the Eisenstein series has a pole at $s = 3/4$, with the standard theta function

$$\begin{equation} \theta(z) = \sum_{n \in \mathbb{Z}} e^{2 \pi i n^2 z} \end{equation}$$

as the residue. (More precisely, it’s a constant times $y^{1/4} \theta(z)$, or a related theta function for $\Gamma_0(N)$). I refer to this portion of the spectrum as the residual spectrum, since it comes from often-forgotten residues of Eisenstein series. Thus the spectral decomposition for half-integral weight objects is a bit more complicated than the normal case.

When giving talks involving half-integral weight spectral expansions to audiences including non-experts, I usually omit any description of this. But for those who attended the summer school, it’s possible to at least recognize where these additional terms come from.

The slides for this talk are available here.


Paper Announcement: A Shifted Sum for the Congruent Number Problem

Tom Hulse, Chan Ieong Kuan, Alex Walker, and I have just uploaded a new paper to the arXiv titled A Shifted Sum for the Congruent Number Problem. In this charming, short paper, we investigate a particular sum of terms which are products of square-indicator functions and show that its asymptotics are deeply connected to congruent numbers. This note serves to describe and provide additional context for these results. (This note is also available as a pdf).

Congruent Numbers

We consider some triangles. There are many right triangles, such as the triangle with sides $(3, 4, 5)$ or the triangle with sides $(1, 1, \sqrt{2})$. We call a right triangle rational when all its side lengths are rational numbers. For illustration, $(3, 4, 5)$ is rational, while $(1, 1, \sqrt{2})$ is not. $\DeclareMathOperator{\sqfree}{sqfree}$

There is mythology surrounding rational right triangles. According to legend, the ancient Greeks, led both philosophically and mathematically by Pythagoras (who was the first person to call himself a philosopher and essentially the first to begin to distill and codify mathematics), believed all numbers and quantities were ratios of integers (rational). When a disciple of Pythagoras named Hippasus found that the side lengths of the right triangle $(1, 1, \sqrt{2})$ were not rational multiples of each other, the other followers of Pythagoras killed him by casting him overboard while at sea for having produced an element which contradicted the gods. (It is with some irony that we now recognize this as a simple consequence of the Pythagorean Theorem.)

This mythology is uncertain, but what is certain is that even the ancient Greeks were interested in studying rational right triangles, and they began to investigate what we now call the Congruent Number Problem. By the year 972 the CNP appears in Arabic manuscripts in (essentially) its modern formulation. The Congruent Number Problem (CNP) may be the oldest unresolved math problem.

We call a positive rational number $t$ congruent if there is a rational right triangle with area $t$. The triangle $(3,4,5)$ shows that $6 = 3 \cdot 4 / 2$ is congruent. The CNP is to describe all congruent numbers. Alternatively, the CNP asks whether there is an algorithm to determine definitively whether or not $t$ is a congruent number for any $t$.

We can reduce the problem to a statement about integers. If the rational number $t = p/q$ is the area of a triangle with legs $a$ and $b$, then the triangle with legs $aq$ and $bq$ has area $tq^2 = pq$. It follows that to every rational number there is an associated squarefree integer for which either both are congruent or neither is congruent. Further, if $t$ is congruent, then $ty^2$ and $t/y^2$ are congruent for any integer $y$.

We may also restrict to integer-sided triangles if we look for triangles whose area has squarefree part $t$. That is, if $t$ is the area of a triangle with rational legs $a/A$ and $b/B$, then $tA^2 B^2$ is the area of the triangle with integer legs $aB$ and $bA$.

It is in this form that we consider the CNP today.

Congruent Number Problem

Given a squarefree integer $t$, does there exist a right triangle with integer side lengths such that the squarefree part of the area of the triangle is $t$?

We will write this description a lot, so for a triangle $T$ we introduce the notation
\begin{equation}
\sqfree(T) = \text{The squarefree part of the area of } T.
\end{equation}
For example, the area of the triangle $T = (6, 8, 10)$ is $24 = 6 \cdot 2^2$, and so $\sqfree(T) = 6$. We should expect this, as $T$ is exactly a doubled-in-size $(3,4,5)$ triangle, which also corresponds to the congruent number $6$. Note that this allows us to consider only primitive right triangles.
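As a concrete aid for experimenting with this definition, the squarefree part of an integer is easy to compute by dividing out square factors. Here is a small Python helper; the name sqfree_part is mine, not from the paper.

def sqfree_part(n):
    """Return the squarefree part of a positive integer n."""
    d = 2
    while d * d <= n:
        while n % (d * d) == 0:
            n //= d * d
        d += 1
    return n

assert sqfree_part(24) == 6  # the area of the (6, 8, 10) triangle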

Main Result

Let $\tau(n)$ denote the square-indicator function. That is, $\tau(n)$ is $1$ if $n$ is a square, and is $0$ otherwise. Then the main result of the paper is that the sum
\begin{equation}
S_t(X) := \sum_{m = 1}^X \sum_{n = 1}^X \tau(m-n)\tau(m)\tau(nt)\tau(m+n)
\end{equation}
is related to congruent numbers through the asymptotic
\begin{equation}
S_t(X) = C_t \sqrt X + O_t\Big( \log^{r/2} X\Big),
\end{equation}
where
\begin{equation}
C_t = \sum_{h_i \in \mathcal{H}(t)} \frac{1}{h_i}.
\end{equation}
Each $h_i$ is a hypotenuse of a primitive integer right triangle $T$ with $\sqfree(T) = t$. Each hypotenuse will occur in a pair of similar triangles $(a, b, h_i)$ and $(b, a, h_i)$; $\mathcal{H}(t)$ is the family of these triangles, choosing only one triangle from each similar pair. The exponent $r$ in the error term is the rank of the elliptic curve
\begin{equation}
E_t(\mathbb{Q}): y^2 = x^3 - t^2 x.
\end{equation}

What this says is that $S_t(X)$ will have a main term if and only if $t$ is a congruent number, so that computing $S_t(X)$ for sufficiently large $X$ will show whether $t$ is congruent. (In fact, it’s easy to show that $S_t(X) \neq 0$ if and only if $t$ is congruent, so the added value here is the nature of the asymptotic).
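For concreteness, here is a completely naive evaluation of $S_t(X)$ straight from the definition. This is a sketch for experimentation only (the $O(X^2)$ double loop is far too slow for the large $X$ the asymptotic concerns), and the helper names are mine.

from math import isqrt

def tau(n):
    """Square-indicator: 1 if n is a perfect square (including 0), else 0."""
    if n < 0:
        return 0
    r = isqrt(n)
    return 1 if r * r == n else 0

def S(t, X):
    """Naive evaluation of S_t(X) from the definition."""
    return sum(
        tau(m - n) * tau(m) * tau(n * t) * tau(m + n)
        for m in range(1, X + 1)
        for n in range(1, X + 1)
    )

For the congruent number $6$, the $(3,4,5)$ triangle corresponds to the progression of squares $1, 25, 49$, so S(6, X) is already nonzero once $X \geq 25$.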

We should be careful to note that this does not solve the CNP, since the error term depends in an inexplicit way on the desired number $t$. What this really means is that we do not have a good way of recognizing when the first nonzero term should occur in the double sum. We can only guarantee that for any $t$, understanding $S_t(X)$ for sufficiently large $X$ will allow one to understand whether $t$ is congruent or not.

Intuition and Methodology

There are four primary components to this result:

  1. There is a bijection between primitive integer right triangles $T$ with
    $\sqfree(T) = t$ and arithmetic progressions of squares $m^2 - tn^2, m^2,
    m^2 + tn^2$ (where each term is itself a square).
  2. There is a bijection between primitive integer right triangles $T$ with
    $\sqfree(T) = t$ and points on the elliptic curve $E_t(\mathbb{Q}): y^2 = x^3
    - t^2 x$ with $y \neq 0$.
  3. If the triangle $T$ corresponds to a point $P$ on the curve $E_t$, then
    the size of the hypotenuse of $T$ can be bounded below by $H(P)$, the
    (naive) height of the point on the elliptic curve.
  4. Néron (and perhaps Mordell, but I’m not quite fluent in the initial
    history of the theory of elliptic curves) proved strong (upper) bounds on
    the number of points on an elliptic curve up to a given height. (In fact,
    they proved asymptotics which are much stronger than we use).

In this paper, we use $(1)$ to relate triangles $T$ to the sum $S_t(X)$ and we use $(2)$ to relate these triangles to points on the elliptic curve. Tracking the exact nature of the hypotenuses through these bijections allows us to relate the sum to certain points on elliptic curves. In order to facilitate the tracking of these hypotenuses, we phrase these bijections in slightly different ways than have appeared in the literature. By $(3)$ and $(4)$, we can bound the number and size of the hypotenuses which appear in terms of numbers of points on the elliptic curve up to a certain height. Intuitively this is why the higher the rank of the elliptic curve (corresponding roughly to the existence of many more points on the curve), the worse the error term in our asymptotic.

I would further conjecture that the error term in our asymptotic is essentially best-possible, even though we have thrown away some information in our proof.

Additional Context

We are not the first to note either the bijection between triangles $T$ and arithmetic progressions of squares or the bijection between triangles $T$ and points on a particular elliptic curve. The first is surely an ancient observation, but I don’t know who first considered the relation to elliptic curves. But it’s certain that this was a fundamental aspect of Tunnell’s famous work A Classical Diophantine Problem and Modular Forms of Weight 3/2 from 1983, in which he used the properties of the elliptic curve $E_t$ to relate the CNP to the Birch and Swinnerton-Dyer Conjecture.

One statement following from the Birch and Swinnerton-Dyer conjecture (BSD) is that if an elliptic curve $E$ has rank $r$, then the $L$-function $L(s, E)$ has a zero of order $r$ at $1$. The relation between lots of points on the curve and the existence of a zero is intuitive from the approximate relation that
\begin{equation}
L(1, E) \approx \lim_{X \to \infty} \prod_{p \leq X} \frac{p}{\#E(\mathbb{F}_p)},
\end{equation}
so if $E$ has lots and lots of points then we should expect the multiplicands to be very small.
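This heuristic is easy to experiment with numerically. Here is a brute-force sketch of mine (workable only for small primes; primes of bad reduction are simply skipped):

def count_points(t, p):
    """Number of points on y^2 = x^3 - t^2 x over F_p, counting infinity."""
    squares = {(y * y) % p for y in range(p)}
    total = 1  # the point at infinity
    for x in range(p):
        rhs = (x ** 3 - t * t * x) % p
        if rhs == 0:
            total += 1   # y = 0: one point
        elif rhs in squares:
            total += 2   # two square roots: two points
    return total

def truncated_product(t, X):
    """The product of p / #E_t(F_p) over primes p <= X of good reduction."""
    result = 1.0
    sieve = [True] * (X + 1)
    for p in range(2, X + 1):
        if not sieve[p]:
            continue
        for q in range(p * p, X + 1, p):
            sieve[q] = False
        if (2 * t) % p != 0:  # skip primes dividing 2t (bad reduction)
            result *= p / count_points(t, p)
    return result

In line with the heuristic above, one expects the partial products for a congruent $t$ to drift toward $0$ as $X$ grows.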

On the other hand, the elliptic curve $E_t: y^2 = x^3 – t^2 x$ has the interesting property that any point with $y \neq 0$ generates a free group of points on the curve. From the bijections alluded to above, a primitive right integer triangle $T$ with $\sqfree(T) = t$ corresponds to a point on $E_t$ with $y \neq 0$, and thus guarantees that there are lots of points on the curve. Tunnell showed that what I described as “lots of points” is actually enough points that $L(1, E)$ must be zero (assuming the relation between the rank of the curve and the value of $L(1, E)$ from BSD).

Tunnell proved that if BSD is true, then $L(1, E_t) = 0$ if and only if $t$ is a congruent number.

Yet for any elliptic curve we know how to compute $L(1, E)$ to guaranteed accuracy (for instance by using Dokchitser’s algorithm). Thus a corollary of Tunnell’s theorem is that BSD implies that there is an algorithm which can be used to determine definitively whether or not any particular integer $t$ is congruent.

This is the state of the art on the congruent number problem. Unfortunately, BSD (or even the somewhat weaker statement, lying between full BSD and mere nonzero rank of elliptic curves, that Tunnell’s result for the CNP requires) is quite far from being proven.

In this context, the main result of this paper is not as effective at actually determining whether a number is congruent or not. But it does have the benefit of not relying on any unknown conjecture.

And there are some potential follow-up questions. The sum $S_t(X)$ appears as an integral transform of the multiple Dirichlet series
\begin{equation}
\sum_{m,n} \frac{\tau(m-n)\tau(m)\tau(nt)\tau(m+n)}{m^s n^w}
\approx
\sum_{m,n} \frac{r_1(m-n)r_1(m)r_1(nt)r_1(m+n)}{m^s n^w},
\end{equation}
where $r_1(n)$ is $1$ if $n = 0$, is $2$ if $n$ is a positive square, and is $0$ otherwise. The function $r_1(n)$ appears as the Fourier coefficients of the half-integral weight standard theta function
\begin{equation}
\theta(z)
= \sum_{n \in \mathbb{Z}} e^{2 \pi i n^2 z}
= \sum_{n \geq 0} r_1(n) e^{2 \pi i n z},
\end{equation}
and $S_t(X)$ is a shifted convolution sum coming from some products of modular forms related to $\theta(z)$.

It may be possible to gain further understanding of the behavior of $S_t(X)$ (and therefore the congruent number problem) by studying the shifted convolution as coming from theta functions.

I would guess that there is a deep relation to Tunnell’s analysis in his 1983 paper, as in some sense he constructs appropriate products of three theta functions and uses them centrally in his proof. But I do not understand this relationship well enough yet to know whether it is possible to deepen our understanding of the CNP, BSD, or Tunnell’s proof. That is something to explore in the future.


Notes from a talk at Tufts, Automorphic Forms Workshop

On 19 March I gave a talk at the 32nd Automorphic Forms Workshop, which is hosted by Tufts this year. The content of the talk concerned counting points on hyperboloids, and in particular counting points on the three dimensional hyperboloid

$$\begin{equation}
X^2 + Y^2 = Z^2 + h
\end{equation}$$

for any fixed integer $h$. But thematically, I wanted to give another concrete example of using modular forms to compute some sort of arithmetic data, and to mention how the perhaps apparently unrelated topic of spectral theory appears even in such an arithmetic application.

Somehow, starting from counting points on $X^2 + Y^2 = Z^2 + h$ (which appears simple enough on its own that I could probably put this in front of an elementary number theory class and they would feel comfortable experimenting away on the topic), one gets to very scary-looking expressions like

$$\begin{equation}
\sum_{t_j}
\langle P_h^k, \mu_j \rangle
\langle \theta^2 \overline{\theta} y^{3/4}, \mu_j \rangle +
\sum_{\mathfrak{a}}\int_{(1/2)}
\langle P_h^k, E_h^k(\cdot, u) \rangle
\langle \theta^2 \overline{\theta} y^{3/4}, E_h^k(\cdot, u) \rangle du,
\end{equation}$$

which is full of lots of non-obvious symbols and is generically intimidating.

Part of the theme of this talk is to give a very direct idea of how one gets to the very complicated spectral expansion from the original lattice-counting problem. Stated differently, perhaps part of the theme is to describe a simple-looking nail and a scary-looking hammer, and show that the hammer actually works quite well in this case.
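And the nail really is easy to experiment with directly. Here is the sort of brute-force count (a sketch of mine, with made-up names) that an elementary number theory class could play with:

def hyperboloid_count(h, R):
    """Count integer points (X, Y, Z) with X^2 + Y^2 = Z^2 + h, all |coordinates| <= R."""
    squares = {z * z: z for z in range(R + 1)}
    count = 0
    for x in range(-R, R + 1):
        for y in range(-R, R + 1):
            w = x * x + y * y - h
            if w in squares:
                count += 1 if squares[w] == 0 else 2  # z = 0, or both z and -z
    return count

Watching how hyperboloid_count(h, R) grows with $R$ is exactly the lattice-counting problem that the spectral machinery addresses.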

The slides for this talk are available here.


Segregation, Gerrymandering, and Schelling’s Model

[This note is more about modeling some of the mathematics behind political events than politics themselves. And there are pretty pictures.]

Gerrymandering has become a recurring topic in the news. The Supreme Court of the US, as well as several state courts and state supreme courts, is hearing multiple cases on partisan gerrymandering (all beginning with a case in Wisconsin).

Intuitively, it is clear that gerrymandering is bad. It allows politicians to choose their voters, instead of the other way around. And it allows the majority party to quash minority voices.

But how can one identify a gerrymandered map? To quote Justice Kennedy in his Concurrence in the 2004 Supreme Court case Vieth v. Jubelirer:

When presented with a claim of injury from partisan gerrymandering, courts confront two obstacles. First is the lack of comprehensive and neutral principles for drawing electoral boundaries. No substantive definition of fairness in districting seems to command general assent. Second is the absence of rules to limit and confine judicial intervention. With uncertain limits, intervening courts–even when proceeding with best intentions–would risk assuming political, not legal, responsibility for a process that often produces ill will and distrust.

Later, he adds to the first obstacle, saying:

The object of districting is to establish “fair and effective representation for all citizens.” Reynolds v. Sims, 377 U.S. 533, 565—568 (1964). At first it might seem that courts could determine, by the exercise of their own judgment, whether political classifications are related to this object or instead burden representational rights. The lack, however, of any agreed upon model of fair and effective representation makes this analysis difficult to pursue.

From Justice Kennedy’s Concurrence emerges a theme — a “workable standard” of identifying gerrymandering would open up the possibility of limiting partisan gerrymandering through the courts. Indeed, at the core of the Wisconsin gerrymandering case is a proposed “workable standard”, based around the efficiency gap.
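To make that concrete: the efficiency gap compares the two parties’ “wasted” votes, meaning all votes cast for a losing candidate plus the winner’s votes beyond the majority threshold. Here is a minimal sketch of the computation (my own illustration of the standard definition; the data format is made up):

def efficiency_gap(districts):
    """districts: list of (votes_A, votes_B) pairs, one per district.

    Returns (wasted_A - wasted_B) / total votes cast.
    """
    wasted_A = wasted_B = total = 0
    for a, b in districts:
        total += a + b
        threshold = (a + b) // 2 + 1  # votes needed to win the district
        if a > b:
            wasted_A += a - threshold  # winner's surplus votes
            wasted_B += b              # all of the loser's votes
        else:                          # (ties are glossed over in this sketch)
            wasted_B += b - threshold
            wasted_A += a
    return (wasted_A - wasted_B) / total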

 

Thomas Schelling and Segregation

In 1971, American economist Thomas Schelling (who later won the Nobel Prize in Economics in 2005) published Dynamic Models of Segregation (Journal of Mathematical Sociology, 1971, Vol 1, pp 143–186). He sought to understand why racial segregation in the United States seems so difficult to combat.

He introduced a simple model of segregation suggesting that even if each individual person doesn’t mind living with others of a different race, they might still choose to segregate themselves through mild preferences. As each individual makes these choices, overall segregation increases.

I write this post because I wondered what happens if we adapt Schelling’s model to instead model a state and its district voting map. In place of racial segregation, I consider political segregation. Supposing the district voting map does not change, I wondered how the efficiency gap will change over time as people further segregate themselves.

It seemed intuitive to me that political segregation (where people who had the same political beliefs stayed largely together and separated from those with different political beliefs) might correspond to more egregious cases of gerrymandering. But to my surprise, I was (mostly) wrong.

Let’s set up and see the model.
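Before the full setup (which follows after the break), here is a minimal sketch of the classic Schelling dynamic to fix ideas. This is my own toy version with arbitrary parameter choices, not the exact model used in the post: each round, every agent with too few like-minded neighbors moves to a random empty cell.

import random

def schelling_round(grid, threshold=0.3):
    """One round of a toy Schelling model.

    grid maps (row, col) -> 'A', 'B', or None (an empty cell). An agent is
    unhappy if fewer than `threshold` of its occupied neighbors share its
    type; each unhappy agent moves to a randomly chosen empty cell.
    """
    empties = [cell for cell, occupant in grid.items() if occupant is None]
    for (r, c), v in list(grid.items()):
        if v is None or not empties:
            continue
        nbrs = [grid.get((r + dr, c + dc))
                for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]
        occupied = [n for n in nbrs if n is not None]
        if occupied and sum(n == v for n in occupied) / len(occupied) < threshold:
            new_home = random.choice(empties)
            empties.remove(new_home)
            empties.append((r, c))
            grid[new_home], grid[(r, c)] = v, None
    return grid

Iterating schelling_round on a random grid segregates it surprisingly quickly, even with the mild threshold of $0.3$.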

(more…)


Advent of Code: Day 4

This is a very short post in my series working through this year’s Advent of Code challenges. Unlike the previous ones, this has no mathematical comments, as it was a very short exercise. This notebook is available in its original format on my github.

Day 4: High Entropy Passphrases

Given a list of strings, determine how many strings have no duplicate words.

This is a classic problem, and it’s particularly easy to solve this in python. Some might use collections.Counter, but I think it’s more straightforward to use sets.

The key idea is that the set of words in a sentence will not include duplicates. So if taking the set of a sentence reduces its length, then there was a duplicate word.

In [1]:
with open("input.txt", "r") as f:
    lines = f.readlines()
    
def count_lines_with_unique_words(lines):
    num_pass = 0
    for line in lines:
        s = line.split()
        if len(s) == len(set(s)):
            num_pass += 1
    return num_pass

count_lines_with_unique_words(lines)
Out[1]:
455

I think this is the first day where I would have had a shot at the leaderboard if I’d been gunning for it.

Part 2

Let’s add in another constraint. Determine how many strings have no duplicate words, even after anagramming. Thus the string

abc bac

is not valid, since the second word is an anagram of the first. There are many ways to tackle this as well, but I will handle anagrams by sorting the letters in each word first, and then running the bit from part 1 to identify repeated words.

In [2]:
with open("input.txt", "r") as f:
    lines = f.readlines()
    
sorted_lines = []
for line in lines:
    sorted_line = ' '.join([''.join(l) for l in map(sorted, line.split())])
    sorted_lines.append(sorted_line)

sorted_lines[:2]
    
Out[2]:
['bddjjow acimrv bcjjm anr flmmos fiosv',
 'bcmnoxy dfinyzz dgmp dfgioy hinrrv eeklpuu adgpw kqv']
In [3]:
count_lines_with_unique_words(sorted_lines)
Out[3]:
186

Advent of Code: Day 3

This is the third notebook in my posts on the Advent of Code challenges. The notebook in its original format can be found on my github.

Day 3: Spiral Memory

Numbers are arranged in a spiral

17  16  15  14  13
18   5   4   3  12
19   6   1   2  11
20   7   8   9  10
21  22  23---> ...

Given an integer n, what is its Manhattan Distance from the center (1) of the spiral? For instance, the distance of 3 is $2 = 1 + 1$, since it’s one space to the right and one space up from the center.

Here’s my idea. The bottom right corner of the $k$th layer is the integer $(2k+1)^2$, since that’s how many integers are contained within that square. The other three corners in that layer are $(2k+1)^2 - 2k$, $(2k+1)^2 - 4k$, and $(2k+1)^2 - 6k$. Finally, the closest spots on the $k$th layer to the origin are at distance $k$: these are the four “axis locations” halfway between the corners, at $(2k+1)^2 - k$, $(2k+1)^2 - 3k$, $(2k+1)^2 - 5k$, and $(2k+1)^2 - 7k$.

For instance when $k = 1$, the bottom right is $(2 + 1)^2 = 9$, and the four “axis locations” are $9 - 1$, $9 - 3$, $9 - 5$, and $9 - 7$. The “axis locations” are $k$ away, and the corners are $2k$ away.

So I will first find which layer the number is on. Then I’ll figure out which side it’s on, and then how far away it is from the nearest “axis location” or “corner”.

My given number happens to be 289326.

In [1]:
import math

def find_lowest_larger_odd_square(n):
    upper = math.ceil(n**.5)
    if upper %2 == 0:
        upper += 1
    return upper
In [2]:
assert find_lowest_larger_odd_square(39) == 7
assert find_lowest_larger_odd_square(26) == 7
assert find_lowest_larger_odd_square(25) == 5
In [3]:
find_lowest_larger_odd_square(289326)
Out[3]:
539
In [4]:
539**2 - 289326
Out[4]:
1195

It happens that our integer is very close to an odd square.
The square is $539^2$, which lies at distance $538$ from the center.

Note that $539 = 2(269) + 1$, so this is the $269$th layer of the spiral.
The previous corner to $539^2$ is $539^2 - 538$, and the previous corner to that is $539^2 - 2\cdot538 = 539^2 - 1076$.
This is the nearest corner.
How far away from our number is this corner?
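The post’s computation continues below; here is my own packaging of the full method as code, using the function defined above.

def spiral_distance(n):
    """Manhattan distance from n to the center of the spiral."""
    if n == 1:
        return 0
    root = find_lowest_larger_odd_square(n)
    k = (root - 1) // 2               # n lies on the kth layer
    offset = root ** 2 - n            # steps back from the bottom-right corner
    side_offset = offset % (2 * k)    # position along a single side
    return k + abs(side_offset - k)   # k, plus distance to the nearest axis location

assert spiral_distance(1) == 0
assert spiral_distance(12) == 3
assert spiral_distance(23) == 2
assert spiral_distance(1024) == 31

Running spiral_distance(289326) then answers the puzzle.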

(more…)


Advent of Code: Day 2

This is the second notebook in my posts on the Advent of Code challenges. This notebook in its original format can be found on my github.

Day 2: Corruption Checksum, part I

You are given a table of integers. Find the difference between the maximum and minimum of each row, and add these differences together.

There is not a lot to say about this challenge. The plan is to read the file linewise, compute the difference on each line, and sum them up.

In [1]:
with open("input.txt", "r") as f:
    lines = f.readlines()
lines[0]
Out[1]:
'5048\t177\t5280\t5058\t4504\t3805\t5735\t220\t4362\t1809\t1521\t230\t772\t1088\t178\t1794\n'
In [2]:
l = lines[0]
l = l.split()
l
Out[2]:
['5048',
 '177',
 '5280',
 '5058',
 '4504',
 '3805',
 '5735',
 '220',
 '4362',
 '1809',
 '1521',
 '230',
 '772',
 '1088',
 '178',
 '1794']
In [3]:
def max_minus_min(line):
    '''Compute the difference between the largest and smallest integer in a line'''
    line = list(map(int, line.split()))
    return max(line) - min(line)

def sum_differences(lines):
    '''Sum the value of `max_minus_min` for each line in `lines`'''
    return sum(max_minus_min(line) for line in lines)
In [4]:
testcase = ['5 1 9 5','7 5 3', '2 4 6 8']
assert sum_differences(testcase) == 18
In [5]:
sum_differences(lines)
Out[5]:
58975

Mathematical Interlude

In line with the first day’s challenge, I’m inclined to ask what we should “expect.” But what we should expect is not well-defined in this case. Let us rephrase the problem in a randomized sense.

Suppose we are given a table, $n$ lines long, where each line consists of $m$ elements, each a uniformly randomly chosen integer from $1$ to $10$. We might ask for the expected value of this operation (summing the differences between the maxima and minima of each row) applied to this table. What should we expect?

As each line is independent of the others, we are really asking what is the expected value across a single row. So given $m$ integers uniformly randomly chosen from $1$ to $10$, what is the expected value of the maximum, and what is the expected value of the minimum?

 

Expected Minimum

Let’s begin with the minimum. The minimum is $1$ unless all the integers are greater than $1$. This has probability
$$ 1 - \left( \frac{9}{10} \right)^m = \frac{10^m - 9^m}{10^m}$$
of occurring. We rewrite it as the version on the right for reasons that will soon be clear.
The minimum is $2$ if all the integers are at least $2$ (which can occur in $9$ different ways for each integer), but not all the integers are at least $3$ (each integer has $8$ different ways of being at least $3$). Thus this has probability
$$ \frac{9^m - 8^m}{10^m}.$$
Continuing, to do one more for posterity: the minimum is $3$ if all the integers are at least $3$ (each integer has $8$ different ways of being at least $3$), but not all integers are at least $4$ (each integer has $7$ different ways of being at least $4$). Thus this has probability

$$ \frac{8^m - 7^m}{10^m}.$$

And so on.

Recall that the expected value of a random variable is

$$ E[X] = \sum x_i P(X = x_i),$$

so the expected value of the minimum is

$$ \frac{1}{10^m} \big( 1(10^m - 9^m) + 2(9^m - 8^m) + 3(8^m - 7^m) + \cdots + 9(2^m - 1^m) + 10(1^m - 0^m)\big).$$

This simplifies nicely to

$$ \sum_{k = 1}^{10} \frac{k^m}{10^m}. $$

Expected Maximum

The same style of thinking shows that the expected value of the maximum is

$$ \frac{1}{10^m} \big( 10(10^m - 9^m) + 9(9^m - 8^m) + 8(8^m - 7^m) + \cdots + 2(2^m - 1^m) + 1(1^m - 0^m)\big).$$

This simplifies to

$$ \frac{1}{10^m} \big( 10 \cdot 10^m - 9^m - 8^m - \cdots - 2^m - 1^m \big) = 10 - \sum_{k = 1}^{9} \frac{k^m}{10^m}.$$

Expected Difference

Subtracting, we find that the expected difference is

$$ 9 - 2\sum_{k=1}^{9} \frac{k^m}{10^m}. $$

From this we can compute the expected difference for each list-length $m$. It is good to note that as $m \to \infty$, the expected value tends to $9$. Does this make sense? Yes: when there are lots of values, we should expect one to be a $10$ and one to be a $1$. It’s also pretty straightforward to see how to extend this to integers chosen from $1$ to $N$.
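As a sanity check on this formula, here is a quick simulation sketch (the names and the trial count are arbitrary choices of mine):

import random

def exact_expected_difference(m, N=10):
    """E[max - min] for m integers uniform on 1..N, from the formula above."""
    return (N - 1) - 2 * sum((k / N) ** m for k in range(1, N))

def simulated_difference(m, N=10, trials=200_000):
    total = 0
    for _ in range(trials):
        row = [random.randint(1, N) for _ in range(m)]
        total += max(row) - min(row)
    return total / trials

# For m = 16 (the row length in our input), the two should agree closely.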

Looking at the data, it does not appear that the integers were randomly chosen. Instead, there are very many relatively small integers and some relatively large integers. So we shouldn’t expect this toy analysis to accurately model this problem — the distribution is definitely not uniform random.
But we can try it out anyway.

(more…)


Advent of Code: Day 1

I thoroughly enjoyed reading through Peter Norvig’s extraordinarily clean and elegant solutions to last year’s Advent of Code challenge. Inspired by his literate programming style and the convenience of jupyter notebook demonstrations, I will look at several of these challenges in my own jupyter notebooks.

My background and intentions aren’t the same as Peter Norvig’s: his expertise dwarfs mine. Moreover, timezones are not kind to those of us in the UK, so I won’t be competing for a position on the leaderboards. These are meant to be fun. And sometimes there are tidbits of math that want to come out of the challenges.

Enough of that. Let’s dive into the first day.

Day 1: Inverse Captcha, Part 1

Given a sequence of digits, find the sum of those digits which match the following digit. The sequence is presumed circular, so the first digit may match the last digit.

This would probably be done the fastest by looping through the sequence.

In [1]:
with open('input.txt', 'r') as f:
    seq = f.read()
seq = seq.strip()
seq[:10]
Out[1]:
'1118313623'
In [2]:
def sum_matched_digits(s):
    "Sum of digits which match following digit, and first digit if it matches last digit"
    total = 0
    for a,b in zip(s, s[1:]+s[0]):
        if a == b:
            total += int(a)
    return total

They provide a few test cases, which we use to test our method.

In [3]:
assert sum_matched_digits('1122') == 3
assert sum_matched_digits('1111') == 4
assert sum_matched_digits('1234') == 0
assert sum_matched_digits('91212129') == 9

For fun, there is also a one-line version.
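(The post’s own one-liner appears after the cut; a natural candidate using the same zip trick would be the following.)

def sum_matched_digits_oneline(s):
    return sum(int(a) for a, b in zip(s, s[1:] + s[0]) if a == b)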

(more…)


A Jupyter Notebook from a SageMath tutorial

I gave an introductory Sage tutorial at the University of Warwick Computational Group seminar today, 2 November 2017. Below is a conversion of the sage/jupyter notebook on which I based the rest of the tutorial. I said many things which are not included in the notebook, and during the seminar we added a few additional examples and gave extra consideration to a few different calls. But for reference, the notebook is here.

The notebook itself (as a jupyter notebook) can be found and viewed on my github (link to jupyter notebook). When written, this notebook used a Sage 8.0.0.rc1 backend kernel and ran fine on the standard Sage 8.0 release, though I expect it to work fine with any recent official version of sage. The last cell requires an active notebook to be seen (or some way to export jupyter widgets to standalone javascript or something; this either doesn’t yet exist, or I am not aware of it).

I will also note that I converted the notebook for display on this website using jupyter’s nbconvert package. I have some CSS and syntax coloring set up that affects the display.

Good luck learning sage, and happy hacking.

Sage

Sage (also known as SageMath) is a general purpose computer algebra system written on top of the python language. In Mathematica, Magma, and Maple, one writes code in the mathematica-language, the magma-language, or the maple-language. Sage is python.

But no python background is necessary for the rest of today’s guided tutorial. The purpose of today’s tutorial is to give an indication about how one really uses sage, and what might be available to you if you want to try it out.

I will spoil the surprise by telling you upfront the two main points I hope you’ll take away from this tutorial.

  1. With tab-completion and documentation, you can do many things in sage without ever having done them before.
  2. The ecosystem of libraries and functionality available in sage is tremendous, and (usually) pretty easy to use.

Lightning Preview

Let’s first get a small feel for sage by seeing some standard operations and what typical use looks like through a series of trivial, mostly unconnected examples.

In [1]:
# Fundamental manipulations work as you hope

2+3
Out[1]:
5

You can also subtract, multiply, divide, exponentiate…

>>> 3-2
1
>>> 2*3
6
>>> 2^3
8
>>> 2**3 # (also exponentiation)
8

There is an order of operations, but these things work pretty much as you want them to work. You might try out several different operations.

Sage includes a lot of functionality, too. For instance,

In [2]:
factor(-1008)
Out[2]:
-1 * 2^4 * 3^2 * 7
In [3]:
list(factor(1008))
Out[3]:
[(2, 4), (3, 2), (7, 1)]

In the background, Sage is actually calling on pari/GP to do this factorization. Sage bundles lots of free and open source math software within it (which is why it’s so large), and provides a common access point. The great thing here is that you can often use sage without needing to know much pari/GP (or other software).

Sage knows many functions and constants, and these are accessible.
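For instance, a few typical calls look like this (outputs quoted from memory rather than copied from the notebook, so treat them as approximate):

>>> pi.n()
3.14159265358979
>>> sin(pi/3)
1/2*sqrt(3)
>>> factorial(10)
3628800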

(more…)
