Given a list of strings, determine how many strings have no duplicate words.

This is a classic problem, and it’s particularly easy to solve in python. Some might use `collections.Counter`, but I think it’s more straightforward to use sets.

The key idea is that the set of words in a sentence will not include duplicates. So if taking the set of a sentence reduces its length, then there was a duplicate word.
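As a tiny illustration of the idea (the sentence and names are made up for this sketch):

```python
words = "the quick the fox".split()
# a repeated word shrinks the set, so unequal lengths signal a duplicate
has_duplicate = len(words) != len(set(words))
print(has_duplicate)  # True, since "the" appears twice
```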

In [1]:

```
with open("input.txt", "r") as f:
    lines = f.readlines()

def count_lines_with_unique_words(lines):
    num_pass = 0
    for line in lines:
        s = line.split()
        if len(s) == len(set(s)):
            num_pass += 1
    return num_pass

count_lines_with_unique_words(lines)
```

Out[1]:

I think this is the first day where I would have had a shot at the leaderboard if I’d been gunning for it.

Let’s add in another constraint. Determine how many strings have no duplicate words, even after anagramming. Thus the string

```
abc bac
```

is not valid, since the second word is an anagram of the first. There are many ways to tackle this as well, but I will handle anagrams by sorting the letters in each word first, and then running the bit from part 1 to identify repeated words.
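The letter-sorting normalization can be seen in isolation; a quick sketch (`letter_sorted` is just an illustrative name, not from the notebook):

```python
def letter_sorted(word):
    # anagrams share the same sorted-letter form, e.g. "bac" -> "abc"
    return ''.join(sorted(word))

print(letter_sorted("bac"))                           # abc
print(letter_sorted("abc") == letter_sorted("bac"))   # True: they are anagrams
```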

In [2]:

```
with open("input.txt", "r") as f:
    lines = f.readlines()

sorted_lines = []
for line in lines:
    sorted_line = ' '.join([''.join(l) for l in map(sorted, line.split())])
    sorted_lines.append(sorted_line)

sorted_lines[:2]
```

Out[2]:

In [3]:

```
count_lines_with_unique_words(sorted_lines)
```

Out[3]:

Numbers are arranged in a spiral

```
17 16 15 14 13
18 5 4 3 12
19 6 1 2 11
20 7 8 9 10
21 22 23---> ...
```

Given an integer n, what is its Manhattan Distance from the center (1) of the spiral? For instance, the distance of 3 is $2 = 1 + 1$, since it’s one space to the right and one space up from the center.

Here’s my idea. The bottom right corner of the $k$th layer is the integer $(2k+1)^2$, since that’s how many integers are contained within that square. The other three corners in that layer are $(2k+1)^2 - 2k$, $(2k+1)^2 - 4k$, and $(2k+1)^2 - 6k$. Finally, the closest spots on the $k$th layer to the origin are at distance $k$: these are the four “axis locations” halfway between the corners, at $(2k+1)^2 - k$, $(2k+1)^2 - 3k$, $(2k+1)^2 - 5k$, and $(2k+1)^2 - 7k$.

For instance, when $k = 1$, the bottom right is $(2 + 1)^2 = 9$, and the four “axis locations” are $9-1$, $9-3$, $9-5$, and $9-7$. The “axis locations” are $k$ away, and the corners are $2k$ away.

So I will first find which layer the number is on. Then I’ll figure out which side it’s on, and then how far away it is from the nearest “axis location” or “corner”.
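That plan can be packaged into a single function. This is a sketch of mine (`spiral_distance` is my name for it; the notebook instead works the steps out piecewise by hand):

```python
import math

def spiral_distance(n):
    """Manhattan distance from n back to the center (1) of the spiral,
    using the layer/corner arithmetic described above."""
    if n == 1:
        return 0
    # smallest odd integer whose square is at least n
    side = math.ceil(n ** .5)
    if side % 2 == 0:
        side += 1
    k = (side - 1) // 2                 # layer index; bottom-right corner is side**2
    offset = (side ** 2 - n) % (2 * k)  # position along one side of the layer
    # the nearest "axis location" sits at distance k from the center
    return k + abs(offset - k)

print(spiral_distance(3))       # 2, matching the example above
print(spiral_distance(289326))  # 419, matching the hand computation that follows
```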

My given number happens to be 289326.

In [1]:

```
import math

def find_lowest_larger_odd_square(n):
    upper = math.ceil(n**.5)
    if upper % 2 == 0:
        upper += 1
    return upper
```

In [2]:

```
assert find_lowest_larger_odd_square(39) == 7
assert find_lowest_larger_odd_square(26) == 7
assert find_lowest_larger_odd_square(25) == 5
```

In [3]:

```
find_lowest_larger_odd_square(289326)
```

Out[3]:

In [4]:

```
539**2 - 289326
```

Out[4]:

It happens to be that our integer is very close to an odd square.

The square is $539^2$, which lies at distance $538$ from the center.

Note that $539 = 2(269) + 1$, so this is the $269$th layer of the spiral.

The previous corner to $539^2$ is $539^2 - 538$, and the previous corner to that is $539^2 - 2\cdot 538 = 539^2 - 1076$.

This is the nearest corner.

How far away from our number is this corner?

In [5]:

```
539**2 - 2*538 - 289326
```

Out[5]:

In [6]:

```
538 - 119
```

Out[6]:

And so we solved the first part quickly with a mixture of function and handiwork.

In part two, the spiral has changed significantly. Build the spiral iteratively. Initially, start with 1. Then in the next square of the spiral, put in the integer that is the sum of the adjacent (including diagonal) numbers in the spiral. This spiral is

```
147 142 133 122 59
304 5 4 2 57
330 10 1 1 54
351 11 23 25 26
362 747 806---> ...
```

What is the first value that’s larger than 289326?

My plan is to construct this spiral. The central 1 will have coordinates (0,0), and the spiral will be stored in a dictionary whose key is the tuple of the location.

To construct the spiral, we note that the direction of adding goes in the pattern RULLDDRRRUUULLLLDDDD. The order is right, up, left, down: the number of times each direction is repeated goes in the sequence 1,1,2,2,3,3,4,4,….
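That run-length pattern can be written as a standalone generator; a sketch (the helper name is mine, not from the notebook):

```python
from itertools import count, cycle, islice

def spiral_directions():
    """Yield unit steps following the pattern R U LL DD RRR UUU ...,
    i.e. run lengths 1,1,2,2,3,3,... cycling right, up, left, down."""
    dirs = cycle([(1, 0), (0, 1), (-1, 0), (0, -1)])  # right, up, left, down
    for run_length in count(1):
        for _ in range(2):                # each run length is used twice
            step = next(dirs)
            for _ in range(run_length):
                yield step

print(list(islice(spiral_directions(), 6)))
# [(1, 0), (0, 1), (-1, 0), (-1, 0), (0, -1), (0, -1)]
```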

In [7]:

```
spiral = {}
spiral[(0,0)] = 1
NEIGHBORS = [(1,0), (1,1), (0,1), (-1,1), (-1,0), (-1,-1), (0,-1), (1,-1)]
DIRECTION = [(1,0), (0,1), (-1,0), (0,-1)]  # Right Up Left Down

def spiral_until_at_least(n):
    spiral = {}  # Spiral dictionary
    spiral[(0,0)] = 1
    x, y = 0, 0
    steps_in_row = 1          # times spiral extends in same direction
    second_direction = False  # spiral extends in same direction twice: False if first leg, True if second
    nstep = 0                 # number of steps in current direction
    step_direction = 0        # index of direction in DIRECTION
    while True:
        dx, dy = DIRECTION[step_direction]
        x, y = x + dx, y + dy
        total = 0
        for neighbor in NEIGHBORS:
            nx, ny = neighbor
            if (x+nx, y+ny) in spiral:
                total += spiral[(x+nx, y+ny)]
        print("X: {}, Y:{}, Total:{}".format(x, y, total))
        if total > n:
            return total
        spiral[(x,y)] = total
        nstep += 1
        if nstep == steps_in_row:
            nstep = 0
            step_direction = (step_direction + 1) % 4
            if second_direction:
                second_direction = False
                steps_in_row += 1
            else:
                second_direction = True
```

In [8]:

```
spiral_until_at_least(55)
```

Out[8]:

In [9]:

```
spiral_until_at_least(289326)
```

Out[9]:

The sequence in part 2 grows really, really quickly. The sequence starts 1, 1, 2, 4, 5, 10, 11, 23, …

Many mathematicians (recreational, amateur, and professional alike) often delight in properties of sequences of integers. And sometimes they put them in Sloane’s **Online Encyclopedia of Integer Sequences**, the OEIS. Miraculously, the sequence from part 2 appears in the OEIS.

It’s OEIS A141481.

But I’ve never seen this sequence before.

I wonder: how quickly does it grow? This is one of the most fundamental questions one can ask about a sequence.

Clearly it grows quickly — the entries are strictly increasing, and after each corner they roughly double (since the adjacent and diagonal are each there and roughly the same size).

But does this capture most of the growth?

In [10]:

```
spiral = {}
spiral[(0,0)] = 1
NEIGHBORS = [(1,0), (1,1), (0,1), (-1,1), (-1,0), (-1,-1), (0,-1), (1,-1)]
DIRECTION = [(1,0), (0,1), (-1,0), (0,-1)]  # Right Up Left Down
CORNERS = [1]

def spiral_until_at_least_print_corners(n):
    spiral = {}  # Spiral dictionary
    spiral[(0,0)] = 1
    x, y = 0, 0
    steps_in_row = 1          # times spiral extends in same direction
    second_direction = False  # spiral extends in same direction twice: False if first leg, True if second
    nstep = 0                 # number of steps in current direction
    step_direction = 0        # index of direction in DIRECTION
    while True:
        dx, dy = DIRECTION[step_direction]
        x, y = x + dx, y + dy
        total = 0
        for neighbor in NEIGHBORS:
            nx, ny = neighbor
            if (x+nx, y+ny) in spiral:
                total += spiral[(x+nx, y+ny)]
        if total > n:
            return total
        spiral[(x,y)] = total
        nstep += 1
        if nstep == steps_in_row:
            print("X: {}, Y:{}, Total:{}".format(x, y, total))
            CORNERS.append(total)
            nstep = 0
            step_direction = (step_direction + 1) % 4
            if second_direction:
                second_direction = False
                steps_in_row += 1
            else:
                second_direction = True
```

In [11]:

```
spiral_until_at_least_print_corners(10**15)
```

Out[11]:

In [12]:

```
CORNERS
```

Out[12]:

In [13]:

```
for a, b in zip(CORNERS, CORNERS[1:]):
    print(b/a)
```

You are given a table of integers. Find the difference between the maximum and minimum of each row, and add these differences together.

There is not a lot to say about this challenge. The plan is to read the file linewise, compute the difference on each line, and sum them up.

In [1]:

```
with open("input.txt", "r") as f:
    lines = f.readlines()

lines[0]
```

Out[1]:

In [2]:

```
l = lines[0]
l = l.split()
l
```

Out[2]:

In [3]:

```
def max_minus_min(line):
    '''Compute the difference between the largest and smallest integer in a line'''
    line = list(map(int, line.split()))
    return max(line) - min(line)

def sum_differences(lines):
    '''Sum the value of `max_minus_min` for each line in `lines`'''
    return sum(max_minus_min(line) for line in lines)
```

In [4]:

```
testcase = ['5 1 9 5','7 5 3', '2 4 6 8']
assert sum_differences(testcase) == 18
```

In [5]:

```
sum_differences(lines)
```

Out[5]:

In line with the first day’s challenge, I’m inclined to ask what we should “expect.” But what we should expect is not well-defined in this case. Let us rephrase the problem in a randomized sense.

Suppose we are given a table, $n$ lines long, where each line consists of $m$ elements, that are each uniformly randomly chosen integers from $1$ to $10$. We might ask what is the expected value of this operation, of summing the differences between the maxima and minima of each row, on this table. What should we expect?

As each line is independent of the others, we are really asking what is the expected value across a single row. So given $m$ integers uniformly randomly chosen from $1$ to $10$, what is the expected value of the maximum, and what is the expected value of the minimum?

Let’s begin with the minimum. The minimum is $1$ unless all the integers are at least $2$. This has probability

$$ 1 - \left( \frac{9}{10} \right)^m = \frac{10^m - 9^m}{10^m}$$

of occurring. We rewrite it as the version on the right for reasons that will soon be clear.

The minimum is $2$ if all the integers are at least $2$ (which can occur in $9$ different ways for each integer), but not all the integers are at least $3$ (each integer has $8$ different ways of being at least $3$). Thus this has probability

$$ \frac{9^m - 8^m}{10^m}.$$

Continuing to do one more for posterity, the minimum is $3$ if all the integers are at least $3$ (each integer has $8$ different ways of being at least $3$), but not all integers are at least $4$ (each integer has $7$ different ways of being at least $4$). Thus this has probability

$$ \frac{8^m - 7^m}{10^m}.$$

And so on.

Recall that the expected value of a random variable is

$$ E[X] = \sum x_i P(X = x_i),$$

so the expected value of the minimum is

$$ \frac{1}{10^m} \big( 1(10^m - 9^m) + 2(9^m - 8^m) + 3(8^m - 7^m) + \cdots + 9(2^m - 1^m) + 10(1^m - 0^m)\big).$$

This simplifies nicely to

$$ \sum_{k = 1}^{10} \frac{k^m}{10^m}. $$
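As a sanity check (my own sketch, not part of the original notebook), we can compare this expected-minimum formula against a direct simulation:

```python
import random

def expected_min_formula(m, N=10):
    # E[min] = sum_{k=1}^{N} (k/N)^m, per the simplification above
    return sum((k / N) ** m for k in range(1, N + 1))

def expected_min_simulated(m, trials=200000, N=10):
    # Monte Carlo estimate of the minimum of m uniform draws from 1..N
    rng = random.Random(1)
    return sum(min(rng.randint(1, N) for _ in range(m))
               for _ in range(trials)) / trials

print(expected_min_formula(4))    # 2.5333...
print(expected_min_simulated(4))  # should land close to the formula
```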

The same style of thinking shows that the expected value of the maximum is

$$ \frac{1}{10^m} \big( 10(10^m - 9^m) + 9(9^m - 8^m) + 8(8^m - 7^m) + \cdots + 2(2^m - 1^m) + 1(1^m - 0^m)\big).$$

This simplifies to

$$ \frac{1}{10^m} \big( 10 \cdot 10^m - 9^m - 8^m - \cdots - 2^m - 1^m \big) = 10 - \sum_{k = 1}^{9} \frac{k^m}{10^m}.$$

Subtracting, we find that the expected difference is

$$ 9 - 2\sum_{k=1}^{9} \frac{k^m}{10^m}. $$

From this we can compute the expectation for each list-length $m$. It is good to note that as $m \to \infty$, the expected value tends to $9$. Does this make sense? Yes, as when there are lots of values we should expect one to be a $10$ and one to be a $1$. It’s also pretty straightforward to see how to extend this to values of integers from $1$ to $N$.
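The generalization to values from $1$ to $N$ can be sketched directly (the function name is mine, not from the notebook):

```python
def expected_difference(m, N=10):
    """Expected max - min of m integers drawn uniformly from 1..N:
    (N - 1) - 2 * sum_{k=1}^{N-1} (k/N)^m, generalizing the N = 10 formula."""
    return (N - 1) - 2 * sum((k / N) ** m for k in range(1, N))

print(expected_difference(1))     # ~0: with one draw, max equals min
print(expected_difference(1000))  # essentially 9, as argued above
```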

Looking at the data, it does not appear that the integers were randomly chosen. Instead, there are very many relatively small integers and some relatively large integers. So we shouldn’t expect this toy analysis to accurately model this problem — the distribution is definitely not uniform random.

But we can try it out anyway.

In [6]:

```
# We see the table is 16 lines long
len(lines)
```

Out[6]:

In [7]:

```
# And a generic line is 16 numbers long
len(lines[0].split())
```

Out[7]:

In [8]:

```
total = 6999
for k in range(7000):
    total = total - 2 * (k/7000)**16
```

In [9]:

```
total
```

Out[9]:

The expected value of the table is $16$ times this.

In [10]:

```
16 * total
```

Out[10]:

In the table, each row has exactly one pair of integers that evenly divides each other. Find the sum of the quotients.

My plan is straightforward. For each line, go through each element and determine whether it is the dividend or divisor in a whole-number ratio with another element. Once we’ve found a pair, we compute the quotient, and add these quotients together.

In [11]:

```
def find_quotient_in_line(line):
    '''
    Finds a pair of integers which divide each other in line.
    Returns the quotient.
    '''
    line = list(map(int, line.split()))
    for i, elem in enumerate(line):
        for num in line[i+1:]:
            if elem % num == 0:
                return elem/num
            if num % elem == 0:
                return num/elem
    raise KeyError('No divisor relationship found in line.')

def sum_quotients(lines):
    '''Sum the value of `find_quotient_in_line` for each line in `lines`'''
    return sum(find_quotient_in_line(line) for line in lines)
```

In [12]:

```
testcase = ['5 9 2 8', '9 4 7 3', '3 8 6 5']
assert find_quotient_in_line(testcase[0]) == 4
assert sum_quotients(testcase) == 9
```

In [13]:

```
sum_quotients(lines)
```

Out[13]:

My background and intentions aren’t the same as Peter Norvig’s: his expertise dwarfs mine. Timezones are also not kind to those of us in the UK, so I won’t be competing for a position on the leaderboards. These are meant to be fun. And sometimes there are tidbits of math that want to come out of the challenges.

Enough of that. Let’s dive into the first day.

In [1]:

```
with open('input.txt', 'r') as f:
    seq = f.read()

seq = seq.strip()
seq[:10]
```

Out[1]:

In [2]:

```
def sum_matched_digits(s):
    "Sum of digits which match the following digit, and the first digit if it matches the last digit"
    total = 0
    for a, b in zip(s, s[1:] + s[0]):
        if a == b:
            total += int(a)
    return total
```

They provide a few test cases, against which we test our method.

In [3]:

```
assert sum_matched_digits('1122') == 3
assert sum_matched_digits('1111') == 4
assert sum_matched_digits('1234') == 0
assert sum_matched_digits('91212129') == 9
```

For fun, here is a one-line version.

In [4]:

```
def sum_matched_digits_oneliner(s):
    return sum(int(a) if a == b else 0 for a, b in zip(s, s[1:] + s[0]))
```

In [5]:

```
assert sum_matched_digits_oneliner('1122') == 3
assert sum_matched_digits_oneliner('1111') == 4
assert sum_matched_digits_oneliner('1235') == 0
assert sum_matched_digits_oneliner('91212129') == 9
```

For more fun, this is a regex version.

In [6]:

```
import regex

def sum_matched_digits_regex(s):
    matches = map(int, regex.findall(r'(\d)\1', s, overlapped=True))
    total = sum(matches)
    if s[0] == s[-1]:
        total += int(s[0])
    return total
```

In [7]:

```
assert sum_matched_digits_regex('1122') == 3
assert sum_matched_digits_regex('1111') == 4
assert sum_matched_digits_regex('1235') == 0
assert sum_matched_digits_regex('91212129') == 9
```

Regardless of which one we use, we find the answer.

In [8]:

```
print(sum_matched_digits(seq))
print(sum_matched_digits_oneliner(seq))
print(sum_matched_digits_regex(seq))
```

I wonder: is there any sort of time difference between these?

In [9]:

```
%timeit sum_matched_digits(seq)
```

In [10]:

```
%timeit sum_matched_digits_oneliner(seq)
```

In [11]:

```
%timeit sum_matched_digits_regex(seq)
```

In [12]:

```
import random

randseq = ''
for i in range(10**7):
    randseq += str(random.randint(0, 9))

randseq[:10]
```

Out[12]:

In [13]:

```
%timeit -n5 sum_matched_digits(randseq)
```

In [14]:

```
%timeit -n5 sum_matched_digits_oneliner(randseq)
```

In [15]:

```
%timeit -n5 sum_matched_digits_regex(randseq)
```

In [16]:

```
sum_matched_digits(randseq)
```

Out[16]:

We can compute what we expect the value to be for a random string of digits $d$. Assuming that each digit is randomly selected, we should expect that it has probability $1/10$ of matching the subsequent digit. Thus the expected contribution from each digit is (its value) $\times \frac{1}{10}$. The digit itself is $0$ with probability $0.1$, and $1$ with probability $0.1$, and so on. This becomes

$$ \sum_{d = 0}^{10 - 1} \frac{d}{10} \times \frac{1}{10} = \frac{10(10-1)}{2 \cdot 10^2} = \frac{9}{20} = 0.45.$$

If there are $n$ (random) digits, then we expect the sum of the digits which match the subsequent digit to be $0.45 n$.

In this case, there are $10^7$ digits, and we should expect the sum to be $0.45 \cdot 10^7 = 4.5 \cdot 10^6$. How close are we?

In [17]:

```
abs(sum_matched_digits(randseq) - 4.5 * 10**6)
```

Out[17]:

That’s really, really close. How does this apply to the Advent of Code Day 1 problem?

In [18]:

```
0.45 * len(seq)
```

Out[18]:

For the second part of the problem, we are tasked with finding the sum of those digits which match the digit halfway around the string. This only makes sense for strings of even length.

It’s easy enough to modify the loop to do this.

In [19]:

```
def sum_matched_digits_with_sep(s, sep):
    "Sum of digits which match the digit sep digits later"
    total = 0
    for a, b in zip(s, s[sep:] + s[:sep]):
        if a == b:
            total += int(a)
    return total
```

In [20]:

```
assert sum_matched_digits_with_sep('1212', 2) == 6
assert sum_matched_digits_with_sep('1221', 2) == 0
assert sum_matched_digits_with_sep('123425', 3) == 4
assert sum_matched_digits_with_sep('123123', 3) == 12
assert sum_matched_digits_with_sep('12131415', 4) == 4
```

In [21]:

```
sum_matched_digits_with_sep(seq, len(seq)//2)
```

Out[21]:

The one-liner can be similarly written. What about the regex?

We want to identify a digit, skip `sep - 1` digits, and then check to see if the subsequent digit matches.

In principle, we need to worry about wrapping around the string. But we notice that not wrapping around misses exactly half of the matches, so we just double the non-wrapped answer. This leads to the following.

In [22]:

```
import regex

def sum_matched_digits_with_sep_regex(s, sep):
    matches = map(int, regex.findall(r'(\d)\d{}\1'.format("{" + str(sep - 1) + "}"), s, overlapped=True))
    total = 2 * sum(matches)
    return total
```

In [23]:

```
assert sum_matched_digits_with_sep_regex('1212', 2) == 6
assert sum_matched_digits_with_sep_regex('1221', 2) == 0
assert sum_matched_digits_with_sep_regex('123425', 3) == 4
assert sum_matched_digits_with_sep_regex('123123', 3) == 12
assert sum_matched_digits_with_sep_regex('12131415', 4) == 4
```
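The halving claim (when `sep` is half the string length, ignoring the wrap-around misses exactly half of the matches, since each matching pair is seen once from each side) can be checked directly; the helper names here are mine:

```python
def sum_matched_with_wrap(s, sep):
    # wrap-around version, as in sum_matched_digits_with_sep above
    return sum(int(a) for a, b in zip(s, s[sep:] + s[:sep]) if a == b)

def sum_matched_no_wrap(s, sep):
    # only the pairs that do not wrap past the end of the string
    return sum(int(s[i]) for i in range(len(s) - sep) if s[i] == s[i + sep])

for s in ['1212', '123123', '12131415']:
    sep = len(s) // 2
    assert sum_matched_with_wrap(s, sep) == 2 * sum_matched_no_wrap(s, sep)
```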

In [24]:

```
sum_matched_digits_with_sep_regex(seq, len(seq)//2)
```

Out[24]:

It is interesting to note that the expected value is the same as in the consecutive digit case. This is because the probability that two randomly chosen digits agree has nothing to do with the location of the digits. One random digit is as good as another.

I will instead note that a similar calculation as above shows that the expected value also depends on the base involved. We arrived at the value $n \times 9/20 = n \times (10-1)/(2 \cdot 10)$ for an $n$-digit number written in base $10$.

For an $n$ digit number written in base $b$, the expected value is

$$ n \cdot \frac{b-1}{2b}.$$

This increases as the base increases, and tends towards $n/2$.
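A quick simulation sketch (names are mine, and not from the original notebook) can check the base-$b$ formula:

```python
import random

def expected_match_sum(n, b):
    # n * (b - 1) / (2b): the formula above for n digits in base b
    return n * (b - 1) / (2 * b)

def simulated_match_sum(n, b, seed=0):
    # sum of digits matching their (wrapping) successor in a random base-b string
    rng = random.Random(seed)
    digits = [rng.randrange(b) for _ in range(n)]
    return sum(digits[i] for i in range(n)
               if digits[i] == digits[(i + 1) % n])

print(expected_match_sum(10**5, 16))   # 46875.0
print(simulated_match_sum(10**5, 16))  # should land nearby
```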

The notebook itself (as a jupyter notebook) can be found and viewed on my github (link to jupyter notebook). When written, this notebook used a Sage 8.0.0.rc1 backend kernel and ran fine on the standard Sage 8.0 release, though I expect it to work with any recent official version of sage. The last cell requires an active notebook to be seen (or some way to export jupyter widgets to standalone javascript or something; this either doesn’t yet exist, or I am not aware of it).

I will also note that I converted the notebook for display on this website using jupyter’s nbconvert package. I have some CSS and syntax coloring set up that affects the display.

Good luck learning sage, and happy hacking.

Sage (also known as SageMath) is a general purpose computer algebra system written on top of the python language. In Mathematica, Magma, and Maple, one writes code in the mathematica-language, the magma-language, or the maple-language. Sage is python.

But no python background is necessary for the rest of today’s guided tutorial. The purpose of today’s tutorial is to give an indication about how one really *uses* sage, and what might be available to you if you want to try it out.

I will spoil the surprise by telling you upfront the two main points I hope you’ll take away from this tutorial.

- With tab-completion and documentation, you can do many things in sage without ever having done them before.
- The ecosystem of libraries and functionality available in sage is tremendous, and (usually) pretty easy to use.

Let’s first get a small feel for sage by seeing some standard operations and what typical use looks like through a series of trivial, mostly unconnected examples.

In [1]:

```
# Fundamental manipulations work as you hope
2+3
```

Out[1]:

You can also subtract, multiply, divide, exponentiate…

```
>>> 3-2
1
>>> 2*3
6
>>> 2^3
8
>>> 2**3 # (also exponentiation)
8
```

There is an order of operations, but these things work pretty much as you want them to work. You might try out several different operations.

Sage includes a lot of functionality, too. For instance,

In [2]:

```
factor(-1008)
```

Out[2]:

In [3]:

```
list(factor(1008))
```

Out[3]:

Sage knows many functions and constants, and these are accessible.

In [4]:

```
sin(pi)
```

Out[4]:

In [5]:

```
exp(2)
```

Out[5]:

Sage tries to internally keep expressions in exact form. To present approximations, use `N()`.

In [6]:

```
N(exp(2))
```

Out[6]:

In [7]:

```
pi
```

Out[7]:

In [8]:

```
N(pi)
```

Out[8]:

You can ask for a number of digits in the approximation by giving a `digits` keyword to `N()`.

In [9]:

```
N(pi, digits=60)
```

Out[9]:

In [10]:

```
sqrt(2)
```

Out[10]:

In [11]:

```
sqrt(2)**2
```

Out[11]:

Of course, there are examples where floating point arithmetic gets in the way.

In sage/python, integers have unlimited digit length. Real precision arithmetic is a bit more complicated, which is why sage tries to keep exact representations internally. We don’t go into tracking digits of precision in sage, but it is usually possible to prescribe levels of precision.

The `range` function in python counts up to a given number, starting at 0.

In [12]:

```
range(16)
```

Out[12]:

In [13]:

```
A = matrix(4,4, range(16))
A
```

Out[13]:

In [14]:

```
B = matrix(4,4, range(-5, 11))
B
```

Out[14]:

In [15]:

```
A*B
```

Out[15]:

To use a function attached to an object such as a matrix, type the object's name, then `.`, and then call the function.

In [16]:

```
A.charpoly()
```

Out[16]:

There are some top-level functions as well.

In [17]:

```
factor(A.charpoly())
```

Out[17]:

Sometimes you start with an object, such as a matrix, and you wonder what you can do with it. Sage has very good tab-completion and introspection in its notebook interface.

Try typing

```
A.
```

and hit `<Tab>`. Sage should display a list of things it thinks it can do to the matrix A.

Note that on CoCalc or external servers, tab completion sometimes has a small delay.

In [ ]:

```
A.
```

Some of these are more meaningful than others, but you have a list of options. If you want to find out what an option does, like `A.eigenvalues()`, then type

```
A.eigenvalues?
```

and hit enter.

In [18]:

```
A.eigenvalues?
```

In [19]:

```
A.eigenvalues()
```

Out[19]:

If you’re really curious about what’s going on, you can type

```
A.eigenvalues??
```

which will also show you the implementation of that functionality. (You usually don’t need this).

In [ ]:

```
A.eigenvalues??
```

In [20]:

```
E = EllipticCurve([1,2,3,4,5])
E
```

Out[20]:

In [ ]:

```
# Tab complete me to see what's available
E.
```

In [21]:

```
E.conductor()
```

Out[21]:

In [22]:

```
E.rank()
```

Out[22]:

Sage knows about complex numbers as well. Use `i` or `I` to mean $\sqrt{-1}$.

In [23]:

```
(1+2*I) * (pi - sqrt(5)*I)
```

Out[23]:

In [24]:

```
c = 1/(sqrt(3)*I + 3/4 + sqrt(29)*2/3)
```

`c` is stored with perfect representations of square roots.

In [25]:

```
c
```

Out[25]:

But we can have sage give numerical estimates of objects by calling `N()` on them.

In [26]:

```
N(c)
```

Out[26]:

In [27]:

```
N(c, 20) # Keep 20 "bits" of information
```

Out[27]:

Use `latex(<object>)` to give a LaTeX representation.

In [28]:

```
latex(c)
```

Out[28]:

In [29]:

```
latex(E)
```

Out[29]:

In [30]:

```
latex(A)
```

Out[30]:

You can have sage print the LaTeX version in the notebook by using `pretty_print`.

In [31]:

```
pretty_print(A)
```

In [32]:

```
H = DihedralGroup(6)
H.list()
```

Out[32]:

In [33]:

```
a = H[1]
a
```

Out[33]:

In [34]:

```
a.order()
```

Out[34]:

In [35]:

```
b = H[2]
b
```

Out[35]:

In [36]:

```
a*b
```

Out[36]:

In [37]:

```
for elem in H:
    if elem.order() == 2:
        print elem
```

In [38]:

```
# Or, in the "pythonic" way
elements_of_order_2 = [elem for elem in H if elem.order() == 2]
elements_of_order_2
```

Out[38]:

In [39]:

```
rand_elem = H.random_element()
rand_elem
```

Out[39]:

In [40]:

```
G_sub = H.subgroup([rand_elem])
G_sub
```

Out[40]:

In [41]:

```
# Explicitly using elements of a group
H("(1,2,3,4,5,6)") * H("(1,5)(2,4)")
```

Out[41]:

The real purpose of these exercises is to show you that it’s often possible to use tab-completion to quickly find out what is and isn’t possible to do within sage.

- What things does sage know about this subgroup? Can you find the cardinality of the subgroup? (Note that the subgroup is generated by a random element, and your subgroup might be different than your neighbor’s). Can you list all subgroups of the dihedral group in sage?
- Sage knows other groups as well. Create a symmetric group on 5 elements. What does sage know about that? Can you verify that S5 isn’t simple? Create some cyclic groups?

It’s pretty easy to work over different fields in sage as well. I show a few examples of this

In [42]:

```
# It may be necessary to use `reset('x')` if x has otherwise been defined
K.<alpha> = NumberField(x**3 - 5)
```

In [43]:

```
K
```

Out[43]:

In [44]:

```
alpha
```

Out[44]:

In [45]:

```
alpha**3
```

Out[45]:

In [46]:

```
(alpha+1)**3
```

Out[46]:

In [47]:

```
GF?
```

In [48]:

```
F7 = GF(7)
```

In [49]:

```
a = 2/5
a
```

Out[49]:

In [50]:

```
F7(a)
```

Out[50]:

In [51]:

```
var('x')
```

Out[51]:

In [52]:

```
eqn = x**3 + sqrt(2)*x + 5 == 0
a = solve(eqn, x)[0].rhs()
```

In [53]:

```
a
```

Out[53]:

In [54]:

```
latex(a)
```

Out[54]:

In [55]:

```
pretty_print(a)
```

In [56]:

```
# Also RR, CC
QQ
```

Out[56]:

In [57]:

```
K.<b> = QQ[a]
```

In [58]:

```
K
```

Out[58]:

In [59]:

```
a.minpoly()
```

Out[59]:

In [60]:

```
K.class_number()
```

Out[60]:

Sage tries to keep the same syntax even across different applications. Above, we factored a few integers. But we may also try to factor over a number field. You can factor 2 over the Gaussian integers by:

- Create the Gaussian integers. The constructor `CC[I]` works.
- Get the Gaussian integer 2 (which is programmatically different than the typical integer 2), by something like `CC[I](2)`.
- `factor` that 2.

In [61]:

```
# Let's declare that we want x and y to mean symbolic variables
x = 1
y = 2
print(x+y)
reset('x')
reset('y')
var('x')
var('y')
print(x+y)
```

In [62]:

```
solve(x^2 + 3*x + 2, x)
```

Out[62]:

In [63]:

```
solve(x^2 + y*x + 2 == 0, x)
```

Out[63]:

In [64]:

```
# Nonlinear systems with complicated solutions can be solved as well
var('p,q')
eq1 = p+1==9
eq2 = q*y+p*x==-6
eq3 = q*y**2+p*x**2==24
s = solve([eq1, eq2, eq3, y==1], p,q,x,y)
s
```

Out[64]:

In [65]:

```
s[0]
```

Out[65]:

In [66]:

```
latex(s[0])
```

Out[66]:

$$\left[p = 8, q = \left(-26\right), x = \left(\frac{5}{2}\right), y = 1\right]$$

In [67]:

```
# We can also do some symbolic calculus
f = x**2 + 2*x + 1
f
```

Out[67]:

In [68]:

```
diff(f, x)
```

Out[68]:

In [69]:

```
integral(f, x)
```

Out[69]:

In [70]:

```
F = integral(f, x)
F(x=1)
```

Out[70]:

In [71]:

```
diff(sin(x**3), x)
```

Out[71]:

In [72]:

```
# Compute the 4th derivative
diff(sin(x**3), x, 4)
```

Out[72]:

In [73]:

```
# We can try to foil sage by giving it a hard integral
integral(sin(x)/x, x)
```

Out[73]:

In [74]:

```
f = sin(x**2)
f
```

Out[74]:

In [75]:

```
# And sage can give Taylor expansions
f.taylor(x, 0, 20)
```

Out[75]:

In [76]:

```
f(x,y)=y^2+1-x^3-x
contour_plot(f, (x,-pi,pi), (y,-pi,pi))
```

Out[76]:

In [77]:

```
contour_plot(f, (x,-pi,pi), (y,-pi,pi), colorbar=True, labels=True)
```

Out[77]:

In [78]:

```
# Implicit plots
f(x,y) = -x**3 + y**2 - y + x + 1
implicit_plot(f(x,y)==0,(x,0,2*pi),(y,-pi,pi))
```

Out[78]:

- Experiment with the above examples by trying out different functions and plots.
- Sage can do partial fractions for you as well. To do this, you first define the function you want to split up. Suppose you call it `f`. Then you use `f.partial_fraction(x)`. Try this out.
- Sage can also create 3d plots. Create one. Start by looking at the documentation for `plot3d`.

Of the various math software, sage+python provides my preferred plotting environment. I have used sage to create plots for notes, lectures, classes, experimentation, and publications. You can quickly create good-looking plots. For example, I used sage/python extensively in creating this note for my students on Taylor Series (a classic “hard topic” that students have lots of questions about, at least in the US universities I’m familiar with; to this day, about 1/6 of the traffic to my website is to see that page).

As a non-trivial example, I present the following interactive plot.

In [79]:

```
@interact
def g(f=sin(x), c=0, n=(1..30),
      xinterval=range_slider(-10, 10, 1, default=(-8,8), label="x-interval"),
      yinterval=range_slider(-50, 50, 1, default=(-3,3), label="y-interval")):
    x0 = c
    degree = n
    xmin, xmax = xinterval
    ymin, ymax = yinterval
    p = plot(f, xmin, xmax, thickness=4)
    dot = point((x0, f(x=x0)), pointsize=80, rgbcolor=(1,0,0))
    ft = f.taylor(x, x0, degree)
    pt = plot(ft, xmin, xmax, color='red', thickness=2, fill=f)
    show(dot + p + pt, ymin=ymin, ymax=ymax, xmin=xmin, xmax=xmax)
    html('$f(x)\;=\;%s$' % latex(f))
    html('$P_{%s}(x)\;=\;%s+R_{%s}(x)$' % (degree, latex(ft), degree))
```

There are a variety of tutorials and resources for learning more about sage. I list several here.

- Sage provides some tutorials of its own. These include its Guided Tour and the Standard Sage Tutorial. The Standard Sage Tutorial is designed to take 2-4 hours to work through, and afterwards you should have a pretty good sense of the sage environment.
- PREP Tutorials are a set of tutorials created in a program sponsored by the Mathematics Association of America, aimed at working with university students with sage. These tutorials are designed for people both new to sage and to programming.

See also the main sage website.

For questions about specific things in sage, you can ask about these on StackOverflow or AskSage. You might also consider the sage-support or sage-edu mailing lists.

It isn’t necessary to know python to use sage, but a heavy sage user will benefit significantly from learning some python. Conversely, sage is very easy to use if you know python.

The purpose of this note is to describe the large effects of having no internet at my home for the last four weeks. I’m at my home about half the time, leading to the title.

I have become accustomed to having the internet at all times. I now see that many of my habits involved the internet. In the mornings and evenings, I would check HackerNews, longform, and reddit for interesting reads. Invariably there are more interesting-seeming things than I could read, and my *Checkout* bookmarks list is a growing, hundreds-of-items-long list of maybe-interesting stuff. In the in-between times throughout the day, I would check out a few of these bookmarks.

All in all, I would spend an enormous amount of time reading random interesting tidbits, even though much of this time was spread out in the “in-betweens” in my day.

When I didn’t have internet at my home, I had to fill all those “in-between” moments, as well as my waking and sleeping moments, with something else. Faced with the necessity of doing something, I filled most of these moments with reading books. Made out of paper. (The same sort of books whose sales are rising compared to ebooks, contrary to most predictions a few years ago).

I’d forgotten how much I enjoyed reading a book in large chunks, in very few sittings. I usually have an ebook on my phone that I read during commutes, and perhaps most of my idle reading over the last several years has been in 20 page increments. The key phrase here is “idle reading”. I now set aside time to “actively read”, in perhaps 100 page increments. Reading enables a “flow state” very similar to the sensation I get when mathing continuously, or programming continuously, for a long period of time. I not only read more, but I enjoy what I’m reading more.

As a youth, I would read all the time. Fun fact: at one time, I’d read almost every book in the Star Wars expanded universe. There were over a hundred, and they were all canon (before Disney paved over the universe to make room). I learned to love reading by reading science fiction, and the first novel I remember reading was a copy of Andre Norton’s “The Beastmaster” (… which is great. A part telepath part Navajo soldier moves to another planet. Then it’s a space western. What’s not to love?).

My primary source of books is the library at the University of Warwick. Whether through differences in continental taste or simply a case of different focus, the University Library doesn’t have many books in its fiction collection that I’ve been intending to read. I realize now that most of the nonfiction I read originates on the internet, while much of the fiction I read comes from books. Now, encouraged by a lack of alternatives, I picked up many more and varied nonfiction books than I would otherwise have.

As an unexpected side effect, I found that I would also carefully download some of the articles I identified as “interesting” a bit before I headed home from the office. Without internet, I read far more of my *checkout* bookmarks than I did with internet. Weird. Correspondingly, I found that I would spend a bit more time cutting down the false-positive rate — I used to bookmark almost anything that I thought might be interesting, but which I wasn’t going to read right then. Now I culled the wheat from the chaff, as harvesting wheat takes time. (Perhaps this is something I should do more often. I recognize that there are services or newsletters that promise to identify great materials, but somehow none of them have worked better for my tastes than hackernews or longform. But these both have questionable signal-to-noise ratios.)

The result is that I’ve goofed off reading probably about the same amount of time, but in fewer topics and at greater depth in each. It’s easy to jump from 10 page article to 10 page article online; when the medium is books, things come in larger chunks.

I *feel* more productive reading a book, even though I don’t actually attribute much to the difference. There may be something to the act of reading contiguously and continuously for long periods of time, though. This correlated with an overall increase in my “chunking” of tasks across continuous blocks of time, instead of loosely multitasking. I think this is for the better.

I now have internet at my flat. Some habits will slide back, but there are other new habits that I will keep. I’ll keep my bedroom computer-free. In the evening, this means I read books before I sleep. In the morning, this means I must leave and go to the other room before I waste any time on online whatevers. Both of these are good. And I’ll try to continue to chunk time.

To end, I’ll note what I read in the last month, along with a few notes about each.

From best to worst.

- The best fiction I read was *The Three-Body Problem*, by Cixin Liu. I’d heard lots about this book. It’s Chinese scifi, and much of the story takes place against the backdrop of the Chinese cultural revolution… which I know embarrassingly little about. The moral and philosophical underpinnings of this book are interesting and atypical (to me). At its core are various groups of people who have lost faith in aspects of science, or humanity, or both. I was unprepared for the many (hundreds?) of pages of philosophizing in the book, but I understood why it was there. This aspect reminded me of the last half of Anathem by Stephenson (perhaps the best book I’ve read in the last few years), which also had many (also hundreds?) of pages of philosophizing. I love this book, I recommend it. And I note that I read it in four sittings. There are two more books completing a trilogy, and I will read them once I can get my hands on them. [No library within 50 miles of me has them. I did buy the first one, though. Perhaps I’ll buy the other two.]
- The second best was *The Lathe of Heaven* by Ursula Le Guin. This is some classic fantasy, and is pretty mindbending. I think the feel of many books of Ursula Le Guin is very similar — there are many interesting ideas throughout the book, but the book deliberately loses coherence as the flow and fury of the plot reaches a climax. I like *The Lathe of Heaven* more than *The Wizard of Earthsea* and about the same as *The Left Hand of Darkness*, also by Le Guin. I read this book in three sittings.
- I read three of the Witcher books, by Andrzej Sapkowski. Namely, *The Sword of Destiny*, *Blood of Elves*, and *Time of Contempt*. These are fun, not particularly deep reads. There is a taste of moral ambiguity that I like, as it’s different from what I normally find. On the other hand, Sapkowski often uses humor or ambiguity in place of a meaningful, coherent plot. *The Sword of Destiny* is a collection of short tales, and I think his short tales are better than his novels — entirely because one doesn’t need or expect coherence from short stories.

I’m currently reading *The Confusion* by Neal Stephenson, book two of the Baroque trilogy. Right now, I am exactly 1 page in.

I rank these from those I most enjoyed to those I least enjoyed.

- *How Equal Temperament Ruined Harmony*, by Duffin. This was told to me as an introduction to music theory [in fact, I noted this from a comment thread on hackernews somewhere], but really it is a treatise on the history of tuning and temperaments. It turns out that modern equal temperament suffers from many flaws that aren’t commonly taught. When I got back to the office after reading this book, I spent a good amount of time on youtube listening to songs in mean tone tuning and just intonation. There is a difference! I read this book in 2 sittings — it’s short, pretty simple, and generally nice. However there are several long passages that are simply better to skip. Nonetheless I learned a lot.
- *A Random Walk down Wall Street*, by Burton Malkiel. I didn’t know too much about investing before reading this book. I wouldn’t actually say that I know too much after reading it either, but the book is about investing. I was warned that reading this book would make me think that the only way to really invest is to purchase index funds. And indeed, that is the overwhelming (and explicit) takeaway from the book. But I found the book surprisingly readable, and read it very quickly. I find that some of the analysis is biased towards long-term investing, even as a basis of comparison.
- *Guesstimation*, by Weinstein. Ok, perhaps it is not fair to say that one “reads” this book. It consists of many Fermi-style questions (how many golf balls does it take to fill up a football stadium type questions), followed by their analysis. So I read a question and then sit down and do my own analysis. And then I compare it against Weinstein’s. I was stunned at how often the analyses were tremendously similar and got essentially the same order of magnitude at the end. [But not always, that’s for sure. There are also lots of things that I estimate very, very poorly]. There’s a small subgenre of “popular mathematics for the reader who is willing to take out a pencil and paper” (which can’t have a big readership, but which I thoroughly enjoy), and this is a good book within that subgenre. I’m currently working through its sequel.
- *Nature’s Numbers*, by Ian Stewart. This is a pop math book. Ian Stewart is an emeritus professor at my university, so it seemed appropriate to read something of his. This is a surprisingly fast read (I read it in a single sitting). Stewart is known for writing approachable popular math accounts, and this fits.
- *The Structure of Scientific Revolutions*, by Thomas Kuhn. This is metascience. I read the first half of this book/essay very quickly, and I struggled through its second half. This came highly recommended to me, but I found the signal to noise ratio to be pretty low. It might be that I wasn’t very willing to navigate the careful treading around equivocation throughout. However, I think many of the ideas are good. I don’t know if someone has written a 30 page summary, but I think this may be possible — and a good alternative to the book/essay itself.

I’m now reading *Grit*, by Angela Duckworth. Another side effect of reading more is that I find myself reading one fiction, one non-fiction, and one “simple” book at the same time.

Written while on a bus without internet to Heathrow, minus the pictures (which were added at Heathrow).

The primary purpose of this note is to collect a few hitherto unnoticed or unpublished results concerning gaps between powers of consecutive primes. The study of gaps between primes has attracted many mathematicians and led to many deep realizations in number theory. The literature is full of conjectures, both open and closed, concerning the nature of primes.

In a series of stunning developments, Zhang, Maynard, and Tao^{1}^{2} made the first major progress towards proving the prime $k$-tuple conjecture, and successfully proved the existence of infinitely many pairs of primes differing by a fixed number. As of now, the best known result is due to the massive collaborative Polymath8 project,^{3} which showed that there are infinitely many pairs of primes of the form $p, p+246$. In the excellent expository article, ^{4} Granville describes the history and ideas leading to this breakthrough, and also discusses some of the potential impact of the results. This note should be thought of as a few more results following from the ideas of Zhang, Maynard, Tao, and the Polymath8 project.

Throughout, $p_n$ will refer to the $n$th prime number. In a paper, ^{5} Andrica conjectured that

\begin{equation}\label{eq:Andrica_conj}

\sqrt{p_{n+1}} - \sqrt{p_n} < 1

\end{equation}

holds for all $n$. This conjecture, and related statements, is described in Guy’s Unsolved Problems in Number Theory.

^{6} It is quickly checked in sagemath that this holds for all primes up to $4.36 \cdot 10^{8}$:

```
# Sage version 8.0.rc1
# started with `sage -ipython`
# sage has pari/GP, which can generate primes super quickly
from sage.all import primes_first_n
# import izip since we'll be zipping a huge list, and sage uses python2 which has
# non-iterable zip by default
from itertools import izip

# The magic number 23150000 appears because pari/GP can't compute
# primes above 436273290 due to fixed precision arithmetic
ps = primes_first_n(23150000)  # This is every prime up to 436006979

# Verify Andrica's Conjecture for all prime pairs up to 436006979
gap = 0
for a, b in izip(ps[:-1], ps[1:]):
    if b**.5 - a**.5 > gap:
        A, B, gap = a, b, b**.5 - a**.5
        print(gap)
print("")
print(A)
print(B)
```

In approximately 20 seconds on my machine (so it would not be hard to go much higher, except that I would have to go beyond pari/GP to generate primes), this completes and prints the following output.

```
0.317837245196
0.504017169931
0.670873479291

7
11
```

Thus the largest value of $\sqrt{p_{n+1}} - \sqrt{p_n}$ was merely $0.670\ldots$, and occurred on the gap between $7$ and $11$.

So it appears very likely that the conjecture is true. However it is also likely that new, novel ideas are necessary before the conjecture is decided.
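The same check is easy to reproduce outside of sage in current Python 3. Here is a minimal sketch using a basic sieve (checking a much smaller range, the primes below $10^6$, but finding the same maximum):

```python
import math

def primes_up_to(n):
    # simple sieve of Eratosthenes
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, math.isqrt(n) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i in range(n + 1) if sieve[i]]

# maximize sqrt(p_{n+1}) - sqrt(p_n) over consecutive primes below 10^6
ps = primes_up_to(10**6)
A, B, gap = max(
    ((a, b, math.sqrt(b) - math.sqrt(a)) for a, b in zip(ps, ps[1:])),
    key=lambda t: t[2],
)
# as in the sage run above, the maximum occurs at the gap between 7 and 11
```

This reproduces the extremal pair $(7, 11)$ with $\sqrt{11} - \sqrt{7} = 0.6708\ldots$, matching the sage output above.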

Andrica’s Conjecture can also be stated in terms of prime gaps. Let $g_n = p_{n+1} - p_n$ be the gap between the $n$th prime and the $(n+1)$st prime. Then Andrica’s Conjecture is equivalent to the claim that $g_n < 2 \sqrt{p_n} + 1$. In this direction, the best known result is due to Baker, Harman, and Pintz, ^{7} who show that $g_n \ll p_n^{0.525}$.

In 1985, Sandor ^{8} proved that \begin{equation}\label{eq:Sandor} \liminf_{n \to \infty} \sqrt[4]{p_n} (\sqrt{p_{n+1}} - \sqrt{p_n}) = 0. \end{equation} The close relation to Andrica’s Conjecture \eqref{eq:Andrica_conj} is clear. The first result of this note is to strengthen this result.

Theorem Let $\alpha, \beta \geq 0$, and $\alpha + \beta < 1$. Then

\begin{equation}\label{eq:main}

\liminf_{n \to \infty} p_n^\beta (p_{n+1}^\alpha - p_n^\alpha) = 0.

\end{equation}

We prove this theorem below. Choosing $\alpha = \frac{1}{2}, \beta = \frac{1}{4}$ verifies Sandor’s result \eqref{eq:Sandor}. But choosing $\alpha = \frac{1}{2}, \beta = \frac{1}{2} - \epsilon$ for a small $\epsilon > 0$ gives stronger results.

This theorem leads naturally to the following conjecture.

Conjecture For any $0 \leq \alpha < 1$, there exists a constant $C(\alpha)$ such that

\begin{equation}

p_{n+1}^\alpha - p_{n}^\alpha \leq C(\alpha)

\end{equation}

for all $n$.

A simple heuristic argument, given in the last section below, shows that this Conjecture follows from Cramer’s Conjecture.

It is interesting to note that there are generalizations of Andrica’s Conjecture. One can ask what the smallest $\gamma$ is such that

\begin{equation}

p_{n+1}^{\gamma} - p_n^{\gamma} = 1

\end{equation}

has a solution. This is known as the Smarandache Conjecture, and it is believed that the smallest such $\gamma$ is approximately

\begin{equation}

\gamma \approx 0.5671481302539\ldots

\end{equation}

The digits of this constant, sometimes called “the Smarandache constant,” are the contents of sequence A038458 on the OEIS. It is possible to generalize this question as well.
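As a quick numerical sanity check on the quoted value, one can solve for the exponent directly. The pair $(113, 127)$ is the prime pair believed to realize the Smarandache constant (as recorded alongside OEIS A038458); since $q^a - p^a$ is increasing in $a$ for $q > p > 1$, a simple bisection works:

```python
def alpha_for_pair(p, q, C=1.0, tol=1e-12):
    # bisect for the exponent a solving q**a - p**a == C;
    # q**a - p**a is increasing in a for q > p > 1, so bisection applies
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if q**mid - p**mid < C:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# (113, 127) is the prime pair believed to give the Smarandache constant
gamma = alpha_for_pair(113, 127)
```

This recovers $\gamma \approx 0.56714813\ldots$, in agreement with the constant above. The same `alpha_for_pair` sketch applies to any fixed $C$ in the Open Question below.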

Open Question For any fixed constant $C$, what is the smallest $\alpha = \alpha(C)$ such that

\begin{equation}

p_{n+1}^\alpha - p_n^\alpha = C

\end{equation}

has solutions? In particular, how does $\alpha(C)$ behave as a function of $C$?

This question does not seem to have been approached in any sort of generality, aside from the case when $C = 1$.

The idea of the proof is very straightforward. We estimate \eqref{eq:main} across prime pairs $p, p+246$, relying on the recent proof from Polymath8 that infinitely many such primes exist.

Fix $\alpha, \beta \geq 0$ with $\alpha + \beta < 1$. Applying the mean value theorem of calculus on the function $x \mapsto x^\alpha$ shows that

\begin{align}

p^\beta \big( (p+246)^\alpha - p^\alpha \big) &= p^\beta \cdot 246 \alpha q^{\alpha - 1} \\

&\leq p^\beta \cdot 246 \alpha p^{\alpha - 1} = 246 \alpha p^{\alpha + \beta - 1}, \label{eq:bound}

\end{align}

for some $q \in [p, p+246]$. Passing to the inequality in the second line is done by realizing that $q^{\alpha - 1}$ is a decreasing function in $q$. As $\alpha + \beta - 1 < 0$, we see that \eqref{eq:bound} goes to zero as $p \to \infty$.

Therefore

\begin{equation}

\liminf_{n \to \infty} p_n^\beta (p_{n+1}^\alpha - p_n^\alpha) = 0,

\end{equation}

as was to be proved.
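The bound in the proof is easy to watch numerically. A sketch with the (arbitrary) choice $\alpha = 0.5$, $\beta = 0.45$, so that $\alpha + \beta < 1$:

```python
def bound_term(p, alpha=0.5, beta=0.45):
    # p^beta * ((p + 246)^alpha - p^alpha); the mean value theorem bounds
    # this by 246 * alpha * p^(alpha + beta - 1), which -> 0 when alpha + beta < 1
    return p**beta * ((p + 246) ** alpha - p**alpha)

# the terms shrink (slowly, roughly like p^(-0.05) here) as p grows
values = [bound_term(10**k) for k in range(4, 13, 2)]
```

With these parameters the decay is slow but monotone, which is all the liminf argument needs, given infinitely many prime pairs $p, p+246$.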

Cramer’s Conjecture states that there exists a constant $C$ such that for all sufficiently large $n$,

\begin{equation}

p_{n+1} - p_n < C(\log n)^2.

\end{equation}

Thus for a sufficiently large prime $p$, the subsequent prime is at most $p + C (\log p)^2$. Performing a similar estimation as above shows that

\begin{equation}

(p + C (\log p)^2)^\alpha - p^\alpha \leq C (\log p)^2 \alpha p^{\alpha - 1} =

C \alpha \frac{(\log p)^2}{p^{1 - \alpha}}.

\end{equation}

As the right hand side vanishes as $p \to \infty$, we see that it is natural to expect that the main Conjecture above is true. More generally, we should expect the following, stronger conjecture.

Conjecture’ For any $\alpha, \beta \geq 0$ with $\alpha + \beta < 1$, there exists a constant $C(\alpha, \beta)$ such that

\begin{equation}

p_n^\beta (p_{n+1}^\alpha - p_n^\alpha) \leq C(\alpha, \beta).

\end{equation}

I wrote this note in between waiting in never-ending queues while sorting out my internet service and other mundane activities necessary upon moving to another country. I had just read some papers on the arXiv, and I noticed a paper which referred to unknown statuses concerning Andrica’s Conjecture. So then I sat down and wrote this up.

I am somewhat interested in qualitative information concerning the Open Question in the introduction, and I may return to this subject unless someone beats me to it.

This note is (mostly, minus the code) available as a pdf and will shortly appear on the arXiv. This was originally written in LaTeX and converted for display on this site using a set of tools I’ve written based around latex2jax, which is available on my github.

The lmfdb and sagemath are both great things, but they don’t currently talk to each other. Much of the lmfdb calls sage, but the lmfdb also includes vast amounts of data on $L$-functions and modular forms (hence the name) that is not accessible from within sage.

This is an example prototype of an interface to the lmfdb from sage. Keep in mind that this is **a prototype** and every aspect can change. But we hope to show what may be possible in the future. If you have requests, comments, or questions, **please request/comment/ask** either now, or at my email: `david@lowryduda.com`.

Note that this notebook is available on http://davidlowryduda.com or https://gist.github.com/davidlowryduda/deb1f88cc60b6e1243df8dd8f4601cde, and the code is available at https://github.com/davidlowryduda/sage2lmfdb

Let’s dive into an example.

In [1]:

```
# These names will change
from sage.all import *
import LMFDB2sage.elliptic_curves as lmfdb_ecurve
```

In [2]:

```
lmfdb_ecurve.search(rank=1)
```

Out[2]:

This returns 10 elliptic curves of rank 1. But these are a bit different than sage’s elliptic curves.

In [3]:

```
Es = lmfdb_ecurve.search(rank=1)
E = Es[0]
print(type(E))
```

Note that the class of an elliptic curve is an lmfdb EllipticCurve. But don’t worry, this is a subclass of a normal elliptic curve. So we can call the normal things one might call on an elliptic curve.

In [4]:

```
# Try autocompleting the following. It has all the things!
print(dir(E))
```

This gives quick access to some data that is not stored within the LMFDB, but which is relatively quickly computable. For example,

In [5]:

```
E.defining_ideal()
```

Out[5]:

But one of the great powers is that there are some things which are computed and stored in the LMFDB, and not in sage. We can now immediately give many examples of curves with specified invariants:

In [6]:

```
Es = lmfdb_ecurve.search(conductor=11050, torsion_order=2)
print("There are {} curves returned.".format(len(Es)))
E = Es[0]
print(E)
```

And for these curves, the lmfdb contains data on their ranks, generators, regulators, and so on.

In [7]:

```
print(E.gens())
print(E.rank())
print(E.regulator())
```

In [8]:

```
res = []
%time for E in Es: res.append(E.gens()); res.append(E.rank()); res.append(E.regulator())
```

That’s pretty fast, and this is because all of this was pulled from the LMFDB when the curves were returned by the `search()` function.

In this case, elliptic curves over the rationals are only an okay example, as they’re really well studied and sage can compute much of the data very quickly. On the other hand, through the LMFDB there are millions of examples and corresponding data at one’s fingertips.

### This is where we’re really looking for input.

Think of what you might want to have easy access to through an interface from sage to the LMFDB, and tell us. We’re actively seeking comments, suggestions, and requests. Elliptic curves over the rationals are a prototype, and the LMFDB has lots of (much more challenging to compute) data. There is data on the LMFDB that is simply not accessible from within sage.

**email: david@lowryduda.com, or post an issue on https://github.com/LMFDB/lmfdb/issues**

## Now let’s describe what’s going on under the hood a little bit

There is an API for the LMFDB at http://beta.lmfdb.org/api/. This API is a bit green, and we will change certain aspects of it to behave better in the future. A call to the API looks like

```
http://beta.lmfdb.org/api/elliptic_curves/curves/?rank=i1&conductor=i11050
```

The result is a large mess of data, which can be exported as json and parsed.
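For illustration, the query-string convention visible in the example call above (integer values carry an `i` prefix) can be generated mechanically. This is a hypothetical helper sketched for exposition, not part of the actual module:

```python
def lmfdb_api_url(table, **kwargs):
    # hypothetical helper (not part of the module): build an API query string;
    # integer values get the 'i' prefix used by the LMFDB API, as in the
    # example call above
    parts = []
    for key in sorted(kwargs):
        val = kwargs[key]
        if isinstance(val, int):
            val = "i{}".format(val)
        parts.append("{}={}".format(key, val))
    return "http://beta.lmfdb.org/api/{}/?{}".format(table, "&".join(parts))

url = lmfdb_api_url("elliptic_curves/curves", rank=1, conductor=11050)
# builds a query of the same shape as the example above
```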

But that’s hard, and the resulting data are not sage objects. They are just strings or ints, and these require time *and thought* to parse.

So we created a module in sage that writes the API call and parses the output back into sage objects. The 22 curves given by the above API call are the same 22 curves returned by this call:

In [9]:

```
Es = lmfdb_ecurve.search(rank=1, conductor=11050, max_items=25)
print(len(Es))
E = Es[0]
```

The total functionality of this search function is visible from its current documentation.

In [10]:

```
# Execute this cell for the documentation
print(lmfdb_ecurve.search.__doc__)
```

In [11]:

```
# So, for instance, one could perform the following search, finding a unique elliptic curve
lmfdb_ecurve.search(rank=2, torsion_order=3, degree=4608)
```

Out[11]:

If there are no curves satisfying the search criteria, then a message is displayed and that’s that. These searches may take a couple of seconds to complete.

For example, no elliptic curve in the database has rank 5.

In [12]:

```
lmfdb_ecurve.search(rank=5)
```

Right now, at most 100 curves are returned in a single API call. This is the limit even from directly querying the API. But one can pass in the argument `base_item` (the name will probably change… to `skip`? or perhaps to `offset`?) to start returning at the `base_item`th element.

In [13]:

```
from pprint import pprint
pprint(lmfdb_ecurve.search(rank=1, max_items=3)) # The last item in this list
print('')
pprint(lmfdb_ecurve.search(rank=1, max_items=3, base_item=2)) # should be the first item in this list
```

Included in the documentation is also a bit of hopefulness. Right now, the LMFDB API does not actually accept `max_conductor` or `min_conductor` (or arguments of that type). But it will sometime. (This introduces a few extra difficulties on the server side, and so it will take some extra time to decide how to do this).
In [14]:

```
lmfdb_ecurve.search(rank=1, min_conductor=500, max_conductor=10000) # Not implemented
```

Our `EllipticCurve_rational_field_lmfdb` class constructs a sage elliptic curve from the json and overrides (some of) the default methods in sage if there is quicker data available on the LMFDB. In principle, this new object is just a sage object with some slightly different methods.

Generically, documentation and introspection on objects from this class should work. Much of sage’s documentation carries through directly.

In [15]:

```
print(E.gens.__doc__)
```

Modified methods should have a note indicating that the data comes from the LMFDB, and then give sage’s documentation. This is not yet implemented. (So if you examine the current version, you can see some incomplete docstrings like `regulator()`.)
In [16]:

```
print(E.regulator.__doc__)
```

Thank you, and if you have any questions, comments, or concerns, please find me/email me/raise an issue on LMFDB’s github.

We now have a variety of results concerning the behavior of the partial sums

$$ S_f(X) = \sum_{n \leq X} a(n) $$

where $f(z) = \sum_{n \geq 1} a(n) e(nz)$ is a GL(2) cuspform. The primary focus of our previous work was to understand the Dirichlet series

$$ D(s, S_f \times S_f) = \sum_{n \geq 1} \frac{S_f(n)^2}{n^s} $$

completely, give its meromorphic continuation to the plane (this was the major topic of the first paper in the series), and to perform classical complex analysis on this object in order to describe the behavior of $S_f(n)$ and $S_f(n)^2$ (this was done in the first paper, and was the major topic of the second paper of the series). One motivation for studying this type of problem is that bounds for $S_f(n)$ are analogous to understanding the error term in lattice point discrepancy with circles.

That is, let $S_2(R)$ denote the number of lattice points in a circle of radius $\sqrt{R}$ centered at the origin. Then we expect that $S_2(R)$ is approximately the area of the circle, plus or minus some error term. We write this as

$$ S_2(R) = \pi R + P_2(R),$$

where $P_2(R)$ is the error term. We refer to $P_2(R)$ as the “lattice point discrepancy” — it describes the discrepancy between the number of lattice points in the circle and the area of the circle. Determining the size of $P_2(R)$ is a very famous problem called the Gauss circle problem, and it has been studied for over 200 years. We believe that $P_2(R) = O(R^{1/4 + \epsilon})$, but that is not known to be true.
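A brute-force count makes the setup concrete. This sketch (not from the paper) counts lattice points directly and compares against the area of the circle:

```python
import math

def lattice_points_in_circle(R):
    # count integer points (x, y) with x^2 + y^2 <= R
    bound = math.isqrt(R)
    return sum(2 * math.isqrt(R - x * x) + 1 for x in range(-bound, bound + 1))

R = 10**6
S2 = lattice_points_in_circle(R)
P2 = S2 - math.pi * R  # the lattice point discrepancy P_2(R)
```

For $R = 10^6$ the discrepancy is a few tens in size, comfortably on the order of $R^{1/4} \approx 31.6$, consistent with the conjectured $O(R^{1/4 + \epsilon})$ bound.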

The Gauss circle problem can be cast in the language of modular forms. Let $\theta(z)$ denote the standard Jacobi theta series,

$$ \theta(z) = \sum_{n \in \mathbb{Z}} e^{2\pi i n^2 z}.$$

Then

$$ \theta^2(z) = 1 + \sum_{n \geq 1} r_2(n) e^{2\pi i n z},$$

where $r_2(n)$ denotes the number of representations of $n$ as a sum of $2$ (positive or negative) squares. The function $\theta^2(z)$ is a modular form of weight $1$ on $\Gamma_0(4)$, but it is not a cuspform. However, the sum

$$ \sum_{n \leq R} r_2(n) = S_2(R),$$

and so the partial sums of the coefficients of $\theta^2(z)$ indicate the number of lattice points in the circle of radius $\sqrt R$. Thus $\theta^2(z)$ gives access to the Gauss circle problem.

More generally, one can consider the number of lattice points in a $k$-dimensional sphere of radius $\sqrt R$ centered at the origin, which should approximately be the volume of that sphere,

$$ S_k(R) = \mathrm{Vol}(B(\sqrt R)) + P_k(R) = \sum_{n \leq R} r_k(n),$$

giving a $k$-dimensional lattice point discrepancy. For large dimension $k$, one should expect that the circle problem is sufficient to give good bounds and understanding of the size and error of $S_k(R)$. For $k \geq 5$, the true order of growth for $P_k(R)$ is known (up to constants).

Therefore it happens to be that the small (meaning 2 or 3) dimensional cases are both the most interesting, given our predilection for 2 and 3 dimensional geometry, and the most enigmatic. For a variety of reasons, the three dimensional case is very challenging to understand, and is perhaps even more enigmatic than the two dimensional case.

Strong evidence for the conjectured size of the lattice point discrepancy comes in the form of mean square estimates. By looking at the square, one doesn’t need to worry about oscillation from positive to negative values. And by averaging over many radii, one hopes to smooth out some of the individual bumps. These mean square estimates take the form

$$\begin{align}

\int_0^X P_2(t)^2 dt &= C X^{3/2} + O(X \log^2 X) \\

\int_0^X P_3(t)^2 dt &= C’ X^2 \log X + O(X^2 \sqrt{\log X}).

\end{align}$$

These indicate that the average size of $P_2(R)$ is $R^{1/4}$, and that the average size of $P_3(R)$ is $R^{1/2}$. In the two dimensional case, notice that the error term in the mean square asymptotic has pretty significant separation. It has essentially a $\sqrt X$ power-savings over the main term. But in the three dimensional case, there is no power separation. Even with significant averaging, we are only just capable of distinguishing a main term at all.

It is also interesting, but for more complicated reasons, that the main term in the three dimensional case has a log term within it. This is unique to the three dimensional case. But that is a description for another time.

In a paper that we recently posted to the arxiv, we show that the Dirichlet series

$$ \sum_{n \geq 1} \frac{S_k(n)^2}{n^s} $$

and

$$ \sum_{n \geq 1} \frac{P_k(n)^2}{n^s} $$

for $k \geq 3$ have understandable meromorphic continuation to the plane. Of particular interest is the $k = 3$ case, of course. We then investigate smoothed and unsmoothed mean square results. In particular, we prove the following result.

Theorem $$\begin{align} \int_0^\infty P_k(t)^2 e^{-t/X} dt &= C_3 X^2 \log X + C_4 X^{5/2} \\ &\quad + C_k X^{k-1} + O(X^{k-2}). \end{align}$$

In this statement, the term with $C_3$ only appears in dimension $3$, and the term with $C_4$ only appears in dimension $4$. This should really be thought of as saying that we understand the Laplace transform of the square of the lattice point discrepancy as well as can be desired.

We are also able to improve the sharp second mean in the dimension 3 case, showing in particular the following.

Theorem There exists $\lambda > 0$ such that

$$\int_0^X P_3(t)^2 dt = C X^2 \log X + D X^2 + O(X^{2 – \lambda}).$$

We do not actually compute what we might take $\lambda$ to be, but we believe (informally) that $\lambda$ can be taken as $1/5$.

The major themes behind these new results are already present in the first paper in the series. The new ingredient involves handling the behavior of non-cuspforms at the cusps on the analytic side, and handling the apparent main terms (in this case, the volume of the ball) on the combinatorial side.

There is an additional difficulty that arises in the dimension 2 case which makes it distinct. But soon I will describe a different forthcoming work in that case.

Disclaimer: There are several greenhouse gasses, and lots of other things that we’re throwing wantonly into the environment. Considering them makes things incredibly complicated incredibly quickly, so I blithely ignore them in this note.

Such rapid changes have side effects, many of which lead to bad things. That’s why nearly 150 countries ratified the Paris Agreement on Climate Change.^{1} Even if we assume that all these countries will accomplish what they agreed to (which might be challenging for the US),^{2} most nations and advocacy groups are focusing on *increasing efficiency* and *reducing emissions.* These are good goals! But what about all the carbon that is already in the atmosphere?^{3}

You know what else is a problem? Obesity! How are we to solve all of these problems?

Looking at this (very unscientific) graph,^{4} we see that the red isn’t keeping up! Maybe we aren’t using the valuable resource of our own bodies enough! Fat has carbon in it — often over 20% by weight. What if we took advantage of our propensity to become propense? How fat would we need to get to balance last year’s carbon emissions?

That’s what we investigate here.

We need some data. It turns out that, despite knowing that we put *a lot* of carbon into the atmosphere, I don’t have any idea how much *a lot* actually is. Usually it’s given in nice, relatable terms that we’re supposed to be able to make sense of — like estimates on the number of degrees of warming to expect given a certain amount of emissions. So question number one: how much carbon do we put into the atmosphere?

This uses real data from the US Energy Information Administration (in the “International Energy Statistics” dataset). This shows the highest carbon contributors from the year 2014 (the year with the most recent complete data). All countries not explicitly displayed are included in “All Others.”

What does this tell us?^{5} The vertical bars are measured in terms of “Million Metric Tons of CO2”. In total, the world released 33716 MMTons CO2.^{6}

This unit is a bit hard to wrap my head around, MMTon CO2, a million metric ton of CO2. Firstly, we should note that only 9195 MMTons of that is carbon, which is what we’re focusing on. To put this in proper perspective, that’s 2700 pounds per person alive today (Or 1226 kilograms, for that crowd).^{7}
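The per-person arithmetic can be sketched directly. This assumes a rough 2014 world population of 7.5 billion, and uses the mass fraction of carbon in CO2:

```python
total_co2_mmtons = 33716           # million metric tons of CO2 emitted in 2014
carbon_fraction = 12.011 / 44.009  # mass fraction of carbon in CO2
total_c_mmtons = total_co2_mmtons * carbon_fraction  # roughly 9200 MMTons

world_pop = 7.5e9                  # assumed rough 2014 world population
kg_per_person = total_c_mmtons * 1e9 / world_pop  # 1 MMTon = 1e9 kg
lb_per_person = kg_per_person * 2.20462
```

This lands at roughly 1200 kilograms, or about 2700 pounds, of carbon per person, matching the figure above.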

So how fat would we need to get to balance one year of carbon emissions? If every man, woman, child, and elder gained a mere 2700 pounds (1226 kilograms!) of pure carbon, we would successfully sequester one year’s worth of carbon.

Unfortunately, that means about 13000 pounds (6000 kilograms) of fat, which is a bit much. So the chart really looks like this.
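The arithmetic above fits in a few lines of Python. A sketch, using the 33716 MMTon figure from the chart, a rough world population of 7.5 billion, and the (generous) assumption that fat is about 20% carbon by weight:

```python
co2_mmtons = 33716             # world CO2 emissions in 2014, MMTons
population = 7.5e9             # rough world population
carbon_fraction_of_fat = 0.20  # generous assumption from above

# Carbon accounts for 12 of the 44 atomic mass units in a CO2 molecule.
carbon_mmtons = co2_mmtons * 12 / 44
# One MMTon is a million metric tons, i.e. 1e9 kilograms.
carbon_kg_per_person = carbon_mmtons * 1e9 / population
carbon_lbs_per_person = carbon_kg_per_person * 2.20462
fat_kg_per_person = carbon_kg_per_person / carbon_fraction_of_fat

print(round(carbon_mmtons))          # about 9195 MMTons of carbon
print(round(carbon_kg_per_person))   # about 1226 kg per person
print(round(carbon_lbs_per_person))  # about 2703 pounds
print(round(fat_kg_per_person))      # about 6130 kg of fat
```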

Wow. So this isn’t a reasonable carbon sequestration plan.^{8} We toss an **unbelievable** amount of carbon into the atmosphere. According to LiveScience, a fully grown T-Rex could weigh as much as 18000 pounds (8160 kilograms). If we assume that the overall body composition of a dinosaur is about the same as a human’s,^{9} so that roughly 20% of a T-Rex’s weight is carbon, then a fully grown T-Rex might have 3590 pounds of carbon within his or her body. This is approximately the same amount of carbon that corresponds to each man, woman, child, and elder’s carbon use in 2014.

That’s a weird thought. How much carbon did we pull out of the ground and burn in 2014? About the same as if every human dug up a fully grown T-Rex, burned it, and then resumed their normal lives.

A fully grown male African elephant can weigh as much as 6000 kilograms. So we might grasp the magnitude of this by thinking of every person unearthing a fully grown male African elephant each year. Alternately, although we can’t gain enough weight to sequester enough carbon, elephants can. We could initiate a policy where every human adopts and raises a new African elephant each year.

I think I’m starting to get a better idea of just how daunting a task large-scale carbon sequestration will actually be. 2700 pounds per person per year. Whoa. Let’s move away from fat, towards better ideas.

Following guidelines set by the US Forestry Service for computing tree weight, a fully grown oak tree can weigh as much as 14 metric tons, with as much as 4 metric tons (8800 pounds) being carbon. Thus one fully grown oak tree can hold three people’s average yearly carbon emissions.

Instead of an elephant a year, every person could plant an oak tree every year. (Actually, it just takes one for every three people.) If these trees never died and were able to grow to full size, then this would also offset carbon emissions. Conversely, when we cut down and burn trees, they release lots and lots and lots of carbon.

Suppose we did this: this year, we plant 2.5 billion oak trees, one for every three people on Earth. According to Penn State’s Forestry Extension School, a healthy, mature hardwood forest can have as many as 120 trees per acre. If all 2.5 billion trees were planted together at this density, then they would cover 32552 square miles. The area of South Carolina is 32020 square miles, so we could cover the entire state of South Carolina with newly planted oak trees.^{10}
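As a quick sanity check on those figures, here is the same computation in Python, using the 4-metric-ton oak, the 120-trees-per-acre density, and 640 acres per square mile:

```python
population = 7.5e9
carbon_kg_per_person = 1226  # one year of emissions, from earlier
carbon_kg_per_oak = 4000     # a fully grown oak, per the Forestry Service
trees_per_acre = 120         # Penn State's mature-hardwood density
acres_per_sq_mile = 640

# How many people's yearly emissions fit in one oak?
people_per_oak = carbon_kg_per_oak / carbon_kg_per_person

# One oak planted per three people, packed at forest density.
trees_planted = population / 3
area_sq_miles = trees_planted / trees_per_acre / acres_per_sq_mile

print(round(people_per_oak, 1))  # about 3.3 people per tree
print(round(area_sq_miles))      # 32552 -- just over South Carolina's 32020
```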

Of course, oak trees are probably not the best choice for a carbon sequestration tree, and there are probably plants that, in optimal growth conditions, hold a much higher carbon per square mile concentration.^{11} Perhaps some trees are three times as effective (a Maryland per year), or maybe even ten times as effective (a Delaware per year).

But that is the magnitude of the effort. Now if you’ll excuse me, I’m going to go hug a tree.
