Category Archives: Programming

Programming Masthead

I maintain the following programming projects:

HNRSS: (source), a HackerNews RSS generator written in python. HNRSS periodically updates RSS feeds from the HN frontpage and best list. It also attempts to automatically summarize the link (if there is a link) and includes the top five comments, all to make it easier to determine whether it’s worth checking out.

LaTeX2Jax: (source), a tool to convert LaTeX documents to HTML with MathJax. This is a modification of the earlier MSE2WP, which converts Math.StackExchange flavored markdown to WordPress+MathJax compatible html. In particular, this is more general, and allows better control of the resulting html by exposing more CSS elements (that generically aren’t available on free WordPress setups). This is what is used for all math posts on this site.

MSE2WP: (source), a tool to convert Math.Stackexchange flavored markdown to WordPress+MathJax compatible html. This was once written for the Math.Stackexchange Community Blog. But as that blog is shutting down, there is much less of a purpose for this script. Note that this began as a modified version of latex2wp.

 

I actively contribute to:

python-markdown2: (source),  a fast and complete python implementation of markdown, with a few additional features.

 

And I generally support or have contributed to:

SageMath: (main site), a free and open source system of tools for mathematics. Some think of it as a free alternative to the “Big M’s” — Maple, Mathematica, Magma.

Matplotlib: (main site), a plotting library in python. Most of the static plots on this site were created using matplotlib.

crouton: (source), a tool for making Chromebooks, which by default are very limited in capability, into hackable linux laptops. This lets you directly run Linux on the device at the same time as having ChromeOS installed. The only cost is that there is absolutely no physical security at all (and every once in a while a ChromeOS update comes around and breaks lots of things). It’s great!

 

Below, you can find my most recent posts tagged “Programming” on this site.

I will note the following posts which have received lots of positive feedback.

  1. A Notebook Preparing for a Talk at Quebec-Maine
  2. A Brief Notebook on Cryptography
  3. Computing pi with Tools from Calculus (which includes computational tidbits, though no actual programming).
Posted in Programming | Leave a comment

Cracking Codes with Python: A Book Review

How do you begin to learn a technical subject?

My first experience in “programming” was following a semi-tutorial on how to patch the Starcraft exe in order to make it understand replays from previous versions. I was about 10, and I cobbled together my understanding from internet mailing lists and chatrooms. The documentation was awful and the original description was flawed, and to make it worse, I didn’t know anything about any sort of programming yet. But I trawled these lists and chatroom logs and made it work, and learned a few things. Each time Starcraft was updated, the old replay system broke completely and it was necessary to make some changes, and I got pretty good at figuring out what changes were necessary and how to perform these patches.

On the other hand, my first formal experience in programming was taking a course at Georgia Tech many years later, in which a typical activity would revolve around an exciting topic like concatenating two strings or understanding object polymorphism. These were dry topics presented to us dryly, but I knew that I wanted to understand what was going on and so I suffered the straight-faced-ness of the class and used the course as an opportunity to build some technical depth.

Now I recognize that these two approaches cover most first experiences learning a technical subject: a motivated survey versus monographic study. At the heart of the distinction is a decision either to alight on many topics (without delving deeply into most of them) or to spend as much time as is necessary to understand each topic completely (and hence not to touch too many different topics). Each has its place, but each draws a very different crowd.

The book Cracking Codes with Python: An Introduction to Building and Breaking Ciphers by Al Sweigart is very much a motivated flight through various topics in programming and cryptography, and not at all a deep technical study of any individual topic. A more accurate (though admittedly less beckoning) title might be An Introduction to Programming Concepts Through Building and Breaking Ciphers in Python. The main goal is to promote programmatical thinking by exploring basic ciphers, and the medium happens to be python.

But ciphers are cool. And breaking them is cool. And if you think you might want to learn something about programming and you might want to learn something about ciphers, then this is a great book to read.

Sweigart has a knack for writing approachable descriptions of what’s going on without delving into too many details. In fact, in some sense Sweigart has already written this book before: his other books Automate the Boring Stuff with Python and Invent your own Computer Games with Python are similarly survey material using python as the medium, though with different motivating topics.

Each chapter of this book is centered around exploring a different aspect of a cipher, and introduces additional programming topics to do so. For example, one chapter introduces the classic Caesar cipher, as well as the “if”, “else”, and “elif” conditionals (and a few other python functions). Another chapter introduces brute-force breaking the Caesar cipher (as well as string formatting in python).
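
To give a flavor of the sort of thing the book builds up to (this little sketch is mine, not code from the book), here is a bare-bones Caesar cipher together with a brute-force loop over all 26 keys:

import string

def caesar_encrypt(message, key):
    """Shift each letter of the message by `key` places (a bare-bones Caesar cipher)."""
    result = ''
    for ch in message.upper():
        if ch in string.ascii_uppercase:
            result += chr((ord(ch) - ord('A') + key) % 26 + ord('A'))
        else:
            result += ch
    return result

ciphertext = caesar_encrypt("ATTACK AT DAWN", 13)

# Brute-force "breaking": try every key and eyeball the 26 candidate plaintexts.
for key in range(26):
    print(key, caesar_encrypt(ciphertext, -key))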

In each chapter, Sweigart begins by giving a high-level overview of the topics in that chapter, followed by python code which accomplishes the goal of the chapter, followed by a detailed description of what each block of code accomplishes. Readers get to see fully written code that does nontrivial things very quickly, but on the other hand the onus of code generation is entirely on the book and readers may have trouble adapting the concepts to other programming tasks. (But remember, this is more survey, less technical description). Further, Sweigart uses a number of good practices in his code that implicitly encourages good programming behaviors: modules are written with many well-named functions and well-named variables, and sufficiently modularly that earlier modules are imported and reused later.

But this book is not without faults. As with any survey material, one can always disagree on what topics are or are not included. The book covers five classical ciphers (Caesar, transposition, substitution, Vigenere, and affine) and one modern cipher (textbook-RSA), as well as the write-backwards cipher (to introduce python concepts) and the one-time-pad (presented oddly as a Vigenere cipher whose key is the same length as the message). For some unknown reason, Sweigart chooses to refer to RSA almost everywhere as “the public key cipher”, which I find both misleading (there are other public key ciphers) and giving insufficient attribution (the cipher is implemented in chapter 24, but “RSA” appears only once as a footnote in that chapter. Hopefully the reader was paying lots of attention, as otherwise it would be rather hard to find out more about it).

Further, the choice of python topics (and their order) is sometimes odd. In truth, this book is almost language agnostic and one could very easily adapt the presentation to almost any other language.

In summary, this book is an excellent resource for the complete beginner who wants to learn something about programming and wants to learn something about ciphers. After reading this book, the reader will be a mid-beginner student of python (knee-deep is apt) and well-versed in classical ciphers. Should the reader feel inspired to learn more python, then he or she would probably feel comfortable diving into a tutorial or reference for their area of interest (like Full Stack Python if interested in web dev, or Python for Data Analysis if interested in data science). Or he or she might dive into a more complete monograph like Dive into Python or the monolithic Learn Python. Many fundamental topics (such as classes and objects, list comprehensions, data structures or algorithms) are not covered, and so “advanced” python resources would not be appropriate.

Further, should the reader feel inspired to learn more about cryptography, then I recommend that he or she consider Cryptanalysis by Gaines, which is a fun book aimed at diving deeper into classical pre-computer ciphers, or the slightly heavier but still fun resource “Codebreakers” by Kahn. For much further cryptography, it’s necessary to develop a bit of mathematical maturity, which is its own hurdle.

This book is not appropriate for everyone. An experienced python programmer could read this book in an hour, skipping the various descriptions of how python works along the way. An experienced programmer who doesn’t know python could similarly read this book in a lazy afternoon. Both would probably do better reading either a more advanced overview of either cryptography or python, based on what originally drew them to the book.

Posted in Book Review, Programming, Python | Leave a comment

Hosting a Flask App on WebFaction on a Non-root Domain

Since I came to Warwick, I’ve been working extensively on the LMFDB, which uses python, sage, flask, and mongodb at its core. Thus I’ve become very familiar with flask. Writing a simple flask application is very quick and easy. So I thought it would be a good idea to figure out how to deploy a flask app on the server which runs this website, which is currently at WebFaction.

In short, it was not too hard, and now the app is set up for use. (It’s not a public tool, so I won’t link to it).

But there were a few things that I had to figure out which I would quickly forget. Following the variety of information I found online, the only nontrivial aspect was configuring the site to run on a non-root domain (like davidlowryduda.com/subdomain instead of at davidlowryduda.com). I’m writing this so as to not need to figure this out again when I write and host more flask apps. (Which I’ll almost certainly do, as it’s so straightforward).

There are some uninteresting things one must do on WebFaction.

  1. Log into your account.
  2. Add a new application of type mod_wsgi (and the desired version of python, which is hopefully 3.6+).
  3. Add this application to the desired website and subdomain in the WebFaction control panel.

After this, WebFaction will set up a skeleton “Hello World” mod_wsgi application with many reasonable server setting defaults. The remainder of the setup is done on the server itself.

In ~/webapps/application_name there will now appear


apache2/    # Apache config files and bin
htdocs/     # Default location where Apache looks for the app

We won’t change that structure. In htdocs there is a file index.py, which is where apache expects to find a python wsgi application called application. We will place the flask app alongside this structure and point to it in htdocs/index.py.

Usually I will use a virtualenv here. So in ~/webapps/application_name, I will run something like virtualenv flask_app_venv and then activate it by sourcing flask_app_venv/bin/activate. Then pip install flask and whatever other python modules are necessary for the application to run. We will configure the server to use this virtual environment to run the app in a moment.

Copy the flask app, so that the resulting structure looks something like

~/webapps/application_name:

- apache2/
- htdocs/
- flask_app_venv/
- flask_app/    # My flask app
  - config.py
  - libs/
  - main/
    - static/
    - templates/
    - __init__.py
    - views.py
    - models.py

I find it conceptually easiest to have flask_app/main/__init__.py directly contain the flask app, so that it can be referenced by name in htdocs/index.py. It can be made elsewhere (for instance, perhaps in a file like flask_app/main/app.py, which appears to be a common structure), but I assume that it is at least imported in __init__.py.

For example, __init__.py might look something like

# application_name/flask_app/main/__init__.py

# ... other import statements from project if necessary
from flask import Flask

app = Flask(__name__)
app.config.from_object('config')

# Importing the views for the rest of our site
# We do this here to avoid circular imports
# Note that I call it "main" where many call it "app"
from main import views

if __name__ == '__main__':
    app.run()

The Flask constructor returns exactly the sort of wsgi application that apache expects. With this structure, we can edit the htdocs/index.py file to look like

# application_name/htdocs/index.py

import sys

# append flask project files
sys.path.append('/home/username/webapps/application_name/flask_app/')

# launching our app
from main import app as application

Now the server knows the correct wsgi_application to serve.

We must configure it to use our python virtual environment (and we’ll add a few additional convenience pieces). We edit apache2/conf/httpd.conf (under ~/webapps/application_name) as follows. Near the top of the file, certain modules are loaded. Add in the alias module, so that the modules look something like


#... other modules
LoadModule wsgi_module       modules/mod_wsgi.so
LoadModule alias_module      modules/mod_alias.so  # <-- added

This allows us to alias the root of the site. Since all site functionality is routed through htdocs/index.py, we want to think of the root / as beginning with /htdocs/index.py. At the end of the file, add the line


Alias / /home/username/webapps/application_name/htdocs/index.py/

We now set the virtual environment to be used properly. There will be a set of lines containing names like WSGIDaemonProcess and WSGIProcessGroup. We edit these to refer to the correct python. WebFaction will have configured WSGIDaemonProcess to point to a local version of python by setting the python-path. Remove that, making that line look like


WSGIDaemonProcess application_name processes=2 threads=12

(or similar). We set the python path below, adding the line


WSGIPythonHome /home/username/webapps/application_name/flask_app_venv

I believe that this could also actually be done by setting python-path in WSGIDaemonProcess, but I find this more aesthetically pleasing.

We must also modify the <Directory> section. Edit it to look like


<Directory /home/username/webapps/application_name/htdocs>
  AddHandler wsgi-script .py
  RewriteEngine On            # <-- added
  RewriteBase /               # <-- added
  WSGIScriptReloading On      # <-- added
</Directory>

It may very well be that I don't use the RewriteEngine at all, but if I do then this is where it's done. Script reloading is a nice convenience, especially while developing and changing the app.

I note that it may be convenient to add an additional alias for static file hosting,


Alias /static/ /home/your_username/webapps/application_name/app/main/static/

though I have not used this so far. (I get the same functionality through controlling the flask views appropriately).

The rest of this file has been setup by WebFaction for us upon creating the wsgi application.

If the application is on a non-root domain...

If the application is to be run on a non-root domain, such as davidlowryduda.com/subdomain, then there is currently a problem. In flask, when using url getters like url_for, urls will be returned as though there is no subdomain. And thus all urls will be incorrect. It is necessary to alter provided urls in some way.

The way that worked for me was to insert a tiny bit of middleware in the wsgi_application. Alter htdocs/index.py to read

#application_name/htdocs/index.py

import sys

# append flask project files
sys.path.append('/home/username/webapps/application_name/flask_app/')

# subdomain url rerouting middleware
from webfaction_middleware import Middleware

from main import app

# set app through middleware
application = Middleware(app)

Now of course we need to write this middleware.

In application_name/flask_app, I create a file called webfaction_middleware.py, which reads

# application_name/flask_app/webfaction_middleware.py

class Middleware(object):  # python2 aware
  def __init__(self, app):
    self.app = app

  def __call__(self, environ, start_response):
    app_url = '/subdomain'
    if app_url != '/':
      environ['SCRIPT_NAME'] = app_url
    return self.app(environ, start_response)

I now have a template file in which I keep app_url = '/' so that I can forget this and not worry, but that is where the subdomain url is prepended. Note that the leading slash is necessary. When I first tried using this, I omitted the leading slash. The application worked sometimes, and horribly failed in some other places. Some urls were correctly constructed, but most were not. I didn't try to figure out which ones were doomed to fail --- but it took me an embarrassingly long time to realize that prepending a slash solved all problems.

The magical names environ and start_response appear because the flask app is a wsgi application, and this is the generic api of wsgi applications.

Now it's ready

Restart the apache server (apache2/bin/restart) and go. Note that when incrementally making changes above, some changes can take a few minutes to fully propagate. It's only the first time through that takes some thought.

Posted in Programming, Python | Leave a comment

Segregation, Gerrymandering, and Schelling’s Model

[This note is more about modeling some of the mathematics behind political events than politics themselves. And there are pretty pictures.]

Gerrymandering has become a recurring topic in the news. The Supreme Court of the US, as well as several state courts and state supreme courts, is hearing multiple cases on partisan gerrymandering (all beginning with a case in Wisconsin).

Intuitively, it is clear that gerrymandering is bad. It allows politicians to choose their voters, instead of the other way around. And it allows the majority party to quash minority voices.

But how can one identify a gerrymandered map? To quote Justice Kennedy in his Concurrence in the 2004 Supreme Court case Vieth v. Jubelirer:

When presented with a claim of injury from partisan gerrymandering, courts confront two obstacles. First is the lack of comprehensive and neutral principles for drawing electoral boundaries. No substantive definition of fairness in districting seems to command general assent. Second is the absence of rules to limit and confine judicial intervention. With uncertain limits, intervening courts–even when proceeding with best intentions–would risk assuming political, not legal, responsibility for a process that often produces ill will and distrust.

Later, he adds to the first obstacle, saying:

The object of districting is to establish “fair and effective representation for all citizens.” Reynolds v. Sims, 377 U.S. 533, 565—568 (1964). At first it might seem that courts could determine, by the exercise of their own judgment, whether political classifications are related to this object or instead burden representational rights. The lack, however, of any agreed upon model of fair and effective representation makes this analysis difficult to pursue.

From Justice Kennedy’s Concurrence emerges a theme — a “workable standard” of identifying gerrymandering would open up the possibility of limiting partisan gerrymandering through the courts. Indeed, at the core of the Wisconsin gerrymandering case is a proposed “workable standard”, based around the efficiency gap.
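
(For reference, the efficiency gap compares the two parties' "wasted" votes: every vote cast for a losing candidate, plus every vote for a winning candidate beyond what was needed to win. A small sketch of that computation, under the standard definition and not taken from the analysis below, might look like the following.)

def efficiency_gap(districts):
    """Efficiency gap for a two-party election.

    `districts` is a list of (votes_A, votes_B) pairs. A party's wasted votes
    are all of its votes in districts it loses, plus its votes beyond the 50%
    threshold in districts it wins. (A sketch of the standard definition.)"""
    wasted_A = wasted_B = total = 0
    for votes_A, votes_B in districts:
        district_total = votes_A + votes_B
        total += district_total
        threshold = district_total / 2
        if votes_A > votes_B:
            wasted_A += votes_A - threshold
            wasted_B += votes_B
        else:
            wasted_A += votes_A
            wasted_B += votes_B - threshold
    return (wasted_A - wasted_B) / total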

 

Thomas Schelling and Segregation

In 1971, American economist Thomas Schelling (who later won the Nobel Prize in Economics in 2005) published Dynamic Models of Segregation (Journal of Mathematical Sociology, 1971, Vol 1, pp 143–186). He sought to understand why racial segregation in the United States seems so difficult to combat.

He introduced a simple model of segregation suggesting that even if each individual person doesn’t mind living with others of a different race, they might still choose to segregate themselves through mild preferences. As each individual makes these choices, overall segregation increases.

I write this post because I wondered what happens if we adapt Schelling’s model to instead model a state and its district voting map. In place of racial segregation, I consider political segregation. Supposing the district voting map does not change, I wondered how the efficiency gap will change over time as people further segregate themselves.

It seemed intuitive to me that political segregation (where people who had the same political beliefs stayed largely together and separated from those with different political beliefs) might correspond to more egregious cases of gerrymandering. But to my surprise, I was (mostly) wrong.

Let’s set up and see the model.

(more…)

Posted in Expository, Mathematics, Politics, Programming, Python | Tagged , , | Leave a comment

Advent of Code: Day 4

This is a very short post in my collection working through this year’s Advent of Code challenges. Unlike the previous ones, this has no mathematical comments, as it was a very short exercise. This notebook is available in its original format on my github.

Day 4: High Entropy Passphrases

Given a list of strings, determine how many strings have no duplicate words.

This is a classic problem, and it’s particularly easy to solve this in python. Some might use collections.Counter, but I think it’s more straightforward to use sets.

The key idea is that the set of words in a sentence will not include duplicates. So if taking the set of a sentence reduces its length, then there was a duplicate word.

In [1]:
with open("input.txt", "r") as f:
    lines = f.readlines()
    
def count_lines_with_unique_words(lines):
    num_pass = 0
    for line in lines:
        s = line.split()
        if len(s) == len(set(s)):
            num_pass += 1
    return num_pass

count_lines_with_unique_words(lines)
Out[1]:
455

I think this is the first day where I would have had a shot at the leaderboard if I’d been gunning for it.

Part 2

Let’s add in another constraint. Determine how many strings have no duplicate words, even after anagramming. Thus the string

abc bac

is not valid, since the second word is an anagram of the first. There are many ways to tackle this as well, but I will handle anagrams by sorting the letters in each word first, and then running the bit from part 1 to identify repeated words.

In [2]:
with open("input.txt", "r") as f:
    lines = f.readlines()
    
sorted_lines = []
for line in lines:
    sorted_line = ' '.join([''.join(l) for l in map(sorted, line.split())])
    sorted_lines.append(sorted_line)

sorted_lines[:2]
    
Out[2]:
['bddjjow acimrv bcjjm anr flmmos fiosv',
 'bcmnoxy dfinyzz dgmp dfgioy hinrrv eeklpuu adgpw kqv']
In [3]:
count_lines_with_unique_words(sorted_lines)
Out[3]:
186
Posted in Expository, Programming, Python | Tagged , , | 1 Comment

Advent of Code: Day 3

This is the third notebook in my posts on the Advent of Code challenges. The notebook in its original format can be found on my github.

Day 3: Spiral Memory

Numbers are arranged in a spiral

17  16  15  14  13
18   5   4   3  12
19   6   1   2  11
20   7   8   9  10
21  22  23---> ...

Given an integer n, what is its Manhattan Distance from the center (1) of the spiral? For instance, the distance of 3 is $2 = 1 + 1$, since it’s one space to the right and one space up from the center.

Here’s my idea. The bottom right corner of the $k$th layer is the integer $(2k+1)^2$, since that’s how many integers are contained within that square. The other three corners in that layer are $(2k+1)^2 - 2k, (2k+1)^2 - 4k$, and $(2k+1)^2 - 6k$. Finally, the closest spot on the $k$th layer to the origin is at distance $k$: these are the four “axis locations” halfway between the corners, at $(2k+1)^2 - k, (2k+1)^2 - 3k, (2k+1)^2 - 5k$, and $(2k+1)^2 - 7k$.

For instance when $k = 1$, the bottom right is $(2 + 1)^2 = 9$, and the four “axis locations” are $9 - 1, 9 - 3, 9-5$, and $9-7$. The “axis locations” are $k$ away, and the corners are $2k$ away.

So I will first find which layer the number is on. Then I’ll figure out which side it’s on, and then how far away it is from the nearest “axis location” or “corner”.

My given number happens to be 289326.

In [1]:
import math

def find_lowest_larger_odd_square(n):
    upper = math.ceil(n**.5)
    if upper %2 == 0:
        upper += 1
    return upper
In [2]:
assert find_lowest_larger_odd_square(39) == 7
assert find_lowest_larger_odd_square(26) == 7
assert find_lowest_larger_odd_square(25) == 5
In [3]:
find_lowest_larger_odd_square(289326)
Out[3]:
539
In [4]:
539**2 - 289326
Out[4]:
1195

It happens to be that our integer is very close to an odd square.
The square is $539^2$, and the distance to that square is $538$ from the center.

Note that $539 = 2(269) + 1$, so this is the $269$th layer of the square.
The previous corner to $539^2$ is $539^2 - 538$, and the previous corner to that is $539^2 - 2\cdot538 = 539^2 - 1076$.
This is the nearest corner.
How far away from the square is this corner?
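
One way to assemble these pieces into code (a sketch using only the "axis locations", and reusing find_lowest_larger_odd_square from above) is the following.

def spiral_manhattan_distance(n):
    """Manhattan distance from the cell containing n to the center of the spiral.

    The distance is the layer number k plus the offset from n to the nearest
    "axis location" on that layer."""
    if n == 1:
        return 0
    side = find_lowest_larger_odd_square(n)
    k = (side - 1) // 2                      # n lies on the kth layer
    corner = side**2                         # bottom-right corner of that layer
    axis_locations = [corner - (2*j + 1)*k for j in range(4)]
    return k + min(abs(n - a) for a in axis_locations)

assert spiral_manhattan_distance(1) == 0
assert spiral_manhattan_distance(3) == 2
assert spiral_manhattan_distance(12) == 3
assert spiral_manhattan_distance(1024) == 31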

(more…)

Posted in Expository, Programming, Python | Tagged , , | Leave a comment

Advent of Code: Day 2

This is the second notebook in my posts on the Advent of Code challenges. This notebook in its original format can be found on my github.

Day 2: Corruption Checksum, part I

You are given a table of integers. Find the difference between the maximum and minimum of each row, and add these differences together.

There is not a lot to say about this challenge. The plan is to read the file linewise, compute the difference on each line, and sum them up.

In [1]:
with open("input.txt", "r") as f:
    lines = f.readlines()
lines[0]
Out[1]:
'5048\t177\t5280\t5058\t4504\t3805\t5735\t220\t4362\t1809\t1521\t230\t772\t1088\t178\t1794\n'
In [2]:
l = lines[0]
l = l.split()
l
Out[2]:
['5048',
 '177',
 '5280',
 '5058',
 '4504',
 '3805',
 '5735',
 '220',
 '4362',
 '1809',
 '1521',
 '230',
 '772',
 '1088',
 '178',
 '1794']
In [3]:
def max_minus_min(line):
    '''Compute the difference between the largest and smallest integer in a line'''
    line = list(map(int, line.split()))
    return max(line) - min(line)

def sum_differences(lines):
    '''Sum the value of `max_minus_min` for each line in `lines`'''
    return sum(max_minus_min(line) for line in lines)
In [4]:
testcase = ['5 1 9 5','7 5 3', '2 4 6 8']
assert sum_differences(testcase) == 18
In [5]:
sum_differences(lines)
Out[5]:
58975

Mathematical Interlude

In line with the first day’s challenge, I’m inclined to ask what we should “expect.” But what we should expect is not well-defined in this case. Let us rephrase the problem in a randomized sense.

Suppose we are given a table, $n$ lines long, where each line consists of $m$ elements, each of which is a uniformly randomly chosen integer from $1$ to $10$. We might ask what is the expected value of this operation, of summing the differences between the maxima and minima of each row, on this table. What should we expect?

As each line is independent of the others, we are really asking what is the expected value across a single row. So given $m$ integers uniformly randomly chosen from $1$ to $10$, what is the expected value of the maximum, and what is the expected value of the minimum?

 

Expected Minimum

Let’s begin with the minimum. The minimum is $1$ unless all the integers are at least $2$. This has probability
$$ 1 - \left( \frac{9}{10} \right)^m = \frac{10^m - 9^m}{10^m}$$
of occurring. We rewrite it as the version on the right for reasons that will soon be clear.
The minimum is $2$ if all the integers are at least $2$ (which can occur in $9$ different ways for each integer), but not all the integers are at least $3$ (each integer has $8$ different ways of being at least $3$). Thus this has probability
$$ \frac{9^m - 8^m}{10^m}.$$
Continuing to do one more for posterity, the minimum is $3$ if all the integers are at least $3$ (each integer has $8$ different ways of being at least $3$), but not all integers are at least $4$ (each integer has $7$ different ways of being at least $4$). Thus this has probability

$$ \frac{8^m - 7^m}{10^m}.$$

And so on.

Recall that the expected value of a random variable is

$$ E[X] = \sum x_i P(X = x_i),$$

so the expected value of the minimum is

$$ \frac{1}{10^m} \big( 1(10^m - 9^m) + 2(9^m - 8^m) + 3(8^m - 7^m) + \cdots + 9(2^m - 1^m) + 10(1^m - 0^m)\big).$$

This simplifies nicely to

$$ \sum_ {k = 1}^{10} \frac{k^m}{10^m}. $$

Expected Maximum

The same style of thinking shows that the expected value of the maximum is

$$ \frac{1}{10^m} \big( 10(10^m - 9^m) + 9(9^m - 8^m) + 8(8^m - 7^m) + \cdots + 2(2^m - 1^m) + 1(1^m - 0^m)\big).$$

This simplifies to

$$ \frac{1}{10^m} \big( 10 \cdot 10^m - 9^m - 8^m - \cdots - 2^m - 1^m \big) = 10 - \sum_ {k = 1}^{9} \frac{k^m}{10^m}.$$

Expected Difference

Subtracting, we find that the expected difference is

$$ 9 - 2\sum_ {k=1}^{9} \frac{k^m}{10^m}. $$

From this we can compute the expected difference for each list-length $m$. It is good to note that as $m \to \infty$, the expected value tends to $9$. Does this make sense? Yes, as when there are lots of values we should expect one to be a $10$ and one to be a $1$. It’s also pretty straightforward to see how to extend this to values of integers from $1$ to $N$.
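
For instance, a quick function evaluating this formula (for integers from $1$ to $N$; this is not code from the original notebook) might be:

def expected_difference(m, N=10):
    """Expected value of (max - min) for m integers chosen uniformly from 1 to N.

    For N = 10 this is the formula above: 9 - 2*sum((k/10)**m for k in 1..9)."""
    return (N - 1) - 2 * sum((k / N)**m for k in range(1, N))

# Sanity checks: a single element gives difference 0, and the value tends to 9.
assert abs(expected_difference(1)) < 1e-12
assert abs(expected_difference(1000) - 9) < 1e-12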

Looking at the data, it does not appear that the integers were randomly chosen. Instead, there are very many relatively small integers and some relatively large integers. So we shouldn’t expect this toy analysis to accurately model this problem — the distribution is definitely not uniform random.
But we can try it out anyway.

(more…)

Posted in Expository, Programming, Python | Tagged , , | Leave a comment

Advent of Code: Day 1

I thoroughly enjoyed reading through Peter Norvig’s extraordinarily clean and nice solutions to the Advent of Code challenge last year. Inspired by his clean, literate programming style and the convenience of jupyter notebook demonstrations, I will look at several of these challenges in my own jupyter notebooks.

My background and intentions aren’t the same as Peter Norvig’s: his expertise dwarfs mine. And timezones are not kind to those of us in the UK, and thus I won’t be competing for a position on the leaderboards. These are to be fun. And sometimes there are tidbits of math that want to come out of the challenges.

Enough of that. Let’s dive into the first day.

Day 1: Inverse Captcha, Part 1

Given a sequence of digits, find the sum of those digits which match the following digit. The sequence is presumed circular, so the first digit may match the last digit.

This would probably be done the fastest by looping through the sequence.

In [1]:
with open('input.txt', 'r') as f:
    seq = f.read()
seq = seq.strip()
seq[:10]
Out[1]:
'1118313623'
In [2]:
def sum_matched_digits(s):
    "Sum of digits which match following digit, and first digit if it matches last digit"
    total = 0
    for a,b in zip(s, s[1:]+s[0]):
        if a == b:
            total += int(a)
    return total

They provide a few test cases which we use to test our method against.

In [3]:
assert sum_matched_digits('1122') == 3
assert sum_matched_digits('1111') == 4
assert sum_matched_digits('1234') == 0
assert sum_matched_digits('91212129') == 9

For fun, here is a one-line version.
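
(Something like the following would work, for instance.)

def sum_matched_digits_oneliner(s):
    # One possible one-line version, equivalent to the loop above.
    return sum(int(a) for a, b in zip(s, s[1:] + s[0]) if a == b)

assert sum_matched_digits_oneliner('91212129') == 9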

(more…)

Posted in Expository, Programming, Python | Tagged , , | 1 Comment

Sage Days 87 Demo: Interfacing between sage and the LMFDB

Interfacing sage and the LMFDB — a prototype

The lmfdb and sagemath are both great things, but they don’t currently talk to each other. Much of the lmfdb calls sage, but the lmfdb also includes vast amounts of data on $L$-functions and modular forms (hence the name) that is not accessible from within sage.
This is an example prototype of an interface to the lmfdb from sage. Keep in mind that this is a prototype and every aspect can change. But we hope to show what may be possible in the future. If you have requests, comments, or questions, please request/comment/ask either now, or at my email: david@lowryduda.com.

Note that this notebook is available on http://davidlowryduda.com or https://gist.github.com/davidlowryduda/deb1f88cc60b6e1243df8dd8f4601cde, and the code is available at https://github.com/davidlowryduda/sage2lmfdb

Let’s dive into an example.

In [1]:
# These names will change
from sage.all import *
import LMFDB2sage.elliptic_curves as lmfdb_ecurve
In [2]:
lmfdb_ecurve.search(rank=1)
Out[2]:
[Elliptic Curve defined by y^2 + x*y = x^3 - 887688*x - 321987008 over Rational Field,
 Elliptic Curve defined by y^2 + x*y + y = x^3 - x^2 + 10795*x - 97828 over Rational Field,
 Elliptic Curve defined by y^2 + x*y + y = x^3 - x^2 - 2294115305*x - 42292668425178 over Rational Field,
 Elliptic Curve defined by y^2 + x*y + y = x^3 - x^2 - 3170*x - 49318 over Rational Field,
 Elliptic Curve defined by y^2 + y = x^3 + 1050*x - 26469 over Rational Field,
 Elliptic Curve defined by y^2 + x*y = x^3 - x^2 - 1240542*x - 531472509 over Rational Field,
 Elliptic Curve defined by y^2 + y = x^3 - x^2 + 8100*x - 263219 over Rational Field,
 Elliptic Curve defined by y^2 + x*y = x^3 + 637*x - 68783 over Rational Field,
 Elliptic Curve defined by y^2 + y = x^3 + x^2 + 36*x - 380 over Rational Field,
 Elliptic Curve defined by y^2 + y = x^3 + x^2 - 2535*x - 49982 over Rational Field]
This returns 10 elliptic curves of rank 1. But these are a bit different than sage’s elliptic curves.

In [3]:
Es = lmfdb_ecurve.search(rank=1)
E = Es[0]
print(type(E))
<class 'LMFDB2sage.ell_lmfdb.EllipticCurve_rational_field_lmfdb_with_category'>
Note that the class of an elliptic curve is an lmfdb EllipticCurve. But don’t worry, this is a subclass of a normal elliptic curve. So we can call the normal things one might call on an elliptic curve.


In [4]:
# Try autocompleting the following. It has all the things!
print(dir(E))
['CPS_height_bound', 'CartesianProduct',
'Chow_form', 'Hom',
'Jacobian', 'Jacobian_matrix',
'Lambda', 'Np',
'S_integral_points', '_AlgebraicScheme__A',
'_AlgebraicScheme__divisor_group', '_AlgebraicScheme_subscheme__polys',
'_EllipticCurve_generic__ainvs', '_EllipticCurve_generic__b_invariants',
'_EllipticCurve_generic__base_ring', '_EllipticCurve_generic__discriminant',
'_EllipticCurve_generic__is_over_RationalField', '_EllipticCurve_generic__multiple_x_denominator',
'_EllipticCurve_generic__multiple_x_numerator', '_EllipticCurve_rational_field__conductor_pari',
'_EllipticCurve_rational_field__generalized_congruence_number', '_EllipticCurve_rational_field__generalized_modular_degree',
'_EllipticCurve_rational_field__gens', '_EllipticCurve_rational_field__modular_degree',
'_EllipticCurve_rational_field__np', '_EllipticCurve_rational_field__rank',
'_EllipticCurve_rational_field__regulator', '_EllipticCurve_rational_field__torsion_order',
'_Hom_', '__add__', '__cached_methods', '__call__',
'__class__', '__cmp__', '__contains__', '__delattr__',
'__dict__', '__dir__', '__div__', '__doc__',
'__eq__', '__format__', '__ge__', '__getattribute__',
'__getitem__', '__getstate__', '__gt__', '__hash__',
'__init__', '__le__', '__lt__', '__make_element_class__',
'__module__', '__mul__', '__ne__', '__new__',
'__nonzero__', '__pari__', '__pow__', '__pyx_vtable__',
'__rdiv__', '__reduce__', '__reduce_ex__', '__repr__',
'__rmul__', '__setattr__', '__setstate__', '__sizeof__',
'__str__', '__subclasshook__', '__temporarily_change_names', '__truediv__',
'__weakref__', '_abstract_element_class', '_adjust_heegner_index', '_an_element_',
'_ascii_art_', '_assign_names', '_axiom_', '_axiom_init_',
'_base', '_base_ring', '_base_scheme', '_best_affine_patch',
'_cache__point_homset', '_cache_an_element', '_cache_key', '_check_satisfies_equations',
'_cmp_', '_coerce_map_from_', '_coerce_map_via', '_coercions_used',
'_compute_gens', '_convert_map_from_', '_convert_method_name', '_defining_names',
'_defining_params_', '_doccls', '_element_constructor', '_element_constructor_',
'_element_constructor_from_element_class', '_element_init_pass_parent', '_factory_data', '_first_ngens',
'_forward_image', '_fricas_', '_fricas_init_', '_gap_',
'_gap_init_', '_generalized_congmod_numbers', '_generic_coerce_map', '_generic_convert_map',
'_get_action_', '_get_local_data', '_giac_', '_giac_init_',
'_gp_', '_gp_init_', '_heegner_best_tau', '_heegner_forms_list',
'_heegner_index_in_EK', '_homset', '_init_category_', '_initial_action_list',
'_initial_coerce_list', '_initial_convert_list', '_interface_', '_interface_init_',
'_interface_is_cached_', '_internal_coerce_map_from', '_internal_convert_map_from', '_introspect_coerce',
'_is_category_initialized', '_is_valid_homomorphism_', '_isoclass', '_json',
'_kash_', '_kash_init_', '_known_points', '_latex_',
'_lmfdb_label', '_lmfdb_regulator', '_macaulay2_', '_macaulay2_init_',
'_magma_init_', '_maple_', '_maple_init_', '_mathematica_',
'_mathematica_init_', '_maxima_', '_maxima_init_', '_maxima_lib_',
'_maxima_lib_init_', '_modsym', '_modular_symbol_normalize', '_morphism',
'_multiple_of_degree_of_isogeny_to_optimal_curve', '_multiple_x_denominator', '_multiple_x_numerator', '_names',
'_normalize_padic_lseries', '_octave_', '_octave_init_', '_p_primary_torsion_basis',
'_pari_', '_pari_init_', '_point', '_point_homset',
'_polymake_', '_polymake_init_', '_populate_coercion_lists_', '_r_init_',
'_reduce_model', '_reduce_point', '_reduction', '_refine_category_',
'_repr_', '_repr_option', '_repr_type', '_sage_',
'_scale_by_units', '_set_conductor', '_set_cremona_label', '_set_element_constructor',
'_set_gens', '_set_modular_degree', '_set_rank', '_set_torsion_order',
'_shortest_paths', '_singular_', '_singular_init_', '_symbolic_',
'_test_an_element', '_test_cardinality', '_test_category', '_test_elements',
'_test_elements_eq_reflexive', '_test_elements_eq_symmetric', '_test_elements_eq_transitive', '_test_elements_neq',
'_test_eq', '_test_new', '_test_not_implemented_methods', '_test_pickling',
'_test_some_elements', '_tester', '_torsion_bound', '_unicode_art_',
'_unset_category', '_unset_coercions_used', '_unset_embedding', 'a1',
'a2', 'a3', 'a4', 'a6',
'a_invariants', 'abelian_variety', 'affine_patch', 'ainvs',
'algebra', 'ambient_space', 'an', 'an_element',
'analytic_rank', 'analytic_rank_upper_bound', 'anlist', 'antilogarithm',
'ap', 'aplist', 'arithmetic_genus', 'automorphisms',
'b2', 'b4', 'b6', 'b8',
'b_invariants', 'base', 'base_extend', 'base_field',
'base_morphism', 'base_ring', 'base_scheme', 'c4',
'c6', 'c_invariants', 'cartesian_product', 'categories',
'category', 'change_ring', 'change_weierstrass_model', 'cm_discriminant',
'codimension', 'coerce', 'coerce_embedding', 'coerce_map_from',
'complement', 'conductor', 'congruence_number', 'construction',
'convert_map_from', 'coordinate_ring', 'count_points', 'cremona_label',
'database_attributes', 'database_curve', 'db', 'defining_ideal',
'defining_polynomial', 'defining_polynomials', 'degree', 'descend_to',
'dimension', 'dimension_absolute', 'dimension_relative', 'discriminant',
'division_field', 'division_polynomial', 'division_polynomial_0', 'divisor',
'divisor_group', 'divisor_of_function', 'dual', 'dump',
'dumps', 'element_class', 'elliptic_exponential', 'embedding_center',
'embedding_morphism', 'eval_modular_form', 'excellent_position', 'formal',
'formal_group', 'fundamental_group', 'galois_representation', 'gen',
'gens', 'gens_certain', 'gens_dict', 'gens_dict_recursive',
'genus', 'geometric_genus', 'get_action', 'global_integral_model',
'global_minimal_model', 'global_minimality_class', 'has_additive_reduction', 'has_bad_reduction',
'has_base', 'has_cm', 'has_coerce_map_from', 'has_global_minimal_model',
'has_good_reduction', 'has_good_reduction_outside_S', 'has_multiplicative_reduction', 'has_nonsplit_multiplicative_reduction',
'has_rational_cm', 'has_split_multiplicative_reduction', 'hasse_invariant', 'heegner_discriminants',
'heegner_discriminants_list', 'heegner_index', 'heegner_index_bound', 'heegner_point',
'heegner_point_height', 'heegner_sha_an', 'height', 'height_function',
'height_pairing_matrix', 'hom', 'hyperelliptic_polynomials', 'identity_morphism',
'inject_variables', 'integral_model', 'integral_points', 'integral_short_weierstrass_model',
'integral_weierstrass_model', 'integral_x_coords_in_interval', 'intersection', 'intersection_multiplicity',
'intersection_points', 'intersects_at', 'irreducible_components', 'is_atomic_repr',
'is_coercion_cached', 'is_complete_intersection', 'is_conversion_cached', 'is_exact',
'is_global_integral_model', 'is_global_minimal_model', 'is_good', 'is_integral',
'is_irreducible', 'is_isogenous', 'is_isomorphic', 'is_local_integral_model',
'is_minimal', 'is_on_curve', 'is_ordinary', 'is_ordinary_singularity',
'is_p_integral', 'is_p_minimal', 'is_parent_of', 'is_projective',
'is_quadratic_twist', 'is_quartic_twist', 'is_semistable', 'is_sextic_twist',
'is_singular', 'is_smooth', 'is_supersingular', 'is_transverse',
'is_x_coord', 'isogenies_prime_degree', 'isogeny', 'isogeny_class',
'isogeny_codomain', 'isogeny_degree', 'isogeny_graph', 'isomorphism_to',
'isomorphisms', 'j_invariant', 'kodaira_symbol', 'kodaira_type',
'kodaira_type_old', 'kolyvagin_point', 'label', 'latex_name',
'latex_variable_names', 'lift_x', 'lll_reduce', 'lmfdb_page',
'local_coordinates', 'local_data', 'local_integral_model', 'local_minimal_model',
'lseries', 'lseries_gross_zagier', 'manin_constant', 'matrix_of_frobenius',
'minimal_discriminant_ideal', 'minimal_model', 'minimal_quadratic_twist', 'mod5family',
'modular_degree', 'modular_form', 'modular_parametrization', 'modular_symbol',
'modular_symbol_numerical', 'modular_symbol_space', 'multiplication_by_m', 'multiplication_by_m_isogeny',
'multiplicity', 'mwrank', 'mwrank_curve', 'neighborhood',
'newform', 'ngens', 'non_minimal_primes', 'nth_iterate',
'objgen', 'objgens', 'optimal_curve', 'orbit',
'ordinary_model', 'ordinary_primes', 'padic_E2', 'padic_height',
'padic_height_pairing_matrix', 'padic_height_via_multiply', 'padic_lseries', 'padic_regulator',
'padic_sigma', 'padic_sigma_truncated', 'parent', 'pari_curve',
'pari_mincurve', 'period_lattice', 'plane_projection', 'plot',
'point', 'point_homset', 'point_search', 'point_set',
'pollack_stevens_modular_symbol', 'preimage', 'projection', 'prove_BSD',
'q_eigenform', 'q_expansion', 'quadratic_transform', 'quadratic_twist',
'quartic_twist', 'rank', 'rank_bound', 'rank_bounds',
'rational_parameterization', 'rational_points', 'real_components', 'reduce',
'reduction', 'register_action', 'register_coercion', 'register_conversion',
'register_embedding', 'regulator', 'regulator_of_points', 'rename',
'reset_name', 'root_number', 'rst_transform', 'satisfies_heegner_hypothesis',
'saturation', 'save', 'scale_curve', 'selmer_rank',
'sextic_twist', 'sha', 'short_weierstrass_model', 'silverman_height_bound',
'simon_two_descent', 'singular_points', 'singular_subscheme', 'some_elements',
'specialization', 'structure_morphism', 'supersingular_primes', 'tamagawa_exponent',
'tamagawa_number', 'tamagawa_number_old', 'tamagawa_numbers', 'tamagawa_product',
'tamagawa_product_bsd', 'tangents', 'tate_curve', 'three_selmer_rank',
'torsion_order', 'torsion_points', 'torsion_polynomial', 'torsion_subgroup',
'two_descent', 'two_descent_simon', 'two_division_polynomial', 'two_torsion_rank',
'union', 'variable_name', 'variable_names', 'weierstrass_p',
'weil_restriction', 'zeta_series']
All the things
This gives quick access to some data that is not stored within the LMFDB, but which is relatively quickly computable. For example,

In [5]:
E.defining_ideal()
Out[5]:
Ideal (-x^3 + x*y*z + y^2*z + 887688*x*z^2 + 321987008*z^3) of Multivariate Polynomial Ring in x, y, z over Rational Field
But one of the great powers is that there are some things which are computed and stored in the LMFDB, and not in sage. We can now immediately give many examples of curves with prescribed conductor and torsion:

In [6]:
Es = lmfdb_ecurve.search(conductor=11050, torsion_order=2)
print("There are {} curves returned.".format(len(Es)))
E = Es[0]
print(E)
There are 10 curves returned.
Elliptic Curve defined by y^2 + x*y + y = x^3 - 3476*x - 79152 over Rational Field
And for these curves, the lmfdb contains data on their ranks, generators, regulators, and so on.

In [7]:
print(E.gens())
print(E.rank())
print(E.regulator())
[(-34 : 17 : 1)]
1
1.63852610029
In [8]:
res = []
%time for E in Es: res.append(E.gens()); res.append(E.rank()); res.append(E.regulator())
CPU times: user 971 ms, sys: 6.82 ms, total: 978 ms
Wall time: 978 ms
That’s pretty fast, and this is because all of this was pulled from the LMFDB when the curves were returned by the search() function.
In this case, elliptic curves over the rationals are only an okay example, as they’re really well studied and sage can compute much of the data very quickly. On the other hand, through the LMFDB there are millions of examples and corresponding data at one’s fingertips.

This is where we’re really looking for input.

Think of what you might want to have easy access to through an interface from sage to the LMFDB, and tell us. We’re actively seeking comments, suggestions, and requests. Elliptic curves over the rationals are a prototype, and the LMFDB has lots of (much more challenging to compute) data. There is data on the LMFDB that is simply not accessible from within sage.
email: david@lowryduda.com, or post an issue on https://github.com/LMFDB/lmfdb/issues

Now let’s describe what’s going on under the hood a little bit

There is an API for the LMFDB at http://beta.lmfdb.org/api/. This API is a bit green, and we will change certain aspects of it to behave better in the future. A call to the API looks like

http://beta.lmfdb.org/api/elliptic_curves/curves/?rank=i1&conductor=i11050

The result is a large mess of data, which can be exported as json and parsed.
But that’s hard, and the resulting data are not sage objects. They are just strings or ints, and these require time and thought to parse.
So we created a module in sage that writes the API call and parses the output back into sage objects. The 22 curves given by the above API call are the same 22 curves returned by this call:

In [9]:
Es = lmfdb_ecurve.search(rank=1, conductor=11050, max_items=25)
print(len(Es))
E = Es[0]
22
The total functionality of this search function is visible from its current documentation.

In [10]:
# Execute this cell for the documentation
print(lmfdb_ecurve.search.__doc__)
    Search the LMFDB for an elliptic curve.

    Note that all inputs are optional, but at least one input is necessary.

    INPUT:

    -  ``label=l`` -- a string ``l`` representing a label in the LMFDB.

    -  ``degree=d`` -- an int ``d`` giving the minimum degree of a
       parameterization of the modular curve

    -  ``conductor=c`` -- an int ``c`` giving the conductor of the curve

    -  ``min_conductor=mc`` -- an int ``mc`` giving a lower bound on the
       conductor for desired curves

    -  ``max_conductor=mc`` -- an int ``mc`` giving an upper bound on the
       conductor for desired curves

    -  ``torsion_order=t`` -- an int ``t`` giving the order of the torsion
       subgroup of the curve

    -  ``rank=r`` -- an int ``r`` giving the rank of the curve

    -  ``regulator=f`` -- a float ``f`` giving the regulator of the curve

    -  ``max_items=m`` -- an int ``m`` (default: 10, max: 100) indicating the
       maximum number of results to return

    -  ``base_item=b`` -- an int ``b`` (default: 0) specifying where to start
       returning values from. The search will begin by returning the ``b``th
       curve. Combined with ``max_items`` to return data in chunks.

    -  ``sort=s`` -- a string ``s`` specifying what database field to sort the
       results on. See the LMFDB api for more info.

    EXAMPLES::

        sage: Es = search(conductor=11050, rank=2)
        [Elliptic Curve defined by y^2 + x*y = x^3 - x^2 - 442*x + 1716 over Rational Field, Elliptic Curve defined by y^2 + x*y = x^3 - x^2 + 1558*x + 11716 over Rational Field]
        sage: E = E[0]
        sage: E.conductor()
        11050
    
In [11]:
# So, for instance, one could perform the following search, finding a unique elliptic curve
lmfdb_ecurve.search(rank=2, torsion_order=3, degree=4608)
Out[11]:
[Elliptic Curve defined by y^2 + y = x^3 + x^2 - 5155*x + 140756 over Rational Field]

What if there are no curves?

If there are no curves satisfying the search criteria, then a message is displayed and that’s that. These searches may take a couple of seconds to complete.
For example, no elliptic curve in the database has rank 5.

In [12]:
lmfdb_ecurve.search(rank=5)
No fields were found satisfying input criteria.

How does one step through the data?

Right now, at most 100 curves are returned in a single API call. This is the limit even from directly querying the API. But one can pass in the argument base_item (the name will probably change… to skip? or perhaps to offset?) to start returning at the base_itemth element.

In [13]:
from pprint import pprint
pprint(lmfdb_ecurve.search(rank=1, max_items=3))              # The last item in this list
print('')
pprint(lmfdb_ecurve.search(rank=1, max_items=3, base_item=2)) # should be the first item in this list
[Elliptic Curve defined by y^2 + x*y = x^3 - 887688*x - 321987008 over Rational Field,
 Elliptic Curve defined by y^2 + x*y + y = x^3 - x^2 + 10795*x - 97828 over Rational Field,
 Elliptic Curve defined by y^2 + x*y + y = x^3 - x^2 - 2294115305*x - 42292668425178 over Rational Field]

[Elliptic Curve defined by y^2 + x*y + y = x^3 - x^2 - 2294115305*x - 42292668425178 over Rational Field,
 Elliptic Curve defined by y^2 + x*y + y = x^3 - x^2 - 3170*x - 49318 over Rational Field,
 Elliptic Curve defined by y^2 + y = x^3 + 1050*x - 26469 over Rational Field]
Included in the documentation is also a bit of hopefulness. Right now, the LMFDB API does not actually accept max_conductor or min_conductor (or arguments of that type). But it will sometime. (This introduces a few extra difficulties on the server side, and so it will take some extra time to decide how to do this).

In [14]:
lmfdb_ecurve.search(rank=1, min_conductor=500, max_conductor=10000)  # Not implemented
---------------------------------------------------------------------------
NotImplementedError                       Traceback (most recent call last)
<ipython-input-14-3d98f2cf7a13> in <module>()
----> 1 lmfdb_ecurve.search(rank=Integer(1), min_conductor=Integer(500), max_conductor=Integer(10000))  # Not implemented

/home/djlowry/Dropbox/EllipticCurve_LMFDB/LMFDB2sage/elliptic_curves.py in search(**kwargs)
     76             kwargs[item]
     77             raise NotImplementedError("This would be a great thing to have, " +
---> 78                 "but the LMFDB api does not yet provide this functionality.")
     79         except KeyError:
     80             pass

NotImplementedError: This would be a great thing to have, but the LMFDB api does not yet provide this functionality.
Our EllipticCurve_rational_field_lmfdb class constructs a sage elliptic curve from the json and overrides (some of) the default methods in sage if there is quicker data available on the LMFDB. In principle, this new object is just a sage object with some slightly different methods.
Generically, documentation and introspection on objects from this class should work. Much of sage’s documentation carries through directly.

In [15]:
print(E.gens.__doc__)
        Return generators for the Mordell-Weil group E(Q) *modulo*
        torsion.

        .. warning::

           If the program fails to give a provably correct result, it
           prints a warning message, but does not raise an
           exception. Use :meth:`~gens_certain` to find out if this
           warning message was printed.

        INPUT:

        - ``proof`` -- bool or None (default None), see
          ``proof.elliptic_curve`` or ``sage.structure.proof``

        - ``verbose`` - (default: None), if specified changes the
           verbosity of mwrank computations

        - ``rank1_search`` - (default: 10), if the curve has analytic
          rank 1, try to find a generator by a direct search up to
          this logarithmic height.  If this fails, the usual mwrank
          procedure is called.

        - algorithm -- one of the following:

          - ``'mwrank_shell'`` (default) -- call mwrank shell command

          - ``'mwrank_lib'`` -- call mwrank C library

        - ``only_use_mwrank`` -- bool (default True) if False, first
          attempts to use more naive, natively implemented methods

        - ``use_database`` -- bool (default True) if True, attempts to
          find curve and gens in the (optional) database

        - ``descent_second_limit`` -- (default: 12) used in 2-descent

        - ``sat_bound`` -- (default: 1000) bound on primes used in
          saturation.  If the computed bound on the index of the
          points found by two-descent in the Mordell-Weil group is
          greater than this, a warning message will be displayed.

        OUTPUT:

        - ``generators`` - list of generators for the Mordell-Weil
           group modulo torsion

        IMPLEMENTATION: Uses Cremona's mwrank C library.

        EXAMPLES::

            sage: E = EllipticCurve('389a')
            sage: E.gens()                 # random output
            [(-1 : 1 : 1), (0 : 0 : 1)]

        A non-integral example::

            sage: E = EllipticCurve([-3/8,-2/3])
            sage: E.gens() # random (up to sign)
            [(10/9 : 29/54 : 1)]

        A non-minimal example::

            sage: E = EllipticCurve('389a1')
            sage: E1 = E.change_weierstrass_model([1/20,0,0,0]); E1
            Elliptic Curve defined by y^2 + 8000*y = x^3 + 400*x^2 - 320000*x over Rational Field
            sage: E1.gens() # random (if database not used)
            [(-400 : 8000 : 1), (0 : -8000 : 1)]
        
Modified methods should have a note indicating that the data comes from the LMFDB, and then give sage’s documentation. This is not yet implemented. (So if you examine the current version, you can see some incomplete docstrings like regulator().)

In [16]:
print(E.regulator.__doc__)
        Return the regulator of the curve. This is taken from the lmfdb if available.

        NOTE:
            In later implementations, this docstring will probably include the
            docstring from sage's regular implementation. But that's not
            currently the case.
        

This concludes our demo of an interface between sage and the LMFDB.

Thank you, and if you have any questions, comments, or concerns, please find me/email me/raise an issue on LMFDB’s github.
XKCD's automation

Posted in Expository, LMFDB, Math.NT, Mathematics, Programming, Python, sagemath | Tagged , , , , , , , | Leave a comment

Experimenting with latex2html5: PSTricks to HTML interactivity

I recently learned about latex2html5, a javascript library which allows one to write LaTeX and PSTricks to produce interactive objects on websites. At its core, it functions in a similar way to MathJax, which is what I use to generate mathematics on this (and my other) sites. As an example of MathJax, I can write the following.

$$ \int_0^1 f(x) dx = F(1) - F(0). $$

The dream of latex2html5 is to be able to describe a diagram using the language of PSTricks inside LaTeX, throw in a bit of sugar to describe how interactivity should work on the web, and then render this to a beautiful svg using javascript.

Unfortunately, I did not try to make this work on WordPress (as WordPress is a bit finicky about how it interacts with javascript). So instead, I wrote a more detailed description about latex2html5, including some examples and some criticisms, on my non-Wordpress website david.lowryduda.com.

 

Posted in Programming | Leave a comment

Revealing zero in fully homomorphic encryption is a Bad Thing

When I was first learning number theory, cryptography seemed really fun and really practical. I thought elementary number theory was elegant, and that cryptography was an elegant application. As I continued to learn more about mathematics, and in particular modern mathematics, I began to realize that decades of instruction and improvement (and perhaps of more useful points of view) have simplified the presentation of elementary number theory, and that modern mathematics is less elegant in presentation.

Similarly, as I learned more about cryptography, I learned that though the basic ideas are very simple, their application is often very inelegant. For example, the basis of RSA follows immediately from Euler’s Theorem as learned while studying elementary number theory, or alternately from Lagrange’s Theorem as learned while studying group theory or abstract algebra. And further, these are very early topics in these two areas of study!

But a naive implementation of RSA is doomed (For that matter, many professional implementations have their flaws too). Every now and then, a very clever expert comes up with a new attack on popular cryptosystems, generating new guidelines and recommendations. Some guidelines make intuitive sense [e.g. don’t use too small of an exponent for either the public or secret keys in RSA], but many are more complicated or designed to prevent more sophisticated attacks [especially side-channel attacks].

In the summer of 2013, I participated in the ICERM IdeaLab working towards more efficient homomorphic encryption. We were playing with existing homomorphic encryption schemes and trying to come up with new methods. One guideline that we followed is that an attacker should not be able to recognize an encryption of zero. This seems like a reasonable guideline, but I didn’t really understand why, until I was chatting with others at the 2017 Joint Mathematics Meetings in Atlanta.

It turns out that revealing zero isn’t just against generally sound advice. Revealing zero is a capital B capital T Bad Thing.

Basic Setup

For the rest of this note, I’ll try to identify some of this reasoning.

In a typical cryptosystem, the basic setup is as follows. Andrew has a message that he wants to send to Beatrice. So Andrew converts the message into a list of numbers $M$, and uses some sort of encryption function $E(\cdot)$ to encrypt $M$, forming a ciphertext $C$. We can represent this as $C = E(M)$. Andrew transmits $C$ to Beatrice. If an eavesdropper Eve happens to intercept $C$, it should be very hard for Eve to recover any information about the original message from $C$. But when Beatrice receives $C$, she uses a corresponding decryption function $D(\cdot)$ to decrypt $C$, computing $M = D(C)$.

Often, the encryption and decryption techniques are based on number theoretic or combinatorial primitives. Some of these have extra structure (or at least they do with basic implementation). For instance, the RSA cryptosystem involves a public exponent $e$, a public mod $N$, and a private exponent $d$. Andrew encrypts the message $M$ by computing $C = E(M) \equiv M^e \bmod N$. Beatrice decrypts the message by computing $M = C^d \equiv M^{ed} \bmod N$.

Notice that in the RSA system, given two messages $M_1, M_2$ and corresponding ciphertexts $C_1, C_2$, we have that
\begin{equation}
E(M_1 M_2) \equiv (M_1 M_2)^e \equiv M_1^e M_2^e \equiv E(M_1) E(M_2) \pmod N. \notag
\end{equation}
The encryption function $E(\cdot)$ is a group homomorphism. This is an example of extra structure.
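
As a toy illustration of this multiplicative structure (with tiny, completely insecure numbers chosen only for the example):

# Toy RSA parameters -- far too small to be secure, chosen only for illustration.
p, q = 61, 53
N = p * q            # 3233
e = 17               # public exponent
d = 2753             # private exponent, satisfying e*d = 1 mod lcm(p-1, q-1)

def E(m):
    return pow(m, e, N)

def D(c):
    return pow(c, d, N)

M1, M2 = 42, 55
assert E(M1 * M2 % N) == E(M1) * E(M2) % N    # E is multiplicative
assert D(E(M1)) == M1 and D(E(M2)) == M2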

A fully homomorphic cryptosystem has an encryption function $E(\cdot)$ satisfying both $E(M_1 + M_2) = E(M_1) + E(M_2)$ and $E(M_1M_2) = E(M_1)E(M_2)$ (or more generally an analogous pair of operations). That is, $E(\cdot)$ is a ring homomorphism.

This extra structure allows for (a lot of) extra utility. A fully homomorphic $E(\cdot)$ would allow one to perform meaningful operations on encrypted data, even though you can’t read the data itself. For example, a clinic could store (encrypted) medical information on an external server. A doctor or nurse could pull out a cellphone or tablet with relatively little computing power or memory and securely query the medical data. Fully homomorphic encryption would allow one to securely outsource data infrastructure.

A different usage model suggests that we use a different mental model. So suppose Alice has sensitive data that she wants to store for use on EveCorp’s servers. Alice knows an encryption method $E(\cdot)$ and a decryption method $D(\cdot)$, while EveCorp only ever has mountains of ciphertexts, and cannot read the data [even though they have it].

Why revealing zero is a Bad Thing

Let us now consider some basic cryptographic attacks. We should assume that EveCorp has access to a long list of plaintext messages $M_i$ and their corresponding ciphertexts $C_i$. Not everything, but perhaps from small leaks or other avenues. Among the messages $M_i$ it is very likely that there are two messages $M_1, M_2$ which are relatively prime. Then an application of the Euclidean Algorithm gives a linear combination of $M_1$ and $M_2$ such that
\begin{equation}
M_1 x + M_2 y = 1 \notag
\end{equation}
for some integers $x,y$. Even though EveCorp doesn’t know the encryption method $E(\cdot)$, since we are assuming that they have access to the corresponding ciphertexts $C_1$ and $C_2$, EveCorp has access to an encryption of $1$ using the ring homomorphism properties:
\begin{equation}\label{eq:encryption_of_one}
E(1) = E(M_1 x + M_2 y) = x E(M_1) + y E(M_2) = x C_1 + y C_2.
\end{equation}
By multiplying $E(1)$ by $m$, EveCorp has access to a plaintext and encryption of $m$ for any message $m$.

Now suppose that EveCorp can always recognize an encryption of $0$. Then EveCorp can mount a variety of attacks exposing information about the data it holds.

For example, EveCorp can test whether a particular message $m$ is contained in the encrypted dataset. First, EveCorp generates a ciphertext $C_m$ for $m$ by multiplying $E(1)$ by $m$, as in \eqref{eq:encryption_of_one}. Then for each ciphertext $C$ in the dataset, EveCorp computes $C - C_m$. If $m$ is contained in the dataset, then $C - C_m$ will be an encryption of $0$ for the $C$ corresponding to $m$. EveCorp recognizes this, and now knows that $m$ is in the data. To be more specific, perhaps a list of encrypted names of medical patients appears in the data, and EveCorp wants to see if JohnDoe is in that list. If they can recognize encryptions of $0$, then EveCorp can access this information.
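
Schematically, that membership test looks something like the following sketch, where the homomorphic operations and the zero-recognition oracle are all hypothetical stand-ins rather than any real API:

def dataset_contains(ciphertexts, m, E_one, hom_add, hom_scale, looks_like_zero):
    """Sketch of the membership test described above.

    E_one is an encryption of 1 (built as in the displayed equation above),
    hom_add and hom_scale use the ring homomorphism to add ciphertexts and to
    multiply one by a known integer, and looks_like_zero is the assumed ability
    to recognize encryptions of 0. All of these are hypothetical stand-ins."""
    C_m = hom_scale(m, E_one)                                # an encryption of m
    for C in ciphertexts:
        if looks_like_zero(hom_add(C, hom_scale(-1, C_m))):  # tests C - C_m
            return True
    return False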

And thus it is unacceptable for external entities to be able to consistently recognize encryptions of $0$.

Up to now, I’ve been a bit loose by saying “an encryption of zero” or “an encryption of $m$”. The reason for this is that to protect against recognition of encryptions of $0$, some entropy is added to the encryption function $E(\cdot)$, making it multivalued. So if we have a message $M$ and we encrypt it once to get $E(M)$, and we encrypt $M$ later and get $E'(M)$, it is often not true that $E(M) = E'(M)$, even though they are both encryptions of the same message. But these systems are designed so that it is true that $D(E(M)) = D(E'(M)) = M$, so that the entropy doesn’t matter.

This is a separate matter, and something that I will probably return to later.

Posted in Crypto, Math.NT, Mathematics, Programming | Tagged , | Leave a comment