
Math 100 Fall 2016: Concluding Remarks

It is that time of year. Classes are over. Campus is emptying. Soon it will be mostly emptiness, snow, and grad students (who of course never leave).

I like to take some time to reflect on the course. How did it go? What went well and what didn’t work out? And now that all the numbers are in, we can examine course trends and data.

Since numbers are direct and graphs are pretty, let’s look at the numbers first.

Math 100 grades at a glance

Let’s get an understanding of the distribution of grades in the course, all at once.

[Figure: box plots of the grade distributions.]

These are classic box plots. The center line of each box denotes the median. The left and right ends of the box indicate the 1st and 3rd quartiles. As a quick reminder, the 1st quartile is the point where 25% of students received that grade or lower. The 3rd quartile is the point where 75% of students received that grade or lower. So within each box lies 50% of the course.

Each box has two arms (or “whiskers”) extending out, indicating the range of the remaining grades. Points that are plotted separately are statistical outliers, meaning that they lie more than $1.5 \cdot (Q_3 - Q_1)$ above $Q_3$ or below $Q_1$ (where $Q_1$ denotes the first quartile and $Q_3$ the third quartile).

A bit more information about the distribution itself can be seen in the following graph.

[Figure: violin plots of the grade distributions.]

Within each blob, you’ll notice an embedded box-and-whisker graph. The white dots indicate the medians, and the thicker black bars indicate the central 50% of the grades. The width of each colored blob roughly indicates how many students scored within that region. [As an aside, each blob has the same area, so widths are directly comparable across blobs.]

So what can we determine from these graphs? Firstly, students did extremely well in their recitation sections and on the homework. I am perhaps most stunned by the tightness of the homework distribution. Remarkably, 75% of students had at least a 93 homework average. Recitation scores were very similar.

I also notice some patterns between the two midterms and final. The median on the first midterm was very high and about 50% of students earned a score within about 12 points of the median. The median on the second midterm was a bit lower, but the spread of the middle 50% of students was about the same. However the lower end was significantly lower on the second midterm in comparison to the first midterm. The median on the final was significantly lower, and the 50% spread was much, much larger.

The distribution of the Overall grade looks very similar to the distribution of the first midterm, except shifted a bit.

It is interesting to note that Recitation (10%), Homework (20%), and the First Midterm (20%) accounted for 50% of the course grade, while the Second Midterm (20%) and the Final (30%) accounted for the other 50%. The Recitation, Homework, and First Midterm grades pulled the Overall grade distribution up, while the Second Midterm and Final pulled it down.

Correlation between assignments and Overall Grade

I pose the question: was any individual assignment type a good predictor of the final grade? For example, to what extent can we predict a student’s final grade based on the First Midterm grade?

[Figure: Homework vs Overall grade.]

No: doing well on homework is a terrible predictor of final grade. The huge vertical cluster of dots indicates that overall grades vary significantly across a very small range of homework averages. However, doing poorly on homework is a great predictor of doing poorly overall. No one whose homework average was below an 80 got an A in the course, and having a homework grade below a 70 is a very strong indicator of failing the course. In terms of Pearson’s R correlation, one might say that about 40% of overall performance is predicted from performance on homework (which is very little).

Although drastic, this is in line with my expectations for calculus courses, though it is perhaps a bit more extreme than normal — the level of clustering in the homework averages is truly stunning. Explaining this is a bit hard. It is possible to get homework help from the instructor or TA, to work with other students, or to get help from the Math Resource Center or other tutoring. It is also possible to cheat, whether with a solutions manual (which I know some students have), with a paid answer service (which I also witnessed), or by checking answers on a computer algebra system like WolframAlpha. Each of these weakens homework as an indicator of mastery.

In the calculus curriculum at Brown, I think it’s safe to say that homework plays a formative role instead of a normative role. It serves to provide opportunities for students to work through and learn material, and we don’t expect the grades to correspond strongly to understanding. To that end, half of the homework isn’t even collected.

m1_vs_overall

m2_vs_overall

The two midterms each correlate pretty strongly with the Overall grade. In particular, the second midterm visually indicates a really strong correlation. Statistically speaking (i.e. from Pearson’s R), it turns out that 67% of the Overall grade can be predicted from the First Midterm (higher than might be expected) and 80% can be predicted from the Second Midterm (which is really, really high).

If we are willing to combine some pieces of information, the Homework and First Midterm (together) predict 77% of the Overall grade. As each student’s initial homework effort is very indicative of later homework, this means that we can often predict a student’s final grade (to a pretty good accuracy) as early as the first midterm.

(The Homework and the Second Midterm together predict 85% of the Overall grade. The two midterms together predict 88% of the Overall grade.)
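For the curious: I read “percent predicted” as the squared Pearson correlation, which is the $R^2$ of a least-squares fit. Here is a minimal sketch in Python of how numbers like these might be produced; the column names and the data are invented placeholders, not the real gradebook.

    import numpy as np
    import pandas as pd
    from scipy import stats

    # Hypothetical gradebook: one row per student, invented values.
    grades = pd.DataFrame({
        "hw":      [95, 88, 99, 72, 93, 97],
        "m1":      [85, 70, 92, 55, 88, 78],
        "overall": [88, 75, 95, 60, 90, 82],
    })

    # One predictor: squared Pearson correlation, read as a percentage.
    r, _ = stats.pearsonr(grades["hw"], grades["overall"])
    print(f"homework alone: {r**2:.0%}")

    # Two predictors together: the R^2 of a least-squares fit on both.
    X = np.column_stack([grades["hw"], grades["m1"], np.ones(len(grades))])
    y = grades["overall"].to_numpy(dtype=float)
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    r2 = 1 - (y - X @ coeffs).var() / y.var()
    print(f"homework and first midterm together: {r2:.0%}")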

This has always surprised me a bit, since for many students the first midterm is at least partially a review of material taught before. However, this course is very cumulative, so it does make sense that doing poorly on earlier tests indicates a hurdle that must be overcome in order to succeed on later tests. This is one of the unforgiving aspects of math, the sciences, and programming — early disadvantages compound. I’ve noted roughly this pattern in the past as well.

[Figure: Final Exam vs Overall grade.]

However, the correlation between the Final and the Overall grade is astounding. I mean, look at how much the relationship looks like a line. Even the distributions (shown around the edges) look similar. Approximately 90% of the Overall grade is predicted by the grade on the Final Exam.

This is a bit atypical. One should expect a somewhat high correlation, as the final exam is cumulative and covers everything from the course (or at least tries to). But 90% is extremely high.

I think one reason why this occurred this semester is that the final exam was quite hard. It was distinctly harder than the midterms (though still easier than many of the homework problems). A hard final gives more opportunities for students who really understand the material to demonstrate their mastery. Conversely, a hard final punishes students with only a cursory understanding. Although stressful, I’ve always been a fan of exams that are difficult enough to distinguish between students, and to provide a chance for students to catch up. See Chances for a Comeback below for more on this.

Related statistics of interest might concern to what extent performance on the First Midterm predicts performance on the Second Midterm (44%) or the Final Exam (48%), or to what extent the Second Midterm predicts performance on the Final Exam (63%).

Homework and Recitations

[Figure: Homework vs First Midterm.]

As mentioned above, homework performance is a terrible predictor of course grade. I thought it was worth diving into a bit more. Does homework performance predict anything well? The short answer is not really.

Plotting Homework grade vs the First Midterm shows such a lack of meaning that it doesn’t even make sense to try to draw a line of best fit.

To be fair, homework is a better predictor of performance on the Second Midterm and Final Exam, but it’s still very bad.

[Figure: Recitation vs Homework.]

Here’s a related question: what about Recitation sections? Are these good predictors of any other aspect of the course?

Plotting Recitation vs Homework is sort of interesting. Evidently, most people did very well on both homework and recitation. It is perhaps no surprise that most students who did very well in Recitation also did very well on their Homework, and vice versa. However, it turns out that there are more people with high recitation grades and low homework grades than the other way around. On reflection, this makes sense.

These distributions are so tight that it still doesn’t make sense to try to draw a line of best fit or to talk about Pearson coefficients – most variation is simply too small to be meaningful.

Together, Homework and Recitation predict a measly 50% of the Overall grade of the course (in the Pearson’s R sense). One would expect more, as Homework and Recitation are directly responsible for 30% of the Overall grade, and one would expect homework and recitation to correlate at least somewhat meaningfully with the rest of the graded content of the course, right?

I guess not.

So what does this mean about recitation and homework? Should we toss them aside? Does something need to be changed?

I would say “Not necessarily,” as it is important to recognize that not all grades are equal. Both homework and recitation are places for students to experiment and learn. Recitations are supposed to be times when students are still learning material. They are meant to be low-stakes and safe, where students can mess up, fall over, and get back up again with the help of their peers and TA. I defend the lack of emphasis on grades, and the absence of rigorous examination, during recitation.

Homework is sort of the same, and sort of completely different. What gives me pause concerning homework is that homework is supposed to be the barometer by which students can measure their own understanding. When students ask us about how they should prepare for exams, our usual response is “If you can do all the homework (including self-check) without referencing the book, then you will be well-prepared for the exam.” If homework grade is such a poor predictor of exam grades, then is it possible that homework gives a poor ruler for students to measure themselves by?

I’m not sure. Perhaps it would be a good idea to indicate all the relevant questions in the textbook so that students have more problems to work on. In theory, students could do this themselves (and for that matter, I’m confident that such students would do very well in the course). But the problem is that we only cover a subset of the material in most sections of the textbook, and many questions (even those right next to ones we assign) require ideas or concepts that we don’t teach.

On the other hand, learning how to learn is itself a necessary skill, and probably one that most people struggle with the first time they genuinely have to exercise it. It must be learned sooner or later.

Chances for a Comeback

The last numerical aspect I’ll consider is whether it is possible to come back after doing badly on an earlier assessment. There are two things to consider: whether a comeback is actually feasible, and whether any students actually did come back after a poor early performance.

As to whether it is possible, the answer is yes (but it may be hard). And the reason why is that the Second Midterm and Final grades were each relatively low. It may be counterintuitive, but in order to return from a failing grade, it is necessary that there be enough room to actually come back.

Suppose Aiko is a pretty good student, but it just so happens that she makes a 49 on the first midterm due to some particular misunderstanding. If the class average on every assessment is a 90, then Aiko cannot claw her way back. Even if Aiko makes a 100 on everything else, her final grade would be $0.20 \cdot 49 + 0.80 \cdot 100 = 89.8$, below a 90 and thus below average. Aiko would probably make a B.

In this situation, the class is too easy, and thus there are no chances for students to overcome a setback on any single exam.

On the other hand, suppose that Bilal makes a 49 on the first midterm, but that the class average is a 75 overall. If Bilal makes a 100 on everything else, Bilal will end with the same 89.8, significantly above the class average. Bilal would probably make an A.

In this course, the mean overall was a 78, and the standard deviation was about 15. In this case, an 89 would be an A. So there was enough space and distance to overcome even a disastrous exam.

But, did anyone actually do this? The way I like to look at this is to look at changes in relative performance in terms of standard deviations away from the mean. Performing at one standard deviation below the mean on Midterm 1 and one standard deviation above the mean on Midterm 2 indicates a more meaningful amount of grade fluidity than merely looking at points above or below the mean.
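Concretely, this relative measure is just a z-score. A minimal sketch, assuming the raw scores sit in a plain list:

    import numpy as np

    def z_scores(raw):
        """Convert raw grades to standard deviations above/below the mean."""
        g = np.asarray(raw, dtype=float)
        return (g - g.mean()) / g.std()

    print(z_scores([49, 60, 75, 80, 91]))  # the 49 sits about 1.5 sd below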

[Figure: First Midterm vs Second Midterm, in standard deviations from the mean.]

Looking at the First Midterm vs the Second Midterm, we see that there is a rough linear relationship (Pearson R suggests 44% predictive value). That’s to be expected. What’s relevant now are the points above or below the line $y = x$. To be above the line $y = x$ means that you did better on the Second Midterm than you did on the First Midterm, all in comparison to the rest of the class. To be below means the opposite.

Even more relevant is to be in the Fourth Quadrant, which indicates that you did worse than average on the first midterm and better than average on the second. Looking here, there is a very healthy number of people in the Fourth Quadrant. Many people changed by 2 standard deviations relative to the mean — a very meaningful change. [Many people lost a few standard deviations too; grade mobility is a two-way street.]

[Figures: First Midterm vs Overall and Second Midterm vs Overall, in standard deviations from the mean.]

The First Midterm to the Overall grade shows healthy mobility as well.

The Second Midterm to Overall shows some mobility, but it is interesting that more people lost ground (by performing well on the Second Midterm, and then performing badly Overall) than gained ground (by performing badly on the Second Midterm, but performing well Overall).

Although I don’t show the plots, this trend carries through pretty well. Many people were able to salvage or boost a letter grade based solely on the final (and correspondingly many people managed to lose just enough on the final to drop a letter grade). Interestingly, very many people were able to turn a likely B into an A through the final.

So overall, I would say that it was definitely possible to salvage grades this semester.

If you’ve never thought about this before, then keep this in mind the next time you hear complaints about a course with challenging exams — it gives enough space for students to demonstrate sufficient understanding to make up for a bad past assessment.

Non-Numerical Reflection

The numbers tell some characteristics of the class, but not the whole story.

Standard Class Materials

We used Thomas’ Calculus. I think this is an easy book to teach from, and relatively easy to read. It feels like many other cookie-cutter calculus books (such as Larson and Edwards or Stewart), but it’s quite expensive for students. However, as we do not use an electronic homework component (which seems to be becoming more popular elsewhere), students can at least buy used copies (or use other methods of procurement).

However, solutions manuals are available online (I noticed some students had copies), and some of the pay-for sites provide complete (mostly, but not entirely, correct) solutions as well. This makes some parts of Thomas challenging to use, especially as we do not write our own homework problems. I suppose that this is a big reason why one might want to use an electronic system.

The book has much more material in it than we teach. For instance, the book includes all of a first semester of calculus, and also more details in many sections. We avoid numerical integration, Fourier series, some applications, some details concerning polar and parametric plots, etc. Ideally, there would exist a book catering to exactly our needs. But there isn’t, so I suppose Thomas is about as good as any.

Additional Course Materials

I’ve now taught elementary calculus for a few years, and I’m surprised at how often I am able to reuse two notes I wrote years ago, namely the refresher on first semester calculus (An Intuitive Introduction to Calculus) and my additional note on Taylor series (An Intuitive Overview of Taylor Series). Perhaps more surprisingly, I’m astounded at how often people from other places link to and visit these two notes (and in particular, the Taylor Series note).

These were each written for a Math 100 course in 2013. So my note to myself is that there is good value in writing something well enough that I can reuse it, and others might even find it valuable.

Unfortunately, while I wrote a few notes this semester, I don’t think that they will have the same lasting appeal. The one I wrote on the series convergence tests is something that (perhaps after one more round of editing) I will use each time I teach this subject in the future. I’m tremendously happy with my note on computing $\pi$ with Math 100 tools, but as it sits outside the curriculum, many students won’t actually read it. [However many did read it, and it generated many interesting conversations about actual mathematics]. Perhaps sometime I will teach a calculus class ending with some sort of project, as computing $\pi$ leads to very many interesting and interrelated project ideas.

Course Content

I must admit that I do not know why this course is the way it is, and this bothers me a bit. In many ways, this course is a grab bag of calculus nuggets. Presumably each piece was added in because it is necessary in sufficiently many other places, or is so directly related to the “core material” of this course, that it makes sense to include it. But from what I can tell, these reasons have been lost to the sands of time.

The core material in this course is: Integration by Parts, Taylor’s Theorem, Parametric and Polar coordinates, and First Order Linear Differential Equations. We also spend a large amount of time on other techniques of integration (partial fraction decomposition, trig substitution) and on understanding generic series (including the various series convergence/divergence tests). Along the way, there are some seemingly arbitrary decisions on what to include or exclude. For instance, we learn how to integrate

$$ \int \sin^n x \cos^m x \; dx$$

because we have decided that being able to perform trigonometric substitution in integrals is a good idea. But we omit integrals like

$$ \int \sin(nx) \sin(mx) \; dx$$

which would come up naturally in talking about Fourier series. Fourier series fit naturally into this class, and in some variants of this class they are taught. But so does trigonometric substitution! So what is the rationale here? If the answer is to become better at problem solving or to develop mathematical maturity, then I think it would be good to recognize that explicitly, so that we know which pieces we should feel comfortable adjusting as we build and develop the curriculum in the future. [Also, students should know that calculus is not a pinnacle. See for instance this podcast with Steven Strogatz on Innovation Hub.]

This is not restricted to Brown. I’m familiar with the equivalent of this course at other institutions, and there are similarly arbitrary-seeming differences in what to include or exclude. For years at Georgia Tech, they tossed a several-week unit on linear algebra into this course [although I’ve learned that they stopped that in the past two years]. The AP Calc BC curriculum includes trig substitution but not Fourier series. Perhaps they had a reason?

What this means to me is that the intent of this course has become muddled, and separated from the content of the course. This is an overwhelmingly hard task to try to fix, as a second semester of calculus fits right in the middle of so many other pieces. Yet I would be very grateful to the instructor who sits down and identifies reasons for or against inclusion of the various topics in this course, or perhaps cuts the calculus curriculum into pieces and rearranges them to fit modern necessities.

A Parachute is only necessary to go skydiving twice

This is the last class I teach at Brown as a graduate student (and most likely, ever). Amusingly, I taught it in the same room as the first course I taught as a graduate student. I’ve learned quite a bit about teaching in between, but in many ways it feels the same. Just like for students, the only scary class is the first one, although exams can be a real pain (to take, or to grade).

It’s been a pleasure. As usual, if you have any questions, please let me know.


Computing pi with tools from Calculus

Computing $\pi$

This note was originally written in the context of my fall Math 100 class at Brown University. It is also available as a pdf note.

While investigating Taylor series, we proved that
\begin{equation}\label{eq:base}
\frac{\pi}{4} = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \frac{1}{9} - \cdots
\end{equation}
Let’s remind ourselves how. Begin with the geometric series
\begin{equation}
\frac{1}{1 + x^2} = 1 - x^2 + x^4 - x^6 + x^8 - \cdots = \sum_{n = 0}^\infty (-1)^n x^{2n}. \notag
\end{equation}
(We showed that this has interval of convergence $\lvert x \rvert < 1$). Integrating this geometric series yields
\begin{equation}
\int_0^x \frac{1}{1 + t^2} dt = x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \cdots = \sum_{n = 0}^\infty (-1)^n \frac{x^{2n+1}}{2n+1}. \notag
\end{equation}
Note that this has interval of convergence $-1 < x \leq 1$.

We also recognize this integral as
\begin{equation}
\int_0^x \frac{1}{1 + t^2} dt = \text{arctan}(x), \notag
\end{equation}
one of the common integrals arising from trigonometric substitution. Putting these together, we find that
\begin{equation}
\text{arctan}(x) = x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \cdots = \sum_{n = 0}^\infty (-1)^n \frac{x^{2n+1}}{2n+1}. \notag
\end{equation}
As $x = 1$ is within the interval of convergence, we can substitute $x = 1$ into the series to find the representation
\begin{equation}
\text{arctan}(1) = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots = \sum_{n = 0}^\infty (-1)^n \frac{1}{2n+1}. \notag
\end{equation}
Since $\text{arctan}(1) = \frac{\pi}{4}$, this gives the representation for $\pi/4$ given in \eqref{eq:base}.

However, since $x=1$ was at the very edge of the interval of convergence, this series converges very, very slowly. For instance, using the first $50$ terms gives the approximation
\begin{equation}
\pi \approx 3.121594652591011. \notag
\end{equation}
The expansion of $\pi$ is actually
\begin{equation}
\pi = 3.141592653589793238462\ldots \notag
\end{equation}
So the first $50$ terms of \eqref{eq:base} gives two digits of accuracy. That’s not very good.
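This is easy to reproduce. A quick sketch in Python:

    from math import pi

    # First 50 terms of 1 - 1/3 + 1/5 - 1/7 + ..., then multiply by 4.
    approx = 4 * sum((-1) ** n / (2 * n + 1) for n in range(50))

    print(approx)       # 3.12159465259..., the value above
    print(pi - approx)  # about 0.02, so only two digits are correct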

I think it is very natural to ask: can we do better? This series converges slowly — can we find one that converges more quickly?

Aside

As an aside: one might also ask if we can somehow speed up the convergence of the series we already have. It turns out that in many cases, you can! For example, we know that the sum of an alternating series lies between any two consecutive partial sums. So what if you took the average of two consecutive partial sums? [Equivalently, what if you added only one half of the last term in a partial sum? Do you see why these are the same?]

The average of the partial sum of the first 49 terms and the partial sum of the first 50 terms is actually
\begin{equation}
3.141796672793031, \notag
\end{equation}
which is correct to within $0.001$. That’s an improvement!

What if you do still more? More on this can be found in the last Section.

Estimating $\pi$ through a different series

We return to the question: can we find a series that gives us $\pi$, but which converges faster? Yes we can! And we don’t have to look too far — we can continue to rely on our expansion for $\text{arctan}(x)$.

We had been using that $\text{arctan}(1) = \frac{\pi}{4}$. But we also know that $\text{arctan}(1/\sqrt{3}) = \frac{\pi}{6}$. Since $1/\sqrt{3}$ is closer to the center of the power series than $1$, we should expect that the convergence is much better.

Recall that
\begin{equation}
\text{arctan}(x) = x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \cdots = \sum_{n = 0}^\infty (-1)^n \frac{x^{2n + 1}}{2n + 1}. \notag
\end{equation}
Then we have that
\begin{align}
\text{arctan}\left(\frac{1}{\sqrt 3}\right) &= \frac{1}{\sqrt 3} - \frac{1}{3(\sqrt 3)^3} + \frac{1}{5(\sqrt 3)^5} - \cdots \notag \\
&= \frac{1}{\sqrt 3} \left(1 - \frac{1}{3 \cdot 3} + \frac{1}{5 \cdot 3^2} - \frac{1}{7 \cdot 3^3} + \cdots \right) \notag \\
&= \frac{1}{\sqrt 3} \sum_{n = 0}^\infty (-1)^n \frac{1}{(2n + 1) 3^n}. \notag
\end{align}
Therefore, we have the equality
\begin{equation}
\frac{\pi}{6} = \frac{1}{\sqrt 3} \sum_{n = 0}^\infty (-1)^n \frac{1}{(2n + 1) 3^n} \notag
\end{equation}
or rather that
\begin{equation}
\pi = 2 \sqrt{3} \sum_{n = 0}^\infty (-1)^n \frac{1}{(2n + 1) 3^n}. \notag
\end{equation}
From a computation perspective, this is far superior. For instance, based on our understanding of error from the alternating series test, using the first $10$ terms of this series will approximate $\pi$ to within
\begin{equation}
2 \sqrt 3 \frac{1}{23 \cdot 3^{11}} \approx \frac{1}{1176000}. \notag
\end{equation}

Let’s check this.
\begin{equation}
2 \sqrt 3 \left(1 - \frac{1}{3\cdot 3} + \frac{1}{5 \cdot 3^2} - \cdots + \frac{1}{21 \cdot 3^{10}}\right) = 3.1415933045030813. \notag
\end{equation}
Look at how close that approximation is, and we only used the first $10$ terms!
Roughly speaking, each additional 2.5 terms yields another digit of $\pi$. Using the first $100$ terms would give the first 48 digits of $\pi$.
Using the first million terms would give the first 470000 (or so) digits of $\pi$ — and this is definitely doable, even on a personal laptop. (On my laptop, it takes approximately 4 milliseconds to compute the first 20 digits of $\pi$ using this technique).
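Here is a sketch of that computation. I use Python’s decimal module so that the arithmetic is not limited to float precision; a plain float is fine for ten terms, but not for hundreds.

    from decimal import Decimal, getcontext

    getcontext().prec = 30  # work with 30 significant digits

    sqrt3 = Decimal(3).sqrt()
    total = Decimal(0)
    for n in range(11):  # the terms n = 0 through n = 10, as above
        term = Decimal(1) / ((2 * n + 1) * Decimal(3) ** n)
        total += -term if n % 2 else term

    print(2 * sqrt3 * total)  # 3.14159330450308..., as claimed above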

Even Better Series

I think it is very natural to ask again: can we find an even faster converging series? Perhaps we can choose better values to evaluate arctan at? This turns out to be a very useful line of thought, and it leads to some of the best-known methods for evaluating $\pi$. Through clever choices of values and identities involving arctangents, one can construct extremely quickly converging series for $\pi$. For more information on this line of thought, look up Machin-like formula.
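To give one concrete (and famous) example that we do not derive here: Machin’s identity $\frac{\pi}{4} = 4 \arctan \frac{1}{5} - \arctan \frac{1}{239}$ evaluates the arctangent series at small arguments, where it converges extremely quickly. A sketch:

    def arctan_series(x, terms):
        """Partial sum of x - x^3/3 + x^5/5 - ..., the arctan series."""
        return sum((-1) ** n * x ** (2 * n + 1) / (2 * n + 1)
                   for n in range(terms))

    # Machin's identity: pi/4 = 4*arctan(1/5) - arctan(1/239).
    approx = 4 * (4 * arctan_series(1 / 5, 12) - arctan_series(1 / 239, 4))
    print(approx)  # agrees with pi to the limits of double precision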

Patterns in the Approximation of $\pi/4$

 

Looking back at the approximation of $\pi$ coming from the first $50$ terms of the series
\begin{equation}\label{eq:series_pi4_base}
1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots
\end{equation}
we found an approximation of $\pi$, which I’ll represent as $\widehat{\pi}$,
\begin{equation}
\pi \approx \widehat{\pi} = 3.121594652591011. \notag
\end{equation}
Let’s look very carefully at how this compares to $\pi$, up to the first $10$ decimals. We color the incorrect digits in ${\color{orange}{orange}}$.
\begin{align}
\pi &= 3.1415926535\ldots \notag \\
\widehat{\pi} &= 3.1{\color{orange}2}159{\color{orange}4}65{\color{orange}2}5 \notag
\end{align}
Notice that most of the digits are correct — in fact, only three (of the first ten) are incorrect! Isn’t that weird?

It happens to be that when one uses the first $10^N / 2$ terms (for any $N$) of the series \eqref{eq:series_pi4_base}, there will be a pattern of mostly correct digits with disjoint strings of incorrect digits in the middle. This is an unusual and surprising phenomenon.
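If you would like to see this pattern at the next scale ($10^3/2 = 500$ terms), here is a sketch; I reach for the (assumed available) mpmath library only so that enough digits are printed for the comparison.

    from mpmath import mp, mpf

    mp.dps = 30  # thirty digits of working precision
    approx = 4 * sum((-1) ** n / mpf(2 * n + 1) for n in range(500))

    print(mp.pi)   # 3.14159265358979323846...
    print(approx)  # mostly correct digits, broken by short error strings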

The positions of the incorrect digits can be predicted. Although I won’t go into any detail here, the positions of the errors are closely related to something called Euler Numbers or, more deeply, to Boole Summation.

Playing with infinite series leads to all sorts of interesting patterns. There is a great history of mathematicians and physicists messing around with series and stumbling across really deep ideas.

Speeding up computation

Take an alternating series
\begin{equation}
\sum_{n = 0}^\infty (-1)^{n} a_n = a_0 – a_1 + a_2 – a_3 + \cdots \notag
\end{equation}
If $\{a_n\}$ is a sequence of positive, decreasing terms with limit $0$, then the alternating series converges to some value $S$. Further, consecutive partial sums bound the value of $S$, in that
\begin{equation}
\sum_{n = 0}^{2K-1} (-1)^{n} a_n \leq S \leq \sum_{n = 0}^{2K} (-1)^{n} a_n. \notag
\end{equation}
For example,
\begin{equation}
1 - \frac{1}{3} < \sum_{n = 0}^\infty \frac{(-1)^{n}}{2n+1} < 1 - \frac{1}{3} + \frac{1}{5}. \notag
\end{equation}

Instead of approximating the value of the whole sum $S$ by a single partial sum, it might seem reasonable to approximate $S$ by the average of two consecutive partial sums. Since we know $S$ lies between the two, their average should be closer to the true value.

As mentioned above, the average of the partial sum of the first $49$ terms of \eqref{eq:base} and the partial sum of the first $50$ terms gives a much better estimate of $\pi$ than either the first $49$ or the first $50$ terms on their own. (And indeed, it works much better than even the first $500$ terms on their own.)

Before we go on, let’s introduce a little notation. Let $S_K$ denote the partial sum of the first $K$ terms, i.e.
\begin{equation}
S_K = \sum_{n = 0}^{K-1} (-1)^{n} a_n. \notag
\end{equation}
Then the idea is that instead of using $S_{K}$ to approximate the whole sum $S$, we’ll use the average
\begin{equation}
\frac{S_{K-1} + S_{K}}{2} \approx S. \notag
\end{equation}

Averaging once seems like a great idea. What if we average again? That is, what if instead of using the average of $S_{K-1}$ and $S_K$, we actually use the average of (the average of $S_{K-2}$ and $S_{K-1}$) and (the average of $S_{K-1}$ and $S_K$),
\begin{equation}\label{eq:avgavg}
\frac{\frac{S_{K-2} + S_{K-1}}{2} + \frac{S_{K-1} + S_{K}}{2}}{2}.
\end{equation}
As this is really annoying to write, let’s come up with some new notation. Write the average between a quantity $X$ and $Y$ as
\begin{equation}
[X, Y] = \frac{X + Y}{2}. \notag
\end{equation}
Further, define the average of $[X, Y]$ and $[Y, Z]$ to be $[X, Y, Z]$,
\begin{equation}
[X, Y, Z] = \frac{[X, Y] + [Y, Z]}{2} = \frac{\frac{X + Y}{2} + \frac{Y + Z}{2}}{2}. \notag
\end{equation}
So the long expression in \eqref{eq:avgavg} can be written as $[S_{K-2}, S_{K-1}, S_{K}]$.

With this notation in mind, let’s compute some numerics. Below, we give the actual value of $\pi$, the values of $S_{48}, S_{49}$, and $S_{50}$, pairwise averages, and the average-of-the-average, in the case of $1 - \frac{1}{3} + \frac{1}{5} - \cdots$.
\begin{equation} \notag
\begin{array}{c|l|l}
& \text{Value} & \text{Difference from } \pi \\ \hline
\pi & 3.141592653589793238462\ldots & \phantom{-}0 \\ \hline
4 \cdot S_{48} & 3.1207615795929895 & \phantom{-}0.020831073996803617 \\ \hline
4 \cdot S_{49} & 3.161998692995051 & -0.020406039405258092 \\ \hline
4 \cdot S_{50} & 3.121594652591011 & \phantom{-}0.01999800099878213 \\ \hline
4 \cdot [S_{48}, S_{49}] & 3.1413801362940204 & \phantom{-}0.0002125172957727628 \\ \hline
4 \cdot [S_{49}, S_{50}] & 3.1417966727930313 & -0.00020401920323820377 \\ \hline
4 \cdot [S_{48}, S_{49}, S_{50}] & 3.141588404543526 & \phantom{-}0.00000424904626727951 \\ \hline
\end{array}
\end{equation}
So using the average of averages from the three sums $S_{48}, S_{49}$, and $S_{50}$ gives $\pi$ to within $4.2 \cdot 10^{-6}$, an incredible improvement compared to $S_{50}$ on its own.

There is something really odd going on here. We are not computing additional summands in the overall sum \eqref{eq:base}. We are merely combining some of our partial results together in a really simple way, repeatedly. Somehow, the sequence of partial sums contains more information about the limit $S$ than individual terms, and we are able to extract some of this information.

I think there is a very natural question: what if we didn’t stop there? What if we took averages-of-averages-of-averages, and averages-of-averages-of-averages-of-averages, and so on? Indeed, we might define the average
\begin{equation}
[X, Y, Z, W] = \frac{[X, Y, Z] + [Y, Z, W]}{2}, \notag
\end{equation}
and so on for larger numbers of terms. In this case, it happens to be that
\begin{equation}
4 \cdot [S_{15}, S_{16}, \ldots, S_{50}] = 3.141592653589794, \notag
\end{equation}
which has the first 15 digits of $\pi$ correct!

By repeatedly averaging alternating sums of just the first $50$ reciprocals of odd integers, we can find $\pi$ to 15 digits. I think that’s incredible — it seems both harder than it might have been (as this involves lots of averaging) and much easier than it might have been (as the only arithmetic inputs are the fractions $1/(2n+1)$ for $n$ up to $50$).
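Here is a sketch of the whole computation, done in exact rational arithmetic so that no floating-point error sneaks into the averaging:

    from fractions import Fraction

    # S[K] = partial sum of the first K terms of 1 - 1/3 + 1/5 - ...
    S = [Fraction(0)]
    for n in range(50):
        S.append(S[-1] + Fraction((-1) ** n, 2 * n + 1))

    # [S_15, S_16, ..., S_50]: repeatedly average adjacent entries
    # until a single value remains.
    row = S[15:51]
    while len(row) > 1:
        row = [(a + b) / 2 for a, b in zip(row, row[1:])]

    print(float(4 * row[0]))  # 3.141592653589794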

Although we leave the thread of ideas here, there are plenty of questions that I think are now asking themselves. I encourage you to ask them, and we may return to this topic (or related ones) in the future. I’ll see you in class.


Series Convergence Tests with Prototypical Examples

This is a note written for my Fall 2016 Math 100 class at Brown University. We are currently learning about various tests for determining whether series converge or diverge. In this note, we collect these tests together in a single document. We give a brief description of each test, some indicators of when each test would be good to use, and a prototypical example for each. Note that we do not justify any of these tests here — we’ve discussed that extensively in class. [But if something is unclear, send me an email or head to my office hours]. This is here to remind us of the variety of available convergence tests.

A copy of just the statements of the tests, put together, can be found here. A pdf copy of this whole post can be found here.

In order, we discuss the following tests:

  1. The $n$th term test, also called the basic divergence test
  2. Recognizing an alternating series
  3. Recognizing a geometric series
  4. Recognizing a telescoping series
  5. The Integral Test
  6. P-series
  7. Direct (or basic) comparison
  8. Limit comparison
  9. The ratio test
  10. The root test

The $n$th term test

Statement

Suppose we are looking at $\sum_{n = 1}^\infty a_n$ and
\begin{equation}
\lim_{n \to \infty} a_n \neq 0. \notag
\end{equation}
Then $\sum_{n = 1}^\infty a_n$ does not converge.

When to use it

When applicable, the $n$th term test for divergence is usually the easiest and quickest way to confirm that a series diverges. When first considering a series, it’s a good idea to think about whether the terms go to zero or not. But remember that if the limit of the individual terms is zero, then it is necessary to think harder about whether the series converges or diverges.

Example

Each of the series
\begin{equation}
\sum_{n = 1}^\infty \frac{n+1}{2n + 4}, \quad \sum_{n = 1}^\infty \cos n, \quad \sum_{n = 1}^\infty \sqrt{n} \notag
\end{equation}
diverges, since in each case the terms do not tend to $0$ (the limits are $\frac{1}{2}$, nonexistent, and $\infty$, respectively).

Recognizing alternating series

Statement

Suppose $\sum_{n = 1}^\infty (-1)^n a_n$ is a series where

  1. $a_n \geq 0$,
  2. $a_n$ is decreasing, and
  3. $\lim_{n \to \infty} a_n = 0$.

Then $\sum_{n = 1}^\infty (-1)^n a_n$ converges.

Stated differently, if the terms are alternating sign, decreasing in absolute size, and converging to zero, then the series converges.

When to use it

The key is in the name — if the series is alternating, then this is the go-to idea of analysis. Note that if the terms of a series alternate and decrease, but do not go to zero, then the series diverges by the $n$th term test.

Example

Suppose we are looking at the series
\begin{equation}
\sum_{n = 1}^\infty \frac{(-1)^n}{\log(n+1)} = \frac{-1}{\log 2} + \frac{1}{\log 3} + \frac{-1}{\log 4} + \cdots \notag
\end{equation}
The terms are alternating.
The sizes of the terms are $\frac{1}{\log (n+1)}$, and these are decreasing.
Finally,
\begin{equation}
\lim_{n \to \infty} \frac{1}{\log(n+1)} = 0. \notag
\end{equation}
Thus the alternating series test applies and shows that this series converges.



Math 42 Spring 2016 Student Showcase

This spring, I taught Math 42: An Introduction to Elementary Number Theory at Brown University. An important aspect of the course was the final project, in which students either followed up on topics from the semester that interested them, or chose and investigated other topics related to number theory. Projects could be done individually or in small groups.

I thought it would be nice to showcase some excellent student projects from my class. Most of the projects were quite good, and some showed extraordinary effort. Some students really dove in and used this as an opportunity to explore and digest a topic far more thoroughly than could possibly be expected from an introductory class such as this one. With the students’ permission, I’ve chosen five student projects (in no particular order) for a blog showcase (having been impressed by similar showcases from Scott Aaronson).

  • Factorization Techniques, by Elvis Nunez and Chris Shaw. In this project, Elvis and Chris look at Fermat Factorization, which looks to factor $n$ by expressing $n = a^2 - b^2$. Further, they investigate improvements to Fermat’s Algorithm by Dixon and Kraitchik. Following this line of investigation leads to the development of the modern quadratic sieve and factor base methods of factorization.

  • Pseudoprimes and Carmichael Numbers, by Emily Riemer. Fermat’s Little Theorem is one of the first “big idea” theorems we encounter in the course, and we came back to it again and again throughout. Emily explored Fermat’s Little Theorem as a primality test, leading to pseudoprimes, strong pseudoprimes, and Carmichael numbers. [As an aside, one of her references concerning Carmichael numbers was a set of notes from an algebraic number theory class taught by Matt Baker, who first got me interested in number theory].

  • Continued Fractions and Pell’s Equation, by Max Lahn and Jonathan Spiegel. As it happened, I did not have time to teach continued fractions in the course. So Max and Jonathan decided to look at them on their own. They explore some ideas related to the convergence of continued fractions and see how one uses continued fractions to solve Pell’s Equation.

  • Quantum Computing, by Edward Hu and Chris Long. Edward and Chris explore quantum computing with particular emphasis towards gaining some idea of how Shor’s factorization algorithm works. For some of the more complicated ideas, like the quantum Fourier transform, they make use of heuristics and analogy to convey the main ideas.

  • Fermat’s Last Theorem, by Dylan Groos, Natalie Schudrowitz, and Kenneth Berglund. Dylan, Natalie, and Kenneth provide a historical look at attacks on Fermat’s Last Theorem. They examine proofs for $n=4$ and Sophie Germain’s remarkable advances. They also touch on elliptic curves and modular forms, hinting at some of the deep ideas lying beneath the surface.


Math 42: Concluding Remarks

As this semester draws to an end, it is time to reflect on what we’ve done. What worked well? What didn’t work well? What would I change if I taught this course again?

Origins of the course

This course was created by my advisor, Jeff Hoffstein, many years ago in order to offer a sort of bridge between high school math and “real math.” The problem is that in primary and secondary school, students are not exposed to the grand, modern ideas of mathematics. They are forced to drill exercises and repeat formulae. Often, the largest exposure to mathematical reasoning is hidden among statements about congruent triangles and Side-Angle-Side theorems. Most students arrive at university thinking that math is over and done with. What else could there possibly remain to do in math?

Math 42 was designed to attract nonscience majors, especially those not intending to pursue the standard calculus sequence, and to convince them to study some meaningful mathematics. Ideally, students begin to think mathematically and experience some of the thrill of independent intellectual discovery.

It is always a bit surprising to me that so many students find their way into this class each spring. This class does not have a natural lead-in, it satisfies no prerequisites, and it is not in the normal track for math concentrators. One cannot even pretend to make the argument that number theory is a useful day-to-day skill. Yet number theory has a certain appeal… there are so many immediate and natural questions. It is possible to get a hint that there is something deep going on within the first two classes.

Further, I think there is something special about the first homework assigned in a course. Homeworks send a really strong signal about the content of a course. I want this course to be more about the students exploring, asking questions, and experimenting than about repeating the same old examples and techniques from the class. So the first several questions on the first homework are dedicated to open-ended exploration.

There are side effects to this approach. Open-ended exploration is uncertain, and therefore scary. I hope that it’s intriguing enough (and different enough) that students push through the initial discomfort, but I’m acutely aware that this can be an intimidating time. Perhaps in another time, students were more comfortable with uncertainty — but that is a discussion for later. I’m pretty sure that many fears are assuaged after the first week, once the idea sinks in that it is okay to not know what you’re going to learn before you learn it. In fact, it’s more fun that way! (One also learns much more rapidly.)

My approach to this course is strongly influenced by my experiences teaching number theory to high schoolers as part of the Summer@Brown program for the past several years. During my first summer teaching that course, I co-taught it with Jackie Anderson, who is an excellent and thoughtful instructor. I also strongly draw on the excellent textbook A Friendly Introduction to Number Theory by Joe Silverman, written specifically for this course about 20 years ago.

Trying final projects

I am still surprised each time I teach this course. I tried one major new thing in this course that I’ve thought about for a long time — students were required to do a final project on a topic of their choice. It turns out that this is a great idea, and I would absolutely do it again. The paths available to the students really open up in the second half of the course. With a few basic tools (in particular, once we’ve mastered modular arithmetic, linear congruences, and the Chinese Remainder Theorem), the number of deeply interesting and accessible topics is huge. There is some great truth here about how a few basic structural tools allow one to explore all sorts of new playgrounds.

A final project allows students to realize this for themselves. It also fits in with the motif of experimentation and self-discovery that pervades the whole course. The structural understanding from the first half of the course is enough to pursue some really interesting, and occasionally even useful, topics. More importantly, they learn that they are capable of finding, learning, and understanding complex mathematical ideas on their own. And usually, they enjoy it, since it’s fun to learn cool things.

For various reasons, I had thought it would be a good idea to offer students an alternative to final projects, in the form of a somewhat challenging final exam. In hindsight, I now think this was not such a good idea. Students who do not perform final projects miss out on a sort of representative capstone experience in the course.

There are a few other things that I would have done differently, if given the chance. I would ask students to be on the lookout for topics and groups much earlier, perhaps about a third of the way into the semester (instead of about two thirds of the way into the course). My students did an extraordinary job at their projects this semester. But I think with some additional rumination time, more groups would pursue projects based more on their own particular interests. Or perhaps not — I’ll see next time.

Several of my students who really dove into their final projects have agreed to have their work showcased here (which is something that I’ll get back to in a later note). This means that students in later courses will have something to refer back to. [Whether this is a good thing remains to be seen, but I suspect it’s for the better].

Interesting correlations

In these concluding notes, I often like to try to draw correlations between certain patterns of behaviour and success in the course. I’m very often interested in the question of how early on in a course one can accurately predict a final grade. In calculus courses, it seems one can very accurately predict a final grade using only the first midterm grade.

In this course, fewer such correlations are meaningful. Most notably, a large percentage of the class took the course as pass/fail. [I can’t blame them, as this course is supposed to draw students a bit out of their comfort zone into a topic they know little about]. This distorts the entire incentive structure of the course in relation to other demands of college life.

There is a very strong correlation between taking the class for a grade and receiving a high numeric grade at the end. I think this comes largely from two causes: students who are more confident with the subject material coming into the course decide to take it for a grade, and then perform in line with their expectations; and taking a class for pass/fail creates an incentive structure with high emphasis on learning enough material to pass, but not necessarily mastering all the material.

In sharp contrast to my experience in calculus courses, there is a pretty strong correlation between homework grades and overall performance. While this may seem obvious, it has absolutely not been my experience in calculus courses: there, poor homework grades correlate extremely strongly with poor final grades, but strong homework has almost no correlation with strong performance.

I think one reason why homework might be a better predictor in this course is that the homework is harder. There are always open-ended problems, and every homework had at least one or two problems designed to take a lot of experimentation and thought. Students who did well on the homework put in that experimentation and thinking time, reflecting better study habits, higher commitment, and more grit (like in this TED talk).

Finally, there is an extremely high correlation between attending office hours and strong performance in the course. It will always remain a mystery to me why more students don’t take advantage of office hours. [It might be that this is also a measure of other characteristics, such as commitment, study habits, and grit].

Don’t forget the coffee

This was my favorite course I’ve taught at Brown, and I think my students enjoyed this class more than any other class I’ve taught here. This is one of those courses that rejuvenates the soul.

While returning home after one of the final classes, I flipped on NPR and listened to Innovation Hub. On the program was Steven Strogatz, a well-known mathematician and expositor, talking about his general dislike of the Calculus-Is-The-Pinnacle-Of-Mathematics style approach that is so common today in high schools and colleges. The program in particular can be found here.

He argues that standard math education is missing some important topics, especially related to financial numeracy. But he also argues that the current emphasis is not on the beauty or attraction of mathematics, but on a very particular set of applications [and in particular, towards creating rockets].

While this course isn’t perfect, I do think that it is the sort of course that Strogatz would approve of — somewhat like a Survey of Shakespeare course, but in mathematics.


Math 420: Supplement on Gaussian Integers II

This is a secondary supplemental note on the Gaussian integers, written for my Spring 2016 Elementary Number Theory Class at Brown University. This note is also available as a pdf document.

In this note, we cover the following topics.

  1. Assumed prerequisites from other lectures.
  2. Which regular integer primes are sums of squares?
  3. How can we classify all Gaussian primes?

1. Assumed Prerequisites

Although this note comes shortly after the previous note on the Gaussian integers, we covered some material from the book in between. In particular, we will assume and use the results from chapters 20 and 21 of the textbook.

Most importantly, for $latex {p}$ a prime and $latex {a}$ an integer not divisible by $latex {p}$, recall the Legendre symbol $latex {\left(\frac{a}{p}\right)}$, which is defined to be $latex {1}$ if $latex {a}$ is a square mod $latex {p}$ and $latex {-1}$ if $latex {a}$ is not a square mod $latex {p}$. Then we have shown Euler’s Criterion, which states that

$$ a^{\frac{p-1}{2}} \equiv \left(\frac{a}{p}\right) \pmod p, \tag{1}$$
and which gives a very efficient way of determining whether a given number $latex {a}$ is a square mod $latex {p}$.
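Euler’s Criterion also gives a fast algorithm, since modular exponentiation is cheap. A small sketch in Python, whose built-in three-argument pow performs the modular exponentiation:

    def legendre_symbol(a, p):
        """(a/p) for an odd prime p and an integer a not divisible by p,
        via Euler's Criterion: a^((p-1)/2) mod p."""
        result = pow(a, (p - 1) // 2, p)
        return -1 if result == p - 1 else result  # p - 1 represents -1 mod p

    print(legendre_symbol(-1, 13))  # 1, since 13 is 1 mod 4
    print(legendre_symbol(-1, 7))   # -1, since 7 is 3 mod 4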

We used Euler’s Criterion to find out exactly when $latex {-1}$ is a square mod $latex {p}$. In particular, we concluded that for each odd prime $latex {p}$, we have

$$ \left(\frac{-1}{p}\right) = \begin{cases} 1 & \text{ if } p \equiv 1 \pmod 4 \\ -1 & \text{ if } p \equiv 3 \pmod 4 \end{cases}. \tag{2}$$
Finally, we assume familiarity with the notation and ideas from the previous note on the Gaussian integers.

2. Understanding When $latex {p = a^2 + b^2}$.

Throughout this section, $latex {p}$ will be a normal odd prime. The case $latex {p = 2}$ is a bit different, and we will need to handle it separately. When used, the letters $latex {a}$ and $latex {b}$ will denote normal integers, and $latex {q_1,q_2}$ will denote Gaussian integers.

We will be looking at the following four statements.

  1. $latex {p \equiv 1 \pmod 4}$
  2. $latex {\left(\frac{-1}{p}\right) = 1}$
  3. $latex {p}$ is not a Gaussian prime
  4. $latex {p = a^2 + b^2}$

Our goal will be to show that each of these statements are equivalent. In order to show this, we will show that

$$ (1) \implies (2) \implies (3) \implies (4) \implies (1). \tag{3}$$
Do you see why this means that they are all equivalent?

This naturally breaks down into four lemmas.

We have actually already shown one.

Lemma 1 $latex {(1) \implies (2)}$.

Proof: We have already proved this claim! This is exactly what we get from Euler’s Criterion applied to $latex {-1}$, as mentioned in the first section. $latex \Box$

There is one more that is somewhat straightforward, and which does not rely on going up to the Gaussian integers.

Lemma 2 $latex {(4) \implies (1)}$.

Proof: We have an odd prime $latex {p}$ which is a sum of squares $latex {p = a^2 + b^2}$. If we look mod $latex {4}$, we are led to consider $$ p = a^2 + b^2 \pmod 4. \tag{4}$$
What are the possible values of $latex {a^2 \pmod 4}$? A quick check shows that the only possibilites are $latex {a^2 \equiv 0, 1 \pmod 4}$.

So what are the possible values of $latex {a^2 + b^2 \pmod 4}$? We must have one of $latex {p \equiv 0, 1, 2 \pmod 4}$. Clearly, we cannot have $latex {p \equiv 0 \pmod 4}$, as then $latex {4 \mid p}$. Similarly, we cannot have $latex {p \equiv 2 \pmod 4}$, as then $latex {2 \mid p}$. So we necessarily have $latex {p \equiv 1 \pmod 4}$, which is what we were trying to prove. $latex \Box$

For the remaining two pieces, we will dive into the Gaussian integers.

Lemma 3 $latex {(2) \implies (3)}$.

Proof: As $latex {\left(\frac{-1}{p}\right) = 1}$, we know there is some $latex {a}$ so that $latex {a^2 \equiv -1 \pmod p}$. Rearranging, this becomes $latex {a^2 + 1 \equiv 0 \pmod p}$.

Over the normal integers, we are at an impasse, as all this tells us is that $latex {p \mid (a^2 + 1)}$. But if we suddenly view this within the Gaussian integers, then $latex {a^2 + 1}$ factors as $latex {a^2 + 1 = (a + i)(a - i)}$.

So we have that $latex {p \mid (a+i)(a-i)}$. If $latex {p}$ were a Gaussian prime, then we would necessarily have $latex {p \mid (a+i)}$ or $latex {p \mid (a-i)}$. (Do you see why?)

But is it true that $latex {p}$ divides $latex {a + i}$ or $latex {a – i}$? For instance, does $latex {p}$ divide $latex {a + i}$? No! If so, then $latex {\frac{a}{p} + \frac{i}{p}}$ would be a Gaussian integer, which is clearly not true.

So $latex {p}$ does not divide $latex {a + i}$ or $latex {a-i}$, and we must therefore conclude that $latex {p}$ is not a Gaussian prime. $latex \Box$

Lemma 4 $latex {(3) \implies (4)}$.

Proof: We now know that $latex {p}$ is not a Gaussian prime. In particular, this means that $latex {p}$ is not irreducible, and so it has a nontrivial factorization in the Gaussian integers. (For example, $latex {5}$ is a regular prime, but it is not a Gaussian prime. It factors as $latex {5 = (1 + 2i)(1 - 2i)}$ in the Gaussian integers.)

Let’s denote this nontrivial factorization as $latex {p = q_1 q_2}$. By nontrivial, we mean that neither $latex {q_1}$ nor $latex {q_2}$ are units, i.e. $latex {N(q_1), N(q_2) > 1}$. Taking norms, we see that $latex {N(p) = N(q_1) N(q_2)}$.

We can evaluate $latex {N(p) = p^2}$, so we have that $latex {p^2 = N(q_1) N(q_2)}$. Both $latex {N(q_1)}$ and $latex {N(q_2)}$ are integers, and their product is $latex {p^2}$. Yet $latex {p^2}$ has exactly two different factorizations: $latex {p^2 = 1 \cdot p^2 = p \cdot p}$. Since $latex {N(q_1), N(q_2) > 1}$, we must have the latter.

So we see that $latex {N(q_1) = N(q_2) = p}$. As $latex {q_1, q_2}$ are Gaussian integers, we can write $latex {q_1 = a + bi}$ for some $latex {a, b}$. Then $latex {N(q_1) = a^2 + b^2}$, and since $latex {N(q_1) = p}$, we conclude that $latex {p = a^2 + b^2}$. And so $latex {p}$ is a sum of squares, ending the proof. $latex \Box$

Notice that $latex {2 = 1 + 1}$ is also a sum of squares. Then all together, we can say the following theorem.

Theorem 5 A regular prime $latex {p}$ can be written as a sum of two squares, $$ p = a^2 + b^2, \tag{5}$$
exactly when $latex {p = 2}$ or $latex {p \equiv 1 \pmod 4}$.

A remarkable aspect of this theorem is that it is entirely a statement about the behaviour of the regular integers. Yet in our proof, we used the Gaussian integers in a very fundamental way. Isn’t that strange?

You might notice that in the textbook, Dr. Silverman presents a proof that does not rely on the Gaussian integers. While interesting and clever, I find that the proof using the Gaussian integers better illustrates the deep connections between and around the structures we have been studying in this course so far. Everything connects!

Example 1 The prime $latex {5}$ is $latex {1 \pmod 4}$, and so $latex {5}$ is a sum of squares. In particular, $latex {5 = 1^2 + 2^2}$.

Example 2 The prime $latex {101}$ is $latex {1 \pmod 4}$, and so is a sum of squares. Our proof is not constructive, so a priori we do not know what squares sum to $latex {101}$. But in this case, we see that $latex {101 = 1^2 + 10^2}$.

Example 3 The prime $latex {97}$ is $latex {1 \pmod 4}$, and so it is also a sum of squares. It’s less obvious what the squares are in this case. It turns out that $latex {97 = 4^2 + 9^2}$.

Example 4 The prime $latex {43}$ is $latex {3 \pmod 4}$, and so is not a sum of squares.
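Since the proof is not constructive, finding the actual squares takes a (small) search. A brute-force sketch in Python:

    from math import isqrt

    def two_squares(p):
        """Return (a, b) with p = a^2 + b^2, or None if no such pair exists."""
        for a in range(1, isqrt(p) + 1):
            b2 = p - a * a
            b = isqrt(b2)
            if b * b == b2:
                return (a, b)
        return None

    print(two_squares(5))    # (1, 2)
    print(two_squares(101))  # (1, 10)
    print(two_squares(97))   # (4, 9)
    print(two_squares(43))   # None, as 43 is 3 mod 4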

3. Classification of Gaussian Primes

In the previous section, we showed that each integer prime $latex {p \equiv 1 \pmod 4}$ actually splits into a product of two Gaussian numbers $latex {q_1}$ and $latex {q_2}$. In fact, since $latex {N(q_1) = p}$ is a regular prime, $latex {q_1}$ is a Gaussian irreducible and therefore a Gaussian prime (can you prove this? This is a nice midterm question.)

So in fact, $latex {p \equiv 1 \pmod 4}$ splits into the product of two Gaussian primes $latex {q_1}$ and $latex {q_2}$.

In this way, we’ve found infinitely many Gaussian primes. Take a regular prime congruent to $latex {1 \pmod 4}$. Then we know that it splits into two Gaussian primes. Further, if we know how to write $latex {p = a^2 + b^2}$, then we know that $latex {q_1 = a + bi}$ and $latex {q_2 = a – bi}$ are those two Gaussian primes.

In general, we will find all Gaussian primes by determining their interaction with regular primes.

Suppose $latex {q}$ is a Gaussian prime. Then on the one hand, $latex {N(q) = q \overline{q}}$. On the other hand, $latex {N(q) = p_1^{a_1} p_2^{a_2} \cdots p_k^{a_k}}$ is some regular integer. So $latex {q}$ divides the product $latex {p_1^{a_1} \cdots p_k^{a_k}}$, and since $latex {q}$ is a Gaussian prime (and so $latex {q \mid w_1 w_2}$ means that $latex {q \mid w_1}$ or $latex {q \mid w_2}$), we know that $latex {q \mid p_j}$ for some regular integer prime $latex {p_j}$.

So one way to classify Gaussian primes is to look at every regular integer prime and see which Gaussian primes divide it. We have figured this out for all primes $latex {p \equiv 1 \pmod 4}$. We can handle $latex {2}$ by noticing that $latex {2 = (1 + i) (1-i)}$. Both $latex {(1+i)}$ and $latex {(1-i)}$ are Gaussian primes.

The only primes left are those regular primes with $latex {p \equiv 3 \pmod 4}$. We actually already covered the key idea in the previous section.

Lemma 6 If $latex {p \equiv 3 \pmod 4}$ is a regular prime, then $latex {p}$ is also a Gaussian prime.

Proof: In the previous section, we showed that if $latex {p}$ is not a Gaussian prime, then $latex {p = a^2 + b^2}$ for some integers $latex {a,b}$, and then $latex { p \equiv 1 \pmod 4}$. Since $latex {p \not \equiv 1 \pmod 4}$, we see that $latex {p}$ is a Gaussian prime. $latex \Box$

In total, we have classified all Gaussian primes.

Theorem 7 The Gaussian primes are given by

  1. $latex {(1+i), (1-i)}$
  2. Regular primes $latex {p \equiv 3 \pmod 4}$
  3. The factors $latex {q_1}$ and $latex {q_2}$ of a regular prime $latex {p \equiv 1 \pmod 4}$. Further, these primes are given by $latex {a \pm bi}$, where $latex {p = a^2 + b^2}$.
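
To see the classification in action, here is a small sketch (again my own illustration, not in the original notes) that reuses the two_squares search from earlier to report which case of the theorem each small regular prime falls into:

# Report which case of Theorem 7 each regular prime falls into.
def classify(p):
    if p == 2:
        return "2 = (1+i)(1-i)"
    if p % 4 == 3:
        return "remains a Gaussian prime"
    a, b = two_squares(p)             # the brute-force search from before
    return "splits as (%d+%di)(%d-%di)" % (a, b, a, b)

for p in [2, 3, 5, 7, 11, 13, 17]:
    print p, classify(p)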

 

4. Concluding Remarks

I hope that it’s clear that the regular integers and the Gaussian integers are deeply connected and intertwined. Number theoretic questions in one constantly lead us to investigate the other. As one dives deeper into number theory, more and different integer-like rings appear, all deeply connected.

Each time I teach the Gaussian integers, I cannot help but feel that this hints at a deep structural understanding of what is really going on. The interplay between the Gaussian integers and the regular integers is one of my favorite aspects of elementary number theory, which is one reason why I deviated so strongly from the textbook to include it. I hope you enjoyed it too.


Math 420: Supplement on Gaussian Integers

This is a brief supplemental note on the Gaussian integers, written for my Spring 2016 Elementary Number Class at Brown University. With respect to the book, the nearest material is the material in Chapters 35 and 36, but we take a very different approach.

A pdf of this note can be found here. I’m sure there are typos, so feel free to ask me or correct me if you think something is amiss.

In this note, we cover the following topics.

  1. What are the Gaussian integers?
  2. Unique factorization within the Gaussian integers.
  3. An application of the Gaussian integers to the Diophantine equation $latex {y^2 = x^3 – 1}$.
  4. Other integer-like sets: general rings.
  5. Specific examples within $latex {\mathbb{Z}[\sqrt{2}]}$ and $latex {\mathbb{Z}[\sqrt{-5}]}$.

1. What are the Gaussian Integers?

The Gaussian Integers are the set of numbers of the form $latex {a + bi}$, where $latex {a}$ and $latex {b}$ are normal integers and $latex {i}$ is a number satisfying $latex {i^2 = -1}$. As a collection, the Gaussian Integers are represented by the symbol $latex {\mathbb{Z}[i]}$, or sometimes $latex {\mathbb{Z}[\sqrt{-1}]}$. These might be pronounced either as The Gaussian Integers or as Z adjoin i.

In many ways, the Gaussian integers behave very much like the regular integers. We’ve been studying the qualities of the integers, but we should ask — which properties are really properties of the integers, and which properties hold in greater generality? Is it the integers themselves that are special, or is there something bigger and deeper going on?

These are the main questions that we ask and make some progress towards in these notes. But first, we need to describe some properties of Gaussian integers.

We will usually use the symbols $latex {z = a + bi}$ to represent our typical Gaussian integer. One adds and multiplies two Gaussian integers just as you would add and multiply two complex numbers. Informally, you treat $latex {i}$ like a polynomial indeterminate $latex {X}$, except that it satisfies the relation $latex {X^2 = -1}$.

Definition 1 For each complex number $latex {z = a + bi}$, we define the conjugate of $latex {z}$, written as $latex {\overline{z}}$, by
\begin{equation}
\overline{z} = a – bi.
\end{equation}
We also define the norm of $latex {z}$, written as $latex {N(z)}$, by
\begin{equation}
N(z) = a^2 + b^2.
\end{equation}

You can check that $latex {N(z) = z \overline{z}}$ (and in fact this is one of your assigned problems). You can also check that $latex {N(zw) = N(z)N(w)}$, or rather that the norm is multiplicative (this is also one of your assigned problems).

Even from our notation, it’s intuitive that $latex {z = a + bi}$ has two parts, the part corresponding to $latex {a}$ and the part corresponding to $latex {b}$. We call $latex {a}$ the real part of $latex {z}$, written as $latex {\Re z = a}$, and we call $latex {b}$ the imaginary part of $latex {z}$, written as $latex {\Im z = b}$. I should add that the name “imaginary number” is a poor name that reflects historical reluctance to view complex numbers as acceptable. For that matter, the name “complex number” is also a poor name.

As a brief example, consider the Gaussian integer $latex {z = 2 + 5i}$. Then $latex {N(z) = 4 + 25 = 29}$, $latex {\Re z = 2}$, $latex {\Im z = 5}$, and $latex {\overline{z} = 2 – 5i}$.
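
If you want to see these definitions in action on a computer, they are easy to model with Python’s built-in complex numbers. (This sketch is my own aside, not part of the course materials.)

# Model Gaussian integers with Python's built-in complex numbers.
def conjugate(z):
    return complex(z.real, -z.imag)

def norm(z):
    return int(z.real ** 2 + z.imag ** 2)    # N(z) = a^2 + b^2

z = 2 + 5j
print norm(z), conjugate(z)                   # 29 (2-5j)
# Check multiplicativity numerically on one example:
print norm((2+5j) * (1+1j)) == norm(2+5j) * norm(1+1j)   # True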

We can ask similar questions to those we asked about the regular integers. What does it mean for $latex {z \mid w}$ in the complex case?

Definition 2 We say that a Gaussian integer $latex {z}$ divides another Gaussian integer $latex {w}$ if there is some Gaussian integer $latex {k}$ so that $latex {zk = w}$. In this case, we write $latex {z \mid w}$, just as we write for regular integers.

For the integers, we immediately began to study the properties of the primes, which in many ways were the building blocks of the integers. Recall that for the regular integers, we said $latex {p}$ was a prime if its only divisors were $latex {\pm 1}$ and $latex {\pm p}$. In the Gaussian integers, the four numbers $latex {\pm 1, \pm i}$ play the same role as $latex {\pm 1}$ in the usual integers. These four numbers are distinguished as being the only four Gaussian integers with norm equal to $latex {1}$.

That is, the only solutions to $latex {N(z) = 1}$ where $latex {z}$ is a Gaussian integer are $latex {z = \pm 1, \pm i}$. We call these four numbers the Gaussian units.

With this in mind, we are ready to define the notion of a prime for the Gaussian integers.

Definition 3 We say that a Gaussian integer $latex {z}$ with $latex {N(z) > 1}$ is a Gaussian prime if the only divisors of $latex {z}$ are $latex {u}$ and $latex {uz}$, where $latex {u = \pm 1, \pm i}$ is a Gaussian unit.

Remark 1 When we look at other integer-like sets, we will actually use a different definition of a prime.

It’s natural to ask whether the normal primes in $latex {\mathbb{Z}}$ are also primes in $latex {\mathbb{Z}[i]}$. And the answer is no. For instance, $latex {5}$ is a prime in $latex {\mathbb{Z}}$, but
\begin{equation}
5 = (1 + 2i)(1 – 2i)
\end{equation}
in the Gaussian integers. However, the two Gaussian integers $latex {1 + 2i}$ and $latex {1 – 2i}$ are prime. It also happens to be that $latex {3}$ is a Gaussian prime. We will continue to investigate which numbers are Gaussian primes over the next few lectures.

With a concept of a prime, it’s also natural to ask whether or not the primes form the building blocks for the Gaussian integers like they form the building blocks for the regular integers. We take this up in our next topic.

2. Unique Factorization in the Gaussian Integers

Let us review the steps that we followed to prove unique factorization for $latex {\mathbb{Z}}$.

  1. We proved that for $latex {a,b}$ in $latex {\mathbb{Z}}$ with $latex {b \neq 0}$, there exist unique $latex {q}$ and $latex {r}$ such that $latex {a = bq + r}$ with $latex {0 \leq r < b}$. This is called the Division Algorithm.
  2. By repeatedly applying the Division Algorithm, we proved the Euclidean Algorithm. In particular, we showed that the last nonzero remainder was the GCD of our initial numbers.
  3. By performing reverse substitution on the steps of the Euclidean Algorithm, we showed that there are integer solutions in $latex {x,y}$ to the Diophantine equation $latex {ax + by = \gcd(a,b)}$. This is often called Bezout’s Theorem or Bezout’s Lemma, although we never called it by that name in class.
  4. With Bezout’s Theorem, we showed that if a prime $latex {p}$ divides $latex {ab}$, then $latex {p \mid a}$ or $latex {p \mid b}$. This is the crucial step towards proving Unique Factorization.
  5. We then proved Unique Factorization.

Each step of this process can be repeated for the Gaussian integers, with a few notable differences. Remarkably, once we have the division algorithm, each proof is almost identical for $latex {\mathbb{Z}[i]}$ as it is for $latex {\mathbb{Z}}$. So we will prove the division algorithm, and then give sketches of the remaining ideas, highlighting the differences that come up along the way.

In the division algorithm, we require the remainder $latex {r}$ to “be less than what we are dividing by.” A big problem in translating this to the Gaussian integers is that the Gaussian integers are not ordered. That is, we don’t have a concept of being greater than or less than for $latex {\mathbb{Z}[i]}$.

When this sort of problem emerges, we will get around this by taking norms. Since the norm of a Gaussian integer is a typical integer, we will be able to use the ordering of the integers to order our norms.

Theorem 4 For $latex {z,w}$ in $latex {\mathbb{Z}[i]}$ with $latex {w \neq 0}$, there exist $latex {q}$ and $latex {r}$ in $latex {\mathbb{Z}[i]}$ such that $latex {z = qw + r}$ with $latex {N(r) < N(w)}$.

Proof: Here, we will cheat a little bit and use properties about general complex numbers and the rationals to perform this proof. One can give an entirely intrinsic proof, but I like the approach I give as it also informs how to actually compute the $latex {q}$ and $latex {r}$.

The entire proof boils down to the idea of writing $latex {z/w}$ as a fraction and approximating the real and imaginary parts by the nearest integers.

Let us now transcribe that idea. We will need to introduce some additional symbols. Let $latex {z = a_1 + b_1 i}$ and $latex {w = a_2 + b_2 i}$.

Then
\begin{align}
\frac{z}{w} &= \frac{a_1 + b_1 i}{a_2 + b_2 i} = \frac{a_1 + b_1 i}{a_2 + b_2 i} \frac{a_2 – b_2 i}{a_2 – b_2 i} \\
&= \frac{a_1a_2 + b_1 b_2}{a_2^2 + b_2^2} + i \frac{b_1 a_2 – a_1 b_2}{a_2^2 + b_2 ^2} \\
&= u + iv.
\end{align}
By rationalizing the denominator by multiplying by $latex {\overline{w}/ \overline{w}}$, we are able to separate out the real and imaginary parts. In this final expression, we have named $latex {u}$ to be the real part and $latex {v}$ to be the imaginary part. Notice that $latex {u}$ and $latex {v}$ are normal rational numbers.

We know that for any rational number $latex {u}$, there is an integer $latex {u’}$ such that $latex {\lvert u – u’ \rvert \leq \frac{1}{2}}$. Let $latex {u’}$ and $latex {v’}$ be integers within $latex {1/2}$ of $latex {u}$ and $latex {v}$ above, respectively.

Then we claim that we can choose $latex {q = u’ + i v’}$ to be the $latex {q}$ in the theorem statement, and let $latex {r}$ be the resulting remainder, $latex {r = z – qw}$. We need to check that $latex {N(r) < N(w)}$. We will check that explicitly.

We compute
\begin{align}
N(r) &= N(z – qw) = N\left(w \left(\frac{z}{w} – q\right)\right) = N(w) N\left(\frac{z}{w} – q\right).
\end{align}
Note that we have used that $latex {N(ab) = N(a)N(b)}$. In this final expression, we have already come across $latex {\frac{z}{w}}$ before — it’s exactly what we called $latex {u + iv}$. And we called $latex {q = u’ + i v’}$. So our final expression is the same as
\begin{equation}
N(r) = N(w) N(u + iv – u’ – i v’) = N(w) N\left( (u – u’) + i (v – v’)\right).
\end{equation}
How large can the real and imaginary parts of $latex {(u-u’) + i (v – v’)}$ be? By our choice of $latex {u’}$ and $latex {v’}$, they can be at most $latex {1/2}$.

So we have that
\begin{equation}
N(r) \leq N(w) \left( \left(\tfrac{1}{2}\right)^2 + \left(\tfrac{1}{2}\right)^2 \right) = \frac{1}{2} N(w).
\end{equation}
And so in particular, we have that $latex {N(r) < N(w)}$ as we needed. $latex \Box$

Note that in this proof, we did not actually show that $latex {q}$ or $latex {r}$ are unique. In fact, unlike the case in the regular integers, it is not true that $latex {q}$ and $latex {r}$ are unique.

Example 1 Consider $latex {3+5i, 1 + 2i}$. Then we compute
\begin{equation}
\frac{3+5i}{1+2i} = \frac{3+5i}{1+2i}\frac{1-2i}{1-2i} = \frac{13}{5} + i \frac{-1}{5}.
\end{equation}
The closest integer to $latex {13/5}$ is $latex {3}$, and the closest integer to $latex {-1/5}$ is $latex {0}$. So we take $latex {q = 3}$. Then $latex {r = (3+5i) – (1+2i)3 = -i}$, and we see in total that
\begin{equation}
3+5i = (1+2i) 3 – i.
\end{equation}
Note that $latex {N(-i) = 1}$ and $latex {N(1 + 2i) = 5}$, so this choice of $latex {q}$ and $latex {r}$ works.

As $latex {13/5}$ is sort of close to $latex {2}$, what if we chose $latex {q = 2}$ instead? Then $latex {r = (3 + 5i) – (1 + 2i)2 = 1 + i}$, leading to the overall expression
\begin{equation}
3+5i = (1 + 2i) 2 + (1 + i).
\end{equation}
Note that $latex {N(1+i) = 2 < N(1+2i) = 5}$, so that this choice of $latex {q}$ and $latex {r}$ also works.

This is an example of how the choice of $latex {q}$ and $latex {r}$ is not well-defined for the Gaussian integers. In fact, even if one decides to choose $latex {q}$ so that $latex {N(r)}$ is minimal, the resulting choices are still not necessarily unique.

This may come as a surprise. The letters $latex {q}$ and $latex {r}$ come from our tendency to call those numbers the quotient and remainder after division. We have shown that the quotient and remainder are not well-defined, so it does not make sense to talk about “the remainder” or “the quotient.” This is a bit strange!
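
The proof of the division algorithm translates directly into a procedure: divide as complex numbers, then round each part to the nearest integer. Here is a rough sketch of mine (using floating-point complex arithmetic, which is fine for small examples like the ones above):

# Division algorithm in Z[i]: returns (q, r) with z = q*w + r and N(r) < N(w).
def gaussian_divide(z, w):
    t = z / w                                     # exact division in C
    q = complex(round(t.real), round(t.imag))     # round to the nearest Gaussian integer
    return q, z - q * w

print gaussian_divide(3 + 5j, 1 + 2j)
# ((3-0j), -1j), i.e. q = 3 and r = -i, as in Example 1

One place the non-uniqueness shows up in this sketch is when a part of $latex {z/w}$ lies exactly halfway between two integers, so that either rounding produces a valid remainder.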

Are we able to prove unique factorization when the process of division itself seems to lead to ambiguities? Let us proceed forwards and try to see.

Our next goal is to prove the Euclidean Algorithm. By this, we mean that by repeatedly performing the division algorithm starting with two Gaussian integers $latex {z}$ and $latex {w}$, we hope to get a sequence of remainders with the last nonzero remainder giving a greatest common divisor of $latex {z}$ and $latex {w}$.

Before we can do that, we need to ask a much more basic question. What do we mean by a greatest common divisor? In particular, the Gaussian integers are not ordered, so it does not make sense to say whether one Gaussian integer is bigger than another.

For instance, is it true that $latex {i > 1}$? If so, then certainly $latex {i}$ is positive. We know that multiplying both sides of an inequality by a positive number doesn’t change that inequality. So multiplying $latex {i > 1}$ by $latex {i}$ leads to $latex {-1 > i}$, which is absurd if $latex {i}$ was supposed to be positive!

To remedy this problem, we will choose a common divisor of $latex {z}$ and $latex {w}$ with the greatest norm (which makes sense, as the norm is a regular integer and thus is well-ordered). But the problem here, just as with the division algorithm, is that there may be multiple such numbers. So we cannot talk about “the greatest common divisor” and instead talk about “a greatest common divisor.” To paraphrase Lewis Carroll’s Alice (Carroll was also a mathematician, and hid some nice mathematics inside some of his works), things are getting curiouser and curiouser!

Definition 5 For nonzero $latex {z,w}$ in $latex {\mathbb{Z}[i]}$, a greatest common divisor of $latex {z}$ and $latex {w}$, denoted by $latex {\gcd(z,w)}$, is a common divisor with largest norm. That is, if $latex {c}$ is another common divisor of $latex {z}$ and $latex {w}$, then $latex {N(c) \leq N(\gcd(z,w))}$.

If $latex {N(\gcd(z,w)) = 1}$, then we say that $latex {z}$ and $latex {w}$ are relatively prime. Said differently, if $latex {1}$ is a greatest common divisor of $latex {z}$ and $latex {w}$, then we say that $latex {z}$ and $latex {w}$ are relatively prime.

Remark 2 Note that $latex {\gcd(z,w)}$ as we’re writing it is not actually well-defined, and may stand for any greatest common divisor of $latex {z}$ and $latex {w}$.

With this definition in mind, the proof of the Euclidean Algorithm is almost identical to the proof of the Euclidean Algorithm for the regular integers. As with the regular integers, we need the following result, which we will use over and over again.

Lemma 6 Suppose that $latex {z \mid w_1}$ and $latex {z \mid w_2}$. Then for any $latex {x,y}$ in $latex {\mathbb{Z}[i]}$, we have that $latex {z \mid (x w_1 + y w_2)}$.

Proof: As $latex {z \mid w_1}$, there is some Gaussian integer $latex {k_1}$ such that $latex {z k_1 = w_1}$. Similarly, there is some Gaussian integer $latex {k_2}$ such that $latex {z k_2 = w_2}$.

Then $latex {xw_1 + yw_2 = zxk_1 + zyk_2 = z(xk_1 + yk_2)}$, which is divisible by $latex {z}$ as this is the definition of divisibility. $latex \Box$

Notice that this proof is identical to the analogous statement in the integers, except with differently chosen symbols. That is how the proof of the Euclidean Algorithm goes as well.

Theorem 7 Let $latex {z,w}$ be nonzero Gaussian integers. Recursively apply the division algorithm, starting with the pair $latex {z, w}$, and at each step using the divisor and remainder from one equation as the pair for the next. The last nonzero remainder is divisible by all common divisors of $latex {z,w}$ and is itself a common divisor, and so the last nonzero remainder is a greatest common divisor of $latex {z}$ and $latex {w}$.

Symbolically, this looks like
\begin{align}
z &= q_1 w + r_1, \quad N(r_1) < N(w) \\
w &= q_2 r_1 + r_2, \quad N(r_2) < N(r_1) \\
r_1 &= q_3 r_2 + r_3, \quad N(r_3) < N(r_2) \\
&\vdots \\
r_k &= q_{k+2} r_{k+1} + r_{k+2}, \quad N(r_{k+2}) < N(r_{k+1}) \\
r_{k+1} &= q_{k+3} r_{k+2} + 0,
\end{align}
where $latex {r_{k+2}}$ is the last nonzero remainder, which we claim is a greatest common divisor of $latex {z}$ and $latex {w}$.

Proof: We are claiming several things. Firstly, we should prove our implicit claim that this algorithm terminates at all. Is it obvious that we should eventually reach a zero remainder?

In order to see this, we look at the norms of the remainders. After each step in the algorithm, the norm of the remainder is smaller than the previous step. As the norms are always nonnegative integers, and we know there does not exist an infinite list of decreasing positive integers, we see that the list of nonzero remainders is finite. So the algorithm terminates.

We now want to prove that the last nonzero remainder is a common divisor and is in fact a greatest common divisor. The proof is actually identical to the proof in the integer case, merely with a different choice of symbols.

Here, we only sketch the argument. Then the rest of the argument can be found by comparing with the proof of the Euclidean Algorithm for $latex {\mathbb{Z}}$ as found in the course textbook.

For ease of exposition, suppose that the algorithm terminated in exactly 3 steps, so that we have
\begin{align}
z &= q_1 w + r_1, \\
w &= q_2 r_1 + r_2 \\
r_1 &= q_3 r_2 + 0.
\end{align}

On the one hand, suppose that $latex {d}$ is a common divisor of $latex {z}$ and $latex {w}$. Then by our previous lemma, $latex {d \mid z – q_1 w = r_1}$, so that we see that $latex {d}$ is a divisor of $latex {r_1}$ as well. Applying to the next line, we have that $latex {d \mid w}$ and $latex {d \mid r_1}$, so that $latex {d \mid w – q_2 r_1 = r_2}$. So every common divisor of $latex {z}$ and $latex {w}$ is a divisor of the last nonzero remainder $latex {r_2}$.

On the other hand, $latex {r_2 \mid r_1}$ by the last line of the algorithm. Then as $latex {r_2 \mid r_1}$ and $latex {r_2 \mid r_2}$, we know that $latex {r_2 \mid q_2 r_1 + r_2 = w}$. Applying this to the first line, as $latex {r_2 \mid r_1}$ and $latex {r_2 \mid w}$, we know that $latex {r_2 \mid q_1 w + r_1 = z}$. So $latex {r_2}$ is a common divisor.

We have shown that $latex {r_2}$ is a common divisor of $latex {z}$ and $latex {w}$, and that every common divisor of $latex {z}$ and $latex {w}$ divides $latex {r_2}$. How do we show that $latex {r_2}$ is a greatest common divisor?

Suppose that $latex {d}$ is a common divisor of $latex {z}$ and $latex {w}$, so that we know that $latex {d \mid r_2}$. In particular, this means that there is some nonzero $latex {k}$ so that $latex {dk = r_2}$. Taking norms, this means that $latex {N(dk) = N(d)N(k) = N(r_2)}$. As $latex {N(d)}$ and $latex {N(k)}$ are both at least $latex {1}$, this means that $latex {N(d) \leq N(r_2)}$.

This is true for every common divisor $latex {d}$, and so $latex {N(r_2)}$ is at least as large as the norm of any common divisor of $latex {z}$ and $latex {w}$. Thus $latex {r_2}$ is a greatest common divisor.

The argument carries on in the same way for when there are more steps in the algorithm. $latex \Box$

Theorem 8 The greatest common divisor of $latex {z}$ and $latex {w}$ is well-defined, up to multiplication by $latex {\pm 1, \pm i}$. In other words, if $latex {\gcd(z,w)}$ is a greatest common divisor of $latex {z}$ and $latex {w}$, then all greatest common divisors of $latex {z}$ and $latex {w}$ are given by $latex {\pm \gcd(z,w), \pm i \gcd(z,w)}$.

Proof: Suppose $latex {d}$ is a greatest common divisor, and let $latex {\gcd(z,w)}$ denote a greatest common divisor resulting from an application of the Euclidean Algorithm. Then we know that $latex {d \mid \gcd(z,w)}$, so that there is some $latex {k}$ so that $latex {dk = \gcd(z,w)}$. Taking norms, we see that $latex {N(d)N(k) = N(\gcd(z,w))}$.

But as both $latex {d}$ and $latex {\gcd(z,w)}$ are greatest common divisors, we must have that $latex {N(d) = N(\gcd(z,w))}$. So $latex {N(k) = 1}$. The only Gaussian integers with norm one are $latex {\pm 1, \pm i}$, so we have that $latex {du = \gcd(z,w)}$ where $latex {u}$ is one of the four Gaussian units, $latex {\pm 1, \pm i}$.

Conversely, it’s clear that the four numbers $latex {\pm \gcd(z,w), \pm i \gcd(z,w)}$ are all greatest common divisors. $latex \Box$

Now that we have the Euclidean Algorithm, we can go towards unique factorization in $latex {\mathbb{Z}[i]}$. Let $latex {g}$ denote a greatest common divisor of $latex {z}$ and $latex {w}$. Reverse substitution in the Euclidean Algorithm shows that we can find Gaussian integer solutions $latex {x,y}$ to the (complex) linear Diophantine equation
\begin{equation}
zx + wy = g.
\end{equation}
Let’s see an example.

Example 2 Consider $latex {32 + 9i}$ and $latex {4 + 11i}$. The Euclidean Algorithm looks like
\begin{align}
32 + 9i &= (4 + 11i)(2 – 2i) + 2 – 5i, \\
4 + 11i &= (2 – 5i)(-2 + i) + 3 – i, \\
2 – 5i &= (3-i)(1-i) – i, \\
3 – i &= -i (1 + 3i) + 0.
\end{align}
So we know that $latex {-i}$ is a greatest common divisor of $latex {32 + 9i}$ and $latex {4 + 11i}$, and so we know that $latex {32+9i}$ and $latex {4 + 11i}$ are relatively prime. Let us try to find a solution to the Diophantine equation
\begin{equation}
x(32 + 9i) + y(4 + 11i) = 1.
\end{equation}
Performing reverse substitution, we see that
\begin{align}
-i &= (2 – 5i) – (3-i)(1-i) \\
&= (2 – 5i) – (4 + 11i – (2-5i)(-2 + i))(1-i) \\
&= (2 – 5i) – (4 + 11i)(1 – i) + (2 – 5i)(-2 + i)(1 – i) \\
&= (2 – 5i)(3i) – (4 + 11i)(1 – i) \\
&= (32 + 9i – (4 + 11i)(2 – 2i))(3i) – (4 + 11i)(1 – i) \\
&= (32 + 9i) 3i – (4 + 11i)(2 – 2i)(3i) – (4 + 11i)(1-i) \\
&= (32 + 9i) 3i – (4 + 11i)(7 + 5i).
\end{align}
Multiplying this through by $latex {i}$, we have that
\begin{equation}
1 = (32 + 9i) (-3) + (4 + 11i)(5 – 7i).
\end{equation}
So one solution is $latex {(x,y) = (-3, 5 – 7i)}$.

Although this looks more complicated, the process is the same as in the case over the regular integers. The apparent higher difficulty comes mostly from our lack of familiarity with basic arithmetic in $latex {\mathbb{Z}[i]}$.
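
If you’d like to check such a computation without doing the arithmetic by hand, the whole procedure mechanizes nicely. Here is a sketch of mine, building on the gaussian_divide sketch from earlier, that reproduces the chain of remainders from Example 2:

# Euclidean algorithm in Z[i], printing each step of the remainder chain.
def gaussian_gcd(z, w):
    while w != 0:
        q, r = gaussian_divide(z, w)   # the rounding-based division from before
        print "%s = %s * %s + %s" % (z, w, q, r)
        z, w = w, r
    return z                            # last nonzero remainder: a gcd

print gaussian_gcd(32 + 9j, 4 + 11j)    # ends with the gcd -1j, as computed above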

The rest of the argument is now exactly as in the integers.

Theorem 9 Suppose that $latex {z, w}$ are relatively prime, and that $latex {z \mid wv}$. Then $latex {z \mid v}$.

Proof: This is left as an exercise (and will appear on the next midterm in some form — cheers to you if you’ve read this far in these notes). But it’s now almost the same as in the regular integers. $latex \Box$

Theorem 10 Let $latex {z}$ be a Gaussian integer with $latex {N(z) > 1}$. Then $latex {z}$ can be written uniquely as a product of Gaussian primes, up to multiplication by one of the Gaussian units $latex {\pm 1, \pm i}$.

Proof: We only sketch part of the proof. There are multiple ways of doing this, but we present the one most similar to what we’ve done for the integers. If there are Gaussian integers without unique factorization, then among them there is one of minimal norm (there may be several sharing that norm; pick any one). So let $latex {z}$ be a Gaussian integer of minimal norm without unique factorization. Then we can write
\begin{equation}
p_1 p_2 \cdots p_k = z = q_1 q_2 \cdots q_\ell,
\end{equation}
where the $latex {p}$ and $latex {q}$ are all primes. As $latex {p_1 \mid z = q_1 q_2 \cdots q_\ell}$, we know that $latex {p_1}$ divides one of the $latex {q}$ (by Theorem 9), and so (up to units) we can say that $latex {p_1}$ is one of the $latex {q}$ primes. We can divide each side by $latex {p_1}$ and we get two supposedly different factorizations of a Gaussian integer of norm $latex {N(z)/N(p_1) < N(z)}$, which is less than the least norm of an integer without unique factorization (by what we supposed). This is a contradiction, and we can conclude that there are no Gaussian integers without unique factorization. $latex \Box$

If this seems unclear, I recommend reviewing this proof and the proof of unique factorization for the regular integers. I should also mention that one can modify the proof of unique factorization for $latex {\mathbb{Z}}$ as given in the course textbook as well (since it is a bit different than what we have done). Further, the course textbook gives a proof of unique factorization for $latex {\mathbb{Z}[i]}$ in Chapter 36, which is very similar to the proof sketched above (although the proof of Theorem 9 is very different).

3. An application to $latex {y^2 = x^3 – 1}$.

We now consider the nonlinear Diophantine equation $latex {y^2 = x^3 – 1}$, where $latex {x,y}$ are in $latex {\mathbb{Z}}$. This is hard to solve over the integers, but by going up to $latex {\mathbb{Z}[i]}$, we can determine all solutions.

In $latex {\mathbb{Z}[i]}$, we can rewrite $$ y^2 + 1 = (y + i)(y – i) = x^3. \tag{1}$$
We claim that $latex {y+i}$ and $latex {y-i}$ are relatively prime. To see this, suppose that $latex {d}$ is a common divisor of $latex {y+i}$ and $latex {y-i}$. Then $latex {d \mid (y + i) – (y – i) = 2i}$. It happens to be that $latex {2i = (1 + i)^2}$, and that $latex {(1 + i)}$ is prime. To see this, we show the following.

Lemma 11 Suppose $latex {z}$ is a Gaussian integer, and $latex {N(z) = p}$ is a regular prime. Then $latex {z}$ is a Gaussian prime.

Proof: Suppose that $latex {z}$ factors nontrivially as $latex {z = ab}$. Then taking norms, $latex {N(z) = N(a)N(b)}$, and so we get a nontrivial factorization of $latex {N(z)}$. When $latex {N(z)}$ is a prime, then there are no nontrivial factorizations of $latex {N(z)}$, and so $latex {z}$ must have no nontrivial factorization. $latex \Box$

As $latex {N(1+i) = 2}$, which is a prime, we see that $latex {(1 + i)}$ is a Gaussian prime. So $latex {d \mid (1 + i)^2}$, which means that $latex {d}$ is either $latex {1, (1 + i)}$, or $latex {(1+i)^2}$ (up to multiplication by a Gaussian unit).

Suppose we are in the case of the latter two, so that $latex {(1+i) \mid d}$. Then as $latex {d \mid (y + i)}$, we know that $latex {(1 + i) \mid x^3}$. Taking norms, we have that $latex {2 \mid x^6}$.

By unique factorization in $latex {\mathbb{Z}}$, we know that $latex {2 \mid x}$. This means that $latex {4 \mid x^2}$, which allows us to conclude that $latex {x^3 \equiv 0 \pmod 4}$. Going back to the original equation $latex {y^2 + 1 = x^3}$, we see that $latex {y^2 + 1 \equiv 0 \pmod 4}$, which means that $latex {y^2 \equiv 3 \pmod 4}$. A quick check shows that $latex {y^2 \equiv 3 \pmod 4}$ has no solutions $latex {y}$ in $latex {\mathbb{Z}/4\mathbb{Z}}$.

So we rule out the case when $latex {(1 + i) \mid d}$, and we are left with $latex {d}$ being a unit. This is exactly the case that $latex {y+i}$ and $latex {y-i}$ are relatively prime.

Recall that $latex {(y+i)(y-i) = x^3}$. As $latex {y+i}$ and $latex {y-i}$ are relatively prime and their product is a cube, by unique factorization in $latex {\mathbb{Z}[i]}$ we know that $latex {y+i}$ and $latex {y-i}$ must each be a Gaussian cube. (Strictly, they are cubes up to multiplication by a Gaussian unit; but each Gaussian unit is itself a cube, e.g. $latex {i = (-i)^3}$, so the unit can be absorbed into the cube.) Then we can write $latex {y+i = (m + ni)^3}$ for some Gaussian integer $latex {m + ni}$. Expanding, we see that
\begin{equation}
y+i = m^3 – 3mn^2 + i(3m^2n – n^3).
\end{equation}
Equating real and imaginary parts, we have that
\begin{align}
y &= m(m^2 – 3n^2) \\
1 &= n(3m^2 – n^2).
\end{align}
This second line shows that $latex {n \mid 1}$. As $latex {n}$ is a regular integer, we see that $latex {n = 1}$ or $latex {-1}$.

If $latex {n = 1}$, then that line becomes $latex {1 = (3m^2 – 1)}$, or after rearranging $latex {2 = 3m^2}$. This has no solutions.

If $latex {n = -1}$, then that line becomes $latex {1 = -(3m^2 – 1)}$, or after rearranging $latex {0 = 3m^2}$. This has the solution $latex {m = 0}$, so that $latex {y+i = (-i)^3 = i}$, which means that $latex {y = 0}$. Then from $latex {y^2 + 1 = x^3}$, we see that $latex {x = 1}$.

And so the only solution is $latex {(x,y) = (1,0)}$, and there are no other solutions.
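
As a quick sanity check (my own aside, not part of the original note), a brute-force search confirms that no other solutions appear in a large range:

import math

# Search for integer solutions to y^2 = x^3 - 1 with 1 <= x < 10000.
for x in range(1, 10000):
    t = x ** 3 - 1
    s = int(math.sqrt(t))
    # Guard against floating-point rounding by testing both s and s + 1.
    if s * s == t or (s + 1) ** 2 == t:
        print "solution with x =", x
# Only x = 1 (giving y = 0) is printed, consistent with the theorem.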

4. Other Rings

The Gaussian integers have many of the same properties as the regular integers, even though there are some differences. We could go further. For example, we might consider the following integer-like sets,
\begin{equation}
\mathbb{Z}[\sqrt{d}] = \{ a + b \sqrt{d} : a,b \in \mathbb{Z} \}.
\end{equation}
One can add, subtract, and multiply these together in similar ways to how we can add, subtract, and multiply together integers, or Gaussian integers.

We might ask what properties these other integer-like sets have. For instance, do they have unique factorization?

More generally, there is a better name than ”integer-like set” for this sort of construction.

Suppose $latex {R}$ is a collection of elements, and it makes sense to add, subtract, and multiply these elements together. Further, we want addition and multiplication to behave similarly to how they behave for the regular integers. In particular, if $latex {r}$ and $latex {s}$ are elements in $latex {R}$, then we want $latex {r + s = s + r}$ to be in $latex {R}$; we want something that behaves like $latex {0}$ in the sense that $latex {r + 0 = r}$; for each $latex {r}$, we want another element $latex {-r}$ so that $latex {r + (-r) = 0}$; we want $latex {r \cdot s = s \cdot r}$; we want something that behaves like $latex {1}$ in the sense that $latex {r \cdot 1 = r}$ for all $latex {r}$; and we want $latex {r(s_1 + s_2) = r s_1 + r s_2}$. Such a collection is called a ring. (More completely, this is called a commutative unital ring, but that’s not important.)

It is not important that you explicitly remember exactly what the definition of a ring is. The idea is that there is a name for things that are “integer-like,” and we might wonder which of the properties we have been thinking of as properties of the integers are actually properties of rings.

As a total aside: there are very many more rings too, things that look much more different than the integers. This is one of the fundamental questions that leads to the area of mathematics called Abstract Algebra. With an understanding of abstract algebra, one could then focus on these general number theoretic problems in an area of math called Algebraic Number Theory.

5. The rings $latex {\mathbb{Z}[\sqrt{d}]}$

We can describe some of the specific properties of $latex {\mathbb{Z}[\sqrt{d}]}$, and suggest how some of the ideas we’ve been considering do (or don’t) generalize. For a general element $latex {n = a + b \sqrt{d}}$, we can define the conjugate $latex {\overline{n} = a – b\sqrt {d}}$ and the norm $latex {N(n) = n \cdot \overline{n} = a^2 – d b^2}$. We call those elements $latex {u}$ with $latex {N(u) = 1}$ the units in $latex {\mathbb{Z}[\sqrt{d}]}$.

Some of the definitions we’ve been using turn out to not generalize so easily, or in quite the ways we expect. If $latex {n}$ doesn’t have a nontrivial factorization (meaning that we cannot write $latex {n = ab}$ with $latex {N(a), N(b) \neq 1}$), then we call $latex {n}$ an irreducible. In the cases of $latex {\mathbb{Z}}$ and $latex {\mathbb{Z}[i]}$, we would have called these elements prime.

In general, we call a number $latex {p}$ in $latex {\mathbb{Z}[\sqrt{d}]}$ a prime if $latex {p}$ has the property that $latex {p \mid ab}$ means that $latex {p \mid a}$ or $latex {p \mid b}$. Of course, in the cases of $latex {\mathbb{Z}}$ and $latex {\mathbb{Z}[i]}$, we showed that irreducibles are primes. But it turns out that this is not usually the case.

Let us look at $latex {\mathbb{Z}[\sqrt{-5}]}$ for a moment. In particular, we can write $latex {6}$ in two ways as
\begin{equation}
6 = 2 \cdot 3 = (1 + \sqrt{-5})(1 – \sqrt{-5}).
\end{equation}
Although it’s a bit challenging to show, these are the only two fundamentally different factorizations of $latex {6}$ in $latex {\mathbb{Z}[\sqrt{-5}]}$. One can show (it’s not very hard, but it’s not particularly illuminating to do here) that neither $latex {2}$ or $latex {3}$ divides $latex {(1 + \sqrt{-5})}$ or $latex {(1 – \sqrt{-5})}$ (and vice versa), which means that none of these four numbers are primes in our more general definition. One can also show that all four numbers are irreducible.
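
To give the flavor of the irreducibility argument (this computation is not in the original note, but it uses only the norm, here $latex {N(a + b\sqrt{-5}) = a^2 + 5b^2}$): if $latex {2 = mn}$ were a nontrivial factorization, then taking norms would give
\begin{equation}
4 = N(2) = N(m)N(n),
\end{equation}
forcing $latex {N(m) = N(n) = 2}$. But $latex {a^2 + 5b^2 = 2}$ has no integer solutions, so $latex {2}$ is irreducible. The same norm computation, using $latex {N(3) = 9}$ and $latex {N(1 \pm \sqrt{-5}) = 6}$, shows that $latex {3}$ and $latex {1 \pm \sqrt{-5}}$ are irreducible as well.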

What does this mean? This means that $latex {6}$ can be factored into irreducibles in fundamentally different ways, and that $latex {\mathbb{Z}[\sqrt{-5}]}$ does not have unique factorization.

It’s a good thought exercise to think about what is really different between $latex {\mathbb{Z}[\sqrt{-5}]}$ and $latex {\mathbb{Z}}$. At the beginning of this course, it seemed extremely obvious that $latex {\mathbb{Z}}$ had unique factorization. But in hindsight, is it really so obvious?

Understanding when there is and is not unique factorization in $latex {\mathbb{Z}[\sqrt{d}]}$ is something that people are still trying to understand today. The fact is that we don’t know! In particular, we really don’t know very much when $latex {d}$ is positive.

One reason why can be seen in $latex {\mathbb{Z}[\sqrt{2}]}$. If $latex {n = a + b \sqrt{2}}$, then $latex {N(n) = a^2 – 2 b^2}$. A very basic question that we can ask is what are the units? That is, which $latex {n}$ have $latex {N(n) = 1}$?

Here, that means trying to solve the equation $$ a^2 – 2 b^2 = 1. \tag{2}$$
We have seen this equation a few times before. On the second homework assignment, I asked you to show that there were infinitely many solutions to this equation by finding lines and intersecting them with hyperbolas. We began to investigate this Diophantine equation because each solution leads to another square-triangular number.

So there are infinitely many units in $latex {\mathbb{Z}[\sqrt{2}]}$. This is strange! For instance, $latex {3 + 2 \sqrt{2}}$ is a unit, which means that it behaves just like $latex {\pm 1}$ in $latex {\mathbb{Z}}$, or like $latex {\pm 1, \pm i}$ in $latex {\mathbb{Z}[i]}$. Very often, the statements we’ve been looking at and proving are true “up to multiplication by units.” Since there are infinitely many units in $latex {\mathbb{Z}[\sqrt 2]}$, it can be annoying even to determine whether two numbers are actually the same up to multiplication by units.
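
For instance, starting from the solution $latex {(3,2)}$ and repeatedly multiplying by $latex {3 + 2\sqrt{2}}$ produces as many units as you like. Here is a small sketch of mine, using the rule $latex {(a + b\sqrt{2})(3 + 2\sqrt{2}) = (3a + 4b) + (2a + 3b)\sqrt{2}}$:

# Generate units of Z[sqrt(2)] as powers of 3 + 2*sqrt(2), in exact integer arithmetic.
a, b = 3, 2
for _ in range(4):
    print (a, b), "norm:", a * a - 2 * b * b      # the norm is always 1
    a, b = 3 * a + 4 * b, 2 * a + 3 * b
# (3, 2), (17, 12), (99, 70), (577, 408), each with norm 1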

As you look further, there are many more strange and interesting behaviours. It is really interesting to see what properties are very general, and what properties vary a lot. It is also interesting to see the different ways in which properties we’re used to, like unique factorization, can fail.

For instance, we have seen that $latex {\mathbb{Z}[\sqrt{-5}]}$ does not have unique factorization. We showed this by seeing that $latex {6}$ factors in two fundamentally different ways. In fact, some numbers in $latex {\mathbb{Z}[\sqrt{-5}]}$ do factor uniquely, and others do not. But if one does not, then it factors in at most two fundamentally different ways.

In other rings, you can have numbers which factor in more fundamentally different ways. The actual behaviour here is also really poorly understood, and there are mathematicians who are actively pursuing these topics.

It’s a very large playground out there.


A Brief Notebook on Cryptography

This is a post written for my Spring 2016 Number Theory class at Brown University. It was written and originally presented from a Jupyter Notebook, and changed for presentation on this site. There is a version of the notebook available at github.

Cryptography

Recall the basic setup of cryptography. We have two people, Anabel and Bartolo. Anabel wants to send Bartolo a secure message. What do we mean by “secure?” We mean that even though that dastardly Eve might intercept and read the transmitted message, Eve won’t learn anything about the actual message Anabel wants to send to Bartolo.

Before the 1970s, the only way for Anabel to send Bartolo a secure message required Anabel and Bartolo to get together beforehand and agree on a secret method of communicating. To communicate, Anabel first decides on a message. The original message is sometimes called the plaintext. She then encrypts the message, producing a ciphertext.

She then sends the ciphertext. If Eve gets hold of the ciphertext, she shouldn’t be able to decode it (unless it’s a poor encryption scheme).

When Bartolo receives the ciphertext, he can decrypt it to recover the plaintext message, since he agreed on the encryption scheme beforehand.

Caesar Shift

One of the first known instances of cryptography comes from Julius Caesar. It was not a very good method. To encrypt a message, he shifted each letter over by 2, so for instance “A” becomes “C”, and “B” becomes “D”, and so on.

Let’s see what sort of message comes out.

alpha = "abcdefghijklmnopqrstuvwxyz".upper()
punct = ",.?:;'\n "
from functools import partial

def shift(l, s=2):
    l = l.upper()
    return alpha[(alpha.index(l) + s) % 26]

def caesar_shift_encrypt(m, s=2):
    m = m.upper()
    c = "".join(map(partial(shift, s=s), m))
    return c

def caesar_shift_decrypt(c, s=-2):
    c = c.upper()
    m = "".join(map(partial(shift, s=s), c))
    return m
print "Original Message: HI"
print "Ciphertext:", caesar_shift_encrypt("hi")
Original Message: HI
Ciphertext: JK
m = """To be, or not to be, that is the question:
Whether 'tis Nobler in the mind to suffer
The Slings and Arrows of outrageous Fortune,
Or to take Arms against a Sea of troubles,
And by opposing end them."""

m = "".join([l for l in m if not l in punct])

print "Original Message:"
print m

print
print "Ciphertext:"
tobe_ciphertext = caesar_shift_encrypt(m)
print tobe_ciphertext
Original Message:
TobeornottobethatisthequestionWhethertisNoblerinthemindtosuffer
TheSlingsandArrowsofoutrageousFortuneOrtotakeArmsagainstaSeaof
troublesAndbyopposingendthem

Ciphertext:
VQDGQTPQVVQDGVJCVKUVJGSWGUVKQPYJGVJGTVKUPQDNGTKPVJGOKPFVQUWHHGT
VJGUNKPIUCPFCTTQYUQHQWVTCIGQWUHQTVWPGQTVQVCMGCTOUCICKPUVCUGCQH
VTQWDNGUCPFDAQRRQUKPIGPFVJGO
print "Decrypted first message:", caesar_shift_decrypt("JK")
Decrypted first message: HI
print "Decrypted second message:"
print caesar_shift_decrypt(tobe_ciphertext)
Decrypted second message:
TOBEORNOTTOBETHATISTHEQUESTIONWHETHERTISNOBLERINTHEMINDTOSUFFER
THESLINGSANDARROWSOFOUTRAGEOUSFORTUNEORTOTAKEARMSAGAINSTASEAOF
TROUBLESANDBYOPPOSINGENDTHEM

Is this a good encryption scheme? No, not really. There are only 26 different things to try. This can be decrypted very quickly and easily, even if entirely by hand.
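
In fact, we can have the computer try all 26 shifts and simply scan the output for the one readable line. (This little attack is my own addition to the notebook.)

# Brute-force attack on the Caesar shift: try every possible shift.
def caesar_brute_force(c):
    for s in range(26):
        print s, caesar_shift_decrypt(c, s=-s)

caesar_brute_force("JK")
# The line for s = 2 reads HI; the other 25 lines are gibberish.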

Substitution Cipher

A slightly different scheme is to choose a different letter for each letter. For instance, maybe “A” actually means “G” while “B” actually means “E”. We call this a substitution cipher as each letter is substituted for another.

import random
permutation = list(alpha)
random.shuffle(permutation)
# Display the new alphabet
print alpha
subs = "".join(permutation)
print subs
ABCDEFGHIJKLMNOPQRSTUVWXYZ
EMJSLZKAYDGTWCHBXORVPNUQIF
def subs_cipher_encrypt(m):
    m = "".join([l.upper() for l in m if not l in punct])
    return "".join([subs[alpha.find(l)] for l in m])

def subs_cipher_decrypt(c):
    c = "".join([l.upper() for l in c if not l in punct])
    return "".join([alpha[subs.find(l)] for l in c])
m1 = "this is a test"

print "Original message:", m1
c1 = subs_cipher_encrypt(m1)

print
print "Encrypted Message:", c1

print
print "Decrypted Message:", subs_cipher_decrypt(c1)
Original message: this is a test

Encrypted Message: VAYRYREVLRV

Decrypted Message: THISISATEST
print "Original message:"
print m

print
c2 = subs_cipher_encrypt(m)
print "Encrypted Message:"
print c2

print
print "Decrypted Message:"
print subs_cipher_decrypt(c2)
Original message:
TobeornottobethatisthequestionWhethertisNoblerinthemindtosuffer
TheSlingsandArrowsofoutrageousFortuneOrtotakeArmsagainstaSeaof
troublesAndbyopposingendthem

Encrypted Message:
VHMLHOCHVVHMLVAEVYRVALXPLRVYHCUALVALOVYRCHMTLOYCVALWYCSVHRPZZLO
VALRTYCKRECSEOOHURHZHPVOEKLHPRZHOVPCLHOVHVEGLEOWREKEYCRVERLEHZ
VOHPMTLRECSMIHBBHRYCKLCSVALW

Decrypted Message:
TOBEORNOTTOBETHATISTHEQUESTIONWHETHERTISNOBLERINTHEMINDTOSUFFER
THESLINGSANDARROWSOFOUTRAGEOUSFORTUNEORTOTAKEARMSAGAINSTASEAOF
TROUBLESANDBYOPPOSINGENDTHEM

Is this a good encryption scheme? Still no. In fact, these are routinely used as puzzles in newspapers or puzzle books, because given a reasonably long message, it’s pretty easy to figure it out using things like the frequency of letters.

For instance, in English, the letters RSTLNEAO are very common, and much more common than other letters. So one might start to guess that the most common letter in the ciphertext is one of these. More powerfully, one might try to see which pairs of letters (called bigrams) are common, and look for those in the ciphertext, and so on.
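
Counting the letters in the ciphertext from above takes only a couple of lines. (This frequency count is my own addition to the notebook.)

# Count letter frequencies in the substitution ciphertext c2 computed above.
from collections import Counter

for letter, count in Counter(c2).most_common(5):
    print letter, count
# The most frequent ciphertext letters likely stand in for common
# plaintext letters such as E, T, or O.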

From this sort of thought process, encryption schemes that ultimately rely on a single secret alphabet (even if it’s not our typical alphabet) fall pretty quickly. So… what about polyalphabetic ciphers? For instance, what if each group of 5 letters uses a different set of alphabets?

This is a great avenue for exploration, and there are lots of such encryption schemes that we won’t discuss in this class. But a class on cryptography (or a book on cryptography) would certainly go into some of these. It might also be a reasonable avenue of exploration for a final project.

The German Enigma

One very well-known polyalphabetic encryption scheme is the German Enigma used before and during World War II. This was by far the most complicated cryptosystem in use up to that point, and the story of how it was broken is a long and tricky one. Initial successes towards breaking the Enigma came through the work of Polish mathematicians, fearful (and rightfully so) of the Germans across the border. By 1937, they had built replicas and understood many details of the Enigma system. But in 1938, the Germans shifted to a more secure and complicated cryptosystem. Just weeks before the German invasion of Poland and the beginning of World War II, Polish mathematicians sent their work and notes to mathematicians in France and Britain, who carried this work forward.

The second major progress towards breaking the Enigma occurred largely in Bletchley Park in Britain, a communication center with an enormous dedicated effort to breaking the Enigma. This is where the tragic tale of Alan Turing, recently popularized through the movie The Imitation Game, begins. This is also the origin tale for modern computers, as Alan Turing developed electromechanical computers to help break the Enigma.

The Enigma worked by having a series of cogs or rotors whose positions determined a substitution cipher. After each letter, the positions were changed through a mechanical process. An Enigma machine is a very impressive machine to look at [and the “computer” Alan Turing used to help break them was also very impressive].

Below, I have implemented an Enigma, by default set to 4 rotors. I don’t expect you to understand the implementation. The interesting part is how meaningless the output message looks. Note that I’ve kept the spacing and punctuation from the original message for easier comparison. Really, you wouldn’t do this.

The plaintext used for demonstration is from Wikipedia’s article on the Enigma.

from random import shuffle,randint,choice  
from copy import copy  
num_alphabet = range(26)   
    
def en_shift(l, n):                         # Rotate cogs and arrays
    return l[n:] + l[:n]  
      
    
class Cog:                                  # Each cog has a substitution cipher  
    def __init__(self):  
        self.shuf = copy(num_alphabet)  
        shuffle(self.shuf)                  # Create the individual substitution cipher
        return                              # Really, these were not random
    
    def subs_in(self,i):                    # Perform a substitution
        return self.shuf[i] 
    
    def subs_out(self,i):                   # Perform a reverse substitution
        return self.shuf.index(i)
    
    def rotate(self):                       # Rotate the cog by 1.
        self.shuf = en_shift(self.shuf, 1)
        
    def setcog(self,a):                     # Set up a particular substitution
        self.shuf = a  

        
class Enigma:  
    def __init__(self, numcogs,readability=True):  
        self.readability = readability  
        self.numcogs = numcogs  
        self.cogs = []  
        self.oCogs = []                     # "Original Cog positions"  
          
        for i in range(0,self.numcogs):     # Create the cogs
            self.cogs.append(Cog())
            self.oCogs.append(self.cogs[i].shuf)  
            
        refabet = copy(num_alphabet) 
        self.reflector = copy(num_alphabet)  
        while len(refabet) > 0:             # Pair letters in the reflector
            a = choice(refabet)  
            refabet.remove(a)  
            
            b = choice(refabet)  
            refabet.remove(b)  
            
            self.reflector[a] = b  
            self.reflector[b] = a
            
  
    def print_setup(self): # Print out substitution setup.
        print "Enigma Setup:\nCogs: ",self.numcogs,"\nCog arrangement:"  
        for i in range(0,self.numcogs):  
            print self.cogs[i].shuf  
        print "Reflector arrangement:\n",self.reflector,"\n"  
          
    def reset(self):  
        for i in range(0,self.numcogs):  
            self.cogs[i].setcog(self.oCogs[i])  
              
    def encode(self,text):  
        t = 0     # Ticker counter  
        ciphertext=""  
        for l in text.lower():  
            num = ord(l) % 97  
            # Handle special characters for readability
            if (num>25 or num<0):  
                if (self.readability):
                    ciphertext += l   
                else:  
                    pass  
            
            else:
                # Pass through cogs, reflect, then return through cogs
                t += 1  
                for i in range(self.numcogs): 
                    num = self.cogs[i].subs_in(num)  
                      
                num = self.reflector[num]  
                  
                for i in range(self.numcogs):  
                    num = self.cogs[self.numcogs-i-1].subs_out(num)  
                ciphertext += "" + chr(97+num)
                  
                # Rotate cogs
                for i in range(self.numcogs):
                    if ( t % ((i*6)+1) == 0 ):
                        self.cogs[i].rotate()  
        return ciphertext.upper()  
  
plaintext="""When placed in an Enigma, each rotor can be set to one of 26 possible positions. 
When inserted, it can be turned by hand using the grooved finger-wheel, which protrudes from 
the internal Enigma cover when closed. So that the operator can know the rotor's position, 
each had an alphabet tyre (or letter ring) attached to the outside of the rotor disk, with 
26 characters (typically letters); one of these could be seen through the window, thus indicating 
the rotational position of the rotor. In early models, the alphabet ring was fixed to the rotor 
disk. A later improvement was the ability to adjust the alphabet ring relative to the rotor disk. 
The position of the ring was known as the Ringstellung ("ring setting"), and was a part of the 
initial setting prior to an operating session. In modern terms it was a part of the 
initialization vector."""

# Remove newlines for encryption
pt = "".join([l.upper() for l in plaintext if not l == "\n"])
# pt = "".join([l.upper() for l in plaintext if not l in punct])
  
x = Enigma(4)
#x.print_setup()  
  
print "Original Message:"
print pt

print
ciphertext = x.encode(pt)  
print "Encrypted Message"
print ciphertext

print
# Decryption and encryption are symmetric, so to decode we reset and re-encrypt.
x.reset()  
decipheredtext = x.encode(ciphertext)  
print "Decrypted Message:"
print decipheredtext
Original Message:
WHEN PLACED IN AN ENIGMA, EACH ROTOR CAN BE SET TO ONE OF 26 POSSIBLE POSITIONS. 
WHEN INSERTED, IT CAN BE TURNED BY HAND USING THE GROOVED FINGER-WHEEL, WHICH 
PROTRUDES FROM THE INTERNAL ENIGMA COVER WHEN CLOSED. SO THAT THE OPERATOR CAN 
KNOW THE ROTOR'S POSITION, EACH HAD AN ALPHABET TYRE (OR LETTER RING) ATTACHED 
TO THE OUTSIDE OF THE ROTOR DISK, WITH 26 CHARACTERS (TYPICALLY LETTERS); ONE 
OF THESE COULD BE SEEN THROUGH THE WINDOW, THUS INDICATING THE ROTATIONAL 
POSITION OF THE ROTOR. IN EARLY MODELS, THE ALPHABET RING WAS FIXED TO THE 
ROTOR DISK. A LATER IMPROVEMENT WAS THE ABILITY TO ADJUST THE ALPHABET RING 
RELATIVE TO THE ROTOR DISK. THE POSITION OF THE RING WAS KNOWN AS THE 
RINGSTELLUNG ("RING SETTING"), AND WAS A PART OF THE INITIAL SETTING PRIOR TO 
AN OPERATING SESSION. IN MODERN TERMS IT WAS A PART OF THE INITIALIZATION VECTOR.

Encrypted Message
RYQY LWMVIH OH EK GGWFBO, PDZN FNALL ZEP AP IUD JM FEV UG 26 HGRKODYT XWOLIYTSA.
 TJQK KBCNVMHG, VS YBB XI BYZTVU XP BGWP IFNIY UQZ IQUPTKJ QXPVYE-EAOVT, PYMNN
 RXFIOWYVK LWNN OJL JXZBTRVP YMQCXX STJQF KXNZ LXSWDC. LK KGRH ZKT RDZUMCWA GKY
 LYKC LLV ZNFAD'Z BAXLDFTH, QSHA NBY RZ SXEXDAMP FXUS (WC RWHZOX ARWP) VKYNDJQP
 WL OLW ICBALPI RV ISD GXSDQ ZKMJ, OHNN 26 IBGUHNYIQE (GDBNTEBWP QGECUNA); QPG
 SQ JYUQX NDBWB NM AAUF RUNKYDL OLD LMPZQV, ZMKP SQZFDEHKCI KLJ AKPZLRAZZX 
QXPJEXLV HS AFG IIMGK. NT PVAYP RFOKYV, XUY CHZAQEXG DUSS CKN YENSR FX PLS CYMZK
 YHTB. J RLZLB DNIDWNWAIOO HOK PGM HVXXHIJ VV SCTNIC QGM TFKPQYPQ XFQU PSAECHTX
 RA QUW CUNRV JCUW. LCP GFKZLLXW LN YWF OAQM CMF YVTQH EI JAU LZTXEYDGDFLQ 
("CDBS TQHUEIR"), BLN QIG O UHJJ DV DQY KQGVFKI EBEFRCT WEZWG LJ WC NIAUKIYQJ
 EHZLZLS. TY XPZSBL IPGWN ZS IUY E YQMD BW NVF QYWSDMTRSNSHYO GJGNUL.

Decrypted Message:
WHEN PLACED IN AN ENIGMA, EACH ROTOR CAN BE SET TO ONE OF 26 POSSIBLE POSITIONS.
 WHEN INSERTED, IT CAN BE TURNED BY HAND USING THE GROOVED FINGER-WHEEL, WHICH
 PROTRUDES FROM THE INTERNAL ENIGMA COVER WHEN CLOSED. SO THAT THE OPERATOR 
CAN KNOW THE ROTOR'S POSITION, EACH HAD AN ALPHABET TYRE (OR LETTER RING)
 ATTACHED TO THE OUTSIDE OF THE ROTOR DISK, WITH 26 CHARACTERS (TYPICALLY 
LETTERS); ONE OF THESE COULD BE SEEN THROUGH THE WINDOW, THUS INDICATING THE
 ROTATIONAL POSITION OF THE ROTOR. IN EARLY MODELS, THE ALPHABET RING WAS FIXED
 TO THE ROTOR DISK. A LATER IMPROVEMENT WAS THE ABILITY TO ADJUST THE ALPHABET
 RING RELATIVE TO THE ROTOR DISK. THE POSITION OF THE RING WAS KNOWN AS THE 
RINGSTELLUNG ("RING SETTING"), AND WAS A PART OF THE INITIAL SETTING PRIOR TO 
AN OPERATING SESSION. IN MODERN TERMS IT WAS A PART OF THE INITIALIZATION VECTOR.

The advent of computers brought a paradigm shift in approaches to cryptography. Prior to computers, one of the ways of maintaining security was to come up with a hidden key and a hidden cryptosystem, and keep both safe merely by not letting anyone know anything about how the system actually worked at all. This has the short, cute name security through obscurity. As the number and types of attacks on cryptosystems are much, much greater with computers, a different model of security and safety became necessary.

It is interesting to note that it is not obvious that security through obscurity is always bad, as long as it’s really really well-hidden. This is relevant to some problems concerning current events and cryptography.

Public Key Cryptography

The new model begins with a slightly different setup. We should think of Anabel and Bartolo as sitting on opposite sides of a classroom, trying to communicate securely even though there are lots of people in the middle of the classroom who might be listening in. In particular, Anabel has something she wants to tell Bartolo.

Instead of keeping the cryptosystem secret, Bartolo tells everyone (in our metaphor, he shouts to the entire classroom) a public key K, and explains how to use it to send him a message. Anabel uses this key to encrypt her message. She then sends this message to Bartolo.

If the system is well-designed, no one will be able to understand the ciphertext even though they all know how the cryptosystem works. This is why the system is called Public Key.

Bartolo receives this message and (using something only he knows) he decrypts the message.

We will learn one such cryptosystem here: RSA, named after Rivest, Shamir, and Adleman — the first major public key cryptosystem.

RSA

Bartolo takes two primes such as $p = 12553$ and $q = 13007$. He notes their product
$$m = pq = 163276871$$
and computes $\varphi(m)$,
$$\varphi(m) = (p-1)(q-1) = 163251312.$$
Finally, he chooses some integer $k$ relatively prime to $\varphi(m)$, like say
$$k = 79921.$$
Then the public key he distributes is
$$ (m, k) = (163276871, 79921).$$

In order to send Bartolo a message using this key, Anabel must convert her message to numbers. She might use the identification A = 11, B = 12, C = 13, … and concatenate her numbers. To send the word CAB, for instance, she would send 131112. Let’s say that Anabel wants to send the message

NUMBER THEORY IS THE QUEEN OF THE SCIENCES

Then she needs to convert this to numbers.

conversion_dict = dict()
alpha = "abcdefghijklmnopqrstuvwxyz".upper()
curnum = 11
for l in alpha:
    conversion_dict[l] = curnum
    curnum += 1
print "Original Message:"
msg = "NUMBERTHEORYISTHEQUEENOFTHESCIENCES"
print msg
print

def letters_to_numbers(m):
    return "".join([str(conversion_dict[l]) for l in m.upper()])

print "Numerical Message:"
msg_num = letters_to_numbers(msg)
print msg_num
Original Message:
NUMBERTHEORYISTHEQUEENOFTHESCIENCES

Numerical Message:
2431231215283018152528351929301815273115152425163018152913191524131529

So Anabel’s message is the number

$$2431231215283018152528351929301815273115152425163018152913191524131529$$

which she wants to encrypt and send to Bartolo. To make this manageable, she cuts the message into 8-digit numbers,

$$24312312, 15283018, 15252835, 19293018, 15273115, 15242516, 30181529, 13191524, 131529.$$

To send her message, she takes one of the 8-digit blocks and raises it to the power of $k$ modulo $m$. That is, to transmit the first block, she computes

$$ 24312312^{79921} \equiv 13851252 \pmod{163276871}.$$

# Secret information
p = 12553
q = 13007
phi = (p-1)*(q-1) # varphi(pq)

# Public information
m = p*q # 163276871
k = 79921

print pow(24312312, k, m)
13851252

She sends this number
$$13851252$$
to Bartolo (maybe by shouting; even though everyone can hear it, no one else can decrypt it). How does Bartolo decrypt this message?

He computes $\varphi(m) = (p-1)(q-1)$ (which he can do because he knows $p$ and $q$ separately), and then finds integers $u$ and $v$ solving
$$ uk = 1 + \varphi(m) v.$$
This can be done quickly through the Euclidean Algorithm.

def extended_euclidean(a, b):
    if b == 0:
        return (1, 0, a)
    else:
        x, y, gcd = extended_euclidean(b, a % b) # Aside: Python has no tail recursion
        return y, x - y * (a // b), gcd          # But it does have meaningful stack traces

# This version comes from Exercise 6.3 in the book, but without recursion
def extended_euclidean2(a, b):
    x = 1
    g = a
    v = 0
    w = b
    while w != 0:
        q = g // w
        t = g - q*w
        s = x - q*v
        x, g = v, w
        v, w = s, t
    y = (g - a*x) // b
    return (x, y, g)

def modular_inverse(a, m):
    x, y, gcd = extended_euclidean(a, m)
    if gcd == 1:
        return x % m
    else:
        return None
print "k, p, q:", k, p, q
print
u = modular_inverse(k,(p-1)*(q-1))
print u
k, p, q: 79921 12553 13007

145604785

In particular, Bartolo computes that his $u = 145604785$. To recover the message, he takes the number $13851252$ sent to him by Anabel and raises it to the $u$ power. He computes
$$ 13851252^{u} \equiv 24312312 \pmod{pq},$$
which we can see must be true as
$$ 13851252^{u} \equiv (24312312^{k})^u \equiv 24312312^{1 + \varphi(pq)v} \equiv 24312312 \pmod{pq}.$$
In this last step, we have used Euler’s Theorem to see that
$$ 24312312^{\varphi(pq)v} \equiv 1 \pmod{pq}.$$

# Checking this power explicitly.
print pow(13851252, 145604785, m)
24312312

Now Bartolo needs to perform this process for each 8-digit chunk that Anabel sent over. Note that the work is very easy: he computes the integer $u$ only once, and then for each ciphertext chunk $c$ he simply computes $c^u \pmod m$, which is very fast thanks to repeated squaring.
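For the curious, here is a minimal sketch of repeated squaring; this is essentially the idea behind Python's built-in three-argument pow, and the helper name power_mod is ours.

# Compute base**exp % mod by walking through the binary expansion
# of exp, squaring as we go. This takes about log_2(exp) squarings.
def power_mod(base, exp, mod):
    result = 1
    base = base % mod
    while exp > 0:
        if exp % 2 == 1:
            result = (result * base) % mod
        base = (base * base) % mod
        exp = exp // 2
    return result

print power_mod(13851252, 145604785, 163276871)
24312312

This agrees with the built-in pow, as it should.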

We do this below, in an automated fashion, step by step.

First, we split the message into 8-digit chunks.

# Break into chunks of 8 digits in length.
def chunk8(message_number):
    cp = str(message_number)
    ret_list = []
    while len(cp) > 7:
        ret_list.append(cp[:8])
        cp = cp[8:]
    if cp:
        ret_list.append(cp)
    return ret_list

msg_list = chunk8(msg_num)
print msg_list
['24312312', '15283018', '15252835', '19293018', '15273115', 
'15242516', '30181529', '13191524', '131529']

This is a numerical representation of the message Anabel wants to send Bartolo. So she encrypts each chunk. This is done below.

# Compute ciphertexts separately on each 8-digit chunk.
def encrypt_chunks(chunked_list):
    ret_list = []
    for chunk in chunked_list:
        ret_list.append(pow(int(chunk), k, m))
    return ret_list

cipher_list = encrypt_chunks(msg_list)
print cipher_list
[13851252, 14944940, 158577269, 117640431, 139757098, 25099917, 
88562046, 6640362, 10543199]

This is the encrypted message. Having computed this, Anabel sends this message to Bartolo.

To decrypt the message, Bartolo uses his knowledge of $u$, which comes from his ability to compute $\varphi(pq)$, and decrypts each part of the message. This is done below.

# Decipher the ciphertexts all in the same way
def decrypt_chunks(chunked_list):
    ret_list = []
    for chunk in chunked_list:
        ret_list.append(pow(int(chunk), u, m))
    return ret_list

decipher_list = decrypt_chunks(cipher_list)
print decipher_list
[24312312, 15283018, 15252835, 19293018, 15273115, 15242516, 
30181529, 13191524, 131529]

Finally, Bartolo concatenates these numbers together and translates them back into letters. Will he get the right message back?

alpha = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

# Collect deciphered texts into a single list, and translate back into letters.
def chunks_to_letters(chunked_list):
    s = "".join([str(chunk) for chunk in chunked_list])
    ret_str = ""
    while s:
        ret_str += alpha[int(s[:2])-11].upper()
        s = s[2:]
    return ret_str

print chunks_to_letters(decipher_list)
NUMBERTHEORYISTHEQUEENOFTHESCIENCES

Yes! Bartolo successfully decrypts the message and sees that Anabel thinks that Number Theory is the Queen of the Sciences. This is a quote from Gauss, the famous mathematician who has been popping up again and again in this course.

Why is this Secure?

Let’s pause to think about why this is secure.

What if someone catches the message in transit? Suppose Eve is eavesdropping and hears Anabel’s first chunk, $13851252$. How would she decrypt it?

Eve knows that she wants to solve
$$ x^k \equiv 13851252 \pmod {pq}$$
or rather
$$ x^{79921} \equiv 13851252 \pmod {163276871}.$$
How can she do that? We can do it because we know to raise this to a particular power depending on $\varphi(163276871)$. But Eve doesn’t know what $\varphi(163276871)$ is, since she can’t factor $163276871$. In fact, computing $\varphi(163276871)$ is exactly as hard as factoring $163276871$.
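One direction of this equivalence is very concrete: if Eve somehow learned $\varphi(m)$, then she would know both $pq = m$ and $p + q = m - \varphi(m) + 1$, so $p$ and $q$ would be the two roots of the quadratic $t^2 - (p+q)t + pq$. Here is a sketch; the helper isqrt (an integer square root via Newton's method) is just for illustration.

# If varphi(m) leaks, factoring m is easy: p and q are the roots of
#   t^2 - (m - varphi(m) + 1)*t + m = 0.
def isqrt(n):
    # Integer square root by Newton's method.
    x = n
    y = (x + 1) // 2
    while y < x:
        x = y
        y = (x + n // x) // 2
    return x

s = 163276871 - 163251312 + 1    # p + q
d = isqrt(s*s - 4*163276871)     # |p - q|, from the discriminant
print (s + d) // 2, (s - d) // 2
13007 12553

And these are exactly Bartolo’s secret primes.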

But if Eve could somehow find $79921$st roots modulo $163276871$, or if Eve could factor $163276871$, then she would be able to decrypt the message. These are both really hard problems! And it’s these problems that give the method its security.

More generally, one might use primes $p$ and $q$ that are each about $200$ digits long, and a fairly large $k$. Then the resulting $m$ would be about $400$ digits, which is far larger than we know how to factor effectively. The reason for choosing a somewhat large $k$ involves security considerations beyond the scope of this segment. The relevant idea here is that since this is a publicly known encryption scheme, many people have strengthened it over the years by making it more secure against every clever attack known. This is another, sometimes overlooked benefit of public key cryptography: since the methodology isn’t secret, everyone can contribute to its security; indeed, it is in the interest of anyone desiring such security to contribute. This is essentially the exact opposite of security through obscurity.

The nature of code being open, public, private, or secret is also very topical in current events. Recently, Volkswagen was caught cheating with its car emissions software, which it had designed to report false outputs, leading to a large scandal. Their software is proprietary and secret, and the deliberate bug went unnoticed for years. Some argue that this means that more software, especially software that serves the public or is subject to strong regulation, should be publicly viewable for analysis.

Another relevant current case is that the code for most voting machines in the United States is proprietary and secret. Hopefully they aren’t cheating!

On the other side, many say that it is necessary for companies to be able to keep software secret for at least some time period in order to recoup the expenses that go into developing it. This is similar to how drug companies hold patents on new drugs for a number of years: a successful new drug pays for its development because the company can charge above the rate the market would otherwise set.

Further, many say that opening some code would open it up to attacks from malicious users who otherwise wouldn’t be able to see the code. Of course, this sounds a lot like trying for security through obscurity.

This is a very relevant and big topic, and the shape it takes over the next few years may very well have long-lasting impacts.

Condensed Version

Now that we’ve gone through it all once, we have a condensed RSA system set up with our $p$, $q$, and $k$ from above. To show how quickly this can all be done with a computer, let’s run through another example right now.

Let us encrypt, transmit, and decrypt the message

“I have never done anything useful. No discovery of mine has made, or is likely to make, directly or indirectly, for good or ill, the least difference to the amenity of the world” (a famous lament of the mathematician G. H. Hardy).

First, we prepare the message for conversion to numbers.

message = """I have never done anything useful. No discovery of mine has made, 
or is likely to make, directly or indirectly, for good or ill, the least 
difference to the amenity of the world"""

message = "".join([l.upper() for l in message if not l in "\n .,"])
print "Our message:\n"+message
Our message:
IHAVENEVERDONEANYTHINGUSEFULNODISCOVERYOFMINEHASMADEORISLIKELY
TOMAKEDIRECTLYORINDIRECTLYFORGOODORILLTHELEASTDIFFERENCETOTHE
AMENITYOFTHEWORLD

Now we convert the message to numbers, and transform those numbers into 8-digit chunks.

numerical_message = letters_to_numbers(message)
print "Our message, converted to numbers:"
print numerical_message
print

plaintext_chunks = chunk8(numerical_message)
print "Separated into 8-digit chunks:"
print plaintext_chunks
Our message, converted to numbers:
1918113215241532152814252415112435301819241731291516312224251419
2913253215283525162319241518112923111415252819292219211522353025
2311211514192815133022352528192414192815133022351625281725251425
2819222230181522151129301419161615281524131530253018151123152419
303525163018153325282214

Separated into 8-digit chunks:
['19181132', '15241532', '15281425', '24151124', '35301819', 
'24173129', '15163122', '24251419', '29132532', '15283525', 
'16231924', '15181129', '23111415', '25281929', '22192115', 
'22353025', '23112115', '14192815', '13302235', '25281924', 
'14192815', '13302235', '16252817', '25251425', '28192222', 
'30181522', '15112930', '14191616', '15281524', '13153025', 
'30181511', '23152419', '30352516', '30181533', '25282214']

We encrypt each chunk by computing $P^k \bmod {m}$ for each plaintext chunk $P$.

ciphertext_chunks = encrypt_chunks(plaintext_chunks)
print ciphertext_chunks
[99080958, 142898385, 80369161, 11935375, 108220081, 82708130,
 158605094, 96274325, 154177847, 121856444, 91409978, 47916550,
 155466420, 92033719, 95710042, 86490776, 15468891, 139085799,
 68027514, 53153945, 139085799, 68027514, 9216376, 155619290,
 83776861, 132272900, 57738842, 119368739, 88984801, 83144549,
 136916742, 13608445, 92485089, 89508242, 25375188]

This is the message that Anabel sends to Bartolo.

To decrypt it, he computes $c^u \bmod m$ for each ciphertext chunk $c$.

deciphered_chunks = decrypt_chunks(ciphertext_chunks)
print "Deciphered chunks:"
print deciphered_chunks
Deciphered chunks:
[19181132, 15241532, 15281425, 24151124, 35301819, 24173129,
 15163122, 24251419, 29132532, 15283525, 16231924, 15181129,
 23111415, 25281929, 22192115, 22353025, 23112115, 14192815,
 13302235, 25281924, 14192815, 13302235, 16252817, 25251425,
 28192222, 30181522, 15112930, 14191616, 15281524, 13153025,
 30181511, 23152419, 30352516, 30181533, 25282214]

Finally, he translates the chunks back into letters.

decoded_message = chunks_to_letters(deciphered_chunks)
print "Decoded Message:"
print decoded_message
Decoded Message:
IHAVENEVERDONEANYTHINGUSEFULNODISCOVERYOFMINEHASMADEORISLIKELY
TOMAKEDIRECTLYORINDIRECTLYFORGOODORILLTHELEASTDIFFERENCETOTHE
AMENITYOFTHEWORLD

Even with large numbers, RSA is pretty fast. But one of the key things one can do with RSA is securely transmit secret keys for other, faster types of encryption that don’t work in a public-key sense. There is a lot of material in this subject, and it’s very important.
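As a toy illustration of this key-transmission idea, reusing $m$, $k$, and $u$ from above: Anabel picks a random session key, locks it with RSA, and then both parties use it for a fast symmetric scheme. The function toy_stream below is a made-up, deliberately insecure stand-in; a real system would use an actual symmetric cipher.

import random

def toy_stream(data, key):
    # XOR each byte against a keystream seeded by the shared key.
    # Applying this twice with the same key recovers the data.
    rng = random.Random(key)
    return [d ^ rng.randint(0, 255) for d in data]

# Anabel: pick a random session key and lock it with RSA.
session_key = random.randint(2, m - 1)
locked_key = pow(session_key, k, m)    # safe to send in the open

secret = [ord(ch) for ch in "THEQUEENOFTHESCIENCES"]
ciphertext = toy_stream(secret, session_key)

# Bartolo: unlock the session key with his secret u, then decrypt.
recovered = pow(locked_key, u, m)
print "".join(chr(b) for b in toy_stream(ciphertext, recovered))
THEQUEENOFTHESCIENCES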

The study of sending and receiving secret messages is called Cryptography. There are lots of interesting related topics, some of which we’ll touch on in class.

The study of analyzing and breaking cryptosystems is called Cryptanalysis, and is something I find quite fun. But it’s also quite intense.

I should mention that in practice, RSA is performed a little bit differently to make it more secure.

Aside: this is the first time I’ve converted a Jupyter Notebook, and it turns out it’s very easy. However, there are a few formatting annoyances that I didn’t consider, but which are too minor to spend time fixing. But now I know!
Posted in Brown University, Expository, Math 420, Math.NT, Mathematics, Programming, Python, Teaching | 3 Comments

Math 420: Second Week Homework

Firstly, we have three administrative notes.

  1. I’ve posted the second homework set. You can find it here.
  2. I’ve also written solutions to the first homework set. You can find those here.
  3. After feedback from the first week, I’m setting stable office hours. My office hours will be from 1-3pm on Monday and 2:30-3:30pm on Tuesday (immediately following our class). [Or we can set up an appointment].

I’ll see you on Tuesday, when we will continue to talk about the Euclidean Algorithm and greatest common divisors.


Posted in Brown University, Math 420, Mathematics, Teaching | Leave a comment

Math 420: First Week Homework and References

Firstly, there are three administrative notes.

  1. I’ve posted the first homework set. This is due on Thursday, and you can find it here.
  2. I haven’t set official office hour times yet, but I will have office hours from noon to 2pm on Monday, 1 Feb 2016, in my office at the Science Library.
  3. If you haven’t yet, I encourage you to read the syllabus.

We mentioned several good and interesting “number theoretic” problems in class today. I’d like to remind you of some of them, and link you to some additional places for information.

Pythagorean Theorem

We’ve found all primitive Pythagorean triples in integers, which is a very nice theorem for a single hour of class. But I also mentioned some of the history of the Pythagorean Theorem and the significance of numbers and number theory to the Greeks.

I told the class a story about how the Pythagorean student who revealed that there were irrational numbers was stoned. This is apocryphal. In fact, there is little exact record, but his name was Hippasus and it is more likely that he was drowned for releasing this information.

For this and other reasons, the Pythagorean school of thought split into two sects, one from Pythagoras and one from Hippasus.

Goldbach’s Conjecture

Is it the case that every even integer greater than $2$ is the sum of two primes? We think so. But we do not know.

I mentioned the Ternary Goldbach Conjecture, also known as the Weak Goldbach Conjecture, which says that every odd integer greater than $5$ is the sum of three primes. This was proved very recently. If you’re interested in what a mathematical paper looks like, you can give this paper a look. [Do not expect to be able to understand the paper; but it is interesting to see what sorts of tools can be used towards number theory.]
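If you would like to experiment numerically, a naive brute-force check of the (binary) Goldbach conjecture for small even numbers might look like the following.

# Check Goldbach for every even number from 4 to 100 by brute force.
def is_prime(n):
    # Trial division; fine for small n.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d = d + 1
    return True

for n in range(4, 101, 2):
    pairs = [p for p in range(2, n) if is_prime(p) and is_prime(n - p)]
    assert pairs, "Goldbach fails at %d" % n
print "Every even number from 4 to 100 is a sum of two primes."

Of course, no amount of checking is a proof.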

Fermat’s Last Theorem

Are there nontrivial integer solutions to $X^n + Y^n = Z^n$ where $n \geq 3$?

This is one of the most storied and studied problems in mathematics. I think this has to do with how simple the statement looks. Further, we managed to fully classify all solutions when $n = 2$ in one class period. It doesn’t seem like it should be too hard to extend that to other exponents, does it?

If time and interest permits, we will return to this topic at the end of the course. There is no way that we could present a proof, or even fully motivate the proof. But we might be able to say a few words about how progress towards the theorem spurred and created mathematics, and maybe we can give a hint of the breadth of the ideas used to finally produce a proof.

Twin Prime Conjecture

Are there infinitely many primes $p$ such that $p+2$ is also prime? We think so, but we don’t know. Two years ago, we had absolutely no idea at all. Then Yitang Zhang had a brilliant idea (and not much later a graduate student named James Maynard had another brilliant idea) which allowed some sort of progress.

This culminated with the Polymath8 Project Bounded Gaps Between Primes. Math can be a social sport, and the polymath projects are massively collaborative online and open projects towards math problems. They’re still a bit new, and a bit experimental. But Polymath8 is certainly extremely successful.

What is known is that there exists at least one even number $H \leq 246$ such that $p$ and $p + H$ are both prime infinitely often. In fact, James Maynard showed that one can control more complicated ensembles of prime gaps.

The ideas that led to this result can likely be sharpened to give better results, but actually proving that there are infinitely many twin primes is almost certainly going to require a brand new idea and methodology.

The best related result comes from Chinese mathematician Chen Jingrun, who proved that every sufficiently large even integer can be written either as a sum of two primes, or as a sum of a prime and a number with exactly two prime factors. Although this seems very close, it is also likely that this idea cannot be sharpened further.

Writing Numbers as Sums of Squares, Cubes, and So On

Can every positive integer be written as the sum of three squares? What about four squares? More generally, is there a number $n$ so that every positive integer can be written as a sum of at most $n$ squares?

Similarly, is there a number $n$ so that every positive integer can be written as a sum of at most $n$ cubes? What about fourth powers?

These problems are all associated to something called Waring’s Problem, about which much is known and much is unknown.

We also asked which primes can be written as a sum of two squares. Although we might have a hard time finding those primes right now, you might try to show that if $p$ is a prime that can be written as a sum of two squares, then either $p$ is $2$, or $p = 4z + 1$ for some integer $z$. The reasoning is very similar to some of the reasoning done in class today.
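If you would like some evidence before attempting the proof, here is a quick brute-force experiment, reusing the is_prime helper from the Goldbach snippet above.

# Which primes below 100 are sums of two squares, and what do they
# look like mod 4?
for p in range(2, 100):
    if is_prime(p):
        reps = [(a, b) for a in range(1, 10) for b in range(a, 10)
                if a*a + b*b == p]
        if reps:
            print p, "=", "%d^2 + %d^2," % reps[0], "p mod 4 =", p % 4

Apart from $2$, every prime printed leaves remainder $1$ upon division by $4$.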

Max’s Conjecture

For primitive Pythagorean triples $(a,b,c)$ with $a^2 + b^2 = c^2$, we showed that we can restrict our attention to cases where $a$ is odd, $b$ is even, and $c$ is odd. Max conjectured that those $c$ are always of the form $4k + 1$ for some $k$, or equivalently that $c$ always leaves remainder $1$ after being divided by $4$.

We didn’t return to this in class, but we can now. First, note that since $c$ is odd, we can write $c$ as $2z + 1$ for some $z$. But we can do more: we can actually write $c$ as either $4z + 1$ or $4z + 3$, for a possibly different $z$. (Can you prove this?)

Max conjectured that it is always the case that $c = 4z + 1$. So we might ask, “What if $c = 4z + 3$?”

Writing $a = 2x + 1$ and $b = 2y$, we get the equation

$$ \begin{align}
a^2 + b^2 &= c^2 \\
4x^2 + 4x + 1 + 4y^2 &= 16z^2 + 24z + 9, \end{align}$$

which can be rewritten as
$$ 4x^2 + 4x + 4y^2 = 16z^2 + 24z + 8.$$
You can divide by $4$ to get
$$ x^2 + x + y^2 = 4z^2 + 6z + 2.$$
Then we ask: what does this tell us? Since $x^2 + x = x(x+1)$ is always even, it tells us that $y$ must be even. But a parity argument alone cannot finish the job: the non-primitive triple $9^2 + 12^2 = 15^2$ has $c = 15 = 4 \cdot 3 + 3$, so we must use primitivity somewhere. Fortunately, we have already done the hard work by classifying the primitive triples. In Euclid’s form of that classification, every primitive triple is $(m^2 - n^2, 2mn, m^2 + n^2)$ with $m > n$ coprime and of opposite parity. An even square leaves remainder $0$ and an odd square leaves remainder $1$ upon division by $4$, so the hypotenuse $c = m^2 + n^2$ always leaves remainder $1$, and $c = 4z + 3$ is impossible.

So Max’s Conjecture is true, and every number appearing as $c$ in a primitive Pythagorean triple is of the form $c = 4z + 1$ for some integer $z$.
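As a quick numerical sanity check, we can generate some primitive triples from Euclid’s form of the classification and reduce each hypotenuse modulo $4$.

# Primitive triples (m^2 - n^2, 2mn, m^2 + n^2), with m > n coprime
# and of opposite parity, always have hypotenuse 1 mod 4.
def gcd(a, b):
    while b != 0:
        a, b = b, a % b
    return a

for m in range(2, 8):
    for n in range(1, m):
        if (m + n) % 2 == 1 and gcd(m, n) == 1:
            a, b, c = m*m - n*n, 2*m*n, m*m + n*n
            print (a, b, c), "c mod 4 =", c % 4

Every hypotenuse printed leaves remainder $1$, just as Max conjectured.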

Posted in Brown University, Math 420, Mathematics | Leave a comment