Tag Archives: number theory

Math 420: Supplement on Gaussian Integers

This is a brief supplemental note on the Gaussian integers, written for my Spring 2016 Elementary Number Theory class at Brown University. With respect to the book, the nearest material is in Chapters 35 and 36, but we take a very different approach.

A pdf of this note can be found here. I’m sure there are typos, so feel free to ask me or correct me if you think something is amiss.

In this note, we cover the following topics.

  1. What are the Gaussian integers?
  2. Unique factorization within the Gaussian integers.
  3. An application of the Gaussian integers to the Diophantine equation $latex {y^2 = x^3 – 1}$.
  4. Other integer-like sets: general rings.
  5. Specific examples within $latex {\mathbb{Z}[\sqrt{2}]}$ and $latex {\mathbb{Z}[\sqrt{-5}]}$.

1. What are the Gaussian Integers?

The Gaussian Integers are the set of numbers of the form $latex {a + bi}$, where $latex {a}$ and $latex {b}$ are normal integers and $latex {i}$ is a number satisfying $latex {i^2 = -1}$. As a collection, the Gaussian Integers are represented by the symbol $latex {\mathbb{Z}[i]}$, or sometimes $latex {\mathbb{Z}[\sqrt{-1}]}$. These might be pronounced either as The Gaussian Integers or as Z adjoin i.

In many ways, the Gaussian integers behave very much like the regular integers. We’ve been studying the qualities of the integers, but we should ask — which properties are really properties of the integers, and which properties hold in greater generality? Is it the integers themselves that are special, or is there something bigger and deeper going on?

These are the main questions that we ask and make some progress towards in these notes. But first, we need to describe some properties of Gaussian integers.

We will usually use the symbols $latex {z = a + bi}$ to represent our typical Gaussian integer. One adds and multiplies two Gaussian integers just as you would add and multiply two complex numbers. Informally, you treat $latex {i}$ like a polynomial indeterminate $latex {X}$, except that it satisfies the relation $latex {X^2 = -1}$.

Definition 1 For each complex number $latex {z = a + bi}$, we define the conjugate of $latex {z}$, written as $latex {\overline{z}}$, by
\begin{equation}
\overline{z} = a – bi.
\end{equation}
We also define the norm of $latex {z}$, written as $latex {N(z)}$, by
\begin{equation}
N(z) = a^2 + b^2.
\end{equation}

You can check that $latex {N(z) = z \overline{z}}$ (and in fact this is one of your assigned problems). You can also check that $latex {N(zw) = N(z)N(w)}$, or rather that the norm is multiplicative (this is also one of your assigned problems).

Even from our notation, it’s intuitive that $latex {z = a + bi}$ has two parts, the part corresponding to $latex {a}$ and the part corresponding to $latex {b}$. We call $latex {a}$ the real part of $latex {z}$, written as $latex {\Re z = a}$, and we call $latex {b}$ the imaginary part of $latex {z}$, written as $latex {\Im z = b}$. I should add that the name “imaginary number” is a poor name that reflects historical reluctance to view complex numbers as acceptable. For that matter, the name “complex number” is also a poor name.

As a brief example, consider the Gaussian integer $latex {z = 2 + 5i}$. Then $latex {N(z) = 4 + 25 = 29}$, $latex {\Re z = 2}$, $latex {\Im z = 5}$, and $latex {\overline{z} = 2 – 5i}$.
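
If you like to experiment, these definitions are easy to play with on a computer. Here is a small Python sketch (my own illustration, not part of the assigned problems) that represents a Gaussian integer $latex {a + bi}$ as a pair of ordinary integers and checks the facts above on this example.

```python
# Represent the Gaussian integer a + bi as the pair (a, b).

def gmul(z, w):
    """(a + bi)(c + di) = (ac - bd) + (ad + bc)i."""
    (a, b), (c, d) = z, w
    return (a * c - b * d, a * d + b * c)

def conj(z):
    """The conjugate of a + bi is a - bi."""
    return (z[0], -z[1])

def norm(z):
    """N(a + bi) = a^2 + b^2."""
    return z[0] ** 2 + z[1] ** 2

z = (2, 5)                                     # z = 2 + 5i
w = (1, -3)                                    # another Gaussian integer, 1 - 3i
print(norm(z), conj(z))                        # 29 (2, -5)
print(gmul(z, conj(z)))                        # (29, 0), i.e. N(z) = z * conj(z)
print(norm(gmul(z, w)) == norm(z) * norm(w))   # True: the norm is multiplicative
```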

We can ask similar questions to those we asked about the regular integers. What does it mean for $latex {z \mid w}$ in the complex case?

Definition 2 We say that a Gaussian integer $latex {z}$ divides another Gaussian integer $latex {w}$ if there is some Gaussian integer $latex {k}$ so that $latex {zk = w}$. In this case, we write $latex {z \mid w}$, just as we write for regular integers.

For the integers, we immediately began to study the properties of the primes, which in many ways were the building blocks of the integers. Recall that for the regular integers, we said $latex {p}$ was a prime if its only divisors were $latex {\pm 1}$ and $latex {\pm p}$. In the Gaussian integers, the four numbers $latex {\pm 1, \pm i}$ play the same role as $latex {\pm 1}$ in the usual integers. These four numbers are distinguished as being the only four Gaussian integers with norm equal to $latex {1}$.

That is, the only solutions to $latex {N(z) = 1}$ where $latex {z}$ is a Gaussian integer are $latex {z = \pm 1, \pm i}$. We call these four numbers the Gaussian units.

With this in mind, we are ready to define the notion of a prime for the Gaussian integers.

Definition 3 We say that a Gaussian integer $latex {z}$ with $latex {N(z) > 1}$ is a Gaussian prime if the only divisors of $latex {z}$ are $latex {u}$ and $latex {uz}$, where $latex {u = \pm 1, \pm i}$ is a Gaussian unit.

Remark 1 When we look at other integer-like sets, we will actually use a different definition of a prime.

It’s natural to ask whether the normal primes in $latex {\mathbb{Z}}$ are also primes in $latex {\mathbb{Z}[i]}$. And the answer is no. For instance, $latex {5}$ is a prime in $latex {\mathbb{Z}}$, but
\begin{equation}
5 = (1 + 2i)(1 – 2i)
\end{equation}
in the Gaussian integers. However, the two Gaussian integers $latex {1 + 2i}$ and $latex {1 - 2i}$ are prime. It also happens to be that $latex {3}$ is a Gaussian prime. We will continue to investigate which numbers are Gaussian primes over the next few lectures.

With a concept of a prime, it’s also natural to ask whether or not the primes form the building blocks for the Gaussian integers like they form the building blocks for the regular integers. We take this up in our next topic.

2. Unique Factorization in the Gaussian Integers

Let us review the steps that we followed to prove unique factorization for $latex {\mathbb{Z}}$.

  1. We proved that for $latex {a,b}$ in $latex {\mathbb{Z}}$ with $latex {b \neq 0}$, there exist unique $latex {q}$ and $latex {r}$ such that $latex {a = bq + r}$ with $latex {0 \leq r < \lvert b \rvert}$. This is called the Division Algorithm.
  2. By repeatedly applying the Division Algorithm, we proved the Euclidean Algorithm. In particular, we showed that the last nonzero remainder was the GCD of our initial numbers.
  3. By performing reverse substitution on the steps of the Euclidean Algorithm, we showed that there are integer solutions in $latex {x,y}$ to the Diophantine equation $latex {ax + by = \gcd(a,b)}$. This is often called Bezout’s Theorem or Bezout’s Lemma, although we never called it by that name in class.
  4. With Bezout’s Theorem, we showed that if a prime $latex {p}$ divides $latex {ab}$, then $latex {p \mid a}$ or $latex {p \mid b}$. This is the crucial step towards proving Unique Factorization.
  5. We then proved Unique Factorization.

Each step of this process can be repeated for the Gaussian integers, with a few notable differences. Remarkably, once we have the division algorithm, each proof is almost identical for $latex {\mathbb{Z}[i]}$ as it is for $latex {\mathbb{Z}}$. So we will prove the division algorithm, and then give sketches of the remaining ideas, highlighting the differences that come up along the way.

In the division algorithm, we require the remainder $latex {r}$ to “be less than what we are dividing by.” A big problem in translating this to the Gaussian integers is that the Gaussian integers are not ordered. That is, we don’t have a concept of being greater than or less than for $latex {\mathbb{Z}[i]}$.

When this sort of problem emerges, we will get around this by taking norms. Since the norm of a Gaussian integer is a typical integer, we will be able to use the ordering of the integers to order our norms.

Theorem 4 For $latex {z,w}$ in $latex {\mathbb{Z}[i]}$ with $latex {w \neq 0}$, there exist $latex {q}$ and $latex {r}$ in $latex {\mathbb{Z}[i]}$ such that $latex {z = qw + r}$ with $latex {N(r) < N(w)}$.

Proof: Here, we will cheat a little bit and use properties about general complex numbers and the rationals to perform this proof. One can give an entirely intrinsic proof, but I like the approach I give as it also informs how to actually compute the $latex {q}$ and $latex {r}$.

The entire proof boils down to the idea of writing $latex {z/w}$ as a fraction and approximating the real and imaginary parts by the nearest integers.

Let us now transcribe that idea. We will need to introduce some additional symbols. Let $latex {z = a_1 + b_1 i}$ and $latex {w = a_2 + b_2 i}$.

Then
\begin{align}
\frac{z}{w} &= \frac{a_1 + b_1 i}{a_2 + b_2 i} = \frac{a_1 + b_1 i}{a_2 + b_2 i} \frac{a_2 – b_2 i}{a_2 – b_2 i} \\
&= \frac{a_1a_2 + b_1 b_2}{a_2^2 + b_2^2} + i \frac{b_1 a_2 – a_1 b_2}{a_2^2 + b_2 ^2} \\
&= u + iv.
\end{align}
By rationalizing the denominator by multiplying by $latex {\overline{w}/ \overline{w}}$, we are able to separate out the real and imaginary parts. In this final expression, we have named $latex {u}$ to be the real part and $latex {v}$ to be the imaginary part. Notice that $latex {u}$ and $latex {v}$ are normal rational numbers.

We know that for any rational number $latex {u}$, there is an integer $latex {u’}$ such that $latex {\lvert u – u’ \rvert \leq \frac{1}{2}}$. Let $latex {u’}$ and $latex {v’}$ be integers within $latex {1/2}$ of $latex {u}$ and $latex {v}$ above, respectively.

Then we claim that we can choose $latex {q = u’ + i v’}$ to be the $latex {q}$ in the theorem statement, and let $latex {r}$ be the resulting remainder, $latex {r = z – qw}$. We need to check that $latex {N(r) < N(w)}$. We will check that explicitly.

We compute
\begin{align}
N(r) &= N(z – qw) = N\left(w \left(\frac{z}{w} – q\right)\right) = N(w) N\left(\frac{z}{w} – q\right).
\end{align}
Note that we have used that $latex {N(ab) = N(a)N(b)}$. In this final expression, we have already come across $latex {\frac{z}{w}}$ before — it’s exactly what we called $latex {u + iv}$. And we called $latex {q = u’ + i v’}$. So our final expression is the same as
\begin{equation}
N(r) = N(w) N(u + iv – u’ – i v’) = N(w) N\left( (u – u’) + i (v – v’)\right).
\end{equation}
How large can the real and imaginary parts of $latex {(u-u’) + i (v – v’)}$ be? By our choice of $latex {u’}$ and $latex {v’}$, they can be at most $latex {1/2}$.

So we have that
\begin{equation}
N(r) \leq N(w) \left( (\tfrac{1}{2})^2 + (\tfrac{1}{2})^2\right) = \frac{1}{2} N(w).
\end{equation}
And so in particular, we have that $latex {N(r) < N(w)}$ as we needed. $latex \Box$
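
The proof is completely constructive, and it is easy to turn into code. Here is a short Python sketch (my own, using exact rational arithmetic from the standard `fractions` module) that finds a valid quotient and remainder by rounding the real and imaginary parts of $latex {z/w}$ to nearby integers, just as in the proof.

```python
from fractions import Fraction

# Gaussian integers as pairs (a, b) meaning a + bi, as before.
def gmul(z, w):
    (a, b), (c, d) = z, w
    return (a * c - b * d, a * d + b * c)

def gsub(z, w):
    return (z[0] - w[0], z[1] - w[1])

def norm(z):
    return z[0] ** 2 + z[1] ** 2

def gdivmod(z, w):
    """Return (q, r) with z = q*w + r and N(r) <= N(w)/2, following Theorem 4."""
    (a1, b1), (a2, b2) = z, w
    n = norm(w)
    u = Fraction(a1 * a2 + b1 * b2, n)   # real part of z/w
    v = Fraction(b1 * a2 - a1 * b2, n)   # imaginary part of z/w
    q = (round(u), round(v))             # nearest integers (ties can go either way)
    return q, gsub(z, gmul(q, w))

q, r = gdivmod((3, 5), (1, 2))           # the pair from Example 1 below
print(q, r, norm(r) < norm((1, 2)))      # e.g. (3, 0) (0, -1) True
```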

Note that in this proof, we did not actually show that $latex {q}$ or $latex {r}$ are unique. In fact, unlike the case in the regular integers, it is not true that $latex {q}$ and $latex {r}$ are unique.

Example 1 Consider $latex {3+5i, 1 + 2i}$. Then we compute
\begin{equation}
\frac{3+5i}{1+2i} = \frac{3+5i}{1+2i}\frac{1-2i}{1-2i} = \frac{13}{5} + i \frac{-1}{5}.
\end{equation}
The closest integer to $latex {13/5}$ is $latex {3}$, and the closest integer to $latex {-1/5}$ is $latex {0}$. So we take $latex {q = 3}$. Then $latex {r = (3+5i) – (1+2i)3 = -i}$, and we see in total that
\begin{equation}
3+5i = (1+2i) 3 – i.
\end{equation}
Note that $latex {N(-i) = 1}$ and $latex {N(1 + 2i) = 5}$, so this choice of $latex {q}$ and $latex {r}$ works.

As $latex {13/5}$ is sort of close to $latex {2}$, what if we chose $latex {q = 2}$ instead? Then $latex {r = (3 + 5i) – (1 + 2i)2 = 1 + i}$, leading to the overall expression
\begin{equation}
3+5i = (1 + 2i) 2 + (1 + i).
\end{equation}
Note that $latex {N(1+i) = 2 < N(1+2i) = 5}$, so that this choice of $latex {q}$ and $latex {r}$ also works.

This is an example of how the choice of $latex {q}$ and $latex {r}$ is not well-defined for the Gaussian integers. In fact, even if one decides to choose $latex {q}$ so that $latex {N(r)}$ is minimal, the resulting choices are still not necessarily unique.

This may come as a surprise. The letters $latex {q}$ and $latex {r}$ come from our tendency to call those numbers the quotient and remainder after division. We have shown that the quotient and remainder are not well-defined, so it does not make sense to talk about “the remainder” or “the quotient.” This is a bit strange!

Are we able to prove unique factorization when the process of division itself seems to lead to ambiguities? Let us proceed forwards and try to see.

Our next goal is to prove the Euclidean Algorithm. By this, we mean that by repeatedly performing the division algorithm starting with two Gaussian integers $latex {z}$ and $latex {w}$, we hope to get a sequence of remainders with the last nonzero remainder giving a greatest common divisor of $latex {z}$ and $latex {w}$.

Before we can do that, we need to ask a much more basic question. What do we mean by a greatest common divisor? In particular, the Gaussian integers are not ordered, so it does not make sense to say whether one Gaussian integer is bigger than another.

For instance, is it true that $latex {i > 1}$? If so, then certainly $latex {i}$ is positive. We know that multiplying both sides of an inequality by a positive number doesn’t change that inequality. So multiplying $latex {i > 1}$ by $latex {i}$ leads to $latex {-1 > i}$, which is absurd if $latex {i}$ was supposed to be positive!

To remedy this problem, we will choose a common divisor of $latex {z}$ and $latex {w}$ with the greatest norm (which makes sense, as the norm is a regular integer and thus is well-ordered). But the problem here, just as with the division algorithm, is that there may or may not be multiple such numbers. So we cannot talk about “the greatest common divisor” and instead talk about “a greatest common divisor.” To paraphrase Lewis Carroll’s Alice (Carroll was also a mathematician, and hid some nice mathematics inside some of his works), things are getting curiouser and curiouser!

Definition 5 For nonzero $latex {z,w}$ in $latex {\mathbb{Z}[i]}$, a greatest common divisor of $latex {z}$ and $latex {w}$, denoted by $latex {\gcd(z,w)}$, is a common divisor with largest norm. That is, if $latex {c}$ is another common divisor of $latex {z}$ and $latex {w}$, then $latex {N(c) \leq N(\gcd(z,w))}$.

If $latex {N(\gcd(z,w)) = 1}$, then we say that $latex {z}$ and $latex {w}$ are relatively prime. Said differently, if $latex {1}$ is a greatest common divisor of $latex {z}$ and $latex {w}$, then we say that $latex {z}$ and $latex {w}$ are relatively prime.

Remark 2 Note that $latex {\gcd(z,w)}$ as we’re writing it is not actually well-defined, and may stand for any greatest common divisor of $latex {z}$ and $latex {w}$.

With this definition in mind, the proof of the Euclidean Algorithm is almost identical to the proof of the Euclidean Algorithm for the regular integers. As with the regular integers, we need the following result, which we will use over and over again.

Lemma 6 Suppose that $latex {z \mid w_1}$ and $latex {z \mid w_2}$. Then for any $latex {x,y}$ in $latex {\mathbb{Z}[i]}$, we have that $latex {z \mid (x w_1 + y w_2)}$.

Proof: As $latex {z \mid w_1}$, there is some Gaussian integer $latex {k_1}$ such that $latex {z k_1 = w_1}$. Similarly, there is some Gaussian integer $latex {k_2}$ such that $latex {z k_2 = w_2}$.

Then $latex {xw_1 + yw_2 = zxk_1 + zyk_2 = z(xk_1 + yk_2)}$, which is divisible by $latex {z}$ as this is the definition of divisibility. $latex \Box$

Notice that this proof is identical to the analogous statement in the integers, except with differently chosen symbols. That is how the proof of the Euclidean Algorithm goes as well.

Theorem 7 Let $latex {z,w}$ be nonzero Gaussian integers. Recursively apply the division algorithm, starting with the pair $latex {z, w}$ and then taking the divisor and remainder from one equation as the new pair for the next. The last nonzero remainder is divisible by all common divisors of $latex {z,w}$, is itself a common divisor, and so the last nonzero remainder is a greatest common divisor of $latex {z}$ and $latex {w}$.

Symbolically, this looks like
\begin{align}
z &= q_1 w + r_1, \quad N(r_1) < N(w) \\\\
w &= q_2 r_1 + r_2, \quad N(r_2) < N(r_1) \\\\
r_1 &= q_3 r_2 + r_3, \quad N(r_3) < N(r_2) \\\\
\cdots &= \cdots \\\\
r_k &= q_{k+2} r_{k+1} + r_{k+2}, \quad N(r_{k+2}) < N(r_{k+1}) \\\\
r_{k+1} &= q_{k+3} r_{k+2} + 0,
\end{align}
where $latex {r_{k+2}}$ is the last nonzero remainder, which we claim is a greatest common divisor of $latex {z}$ and $latex {w}$.

Proof: We are claiming several things. Firstly, we should prove our implicit claim that this algorithm terminates at all. Is it obvious that we should eventually reach a zero remainder?

In order to see this, we look at the norms of the remainders. After each step in the algorithm, the norm of the remainder is smaller than the previous step. As the norms are always nonnegative integers, and we know there does not exist an infinite list of decreasing positive integers, we see that the list of nonzero remainders is finite. So the algorithm terminates.

We now want to prove that the last nonzero remainder is a common divisor and is in fact a greatest common divisor. The proof is actually identical to the proof in the integer case, merely with a different choice of symbols.

Here, we only sketch the argument. Then the rest of the argument can be found by comparing with the proof of the Euclidean Algorithm for $latex {\mathbb{Z}}$ as found in the course textbook.

For ease of exposition, suppose that the algorithm terminated in exactly three steps, so that we have
\begin{align}
z &= q_1 w + r_1, \\
w &= q_2 r_1 + r_2 \\
r_1 &= q_3 r_2 + 0.
\end{align}

On the one hand, suppose that $latex {d}$ is a common divisor of $latex {z}$ and $latex {w}$. Then by our previous lemma, $latex {d \mid z – q_1 w = r_1}$, so that we see that $latex {d}$ is a divisor of $latex {r_1}$ as well. Applying to the next line, we have that $latex {d \mid w}$ and $latex {d \mid r_1}$, so that $latex {d \mid w – q_2 r_1 = r_2}$. So every common divisor of $latex {z}$ and $latex {w}$ is a divisor of the last nonzero remainder $latex {r_2}$.

On the other hand, $latex {r_2 \mid r_1}$ by the last line of the algorithm. Then as $latex {r_2 \mid r_1}$ and $latex {r_2 \mid r_2}$, we know that $latex {r_2 \mid q_2 r_1 + r_2 = w}$. Applying this to the first line, as $latex {r_2 \mid r_1}$ and $latex {r_2 \mid w}$, we know that $latex {r_2 \mid q_1 w + r_1 = z}$. So $latex {r_2}$ is a common divisor.

We have shown that $latex {r_2}$ is a common divisor of $latex {z}$ and $latex {w}$, and that every common divisor of $latex {z}$ and $latex {w}$ divides $latex {r_2}$. How do we show that $latex {r_2}$ is a greatest common divisor?

Suppose that $latex {d}$ is a common divisor of $latex {z}$ and $latex {w}$, so that we know that $latex {d \mid r_2}$. In particular, this means that there is some nonzero $latex {k}$ so that $latex {dk = r_2}$. Taking norms, this means that $latex {N(dk) = N(d)N(k) = N(r_2)}$. As $latex {N(d)}$ and $latex {N(k)}$ are both at least $latex {1}$, this means that $latex {N(d) \leq N(r_2)}$.

This is true for every common divisor $latex {d}$, and so $latex {N(r_2)}$ is at least as large as the norm of any common divisor of $latex {z}$ and $latex {w}$. Thus $latex {r_2}$ is a greatest common divisor.

The argument carries on in the same way for when there are more steps in the algorithm. $latex \Box$
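
In code, the Euclidean Algorithm is just a loop around the division step. The sketch below reuses the `gdivmod` function from the earlier sketch; the answer it returns is only one of the several possible greatest common divisors, as the next theorem explains.

```python
def ggcd(z, w):
    """A greatest common divisor of z and w in Z[i], via the Euclidean Algorithm.
    The answer is only determined up to one of the unit factors 1, -1, i, -i."""
    while w != (0, 0):
        _, r = gdivmod(z, w)   # gdivmod from the sketch above
        z, w = w, r
    return z

print(ggcd((32, 9), (4, 11)))  # a unit such as (0, -1) = -i: the inputs are relatively prime
```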

Theorem 8 The greatest common divisor of $latex {z}$ and $latex {w}$ is well-defined, up to multiplication by $latex {\pm 1, \pm i}$. In other words, if $latex {\gcd(z,w)}$ is a greatest common divisor of $latex {z}$ and $latex {w}$, then all greatest common divisors of $latex {z}$ and $latex {w}$ are given by $latex {\pm \gcd(z,w), \pm i \gcd(z,w)}$.

Proof: Suppose $latex {d}$ is a greatest common divisor, and let $latex {\gcd(z,w)}$ denote a greatest common divisor resulting from an application of the Euclidean Algorithm. Then we know that $latex {d \mid \gcd(z,w)}$, so that there is some $latex {k}$ so that $latex {dk = \gcd(z,w)}$. Taking norms, we see that $latex {N(d)N(k) = N(\gcd(z,w))}$.

But as both $latex {d}$ and $latex {\gcd(z,w)}$ are greatest common divisors, we must have that $latex {N(d) = N(\gcd(z,w))}$. So $latex {N(k) = 1}$. The only Gaussian integers with norm one are $latex {\pm 1, \pm i}$, so we have that $latex {du = \gcd(z,w)}$ where $latex {u}$ is one of the four Gaussian units, $latex {\pm 1, \pm i}$.

Conversely, it’s clear that the four numbers $latex {\pm \gcd(z,w), \pm i \gcd(z,w)}$ are all greatest common divisors. $latex \Box$

Now that we have the Euclidean Algorithm, we can go towards unique factorization in $latex {\mathbb{Z}[i]}$. Let $latex {g}$ denote a greatest common divisor of $latex {z}$ and $latex {w}$. Reverse substitution in the Euclidean Algorithm shows that we can find Gaussian integer solutions $latex {x,y}$ to the (complex) linear Diophantine equation
\begin{equation}
zx + wy = g.
\end{equation}
Let’s see an example.

Example 2 Consider $latex {32 + 9i}$ and $latex {4 + 11i}$. The Euclidean Algorithm looks like
\begin{align}
32 + 9i &= (4 + 11i)(2 – 2i) + 2 – 5i, \\\\
4 + 11i &= (2 – 5i)(-2 + i) + 3 – i, \\\\
2 – 5i &= (3-i)(1-i) – i, \\\\
3 – i &= -i (1 + 3i) + 0.
\end{align}
So we know that $latex {-i}$ is a greatest common divisor of $latex {32 + 9i}$ and $latex {4 + 11i}$, and so we know that $latex {32+9i}$ and $latex {4 + 11i}$ are relatively prime. Let us try to find a solution to the Diophantine equation
\begin{equation}
x(32 + 9i) + y(4 + 11i) = 1.
\end{equation}
Performing reverse substitution, we see that
\begin{align}
-i &= (2 – 5i) – (3-i)(1-i) \\\\
&= (2 – 5i) – (4 + 11i – (2-5i)(-2 + i))(1-i) \\\\
&= (2 – 5i) – (4 + 11i)(1 – i) + (2 – 5i)(-2 + i)(1 – i) \\\\
&= (2 – 5i)(3i) – (4 + 11i)(1 – i) \\\\
&= (32 + 9i – (4 + 11i)(2 – 2i))(3i) – (4 + 11i)(1 – i) \\\\
&= (32 + 9i) 3i – (4 + 11i)(2 – 2i)(3i) – (4 + 11i)(1-i) \\\\
&= (32 + 9i) 3i – (4 + 11i)(7 + 5i).
\end{align}
Multiplying this through by $latex {i}$, we have that
\begin{equation}
1 = (32 + 9i) (-3) + (4 + 11i)(5 – 7i).
\end{equation}
So one solution is $latex {(x,y) = (-3, 5 – 7i)}$.
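
The reverse substitution can also be done all at once while running the Euclidean Algorithm forward, by keeping track of how each remainder is written in terms of the original two numbers. Here is a Python sketch of that bookkeeping (an “extended” Euclidean Algorithm), again reusing `gdivmod`, `gmul`, and `gsub` from the earlier sketches; on the example above it recovers the same Bezout coefficients.

```python
def gbezout(z, w):
    """Return (g, x, y) with x*z + y*w = g, where g is a greatest common divisor.
    Invariant: r0 = x0*z + y0*w and r1 = x1*z + y1*w at every step."""
    r0, x0, y0 = z, (1, 0), (0, 0)
    r1, x1, y1 = w, (0, 0), (1, 0)
    while r1 != (0, 0):
        q, r = gdivmod(r0, r1)
        r0, r1 = r1, r
        x0, x1 = x1, gsub(x0, gmul(q, x1))
        y0, y1 = y1, gsub(y0, gmul(q, y1))
    return r0, x0, y0

g, x, y = gbezout((32, 9), (4, 11))
print(g, x, y)   # (0, -1) (0, 3) (-7, -5): that is, (3i)(32+9i) - (7+5i)(4+11i) = -i
```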

Although this looks more complicated, the process is the same as in the case over the regular integers. The apparent higher difficulty comes mostly from our lack of familiarity with basic arithmetic in $latex {\mathbb{Z}[i]}$.

The rest of the argument is now exactly as in the integers.

Theorem 9 Suppose that $latex {z, w}$ are relatively prime, and that $latex {z \mid wv}$. Then $latex {z \mid v}$.

Proof: This is left as an exercise (and will appear on the next midterm in some form — cheers to you if you’ve read this far in these notes). But it’s now almost the same as in the regular integers. $latex \Box$

Theorem 10 Let $latex {z}$ be a Gaussian integer with $latex {N(z) > 1}$. Then $latex {z}$ can be written uniquely as a product of Gaussian primes, up to multiplication by one of the Gaussian units $latex {\pm 1, \pm i}$.

Proof: We only sketch part of the proof. There are multiple ways of doing this, but we present the one most similar to what we’ve done for the integers. If there are Gaussian integers without unique factorization, then among them there is one of minimal norm (there may be several sharing that norm). So let $latex {z}$ be a Gaussian integer of minimal norm without unique factorization. Then we can write
\begin{equation}
p_1 p_2 \cdots p_k = z = q_1 q_2 \cdots q_\ell,
\end{equation}
where the $latex {p}$ and $latex {q}$ are all primes. As $latex {p_1 \mid z = q_1 q_2 \cdots q_\ell}$, we know that $latex {p_1}$ divides one of the $latex {q}$ (by Theorem 9), and so (up to units) we can say that $latex {p_1}$ is one of the $latex {q}$ primes. We can divide each side by $latex {p_1}$ and we get two supposedly different factorizations of a Gaussian integer of norm $latex {N(z)/N(p_1) < N(z)}$, which is less than the least norm of an integer without unique factorization (by what we supposed). This is a contradiction, and we can conclude that there are no Gaussian integers without unique factorization. $latex \Box$

If this seems unclear, I recommend reviewing this proof and the proof of unique factorization for the regular integers. I should also mention that one can modify the proof of unique factorization for $latex {\mathbb{Z}}$ as given in the course textbook as well (since it is a bit different than what we have done). Further, the course textbook gives a proof of unique factorization for $latex {\mathbb{Z}[i]}$ in Chapter 36, which is very similar to the proof sketched above (although the proof of Theorem 9 is very different).

3. An application to $latex {y^2 = x^3 – 1}$.

We now consider the nonlinear Diophantine equation $latex {y^2 = x^3 – 1}$, where $latex {x,y}$ are in $latex {\mathbb{Z}}$. This is hard to solve over the integers, but by going up to $latex {\mathbb{Z}[i]}$, we can determine all solutions.

In $latex {\mathbb{Z}[i]}$, we can rewrite $$ y^2 + 1 = (y + i)(y – i) = x^3. \tag{1}$$
We claim that $latex {y+i}$ and $latex {y-i}$ are relatively prime. To see this, suppose that $latex {d}$ is a common divisor of $latex {y+i}$ and $latex {y-i}$. Then $latex {d \mid (y + i) – (y – i) = 2i}$. It happens to be that $latex {2i = (1 + i)^2}$, and that $latex {(1 + i)}$ is prime. To see this, we show the following.

Lemma 11 Suppose $latex {z}$ is a Gaussian integer, and $latex {N(z) = p}$ is a regular prime. Then $latex {z}$ is a Gaussian prime.

Proof: Suppose that $latex {z}$ factors nontrivially as $latex {z = ab}$. Then taking norms, $latex {N(z) = N(a)N(b)}$, and so we get a nontrivial factorization of $latex {N(z)}$. When $latex {N(z)}$ is a prime, then there are no nontrivial factorizations of $latex {N(z)}$, and so $latex {z}$ must have no nontrivial factorization. $latex \Box$

As $latex {N(1+i) = 2}$, which is a prime, we see that $latex {(1 + i)}$ is a Gaussian prime. So $latex {d \mid (1 + i)^2}$, which means that $latex {d}$ is either $latex {1, (1 + i)}$, or $latex {(1+i)^2}$ (up to multiplication by a Gaussian unit).

Suppose we are in the case of the latter two, so that $latex {(1+i) \mid d}$. Then as $latex {d \mid (y + i)}$, we know that $latex {(1 + i) \mid x^3}$. Taking norms, we have that $latex {2 \mid x^6}$.

By unique factorization in $latex {\mathbb{Z}}$, we know that $latex {2 \mid x}$. This means that $latex {4 \mid x^2}$, which allows us to conclude that $latex {x^3 \equiv 0 \pmod 4}$. Going back to the original equation $latex {y^2 + 1 = x^3}$, we see that $latex {y^2 + 1 \equiv 0 \pmod 4}$, which means that $latex {y^2 \equiv 3 \pmod 4}$. A quick check shows that $latex {y^2 \equiv 3 \pmod 4}$ has no solutions $latex {y}$ in $latex {\mathbb{Z}/4\mathbb{Z}}$.

So we rule out the case when $latex {(1 + i) \mid d}$, and we are left with $latex {d}$ being a unit. This is exactly the statement that $latex {y+i}$ and $latex {y-i}$ are relatively prime.

Recall that $latex {(y+i)(y-i) = x^3}$. As $latex {y+i}$ and $latex {y-i}$ are relatively prime and their product is a cube, by unique factorization in $latex {\mathbb{Z}[i]}$ we know that $latex {y+i}$ and $latex {y-i}$ must each be a Gaussian cube. (Strictly speaking, each is a unit times a cube; but every Gaussian unit is itself a cube, since $latex {1 = 1^3}$, $latex {-1 = (-1)^3}$, $latex {i = (-i)^3}$, and $latex {-i = i^3}$, so each factor really is a cube.) Then we can write $latex {y+i = (m + ni)^3}$ for some Gaussian integer $latex {m + ni}$. Expanding, we see that
\begin{equation}
y+i = m^3 – 3mn^2 + i(3m^2n – n^3).
\end{equation}
Equating real and imaginary parts, we have that
\begin{align}
y &= m(m^2 – 3n^2) \\
1 &= n(3m^2 – n^2).
\end{align}
This second line shows that $latex {n \mid 1}$. As $latex {n}$ is a regular integer, we see that $latex {n = 1}$ or $latex {-1}$.

If $latex {n = 1}$, then that line becomes $latex {1 = (3m^2 – 1)}$, or after rearranging $latex {2 = 3m^2}$. This has no solutions.

If $latex {n = -1}$, then that line becomes $latex {1 = -(3m^2 – 1)}$, or after rearranging $latex {0 = 3m^2}$. This has the solution $latex {m = 0}$, so that $latex {y+i = (-i)^3 = i}$, which means that $latex {y = 0}$. Then from $latex {y^2 + 1 = x^3}$, we see that $latex {x = 1}$.

And so the only solution is $latex {(x,y) = (1,0)}$, and there are no other solutions.
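
As a quick sanity check (certainly not a proof, which is what the argument above provides), a brute-force search over a small box of integers turns up no other solutions:

```python
# Search a small box for integer solutions of y^2 = x^3 - 1.
solutions = [(x, y)
             for x in range(-50, 51)
             for y in range(-500, 501)
             if y * y == x ** 3 - 1]
print(solutions)   # [(1, 0)]
```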

4. Other Rings

The Gaussian integers have many of the same properties as the regular integers, even though there are some differences. We could go further. For example, we might consider the following integer-like sets,
\begin{equation}
\mathbb{Z}[\sqrt{d}] = \{ a + b \sqrt{d} : a,b \in \mathbb{Z} \}.
\end{equation}
One can add, subtract, and multiply these together in similar ways to how we can add, subtract, and multiply together integers, or Gaussian integers.

We might ask what properties these other integer-like sets have. For instance, do they have unique factorization?

More generally, there is a better name than “integer-like set” for this sort of construction.

Suppose $latex {R}$ is a collection of elements, and it makes sense to add, subtract, and multiply these elements together. Further, we want addition and multiplication to behave similarly to how they behave for the regular integers. In particular, if $latex {r}$ and $latex {s}$ are elements in $latex {R}$, then we want $latex {r + s = s + r}$ to be in $latex {R}$; we want something that behaves like $latex {0}$ in the sense that $latex {r + 0 = r}$; for each $latex {r}$, we want another element $latex {-r}$ so that $latex {r + (-r) = 0}$; we want $latex {r \cdot s = s \cdot r}$; we want something that behaves like $latex {1}$ in the sense that $latex {r \cdot 1 = r}$ for all $latex {r}$; and we want $latex {r(s_1 + s_2) = r s_1 + r s_2}$. Such a collection is called a ring. (More completely, this is called a commutative unital ring, but that’s not important.)

It is not important that you explicitly remember exactly what the definition of a ring is. The idea is that there is a name for things that are “integer-like,” and that we might wonder which of the properties we have been thinking of as properties of the integers are actually properties of rings.

As a total aside: there are very many more rings too, things that look much more different than the integers. This is one of the fundamental questions that leads to the area of mathematics called Abstract Algebra. With an understanding of abstract algebra, one could then focus on these general number theoretic problems in an area of math called Algebraic Number Theory.

5. The rings $latex {\mathbb{Z}[\sqrt{d}]}$

We can describe some of the specific properties of $latex {\mathbb{Z}[\sqrt{d}]}$, and suggest how some of the ideas we’ve been considering do (or don’t) generalize. For a general element $latex {n = a + b \sqrt{d}}$, we can define the conjugate $latex {\overline{n} = a – b\sqrt {d}}$ and the norm $latex {N(n) = n \cdot \overline{n} = a^2 – d b^2}$. We call those elements $latex {u}$ with $latex {N(u) = 1}$ the units in $latex {\mathbb{Z}[\sqrt{d}]}$.

Some of the definitions we’ve been using turn out to not generalize so easily, or in quite the ways we expect. If $latex {n}$ doesn’t have a nontrivial factorization (meaning that we cannot write $latex {n = ab}$ with $latex {N(a), N(b) \neq 1}$), then we call $latex {n}$ an irreducible. In the cases of $latex {\mathbb{Z}}$ and $latex {\mathbb{Z}[i]}$, we would have called these elements prime.

In general, we call a number $latex {p}$ in $latex {\mathbb{Z}[\sqrt{d}]}$ a prime if $latex {p}$ has the property that $latex {p \mid ab}$ means that $latex {p \mid a}$ or $latex {p \mid b}$. Of course, in the cases of $latex {\mathbb{Z}}$ and $latex {\mathbb{Z}[i]}$, we showed that irreducibles are primes. But it turns out that this is not usually the case.

Let us look at $latex {\mathbb{Z}[\sqrt{-5}]}$ for a moment. In particular, we can write $latex {6}$ in two ways as
\begin{equation}
6 = 2 \cdot 3 = (1 + \sqrt{-5})(1 – \sqrt{-5}).
\end{equation}
Although it’s a bit challenging to show, these are the only two fundamentally different factorizations of $latex {6}$ in $latex {\mathbb{Z}[\sqrt{-5}]}$. One can show (it’s not very hard, but it’s not particularly illuminating to do here) that neither $latex {2}$ nor $latex {3}$ divides $latex {(1 + \sqrt{-5})}$ or $latex {(1 – \sqrt{-5})}$ (and vice versa), which means that none of these four numbers are primes in our more general definition. One can also show that all four numbers are irreducible.

What does this mean? This means that $latex {6}$ can be factored into irreducibles in fundamentally different ways, and that $latex {\mathbb{Z}[\sqrt{-5}]}$ does not have unique factorization.
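
A small computation makes this concrete. In the sketch below (my own illustration), an element $latex {a + b\sqrt{-5}}$ is stored as the pair $latex {(a,b)}$ with norm $latex {a^2 + 5b^2}$; the point is that a nontrivial factor of $latex {2}$, $latex {3}$, or $latex {1 \pm \sqrt{-5}}$ would need norm $latex {2}$ or $latex {3}$, and no element has such a norm.

```python
# Elements a + b*sqrt(-5) of Z[sqrt(-5)] as pairs (a, b).
def mul(z, w):
    (a, b), (c, d) = z, w
    return (a * c - 5 * b * d, a * d + b * c)

def norm(z):
    return z[0] ** 2 + 5 * z[1] ** 2

print(mul((1, 1), (1, -1)))    # (6, 0): the second factorization of 6

# The norms of 2, 3, 1 + sqrt(-5), 1 - sqrt(-5) are 4, 9, 6, 6.  A nontrivial
# factor of any of them would have norm 2 or 3, which forces |a| <= 1 and b = 0,
# so this small search settles the question: no element has norm 2 or 3.
small_norms = {norm((a, b)) for a in range(-2, 3) for b in range(-1, 2)}
print(2 in small_norms, 3 in small_norms)    # False False
```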

It’s a good thought exercise to think about what is really different between $latex {\mathbb{Z}[\sqrt{-5}]}$ and $latex {\mathbb{Z}}$. At the beginning of this course, it seemed extremely obvious that $latex {\mathbb{Z}}$ had unique factorization. But in hindsight, is it really so obvious?

Understanding when there is and is not unique factorization in $latex {\mathbb{Z}[\sqrt{d}]}$ is something that people are still trying to understand today. The fact is that we don’t know! In particular, we really don’t know very much when $latex {d}$ is positive.

One reason why this is hard can be seen in $latex {\mathbb{Z}[\sqrt{2}]}$. If $latex {n = a + b \sqrt{2}}$, then $latex {N(n) = a^2 – 2 b^2}$. A very basic question that we can ask is what are the units? That is, which $latex {n}$ have $latex {N(n) = 1}$?

Here, that means trying to solve the equation $$ a^2 – 2 b^2 = 1. \tag{2}$$
We have seen this equation a few times before. On the second homework assignment, I asked you to show that there were infinitely many solutions to this equation by finding lines and intersecting them with hyperbolas. We began to investigate this Diophantine equation because each solution leads to another square-triangular number.

So there are infinitely many units in $latex {\mathbb{Z}[\sqrt{2}]}$. This is strange! For instance, $latex {3 + 2 \sqrt{2}}$ is a unit, which means that it behaves just like $latex {\pm 1}$ in $latex {\mathbb{Z}}$, or like $latex {\pm 1, \pm i}$ in $latex {\mathbb{Z}[i]}$. Very often, the statements we’ve been looking at and proving are true “up to multiplication by units.” Since there are infinitely many units in $latex {\mathbb{Z}[\sqrt 2]}$, it can mean that it’s annoying to determine even if two numbers are actually the same up to multiplication by units.
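
To see how plentiful these units are, here is a tiny Python sketch (an illustration, not part of the course material): multiplying by $latex {3 + 2\sqrt{2}}$ sends one solution of $latex {a^2 - 2b^2 = 1}$ to another, and iterating produces as many as you like.

```python
# Generate solutions to a^2 - 2b^2 = 1 by repeatedly multiplying by 3 + 2*sqrt(2):
# (a + b*sqrt(2)) * (3 + 2*sqrt(2)) = (3a + 4b) + (2a + 3b)*sqrt(2).
a, b = 1, 0
for _ in range(5):
    a, b = 3 * a + 4 * b, 2 * a + 3 * b
    print(a, b, a * a - 2 * b * b)   # the last column is always 1
```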

As you look further, there are many more strange and interesting behaviours. It is really interesting to see what properties are very general, and what properties vary a lot. It is also interesting to see the different ways in which properties we’re used to, like unique factorization, can fail.

For instance, we have seen that $latex {\mathbb{Z}[\sqrt{-5}]}$ does not have unique factorization. We showed this by seeing that $latex {6}$ factors in two fundamentally different ways. In fact, some numbers in $latex {\mathbb{Z}[\sqrt{-5}]}$ do factor uniquely, and others do not. But if one does not, then it factors in at most two fundamentally different ways.

In other rings, you can have numbers which factor in more fundamentally different ways. The actual behaviour here is also really poorly understood, and there are mathematicians who are actively pursuing these topics.

It’s a very large playground out there.


Paper: The Second Moments of Sums of Fourier Coefficients of Cusp Forms

This is joint work with Thomas Hulse, Chan Ieong Kuan, and Alex Walker.

We have just uploaded a paper to the arXiv on the second moment of sums of Fourier coefficients of cusp forms. This is the first in a trio of papers that we will be uploading and submitting in the near future.

Suppose $latex {f(z)}$ and $latex {g(z)}$ are weight $latex {k}$ holomorphic cusp forms on $latex {\text{GL}_2}$ with Fourier expansions

$$\begin{align} f(z) &= \sum_{n \geq 1} a(n) e(nz) \\
g(z) &= \sum_{n \geq 1} b(n) e(nz). \end{align}$$

Denote the sum of the first $latex {n}$ coefficients of a cusp form $latex {f}$ by $$ S_f(n) := \sum_{m \leq n} a(m). \tag{1}$$

We consider upper bounds for the second moment of $latex {S_f(n)}$.

The famous Ramanujan-Petersson conjecture gives us that $latex {a(n)\ll n^{\frac{k-1}{2} + \epsilon}}$. So one might assume $latex {S_f(X) \ll X^{\frac{k-1}{2} + 1 + \epsilon}}$. However, we expect the better bound $$ S_f(X) \ll X^{\frac{k-1}{2} + \frac{1}{4} + \epsilon}, \tag{2}$$

which we refer to as the “Classical Conjecture,” echoing Hafner and Ivić [HI].
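
As a rough numerical illustration (not taken from the paper), one can play with the weight $latex {k = 12}$ cusp form $latex {\Delta}$, whose coefficients are the Ramanujan tau numbers. The Python sketch below computes $latex {\tau(n)}$ from the product expansion $latex {q \prod_{n \geq 1}(1 - q^n)^{24}}$ and compares the partial sums $latex {S_\Delta(X)}$ against the conjectured size $latex {X^{\frac{k-1}{2} + \frac{1}{4}} = X^{5.75}}$.

```python
# Coefficients of prod_{n < M} (1 - q^n)^24, truncated at degree M - 1.
M = 300
c = [0] * M
c[0] = 1
for n in range(1, M):
    for _ in range(24):                    # multiply by (1 - q^n) twenty-four times
        for i in range(M - 1, n - 1, -1):
            c[i] -= c[i - n]

def tau(m):
    """tau(m) is the coefficient of q^m in q * prod (1 - q^n)^24."""
    return c[m - 1]

print([tau(m) for m in range(1, 6)])       # [1, -24, 252, -1472, 4830]
for X in (50, 100, 200, 300):
    S = sum(tau(m) for m in range(1, X + 1))
    print(X, S, abs(S) / X ** 5.75)        # |S(X)| against the conjectured size X^5.75
```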

Chandrasekharan and Narasimhan [CN] proved that the Classical Conjecture is true on average by showing that $$ \sum_{n \leq X} \lvert S_f(n) \rvert^2 = CX^{k- 1 + \frac{3}{2}} + B(X), \tag{3}$$

where $latex {B(x)}$ is an error term, $$ B(X) = \begin{cases} O(X^{k}\log^2(X)) \\ \Omega\left(X^{k – \frac{1}{4}}\frac{(\log \log \log X)^3}{\log X}\right), \end{cases} \tag{4}$$

and $latex {C}$ is the constant, $$ C = \frac{1}{(4k + 2)\pi^2} \sum_{n \geq 1}\frac{\lvert a(n) \rvert^2}{n^{k + \frac{1}{2}}}. \tag{5}$$

An application of the Cauchy-Schwarz inequality to (3) leads to the on-average statement that $$ \frac{1}{X} \sum_{n \leq X} |S_f(n)| \ll X^{\frac{k-1}{2} + \frac{1}{4}}. \tag{6}$$

From this, [HI] were able to show in some cases that $$ S_f(X) \ll X^{\frac{k-1}{2} + \frac{1}{3}}. \tag{7}$$

Better lower bounds are known for $latex {B(X)}$. In the same work [HI] improved the lower bound of [CN] for full-integral weight forms of level one and showed that $$ B(X) = \Omega\left(X^{k – \frac{1}{4}}\exp\left(D \tfrac{(\log \log x )^{1/4}}{(\log \log \log x)^{3/4}}\right)\right), \tag{8}$$

for a particular constant $latex {D}$.

The question of better understanding $latex {B(X)}$ is analogous to understanding the error term in the circle problem or divisor problem. In our paper, we introduce the Dirichlet series $$\begin{align} D(s, S_f \times S_g) &:= \sum_{n \geq 1} \frac{S_f(n) \overline{S_g(n)}}{n^{s + k – 1}} \\ D(s, S_f \times \overline{S_g}) &:= \sum_{n \geq 1} \frac{S_f(n)S_g(n)}{n^{s + k – 1}} \end{align}$$ and provide their meromorphic continuations. From our review of the literature, these Dirichlet series and their meromorphic continuations are new and provide new approaches to the classical problems related to $latex {S_f(n)}$.

Our primary result is the meromorphic continuation of $latex {D(s, S_f \times S_g)}$. As a first application, we prove a smoothed generalization of (3).

Theorem 1 Suppose either that $latex {f = g}$ is a Hecke eigenform or that $latex {f}$ and $latex {g}$ have real coefficients. \begin{equation*} \frac{1}{X} \sum_{n \geq 1}\frac{S_f(n)\overline{S_g(n)}}{n^{k – 1}}e^{-n/X} = CX^{\frac{1}{2}} + O_{f,g,\epsilon}(X^{-\frac{1}{2} + \theta + \epsilon}) \end{equation*} where \begin{equation*} C = \frac{\Gamma(\tfrac{3}{2}) }{4\pi^2} \frac{L(\frac{3}{2}, f\times g)}{\zeta(3)}= \frac{\Gamma(\tfrac{3}{2})}{4\pi ^2} \sum_{n \geq 1} \frac{a(n)\overline{b(n)}}{n^{k + \frac{1}{2}}}, \end{equation*} and $latex {\theta}$ denotes progress towards Selberg’s Eigenvalue Conjecture. Similarly, \begin{equation*} \frac{1}{X} \sum_{n \geq 1}\frac{S_f(n)S_g(n)}{n^{k – 1}}e^{-n/X} = C’X^{\frac{1}{2}} + O_{f,g,\epsilon}(X^{-\frac{1}{2} + \theta + \epsilon}), \end{equation*} where \begin{equation*} C’ = \frac{\Gamma(\tfrac{3}{2})}{4\pi^2} \frac{L(\frac{3}{2}, f\times \overline{g})}{\zeta(3)} = \frac{\Gamma(\tfrac{3}{2})}{4\pi ^2} \sum_{n \geq 1} \frac{a(n)b(n)}{n^{k + \frac{1}{2}}}.\end{equation*}

We have a complete meromorphic continuation, and it would not be hard to give additional terms in the asymptotic. But the next terms come from zeroes of the zeta function and are complicated to nail down exactly.

Choosing $latex {f = g}$, we recover a proof of the Classical Conjecture on Average. More interestingly, we show that the secondary growth terms do not arise from a pole, nor are there prescribed polar reasons for growth. The secondary growth in the classical result comes from choosing a sharp cutoff instead of the nicely behaving and natural smooth cutoffs.

We prove analogous results for sums of normalized Fourier coefficients $$ S_f^\alpha(n) := \sum_{m \leq n} \frac{a(m)}{m^\alpha} \tag{9}$$

for $latex {0 \leq \alpha < k}$.

In the path to proving these results, we explicitly demonstrate remarkable cancellation between Rankin-Selberg convolution $latex {L}$-functions $latex {L(s, f\times g)}$ and shifted convolution sums $$ Z(s, 0; f,g) := \sum_{n, h} \frac{a(n)\overline{b(n-h)}}{n^{s + k – 1}}. \tag{10}$$

Comparing our results and methodologies with the main results of [CN] guarantees similar cancellation for general level and general weight, including half-integral weight forms.

We provide additional applications of the meromorphic continuation of $latex {D(s, S_f \times S_g)}$ in forthcoming works, which will be uploaded to the arXiv and described briefly here soon.

For exact references, see the paper.


Another proof of Wilson’s Theorem

While teaching a largely student-discovery style elementary number theory course to high schoolers at the Summer@Brown program, we were looking for instructive but interesting problems to challenge our students. By we, I mean Alex Walker, my academic little brother, and me. After a bit of experimentation with generators and orders, we stumbled across a proof of Wilson’s Theorem, different than the standard proof.

Wilson’s theorem is a classic result of elementary number theory, and is used in some elementary texts to prove Fermat’s Little Theorem, or to introduce primality testing algorithms that give no hint of the factorization.

Theorem 1 (Wilson’s Theorem) For a prime number $latex {p}$, we have $$ (p-1)! \equiv -1 \pmod p. \tag{1}$$

The theorem is clear for $latex {p = 2}$, so we only consider proofs for “odd primes $latex {p}$.”

The standard proof of Wilson’s Theorem included in almost every elementary number theory text starts with the factorial $latex {(p-1)!}$, the product of all the units mod $latex {p}$. Then as the only elements which are their own inverses are $latex {\pm 1}$ (as $latex {x^2 \equiv 1 \pmod p \iff p \mid (x^2 – 1) \iff p\mid x+1}$ or $latex {p \mid x-1}$), every element in the factorial multiplies with its inverse to give $latex {1}$, except for $latex {-1}$. Thus $latex {(p-1)! \equiv -1 \pmod p.} \diamondsuit$

Now we present a different proof.

Take a primitive root $latex {g}$ of the unit group $latex {(\mathbb{Z}/p\mathbb{Z})^\times}$, so that each number $latex {1, \ldots, p-1}$ appears exactly once in $latex {g, g^2, \ldots, g^{p-1}}$. Recalling that $latex {1 + 2 + \ldots + n = \frac{n(n+1)}{2}}$ (a great example of classical pattern recognition in an elementary number theory class), we see that multiplying these together gives $latex {(p-1)!}$ on the one hand, and $latex {g^{(p-1)p/2}}$ on the other.

Now $latex {g^{(p-1)/2}}$ is a solution to $latex {x^2 \equiv 1 \pmod p}$, and it is not $latex {1}$ since $latex {g}$ is a generator and thus has order $latex {p-1}$. So $latex {g^{(p-1)/2} \equiv -1 \pmod p}$, and raising $latex {-1}$ to the odd power $latex {p}$ yields $latex {-1}$, completing the proof $\diamondsuit$.
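
Both the theorem and the identity used in this proof are easy to check numerically for small primes. A quick Python sketch (just a sanity check, not part of the argument):

```python
from math import factorial

def is_primitive_root(g, p):
    """Brute-force check that g generates the unit group mod p."""
    return len({pow(g, k, p) for k in range(1, p)}) == p - 1

for p in (3, 5, 7, 11, 13):
    g = next(g for g in range(2, p) if is_primitive_root(g, p))
    lhs = factorial(p - 1) % p                   # (p-1)! mod p
    rhs = pow(g, (p - 1) * p // 2, p)            # g * g^2 * ... * g^(p-1) mod p
    print(p, g, lhs, rhs, lhs == rhs == p - 1)   # both equal p - 1, i.e. -1 mod p
```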

After posting this, we have since seen that this proof is suggested in a problem in Ireland and Rosen’s extremely good number theory book. But it was pleasant to see it come up naturally, and it’s nice to suggest to our students that you can stumble across proofs.

It may be interesting to question why $latex {x^2 \equiv 1 \pmod p \iff x \equiv \pm 1 \pmod p}$ appears in a fundamental way in both proofs.

This post appears on the author’s personal website davidlowryduda.com and on the Math.Stackexchange Community Blog math.blogoverflow.com. It is also available in pdf note form. It was typeset in \TeX, hosted on WordPress sites, converted using the utility github.com/davidlowryduda/mse2wp, and displayed with MathJax.


Friendly Introduction to Sieves with a Look Towards Progress on the Twin Primes Conjecture

This is an extension and background to a talk I gave on 9 October 2013 to the Brown Graduate Student Seminar, called `A friendly intro to sieves with a look towards recent progress on the twin primes conjecture.’ During the talk, I mention several sieves, some with a lot of detail and some with very little detail. I also discuss several results and build upon many sources. I’ll provide missing details and/or sources for additional reading here.

Furthermore, I like this talk, so I think it’s worth preserving.

1. Introduction

We talk about sieves and primes. Long, long ago, Euclid famously proved the infinitude of primes ($latex {\approx 300}$ B.C.). Although he didn’t show it, the stronger statement that the sum of the reciprocals of the primes diverges is true:

$latex \displaystyle \sum_{p} \frac{1}{p} \rightarrow \infty, $

where the sum is over primes.

Proof: Suppose that the sum converged. Then there is some $latex {k}$ such that

$latex \displaystyle \sum_{i = k+1}^\infty \frac{1}{p_i} < \frac{1}{2}. $

Suppose that $latex {Q := \prod_{i = 1}^k p_i}$ is the product of the primes up to $latex {p_k}$. Then the integers $latex {1 + Qn}$ are relatively prime to the primes in $latex {Q}$, and so are only made up of the primes $latex {p_{k+1}, \ldots}$. This means that

$latex \displaystyle \sum_{n = 1}^\infty \frac{1}{1+Qn} \leq \sum_{t \geq 0} \left( \sum_{i > k} \frac{1}{p_i} \right) ^t < 2, $

where the first inequality is true since all the terms on the left appear in the middle (think prime factorizations and the distributive law), and the second inequality is true because it’s bounded by the geometric series with ratio $latex {1/2}$. But by limit comparison with the harmonic series, the sum on the left diverges (aha! Something for my math 100 students), and so we arrive at a contradiction.

Thus the sum of the reciprocals of the primes diverges. $latex \diamondsuit$
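
Numerically, the divergence is very slow: the partial sums grow roughly like $latex {\log \log x}$. Here is a small Python sanity check (not part of the talk), comparing against $latex {\log\log x + 0.2615}$, where $latex {0.2615\ldots}$ is the Mertens constant.

```python
from math import log

def primes_up_to(n):
    """A simple sieve of Eratosthenes."""
    is_p = [True] * (n + 1)
    is_p[0] = is_p[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if is_p[i]:
            for j in range(i * i, n + 1, i):
                is_p[j] = False
    return [i for i, flag in enumerate(is_p) if flag]

for x in (10 ** 3, 10 ** 4, 10 ** 5, 10 ** 6):
    s = sum(1 / p for p in primes_up_to(x))
    print(x, round(s, 4), round(log(log(x)) + 0.2615, 4))
```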



Twenty Mathematicians, Two Hard Problems, One Week, IdeaLab2013

July has been an exciting and busy month for me. I taught number theory 3 hours a day, 5 days a week, for 3 weeks to (mostly) devoted and motivated high school students in the Summer@Brown program. In the middle, I moved to Massachusetts. Immediately after the Summer@Brown program ended, I was given the opportunity to return to ICERM to participate in an experimental program called an IdeaLab.

IdeaLab invited 20 early career mathematicians to come together for a week and to generate ideas on two very different problems: Tipping Points in Climate Systems and Efficient Fully Homomorphic Encryption. Although I plan on writing a bit more about each of these problems and the IdeaLab process in action (at least from my point of view), I should say something about what these are.

Models of Earth’s climate are used all the time, to give daily weather reports, to predict and warn about hurricanes, to attempt to understand the effects of anthropogenic sources of carbon on long-term climate. As we know from uncertainty about weather reports, these models aren’t perfect. In particular, they don’t currently predict sudden, abrupt changes called ‘tipping points.’ But are tipping points possible? There have been warm periods following ice-ages in the past, so it seems that there might be tipping points that aren’t modelled in the system. Understanding these forms the basis of the idea behind the Tipping Points in Climate Systems project. This project also forms another link in Mathematics of Planet Earth.

On the other hand, homomorphic encryption is a topic in modern cryptography. To encrypt a message is to make it hard or impossible for others to read it unless they have a ‘key.’ You might think that you wouldn’t want someone holding onto encrypted data to be able to do anything with the data, and in most modern encryption algorithms this is the case. But what if we were able to give Google an encrypted dataset and ask them to perform a search on it? Is it possible to have a secure encryption that would allow Google to do some sort of search algorithm and give us the results, but without Google ever understanding the data itself? It may seem far-fetched, but this is exactly the idea behind the Efficient Fully Homomorphic Encryption group. Surprisingly enough, it is possible. But known methods are obnoxiously slow and infeasible. This is why the group was after ‘efficient’ encryption.

So 20 early career mathematicians from all sorts of areas of mathematics gathered to think about these two questions. For the rest of this post, I’d like to talk about the structure and my thoughts on the IdeaLab process. In later posts, I’ll talk about each of the two major topics and what sorts of ideas came out of the process.



Chinese Remainder Theorem (SummerNT)

This post picks up from the previous post on Summer@Brown number theory from 2013.

Now that we’d established ideas about solving the modular equation $latex ax \equiv c \mod m$, solving the linear diophantine equation $latex ax + by = c$, and about general modular arithmetic, we began to explore systems of modular equations. That is, we began to look at equations like

Suppose $latex x$ satisfies the following three modular equations (rather, the following system of linear congruences):

$latex x \equiv 1 \mod 5$

$latex x \equiv 2 \mod 7$

$latex x \equiv 3 \mod 9$

Can we find out what $latex x$ is? This is a clear parallel to solving systems of linear equations, as is usually done in algebra I or II in secondary school. A common way to solve systems of linear equations is to solve for a variable and substitute it into the next equation. We can do something similar here.

From the first equation, we know that $latex x = 1 + 5a$ for some $latex a$. Substituting this into the second equation, we get that $latex 1 + 5a \equiv 2 \mod 7$, or that $latex 5a \equiv 1 \mod 7$. So $latex a$ will be the modular inverse of $latex 5 \mod 7$. A quick calculation (or a slightly less quick Euclidean algorithm in the general case) shows that the inverse is $latex 3$. Multiplying both sides by $latex 3$ yields $latex a \equiv 3 \mod 7$, or rather that $latex a = 3 + 7b$ for some $latex b$. Back substituting, we see that this means that $latex x = 1+5a = 1+5(3+7b)$, or that $latex x = 16 + 35b$.

Now we repeat this work, using the third equation. $latex 16 + 35b \equiv 3 \mod 9$, so that $latex 8b \equiv 5 \mod 9$. Another quick calculation (or Euclidean algorithm) shows that this means $latex b \equiv 4 \mod 9$, or rather $latex b = 4 + 9c$ for some $latex c$. Putting this back into $latex x$ yields the final answer:

$latex x = 16 + 35(4 + 9c) = 156 + 315c$

$latex x \equiv 156 \mod 315$

And if you go back and check, you can see that this works. $latex \diamondsuit$
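
The substitution method above is easy to mechanize: combine two congruences at a time into one. Here is a short Python sketch (my own, assuming the moduli are pairwise relatively prime) that reproduces the answer $latex 156 \mod 315$.

```python
def crt(residues, moduli):
    """Solve x = r (mod m) for each pair, combining two congruences at a time.
    Assumes the moduli are pairwise relatively prime."""
    x, m = 0, 1
    for r, n in zip(residues, moduli):
        # solve x + m*a = r (mod n) for a, using the inverse of m mod n
        a = (r - x) * pow(m, -1, n) % n
        x, m = x + m * a, m * n
    return x % m, m

print(crt([1, 2, 3], [5, 7, 9]))   # (156, 315)
```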

There is another, very slick, method as well. This was a clever solution mentioned in class. The idea is to construct a solution directly. The way we’re going to do this is to set up a sum, where each part only contributes to one of the three modular equations. In particular, note that if we take something like $latex 7 \cdot 9 \cdot [7\cdot9]_5^{-1}$, where this inverse means the modular inverse with respect to $latex 5$, then this vanishes mod $latex 7$ and mod $latex 9$, but gives $latex 1 \mod 5$. Similarly $latex 2\cdot 5 \cdot 9 \cdot [5\cdot9]_7^{-1}$ vanishes mod 5 and mod 9 but leaves the right remainder mod 7, and $latex 3 \cdot 5 \cdot 7 \cdot [5\cdot 7]_9^{-1}$ vanishes mod 5 and mod 7, but leaves the right remainder mod 9.

Summing them together yields a solution (Do you see why?). The really nice thing about this algorithm to get the solution is that it parallelizes really well, meaning that you can give different computers separate problems, and then combine the things together to get the final answer. This is going to come up again later in this post.
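
For comparison, here is the direct construction in the same style (again a sketch, under the assumption of pairwise relatively prime moduli): each summand is built to vanish modulo all but one modulus.

```python
def crt_direct(residues, moduli):
    """Direct CRT construction: each term is r mod m and 0 mod every other modulus."""
    M = 1
    for m in moduli:
        M *= m
    x = 0
    for r, m in zip(residues, moduli):
        other = M // m                        # product of the other moduli
        x += r * other * pow(other, -1, m)
    return x % M

print(crt_direct([1, 2, 3], [5, 7, 9]))       # 156
```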

These are two solutions that follow along the idea of the Chinese Remainder Theorem (CRT), which in general says that as long as the moduli are relatively prime (and each $latex a_i$ is invertible mod $latex m_i$), then the system

$latex a_1 x \equiv b_1 \mod m_1$

$latex a_2 x \equiv b_2 \mod m_2$

$latex \cdots$

$latex a_k x \equiv b_k \mod m_k$

will always have a unique solution $latex \mod m_1m_2 \ldots m_k$. Note, this is two statements: there is a solution (statement 1), and the solution is unique up to modding by the product of the moduli (statement 2). Proof Sketch: Either of the two methods described above to solve that problem can lead to a proof here. But there is one big step that makes such a proof much easier. Once you’ve shown that the CRT is true for a system of two congruences (effectively meaning you can replace them by one congruence), this means that you can use induction. You can reduce the n+1st case to the nth case using your newfound knowledge of how to combine two equations into one. Then the inductive hypothesis carries out the proof.

Note also that it’s pretty easy to go backwards. If I know that $latex x \equiv 12 \mod 30$, then I know that $latex x$ will also be the solution to the system

$latex x \equiv 2 \mod 5$

$latex x \equiv 0 \mod 6$

In fact, a higher view of the CRT reveals that the great strength is that considering a number mod a set of relatively prime moduli is the exact same (isomorphic to) considering a number mod the product of the moduli.

The remainder of this post will be about why the CRT is cool and useful.

Application 1: Multiplying Large Numbers

Firstly, the easier application. Suppose you have two really large integers $latex a,b$ (by really large, I mean with tens or hundreds of digits at least – for concreteness, say they each have $latex n$ digits). When a computer computes their product $latex ab$, it has to perform $latex n^2$ digit multiplications, which can be a whole lot if $latex n$ is big. But a computer can calculate mods of numbers in something like $latex \log n$ time, which is much much much faster. So one way to quickly compute the product of two really large numbers is to use the Chinese Remainder Theorem to represent each of $latex a$ and $latex b$ with a set of much smaller congruences. For example (though we’ll be using small numbers), say we want to multiply $latex 12$ by $latex 21$. We might represent $latex 12$ by $latex 12 \equiv 2 \mod 5, 5 \mod 7, 1 \mod 11$ and represent $latex 21$ by $latex 21 \equiv 1 \mod 5, 0 \mod 7, 10 \mod 11$. To find their product, calculate their product in each of the moduli: $latex 2 \cdot 1 \equiv 2 \mod 5, 5 \cdot 0 \equiv 0 \mod 7, 1 \cdot 10 \equiv 10 \mod 11$. We know we can get a solution to the resulting system of congruences using the above algorithm, and the smallest positive solution will be the actual product.
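
Concretely, the $latex 12 \cdot 21$ example looks like this (reusing `crt_direct` from above; of course the numbers here are far too small for the method to be worthwhile):

```python
moduli = [5, 7, 11]
a = [12 % m for m in moduli]                            # [2, 5, 1]
b = [21 % m for m in moduli]                            # [1, 0, 10]
prod = [(x * y) % m for x, y, m in zip(a, b, moduli)]   # [2, 0, 10]
print(crt_direct(prod, moduli))                         # 252 = 12 * 21
```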

This might not feel faster, but for much larger numbers, it really is. As an aside, here’s one way to make it play nice for parallel processing (which vastly makes things faster). After you’ve computed the congruences of $latex 12$ and $latex 21$ for the different moduli, send the numbers mod 5 to one computer, the numbers mod 7 to another, and the numbers mod 11 to a third (but also send each computer the list of moduli: 5,7,11). Each computer will calculate the product in their modulus and then use the Euclidean algorithm to calculate the inverse of the product of the other two moduli, and multiply these together. Afterwards, the computers resend their data to a central computer, which just adds the result and takes it mod $latex 5 \cdot 7 \cdot 11$ (to get the smallest positive solution). Since mods are fast and all the multiplication is with smaller integers (no bigger than the largest mod, ever), it all goes faster. And since it’s parallelized, you’re replacing a hard task with a bunch of smaller easier tasks that can all be worked on at the same time. Very powerful stuff!

I have actually never seen someone give the optimal running time that would come from this sort of procedure, though I don’t know why. Perhaps I’ll look into that one day.

Application 2: Secret Sharing in Networks of People

This is really slick. Let’s lay out the situation: I have a secret. I want you, my students, to have access to the secret, but only if at least six of you decide together that you want access. So I give each of you a message, consisting of a number and a modulus. Using the CRT, I can create a scheme where if any six of you decide you want to open the message, then you can pool your six bits together to get the message. Notice, I mean any six of you, instead of a designated set of six. Further, no five people can recover the message without a sixth in a reasonable amount of time. That’s pretty slick, right?

The basic idea is for me to encode my message as a number $latex P$ (I use P to mean plain-text). Then I choose a set of moduli, one for each of you, but I choose them in such a way that the product of any $latex 5$ of them is smaller than $latex P$, but the product of any $latex 6$ of them is greater than $latex P$ (what this means is that I choose a lot of primes or near-primes right around the same size, all right around the fifth root of $latex P$). To each of you, I give you the value of $latex P \mod m_i$ and the modulus $latex m_i$, where $latex m_i$ is your modulus. Since $latex P$ is much bigger than $latex m_i$, it would take you a very long time to just happen across the correct multiple that reveals a message (if you ever managed). Now, once six of you get together and put your pieces together, the CRT guarantees a solution. Since the product of your six moduli will be larger than $latex P$, the smallest solution will be $latex P$. But if only five of you get together, since the product of your moduli is less than $latex P$, you don’t recover $latex P$. In this way, we have our secret sharing network.

To get an idea of the security of this protocol, you might imagine if I gave each of you moduli around the size of a quadrillion. Then missing any single person means there are hundreds of trillions of reasonable multiples of your partial plain-text to check before getting to the correct multiple.
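
Here is a toy sketch of this scheme in Python with tiny, purely illustrative numbers (the function names, the secret, and the moduli are all my own inventions; threshold schemes of this flavor are often attributed to Mignotte). The moduli are pairwise coprime and chosen so that any six of them multiply past the secret, while any five fall short:

```python
from math import prod

P = 123456789                            # the secret, encoded as a number (illustrative)
moduli = [23, 25, 27, 29, 31, 32, 37]    # pairwise coprime; any 6 multiply past P, any 5 fall short
shares = [(P % m, m) for m in moduli]    # each person gets one (residue, modulus) pair

def recover(pieces):
    """Combine shares with the CRT; only correct if enough shares are pooled."""
    M = prod(m for _, m in pieces)
    x = 0
    for r, m in pieces:
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)     # needs Python 3.8+ for modular inverse via pow
    return x % M

print(recover(shares[:6]))               # six shares: prints 123456789, the secret
print(recover(shares[:5]))               # five shares: prints some other, much smaller number
```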

A similar idea, though one that doesn’t really use the CRT, is the following problem: suppose two millionaires, Alice and Bob (two names of cryptological fame), want to see which of them is richer, but without revealing how much wealth they actually have. This might sound impossible, but indeed it is not! There is a way for them to establish which one is richer with neither learning how much money the other has. Similar problems exist for larger parties (more than just 2 people), but none is more famous than the original: Yao’s Millionaire Problem.

Alright – I’ll see you all in class.


Notes on the first week (SummerNT)

We’ve covered a lot of ground this first week! I wanted to provide a written summary, with partial proofs, of what we have done so far.

We began by learning about proofs. We talked about direct proofs, inductive proofs, proofs by contradiction, and proofs using the contrapositive of the statement we want to prove. A proof is an argument justified from certain logical premises (which we call axioms); in contrast to arguments in other disciplines, a mathematical proof is purely logical, and it is definitively either correct or incorrect.

We then established a set of axioms for the integers that would serve as the foundation of our exploration into the (often fantastic yet sometimes frustrating) realm of number theory. In short, the integers are a non-empty set with addition and multiplication [which are both associative and commutative, have identities, and behave as we think they should behave; further, there are additive inverses], with a total order [any integer is either bigger than, less than, or equal to any other integer, and the order behaves as we expect under addition and multiplication], and satisfying the deceptively important well-ordering principle [every nonempty set of positive integers has a least element].

With this logical framework in place, we really began number theory in earnest. We talked about divisibility [we say that $latex a$ divides $latex b$, written $latex a \mid b$, if $latex b = ak$ for some integer $latex k$]. We showed that every number has a prime factorization. To do this, we used the well-ordering principle.

Suppose that not every integer has a prime factorization. Then there must be a smallest integer that does not have a prime factorization: call it $latex n$. We know that $latex n$ is either a prime or a composite. If it’s prime, then it has a prime factorization (itself). If it’s composite, then it factors as $latex n = ab$ with $latex a,b < n$. But then we know that each of $latex a$ and $latex b$ has a prime factorization, since they are less than $latex n$. Multiplying them together, we see that $latex n$ has a prime factorization after all, a contradiction. $latex \diamondsuit$
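
This argument also tells you how to actually find a factorization: peel off the smallest divisor greater than $latex 1$ (which is necessarily prime) and repeat on what is left. Here is a small sketch in Python, with a helper name of my own choosing:

```python
def prime_factorization(n):
    """Return the prime factors of n >= 2 in increasing order, via trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:                 # d is the smallest divisor > 1, hence prime
            return [d] + prime_factorization(n // d)
        d += 1
    return [n]                         # no divisor up to sqrt(n), so n itself is prime

print(prime_factorization(5040))       # [2, 2, 2, 2, 3, 3, 5, 7]
```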

Our first major result is the following:

There are infinitely many primes

There are many proofs, and we saw two of them in class. For posterity, I’ll present three here.

First proof that there are infinitely many primes

Take a finite collection of primes, say $latex p_1, p_2, \ldots, p_k$. We will show that there is at least one more prime not in the collection. To see this, consider the number $latex p_1 p_2 \ldots p_k + 1$. We know that this number will factor into primes, but upon division by every prime in our collection, it leaves a remainder of $latex 1$. Thus it has at least one prime factor different from every prime in our collection. $latex \diamondsuit$

This was a common proof used in class. A pattern also quickly emerges: $latex 2 + 1 = 3$, a prime. $latex 2 \cdot 3 + 1 = 7$, a prime. $latex 2 \cdot 3 \cdot 5 + 1 = 31$, also a prime. Is it always the case that a product of primes plus one is another prime? No, in fact. If you look at $latex 2 \cdot 3 \cdot 5 \cdot 7 \cdot 11 \cdot 13 + 1 = 30031 = 59 \cdot 509$, you get a nonprime.
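
You can check this pattern by machine too. Here is a quick sketch (the helper name smallest_prime_factor is mine) that multiplies the first few primes together, adds one, and factors the result:

```python
def smallest_prime_factor(n):
    """Return the smallest prime factor of n >= 2 by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

product = 1
for p in [2, 3, 5, 7, 11, 13]:
    product *= p
    n = product + 1
    spf = smallest_prime_factor(n)
    if spf == n:
        print(n, "is prime")
    else:
        print(n, "is composite, with smallest prime factor", spf)
# The first five lines report primes (3, 7, 31, 211, 2311); the last reports 30031 = 59 * 509.
```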

Second proof that there are infinitely many primes

In a similar vein to the first proof, we will show that there is always a prime larger than $latex n$ for any positive integer $latex n$. To see this, consider $latex n! + 1$. Upon dividing by any prime at most $latex n$, we get a remainder of $latex 1$. So all of its prime factors are larger than $latex n$, and so there are infinitely many primes. $latex \diamondsuit$

I would also like to present one more, which I’ve always liked.

Third proof that there are infinitely many primes

Suppose there are only finitely many primes $latex p_1, \ldots, p_k$. Then consider the two numbers $latex n = p_1 \cdots p_k$ and $latex n - 1$. We know that $latex n - 1$ has a prime factor, so it must share a factor $latex P$ with $latex n$, since $latex n$ is the product of all the primes. But then $latex P$ divides $latex n - (n - 1) = 1$, which is nonsense; no prime divides $latex 1$. Thus there are infinitely many primes. $latex \diamondsuit$

We also looked at modular arithmetic, often called the arithmetic of a clock. When we say that $latex a \equiv b \mod m$, we mean that $latex m \mid (b - a)$, or equivalently that $latex a = b + km$ for some integer $latex k$ (can you show these are equivalent?). We pronounce that statement as ”$latex a$ is congruent to $latex b$ mod $latex m$.” We played a lot with modular arithmetic: we added, subtracted, and multiplied many times, hopefully enough to build a bit of familiarity with the feel. In most ways, it feels like regular arithmetic. But in some ways, it’s different. Looking at the integers mod $latex m$ partitions the integers into a set of equivalence classes, i.e. into sets of integers that are congruent to $latex 0 \mod m, 1 \mod m, \ldots$. When we talk about adding or multiplying numbers mod $latex m$, we’re really talking about manipulating these equivalence classes. (This isn’t super important to us – just a hint at what’s going on beneath the surface.)

We expect that if $latex a \equiv b \mod m$, then we would also have $latex ac \equiv bc \mod m$ for any integer $latex c$, and this is true (can you prove this?). But we would also expect that if we had $latex ac \equiv bc \mod m$, then we would necessarily have $latex a \equiv b \mod m$, i.e. that we can cancel out the same number on each side. And it turns out that’s not the case. For example, $latex 4 \cdot 2 \equiv 4 \cdot 5 \mod 6$ (both are $latex 2 \mod 6$), but ‘cancelling the fours’ says that $latex 2 \equiv 5 \mod 6$ – that’s simply not true. With this example in mind, we went about proving things about modular arithmetic. It’s important to know what one can and can’t do.

One very big and important observation is that it doesn’t matter in what order we operate: we can multiply an expression out and then ‘mod it down’, or ‘mod it down’ and then multiply, or intermix these operations. Knowing this allows us to simplify expressions like $latex 11^4 \mod 12$: since $latex 11 \equiv -1 \mod 12$ and $latex (-1)^2 \equiv 1 \mod 12$, we get $latex 11^4 \equiv (-1)^{2 \cdot 2} \equiv 1 \mod 12$. If we’d wanted to, we could have multiplied it out and then reduced – the choice is ours!
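
If you want to see this on a computer, here is a tiny, purely illustrative check of both points above:

```python
m = 12
multiply_then_reduce = (11 ** 4) % m                  # multiply everything out, then reduce mod 12
reduce_as_you_go = 1
for _ in range(4):
    reduce_as_you_go = (reduce_as_you_go * 11) % m    # reduce after every multiplication
print(multiply_then_reduce, reduce_as_you_go)         # 1 1, the same answer either way

# ...and the cancellation warning: 4*2 and 4*5 agree mod 6, but 2 and 5 do not.
print((4 * 2) % 6, (4 * 5) % 6)                       # 2 2
print(2 % 6, 5 % 6)                                   # 2 5
```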

Amidst our exploration of modular arithmetic, we noticed some patterns. Some numbers are invertible in the modular sense, while others are not. For example, $latex 5 \cdot 5 \equiv 1 \mod 6$, so in that sense, we might think of $latex \frac{1}{5} \equiv 5 \mod 6$. More interestingly but in the same vein, $latex \frac{1}{2} \equiv 6 \mod 11$ since $latex 2 \cdot 6 \equiv 1 \mod 11$. Stated more formally, a number $latex a$ has a modular inverse $latex a^{-1} \mod m$ if there is a solution to the modular equation $latex ax \equiv 1 \mod m$, in which case that solution is the modular inverse. When does this happen? Are these units special?
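
One low-tech way to hunt for an inverse is just to try every residue. Here is a short sketch (the name modular_inverse is mine), which also hints at the answer to the question above: an inverse appears exactly when $latex \gcd(a, m) = 1$.

```python
def modular_inverse(a, m):
    """Search for x with a*x = 1 (mod m); return None if no inverse exists."""
    for x in range(1, m):
        if (a * x) % m == 1:
            return x
    return None

print(modular_inverse(5, 6))    # 5, since 5 * 5 = 25 = 1 mod 6
print(modular_inverse(2, 11))   # 6, since 2 * 6 = 12 = 1 mod 11
print(modular_inverse(4, 6))    # None, since gcd(4, 6) = 2 > 1
```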

Returning to division, we think about the greatest common divisor. I showed you the Euclidean algorithm, and you managed to prove it in class. The Euclidean algorithm produces the greatest common divisor of $latex a$ and $latex b$, and it looks like this (where I assume that $latex b > a$):

$latex b = q_1 a + r_1$

$latex a = q_2 r_1 + r_2$

$latex r_1 = q_3 r_2 + r_3$

$latex \cdots$

$latex r_k = q_{k+2}r_{k+1} + r_{k+2}$

$latex r_{k+1} = q_{k+3}r_{k+2} + 0$

where in each step, we just did regular old division to guarantee a remainder $latex r_i$ that was less than the divisor. Since each remainder becomes the divisor in the next step, the remainders strictly decrease, so the algorithm terminates (in fact, it terminates quite quickly). Further, using the notation from above, I claimed that the gcd of $latex a$ and $latex b$ is the last nonzero remainder, in this case $latex r_{k+2}$. How did we prove it?

Proof of Euclidean Algorithm

Suppose that $latex d$ is a common divisor (such as the greatest common divisor) of $latex a$ and $latex b$. Then $latex d$ divides the left hand side of $latex b - q_1 a = r_1$, and thus must also divide the right hand side. So any common divisor of $latex a$ and $latex b$ is also a divisor of $latex r_1$. This carries down the list, so that the gcd of $latex a$ and $latex b$ divides each remainder term. How do we know that the last nonzero remainder is exactly the gcd, and no more? The way we proved it in class relied on the observation that $latex r_{k+2} \mid r_{k+1}$. But then $latex r_{k+2}$ divides the right hand side of $latex r_k = q_{k+2} r_{k+1} + r_{k+2}$, and so it also divides the left. This carries up the chain, so that $latex r_{k+2}$ divides both $latex a$ and $latex b$. So it is itself a common divisor, and thus cannot be larger than the greatest common divisor. $latex \diamondsuit$

As an aside, I really liked the way it was proved in class. Great job!
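
For concreteness, here is the division chain as a few lines of Python; it prints each step in the same shape as the display above and returns the last nonzero remainder (the function name euclidean_chain is just my label for it):

```python
def euclidean_chain(a, b):
    """Run the Euclidean algorithm on (a, b) with b > a, printing each division step."""
    while a != 0:
        q, r = divmod(b, a)            # b = q * a + r with 0 <= r < a
        print(b, "=", q, "*", a, "+", r)
        b, a = a, r                    # the divisor becomes the dividend, the remainder the new divisor
    return b                           # the last nonzero remainder

print("gcd:", euclidean_chain(57, 81))  # prints the chain for 81 and 57, then "gcd: 3"
```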

The Euclidean algorithm can be run backwards with back-substitution (some call this the extended Euclidean algorithm) to give a solution in $latex x,y$ to the equation $latex ax + by = \gcd(a,b)$. This has played a super important role in our class ever since. By the way, though I never said it in class, we proved Bezout’s Identity along the way (which we just called part of the Extended Euclidean Algorithm). This essentially says that the gcd of $latex a$ and $latex b$ is the smallest positive number expressible in the form $latex ax + by$. The Euclidean algorithm has shown us that the gcd is expressible in this form. How do we know it’s the smallest positive one? Observe again that if $latex c$ is a common divisor of $latex a$ and $latex b$, then $latex c$ divides the left hand side of $latex ax + by = d$, and so $latex c \mid d$. So a positive $latex d$ of this form cannot be smaller than the gcd.
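
The back-substitution can be packaged as a short recursive routine. This is only a sketch (the name extended_gcd is mine), but it returns a triple $latex (g, x, y)$ with $latex ax + by = g = \gcd(a,b)$:

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if a == 0:
        return b, 0, 1
    g, x1, y1 = extended_gcd(b % a, a)
    # We have (b % a)*x1 + a*y1 = g and b % a = b - (b // a)*a; substituting back:
    return g, y1 - (b // a) * x1, x1

g, x, y = extended_gcd(57, 81)
print(g, x, y)                     # 3 10 -7, and indeed 57*10 + 81*(-7) = 3
```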

This led us to explore and solve linear Diophantine equations of the form $latex ax + by = c$ for general $latex a,b,c$. There are solutions exactly when $latex \gcd(a,b) \mid c$, and in such cases there are infinitely many solutions. (Do you remember how to produce infinitely many other solutions from a single one? There is a sketch just below.)
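
Concretely, once one solution is in hand (say from the extended Euclidean algorithm above, after scaling), every other solution comes from shifting $latex x$ by $latex b/g$ and $latex y$ by $latex -a/g$, where $latex g = \gcd(a,b)$. A tiny illustration with numbers of my own choosing:

```python
a, b, c = 6, 15, 9
g = 3                                   # gcd(6, 15)
x0, y0 = -6, 3                          # one particular solution: 6*(-6) + 15*3 = 9
for k in range(-2, 3):
    x, y = x0 + k * (b // g), y0 - k * (a // g)
    print(x, y, a * x + b * y)          # the last column is always 9
```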

Linear Diophantine equations are very closely related to linear problems in modular arithmetic of the form $latex ax \equiv c \mod m$. In particular, this last modular equation is equivalent to $latex ax + my = c$ for some $latex y$ (can you show that these are the same?). Using what we’ve learned about linear Diophantine equations, we know that $latex ax \equiv c \mod m$ has a solution iff $latex \gcd(a,m) \mid c$. But now there are only finitely many incongruent (i.e. distinct mod $latex m$) solutions. This is called the ‘Linear Congruence Theorem,’ and it is, interestingly, the first major result we’ve learned whose Wikipedia article carries no proof.

Theorem: the modular equation $latex ax \equiv b \mod m$ has a solution iff $latex \gcd(a,m) \mid b$, in which case there are exactly $latex \gcd(a,m)$ incongruent solutions.

Proof

We can translate a solution of $latex ax \equiv b \mod m$ into a solution of $latex ax + my = b$, and vice-versa. So we know from the extended Euclidean algorithm that there are solutions only if $latex \gcd(a,m) \mid b$. Now, let’s show that there are $latex \gcd(a,m)$ incongruent solutions. I will do this a bit differently than how we did it in class.

First, let’s do the case when $latex \gcd(a,m)=1$, and suppose we have a solution $latex (x,y)$ so that $latex ax + my = b$. If there is another solution, then there is some perturbation we can make, shifting $latex x$ by a number $latex x'$ and $latex y$ by a number $latex y'$, that yields another solution looking like $latex a(x + x') + m(y + y') = b$. As we already know that $latex ax + my = b$, we can remove that from the equation. Then we get simply $latex ax' = -my'$. Since $latex \gcd(m,a) = 1$, we know (see below the proof) that $latex m$ divides $latex x'$. But then the new solution satisfies $latex x + x' \equiv x \mod m$, so all solutions fall in the same congruence class: the same as $latex x$.

Now suppose that $latex \gcd(a,m) = d$ and that there is a solution. Since there is a solution, each of $latex a, m,$ and $latex b$ is divisible by $latex d$, and we can write them as $latex a = da', b = db', m = dm'$. Then the modular equation $latex ax \equiv b \mod m$ is the same as $latex da' x \equiv d b' \mod d m'$, which is the same as $latex d m' \mid (d b' - d a'x)$. Note that in this last statement we can cancel the $latex d$ from both sides, so that $latex m' \mid (b' - a'x)$, or equivalently $latex a'x \equiv b' \mod m'$. Since $latex d = \gcd(a,m)$, we have $latex \gcd(a', m') = 1$, so from the first case we know this has exactly one solution mod $latex m'$; but we are interested in solutions mod $latex m$. Just as knowing that $latex x \equiv 2 \mod 4$ means that $latex x$ might be $latex 2, 6, 10 \mod 12$ since $latex 4$ goes into $latex 12$ three times, $latex m'$ goes into $latex m$ exactly $latex d$ times, and this gives us our $latex d$ incongruent solutions. $latex \diamondsuit$
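
As a sanity check, it is easy to have a computer enumerate the incongruent solutions directly and compare the count against $latex \gcd(a,m)$. Here is a brute-force sketch:

```python
from math import gcd

def incongruent_solutions(a, b, m):
    """Return all x in {0, 1, ..., m-1} with a*x = b (mod m)."""
    return [x for x in range(m) if (a * x) % m == b % m]

print(incongruent_solutions(4, 2, 6), gcd(4, 6))   # [2, 5] 2  : two solutions, as the theorem predicts
print(incongruent_solutions(4, 3, 6), gcd(4, 6))   # [] 2      : gcd(4, 6) does not divide 3, so no solutions
```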

I mentioned a fact that we’ve now proven three times in class, in different forms: if $latex \gcd(a,b) = 1$ and $latex a \mid bc$, then we can conclude that $latex a \mid c$. Can you prove this? Can you prove it without using unique factorization? We actually used this fact to prove unique factorization (really, we use the statement about primes: if $latex p$ is a prime and $latex p \mid ab$, then we must have $latex p \mid a$ or $latex p \mid b$, or perhaps both). Do you remember how we proved that? We used the well-ordering principle to say that if there were a positive integer that couldn’t be factored uniquely, then there would be a smallest one. Choosing two of its factorizations and picking a prime from one side, we concluded that this prime divides the other side. Dividing both sides by this prime yielded a smaller number, which by minimality factors uniquely, forcing the two original factorizations to agree. This was the gist of the argument.

The last major bit of the week was the Chinese Remainder Theorem, which is awesome enough (and which I have enough to say about) that it will get its own post – which I’m working on now.

I’ll see you all in class tomorrow.


Recent developments in Twin Primes, Goldbach, and Open Access

It has been a busy two weeks all over the math community. Well, at least it seemed so to me. Some of my friends have defended their theses and need only to walk to receive their PhDs; I completed my topics examination, Brown’s take on an oral examination; and I’ve given a trio of math talks.

Meanwhile, there have been developments in a relative of the Twin Primes conjecture, the Goldbach conjecture, and Open Access math journals.

1. Twin Primes Conjecture

The Twin Primes Conjecture states that there are infinitely many primes $latex p$ such that $latex p+2$ is also a prime, and it falls under the more general Polignac’s Conjecture, which says that for any even $latex n$, there are infinitely many primes $latex p$ such that $latex p+n$ is also prime. This is another one of those problems that is easy to state but seems tremendously hard to solve. But recently, Dr. Yitang Zhang of the University of New Hampshire has submitted a paper to the Annals of Mathematics (one of the most respected and prestigious journals in the field). The paper is reputedly extremely clear (in contrast to other recent monumental papers in number theory, i.e. the phenomenally technical papers of Mochizuki on the ABC conjecture), and the word on the street is that it went through the entire review process in less than one month. At this time, there is no publicly available preprint, so I have not had a chance to look at the paper. But word is spreading that credible experts have already carefully reviewed the paper and found no serious flaws.

Dr. Zhang’s paper proves that there are infinitely many primes that have another prime at most $latex 70000000$ or so away. In particular, there is at least one number $latex k$ (with $latex k \leq 70000000$) such that there are infinitely many primes $latex p$ for which $latex p+k$ is also prime. I did not think that this was within the reach of current techniques. But it seems that Dr. Zhang built on top of the work of Goldston, Pintz, and Yildirim to get his result. Further, it seems likely that the result will be optimized and the gap brought way down from $latex 70000000$. However, as indicated by Mark Lewko on MathOverflow, this proof will probably not extend naturally to a proof of the Twin Primes conjecture itself. At best, it might prove the ‘$latex p$ and $latex p+16$’ primes conjecture (which would still be amazing).

One should look out for his paper in an upcoming issue of the Annals.

2. Goldbach Conjecture

I feel strangely tied to the Goldbach Conjecture, as I get far more traffic, emails, and spam concerning my previous post on an erroneous proof of Goldbach than on any other topic I’ve written about. About a year ago, I wrote briefly about progress that Dr. Harald Helfgott had made towards the 3-Goldbach Conjecture. This conjecture states that every odd integer greater than five can be written as the sum of three primes. (This is another easy to state problem that is not at all easy to approach).

One week ago, Helfgott posted a preprint to the arxiv that claims to complete his previous work and prove 3-Goldbach. Further, he uses the circle method and good old L-functions, so I feel like I should read over it more closely to learn a few things as it’s very close to my field. (Further still, he’s a Brandeis alum, and now that my wife will be a grad student at Brandeis I suppose I should include it in my umbrella of self-association). While I cannot say that I read the paper, understood it, and affirm its correctness, I can say that the method seems right for the task (related to the 10th and most subtle of Scott Aaronson’s list that I love to quote).

An interesting side bit to Helfgott’s proof is that it only works for numbers larger than $latex 10^{30}$ or so. Fortunately, he and David Platt have also given a computer proof, also on the arxiv, for the numbers below that bound. $latex 10^{30}$ is really, really, really big, so even that is a very slick bit of work.

3. FoM has opened

I care about open access. Fortunately, so do many of the big names. Two of the big attempts to create a good, strong set of open access math journals have just released their first articles. The Forum of Mathematics Sigma and Pi journals have each released a paper on algebraic and complex geometry. And they’re completely open! I don’t know what it takes for a journal to get off the ground, but I know that it starts with people reading its articles. So read up!

The two articles are

GENERIC VANISHING THEORY VIA MIXED HODGE MODULES

and, in Pi

$latex p$-ADIC HODGE THEORY FOR RIGID-ANALYTIC VARIETIES

Hurwitz Zeta is a sum of Dirichlet L Functions, and vice-versa

At least three times now, I have needed to use the fact that the Hurwitz zeta function is a sum of Dirichlet L-functions, and vice versa, only to have forgotten how it goes. And unfortunately, the current Wikipedia article on the Hurwitz zeta function has a mistake, omitting the $latex \varphi$ term (although it will soon be corrected). Instead of re-deriving it each time, I write the details here, below the fold.
(more…)


An Application of Mobius Inversion to Certain Asymptotics I

In this note, I consider an application of generalized Mobius Inversion to extract information about arithmetical sums with asymptotics of the form $latex \displaystyle \sum_{nk^j \leq x} f(n) = a_1x + O(x^{1 - \epsilon})$ for a fixed $latex j$ and a constant $latex a_1$, where the sum is over both $latex n$ and $latex k$. We will see that $latex \displaystyle \sum_{nk^j \leq x} f(n) = a_1x + O(x^{1-\epsilon}) \iff \sum_{n \leq x} f(n) = \frac{a_1x}{\zeta(j)} + O(x^{1 - \epsilon})$.

(more…)
