# Tag Archives: Brown University

## Math 420: Supplement on Gaussian Integers II

This is a secondary supplemental note on the Gaussian integers, written for my Spring 2016 Elementary Number Theory Class at Brown University. This note is also available as a pdf document.

In this note, we cover the following topics.

1. Assumed prerequisites from other lectures.
2. Which regular integer primes are sums of squares?
3. How can we classify all Gaussian primes?

1. Assumed Prerequisites

Although this note comes shortly after the previous note on the Gaussian integers, we covered some material from the book in between. In particular, we will assume the results from Chapters 20 and 21 of the textbook.

Most importantly, for ${p}$ a prime and ${a}$ an integer not divisible by ${p}$, recall the Legendre symbol ${\left(\frac{a}{p}\right)}$, which is defined to be ${1}$ if ${a}$ is a square mod ${p}$ and ${-1}$ if ${a}$ is not a square mod ${p}$. Then we have shown Euler’s Criterion, which states that

$$a^{\frac{p-1}{2}} \equiv \left(\frac{a}{p}\right) \pmod p, \tag{1}$$
and which gives a very efficient way of determining whether a given number ${a}$ is a square mod ${p}$.

We used Euler’s Criterion to find out exactly when ${-1}$ is a square mod ${p}$. In particular, we concluded that for each odd prime ${p}$, we have

$$\left(\frac{-1}{p}\right) = \begin{cases} 1 & \text{ if } p \equiv 1 \pmod 4 \\ -1 & \text{ if } p \equiv 3 \pmod 4 \end{cases}. \tag{2}$$
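Euler’s Criterion also makes this easy to check numerically. Here is a small Python sketch (the helper name `legendre` is my own); the three-argument form of `pow` computes the modular power directly:

```python
# legendre(a, p) computes a^((p-1)/2) mod p for an odd prime p.
# The result is 1 when a is a square mod p, and p - 1 (representing -1) otherwise.
def legendre(a, p):
    return pow(a % p, (p - 1) // 2, p)

for p in [5, 13, 17, 29]:        # primes congruent to 1 mod 4
    assert legendre(-1, p) == 1
for p in [3, 7, 11, 19, 23]:     # primes congruent to 3 mod 4
    assert legendre(-1, p) == p - 1
```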
Finally, we assume familiarity with the notation and ideas from the previous note on the Gaussian integers.

2. Understanding When ${p = a^2 + b^2}$.

Throughout this section, ${p}$ will be a normal odd prime. The case ${p = 2}$ is a bit different, and we will need to handle it separately. When used, the letters ${a}$ and ${b}$ will denote normal integers, and ${q_1,q_2}$ will denote Gaussian integers.

We will be looking at the following four statements.

1. ${p \equiv 1 \pmod 4}$
2. ${\left(\frac{-1}{p}\right) = 1}$
3. ${p}$ is not a Gaussian prime
4. ${p = a^2 + b^2}$

Our goal will be to show that these four statements are all equivalent. In order to show this, we will show that

$$(1) \implies (2) \implies (3) \implies (4) \implies (1). \tag{3}$$
Do you see why this means that they are all equivalent?

This naturally breaks down into four lemmas.

We have actually already shown one.

Lemma 1 ${(1) \implies (2)}$.

Proof: We have already proved this claim! This is exactly what we get from Euler’s Criterion applied to ${-1}$, as mentioned in the first section. $\Box$

There is one more that is somewhat straightforward, and which does not rely on going up to the Gaussian integers.

Lemma 2 ${(4) \implies (1)}$.

Proof: We have an odd prime ${p}$ which is a sum of squares ${p = a^2 + b^2}$. If we look mod ${4}$, we are led to consider $$p = a^2 + b^2 \pmod 4. \tag{4}$$
What are the possible values of ${a^2 \pmod 4}$? A quick check shows that the only possibilities are ${a^2 \equiv 0, 1 \pmod 4}$.

So what are the possible values of ${a^2 + b^2 \pmod 4}$? We must have one of ${p \equiv 0, 1, 2 \pmod 4}$. Clearly, we cannot have ${p \equiv 0 \pmod 4}$, as then ${4 \mid p}$. Similarly, we cannot have ${p \equiv 2 \pmod 4}$, as then ${2 \mid p}$. So we necessarily have ${p \equiv 1 \pmod 4}$, which is what we were trying to prove. $\Box$
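The casework in this proof is small enough to verify exhaustively. A quick Python check:

```python
# Possible values of a^2 mod 4: only 0 and 1 occur.
squares = {a * a % 4 for a in range(4)}
assert squares == {0, 1}

# Hence a^2 + b^2 mod 4 can only be 0, 1, or 2; it is never 3.
sums = {(a * a + b * b) % 4 for a in range(4) for b in range(4)}
assert sums == {0, 1, 2}
```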

For the remaining two pieces, we will dive into the Gaussian integers.

Lemma 3 ${(2) \implies (3)}$.

Proof: As ${\left(\frac{-1}{p}\right) = 1}$, we know there is some ${a}$ so that ${a^2 \equiv -1 \pmod p}$. Rearranging, this becomes ${a^2 + 1 \equiv 0 \pmod p}$.

Over the normal integers, we are at an impasse, as all this tells us is that ${p \mid (a^2 + 1)}$. But if we suddenly view this within the Gaussian integers, then ${a^2 + 1}$ factors as ${a^2 + 1 = (a + i)(a - i)}$.

So we have that ${p \mid (a+i)(a-i)}$. If ${p}$ were a Gaussian prime, then we would necessarily have ${p \mid (a+i)}$ or ${p \mid (a-i)}$. (Do you see why?)

But is it true that ${p}$ divides ${a + i}$ or ${a - i}$? For instance, does ${p}$ divide ${a + i}$? No! If so, then ${\frac{a}{p} + \frac{i}{p}}$ would be a Gaussian integer, which is clearly not true.

So ${p}$ does not divide ${a + i}$ or ${a-i}$, and we must therefore conclude that ${p}$ is not a Gaussian prime. $\Box$

Lemma 4 ${(3) \implies (4)}$.

Proof: We now know that ${p}$ is not a Gaussian prime. In particular, this means that ${p}$ is not irreducible, and so it has a nontrivial factorization in the Gaussian integers. (For example, ${5}$ is a regular prime, but it is not a Gaussian prime. It factors as ${5 = (1 + 2i)(1 - 2i)}$ in the Gaussian integers.)

Let’s denote this nontrivial factorization as ${p = q_1 q_2}$. By nontrivial, we mean that neither ${q_1}$ nor ${q_2}$ are units, i.e. ${N(q_1), N(q_2) > 1}$. Taking norms, we see that ${N(p) = N(q_1) N(q_2)}$.

We can evaluate ${N(p) = p^2}$, so we have that ${p^2 = N(q_1) N(q_2)}$. Both ${N(q_1)}$ and ${N(q_2)}$ are integers, and their product is ${p^2}$. Yet ${p^2}$ has exactly two different factorizations: ${p^2 = 1 \cdot p^2 = p \cdot p}$. Since ${N(q_1), N(q_2) > 1}$, we must have the latter.

So we see that ${N(q_1) = N(q_2) = p}$. As ${q_1, q_2}$ are Gaussian integers, we can write ${q_1 = a + bi}$ for some ${a, b}$. Then since ${N(q_1) = p}$, we see that ${N(q_1) = a^2 + b^2}$. And so ${p}$ is a sum of squares, ending the proof. $\Box$

Notice that ${2 = 1^2 + 1^2}$ is also a sum of squares. All together, we can state the following theorem.

Theorem 5 A regular prime ${p}$ can be written as a sum of two squares, $$p = a^2 + b^2, \tag{5}$$
exactly when ${p = 2}$ or ${p \equiv 1 \pmod 4}$.

A remarkable aspect of this theorem is that it is entirely a statement about the behaviour of the regular integers. Yet in our proof, we used the Gaussian integers in a very fundamental way. Isn’t that strange?

You might notice that in the textbook, Dr. Silverman presents a proof that does not rely on the Gaussian integers. While interesting and clever, I find that the proof using the Gaussian integers better illustrates the deep connections between and around the structures we have been studying in this course so far. Everything connects!

Example 1 The prime ${5}$ is ${1 \pmod 4}$, and so ${5}$ is a sum of squares. In particular, ${5 = 1^2 + 2^2}$.

Example 2 The prime ${101}$ is ${1 \pmod 4}$, and so is a sum of squares. Our proof is not constructive, so a priori we do not know what squares sum to ${101}$. But in this case, we see that ${101 = 1^2 + 10^2}$.

Example 3 The prime ${97}$ is ${1 \pmod 4}$, and so it is also a sum of squares. It’s less obvious what the squares are in this case. It turns out that ${97 = 4^2 + 9^2}$.

Example 4 The prime ${43}$ is ${3 \pmod 4}$, and so is not a sum of squares.
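These representations are easy to find by brute force. Here is a Python sketch (the helper name `two_squares` is my own) that reproduces the examples above:

```python
from math import isqrt

# Search for a representation p = a^2 + b^2 with 0 <= a <= b.
# Returns the pair (a, b), or None if no representation exists.
def two_squares(p):
    for a in range(isqrt(p) + 1):
        b2 = p - a * a
        b = isqrt(b2)
        if b * b == b2:
            return a, b
    return None

assert two_squares(5) == (1, 2)
assert two_squares(101) == (1, 10)
assert two_squares(97) == (4, 9)
assert two_squares(43) is None   # 43 is 3 mod 4, so no representation
```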

3. Classification of Gaussian Primes

In the previous section, we showed that each integer prime ${p \equiv 1 \pmod 4}$ actually splits into a product of two Gaussian integers ${q_1}$ and ${q_2}$. In fact, since ${N(q_1) = p}$ is a regular prime, ${q_1}$ is a Gaussian irreducible and therefore a Gaussian prime (can you prove this? This is a nice midterm question).

So in fact, ${p \equiv 1 \pmod 4}$ splits into the product of two Gaussian primes ${q_1}$ and ${q_2}$.

In this way, we’ve found infinitely many Gaussian primes. Take a regular prime congruent to ${1 \pmod 4}$. Then we know that it splits into two Gaussian primes. Further, if we know how to write ${p = a^2 + b^2}$, then we know that ${q_1 = a + bi}$ and ${q_2 = a - bi}$ are those two Gaussian primes.

In general, we will find all Gaussian primes by determining their interaction with regular primes.

Suppose ${q}$ is a Gaussian prime. Then on the one hand, ${N(q) = q \overline{q}}$. On the other hand, ${N(q) = p_1^{a_1} p_2^{a_2} \cdots p_k^{a_k}}$ is some regular integer. Since ${q}$ is a Gaussian prime (and so ${q \mid w_1 w_2}$ means that ${q \mid w_1}$ or ${q \mid w_2}$), we know that ${q \mid p_j}$ for some regular integer prime ${p_j}$.

So one way to classify Gaussian primes is to look at every regular integer prime and see which Gaussian primes divide it. We have figured this out for all primes ${p \equiv 1 \pmod 4}$. We can handle ${2}$ by noticing that ${2 = (1 + i) (1-i)}$. Both ${(1+i)}$ and ${(1-i)}$ are Gaussian primes.

The only primes left are those regular primes with ${p \equiv 3 \pmod 4}$. We actually already covered the key idea in the previous section.

Lemma 6 If ${p \equiv 3 \pmod 4}$ is a regular prime, then ${p}$ is also a Gaussian prime.

Proof: In the previous section, we showed that if ${p}$ is not a Gaussian prime, then ${p = a^2 + b^2}$ for some integers ${a,b}$, and then ${ p \equiv 1 \pmod 4}$. Since ${p \not \equiv 1 \pmod 4}$, we see that ${p}$ is a Gaussian prime. $\Box$

In total, we have classified all Gaussian primes.

Theorem 7 The Gaussian primes are given by

1. ${(1+i), (1-i)}$
2. Regular primes ${p \equiv 3 \pmod 4}$
3. The factors ${q_1, q_2}$ of a regular prime ${p \equiv 1 \pmod 4}$. Further, these primes are given by ${a \pm bi}$, where ${p = a^2 + b^2}$.
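Theorem 7 translates into a simple test for small Gaussian primes. In the sketch below (the helper names are mine, and primality of regular integers is checked by naive trial division), the key observation is that a Gaussian integer with both parts nonzero is prime exactly when its norm is a regular prime, while a Gaussian integer lying on an axis is prime exactly when it is a unit times a regular prime congruent to ${3 \pmod 4}$:

```python
from math import isqrt

# Trial-division primality test for regular integers (fine for small n).
def is_regular_prime(n):
    return n > 1 and all(n % d for d in range(2, isqrt(n) + 1))

# Is a + bi a Gaussian prime, following the classification?
def is_gaussian_prime(a, b):
    if b == 0:
        return is_regular_prime(abs(a)) and abs(a) % 4 == 3
    if a == 0:
        return is_regular_prime(abs(b)) and abs(b) % 4 == 3
    # Both parts nonzero: prime exactly when the norm is a regular prime.
    return is_regular_prime(a * a + b * b)

assert is_gaussian_prime(1, 1)        # 1 + i, of norm 2
assert is_gaussian_prime(3, 0)        # 3 is a regular prime, 3 mod 4
assert not is_gaussian_prime(5, 0)    # 5 = (1 + 2i)(1 - 2i) splits
assert is_gaussian_prime(1, 2)        # a factor of 5, of norm 5
```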

4. Concluding Remarks

I hope that it’s clear that the regular integers and the Gaussian integers are deeply connected and intertwined. Number theoretic questions in one constantly lead us to investigate the other. As one dives deeper into number theory, more and different integer-like rings appear, all deeply connected.

Each time I teach the Gaussian integers, I cannot help but feel the sense that this is a hint at a deep structural understanding of what is really going on. The interplay between the Gaussian integers and the regular integers is one of my favorite aspects of elementary number theory, which is one reason why I deviated so strongly from the textbook to include it. I hope you enjoyed it too.

## Math 420: Supplement on Gaussian Integers

This is a brief supplemental note on the Gaussian integers, written for my Spring 2016 Elementary Number Theory Class at Brown University. With respect to the book, the nearest material is the material in Chapters 35 and 36, but we take a very different approach.

A pdf of this note can be found here. I’m sure there are typos, so feel free to ask me or correct me if you think something is amiss.

In this note, we cover the following topics.

1. What are the Gaussian integers?
2. Unique factorization within the Gaussian integers.
3. An application of the Gaussian integers to the Diophantine equation ${y^2 = x^3 – 1}$.
4. Other integer-like sets: general rings.
5. Specific examples within ${\mathbb{Z}[\sqrt{2}]}$ and ${\mathbb{Z}[\sqrt{-5}]}$.

1. What are the Gaussian Integers?

The Gaussian Integers are the set of numbers of the form ${a + bi}$, where ${a}$ and ${b}$ are normal integers and ${i}$ is a number satisfying ${i^2 = -1}$. As a collection, the Gaussian Integers are represented by the symbol ${\mathbb{Z}[i]}$, or sometimes ${\mathbb{Z}[\sqrt{-1}]}$. These might be pronounced either as “the Gaussian integers” or as “Z adjoin i.”

In many ways, the Gaussian integers behave very much like the regular integers. We’ve been studying the qualities of the integers, but we should ask — which properties are really properties of the integers, and which properties hold in greater generality? Is it the integers themselves that are special, or is there something bigger and deeper going on?

These are the main questions that we ask and make some progress towards in these notes. But first, we need to describe some properties of Gaussian integers.

We will usually use the symbols ${z = a + bi}$ to represent our typical Gaussian integer. One adds and multiplies two Gaussian integers just as you would add and multiply two complex numbers. Informally, you treat ${i}$ like a polynomial indeterminate ${X}$, except that it satisfies the relation ${X^2 = -1}$.

Definition 1 For each complex number ${z = a + bi}$, we define the conjugate of ${z}$, written as ${\overline{z}}$, by

$$\overline{z} = a - bi.$$

We also define the norm of ${z}$, written as ${N(z)}$, by

$$N(z) = a^2 + b^2.$$

You can check that ${N(z) = z \overline{z}}$ (and in fact this is one of your assigned problems). You can also check that ${N(zw) = N(z)N(w)}$, or rather that the norm is multiplicative (this is also one of your assigned problems).

Even from our notation, it’s intuitive that ${z = a + bi}$ has two parts, the part corresponding to ${a}$ and the part corresponding to ${b}$. We call ${a}$ the real part of ${z}$, written as ${\Re z = a}$, and we call ${b}$ the imaginary part of ${z}$, written as ${\Im z = b}$. I should add that the name “imaginary number” is a poor name that reflects historical reluctance to view complex numbers as acceptable. For that matter, the name “complex number” is also a poor name.

As a brief example, consider the Gaussian integer ${z = 2 + 5i}$. Then ${N(z) = 4 + 25 = 29}$, ${\Re z = 2}$, ${\Im z = 5}$, and ${\overline{z} = 2 - 5i}$.
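Python’s built-in complex type can replicate this example directly (computing the norm by hand, since `abs` returns the square root of the norm):

```python
z = 2 + 5j

# The norm N(z) = a^2 + b^2.
N = int(z.real) ** 2 + int(z.imag) ** 2
assert N == 29

assert z.real == 2 and z.imag == 5      # real and imaginary parts
assert z.conjugate() == 2 - 5j          # the conjugate
assert z * z.conjugate() == N           # N(z) = z * conj(z)
```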

We can ask similar questions to those we asked about the regular integers. What does it mean for ${z \mid w}$ in the complex case?

Definition 2 We say that a Gaussian integer ${z}$ divides another Gaussian integer ${w}$ if there is some Gaussian integer ${k}$ so that ${zk = w}$. In this case, we write ${z \mid w}$, just as we write for regular integers.

For the integers, we immediately began to study the properties of the primes, which in many ways were the building blocks of the integers. Recall that for the regular integers, we said ${p}$ was a prime if its only divisors were ${\pm 1}$ and ${\pm p}$. In the Gaussian integers, the four numbers ${\pm 1, \pm i}$ play the same role as ${\pm 1}$ in the usual integers. These four numbers are distinguished as being the only four Gaussian integers with norm equal to ${1}$.

That is, the only solutions to ${N(z) = 1}$ where ${z}$ is a Gaussian integer are ${z = \pm 1, \pm i}$. We call these four numbers the Gaussian units.

With this in mind, we are ready to define the notion of a prime for the Gaussian integers.

Definition 3 We say that a Gaussian integer ${z}$ with ${N(z) > 1}$ is a Gaussian prime if the only divisors of ${z}$ are ${u}$ and ${uz}$, where ${u = \pm 1, \pm i}$ is a Gaussian unit.

Remark 1 When we look at other integer-like sets, we will actually use a different definition of a prime.

It’s natural to ask whether the normal primes in ${\mathbb{Z}}$ are also primes in ${\mathbb{Z}[i]}$. And the answer is no. For instance, ${5}$ is a prime in ${\mathbb{Z}}$, but

$$5 = (1 + 2i)(1 - 2i)$$

in the Gaussian integers. However, the two Gaussian integers ${1 + 2i}$ and ${1 - 2i}$ are prime. It also happens to be that ${3}$ is a Gaussian prime. We will continue to investigate which numbers are Gaussian primes over the next few lectures.

With a concept of a prime, it’s also natural to ask whether the primes form the building blocks for the Gaussian integers like they do for the regular integers. We take this up in our next topic.

2. Unique Factorization in the Gaussian Integers

Let us review the steps that we followed to prove unique factorization for ${\mathbb{Z}}$.

1. We proved that for ${a,b}$ in ${\mathbb{Z}}$ with ${b \neq 0}$, there exist unique ${q}$ and ${r}$ such that ${a = bq + r}$ with ${0 \leq r < \lvert b \rvert}$. This is called the Division Algorithm.
2. By repeatedly applying the Division Algorithm, we proved the Euclidean Algorithm. In particular, we showed that the last nonzero remainder was the GCD of our initial numbers.
3. By performing reverse substitution on the steps of the Euclidean Algorithm, we showed that there are integer solutions in ${x,y}$ to the Diophantine equation ${ax + by = \gcd(a,b)}$. This is often called Bezout’s Theorem or Bezout’s Lemma, although we never called it by that name in class.
4. With Bezout’s Theorem, we showed that if a prime ${p}$ divides ${ab}$, then ${p \mid a}$ or ${p \mid b}$. This is the crucial step towards proving Unique Factorization.
5. We then proved Unique Factorization.

Each step of this process can be repeated for the Gaussian integers, with a few notable differences. Remarkably, once we have the division algorithm, each proof is almost identical for ${\mathbb{Z}[i]}$ as it is for ${\mathbb{Z}}$. So we will prove the division algorithm, and then give sketches of the remaining ideas, highlighting the differences that come up along the way.

In the division algorithm, we require the remainder ${r}$ to “be less than what we are dividing by.” A big problem in translating this to the Gaussian integers is that the Gaussian integers are not ordered. That is, we don’t have a concept of being greater than or less than for ${\mathbb{Z}[i]}$.

When this sort of problem emerges, we will get around this by taking norms. Since the norm of a Gaussian integer is a typical integer, we will be able to use the ordering of the integers to order our norms.

Theorem 4 For ${z,w}$ in ${\mathbb{Z}[i]}$ with ${w \neq 0}$, there exist ${q}$ and ${r}$ in ${\mathbb{Z}[i]}$ such that ${z = qw + r}$ with ${N(r) < N(w)}$.

Proof: Here, we will cheat a little bit and use properties about general complex numbers and the rationals to perform this proof. One can give an entirely intrinsic proof, but I like the approach I give as it also informs how to actually compute the ${q}$ and ${r}$.

The entire proof boils down to the idea of writing ${z/w}$ as a fraction and approximating the real and imaginary parts by the nearest integers.

Let us now transcribe that idea. We will need to introduce some additional symbols. Let ${z = a_1 + b_1 i}$ and ${w = a_2 + b_2 i}$.

Then
\begin{align}
\frac{z}{w} &= \frac{a_1 + b_1 i}{a_2 + b_2 i} = \frac{a_1 + b_1 i}{a_2 + b_2 i} \frac{a_2 - b_2 i}{a_2 - b_2 i} \\
&= \frac{a_1a_2 + b_1 b_2}{a_2^2 + b_2^2} + i \frac{b_1 a_2 - a_1 b_2}{a_2^2 + b_2 ^2} \\
&= u + iv.
\end{align}
By rationalizing the denominator by multiplying by ${\overline{w}/ \overline{w}}$, we are able to separate out the real and imaginary parts. In this final expression, we have named ${u}$ to be the real part and ${v}$ to be the imaginary part. Notice that ${u}$ and ${v}$ are normal rational numbers.

We know that for any rational number ${u}$, there is an integer ${u’}$ such that ${\lvert u - u’ \rvert \leq \frac{1}{2}}$. Choose integers ${u’}$ and ${v’}$ within ${1/2}$ of ${u}$ and ${v}$, respectively.

Then we claim that we can choose ${q = u’ + i v’}$ to be the ${q}$ in the theorem statement, and let ${r}$ be the resulting remainder, ${r = z - qw}$. We need to check that ${N(r) < N(w)}$. We will check that explicitly.

We compute
\begin{align}
N(r) &= N(z - qw) = N\left(w \left(\frac{z}{w} - q\right)\right) = N(w) N\left(\frac{z}{w} - q\right).
\end{align}
Note that we have used that ${N(ab) = N(a)N(b)}$. In this final expression, we have already come across ${\frac{z}{w}}$ before — it’s exactly what we called ${u + iv}$. And we called ${q = u’ + i v’}$. So our final expression is the same as

$$N(r) = N(w) N(u + iv - u’ - i v’) = N(w) N\left( (u - u’) + i (v - v’)\right).$$

How large can the real and imaginary parts of ${(u-u’) + i (v - v’)}$ be? By our choice of ${u’}$ and ${v’}$, they can be at most ${1/2}$ in absolute value.

So we have that

$$N(r) \leq N(w) \left( \left(\tfrac{1}{2}\right)^2 + \left(\tfrac{1}{2}\right)^2\right) = \frac{1}{2} N(w).$$

And so in particular, we have that ${N(r) < N(w)}$ as we needed. $\Box$
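The proof above is constructive, and translates almost directly into code. Here is a sketch (the function names are my own; Gaussian integers are modeled as Python `complex` values with integer parts, so this is only reliable for moderately sized inputs where floating-point division is accurate enough):

```python
# N(z) = a^2 + b^2 for z = a + bi with integer parts.
def norm(z):
    return round(z.real) ** 2 + round(z.imag) ** 2

# Divide in the complex numbers, then round each part
# to the nearest integer, exactly as in the proof.
def gauss_divmod(z, w):
    t = z / w
    q = complex(round(t.real), round(t.imag))
    r = z - q * w
    return q, r

q, r = gauss_divmod(3 + 5j, 1 + 2j)
assert q * (1 + 2j) + r == 3 + 5j    # z = qw + r
assert norm(r) < norm(1 + 2j)        # N(r) < N(w)
```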

Note that in this proof, we did not actually show that ${q}$ or ${r}$ are unique. In fact, unlike the case in the regular integers, it is not true that ${q}$ and ${r}$ are unique.

Example 1 Consider ${3+5i, 1 + 2i}$. Then we compute

$$\frac{3+5i}{1+2i} = \frac{3+5i}{1+2i}\frac{1-2i}{1-2i} = \frac{13}{5} + i \frac{-1}{5}.$$

The closest integer to ${13/5}$ is ${3}$, and the closest integer to ${-1/5}$ is ${0}$. So we take ${q = 3}$. Then ${r = (3+5i) - (1+2i)3 = -i}$, and we see in total that

$$3+5i = (1+2i) 3 - i.$$

Note that ${N(-i) = 1}$ and ${N(1 + 2i) = 5}$, so this choice of ${q}$ and ${r}$ works.

As ${13/5}$ is sort of close to ${2}$, what if we chose ${q = 2}$ instead? Then ${r = (3 + 5i) - (1 + 2i)2 = 1 + i}$, leading to the overall expression

$$3+5i = (1 + 2i) 2 + (1 + i).$$

Note that ${N(1+i) = 2 < N(1+2i) = 5}$, so that this choice of ${q}$ and ${r}$ also works.

This is an example of how the choice of ${q}$ and ${r}$ is not well-defined for the Gaussian integers. In fact, even if one decides to choose ${q}$ so that ${N(r)}$ is minimal, the resulting choices are still not necessarily unique.

This may come as a surprise. The letters ${q}$ and ${r}$ come from our tendency to call those numbers the quotient and remainder after division. We have shown that the quotient and remainder are not well-defined, so it does not make sense to talk about “the remainder” or “the quotient.” This is a bit strange!

Are we able to prove unique factorization when the process of division itself seems to lead to ambiguities? Let us proceed forwards and try to see.

Our next goal is to prove the Euclidean Algorithm. By this, we mean that by repeatedly performing the division algorithm starting with two Gaussian integers ${z}$ and ${w}$, we hope to get a sequence of remainders with the last nonzero remainder giving a greatest common divisor of ${z}$ and ${w}$.

Before we can do that, we need to ask a much more basic question. What do we mean by a greatest common divisor? In particular, the Gaussian integers are not ordered, so it does not make sense to say whether one Gaussian integer is bigger than another.

For instance, is it true that ${i > 1}$? If so, then certainly ${i}$ is positive. We know that multiplying both sides of an inequality by a positive number doesn’t change that inequality. So multiplying ${i > 1}$ by ${i}$ leads to ${-1 > i}$, which is absurd if ${i}$ was supposed to be positive!

To remedy this problem, we will choose a common divisor of ${z}$ and ${w}$ with the greatest norm (which makes sense, as the norm is a regular integer and thus is well-ordered). But the problem here, just as with the division algorithm, is that there may be multiple such numbers. So we cannot talk about “the greatest common divisor” and instead talk about “a greatest common divisor.” To paraphrase Lewis Carroll’s Alice (Carroll was also a mathematician, and hid some nice mathematics inside some of his works), things are getting curiouser and curiouser!

Definition 5 For nonzero ${z,w}$ in ${\mathbb{Z}[i]}$, a greatest common divisor of ${z}$ and ${w}$, denoted by ${\gcd(z,w)}$, is a common divisor with largest norm. That is, if ${c}$ is another common divisor of ${z}$ and ${w}$, then ${N(c) \leq N(\gcd(z,w))}$.

If ${N(\gcd(z,w)) = 1}$, then we say that ${z}$ and ${w}$ are relatively prime. Said differently, if ${1}$ is a greatest common divisor of ${z}$ and ${w}$, then we say that ${z}$ and ${w}$ are relatively prime.

Remark 2 Note that ${\gcd(z,w)}$ as we’re writing it is not actually well-defined, and may stand for any greatest common divisor of ${z}$ and ${w}$.

With this definition in mind, the proof of the Euclidean Algorithm is almost identical to the proof of the Euclidean Algorithm for the regular integers. As with the regular integers, we need the following result, which we will use over and over again.

Lemma 6 Suppose that ${z \mid w_1}$ and ${z \mid w_2}$. Then for any ${x,y}$ in ${\mathbb{Z}[i]}$, we have that ${z \mid (x w_1 + y w_2)}$.

Proof: As ${z \mid w_1}$, there is some Gaussian integer ${k_1}$ such that ${z k_1 = w_1}$. Similarly, there is some Gaussian integer ${k_2}$ such that ${z k_2 = w_2}$.

Then ${xw_1 + yw_2 = zxk_1 + zyk_2 = z(xk_1 + yk_2)}$, which is divisible by ${z}$ as this is the definition of divisibility. $\Box$

Notice that this proof is identical to the analogous statement in the integers, except with differently chosen symbols. That is how the proof of the Euclidean Algorithm goes as well.

Theorem 7 Let ${z,w}$ be nonzero Gaussian integers. Recursively apply the division algorithm, starting with the pair ${z, w}$ and using the divisor and remainder from each equation as the pair for the next. The last nonzero remainder is divisible by all common divisors of ${z,w}$, is itself a common divisor, and so the last nonzero remainder is a greatest common divisor of ${z}$ and ${w}$.

Symbolically, this looks like
\begin{align}
z &= q_1 w + r_1, \quad N(r_1) < N(w) \\\\
w &= q_2 r_1 + r_2, \quad N(r_2) < N(r_1) \\\\
r_1 &= q_3 r_2 + r_3, \quad N(r_3) < N(r_2) \\\\
\cdots &= \cdots \\\\
r_k &= q_{k+2} r_{k+1} + r_{k+2}, \quad N(r_{k+2}) < N(r_{k+1}) \\\\
r_{k+1} &= q_{k+3} r_{k+2} + 0,
\end{align}
where ${r_{k+2}}$ is the last nonzero remainder, which we claim is a greatest common divisor of ${z}$ and ${w}$.

Proof: We are claiming several things. Firstly, we should prove our implicit claim that this algorithm terminates at all. Is it obvious that we should eventually reach a zero remainder?

In order to see this, we look at the norms of the remainders. After each step in the algorithm, the norm of the remainder is smaller than the previous step. As the norms are always nonnegative integers, and we know there does not exist an infinite list of decreasing positive integers, we see that the list of nonzero remainders is finite. So the algorithm terminates.

We now want to prove that the last nonzero remainder is a common divisor and is in fact a greatest common divisor. The proof is actually identical to the proof in the integer case, merely with a different choice of symbols.

Here, we only sketch the argument. Then the rest of the argument can be found by comparing with the proof of the Euclidean Algorithm for ${\mathbb{Z}}$ as found in the course textbook.

For ease of exposition, suppose that the algorithm terminated in exactly 3 steps, so that we have
\begin{align}
z &= q_1 w + r_1, \\
w &= q_2 r_1 + r_2 \\
r_1 &= q_3 r_2 + 0.
\end{align}

On the one hand, suppose that ${d}$ is a common divisor of ${z}$ and ${w}$. Then by our previous lemma, ${d \mid z – q_1 w = r_1}$, so that we see that ${d}$ is a divisor of ${r_1}$ as well. Applying to the next line, we have that ${d \mid w}$ and ${d \mid r_1}$, so that ${d \mid w – q_2 r_1 = r_2}$. So every common divisor of ${z}$ and ${w}$ is a divisor of the last nonzero remainder ${r_2}$.

On the other hand, ${r_2 \mid r_1}$ by the last line of the algorithm. Then as ${r_2 \mid r_1}$ and ${r_2 \mid r_2}$, we know that ${r_2 \mid q_2 r_1 + r_2 = w}$. Applying this to the first line, as ${r_2 \mid r_1}$ and ${r_2 \mid w}$, we know that ${r_2 \mid q_1 w + r_1 = z}$. So ${r_2}$ is a common divisor.

We have shown that ${r_2}$ is a common divisor of ${z}$ and ${w}$, and that every common divisor of ${z}$ and ${w}$ divides ${r_2}$. How do we show that ${r_2}$ is a greatest common divisor?

Suppose that ${d}$ is a common divisor of ${z}$ and ${w}$, so that we know that ${d \mid r_2}$. In particular, this means that there is some nonzero ${k}$ so that ${dk = r_2}$. Taking norms, this means that ${N(dk) = N(d)N(k) = N(r_2)}$. As ${N(d)}$ and ${N(k)}$ are both at least ${1}$, this means that ${N(d) \leq N(r_2)}$.

This is true for every common divisor ${d}$, and so ${N(r_2)}$ is at least as large as the norm of any common divisor of ${z}$ and ${w}$. Thus ${r_2}$ is a greatest common divisor.

The argument carries on in the same way for when there are more steps in the algorithm. $\Box$

Theorem 8 The greatest common divisor of ${z}$ and ${w}$ is well-defined, up to multiplication by ${\pm 1, \pm i}$. In other words, if ${\gcd(z,w)}$ is a greatest common divisor of ${z}$ and ${w}$, then all greatest common divisors of ${z}$ and ${w}$ are given by ${\pm \gcd(z,w), \pm i \gcd(z,w)}$.

Proof: Suppose ${d}$ is a greatest common divisor, and let ${\gcd(z,w)}$ denote a greatest common divisor resulting from an application of the Euclidean Algorithm. Then we know that ${d \mid \gcd(z,w)}$, so that there is some ${k}$ so that ${dk = \gcd(z,w)}$. Taking norms, we see that ${N(d)N(k) = N(\gcd(z,w))}$.

But as both ${d}$ and ${\gcd(z,w)}$ are greatest common divisors, we must have that ${N(d) = N(\gcd(z,w))}$. So ${N(k) = 1}$. The only Gaussian integers with norm one are ${\pm 1, \pm i}$, so we have that ${du = \gcd(z,w)}$ where ${u}$ is one of the four Gaussian units, ${\pm 1, \pm i}$.

Conversely, it’s clear that the four numbers ${\pm \gcd(z,w), \pm i \gcd(z,w)}$ are all greatest common divisors. $\Box$
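With nearest-integer division in hand, the Euclidean Algorithm is a short loop. A sketch under the same caveats as before (Gaussian integers as Python `complex` values with integer parts; the function name is mine); note that it returns *a* greatest common divisor, determined only up to a unit:

```python
# Euclidean Algorithm in Z[i]: repeatedly replace (z, w) with
# (w, remainder) using nearest-integer division, until w = 0.
def gauss_gcd(z, w):
    while w != 0:
        t = z / w
        q = complex(round(t.real), round(t.imag))
        z, w = w, z - q * w
    return z

# 32 + 9i and 4 + 11i are relatively prime, so the result is a unit.
g = gauss_gcd(32 + 9j, 4 + 11j)
assert round(abs(g) ** 2) == 1
```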

Now that we have the Euclidean Algorithm, we can go towards unique factorization in ${\mathbb{Z}[i]}$. Let ${g}$ denote a greatest common divisor of ${z}$ and ${w}$. Reverse substitution in the Euclidean Algorithm shows that we can find Gaussian integer solutions ${x,y}$ to the (complex) linear Diophantine equation

$$zx + wy = g.$$

Let’s see an example.

Example 2 Consider ${32 + 9i}$ and ${4 + 11i}$. The Euclidean Algorithm looks like
\begin{align}
32 + 9i &= (4 + 11i)(2 - 2i) + 2 - 5i, \\\\
4 + 11i &= (2 - 5i)(-2 + i) + 3 - i, \\\\
2 - 5i &= (3-i)(1-i) - i, \\\\
3 - i &= -i (1 + 3i) + 0.
\end{align}
So we know that ${-i}$ is a greatest common divisor of ${32 + 9i}$ and ${4 + 11i}$, and so we know that ${32+9i}$ and ${4 + 11i}$ are relatively prime. Let us try to find a solution to the Diophantine equation

$$x(32 + 9i) + y(4 + 11i) = 1.$$

Performing reverse substitution, we see that
\begin{align}
-i &= (2 - 5i) - (3-i)(1-i) \\\\
&= (2 - 5i) - (4 + 11i - (2-5i)(-2 + i))(1-i) \\\\
&= (2 - 5i) - (4 + 11i)(1 - i) + (2 - 5i)(-2 + i)(1 - i) \\\\
&= (2 - 5i)(3i) - (4 + 11i)(1 - i) \\\\
&= (32 + 9i - (4 + 11i)(2 - 2i))(3i) - (4 + 11i)(1 - i) \\\\
&= (32 + 9i) 3i - (4 + 11i)(2 - 2i)(3i) - (4 + 11i)(1-i) \\\\
&= (32 + 9i) 3i - (4 + 11i)(7 + 5i).
\end{align}
Multiplying this through by ${i}$, we have that

$$1 = (32 + 9i) (-3) + (4 + 11i)(5 - 7i).$$

So one solution is ${(x,y) = (-3, 5 – 7i)}$.
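It’s worth sanity-checking the arithmetic, and Python’s complex numbers make this a one-liner:

```python
# Verify x(32 + 9i) + y(4 + 11i) = 1 for x = -3, y = 5 - 7i.
x, y = -3, 5 - 7j
assert x * (32 + 9j) + y * (4 + 11j) == 1
```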

Although this looks more complicated, the process is the same as in the case over the regular integers. The apparent higher difficulty comes mostly from our lack of familiarity with basic arithmetic in ${\mathbb{Z}[i]}$.

The rest of the argument is now exactly as in the integers.

Theorem 9 Suppose that ${z, w}$ are relatively prime, and that ${z \mid wv}$. Then ${z \mid v}$.

Proof: This is left as an exercise (and will appear on the next midterm in some form — cheers to you if you’ve read this far in these notes). But it’s now almost the same as in the regular integers. $\Box$

Theorem 10 Let ${z}$ be a Gaussian integer with ${N(z) > 1}$. Then ${z}$ can be written uniquely as a product of Gaussian primes, up to multiplication by one of the Gaussian units ${\pm 1, \pm i}$.

Proof: We only sketch part of the proof. There are multiple ways of doing this, but we present the one most similar to what we’ve done for the integers. If there are Gaussian integers without unique factorization, then there are some (maybe they tie) with minimal norm. So let ${z}$ be a Gaussian integer of minimal norm without unique factorization. Then we can write

$$p_1 p_2 \cdots p_k = z = q_1 q_2 \cdots q_\ell,$$

where the ${p}$ and ${q}$ are all primes. As ${p_1 \mid z = q_1 q_2 \cdots q_\ell}$, we know that ${p_1}$ divides one of the ${q}$ (by Theorem~9), and so (up to units) we can say that ${p_1}$ is one of the ${q}$ primes. We can divide each side by ${p_1}$ and we get two supposedly different factorizations of a Gaussian integer of norm ${N(z)/N(p_1) < N(z)}$, which is less than the least norm of an integer without unique factorization (by what we supposed). This is a contradiction, and we can conclude that there are no Gaussian integers without unique factorization. $\Box$

If this seems unclear, I recommend reviewing this proof and the proof of unique factorization for the regular integers. I should also mention that one can modify the proof of unique factorization for ${\mathbb{Z}}$ as given in the course textbook as well (since it is a bit different than what we have done). Further, the course textbook gives a proof of unique factorization for ${\mathbb{Z}[i]}$ in Chapter 36, which is very similar to the proof sketched above (although the proof of Theorem~9 is very different).

3. An application to ${y^2 = x^3 - 1}$.

We now consider the nonlinear Diophantine equation ${y^2 = x^3 - 1}$, where ${x,y}$ are in ${\mathbb{Z}}$. This is hard to solve over the integers, but by going up to ${\mathbb{Z}[i]}$, we can determine all solutions.

In ${\mathbb{Z}[i]}$, we can rewrite this as $$y^2 + 1 = (y + i)(y - i) = x^3. \tag{1}$$
We claim that ${y+i}$ and ${y-i}$ are relatively prime. To see this, suppose that ${d}$ is a common divisor of ${y+i}$ and ${y-i}$. Then ${d \mid (y + i) - (y - i) = 2i}$. It happens that ${2i = (1 + i)^2}$, and that ${(1 + i)}$ is prime. To see this, we show the following.

Lemma 11 Suppose ${z}$ is a Gaussian integer, and ${N(z) = p}$ is a regular prime. Then ${z}$ is a Gaussian prime.

Proof: Suppose that ${z}$ factors nontrivially as ${z = ab}$. Then taking norms, ${N(z) = N(a)N(b)}$, and so we get a nontrivial factorization of ${N(z)}$. But when ${N(z)}$ is a prime, there are no nontrivial factorizations of ${N(z)}$, and so ${z}$ must have no nontrivial factorization. $\Box$

As ${N(1+i) = 2}$, which is a prime, we see that ${(1 + i)}$ is a Gaussian prime. So ${d \mid (1 + i)^2}$, which means that ${d}$ is either ${1, (1 + i)}$, or ${(1+i)^2}$ (up to multiplication by a Gaussian unit).

Suppose we are in the case of the latter two, so that ${(1+i) \mid d}$. Then as ${d \mid (y + i)}$, we know that ${(1 + i) \mid x^3}$. Taking norms, we have that ${2 \mid x^6}$.

By unique factorization in ${\mathbb{Z}}$, we know that ${2 \mid x}$. This means that ${4 \mid x^2}$, which allows us to conclude that ${x^3 \equiv 0 \pmod 4}$. Going back to the original equation ${y^2 + 1 = x^3}$, we see that ${y^2 + 1 \equiv 0 \pmod 4}$, which means that ${y^2 \equiv 3 \pmod 4}$. A quick check shows that ${y^2 \equiv 3 \pmod 4}$ has no solutions ${y}$ in ${\mathbb{Z}/4\mathbb{Z}}$.
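The claim that ${y^2 \equiv 3 \pmod 4}$ has no solutions is small enough to check by machine; here is a quick, purely illustrative check in Python.

```python
# Squares mod 4: y^2 can only be 0 or 1 mod 4, never 3.
squares_mod_4 = {(y * y) % 4 for y in range(4)}
assert squares_mod_4 == {0, 1}
assert 3 not in squares_mod_4
```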

So we rule out the case when ${(1 + i) \mid d}$, and we are left with ${d}$ being a unit. This is exactly the statement that ${y+i}$ and ${y-i}$ are relatively prime.

Recall that ${(y+i)(y-i) = x^3}$. As ${y+i}$ and ${y-i}$ are relatively prime and their product is a cube, by unique factorization in ${\mathbb{Z}[i]}$ we know that ${y+i}$ and ${y-i}$ must each be a unit times a Gaussian cube; and since every unit in ${\mathbb{Z}[i]}$ is itself a cube (for instance, ${i = (-i)^3}$), each is in fact a Gaussian cube. Then we can write ${y+i = (m + ni)^3}$ for some Gaussian integer ${m + ni}$. Expanding, we see that

$$y+i = m^3 - 3mn^2 + i(3m^2 n - n^3).$$

Equating real and imaginary parts, we have that
\begin{align}
y &= m(m^2 - 3n^2) \\
1 &= n(3m^2 - n^2).
\end{align}
This second line shows that ${n \mid 1}$. As ${n}$ is a regular integer, we see that ${n = 1}$ or ${-1}$.

If ${n = 1}$, then that line becomes ${1 = 3m^2 - 1}$, or after rearranging, ${2 = 3m^2}$. This has no integer solutions.

If ${n = -1}$, then that line becomes ${1 = -(3m^2 - 1)}$, or after rearranging, ${0 = 3m^2}$. This has the solution ${m = 0}$, so that ${y+i = (-i)^3 = i}$, which means that ${y = 0}$. Then from ${y^2 + 1 = x^3}$, we see that ${x = 1}$.

And so the only solution is ${(x,y) = (1,0)}$.
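As a sanity check (the argument above is the actual proof), a brute-force search in Python over a small window turns up no other solutions; the bound 1000 is an arbitrary choice of mine.

```python
from math import isqrt

# Search y^2 = x^3 - 1 for 1 <= x <= 1000; for x <= 0 we have
# x^3 - 1 < 0, which cannot be a square, so those x need not be checked.
solutions = []
for x in range(1, 1001):
    t = x**3 - 1
    y = isqrt(t)
    if y * y == t:
        solutions.append((x, y))
        if y:
            solutions.append((x, -y))

print(solutions)  # [(1, 0)]
```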

4. Other Rings

The Gaussian integers have many of the same properties as the regular integers, even though there are some differences. We could go further. For example, we might consider the following integer-like sets,

$$\mathbb{Z}[\sqrt{d}] = \{ a + b \sqrt{d} : a,b \in \mathbb{Z} \}.$$

One can add, subtract, and multiply these together in similar ways to how we can add, subtract, and multiply together integers, or Gaussian integers.

We might ask what properties these other integer-like sets have. For instance, do they have unique factorization?

More generally, there is a better name than “integer-like set” for this sort of construction.

Suppose ${R}$ is a collection of elements, and it makes sense to add, subtract, and multiply these elements together. Further, we want addition and multiplication to behave similarly to how they behave for the regular integers. In particular, if ${r}$ and ${s}$ are elements in ${R}$, then we want ${r + s = s + r}$ to be in ${R}$; we want something that behaves like ${0}$ in the sense that ${r + 0 = r}$; for each ${r}$, we want another element ${-r}$ so that ${r + (-r) = 0}$; we want ${r \cdot s = s \cdot r}$; we want something that behaves like ${1}$ in the sense that ${r \cdot 1 = r}$ for all ${r}$; and we want ${r(s_1 + s_2) = r s_1 + r s_2}$. Such a collection is called a ring. (More completely, this is called a commutative unital ring, but that’s not important here.)

It is not important that you explicitly remember the exact definition of a ring. The idea is that there is a name for things that are “integer-like,” and that we might wonder which of the properties we have been thinking of as properties of the integers are actually properties of rings in general.

As a total aside: there are many more rings out there, including things that look very different from the integers. Studying them is one of the fundamental pursuits of the area of mathematics called Abstract Algebra. With an understanding of abstract algebra, one could then focus on these general number-theoretic problems in an area of math called Algebraic Number Theory.

5. The rings ${\mathbb{Z}[\sqrt{d}]}$

We can describe some of the specific properties of ${\mathbb{Z}[\sqrt{d}]}$, and suggest how some of the ideas we’ve been considering do (or don’t) generalize. For a general element ${n = a + b \sqrt{d}}$, we can define the conjugate ${\overline{n} = a - b\sqrt{d}}$ and the norm ${N(n) = n \cdot \overline{n} = a^2 - d b^2}$. We call those elements ${u}$ with ${N(u) = \pm 1}$ the units in ${\mathbb{Z}[\sqrt{d}]}$.

Some of the definitions we’ve been using turn out not to generalize so easily, or in quite the ways we expect. If ${n}$ does not have a nontrivial factorization (meaning that we cannot write ${n = ab}$ with neither ${a}$ nor ${b}$ a unit), then we call ${n}$ an irreducible. In the cases of ${\mathbb{Z}}$ and ${\mathbb{Z}[i]}$, we would have called these elements prime.

In general, we call a number ${p}$ in ${\mathbb{Z}[\sqrt{d}]}$ a prime if ${p}$ has the property that ${p \mid ab}$ implies that ${p \mid a}$ or ${p \mid b}$. Of course, in the cases of ${\mathbb{Z}}$ and ${\mathbb{Z}[i]}$, we showed that irreducibles are primes. But it turns out that this is not usually the case.

Let us look at ${\mathbb{Z}[\sqrt{-5}]}$ for a moment. In particular, we can write ${6}$ in two ways as

$$6 = 2 \cdot 3 = (1 + \sqrt{-5})(1 - \sqrt{-5}).$$

Although it’s a bit challenging to show, these are the only two fundamentally different factorizations of ${6}$ in ${\mathbb{Z}[\sqrt{-5}]}$. One can show (it’s not very hard, but it’s not particularly illuminating to do here) that neither ${2}$ nor ${3}$ divides ${(1 + \sqrt{-5})}$ or ${(1 - \sqrt{-5})}$ (and vice versa), which means that none of these four numbers are primes in our more general sense. One can also show that all four numbers are irreducible.
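The norm computations underlying this example are quick to verify. Here is an illustrative sketch in Python using ${N(a + b\sqrt{-5}) = a^2 + 5b^2}$; the search bounds are my own choice, justified in the comments.

```python
# N(a + b*sqrt(-5)) = a^2 + 5b^2.  The two factorizations of 6 have factors
# of norms 4, 9 and 6, 6; a nontrivial factor of any of these would need
# norm 2 or 3, and a^2 + 5b^2 = 2 or 3 forces b = 0 and a^2 in {2, 3},
# which is impossible -- so all four factors are irreducible.

def norm(a, b):
    return a * a + 5 * b * b

assert norm(2, 0) == 4 and norm(3, 0) == 9
assert norm(1, 1) == 6 and norm(1, -1) == 6

# Exhaustive check that norms 2 and 3 are never attained: |b| >= 1 already
# gives norm >= 5, and |a| >= 2 with b = 0 gives norm >= 4.
attained = {norm(a, b) for a in range(-2, 3) for b in range(-1, 2)}
assert not {2, 3} & attained
```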

What does this mean? This means that ${6}$ can be factored into irreducibles in fundamentally different ways, and that ${\mathbb{Z}[\sqrt{-5}]}$ does not have unique factorization.

It’s a good thought exercise to think about what is really different between ${\mathbb{Z}[\sqrt{-5}]}$ and ${\mathbb{Z}}$. At the beginning of this course, it seemed extremely obvious that ${\mathbb{Z}}$ had unique factorization. But in hindsight, is it really so obvious?

Understanding when there is and is not unique factorization in ${\mathbb{Z}[\sqrt{d}]}$ is something that people are still working on today. The fact is that we don’t know! In particular, we really don’t know very much when ${d}$ is positive.

One reason why this is hard can be seen in ${\mathbb{Z}[\sqrt{2}]}$. If ${n = a + b \sqrt{2}}$, then ${N(n) = a^2 - 2 b^2}$. A very basic question that we can ask is: what are the units? For instance, which ${n}$ have ${N(n) = 1}$?

Here, that means trying to solve the equation $$a^2 - 2 b^2 = 1. \tag{2}$$
We have seen this equation a few times before. On the second homework assignment, I asked you to show that there were infinitely many solutions to this equation by finding lines and intersecting them with hyperbolas. We began to investigate this Diophantine equation because each solution leads to another square-triangular number.

So there are infinitely many units in ${\mathbb{Z}[\sqrt{2}]}$. This is strange! For instance, ${3 + 2 \sqrt{2}}$ is a unit, which means that it behaves just like ${\pm 1}$ in ${\mathbb{Z}}$, or like ${\pm 1, \pm i}$ in ${\mathbb{Z}[i]}$. Very often, the statements we’ve been looking at and proving are true “up to multiplication by units.” Since there are infinitely many units in ${\mathbb{Z}[\sqrt 2]}$, it can be annoying even to determine whether two numbers are the same up to multiplication by units.
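These units can be generated explicitly: the powers of ${3 + 2\sqrt{2}}$ give infinitely many solutions to ${a^2 - 2b^2 = 1}$. A short Python sketch (the function name is mine, not from the text):

```python
def unit_power(k):
    """(3 + 2*sqrt(2))^k as a pair (a, b) meaning a + b*sqrt(2)."""
    a, b = 1, 0
    for _ in range(k):
        # (a + b*sqrt(2)) * (3 + 2*sqrt(2)) = (3a + 4b) + (2a + 3b)*sqrt(2)
        a, b = 3 * a + 4 * b, 2 * a + 3 * b
    return a, b

for k in range(1, 8):
    a, b = unit_power(k)
    assert a * a - 2 * b * b == 1   # each power is again a unit

print([unit_power(k) for k in range(1, 4)])  # [(3, 2), (17, 12), (99, 70)]
```

Each pair ${(a, b)}$ here is also a solution to the square-triangular-number equation mentioned above.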

As you look further, there are many more strange and interesting behaviours. It is really interesting to see what properties are very general, and what properties vary a lot. It is also interesting to see the different ways in which properties we’re used to, like unique factorization, can fail.

For instance, we have seen that ${\mathbb{Z}[\sqrt{-5}]}$ does not have unique factorization. We showed this by seeing that ${6}$ factors in two fundamentally different ways. In fact, some numbers in ${\mathbb{Z}[\sqrt{-5}]}$ do factor uniquely, and others do not; and the extent to which unique factorization fails in ${\mathbb{Z}[\sqrt{-5}]}$ turns out to be tightly controlled.

In other rings, you can have numbers which factor in more fundamentally different ways. The actual behaviour here is also really poorly understood, and there are mathematicians who are actively pursuing these topics.

It’s a very large playground out there.

## Math 100 Fall 2013: Concluding Remarks

This is a post written towards my students in Calc II, Math 100 at Brown University, fall 2013. There will be many asides, written in italics. They are to serve as clarifications of method or true asides, to be digested or passed over.

The semester is over. All the grades are in and known, fall 2013 draws to a close. As you know, I’m interested in the statistics behind the course. I’d mentioned my previous analysis about the extremely high correlation between first midterm and final grade (much higher than I would have thought!). Let’s reveal the statistics and distribution of this course, below the fold.

Posted in Brown University, Math 100, Mathematics, Teaching | | 2 Comments

## Math 100: Before second midterm

You have a midterm next week, and it’s not going to be a cakewalk.

As requested, I’m uploading the last five weeks’ worth of worksheets, with (my) solutions. A comment on the solutions: not everything is presented in full detail, but most things are presented with most detail (except for the occasional one that is far far beyond what we actually expect you to be able to do). If you have any questions about anything, let me know. Even better, ask it here – maybe others have the same questions too.

And since we were unable to go over the quiz in my afternoon recitation today, I’m attaching a worked solution to the quiz as well.

Again, let me know if you have any questions. I will still have my office hours on Tuesday from 2:30-4:30pm in my office (I’m aware that this happens to be immediately before the exam, though that’s not by design). And I’ll be more or less responsive by email.

Study study study!

## Math 100: Week 4

This is a post for my math 100 calculus class of fall 2013. In this post, I give the 4th week’s recitation worksheet (no solutions yet – I’m still writing them up). More pertinently, we will also go over the most recent quiz and common mistakes. Trig substitution, it turns out, is not so easy.

Before we hop into the details, I’d like to encourage you all to avail yourselves of each other, your professor, your TA, and the MRC in preparation for the first midterm (next week!).

1. The quiz

There were two versions of the quiz this week, but they were very similar. Both asked about a particular trig substitution. One was

$\displaystyle \int_3^6 \sqrt{36 - x^2} \,\mathrm{d} x,$

and the other was

$\displaystyle \int_{2\sqrt 2}^4 \sqrt{16 - x^2} \,\mathrm{d}x.$

They are very similar, so I’m only going to go over the first one. We know we are to use trig substitution. I see two ways to proceed: either draw a reference triangle (which I recommend), or think through the Pythagorean trig identities until you find the one that works here (which I don’t recommend).

We see a ${\sqrt{36 - x^2}}$, and this is hard to deal with. Let’s draw a right triangle that has ${\sqrt{36 - x^2}}$ as a side. I’ve drawn one below. (Not fancy, but I need a better light).

In this picture, note that ${\sin \theta = \frac{x}{6}}$, or that ${x = 6 \sin \theta}$, and that ${\sqrt{36 - x^2} = 6 \cos \theta}$. If we substitute ${x = 6 \sin \theta}$ in our integral, this means that we can replace our ${\sqrt{36 - x^2}}$ with ${6 \cos \theta}$. But this is a substitution, so we need to think about ${\mathrm{d} x}$ too. Here, ${x = 6 \sin \theta}$ means that ${\mathrm{d}x = 6 \cos \theta \, \mathrm{d}\theta}$.

Some people used the wrong trig substitution, meaning they used ${x = \tan \theta}$ or ${x = \sec \theta}$, and got stuck. It’s okay to get stuck, but if you notice that something isn’t working, it’s better to try something else than to stare at the paper for 10 minutes. Other people used ${x = 6 \cos \theta}$, which is perfectly doable and parallel to what I write below.

Another common error was people forgetting about the ${\mathrm{d}x}$ term entirely. But it’s important!

Substituting these into our integral gives

$\displaystyle \int_{?}^{??} 36 \cos^2 (\theta) \mathrm{d}\theta,$

where I have included question marks for the limits because, as after most substitutions, they are different. You have a choice: you might go on and put everything back in terms of ${x}$ before you give your numerical answer; or you might find the new limits now.

It’s not correct to continue writing down the old limits. The variable has changed, and we really don’t want ${\theta}$ to go from ${3}$ to ${6}$.

If you were to find the new limits, then you need to consider: if ${x=3}$ and ${\frac{x}{6} = \sin \theta}$, then we want a ${\theta}$ such that ${\sin \theta = \frac{3}{6}= \frac{1}{2}}$, so we might use ${\theta = \pi/6}$. Similarly, when ${x = 6}$, we want ${\theta}$ such that ${\sin \theta = 1}$, like ${\theta = \pi/2}$. Note that these were two arcsine calculations, which we would have to do even if we waited until after we put everything back in terms of ${x}$ to evaluate.

Some people left their answers in terms of these arcsines. As far as mistakes go, this isn’t a very serious one. But this is the sort of simplification that is expected of you on exams, quizzes, and homeworks. In particular, if something can be written in a much simpler way through the unit circle, then you should do it if you have the time.

So we could rewrite our integral as

$\displaystyle \int_{\pi/6}^{\pi/2} 36 \cos^2 (\theta) \mathrm{d}\theta.$

How do we integrate ${\cos^2 \theta}$? We need to make use of the identity ${\cos^2 \theta = \dfrac{1 + \cos 2\theta}{2}}$. You should know this identity for this midterm. Now we have

$\displaystyle 36 \int_{\pi/6}^{\pi/2}\left(\frac{1}{2} + \frac{\cos 2 \theta}{2}\right) \mathrm{d}\theta = 18 \int_{\pi/6}^{\pi/2}\mathrm{d}\theta + 18 \int_{\pi/6}^{\pi/2}\cos 2\theta \mathrm{d}\theta.$

The first integral is extremely simple and yields ${6\pi}$. The second integral has antiderivative ${\dfrac{\sin 2 \theta}{2}}$ (don’t forget the ${2}$ on the bottom!), and we have to evaluate ${\big[9 \sin 2 \theta \big]_{\pi/6}^{\pi/2}}$, which gives ${-\dfrac{9 \sqrt 3}{2}}$. You should know the unit circle sufficiently well to evaluate this for your midterm.

And so the final answer is ${6 \pi - \dfrac{9 \sqrt 3}{2} \approx 11.0553}$. (You don’t need to be able to do that approximation).

Let’s go back a moment and suppose you didn’t reevaluate the limits once you substituted in ${\theta}$. Then, following the same steps as above, you’d be left with

$\displaystyle 18 \int_{?}^{??}\mathrm{d}\theta + 18 \int_{?}^{??}\cos 2\theta \mathrm{d}\theta = \left[ 18 \theta \right]_?^{??} + \left[ 9 \sin 2 \theta \right]_?^{??}.$

Since ${\frac{x}{6} = \sin \theta}$, we know that ${\theta = \arcsin (x/6)}$. This is how we evaluate the left integral, and we are left with ${[18 \arcsin(x/6)]_3^6}$. This means we need to know the arcsine of ${1}$ and ${\frac 12}$. These are exactly the same two arcsine computations that I referenced above! Following them again, we get ${6\pi}$ as the answer.

We could do the same for the second part, since ${\sin ( 2 \arcsin (x/6))}$ when ${x = 3}$ is ${\sin (2 \arcsin \frac{1}{2} ) = \sin (2 \cdot \frac{\pi}{6} ) = \frac{\sqrt 3}{2}}$; and when ${x = 6}$ we get ${\sin (2 \arcsin 1) = \sin (2 \cdot \frac{\pi}{2}) = \sin (\pi) = 0}$.

Putting these together, we see that the answer is again ${6\pi – \frac{9\sqrt 3}{2}}$.

Or, throwing yet another option out there, we could do something else (a little bit wittier, maybe?). We have this ${\sin 2\theta}$ term to deal with. You might recall that ${\sin 2 \theta = 2 \sin \theta \cos \theta}$, the so-called double-angle identity.

Then ${9 \sin 2\theta = 18 \sin \theta \cos \theta}$. Going back to our reference triangle, we know that ${\cos \theta = \dfrac{\sqrt{36 - x^2}}{6}}$ and that ${\sin \theta = \dfrac{x}{6}}$. Putting these together,

$\displaystyle 9 \sin 2 \theta = \dfrac{ x\sqrt{36 - x^2} }{2}.$

When ${x=6}$, this is ${0}$. When ${x = 3}$, we have ${\dfrac{ 3\sqrt {27}}{2} = \dfrac{9\sqrt 3}{2}}$.

And fortunately, we get the same answer again at the end of the day. (phew).
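If you ever want to double-check an answer like this yourself, a crude numerical integration suffices. Here is an illustrative midpoint-rule check in Python (the step count is an arbitrary choice).

```python
from math import sqrt, pi

# Midpoint rule for the quiz integral  int_3^6 sqrt(36 - x^2) dx,
# which should agree with 6*pi - 9*sqrt(3)/2, roughly 11.0553.
n = 100_000
a, b = 3.0, 6.0
h = (b - a) / n
approx = h * sum(sqrt(36 - (a + (i + 0.5) * h) ** 2) for i in range(n))
exact = 6 * pi - 9 * sqrt(3) / 2
assert abs(approx - exact) < 1e-3
```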

2. The worksheet

Finally, here is the worksheet for the day. I’m working on their solutions, and I’ll have that up by late this evening (sorry for the delay).

Ending tidbits – when I was last a TA, I tried to see what were the good predictors of final grade. Some things weren’t very surprising – there is a large correlation between exam scores and final grade. Some things were a bit surprising – low homework scores correlated well with low final grade, but high homework scores didn’t really have a strong correlation with final grade at all; attendance also correlated weakly. But one thing that really stuck with me was the first midterm grade vs final grade in class: it was really strong. For a bit more on that, I refer you to my final post from my Math 90 posts.

## Math 100: Week 3 and pre-midterm

This is a post for my Math 100 class of fall 2013. In this post, I give the first three weeks’ worksheets from recitation and the set of solutions to week three’s worksheet, as well as a few administrative details.

Firstly, here is the recitation work from the first three weeks:

1. (there was no recitation the first week)
2. A worksheet focusing on review.
3. A worksheet focusing on integration by parts and u-substitution, with solutions.

In addition, I’d like to remind you that I have office hours from 2-4pm (right now) in Kassar 018. I’ve had multiple people set up appointments with me outside of these hours, which I’m tempted to interpret as suggesting that I change when my office hours are. If you have a preference, let me know, and I’ll try to incorporate it.

Finally, there will be an exam next Tuesday. I’ve been getting a lot of emails about what material will be on the exam. The answer is that everything you have learned up to now and by the end of this week is fair game for exam material. This also means there could be exam questions on material that we have not discussed in recitation. So be prepared. However, I will be setting aside a much larger portion of recitation this Thursday for questions than normal. So come prepared with your questions.

Best of luck, and I’ll see you in class on Thursday.

## An intuitive introduction to calculus

This is a post written for my fall 2013 Math 100 class but largely intended for anyone with knowledge of what a function is and a desire to know what calculus is all about. Calculus is made out to be the pinnacle of the high school math curriculum, and correspondingly is thought to be very hard. But the difficulty is bloated, blown out of proportion. In fact, the ideas behind calculus are approachable and even intuitive if thought about in the right way.

Many people managed to stumble across the page before I’d finished all the graphics. I’m sorry, but they’re all done now! I was having trouble interpreting how WordPress was going to handle my gif files – it turns out that they automagically resize them if you don’t make them of the correct size, which makes them not display. It took me a bit to realize this. I’d like to mention that this actually started as a 90 minute talk I had with my wife over coffee, so perhaps an alternate title would be “Learning calculus in 2 hours over a cup of coffee.”

So read on if you would like to understand what calculus is, or if you’re looking for a refresher of the concepts from a first semester in calculus (like for Math 100 students at Brown), or if you’re looking for a bird’s eye view of AP Calc AB subject material.

1. An intuitive and semicomplete introduction to calculus

We will think of a function ${f(\cdot)}$ as something that takes an input ${x}$ and gives out another number, which we’ll denote by ${f(x)}$. We know functions like ${f(x) = x^2 + 1}$, which means that if I give in a number ${x}$ then the function returns the number ${f(x) = x^2 + 1}$. So I put in ${1}$, I get ${1^2 + 1 = 2}$, i.e. ${f(1) = 2}$. Primary and secondary school overly conditions students to think of functions in terms of a formula or equation. The important thing to remember is that a function is really just something that gives an output when given an input, and if the same input is given later then the function spits out the same output. As an aside, I should mention that the most common problem I’ve seen in my teaching and tutoring is a fundamental misunderstanding of functions and their graphs.

For a function that takes in and spits out numbers, we can associate a graph. A graph is a two-dimensional representation of our function, where by convention the input is put on the horizontal axis and the output is put on the vertical axis. Each axis is numbered, and in this way we can identify any point in the graph by its coordinates, i.e. its horizontal and vertical position. A graph of a function ${f(x)}$ includes a point ${(x,y)}$ if ${y = f(x)}$.

Thus each point on the graph is really of the form ${(x, f(x))}$. A large portion of algebra I and II is devoted to being able to draw graphs for a variety of functions. And if you think about it, graphs contain a huge amount of information. Graphing ${f(x)= x^2 + 1}$ involves drawing an upwards-facing parabola, which really represents an infinite number of points. That’s pretty intense, but it’s not what I want to focus on here.

1.1. Generalizing slope – introducing the derivative

You might recall the idea of the ‘slope’ of a line. A line has a constant ratio of how much the ${y}$ value changes for a specific change in ${x}$, which we call the slope (people always seem to remember rise over run). In particular, if a line passes through the points ${(x_1, y_1)}$ and ${(x_2, y_2)}$, then its slope will be the vertical change ${y_2 - y_1}$ divided by the horizontal change ${x_2 - x_1}$, or ${\dfrac{y_2 - y_1}{x_2 - x_1}}$.

So if the line is given by an equation ${f(x) = \text{something}}$, then the slope from two inputs ${x_1}$ and ${x_2}$ is ${\dfrac{f(x_2) - f(x_1)}{x_2 - x_1}}$. As an aside, for those who remember things like the ‘standard equation’ ${y = mx + b}$ or ‘point-slope’ ${(y - y_0) = m(x - x_0)}$ but who have never thought about or been taught where these come from: the claim that lines are the curves of constant slope says that for any choice of ${(x_1, y_1)}$ on the line, we expect ${\dfrac{y_2 - y_1}{x_2 - x_1} = m}$, a constant, which I denote by ${m}$ for no particularly good reason other than the fact that some textbook author long ago did such a thing. Since we’re allowing ourselves to choose any ${(x_1, y_1)}$, we might drop the subscripts and rearrange our equation to give ${y_2 - y = m(x_2 - x)}$, which is what has been so unkindly drilled into students’ heads as the ‘point-slope form.’ This is why lines have a point-slope form, and a reason that it comes up so much is that it comes so naturally from the defining characteristic of a line, i.e. constant slope.

But one cannot speak of the ‘slope’ of a parabola.

Intuitively, we look at our parabola ${x^2 + 1}$ and see that the ‘slope,’ or an estimate of how much the function ${f(x)}$ changes with a change in ${x}$, seems to be changing depending on what ${x}$ values we choose. (This should make sense – if it didn’t change, and had constant slope, then it would be a line). The first major goal of calculus is to come up with an idea of a ‘slope’ for non-linear functions. I should add that we already know a sort of ‘instantaneous rate of change’ of a nonlinear function. When we’re in a car and we’re driving somewhere, we’re usually speeding up or slowing down, and our pace isn’t usually linear. Yet our speedometer still manages to say how fast we’re going, which is an immediate rate of change. So if we had a function ${p(t)}$ that gave us our position at a time ${t}$, then the slope would give us our velocity (change in position per change in time) at a moment. So without knowing it, we’re familiar with a generalized slope already. Now in our parabola, we don’t expect a constant slope, so we want to associate a ‘slope’ to each input ${x}$. In other words, we want to be able to understand how rapidly the function ${f(x)}$ is changing at each ${x}$, analogous to how the slope ${m}$ of a line ${g(x) = mx + b}$ tells us that if we change our input by an amount ${h}$ then our output value will change by ${mh}$.

How does calculus do that? The idea is to get closer and closer approximations. Suppose we want to find the ‘slope’ of our parabola at the point ${x = 1}$. Let’s get an approximate answer. The slope of the line coming from inputs ${x = 1}$ and ${x = 2}$ is a (poor) approximation. In particular, since we’re working with ${f(x) = x^2 + 1}$, we have that ${f(2) = 5}$ and ${f(1) = 2}$, so that the ‘approximate slope’ from ${x = 1}$ and ${x = 2}$ is ${\frac{5 - 2}{2 - 1} = 3}$. But looking at the graph,

we see that it feels like this slope is too large. So let’s get closer. Suppose we use inputs ${x = 1}$ and ${x = 1.5}$. We get that the approximate slope is ${\frac{3.25 - 2}{1.5 - 1} = 2.5}$. If we were to graph it, this would also feel too large. So we can keep choosing smaller and smaller changes, like using ${x = 1}$ and ${x = 1.1}$, or ${x = 1}$ and ${x = 1.01}$, and so on. This next graphic contains these approximations, with chosen points getting closer and closer to ${1}$.

Let’s look a little closer at the values we’re getting for our slopes when we use ${1}$ and ${2, 1.5, 1.1, 1.01, 1.001}$ as our inputs. We get

$\displaystyle \begin{array}{c|c} \text{second input} & \text{approx. slope} \\ \hline 2 & 3 \\ 1.5 & 2.5 \\ 1.1 & 2.1 \\ 1.01 & 2.01 \\ 1.001 & 2.001 \end{array}$

It looks like the approximate slopes are approaching ${2}$. What if we plot the graph with a line of slope ${2}$ going through the point ${(1,2)}$?

It looks great! Let’s zoom in a whole lot.

That looks really close! In fact, what I’ve been calling the natural-feeling slope, or local rate of change, is really the line tangent to the graph of our function at the point ${(1, f(1))}$. In a calculus class, you’ll spend a bit of time making sense of what it means for the approximate slopes to ‘approach’ ${2}$. This is called a ‘limit,’ and the details are not important to us right now. The important thing is that this lets us get an idea of a ‘slope’ at a point on a parabola. It’s not really a slope, because a parabola isn’t a line. So we’ve given it a different name: we call this ‘the derivative.’ So the derivative of ${f(x) = x^2 + 1}$ at ${x = 1}$ is ${2}$, i.e. right around ${x = 1}$ we expect a rate of change of ${2}$, so that we expect ${f(1 + h) - f(1) \approx 2h}$. If you think about it, we’re saying that we can approximate ${f(x) = x^2 + 1}$ near the point ${(1, 2)}$ by the line shown in the graph above: this line passes through ${(1,2)}$ and its slope is ${2}$, what we’re calling the slope of ${f(x) = x^2 + 1}$ at ${x = 1}$.
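The table of approximate slopes above is easy to recompute. Here is the calculation in Python (purely illustrative):

```python
# Difference quotients of f(x) = x^2 + 1 at x = 1: as the second input
# approaches 1, the quotients approach the derivative value 2.

def f(x):
    return x * x + 1

for x2 in [2, 1.5, 1.1, 1.01, 1.001]:
    slope = (f(x2) - f(1)) / (x2 - 1)
    print(x2, slope)
    # Algebraically the quotient is exactly x2 + 1, since
    # (x2^2 - 1)/(x2 - 1) = x2 + 1, which tends to 2 as x2 -> 1.
    assert abs(slope - (x2 + 1)) < 1e-9
```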

Let’s generalize. We were able to speak of the derivative at one point, but how about other points? The rest of this post is below the ‘more’ tag below.

Posted in Brown University, Expository, Math 100, Mathematics | | 9 Comments

## Twenty Mathematicians, Two Hard Problems, One Week, IdeaLab2013

July has been an exciting and busy month for me. I taught number theory 3 hours a day, 5 days a week, for 3 weeks to (mostly) devoted and motivated high school students in the Summer@Brown program. In the middle, I moved to Massachusetts. Immediately after the Summer@Brown program ended, I was given the opportunity to return to ICERM to participate in an experimental program called an IdeaLab.

IdeaLab invited 20 early career mathematicians to come together for a week and to generate ideas on two very different problems: Tipping Points in Climate Systems and Efficient Fully Homomorphic Encryption. Although I plan on writing a bit more about each of these problems and the IdeaLab process in action (at least from my point of view), I should say something about what these are.

Models of Earth’s climate are used all the time: to give daily weather reports, to predict and warn about hurricanes, and to attempt to understand the effects of anthropogenic sources of carbon on long-term climate. As we know from uncertainty about weather reports, these models aren’t perfect. In particular, they don’t currently predict sudden, abrupt changes called ‘tipping points.’ But are tipping points possible? There have been warm periods following ice ages in the past, so it seems that there might be tipping points that aren’t captured by the models. Understanding these forms the basis of the Tipping Points in Climate Systems project. This project also forms another link in Mathematics of Planet Earth.

On the other hand, homomorphic encryption is a topic in modern cryptography. To encrypt a message is to make it hard or impossible for others to read unless they have a ‘key.’ You might think that you wouldn’t want someone holding onto encrypted data to be able to do anything with that data, and in most modern encryption algorithms this is the case. But what if we were able to give Google an encrypted dataset and ask them to perform a search on it? Is it possible to have a secure encryption that would allow Google to run some sort of search algorithm and give us the results, without Google ever understanding the data itself? It may seem far-fetched, but this is exactly the idea behind the Efficient Fully Homomorphic Encryption group. Surprisingly enough, it is possible. But known methods are obnoxiously slow and infeasible. This is why the group was after ‘efficient’ encryption.

So 20 early career mathematicians from all sorts of areas of mathematics gathered to think about these two questions. For the rest of this post, I’d like to talk about the structure and my thoughts on the IdeaLab process. In later posts, I’ll talk about each of the two major topics and what sorts of ideas came out of the process.

## Math 90: Week 8

Today, we had a set of problems as usual, and a quiz! (And I didn’t tell you about the quiz, even though others did, so I’m going to pretend that it was a pop quiz!) Below, you’ll find the three problems, their solutions, and a worked-out quiz.

Posted in Brown University, Math 90, Mathematics | | 2 Comments

## Math 90: Week 4

It was quiz-day!

The class did pretty well on the quiz. I wrote the quiz, and I’m pleased with the skill level demonstrated. The average was about 77%, and the median was 80%. (For statistically minded folk, this means that the low scores were far enough below the average to pull the mean beneath the median.)

Anyhow, the solutions to the day’s problems and the quiz are below the fold:

Posted in Brown University, Math 90, Mathematics | | 3 Comments