Tag Archives: proof

An Intuitive Overview of Taylor Series

This is a note written for my fall 2013 Math 100 class, but it was not written “for the exam,” nor does anything on here subtly hint at anything on any exam. But I hope that this will be helpful for anyone who wants to get a basic understanding of Taylor series. What I want to do is try to get some sort of intuitive grasp on Taylor series as approximations of functions. By intuitive, I mean intuitive to those with a good grasp of functions, the basics of a first semester of calculus (derivatives, integrals, the mean value theorem, and the fundamental theorem of calculus) – so it’s a mathematical intuition. In this way, this post is a sort of follow-up of my earlier note, An Intuitive Introduction to Calculus.

PLEASE NOTE that my math compiler and my markdown compiler sometimes compete, and sometimes repeated derivatives are too high or too low by one pixel.

We care about Taylor series because they allow us to approximate other functions in predictable ways. Sometimes, these approximations can be made very, very, very accurate without requiring too much computing power. You might have heard that computers/calculators routinely use Taylor series to calculate things like $latex {e^x}$ (which is more or less true). But up to this point in most students’ mathematical development, most mathematics has been clean and perfect; everything has been exact algorithms yielding exact answers for years and years. This is simply not the way of the world.

Here’s a fundamental fact to both mathematics and life: almost anything worth doing is probably pretty hard and pretty messy.

For a very recognizable example, let’s think about finding zeroes of polynomials. Finding roots of linear polynomials is very easy. If we see $latex {5 + x = 0}$, we see that $latex {-5}$ is the zero. Similarly, finding roots of quadratic polynomials is very easy, and many of us have memorized the quadratic formula to this end. Thus $latex {ax^2 + bx + c = 0}$ has solutions $latex {x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}}$. These are both nice, algorithmic, and exact. But I will guess that the vast majority of those who read this have never seen a “cubic polynomial formula” for finding roots of cubic polynomials (although it does exist, it is horrendously messy – look up Cardano’s formula). There is even an algorithmic way of finding the roots of quartic polynomials. But here’s something amazing: there is no general method for finding the exact roots of 5th degree polynomials (or higher degree).

I don’t mean “we haven’t found it yet, but there may be one,” or even “you’ll have to use one of these myriad ways” – I mean it has been proven (this is the Abel–Ruffini theorem) that there is no general formula in radicals for the roots of polynomials of degree 5 or higher. But we certainly can approximate them arbitrarily well. So even something as simple as finding roots of polynomials, which we’ve been doing since we were in middle school, gets incredibly and unbelievably complicated.
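To make the contrast concrete, here is a quick numerical sketch (my own illustration, not part of the original note; the function names are mine): Newton’s method approximates a root of a degree-5 polynomial to high precision, even though no exact formula in radicals exists.

```python
# Illustrative sketch (not from the post): Newton's method,
# x_{n+1} = x_n - f(x_n)/f'(x_n), on a quintic.
def newton(f, fprime, x, steps=50):
    for _ in range(steps):
        x = x - f(x) / fprime(x)
    return x

f = lambda x: x**5 - x - 1        # a quintic whose roots have no radical formula
fprime = lambda x: 5 * x**4 - 1

root = newton(f, fprime, 1.5)
print(root)   # approximately 1.1673, and f(root) is essentially 0
```

An approximate answer, obtained quickly: exactly the trade the rest of this note is about.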

So before we hop into Taylor series directly, I want to get into the mindset of approximating functions with other functions.

1. Approximating functions with other functions

We like working with polynomials because they’re so easy to calculate and manipulate. So sometimes we try to approximate complicated functions with polynomials, a problem sometimes called “polynomial interpolation.”

Suppose we wanted to approximate $latex {\sin(x)}$. The most naive approximation that we might do is see that $latex {\sin(0) = 0}$, so we might approximate $latex {\sin(x)}$ by $latex {p_0(x) = 0}$. We know that it’s right at least once, and since $latex {\sin(x)}$ is periodic, it’s going to be right many times. I write $latex {p_0}$ to indicate that this is a degree $latex {0}$ polynomial, that is, a constant polynomial. Clearly though, this is a terrible approximation, and we can do better.
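As a quick numerical aside (my own illustration, not from the original note): we can compare the constant approximation $latex {p_0(x) = 0}$ against $latex {\sin(x)}$, and against the degree-3 Taylor polynomial $latex {x - x^3/6}$ that this kind of analysis eventually produces.

```python
# Illustrative sketch (not from the post): constant vs. cubic approximation of sin.
import math

def p0(x):
    return 0.0           # the constant approximation from above

def p3(x):
    return x - x**3 / 6  # degree-3 Taylor polynomial of sin at 0

for x in [0.1, 0.5, 1.0]:
    print(x, abs(math.sin(x) - p0(x)), abs(math.sin(x) - p3(x)))
```

Near $latex {x = 0}$, the degree-3 polynomial is dramatically more accurate than the constant one.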


Posted in Expository, Mathematics, Teaching | 29 Comments

Notes on the first week (SummerNT)

We’ve covered a lot of ground this first week! I wanted to provide a written summary, with partial proofs, of what we have done so far.

We began by learning about proofs. We talked about direct proofs, inductive proofs, proofs by contradiction, and proofs by using the contrapositive of the statement we want to prove. A proof is a justification and argument based upon certain logical premises (which we call axioms); in contrast to other disciplines, a mathematical proof is completely logical and can be correct or incorrect.

We then established a set of axioms for the integers that would serve as the foundation of our exploration into the (often fantastic yet sometimes frustrating) realm of number theory. In short, the integers are a non-empty set with addition and multiplication [which are both associative, commutative, and have an identity, and which behave as we think they should behave; further, there are additive inverses], a total order [an integer is either bigger than, less than, or equal to any other integer, and it behaves like we think it should under addition and multiplication], and satisfying the deceptively important well ordering principle [every nonempty set of positive integers has a least element].

With this logical framework in place, we really began number theory in earnest. We talked about divisibility [we say that $latex a$ divides $latex b$, written $latex a \mid b$, if $latex b = ak$ for some integer $latex k$]. We showed that every number has a prime factorization. To do this, we used the well-ordering principle.

Suppose that not all integers have a prime factorization. Then there must be a smallest positive integer that does not have a prime factorization: call it $latex n$. Then we know that $latex n$ is either a prime or a composite. If it’s prime, then it has a prime factorization. If it’s composite, then it factors as $latex n = ab$ with $latex a,b < n$. But then we know that each of $latex a, b$ has a prime factorization, since they are less than $latex n$. Multiplying them together, we see that $latex n$ also has a prime factorization after all. $latex \diamondsuit$
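The existence argument can be mirrored in code (a sketch of mine, not from the course): repeatedly split off the smallest factor, which is necessarily prime.

```python
# Illustrative sketch (not from the course): trial-division factorization.
def prime_factorization(n):
    """Return the prime factors of n >= 2 in nondecreasing order."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)   # d has no smaller divisor left, so d is prime
            n //= d
        d += 1
    if n > 1:
        factors.append(n)       # whatever remains is itself prime
    return factors

print(prime_factorization(360))  # [2, 2, 2, 3, 3, 5]
```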

Our first major result is the following:

There are infinitely many primes

There are many proofs, and we saw two of them in class. For posterity, I’ll present three here.

First proof that there are infinitely many primes

Take a finite collection of primes, say $latex p_1, p_2, \ldots, p_k$. We will show that there is at least one more prime not mentioned in the collection. To see this, consider the number $latex p_1 p_2 \cdots p_k + 1$. We know that this number will factor into primes, but upon division by every prime in our collection, it leaves a remainder of $latex 1$. Thus it has at least one prime factor different from every prime in our collection. $latex \diamondsuit$

This was a common proof used in class. A pattern also quickly emerges: $latex 2 + 1 = 3$, a prime. $latex 2 \cdot 3 + 1 = 7$, a prime. $latex 2 \cdot 3 \cdot 5 + 1 = 31$, also a prime. Is it always the case that a product of primes plus one is another prime? No, in fact. If you look at $latex 2 \cdot 3 \cdot 5 \cdot 7 \cdot 11 \cdot 13 + 1 = 30031 = 59 \cdot 509$, you get a nonprime.
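We can check this pattern directly (a quick script of mine, not from class):

```python
# Illustrative sketch (not from class): primorials plus one.
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

product = 1
for p in [2, 3, 5, 7, 11, 13]:
    product *= p
    print(product + 1, is_prime(product + 1))
# The first five lines report primes (3, 7, 31, 211, 2311);
# the last line reports 30031 False, since 30031 = 59 * 509.
```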

Second proof that there are infinitely many primes

In a similar vein to the first proof, we will show that there is always a prime larger than $latex n$ for any positive integer $latex n$. To see this, consider $latex n! + 1$. Upon dividing by any prime up to $latex n$, we get a remainder of $latex 1$. So all of its prime factors are larger than $latex n$, and so there are infinitely many primes. $latex \diamondsuit$
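Here is a small numerical check (my own, not from class) that the smallest prime factor of $latex n! + 1$ really does exceed $latex n$:

```python
# Illustrative sketch (not from class): smallest prime factor of n! + 1.
import math

def smallest_prime_factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n    # n itself is prime

for n in range(2, 9):
    N = math.factorial(n) + 1
    print(n, N, smallest_prime_factor(N))   # the last column always exceeds n
```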

I would also like to present one more, which I’ve always liked.

Third proof that there are infinitely many primes

Suppose there are only finitely many primes $latex p_1, \ldots, p_k$. Then consider the two numbers $latex n = p_1 \cdots p_k$ and $latex n - 1$. We know that $latex n - 1$ has a prime factor, so that it must share a factor $latex P$ with $latex n$ since $latex n$ is the product of all the primes. But then $latex P$ divides $latex n - (n - 1) = 1$, which is nonsense; no prime divides $latex 1$. Thus there are infinitely many primes. $latex \diamondsuit$

We also looked at modular arithmetic, often called the arithmetic of a clock. When we say that $latex a \equiv b \mod m$, we mean to say that $latex m \mid (b - a)$, or equivalently that $latex a = b + km$ for some integer $latex k$ (can you show these are equivalent?). And we pronounce that statement as ” $latex a$ is congruent to $latex b$ mod $latex m$.” We played a lot with modular arithmetic: we added, subtracted, and multiplied many times, hopefully enough to build a bit of familiarity with the feel. In most ways, it feels like regular arithmetic. But in some ways, it’s different. Looking at the integers $latex \mod m$ partitions the integers into a set of equivalence classes, i.e. into sets of integers that are congruent to $latex 0 \mod m, 1 \mod m, \ldots$. When we talk about adding or multiplying numbers $latex \mod m$, we’re really talking about manipulating these equivalence classes. (This isn’t super important to us – just a hint at what’s going on beneath the surface.)

We expect that if $latex a \equiv b \mod m$, then we would also have $latex ac \equiv bc \mod m$ for any integer $latex c$, and this is true (can you prove this?). But we would also expect that if we had $latex ac \equiv bc \mod m$, then we would necessarily have $latex a \equiv b \mod m$, i.e. that we can cancel out the same number on each side. And it turns out that’s not the case. For example, $latex 4 \cdot 2 \equiv 4 \cdot 5 \mod 6$ (both are $latex 2 \mod 6$), but ‘cancelling the fours’ says that $latex 2 \equiv 5 \mod 6$ – that’s simply not true. With this example in mind, we went about proving things about modular arithmetic. It’s important to know what one can and can’t do.

One very big and important observation that we noted is that it doesn’t matter in what order we operate: it doesn’t matter if we multiply an expression out and then ‘mod it down’, or ‘mod it down’ and then multiply, or if we intermix these operations. Knowing this allows us to simplify expressions like $latex 11^4 \mod 12$, since we know $latex 11 \equiv -1 \mod 12$, and we know that $latex (-1)^2 \equiv 1 \mod 12$, and so $latex 11^4 \equiv (-1)^{2 \cdot 2} \equiv 1 \mod 12$. If we’d wanted to, we could have multiplied it out and then reduced – the choice is ours!
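In code (a tiny illustration of mine), both orders of operations give the same answer; Python’s built-in three-argument `pow` even reduces at every step of the multiplication.

```python
# Illustrative sketch (not from class): reduce-then-multiply vs. multiply-then-reduce.
a = 11 ** 4          # multiply out first, then reduce
print(a % 12)        # 1

b = pow(11, 4, 12)   # modular exponentiation: reduce as we go
print(b)             # 1, matching 11 = -1 (mod 12) and (-1)^4 = 1
```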

Amidst our exploration of modular arithmetic, we noticed some patterns. Some numbers are invertible in the modular sense, while others are not. For example, $latex 5 \cdot 5 \equiv 1 \mod 6$, so in that sense, we might think of $latex \frac{1}{5} \equiv 5 \mod 6$. More interestingly but in the same vein, $latex \frac{1}{2} \equiv 6 \mod 11$ since $latex 2 \cdot 6 \equiv 1 \mod 11$. Stated more formally, a number $latex a$ has a modular inverse $latex a^{-1} \mod m$ if there is a solution to the modular equation $latex ax \equiv 1 \mod m$, in which case that solution is the modular inverse. When does this happen? Are these units special?
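A brute-force sketch (mine, not from class) of hunting for modular inverses, which foreshadows the answer to “when does this happen?”:

```python
# Illustrative sketch (not from class): search for a modular inverse.
def mod_inverse(a, m):
    """Search for x with a*x = 1 (mod m); return None if no inverse exists."""
    for x in range(1, m):
        if (a * x) % m == 1:
            return x
    return None

print(mod_inverse(5, 6))    # 5, since 5 * 5 = 25 = 1 (mod 6)
print(mod_inverse(2, 11))   # 6, since 2 * 6 = 12 = 1 (mod 11)
print(mod_inverse(4, 6))    # None: gcd(4, 6) = 2 > 1, so 4 has no inverse mod 6
```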

Returning to division, we think of the greatest common divisor. I showed you the Euclidean algorithm, and you managed to prove it in class. The Euclidean algorithm produces the greatest common divisor of $latex a$ and $latex b$, and it looks like this (where I assume that $latex b > a$):

$latex b = q_1 a + r_1$

$latex a = q_2 r_1 + r_2$

$latex r_1 = q_3 r_2 + r_3$

$latex \cdots$

$latex r_k = q_{k+2}r_{k+1} + r_{k+2}$

$latex r_{k+1} = q_{k+3}r_{k+2} + 0$

where in each step, we just did regular old division to guarantee a remainder $latex r_i$ that was less than the divisor. Since each remainder becomes the next divisor, the remainders strictly decrease at each iteration, so the algorithm will terminate (in fact, it’s very fast). Further, using the notation from above, I claimed that the gcd of $latex a$ and $latex b$ was the last nonzero remainder, in this case $latex r_{k+2}$. How did we prove it?

Proof of Euclidean Algorithm

Suppose that $latex d$ is a common divisor (such as the greatest common divisor) of $latex a$ and $latex b$. Then $latex d$ divides the left hand side of $latex b - q_1 a = r_1$, and thus must also divide the right hand side. So any divisor of $latex a$ and $latex b$ is also a divisor of $latex r_1$. This carries down the list, so that the gcd of $latex a$ and $latex b$ will divide each remainder term. How do we know that the last remainder is exactly the gcd, and no more? The way we proved it in class relied on the observation that $latex r_{k+2} \mid r_{k+1}$. But then $latex r_{k+2}$ divides the right hand side of $latex r_k = q_{k+2} r_{k+1} + r_{k+2}$, and so it also divides the left. This also carries up the chain, so that $latex r_{k+2}$ divides both $latex a$ and $latex b$. So it is itself a common divisor, and thus cannot be larger than the greatest common divisor. $latex \diamondsuit$

As an aside, I really liked the way it was proved in class. Great job!
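As a short loop (my sketch, not from class), the algorithm above becomes:

```python
# Illustrative sketch (not from class): the Euclidean algorithm.
def euclid_gcd(a, b):
    while b != 0:
        a, b = b, a % b   # the divisor becomes the dividend; the remainder, the divisor
    return a

# 252 = 1*198 + 54, 198 = 3*54 + 36, 54 = 1*36 + 18, 36 = 2*18 + 0
print(euclid_gcd(252, 198))  # 18, the last nonzero remainder
```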

The Euclidean algorithm can be turned backwards with back-substitution (some call this the extended Euclidean algorithm) to give a solution in $latex x,y$ to the equation $latex ax + by = \gcd(a,b)$. This has played a super important role in our class ever since. By the way, though I never said it in class, we proved Bezout’s Identity along the way (which we just called part of the Extended Euclidean Algorithm). This essentially says that the gcd of $latex a$ and $latex b$ is the smallest positive number expressible in the form $latex ax + by$. The Euclidean algorithm has shown us that the gcd is expressible in this form. How do we know it’s the smallest? Observe again that if $latex c$ is a common divisor of $latex a$ and $latex b$, then $latex c$ divides the left hand side of $latex ax + by = d$, and so $latex c \mid d$. So no positive $latex d$ of this form can be smaller than the gcd.
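Back-substitution can also be written as a short recursion (my sketch, not from class): it returns $latex (g, x, y)$ with $latex ax + by = g = \gcd(a,b)$, witnessing Bezout’s Identity.

```python
# Illustrative sketch (not from class): the extended Euclidean algorithm.
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = extended_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

g, x, y = extended_gcd(252, 198)
print(g, x, y)        # g = 18, and indeed 252*x + 198*y = 18
```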

This led us to explore and solve linear Diophantine equations of the form $latex ax + by = c$ for general $latex a,b,c$. There will be solutions whenever $latex \gcd(a,b) \mid c$, and in such cases there are infinitely many solutions (Do you remember how to see infinitely many other solutions?).

Linear Diophantine equations are very closely related to linear problems in modular arithmetic of the form $latex ax \equiv c \mod m$. In particular, this last modular equation is equivalent to $latex ax + my = c$ for some $latex y$. (Can you show that these are the same?) Using what we’ve learned about linear Diophantine equations, we know that $latex ax \equiv c \mod m$ has a solution iff $latex \gcd(a,m) \mid c$. But now, there are not infinitely many incongruent (i.e. not the same $latex \mod m$) solutions. This is called the ‘Linear Congruence Theorem,’ and it is, interestingly, the first major result we’ve learned whose Wikipedia page gives no proof.

Theorem: the modular equation $latex ax \equiv b \mod m$ has a solution iff $latex \gcd(a,m) \mid b$, in which case there are exactly $latex \gcd(a,m)$ incongruent solutions.


We can translate a solution of $latex ax \equiv b \mod m$ into a solution of $latex ax + my = b$, and vice-versa. So we know from the Extended Euclidean algorithm that there are only solutions if $latex \gcd(a,m) \mid b$. Now, let’s show that there are $latex \gcd(a,m)$ solutions. I will do this a bit differently than how we did it in class.

First, let’s do the case when $latex \gcd(a,m)=1$, and suppose we have a solution $latex (x,y)$ so that $latex ax + my = b$. If there is another solution, then there is some perturbation we can do by shifting $latex x$ by a number $latex x'$ and $latex y$ by a number $latex y'$ that yields another solution looking like $latex a(x + x') + m(y + y') = b$. As we already know that $latex ax + my = b$, we can remove that from the equation. Then we get simply $latex ax' = -my'$. Since $latex \gcd(m,a) = 1$, we know (see below the proof) that $latex m$ divides $latex x'$. But then the new solution $latex x + x' \equiv x \mod m$, so all solutions fall in the same congruence class – the same as $latex x$.

Now suppose that $latex \gcd(a,m) = d$ and that there is a solution. Since there is a solution, each of $latex a, m,$ and $latex b$ is divisible by $latex d$, and we can write them as $latex a = da', b = db', m = dm'$. Then the modular equation $latex ax \equiv b \mod m$ is the same as $latex da'x \equiv db' \mod dm'$, which is the same as $latex dm' \mid (db' - da'x)$. Note that in this last case, we can remove the $latex d$ from both sides, so that $latex m' \mid (b' - a'x)$, or that $latex a'x \equiv b' \mod m'$. From the first case, we know this has exactly one solution mod $latex m'$, but we are interested in solutions mod $latex m$. Just as knowing that $latex x \equiv 2 \mod 4$ means that $latex x$ might be $latex 2, 6, 10 \mod 12$ since $latex 4$ goes into $latex 12$ three times, $latex m'$ goes into $latex m$ exactly $latex d$ times, and this gives us our $latex d$ incongruent solutions. $latex \diamondsuit$
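A brute-force check of the theorem (my own illustration, not part of the proof):

```python
# Illustrative sketch (not from class): count incongruent solutions of ax = b (mod m).
import math

def solutions(a, b, m):
    """All incongruent solutions of a*x = b (mod m)."""
    return [x for x in range(m) if (a * x) % m == b % m]

print(solutions(4, 2, 6))   # [2, 5]: gcd(4, 6) = 2 divides 2, so exactly 2 solutions
print(solutions(4, 3, 6))   # []: gcd(4, 6) = 2 does not divide 3, so no solutions
```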

I mentioned that we used a fact that we’ve now proven three times in class, in different forms: if $latex \gcd(a,b) = 1$ and $latex a \mid bc$, then we can conclude that $latex a \mid c$. Can you prove this? Can you prove this without using unique factorization? We actually used this fact to prove unique factorization (really we use the statement about primes: if $latex p$ is a prime and $latex p \mid ab$, then we must have that $latex p \mid a$ or $latex p \mid b$, or perhaps both). Do you remember how we proved that? We used the well-ordering principle to say that if there were a positive integer that couldn’t be uniquely factored, then there would be a smallest one. Choosing two of its factorizations and finding a prime on one side, we concluded that this prime divided the other side. Dividing both sides by this prime yielded a smaller (and therefore unique by assumption) factorization. This was the gist of the argument.

The last major bit of the week was the Chinese Remainder Theorem, which is awesome enough (and which I have enough to say about) that it will get its own post – which I’m working on now.

I’ll see you all in class tomorrow.

Posted in Brown University, Expository, Math.NT, Mathematics | 1 Comment

Calculations with a Gauss-type Sum

It’s been a while since I’ve posted – I’m sorry. I’ve been busy, perhaps working on a paper (let’s hope it becomes a paper) and otherwise trying to learn things. This post is very closely related to some computations that have been coming up in what I’m currently looking at (in particular, looking at the h-th coefficients of Eisenstein series of half-integral weight). I hope to write a very expository-level article on the project that I’ve been working on, outsourcing the behind-the-scenes computations to posts such as this one, where they are provided completely.

I’d like to add that this post took almost no time to write, as I used some vim macros and latex2wp to automatically convert a segment of something I’d written into wordpress-able html containing the latex. And that’s pretty awesome.

There is a particular calculation that I’ve had to do repeatedly recently, and that I will mention and use again. In an effort to have a readable account of this calculation, I present one such account here. Finally, I cannot help but say that this (and the next few posts, likely) are all joint work with Chan and Mehmet, also from Brown University.

Let us consider the following generalized Gauss Sum:

$latex \displaystyle H_m(c’) : = \varepsilon_{c’} \sum_{r_1\bmod c’}\left(\frac{r_1}{c’}\right) e^{2 \pi i m\frac{r_1}{c’}} \ \ \ \ \ (1)$

where I let $latex {\left(\frac{\cdot}{\cdot}\right)}$ be the Legendre Symbol, and where $latex {\varepsilon_k}$ is the sign of the $latex {k}$th Gauss sum, so that it is $latex {1}$ if $latex {k \equiv 1 \mod 4}$ and it is $latex {i}$ if $latex {k \equiv 3 \mod 4}$. It is not defined for $latex {k}$ even.

Lemma 1 $latex {H_m(n)}$ is multiplicative in $latex {n}$.

Proof: Let $latex {n_1,n_2}$ be two relatively prime integers. Any integer $latex {a \bmod n_1n_2}$ can be written as $latex {a = b_2n_1 + b_1n_2}$, where $latex {b_1}$ runs through integers $latex {\bmod\, n_1}$ and $latex {b_2}$ runs $latex {\bmod\, n_2}$ with the Chinese Remainder Theorem. Then

$latex \displaystyle H_m(n_1n_2) = \varepsilon_{n_1n_2} \sum_{a \bmod n_1n_2} \left(\frac{a}{n_1n_2}\right) e^{2\pi i m \frac{a}{n_1n_2}}$

$latex \displaystyle = \varepsilon_{n_1n_2} \sum_{b_1 \bmod n_1} \sum_{b_2 \bmod n_2} \left(\frac{b_2n_1 +b_1n_2}{n_1n_2}\right) e^{2 \pi im\frac{b_2n_1 +b_1n_2}{n_1n_2}}$

$latex \displaystyle = \varepsilon_{n_1n_2} \sum_{b_1 \bmod n_1} \left(\frac{b_2n_1 +b_1n_2}{n_1}\right) e^{2\pi i m \frac{b_1n_2}{n_1n_2}} \sum_{b_2\bmod n_2} \left(\frac{b_2n_1 +b_1n_2}{n_2}\right) e^{2\pi im\frac{b_2n_1}{n_1n_2}}$

$latex \displaystyle = \varepsilon_{n_1n_2} \sum_{b_1 \bmod n_1} \left(\frac{b_1n_2}{n_1}\right) e^{2\pi i m \frac{b_1}{n_1}} \sum_{b_2\bmod n_2} \left(\frac{b_2n_1}{n_2}\right) e^{2\pi im\frac{b_2}{n_2}}$

$latex \displaystyle = \varepsilon_{n_1n_2}\left(\frac{n_2}{n_1}\right)\left(\frac{n_1}{n_2}\right)\sum_{b_1 \bmod n_1} \left(\frac{b_1}{n_1}\right) e^{2\pi i m \frac{b_1}{n_1}} \sum_{b_2\bmod n_2} \left(\frac{b_2}{n_2}\right) e^{2\pi im\frac{b_2}{n_2}}$

$latex \displaystyle = \varepsilon_{n_1n_2} \varepsilon_{n_1}^{-1} \varepsilon_{n_2}^{-1} \left(\frac{n_2}{n_1}\right)\left(\frac{n_1}{n_2}\right) H_m(n_1)H_{m}(n_2)$

Using quadratic reciprocity, we see that $latex {\varepsilon_{n_1n_2} \varepsilon_{n_1}^{-1} \varepsilon_{n_2}^{-1} \left(\frac{n_2}{n_1}\right)\left(\frac{n_1}{n_2}\right)= 1}$, so that $latex {H_m(n_1n_2) = H_m(n_1)H_m(n_2)}$. $latex \Box$
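As a numerical sanity check of the lemma (my own verification, not part of the derivation), we can compute $latex {H_m(c')}$ directly from the definition, implementing the quadratic symbol as a Jacobi symbol; the function names below are mine.

```python
# Illustrative sketch (not from the post): H_m(c') from the definition.
import cmath

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd positive n."""
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:                # pull out factors of 2
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                      # quadratic reciprocity flip
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def H(m, c):
    """H_m(c) = eps_c * sum over r mod c of (r/c) e^(2 pi i m r / c), c odd."""
    eps = 1 if c % 4 == 1 else 1j
    return eps * sum(jacobi(r, c) * cmath.exp(2j * cmath.pi * m * r / c)
                     for r in range(c))

assert abs(H(1, 15) - H(1, 3) * H(1, 5)) < 1e-9
assert abs(H(2, 21) - H(2, 3) * H(2, 7)) < 1e-9
```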

Let’s calculate $latex {H_m(p^k)}$ for prime powers $latex {p^k}$. Let $latex {\zeta = e^{2\pi i /p^k}}$ be a primitive $latex {p^k}$th root of unity. First we deal with the case of odd $latex {p}$ with $latex {p \nmid m}$. If $latex {k = 1}$, we have the typical quadratic Gauss sum multiplied by $latex {\varepsilon_p}$:

$latex \displaystyle H_m(p) = \varepsilon_p \sum_{a \bmod p} e^{2\pi i m \frac a p}\left(\frac a p\right) = \varepsilon_p \left(\frac m p\right) \varepsilon_p \sqrt p = \left(\frac{-m} p\right) \sqrt p \ \ \ \ \ (2)$

For $latex {k > 1}$, we will see that $latex {H_m(p^k)}$ is $latex {0}$. We split into cases when $latex {k}$ is even or odd. If $latex {k}$ is even, then we are just summing the primitive $latex {p^k}$th roots of unity, which is $latex {0}$. If $latex {k>1}$ is odd,

$latex \displaystyle \sum_{a\bmod p^k} \zeta^a \left(\frac a {p^k}\right) = \sum_{a\bmod p^k} \zeta^a \left(\frac{a}{p}\right) = \sum_{b \bmod p}\sum_{c\bmod p^{k-1}} \zeta^{b+pc} \left(\frac b p\right) $

$latex \displaystyle = \sum_{b\bmod p} \zeta^b \left(\frac b p\right) \sum_{c\bmod p^{k-1}} \zeta^{pc} = 0, \ \ \ \ \ (3)$

since the inner sum is again a sum of roots of unity. Thus

$latex \displaystyle \left(1+ \frac{\left(\frac{(-1)^{k + 1/2}}{p}\right)H_m(p)}{p^{2s}} + \frac{\left(\frac{(-1)^{k + 1/2}}{p^2}\right)H_m(p^2)}{p^{4s}} + \cdots \right) =$

$latex \displaystyle = \left(1+ \frac{\left(\frac{(-1)^{k + 1/2}}{p}\right)H_m(p)}{p^{2s}}\right) $

$latex \displaystyle = \left(1+ \left(\frac {-m(-1)^{k + 1/2}}{p}\right)\frac{1}{p^{2s-\frac12}} \right) $

$latex \displaystyle = \left. \left(1-\frac1{p^{4s-1}}\right) \middle/ \left(1- \left(\frac{m(-1)^{k - 1/2}}{p}\right)\frac{1}{p^{2s-\frac12}}\right)\right.$

Notice that this matches up with the $latex {p}$th part of the Euler product for $latex {\displaystyle \frac{L(2s-\frac12,\left(\frac{m(-1)^{k - 1/2}}{\cdot}\right))}{\zeta(4s-1)}}$.
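These prime-power evaluations can be checked numerically (again my own verification, computing $latex {H_m}$ directly from the definition; `jacobi` below is my Jacobi symbol implementation):

```python
# Illustrative sketch (not from the post): H_m(p^k) for odd p not dividing m.
import cmath

def jacobi(a, n):
    # Jacobi symbol (a/n) for odd positive n
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def H(m, c):
    eps = 1 if c % 4 == 1 else 1j
    return eps * sum(jacobi(r, c) * cmath.exp(2j * cmath.pi * m * r / c)
                     for r in range(c))

p, m = 7, 3                                             # odd p with p not dividing m
assert abs(H(m, p) - jacobi(-m, p) * p ** 0.5) < 1e-9   # H_m(p) = (-m/p) sqrt(p)
assert abs(H(m, p ** 2)) < 1e-9                         # k = 2: the sum vanishes
assert abs(H(m, p ** 3)) < 1e-9                         # k = 3: the sum vanishes
```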

Now consider those odd $latex {p}$ such that $latex {p\mid m}$. Suppose $latex {p^l \parallel m}$. Then $latex {e^{2 \pi i m / p^k} = \zeta^m}$ is a primitive $latex {p^{k-l}}$th root of unity (or $latex {1}$ if $latex {l \geq k}$). If $latex {l \geq k}$, then

$latex \displaystyle \sum_{a \bmod p^k} \zeta^{am} \left(\frac{a}{p^k}\right) = \sum_{a \bmod p^k} \left(\frac{a}{p^k}\right) = \begin{cases} 0 &\text{if } 2\not | k \\ \phi(p^k) &\text{if } 2 \mid k \end{cases} \ \ \ \ \ (4)$

If $latex {k=l+1}$ and $latex {k}$ is odd, then we essentially have a Gauss sum

$latex \displaystyle \sum_{a\bmod p^k} \zeta^{am} \left(\frac{a}{p^k}\right) = \sum_{a\bmod p^k}\zeta^{am} \left(\frac a p\right) = $

$latex \displaystyle = \sum_{c\bmod p^{k-1}} \zeta^{pcm} \sum_{b\bmod p} \zeta^{bm} \left(\frac b p\right) = p^{k-1}\left(\frac{m/p^l}{p}\right)\varepsilon_p\sqrt p $

If $latex {k = l+1}$ and $latex {k}$ is even, noting that $latex {\zeta^m}$ is a $latex {p}$th root of unity,

$latex \displaystyle \sum_{a\bmod p^k} \zeta^{am}\left(\frac {a}{p^k}\right) = \sum_{\substack{a\bmod p^k\\(a,p) = 1}} \zeta^{am} =$

$latex \displaystyle = \sum_{a\bmod p^k}\zeta^{am} - \sum_{a\bmod p^{k-1}}\zeta^{pam} = 0 - p^{k-1} = -p^l. $

If $latex {k>l+1}$ then the sum will be zero. For $latex {k}$ even, this follows from the previous case. If $latex {k}$ is odd,

$latex \displaystyle \sum_{a\bmod p^k} \zeta^{am} \left(\frac a{p^k}\right) = \sum_{b\bmod p}\zeta^{bm} \left(\frac b p \right)\sum_{c\bmod p^{k-1}}\zeta^{pmc}= 0. $
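The case analysis for $latex {p \mid m}$ can likewise be verified numerically (my own check; `S` below is the inner sum $latex {\sum_{a \bmod p^k} \zeta^{am} \left(\frac{a}{p^k}\right)}$ without the $latex {\varepsilon}$ factor, and `jacobi` is my Jacobi symbol implementation):

```python
# Illustrative sketch (not from the post): the inner sum when p divides m.
import cmath

def jacobi(a, n):
    # Jacobi symbol (a/n) for odd positive n
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def S(m, pk):
    # sum over a mod p^k of (a/p^k) e^(2 pi i m a / p^k)
    return sum(jacobi(a, pk) * cmath.exp(2j * cmath.pi * m * a / pk)
               for a in range(pk))

p = 5
assert abs(S(p ** 3, p)) < 1e-9                      # l >= k, k odd: the sum is 0
assert abs(S(p ** 3, p ** 2) - (p ** 2 - p)) < 1e-9  # l >= k, k even: phi(p^2) = 20
assert abs(S(p, p ** 2) + p) < 1e-9                  # k = l + 1 even: -p^l = -5
```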

Now, consider the Dirichlet series

$latex \displaystyle \sum_{c > 0, \tt odd} \frac{H_m(c)}{c^{2s}} = \prod_{p \neq 2} \left( 1 + \frac{H_m(p)}{p^{2s}} + \frac{H_m(p^2)}{p^{4s}} + \ldots\right)$.

Let us combine all these facts to construct the $latex {p}$th factor of the Dirichlet series in question, for $latex {p}$ dividing $latex {m}$. Assume first that $latex {p^l\parallel m}$ with $latex {l}$ even,

$latex \displaystyle 1 + \frac{\left(\frac{(-1)^{k + 1/2}}{p}\right)H_m(p)}{p^{2s}} + \frac{\left(\frac{(-1)^{k + 1/2}}{p^2}\right)H_m(p^2)}{p^{4s}}+ \cdots = $

$latex \displaystyle = \left( 1+ \varepsilon_{p^2}\frac{\phi(p^2)}{p^{4s}} + \cdots + \varepsilon_{p^l}\frac{\phi(p^l)}{p^{2ls}} + \varepsilon_{p^{l+1}}\frac{\left(\frac{(-1)^{k + 1/2}m/p^l}{p}\right)\varepsilon_p \sqrt p p^l}{p^{2(l+1)s}}\right)$

$latex \displaystyle = \left( 1+\frac{\phi(p^2)}{p^{4s}} + \frac{\phi(p^4)}{p^{8s}}+\cdots +\frac{\phi(p^{l})}{p^{2ls}} + \frac{\left(\frac{(-1)^{k - 1/2}m/p^l}{p}\right)p^{l+\frac12}}{p^{2(l+1)s}}\right) $

$latex \displaystyle = \left(1+ \frac{p^2 - p}{p^{4s}} + \cdots + \frac{p^{l}-p^{l-1}}{p^{2ls}} + \frac{\left(\frac{(-1)^{k - 1/2}m/p^l}{p}\right)p^{l+\frac12}}{p^{2(l+1)s}}\right)$

$latex \displaystyle = \left(1-\frac{1}{p^{4s-1}}\right)\left(1+\frac{1}{p^{4(s-\frac12)}} +\cdots + \frac{1}{p^{2(l-2)(s-\frac12)}}\right)+$

$latex \displaystyle +\frac{p^l}{p^{2ls}} \left(1+ \frac{\left(\frac{(-1)^{k - 1/2}m/p^l}{p}\right)}{p^{2s-\frac12}}\right)$

$latex \displaystyle = \left(1-\frac{1}{p^{4s-1}}\right) \left(1+ \frac{1}{p^{4(s-\frac12)}}+\cdots +\right. $

$latex \displaystyle + \left. \frac{1}{p^{2(l-2)(s-\frac12)}} + \frac{1}{p^{2l(s-\frac12)}}\left(1-\frac{\left(\frac{(-1)^{k - 1/2}m/p^l}{p}\right)}{p^{2s-\frac12}}\right)^{-1}\right)$

$latex \displaystyle = \left(1-\frac{1}{p^{4s-1}}\right) \left(\sum_{i=0}^{\lfloor \frac{l-1}{2} \rfloor} \frac{1}{p^{4(s-\frac12)i}} +\frac{1}{p^{2l(s-\frac12)}}\left(1-\frac{\left(\frac{(-1)^{k - 1/2}m/p^l}{p}\right)}{p^{2s-\frac12}}\right)^{-1} \right)$

because for even $latex {k}$, $latex {\varepsilon_{p^k} = 1}$, and for odd $latex {k}$, $latex {\varepsilon_{p^k} = \varepsilon_p}$. Similarly, for $latex {l}$ odd,

$latex \displaystyle 1+ \frac{\left(\frac{(-1)^{k + 1/2}}{p}\right)H_m(p)}{p^{2s}} +\frac{\left(\frac{(-1)^{k + 1/2}}{p^2}\right)H_m(p^2)}{p^{4s}}+ \cdots $

$latex \displaystyle = \left( 1+ \varepsilon_{p^2}\frac{\phi(p^2)}{p^{4s}} + \varepsilon_{p^4}\frac{\phi(p^4)}{p^{8s}} + \cdots + \varepsilon_{p^{l-1}}\frac{\phi(p^{l-1})}{p^{2(l-1)s}} + \varepsilon_{p^{l+1}}\frac{- p^l}{p^{2(l+1)s}}\right)\nonumber$

$latex \displaystyle = \left( 1+\frac{\phi(p^2)}{p^{4s}} + \frac{\phi(p^4)}{p^{8s}}+\cdots +\frac{\phi(p^{l-1})}{p^{2(l-1)s}} + \frac{-p^{l}}{p^{2(l+1)s}}\right) \nonumber$

$latex \displaystyle = \left(1+ \frac{p^2 - p}{p^{4s}} + \frac{p^4-p^3}{p^{8s}} + \cdots + \frac{p^{l-1}-p^{l-2}}{p^{2(l-1)s}} - \frac{p^l}{p^{2(l+1)s}}\right) \nonumber$

$latex \displaystyle = \left(1-\frac{1}{p^{4s-1}}\right)\left(\sum_{i=0}^{\frac{l-1}{2}} \frac{1}{p^{4(s-\frac12)i}}\right)$

Putting this together, we get that

$latex \displaystyle \prod_{p \neq 2} \left(\sum_{k=0}^\infty \frac{H_m(p^k)}{p^{2ks}}\right) = \frac{L_2(2s-\frac12,\left(\frac{m(-1)^{k - 1/2}}{\cdot}\right))}{\zeta_{2}(4s-1)} \times $

$latex \displaystyle \phantom{\sum \sum\sum\sum} \prod_{p^l \parallel m, p\neq 2} \left(\sum_{i=0}^{\lfloor \frac{l-1}{2} \rfloor} \frac{1}{p^{4(s-\frac12)i}} +\frac{\mathbf{1}_{2{\mathbb Z}}(l)}{p^{2l(s-\frac12)}}\left(1-\frac{\left(\frac{(-1)^{k - 1/2}m/p^l}{p}\right)}{p^{2s-\frac12}}\right)^{-1}\right) \ \ \ \ \ (5)$

Posted in Math.NT, Mathematics | Leave a comment

An elementary proof of when 2 is a quadratic residue

This has been a week of asking and answering questions from emails, as far as I can see. I want to respond to two questions publicly, though, as they have some interesting value. So this is the first of a pair of blog posts. One is a short and sweet elementary proof of when $latex 2$ is a quadratic residue of a prime, responding to Moschops’s comments on an earlier blog post. But to continue my theme of some good and some bad, I’d also like to consider the latest “proof” of the Goldbach conjecture (which I’ll talk about in the next post tomorrow). More after the fold:


Posted in Expository, Math.NT, Mathematics, SE | 7 Comments

Precalculus Supplement: Synthetic Division

I think it is a sign.

In the question How does Synthetic Division Work? by the user Riddler on math.stackexchange, Riddler says that he’s never seen a proof of Synthetic Division. This gave me a great case of Mom’s Corollary (the generalization of the fact that when mothers tell you something, you are often reminded of specific cases of it within three days – at least with my mom), as it came up with a student whom I’m tutoring. It turns out many of my students haven’t liked synthetic division. I chatted with some of the other Brown grads, and in general, they didn’t like synthetic division either.

It was one of those things that was taught to us before we thought about why different things worked. Somehow, it wasn’t interesting or useful enough to be revisited. But let’s take a look at synthetic division after the fold:


Posted in Brown University, Mathematics, precalculus | 3 Comments