Monthly Archives: November 2013

An Intuitive Overview of Taylor Series

This is a note written for my fall 2013 Math 100 class, but it was not written “for the exam,” nor does anything here subtly hint at anything on any exam. I hope it will be helpful for anyone who wants a basic understanding of Taylor series. What I want to do is get some sort of intuitive grasp on Taylor series as approximations of functions. By intuitive, I mean intuitive to those with a good grasp of functions and the basics of a first semester of calculus (derivatives, integrals, the mean value theorem, and the fundamental theorem of calculus) – so it’s a mathematical intuition. In this way, this post is a sort of follow-up to my earlier note, An Intuitive Introduction to Calculus.

PLEASE NOTE that my math compiler and my markdown compiler sometimes compete, and sometimes repeated derivatives are too high or too low by one pixel.

We care about Taylor series because they allow us to approximate other functions in predictable ways. Sometimes, these approximations can be made very, very accurate without requiring too much computing power. You might have heard that computers/calculators routinely use Taylor series to calculate things like $latex {e^x}$ (which is more or less true). But up to this point in most students’ mathematical development, mathematics has been clean and perfect; everything has been exact algorithms yielding exact answers for years and years. This is simply not the way of the world.
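To give a taste of what such an approximation looks like in practice, here is a minimal Python sketch of summing the Taylor series for $latex {e^x}$ term by term. (This is an illustration, not how real math libraries actually compute $latex {e^x}$ – they use more refined methods.)

```python
import math

def exp_taylor(x, terms=20):
    """Sum the first `terms` terms of the Taylor series for e**x at 0."""
    total, term = 0.0, 1.0
    for n in range(terms):
        total += term          # current term is x**n / n!
        term *= x / (n + 1)    # build the next term from this one
    return total

# Twenty terms already agree with math.exp to many decimal places near 0.
print(exp_taylor(1.0), math.exp(1.0))
```

Notice that each term is built from the previous one by a single multiply and divide, so the partial sum is cheap to compute.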

Here’s a fact fundamental to both mathematics and life: almost anything worth doing is probably pretty hard and pretty messy.

For a very recognizable example, let’s think about finding zeroes of polynomials. Finding roots of linear polynomials is very easy. If we see $latex {5 + x = 0}$, we see that $latex {-5}$ is the zero. Similarly, finding roots of quadratic polynomials is very easy, and many of us have memorized the quadratic formula to this end. Thus $latex {ax^2 + bx + c = 0}$ has solutions $latex {x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}}$. These are both nice, algorithmic, and exact. But I will guess that the vast majority of those who read this have never seen a “cubic polynomial formula” for finding roots of cubic polynomials (although it does exist, it is horrendously messy – look up Cardano’s formula). There is even an algorithmic way of finding the roots of quartic polynomials. But here’s something amazing: there is no general method for finding the exact roots of 5th degree polynomials (or higher degree).

I don’t mean “we haven’t found it yet, but there may be one,” or even “you’ll have to use one of these myriad ways” – I mean it has been proven (this is the Abel–Ruffini theorem) that there is no general method of finding exact roots of polynomials of degree 5 or higher. But we certainly can approximate them arbitrarily well. So even something as simple as finding roots of polynomials, which we’ve been doing since middle school, gets incredibly and unbelievably complicated.
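To make “approximate them arbitrarily well” concrete, here is a sketch of Newton’s method in Python, applied to a quintic of my own choosing (the polynomial $latex {x^5 - x - 1}$ is just an example, not from the text above):

```python
def newton_root(f, df, x0, steps=50):
    """Approximate a root of f by iterating x -> x - f(x)/df(x)."""
    x = x0
    for _ in range(steps):
        x -= f(x) / df(x)
    return x

# x^5 - x - 1 has no root formula in radicals, but Newton's method
# homes in on its real root (about 1.1673) very quickly.
f = lambda x: x**5 - x - 1
df = lambda x: 5 * x**4 - 1
root = newton_root(f, df, 1.5)
```

Each iteration roughly doubles the number of correct digits, which is exactly the sense in which “no exact formula” is no obstacle in practice.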

So before we hop into Taylor series directly, I want to get into the mindset of approximating functions with other functions.

1. Approximating functions with other functions

We like working with polynomials because they’re so easy to calculate and manipulate. So sometimes we try to approximate complicated functions with polynomials, a problem sometimes called “polynomial interpolation.”

Suppose we wanted to approximate $latex {\sin(x)}$. The most naive approximation we might make is to note that $latex {\sin(0) = 0}$, so we might approximate $latex {\sin(x)}$ by $latex {p_0(x) = 0}$. We know that it’s right at least once, and since $latex {\sin(x)}$ is periodic, it’s going to be right many times. I write $latex {p_0}$ to indicate that this is a degree $latex {0}$ polynomial, that is, a constant polynomial. Clearly though, this is a terrible approximation, and we can do better.
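As a quick numerical illustration of just how terrible $latex {p_0}$ is, compare it against the tangent line $latex {p_1(x) = x}$ (which anticipates where this is going) at a point near $latex {0}$:

```python
import math

# p0(x) = 0 matches sin at x = 0 but misses badly elsewhere;
# the tangent line p1(x) = x already does far better near 0.
x = 0.1
err0 = abs(math.sin(x) - 0.0)  # error of p0: about 0.0998
err1 = abs(math.sin(x) - x)    # error of p1: about 0.000167
```

Near the point of approximation, one extra degree buys almost three extra digits of accuracy.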


Posted in Expository, Mathematics, Teaching

Response to fattybake’s question on Reddit

We want to understand the integral

$latex \displaystyle \int_{-\infty}^\infty \frac{\mathrm{d}t}{(1 + t^2)^n}. \ \ \ \ \ (1)$

Although fattybake mentions the residue theorem, we won’t use that at all. Instead, we will be very clever.

We will use a technique that was once very common (up until the 1910s or so), but is much less common now: let’s multiply by $latex {\displaystyle \Gamma(n) = \int_0^\infty u^n e^{-u} \frac{\mathrm{d}u}{u}}$. This yields

$latex \displaystyle \int_0^\infty \int_{-\infty}^\infty \left(\frac{u}{1 + t^2}\right)^n e^{-u}\mathrm{d}t \frac{\mathrm{d}u}{u} = \int_{-\infty}^\infty \int_0^\infty \left(\frac{u}{1 + t^2}\right)^n e^{-u} \frac{\mathrm{d}u}{u}\mathrm{d}t, \ \ \ \ \ (2)$

where I interchanged the order of integration because everything converges really really nicely. Do a change of variables, sending $latex {u \mapsto u(1+t^2)}$. Notice that my nicely behaving measure $latex {\mathrm{d}u/u}$ completely ignores this change of variables, which is why I write my $latex {\Gamma}$ function that way. Also be pleased that we are squaring $latex {t}$, so that this is positive and doesn’t mess with where we are integrating. This leads us to

$latex \displaystyle \int_{-\infty}^\infty \int_0^\infty u^n e^{-u - ut^2} \frac{\mathrm{d}u}{u}\mathrm{d}t = \int_0^\infty \int_{-\infty}^\infty u^n e^{-u - ut^2} \mathrm{d}t\frac{\mathrm{d}u}{u}, \ \ \ \ \ (3)$

where I change the order of integration again. Now we have an inner $latex {t}$ integral that we can do, as it’s just the standard Gaussian integral (google this if this doesn’t make sense to you). The inner integral is

$latex \displaystyle \int_{-\infty}^\infty e^{-ut^2} \mathrm{d}t = \sqrt{\pi / u}. $

Putting this into the above yields

$latex \displaystyle \sqrt{\pi} \int_0^\infty u^{n-1/2} e^{-u} \frac{\mathrm{d}u}{u}, \ \ \ \ \ (4)$

which is exactly $latex {\sqrt{\pi} \cdot \Gamma(n-\frac12)}$, by the definition of the $latex {\Gamma}$ function.

But remember, we multiplied everything by $latex {\Gamma(n)}$ to start with. So we divide by that to get the result:

$latex \displaystyle \int_{-\infty}^\infty \frac{\mathrm{d}t}{(1 + t^2)^n} = \dfrac{\sqrt{\pi} \, \Gamma(n-\frac12)}{\Gamma(n)}. \ \ \ \ \ (5)$
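As a sanity check, the identity is easy to verify numerically. Here is a sketch using only the Python standard library; the trapezoid rule, the cutoff at $latex {\pm 60}$, and the step count are my own arbitrary choices:

```python
import math

def lhs(n, limit=60.0, steps=200000):
    """Trapezoid-rule estimate of the integral of 1/(1+t^2)^n over [-limit, limit]."""
    h = 2 * limit / steps
    total = 0.0
    for i in range(steps + 1):
        t = -limit + i * h
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight / (1 + t * t) ** n
    return total * h

def rhs(n):
    """The closed form sqrt(pi) * Gamma(n - 1/2) / Gamma(n)."""
    return math.sqrt(math.pi) * math.gamma(n - 0.5) / math.gamma(n)

# For n = 2 both sides equal pi/2, up to the truncation of the infinite range.
```

For $latex {n = 2}$ the closed form gives $latex {\sqrt{\pi} \cdot \Gamma(\frac32)/\Gamma(2) = \pi/2}$, and the numerical integral agrees to several decimal places.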

Finally, a copy of the latex file itself.

Posted in Math.NT, Mathematics

Math 100: Before second midterm

You have a midterm next week, and it’s not going to be a cakewalk.

As requested, I’m uploading the last five weeks’ worth of worksheets, with (my) solutions. A comment on the solutions: not everything is presented in full detail, but most things are presented in considerable detail (except for the occasional problem that is far, far beyond what we actually expect you to be able to do). If you have any questions about anything, let me know. Even better, ask here – maybe others have the same questions too.

Without further ado –

And since we were unable to go over the quiz in my afternoon recitation today, I’m attaching a worked solution to the quiz as well.

Again, let me know if you have any questions. I will still have my office hours on Tuesday from 2:30-4:30pm in my office (I’m aware that this happens to be immediately before the exam – that’s not by design). And I’ll be more or less responsive by email.

Study study study!

Posted in Brown University, Math 100, Mathematics

Comments vs documentation in python

In my first programming class we learned python. It went fine (I thought), I got the idea, and I moved on (although I do now see that much of what we did was not ‘pythonic’).

But now that I’m returning to programming (mostly in python), I see that I did much of it wrong. One of my biggest surprises was how wrong I was about comments. Too many comments are terrible: redundant comments make code harder to maintain, and if the code is too complex to understand without comments, it’s probably just bad code.
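A toy illustration of the kind of redundancy meant here (my own example, not from the course or any particular style guide):

```python
# Redundant: the comment restates the code and will rot when the code changes.
#   i = i + 1  # add one to i

# Better: choose names that make the intent clear without a comment.
def total_price(prices, tax_rate):
    subtotal = sum(prices)
    return subtotal * (1 + tax_rate)
```

The second version needs no comment at all, because the names carry the meaning.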

That’s not so hard, right? You read other people’s code, see their comment conventions, and move on. You sort of get into zen moments where the code becomes very clear and commentless, and there is bliss. But then, at a puzzle competition, someone wanted to use a piece of code I’d written. Sure, no problem! And the source was so beautifully clear that it almost didn’t need a single comment to understand.

But they didn’t want to read the code. They wanted to use the code. Comments are very different from documentation. The realization struck me: again, I had it all wrong. In hindsight, it seems so obvious! I’ve programmed in Java; I know about javadoc. But no one had ever actually wanted to use my code before (and wouldn’t have been able to easily if they had)!

Enter pydoc and sphinx. These are tools that generate HTML API documentation from the docstrings in the code itself. There is a cost – the docstring below each method or class has to follow specific formatting conventions. But it’s a reasonable cost, I think.
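For example, here is a made-up function carrying a reStructuredText field-list docstring, the style that both pydoc and sphinx can pick up and render:

```python
def mean(values):
    """Return the arithmetic mean of a sequence of numbers.

    :param values: a non-empty sequence of numbers
    :returns: the mean, as a float
    :raises ZeroDivisionError: if ``values`` is empty
    """
    return sum(values) / len(values)
```

The docstring lives right next to the code, so it stays in sync far more easily than a separate document would, and the generated pages give users the "how do I call this" view they actually want.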

Pydoc ships with python, and is fine for single modules or small projects. But for large projects, you’ll need something more. In fact, even the documentation linked above for pydoc was generated with sphinx:

The pydoc documentation is ‘Created using Sphinx 1.0.7.’

This isn’t to say that pydoc is bad. But I didn’t want to limit myself. Python uses sphinx, so I’ll give it a try too.

And thus I (slightly excessively, to get the hang of it) comment on my solutions to Project Euler on my github. The current documentation looks alright, and will hopefully look better as I get the hang of it.

Full disclosure – this was originally going to be a post on setting up sphinx-generated documentation on github’s pages automatically. I got sidetracked – that will be the next post.

Posted in Programming, Python