I suddenly have college degrees to my name. In some sense, I think that I should feel different – but all I’ve really noticed is that I’ve much less to do. Fewer deadlines, anyway. So now I can blog again! Unfortunately, I won’t quite be able to blog as much as I might like, as I will be traveling quite a bit this summer. In a few days I’ll hit Croatia.

Georgia Tech is magnificent at helping its students through their first few tough classes. Although the average size of each of the four calculus classes is around 150 students, they are broken up into 30-person recitations with a TA (usually a good thing, but no promises). Some classes have optional ‘Peer Led Undergraduate Study’ programs, where TA-level students host additional hours to help students master exercises over the class material. There is free tutoring available in many of the freshman dorms on most, if not all, nights of the week. If that doesn’t work, there is also free tutoring available from the Office of Minority Education or the Department of Success Programs – the host of the so-called 1-1 Tutoring program (I was a tutor there for two years). One can schedule 1-1 appointments between 8 am and something like 9 pm, and you can choose your tutor. For the math classes, each professor and TA holds office hours, and there is a general TA lounge where most questions can be answered, regardless of whether one’s TA is there. Finally, there is also the dedicated ‘Math Lab,’ a place where 3-4 highly educated math students (usually math grad students, though there are a couple of math seniors) are available each hour between 10 am and 4 pm (something like that – I had Thursday from 1-2 pm, for example). It’s a good theory.

During Dead Week, the week before finals, I had a group of Calc I students during my Math Lab hour. They were asking about integration by parts – when in the world is it useful? At first, I had a hard time saying something that they accepted as valuable – it’s an engineering school, and the things I find interesting do not appeal to the general engineering population of Tech. I thought back on my years at Tech (as this was my last week as a student there, it put me in a very nostalgic mood), and I realized that I associate IBP most with my quantum mechanics classes with Dr. Kennedy. In general, the way to solve those questions was to find some sort of basis of eigenvectors, normalize everything, take more inner products than you want, integrate by parts until it becomes meaningful, and then exploit as much symmetry as possible. Needless to say, that didn’t satisfy their question.
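For the record, the formula in question is just the product rule read backwards: for differentiable functions $latex u$ and $latex v$,

$latex \displaystyle \int u\,dv = uv - \int v\,du.$

Everything below is some way of convincing an engineer that this identity earns its keep.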

There are the very obvious answers. One derives Taylor’s formula and error with integration by parts:

$latex \begin{array}{rl}

f(x) &= f(0) + \int_0^x f'(x-t) \,dt\\

&= f(0) + xf'(0) + \displaystyle \int_0^x tf''(x-t)\,dt\\

&= f(0) + xf'(0) + \frac{x^2}2f''(0) + \displaystyle \int_0^x \frac{t^2}2 f'''(x-t)\,dt

\end{array}

$ … and so on.

But in all honesty, Taylor’s theorem is rarely used to estimate values of a function by hand, and arguing that it is useful to know at least the bare bones of the theory behind one’s field is an uphill battle. This would prevent me from mentioning the derivation of the Euler-Maclaurin formula as well.

I appealed to aesthetics: Taylor’s Theorem says that $latex \displaystyle \sum_{n\ge0} x^n/n! = e^x$, but repeated integration by parts yields that $latex \displaystyle \int_0^\infty x^n e^{-x} dx=n!$. That’s sort of cool – and not as obvious as it might appear at first. Although I didn’t mention it then, we also have the pretty result that applying integration by parts $latex n$ times yields $latex \displaystyle \int_0^1 \dfrac{ (-x\log x)^n}{n!} dx = (n+1)^{-(n+1)}$. Summing over $latex n$, and remembering the Taylor expansion for $latex e^x$, one gets that $latex \displaystyle \int_0^1 x^{-x} dx = \displaystyle \sum_{n=1}^\infty n^{-n}$.
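The factorial integral falls out of a single integration by parts, taking $latex u = x^n$ and $latex dv = e^{-x}\,dx$; the boundary term vanishes at both ends:

$latex \displaystyle \int_0^\infty x^n e^{-x}\,dx = \left[-x^n e^{-x}\right]_0^\infty + n\int_0^\infty x^{n-1} e^{-x}\,dx = n\int_0^\infty x^{n-1} e^{-x}\,dx.$

Each pass peels off one factor, so iterating down to $latex \int_0^\infty e^{-x}\,dx = 1$ leaves exactly $latex n!$.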

Finally, I decided to appeal to that part of the student that wants only to do well on tests. For a differentiable function $latex f$ with inverse $latex f^{-1}$, we have:

$latex \displaystyle \int f(x)dx = xf(x) - \displaystyle \int xf'(x)dx$

$latex = xf(x) - \displaystyle \int f^{-1}(f(x))f'(x)dx = xf(x) - \displaystyle \int f^{-1}(u)du$.

In other words, knowing the integral of $latex f$ gives the integral of $latex f^{-1}$ very cheaply, and this is why we use integration by parts to integrate things like $latex \ln x$, $latex \arctan x$, etc. Similarly, one gets the reduction formulas necessary to integrate $latex \sin^n (x)$ or $latex \cos^n (x)$. If one believes that being able to integrate things is useful, then these are useful. There is of course the other class of functions such as $latex \cos(x)\sin(x)$ or $latex e^x \sin(x)$, where one integrates by parts twice and solves for the integral. I still think that’s really cool – sort of like getting something for nothing.
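To see both tricks in action: taking $latex f(x) = \ln x$ in the identity above,

$latex \displaystyle \int \ln x\,dx = x\ln x - \int x \cdot \frac{1}{x}\,dx = x\ln x - x + C.$

And for the second class, two integrations by parts bring the original integral back with a minus sign:

$latex \displaystyle \int e^x \sin x\,dx = e^x \sin x - \int e^x \cos x\,dx = e^x(\sin x - \cos x) - \int e^x \sin x\,dx,$

so solving for the integral gives $latex \displaystyle \int e^x \sin x\,dx = \frac{e^x(\sin x - \cos x)}{2} + C$.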

And at the end of the day, they were satisfied. But this might be the crux of the problem that explains why so many Tech students, despite having so many resources for success, still fail – they have to trudge through a whole lot of ‘useless theory’ just to get to the ‘good stuff.’