Monthly Archives: May 2011

Visiting Gaussian Quadrature

I frequent math.stackexchange these days (if you haven’t heard of it, you should go check it out), and every once in a while I am stunned by a solution or a thoughtful question. In my Numerical Analysis class during my last semester as an undergrad (last semester, woo hoo!), I came up against Gaussian Quadrature estimates for integration. It’s a very cool technique, but the system of equations to solve seems very challenging – in fact, it feels like one must fall back on a numerical approximation method to solve it. While I don’t have any big qualms with numerical solutions, I much prefer exact ones. Here is the best method I’ve seen for solving these systems (this is for 3 points, but the same idea works for 1, 2, and 4 points as well), and all credit must be given to the user Aryabhatta at Math SE, from this post.

The task is easy to state: solve the following system:

$latex \begin{aligned}
a + b + c &= m_0 \\
ax + by + cz &= m_1 \\
ax^2 + by^2 + cz^2 &= m_2 \\
ax^3 + by^3 + cz^3 &= m_3 \\
ax^4 + by^4 + cz^4 &= m_4 \\
ax^5 + by^5 + cz^5 &= m_5
\end{aligned}$

We are to solve for x, y, z, a, b, and c; the moments $latex m_i$ are given. This system is unfortunately nonlinear. When I first came across such a nonlinear system, I barely recognized how annoying it would be to solve. It would seem that for too many years, the solutions to most of the questions I came across were too pretty to demand such ‘vulgar’ attempts to solve them. Anyhow, one could use iterative methods to arrive at a solution. Or one could use the Golub-Welsch algorithm (which I also discovered at Math SE). One could use resultants, which I did in my class. Or one could be witty.

Let’s introduce three new variables p, q, and r by declaring x, y, and z to be the roots of $latex t^3 + pt^2 + qt + r$. Then we have

$latex \begin{aligned}
x^3 + px^2 + qx + r &= 0 \\
y^3 + py^2 + qy + r &= 0 \\
z^3 + pz^2 + qz + r &= 0
\end{aligned}$

Multiply equation (1) by $latex a$, equation (2) by $latex b$, and equation (3) by $latex c$ and add. Then we get

$latex m_3 + pm_2 + qm_1 + rm_0 = 0 $

Multiply equation (1) by $latex ax$, equation (2) by $latex by$, and equation (3) by $latex cz$ and add. Then we get

$latex m_4 + pm_3 + qm_2 + rm_1 = 0 $

Finally (you might have guessed it) multiply equation (1) by $latex ax^2$, equation (2) by $latex by^2$, and equation (3) by $latex cz^2$ and add. Then we get

$latex m_5 + pm_4 + qm_3 + rm_2 = 0 $

Now (4), (5), and (6) form a set of three linear equations in the variables p, q, and r. Solving them yields our cubic. We can then solve the cubic (perhaps using Cardano’s formula, etc.) for x, y, and z. And once we know x, y, and z, only a linear system remains to be solved for the weights a, b, and c. That’s way cool!
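The whole recipe is short enough to sketch in code. Below is an illustrative Python/numpy version (my own sketch, not from the original post) for the classical case of the weight function $latex w(t) = 1$ on $latex [-1, 1]$, whose moments are $latex m_k = \int_{-1}^1 t^k \, dt$:

```python
import numpy as np

# Moments of the weight w(t) = 1 on [-1, 1]: m_k = 2/(k+1) for even k, 0 for odd k.
m = [2.0 / (k + 1) if k % 2 == 0 else 0.0 for k in range(6)]

# Equations (4)-(6): a linear system for the cubic's coefficients p, q, r.
A = np.array([[m[2], m[1], m[0]],
              [m[3], m[2], m[1]],
              [m[4], m[3], m[2]]])
p, q, r = np.linalg.solve(A, -np.array([m[3], m[4], m[5]]))

# The nodes x, y, z are the roots of t^3 + p t^2 + q t + r.
# (The roots are real here; .real drops any zero imaginary parts.)
nodes = np.sort(np.roots([1.0, p, q, r]).real)

# With the nodes in hand, the weights a, b, c solve a linear (Vandermonde) system.
V = np.vander(nodes, 3, increasing=True).T   # rows: [1,1,1], [x,y,z], [x^2,y^2,z^2]
weights = np.linalg.solve(V, np.array(m[:3]))

print(nodes)    # approximately 0 and +-sqrt(3/5)
print(weights)  # approximately [5/9, 8/9, 5/9]
```

The recovered nodes are $latex 0, \pm\sqrt{3/5}$ and the weights are $latex 8/9, 5/9, 5/9$ – exactly the textbook 3-point Gauss-Legendre rule.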

Posted in Expository, Math.NA, Mathematics | 4 Comments

Factoring I

I remember when I first learnt that testing for primality is in P (as shown in the paper “PRIMES is in P,” which explains the AKS algorithm). Some time later, I was talking with a close friend of mine (who has since received his bachelor’s in Computer Science). He found it hard to believe that it is possible to determine whether a number is prime without factoring that number. That’s pretty cool. The AKS algorithm doesn’t even rely on anything really deep – it’s just a clever application of many (mostly) elementary results. Both of us were well aware of the fact that numbers are, as an understatement, hard to factor. My interest in factoring algorithms has suddenly surged again, so I’m going to look through some factoring algorithms (other than my own interesting algorithm, which happens to be terribly slow).
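To give a flavor of how one can test primality without ever producing a factor, here is a sketch (my illustration, not AKS itself) of the much simpler Fermat test. Unlike AKS it is probabilistic, and Carmichael numbers can fool it (Miller-Rabin closes that gap), but it shows the key surprise: a number can be certified composite with no factor in hand.

```python
import random

def fermat_is_probable_prime(n, trials=20):
    """Fermat test: if n is prime then a^(n-1) = 1 (mod n) for every a with
    gcd(a, n) = 1.  A composite n usually fails this for a random base a,
    so we declare 'composite' without ever finding a factor of n."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    for _ in range(trials):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False   # certified composite -- yet no factor in hand
    return True            # probably prime

print(fermat_is_probable_prime(2**31 - 1))  # True  (a Mersenne prime)
print(fermat_is_probable_prime(2**31))      # False
```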


Posted in Expository, Math.NT, Mathematics | 1 Comment

Daily Math in Zagreb

So I’m in Zagreb now, and naturally this means that I’ve not updated this blog in a while. But this is not to say that I haven’t been doing math! In fact, I’ve been doing lots, even little things to impress the girl. ‘Math to i-impress the g-girl?’ you might stutter, a little insalubriously. Yes! Math to impress the girl!

She is working on finishing her last undergrad thesis right now, which is what brings us to Croatia (she works, I play – the basis for a strong relationship, I think… but I’m on my way to becoming a mathematician, which isn’t really so different from play). After a few ‘average’ days of thesis writing, she had one above-and-beyond successful day. This is good, because she is very happy on successful days and gets dissatisfied after a bad writing day. So what does a knowledgeable and thoughtful mathematician do? It’s time for a mathematical interlude –

Gambling and Regression to the Mean

There is a very well-known fallacy known as the Gambler’s Fallacy, which is best explained through examples. This is the part of our intuition that sees a roulette wheel spin red 10 times in a row and thinks, ‘I bet it will spin black now, to catch up.’ Or someone tosses heads 10 times in a row, and we start to bet that tails is more likely than before. Of course, this is fallacious thinking – neither roulette wheels nor coins have any memory. They don’t ‘remember’ that they’re on some sort of streak, and the odds are the same from one toss to the next (assuming everything is fair – conceivably the coin is two-headed, or the roulette wheel is unbalanced, or something).

The fact that a coin flip always has about even odds, and that the odds at roulette are always slightly against the gambler, is what allows casinos to expect to make money. It also distinguishes these games from games with ‘memory,’ such as blackjack (I happen to think that Bringing Down the House is a fun read). But that’s another story.

But the related concept of ‘Regression to the Mean’ holds more truth – it says that the means of larger and larger sets of outcomes should approach the expected mean (perhaps called the ‘actual mean’ – flipping a fair coin should give about half heads and half tails, for instance). So if someone flips a coin 20 times and gets heads all 20 times, we would expect them to get fewer than 20 heads in the next 20 throws. Note, I didn’t say that tails are more likely than heads!
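A quick simulation makes the distinction concrete (a Python sketch of my own; the batch size and the streakiness cutoff are arbitrary choices). Batches of flips that happen to be streaky are followed by batches that are, on average, perfectly ordinary – regression to the mean, with no compensating bias toward tails:

```python
import random

random.seed(42)   # reproducible

BATCH = 20   # flips per batch (arbitrary choice)

def heads_in_batch():
    """Number of heads in BATCH fair coin flips."""
    return sum(random.random() < 0.5 for _ in range(BATCH))

# Whenever a batch is streaky (>= 15 heads out of 20), record how the
# *next* batch does.  Every flip is still 50/50: the follow-up batches
# aren't biased toward tails, they are simply ordinary -- mean about 10.
extreme_batches, followups = [], []
prev = heads_in_batch()
for _ in range(200_000):
    cur = heads_in_batch()
    if prev >= 15:
        extreme_batches.append(prev)
        followups.append(cur)
    prev = cur

print(sum(extreme_batches) / len(extreme_batches))  # about 15.4: the streaky batches
print(sum(followups) / len(followups))              # about 10.0: the batches right after
```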

Back to the Girl

So how does this relate? I anticipated that the next day of writing would not be as good as the previous, and that she might accordingly be a bit disappointed with herself for it. And, the next day – she was! But alas, I came prepared with sour cherry juice (if you’ve never had it, you’re missing out), and we picked up some strawberries. Every day is better if it includes sour cherry juice and strawberries.

Posted in Humor, Mathematics, Story | Leave a comment

Integration by Parts

I suddenly have college degrees to my name. In some sense, I think that I should feel different – but all I’ve really noticed is that I’ve much less to do. Fewer deadlines, anyway. So now I can blog again! Unfortunately, I won’t quite be able to blog as much as I might like, as I will be traveling quite a bit this summer. In a few days I’ll hit Croatia.

Georgia Tech is magnificent at helping its students through their first few tough classes. Although the average size of each of the four calculus classes is around 150 students, they are broken up into 30-person recitations with a TA (usually a good thing, but no promises). Some classes have optional ‘Peer Led Undergraduate Study’ programs, where TA-level students host additional hours to help students master exercises over the class material. There is free tutoring available in many of the freshman dorms on most, if not all, nights of the week. If that doesn’t work, there is also free tutoring available from the Office of Minority Education or the Department of Success Programs – the host of the so-called 1-1 Tutoring program (I was a tutor there for two years). One can schedule 1-1 appointments between 8 am and something like 9 pm, and you can choose your tutor. For the math classes, each professor and TA holds office hours, and there is a general TA lounge where most questions can be answered, regardless of whether one’s TA is there. Finally, there is also the dedicated ‘Math Lab,’ a place where 3-4 highly educated math students (usually math grad students, though there are a couple of math seniors) are available each hour between 10 am and 4 pm (something like that – I had Thursday from 1-2 pm, for example). It’s a good theory.

During Dead Week, the week before finals, I had a group of Calc I students during my Math Lab hour. They were asking about integration by parts – when in the world is it useful? At first, I had a hard time saying something that they accepted as valuable – it’s an engineering school, and the things I find interesting do not appeal to the general engineering population of Tech. I thought back during my years at Tech (as this was my last week as a student there, it put me in a very nostalgic mood), and I realized that I associate IBP most with my quantum mechanics classes with Dr. Kennedy. In general, the way to solve those questions was to find some sort of basis of eigenvectors, normalize everything, take more inner products than you want, integrate by parts until it becomes meaningful, and then exploit as much symmetry as possible. Needless to say, that didn’t satisfy their question.

There are the very obvious answers. One derives Taylor’s formula and error with integration by parts:

$latex \begin{aligned}
f(x) &= f(0) + \int_0^x f'(x-t) \,dt \\
&= f(0) + xf'(0) + \int_0^x tf''(x-t) \,dt \\
&= f(0) + xf'(0) + \frac{x^2}{2}f''(0) + \int_0^x \frac{t^2}{2} f'''(x-t) \,dt
\end{aligned}$

… and so on.

But in all honesty, Taylor’s theorem is rarely used to estimate values of a function by hand, and arguing that it is useful to know at least the bare bones of the theory behind one’s field is an uphill battle. This would prevent me from mentioning the derivation of the Euler-Maclaurin formula as well.

I appealed to aesthetics: Taylor’s Theorem says that $latex \displaystyle \sum_{n\ge0} x^n/n! = e^x$, but repeated integration by parts yields that $latex \displaystyle \int_0^\infty x^n e^{-x} \,dx = n!$. That’s sort of cool – and not as obvious as it might appear at first. Although I didn’t mention it then, applying integration by parts $latex n$ times also gives the pretty result $latex \displaystyle \int_0^1 \dfrac{ (-x\log x)^n}{n!} \,dx = (n+1)^{-(n+1)}$. Summing over $latex n \ge 0$, and remembering the Taylor expansion for $latex e^x$, one gets that $latex \displaystyle \int_0^1 x^{-x} \,dx = \displaystyle \sum_{n=1}^\infty n^{-n}$.
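Both identities are easy to sanity-check numerically. Here is an illustrative Python sketch using a naive midpoint rule (the tail cutoff, step counts, and choice of n = 5 are arbitrary choices of mine):

```python
import math

def midpoint(f, a, b, steps=200_000):
    """Crude composite midpoint rule for the integral of f over [a, b]."""
    h = (b - a) / steps
    return h * sum(f(a + (i + 0.5) * h) for i in range(steps))

# Integral over [0, inf) of x^n e^(-x) dx = n!  -- checked for n = 5,
# truncating the tail at x = 50, where the integrand is negligible.
n = 5
gamma_val = midpoint(lambda x: x**n * math.exp(-x), 0.0, 50.0)
print(gamma_val, math.factorial(n))   # both approximately 120

# Sophomore's dream: integral over [0, 1] of x^(-x) dx = sum of n^(-n)
# for n >= 1.  (The integrand tends to 1 as x -> 0, so midpoints are safe.)
lhs = midpoint(lambda x: x ** (-x), 0.0, 1.0)
rhs = sum(k ** (-k) for k in range(1, 25))
print(lhs, rhs)   # both approximately 1.29129
```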

Finally, I decided to appeal to that part of the student that wants only to do well on tests. For a differentiable function $latex f$ with inverse $latex f^{-1}$, integrating by parts and then substituting $latex u = f(x)$ gives:

$latex \displaystyle \int f(x)\,dx = xf(x) - \int xf'(x)\,dx = xf(x) - \int f^{-1}(f(x))f'(x)\,dx = xf(x) - \int f^{-1}(u)\,du$.

In other words, knowing the integral of $latex f^{-1}$ gives the integral of $latex f$ very cheaply, and this is why we use integration by parts to integrate things like $latex \ln x$, $latex \arctan x$, etc. Similarly, one gets the reduction formulas necessary to integrate $latex \sin^n (x)$ or $latex \cos^n (x)$. If one believes that being able to integrate things is useful, then these are useful. There is of course the other class of functions such as $latex \cos(x)\sin(x)$ or $latex e^x \sin(x)$, where one integrates by parts twice and solves for the integral. I still think that’s really cool – sort of like getting something for nothing.
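The inverse-function formula is also easy to check numerically. Here is a small Python sketch (my illustration) for $latex f = \arctan$ on $latex [0, 1]$, where the definite-integral form of the identity reads $latex \int_0^1 \arctan x \, dx = 1 \cdot \arctan 1 - \int_0^{\arctan 1} \tan u \, du$:

```python
import math

def midpoint(f, a, b, steps=100_000):
    """Crude composite midpoint rule for the integral of f over [a, b]."""
    h = (b - a) / steps
    return h * sum(f(a + (i + 0.5) * h) for i in range(steps))

# Left side: the integral of arctan directly.
lhs = midpoint(math.atan, 0.0, 1.0)

# Right side: x*f(x) at the endpoint, minus the integral of the inverse (tan).
rhs = 1.0 * math.atan(1.0) - midpoint(math.tan, 0.0, math.atan(1.0))

# The closed form x*arctan(x) - (1/2)ln(1 + x^2), evaluated at x = 1.
closed_form = math.pi / 4 - 0.5 * math.log(2)

print(lhs, rhs, closed_form)   # all approximately 0.438825
```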

And at the end of the day, they were satisfied. But this might be the crux of the problem that explains why so many Tech students, despite having so many resources for success, still fail – they have to trudge through a whole lot of ‘useless theory’ just to get to the ‘good stuff.’

Posted in Expository, Georgia Tech, Mathematics, Story | Leave a comment