Tag Archives: mean value theorem

Choosing functions and generating figures for “When are there continuous choices for the mean value abscissa?”

In my previous note, I described some of the main ideas behind the paper “When are there continuous choices for the mean value abscissa?”, which I wrote jointly with Miles Wheeler. In this note, I discuss the process behind generating the functions and figures in our paper.

Generating the figures came in two steps: first we needed to choose which functions to plot; then we needed to figure out how to graphically solve the general mean value abscissa problem for those functions.

Afterwards, we can decide how to plot these functions well.

Choosing the right functions to plot

The first goal is to find the right functions to plot. From the discussion in our paper, this amounts to specifying certain local conditions on the function. For a first pass, we used only these prescribed local conditions.

The idea is this: to study solutions to the mean value problem, we look at the zeroes of the function $$ F(b, c) = \frac{f(b) - f(a)}{b - a} - f'(c). $$ When $F(b, c) = 0$, we see that $c$ is a mean value abscissa for $f$ on the interval $(a, b)$.

By the implicit function theorem, we can solve for $c$ as a function of $b$ around a given solution $(b_0, c_0)$ if $F_c(b_0, c_0) \neq 0$. For this particular function, $F_c(b_0, c_0) = -f''(c_0)$.
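
To make this concrete: differentiating the relation $F(b, c(b)) = 0$ gives $F_b + F_c \, c'(b) = 0$, and a short computation (using $F = 0$ to simplify $F_b$) yields $$ c'(b) = \frac{f'(b) - f'(c(b))}{(b - a)\, f''(c(b))}. $$ This is the standard implicit differentiation formula, included here for concreteness; it makes visible how $f''(c)$ in the denominator controls everything.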

More generally, it turns out that the order of vanishing of $f’$ at $b_0$ and $c_0$ governs the local behaviour of solutions in a neighborhood of $(b_0, c_0)$.

To make figures, we thus need to make functions with prescribed orders of vanishing of $f’$ at points $b_0$ and $c_0$, where $c_0$ is itself a mean value abscissa for the interval $(a_0, b_0)$.

Without loss of generality, it suffices to consider the case when $f(a_0) = f(b_0) = 0$, as otherwise we can study the function $$
g(x) = f(x) - \left( \frac{f(b_0) - f(a_0)}{b_0 - a_0}(x - a_0) + f(a_0) \right),
$$
which has $g(a_0) = g(b_0) = 0$, and the triples $(a, b, c)$ solving the mean value problem for $f$ are exactly those solving it for $g$ (subtracting an affine function changes the secant slope and the derivative by the same constant).

For consistency, we made the arbitrary choice to set $a_0 = 0$, $b_0 = 3$, and $c_0 = 1$. This simplified many of the plotting decisions, as the important points were always $0$, $1$, and $3$.

A first idea

Thus the first task is to be able to generate functions $f$ such that:

  1. $f(0) = 0$,
  2. $f(3) = 0$,
  3. $f'(1) = 0$ (so that $1$ is a mean value abscissa), and
  4. $f'(x)$ has prescribed order of vanishing at $1$, and
  5. $f'(x)$ has prescribed order of vanishing at $3$.

These conditions can all be met by an appropriate interpolating polynomial. As we are setting conditions on both $f$ and its derivatives at multiple points, this is precisely the fundamental problem of Hermite interpolation. Alternatively, this amounts to applying Taylor’s theorem at each point and then using the Chinese Remainder Theorem over $\mathbb{Q}[x]$ to combine the resulting polynomials.

There are clever ways of solving this, but the task is small enough that it doesn’t require cleverness. In fact, we used one of the laziest solutions we could think of. Given $n$ Hermite conditions, there is a unique polynomial of degree at most $n - 1$ that interpolates them. Thus we

  1. determine the degree of the polynomial,
  2. create a degree $n-1$ polynomial with variable coefficients in sympy,
  3. have sympy symbolically compute the relations the coefficients must satisfy,
  4. ask sympy to solve this symbolic system of equations.

In code, this looks like

import sympy
from sympy.abc import X, B, C, D    # Establish our variable names
def interpolate(conds):
    """
    Finds the polynomial of minimal degree that solves the given Hermite conditions.

    conds is a list of the form
      [(x1, r1, v1), (x2, r2, v2), ...]
    where the polynomial p is to satisfy p^(r_1) (x_1) = v_1, and so on.
    """
    # the degree will be one less than the number of conditions
    n = len(conds)

    # generate a symbol for each coefficient
    A = [sympy.Symbol("a[%d]" % i) for i in range(n)]

    # generate the desired polynomial symbolically
    P = sum([A[i] * X**i for i in range(n)])

    # generate the equations the polynomial must satisfy
    #
    # for each (x, r, v), sympy evaluates the rth derivative of P wrt X,
    # substitutes x in for X, and requires that this equals v.
    EQNS = [sympy.diff(P, X, r).subs(X, x) - v for x, r, v in conds]

    # solve this system for the coefficients A[i]
    SOLN = sympy.solve(EQNS, A)

    return P.subs(SOLN)

We note that we use the convention that the sympy symbol for a mathematical quantity is capitalized. For example, we think of the polynomial as being represented by $$
p(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_{n-1} x^{n-1}.
$$
In sympy variables, we think of this as

P = A[0] + A[1] * X + A[2] * X**2 + ... + A[n-1] * X**(n-1)

With this code, we can ask for the unique degree 1 polynomial which is $1$ at $1$, and whose first derivative is $2$ at $1$.

> interpolate([(1, 0, 1), (1, 1, 2)])
2*X - 1

Indeed, $2x - 1$ is this polynomial.
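
As one more small example (chosen for illustration; it isn’t in the original note), the unique quadratic with $p(0) = 0$, $p(1) = 1$, and $p'(1) = 0$:

> interpolate([(0, 0, 0), (1, 0, 1), (1, 1, 0)])
-X**2 + 2*X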

Too rigid

We have now produced a minimal Hermite solver. But there is a major downside: the unique polynomial meeting exactly the required conditions is essentially never a good didactic example. We don’t just want plots; we want beautiful, simple plots.

Add knobs to turn

We add two conditions for additional control, and hopefully for additional simplicity of the resulting plot.

Firstly, we added the constraint that $f(1) = 1$. This is a small thing, but it prescribes a scale: now all three points of interest fit within a $[0, 3] \times [0, 3]$ box.

Secondly, we also allow choosing the value of the first nonvanishing derivative at $1$ and at $3$. In reality, we treat these as parameters to change the shape of the resulting graph. Roughly speaking, if the order of vanishing of $f(x) - f(1)$ at $1$ is $k$, then near $1$ we have $f(x) \approx f(1) + f^{(k)}(1)(x - 1)^k / k!$. Morally, the larger the value of that derivative, the more the graph will resemble a (scaled) $(x - 1)^k$ near that point.

In code, we implemented this by writing functions that add the necessary Hermite conditions to the input of interpolate.

# We fix the values of a0, b0, c0.
a0 = 0
b0 = 3
c0 = 1

# We require p(a0) = 0, p(b0) = 0, p(c0) = 1, p'(c0) = 0.
BASIC_CONDS = [(a0, 0, 0), (b0, 0, 0), (c0, 0, 1), (c0, 1, 0)]

def c_degen(n, residue):
    """
    Give Hermite conditions for order of vanishing at c0 equal to `n`, with
    first nonzero residue `residue`.

    NOTE: the order `n` is in terms of f', not of f. That is, this is the amount
    of additional degeneracy to add.  This may be a source of off-by-one errors.
    """
    return [(c0, 1 + i, 0) for i in range(1, n + 1)] + [(c0, n + 2, residue)]


def b_degen(n, residue):
    """
    Give Hermite conditions for order of vanishing at b0 equal to `n`, with
    first nonzero residue `residue`.
    """
    return [(b0, i, 0) for i in range(1, n + 1)] + [(b0, n + 1, residue)]

def poly_with_degens(nc=0, nb=0, residue_c=3, residue_b=3):
    """
    Give unique polynomial with given degeneracies for this MVT problem.

    `nc` is the additional order of vanishing of f' at c0 (beyond the required
    f'(c0) = 0), with first nonzero residue `residue_c`.
    `nb` is the order of vanishing of f' at b0, with first nonzero residue `residue_b`.
    """
    conds = BASIC_CONDS + c_degen(nc, residue_c) + b_degen(nb, residue_b)
    return interpolate(conds)

Then apparently the unique degree $5$ polynomial $f$ with $f(0) = f(3) = f'(1) = 0$, $f(1) = 1$, and $f''(1) = f'(3) = 3$ is given by

> poly_with_degens()
11*X**5/16 - 21*X**4/4 + 113*X**3/8 - 65*X**2/4 + 123*X/16
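
As a quick sanity check (this verification isn’t in the original note), we can ask sympy to confirm each prescribed condition:

P = poly_with_degens()
assert P.subs(X, 0) == 0                    # f(0) = 0
assert P.subs(X, 3) == 0                    # f(3) = 0
assert P.subs(X, 1) == 1                    # f(1) = 1
assert sympy.diff(P, X).subs(X, 1) == 0     # f'(1) = 0
assert sympy.diff(P, X, 2).subs(X, 1) == 3  # f''(1) = 3
assert sympy.diff(P, X).subs(X, 3) == 3     # f'(3) = 3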

Too many knobs

In principle, this is a great solution. And if you turn the knobs enough, you can get a really nice picture. But the problem with this system (and with many polynomial interpolation problems) is that adding conditions can introduce jagged peaks and sudden changes. These behave somewhat unpredictably, even chaotically: small changes in the Hermite conditions can lead to drastic changes in the shape of the resulting polynomial.

What we really want is for the interpolator to give a polynomial that doesn’t have sudden changes.

Minimize change

The problem: the polynomial can have really rapid changes that makes the plots look bad.

The solution: minimize the polynomial’s change.

That is, if $f$ is our polynomial, then its rate of change at $x$ is $f'(x)$. Our idea is to “minimize” the average size of the derivative $f'$; this should help keep the function in frame. There are many ways to do this; we want one that fits into our existing scheme (so that it requires as little additional work as possible) and that works well.

We decide that we want to focus our graphs on the interval $(0, 4)$. Then we can measure the average size of the derivative $f'$ by (the square of) its L2 norm on $(0, 4)$: $$ L2(f) = \int_0^4 (f'(x))^2 \, dx. $$ (Minimizing the squared norm and the norm itself are equivalent, and the square is easier to compute with.)

We add an additional Hermite condition of the form (pt, order, VAL) and think of VAL as an unknown symbol. We arbitrarily decided to start with pt = 2 (so that the behavior at each of the points $0, 1, 2, 3$ is controlled in some way) and order = 1. The point itself doesn’t matter very much, since we’re going to minimize over the one-degree-of-freedom family of polynomials that interpolate the other Hermite conditions.

In other words, we are adding in the condition that $f'(2) = VAL$ for an unknown VAL.

We will have sympy compute the interpolating polynomial through its normal set of (explicit) conditions as well as the symbolic condition (2, 1, VAL). Then $f = f(\mathrm{VAL}; x)$.

Then we have sympy compute the (symbolic) L2 norm of the derivative of this polynomial over the interval $(0, 4)$, which is a function of VAL: $$L2(\mathrm{VAL}) = \int_0^4 f'(\mathrm{VAL}; x)^2 \, dx.$$

Finally, to minimize the L2 norm, we have sympy compute the derivative of $L2(\mathrm{VAL})$ with respect to VAL and find the critical points, where this derivative equals $0$. We choose the first one to give our value of VAL. (In fact, since the interpolating polynomial depends linearly on VAL, $L2(\mathrm{VAL})$ is a quadratic in VAL with nonnegative leading coefficient, so this critical point is unique and is the minimizer.)

In code, this looks like

def smoother_interpolate(conds, ctrl_point=2, order=1, interval=(0,4)):
    """
    Find the polynomial of minimal degree that interpolates the Hermite
    conditions in `conds`, and whose behavior at `ctrl_point` minimizes the L2
    norm on `interval` of its derivative.
    """
    # Add the symbolic point to the conditions.
    # Recall that D is a sympy variable
    new_conds = conds + [(ctrl_point, order, D)]

    # Find the polynomial interpolating `new_conds`, symbolic in X *and* D
    P = interpolate(new_conds)

    # Compute L2 norm of the derivative on `interval`
    L2 = sympy.integrate(sympy.diff(P, X)**2, (X, *interval))

    # Take the first critical point of the L2 norm with respect to D
    SOLN = sympy.solve(sympy.diff(L2, D), D)[0]

    # Substitute the minimizing solution in for D and return
    return P.subs(D, SOLN)


def smoother_poly_with_degens(nc=0, nb=0, residue_c=3, residue_b=3):
    """
    Give unique polynomial with given degeneracies for this MVT problem whose
    derivative on (0, 4) has minimal L2 norm.

    `nc` is the additional order of vanishing of f' at c0 (beyond the required
    f'(c0) = 0), with first nonzero residue `residue_c`.
    `nb` is the order of vanishing of f' at b0, with first nonzero residue `residue_b`.

    """
    conds = BASIC_CONDS + c_degen(nc, residue_c) + b_degen(nb, residue_b)
    return smoother_interpolate(conds)

Then apparently the degree $6$ polynomial $f$ with $f(0) = f(3) = f'(1) = 0$, $f(1) = 1$, and $f''(1) = f'(3) = 3$, and with minimal L2 derivative norm on $(0, 4)$, is given by

> smoother_poly_with_degens()
-9660585*X**6/33224848 + 27446837*X**5/8306212 - 232124001*X**4/16612424
  + 57105493*X**3/2076553 - 858703085*X**2/33224848 + 85590321*X/8306212

> sympy.N(smoother_poly_with_degens())
-0.290763858423069*X**6 + 3.30437472580762*X**5 - 13.9729157526921*X**4
  + 27.5001374874612*X**3 - 25.8452073279613*X**2 + 10.3043747258076*X

Is it much better? Let’s compute the L2 norms.

> interval = (0, 4)
> sympy.N(sympy.integrate(sympy.diff(poly_with_degens(), X)**2, (X, *interval)))
1865.15411706349

> sympy.N(sympy.integrate(sympy.diff(smoother_poly_with_degens(), X)**2, (X, *interval)))
41.1612799050325

That’s beautiful. And you know what’s better? Sympy did all the hard work.

For comparison, we can produce a basic plot using numpy and matplotlib.

import matplotlib.pyplot as plt
import numpy as np

def basic_plot(F, n=300):
    """Plot the sympy expression F, marking the points at 0, c0, and b0."""
    fig = plt.figure(figsize=(6, 2.5))
    ax = fig.add_subplot(1, 1, 1)

    # evaluate F at n evenly spaced points
    b1d = np.linspace(-.5, 4.5, n)
    f = sympy.lambdify(X, F)(b1d)

    ax.plot(b1d, f, 'k')
    ax.set_aspect('equal')
    ax.grid(True)
    ax.set_xlim([-.5, 4.5])
    ax.set_ylim([-1, 5])

    # mark the three distinguished points on the graph
    ax.plot([0, c0, b0], [0, F.subs(X, c0), F.subs(X, b0)], 'ko')
    fig.savefig("basic_plot.pdf")

Then the plot of poly_with_degens() is given by

The polynomial jumps upwards immediately and strongly for $x > 3$.

On the other hand, the plot of smoother_poly_with_degens() is given by

This stays in frame between $0$ and $4$, as desired.

Choose data to highlight and make the functions

This was enough to generate the functions for our paper. Actually, the three functions (in a total of six plots) in figures 1, 2, and 5 of our paper were hand-chosen and hand-crafted for didactic purposes: the first two are simply a cubic and a quadratic with certain points labelled. The last is the smooth-but-non-analytic, semi-pathological counterexample, which cannot be created through polynomial interpolation.

But the four functions highlighting different degenerate conditions in figures 3 and 4 were each created using this L2-minimizing interpolation system.

In particular, the function in figure 3 is

F3 = smoother_poly_with_degens(nc=1, residue_b=-3)

which is one of the simplest L2-minimizing polynomials with the typical Hermite conditions, $f''(c_0) = 0$, and the opposite of the default sign of $f'(b_0)$.

The three functions in figure 4 are (from left to right)

F_bmin = smoother_poly_with_degens(nc=1, nb=1, residue_c=10, residue_b=10)
F_bzero = smoother_poly_with_degens(nc=1, nb=2, residue_c=-20, residue_b=20)
F_bmax = smoother_poly_with_degens(nc=1, nb=1, residue_c=20, residue_b=-10)

We chose much larger residues because the goal of the figure is to highlight how the local behavior at those points corresponds to the behavior of the mean value abscissae, and larger residues make those local behaviors more dominant.

Plotting all possible mean value abscissae

Now that we can choose our functions, we want to figure out how to find all solutions of the mean value condition $$
F(b, c) = \frac{f(b) - f(a_0)}{b - a_0} - f'(c) = 0.
$$
Here I write $a_0$ since it is fixed, while both $b$ and $c$ vary.

Our primary interest in these solutions is to facilitate graphical experimentation and exploration of the problem — we want these pictures to help build intuition and provide examples.

Although this may seem harder, it is actually a much simpler problem. The function $F(b, c)$ is continuous (and roughly as smooth as $f'$ is).

Our general idea is a common approach for this sort of problem:

  1. Compute the values of $F(b, c)$ on a tight mesh (or grid) of points.
  2. Restrict attention to the domain where solutions are meaningful.
  3. Plot the contour of the $0$-level set.

Contours can be well-approximated from a tight mesh. In short, if a small positive value and a small negative value appear at adjacent mesh points, then $F(b, c) = 0$ somewhere between them by the intermediate value theorem. For a tight enough mesh, good plots can be made.
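
To illustrate the idea in one dimension (this helper is only a sketch of the principle; matplotlib’s contour routine does the real bookkeeping for us):

import numpy as np

def zero_crossings_1d(values, xs):
    """Return the mesh intervals across which `values` changes sign.

    By the intermediate value theorem, a continuous function taking
    opposite signs at adjacent mesh points has a zero between them.
    """
    signs = np.sign(values)
    # indices i where values[i] and values[i+1] have opposite signs
    idx = np.where(signs[:-1] * signs[1:] < 0)[0]
    return [(xs[i], xs[i + 1]) for i in idx]

xs = np.linspace(0, 2, 101)
print(zero_crossings_1d(xs**2 - 2, xs))   # brackets sqrt(2), near 1.414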

To solve this, we again have sympy create and compute the function for us. We use numpy to generate the mesh (and to vectorize the computations, although this isn’t particularly important in this application), and matplotlib to plot the resulting contour.

Before giving code, note that the symbol F in the sympy code below stands for what we have been mathematically referring to as $f$, not $F$. This is a potential confusion arising from our sympy-capitalization convention. It is still necessary to have sympy compute $F$ from $f$.

In code, this looks like

import sympy
import numpy as np
import matplotlib.pyplot as plt

def abscissa_plot(F, n=300):
    # Compute the derivative of f
    DF = sympy.diff(F,X)

    # Define CAP_F --- "capital F"
    #
    # this is (f(b) - f(0))/(b - 0) - f'(c).
    CAP_F = (F.subs(X, B) - F.subs(X, 0)) / (B - 0) - DF.subs(X, C)

    # build the mesh
    b1d = np.linspace(-.5, 4.5, n)
    b2d, c2d = np.meshgrid(b1d, b1d)

    # compute CAP_F within the mesh
    cap_f_mesh = sympy.lambdify((B, C), CAP_F)(b2d, c2d)

    # restrict attention to below the diagonal --- we require c < b
    # (although the mask inequality looks reversed from this perspective)
    valid_cap_f_mesh = np.ma.array(cap_f_mesh, mask=c2d > b2d)

    # Set up plot basics
    fig = plt.figure(figsize=(6, 2.5))
    ax = fig.add_subplot(1, 1, 1)
    ax.set_aspect('equal')
    ax.grid(True)
    ax.set_xlim([-.5, 4.5])
    ax.set_ylim([-.5, 4.5])

    # plot the contour
    ax.contour(b2d, c2d, valid_cap_f_mesh, [0], colors='k')

    # plot a diagonal line representing the boundary
    ax.plot(b1d,b1d,'k--')

    # plot the guaranteed point
    ax.plot(b0,c0,'ko')

    fig.savefig("abscissa_plot.pdf")

Then plots of solutions to $F(b, c) = 0$ for our basic polynomials are given by

for poly_with_degens(), while for smoother_poly_with_degens() we get

And for comparison, we can now create a (slightly worse looking) version of the plots in figure 3.

F3 = smoother_poly_with_degens(nc=1, residue_b=-3)
basic_plot(F3)
abscissa_plot(F3)

This produces the two plots

For comparison, a (slightly scaled) version of the actual figure appearing in the paper is

Copy of the code

A copy of the code used in this note (and correspondingly the code used to generate the functions for the paper) is available on my github as an ipython notebook.


Paper: When are there continuous choices for the Mean Value Abscissa? with Miles Wheeler

Miles Wheeler and I have recently uploaded a paper to the arXiv called “When are there continuous choices for the mean value abscissa?”, which we have submitted to an expository journal. The underlying question is simple but nontrivial.

The mean value theorem of calculus states that, given a differentiable function $f$ on an interval $[a, b]$, there exists a $c \in (a, b)$ such that
$$ \frac{f(b) - f(a)}{b - a} = f'(c).$$
We call $c$ the mean value abscissa.
Our question concerns potential behavior of this abscissa when we fix the left endpoint $a$ of the interval and vary $b$. For each $b$, there is at least one abscissa $c_b$ such that the mean value theorem holds with that abscissa. But generically there may be more than one choice of abscissa for each interval. When can we choose $c_b$ as a continuous function of $b$? That is, when can we write $c = c(b)$ such that
$$ \frac{f(b) - f(a)}{b - a} = f'(c(b))$$
for all $b$ in some interval?
We think of this as a continuous choice for the mean value abscissa.
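
For a concrete example: if $f(x) = x^2$, then $$ \frac{f(b) - f(a)}{b - a} = a + b = 2c, $$ so $c(b) = (a + b)/2$ is a continuous (indeed linear) choice of abscissa for every $b$.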

This is a great question. It’s widely understandable — even to students with only one semester of calculus. Further it encourages a proper understanding of what a function is, as thinking of $c$ as potentially a function of $b$ is atypical and interesting.

But I also like this question because the answer is not as simple as you might think, and there are a few nice ideas that get to the answer.

Should you find yourself reading this without knowing the answer, I encourage you to consider it right now. Should continuous choices of abscissas exist? What if the function is really well-behaved? What if it’s smooth? Or analytic?

Let’s focus on the smooth question. Suppose that $f$ is smooth — that it is infinitely differentiable. These are a distinguished class of functions. But it turns out that being smooth is not sufficient: here is a counterexample.

In this figure, there are points $b$ arbitrarily near $b_0$ such that the secant line from $a_0$ to $b$ has positive slope, and points arbitrarily near $b_0$ such that the secant lines have negative slope. There are infinitely many mean value abscissae with $f'(c_0) = 0$, but all of them are either far from a point $c$ where $f'(c) > 0$ or far from a point $c$ where $f'(c) < 0$. And thus there is no continuous choice.

From a theorem-oriented point of view, our main theorem is that if $f$ is analytic, then there is always a locally continuous choice. That is, for every interval $[a_0, b_0]$, there exists a mean value abscissa $c$ such that $c = c(b)$ for some interval $B$ containing $b_0$.

But the purpose of this article isn’t simply to prove this theorem. The purpose is to exposit how the ideas used to study this problem and to prove these results are fundamentally based on only a couple of central ideas covered in introductory single and multivariable calculus. The whole paper is accessible to a student who has studied only single variable calculus (and who is willing to believe that partial derivatives exist and are reasonable objects).

We prove and use simple-but-nontrivial versions of the contraction mapping theorem, the implicit function theorem, and Morse’s lemma. The implicit function theorem is enough to say that any abscissa $c_0$ with $f''(c_0) \neq 0$ has a unique continuous extension. Thus immediately, for “most” intervals on “most” reasonable functions, we answer in the affirmative. Morse’s lemma allows us to say a bit more about the case when $f''(c_0) = 0$ but $f'''(c_0) \neq 0$: in this case there are either multiple continuous extensions or none. A few small ingredients and the idea behind Morse’s lemma, combined with the implicit function theorem again, are enough to prove the main result.

Student projects

A calculus student looking for a project to dive into and sharpen their calculus skills could find ideas here to sink their teeth into. Beginning by understanding this paper is a great start.

A good motivating question would be to carry on one additional step and study explicitly the behavior of a function near a point where $f''(c_0) = f'''(c_0) = 0$ but $f^{(4)}(c_0) \neq 0$.

A slightly more open question that we lightly touch on (but leave largely implicit) is the inverse question: when can one find a mean value abscissa $c$ such that the right endpoint $b$ can be written as a continuous function $b(c)$ on some neighborhood $C$ containing the initial point $c_0$? Much of the analysis is the same, but figuring it out would require some attention.

A much deeper question is to consider the abscissa as a function of both the left endpoint $a$ and the right endpoint $b$. The guiding question here could be to decide when one can write the abscissa as a continuous function $c(a, b)$ in a neighborhood of $(a_0, b_0)$. I would be interested to see a graphical description of the possible shapes of these functions; I’m not quite sure what they might look like.

There is also a nice computational problem. In the paper, we include several plots of solution curves in $(b, c)$ space, but we did this with a meshed implicit function solver. A computationally inclined student could devise an explicit way of constructing solutions. On the one hand, this is guaranteed to work, since one can apply contraction mappings explicitly to make the function produced by the implicit function theorem explicit. On the other hand, many (most?) applications of the implicit function theorem are in more complicated, high dimensional spaces, whereas the situation in this paper is the smallest nontrivial example.

Producing the graphs

We made 13 graphs in 5 figures for this article. These pictures were created using matplotlib. The data was created using numpy, scipy, and sympy from within the scipy/numpy python stack, and the actual creation was done interactively within a jupyter notebook. The actual notebook is available here (along with other relatively raw jupyter notebooks). The most complicated graph is this one.

This figure has graphs of three functions along the top. In each graph, the interval $[0, 3]$ is considered in the mean value theorem, and the point $c_0 = 1$ is a mean value abscissa. In each, we also have $f''(c_0) = 0$, and the point is that the behavior of $f''(b_0)$ has a large impact on the nature of the implicit functions. The three graphs along the bottom are in $(b, c)$ space and present all mean value abscissae for each $b$. This is not a function, but the local structure of the graphs is interesting and visually distinct.

The process of making these examples and making these figures is interesting in itself. We did not make these figures explicitly, but instead chose certain points and certain values of derivatives at those points, and used Hermite interpolation to find polynomials satisfying those conditions.

In the future I plan on writing a note on the creation of these figures.


“On Functions Whose Mean Value Abscissas are Midpoints, with Connections to Harmonic Functions” (with Paul Carter)

This is joint work with Paul Carter. Humorously, we completed this while on a cross-country drive as we moved the newly minted Dr. Carter from Brown to Arizona.

I’ve had a longtime fascination with the standard mean value theorem of calculus.

Mean Value Theorem
Suppose $f$ is a differentiable function on an interval $[a, b]$. Then there is some $c \in (a,b)$ such that
\begin{equation}
\frac{f(b) - f(a)}{b-a} = f'(c).
\end{equation}

The idea for this project started with a simple question: what happens when we interpret the mean value theorem as a differential equation and try to solve it? As stated, this is too broad. To narrow it down, we might specify some restriction on the mean value abscissa $c$ guaranteed by the Mean Value Theorem.

So I thought to try to find functions satisfying
\begin{equation}
\frac{f(b) - f(a)}{b-a} = f' \left( \frac{a + b}{2} \right)
\end{equation}
for all $a$ and $b$ as a differential equation. In other words, let’s try to find all functions whose mean value abscissas are midpoints.

This looks like a differential equation, which I only know some things about. But my friend and colleague Paul Carter knows a lot about them, so I thought it would be fun to ask him about it.

He very quickly told me that it’s essentially impossible to solve this from the perspective of differential equations. But like a proper mathematician with applied math leanings, he thought we should explore some potential solutions in terms of their Taylor expansions. Proceeding naively in this way very quickly leads to the answer that those (assumed smooth) solutions are precisely quadratic polynomials.
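
Here is a sketch of that computation, assuming $f$ is smooth. Write $m = (a+b)/2$ and $h = (b-a)/2$, and expand around $m$: $$ \frac{f(b) - f(a)}{b - a} = \frac{f(m+h) - f(m-h)}{2h} = f'(m) + \frac{f'''(m)}{6} h^2 + O(h^4). $$ For this to equal $f'(m)$ for all $m$ and $h$, dividing by $h^2$ and letting $h \to 0$ forces $f'''(m) = 0$ everywhere, so $f$ must be a quadratic polynomial; conversely, for a quadratic every term past $f'(m)$ vanishes, so quadratics do satisfy the midpoint property.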

It turns out that was too simple. It was later pointed out to us that verifying that quadratic polynomials satisfy the midpoint mean value property is a common exercise in calculus textbooks, including the one we use to teach from at Brown. Digging around a bit reveals that this was even known (in geometric terms) to Archimedes.

So I thought we might try to go one step higher, and see what’s up with
\begin{equation}\label{eq:original_midpoint}
\frac{f(b) - f(a)}{b-a} = f' (\lambda a + (1-\lambda) b), \tag{1}
\end{equation}
where $\lambda \in (0,1)$ is a weight. So let’s find all functions whose mean value abscissas are weighted averages. A quick analysis with Taylor expansions shows that the (assumed smooth) solutions are precisely linear polynomials, except when $\lambda = \frac{1}{2}$ (in which case we’re looking back at the original question): at first order, the expansion now forces $f''(m)(1 - 2\lambda) = 0$ at every point $m$.

That’s a bit odd. It turns out that the midpoint itself is distinguished in this way. Why might that be the case?

It is beneficial to look at the mean value property as an integral property instead of a differential property,
\begin{equation}
\frac{1}{b-a} \int_a^b f'(t) \, dt = f'\big(c(a,b)\big).
\end{equation}
We are currently examining cases when $c = c_\lambda(a,b) = \lambda a + (1-\lambda) b$. We can see that the right-hand side is differentiable by differentiating the left-hand side directly. Since any point can be a weighted midpoint, one sees that $f$ is at least twice differentiable. One can iterate this argument to show that any $f$ satisfying one of the weighted mean value properties is actually smooth, justifying the Taylor expansion analysis indicated above.
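
In slightly more detail (expanding the bootstrapping argument above): the left-hand side is differentiable in $b$, so $b \mapsto f'(c_\lambda(a,b))$ is differentiable as well. Since $\partial c_\lambda / \partial b = 1 - \lambda \neq 0$ and every point arises as $c_\lambda(a,b)$ for appropriate $a$ and $b$, the derivative of $f'$ exists everywhere, i.e. $f$ is twice differentiable. Feeding this improved regularity back into the identity and repeating gains one derivative at each step, so $f$ is smooth.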

An attentive eye might notice that the midpoint mean value theorem, written as the integral property
\begin{equation}
\frac{1}{b-a} \int_a^b f'(t) \, dt = f' \left( \frac{a + b}{2} \right)
\end{equation}
is exactly the one-dimensional case of the harmonic mean value property, usually written
\begin{equation}
\frac{1}{\lvert B_h(x) \rvert} \int_{B_h(x)} g(t) \, dV = g(x).
\end{equation}
Here, $B_h(x)$ is the ball of radius $h$ and center $x$. Any harmonic function satisfies this mean value property, and any function satisfying this mean value property is harmonic.

From this viewpoint, functions satisfying our original midpoint mean value property (1) have harmonic derivatives. But the only one-dimensional harmonic functions are affine functions $g(x) = cx + d$. This gives immediately that the solutions to (1) are precisely the quadratic polynomials.

The weighted mean value property can also be written as an integral property. Trying to connect it similarly to harmonic functions led us to consider functions satisfying
\begin{equation}
\frac{1}{\lvert B_h(x) \rvert} \int_{B_h(x)} g(t) \, dV = g(c_\lambda(x,h)),
\end{equation}
where $c_\lambda(x,h)$ should be thought of as some distinguished point in the ball $B_h(x)$ with a weight parameter $\lambda$. More specifically,

Are there weighted harmonic functions corresponding to a weighted harmonic mean value property?
In one dimension, the answer is no, as seen above. But there are many more multivariable harmonic functions [in fact, I had never thought about harmonic functions on $\mathbb{R}^1$ until this project, as they’re too trivial]. So maybe there are weighted harmonic functions in higher dimensions?

This ends up being the focus of the latter half of our paper. Unexpectedly (to us), an analogous methodology to our approach in the one-dimensional case works, with only a few differences.

It turns out that no, there are no weighted harmonic functions on $\mathbb{R}^n$ other than trivial extensions of harmonic functions from $\mathbb{R}^{n-1}$.

Harmonic functions are very special, and even more special than we had thought. The paper is a fun read, and can be found on the arXiv now. It has been accepted and will appear in the American Mathematical Monthly.


Notes from a talk on the Mean Value Theorem

1. Introduction

When I first learned the Mean Value Theorem and the Intermediate Value Theorem, I thought they were both intuitively obvious and utterly useless. In one of my courses in analysis, I was struck when, after proving the Mean Value Theorem, my instructor said that all of calculus was downhill from there. But it was a case of not being able to see the forest for the trees, and I missed the big picture.

I have since come to realize that almost every major (and often, minor) result of calculus is a direct and immediate consequence of the Mean Value Theorem and the Intermediate Value Theorem. In this talk, we will focus on the forest, the big picture, and see the Mean Value Theorem for what it really is: the true Fundamental Theorem of Calculus.

(more…)


Continuity of the Mean Value

1. Introduction

When I first learned the mean value theorem as a high schooler, I was thoroughly unimpressed. Part of this was because it’s just like Rolle’s Theorem, which feels obvious. But I think the greater part is because I thought it was useless. And I continued to think it was useless until I began my first proof-oriented treatment of calculus as a second year at Georgia Tech. Somehow, in the intervening years, I had learned to value intuition and simple statements.

I have since completely changed my view on the mean value theorem. I now consider essentially all of one variable calculus to be the Mean Value Theorem, perhaps in various forms or disguises. In my earlier note An Intuitive Introduction to Calculus, we state and prove the Mean Value Theorem, and then show that we can prove the Fundamental Theorem of Calculus with the Mean Value Theorem and the Intermediate Value Theorem (which also felt silly to me as a high schooler, but which is not silly).

In this brief note, I want to consider one small aspect of the Mean Value Theorem: can the “mean value” be chosen continuously as a function of the endpoints? To state this more clearly, first recall the theorem:

Suppose $f$ is a differentiable real-valued function on an interval $[a,b]$. Then there exists a point $c$ between $a$ and $b$ such that $$ \frac{f(b) - f(a)}{b - a} = f'(c), \tag{1}$$
which is to say that there is a point where the slope of $f$ is the same as the average slope from $a$ to $b$.

What if we allow the interval to vary? Suppose we are interested in a differentiable function $f$ on intervals of the form $[0,b]$, and we let $b$ vary. Then for each choice of $b$, the mean value theorem tells us that there exists $c_b$ such that $$ \frac{f(b) - f(0)}{b} = f'(c_b). $$
Then the question we consider today is: as a function of $b$, can $c_b$ be chosen continuously? We will see that we cannot, and we’ll see explicit counterexamples. This, after the fold.

(more…)


An Intuitive Overview of Taylor Series

This is a note written for my fall 2013 Math 100 class, but it was not written “for the exam,” nor does anything on here subtly hint at anything on any exam. But I hope that this will be helpful for anyone who wants to get a basic understanding of Taylor series. What I want to do is try to get some sort of intuitive grasp on Taylor series as approximations of functions. By intuitive, I mean intuitive to those with a good grasp of functions, the basics of a first semester of calculus (derivatives, integrals, the mean value theorem, and the fundamental theorem of calculus) – so it’s a mathematical intuition. In this way, this post is a sort of follow-up of my earlier note, An Intuitive Introduction to Calculus.

PLEASE NOTE that my math compiler and my markdown compiler sometimes compete, and sometimes repeated derivatives are too high or too low by one pixel.

We care about Taylor series because they allow us to approximate other functions in predictable ways. Sometimes, these approximations can be made to be very, very, very accurate without requiring too much computing power. You might have heard that computers/calculators routinely use Taylor series to calculate things like $e^x$ (which is more or less often true). But up to this point in most students’ mathematical development, most mathematics has been clean and perfect; everything has been exact algorithms yielding exact answers for years and years. This is simply not the way of the world.

Here’s a fundamental fact about both mathematics and life: almost anything worth doing is probably pretty hard and pretty messy.

For a very recognizable example, let’s think about finding zeroes of polynomials. Finding roots of linear polynomials is very easy. If we see $5 + x = 0$, we see that $-5$ is the zero. Similarly, finding roots of quadratic polynomials is very easy, and many of us have memorized the quadratic formula to this end. Thus $ax^2 + bx + c = 0$ has solutions $x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$. These are both nice, algorithmic, and exact. But I will guess that the vast majority of those who read this have never seen a “cubic polynomial formula” for finding roots of cubic polynomials (although it does exist, it is horrendously messy – look up Cardano’s formula). There is even an algorithmic way of finding the roots of quartic polynomials. But here’s something amazing: there is no general method for finding the exact roots of 5th degree polynomials (or higher degree).

I don’t mean “we haven’t found it yet, but there may be one,” or even “you’ll have to use one of these myriad ways” – I mean it has been shown that there is no general method of finding exact roots of degree 5 or higher polynomials. But we certainly can approximate them arbitrarily well. So even something as simple as finding roots of polynomials, which we’ve been doing since we were in middle school, gets incredibly and unbelievably complicated.

So before we hop into Taylor series directly, I want to get into the mindset of approximating functions with other functions.

1. Approximating functions with other functions

We like working with polynomials because they’re so easy to calculate and manipulate. So sometimes we try to approximate complicated functions with polynomials, a problem sometimes called “polynomial interpolation”.

Suppose we wanted to approximate $\sin(x)$. The most naive approximation that we might do is see that $\sin(0) = 0$, so we might approximate $\sin(x)$ by $p_0(x) = 0$. We know that it’s right at least once, and since $\sin(x)$ is periodic, it’s going to be right many times. I write $p_0$ to indicate that this is a degree $0$ polynomial, that is, a constant polynomial. Clearly though, this is a terrible approximation, and we can do better.
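
As an aside (a quick illustration with sympy, not part of the original note), a computer algebra system can produce the better approximations this note builds toward:

import sympy
x = sympy.symbols('x')
# the degree 7 Taylor polynomial of sin(x) at 0
print(sympy.series(sympy.sin(x), x, 0, 8).removeO())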

(more…)


Math 90: Week 8

Today, we had a set of problems as usual, and a quiz! (And I didn’t tell you about the quiz, even though others did, so I’m going to pretend that it was a pop quiz!) Below, you’ll find the three problems, their solutions, and a worked-out quiz.

(more…)
