Category Archives: Mathematics

Notes from a talk at Dartmouth on the Fibonacci zeta function

I recently gave a talk “at Dartmouth”1. The focus of the talk was the (odd-indexed) Fibonacci zeta function:
$$ \sum_{n \geq 1} \frac{1}{F(2n-1)^s},$$
where $F(n)$ is the $n$th Fibonacci number. The theme is that the Fibonacci zeta function can be recognized as coming from an inner product of automorphic forms, and the continuation of the zeta function can be understood in terms of the spectral expansion of the associated automorphic forms.
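
Before getting to the automorphic input, here is a quick numerical sketch (mine, not from the talk) of the object itself: for real $s > 0$ the series converges rapidly, since $F(2n-1)$ grows geometrically.

```python
# Partial sums of the odd-indexed Fibonacci zeta function (a quick sketch,
# just to get a feel for the object; not code from the talk).
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def odd_fib_zeta(s, terms=50):
    """Approximate sum over n >= 1 of 1 / F(2n-1)^s by a partial sum."""
    return sum(1.0 / fib(2*n - 1)**s for n in range(1, terms + 1))

print(odd_fib_zeta(2))   # converges quickly since F(2n-1) grows like phi^(2n-1)
```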

This is a talk from ongoing research. I do not yet understand “what’s really going on”. But within the talk I describe a few different generalizations: first, a generalization to other zeta functions that can be viewed as traces of units in quadratic number fields, and second, a generalization to quadratic forms recognizing solutions to Pell’s equation.

I intend to describe additional ideas from this talk in the coming months, as I figure out how pieces fit together. But for now, here are the slides.

Posted in Expository, Math.NT, Mathematics | Tagged , , , , | Leave a comment

Pictures of equidistribution – the line

In my previous note, we considered equidistribution of rational points on the circle $X^2 + Y^2 = 2$. This is but one of a large family of equidistribution results that I’m not particularly familiar with.

This note is the first in a series of notes dedicated to exploring this type of equidistribution visually. In this note, we will investigate a simpler case — rational points on the line.

(more…)

Posted in Expository, Math.AG, Math.NT, Mathematics | Tagged , , , , | Leave a comment

Points on X^2 + Y^2 = 2 equidistribute with respect to height

When you order rational points on the circle $X^2 + Y^2 = 2$ by height, these points equidistribute.

Stated differently, suppose that $I$ is an arc on the circle $X^2 + Y^2 = 2$. Then asymptotically, the number of rational points on the arc $I$ with height bounded by a number $H$ is equal to what you would expect if $\lvert I\rvert /(2\sqrt{2}\pi)$ of all points with height up to $H$ were on this arc. Here, $\lvert I\rvert /(2\sqrt{2}\pi)$ is the ratio of the arclength of the arc $I$ to the total circumference of the circle.

This only makes sense if we define the height of a rational point on the circle. Given a point $(a/c, b/c)$ (written in lowest terms) on the circle, we define the height of this point to be $c$.
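
As a quick illustration (my own brute-force sketch, not part of the forthcoming paper), one can enumerate the points of bounded height directly and compare the proportion landing in a sample arc with its angular measure. Here I take “lowest terms” to mean $\gcd(a, b, c) = 1$.

```python
# Brute-force sketch: list rational points (a/c, b/c) on x^2 + y^2 = 2 with
# height c <= H, then compare the fraction in a sample arc to its angular
# measure.  (My own check, not code from the paper; "lowest terms" is taken
# to mean gcd(a, b, c) = 1.)
from math import gcd, isqrt, atan2, pi

def points_up_to(H):
    pts = []
    for c in range(1, H + 1):
        for a in range(-isqrt(2 * c * c), isqrt(2 * c * c) + 1):
            b2 = 2 * c * c - a * a
            b = isqrt(b2)
            if b * b != b2:
                continue
            for bb in {b, -b}:
                if gcd(gcd(abs(a), abs(bb)), c) == 1:
                    pts.append((a / c, bb / c))
    return pts

pts = points_up_to(200)
theta1, theta2 = 0.3, 1.1                     # a sample arc, in radians
inside = sum(theta1 <= atan2(y, x) <= theta2 for (x, y) in pts)
print(inside / len(pts), (theta2 - theta1) / (2 * pi))   # these should be close
```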

In forthcoming work with my frequent collaborators Chan Ieong Kuan, Thomas Hulse, and Alexander Walker, we count three-term arithmetic progressions of squares. If $C^2 - B^2 = B^2 - A^2$, then clearly $A^2 + C^2 = 2B^2$, and thus a 3AP of squares corresponds to a rational point on the circle $X^2 + Y^2 = 2$. We compare one of our results to what you would expect from equidistribution. From general principles, we expected such equidistribution to be true. But I wasn’t sure how to prove it.

With helpful assistance from Noam Elkies, Emmanuel Peyre, and John Voight (who each immediately knew how to prove this), I learned how to prove this fact.

The rest of this note contains this proof.

(more…)

Posted in Expository, Math.NT, Mathematics, sage, sagemath | Tagged , | Leave a comment

Proposal for new images for modular forms on the LMFDB

I recently gave a talk about different visualizations of modular forms, including many new visualizations that I have been developing. I have continued to develop these images, and I now have a proposal for new visualizations of modular forms in the LMFDB.

To see a current visualization, look at this modular form page. The image from that page (as it is currently) looks like this.

This is a plot on a disk model. To make sense of this plot, I note that the real axis in the upper-half-plane model is the circumference of the circle, and the imaginary axis in the upper-half-plane model is the vertical diameter of the circle. In particular, $z = 0$ is the bottom of the circle, $z = i$ is the center of the circle, and $z = \infty$ is the top of the circle. The magnitude is currently displayed — the big blue region is where the magnitude is very small. In a neighborhood of the blue blob, there are a few bands of color that are meaningful — but then things change too quickly and the graph becomes a graph of noise.
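
For concreteness, one Cayley-type map consistent with this description (though I won’t swear this is the exact normalization used for the current plots) is \begin{equation} z \longmapsto i \, \frac{z - i}{z + i}, \end{equation} which sends $0 \mapsto -i$ (the bottom of the circle), $i \mapsto 0$ (the center), and $\infty \mapsto i$ (the top), carrying the real axis to the boundary circle and the imaginary axis to the vertical diameter.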

I propose one of the following alternatives. I maintain the same badge and model for the space, but I change what is plotted and what colors to use. Also, I plot them larger so that we can get a good look at them; for the LMFDB they would probably be produced at the same (small) size.

Plots with “Contours”

I have made three plots with contours. They are all morally the same, except for the underlying colorscheme. The “default” sage colorscheme leads to the following plot.

The good thing is that it’s visually striking. But I recently learned that this colorscheme is hated, and it’s widely thought to be a poor choice in almost every situation.

A little while ago, matplotlib added two colorschemes designed to fix the problems with the default colorscheme. (sage’s defaults are behind — the matplotlib default has since changed.) This is one of them, called twilight.

And this is the other, called viridis. I don’t actually think this one should be used, since the hues jump from bright yellow to dark blue where the complex argument wraps from $\pi$ to $-\pi$. This gives the strong lines, which correspond to those places where the argument of the modular form is $\pi$.

Plots without Contours

I’ve also prepared these plots without the contours, and I think they’re quite nice as well.

First jet.

Then twilight. At the talk I recently gave, this was the favorite — but I hadn’t yet implemented the contour plots above for non-default colorschemes.

Then viridis. (I’m still not serious about this one — but I think it’s pretty.)

Note on other Possibilities

There are other possibilities, such as perhaps plotting on a portion of the upper half-plane instead of a disk-model. I describe a few of these possibilities and give examples in the notes from my last talk. I should note that I can now produce contour-type plots there as well, though I haven’t done that.

For fun, here is the default colorscheme, but rotated. This came about accidentally (as did so many other plots in this excursion), but I think it highlights how odd jet is.

Gathering Opinions

This concludes my proposal. I am collecting opinions. If you are struck by an idea or an opinion and would like to share it with me, please email me or leave a comment below.

Posted in LMFDB, Mathematics, sage, sagemath | Tagged , , , , , | Leave a comment

Notes behind a talk: visualizing modular forms

Today, I’ll be at Bowdoin College giving a talk on visualizing modular forms. This is a talk about the actual process and choices involved in illustrating a modular form; it’s not about what little lies one might hold in their head in order to form some mental image of a modular form.1

This is a talk heavily inspired by the ICERM semester program on Illustrating Mathematics (currently wrapping up). In particular, I draw on2 conversations with Frank Farris (about using color to highlight desired features), Elias Wegert (about using logarithmically scaling contours), Ed Harriss (about the choice of colorscheme), and Brendan Hassett (about overall design choices).

There are very many pictures in the talk!

Here are the slides for the talk.

I wrote a few different complex-plotting routines for this project. At their core, they are based on sage’s complex_plot. There are two major variants that I use.

The first (currently called “ccomplex_plot”, not a good name) overrides how sage handles lightness in complex_plot in order to produce “contours” at spots where the magnitude is a power of two. These contours are actually a sudden jump in brightness.
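
In spirit, the idea looks something like the following simplified sketch (not the actual ccomplex_plot code): the fractional part of $\log_2 \lvert f(z) \rvert$ resets at every power of two, and feeding it into the brightness produces the sudden jumps, while hue still encodes the argument.

```python
# Simplified sketch of the "contours by lightness jumps" idea -- not the
# actual ccomplex_plot code, just the core of the technique.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import hsv_to_rgb

def contour_plot(f, re=(-1, 1), im=(-1, 1), n=500):
    x = np.linspace(*re, n)
    y = np.linspace(*im, n)
    z = x[np.newaxis, :] + 1j * y[:, np.newaxis]
    w = f(z)
    hue = (np.angle(w) / (2 * np.pi)) % 1.0          # argument -> hue
    mag = np.abs(w)
    # fractional part of log2|f| resets at each power of two; using it as
    # (part of) the brightness produces the sudden jumps between bands.
    frac = np.log2(mag + 1e-16) % 1.0
    val = 0.6 + 0.4 * frac                            # brightness in [0.6, 1)
    sat = np.full_like(val, 0.8)
    rgb = hsv_to_rgb(np.dstack((hue, sat, val)))
    plt.imshow(rgb, origin="lower", extent=(*re, *im))
    plt.show()

contour_plot(lambda z: (z**2 - 1) * np.exp(z))
```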

The second (currently called “raw_complex_plot”, also not a good name) is even less formal. It vectorizes the computation and produces an object containing the magnitude and argument information for each pixel to be drawn. It then uses numpy and matplotlib to convert these magnitudes and phases into RGB colors according to a matplotlib-compatible colormap.
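
Roughly, that second routine does something like this sketch (with invented parameter choices; this is not the actual raw_complex_plot): evaluate $f$ on a grid, push the phase through a cyclic matplotlib colormap, and darken by magnitude.

```python
# Sketch in the spirit of the second routine: vectorize f on a grid, map the
# phase through a (cyclic) matplotlib colormap, and darken small magnitudes.
# Parameter choices here are invented for illustration.
import numpy as np
import matplotlib.pyplot as plt

def raw_plot(f, re=(-1, 1), im=(-1, 1), n=500, cmap="twilight"):
    x = np.linspace(*re, n)
    y = np.linspace(*im, n)
    z = x[np.newaxis, :] + 1j * y[:, np.newaxis]
    w = f(z)
    phase = (np.angle(w) / (2 * np.pi)) % 1.0        # phase scaled into [0, 1)
    rgba = plt.get_cmap(cmap)(phase)                 # colormap applied to phase
    shade = np.abs(w) / (np.abs(w) + 1.0)            # 0 near zeros, near 1 when large
    rgba[..., :3] *= shade[..., np.newaxis]          # darker where |f| is small
    plt.imshow(rgba, origin="lower", extent=(*re, *im))
    plt.show()

raw_plot(lambda z: np.sin(z) / (z + 0.5j))
```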

I am happy to send either of these pieces of code to anyone who wants to see them, but they are very much written for my own use at the moment. I intend to improve them for general use later, after I’ve experimented further.

In addition, I generated all the images for this talk in a single sagemath jupyter notebook (with the two .spyx cython dependencies I allude to above). This is also available here. (Note that using a service like nbviewer or nbconvert to view or convert it to html might be a reasonable idea).

As a final note, I’ll add that I mistyped several times in the preparation of the images for this talk. Included below are a few of the interesting-looking mistakes. The first two resulted from incorrectly applied conformal mappings, while the third came from incorrectly applied color correction.

Posted in Expository, Math.NT, Mathematics, sage, sagemath | Tagged , , , | Leave a comment

Making Plots of Modular Forms

Inspired by the images and ideas of Elias Wegert, I thought it might be interesting to attempt to implement a version of his colorizing technique for complex functions in sage. The purpose is ultimately to revisit how one plots modular forms in the LMFDB (see lmfdb.org and click around to see various plots — some are good, others are less good).

The challenge with plotting a function from $\mathbb{C} \longrightarrow \mathbb{C}$ is that the graph is naturally 4-dimensional, and we are very bad at visualizing 4d things. In fact, we want to use only 2d to visualize it.

A complex number $z = re^{i \theta}$ is determined by the magnitude ($r$) and the argument ($\theta$). Thus
one typical approach to represent the value taken by a function $f$ at a point $z$ is to represent the magnitude of $f(z)$ in terms of the brightness, and to represent the argument in terms of color.

For example, the typical complex space would then look like the following.
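
Here is a minimal sketch (in numpy and matplotlib, my own illustration) of this convention, plotting the identity function $f(z) = z$ so that the picture shows the coloring of the complex plane itself.

```python
# Minimal illustration of the convention: hue from the argument of f(z),
# brightness from its magnitude (dark near zeros).  Plotting f(z) = z shows
# how the complex plane itself gets colored.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import hsv_to_rgb

x, y = np.meshgrid(np.linspace(-2, 2, 400), np.linspace(-2, 2, 400))
w = x + 1j * y                                   # the identity function f(z) = z

hue = (np.angle(w) / (2 * np.pi)) % 1.0          # argument -> color
value = np.abs(w) / (1.0 + np.abs(w))            # magnitude -> brightness
rgb = hsv_to_rgb(np.dstack((hue, np.ones_like(hue), value)))

plt.imshow(rgb, origin="lower", extent=(-2, 2, -2, 2))
plt.show()
```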

(more…)

Posted in Mathematics | Tagged , | Leave a comment

Non-real poles and irregularity of distribution I

$\DeclareMathOperator{\SL}{SL}$ $\DeclareMathOperator{\MT}{MT}$ After the positive feedback from the Maine-Quebec Number Theory conference, I have taken some time to write up (and slightly strengthen) these results.

We study the general theory of Dirichlet series $D(s) = \sum_{n \geq 1} a(n) n^{-s}$ and the associated summatory function of the coefficients, $A(x) = \sum_{n \leq x}' a(n)$ (where the prime over the summation means the last term is to be multiplied by $1/2$ if $x$ is an integer). For convenience, we will suppose that the coefficients $a(n)$ are real, that not all $a(n)$ are zero, that each Dirichlet series converges in some half-plane, and that each Dirichlet series has meromorphic continuation to $\mathbb{C}$. Perron’s formula (or more generally, the forward and inverse Mellin transforms) shows that $D(s)$ and $A(x)$ are duals and satisfy \begin{equation}\label{eq:basic_duality} \frac{D(s)}{s} = \int_1^\infty \frac{A(x)}{x^{s+1}} dx, \quad A(x) = \frac{1}{2 \pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} \frac{D(s)}{s} x^s ds \end{equation} for an appropriate choice of $\sigma$.
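
For a concrete example: taking $a(n) = 1$ for all $n$ gives $D(s) = \zeta(s)$ and $A(x) = \lfloor x \rfloor$ (with the $1/2$ adjustment at integers), and the simple pole of $\zeta(s)$ at $s = 1$ accounts for the main term in $A(x) = x + O(1)$.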

Many results in analytic number theory take the form of showing that $A(x) = \MT(x) + E(x)$ for a “Main Term” $\MT(x)$ and an “Error Term” $E(x)$. Roughly speaking, the terms in the main term $\MT(x)$ correspond to poles from $D(s)$, while $E(x)$ is hard to understand. Upper bounds for the error term give bounds for how much $A(x)$ can deviate from the expected size, and thus describe the regularity in the distribution of the coefficients $\{a(n)\}$. In this article, we investigate lower bounds for the error term, corresponding to irregularity in the distribution of the coefficients.

To get the best understanding of the error terms, it is often necessary to work with smoothed sums $A_v(x) = \sum_{n \geq 1} a(n) v(n/x)$ for a weight function $v(\cdot)$. In this article, we consider nice weight functions, i.e. weight functions with good behavior and whose Mellin transforms have good behavior. For almost all applications, it suffices to consider weight functions $v(x)$ that are piecewise smooth on the positive real numbers, and which, at jump discontinuities, take the value halfway between the one-sided limits.

For a weight function $v(\cdot)$, denote its Mellin transform by \begin{equation} V(s) = \int_0^\infty v(x)x^{s} \frac{dx}{x}. \end{equation} Then we can study the more general dual family \begin{equation}\label{eq:general_duality} D(s) V(s) = \int_1^\infty \frac{A_v(x)}{x^{s+1}} dx, \quad A_v(x) = \frac{1}{2 \pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} D(s) V(s) x^s ds. \end{equation}
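
(For instance, the smooth exponential weight $v(x) = e^{-x}$ has Mellin transform $V(s) = \int_0^\infty e^{-x} x^{s-1} \, dx = \Gamma(s)$; this is the weight behind smoothed sums like $\sum a(n) e^{-n/X}$ below.)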

We prove two results governing the irregularity of distribution of weighted sums. Firstly, we prove that a non-real pole of $D(s)V(s)$ guarantees an oscillatory error term for $A_v(x)$.

Theorem 1

Suppose $D(s)V(s)$ has a pole at $s = \sigma_0 + it_0$ with $t_0 \neq 0$ of order $r$. Let $\MT(x)$ be the sum of the residues of $D(s)V(s)x^s$ at all real poles $s = \sigma$ with $\sigma \geq \sigma_0$. Then \begin{equation} \sum_{n \geq 1} a(n) v(\tfrac{n}{x}) - \MT(x) = \Omega_\pm\big( x^{\sigma_0} \log^{r-1} x\big). \end{equation}


Here and below, we use the notation $f(x) = \Omega_+ g(x)$ to mean that there is a constant $k > 0$ such that $\limsup f(x)/\lvert g(x) \rvert > k$ and $f(x) = \Omega_- g(x)$ to mean that $\liminf f(x)/\lvert g(x) \rvert < -k$. When both are true, we write $f(x) = \Omega_\pm g(x)$. This means that $f(x)$ is at least as positive as $\lvert g(x) \rvert$ and at least as negative as $-\lvert g(x) \rvert$ infinitely often.

Theorem 2

Suppose $D(s)V(s)$ has at least one non-real pole, and that the supremum of the real parts of the non-real poles of $D(s)V(s)$ is $\sigma_0$. Let $\MT(x)$ be the sum of the residues of $D(s)V(s)x^s$ at all real poles $s = \sigma$ with $\sigma \geq \sigma_0$. Then for any $\epsilon > 0$, \begin{equation} \sum_{n \geq 1} a(n) v(\tfrac{n}{x}) - \MT(x) = \Omega_\pm( x^{\sigma_0 - \epsilon} ). \end{equation}


The idea at the core of these theorems is old, and was first noticed during the investigation of the error term in the prime number theorem. To prove them, we generalize proofs given in Chapter 5 of Ingham’s Distribution of Prime Numbers (originally published in 1932, but recently republished). There, Ingham proves that $\psi(x) - x = \Omega_\pm(x^{\Theta - \epsilon})$ and $\psi(x) - x = \Omega_\pm(x^{1/2})$, where $\psi(x) = \sum_{p^n \leq x} \log p$ is Chebyshev’s second function and $\Theta \geq \frac{1}{2}$ is the supremum of the real parts of the non-trivial zeros of $\zeta(s)$. (Peter Humphries let me know that chapter 15 of Montgomery and Vaughan’s text also has these. This text might be more readily available and perhaps in more modern notation. In fact, I have a copy — but I suppose I either never got to chapter 15 or didn’t have it nicely digested when I needed it).

Motivation and Application

Infinite lines of poorly understood poles appear regularly while studying shifted convolution series of the shape \begin{equation} D(s) = \sum_{n \geq 1} \frac{a(n) a(n \pm h)}{n^s} \end{equation} for a fixed $h$. When $a(n)$ denotes the (non-normalized) coefficients of a weight $k$ cuspidal Hecke eigenform on a congruence subgroup of $\SL(2, \mathbb{Z})$, for instance, meromorphic continuation can be gotten for the shifted convolution series $D(s)$ through spectral expansion in terms of Maass forms and Eisenstein series, and the Maass forms contribute infinite lines of poles.

Explicit asymptotics take the form \begin{equation} \sum_{n \geq 1} a(n)a(n-h) e^{-n/X} = \sum_j C_j X^{\frac{1}{2} + \sigma_j + it_j} \log^m X \end{equation} where neither the residues nor the imaginary parts $it_j$ are well-understood. Might it be possible for these infinitely many rapidly oscillating terms to experience massive cancellation for all $X$? The theorems above prove that this is not possible.

In this case, applying Theorem 1 with the Perron weight \begin{equation} v(x) = \begin{cases} 1 & x < 1 \\ \frac{1}{2} & x = 1 \\ 0 & x > 1 \end{cases} \end{equation} shows that \begin{equation} \sideset{}{'}\sum_{n \leq X} \frac{a(n)a(n-h)}{n^{k-1}} = \Omega_\pm(\sqrt X). \end{equation} Similarly, Theorem 2 shows that \begin{equation} \sideset{}{'}\sum_{n \leq X} \frac{a(n)a(n-h)}{n^{k-1}} = \Omega_\pm(X^{\frac{1}{2} + \Theta - \epsilon}), \end{equation} where $\Theta \leq 7/64$ is the supremum of the deviations from Selberg’s Eigenvalue Conjecture (sometimes called the non-arithmetic Ramanujan Conjecture).
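
(For this weight, $V(s) = \int_0^1 x^{s-1} \, dx = 1/s$, so the non-real poles of $D(s)V(s)$ are exactly the non-real poles of $D(s)$, and the theorems apply directly.)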

More generally, these shifted convolution series appear when studying the sizes of sums of coefficients of modular forms. A few years ago, Hulse, Kuan, Walker, and I began an investigation of the Dirichlet series whose coefficients are $\lvert A(n) \rvert^2$ (where $A(n)$ is the sum of the first $n$ coefficients of a modular form), and we showed that this series has meromorphic continuation to $\mathbb{C}$. The behavior of the infinite lines of poles in the discrete spectrum played an important role in the analysis, but we did not yet understand how they affected the resulting asymptotics. I plan on revisiting that work, and others, with these new results in mind.

Proofs

The proofs of these results will soon appear on the arXiv.

Posted in Math.NT, Mathematics | Tagged , , , | Leave a comment

Notes from a talk at the Maine-Quebec Number Theory Conference

Today I will be giving a talk at the Maine-Quebec Number Theory conference. Each year that I attend this conference, I marvel at how friendly and inviting an environment it is — I highly recommend checking the conference out (and perhaps modelling other conferences after it).

The theme of my talk is spectral poles and their contribution towards asymptotics (especially of error terms). I describe a few problems in which spectral poles appear in asymptotics. Unlike the nice simple cases where a single pole (or possibly a few poles) appears, in these problems infinite lines of poles appear.

For a bit over a year, I have encountered these and not known what to make of them. Could you have the pathological case that residues of these poles generically cancel? Could they combine to be larger than expected? How do we make sense of them?

The resolution came only very recently.1

I will later write a dedicated note to this new idea (involving Dirichlet integrals and Landau’s theorem in this context), but for now — here are the slides for my talk.

Posted in Expository, Math.NT, Mathematics | Tagged , , , | 2 Comments

The Insidiousness of Mathematics

insidious (adjective)

1.
a. having a gradual and cumulative effect
b. of a disease: developing so gradually as to be well established before becoming apparent

2.
a. awaiting a chance to entrap
b. harmful but enticing

— Merriam-Webster Dictionary

In early topics in mathematics, one can often approach a topic from a combination of intuition and first principles in order to deduce the desired results. In later topics, it becomes necessary to repeatedly sharpen intuition while taking advantage of the insights of the many mathematicians who came before — one sees much further by standing on the shoulders of giants. Somewhere in the middle, it becomes necessary to accept the idea that there are topics and ideas that are not at all obvious. They might appear to have been plucked out of thin air. And this is a conceptual boundary.

In my experience, calculus is often the class where students primarily confront the idea that it is necessary to take advantage of the good ideas of the past. It sneaks up. The main ideas of calculus are intuitive — local rates of change can be approximated by slopes of secant lines and areas under curves can be approximated by sums of areas of boxes. That these are deeply connected is surprising.

To many students, Taylor’s Theorem is one of the first examples of a commonly-used result whose proof has some aspect which appears to have been plucked out of thin air.1 Learning Taylor’s Theorem in high school was one of the things that inspired me to begin to revisit calculus with an eye towards why each result was true.

I also began to try to prove the fundamental theorems of single and multivariable calculus with as little machinery as possible. High school me thought that topology was overcomplicated and unnecessary for something so intuitive as calculus.2

This train of thought led to my previous note, on another proof of Taylor’s Theorem. That note is a simplified version of one of the first proofs I devised on my own.

Much less obviously, this train of thought also led to the paper on the mean value theorem written with Miles. Originally I had thought that “nice” functions should clearly have continuous choices for mean value abscissae, and I thought that this could be used to provide alternate proofs for some fundamental calculus theorems. It turns out that there are very nice functions that don’t have continuous choices for mean value abscissae, and that actually using that result to prove classical calculus results is often more technical than the typical proofs.

The flow of ideas is turbulent, highly nonlinear.

I used to think that developing extra rigor early on in my mathematical education was the right way to get to deeper ideas more quickly. There is a kernel of truth to this, as transitioning from pre-rigorous mathematics to rigorous mathematics is very important. But it is also necessary to transition to post-rigorous mathematics (and more generally, to choose one’s battles) in order to organize and communicate one’s thoughts.

In hindsight, I think now that I was focused on the wrong aspect. As a high school student, I had hoped to discover the obvious, clear, intuitive proofs of every result. Of course it is great to find these proofs when they exist, but it would have been better to grasp earlier that sometimes these proofs don’t exist. And rarely does actual research proceed so cleanly — it’s messy and uncertain and full of backtracking and random exploration.

Posted in Expository, Math.CA, Mathematics | Leave a comment

Another proof of Taylor’s Theorem

In this note, we produce a proof of Taylor’s Theorem. As in many proofs of Taylor’s Theorem, we begin with a curious start and then follow our noses forward.

Is this a new proof? I think so. But I wouldn’t bet a lot of money on it. It’s certainly new to me.

Is this a groundbreaking proof? No, not at all. But it’s cute, and I like it.1

We begin with the following simple observation. Suppose that $f$ is twice continuously differentiable. Then for any $t \neq 0$, we see that \begin{equation} f'(t) - f'(0) = \frac{f'(t) - f'(0)}{t} t. \end{equation} Integrating each side from $0$ to $x$, we find that \begin{equation} f(x) - f(0) - f'(0) x = \int_0^x \frac{f'(t) - f'(0)}{t} t \, dt. \end{equation} To interpret the integral on the right in a different way, we will use the mean value theorem for integrals.

Mean Value Theorem for Integrals

Suppose that $g$ and $h$ are continuous functions, and that $h$ doesn’t change sign in $[0, x]$. Then there is a $c \in [0, x]$ such that \begin{equation} \int_0^x g(t) h(t) dt = g(c) \int_0^x h(t) dt. \end{equation}

Suppose without loss of generality that $h(t)$ is nonnegative. Since $g$ is continuous on $[0, x]$, it attains its minimum $m$ and maximum $M$ on this interval. Thus \begin{equation} m \int_0^x h(t) dt \leq \int_0^x g(t)h(t)dt \leq M \int_0^x h(t) dt. \end{equation} Let $I = \int_0^x h(t) dt$. If $I = 0$ (or equivalently, if $h(t) \equiv 0$), then the theorem is trivially true, so suppose instead that $I \neq 0$. Then \begin{equation} m \leq \frac{1}{I} \int_0^x g(t) h(t) dt \leq M. \end{equation} By the intermediate value theorem, $g(t)$ attains every value between $m$ and $M$, and thus there exists some $c$ such that \begin{equation} g(c) = \frac{1}{I} \int_0^x g(t) h(t) dt. \end{equation} Rearranging proves the theorem.

For this application, let $g(t) = (f'(t) - f'(0))/t$ for $t \neq 0$, and $g(0) = f'{}'(0)$. The continuity of $g$ at $0$ is exactly the condition that $f'{}'(0)$ exists. We also let $h(t) = t$.

For $x > 0$, it follows from the mean value theorem for integrals that there exists a $c \in [0, x]$ such that \begin{equation} \int_0^x \frac{f'(t) - f'(0)}{t} t \, dt = \frac{f'(c) - f'(0)}{c} \int_0^x t \, dt = \frac{f'(c) - f'(0)}{c} \frac{x^2}{2}. \end{equation} (Very similar reasoning applies for $x < 0$.) Finally, by the mean value theorem (applied to $f'$), there exists a point $\xi \in (0, c)$ such that \begin{equation} f'{}'(\xi) = \frac{f'(c) - f'(0)}{c}. \end{equation} Putting this together, we have proved that there is a $\xi \in (0, x)$ such that \begin{equation} f(x) - f(0) - f'(0) x = f'{}'(\xi) \frac{x^2}{2}, \end{equation} which is one version of Taylor’s Theorem with a linear approximating polynomial.

This approach generalizes. Suppose $f$ is a $(k+1)$ times continuously differentiable function, and begin with the trivial observation that \begin{equation} f^{(k)}(t) - f^{(k)}(0) = \frac{f^{(k)}(t) - f^{(k)}(0)}{t} t. \end{equation} Iteratively integrate $k$ times: first from $0$ to $t_1$, then from $0$ to $t_2$, and so on, with the $k$th interval being from $0$ to $t_k = x$.
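
For instance, when $k = 2$: integrating $f'{}'(t) - f'{}'(0)$ from $0$ to $t_1$ gives $f'(t_1) - f'(0) - f'{}'(0) t_1$, and integrating that from $0$ to $t_2 = x$ gives \begin{equation} f(x) - f(0) - f'(0) x - f'{}'(0) \frac{x^2}{2}, \end{equation} the difference between $f$ and its quadratic Taylor polynomial.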

Then the left hand side becomes \begin{equation} f(x) - \sum_{n = 0}^k f^{(n)}(0)\frac{x^n}{n!}, \end{equation} the difference between $f$ and its degree $k$ Taylor polynomial. The right hand side is
\begin{equation}\label{eq:only}\underbrace{\int _0^{t_k = x} \cdots \int _0^{t _1}} _{k \text{ times}} \frac{f^{(k)}(t) - f^{(k)}(0)}{t} t \, dt \, dt _1 \cdots dt _{k-1}.\end{equation}

To handle this, we note the following variant of the mean value theorem for integrals.

Mean value theorem for iterated integrals

Suppose that $g$ and $h$ are continuous functions, and that $h$ doesn’t change sign in $[0, x]$. Then there is a $c \in [0, x]$ such that \begin{equation} \underbrace{\int_0^{t _k=x} \cdots \int _0^{t _1}} _{k \; \text{times}} g(t) h(t) \, dt \, dt _1 \cdots dt _{k-1} = g(c) \underbrace{\int _0^{t _k=x} \cdots \int _0^{t _1}} _{k \; \text{times}} h(t) \, dt \, dt _1 \cdots dt _{k-1}. \end{equation}

In fact, this can be proved in almost exactly the same way as in the single-integral version, so we do not repeat the proof.

With this theorem, there is a $c \in [0, x]$ such that \eqref{eq:only} can be written as \begin{equation} \frac{f^{(k)}(c) - f^{(k)}(0)}{c} \underbrace{\int _0^{t _k = x} \cdots \int _0^{t _1}} _{k \; \text{times}} t \, dt \, dt _1 \cdots dt _{k-1}. \end{equation} By the mean value theorem, the factor in front of the integrals can be written as $f^{(k+1)}(\xi)$ for some $\xi \in (0, x)$. The iterated integral evaluates directly to $x^{k+1}/(k+1)!$.

Thus overall, we find that \begin{equation} f(x) = \sum_{n = 0}^k f^{(n)}(0) \frac{x^n}{n!} + f^{(k+1)}(\xi) \frac{x^{k+1}}{(k+1)!} \end{equation} for some $\xi \in (0, x)$. This proves Taylor’s Theorem (with Lagrange’s error bound).
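
As a quick numerical sanity check (not part of the proof, just a demonstration), here is a small Python snippet verifying the statement for $f = \sin$ and $k = 3$: the remainder divided by $x^4/4!$ should land in the range of $f^{(4)} = \sin$ on $[0, x]$.

```python
# Numerical sanity check of Taylor's theorem with Lagrange remainder for
# f = sin and k = 3 (a demonstration, not part of the proof above).
import numpy as np

x = 1.2
taylor3 = x - x**3 / 6                 # degree-3 Taylor polynomial of sin at 0
remainder = np.sin(x) - taylor3

# The theorem asserts remainder = f''''(xi) * x^4 / 4! for some xi in (0, x),
# and f'''' = sin, so the implied value of sin(xi) is:
implied = remainder / (x**4 / 24)
print(implied)                         # about 0.23 here

t = np.linspace(0, x, 10001)
print(np.sin(t).min() <= implied <= np.sin(t).max())   # True: such a xi exists
```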

Posted in Math.CA, Mathematics | Tagged , | Leave a comment