Author Archives: mixedmath

Revealing zero in fully homomorphic encryption is a Bad Thing

When I was first learning number theory, cryptography seemed really fun and really practical. I thought elementary number theory was elegant, and that cryptography was an elegant application. As I continued to learn more about mathematics, and in particular modern mathematics, I began to realize that decades of instruction and improvement (and perhaps of more useful points of view) have simplified the presentation of elementary number theory, and that modern mathematics is less elegant in presentation.

Similarly, as I learned more about cryptography, I learned that though the basic ideas are very simple, their application is often very inelegant. For example, the basis of RSA follows immediately from Euler’s Theorem as learned while studying elementary number theory, or alternately from Lagrange’s Theorem as learned while studying group theory or abstract algebra. And further, these are very early topics in these two areas of study!

But a naive implementation of RSA is doomed (For that matter, many professional implementations have their flaws too). Every now and then, a very clever expert comes up with a new attack on popular cryptosystems, generating new guidelines and recommendations. Some guidelines make intuitive sense [e.g. don’t use too small of an exponent for either the public or secret keys in RSA], but many are more complicated or designed to prevent more sophisticated attacks [especially side-channel attacks].

In the summer of 2013, I participated in the ICERM IdeaLab working towards more efficient homomorphic encryption. We were playing with existing homomorphic encryption schemes and trying to come up with new methods. One guideline that we followed is that an attacker should not be able to recognize an encryption of zero. This seems like a reasonable guideline, but I didn’t really understand why, until I was chatting with others at the 2017 Joint Mathematics Meetings in Atlanta.

It turns out that revealing zero isn’t just against generally sound advice. Revealing zero is a capital B capital T Bad Thing.

Basic Setup

For the rest of this note, I’ll try to identify some of this reasoning.

In a typical cryptosystem, the basic setup is as follows. Andrew has a message that he wants to send to Beatrice. So Andrew converts the message into a list of numbers $M$, and uses some sort of encryption function $E(\cdot)$ to encrypt $M$, forming a ciphertext $C$. We can represent this as $C = E(M)$. Andrew transmits $C$ to Beatrice. If an eavesdropper Eve happens to intercept $C$, it should be very hard for Eve to recover any information about the original message from $C$. But when Beatrice receives $C$, she uses a corresponding decryption function $D(\cdot)$ to decrypt $C$, recovering $M = D(C)$.

Often, the encryption and decryption techniques are based on number theoretic or combinatorial primitives. Some of these have extra structure (at least in basic implementations). For instance, the RSA cryptosystem involves a public exponent $e$, a public modulus $N$, and a private exponent $d$. Andrew encrypts the message $M$ by computing $C = E(M) \equiv M^e \bmod N$. Beatrice decrypts the message by computing $C^d \equiv M^{ed} \equiv M \bmod N$.

Notice that in the RSA system, given two messages $M_1, M_2$ and corresponding ciphertexts $C_1, C_2$, we have that
\begin{equation}
E(M_1 M_2) \equiv (M_1 M_2)^e \equiv M_1^e M_2^e \equiv E(M_1) E(M_2) \pmod N. \notag
\end{equation}
The encryption function $E(\cdot)$ is a group homomorphism. This is an example of extra structure.
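As a quick sanity check, here is a minimal sketch of this multiplicative property using toy textbook-sized RSA parameters (wildly insecure; all numbers here are illustrative, not from any real deployment):

```python
# Toy RSA parameters (textbook-sized; insecure, for illustration only)
p, q = 61, 53
N = p * q                          # public modulus, 3233
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

def E(m):                          # encryption
    return pow(m, e, N)

def D(c):                          # decryption
    return pow(c, d, N)

M1, M2 = 42, 99
# E is multiplicative: E(M1 * M2) == E(M1) * E(M2) (mod N)
assert E(M1 * M2 % N) == E(M1) * E(M2) % N
assert D(E(M1)) == M1 and D(E(M2)) == M2
print("homomorphism verified")
```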

A fully homomorphic cryptosystem has an encryption function $E(\cdot)$ satisfying both $E(M_1 + M_2) = E(M_1) + E(M_2)$ and $E(M_1M_2) = E(M_1)E(M_2)$ (or more generally an analogous pair of operations). That is, $E(\cdot)$ is a ring homomorphism.

This extra structure allows for (a lot of) extra utility. A fully homomorphic $E(\cdot)$ would allow one to perform meaningful operations on encrypted data, even though you can’t read the data itself. For example, a clinic could store (encrypted) medical information on an external server. A doctor or nurse could pull out a cellphone or tablet with relatively little computing power or memory and securely query the medical data. Fully homomorphic encryption would allow one to securely outsource data infrastructure.

A different usage model suggests that we use a different mental model. So suppose Alice has sensitive data that she wants to store for use on EveCorp’s servers. Alice knows an encryption method $E(\cdot)$ and a decryption method $D(\cdot)$, while EveCorp only ever has mountains of ciphertexts, and cannot read the data [even though they have it].

Why revealing zero is a Bad Thing

Let us now consider some basic cryptographic attacks. We should assume that EveCorp has access to a long list of plaintext messages $M_i$ and their corresponding ciphertexts $C_i$. Not everything, but perhaps from small leaks or other avenues. Among the messages $M_i$ it is very likely that there are two messages $M_1, M_2$ which are relatively prime. Then an application of the Euclidean Algorithm gives a linear combination of $M_1$ and $M_2$ such that
\begin{equation}
M_1 x + M_2 y = 1 \notag
\end{equation}
for some integers $x,y$. Even though EveCorp doesn’t know the encryption method $E(\cdot)$, since we are assuming that they have access to the corresponding ciphertexts $C_1$ and $C_2$, EveCorp has access to an encryption of $1$ using the ring homomorphism properties:
\begin{equation}\label{eq:encryption_of_one}
E(1) = E(M_1 x + M_2 y) = x E(M_1) + y E(M_2) = x C_1 + y C_2.
\end{equation}
By multiplying $E(1)$ by $m$, EveCorp has access to a plaintext and encryption of $m$ for any message $m$.

Now suppose that EveCorp can always recognize an encryption of $0$. Then EveCorp can mount a variety of attacks exposing information about the data it holds.

For example, EveCorp can test whether a particular message $m$ is contained in the encrypted dataset. First, EveCorp generates a ciphertext $C_m$ for $m$ by multiplying $E(1)$ by $m$, as in \eqref{eq:encryption_of_one}. Then for each ciphertext $C$ in the dataset, EveCorp computes $C - C_m$. If $m$ is contained in the dataset, then $C - C_m$ will be an encryption of $0$ for the $C$ corresponding to $m$. EveCorp recognizes this, and now knows that $m$ is in the data. To be more specific, perhaps a list of encrypted names of medical patients appears in the data, and EveCorp wants to see if JohnDoe is in that list. If they can recognize encryptions of $0$, then EveCorp can access this information.
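To make the shape of the attack concrete, here is a minimal sketch against a deliberately insecure stand-in scheme (ciphertexts are just residues modulo a secret prime, and every name and parameter below is hypothetical); the point is only the structure of the attack, not the scheme:

```python
from math import gcd

# Stand-in "fully homomorphic" scheme: ciphertexts are residues mod a
# secret prime P.  Insecure and illustrative only, but E is a ring
# homomorphism, which is all the attack needs.
P = 10**9 + 7

def E(m):                 # encryption (uses the secret P)
    return m % P

def c_add(c1, c2):        # public ciphertext operations (in real schemes
    return (c1 + c2) % P  # these use a public evaluation key)

def c_scale(k, c):
    return (k * c) % P

def is_zero(c):           # the dangerous oracle: recognizes encryptions of 0
    return c == 0

def ext_gcd(a, b):        # extended Euclidean algorithm: g, x, y with ax+by=g
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

# EveCorp's known plaintext/ciphertext pairs, with gcd(M1, M2) = 1
M1, M2 = 35, 64
C1, C2 = E(M1), E(M2)
g, x, y = ext_gcd(M1, M2)
assert g == gcd(M1, M2) == 1
C_one = c_add(c_scale(x, C1), c_scale(y, C2))   # an encryption of 1

# Membership test against an encrypted dataset
dataset = [E(m) for m in (1234, 5678, 9999)]

def contains(m):
    C_m = c_scale(m, C_one)                     # an encryption of m
    return any(is_zero(c_add(c, c_scale(-1, C_m))) for c in dataset)

assert contains(5678) and not contains(4242)
print("attack succeeded")
```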

And thus it is unacceptable for external entities to be able to consistently recognize encryptions of $0$.

Up to now, I’ve been a bit loose by saying “an encryption of zero” or “an encryption of $m$”. The reason for this is that to protect against recognition of encryptions of $0$, some entropy is added to the encryption function $E(\cdot)$, making it multivalued. So if we have a message $M$ and we encrypt it once to get $E(M)$, and we encrypt $M$ later and get $E'(M)$, it is often not true that $E(M) = E'(M)$, even though they are both encryptions of the same message. But these systems are designed so that it is true that $D(E(M)) = D(E'(M)) = M$, so that the entropy doesn’t matter.
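For a rough sense of how this entropy enters, here is a toy randomized scheme in the spirit of integer-based homomorphic constructions (again insecure, with made-up parameters, and purely illustrative):

```python
import random

# Toy randomized encryption: mask the message with a random multiple of a
# secret modulus.  Insecure and illustrative only.
P = 1_000_003                      # secret modulus

def E(m, r):
    return m + P * r               # fresh randomness r per encryption

def D(c):
    return c % P

M = 1234
r1, r2 = random.sample(range(1, 2**20), 2)   # two distinct random masks
c1, c2 = E(M, r1), E(M, r2)
assert c1 != c2                    # two encryptions of M look different...
assert D(c1) == D(c2) == M         # ...but both decrypt to M
print("randomized encryption round-trips")
```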

This is a separate matter, and something that I will probably return to later.

Posted in Crypto, Math.NT, Mathematics, Programming | Tagged , | Leave a comment

My Teaching

I am currently not teaching anything. Instead I am visiting MSRI in Berkeley, California.

In fall 2016, I taught Math 100 (second semester calculus, starting with integration by parts and going through sequences and series) at Brown University. Here are my concluding remarks.

In spring 2016, I designed and taught Math 42 (elementary number theory) at Brown University. My students were exceptional — check out a showcase of some of their final projects. Here are my concluding remarks.

In fall 2014, I taught Math 170 (advanced placement second semester calculus) at Brown University.

I taught number theory in the Summer@Brown program for high school students in the summers of 2013-2015.

I taught a privately requested course in precalculus in the summer of 2013.

I have served as a TA (many, many, many times) for

  • Math 90 (first semester calculus) at Brown University
  • Math 100 (second semester calculus) at Brown University
  • Math 1501 (first semester calculus) at Georgia Tech
  • Math 1502 (second semester calculus, starting with sequences and series but also with 7 weeks of linear algebra) at Georgia Tech
  • Math 2401 (multivariable calculus) at Georgia Tech (there’s essentially no content on this site about this – this was just before I began to maintain a website)

I sometimes tutor at Brown (but not limited to Brown students) and around Boston, on a wide variety of topics (not just the ordinary, boring ones). I charge $80/hour, but I am not currently looking for tutees.

Below, you can find my most recent posts tagged under “Teaching”.

Posted in Brown University, Math 100, Teaching | Leave a comment

Programming Masthead

I maintain the following programming projects:

HNRSS: (source), a HackerNews RSS generator written in python. HNRSS periodically updates RSS feeds from the HN frontpage and best list. It also attempts to automatically summarize the link (if there is a link) and includes the top five comments, all to make it easier to determine whether it’s worth checking out.

LaTeX2Jax: (source), a tool to convert LaTeX documents to HTML with MathJax. This is a modification of the earlier MSE2WP, which converts Math.StackExchange flavored markdown to WordPress+MathJax compatible html. In particular, this is more general, and allows better control of the resulting html by exposing more CSS elements (that generically aren’t available on free WordPress setups). This is what is used for all math posts on this site.

MSE2WP: (source), a tool to convert Math.Stackexchange flavored markdown to WordPress+MathJax compatible html. This was once written for the Math.Stackexchange Community Blog. But as that blog is shutting down, there is much less of a purpose for this script. Note that this began as a modified version of latex2wp.


I actively contribute to:

python-markdown2: (source),  a fast and complete python implementation of markdown, with a few additional features.


And I generally support or have contributed to:

SageMath: (main site), a free and open source system of tools for mathematics. Some think of it as a free alternative to the “Big M’s” — Maple, Mathematica, Magma.

Matplotlib: (main site), a plotting library in python. Most of the static plots on this site were created using matplotlib.

crouton: (source), a tool for making Chromebooks, which by default are very limited in capability, into hackable linux laptops. This lets you directly run Linux on the device at the same time as having ChromeOS installed. The only cost is that there is absolutely no physical security at all (and every once in a while a ChromeOS update comes around and breaks lots of things). It’s great!


Below, you can find my most recent posts tagged “Programming” on this site.

I will note the following posts which have received lots of positive feedback.

  1. A Notebook Preparing for a Talk at Quebec-Maine
  2. A Brief Notebook on Cryptography
  3. Computing pi with Tools from Calculus (which includes computational tidbits, though no actual programming).
Posted in Programming | Leave a comment

Math 100 Fall 2016: Concluding Remarks

It is that time of year. Classes are over. Campus is emptying. Soon it will be mostly emptiness, snow, and grad students (who of course never leave).

I like to take some time to reflect on the course. How did it go? What went well and what didn’t work out? And now that all the numbers are in, we can examine course trends and data.

Since numbers are direct and graphs are pretty, let’s look at the numbers first.

Math 100 grades at a glance

Let’s get an understanding of the distribution of grades in the course, all at once.

[Figure: box plots of Math 100 grade distributions]

These are classic box plots. The center line of each box denotes the median. The left and right ends of the box indicate the 1st and 3rd quartiles. As a quick reminder, the 1st quartile is the point where 25% of students received that grade or lower. The 3rd quartile is the point where 75% of students received that grade or lower. So within each box lies 50% of the course.

Each box has two arms (or “whiskers”) extending out, indicating the remaining grades of students. Points that are plotted separately are statistical outliers, meaning that they lie more than $1.5 \cdot (Q_3 - Q_1)$ above $Q_3$ or below $Q_1$ (where $Q_1$ denotes the first quartile and $Q_3$ the third quartile).
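In code, the outlier rule reads as follows (a small sketch on made-up grades, using Python's statistics module; these are not actual course grades):

```python
import statistics

# Hypothetical grades, one deliberately extreme
grades = [12, 55, 62, 68, 71, 74, 76, 79, 81, 84, 88, 91, 97]

q1, _, q3 = statistics.quantiles(grades, n=4)   # quartile cut points
iqr = q3 - q1
low_fence, high_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr

outliers = [g for g in grades if g < low_fence or g > high_fence]
print(outliers)   # the 12 falls below the lower fence
```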

A bit more information about the distribution itself can be seen in the following graph.

[Figure: violin plots of Math 100 grade distributions]

Within each blob, you’ll notice an embedded box-and-whisker graph. The white dots indicate the medians, and the thicker black parts indicate the central 50% of the grades. The width of each colored blob roughly indicates how many students scored within that region. [As an aside, each blob has the same total area, so it is the shape, not the size, that carries information].

(more…)

Posted in Brown University, Math 100, Mathematics, Teaching | Tagged , , , , | 1 Comment

Computing pi with tools from Calculus

Computing $\pi$

This note was originally written in the context of my fall Math 100 class at Brown University. It is also available as a pdf note.

While investigating Taylor series, we proved that
\begin{equation}\label{eq:base}
\frac{\pi}{4} = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \frac{1}{9} - \cdots
\end{equation}
Let’s remind ourselves how. Begin with the geometric series
\begin{equation}
\frac{1}{1 + x^2} = 1 - x^2 + x^4 - x^6 + x^8 - \cdots = \sum_{n = 0}^\infty (-1)^n x^{2n}. \notag
\end{equation}
(We showed that this has interval of convergence $\lvert x \rvert < 1$). Integrating this geometric series yields
\begin{equation}
\int_0^x \frac{1}{1 + t^2} dt = x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \cdots = \sum_{n = 0}^\infty (-1)^n \frac{x^{2n+1}}{2n+1}. \notag
\end{equation}
Note that this has interval of convergence $-1 < x \leq 1$.

We also recognize this integral as
\begin{equation}
\int_0^x \frac{1}{1 + t^2} dt = \text{arctan}(x), \notag
\end{equation}
one of the common integrals arising from trigonometric substitution. Putting these together, we find that
\begin{equation}
\text{arctan}(x) = x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + \cdots = \sum_{n = 0}^\infty (-1)^n \frac{x^{2n+1}}{2n+1}. \notag
\end{equation}
As $x = 1$ is within the interval of convergence, we can substitute $x = 1$ into the series to find the representation
\begin{equation}
\text{arctan}(1) = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + \cdots = \sum_{n = 0}^\infty (-1)^n \frac{1}{2n+1}. \notag
\end{equation}
Since $\text{arctan}(1) = \frac{\pi}{4}$, this gives the representation for $\pi/4$ given in \eqref{eq:base}.

However, since $x=1$ was at the very edge of the interval of convergence, this series converges very, very slowly. For instance, using the first $50$ terms gives the approximation
\begin{equation}
\pi \approx 3.121594652591011. \notag
\end{equation}
The expansion of $\pi$ is actually
\begin{equation}
\pi = 3.141592653589793238462\ldots \notag
\end{equation}
So the first $50$ terms of \eqref{eq:base} gives two digits of accuracy. That’s not very good.
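A few lines of code (a quick sketch, not from the original note) make the slow convergence vivid:

```python
import math

# Partial sums of pi = 4*(1 - 1/3 + 1/5 - 1/7 + ...)
def leibniz_pi(n_terms):
    return 4 * sum((-1)**n / (2 * n + 1) for n in range(n_terms))

for n in (50, 5000, 500000):
    approx = leibniz_pi(n)
    print(n, approx, abs(math.pi - approx))
# Even 500000 terms give only about six digits of pi.
```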

I think it is very natural to ask: can we do better? This series converges slowly — can we find one that converges more quickly?

(more…)

Posted in Brown University, Expository, Math 100, Mathematics, Teaching | Tagged , , , | Leave a comment

Series Convergence Tests with Prototypical Examples

This is a note written for my Fall 2016 Math 100 class at Brown University. We are currently learning about various tests for determining whether series converge or diverge. In this note, we collect these tests together in a single document. We give a brief description of each test, some indicators of when each test is a good one to use, and a prototypical example for each. Note that we do not justify any of these tests here — we’ve discussed that extensively in class. [But if something is unclear, send me an email or head to my office hours]. This note serves as a reminder of the variety of convergence tests available.

A copy of just the statements of the tests, put together, can be found here. A pdf copy of this whole post can be found here.

In order, we discuss the following tests:

  1. The $n$th term test, also called the basic divergence test
  2. Recognizing an alternating series
  3. Recognizing a geometric series
  4. Recognizing a telescoping series
  5. The Integral Test
  6. P-series
  7. Direct (or basic) comparison
  8. Limit comparison
  9. The ratio test
  10. The root test

The $n$th term test

Statement

Suppose we are looking at $\sum_{n = 1}^\infty a_n$ and
\begin{equation}
\lim_{n \to \infty} a_n \neq 0. \notag
\end{equation}
Then $\sum_{n = 1}^\infty a_n$ does not converge.

When to use it

When applicable, the $n$th term test for divergence is usually the easiest and quickest way to confirm that a series diverges. When first considering a series, it’s a good idea to think about whether the terms go to zero or not. But remember that if the limit of the individual terms is zero, then it is necessary to think harder about whether the series converges or diverges.

Example

Each of the series
\begin{equation}
\sum_{n = 1}^\infty \frac{n+1}{2n + 4}, \quad \sum_{n = 1}^\infty \cos n, \quad \sum_{n = 1}^\infty \sqrt{n} \notag
\end{equation}
diverges, since in each case the terms do not tend to $0$.
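For the first series, a quick computation (a hypothetical sketch, not part of the original note) shows the terms approaching $1/2$ rather than $0$:

```python
# The terms (n+1)/(2n+4) tend to 1/2, not 0, so the series diverges
# by the nth term test.
terms = [(n + 1) / (2 * n + 4) for n in (10, 1000, 100000)]
print(terms)   # approaches 0.5 from below
```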

Recognizing alternating series

Statement

Suppose $\sum_{n = 1}^\infty (-1)^n a_n$ is a series where

  1. $a_n \geq 0$,
  2. $a_n$ is decreasing, and
  3. $\lim_{n \to \infty} a_n = 0$.

Then $\sum_{n = 1}^\infty (-1)^n a_n$ converges.

Stated differently, if the terms are alternating sign, decreasing in absolute size, and converging to zero, then the series converges.

When to use it

The key is in the name — if the series is alternating, then this is the go-to test. Note that if the terms of a series alternate and decrease, but do not tend to zero, then the series diverges by the $n$th term test.

Example

Suppose we are looking at the series
\begin{equation}
\sum_{n = 1}^\infty \frac{(-1)^n}{\log(n+1)} = -\frac{1}{\log 2} + \frac{1}{\log 3} - \frac{1}{\log 4} + \cdots \notag
\end{equation}
The terms are alternating.
The sizes of the terms are $\frac{1}{\log (n+1)}$, and these are decreasing.
Finally,
\begin{equation}
\lim_{n \to \infty} \frac{1}{\log(n+1)} = 0. \notag
\end{equation}
Thus the alternating series test applies and shows that this series converges.
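We can watch this convergence numerically (a quick sketch; the convergence is quite slow, since $1/\log(n+1)$ decays slowly):

```python
import math

# Partial sums of sum_{n>=1} (-1)^n / log(n+1)
def partial_sum(N):
    return sum((-1)**n / math.log(n + 1) for n in range(1, N + 1))

for N in (10, 100, 10000):
    print(N, partial_sum(N))
# Odd-index partial sums increase, even-index ones decrease, and the
# limit is squeezed between them.
```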

(more…)

Posted in Brown University, Math 100, Mathematics, Teaching | Tagged , , , | Leave a comment

A Notebook Preparing for a Talk at Quebec-Maine

This is a notebook containing a representative sample of the code I used to  generate the results and pictures presented at the Quebec-Maine Number Theory Conference on 9 October 2016. It was written in a Jupyter Notebook using Sage 7.3, and later converted for presentation on this site.
There is a version of the notebook available on github. Alternately, a static html version without WordPress formatting is available here. Finally, this notebook is also available in pdf form.
The slides for my talk are available here.

Testing for a Generalized Conjecture on Iterated Sums of Coefficients of Cusp Forms

Let $f$ be a weight $k$ cusp form with Fourier expansion

$$ f(z) = \sum_{n \geq 1} a(n) e(nz). $$

Deligne has shown that $a(n) \ll n^{\frac{k-1}{2} + \epsilon}$. It is conjectured that

$$ S_f^1(X) := \sum_{m \leq X} a(m) \ll X^{\frac{k-1}{2} + \frac{1}{4} + \epsilon}. $$

It is known that this holds on average, and we recently showed that this holds on average in short intervals.
(See HKLDW1, HKLDW2, and HKLDW3 for details and an overview of work in this area).
This is particularly notable, as the resulting exponent is only 1/4 higher than that of a single coefficient.
This indicates extreme cancellation, far more than what is implied merely by the signs of $a(n)$ being random.

It seems that we also have

$$ \sum_{m \leq X} S_f^1(m) \ll X^{\frac{k-1}{2} + \frac{2}{4} + \epsilon}. $$

That is, the sum of sums seems to add in only an additional 1/4 exponent.
This is unexpected and a bit mysterious.

The purpose of this notebook is to explore this and higher conjectures.
Define the $j$th iterated sum as

$$ S_f^j(X) := \sum_{m \leq X} S_f^{j-1} (m).$$

Then we numerically estimate bounds on the exponent $\delta(j)$ such that

$$ S_f^j(X) \ll X^{\frac{k-1}{2} + \delta(j) + \epsilon}. $$

In [1]:
# This was written in SageMath 7.3 through a Jupyter Notebook.
# Jupyter interfaces to sage by loading it as an extension
%load_ext sage

# sage plays strangely with ipython. This re-allows inline plotting
from IPython.display import display, Image

We first need a list of coefficients of one (or more) cusp forms.
For initial investigation, we begin with a list of 50,000 coefficients of the weight $12$ cusp form on $\text{SL}(2, \mathbb{Z})$, $\Delta(z)$, i.e. Ramanujan’s delta function.
We will use the data associated to the 50,000 coefficients for pictorial investigation as well.

We will be performing some numerical investigation as well.
For this, we will use the first 2.5 million coefficients of $\Delta(z)$.

In [2]:
# Gather 10 coefficients for simple checking
check_10 = delta_qexp(11).coefficients()
print check_10

fiftyk_coeffs = delta_qexp(50000).coefficients()
print fiftyk_coeffs[:10] # these match expected

twomil_coeffs = delta_qexp(2500000).coefficients()
print twomil_coeffs[:10] # these also match expected
[1, -24, 252, -1472, 4830, -6048, -16744, 84480, -113643, -115920]
[1, -24, 252, -1472, 4830, -6048, -16744, 84480, -113643, -115920]
[1, -24, 252, -1472, 4830, -6048, -16744, 84480, -113643, -115920]
In [3]:
# Function which iterates partial sums from a list of coefficients

def partial_sum(baselist):
    ret_list = [baselist[0]]
    for b in baselist[1:]:
        ret_list.append(ret_list[-1] + b)
    return ret_list

print check_10
print partial_sum(check_10) # Should be the partial sums
[1, -24, 252, -1472, 4830, -6048, -16744, 84480, -113643, -115920]
[1, -23, 229, -1243, 3587, -2461, -19205, 65275, -48368, -164288]
In [4]:
# Calculate the first 10 iterated partial sums
# We store them in a single list, `sums_list`
# the zeroth element of the list is the array of initial coefficients
# the first element is the array of first partial sums, S_f(n)
# the second element is the array of second iterated partial sums, S_f^2(n)

fiftyk_sums_list = []
fiftyk_sums_list.append(fiftyk_coeffs) # zeroth index contains coefficients
for j in range(10):                    # jth index contains jth iterate
    fiftyk_sums_list.append(partial_sum(fiftyk_sums_list[-1]))
    
print partial_sum(check_10)
print fiftyk_sums_list[1][:10]         # should match above
    
twomil_sums_list = []
twomil_sums_list.append(twomil_coeffs) # zeroth index contains coefficients
for j in range(10):                    # jth index contains jth iterate
    twomil_sums_list.append(partial_sum(twomil_sums_list[-1]))
    
print twomil_sums_list[1][:10]         # should match above
[1, -23, 229, -1243, 3587, -2461, -19205, 65275, -48368, -164288]
[1, -23, 229, -1243, 3587, -2461, -19205, 65275, -48368, -164288]
[1, -23, 229, -1243, 3587, -2461, -19205, 65275, -48368, -164288]

As is easily visible, the sums alternate in sign very rapidly.
For instance, we believe that the first partial sums should change sign about once every $X^{1/4}$ terms in the interval $[X, 2X]$.
In this exploration, we are interested in the sizes of the coefficients.
But in HKLDW3, we investigated some of the sign changes of the partial sums.

Now seems like a nice time to briefly look at the data we currently have.
What do the first 50 thousand coefficients look like?
We normalize them, getting $A(n) = a(n)/n^{5.5}$, and plot these normalized coefficients.

In [5]:
norm_list = []
for n,e in enumerate(fiftyk_coeffs, 1):
    normalized_element = 1.0 * e / (1.0 * n**(5.5))
    norm_list.append(normalized_element)
print norm_list[:10]
[1.00000000000000, -0.530330085889911, 0.598733612492945, -0.718750000000000, 0.691213333204735, -0.317526448138560, -0.376547696558964, 0.911504835123284, -0.641518061271148, -0.366571226366719]
In [6]:
# Make a quick display
normed_coeffs_plot = scatter_plot(zip(range(1,60000), norm_list), markersize=.02)
normed_coeffs_plot.save("normed_coeffs_plot.png")
display(Image("normed_coeffs_plot.png"))

Since some figures will be featuring prominently in the talk I’m giving at Quebec-Maine, let us make high-quality figures now.


(more…)

Posted in Math.NT, Mathematics, Open, Programming, sagemath | Tagged , , , | 1 Comment

Math 100: Completing the partial fractions example from class

An Unfinished Example

At the end of class today, someone asked if we could do another example of a partial fractions integral involving an irreducible quadratic. We decided to look at the integral

$$ \int \frac{1}{(x^2 + 4)(x+1)}dx. $$
Notice that $x^2 + 4$ is an irreducible quadratic polynomial. So when setting up the partial fraction decomposition, we treat the $x^2 + 4$ term as a whole.

So we seek to find a decomposition of the form

$$ \frac{1}{(x^2 + 4)(x+1)} = \frac{A}{x+1} + \frac{Bx + C}{x^2 + 4}. $$
Now that we have the decomposition set up, we need to solve for $A, B,$ and $C$ using whatever methods we feel most comfortable with. Multiplying through by $(x^2 + 4)(x+1)$ leads to

$$ 1 = A(x^2 + 4) + (Bx + C)(x+1) = (A + B)x^2 + (B + C)x + (4A + C). $$
Matching up coefficients leads to the system of equations

$$\begin{align}
0 &= A + B \\
0 &= B + C \\
1 &= 4A + C.
\end{align}$$
So we learn that $A = -B = C$, and $A = 1/5$. So $B = -1/5$ and $C = 1/5$.

Together, this means that

$$ \frac{1}{(x^2 + 4)(x+1)} = \frac{1}{5}\frac{1}{x+1} + \frac{1}{5} \frac{-x + 1}{x^2 + 4}. $$
Recall that if you wanted to, you could check this decomposition by finding a common denominator and checking through.
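One quick way to check it (a sketch, not part of the original note) is to compare the two sides numerically at a few sample points:

```python
# Spot-check the partial fraction decomposition
#   1/((x^2+4)(x+1)) = (1/5)/(x+1) + (1/5)(-x+1)/(x^2+4)
def lhs(x):
    return 1 / ((x**2 + 4) * (x + 1))

def rhs(x):
    return (1 / 5) / (x + 1) + (1 / 5) * (-x + 1) / (x**2 + 4)

for x in (-0.5, 0.0, 0.5, 2.0, 10.0):
    assert abs(lhs(x) - rhs(x)) < 1e-12
print("decomposition verified")
```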

Now that we have performed the decomposition, we can return to the integral. We now have that

$$ \int \frac{1}{(x^2 + 4)(x+1)}dx = \underbrace{\int \frac{1}{5}\frac{1}{x+1}dx}_ {\text{first integral}} + \underbrace{\int \frac{1}{5} \frac{-x + 1}{x^2 + 4} dx.}_ {\text{second integral}} $$
We can handle both of the integrals on the right hand side.

The first integral is

$$ \frac{1}{5} \int \frac{1}{x+1} dx = \frac{1}{5} \ln (x+1) + C. $$

The second integral is a bit more complicated. It’s good to check whether there is a simple $u$-substitution, since there is an $x$ in the numerator and an $x^2$ in the denominator. But unfortunately, this integral needs to be further broken into two pieces that we know how to handle separately.

$$ \frac{1}{5} \int \frac{-x + 1}{x^2 + 4} dx = \underbrace{\frac{-1}{5} \int \frac{x}{x^2 + 4}dx}_ {\text{first piece}} + \underbrace{\frac{1}{5} \int \frac{1}{x^2 + 4}dx.}_ {\text{second piece}} $$

The first piece is now a $u$-substitution problem with $u = x^2 + 4$. Then $du = 2x \, dx$, and so

$$ \frac{-1}{5} \int \frac{x}{x^2 + 4}dx = \frac{-1}{10} \int \frac{du}{u} = \frac{-1}{10} \ln u + C = \frac{-1}{10} \ln (x^2 + 4) + C. $$

The second piece is one of the classic trig substitutions. So we draw a triangle.

[Figure: right triangle with adjacent side $2$, opposite side $x$, and hypotenuse $\sqrt{x^2+4}$]

In this triangle, thinking of the bottom-left angle as $\theta$ (sorry, I forgot to label it), we have that $2\tan \theta = x$, so that $2 \sec^2 \theta \, d\theta = dx$. We can express the hypotenuse of the triangle as $2\sec \theta = \sqrt{x^2 + 4}$.

Going back to our integral, we can think of $x^2 + 4$ as $(\sqrt{x^2 + 4})^2$, so that $x^2 + 4 = (2 \sec \theta)^2 = 4 \sec^2 \theta$. We can now write our integral as

$$ \frac{1}{5} \int \frac{1}{x^2 + 4}dx = \frac{1}{5} \int \frac{1}{4 \sec^2 \theta} 2 \sec^2 \theta d \theta = \frac{1}{5} \int \frac{1}{2} d\theta = \frac{1}{10} \theta. $$
As $2 \tan \theta = x$, we have that $\theta = \text{arctan}(x/2)$. Inserting this into our expression, we have

$$ \frac{1}{5} \int \frac{1}{x^2 + 4} dx = \frac{1}{10} \text{arctan}(x/2) + C. $$

Combining the first integral and the first and second pieces of the second integral (and combining all the constants $C$ into a single constant, which we also denote by $C$), we reach the final expression

$$ \int \frac{1}{(x^2 + 4)(x + 1)} dx = \frac{1}{5} \ln (x+1) - \frac{1}{10} \ln(x^2 + 4) + \frac{1}{10} \text{arctan}(x/2) + C. $$

And this is the answer.
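As a final sanity check (a sketch, assuming $x > -1$ so the logarithm is defined), differentiating the answer numerically recovers the integrand:

```python
import math

def F(x):   # the antiderivative found above
    return (math.log(x + 1) / 5
            - math.log(x**2 + 4) / 10
            + math.atan(x / 2) / 10)

def integrand(x):
    return 1 / ((x**2 + 4) * (x + 1))

h = 1e-6
for x in (0.0, 1.0, 3.0):
    central_diff = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(central_diff - integrand(x)) < 1e-8
print("antiderivative verified")
```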

Other Notes

If you have any questions or concerns, please let me know. As a reminder, I have office hours on Tuesday from 9:30–11:30 (or perhaps noon) in my office, and I highly recommend attending the Math Resource Center in the Kassar House from 8pm–10pm, offered Monday–Thursday. [Especially on Tuesdays and Thursdays, when there tend to be fewer people there].

On my course page, I have linked to two additional resources. One is to Paul’s Online Math notes for partial fraction decomposition (which I think is quite a good resource). The other is to the Khan Academy for some additional worked through examples on polynomial long division, in case you wanted to see more worked examples. This note can also be found on my website, or in pdf form.

Good luck, and I’ll see you in class.

Posted in Math 100, Mathematics, Teaching | Tagged , | Leave a comment

“On Functions Whose Mean Value Abscissas are Midpoints, with Connections to Harmonic Functions” (with Paul Carter)

This is joint work with Paul Carter. Humorously, we completed this while on a cross-country drive as we moved the newly minted Dr. Carter from Brown to Arizona.

I’ve had a longtime fascination with the standard mean value theorem of calculus.

Mean Value Theorem
Suppose $f$ is a differentiable function. Then there is some $c \in (a,b)$ such that
\begin{equation}
\frac{f(b) - f(a)}{b-a} = f'(c).
\end{equation}

The idea for this project started with a simple question: what happens when we interpret the mean value theorem as a differential equation and try to solve it? As stated, this is too broad. To narrow it down, we might specify some restriction on the $c$, which we refer to as the mean value abscissa, guaranteed by the Mean Value Theorem.

So I thought to try to find functions satisfying
\begin{equation}
\frac{f(b) - f(a)}{b-a} = f' \left( \frac{a + b}{2} \right)
\end{equation}
for all $a$ and $b$ as a differential equation. In other words, let’s try to find all functions whose mean value abscissas are midpoints.

This looks like a differential equation, which I only know some things about. But my friend and colleague Paul Carter knows a lot about them, so I thought it would be fun to ask him about it.

He very quickly told me that it’s essentially impossible to solve this from the perspective of differential equations. But like a proper mathematician with applied math leanings, he thought we should explore some potential solutions in terms of their Taylor expansions. Proceeding naively in this way very quickly leads to the answer that those (assumed smooth) solutions are precisely quadratic polynomials.

It turns out that was too simple. It was later pointed out to us that verifying that quadratic polynomials satisfy the midpoint mean value property is a common exercise in calculus textbooks, including the one we use to teach from at Brown. Digging around a bit reveals that this was even known (in geometric terms) to Archimedes.
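Indeed, the midpoint property is easy to verify numerically for a particular quadratic (a quick sketch with a hypothetical $f$, not taken from the paper):

```python
import random

# For f(x) = 3x^2 - 2x + 7 the mean value abscissa is always the midpoint:
# (f(b) - f(a))/(b - a) = 3(a + b) - 2 = f'((a + b)/2)
def f(x):
    return 3 * x**2 - 2 * x + 7

def fprime(x):
    return 6 * x - 2

for _ in range(100):
    a = random.uniform(-10, 10)
    b = a + random.uniform(0.1, 10)   # ensure b > a
    secant = (f(b) - f(a)) / (b - a)
    assert abs(secant - fprime((a + b) / 2)) < 1e-9
print("midpoint property verified")
```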

So I thought we might try to go one step higher, and see what’s up with
\begin{equation}\label{eq:original_midpoint}
\frac{f(b) - f(a)}{b-a} = f' (\lambda a + (1-\lambda) b), \tag{1}
\end{equation}
where $\lambda \in (0,1)$ is a weight. So let’s find all functions whose mean value abscissas are weighted averages. A quick analysis with Taylor expansions shows that the (assumed smooth) solutions are precisely linear polynomials, except when $\lambda = \frac{1}{2}$ (in which case we’re looking back at the original question).
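The Taylor computation behind this is short. Here is a sketch (assuming $f$ smooth): set $b = a + h$, so the mean value abscissa is $a + (1-\lambda)h$, and expand both sides of \eqref{eq:original_midpoint} around $a$,
\begin{align}
\frac{f(a+h) - f(a)}{h} &= f'(a) + \frac{f''(a)}{2} h + \frac{f'''(a)}{6} h^2 + O(h^3), \\
f'\big(a + (1-\lambda)h\big) &= f'(a) + f''(a)(1-\lambda) h + \frac{f'''(a)}{2} (1-\lambda)^2 h^2 + O(h^3).
\end{align}
Comparing the $h$ coefficients gives $f''(a)\big(\tfrac{1}{2} - (1-\lambda)\big) = 0$. When $\lambda \neq \frac{1}{2}$, this forces $f'' \equiv 0$, so only linear polynomials survive. When $\lambda = \frac{1}{2}$, the $h$ coefficients agree automatically, and the first obstruction appears at the $h^2$ coefficients, where $\frac{1}{6} \neq \frac{1}{8}$ forces $f''' \equiv 0$, leaving precisely the quadratic polynomials.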

That’s a bit odd. It turns out that the midpoint itself is distinguished in this way. Why might that be the case?

It is beneficial to look at the mean value property as an integral property instead of a differential property,
\begin{equation}
\frac{1}{b-a} \int_a^b f'(t) \, dt = f'\big(c(a,b)\big).
\end{equation}
We are examining the cases when $c = c_\lambda(a,b) = \lambda a + (1-\lambda) b$. We can see that the right-hand side is differentiable by differentiating the left-hand side directly. Since any point can arise as a weighted average, one sees that $f$ is at least twice-differentiable. One can iterate this argument to show that any $f$ satisfying one of the weighted mean value properties is actually smooth, justifying the Taylor expansion analysis indicated above.

An attentive eye might notice that the midpoint mean value theorem, written as the integral property
\begin{equation}
\frac{1}{b-a} \int_a^b f'(t) \, dt = f' \left( \frac{a + b}{2} \right)
\end{equation}
is exactly the one-dimensional case of the harmonic mean value property, usually written
\begin{equation}
\frac{1}{\lvert B_h(x) \rvert} \int_{B_h(x)} g(t) \, dV = g(x).
\end{equation}
Here, $B_h(x)$ is the ball of radius $h$ and center $x$. Any harmonic function satisfies this mean value property, and any function satisfying this mean value property is harmonic.
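As a sanity check of this mean value property in two dimensions, here is a small numerical experiment of my own (not from the paper): averaging the harmonic function $g(x,y) = x^2 - y^2$ over a disk recovers the value at the center.

```python
import math

# Numerically average g over the disk of radius h centered at (x0, y0)
# using a midpoint rule in polar coordinates.  For harmonic g the
# average should equal g(x0, y0).
def g(x, y):
    return x * x - y * y  # harmonic: g_xx + g_yy = 2 - 2 = 0

def disk_average(g, x0, y0, h, nr=200, nt=200):
    total = 0.0
    dr = h / nr
    dt = 2 * math.pi / nt
    for i in range(nr):
        r = (i + 0.5) * dr
        for j in range(nt):
            t = (j + 0.5) * dt
            # area element in polar coordinates: r dr dt
            total += g(x0 + r * math.cos(t), y0 + r * math.sin(t)) * r * dr * dt
    return total / (math.pi * h * h)

avg = disk_average(g, 1.3, -0.7, 2.0)
print(avg, g(1.3, -0.7))  # both approximately 1.3**2 - 0.7**2 = 1.2
```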

From this viewpoint, functions satisfying our original midpoint mean value property~\eqref{eq:original_midpoint} have harmonic derivatives. But the only one-dimensional harmonic functions are the affine functions $g(x) = cx + d$. This gives immediately that the solutions to~\eqref{eq:original_midpoint} are precisely the quadratic polynomials.

The weighted mean value property can also be written as an integral property. Trying to connect it similarly to harmonic functions led us to consider functions satisfying
\begin{equation}
\frac{1}{\lvert B_h(x) \rvert} \int_{B_h(x)} g(t) \, dV = g\big(c_\lambda(x,h)\big),
\end{equation}
where $c_\lambda(x,h)$ should be thought of as some distinguished point in the ball $B_h(x)$ depending on a weight parameter $\lambda$. More specifically, we asked:

Are there weighted harmonic functions corresponding to a weighted harmonic mean value property?
In one dimension, the answer is no, as seen above. But there are many more multivariable harmonic functions [in fact, I had never thought about harmonic functions on $\mathbb{R}^1$ before this project, as they’re too trivial]. So maybe there are weighted harmonic functions in higher dimensions?

This ends up being the focus of the latter half of our paper. Unexpectedly (to us), an analogous methodology to our approach in the one-dimensional case works, with only a few differences.

It turns out that no, there are no weighted harmonic functions on $\mathbb{R}^n$ other than trivial extensions of harmonic functions from $\mathbb{R}^{n-1}$.

Harmonic functions are very special, even more special than we had thought. The paper is a fun read, and can be found on the arXiv now. It has been accepted and will appear in the American Mathematical Monthly.


Paper: Sign Changes of Coefficients and Sums of Coefficients of Cusp Forms

This is joint work with Thomas Hulse, Chan Ieong Kuan, and Alex Walker, and is another sequel to our previous work. This is the third in a trio of papers, and completes an answer to a question posed by our advisor Jeff Hoffstein two years ago.

We have just uploaded a preprint to the arXiv giving conditions that guarantee that a sequence of numbers contains infinitely many sign changes. More generally, if the sequence consists of complex numbers, then we give conditions that guarantee sign changes in a generalized sense.

Let $\mathcal{W}(\theta_1, \theta_2) := \{ re^{i\theta} : r \geq 0, \theta \in [\theta_1, \theta_2] \}$ denote a wedge of the complex plane.

Suppose $\{a(n)\}$ is a sequence of complex numbers satisfying the following conditions:

  1. $a(n) \ll n^\alpha$,
  2. $\sum_{n \leq X} a(n) \ll X^\beta$,
  3. $\sum_{n \leq X} \lvert a(n) \rvert^2 = c_1 X^{\gamma_1} + O(X^{\eta_1})$,

where $\alpha, \beta, c_1, \gamma_1$, and $\eta_1$ are all real numbers $\geq 0$. Then for any $r$ satisfying $\max(\alpha+\beta, \eta_1) - (\gamma_1 - 1) < r < 1$ and any wedge $\mathcal{W}(\theta_1, \theta_2)$ with $0 \leq \theta_2 - \theta_1 < \pi$, the sequence $\{a(n)\}$ has at least one term outside the wedge with index $n \in [X, X+X^r)$, for all sufficiently large $X$.

These wedges can be thought of as just slightly smaller than a half-plane. For a complex number to escape a half-plane is analogous to a real number changing sign. So we should think of this result as guaranteeing a sort of sign change in intervals of width $X^r$ for all sufficiently large $X$.
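A toy example of my own (not one from the paper) may help to see the criteria in action. Take $a(n) = i^n$. Then
\begin{equation}
a(n) \ll 1 \;(\alpha = 0), \qquad \sum_{n \leq X} a(n) \ll 1 \;(\beta = 0), \qquad \sum_{n \leq X} \lvert a(n) \rvert^2 = X + O(1) \;(c_1 = \gamma_1 = 1,\ \eta_1 = 0),
\end{equation}
so the criterion permits any $0 < r < 1$. And indeed, any four consecutive terms of $i^n$ hit all four half-axes, so every wedge of angle less than $\pi$ is escaped within any interval of length at least $4$.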

The intuition behind this result is very straightforward. If the sum of the coefficients is small while the sum of their squares is large, then the coefficients must experience a lot of cancellation. The fact that we can get quantitative results on the number of sign changes is merely a task of bookkeeping.

Both the statement and proof are based on very similar criteria for sign changes when $\{a(n)\}$ is a sequence of real numbers, first noticed by Ram Murty and Jaban Meher. However, if in addition it is known that

\begin{equation}
\sum_{n \leq X} (a(n))^2 = c_2 X^{\gamma_2} + O(X^{\eta_2}),
\end{equation}

and that $\max(\alpha+\beta, \eta_1, \eta_2) - (\max(\gamma_1, \gamma_2) - 1) < r < 1$, then generically both sequences $\{\text{Re}(a(n))\}$ and $\{\text{Im}(a(n))\}$ contain at least one sign change for some $n$ in $[X , X + X^r)$ for all sufficiently large $X$. In other words, we can detect sign changes for both the real and imaginary parts in intervals, which is a bit more special.

It is natural to ask for even more specific detection of sign changes. For instance, knowing specific information about the distribution of the arguments of $a(n)$ would be interesting, and very closely related to the Sato-Tate Conjectures. But we do not yet know how to investigate this distribution.

In practice, we often understand the various criteria for the application of these two sign changes results by investigating the Dirichlet series
\begin{align}
&\sum_{n \geq 1} \frac{a(n)}{n^s} \\
&\sum_{n \geq 1} \frac{S_f(n)}{n^s} \\
&\sum_{n \geq 1} \frac{\lvert S_f(n) \rvert^2}{n^s} \\
&\sum_{n \geq 1} \frac{S_f(n)^2}{n^s},
\end{align}
where
\begin{equation}
S_f(n) = \sum_{m \leq n} a(m).
\end{equation}

In the case of holomorphic cusp forms, the two previous joint projects with this group investigated exactly the Dirichlet series above. In the paper, we formulate some slightly more general criteria guaranteeing sign changes based directly on the analytic properties of the Dirichlet series involved.

In this paper, we apply our sign change results to our previous work to show that $S_f(n)$ changes sign in each interval $[X, X + X^{\frac{2}{3} + \epsilon})$ for sufficiently large $X$. Further, if there are coefficients with $\text{Im} a(n) \neq 0$, then the real and imaginary parts each change signs in those intervals.

We apply our sign change results to single coefficients of $\text{GL}(2)$ cusp forms (and specifically full integral weight holomorphic cusp forms, half-integral weight holomorphic cusp forms, and Maass forms). In large part these are minor improvements over folklore and what is known, except for the extension to complex coefficients.

We also apply our sign change results to single isolated coefficients $A(1,m)$ of $\text{GL}(3)$ Maass forms. This seems to be a novel result, and adds to the very sparse literature on sign changes of sequences associated to $\text{GL}(3)$ objects. Murty and Meher recently proved a general sign change result for $\text{GL}(n)$ objects which is similar in feel.

As a final application, we also consider sign changes of partial sums of $\nu$-normalized coefficients. Let
\begin{equation}
S_f^\nu(X) := \sum_{n \leq X} \frac{a(n)}{n^{\nu}}.
\end{equation}
As $\nu$ gets larger, the individual coefficients $a(n)n^{-\nu}$ become smaller. So one should expect the sign change behavior of $\{S_f^\nu(n)\}$ to change with $\nu$. In particular, as $\nu$ gets very large, the number of sign changes of $S_f^\nu$ should decrease.

Interestingly, in the case of holomorphic cusp forms of weight $k$, we are able to show that there are sign changes of $S_f^\nu(n)$ in intervals even for normalizations $\nu$ a bit above $\nu = \frac{k-1}{2}$. This is particularly interesting as $a(n) \ll n^{\frac{k-1}{2} + \epsilon}$, so for $\nu > \frac{k-1}{2}$ the coefficients are \emph{decreasing} with $n$. We are able to show that when $\nu = \frac{k-1}{2} + \frac{1}{6} - \epsilon$, the sequence $\{S_f^\nu(n)\}$ has at least one sign change for $n$ in $[X, 2X)$ for all sufficiently large $X$.

It may help to consider a simpler example to understand why this is surprising. Consider the classic example of a sequence $b(n)$, where $b(n) = 1$ or $b(n) = -1$, randomly, with equal probability. Then the expected size of the partial sums of $b(n)$ up to $n$ is about $\sqrt n$. This is an example of \emph{square-root cancellation}, and such behaviour is a common point of comparison. Similarly, the number of sign changes of the partial sums of $b(n)$ up to $n$ is also expected to be about $\sqrt n$.
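This square-root heuristic is easy to see in a quick simulation (my own seeded illustration, not from the paper):

```python
import random

# Simulate a random ±1 sequence and track its partial sums,
# counting strict sign changes (+ to -, or - to +) along the way.
random.seed(12345)
n = 10_000
signs_changed = 0
s = 0
s_prev = 0  # last nonzero value of the partial sum
for _ in range(n):
    s += random.choice([1, -1])
    if s_prev * s < 0:
        signs_changed += 1
    if s != 0:
        s_prev = s

# Typically both |final sum| and the sign change count
# are on the order of sqrt(n) = 100, far smaller than n.
print(abs(s), signs_changed)
```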

Suppose now that $b(n) = \frac{\pm 1}{\sqrt n}$. If the first term is $1$, then it takes more than just the second term being negative to make the overall sum negative. And if the first two terms are positive, then it takes more than the following three terms being negative to make the overall sum negative. So sign changes of the partial sums are much rarer. In fact, they’re exceedingly rare, and one might barely detect more than a dozen through computational experiment (although one should still expect infinitely many).
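The first few terms make this concrete: $1 - 1/\sqrt{2} > 0$, and even $1 + 1/\sqrt{2} - 1/\sqrt{3} - 1/\sqrt{4} - 1/\sqrt{5} > 0$. A one-line check (my own illustration):

```python
import math

# With b(n) = ±1/sqrt(n), early positive terms dominate later ones:
# one negative term cannot flip a positive start, and three consecutive
# negative terms cannot flip two positive starting terms.
after_two = 1 - 1 / math.sqrt(2)
after_five = 1 + 1/math.sqrt(2) - 1/math.sqrt(3) - 1/math.sqrt(4) - 1/math.sqrt(5)
print(after_two, after_five)  # both positive: ≈ 0.293 and ≈ 0.183
```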

This regularity, in spite of the decreasing size of the individual coefficients $a(n)n^{-\nu}$, suggests an interesting regularity in the sign changes of the individual $a(n)$. We do not know how to understand or measure this effect or its regularity, and for now it remains an entirely qualitative observation.

For more details and specific references, see the paper on the arXiv.
