MixedMath - explorations in math and programinghttps://davidlowryduda.comDavid's personal blog.en-usCopyright David Lowry-Duda (2022) - All Rights Reserved.admin@davidlowryduda.comadmin@davidlowryduda.comWed, 05 Jun 2024 20:48:39 +0000Wed, 05 Jun 2024 20:48:39 +0000mixedmathapp/generate_rss.py v0.1https://cyber.harvard.edu/rss/rss.htmlhttps://davidlowryduda.com/static/images/favicon-32x32.pngMixedMathhttps://davidlowryduda.comMaass forms in the LMFDBhttps://davidlowryduda.com/maass-forms-now-in-lmfdbDavid Lowry-Duda<p>Rigorous Maass forms are now in the $L$-functions and modular forms database
<a href="https://www.lmfdb.org/">LMFDB</a>.
This is something I've been working on for a while, and it's nice to actually
make the data available.<span class="aside"> This naturally follows certain talks I've
given before, including <a href="/talk-on-computing-maass-forms">this one</a> and <a href="/talk-computing-and-verifying-maass-forms">this
one</a>, as well as various chalk talks
over the last couple of years.</span></p>
<p>The computation of the actual Maass forms uses work of Kieran Child,
Andrei Seymour-Howell, and me. A variety of underlying techniques are used,
including rigorous implementations of the Selberg trace formula, a rigorous
version of Hejhal's algorithm, and certification strategies.<sup>1</sup>
<span class="aside"><sup>1</sup>See <a href="https://arxiv.org/abs/2204.11761"><em>Certification of Maass cusp forms of arbitrary level and character</em></a> by Child
and <em>Dimensions of the spaces of cusp forms and newforms on $\Gamma_0(N)$ and
$\Gamma_1(N)$</em> by Booker, Strömbergsson, and Venkatesh. I haven't talked much
about these before.</span>
And this all builds on the earlier heuristic
approaches of Hejhal, later refined significantly by Strömberg, Lemurell, and
Then. (And all data for $\mathrm{GL}(3)$ or $\mathrm{GL}(4)$ forms comes from
completely different techniques of Farmer, Lemurell, and others).</p>
<p>For now, the Maass forms are <a href="https://beta.lmfdb.org/ModularForm/GL2/Q/Maass/">on the LMFDB Beta</a>.
If nothing breaks (and I hope it won't), then they'll soon be on the main LMFDB
too.</p>
<p>For the rest of this note, I want to describe a couple of interesting facets of
the newly available database.</p>
<h1>Consecutive Maass Forms</h1>
<p>One of our promises in the LMFDB is to not miss any Maass forms.
For any given level, we have <em>every</em> Maass form with spectral parameter $0 < r
< R$ for some $R$ (that currently depends strongly on the level).
This guarantee comes largely from the trace formula: it certifies that we
haven't missed any forms.<sup>2</sup>
<span class="aside"><sup>2</sup>Or rather, any missing forms come from an
implementation mistake, not a theoretical mistake.</span></p>
<p>Thus when you look at all Maass forms in some data range, we know that they are
all there.
This should be a big help with experimentation.</p>
<h1>Labels</h1>
<p>One advantage of having consecutive Maass forms is that we can <em>label</em> Maass
forms in a systematic way.
For a (cuspidal, Hecke, new) Maass form on $M_k(\Gamma_0(N), \chi)$, we assign
it the label <code>N.k.a.m.d</code>, where</p>
<ul>
<li>$N$ is the level,</li>
<li>$k$ is the weight,</li>
<li>$N.a$ is the Conrey label of the Dirichlet character $\chi$,</li>
<li>$m$ is the <strong>spectral index</strong>, and</li>
<li>$d$ is a disambiguation index if multiple Maass forms share the same earlier
data. (This currently never happens).</li>
</ul>
<p>The <strong>spectral index</strong> is the index of the spectral parameter $R$ in the list
of all spectral parameters for a given weight, level, and character, starting
from $1$ if $R > 0$.
The special case when the spectral index $m = 0$ is reserved for Maass forms
induced from Hecke characters.
These are not currently in the LMFDB, but they will be at some point in the
future.</p>
<p>In addition, if $k = 0$, $a = 1$, and $d = 1$ (which accounts for every Maass
form currently in the LMFDB), then we assign the <em>short label</em> <code>N.m</code>.</p>
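To make the scheme concrete, here is a small parser for these labels. This is my own illustrative sketch (including the function name and the returned field names), not code from the LMFDB itself:

```python
# Hypothetical parser for the Maass form label scheme described above.
# An illustrative sketch, not LMFDB code.

def parse_maass_label(label):
    """Split a label 'N.k.a.m.d' into its named components."""
    parts = label.split(".")
    if len(parts) == 2:  # short label 'N.m' implies k=0, a=1, d=1
        level, spectral_index = parts
        parts = [level, "0", "1", spectral_index, "1"]
    if len(parts) != 5:
        raise ValueError(f"not a Maass form label: {label!r}")
    N, k, a, m, d = (int(p) for p in parts)
    return {"level": N, "weight": k, "conrey_index": a,
            "spectral_index": m, "disambiguation": d}

print(parse_maass_label("5.0.1.1.1"))
print(parse_maass_label("5.1"))  # short label, same form as above
```

In particular, a short label expands to exactly the same data as its full form.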
<h2>Labels in the LMFDB</h2>
<p>Labels can be a big deal in the LMFDB. We don't like changing labels, ever.
Once we give a form a name, we want that name to never change.
This means that there can be big label discussions.</p>
<p>This time I tried designing what I thought was a robust label format,
implementing it, and then asking for confirmation. This worked well! David Roe
had the idea of adding the <em>short label</em>, which makes things look nicer.</p>
<h1>Plots of Maass Forms</h1>
<p>With new objects in the LMFDB, there come new portraits.</p>
<figure class="center">
<img src="/wp-content/uploads/2024/06/sample_maass_plot.png" />
</figure>
<p>These are similar to the portraits we made for classical modular forms.
But the Maass forms we currently have are all real-valued, and thus only take
on two separate hues.
(And unlike the classical modular forms, we included hints at contours to
indicate a bit more).</p>
<p>I overengineered portrait creation.
I actually have <em>a lot</em> to say about it, and I'll write that down another time.</p>
<h1>Check out the Maass Forms</h1>
<p>So go forth and check them out!
Right now they're on beta. If you manage to break anything, let me know and
I'll fix it.</p>
<p>Happy mathing!</p>https://davidlowryduda.com/maass-forms-now-in-lmfdbWed, 05 Jun 2024 03:14:15 +0000Another year, another TeXLive reinstallationhttps://davidlowryduda.com/another-year-another-texliveDavid Lowry-Duda<p>Every year, <a href="https://www.tug.org/texlive/">TeX Live</a> updates in a breaking way.
This year it was on 13 March 2024.</p>
<p>I don't notice until I need to do something somewhat uncommon with my TeX
distribution, such as compiling a new template or style file.
Today, I'm writing a funding proposal and was (re)compiling a similar proposal
I wrote a couple of years ago.
It happens to use the <code>extdash</code> package, which I don't have installed.
But trying to install it now (via <code>tlmgr</code>) throws an error saying that my
distribution is 2023, and now it's 2024, so it's time to upgrade.</p>
<p>LaTeX distributions come in two forms: the main binaries and packages.
Packages are updated continually and can be updated throughout the year.
The main binaries are updated once each year.
But once a binary is updated, the package manager (associated to a previous
binary) will no longer allow package updates.</p>
<p>The effect is that a new texlive installation is required each year.
There are some update scripts, but these are unsupported and still essentially
require downloading as much material as a full install.</p>
<p>I haven't heard a compelling reason why this behavior is tolerated (let alone
desirable).
Further, following the standard instructions typically leaves you with several
side-by-side installations of tex (and since each installation is HUGE, this is
also unacceptable).</p>
<p>The only benefit of this is that keeping older versions allows snapshot
recompilation of older documents, or snapshot recompilation using packages with
backwards incompatible updates (such as moderncv, which breaks <em>all the time</em>).
But I don't like having that bloat, so I end up completely removing old
versions and installing a fresh texlive each year.</p>
<p>I sometimes want bleeding-edge latex things and don't use my linux
distribution's package manager for this.
Instead, I do it myself through the texlive package manager <code>tlmgr</code>.
These are my notes on updating texlive from year to year.</p>
<p>(In particular, these are the steps that I went through just now so that I can
compile what I really wanted to compile).</p>
<h2>Remove the previous installation</h2>
<p>I keep the texlive source in <code>$HOME/src/texlive</code>.<sup>1</sup>
<span class="aside"><sup>1</sup>You can probably guess
where I keep source files for other things that I build and manage
myself.</span>
Looking now, I see that I have 598832 kilobytes of <em>stuff</em> there, meaning that the
complete size of my texlive distribution for all latex that I've compiled in
the last year is approximately 585MB.
This is about 1/10th the size of the standard <code>TeXLive Full</code> installation on
Ubuntu, which is one of the reasons why I prefer to manage the installation
myself.<sup>2</sup>
<span class="aside"><sup>2</sup>I began to use tlmgr to manage texlive when I was using a
chromebook as my main driver, and I needed to be able to fit texlive on my tiny
harddrive. I learned a lot about resource constraints then.</span></p>
<p>I note for interest that 261MB of my texlive is dedicated to documentation for
installed packages. Another 96MB is for fonts.</p>
<p>Regardless, I remove the entirety of my <code>$HOME/src/texlive</code> at once.</p>
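Concretely, this amounts to a couple of shell commands. This is a sketch of my own workflow; <code>TEXROOT</code> is my convention, and the <code>texmf-dist/doc</code> and <code>texmf-dist/fonts</code> subdirectories are where a standard install keeps documentation and fonts:

```shell
# A sketch of my yearly cleanup. TEXROOT is my own convention; adjust to
# wherever your texlive lives.
TEXROOT="${TEXROOT:-$HOME/src/texlive}"
if [ -d "$TEXROOT" ]; then
    du -sh "$TEXROOT"                        # total size of the distribution
    du -sh "$TEXROOT/texmf-dist/doc" \
           "$TEXROOT/texmf-dist/fonts" 2>/dev/null || true  # where the space goes
    rm -rf "$TEXROOT"                        # remove everything before reinstalling
fi
```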
<h2>Acquire and run the new installer</h2>
<p>Go to <a href="https://tug.org/texlive/acquire.html">tug.org</a> and download the
<a href="https://tug.org/texlive/acquire-netinstall.html">texlive internet installer</a>.
This year, this is a 5.5MB file <code>install-tl-unx.tar.gz</code>.</p>
<p>Move it to a freshly created <code>$HOME/src/texlive</code> and unpack it.</p>
<p><code>cd</code> into the new directory and run the installer (which is a little perl
script).</p>
<blockquote>
<p>I now customize <em>many</em> of the options. For readers who aren't me, I
emphasize that my usecase might not be the same as your usecase. Now is
an obvious time to deviate from my procedure.</p>
</blockquote>
<ol>
<li>Verify that it detects the correct platform, <code>GNU/Linux on x86_64</code> for me.</li>
<li>Change the <code>installation scheme</code> from <code>scheme-full</code> (which takes 8.253GB this year) to <code>custom</code>.</li>
<li>Go into the <code>Collections</code> submenu. I default to having too little and then
install things with tlmgr as necessary later. Thus I first <code>deselect
all</code>, and then I install exactly three collections: <strong>Essential programs and
files</strong>, <strong>LaTeX fundamental packages</strong>, and <strong>XeTeX and packages</strong>.</li>
<li>Set the installation directories. I use <code>$HOME/src/texlive</code> and <em>I do not
separate by year.</em> I also set <code>TEXMFHOME</code> to <code>~/.texmf</code> (i.e. I change it to
be a hidden directory instead of polluting my home directory visibly).</li>
<li>I set tex to <strong>use letter size instead of A4 by default</strong>, because I live in
the US. I note that I <strong>keep the "install font/macro doc tree" option</strong>,
which downloads documentation and which ultimately doubles the size of my
installation. I actually read the documentation sometimes. I think this is
unusual.</li>
<li>Set the installation to proceed.</li>
</ol>
<p>This year, this apparently uses 557MB of disk space.</p>
<p>This led to texlive installing 181 packages (and their commented source and
documentation) and took less than 10 minutes.</p>
<p>Afterwards, the installer will display a <strong>very important message</strong> about
setting MANPATH, INFOPATH, and PATH. In principle I would alter these in my
<code>.bashrc</code>, but in practice my previous <code>.bashrc</code> already points to these
spots, since I reuse the same texlive installation directory.</p>
<h2>Remove texlive installer</h2>
<p>TeXLive is now installed, so I can remove <code>$HOME/src/texlive/<texliveinstaller></code>.
The exact name is different, depending on the date.</p>
<h2>Install utility packages</h2>
<p>I use various utilities frequently. I install these with</p>
<div class="codehilite"><pre><span></span><code>which tlmgr <span class="c1"># to make sure that new tlmgr is detected</span>
tlmgr install latexmk lacheck chktex latexdiff pdfcrop <span class="se">\</span>
pdfjam texdiff texdoc
</code></pre></div>
<p>These are utility packages that are less common. I use <code>lacheck</code> and <code>chktex</code>
for latex linting. I use <code>texdoc</code> to see documentation for packages. I use
<code>latexmk</code> to handle recompilation. I use the others for various scripts I've
written. All of these are tiny.</p>
<p>Installation took 31 seconds.</p>
<h2>Install specific packages I use</h2>
<p>I have four fundamental types of papers that I often compile.</p>
<ul>
<li>A generic research paper that I might post to the arxiv.</li>
<li>Personal notes</li>
<li>Public notes</li>
<li>Beamer presentations</li>
</ul>
<p>I have different templates and packages that I use for each of these. I know
that I'll make all four again this year and I know exactly which packages they
need. I install those now.</p>
<div class="codehilite"><pre><span></span><code>tlmgr install setspace mathtools booktabs wrapfig changebar <span class="se">\</span>
xcolor lipsum tocloft fancyvrb enumitem threeparttable <span class="se">\</span>
beamer beamertheme-metropolis adjustbox pgfopts <span class="se">\</span>
xkeyval collectbox <span class="nb">times</span> minted ragged2e multirow
</code></pre></div>
<p>I made this list a couple of years ago by checking what was necessary to
compile a couple specific documents.</p>
<p>I install any other package as necessary throughout the year. Right now, my
installation uses 706MB.<sup>3</sup>
<span class="aside"><sup>3</sup>Wow, that's 150 MB more than my entire
distribution last year, including various packages installed throughout the
year for ad hoc reasons! I wonder what changed so much. I see that documentation now accounts for 334MB, which is 73MB more than before.</span></p>https://davidlowryduda.com/another-year-another-texliveMon, 15 Apr 2024 03:14:15 +0000FLT3 at LftCM2024https://davidlowryduda.com/flt3-at-lftcm2024David Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/flt3-at-lftcm2024Sat, 30 Mar 2024 03:14:15 +0000Quanta on Murmurationshttps://davidlowryduda.com/quanta-on-murmurationsDavid Lowry-Duda<p>Quanta wrote an article <strong>Elliptic Curve 'Murmurations' Found With AI Take
Flight</strong> (<a href="https://www.quantamagazine.org/elliptic-curve-murmurations-found-with-ai-take-flight-20240305/">link to article</a>).</p>
<p>The article describes some of the story behind the recent <em>murmurations in
number theory</em> phenomena that I've been giving talks about. I think it's a
pretty well-written article that gives a reasonable overview. Check it out!</p>
<p>It touches on <a href="/paper-modular-murmurations/">my recent work</a> with Bober,
Booker, and Lee.<span class="aside">If they'd waited a couple of weeks, then they might
have been able to include forthcoming work with Booker, Lee, Seymour-Howell,
and Zubrilina! But we're a couple of weeks away from that, I think.</span></p>
<p>And as far as I see, Quanta is the only outlet (in English) that covers recent
research developments in math for a non-specialist audience (<em>pop-math</em>).
It's certainly the case that mathematicians aren't doing a particularly good job
covering our own work in an accessible way.</p>https://davidlowryduda.com/quanta-on-murmurationsTue, 05 Mar 2024 03:14:15 +0000Not quite 3 and a half yearshttps://davidlowryduda.com/three-years-countingDavid Lowry-Duda<h1>Publishing Record</h1>
<p>I submitted a paper on 2 September 2020. It was accepted this week, on 17
February 2024.</p>
<p><span class="aside">I'm continuing to include more of the tiny evaluations I do <em>all
the time</em> with my computing environments. These may be easy manual calculations
— but I've been known to make simple mistakes.</span></p>
<div class="codehilite"><pre><span></span><code><span class="kn">from</span> <span class="nn">datetime</span> <span class="kn">import</span> <span class="n">date</span>
<span class="kn">from</span> <span class="nn">dateutil.relativedelta</span> <span class="kn">import</span> <span class="n">relativedelta</span>
<span class="n">submitted</span> <span class="o">=</span> <span class="n">date</span><span class="p">(</span><span class="mi">2020</span><span class="p">,</span> <span class="mi">9</span><span class="p">,</span> <span class="mi">2</span><span class="p">)</span>
<span class="n">accepted</span> <span class="o">=</span> <span class="n">date</span><span class="p">(</span><span class="mi">2024</span><span class="p">,</span> <span class="mi">2</span><span class="p">,</span> <span class="mi">17</span><span class="p">)</span>
<span class="nb">print</span><span class="p">(</span><span class="n">accepted</span> <span class="o">-</span> <span class="n">submitted</span><span class="p">)</span>
<span class="c1"># >> datetime.timedelta(days=1263)</span>
<span class="nb">print</span><span class="p">(</span><span class="n">relativedelta</span><span class="p">(</span><span class="n">accepted</span><span class="p">,</span> <span class="n">submitted</span><span class="p">))</span>
<span class="c1"># >> relativedelta(years=+3, months=+5, days=+15)</span>
</code></pre></div>
<p>That was 1263 days, or 3 years, 5 months, and 15 days. It wasn't quite long
enough to include a leapday — it missed by two weeks.</p>
<p>This beats my previous record (of 2 years and 3 months). I too am guilty:
during the same time, I took a year to review a paper.</p>
<p>This is an annoying aspect of academia and publishing.<sup>1</sup>
<span class="aside"><sup>1</sup>At least this isn't
one of those stories where years pass and then the paper is <strong>rejected</strong>. I
haven't had that happen, and I haven't done this as a reviewer. But it <em>does</em>
happen too.</span></p>
<p>I don't think I kept enough records to track my average time from submission to
publication. Perhaps I should keep better track and report this on my
<a href="/research/">Research</a> page too.</p>https://davidlowryduda.com/three-years-countingWed, 21 Feb 2024 03:14:15 +0000Examining Excess in the Schmidt Boundhttps://davidlowryduda.com/schmidt-experimentDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/schmidt-experimentTue, 13 Feb 2024 03:14:15 +0000Bounds on partial sums from functional equationshttps://davidlowryduda.com/bounds-on-partial-sums-from-feDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/bounds-on-partial-sums-from-feTue, 16 Jan 2024 03:14:15 +0000Paper: Towards a Classification of Isolated $j$-invariantshttps://davidlowryduda.com/paper-isolated-j-invariantsDavid Lowry-Duda<p>I'm happy to announce that a new paper, "Towards a classification of isolated
$j$-invariants", now appears <a href="https://arxiv.org/abs/2311.07740">on the arxiv</a>.
This was done with my collaborators Abbey Bourdon, Sachi Hashimoto, Timo
Keller, Zev Klagsbrun, Travis Morrison, Filip Najman, and Himanshu Shukla.</p>
<p>There are many collaborators because this was borne out of a workshop at CIRM
from earlier this year, and we all attacked this problem together. This is the
first time I have collaborated with any of these collaborators (though several
of us are now involved in another project that will eventually appear).</p>
<p>The modular curve $X_1(N)$ is a moduli space and an algebraic curve (defined over
$\mathbb{Q}$ for us) whose points parametrize elliptic curves with a point of
order $N$. We study <em>isolated</em> points, which are morally points on $X_1(N)$
that don't come from infinite families.</p>
<p>Perhaps the simplest form of infinite families comes from a rational map
$f: X_1(N) \longrightarrow \mathbb{P}^1$ of degree $d$. By Hilbert's
irreducibility theorem, $f^{-1}(\mathbb{P}^1(\mathbb{Q}))$ contains infinitely
many closed points of degree $d$.</p>
<p>Similarly, to any closed point $x$ of degree $d$ one can associate the rational
divisor $P_1 + \cdots + P_d$, where $P_j$ are the points in the Galois orbit
associated to $x$. This gives a natural map $\Phi_d: X_1(N)^{(d)}
\longrightarrow \mathrm{Jac}(X_1(N))$. If $\Phi_d(x) = \Phi_d(y)$ for some
point $y$, one can show that there exists a nonconstant function $f : X_1(N)
\longrightarrow \mathbb{P}^1$ of degree $d$ again. Thus positive rank abelian
subvarieties of the Jacobian also give infinite families of points.</p>
<p>Roughly, we say that a closed point $x$ is <strong>isolated</strong> if it doesn't come from
either of the two constructions above: the $\mathbb{P}^1$ construction (we call these
points $\mathbb{P}^1$-isolated) or the abelian subvariety construction (we call these AV-isolated).
Further, a closed point $x$ of degree $d$ is called <strong>sporadic</strong> if there are
only finitely many closed points of degree at most $d$.</p>
<p>If $x \in X_1(N)$ is an isolated point, we say $j(x) \in X_1(1) \cong
\mathbb{P}^1$ is an <strong>isolated $j$-invariant</strong>. In this paper, we seek to
answer a question of Bourdon, Ejder, Liu, Odumodo, and Viray.</p>
<blockquote>
<p>Question: Can one explicitly identify the (likely finite) set of isolated
$j$-invariants in $\mathbb{Q}$?</p>
</blockquote>
<p>Our main result is a decision algorithm. Given a $j$-invariant, our algorithm
produces a finite list of potential (level, degree) pairs such that one only
needs to verify that degree $\mathrm{degree}$ points on $X_1(\mathrm{level})$
are not isolated.</p>
<p>Stated differently, our algorithm has one-sided error. It either reports that
an element is not isolated, or it reports that it <em>might</em> be isolated and gives
a list of places containing the data where isolated points must come from.</p>
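The control flow of such a one-sided test can be sketched as follows. The filters below are placeholders of my own invention, purely to illustrate the shape of the procedure, not the paper's actual number-theoretic criteria:

```python
# Abstract sketch of a one-sided decision procedure: it either certifies
# "not isolated" (empty list) or returns candidate (level, degree) pairs
# that still require verification. The filters are placeholders.

def possibly_isolated(j_invariant, candidate_pairs, filters):
    """Return the (level, degree) pairs that survive every filter."""
    remaining = list(candidate_pairs)
    for keep in filters:
        remaining = [pair for pair in remaining if keep(j_invariant, pair)]
    return remaining  # empty list == certified not isolated

# Toy usage with made-up filters:
filters = [
    lambda j, pair: pair[0] % 2 == 1,   # placeholder criterion on the level
    lambda j, pair: pair[1] <= 6,       # placeholder criterion on the degree
]
print(possibly_isolated(-9317, [(21, 3), (37, 6), (28, 8)], filters))
# prints [(21, 3), (37, 6)]
```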
<p>In principle, this sounds like it might be insufficient. But we ran our
algorithm on every elliptic curve in the LMFDB and the outputs of our algorithm
are always the empty set — except for $4$ exceptions where we know the
$j$-invariants are isolated.</p>
<p>Concretely, we know that if $E$ is a non-CM elliptic curve over $\mathbb{Q}$ with
an isolated $j(E) \in \mathbb{Q}$ and with</p>
<ul>
<li>conductor up to $500000$,</li>
<li>or with conductor that is $7$-smooth,</li>
<li>or of prime conductor $p < 3 \cdot 10^8$,</li>
</ul>
<p>then $j(E) \in \{ -140625/8, -9317, 351/4, -162677523113838677 \}$. These
latter correspond to $\mathbb{P}^1$ isolated points on $X_1(21), X_1(37),
X_1(28)$, and $X_1(37)$ (respectively).<sup>1</sup>
<span class="aside"><sup>1</sup>An appendix to our paper by
Derickx and Mark van Hoeij shows that the last $j$-invariant is actually
isolated, not merely sporadic.</span></p>
<p>More generally, we are led to conjecture that these (and the CM $j$-invariants)
account for all of the isolated points on $X_1(N)$.</p>
<p>Broadly, our algorithm works by first considering the Galois image of elliptic
curves (thanks to the code of David Zywina and the efforts of David Roe and
others for making this broadly accessible). An earlier result of Bourdon and
her collaborators<sup>2</sup>
<span class="aside"><sup>2</sup>Abbey proposed this problem at the workshop based on the
observation that her result might make things tenable. She was right!</span>
allows one to radically narrow the focus of the Galois image to a minimal set
of points of interest. We do this narrowing. We also show that no elliptic
curve with adelic image of genus $0$ gives an isolated point, which allows us
to ignore many potential leaves of computation.
<p>The details are interesting and we try to be as explicit as possible. The code
for this project is also available, and can be found on my github at
<a href="https://github.com/davidlowryduda/isolated_points">github.com/davidlowryduda/isolated_points</a>.</p>https://davidlowryduda.com/paper-isolated-j-invariantsWed, 15 Nov 2023 03:14:15 +0000Paper: Congruent number triangles with the same hypotenusehttps://davidlowryduda.com/paper-congruent-triangles-same-hypotenuseDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/paper-congruent-triangles-same-hypotenuseTue, 14 Nov 2023 03:14:15 +0000Bringing back the blogrollhttps://davidlowryduda.com/bringing-back-the-blogrollDavid Lowry-Duda<p>It used to be <em>very</em> common for sites to have a page that said "blogroll" at
the top and it linked to a bunch of blogs and sites that the author liked.
People used to find other sites like this, and these sorts of links powered
early search engines.</p>
<p>I stopped having a blogroll after I switched away from Wordpress and didn't
think too much about it.</p>
<p>But things changed. RSS worked<sup>1</sup>
<span class="aside"><sup>1</sup>RIP Google Reader, 2013. It died so that
Google+ could... also die a quiet death.</span>
and google search/Twitter were
semi-reliable ways to find new things.</p>
<p>Now Twitter is dead, email newsletters are awful<sup>2</sup>
<span class="aside"><sup>2</sup>though almost all of them
were almost always awful. The medium is inferior to RSS in almost every way
except a very important one: it's easier to monetize.</span>
, and Google search
has lost its utility to the combination of earnest actors paying top dollar to
appear at the top and SEO-optimized spam flooding the web.<sup>3</sup>
<span class="aside"><sup>3</sup>And <a href="https://en.wikipedia.org/wiki/Goodhart%27s_law">Goodhart's
Law</a> strikes again!</span></p>
<p>The web is full of walled gardens, hiding the beautiful things within behind
large, bland, stone walls.</p>
<p>The fundamental problem of internet content discovery is hard again!
It's now rather hard to find places with consistently good content.
I love longform media. It requires thought and intention. And it requires time.</p>
<p>So I'm bringing back my blogroll (and more generally, a list of sites, links,
resources, and books that I find interesting). It is at the <a href="/blogroll">top of every
page</a>. Check it out!</p>
<p>If you have a site, bring out your blogroll. Help the small web flourish.</p>https://davidlowryduda.com/bringing-back-the-blogrollMon, 13 Nov 2023 03:14:15 +0000Blogroll (and interesting links)https://davidlowryduda.com/blogrollDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/blogrollWed, 01 Nov 2023 03:14:15 +0000Paper: Murmurations of modular forms in the weight aspecthttps://davidlowryduda.com/paper-modular-murmurationsDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/paper-modular-murmurationsSat, 21 Oct 2023 03:14:15 +0000Murmurations in Maass formshttps://davidlowryduda.com/maass-murmurationsDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/maass-murmurationsWed, 19 Jul 2023 03:14:15 +0000Slides from a talk at Concordia Universityhttps://davidlowryduda.com/slides-from-concordia-2023David Lowry-Duda<p>Today, I'm giving a talk at QVNTS on computing Maass forms. My slides are
available <a href="/wp-content/uploads/2023/02/Concordia.pdf">here</a>.</p>
<p>Please let me know if there are any questions or comments.</p>https://davidlowryduda.com/slides-from-concordia-2023Thu, 23 Feb 2023 03:14:15 +0000Paper: Sums of cusp forms coefficients along quadratic sequenceshttps://davidlowryduda.com/paper-sums-of-coeffs-along-quadraticsDavid Lowry-Duda<p>I am happy to announce that my frequent collaborators, Alex Walker and Chan
Kuan, and I have just posted a <a href="https://arxiv.org/abs/2301.11901">preprint to the
arxiv</a> called <em>Sums of cusp form coefficients
along quadratic sequences</em>.</p>
<p>Our primary result is the following.</p>
<div class="theorem">
<p>Let $f(z) = \sum_{n \geq 1} a(n) q^n = \sum_{n \geq 1} A(n) n^{\frac{k-1}{2}}
q^n$ denote a holomorphic cusp form of weight $k \geq 2$ on $\Gamma_0(N)$,
possibly with nontrivial nebentypus. For any $h > 0$ and any $\epsilon > 0$, we
have that
\begin{equation}
\sum_{n^2 + h \leq X^2} A(n^2 + h) = c_{f, h} X + O_{f, h,
\epsilon}(X^{\eta(k) + \epsilon}),
\end{equation}
where
\begin{equation*}
\eta(k) = 1 - \frac{1}{k + 3 - \sqrt{k(k-2)}} \approx \frac{3}{4} +
\frac{1}{32k - 44} + O(1/k^2).
\end{equation*}</p>
</div>
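As a quick numerical sanity check of the displayed approximation (my own check, using only the formulas stated in the theorem):

```python
from math import sqrt

def eta(k):
    """Exponent eta(k) = 1 - 1/(k + 3 - sqrt(k(k-2))) from the theorem."""
    return 1 - 1 / (k + 3 - sqrt(k * (k - 2)))

def eta_approx(k):
    """Stated approximation 3/4 + 1/(32k - 44)."""
    return 3 / 4 + 1 / (32 * k - 44)

print(eta(2))  # prints 0.8
for k in (2, 12, 100):
    print(k, eta(k), eta_approx(k))
```

At $k = 2$ both expressions give exactly $4/5$, and the two agree to within a few parts in a million already at $k = 100$, consistent with the $O(1/k^2)$ error term.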
<p>The constant $c_{f, h}$ above is typically $0$, but we weren't the first to
notice this.</p>
<h2>Context</h2>
<p>We approach this by studying the Dirichlet series
\begin{equation*}
D_h(s) := \sum_{n \geq 1} \frac{r_1(n) a(n+h)}{(n + h)^{s}},
\end{equation*}
where
\begin{equation*}
r_1(n) = \begin{cases}
1 & n = 0 \\
2 & n = m^2, m \neq 0 \\
0 & \text{else}
\end{cases}
\end{equation*}
is the number of ways of writing $n$ as a (sum of exactly $1$) square.</p>
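In code, $r_1$ is straightforward (a small sketch of my own, directly transcribing the definition above):

```python
from math import isqrt

def r1(n):
    """Number of ways to write n as a single square: 1 if n == 0,
    2 if n is a positive perfect square (from m and -m), else 0."""
    if n == 0:
        return 1
    if n > 0 and isqrt(n) ** 2 == n:
        return 2
    return 0

print([r1(n) for n in range(10)])  # prints [1, 2, 0, 0, 2, 0, 0, 0, 0, 2]
```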
<p>This approach isn't new. In his 1984 paper <em>Additive number theory and Maass
forms</em>, Peter Sarnak suggests that one could relate the series
\begin{equation*}
\sum_{n \geq 1} \frac{d(n^2 + h)}{(n^2 + h)^2}
\end{equation*}
to a Petersson inner product involving a theta function, a weight $0$
Eisenstein series, and a half-integral weight Poincaré series. This inner
product can be understood spectrally, hence the spectrum of the half-integral
weight hyperbolic Laplacian (and half-integral weight Maass forms) reflect the
behavior of the ordinary-seeming partial sums
\begin{equation*}
\sum_{n \leq X} d(n^2 + h).
\end{equation*}</p>
<p>A broader class of sums was studied by Blomer.<span class="aside">Blomer. <em>Sums of Hecke
eigenvalues over values of quadratic polynomials</em>. <strong>IMRN</strong>, 2008.</span></p>
<p>Let $q(x) \in \mathbb{Z}[x]$ denote any monic quadratic polynomial. Then Blomer
showed that
\begin{equation*}
\sum_{n \leq X} A(q(n)) = c_{f, q}X + O_{f, q, \epsilon}(X^{\frac{6}{7} + \epsilon}).
\end{equation*}
Blomer already noted that the main term typically doesn't occur.</p>
<p>More recently, Templier and Tsimerman<span class="aside">Templier and Tsimerman.
<em>Non-split sums of coefficients of $\mathrm{GL}(2)$-automorphic forms</em>.
<strong>Israel J. Math</strong> 2013</span>
showed that $D_h(s)$ has polynomial growth in vertical strips and has
reasonable polar behavior. This allows them to show that
\begin{equation*}
\sum_{n \geq 0} A(n^2 + h) g\big( (n^2 + h)/X \big)
=
c_{f, h, g} X + O_\epsilon(X^{\frac{1}{2} + \Theta + \epsilon}),
\end{equation*}
where $\Theta$ is a term coming from potentially exceptional eigenvalues of the
Laplacian (the Selberg Eigenvalue Conjecture predicts that these do not exist, so
that conjecturally $\Theta = 0$), and where $g$ is a smooth function of sufficient decay.</p>
<p>Templier and Tsimerman approach their result in two different ways: one studies
the Dirichlet series $D_h(s)$ with the same initial steps as outlined by
Sarnak. The second way is more representation theoretic and allows greater
flexibility in the permitted forms.</p>
<h2>Placing our techniques in context</h2>
<p>Broadly, our approach begins in the same way as Templier and Tsimerman —
we study $D_h(s)$ through a Petersson inner product involving half-integral
weight Poincaré series. The great challenge is to understand the discrete
spectrum and half-integral weight Maass forms, and we deviate from Templier and
Tsimerman sharply in our treatment of the discrete spectrum.</p>
<p>For each eigenvalue $\lambda_j$ there is an associated type $\frac{1}{2} +
it_j$ and form
\begin{equation*}
\mu_j(z) = \sum_{n \neq 0} \rho_j(n) W_{\mathrm{sgn}(n) \frac{k}{2}, it_j}(4\pi \lvert n \rvert y) e(nx),
\end{equation*}
where $W$ is a Whittaker function and the coefficients $\rho_j(n)$ are very
mysterious. We average Maass forms in long averages over the eigenvalues and
types (indexed by $j$) <em>and</em> long averages over coefficients $n$. We base the
former average on Blomer's work above, and for the latter we improve
on (a part of) the seminal work of Duke, Friedlander, and
Iwaniec.<span class="aside">Duke, Friedlander, Iwaniec. <em>The subconvexity problem for
Artin $L$-functions</em>. <strong>Inventiones</strong>. 2002.</span></p>
<p>For this, it is necessary to establish certain uniform bounds for Whittaker functions.</p>
<p>To apply our bounds for the discrete spectrum, Maass forms, and Whittaker
functions, we use that $f$ is holomorphic in an essential way. We decompose $f$
into a sum of finitely many holomorphic Poincaré series. This is done by Blomer
as well. But in contrast, we study the resulting shifted convolutions whereas
Blomer recollects terms into Kloosterman and Salié type sums.</p>
<p>Ultimately we conclude with a standard contour shifting argument.</p>
<h2>Additional remarks</h2>
<h3>Continuous vs discrete spectra</h3>
<p>The quality of our bound mirrors the quality of our understanding of the
discrete spectrum. This is interesting in that the continuous and discrete
spectra are typically of comparable size and difficulty.</p>
<p>But here we are decomposing a half-integral weight object into half-integral
weight spectra. The continuous spectrum comes from Eisenstein series, and it
turns out that the coefficients of real-analytic half-integral weight
Eisenstein series are (essentially) Dirichlet $L$-functions — and these
are relatively easy to understand.</p>
<p>Existing bounds for half-integral weight Maass forms are much weaker than
corresponding bounds for full-integral weight Maass forms.</p>
<p>In principle, there is also a residual spectrum to consider here (in contrast
to weight $0$ spectral expansions). But in practice this is perfectly handled
in the work of Templier and Tsimerman and presents no further difficulty.</p>
<h3>Two too-brief summaries</h3>
<p>One too-brief-to-be-correct summary of this paper is that <em>by restricting to a
smaller class of quadratic polynomials than Blomer, it is possible to prove a
stronger result</em>. In reality, we restrict to a class of quadratic polynomials
that allows the corresponding Dirichlet series to be easily recognized as a
Petersson inner product involving a standard theta function and a half-integral
weight Poincaré series.</p>
<p>Another too-brief-to-be-correct summary of this paper is that <em>examining
Whittaker functions and Bessel functions even closer reveals that they control
all of multiplicative number theory</em>. Actually, this might be
correct.<span class="aside">As a corollary, I guess all multiplicative number theory is
controlled by monodromy?</span></p>https://davidlowryduda.com/paper-sums-of-coeffs-along-quadraticsFri, 27 Jan 2023 03:14:15 +0000On Mathstodonhttps://davidlowryduda.com/on-mathstodonDavid Lowry-Duda<p>I've been on Mastodon for just over 6 weeks now, inspired by obvious events
approximately two months ago. Specifically, I've been on the math-friendly
server <a href="https://mathstodon.xyz/"><em>Mathstodon</em></a>, which includes latex rendering.</p>
<p>In short: I like it.</p>
<p>Right now, <em>Mathstodon</em> is a kind place. The general culture is warm and
inviting. I'm reminded a bit of young StackExchange sites, which are often so
happy to come into existence that it seems like every new post is
treasured.</p>
<p>Given the similarities to twitter, it is natural to compare and contrast.
Twitter is not kind. Outrage evidently boosts engagement and snark generates
retweets and likes. On <em>Mathstodon</em>, moderators maintain civility. I don't pay
much attention to other Mastodon servers — they have different moderators
and possibly different cultures.</p>
<p>But it's also true that Mastodon (and <em>Mathstodon</em>) are growing rapidly, and I
don't know any social media sites that managed to maintain a positive culture
while growing larger. Math.StackExchange<sup>1</sup>
<span class="aside"><sup>1</sup> Which I've helped moderate for
a decade and which is dear to me.</span>
used to be much friendlier than it is
now, but I think it's impossible to bring that positivity back.<sup>2</sup>
<span class="aside"><sup>2</sup> I have
thought much about this. See my other posts <a href="/challenges-facing-community-cohesion-and-math-stackexchange-in-particular/">Challenges facing cohesion at
MSE</a>,
<a href="/ghosts-of-forums-past/">Ghosts of forums past</a>, and <a href="/splitting-mathse-into-novicemathse-is-a-bad-idea/">Splitting MSE into
NoviceMathSE is a bad idea</a>
for more.</span></p>
<p>Is the Mastodon system of having separate instances with separate cultures the
answer? I don't know. Conceivably if <em>Mathstodon</em> starts to feel bad, I could
pick up and move elsewhere — or of course run my own. Time will tell.</p>
<h2>The Algorithm</h2>
<p>I wasn't sure whether or not I would like the lack of <strong>the algorithm</strong>, the
mysterious ordering system. But to my surprise, I miss essentially nothing
about twitter's algorithm.</p>
<p>Maintaining a reasonable signal-to-noise ratio on social media is a struggle.
A deep problem I've faced on twitter is that I like to read things written by
people who write about Hard Problems for a living. To ensure they get
sufficient audience on twitter, it's necessary to post and repost the same
essay/story (possibly with slightly different titles or formats). If any of
these gets sufficient following, then it will bubble up to the feed.</p>
<p>This is too noisy for me.</p>
<p>The Mastodon system is closer to an interleaved set of RSS feeds.<sup>3</sup>
<span class="aside"><sup>3</sup> The
ActivityPub protocol that Mastodon implements is sort of like a souped up
RSS/Atom protocol that allows more rapid updates.</span>
Posts appear strictly
in the order they're made. I love RSS, so perhaps it is no surprise that I
like this system.</p>
<p>And the system allows unimaginatively-named "lists", which are feeds containing
various specific accounts.</p>
<p>At least at the moment, this has a great signal-to-noise ratio.</p>
<h2>Lack of Trending</h2>
<p>The notable exception is the lack of <strong>trending</strong> information. Mastodon does
not have an answer for surfacing trending local content.</p>
<p>Concretely, I thought about this when the Boston MBTA screeched to a halt (as
it has a recent tendency to do) a bit before Christmas. If I had opened
twitter, I might have been able to find the source of the disruption —
not because I follow any MBTA accounts, but because this type of kerfuffle
would cause enough activity that it would have populated the feed.</p>
<p>But I don't have twitter on my devices, and I don't currently see myself
reinstalling twitter. In principle, the various MBTA twitter posts could have
appeared on Mastodon as well — but I wouldn't see them. I suppose I could
look for the tag #MBTA, or maybe #Boston? These aren't sufficient yet.</p>
<p>One can debate the merits of having twitter be the de facto place for random
civic and political organizations to post news, but this is common. And this is
another place where Mastodon is currently lacking.</p>https://davidlowryduda.com/on-mathstodonSun, 01 Jan 2023 03:14:15 +0000Implementation notes on modular curve visualizationshttps://davidlowryduda.com/modcurvevizDavid Lowry-Duda<p>The LMFDB will soon have a new section on modular curves. And as with modular
forms, each curve will have a <em>portrait</em> or <em>badge</em> that gives a rough
approximation to some of the characteristics of the curve.</p>
<p>I wrote a note on some of the technical observations and implementation details
concerning these curves. This note can be <a href="/wp-content/uploads/2022/12/VisualizingModularCurves.pdf">found
here</a>. I've also
added a link to it in the unpublished notes section of my <a href="/research">research
page</a>.</p>
<p>Instead of going into details here, I'll refer to the details in the note. I'll
give the core idea.</p>
<p>Each modular curve comes from a subgroup $H \subset \mathrm{GL}(2,
\mathbb{Z}/N\mathbb{Z})$ for some $N$ called the <em>level</em>. To form a
visualization, we compute cosets for $H \cap \mathrm{SL}(2,
\mathbb{Z}/N\mathbb{Z})$ inside $\mathrm{SL}(2, \mathbb{Z}/N\mathbb{Z})$, lift
these to <em>nice</em> elements in $\mathrm{SL}(2, \mathbb{Z})$, and then translate
the standard fundamental domain of $\mathrm{SL}(2, \mathbb{Z}) \backslash
\mathcal{H}$ by these cosets.</p>
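<p>The note describes the actual implementation; as a rough, hypothetical sketch (not the LMFDB code), the coset step might look like the following in plain Python for small level $N$: generate $\mathrm{SL}(2, \mathbb{Z}/N\mathbb{Z})$ from the standard generators $S$ and $T$, then greedily select left coset representatives for a subgroup $H$.</p>

```python
def matmul(a, b, N):
    """Multiply 2x2 matrices stored row-major as 4-tuples, mod N."""
    return ((a[0]*b[0] + a[1]*b[2]) % N, (a[0]*b[1] + a[1]*b[3]) % N,
            (a[2]*b[0] + a[3]*b[2]) % N, (a[2]*b[1] + a[3]*b[3]) % N)

def sl2(N):
    """Generate all of SL(2, Z/NZ) from the standard generators S and T."""
    S, T = (0, N - 1, 1, 0), (1, 1, 0, 1)  # S = [[0,-1],[1,0]], T = [[1,1],[0,1]]
    identity = (1, 0, 0, 1)
    seen, frontier = {identity}, [identity]
    while frontier:
        nxt = []
        for g in frontier:
            for gen in (S, T):
                h = matmul(g, gen, N)
                if h not in seen:
                    seen.add(h)
                    nxt.append(h)
        frontier = nxt
    return seen

def coset_reps(G, H, N):
    """Greedily pick one representative per left coset H*g in G."""
    reps, covered = [], set()
    for g in sorted(G):
        if g not in covered:
            reps.append(g)
            covered |= {matmul(h, g, N) for h in H}
    return reps
```

<p>For example, with $N = 3$ and $H = \langle T \rangle$, this yields $24/3 = 8$ representatives; each would then be lifted to a nice element of $\mathrm{SL}(2, \mathbb{Z})$ to translate the fundamental domain.</p>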
<p>We show this on the Poincaré disk, to give a badge format similar to what we
did for modular forms.</p>
<p>This is not a perfect representation, but it captures some of the character of
the curve.</p>
<p>Here are a few of the images that we produce.</p>
<figure class="center shadowed">
<img src="/wp-content/uploads/2022/12/mcportrait.8.24.1.13.png"
width="400px" />
</figure>
<figure class="center shadowed">
<img src="/wp-content/uploads/2022/12/mcportrait.10.18.0.1.png"
width="400px" />
</figure>
<figure class="center shadowed">
<img src="/wp-content/uploads/2022/12/mcportrait.10.72.1.1.png"
width="400px" />
</figure>
<p>I had studied how to produce space-efficient SVG files as well, though I did
not go in this direction in the end. But I think these silhouettes are
interesting, so I include them too.</p>
<figure class="center shadowed">
<img src="/wp-content/uploads/2022/12/mc8.24.1.13.svg"
width="400px" />
</figure>
<figure class="center shadowed">
<img src="/wp-content/uploads/2022/12/mc10.72.1.1.svg"
width="400px" />
</figure>https://davidlowryduda.com/modcurvevizWed, 07 Dec 2022 03:14:15 +0000Slides on Maass forms, nearing completionhttps://davidlowryduda.com/slides-agntc-dec22David Lowry-Duda<p>I'm giving an update today on my project to compute Maass forms. Today, I
describe the final steps about how to make the computation rigorous. This
complements a <a href="/talk-on-computing-maass-forms">talk I gave two years ago</a> about
how to implement a <em>heuristic</em> evaluation.</p>
<p>The slides for my talk today <a href="/wp-content/uploads/2022/12/SimonsDec2022_maass.pdf">are available here</a>.</p>
<p>I hope to have a preprint describing this algorithm and its implementation
shortly. I also hope to have a beta update to LMFDB with this information by
the next meeting of the collaboration.</p>https://davidlowryduda.com/slides-agntc-dec22Fri, 02 Dec 2022 03:14:15 +0000Slides on Modular Forms and the L-functions, a talk at the Simons Center for Geometry and Physicshttps://davidlowryduda.com/slides-scgp2022David Lowry-Duda<p>I'm giving a talk today on modular forms and their $L$-functions at the Simons
Center for Geometry and Physics. The slides for this talk are <a href="/wp-content/uploads/2022/11/SCGP.pdf">available
here</a>.</p>
<p>I refer to many things that I have done before in this talk.</p>
<p>For references and proofs of the various aspects of Dirichlet series associated
to half-integral weight modular forms, I wrote a series of notes stemming from
<a href="/zeros-of-dirichlet-series-masthead/">this post</a>.</p>
<p>For more information on computing and working with Maass forms, see <a href="/talk-computing-and-verifying-maass-forms/">notes from
another talk I gave</a>.</p>
<p>And finally, many of the objects discussed are <a href="https://www.lmfdb.org/">in the
LMFDB</a>.</p>https://davidlowryduda.com/slides-scgp2022Wed, 09 Nov 2022 03:14:15 +0000Supplement to Langlands surveyhttps://davidlowryduda.com/scgp-langlandsDavid Lowry-Duda<p>Today, I gave an introductory survey on the Langlands program to a group of
mathematicians and physicists at the Simons Center for Geometry and Physics at
Stonybrook University.</p>
<p>Many of these connections are well-illustrated on the <a href="https://www.lmfdb.org">LMFDB</a>.
I highly suggest poking around and clicking on things — on many pages
there are links to related objects.</p>
<p>Part of the Langlands program includes the modularity conjecture for elliptic
curves over $\mathbb{Q}$. On the LMFDB, this means that we can go look at
<a href="https://www.lmfdb.org/EllipticCurve/Q/">elliptic curves over $\mathbb{Q}$</a>,
take some arbitrary elliptic curve like
\begin{equation*}
y^2 + xy + y = x^3 - 113x - 469,
\end{equation*}
(which has <a href="https://www.lmfdb.org/EllipticCurve/Q/105/a/1">this homepage in the LMFDB</a>),
and then see that this corresponds to <a href="https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/105/2/a/a/">this modular form on the
LMFDB</a>. And
they have the <a href="https://www.lmfdb.org/L/2/105/1.1/c1/0/1">same L-function</a>.</p>
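<p>One can check a piece of this correspondence directly. As a naive sketch (brute-force point counting, only sensible for small primes), the trace of Frobenius $a_p = p + 1 - \#E(\mathbb{F}_p)$ for this curve can be computed by hand; for primes of good reduction (here, $p \nmid 105$) Hasse's bound $\lvert a_p \rvert \leq 2\sqrt{p}$ must hold, and modularity says these $a_p$ agree with the coefficients of the modular form above.</p>

```python
import math

def a_p(p):
    """Trace of Frobenius for E: y^2 + x*y + y = x^3 - 113*x - 469 over F_p,
    computed by brute-force point counting (fine only for small p)."""
    count = 1  # the point at infinity
    for x in range(p):
        rhs = (x**3 - 113*x - 469) % p
        for y in range(p):
            if (y*y + x*y + y) % p == rhs:
                count += 1
    return p + 1 - count

# Sanity check: Hasse's bound at a few primes of good reduction.
for p in [11, 13, 17, 19]:
    assert abs(a_p(p)) <= 2 * math.sqrt(p)
```

<p>Comparing these values against the coefficient data on the modular form's LMFDB page is a pleasant exercise.</p>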
<p>During the talk, Brian gave an example of an Artin representation of the
symmetric group $S_3$ on three symbols. For reference, he pulled data <a href="https://www.lmfdb.org/ArtinRepresentation/2.23.3t2.b.a">from
this representation page on the LMFDB</a>.</p>
<p>From one perspective, the Langlands program is a monolithic wall of imposing,
intimidating mathematics. But from another perspective, the Langlands program
is most interesting because it organizes and connects seemingly different
phenomena.</p>
<p>It's not necessary to understand each detail — instead it's interesting
to note that there are many fundamentally different ways of producing highly
structured data (like $L$-functions). And remarkably we think that <em>every</em> such
$L$-function will behave beautifully, including satisfying their own Riemann
Hypothesis.</p>https://davidlowryduda.com/scgp-langlandsThu, 03 Nov 2022 03:14:15 +0000Notes from a survey talk by Jeff Lagarias on the Collatz problemhttps://davidlowryduda.com/lagarias-collatzDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/lagarias-collatzWed, 02 Nov 2022 03:14:15 +0000Paul R. Halmos -- Lester R. Ford Awardhttps://davidlowryduda.com/halmos-ford-awardDavid Lowry-Duda<p>This is an update with (unexpected) good news. My collaborator <a href="https://people.bath.ac.uk/mw2319/">Miles
Wheeler</a> and I were given the <a href="https://www.maa.org/programs-and-communities/member-communities/maa-awards/writing-awards/paul-halmos-lester-ford-awards">Paul R.
Halmos – Lester R. Ford Award</a>
for our paper <a href="https://doi.org/10.1080/00029890.2021.1840879">Perturbing the mean value theorem: implicit functions, the morse
lemma, and beyond</a><sup>1</sup>
<span class="aside"><sup>1</sup>The
<a href="https://arxiv.org/abs/1906.02026">arxiv version</a> of this paper is called <em>When
are there continuous choices for the mean value abscissa</em>, and is slightly
longer than the final published form.</span></p>
<p>This was an unexpected honor. Partly, this is due to the fact that we first
wrote this paper years ago and it was accepted in 2019. But the publication
backlog meant that it wasn't published until January of 2021, and thus eligible
for the 2022 award. But also it's always sort of a nice surprise to hear that
people <em>read</em> what I write.</p>
<p>I described <a href="/paper-continuous-choices-mvt/">this paper before</a>, and
wrote <a href="/choosing-functions-for-mvt-abscissa/">an additional note on how we chose our functions and made the figures</a>.</p>
<p>When my wife learned that this paper won an award, she asked
if this "was that paper you really liked that you wrote wtih Miles during grad
school"? Yes! It is that paper. I really do like this paper.</p>
<h2>Origin story</h2>
<p>The string of ideas leading to this paper began when I first began to TA
calculus at Brown. I was becoming aware of the fact that I would soon be
teaching calculus courses, and I began to really think about why we structured
the courses the way we do.<sup>2</sup>
<span class="aside"><sup>2</sup>I don't have a completely satisfactory
explanation for everything, especially for the "integration bag of tricks" or
the "series convergence bag of tricks" portion. But upon reflection, I can
understand the purpose of <em>most</em> portions of the calculus sequence.</span></p>
<p>One of my least favorite questions was the <em>verify the mean value theorem for
the function $f(x)$ on the interval $[1, 4]$ by...</em> sort of question. The
problem is that this question is really just a way to check that one
understands the statement of the mean value theorem — and this statement
<em>feels</em> very unimportant.</p>
<p>But it turns out that the mean value theorem is <em>extremely</em> important. The mean
value theorem and intermediate value theorems are the two sneaky abstractions
that encapsulate underlying topological ideas that we typically brush aside in
introductory calculus courses.</p>
<h3>We don't do calculus on real valued functions over the rationals</h3>
<p>We illustrate this with two examples in the analogous case of functions
\begin{equation*}
f: \mathbb{Q} \longrightarrow \mathbb{R}.
\end{equation*}
I would expect that many introductory students would think these functions
<em>feel intuitively about the same</em> as functions from $\mathbb{R}$ to
$\mathbb{R}$. But in fact both the intermediate value theorem and mean value
theorem are false for real-valued functions defined on the rationals.</p>
<p>The intermediate value theorem is false here. Consider the function
\begin{align*}
f(x): \mathbb{Q} &\longrightarrow \mathbb{R}\\
x &\mapsto x^2 - 2.
\end{align*}
On the interval $[0, 2]$, for example, we see that $f(0) = -2$ and $f(2) = 2$.
But there is no $x \in \mathbb{Q} \cap [0, 2]$ such that $f(x) = 0$.</p>
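<p>This failure is easy to witness computationally. As a toy sketch (using only the standard library), one can verify the sign change exactly with rationals and then search for roots among rationals of bounded denominator:</p>

```python
from fractions import Fraction

def f(x):
    """f(x) = x^2 - 2, evaluated exactly on rationals."""
    return x * x - 2

def has_rational_root(max_den):
    """Search every rational p/q in [0, 2] with denominator <= max_den."""
    for q in range(1, max_den + 1):
        for p in range(2 * q + 1):
            if f(Fraction(p, q)) == 0:
                return True
    return False
```

<p>Here $f(0) = -2 < 0$ and $f(2) = 2 > 0$, yet the search never finds a root, reflecting the irrationality of $\sqrt{2}$.</p>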
<p>The mean value theorem is false too. Consider the function
\begin{align*}
f(x): \mathbb{Q} &\longrightarrow \mathbb{R} \\
x &\mapsto \begin{cases} 0 & \text{if } x^2 > 2, \\
1 & \text{if } x^2 < 2.
\end{cases}
\end{align*}
For this function, $f'(x) = 0$ everywhere, even though the function isn't
constant. (But it is <em>locally constant</em>). We try to avoid pathological examples
initially.<sup>3</sup>
<span class="aside"><sup>3</sup>It is a funny thing. Initially, we pretend that everything
is as nice as possible to build intuition. Then we see that there are
pathological counterexamples and use these to sharpen intuition. Seeing these
too early is potentially misleading. Implicit within the order is an idea of
what <em>usually</em> happens and what behavior is <em>exceptional</em>.</span></p>
<p>It turns out that the topological space the functions are defined on <em>really
matters.</em></p>
<h3>Mean value theorem as abstraction</h3>
<p>I don't talk about this in a typical calculus class because topological
concerns are almost entirely ignored. We work with the practical case of
real-valued functions defined over the reals. And the key tools we use to hide
the underlying topological details are the intermediate and mean value
theorems.</p>
<p>I didn't fully realize this until I wondered whether we could teach calculus
<em>without</em> covering these two theorems.</p>
<p>It might be possible. But I decided that it was a bad idea.</p>
<p>Instead, I think it's a good idea to give students a better idea of how these
two theorems are two of the most important ideas in calculus. I kept this idea
in mind when I wrote <a href="/an-intuitive-introduction-to-calculus/">An intuitive introduction to
calculus</a> for my students in 2013.</p>
<p>And more broadly, I began to investigate just how many things we could reduce,
as directly as possible, to the mean value theorem. Miles realized we could
investigate the continuity of the mean value abscissa with a low-dimensional
implicit function theorem and baby version of Morse's lemma, and the rest is
history.</p>
<p>I've collected so many random facts and questions tangentially related to the
mean value theorem over the years. But I could always collect more!</p>
<h3>MathFest</h3>
<p>I attended <a href="https://www.maa.org/meetings/mathfest">MathFest</a>
for the first time to receive this award in person. MathFest was an
extraordinary and interesting experience. I was particularly impressed by how
many sessions and talks focused on actionable ways to improve math education
in the classroom <em>now</em>.</p>https://davidlowryduda.com/halmos-ford-awardMon, 17 Oct 2022 03:14:15 +0000Slides from a talk at Maine-Quebechttps://davidlowryduda.com/slides-mq2022David Lowry-Duda<p>This weekend I'm at the
<a href="https://archimede.mat.ulaval.ca/QUEBEC-MAINE/22/qm22.html">Maine-Québec</a>
number theory conference. This year, it's in Québec!</p>
<p>I'm giving a talk surveying ideas from <a href="https://arxiv.org/abs/2204.01651"><em>Improved bounds on number fields of
small degree</em></a>, joint work with Anderson,
Gafni, Hughes, Lemke Oliver, Thorne, Wang, and Zhang.</p>
<p>The slides for this talk are <a href="/wp-content/uploads/2022/10/MaineQuebec2022_DLD.pdf">available
here</a>.</p>https://davidlowryduda.com/slides-mq2022Sat, 15 Oct 2022 03:14:15 +0000Initial thoughts on visualizing number fieldshttps://davidlowryduda.com/pcmi-vis-nfDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/pcmi-vis-nfFri, 22 Jul 2022 03:14:15 +0000Computing coset representatives for quotients of congruence subgroupshttps://davidlowryduda.com/coset-reps-cong-subgroupsDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/coset-reps-cong-subgroupsWed, 20 Jul 2022 03:14:15 +0000Note Series on Zeros of General Dirichlet Serieshttps://davidlowryduda.com/zeros-of-dirichlet-series-mastheadDavid Lowry-Duda<p>I wrote a series of notes on some aspects of the theoretical behavior
and zeros of Dirichlet series in the <em>extended Selberg Class</em>. There are a few
different ways of extending the Selberg Class, but here I mean Dirichlet series
\begin{equation*}
L(s) = \sum_{n \geq 1} \frac{a(n)}{n^s}
\end{equation*}
with a functional equation of the shape $s \mapsto 1 - s$, satisfying</p>
<ol>
<li>A Ramanujan–Petersson bound <em>on average</em>, meaning that $\sum_{n \leq N}
\lvert a(n) \rvert^2 \ll N^{1 + \epsilon}$ for any $\epsilon > 0$.</li>
<li>$L(s)$ has <em>analytic continuation</em> to $\mathbb{C}$ to an entire function of
finite order.</li>
<li>$L(s)$ satisfies a functional equation of the typical self-dual shape.</li>
</ol>
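<p>For concreteness, the "typical self-dual shape" intended here follows the usual Selberg-class normalization (the precise gamma factors won't matter for these notes): writing
\begin{equation*}
\Lambda(s) = Q^s \prod_{i=1}^{k} \Gamma(\lambda_i s + \mu_i) L(s)
\end{equation*}
with $Q > 0$, $\lambda_i > 0$, and $\mathrm{Re} \, \mu_i \geq 0$, the functional equation takes the form $\Lambda(s) = \varepsilon \Lambda(1 - s)$ for some constant $\varepsilon$ with $\lvert \varepsilon \rvert = 1$.</p>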
<p>There is no assumption of an Euler product.</p>
<p>Although I write these notes for general Dirichlet series in the extended
Selberg class, I was really thinking about Dirichlet series associated to
half-integral weight modular forms.</p>
<h2>Links and Summaries of each Note</h2>
<ol>
<li>
<p><a href="https://davidlowryduda.com/zeros-of-dirichlet-series/">The first note</a> sets the stage, defines the relevant series, and
establishes fundamental results to be used later. Jensen's inequality and
Jensen's theorem are given, as are generic convexity bounds for these
Dirichlet series.</p>
<p>The first note also contains a proof of a fact that was new to me:<sup>1</sup>
<span class="aside"><sup>1</sup>but only new
<em>to me</em>. I based my presentation of this fact on notes from Hardy from a
century ago.</span>
if a Dirichlet series has a zero in its domain of
absolute convergence, then it has infinitely many, and these zeros are
<em>almost periodic</em>.</p>
</li>
<li>
<p><a href="https://davidlowryduda.com/zeros-of-dirichlet-series-ii/">The second note</a> is relatively short and shows that these Dirichlet
series are in fact entire of order $1$. Then it establishes weak
zero-counting results based only on this order of growth.</p>
<p>These are foundational ideas, and in essence are no different than analysis
for $\zeta(s)$ or typical $L$-functions in the Selberg class.</p>
</li>
<li>
<p><a href="https://davidlowryduda.com/zeros-of-dirichlet-series-iii/">The third note</a> describes a theorem of Potter from
1940, proving Lindelöf-on-average (in the $t$-aspect) on certain vertical
lines, depending on the degree. This suggests that for Dirichlet series
associated to half-integral weight modular forms, the Lindelöf Hypothesis
might be <em>true</em> even though the Riemann Hypothesis is false.</p>
</li>
<li>
<p><a href="https://davidlowryduda.com/zeros-of-dirichlet-series-iv/">The fourth note</a> proves that one hundred percent of zeros of
Dirichlet series associated to half-integral weight modular forms lie within
$\epsilon$ of the critical line, for any $\epsilon > 0$.</p>
<p>For these proofs, the <em>standard proofs</em> that I've seen elsewhere for Selberg
$L$-functions don't quite apply. Fundamentally, this is because zeros are
counted using some form of the argument principle, and for Selberg
$L$-functions with Euler product, the logarithmic derivative of the
$L$-functions is both easier to understand (because of the Euler product)
and gives convenient access to sum up changes in the argument. But in
practice the actual methods I use are light modifications of arguments from the
vast literature on the study of $\zeta(s)$ (before better techniques arose using
the Euler product in more sophisticated ways).</p>
<p>I note that I do not know how to prove a lower bound for the percentage of
zeros lying <em>directly on the critical line</em> for series without an Euler
product. It is possible to apply a generalized form of the classical
argument of Hardy to show that there are infinitely many zeros on the line,
but I haven't managed to modify any of the various results for lower bounds.</p>
</li>
</ol>
<p>I will also note that with Thomas Hulse, Mehmet Kiral, and Li-Mei Lim, I've
computed many examples of zeros of half-integral weight forms and I
conjecture that 100 percent of their zeros lie <em>directly on the critical line</em>.
(But there are many, many zeros not on the critical line). I described some of
those computations <a href="https://davidlowryduda.com/slides-from-a-talk-on-half-integral-weight-dirichlet-series/">in this
talk</a>.</p>https://davidlowryduda.com/zeros-of-dirichlet-series-mastheadSat, 09 Jul 2022 03:14:15 +0000Zeros of Dirichlet Series IVhttps://davidlowryduda.com/zeros-of-dirichlet-series-ivDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/zeros-of-dirichlet-series-ivFri, 08 Jul 2022 03:14:15 +0000Zeros of Dirichlet Series IIIhttps://davidlowryduda.com/zeros-of-dirichlet-series-iiiDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/zeros-of-dirichlet-series-iiiThu, 07 Jul 2022 03:14:15 +0000Visualizations for Quanta's 'What is the Langlands Program?'https://davidlowryduda.com/quanta-langlands-vizDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/quanta-langlands-vizWed, 01 Jun 2022 03:14:15 +0000Now pagehttps://davidlowryduda.com/nowDavid Lowry-Duda<h1>What I'm doing Now</h1>
<p>Last updated <strong>5 June 2024</strong></p>
<p>This is a <a href="https://nownownow.com/about">now page</a>. It is a written version of
what I might say if we met in person and you asked me what I'm up to. The date
at the top is important: it helps the reader determine if this is a <em>now</em> page
or a <em>then</em> page.</p>
<h2>Big News</h2>
<p>This summer, my wife and I are expecting a child and I hope to be very
dedicated to that. I'm slower to respond than usual. We're very excited.</p>
<h2>Research Travel and News</h2>
<p>I'm typically in either Boston or Providence.</p>
<p>I have limited travel plans in the coming months.</p>
<h2>Recent Travel and News</h2>
<p>2024 happenings</p>
<ul>
<li>On April 22nd, I'm giving a talk at the Brown Algebra seminar.</li>
<li>During the week of April 8th, I was in NYC. But not <em>on</em> April 8th, as I was
actually in Vermont looking for totality during the eclipse. And it was
<strong>awesome</strong>.</li>
<li>During the week of April 1st, I was in Scotland.</li>
<li>I was at <a href="https://conferences.cirm-math.fr/2970.html">Lean for the Curious Mathematician</a> at CIRM.</li>
<li>I co-organized the session <strong>Arithmetic Geometry with a View towards
Computation</strong>, at the Joint Math Meetings.</li>
<li>During the week of January 8th, I was in NYC attending a conference on
computational number theory at the Simons Foundation.</li>
</ul>
<p>2023 happenings</p>
<ul>
<li>I am a co-organizer of a workshop that will be held at AIM from 4 December to
8 December, 2023: <a href="https://aimath.org/workshops/upcoming/cyberinfrastructure/">Open-source cyberinfrastructure supporting mathematics
research</a>. Our
registration is filled up, but if you are interested in the broader problems,
let me know. I'm sure there is lots to do.</li>
<li>I was at Maine-Quebec (again) this year. It was wonderful seeing so many
familiar faces.</li>
<li>I was at <a href="https://icerm.brown.edu/events/sc-23-lucant/">Lucant</a> from July
10th to July 14th, and at the Murmurations hot topic event the week before.</li>
<li>I returned to Marseilles, France in early May.</li>
<li>I was at AIM (in San Jose) in early March.</li>
<li>I was in Marseilles, France from late February to early March.</li>
</ul>
<p>2022 happenings</p>
<ul>
<li>I was at <a href="https://archimede.mat.ulaval.ca/QUEBEC-MAINE/">Maine-Quebec</a>
on October 15-16.</li>
<li>I was at the <a href="https://scgp.stonybrook.edu/archives/35461">Simons Center for Geometry and Physics</a>
roughly from 24 October to 18 November. This is at Stony Brook, NY.</li>
<li>I was at the <a href="https://math.mit.edu/~edgarc/MCW2.html">LMFDB Modular Curves Workshop</a>,
even though it is in the middle of my stay at Stony Brook.</li>
<li>I was at the <a href="https://www.jointmathematicsmeetings.org/jmm">Joint Math Meetings</a> in Boston from 4 January to 7 January.
I did not present this year.</li>
<li>I was in New York City during the week of January 9th to January 13th. This
includes attending the annual meeting of the Simons Foundation.</li>
</ul>
<h2>Research</h2>
<p>I'm currently actively working on several projects.</p>
<ul>
<li>Check out <a href="https://code4math.org/">code4math</a>, a new community for
mathematicians, programmers, and enthusiasts. We have a zulip chat! I'll talk
more about this elsewhere.</li>
<li>I'm working with Jeff Hoffstein and Bertrand Cambou on a project concerning
biometric cryptography. Our idea was a finalist for the Innovation of the
Year award at Innovation@Brown! I'm keeping a bit quiet about details now,
but I'll be very excited to talk about it later.</li>
<li>The LMFDB is about to have many more Maass forms than before. Stay tuned.</li>
</ul>
<h2>Teaching</h2>
<p>I'm not currently teaching.</p>
<h2>Fun</h2>
<ul>
<li>I've been biking a lot. I'm using a site called Wandrer to track individual
streets that I bike, and I'm on a quest to bike (or run/walk) every street
(scope to be determined later). So far, I've completed a bit over 10 percent
of the streets in Suffolk County, which includes Boston.</li>
<li>I make my own soda. Think <a href="https://en.wikipedia.org/wiki/Soda_jerk">soda jerk</a>.
What this means is that I make syrups (typically fruity, less sweet than
store bought soda) and have a CO2 tank that I use to carbonate water
(typically at a bit more than typical carbonation). Our current favorites are
grapefruit soda and rhubarb soda.</li>
</ul>https://davidlowryduda.com/nowTue, 10 May 2022 03:14:15 +0000Slides from a talk on Improved Bounds for Number Fieldshttps://davidlowryduda.com/talk-improved-bounds-number-fieldsDavid Lowry-Duda<p>At a meeting of the Algebraic Geometry, Number Theory, and Computation Simons
Collaboration, I gave a short talk surveying the results and ideas of
<a href="https://arxiv.org/abs/2204.01651"><em>Improved bounds on number fields of small
degree</em></a>, joint work with Anderson, Gafni,
Hughes, Lemke Oliver, Thorne, Wang, and Zhang.</p>
<p>In many ways this is a follow-up to an <a href="/slides-from-a-talk-on-quantitative-hilbert-irreducibility/">earlier talk I gave to the Simons
Collaboration</a>
about work towards a quantitative form of Hilbert's irreducibility theorem.
Both of these results grew out of an AIM workshop.</p>
<p>The slides for this talk <a href="/wp-content/uploads/2022/05/SimonsMay2022_Schmidt.pdf">are available here</a>.</p>https://davidlowryduda.com/talk-improved-bounds-number-fieldsMon, 09 May 2022 03:14:15 +0000Simplified proofs and reasoning in "Improved Bounds on Number Fields of Small Degree"https://davidlowryduda.com/simplified-improved-boundsDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/simplified-improved-boundsSat, 30 Apr 2022 03:14:15 +0000Paper: Counting number fields of small degreehttps://davidlowryduda.com/paper-counting-number-fieldsDavid Lowry-Duda<h1>Counting number fields of small degree</h1>
<p>Recently, my collaborators Theresa C. Anderson, Ayla Gafni, Kevin Hughes,
Robert J. Lemke Oliver, Frank Thorne, Jiuya Wang, Ruixiang Zhang, and I
uploaded a <a href="https://arxiv.org/abs/2204.01651">preprint to the arxiv</a> called
"Improved bounds on number fields of small degree". This collaboration is a
continuation<sup>1</sup>
<span class="aside"><sup>1</sup>though with a few different cast members</span>
of our
<a href="https://arxiv.org/abs/2107.02914">previous work on quantitative Hilbert
irreducibility</a>, which will appear in IMRN.</p>
<p>In this paper, we improve the upper bound due to Schmidt for estimates on the
number of number fields of degree $6 \leq n \leq 94$. Actually, we improve on
Schmidt for all $n \geq 6$, but for $n \geq 95$ Lemke Oliver and Thorne have
different, better bounds.</p>
<p>Schmidt proved the following.</p>
<div class="theorem" data-text="Schmidt 95">
<p>For $n \geq 6$, there are $\ll X^{(n+2)/4}$ number fields of degree $n$ and
having discriminant bounded by $X$.</p>
</div>
<p>We prove a polynomial improvement that decays with the degree.</p>
<div class="theorem" data-text="AGHLDLOTWZ">
<p>For $n \geq 6$, there are
\begin{equation*}
\ll_\epsilon X^{\frac{n + 2}{4} - \frac{1}{4n - 4} + \epsilon}
\end{equation*}
number fields of degree $n$ and having discriminant bounded by $X$.</p>
</div>
<p>Towards the end of this project, we learned that Bhargava, Shankar, and Wang
were also producing improvements over Schmidt in this range. On the same day
that we posted our paper to the arxiv, they posted <a href="https://arxiv.org/abs/2204.01331">their
paper</a>, in which they prove the following.</p>
<div class="theorem" data-text="BSW">
<p>For $n \geq 6$, there are
\begin{equation*}
\ll_\epsilon X^{\frac{n + 2}{4} - \frac{1}{2n - 2} + \frac{1}{2^{2g}(2n-2)} + \epsilon}
\end{equation*}
number fields of degree $n$ and having discriminant bounded by $X$, where $g
= \lfloor \frac{n-1}{2} \rfloor$.</p>
</div>
<p>In both our work and in BSW, the broad strategy is based on Schmidt's approach.
For a monic polynomial
\begin{equation*}
f(x) = x^n + c_1 x^{n-1} + \cdots + c_n,
\end{equation*}
we define the height $H(f)$ to be
\begin{equation*}
H(f) := \max( \lvert c_i \rvert^{1/i} ).
\end{equation*}
Then Schmidt showed that to count number fields of discriminant up to $X$, it
suffices to count polynomials of height roughly up to $X^{1/(2n - 2)}$.</p>
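To make the height concrete, here is a short Python sketch of mine (not from the paper) that computes $H(f)$ for a monic integer polynomial using sympy.

```python
from sympy import Poly, symbols

x = symbols('x')

def height(f):
    """Compute H(f) = max |c_i|^(1/i) for a monic integer polynomial
    f(x) = x^n + c_1 x^(n-1) + ... + c_n."""
    coeffs = Poly(f, x).all_coeffs()  # [1, c_1, ..., c_n]
    assert coeffs[0] == 1, "f must be monic"
    return max(abs(c) ** (1.0 / i) for i, c in enumerate(coeffs[1:], start=1))

# For f = x^3 + 2x^2 + 8, the height is max(2, 0, 8^(1/3)) = 2.
print(height(x**3 + 2*x**2 + 8))
```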
<p>The difficulty is that <em>most</em> of these polynomials cut out number fields of
discriminant <em>much larger</em> than $X$. The challenge is then to count relevant
polynomials and to identify irrelevant polynomials.</p>
<p>Remarkably, the broad strategy in our work and in BSW for identifying
irrelevant polynomials is similar. For a prototypical polynomial $f$ of degree
$n$ and of height $X^{1/(2n-2)}$, we should expect the discriminant
of $f$ to be approximately $X^{n/2}$. We should also expect the field cut
out by $f$ to have discriminant roughly this size. Recalling that we are
counting number fields of discriminant only up to $X$, this means that a
<strong>relevant</strong> polynomial of this height must be exceptional in one of two ways:</p>
<ol>
<li>either the discriminant of $f$ is unusually small, or</li>
<li>the discriminant of the number field cut out by $f$ is much smaller than the
discriminant of $f$.</li>
</ol>
<p>In both our work and in BSW, those $f$ with unusually small discriminant
are bounded straightforwardly and lossily.</p>
<p>The heart of the argument is in the latter case. Here, the ratio of the two
discriminants is the square of the index
$[\mathcal{O}_K : \mathbb{Z}[\alpha]]$, where $\alpha$ is a root of $f$. Thus
we bound the number of polynomials whose discriminants have large square
divisors.</p>
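A tiny numerical illustration of this identity (my own example, not from the paper): for $f = x^2 - 5$, the field $K = \mathbb{Q}(\sqrt{5})$ has ring of integers $\mathbb{Z}[(1+\sqrt{5})/2]$, field discriminant $5$, and index $[\mathcal{O}_K : \mathbb{Z}[\sqrt{5}]] = 2$, so $\mathrm{disc}(f) = 2^2 \cdot 5 = 20$. This can be checked in sympy.

```python
from sympy import symbols, discriminant

x = symbols('x')

f = x**2 - 5
disc_f = discriminant(f, x)  # polynomial discriminant of x^2 - 5
disc_K = 5                   # field discriminant of Q(sqrt(5)), known classically
index = 2                    # [O_K : Z[alpha]] for alpha = sqrt(5)

# disc(f) = index^2 * disc(K)
print(disc_f, index**2 * disc_K)
```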
<p>It is in establishing bounds for polynomials with particularly squarefull
discriminants that our ideas and those in BSW significantly diverge.</p>
<p>In our work, we study the problem locally. That is, we study the behavior of
$\psi_{p^{2k}}$, the characteristic function for monic polynomials of degree
$n$ over $\mathbb{Z}/p^{2k}\mathbb{Z}$ having discriminant congruent to $0
\bmod p^{2k}$. As in our work on quantitative Hilbert irreducibility, we
translate this problem into a sieve problem with local weights coming from
Fourier transforms $\widehat{\psi_{p^{2k}}}$ after passing through Poisson
summation, and we study the Fourier transforms using a variety of somewhat
ad-hoc techniques.</p>
<p>In BSW, they reason differently. They use recent explicit quantitative Hilbert
irreducibility work from Castillo and Dietmann to replace the fundamental
underlying sieve. To do this, they translate the task of counting relevant
polynomials into the task of counting <strong>distinguished</strong> points in spaces of
$n \times n$ symmetric matrices — and then show that Castillo and
Dietmann's work bounds these points.</p>
<p>Even though the number field count in BSW is stronger than our number field
count, we think that our methods and ideas will have other applications.
Further, we've noticed remarkable interactions between local Fourier analysis
and discriminants of polynomials.</p>
<h2>See also</h2>
<ul>
<li>See also my note on <a href="/simplified-improved-bounds/">a description and simplified proofs of many of the ideas in this paper</a>.</li>
</ul>https://davidlowryduda.com/paper-counting-number-fieldsThu, 28 Apr 2022 03:14:15 +0000Comments on this sitehttps://davidlowryduda.com/comments-v4David Lowry-Duda<p>I am on the fourth iteration of a comment system for this site.</p>
<h3>Comment Graveyard</h3>
<p>Initially I used the Wordpress default, which is okay-ish. But there are problems
with formatting and writing mathjax-able math in comments.<sup>1</sup>
<span class="aside"><sup>1</sup>More generally,
post-rendering Wordpress content with javascript is a security nightmare and is
often hard. This is one reason why this site is no longer using Wordpress.</span></p>
<p>Then I used Disqus. Disqus works by running externally hosted
javascript.<sup>2</sup>
<span class="aside"><sup>2</sup>Also a security nightmare.</span>
As one might worry, they
began to inject ads into comment sections. I <strong>do not run ads</strong> and that was
unacceptable. I've learned a lesson about external dependencies.</p>
<p>Then I used a comment system built on top of Wordpress. This was slightly
better, but written in PHP.</p>
<h3>Simple Comments</h3>
<p>The new comment system is email-based. <strong>Plain email.</strong> I drew inspiration from
<a href="https://tdarb.org/blog/poormans-comment-system.html">tdarb</a> and
<a href="https://solar.lowtechmagazine.com/">lowtechmagazine</a> (who have precisely the
same comment "system").</p>
<p>I know that requiring an email adds a small amount of <em>friction</em> in the comment
process. I don't know how this will affect comment spam,<sup>3</sup>
<span class="aside"><sup>3</sup>which was wildly
common in previous iterations.</span>
but I think it might balance out.</p>
<p>Currently, I enable a significant amount of markup in comments. The comments
are processed with markdown and allow mathjax (assuming that mathjax is enabled
on the page). This is because I use the same preprocessing on comments as I do
on pages for this site.</p>https://davidlowryduda.com/comments-v4Sat, 02 Apr 2022 03:14:15 +0000Plaintext Emailhttps://davidlowryduda.com/plaintext-emailDavid Lowry-Duda<p>Some email clients and email marketing groups have popularized email usage
patterns that are considered poor form for developer emails, technical emails,
or on mailing lists.</p>
<h3>Plaintext Email</h3>
<p>Many email clients compose emails with HTML, enabling rich text formatting.
Rich text formatting hinders development-oriented email conversations as it can
break simple tasks like copy-pasting code snippets.</p>
<p>HTML emails are mainly used for marketing (or to include tracking pixels, i.e.
special images hosted on a server that tracks information about the receiver
upon loading them). HTML emails are one of the most common vectors for
phishing, they're less accessible, and they render inconsistently across
recipients.</p>
<p>If you're sending an email, consider preferring plaintext. If you're
sending a technical email or an email concerning programming, you should very
strongly prefer plaintext.</p>
<p>For more on plaintext email, see <a href="https://useplaintext.email/">useplaintext.email</a>.</p>https://davidlowryduda.com/plaintext-emailMon, 28 Mar 2022 03:14:15 +0000Slides from a talk: Computing and verifying Maass formshttps://davidlowryduda.com/talk-computing-and-verifying-maass-formsDavid Lowry-Duda<p>Today, I'm giving a talk on ongoing efforts to compute and verify Maass
forms. We describe Maass forms, Hejhal's algorithm, and the idea of how to
rigorously improve weak initial estimates to high precision estimates.</p>
<p><a
href="http://davidlowryduda.com/wp-content/uploads/2022/03/BYUMaass-compressed.pdf">The
slides for my talk are available here</a>.</p>https://davidlowryduda.com/talk-computing-and-verifying-maass-formsThu, 24 Mar 2022 03:14:15 +0000Slides from a talk: How computation and experimentation inform researchhttps://davidlowryduda.com/talk-how-computation-and-experimentation-inform-researchDavid Lowry-Duda<figure class="center">
<img src="https://davidlowryduda.com/wp-content/uploads/2022/03/byu_focus_on_math.png" width="600" />
</figure>
<p>Today I'm giving a talk at BYU on computation, experimentation, and
research. The first half of the talk examines the historic role of computation
and experimentation. Several examples are given. In the second half, I take a
more personal angle and begin to describe how I've incorporated computational
and experimental ideas into my own work.</p>
<p><a href="http://davidlowryduda.com/wp-content/uploads/2022/03/BYU_FOCUS_compressed.pdf">The
slides from my talk are available here</a>.</p>https://davidlowryduda.com/talk-how-computation-and-experimentation-inform-researchWed, 23 Mar 2022 03:14:15 +0000colormapplot - like phasemagplot, but with colormapshttps://davidlowryduda.com/colormapplotDavid Lowry-Duda<p>I am happy to announce that an enhanced version of <a href="https://github.com/davidlowryduda/phase_mag_plot/">phasemagplot</a> is now available, which I refer to as <code>colormapplot</code>. (See also the <a href="/phase_mag_plot-a-sage-package-for-plotting-complex-functions/">announcement post for phasemagplot</a>).</p>
<p>This is available at <a href="https://github.com/davidlowryduda/phase_mag_plot/">davidlowryduda/phase_mag_plot</a> on github as a sage library. See the github page and README for examples and description. The docstring from within sage should also be of use.</p>
<p>As a general rule, the interface is designed to mimic the complex plotting interface from sage as closely as possible. The primary difference here is that there is an optional <code>cmap</code> keyword argument. This can be given any matplotlib-compatible colormap, and the resulting image will be given with that colormap.</p>
<p>This is capable of producing colormapped, contoured images such as the following.</p>
<figure class="center shadowed">
<img src="https://davidlowryduda.com/wp-content/uploads/2022/02/poly_cividis.png" width="500" />
</figure>https://davidlowryduda.com/colormapplotFri, 11 Feb 2022 03:14:15 +0000Zeros of Dirichlet Series IIhttps://davidlowryduda.com/zeros-of-dirichlet-series-iiDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/zeros-of-dirichlet-series-iiMon, 24 Jan 2022 03:14:15 +0000Zeros of Dirichlet Serieshttps://davidlowryduda.com/zeros-of-dirichlet-seriesDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/zeros-of-dirichlet-seriesThu, 20 Jan 2022 03:14:15 +0000On van der Waerden's Conjecturehttps://davidlowryduda.com/on-van-der-waerdens-conjectureDavid Lowry-Duda<p>Recently<sup>1</sup>
<span class="aside"><sup>1</sup>actually a few weeks ago</span>
, Manjul Bhargava uploaded his
paper <a href="https://arxiv.org/abs/2111.06507">Galois groups of random
integer polynomials and van der Waerden's Conjecture</a> to the arXiv. The
primary result of this paper is to prove van der Waerden's conjecture that the
number of polynomials with "small" Galois group is "small".</p>
<p>Improving the bounds towards this conjecture was one of the purposes of my
recent paper with Anderson, Gafni, Lemke Oliver, Shakan, and Zhang (accepted to
IMRN; <a href="https://arxiv.org/abs/2107.02914">arXiv preprint</a>; previous
discussion <a
href="/paper-announcement-quantitative-hit-and-almost-prime-polynomial-discriminants/">on
this site</a>). I'll refer to this paper as AGLDLOSZ<sup>2</sup>
<span class="aside"><sup>2</sup>If we make my last
name "LoDu" instead of LD, then AGLoDuLOSZ is pronounceable in Polish or
Hungarian, which is what I say to myself when I read it.</span>
.</p>
<p>For $H \geq 2$, let $E _n(H)$ count the number of monic integer polynomials
$f(x) = x^n + a _1 x^{n-1} + \cdots + a _n$ of degree $n$ with $\lvert a _i
\rvert \leq H$, <em>and</em> whose Galois group is not the full Galois group $S
_n$. Classical reasoning due to Hilbert shows that $E _n(H) = o(H^n)$,
sometimes phrased as indicating that one hundred percent of monic polynomials
are irreducible and have Galois group $S _n$.</p>
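As a concrete (and far weaker) illustration, here is a short Python sketch of mine that counts monic cubics with $\lvert a _i \rvert \leq H$ that are reducible over $\mathbb{Q}$; these form a subset of the polynomials counted by $E _3(H)$, and their proportion among all $(2H+1)^3$ cubics shrinks as $H$ grows.

```python
from itertools import product
from sympy import Poly, symbols

x = symbols('x')

def count_reducible(H):
    """Count monic cubics x^3 + a1 x^2 + a2 x + a3 with |ai| <= H
    that are reducible over Q (a lower bound for E_3(H))."""
    count = 0
    for a1, a2, a3 in product(range(-H, H + 1), repeat=3):
        if not Poly(x**3 + a1*x**2 + a2*x + a3, x).is_irreducible:
            count += 1
    return count

for H in (1, 2, 4):
    print(H, count_reducible(H), (2*H + 1)**3)
```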
<p>Van der Waerden's conjecture concerns improving this count. Improvements using
varied techniques and ideas have appeared over the years. Prior to the paper of
Bhargava, the best record was held by my collaborators and me in AGLDLOSZ, when
we showed that $$ E _n(H) = O(H^{n - \frac{2}{3} + \frac{2}{3n + 3} +
\epsilon}). $$ But now Bhargava proves the conjecture outright, proving that $$
E _n(H) = O(H^{n - 1}). $$</p>
<p>This is a remarkable improvement and a very good result!</p>
<p>As in AGLDLOSZ, Bhargava studies the problem with a mixture of algebraic
techniques and Fourier analysis. Let $V(\mathbb{F} _p)$ denote the space of
monic degree $n$ (which I keep implicit in the notation) polynomials over
$\mathbb{F} _p$. For any complex function $\psi _p$ on $V(\mathbb{F} _p)$,
define its Fourier transform $\widehat{\psi} _p$, a function on the dual space
$V^*(\mathbb{F} _p)$, by $$ \widehat{\psi} _p(g) =
\frac{1}{p^n} \sum _{f \in V(\mathbb{F} _p)}
\psi _p(f) \exp\left( \frac{2\pi i \langle f, g \rangle}{p} \right).$$</p>
<p>We should think of $\psi _p$ as standing for a characteristic function of some appropriate set $S \subset V(\mathbb{F} _p)$. If $\phi$ is a Schwartz function approximating the characteristic function of $[-1, 1]^n$, then Poisson summation gives $$ \sum _{f \in V(\mathbb{Z})} \phi(f/H) \psi _p(f) =
H^n \sum _{g \in V^*(\mathbb{Z})} \widehat{\phi}(gH/p) \widehat{\psi} _p(g).
\tag{1}
$$ For reasonable $\phi$, the left hand side of $(1)$ gives a good upper bound for the number of elements projecting to $S$ from the polynomial box $[-H, H]^n$. As $\phi$ is Schwartz, the rapid decay of $\widehat{\phi}$ should bound the error term on the right hand side by $\max \lvert \widehat{\psi} _p(g) \rvert$ times the size of the box $H^n$, with a possible main term coming from the $g = 0$ term.</p>
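To make the Fourier setup concrete, here is a small numerical sketch of mine (not from either paper) for $n = 2$, $p = 5$, using the normalization $\widehat{\psi} _p(g) = p^{-n} \sum _f \psi _p(f) e(\langle f, g \rangle / p)$: take $\psi _p$ to be the indicator of monic quadratics with discriminant $\equiv 0 \bmod p$ and compute the transform directly. The value at $g = 0$ is just the density of such polynomials.

```python
import cmath
from itertools import product

p, n = 5, 2  # small example: monic quadratics x^2 + c1 x + c2 over F_5

def psi(c1, c2):
    """Indicator that disc(x^2 + c1 x + c2) = c1^2 - 4 c2 vanishes mod p."""
    return 1 if (c1 * c1 - 4 * c2) % p == 0 else 0

def psi_hat(g1, g2):
    """Fourier transform (1/p^n) * sum_f psi(f) e(<f, g>/p),
    with the pairing <f, g> = c1*g1 + c2*g2."""
    total = 0
    for c1, c2 in product(range(p), repeat=n):
        total += psi(c1, c2) * cmath.exp(2j * cmath.pi * (c1*g1 + c2*g2) / p)
    return total / p**n

# psi_hat(0, 0) is the density of f with p | disc(f): here 5/25 = 1/5,
# since c2 is determined by c1 (4 is invertible mod 5).
print(abs(psi_hat(0, 0)))
```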
<p>In AGLDLOSZ, we used precisely this Fourier setup in a modified form of Selberg's sieve. We focused on counting polynomials $f$ whose Galois group was a subgroup of $A _n$, and we chose $\psi _p$ to be roughly an indicator function that $f (\bmod p)$ had splitting type mod $p$ that was compatible with Galois group $A _n$. (Actually, we were sieving the <em>incompatible</em> elements <em>out</em>, but this is unimportant). The limit of our result was in understanding the size of the error term in $(1)$, which amounts to providing good bounds for the Fourier transform $\widehat{\psi} _p(g)$. For us, we related this error term to general bounds for the Mobius $\mu$ function and applied these general bounds.</p>
<p>In this paper, Bhargava uses more refined indicator functions. Suppose that the polynomial $f$ factors over $\mathbb{F} _p$ as $\prod P _i^{e _i}$, where each $P _i$ is irreducible (and distinct) and the degree of $P _i$ is $f _i$. Then the degree of $f$ is $\sum f _i e _i$ and we can define the <em>index</em> of $f$ mod $p$ as $\sum (e _i - 1) f _i$.</p>
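The index is easy to compute from the factorization over $\mathbb{F} _p$; here is a short sympy sketch of my own illustrating the definition.

```python
from sympy import Poly, symbols

x = symbols('x')

def index_mod_p(f, p):
    """Compute the index sum((e_i - 1) * deg(P_i)) of a monic polynomial f
    from its factorization prod P_i^{e_i} over F_p."""
    _, factors = Poly(f, x, modulus=p).factor_list()
    return sum((e - 1) * P.degree() for P, e in factors)

# x^4 + 2x^3 + x^2 = x^2 (x + 1)^2 over F_5, so the index is
# (2-1)*1 + (2-1)*1 = 2; a squarefree factorization has index 0.
print(index_mod_p(x**4 + 2*x**3 + x**2, 5))
```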
<p>Bhargava roughly considers indicator functions for polynomials having specified index, and shows that for almost any index the corresponding Fourier transform has significant decay. This is roughly the content of Proposition 24 and Corollary 25 (for non-monic polynomials) or Proposition 28 and Corollary 29 (for monic polynomials).</p>
<p>The ideas and methods used in the proofs of Propositions 24 and 28 in particular are very powerful. I think they're worth meditating over, and I'll spend more time thinking about them.</p>
<p>To complete the argument, Bhargava then splits up the regions to estimate. Note that counting polynomials and counting the number fields generated by those polynomials are very similar; here, we count the number fields. Using a result of Lemke Oliver and Thorne, it is possible to bound the number of polynomials leading to number fields with "small" absolute discriminant and "small" product of ramified primes. If the product of ramified primes is "small" but the discriminant is "large", then the index of these polynomials must be large and is thus bounded by his index counts above.</p>
<p>The third case, where the product of the ramified primes is large, takes more work. Bhargava supplies an additional argument using discriminants. In short, one can show that for each ramified prime $p _r$, the source polynomial $f$ must have a triple root or a pair of double roots mod $p _r$. It turns out that this controls the mod $p$ structure of an iterated discriminant, and counting the number of polynomials giving this structure gives a bound $O(H^{n-1 + \epsilon})$. (This is my summary of the bottom paragraphs of pg 22 on the arXiv version).</p>
<p>Further work is needed to remove the $\epsilon$, but this takes small details when compared to the earlier, bigger ideas.</p>https://davidlowryduda.com/on-van-der-waerdens-conjectureTue, 07 Dec 2021 03:14:15 +0000Slides from a talk on Quantitative Hilbert Irreducibilityhttps://davidlowryduda.com/slides-from-a-talk-on-quantitative-hilbert-irreducibilityDavid Lowry-Duda<p>I'm giving a talk today on my recent and forthcoming work in collaboration with Theresa Anderson, Ayla Gafni, Robert Lemke Oliver, George Shakan, Frank Thorne, Jiuya Wang, and Ruixiang Zhang. The <a rel="noreferrer noopener" href="/wp-content/uploads/2021/11/SimonsNov2021_HIT.pdf" target="_blank">slides for my talk can be found here</a>.</p>
<p>This talk includes some discussion of our paper to appear in IMRN (<a rel="noreferrer noopener" href="https://arxiv.org/abs/2107.02914" target="_blank">link to the arXiv version</a>, which is mostly the same as what will be published). (See also my <a rel="noreferrer noopener" href="/paper-announcement-quantitative-hit-and-almost-prime-polynomial-discriminants/" target="_blank">previous discussion on this paper</a>). But I'll note that in this talk I lean towards a few ideas that did not make it into the paper, but which we are using in current work.</p>
<p>In particular, in our paper we don't need to use group actions or classify orbit sizes, but it turns out that this is a very strong idea! I'll note that in a very particular case, Thorne and Taniguchi have applied this type of orbit counting method <a rel="noreferrer noopener" href="https://arxiv.org/abs/1607.07827" target="_blank">in their paper</a> "Orbital exponential sums for prehomogeneous vector spaces" to gain extremely strong, specific understanding of Fourier transform for their application.</p>https://davidlowryduda.com/slides-from-a-talk-on-quantitative-hilbert-irreducibilitySat, 06 Nov 2021 03:14:15 +0000Project Report on "Prime Sums"https://davidlowryduda.com/project-report-on-prime-sumsDavid Lowry-Duda<p>This summer, I proposed a research project for <a href="https://promys.org/">PROMYS</a> (the PROgram in Mathematics for Young Scientists), a six-week intensive summer program at Boston University where highly motivated high school students explore mathematics. Three students (Nir Elber, Raymond Feng, and Henry Xie) chose to work on this project, and previous PROMYSer Anupam Datta gave additional guidance. Their <a href="/wp-content/uploads/2021/10/prime-sums-summary.pdf">summary of their findings can be found here</a>. (<strong>UPDATE</strong>: a version of this now appears <a href="https://arxiv.org/abs/2111.02795">on the arXiv</a> too).</p>
<p>Here I briefly describe the project and the work of Nir, Raymond, and Henry.</p>
<p>The project was organized around understanding why the following picture has so much structure.</p>
<figure class="center shadowed">
<img src="/wp-content/uploads/2021/10/fig_5e7.png" width="100%" />
</figure>
<p>Fundamentally, this image depicts differences between sums related to primes. Let $p_n$ denote the $n$th prime. It follows from the Prime Number Theorem that $p_n \approx n \log n$, and thus that $n p_n \approx n^2 \log n$. One can also show that $$ \sum_{m \leq n} p_m \approx \frac{1}{2} n^2 \log n,$$ and thus we should have that $$ \frac{n p_n}{\sum_{m \leq n} p_m} \to 2.\tag{1}$$</p>
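One can watch the convergence in $(1)$ numerically. A quick sympy sketch (my own, not part of the students' write-up):

```python
from sympy import sieve

def ratio(n, primes):
    """Compute n * p_n / (p_1 + ... + p_n), which tends to 2 as n grows."""
    return n * primes[n - 1] / sum(primes[:n])

primes = list(sieve.primerange(2, 10**5))  # more than 9000 primes

for n in (100, 1000, 9000):
    print(n, ratio(n, primes))
```

The convergence is quite slow, which is part of why the plots retain so much visible structure at these scales.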
<p>The vertical axis in the image above examines differences between consecutive $n$ in $(1)$ (in log scale), while the horizontal axis gives $n$ (also in log scale).</p>
<p>The fact that $(1) \to 0$ corresponds to the overall downwards trend in the graph. But there is so much more structure! Why do the points fall into "troughs" or along "curtains"? Does each line mean something?</p>
<figure class="center shadowed">
<img src="/wp-content/uploads/2021/10/four_bands.png"
width="100%" />
</figure>
<p>In this version, I've colored differences coming from when $p_n$ is a twin prime (in blue), a cousin prime (in green), a sexy prime (in red), or a prime $p$ such that the next prime is $p+8$ (in cyan). The first dot is black because it comes from $2$. The next two correspond to $3$ and $5$ (both twin primes), and the fourth dot corresponds to $7$ and is green because the next prime after $7$ is $11$, and so on.</p>
<p>This is a strong hint that the structure in the plots has a distributional origin.</p>
<p>Nir, Raymond, and Henry proved many things! They quantified the rate of convergence in $(1)$, thus establishing the guaranteed downward trend in the images, and they found images that better convey the structure of what's going on. I was already very impressed, but then they branched out and studied more!</p>
<h3>Cramér's Model</h3>
<p>We chose to investigate a nuanced question: what aspects of the initial plots depend strongly on the fact that the underlying data consists of <em>primes</em>, and what aspects depend only on the fact that the underlying data consists of integers with the same <em>density as the primes</em>?</p>
<p>To study this, one can create a new set of distinguished elements called Promys Primes (PPrimes) with the same density as true primes using probabilistic ideas of Cramér. Let's call $2$ and $3$ PPrimes, and then for each odd $m \geq 5$, we call $m$ a PPrime with probability $2 / \log m$. Do this for a large sequence of $m$, and we get a collection of PPrimes that has (with very high probability) the same density as true primes, but none of the multiplicative structure.</p>
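A minimal sketch of such a Cramér-style model in Python (my own implementation of the rule just described):

```python
import math
import random

def promys_primes(limit, seed=0):
    """Generate 'PPrimes' up to limit: 2 and 3 are PPrimes, and each odd
    m >= 5 is a PPrime independently with probability 2 / log(m)."""
    rng = random.Random(seed)
    pp = [2, 3]
    for m in range(5, limit + 1, 2):
        if rng.random() < 2 / math.log(m):
            pp.append(m)
    return pp

pp = promys_primes(10**5)
# By construction the count should be close to pi(10^5) = 9592.
print(len(pp))
```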
<p>It turns out that for sets of PPrimes, there are analogous pictures and the asymptotics are even better! This is in section 3 of their write-up.</p>
<h3>Gaussian Integers</h3>
<p>We also thought to study analogous situations in related sets of primes, such as the Gaussian integers. Recall that the Gaussian integers $\mathbb{Z}[i] = \{ a + bi : a, b \in \mathbb{Z} \}$ are a unique factorization domain and have a rich theory of primes. Sometimes this theory is very similar to the standard theory of primes over $\mathbb{Z}$. But there are challenges.</p>
<p>One significant challenge is that $\mathbb{C}$ is not ordered. A related challenge is that there are more <em>units</em>. Over $\mathbb{Z}$, both $2$ and $-2$ are primes, but we typically recognize $2$ as being more "simple". For Gaussian primes, there isn't such a choice; for example each of $1 + i, 1 - i, -1 + i, -1 - i$ are Gaussian primes, but none are more simple or fundamental than the others.</p>
<p>More concretely, one has to be careful even with how to define the "sum of the first $n$ primes". One natural thought might be to sum all Gaussian primes $\pi$ that have norm up to $X$. But one can quickly see that this sum is $0$, for the same reason that the sum of all rational primes with absolute value up to $X$ must vanish ($p + (-p) = 0$). In the Gaussian case, it is also true that $$ \sum_{N(\pi) \leq X} \pi^2 = 0.$$</p>
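The vanishing of $\sum_{N(\pi) \leq X} \pi^2$ can be checked directly: the set of Gaussian primes is closed under multiplication by $i$, and $(i\pi)^2 = -\pi^2$, so the contributions cancel in orbits. A small Python verification of mine, using a standard characterization of Gaussian primes:

```python
from sympy import isprime

def is_gaussian_prime(a, b):
    """a + bi is a Gaussian prime iff its norm a^2 + b^2 is a rational
    prime, or one of a, b is 0 and the other is (up to sign) a rational
    prime congruent to 3 mod 4."""
    if a == 0:
        return isprime(abs(b)) and abs(b) % 4 == 3
    if b == 0:
        return isprime(abs(a)) and abs(a) % 4 == 3
    return isprime(a * a + b * b)

def sum_of_squares(X):
    """Sum pi^2 over all Gaussian primes pi with N(pi) <= X,
    tracking real and imaginary parts as exact integers."""
    re = im = 0
    bound = int(X**0.5) + 1
    for a in range(-bound, bound + 1):
        for b in range(-bound, bound + 1):
            if 0 < a*a + b*b <= X and is_gaussian_prime(a, b):
                re += a*a - b*b   # Re((a+bi)^2)
                im += 2*a*b       # Im((a+bi)^2)
    return re, im

print(sum_of_squares(100))  # (0, 0), as the orbit cancellation predicts
```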
<p>But they considered higher powers, where there aren't trivial or obvious reasons for massive cancellation, and they showed that there is <em>always</em> nontrivial cancellation. This is interesting on its own!</p>
<p>Then they also constructed a mixture, a Cramér-type model for Gaussian primes and showed that one should expect nontrivial cancellation there for purely distributional reasons.</p>
<p>I leave the <a href="/wp-content/uploads/2021/10/prime-sums-summary.pdf">details to their write-up</a>. But they've done great work, and I look forward to seeing what they come up with in the future.</p>https://davidlowryduda.com/project-report-on-prime-sumsSat, 30 Oct 2021 03:14:15 +0000Slides from a talk at Maine-Québechttps://davidlowryduda.com/slides-from-a-talk-at-maine-quebecDavid Lowry-Duda<p>At this year's <a href="https://archimede.mat.ulaval.ca/MAINE-QUEBEC/">Maine-Québec Number Theory Conference</a>, I'm giving a talk on <strong>Zeros of half-integral weight Dirichlet series</strong>. <a href="/wp-content/uploads/2021/10/MQ2021.pdf">Here are the slides</a>. I note that the references for the slides are included here at the end.</p>
<p>I'll also note a few open problems that I don't know how to handle and that I briefly describe during the talk.</p>
<ol><li>Is it possible to show that every (symmetrized) Dirichlet series associated to a half-integral weight modular form must have zeros off the critical line? This is true in practice, but seems hard to show.</li><li>Is it possible to determine whether a given Dirichlet series has zeros in the half-plane of absolute convergence? If there is one zero, there are infinitely many - but is there a way of determining if there are any?</li><li>Why does there seem to be a gap around the critical line in zero distribution?</li><li>Can one explain why the pair correlation seems well-behaved (even heuristically)?</li></ol>https://davidlowryduda.com/slides-from-a-talk-at-maine-quebecSat, 02 Oct 2021 03:14:15 +0000Slides from a talk at Bridges 2021https://davidlowryduda.com/slides-from-a-talk-at-bridges-2021david lowry-duda<p>I gave a talk on visualizations of modular forms made with Adam Sakareassen at Bridges 2021. This talk goes with <a rel="noreferrer noopener" href="http://archive.bridgesmathart.org/2021/bridges2021-273.html" target="_blank">our short article</a>. In this talk, I describe the line of ideas going towards producing three dimensional visualizations of modular forms, which I like to call <em>modular terrains</em>. When we first wrote that talk, we were working towards the following video visualization.</p>
<iframe width="560" height="315" src="https://www.youtube.com/embed/s6sdEbGNdic" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
<p>We are now working in a few different directions, involving informational visualizations of different forms and different types of forms, as well as purely artistic visualizations.</p>
<p>The <a href="/wp-content/uploads/2021/08/towards_flying_through_modular_forms.pdf" target="_blank" rel="noreferrer noopener">slides for this talk can be found here</a>.</p>
<p>I've recently been very fond of including renderings based on a picture of my wife and me in Iceland (from the beforetimes). This is us as a wallpaper (preserving many of the symmetries) for a particular modular form.</p>
<div class="wp-block-image"><figure class="aligncenter size-full is-resized"><a href="/wp-content/uploads/2021/08/family_render_oblique.png"><img src="/wp-content/uploads/2021/08/family_render_oblique.png" alt="" class="wp-image-3014" width="500" height="500"/></a></figure></div>
<p>I reused a few images from <a rel="noreferrer noopener" href="https://visual.davidlowryduda.com/2021/painted_modular_terrains.html" target="_blank">Painted Modular Terrains</a>, which I made a few months ago.</p>
<p>If you're interested, you might also like a few previous talks and papers of mine:</p>
<ul><li><a rel="noreferrer noopener" href="/slides-from-a-talk-on-visualizing-modular-forms/" data-type="post" data-id="2978" target="_blank">Slides from a talk on Visualizing Modular Forms</a></li><li><a rel="noreferrer noopener" href="/slides-from-a-talk-on-computing-maass-forms/" data-type="post" data-id="2936" target="_blank">Slides from a talk on computing Maass forms</a></li><li><a rel="noreferrer noopener" href="/notes-behind-a-talk-visualizing-modular-forms/" data-type="post" data-id="2821" target="_blank">Notes behind a talk: visualizing modular forms</a></li><li><a rel="noreferrer noopener" href="/trace-form/" data-type="post" data-id="2968" target="_blank">Trace form 3.32.a.a</a></li><li><a rel="noreferrer noopener" href="/phase_mag_plot-a-sage-package-for-plotting-complex-functions/" data-type="post" data-id="2889" target="_blank">phase_mag_plot: a sage package for plotting complex functions</a></li><li>A paper: <a rel="noreferrer noopener" href="https://arxiv.org/abs/2002.05234" target="_blank">Visualizing modular forms</a></li><li>A paper: <a rel="noreferrer noopener" href="https://arxiv.org/abs/2002.04717" target="_blank">Computing classical modular forms</a></li><li>Bridges paper: <a href="http://archive.bridgesmathart.org/2021/bridges2021-273.html" target="_blank" rel="noreferrer noopener">Towards flying through modular forms</a></li></ul>https://davidlowryduda.com/slides-from-a-talk-at-bridges-2021Mon, 02 Aug 2021 03:14:15 +0000Paper: Quantitative HIT and Almost Prime Polynomial Discriminantshttps://davidlowryduda.com/paper-announcement-quantitative-hit-and-almost-prime-polynomial-discriminantsDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. 
Please view it directly at the url.https://davidlowryduda.com/paper-announcement-quantitative-hit-and-almost-prime-polynomial-discriminantsFri, 09 Jul 2021 03:14:15 +0000Slides from a talk on Visualizing Modular Formshttps://davidlowryduda.com/slides-from-a-talk-on-visualizing-modular-formsDavid Lowry-Duda<p>Yesterday I gave a talk at the University of Oregon Number Theory seminar on <em>Visualizing Modular Forms</em>. This is a spiritual successor to my <a href="https://arxiv.org/abs/2002.05234">paper on Visualizing modular forms</a> that is to appear in Simons Symposia volume <em>Arithmetic Geometry, Number Theory, and Computation</em>. </p>
<p>I've worked with modular forms for almost 10 years now, but I've only known what a modular form looks like for about 2 years. In this talk, I explored visual representations of modular forms, with lots of examples.</p>
<p>The <a href="/wp-content/uploads/2021/05/visualizing_modular_forms-compressed.pdf">slides are available here</a>.</p>
<p>I'll share one visualization here that I liked a lot: a visualization of a particular Maass form on $\mathrm{SL}(2, \mathbb{Z})$.</p>
<figure class="center shadowed">
<img src="/wp-content/uploads/2021/05/trans_new_mu_contour_viridis.png"
width="100%" />
</figure>https://davidlowryduda.com/slides-from-a-talk-on-visualizing-modular-formsTue, 18 May 2021 03:14:15 +0000Trace form 3.32.a.ahttps://davidlowryduda.com/trace-formDavid Lowry-Duda<p>When asked if I might contribute an image for <a href="https://www.msri.org/programs/332#workshops">MSRI program 332</a>, I thought it would be fun to investigate a modular form with a label roughly formed from the program number, 332. We investigate the trace form <code>3.32.a.a</code>.</p>
<div class="wp-block-image"><figure class="aligncenter size-full is-resized"><img src="/wp-content/uploads/2021/04/portrait_pure_transparent.png" alt="" class="wp-image-2969" width="450" height="450"/></figure></div>
<p>The space of weight $32$ modular forms on $\Gamma_0(3)$ with trivial central character is an $11$-dimensional vector space. The subspace of newforms is a $5$-dimensional vector space.</p>
<p>These newforms break down into two groups: the two embeddings of an abstract newform whose coefficients lie in a quadratic field, and the three embeddings of an abstract newform whose coefficients lie in a cubic field. The label <code>3.32.a.a</code> is a label for the two newforms with coefficients in a quadratic field.</p>
<p>These images are for the trace form, made by summing the two conjugate newforms in <code>3.32.a.a</code>. This trace form is a newform of weight $32$ on $\Gamma_1(3)$.</p>
<p>Each modular form is naturally defined on the upper half-plane. In these images, the upper half-plane has been mapped to the unit disk. This mapping is uniquely specified by the following pieces of information: the real line $y = 0$ in the plane is mapped to the boundary of the disk, and the three points $(0, i, \infty)$ map to the (bottom, center, top) of the disk.</p>
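<p>For concreteness, a Möbius transformation satisfying those three conditions can be written down explicitly. The formula below is my own reconstruction from the stated normalization, not necessarily the exact map used to render these images.</p>

```python
# Map the upper half-plane to the unit disk so that the real line y = 0 goes
# to the boundary circle, with 0 -> bottom (-i), i -> center (0), and
# infinity -> top (i).
def to_disk(z):
    return 1j * (z - 1j) / (z + 1j)
```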
<div class="wp-block-image"><figure class="aligncenter size-full is-resized"><img src="/wp-content/uploads/2021/04/portrait_contoured.png" alt="" class="wp-image-2970" width="450" height="450"/></figure></div>
<p>This is a relatively high weight modular form, meaning that magnitudes can change very quickly. In the contoured image, each contour indicates a multiplicative change in elevation: points on one contour are $32$ times larger or smaller than points on adjacent contours.</p>
<p>I have a bit more about this and related visualizations on my <a href="/2021/newform_orbit_332.html" data-type="URL" data-id="/2021/newform_orbit_332.html">visualization site</a>.</p>https://davidlowryduda.com/trace-formTue, 13 Apr 2021 03:14:15 +0000Slides from a talk on Half Integral Weight Dirichlet Serieshttps://davidlowryduda.com/slides-from-a-talk-on-half-integral-weight-dirichlet-seriesDavid Lowry-Duda<p>On Thursday, 18 March, I gave a talk on half-integral weight Dirichlet series at the Ole Miss number theory seminar.</p>
<p>This talk is a description of ongoing explicit computational experimentation with Mehmet Kiral, Tom Hulse, and Li-Mei Lim on various aspects of half-integral weight modular forms and their Dirichlet series.</p>
<p>These Dirichlet series behave like typical beautiful automorphic L-functions in many ways, but are very different in other ways.</p>
<p>The first third of the talk is largely about the "typical" story. The general definitions are abstractions designed around the objects that number theorists have been playing with, and we also briefly touch on some of these examples to have an image in mind.</p>
<p>The second third is mostly about how half-integral weight Dirichlet series aren't quite as well-behaved as L-functions associated to GL(2) automorphic forms, but sufficiently well-behaved to be comprehensible. Unlike the case of a full-integral weight modular form, there isn't a canonical choice of "nice" forms to study, but we identify a particular set of forms with symmetric functional equations. There are several small details that can be considered here, and I largely ignore them for this talk. This is something that I hope to return to in the future.</p>
<p>In the final third of the talk, we examine the behavior and zeros of a handful of half-integral weight Dirichlet series. There are plots of zeros, including a plot of approximately the first 150k zeros of one particular form. These are also interesting, and I intend to investigate and describe these more on this site later.</p>
<p><a href="/wp-content/uploads/2021/03/OleMiss2021_half_wt.pdf">The slides for this talk are available here</a>. </p>
<p></p>https://davidlowryduda.com/slides-from-a-talk-on-half-integral-weight-dirichlet-seriesMon, 22 Mar 2021 03:14:15 +0000A balancing act in "Uniform bounds for lattice point counting"https://davidlowryduda.com/a-balancing-act-in-uniform-bounds-for-lattice-point-countingDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/a-balancing-act-in-uniform-bounds-for-lattice-point-countingMon, 08 Mar 2021 03:14:15 +0000Setting up a Wacom Intuos CTL-4100 drawing tablet on Ubuntu 20.04 LTShttps://davidlowryduda.com/setting-up-a-wacom-intuos-ctl-4100-drawing-tablet-on-ubuntu-20-04-ltsDavid Lowry-Duda<p>For much of the pandemic, when it has come time to write things by hand, I could write on my (old, inexpensive) tablet, or write on paper and point a camera. But more recently I've begun to use collaborative whiteboards, and my tablet simply cannot handle it. To be fair, it's several years old, I got it on sale, and it was even then quite inexpensive. But it's just not up to the task.</p>
<p>So I bought a Wacom drawing tablet to plug into my computer. Specifically, I bought a Wacom Intuos CTL-4100 (about 70 dollars) and have gotten it working on my multiple monitor Ubuntu 20.04 LTS setup.</p>
<p>For many, that would be the end of the story — as these work very well and are just plug-and-play. Or at least, that's the story on supported systems. I use Linux as my daily driver, and on my main machine I use Ubuntu. This is explicitly unsupported by Wacom, but there has long been community support and community drivers.</p>
<p>I note here the various things that I've done to make this tablet work out well.</p>
<p>My Ubuntu distribution (20.04 LTS) already had drivers installed, so I could just plug it in and "use" the drawing tablet. But there were problems.</p>
<p>Firstly, it turns out that when Wacom Intuos CTL-4100 is first plugged in, the status light on the Wacom dims and indicates that it's miscalibrated. This is immediately noticeable, as the left third of the tablet corresponds to the whole writing area on the screen (which also happens to be incorrect at first — this is the second point handled below).</p>
<p>This is caused by the tablet mis-identifying my operating system as Android, and the dimmed light is one way the tablet indicates it's in Android mode. (I'll note that this is also indicated with a different vendor ID in <code>lsusb</code>, where it's reported as <code>0x2D1F</code> instead of <code>0x056A</code>. This doesn't actually matter, but it did help me track down the problem).</p>
<p>Thus after plugging in my tablet, it is necessary to restart the tablet in "PC Mode". This is done by holding the two outer keys on the tablet for a few seconds until the light turns off and on again. After it turns on, it should be at full brightness.</p>
<p>Secondly, I also have multiple screens set up. Although it looks fine, in practice what actually happens is that I have a single "screen" of a certain dimension and the X window system partitions the screen across my monitors. But the Wacom tablet initially was mapped to the whole "screen", and thus the left side of the tablet was at the far left of my left monitor, and 7 inches or so to the right on the tablet corresponded to the far right of my right monitor. All of my writing had the wrong aspect ratio and this was totally unwieldy.</p>
<p>But this is fixable. After plugging in the tablet and having it in PC Mode (described above), it is possible to map its output to a region of the "screen". This is easiest done through <code>xrandr</code> and <code>xsetwacom</code>.</p>
<p>First, I used <code>xrandr --listactivemonitors</code> to get the name of my monitors. I see that my right monitor is labelled <code>DP-2</code>. I've decided that my monitor labelled <code>DP-2</code> will be the monitor in which I use this tablet — the area on the tablet will correspond to the area mapped to my right monitor.</p>
<p>Now I will map the <code>STYLUS</code> to this monitor. First I need to find the id of the stylus. To do this, I use <code>xsetwacom --list devices</code>, whose output for me was</p>
<p><pre><code>Wacom Intuos S Pad pad id: 21 type: PAD
Wacom Intuos S Pen stylus id: 22 type: STYLUS
Wacom Intuos S Pen eraser id: 23 type: ERASER
Wacom Intuos S Pen cursor id: 24 type: CURSOR
</code></pre></p>
<p>I want to map the stylus. (I don't currently know the effect of mapping anything else, and that hasn't been necessary, but I suppose this is a thing to keep in mind). Thus I note the id <code>22</code>.</p>
<p>Then I run <code>xsetwacom --set "22" MapToOutput DP-2</code>, and all works excellently.</p>
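<p>Putting the steps above together, here is a small sketch (my own, not from any Wacom documentation) that scripts the stylus-id lookup, so the mapping can be reapplied after replugging. The parsing assumes the <code>xsetwacom --list devices</code> output format shown above and may need adjusting for other tablets or monitor names.</p>

```shell
# Extract the STYLUS device id from `xsetwacom --list devices` output.
get_stylus_id() {
  grep 'type: STYLUS' | sed 's/.*id: *\([0-9]*\).*/\1/'
}

# Usage (assumes the target monitor is named DP-2, as found via xrandr):
#   xsetwacom --set "$(xsetwacom --list devices | get_stylus_id)" MapToOutput DP-2
```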
<p>I'm sure that I'll encounter more problems at some point in the future. When I do, I'll update these notes accordingly.</p>https://davidlowryduda.com/setting-up-a-wacom-intuos-ctl-4100-drawing-tablet-on-ubuntu-20-04-ltsMon, 01 Mar 2021 03:14:15 +0000Slides from a talk at AIMhttps://davidlowryduda.com/slides-from-a-talk-at-aimDavid Lowry-Duda<p>I'm currently at an AIM workshop on Arithmetic Statistics, Discrete Restriction, and Fourier Analysis. This morning (AIM time)/afternoon (USEast time), I'll be giving a talk on <em>Lattice points and sums of Fourier Coefficients of modular forms</em>.</p>
<p>The theme of this talk is embodied in the statement that several lattice counting problems like the Gauss circle problem are essentially the same as very modular-form-heavy problems, sometimes very closely similar and sometimes appearing slightly different.</p>
<p>In this talk, I describe several recent adventures, successes and travails, in my studies of problems related to the Gauss circle problem and the task of producing better bounds for the sum of the first several coefficients of holomorphic cuspforms.</p>
<p><a href="/wp-content/uploads/2021/02/AIM2021_sums_of_fourier.pdf">Here are the slides for my talk.</a></p>
<p>I'll note that various parts of this talk have appeared in several previous talks of mine, but since it's the pandemic era this is the first time much of this has appeared in slides.</p>https://davidlowryduda.com/slides-from-a-talk-at-aimWed, 17 Feb 2021 03:14:15 +0000Slides from a talk on computing Maass formshttps://davidlowryduda.com/slides-from-a-talk-on-computing-maass-formsDavid Lowry-Duda<p>Yesterday, I gave a talk on various aspects of computing Maass cuspforms at Rutgers.</p>
<p><a href="/wp-content/uploads/2021/02/Rutgers2021_maass_forms.pdf">Here are the slides for my talk.</a></p>
<p>Unlike most other talks that I've given, this doesn't center on past results that I've proved. Instead, this is a description of an ongoing project to figure out how to rigorously compute many Maass forms, implement this efficiently in code, and add this data to the <a href="https://LMFDB.org">LMFDB</a>.</p>https://davidlowryduda.com/slides-from-a-talk-on-computing-maass-formsWed, 10 Feb 2021 03:14:15 +0000Talk on computing Maass formshttps://davidlowryduda.com/talk-on-computing-maass-formsDavid Lowry-Duda<p>In a remarkable coincidence, I'm giving two talks on Maass forms today (after not giving any talks for 3 months). One of these was a chalk talk (or rather camera on pen on paper talk). My other talk can be found at <a href="/static/Talks/ComputingMaass20/" data-type="URL" data-id="/static/Talks/ComputingMaass20/">/static/Talks/ComputingMaass20/</a>.</p>
<p>In this talk, I briefly describe how one goes about computing Maass forms for congruence subgroups of $\mathrm{SL}(2)$. This is a short and pointed exposition of ideas mostly found in papers of Hejhal and Fredrik Strömberg's PhD thesis. More precise references are included at the end of the talk.</p>
<p>This amounts to a description of the idea of Hejhal's algorithm on a congruence subgroup.</p>
<h2>Side notes on revealjs</h2>
<p>I decided to experiment a bit with this talk. This is not a TeX-Beamer talk (as is most common for math) — instead it's a revealjs talk. I haven't written a revealjs talk before, but it was surprisingly easy.</p>
<p>It took me more time than writing a beamer talk, most likely because I don't have a good workflow with reveal and there were several times when I wanted to use nontrivial javascript capabilities. In particular, I wanted to have a few elements transition from one slide to the next (using the automatic transition capabilities).</p>
<p>At first, I had thought I would write in an intermediate markup format and then translate this into revealjs, but I quickly decided against that plan. The composition stage was a bit more annoying.</p>
<p>But I think the result is more appealing than a beamer talk, and it's sufficiently interesting that I'll revisit it later.</p>https://davidlowryduda.com/talk-on-computing-maass-formsFri, 04 Dec 2020 03:14:15 +0000Long live opalstack!https://davidlowryduda.com/long-live-opalstackDavid Lowry-Duda<p>I finished migrating my webserver from webfaction to opalstack. For those wondering if this is a good idea, I would recommend it. It was fairly painless.</p>
<p>Having said that, I did lose a little data: exactly the one post that warned that I was moving! I don't know how this happened, but after checking this is the only thing lost in the transfer.</p>
<p>But in total, migrating and setting up the various websites (static, php, wordpress, flask) was straightforward, although nontrivial. Migrating email and ensuring that I had no email downtime was more important to me, and more annoying.</p>
<p>In total, the accounts I manage had on the order of 10k email messages, which is not very many in the big picture. Creating new matching mailboxes and mailusers was very easy, but somehow the transferring of the messages was a surprisingly time-consuming endeavor.</p>
<p>I also learned that complete DNS propagation for my email took about 41 hours. Well actually, google/gmail took 41 hours, and every other email service detected the change within 23 hours. I suppose this has something to do with the vast multitude of caching that google/gmail must do. But it was very easy (though slightly annoying) to manually sync email for the whole time. (Actually, I'd expected it to take up to 72 hours, so this wasn't so bad).</p>
<p>Long live the opalstack.</p>
<p>(As the one post where I warn about migration disappeared, I'll note that I moved because godaddy bought webfaction and have been changing the service/running it into the ground. I liked webfaction before they were bought, and I've used them for almost 10 years. Hopefully opalstack will stay stable for many years to come.)</p>https://davidlowryduda.com/long-live-opalstackFri, 04 Dec 2020 03:14:15 +0000The current cover art for the Proceedings of the Royal Societyhttps://davidlowryduda.com/cover-for-prsaDavid Lowry-Duda<p>The <a href="https://royalsocietypublishing.org/toc/rspa/2020/476/2240">current
issue</a> of the Proceedings of the Royal Society A<sup>1</sup>
<span class="aside"><sup>1</sup>covering mathematical,
physical, and engineering sciences, as opposed to the "B" Proceedings, which
covers biological sciences</span>
features cover artwork made by Vikas
Krishnamurthy, Miles Wheeler, Darren Crowdy, Adrian Constantin, and me.</p>
<figure class="center shadowed">
<img src="/wp-content/uploads/2020/08/coverbig.jpg"
width="500" />
</figure>
<p>A version of the cover pre-addition is the following.</p>
<figure class="center shadowed">
<img src="/wp-content/uploads/2020/08/rawcover.jpg"
width="100%" />
</figure>
<p>This is based on the work in <a
href="https://royalsocietypublishing.org/doi/10.1098/rspa.2020.0310">A
transformation between stationary point vortex equilibria</a>, which concerns
solutions to Euler's equation for inviscid (2D) fluid motion $$ \frac{\partial
\mathbf{V}}{\partial t} + (\mathbf{V} \cdot \nabla) \mathbf{V} = - \frac{\nabla
p}{\rho_0}, $$ where $\nabla = (\partial/\partial x, \partial / \partial y)$ is
the 2D gradient operator. There is a notion of vortices for these systems, and
the paper examines configurations of point vortices under certain idealized
conditions that leads to particularly nice analysis. In the situation studied,
one can sometimes begin with one configurations of point vortices and perform a
transformation that yields another, bigger and more complicated configuration.</p>
<p>This is the situation depicted on the cover — begin with a simple configuration and iterate the process. The spiral shape was added afterwards and doesn't describe underlying mathematical phenomena. The different colors of the vortices show whether each vortex is a sink or a source, essentially.</p>
<p>I was told most of this after the fact by Miles — who researches fluid dynamics, is a friend from grad school, and was <a href="/paper-continuous-choices-mvt/">my coauthor on a paper about the mean value theorem</a>. I do not typically think about fluid dynamics (and did not write the paper), and it's a bit funny how I got involved in the production of this cover. But it was fun, and we produced many arresting images. In the future Miles and I intend to revisit these images and better describe how the various aspects of the images describe and reflect the underlying mathematical behavior.</p>
<p>As a fun aside — we didn't only produce one image. We made many, and we made many configurations.<sup>2</sup>
<span class="aside"><sup>2</sup>I should give an additional thanks to Miles, who spent a lot of time hand-tuning the particular parameters and contours in the final image.</span>
In my <a href="https://arxiv.org/abs/2002.05234">work on visualizing modular forms</a>, I developed a few techniques for color selection from matplotlib style colormaps, and produced several variants. I've collected a few of these below.</p>
<p><img class="center" src="/wp-content/uploads/2020/08/assorted_covers1.png" alt="" width="800" height="800" /></p>
<p><img class="center" src="/wp-content/uploads/2020/08/assorted_covers2.png" alt="" width="800" height="800" /></p>
<p><img class="center" src="/wp-content/uploads/2020/08/assorted_covers3.png" alt="" width="800" height="800" /></p>https://davidlowryduda.com/cover-for-prsaWed, 19 Aug 2020 03:14:15 +0000phase_mag_plot: a sage package for plotting complex functionshttps://davidlowryduda.com/phase_mag_plot-a-sage-package-for-plotting-complex-functionsDavid Lowry-Duda<p>Inspired by conversations with Elias Wegert and Frank Farris at the <a href="https://icerm.brown.edu/programs/sp-f19/">Illustrating Mathematics</a> semester program at ICERM last year, I wrote several plotting libraries for complex plotting. I wrote them with the intention of plotting modular forms in a variety of ways, leading to <a href="/notes-behind-a-talk-visualizing-modular-forms/">my talk at Bowdoin in November 2019</a> and <a href="https://arxiv.org/abs/2002.05234">my first post on the CS arxiv</a>.<sup>1</sup>
<span class="aside"><sup>1</sup>and when I learned that arxiv editors read papers close enough to reclassify them.</span></p>
<p>I've gotten several requests to make these plotting libraries available, and so I've made <a href="https://github.com/davidlowryduda/phase_mag_plot/">davidlowryduda/phase_mag_plot</a> available on github as a sage library. See the github page and README for examples and up-to-date information.</p>
<p>This version is capable of producing contour-type plots of complex functions.</p>
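<p>To give a flavor of what "contour-type" means here, the following is a minimal sketch of the underlying idea: hue comes from the phase of the function, and brightness is stepped wherever the magnitude crosses a power of a fixed base, producing multiplicative magnitude contours. This is an illustration of the technique only; it does not use or mirror the actual <code>phase_mag_plot</code> API.</p>

```python
import numpy as np

def phase_mag_data(f, re=(-2.0, 2.0), im=(-2.0, 2.0), n=400, base=2.0):
    """Hue/shade arrays for a phase plot of f with contours at powers of base."""
    x = np.linspace(re[0], re[1], n)
    y = np.linspace(im[0], im[1], n)
    w = f(x[None, :] + 1j * y[:, None])       # evaluate f on a complex grid
    hue = (np.angle(w) / (2 * np.pi)) % 1.0   # phase, mapped into [0, 1)
    k = np.log(np.abs(w)) / np.log(base)      # fractional contour index of |f|
    shade = 0.6 + 0.4 * (k - np.floor(k))     # resets (darkens) at each contour
    return hue, shade
```

The resulting pair can then be pushed through, say, matplotlib's <code>hsv_to_rgb</code> to produce an image.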
<p><a href="/wp-content/uploads/2020/08/polyplot_with_axis.png"><img class="aligncenter wp-image-2892" src="/wp-content/uploads/2020/08/polyplot_with_axis.png" alt="A plot of x^2(x-3)(x-3i) with magnitude-type contours" width="400" height="298" /></a><a href="/wp-content/uploads/2020/08/poly_tiled.png"><img class="aligncenter wp-image-2891" src="/wp-content/uploads/2020/08/poly_tiled.png" alt="A plot of x^2(x-3)(x-3i) with tile-type contours" width="400" height="298" /></a></p>
<p>This does not include any colormap capability yet, as that is a substantially more involved<sup>2</sup>
<span class="aside"><sup>2</sup>perhaps to be read as "hacky" in my current implementation</span>
process. But at some point in the future, I intend to look at revisiting the complex plotting within sage itself, perhaps updating it to allow plots of this nature.</p>https://davidlowryduda.com/phase_mag_plot-a-sage-package-for-plotting-complex-functionsFri, 07 Aug 2020 03:14:15 +0000Notes from a talk at Dartmouth on the Fibonacci zeta functionhttps://davidlowryduda.com/notes-from-a-talk-at-dartmouth-on-the-fibonacci-zeta-functionDavid Lowry-Duda<p>I recently gave a talk "at Dartmouth"<sup>1</sup>
<span class="aside"><sup>1</sup>i.e. over zoom, hosted and organized by Dartmouth</span>
. The focus of the talk was the (odd-indexed) Fibonacci zeta function:
$$ \sum_{n \geq 1} \frac{1}{F(2n-1)^s},$$
where $F(n)$ is the nth Fibonacci number. The theme is that the Fibonacci zeta function can be recognized as coming from an inner product of automorphic forms, and the continuation of the zeta function can be understood in terms of the spectral expansion of the associated automorphic forms.</p>
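<p>Numerically this series is very tame: since $F(n)$ grows like $\phi^n$, the partial sums converge geometrically. The following quick sketch (mine, not code from the talk) computes them for real $s$.</p>

```python
def fib(n):
    """Return the n-th Fibonacci number, with F(1) = F(2) = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def odd_fib_zeta(s, terms=40):
    """Partial sum of sum_{n >= 1} 1/F(2n-1)^s for real s > 0."""
    return sum(1.0 / fib(2 * n - 1) ** s for n in range(1, terms + 1))
```

For instance, at $s = 2$ the sum is approximately $1.29693$.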
<p>This is a talk from ongoing research. I do not yet understand "what's really going on". But within the talk I describe a few different generalizations; firstly, there is a generalization to other zeta functions that can be viewed as traces of units on quadratic number fields, and secondly there is a generalization to quadratic forms recognizing solutions to Pell's equation.</p>
<p>I intend to describe additional ideas from this talk in the coming months, as I figure out how pieces fit together. But for now, <a href="/wp-content/uploads/2020/06/darmouth2020.pdf">here are the slides</a>.</p>https://davidlowryduda.com/notes-from-a-talk-at-dartmouth-on-the-fibonacci-zeta-functionMon, 01 Jun 2020 03:14:15 +0000Pictures of equidistribution - the linehttps://davidlowryduda.com/pictures-of-equidistribution-the-lineDavid Lowry-Duda<p>In my <a href="/points-on-x2-y2-2-equidistribute-with-respect-to-height/">previous
note</a>, we considered equidistribution of rational points on the circle $X^2
+ Y^2 = 2$. This is but one of a large family of equidistribution results
that I'm not particularly familiar with.</p>
<p>This note is the first in a series of notes dedicated to exploring this type of
equidistribution visually. In this note, we will investigate a simpler case —
rational points on the line.</p>
<p>We know that $\mathbb{Q}$ is dense in $\mathbb{R}$. An equidistribution
statement is roughly a way of quantifying this density. Let $I \subseteq
\mathbb{R}$ be a finite interval. We will say that a sequence
$\{x_n\}_{n \geq 1}$ of elements $x_n \in I$ is $\mu$-equidistributed
on $I$ (or equidistributed with respect to $\mu$) if for each subinterval $J
\subset I$ we have that
$$\lim_{X \to \infty} \frac{\#\{n \leq X : x_n \in J\}}{X} = \int_J \mu(x) dx.$$
If $I = (\alpha, \beta)$ and $\mu = 1/(\beta - \alpha)$, then we will simply say that the sequence $\{x_n\}$ is equidistributed in $I$.</p>
<p>Note that $\mu$-equidistribution of a sequence implies that the sequence is
dense in $I$, but the converse is not true. Further, ordering within the
sequence matters — it is possible (and indeed, not hard at all) to reorder an
equidistributed sequence $\{x_n\}$ on $[0, 1]$ into a sequence
$\{y_n\}$ that is not equidistributed on $[0, 1]$. (For example,
interlace rationals in $[0, 1/2]$ sparsely among the rationals $[1/2, 1]$,
ordered by denominator size).</p>
<p>In this note, we will consider $\mathbb{Q}$. There are many enumerations of
$\mathbb{Q}$ — but which enumeration will form our sequence? We will consider
two such enumerations.</p>
<p>Let $q$ be a rational, written as $q = a/b$ (written in least terms). Define
$h_1(q) = b$, and define $h_2(q) = \sqrt{a^2 + b^2}$. These are two
<em>height</em> functions, giving some notion of the complexity of a number.
The first, $h_1$, says that the complexity of a rational is roughly the size of
the denominator. $h_2$ says that the complexity of a rational depends on both
the numerator and denominator, and numbers with a large numerator or
denominator are more complex.</p>
<p>For an implicit (finite) interval $I$, let $X_1$ be an enumeration of rationals
$q \in I$, ordered by $h_1$, and let $X_2$ be an enumeration ordered by $h_2$.
We will consider equidistribution statements of the form
$$\lim_{X \to \infty} \frac{\#\{x_n \in J: h_i(x_n) \leq X\}}{\#\{x_n \in I : h_i(x_n) \leq X \}} = \int_J \mu(x) dx.$$
(This makes the ambiguous ordering of elements with the same height unimportant).</p>
<p>It turns out that both $X_1$ and $X_2$ are sequences with some sense of
equidistribution, <strong>but they are equidistributed with respect to
different functions.</strong> The sequence $X_1$ is simply equidistributed (and
this can be proved using little more than the pigeonhole principle). The
sequence $X_2$ is equidistributed with respect to $$ \mu(x) = \frac{1}{\pi}
\frac{1}{1 + x^2}. $$ I do not find this result obvious at all. I only learned
this fact recently, and I do not reproduce a proof here.</p>
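<p>The first claim is easy to test numerically. The sketch below (helper names are mine) enumerates the rationals of $[0, 1]$ in lowest terms with denominator at most $B$ — an initial segment of $X_1$ — and compares the proportion landing in a subinterval against the length of that subinterval.</p>

```python
from fractions import Fraction

def rationals_by_denominator(B):
    """All a/b in lowest terms with 0 <= a/b <= 1 and b <= B, ordered by h_1."""
    pts = []
    for b in range(1, B + 1):
        for a in range(b + 1):
            q = Fraction(a, b)
            if q.denominator == b:  # keep only fractions already in lowest terms
                pts.append(q)
    return pts

def proportion_in(points, c, d):
    """Fraction of the sequence lying in the open interval (c, d)."""
    return sum(1 for q in points if c < q < d) / len(points)
```

With $B = 300$, the proportion landing in $(1/4, 3/4)$ is within a couple of percent of $1/2$, as uniform equidistribution predicts.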
<h2>Visualizing these rational points</h2>
<p>Let us now turn to visualizing rational points in the two sequences $X_1$ and
$X_2$. Given a rational point $q$, we will visualize the data pair $(q, h(q))$,
where $h$ is a height function. Thus the <em>higher</em> that a point is within
the visualization, the higher the height.</p>
<p>These are images formed from all rationals on $[0, 3]$ with denominator bounded
by $400$ (first image) and $100$ (second image). Each point corresponds to a
point $(q, h_1(q))$.</p>
<figure class="center shadowed">
<img src="/wp-content/uploads/2020/05/X1_points.png"
width="600" height="600"/>
<figcaption class="left" markdown="1">
Rationals on $[0, 3]$ with denominators bounded by $400$.
</figcaption>
</figure>
<figure class="center shadowed">
<img src="/wp-content/uploads/2020/05/X1_points_sparse.png"
width="600" height="600"/>
<figcaption class="left" markdown="1">
Rationals on $[0, 3]$ with denominators bounded by $100$.
</figcaption>
</figure>
<p>These are images formed from all rationals $q$ on $[0, 3]$ with $h_2(q)^2 \leq
70000$ (first image) and $20000$ (second image). Each point corresponds to a
point $(q, h_2(q))$.</p>
<figure class="center shadowed">
<img src="/wp-content/uploads/2020/05/X2_points.png"
width="600" height="600"/>
<figcaption class="left" markdown="1">
Rational points on $[0, 3]$ with $h_2(q)^2 \leq 70000$.
</figcaption>
</figure>
<figure class="center shadowed">
<img src="/wp-content/uploads/2020/05/X2_points_sparse.png"
width="600" height="600"/>
<figcaption class="left" markdown="1">
Rational points on $[0, 3]$ with $h_2(q)^2 \leq 20000$.
</figcaption>
</figure>
<p>Several patterns emerge from the images. At the bottom of each graph, there are
<em>wells</em> in which no points occur. If you examine these wells, you can
see that at the bottom center of each well, there is a single rational point.
These notably occur around points of low height — most notably around $0,
1/2, 1, 3/2, 2, 5/2, 3$. The reason is simple: for a rational $q = a/b$ to be
near $1/2$ (say), we need $1/2 - a/b \sim 1/2b$ to be small, and thus $b$ to be
large. In both $h_1$ and $h_2$, the height grows linearly in the denominator
$b$. On the other hand, there will be many other pairs of points of similarly
bounded height that are much closer. For example, $1/b$ and $1/(b-1)$ are
approximately $1/b^2$ apart, which can be significantly closer.<sup>1</sup>
<span class="aside"><sup>1</sup>My
mathematical sibling Alex Walker first described this heuristic to me.</span></p>
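<p>This heuristic is easy to quantify. The small sketch below (the helper is mine) computes the distance from a target to the nearest other rational of bounded denominator: near $1/2$ the nearest neighbor is about $1/(2B)$ away, while near $0$ consecutive rationals like $1/B$ and $1/(B-1)$ are only about $1/B^2$ apart.</p>

```python
from fractions import Fraction

def nearest_gap(target, B):
    """Smallest |a/b - target| over rationals a/b in [0, 3] with b <= B, a/b != target."""
    return min(abs(Fraction(a, b) - target)
               for b in range(1, B + 1)
               for a in range(3 * b + 1)
               if Fraction(a, b) != target)
```

For $B = 100$ the nearest rational to $1/2$ is $1/198$ away (attained at denominator $99$), while the gap between $1/99$ and $1/100$ is $1/9900$: two orders of magnitude smaller.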
<p>The graphs of $X_1$ visually become uniform, and appear almost to be mildly
textured swaths of grey. We can make this even more pronounced, which we do in
the next image. Graphs of $X_2$ show a clearly denser set of points at the left
(favoring smaller numerators).</p>
<figure class="center shadowed">
<img src="/wp-content/uploads/2020/05/X1_extended.png"
width="600" height="600"/>
<figcaption class="left" markdown="1">
Rational points on $[0, 3]$ with denominators bounded by $700$, shown in $4$
different shades of grey (based on point density).
</figcaption>
</figure>
<p>We now make a set of related images. In the images that follow, we extend each
line from a point $(q, h(q))$ upwards. The effect is that at any designated
height $H$, the horizontal line through $h(q) = H$ will include points whose
heights are bounded above by $H$. To indicate the density of lines, we use
shades of grey. The darker the line, the more rational points.</p>
<p>For $X_1$, we make two images. First we have a <em>full</em> image, consisting
of points with $h_1(q) \leq 400$ and $20$ different shades of grey. Second, we
have a <em>sparser</em> image, consisting of points with $h_1(q) \leq 100$ and
$12$ different shades of grey. These are still over the interval $[0, 3]$.</p>
<figure class="center shadowed">
<img src="/wp-content/uploads/2020/05/X1_lines_full.png"
width="600" height="600"/>
<figcaption class="left" markdown="1">
Rational points on $[0, 3]$ with denominator bounded by $400$, shown in 20 shades of grey.
</figcaption>
</figure>
<figure class="center shadowed">
<img src="/wp-content/uploads/2020/05/X1_lines_sparse.png"
width="600" height="600"/>
<figcaption class="left" markdown="1">
Rational points on $[0, 3]$ with denominator bounded by $100$, shown in 12 shades of grey.
</figcaption>
</figure>
<p>For $X_2$, we first have those points with $h_2(q)^2 \leq 70000$ and $16$ different
shades, and then an image with $h_2(q)^2 \leq 10000$ and $10$ different shades.</p>
<figure class="center shadowed">
<img src="/wp-content/uploads/2020/05/X2_lines_full.png"
width="600" />
<figcaption class="left" markdown="1">
Rational points on $[0, 3]$ with $h_2(q)^2 \leq 70000$, shown in 16 shades of grey
(depending on pixel density).
</figcaption>
</figure>
<figure class="center shadowed">
<img src="/wp-content/uploads/2020/05/X2_lines_sparse.png"
width="600" />
<figcaption class="left" markdown="1">
Rational points on $[0, 3]$ with $h_2(q)^2 \leq 10000$, shown in 10 shades of
grey.
</figcaption>
</figure>
<p>These images really emphasize the <em>wells</em> around rational points of low
height. These wells give the images their texture.</p>https://davidlowryduda.com/pictures-of-equidistribution-the-lineWed, 13 May 2020 03:14:15 +0000Points on $X^2 + Y^2 = 2$ equidistribute with respect to heighthttps://davidlowryduda.com/points-on-x2-y2-2-equidistribute-with-respect-to-heightDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/points-on-x2-y2-2-equidistribute-with-respect-to-heightTue, 05 May 2020 03:14:15 +0000Proposal for new images for modular forms on the LMFDBhttps://davidlowryduda.com/proposal-for-new-images-for-modular-forms-on-the-lmfdbDavid Lowry-Duda<p>I recently gave a <a
href="/notes-behind-a-talk-visualizing-modular-forms/">talk
about different visualizations of modular forms</a>, including many new
visualizations that I have been developing and making. I have continued to
develop these images, and I now have a proposal for new visualizations for
modular forms in the <a href="https://www.lmfdb.org">LMFDB</a>.</p>
<p>To see a current visualization, look at <a
href="https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/1/12/a/a/">this
modular form page.</a> The image from that page (as it is currently) looks like
this.</p>
<p><img class="center" src="/wp-content/uploads/2019/12/lmfdb_existing_delta.png" alt="" width="200" height="200" /></p>
<p>This is a plot on a disk model. To make sense of this plot, I note that the
real axis in the upper-half-plane model is the circumference of the circle, and
the imaginary axis in the upper-half-plane model is the vertical diameter of
the circle. In particular, $z = 0$ is the bottom of the circle, $z = i$ is the
center of the circle, and $z = \infty$ is the top of the circle. The magnitude
is currently displayed — the big blue region is where the magnitude is very
small. In a neighborhood of the blue blob, there are a few bands of color that
are meaningful — but then things change too quickly and the plot becomes
noise.</p>
<p>I propose one of the following alternatives. I maintain the same badge and
model for the space, but I change what is plotted and what colors to use. Also,
I plot them larger so that we can get a good look at them; for the LMFDB they
would probably be produced at the same (small) size.</p>
<h2>Plots with "Contours"</h2>
<p>I have made three plots with contours. They are all morally the same, except
for the underlying colorscheme. The "default" sage colorscheme leads to the
following plot.</p>
<p><img class="center" src="/wp-content/uploads/2019/12/proposal_contour_defaultcolor.png" alt="" width="487" height="487" /></p>
<p>The good thing is that it's visually striking. But I <a
href="https://jakevdp.github.io/blog/2014/10/16/how-bad-is-your-colormap/">recently
learned that this colorscheme is hated</a>, and it's widely thought to be a
poor choice in almost every situation.</p>
<p>A little while ago, matplotlib added two colorschemes designed to fix the
problems with the default colorscheme. (sage's defaults lag behind — the
matplotlib default has since changed). This is one of them, called
<em>twilight</em>.</p>
<p><img class="center" src="/wp-content/uploads/2019/12/proposal_contour_twilight.png" alt="" width="487" height="487" /></p>
<p>And this is the other default, called <em>viridis</em>. I don't actually think
this should be used, since the hues jump from bright yellow to dark blue where
the complex argument wraps from $\pi$ to $-\pi$. This produces the strong
lines, which correspond to those places where the argument of the modular form
is $\pi$.</p>
<p><img class="center" src="/wp-content/uploads/2019/12/semiproposal_contour_viridis.png"
alt="" width="487" height="487" /></p>
<h2>Plots without Contours</h2>
<p>I've also prepared these plots without the contours, and I think they're quite nice as well.</p>
<p>First <em>jet.</em></p>
<p><img class="center" src="/wp-content/uploads/2019/12/proposal_nomag_jet_correct.png" alt="" width="487" height="487" /></p>
<p>Then <em>twilight</em>. At the talk I recently gave, this was the favorite — but I hadn't yet implemented the contour-plots above for non-default colorschemes.</p>
<p><img class="center" src="/wp-content/uploads/2019/12/proposal_nomag_twilight.png" alt="" width="487" height="487" /></p>
<p>Then <em>viridis.</em> (I'm still not serious about this one — but I think it's pretty).</p>
<p><img class="center" src="/wp-content/uploads/2019/12/semiproposal_nomag_viridis.png" alt="" width="487" height="487" /></p>
<h2>Note on other Possibilities</h2>
<p>There are other possibilities, such as perhaps plotting on a portion of the
upper half-plane instead of a disk-model. I describe a few of these
possibilities and give examples in the <a
href="/notes-behind-a-talk-visualizing-modular-forms/">notes
from my last talk</a>. I should note that I can now produce contour-type plots
there as well, though I haven't done that.</p>
<p>For fun, here is the default colorscheme, but rotated. This came about
accidentally (as did so many other plots in this excursion), but I think it
highlights how odd jet is.</p>
<p><img class="center" src="/wp-content/uploads/2019/12/proposal_nomag_jet.png"
alt="" width="487" height="487" /></p>
<h2>Gathering Opinions</h2>
<p>This concludes my proposal. I am collecting opinions. If you are struck by an
idea or an opinion and would like to share it with me, please let me know.</p>https://davidlowryduda.com/proposal-for-new-images-for-modular-forms-on-the-lmfdbFri, 06 Dec 2019 03:14:15 +0000Notes behind a talk: visualizing modular formshttps://davidlowryduda.com/notes-behind-a-talk-visualizing-modular-formsDavid Lowry-Duda<p>Today, I’ll be at Bowdoin College giving a talk on visualizing modular forms. This is a talk about the actual process and choices involved in illustrating a modular form; it’s not about what little lies one might hold in their head in order to form some mental image of a modular form.<sup>1</sup>
<span class="aside"><sup>1</sup>I was asked before how a mathematician is able to visualize some 16 dimensional space when visualizing 4 seems hard already. But the answer, somehow, is to visualize 3 dimensional space and to loudly think to oneself “16”.</span></p>
<figure class="center shadowed">
<img src="/wp-content/uploads/2019/11/my_g_onH-300x224.png"
width="300" />
</figure>
<p>This is a talk heavily inspired by the ICERM semester program on Illustrating Mathematics (currently wrapping up). In particular, I draw on<sup>2</sup>
<span class="aside"><sup>2</sup>Pun absolutely intended</span>
conversations with Frank Farris (about using color to highlight desired features), Elias Wegert (about using logarithmically scaling contours), Ed Harriss (about the choice of colorscheme), and Brendan Hassett (about overall design choices).</p>
<p>There are very many pictures in the talk!</p>
<p><a href="https://davidlowryduda.com/static/Talks/Bowdoin19/visualizing_modular_forms.pdf">Here are the slides for the talk</a>.</p>
<p>I wrote a few different complex-plotting routines for this project. At their core, they are based on sage’s complex_plot. There are two major variants that I use.</p>
<p>The first (currently called “ccomplex_plot”, which is not a good name) overwrites how sage handles lightness in complex_plot in order to produce “contours” at spots where the magnitude is a power of two. These contours are actually sudden jumps in brightness.</p>
<p>The second (currently called “raw_complex_plot”, also not a good name) is even less formal. It vectorizes the computation and produces an object containing the magnitude and argument information for each pixel to be drawn. It then uses numpy and matplotlib to convert these magnitudes and phases into RGB colors according to a matplotlib-compatible colormap.</p>
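<p>Since neither routine is published yet, here is my own minimal pure-numpy
sketch of the core idea of the second variant (the names and the particular
hue/brightness mapping are illustrative, not the actual code): hue encodes the
complex argument, and, as in the first variant, brightness jumps wherever the
magnitude crosses a power of two.</p>

```python
import numpy as np

def hsv_to_rgb(h, s, v):
    """Vectorized HSV -> RGB with all components in [0, 1]."""
    i = np.floor(h * 6.0) % 6
    f = h * 6.0 - np.floor(h * 6.0)
    p, q, t = v * (1 - s), v * (1 - s * f), v * (1 - s * (1 - f))
    conds = [i == k for k in range(6)]
    r = np.select(conds, [v, q, p, p, t, v])
    g = np.select(conds, [t, v, v, q, p, p])
    b = np.select(conds, [p, p, t, v, v, q])
    return np.stack([r, g, b], axis=-1)

def complex_to_rgb(w, contours=True):
    """Color by complex argument; if contours=True, brightness resets at
    each power of two of |w|, giving sudden "contour" jumps."""
    hue = ((np.angle(w) + np.pi) / (2 * np.pi)) % 1.0
    value = np.ones_like(hue)
    if contours:
        frac = np.log2(np.abs(w) + 1e-300) % 1.0  # position inside a 2-power band
        value = 0.6 + 0.4 * frac
    return hsv_to_rgb(hue, 0.9, value)

# Example: a 200x200 image of sin(z) on a square around the origin.
xs = np.linspace(-3, 3, 200)
z = xs[None, :] + 1j * xs[:, None]
img = complex_to_rgb(np.sin(z))
```

<p>The resulting array can be handed directly to matplotlib's
<code>imshow</code>, or to any matplotlib-compatible colormap pipeline.</p>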
<p>I am happy to send either of these pieces of code to anyone who wants to see them, but they are very much written for my own use at the moment. I intend to improve them for general use later, after I’ve experimented further.</p>
<p>In addition, I generated all the images for this talk in a single sagemath jupyter notebook (with the two .spyx cython dependencies I allude to above). <a href="https://davidlowryduda.com/static/Talks/Bowdoin19/complexplots.ipynb">This is also available here</a>. (Note that using a service like nbviewer or nbconvert to view or convert it to html might be a reasonable idea).</p>
<p>As a final note, I’ll add that I mistyped several times in the preparation of the images for this talk. Included below are a few of the interesting-looking mistakes. The first two resulted from incorrectly applied conformal mappings, while the third came from incorrectly applied color correction.</p>
<figure class="center shadowed">
<img src="/wp-content/uploads/2019/11/g_mistake_H.png"
width="600" />
</figure>
<figure class="center shadowed">
<img src="/wp-content/uploads/2019/11/g_mistake_D.png"
width="600" />
</figure>
<figure class="center shadowed">
<img src="/wp-content/uploads/2019/11/f_mistake_twilight_H.png"
width="600" />
</figure>https://davidlowryduda.com/notes-behind-a-talk-visualizing-modular-formsFri, 22 Nov 2019 03:14:15 +0000Making Plots of Modular Formshttps://davidlowryduda.com/making-plots-of-modular-formsDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/making-plots-of-modular-formsTue, 05 Nov 2019 03:14:15 +0000Non-real poles and irregularity of distribution Ihttps://davidlowryduda.com/irregularity-of-distributionDavid Lowry-Duda<p>$\DeclareMathOperator{\SL}{SL}$ $\DeclareMathOperator{\MT}{MT}$After the positive feedback from the Maine-Quebec Number Theory conference, I have taken some time to write (and slightly strengthen) these results.</p>
<p>We study the general theory of Dirichlet series $D(s) = \sum_{n \geq 1} a(n) n^{-s}$ and the associated summatory function of the coefficients, $A(x) = \sum_{n \leq x}' a(n)$ (where the prime over the summation means the last term is to be multiplied by $1/2$ if $x$ is an integer). For convenience, we will suppose that the coefficients $a(n)$ are real, that not all $a(n)$ are zero, that each Dirichlet series converges in some half-plane, and that each Dirichlet series has meromorphic continuation to $\mathbb{C}$. Perron's formula (or more generally, the forward and inverse Mellin transforms) show that $D(s)$ and $A(x)$ are duals and satisfy \begin{equation}\label{eq:basic_duality} \frac{D(s)}{s} = \int_1^\infty \frac{A(x)}{x^{s+1}} dx, \quad A(x) = \frac{1}{2 \pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} \frac{D(s)}{s} x^s ds \end{equation} for an appropriate choice of $\sigma$.</p>
<p>Many results in analytic number theory take the form of showing that $A(x) = \MT(x) + E(x)$ for a "Main Term" $\MT(x)$ and an "Error Term" $E(x)$. Roughly speaking, the terms in the main term $\MT(x)$ correspond to poles from $D(s)$, while $E(x)$ is hard to understand. Upper bounds for the error term give bounds for how much $A(x)$ can deviate from the expected size, and thus describe the regularity in the distribution of the coefficients ${a(n)}$. In this article, we investigate lower bounds for the error term, corresponding to <i>irregularity in the distribution</i> of the coefficients.</p>
<p>To get the best understanding of the error terms, it is often necessary to work with smoothed sums $A_v(x) = \sum_{n \geq 1} a(n) v(n/x)$ for a weight function $v(\cdot)$. In this article, we consider <i>nice</i> weight functions, i.e. weight functions with good behavior and whose Mellin transforms have good behavior. For almost all applications, it suffices to consider weight functions $v(x)$ that are piecewise smooth on the positive real numbers, and which, at jump discontinuities, take the value halfway between the one-sided limits.</p>
<p>For a weight function $v(\cdot)$, denote its Mellin transform by \begin{equation} V(s) = \int_0^\infty v(x)x^{s} \frac{dx}{x}. \end{equation} Then we can study the more general dual family \begin{equation}\label{eq:general_duality} D(s) V(s) = \int_1^\infty \frac{A_v(x)}{x^{s+1}} dx, \quad A_v(x) = \frac{1}{2 \pi i} \int_{\sigma - i \infty}^{\sigma + i \infty} D(s) V(s) x^s ds. \end{equation}</p>
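<p>Two standard weights make this duality concrete. Taking $v$ to be the Perron weight (equal to $1$ on $[0, 1)$, to $\tfrac{1}{2}$ at $x = 1$, and to $0$ beyond) gives \begin{equation} V(s) = \int_0^1 x^{s} \frac{dx}{x} = \frac{1}{s}, \end{equation} and \eqref{eq:general_duality} recovers \eqref{eq:basic_duality}. Taking instead the exponential weight $v(x) = e^{-x}$ gives $V(s) = \Gamma(s)$, and the smoothed sum becomes $A_v(x) = \sum_{n \geq 1} a(n) e^{-n/x}$.</p>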
<p>We prove two results governing the irregularity of distribution of weighted sums. Firstly, we prove that a non-real pole of $D(s)V(s)$ guarantees an oscillatory error term for $A_v(x)$.</p>
<div class="theorem">
<h3>Theorem 1</h3>
Suppose $D(s)V(s)$ has a pole at $s = \sigma_0 + it_0$ with $t_0 \neq 0$ of order $r$. Let $\MT(x)$ be the sum of the residues of $D(s)V(s)x^s$ at all real poles $s = \sigma$ with $\sigma \geq \sigma_0$. Then \begin{equation} \sum_{n \geq 1} a(n) v(\tfrac{n}{x}) - \MT(x) = \Omega_\pm\big( x^{\sigma_0} \log^{r-1} x\big). \end{equation}
</div>
<hr />
<p>Here and below, we use the notation $f(x) = \Omega_+ g(x)$ to mean that there is a constant $k > 0$ such that $\limsup f(x)/\lvert g(x) \rvert > k$ and $f(x) = \Omega_- g(x)$ to mean that $\liminf f(x)/\lvert g(x) \rvert < -k$. When both are true, we write $f(x) = \Omega_\pm g(x)$. This means that $f(x)$ is at least as positive as $\lvert g(x) \rvert$ and at least as negative as $-\lvert g(x) \rvert$ infinitely often.</p>
<div class="theorem">
<h3>Theorem 2</h3>
Suppose $D(s)V(s)$ has at least one non-real pole, and that the supremum of the real parts of the non-real poles of $D(s)V(s)$ is $\sigma_0$. Let $\MT(x)$ be the sum of the residues of $D(s)V(s)x^s$ at all real poles $s = \sigma$ with $\sigma \geq \sigma_0$. Then for any $\epsilon > 0$, \begin{equation} \sum_{n \geq 1} a(n) v(\tfrac{n}{x}) - \MT(x) = \Omega_\pm( x^{\sigma_0 - \epsilon} ). \end{equation}
</div>
<hr />
<p>The idea at the core of these theorems is old, and was first noticed during the investigation of the error term in the prime number theorem. To prove them, we generalize proofs given in Chapter 5 of Ingham's Distribution of Prime Numbers (originally published in 1932, but recently republished). There, Ingham proves that $\psi(x) - x = \Omega_\pm(x^{\Theta - \epsilon})$ and $\psi(x) - x = \Omega_\pm(x^{1/2})$, where $\psi(x) = \sum_{p^n \leq x} \log p$ is Chebyshev's second function and $\Theta \geq \frac{1}{2}$ is the supremum of the real parts of the non-trivial zeros of $\zeta(s)$. (Peter Humphries let me know that chapter 15 of Montgomery and Vaughan's text also has these. This text might be more readily available and perhaps in more modern notation. In fact, I have a copy — but I suppose I either never got to chapter 15 or didn't have it nicely digested when I needed it).</p>
<h2 id="motivation-and-application">Motivation and Application</h2>
<p>Infinite lines of poorly understood poles appear regularly while studying shifted convolution series of the shape \begin{equation} D(s) = \sum_{n \geq 1} \frac{a(n) a(n \pm h)}{n^s} \end{equation} for a fixed $h$. When $a(n)$ denotes the (non-normalized) coefficients of a weight $k$ cuspidal Hecke eigenform on a congruence subgroup of $\SL(2, \mathbb{Z})$, for instance, meromorphic continuation can be gotten for the shifted convolution series $D(s)$ through spectral expansion in terms of Maass forms and Eisenstein series, and the Maass forms contribute infinite lines of poles.</p>
<p>Explicit asymptotics take the form \begin{equation} \sum_{n \geq 1} a(n)a(n-h) e^{-n/X} = \sum_j C_j X^{\frac{1}{2} + \sigma_j + it_j} \log^m X \end{equation} where neither the residues nor the imaginary parts $it_j$ are well-understood. Might it be possible for these infinitely many rapidly oscillating terms to experience massive cancellation for all $X$? The theorems above prove that this is not possible.</p>
<p>In this case, applying Theorem 1 with the Perron-weight \begin{equation} v(x) = \begin{cases} 1 & x < 1 \\ \frac{1}{2} & x = 1 \\ 0 & x > 1 \end{cases} \end{equation} shows that \begin{equation} \sideset{}{'}\sum_{n \leq X} \frac{a(n)a(n-h)}{n^{k-1}} = \Omega_\pm(\sqrt X). \end{equation} Similarly, Theorem 2 shows that \begin{equation} \sideset{}{'}\sum_{n \leq X} \frac{a(n)a(n-h)}{n^{k-1}} = \Omega_\pm(X^{\frac{1}{2} + \Theta - \epsilon}), \end{equation} where $\Theta < 7/64$ is the supremum of the deviations to Selberg's Eigenvalue Conjecture (sometimes called the non-arithmetic Ramanujan Conjecture).</p>
<p>More generally, these shifted convolution series appear when studying the sizes of sums of coefficients of modular forms. A few years ago, Hulse, Kuan, Walker, and I investigated the Dirichlet series whose coefficients are $\lvert A(n) \rvert^2$ (where $A(n)$ is the sum of the first $n$ coefficients of a modular form), which was shown to have meromorphic continuation to $\mathbb{C}$. The behavior of the infinite lines of poles in the discrete spectrum played an important role in the analysis, but we did not yet understand how they affected the resulting asymptotics. I plan on revisiting these problems, and others, with these results in mind.</p>https://davidlowryduda.com/irregularity-of-distributionFri, 18 Oct 2019 03:14:15 +0000Notes from a talk at the Maine-Quebec Number Theory Conferencehttps://davidlowryduda.com/notes-from-a-talk-at-the-maine-quebec-number-theory-conferenceDavid Lowry-Duda<p>Today I will be giving a talk at the Maine-Quebec Number Theory conference. Each year that I attend this conference, I marvel at how friendly and inviting an environment it is — I highly recommend checking the conference out (and perhaps modelling other conferences after it).</p>
<p>The theme of my talk is about spectral poles and their contribution towards asymptotics (especially of error terms). I describe a few problems in which spectral poles appear in asymptotics. Unlike the nice simple cases where a single pole (or possibly a few poles) appear, in these cases infinite lines of poles appear.</p>
<p>For a bit over a year, I have encountered these and not known what to make of them. Could you have the pathological case that residues of these poles generically cancel? Could they combine to be larger than expected? How do we make sense of them?</p>
<p>The resolution came only very recently.<sup>1</sup>
<span class="aside"><sup>1</sup>In fact, I had originally intended to give this talk as a plea for advice and suggestions in considering these questions. But then I happened to read work from Ingham in the 1920s, carrying an idea that was new to me. The talk concludes with this idea. It's not groundbreaking — but it's new to me.</span></p>
<p>I will later write a dedicated note to this new idea (involving Dirichlet integrals and Landau's theorem in this context), but for now — here are the <a href="/wp-content/uploads/2019/10/lines_of_poles.pdf">slides for my talk</a>.</p>https://davidlowryduda.com/notes-from-a-talk-at-the-maine-quebec-number-theory-conferenceSat, 05 Oct 2019 03:14:15 +0000The Insidiousness of Mathematicshttps://davidlowryduda.com/the-insidiousness-of-mathematicsDavid Lowry-Duda<blockquote>
<strong>insidious</strong> (adjective)
1.
a. Having a gradual and cumulative effect
b. of a disease : developing so gradually as to be well established before becoming apparent
2.
a. awaiting a chance to entrap
b. harmful but enticing
— Merriam-Webster Dictionary
</blockquote>
<p>In early topics in mathematics, one can often approach a topic from a combination of intuition and first principles in order to deduce the desired results. In later topics, it becomes necessary to repeatedly sharpen intuition while taking advantage of the insights of the many mathematicians who came before — one sees much further by standing on the shoulders of giants. Somewhere in the middle, it becomes necessary to accept the idea that there are topics and ideas that are not at all obvious. They might appear to have been plucked out of thin air. And this is a conceptual boundary.</p>
<p>In my experience, calculus is often the class where students primarily confront the idea that it is necessary to take advantage of the good ideas of the past. It sneaks up. The main ideas of calculus are intuitive — local rates of change can be approximated by slopes of secant lines and areas under curves can be approximated by sums of areas of boxes. That these are deeply connected is surprising.</p>
<p>To many students, Taylor's Theorem is one of the first examples of a commonly-used result whose proof has some aspect which appears to have been plucked out of thin air.<sup>1</sup>
<span class="aside"><sup>1</sup>It may be that the handwavy proofs involved with proving Rolle's Theorem or the Mean Value Theorem appear equally mysterious. But many calculus courses don't really prove these or don't show that they're useful (or most likely, they don't do either).</span>
Learning Taylor's Theorem in high school was one of the things that inspired me to begin to revisit calculus with an eye towards <em>why</em> each result was true.</p>
<p>I also began to try to prove the fundamental theorems of single and multivariable calculus with as little machinery as possible. High school me thought that topology was overcomplicated and unnecessary for something so intuitive as calculus.<sup>2</sup>
<span class="aside"><sup>2</sup>It wasn't until much later, after I gained a bit of mathematical maturity, that I learned that topological ideas <em>do</em> matter.</span></p>
<p>This train of thought led to my previous note, on another proof of Taylor's Theorem. That note is a simplified version of one of the first proofs I devised on my own.</p>
<p>Much less obviously, this train of thought also led to the paper on the mean value theorem written with Miles. Originally I had thought that "nice" functions should clearly have continuous choices for mean value abscissae, and I thought that this could be used to provide alternate proofs for some fundamental calculus theorems. It turns out that there are very nice functions that don't have continuous choices for mean value abscissae, <em>and</em> that actually using that result to prove classical calculus results is often more technical than the typical proofs.</p>
<p>The flow of ideas is turbulent, highly nonlinear.</p>
<p>I used to think that developing extra rigor early on in my mathematical education was the right way to get to deeper ideas more quickly. There is a kernel of truth to this, as transitioning from pre-rigorous mathematics to rigorous mathematics is very important. But it is also necessary to transition to post-rigorous mathematics (and more generally, to choose one's battles) in order to organize and communicate one's thoughts.</p>
<p>In hindsight, I think now that I was focused on the wrong aspect. As a high school student, I had hoped to discover the obvious, clear, intuitive proofs of every result. Of course it is great to find these proofs when they exist, but it would have been better to grasp earlier that sometimes these proofs don't exist. And rarely does actual research proceed so cleanly — it's messy and uncertain and full of backtracking and random exploration.</p>https://davidlowryduda.com/the-insidiousness-of-mathematicsFri, 05 Jul 2019 03:14:15 +0000Another proof of Taylor's Theoremhttps://davidlowryduda.com/another-proof-of-taylors-theoremDavid Lowry-Duda<p>In this note, we produce a proof of Taylor's Theorem. As in many proofs of Taylor's Theorem, we begin with a curious start and then follow our noses forward.</p>
<p>Is this a new proof? I think so. But I wouldn't bet a lot of money on it. It's certainly new to me.</p>
<p>Is this a groundbreaking proof? No, not at all. But it's cute, and I like it.<sup>1</sup>
<span class="aside"><sup>1</sup>Though I must admit that it is not my favorite proof of Taylor's Theorem.</span></p>
<p>We begin with the following simple observation. Suppose that $f$ is two times continuously differentiable. Then for any $t \neq 0$, we see that \begin{equation} f'(t) - f'(0) = \frac{f'(t) - f'(0)}{t} t. \end{equation} Integrating each side from $0$ to $x$, we find that \begin{equation} f(x) - f(0) - f'(0) x = \int_0^x \frac{f'(t) - f'(0)}{t} t dt. \end{equation} To interpret the integral on the right in a different way, we will use the mean value theorem for integrals.</p>
<div class="theorem">
<blockquote><strong>Mean Value Theorem for Integrals</strong>
Suppose that $g$ and $h$ are continuous functions, and that $h$ doesn't change sign in $[0, x]$. Then there is a $c \in [0, x]$ such that \begin{equation} \int_0^x g(t) h(t) dt = g(c) \int_0^x h(t) dt. \end{equation}</blockquote>
</div>
<div class="proof">
Suppose without loss of generality that $h(t)$ is nonnegative. Since $g$ is continuous on $[0, x]$, it attains its minimum $m$ and maximum $M$ on this interval. Thus \begin{equation} m \int_0^x h(t) dt \leq \int_0^x g(t)h(t)dt \leq M \int_0^x h(t) dt. \end{equation} Let $I = \int_0^x h(t) dt$. If $I = 0$ (or equivalently, if $h(t) \equiv 0$), then the theorem is trivially true, so suppose instead that $I \neq 0$. Then \begin{equation} m \leq \frac{1}{I} \int_0^x g(t) h(t) dt \leq M. \end{equation} By the intermediate value theorem, $g(t)$ attains every value between $m$ and $M$, and thus there exists some $c$ such that \begin{equation} g(c) = \frac{1}{I} \int_0^x g(t) h(t) dt. \end{equation} Rearranging proves the theorem.
</div>
<p>For this application, let $g(t) = (f'(t) - f'(0))/t$ for $t \neq 0$, and $g(0) = f'{}'(0)$. The continuity of $g$ at $0$ is exactly the condition that $f'{}'(0)$ exists. We also let $h(t) = t$.</p>
<p>For $x > 0$, it follows from the mean value theorem for integrals that there exists a $c \in [0, x]$ such that \begin{equation} \int_0^x \frac{f'(t) - f'(0)}{t} t dt = \frac{f'(c) - f'(0)}{c} \int_0^x t dt = \frac{f'(c) - f'(0)}{c} \frac{x^2}{2}. \end{equation} (Very similar reasoning applies for $x < 0$). Finally, by the mean value theorem (applied to $f'$), there exists a point $\xi \in (0, c)$ such that \begin{equation} f'{}'(\xi) = \frac{f'(c) - f'(0)}{c}. \end{equation} Putting this together, we have proved that there is a $\xi \in (0, x)$ such that \begin{equation} f(x) - f(0) - f'(0) x = f'{}'(\xi) \frac{x^2}{2}, \end{equation} which is one version of Taylor's Theorem with a linear approximating polynomial.</p>
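<p>As a quick numerical sanity check (mine, not part of the argument), one can
verify this conclusion for $f(x) = e^x$ on $[0, 1]$: solving $f'{}'(\xi)
x^2/2 = f(x) - f(0) - f'(0)x$ for $\xi$ produces a point inside $(0, 1)$, as
the theorem promises.</p>

```python
import math

# Check the conclusion for f(x) = exp(x) at x = 1:
# f(x) - f(0) - f'(0) x = f''(xi) x^2 / 2 for some xi in (0, x).
x = 1.0
lhs = math.exp(x) - 1.0 - x          # f(x) - f(0) - f'(0) x = e - 2
xi = math.log(2.0 * lhs / x**2)      # solve exp(xi) * x^2 / 2 = lhs
assert 0.0 < xi < x                  # xi is about 0.362
assert abs(math.exp(xi) * x**2 / 2.0 - lhs) < 1e-12
```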
<p>This approach generalizes. Suppose $f$ is a $(k+1)$ times continuously differentiable function, and begin with the trivial observation that \begin{equation} f^{(k)}(t) - f^{(k)}(0) = \frac{f^{(k)}(t) - f^{(k)}(0)}{t} t. \end{equation} Iteratively integrate $k$ times: first from $0$ to $t_1$, then from $0$ to $t_2$, and so on, with the $k$th interval being from $0$ to $t_k = x$.</p>
<p>Then the left hand side becomes \begin{equation} f(x) - \sum_{n = 0}^k f^{(n)}(0)\frac{x^n}{n!}, \end{equation} the difference between $f$ and its degree $k$ Taylor polynomial. The right hand side is
\begin{equation}\label{eq:only}\underbrace{\int _0^{t_k = x} \cdots \int _0^{t _1}} _{k \text{ times}} \frac{f^{(k)}(t) - f^{(k)}(0)}{t} t \, dt \, dt _1 \cdots dt _{k-1}.\end{equation}</p>
<p>To handle this, we note the following variant of the mean value theorem for integrals.</p>
<div class="theorem">
<blockquote><strong>Mean value theorem for iterated integrals</strong>
Suppose that $g$ and $h$ are continuous functions, and that $h$ doesn't change sign in $[0, x]$. Then there is a $c \in [0, x]$ such that \begin{equation} \underbrace{\int_0^{t _k=x} \cdots \int _0^{t _1}} _{k \; \text{times}} g(t) h(t) dt =g(c) \underbrace{\int _0^{t _k=x} \cdots \int _0^{t _1}} _{k \; \text{times}} h(t) dt. \end{equation}</blockquote>
</div>
<p>In fact, this can be proved in almost exactly the same way as in the single-integral version, so we do not repeat the proof.</p>
<p>With this theorem, there is a $c \in [0, x]$ such that \eqref{eq:only} can be written as \begin{equation} \frac{f^{(k)}(c) - f^{(k)}(0)}{c} \underbrace{\int _0^{t _k = x} \cdots \int _0^{t _1}} _{k \; \text{times}} t \, dt \, dt _1 \cdots dt _{k-1}. \end{equation} By the mean value theorem, the factor in front of the integrals can be written as $f^{(k+1)}(\xi)$ for some $\xi \in (0, x)$. The integrals can be directly evaluated to be $x^{k+1}/(k+1)!$.</p>
<p>Thus overall, we find that \begin{equation} f(x) = \sum_{n = 0}^k f^{(n)}(0) \frac{x^n}{n!} + f^{(k+1)}(\xi) \frac{x^{k+1}}{(k+1)!} \end{equation} for some $\xi \in (0, x)$. This proves Taylor's Theorem (with Lagrange's error bound).</p>https://davidlowryduda.com/another-proof-of-taylors-theoremFri, 28 Jun 2019 03:14:15 +0000Email configuration for mutt on a webfaction serverhttps://davidlowryduda.com/email-configuration-for-mutt-on-a-webfaction-serverDavid Lowry-Duda<p>I have email set up for my sites through webfaction. I have some number of mailboxes and some number of users, and a few users share the same mailboxes.</p>
<p>For a long time I used either a direct webmail or forwarded my site email to a different account, but I'm moving towards more email self-reliance.</p>
<p>A few minutes of searching didn't tell me how to set up mutt on webfaction. Here is a minimal configuration for what I did.</p>
<p>I will assume that we are configuring email for <code>user@mysite.com</code> with mailbox <code>MAILBOX</code>, and where the password for that mailbox is <code>MAILBOXPASSWORD</code>. I will also assume that the user, mailbox, and password have already been set up. The missing step is to connect it to mutt.</p>
<p>My <code>.muttrc</code> looks like</p>
<div class="codehilite"><pre><span></span><code><span class="nb">set</span> <span class="nv">realname</span> <span class="o">=</span> <span class="s2">"FIRST LAST"</span>
<span class="nb">set</span> <span class="nv">from</span> <span class="o">=</span> <span class="s2">"user@mysite.com"</span>
<span class="nb">set</span> <span class="nv">use_from</span> <span class="o">=</span> yes
<span class="nb">set</span> <span class="nv">edit_headers</span> <span class="o">=</span> yes
<span class="nb">set</span> <span class="nv">imap_user</span> <span class="o">=</span> <span class="s1">'MAILBOX'</span>
<span class="nb">set</span> <span class="nv">imap_pass</span> <span class="o">=</span> <span class="s1">'MAILBOXPASSWORD'</span>
<span class="nb">set</span> <span class="nv">folder</span> <span class="o">=</span> <span class="s2">"imaps://mail.webfaction.com:993"</span>
<span class="nb">set</span> <span class="nv">spoolfile</span> <span class="o">=</span> <span class="s2">"+INBOX"</span>
<span class="nb">set</span> <span class="nv">record</span> <span class="o">=</span> <span class="s2">"+sent"</span>
<span class="nb">set</span> <span class="nv">postponed</span> <span class="o">=</span> <span class="s2">"+postponed"</span>
<span class="nb">set</span> <span class="nv">smtp_url</span> <span class="o">=</span> <span class="s2">"smtp://MAILBOX@smtp.webfaction.com:587/"</span>
<span class="nb">set</span> <span class="nv">smtp_pass</span> <span class="o">=</span> <span class="s2">"MAILBOXPASSWORD"</span>
<span class="c1"># optional caching and ensure security</span>
<span class="nb">set</span> <span class="nv">header_cache</span> <span class="o">=</span> <span class="s2">"~/.mutt/cache/headers"</span>
<span class="nb">set</span> <span class="nv">message_cachedir</span> <span class="o">=</span> <span class="s2">"~/.mutt/cache/bodies"</span>
<span class="nb">set</span> <span class="nv">certificate_file</span> <span class="o">=</span> <span class="s2">"~/.mutt/certificates"</span>
<span class="nb">set</span> <span class="nv">ssl_starttls</span><span class="o">=</span>yes
<span class="nb">set</span> <span class="nv">ssl_force_tls</span><span class="o">=</span>yes
</code></pre></div>
<p>It's not particularly complicated, but it wasn't obvious to me at first either.</p>https://davidlowryduda.com/email-configuration-for-mutt-on-a-webfaction-serverWed, 12 Jun 2019 03:14:15 +0000Choosing functions and generating figures for "When are there continuous choices for the mean value abscissa?"https://davidlowryduda.com/choosing-functions-for-mvt-abscissaDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/choosing-functions-for-mvt-abscissaWed, 05 Jun 2019 03:14:15 +0000Paper: When are there continuous choices for the Mean Value Abscissa?https://davidlowryduda.com/paper-continuous-choices-mvtDavid Lowry-Duda<h1>When are there continuous choices for the Mean Value Abscissa?</h1>
<p>Miles Wheeler and I have recently uploaded a paper to the arXiv called "When
are there continuous choices for the mean value abscissa?", which we have
submitted to an expository journal. The underlying question is simple but
nontrivial.<span class="aside"> <strong>Later Update</strong>: this paper was published in the
<em>American Mathematical Monthly</em> in 2021, and <a href="/halmos-ford-award">won the 2022 Halmos-Ford
award</a>.</span></p>
<p>The mean value theorem of calculus states that, given a differentiable function $f$ on an interval $[a, b]$, then there exists a $c \in (a, b)$ such that
$$ \frac{f(b) - f(a)}{b - a} = f'(c).$$
We call $c$ the <em>mean value abscissa</em>.
Our question concerns potential behavior of this abscissa when we fix the left endpoint $a$ of the interval and vary $b$. For each $b$, there is at least one abscissa $c_b$ such that the mean value theorem holds with that abscissa. But generically there may be more than one choice of abscissa for each interval. When can we choose $c_b$ as a continuous function of $b$? That is, when can we write $c = c(b)$ such that
$$ \frac{f(b) - f(a)}{b - a} = f'(c(b))$$
for all $b$ in some interval?
We think of this as a continuous choice for the mean value abscissa.</p>
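<p>To make this concrete (the example here is mine, not one from the paper), consider $f(x) = x^3$ with $a = 0$. The mean value equation becomes $b^2 = 3c^2$, so $c(b) = b/\sqrt{3}$ is a continuous choice of abscissa. A short sympy computation confirms this:</p>
<p><pre><code class="python">
import sympy as sp

x, b, c = sp.symbols("x b c", positive=True)
f = x**3
a = 0

# the mean value equation: (f(b) - f(a)) / (b - a) = f'(c)
mv_eq = sp.Eq((f.subs(x, b) - f.subs(x, a)) / (b - a), sp.diff(f, x).subs(x, c))

# with b and c restricted to be positive, there is a unique solution
solutions = sp.solve(mv_eq, c)
print(solutions)  # the single positive solution, equal to b/sqrt(3)
</code></pre></p>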
<p>This is a great question. It's widely understandable — even to students with only one semester of calculus. Further it encourages a proper understanding of what a <em>function</em> is, as thinking of $c$ as potentially a function of $b$ is atypical and interesting.</p>
<p>But I also like this question because the answer is not as simple as you might think, and there are a few nice ideas that get to the answer.</p>
<p>Should you find yourself reading this without knowing the answer, I encourage you to consider it right now. Should continuous choices of abscissas exist? What if the function is really well-behaved? What if it's smooth? Or analytic?</p>
<p>Let's focus on the smooth question. Suppose that $f$ is smooth — that it is infinitely differentiable. These are a distinguished class of functions. But it turns out that being smooth is not sufficient: here is a counterexample.</p>
<p><a href="/wp-content/uploads/2019/06/figure5.png"><img class="aligncenter size-full wp-image-2760" src="/wp-content/uploads/2019/06/figure5.png" alt="" width="489" height="318" /></a></p>
<p>In this figure, there are points $b$ arbitrarily near $b_0$ such that the
secant line from $a_0$ to $b$ has positive slope, and points arbitrarily near
$b_0$ such that the secant lines have negative slope. There are infinitely many
mean value abscissae with $f'(c_0) = 0$, but each of them is either far from
any point $c$ where $f'(c) > 0$ or far from any point $c$ where $f'(c) < 0$.
And thus there is no continuous choice.</p>
<p>From a theorem-oriented point of view, our main theorem is that if $f$ is
analytic, then there is <em>always</em> a locally continuous choice. That is,
for every interval $[a_0, b_0]$, there exists a mean value abscissa $c_0$ and a
continuous function $c(b)$, defined on some interval $B$ containing $b_0$, with
$c(b_0) = c_0$ and with each $c(b)$ a mean value abscissa for $[a_0, b]$.</p>
<p>But the purpose of this article isn't simply to prove this theorem. The
purpose is to exposit how the ideas used to study this problem and to prove
these results rest on just a couple of central ideas covered in introductory
single and multivariable calculus.
All of this paper is
completely accessible to a student who has studied only single variable
calculus (and who is willing to believe that partial derivatives exist and are
reasonable objects).</p>
<p>We prove and use simple-but-nontrivial versions of the contraction
mapping theorem, the implicit function theorem, and Morse's lemma.</p>
<p>The implicit
function theorem is enough to say that any abscissa $c_0$ such that $f''(c_0)
\neq 0$ has a unique continuous extension.
Thus immediately for "most"
intervals on "most" reasonable functions, we answer in the affirmative.</p>
<p>Morse's lemma allows us to say a bit more about the case when $f''(c_0) = 0$
but $f'''(c_0) \neq 0$. In this case there are either multiple continuous
extensions or none. A few small ingredients and the idea behind Morse's lemma,
combined with the implicit function theorem again, are enough to prove the main
result.</p>
<h2>Student projects</h2>
<p>A calculus student looking for a project
to dive into and sharpen their calculus skills could find ideas here to sink
their teeth into. Beginning by understanding this paper is a great start. A
good motivating question would be to carry on one additional step, and to study
explicitly the behavior of a function near a point where $f''(c_0) =
f'''(c_0) = 0$, but $f^{(4)}(c_0) \neq 0$.</p>
<p>A slightly more open question
that we lightly touch on (but leave largely implicit) is the inverse question:
when can one find a mean value abscissa $c$ such that the right endpoint $b$
can be written as a continuous function $b(c)$ for some neighborhood $C$
containing the initial point $c_0$? Much of the analysis is the same, but
figuring it out would require some attention.</p>
<p>A much deeper question is to
consider the abscissa as a function of both the left endpoint $a$ and the right
endpoint $b$. The guiding question here could be to decide when one can write
the abscissa as a continuous function $c(a, b)$ in a neighborhood of $(a_0,
b_0)$.</p>
<p>I would be interested to see a graphical description of the possible
shapes of these functions — I'm not quite sure what they might look like.</p>
<p>There is also a nice computational problem. In the paper, we include several
plots of solution curves in $(b, c)$ space. But we did this with a meshed
implicit function theorem solver. A computationally inclined student could
devise an explicit way of constructing solutions.</p>
<p>On the one hand, this is
guaranteed to work since one can apply contraction mappings explicitly to make
the resulting function from the implicit function theorem explicit. But on the
other hand, many (most?) applications of the implicit function theorem are in
more complicated high dimensional spaces, whereas the situation in this paper
is the smallest nontrivial example.</p>
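<p>As a sketch of what such an explicit construction might look like (this is my illustration, not the meshed solver we actually used), one can march the right endpoint $b$ forward in small steps and correct the abscissa $c$ at each step by Newton's method applied to the mean value equation. Here $f(x) = x^3$ with $a = 0$, where the exact answer $c(b) = b/\sqrt{3}$ is known, so we can check the result:</p>
<p><pre><code class="python">
import math

def f(x):
    return x**3

def fprime(x):
    return 3 * x**2

def fsecond(x):
    return 6 * x

a = 0.0

def F(b, c):
    # F(b, c) = 0 encodes the mean value equation on the interval [a, b]
    return fprime(c) * (b - a) - (f(b) - f(a))

def continue_abscissa(b0, c0, b1, steps=100, newton_iters=5):
    """March b from b0 to b1, correcting c by Newton's method at each step."""
    b, c = b0, c0
    db = (b1 - b0) / steps
    for _ in range(steps):
        b += db
        for _ in range(newton_iters):
            c -= F(b, c) / (fsecond(c) * (b - a))  # dF/dc = f''(c)(b - a)
    return c

c_end = continue_abscissa(1.0, 1 / math.sqrt(3), 2.0)
print(c_end)  # approximately 2/sqrt(3) = 1.1547...
</code></pre></p>
<p>Each Newton step divides by $f''(c)(b - a)$, so this marching procedure works precisely on stretches where the implicit function theorem applies, i.e. where $f''(c) \neq 0$.</p>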
<h2>Producing the graphs</h2>
<p>We made 13 graphs
in 5 figures for this article. These pictures were created using <a
href="https://matplotlib.org/">matplotlib</a>. The data was generated using
numpy, scipy, and sympy, and the figures themselves were assembled
interactively within a jupyter notebook. The notebook is available <a
href="https://github.com/davidlowryduda/notebooks/blob/master/Papers/ContinuousChoicesOfMeanValueAbscissa.ipynb">here</a>
(along with other relatively raw jupyter notebooks). The most complicated graph
is this one.</p>
<p><a href="/wp-content/uploads/2019/06/figure4-dark.png"><img class="aligncenter size-full wp-image-2767" src="/wp-content/uploads/2019/06/figure4-dark.png" alt="" width="821" height="589" /></a></p>
<p>This figure has graphs of three functions along the top. In each graph, the interval $[0, 3]$ is considered in the mean value theorem, and the point $c_0 = 1$ is a mean value abscissa. In each, we also have $f''(c_0) = 0$, and the point is that the behavior of $f''(b_0)$ has a large impact on the nature of the implicit functions.</p>
<p>The three graphs along the bottom are in $(b, c)$ space and present all mean value abscissa for each $b$. This is not a function, but the local structure of the graphs are interesting and visually distinct.</p>
<p>The process of making these examples and making these figures is interesting in itself. We did not make these figures explicitly, but instead chose certain points and certain values of derivatives at those points, and used Hermite interpolation to find polynomials satisfying those constraints.<sup>1</sup>
<span class="aside"><sup>1</sup>Actually, since our polynomials are not of large degree, we wrote a sort of naive Hermite interpolator in sympy and had sympy bruteforce solve for the polynomial. Arguably it would be both faster and easier to encode the problem as a linear system of equations and use a fast matrix solver... but we didn't do that.</span></p>
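<p>A naive interpolator of the kind described in the footnote might look like the following sketch (mine, with an illustrative example rather than a polynomial from the paper): write down a polynomial with unknown coefficients, impose the point and derivative conditions, and let sympy solve the resulting linear system by brute force.</p>
<p><pre><code class="python">
import sympy as sp

x = sp.Symbol("x")

def hermite_poly(conditions):
    """Brute-force Hermite interpolation: find a polynomial satisfying each
    (point, derivative_order, value) condition by solving for coefficients."""
    n = len(conditions)
    coeffs = sp.symbols(f"a0:{n}")
    p = sum(coef * x**i for i, coef in enumerate(coeffs))
    eqs = [sp.Eq(sp.diff(p, x, order).subs(x, pt), val)
           for (pt, order, val) in conditions]
    return sp.expand(p.subs(sp.solve(eqs, coeffs)))

# example conditions: p(0) = 0, p(1) = 1, p'(0) = 0, p'(1) = 0
p = hermite_poly([(0, 0, 0), (1, 0, 1), (0, 1, 0), (1, 1, 0)])
print(p)  # equals 3*x**2 - 2*x**3, the classic "smoothstep" cubic
</code></pre></p>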
<p>In the future I plan on writing a note on the creation of these
figures.<sup>2</sup>
<span class="aside"><sup>2</sup>This is now done! See <a href="https://davidlowryduda.com/choosing-functions-for-mvt-abscissa/">Choosing functions and generating
figures...</a>
for more.</span></p>https://davidlowryduda.com/paper-continuous-choices-mvtTue, 04 Jun 2019 03:14:15 +0000How do we decide how many representatives there are for each state?https://davidlowryduda.com/how-do-we-decide-how-many-representatives-there-are-for-each-stateDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/how-do-we-decide-how-many-representatives-there-are-for-each-stateWed, 03 Apr 2019 03:14:15 +0000African clawed froghttps://davidlowryduda.com/african-clawed-frogDavid Lowry-Duda<p>In the early 1930s, Hillel Shapiro and Harry Zwarenstein, two South African researchers, discovered that injecting a pregnant woman's urine into an African clawed frog (Xenopus laevis) caused the frog to ovulate within the next 18 hours. This became a common (and apparently reliable) pregnancy test until more modern pregnancy tests started to become available in the 1960s.</p>
<p>Behold the marvels of science! (Unless you're a frog).</p>
<p>When I first heard this, I was both astounded and... astounded. How would you discover this? How many things were injected into how many animals before someone realized this would happen?</p>
<h3>Sources</h3>
<ul>
<li>https://en.wikipedia.org/wiki/African_clawed_frog</li>
<li>Shapiro, Hillel; Zwarenstein, Harry (March 1935). "A test for the early diagnosis of pregnancy". South African Medical Journal. 9: 202.</li>
<li>Shapiro, H. A.; Zwarenstein, H. (1934-05-19). "A Rapid Test for Pregnancy on Xenopus lævis". Nature. 133 (3368): 762.</li>
</ul>
<h2>Before frogs, there were mice</h2>
<p>In 1928, the early endocrinologist Bernhard Zondek and the biologist Selmar Aschheim were studying hormones and human biology. As far as I can tell, they hypothesized that hormones associated with pregnancy might still be present in pregnant women's urine. They decided to see if other animals would react to the presence of this hormone, so they collected the urine of pregnant women in order to... test their hypothesis.<sup>1</sup>
<span class="aside"><sup>1</sup>This is one of the cases where I really wish negative results were published. Do you think that they tried first with the blood of pregnant women? Really, what fluids did they apply to what animals? Science!</span>
It turns out that they were right. The hormone human chorionic gonadotropin (hCG) is produced by the placenta shortly after a woman becomes pregnant, and this hormone is present in the urine of pregnant women. But as far as I can tell, hCG itself wasn't identified until the 50s — so there was still some guesswork going on. Nonetheless, detecting hCG is the basis of many home pregnancy tests today. Zondek and Aschheim developed a test (creatively referred to as the Aschheim-Zondek test<sup>2</sup>
<span class="aside"><sup>2</sup>in a rare deviation from Stigler's Law of Eponymy</span>
) that worked like this:
<ol>
<li>Take a young female mouse between 3 and 5 weeks old. Actually, take about 5 mice, as one should expect that a few of the mice won't survive long enough for the test to be complete.</li>
<li>Inject urine into the bloodstream of each mouse three times a day for three days.</li>
<li>Two days after the final injection, kill any surviving mice and dissect them.<sup>3</sup>
<span class="aside"><sup>3</sup>Science?</span>
</li>
<li>If the ovaries are enlarged (i.e. 2-3 times normal size) and show red dots, then the urine comes from a pregnant woman. If the ovaries are merely enlarged, but there are no red dots, then the woman isn't pregnant.<sup>4</sup>
<span class="aside"><sup>4</sup>In fact, the ovaries apparently always become enlarged. This is due to different hormones present in the urine.</span>
</li>
</ol>
<p>In a trial, this test was performed on 2000 different women and had a 98.9 percent successful identification rate.</p>
<p>From this perspective, it's not as surprising that young biologists and doctors sought to inject pregnant women's urine into various animals and see what happens. In many ways, frogs were superior to mice, as one doesn't need to kill the frog to determine if the woman is pregnant.</p>
<h3>Sources</h3>
<ul>
<li>Ettinger, G. H., G. L. M. Smith, and E. W. McHenry. “The Diagnosis of Pregnancy with the Aschheim-Zondek Test.” Canadian Medical Association Journal 24 (1931): 491–2.</li>
<li>Evans, Herbert, and Miriam Simpson. “Aschheim-Zondek Test for Pregnancy–Its Present Status.” California and Western Medicine 32 (1930): 145.</li>
</ul>
<h2>And rabbits too</h2>
<p>Maurice Friedman, at the University of Pennsylvania, discovered that one could use rabbits instead of mice. (Aside from the animal, it's essentially the same test.)</p>
<p>Apparently this became a very common pregnancy test in the United States. A common misconception arose, in which it was thought that the rabbit's death indicated pregnancy. People might say that "the rabbit died" to mean that they were pregnant.
But in fact, just like the mice, all rabbits used for these pregnancy tests died, as they were dissected.<sup>5</sup>
<span class="aside"><sup>5</sup>For the gods of science too demand animal sacrifice. Maybe.</span></p>
<h3>Sources</h3>
<ul>
<li>Friedman, M. H. (1939). The assay of gonadotropic extracts in the post-partum rabbit. Endocrinology, 24(5), 617-625.</li>
</ul>https://davidlowryduda.com/african-clawed-frogThu, 28 Mar 2019 03:14:15 +0000Notes from a talk: Finding Congruent Numbers, Arithmetic Progressions of Squares, and Triangleshttps://davidlowryduda.com/talk-finding-congruent-numbers-arithmetic-progressions-of-squares-and-trianglesDavid Lowry-Duda<p>Here are some notes for my talk <strong>Finding Congruent Numbers, Arithmetic Progressions of Squares, and Triangles</strong> (an invitation to analytic number theory), which I'm giving on Tuesday 26 February at Macalester College.</p>
<figure class="center shadowed">
<img src="/wp-content/uploads/2019/02/graphic-CNP-1024x482.png"
width="90%" />
</figure>
<p>The slides for my talk are available <a href="/wp-content/uploads/2019/02/congruent_number_handout.pdf">here</a>.</p>
<p>The overarching idea of the talk is to explore the deep relationship between</p>
<ol>
<li>right triangles with rational side lengths and area $n$,</li>
<li>three-term arithmetic progressions of squares with common difference $n$, and</li>
<li>rational points on the elliptic curve $Y^2 = X^3 - n^2 X$.</li>
</ol>
<p>If one of these exist, then all three exist, and in fact there are one-to-one correspondences between each of them. Such an $n$ is called a <strong>congruent number</strong>.</p>
<p>By understanding this relationship, we also describe the ideas and results in the paper <a href="https://arxiv.org/abs/1804.02570">A Shifted Sum for the Congruent Number Problem</a>, which I wrote jointly with Tom Hulse, Chan Ieong Kuan, and Alex Walker.</p>
<p>Towards the end of the talk, I say that in practice, the best way to decide if a (reasonably sized) number is congruent is through elliptic curves. Given a computer, we can investigate whether the number $n$ is congruent through a computer algebra system like <a href="http://www.sagemath.org/">sage</a>.<sup>1</sup>
<span class="aside"><sup>1</sup>There are other computer algebra systems with elliptic curve functionality, but sage is free and open-source, and I help contribute to and develop sage so I'm biased.</span></p>
<p>For the rest of this note, I'll describe how one can use sage to determine whether a number is congruent, and how to use sage to add points on elliptic curves to generate more triangles corresponding to a particular congruent number.</p>
<p>Firstly, one needs access to sage. It's free to install, but it's quite large. The easiest way to begin using sage immediately is to use <a href="https://cocalc.com/">cocalc.com</a>, a free interface to sage (and other tools) that was created by William Stein, who also created sage.</p>
<p>In a sage session, we can create an elliptic curve through</p>
<p><pre><code class="python">
> E6 = EllipticCurve([-36, 0])
> E6
Elliptic Curve defined by y^2 = x^3 - 36*x over Rational Field
</code></pre></p>
<p>More generally, to create the curve corresponding to whether or not $n$ is congruent, you can use</p>
<p><pre><code class="python">
> n = 6 # (or anything you want)
> E = EllipticCurve([-n**2, 0])
</code></pre></p>
<p>We can ask sage whether our curve has many rational points by asking it to (try to) compute the rank.</p>
<p><pre><code class="python">
> E6.rank()
1
</code></pre></p>
<p>If the rank is at least $1$, then there are infinitely many rational points on the curve and $n$ is a congruent number. If the rank is $0$, then $n$ is not congruent.<sup>2</sup>
<span class="aside"><sup>2</sup>Behind the scenes, sage is using a particular C++ library written by John Cremona (who happens to have been my postdoctoral advisor) to try to compute the rank of the curve. This usually works, but for large inputs it can sometimes time out or fail. This approach has limitations.</span></p>
<p>For the curve $Y^2 = X^3 - 36 X$ corresponding to whether $6$ is congruent, sage returns that the rank is $1$. We can ask sage to try to find a rational point on the elliptic curve through</p>
<p><pre><code class="python">
> E6.point_search(10)
[(-3 : 9 : 1)]
</code></pre></p>
<p>The <code>10</code> in this code is a limit on the complexity of the point. The precise definition isn't important — using $10$ is a reasonable limit for us.</p>
<p>We see that this output something. When sage examines the elliptic curve, it uses the equation $Y^2 Z = X^3 - 36 X Z^2$ — it turns out that in many cases, it's easier to perform computations when every term is a polynomial of the same degree. The coordinates it's giving us are of the form $(X : Y : Z)$, which looks a bit odd. We can ask sage to return just the XY coordinates as well.</p>
<p><pre><code class="python">
> Pt = E6.point_search(10)[0] # The [0] means to return the first element of the list
> Pt.xy()
(-3, 9)
</code></pre></p>
<p>In my talk, I describe a correspondence between points on elliptic curves and rational right triangles. In the talk, it arises as the choice of coordinates. But what matters for us right now is that the correspondence taking a point $(x, y)$ on an elliptic curve to a triangle $(a, b, c)$ is given by
$$(x, y) \mapsto \Big( \frac{n^2-x^2}{y}, \frac{-2 \cdot x \cdot n}{y}, \frac{n^2 + x^2}{y} \Big).$$</p>
<p>We can write a sage function to perform this map for us, through</p>
<p><pre><code class="python">
> def pt_to_triangle(P):
x, y = P.xy()
return (36 - x**2)/y, (-2*x*6/y), (36+x**2)/y
> pt_to_triangle(Pt)
(3, 4, 5)
</code></pre></p>
<p>This returns the $(3, 4, 5)$ triangle!</p>
<p>Of course, we knew this triangle the whole time. But we can use sage to get more points. A very cool fact is that rational points on elliptic curves form a group under a sort of addition — we can add points on elliptic curves together and get more rational points. Sage is very happy to perform this addition for us, and then to see what triangle results.</p>
<p><pre><code class="python">
> Pt2 = Pt + Pt
> Pt2.xy()
(25/4, -35/8)
> pt_to_triangle(Pt2)
(7/10, 120/7, -1201/70)
</code></pre></p>
<p>Another rational triangle with area $6$ is the $(7/10, 120/7, 1201/70)$ triangle. (You might notice that sage returned a negative hypotenuse, but it's the absolute values that matter for the area). After scaling this to an integer triangle, we get the integer right triangle $(49, 1200, 1201)$ (and we can check that the squarefree part of the area is $6$).</p>
<p>Let's do one more.</p>
<p><pre><code class="python">
> Pt3 = Pt + Pt + Pt
> Pt3.xy()
(-1587/1369, -321057/50653)
> pt_to_triangle(Pt3)
(-4653/851, -3404/1551, -7776485/1319901)
</code></pre></p>
<p>That's a complicated triangle! It may be fun to experiment some more — the triangles rapidly become very, very complicated. In fact, it was very important to the main result of our paper that these triangles become so complicated so quickly!</p>https://davidlowryduda.com/talk-finding-congruent-numbers-arithmetic-progressions-of-squares-and-trianglesMon, 25 Feb 2019 03:14:15 +0000Writing a Python Script to be Used in Vimhttps://davidlowryduda.com/writing-a-python-script-to-be-used-in-vimDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/writing-a-python-script-to-be-used-in-vimFri, 22 Feb 2019 03:14:15 +0000Slides for a talk at JMM 2019https://davidlowryduda.com/slides-for-a-talk-at-jmm-2019David Lowry-Duda<p>Today, I'm giving a talk on <em>Zeroes of L-functions associated to half-integral weight modular forms</em>, which includes some joint work with Li-Mei Lim and Tom Hulse, and which alludes to other joint work touched on previously with Jeff Hoffstein and Min Lee (and which perhaps should have been finished a few years ago).</p>
<p><a href="/wp-content/uploads/2019/01/half_weight_zeroes.pdf">Here are the slides for my talk</a>.</p>https://davidlowryduda.com/slides-for-a-talk-at-jmm-2019Fri, 18 Jan 2019 03:14:15 +0000Update to Second Moments in the Generalized Gauss Circle Problemhttps://davidlowryduda.com/update-to-second-moments-in-the-generalized-gauss-circle-problemDavid Lowry-Duda<p>Last year, my coauthors Tom Hulse, Chan Ieong Kuan, and Alex Walker posted a <a href="https://arxiv.org/abs/1703.10347">paper</a> to the arXiv called "Second Moments in the Generalized Gauss Circle Problem". I've <a href="/second-moments-in-the-generalized-gauss-circle-problem/">briefly described its contents before</a>.</p>
<p>This paper has been accepted and will appear in <a href="https://www.cambridge.org/core/journals/forum-of-mathematics-sigma">Forum of Mathematics: Sigma</a>.<a href="/wp-content/uploads/2018/12/out22.png"><img class="alignright wp-image-2717 size-medium" src="/wp-content/uploads/2018/12/out22-300x300.png" alt="More randomized squares art" width="300" height="300" /></a></p>
<p>This is the first time I've submitted to the Forum of Mathematics, and I must say that this has been a very good journal experience. One interesting aspect about FoM: Sigma is that they are immediate (gold) open access, and they don't release in issues. Instead, articles become available (for free) from them once the submission process is done. I was reviewing a publication-proof of the paper yesterday, and they appear to be very quick with regards to editing. Perhaps the paper will appear before the end of the year.</p>
<p>An updated version (the version from before the handling of proofs at the journal, so there will be a number of mostly aesthetic differences with the published version) of the paper will appear on the arXiv on Monday 10 December.<sup>1</sup>
<span class="aside"><sup>1</sup>due to the way the arXiv handles paper updates over weekends.</span></p>
<h2>A new appendix has appeared</h2>
<p>There is one major addition to the paper that didn't appear in the original preprint. At one of the referee's suggestions, Chan and I wrote an appendix. The major content of this appendix concerns a technical detail about Rankin-Selberg convolutions.</p>
<p>If $f$ and $g$ are weight $k$ cusp forms on $\mathrm{SL}(2, \mathbb{Z})$ with expansions $$ f(z) = \sum_ {n \geq 1} a(n) e(nz), \quad g(z) = \sum_ {n \geq 1} b(n) e(nz), $$ then one can use a (real analytic) Eisenstein series $$ E(s, z) = \sum_ {\gamma \in \mathrm{SL}(2, \mathbb{Z})_ \infty \backslash \mathrm{SL}(2, \mathbb{Q})} \mathrm{Im}(\gamma z)^s $$ to recognize the Rankin-Selberg $L$-function \begin{equation}\label{RS} L(s, f \otimes g) := \zeta(s) \sum_ {n \geq 1} \frac{a(n)b(n)}{n^{s + k - 1}} = h(s) \langle f g y^k, E(s, z) \rangle, \end{equation} where $h(s)$ is an easily-understandable function of $s$ and where $\langle \cdot, \cdot \rangle$ denotes the Petersson inner product.</p>
<p>When $f$ and $g$ are not cusp forms, or when $f$ and $g$ are modular with respect to a congruence subgroup of $\mathrm{SL}(2, \mathbb{Z})$, then there are adjustments that must be made to the typical construction of $L(s, f \otimes g)$.</p>
<p>When $f$ and $g$ are not cusp forms, then Zagier<sup>2</sup>
<span class="aside"><sup>2</sup>Zagier, Don. "The Rankin-Selberg method for automorphic functions which are not of rapid decay." J. Fac. Sci. Univ. Tokyo Sect. IA Math 28.3 (1981): 415-437.</span>
provided a way to recognize $L(s, f \otimes g)$ when $f$ and $g$ are modular on the full modular group $\mathrm{SL}(2, \mathbb{Z})$. And under certain conditions that he describes, he shows that one can still recognize $L(s, f \otimes g)$ as an inner product with an Eisenstein series as in \eqref{RS}.</p>
<p>In principle, his method of proof would apply for non-cuspidal forms defined on congruence subgroups, but in practice this becomes too annoying and bogged down with details to work with. Fortunately, in 2000, Gupta<sup>3</sup>
<span class="aside"><sup>3</sup>Gupta, Shamita Dutta. "The Rankin-Selberg method on congruence subgroups." Illinois Journal of Mathematics 44.1 (2000): 95-103.</span>
gave a different construction of $L(s, f \otimes g)$ that generalizes more readily to non-cuspidal forms on congruence subgroups. His construction is very convenient, and it shows that $L(s, f \otimes g)$ has all of the properties expected of it.</p>
<p>However Gupta does not show that there are certain conditions under which one can recognize $L(s, f \otimes g)$ as an inner product against an Eisenstein series.<sup>4</sup>
<span class="aside"><sup>4</sup>or something analogous to that, as the story is slightly more complicated on congruence subgroups.</span>
For this paper, we need to deal very explicitly and concretely with $L(s, \theta^2 \otimes \overline{\theta^2})$, which is formed from the modular form $\theta^2$, non-cuspidal on a congruence subgroup.</p>
<p>The Appendix to the paper can be thought of as an extension of Gupta's paper: it uses Gupta's ideas and techniques to prove a result analogous to \eqref{RS}. We then use this to get the explicit understanding necessary to tackle the Gauss Sphere problem.</p>
<p>There is more to this story. I'll return to it in a later note.</p>
<h2>Other submission details for FoM: Sigma</h2>
<p>I should say that there are many other revisions between the original preprint and the final one. These are mainly due to the extraordinary efforts of two Referees. One Referee was kind enough to give us approximately 10 pages of itemized suggestions and comments.</p>
<p>When I first opened these comments, I was a bit afraid. Having <em>so many comments</em> was daunting. But this Referee really took his or her time to point us in the right direction, and the resulting paper is vastly improved (and in many cases shortened, although the appendix has hidden the simplified arguments cut in length).</p>
<p>More broadly, the Referee acted as a sort of mentor with respect to my technical writing. I have a lot of opinions on technical writing,<sup>5</sup>
<span class="aside"><sup>5</sup>and on writing in general... and actually on LaTeX source as well. I'm an opinionated person, I guess.</span>
but this process changed and helped sharpen my ideas concerning good technical math writing.</p>
<p>I sometimes hear lots of negative aspects about peer review, but this particular pair of Referees turned the publication process into an opportunity to learn about good mathematical exposition — I didn't expect this.</p>
<p>I was also surprised by the infrastructure that existed at the University of Warwick for handling a gold open access submission. As part of their open access funding, Forum of Math: Sigma has an author-pays model. Or rather, the author's institution pays. It took essentially no time at all for Warwick to arrange the payment (about 500 pounds).</p>
<p>This is a not-inconsequential amount of money, but it is much less than the 1500 dollars that PLoS One uses. The comparison with PLoS One is perhaps apt. PLoS is older, and perhaps paved the way for modern gold open access journals like FoM. PLoS was started by group of established biologists and chemists, including a Nobel prize winner; FoM was started by a group of established mathematicians, including multiple Fields medalists.<sup>6</sup>
<span class="aside"><sup>6</sup>One might learn from this that it's necessary to have a little "oh" in your acronym in order to be a successful high-ranking gold open access journal.</span></p>
<p>I will certainly consider Forum of Mathematics in the future.</p>https://davidlowryduda.com/update-to-second-moments-in-the-generalized-gauss-circle-problemFri, 07 Dec 2018 03:14:15 +0000The wrong way to compute a sum: addendumhttps://davidlowryduda.com/the-wrong-way-to-compute-a-sum-addendumDavid Lowry-Duda<p><a href="/wp-content/uploads/2018/11/cellular_automata_106.png"><img class="alignright wp-image-2702 size-medium" src="/wp-content/uploads/2018/11/cellular_automata_106-300x298.png" alt="Cellular Automata from Rule 106 (random initial configuration)" width="300" height="298" /></a></p>
<p>In <a href="/the-wrong-way-to-compute-a-sum/">my previous note</a>, I looked at an amusing but inefficient way to compute the sum $$ \sum_{n \geq 1} \frac{\varphi(n)}{2^n - 1}$$ using Mellin and inverse Mellin transforms. This was great fun, but the amount of work required was more intense than the more straightforward approach offered immediately by using Lambert series.</p>
<p>However, Adam Harper suggested that there is a nice shortcut that we can use (although coming up with this shortcut requires either a lot of familiarity with Mellin transforms or knowledge of the answer).</p>
<p>In the Lambert series approach, one shows quickly that $$ \sum_{n \geq 1} \frac{\varphi(n)}{2^n - 1} = \sum_{n \geq 1} \frac{n}{2^n},$$ and then evaluates this last sum directly. For the Mellin transform approach, we might ask: do the two functions $$ f(x) = \sum_{n \geq 1} \frac{\varphi(n)}{2^{nx} - 1}$$ and $$ g(x) = \sum_{n \geq 1} \frac{n}{2^{nx}}$$ have the same Mellin transforms? From the previous note, we know that they have the same values at $1$.</p>
<p>We also showed very quickly that $$ \mathcal{M} [f] = \frac{1}{(\log 2)^s} \Gamma(s) \zeta(s-1). $$ The more difficult parts from the previous note arose in the evaluation of the inverse Mellin transform at $x=1$.</p>
<p>Let us compute the Mellin transform of $g$. We find that $$ \begin{align}
\mathcal{M}[g] &= \sum_{n \geq 1} n \int_0^\infty \frac{1}{2^{nx}} x^s \frac{dx}{x} \notag \\
&= \sum_{n \geq 1} n \int_0^\infty \frac{1}{e^{nx \log 2}} x^s \frac{dx}{x} \notag \\
&= \sum_{n \geq 1} \frac{n}{(n \log 2)^s} \int_0^\infty x^s e^{-x} \frac{dx}{x} \notag \\
&= \frac{1}{(\log 2)^s} \zeta(s-1)\Gamma(s). \notag
\end{align}$$ To go from the second line to the third line, we did the change of variables $x \mapsto x/(n \log 2)$, yielding an integral which is precisely the definition of the Gamma function.</p>
<p>Thus we see that $$ \mathcal{M}[g] = \frac{1}{(\log 2)^s} \Gamma(s) \zeta(s-1) = \mathcal{M}[f],$$ and thus $f(x) = g(x)$. ("Nice" functions with the same "nice" Mellin transforms are also the same, exactly as with Fourier transforms).</p>
<p>This shows that not only is $$ \sum_{n \geq 1} \frac{\varphi(n)}{2^n - 1} = \sum_{n \geq 1} \frac{n}{2^n},$$ but in fact $$ \sum_{n \geq 1} \frac{\varphi(n)}{2^{nx} - 1} = \sum_{n \geq 1} \frac{n}{2^{nx}}$$ for all $x > 1$.</p>https://davidlowryduda.com/the-wrong-way-to-compute-a-sum-addendumSat, 10 Nov 2018 03:14:15 +0000The wrong way to compute a sumhttps://davidlowryduda.com/the-wrong-way-to-compute-a-sumDavid Lowry-Duda<p>At a recent colloquium at the University of Warwick, the fact that
\begin{equation}\label{question}
\sum_ {n \geq 1} \frac{\varphi(n)}{2^n - 1} = 2.
\end{equation}
Although this was mentioned in passing, John Cremona asked — <em>How do you prove that</em>?</p>
<p>It almost fails a heuristic check, as one can quickly verify that
\begin{equation}\label{similar}
\sum_ {n \geq 1} \frac{n}{2^n} = 2,
\end{equation}
which is surprisingly similar to \eqref{question}. I wish I knew more examples of pairs with a similar flavor.</p>
<p><strong>[Edit:</strong> Note that an addendum to this note <a href="/the-wrong-way-to-compute-a-sum-addendum/">has been added here</a>. In it, we see that there is a way to shortcut the "hard part" of the long computation.<strong>]</strong></p>
<h2>The right way</h2>
<p>Shortly afterwards, Adam Harper and Samir Siksek pointed out that this can be determined from Lambert series, and in fact that Hardy and Wright include a similar exercise in their book. This proof is delightful and short.</p>
<p>The idea is that, by expanding the denominator in power series, one has that
\begin{equation}
\sum_{n \geq 1} a(n) \frac{x^n}{1 - x^n} \notag
= \sum_ {n \geq 1} a(n) \sum_{m \geq 1} x^{mn}
= \sum_ {n \geq 1} \Big( \sum_{d \mid n} a(d) \Big) x^n,
\end{equation}
where the inner sum is over the divisors $d$ of $n$. This all converges beautifully for $\lvert x \rvert < 1$.</p>
<p>Applied to \eqref{question}, we find that
\begin{equation}
\sum_{n \geq 1} \frac{\varphi(n)}{2^n - 1} \notag
= \sum_ {n \geq 1} \varphi(n) \frac{2^{-n}}{1 - 2^{-n}}
= \sum_ {n \geq 1} 2^{-n} \sum_{d \mid n} \varphi(d),
\end{equation}
and as
\begin{equation}
\sum_ {d \mid n} \varphi(d) = n, \notag
\end{equation}
we see that \eqref{question} can be rewritten as \eqref{similar} after all, and thus both evaluate to $2$.</p>
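<p>Both ingredients of this derivation, the divisor-sum identity $\sum_{d \mid n} \varphi(d) = n$ and the Lambert rearrangement, are easy to spot-check numerically. This Python sketch is illustrative and not from the original argument:</p>

```python
def phi(n):
    # Euler's totient via trial-division factorization
    result, p, m = n, 2, n
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

# summing phi over the divisors of n recovers n
for n in range(1, 500):
    assert sum(phi(d) for d in range(1, n + 1) if n % d == 0) == n

# Lambert series rearrangement at a sample point with |x| < 1
x, N = 0.5, 400
lambert = sum(phi(n) * x**n / (1 - x**n) for n in range(1, N + 1))
power = sum(n * x**n for n in range(1, N + 1))
assert abs(lambert - power) < 1e-12
```

<p>At $x = 1/2$ the Lambert series is exactly the sum in \eqref{question}, so both sides evaluate (numerically) to $2$.</p>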
<p>That's a nice derivation using a series that I hadn't come across before. But that's not what this short note is about. This note is about evaluating \eqref{question} in a different way, arguably the wrong way. But it's a wrong way that works out in a nice way that at least one person<sup>1</sup>
<span class="aside"><sup>1</sup>and perhaps exactly one person</span>
finds appealing.</p>
<h2>The wrong way</h2>
<p>We will use Mellin inversion — this is essentially Fourier inversion, but in a change of coordinates.</p>
<p>Let $f$ denote the function
\begin{equation}
f(x) = \frac{1}{2^x - 1}. \notag
\end{equation}
Denote by $f^*$ the Mellin transform of $f$,
\begin{equation}
f^*(s) := \mathcal{M} [f(x)] (s) := \int_0^\infty f(x) x^s \frac{dx}{x}
= \frac{1}{(\log 2)^s} \Gamma(s)\zeta(s),\notag
\end{equation}
where $\Gamma(s)$ and $\zeta(s)$ are the Gamma function and the Riemann zeta function.<sup>2</sup>
<span class="aside"><sup>2</sup>These are functions near and dear to my heart, so I feel comfort when I see them. But I recognize that others might think that this is an awfully complicated way to start answering this question. And I must say, those people are probably right.</span></p>
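<p>One can sanity-check this transform numerically at a sample point, say $s = 3$, by approximating the Mellin integral with a crude trapezoid rule and comparing against $\Gamma(s)\zeta(s)/(\log 2)^s$. (This sketch is mine, not part of the original argument.)</p>

```python
from math import gamma, log

def mellin_trapezoid(f, s, upper=60.0, h=1e-3):
    # crude trapezoid approximation of int_0^infty f(x) x^{s-1} dx
    total, x, prev = 0.0, h, 0.0  # the integrand -> 0 as x -> 0 when s = 3
    while x <= upper:
        cur = f(x) * x ** (s - 1)
        total += 0.5 * (prev + cur) * h
        prev, x = cur, x + h
    return total

s = 3
numeric = mellin_trapezoid(lambda x: 1.0 / (2 ** x - 1), s)
zeta_s = sum(1.0 / n ** s for n in range(1, 100_000))  # partial sum for zeta(3)
closed_form = gamma(s) * zeta_s / log(2) ** s
assert abs(numeric - closed_form) < 1e-4
```

<p>Truncating the integral at $60$ is harmless here, since the integrand decays like $x^2 2^{-x}$.</p>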
<p>For a general nice function $g(x)$, its Mellin transform satisfies
\begin{equation}
\mathcal{M}[g(nx)] (s)
= \int_0^\infty g(nx) x^s \frac{dx}{x}
= \frac{1}{n^s} \int_0^\infty g(x) x^s \frac{dx}{x}
= \frac{1}{n^s} g^*(s).\notag
\end{equation}
Further, the Mellin transform is linear. Thus
\begin{equation}\label{mellinbase}
\mathcal{M}\Big[\sum_{n \geq 1} \varphi(n) f(nx)\Big] (s)
= \sum_{n \geq 1} \frac{\varphi(n)}{n^s} f^*(s)
= \sum_{n \geq 1} \frac{\varphi(n)}{n^s} \frac{\Gamma(s) \zeta(s)}{(\log 2)^s}.
\end{equation}</p>
<p>The Euler phi function $\varphi(n)$ is multiplicative and nice, and its Dirichlet series can be rewritten as
\begin{equation}
\sum_{n \geq 1} \frac{\varphi(n)}{n^s} \notag
= \frac{\zeta(s-1)}{\zeta(s)}.
\end{equation}
Thus the Mellin transform in \eqref{mellinbase} can be written as
\begin{equation}
\frac{1}{(\log 2)^s} \Gamma(s) \zeta(s-1). \notag
\end{equation}</p>
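<p>The Dirichlet series identity for $\varphi$ also lends itself to a numerical spot-check. Here is a sketch (mine, not the post's) that sieves the totient and compares $\sum \varphi(n)/n^3$ against $\zeta(2)/\zeta(3)$:</p>

```python
from math import pi

def totient_sieve(N):
    # phi(n) for all n <= N, by multiplying in (1 - 1/p) for each prime p
    phi = list(range(N + 1))
    for p in range(2, N + 1):
        if phi[p] == p:  # p has not been touched, so p is prime
            for multiple in range(p, N + 1, p):
                phi[multiple] -= phi[multiple] // p
    return phi

N, s = 200_000, 3
phi = totient_sieve(N)
dirichlet = sum(phi[n] / n ** s for n in range(1, N + 1))
zeta2 = pi ** 2 / 6
zeta3 = sum(1.0 / n ** 3 for n in range(1, N + 1))
assert abs(dirichlet - zeta2 / zeta3) < 1e-4
```

<p>The truncated Dirichlet series has error of size roughly $1/N$, so the tolerance is chosen accordingly.</p>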
<p>By the fundamental theorem of Mellin inversion (which is analogous to Fourier inversion, but again in different coordinates), the inverse Mellin transform will return the original function. The inverse Mellin transform of a function $h(s)$ is defined to be
\begin{equation}
\mathcal{M}^{-1}[h(s)] (x) \notag
:=
\frac{1}{2\pi i} \int_{c - i \infty}^{c + i\infty} x^{-s} h(s) ds,
\end{equation}
where $c$ is taken so that the integral converges beautifully, and the integral is over the vertical line with real part $c$. I'll write $(c)$ as a shorthand for the limits of integration. Thus
\begin{equation}\label{mellininverse}
\sum_{n \geq 1} \frac{\varphi(n)}{2^{nx} - 1}
= \frac{1}{2\pi i} \int_ {(3)} \frac{1}{(\log 2)^s}
\Gamma(s) \zeta(s-1) x^{-s} ds.
\end{equation}</p>
<p>We can now describe the end goal: evaluate \eqref{mellininverse} at $x=1$, which will recover the value of the original sum in \eqref{question}.</p>
<p>How can we hope to do that? The idea is to shift the line of integration arbitrarily far to the left, pick up the infinitely many residues guaranteed by Cauchy's residue theorem, and to recognize the infinite sum as a classical series.</p>
<p>The integrand has residues at $s = 2, 0, -2, -4, \ldots$, coming from the zeta function ($s = 2$) and the Gamma function (all the others). Note that there aren't poles at negative odd integers: although $\Gamma(s)$ has poles at every non-positive integer, the trivial zeroes of the zeta function at the negative even integers mean that $\zeta(s-1)$ vanishes at the negative odd integers, cancelling those poles.</p>
<p>Recall that $\zeta(s)$ has residue $1$ at $s = 1$ and that $\Gamma(s)$ has residue $(-1)^n/n!$ at $s = -n$. Then shifting the line of integration and picking up all the residues reveals that
\begin{equation}
\sum_{n \geq 1} \frac{\varphi(n)}{2^{n} - 1} \notag
=\frac{1}{\log^2 2} + \zeta(-1) + \frac{\zeta(-3)}{2!} \log^2 2 +
\frac{\zeta(-5)}{4!} \log^4 2 + \cdots
\end{equation}</p>
<p>The zeta function at negative integers has a very well-known relation to the Bernoulli numbers,
\begin{equation}\label{zeta_bern}
\zeta(-n) = - \frac{B_ {n+1}}{n+1},
\end{equation}
where Bernoulli numbers are the coefficients in the expansion
\begin{equation}\label{bern_gen}
\frac{t}{1 - e^{-t}} = \sum_{m \geq 0} B_m \frac{t^m}{m!}.
\end{equation}
Many general proofs for the values of $\zeta(2n)$ use this relation and the functional equation, as well as a computation of the Bernoulli numbers themselves. Another important aspect of Bernoulli numbers that is apparent through \eqref{zeta_bern} is that $B_{2n+1} = 0$ for $n \geq 1$, lining up with the trivial zeroes of the zeta function.</p>
<p>Translating the zeta values into Bernoulli numbers, we find that
\eqref{question} is equal to
\begin{align}
&\frac{1}{\log^2 2} - \frac{B_2}{2} - \frac{B_4}{2! \cdot 4} \log^2 2 -
\frac{B_6}{4! \cdot 6} \log^4 2 - \frac{B_8}{6! \cdot 8} \log^6 2 - \cdots \notag \\
&=
-\sum_{m \geq 0} (m-1) B_m \frac{(\log 2)^{m-2}}{m!}. \label{recog}
\end{align}
This last sum is excellent, and can be recognized.</p>
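<p>Before recognizing the sum in closed form, one can already check numerically that \eqref{recog} is (very nearly) $2$, using exact Bernoulli numbers from the standard recurrence. The code below is my own sketch, not part of the original argument:</p>

```python
from fractions import Fraction
from math import comb, factorial, log

def bernoulli_plus(M):
    # Bernoulli numbers B_0, ..., B_M with B_1 = +1/2,
    # matching the generating function t / (1 - e^{-t})
    B = [Fraction(0)] * (M + 1)
    B[0] = Fraction(1)
    for m in range(1, M + 1):
        B[m] = -sum(comb(m + 1, k) * B[k] for k in range(m)) / (m + 1)
    B[1] = Fraction(1, 2)  # flip from the usual B_1 = -1/2 convention
    return B

t = log(2)
B = bernoulli_plus(40)
value = -sum((m - 1) * float(B[m]) * t ** (m - 2) / factorial(m)
             for m in range(41))
assert abs(value - 2.0) < 1e-12
```

<p>The terms decay roughly like $(\log 2 / 2\pi)^m$, so forty terms is already far more than double precision requires.</p>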
<p>For a general exponential generating series
\begin{equation}
F(t) = \sum_{m \geq 0} a(m) \frac{t^m}{m!},\notag
\end{equation}
we see that
\begin{equation}
\frac{d}{dt} \frac{1}{t} F(t) \notag
=\sum_{m \geq 0} (m-1) a(m) \frac{t^{m-2}}{m!}.
\end{equation}
Applying this to the series defining the Bernoulli numbers from \eqref{bern_gen}, we find that
\begin{equation}
\frac{d}{dt} \frac{1}{t} \frac{t}{1 - e^{-t}} \notag
=- \frac{e^{-t}}{(1 - e^{-t})^2},
\end{equation}
and also that
\begin{equation}
\frac{d}{dt} \frac{1}{t} \frac{t}{1 - e^{-t}} \notag
=\sum_{m \geq 0} (m-1) B_m \frac{t^{m-2}}{m!}.
\end{equation}
This is exactly the sum that appears in \eqref{recog}, with $t = \log 2$.</p>
<p>Putting this together, we find that
\begin{equation}
\sum_{m \geq 0} (m-1) B_m \frac{(\log 2)^{m-2}}{m!} \notag
= -\frac{e^{-\log 2}}{(1 - e^{-\log 2})^2}
= -\frac{1/2}{(1/2)^2} = -2.
\end{equation}
Thus we find that \eqref{question} really is equal to $2$, as we had sought to show.</p>https://davidlowryduda.com/the-wrong-way-to-compute-a-sumFri, 02 Nov 2018 03:14:15 +0000Using lcalc to compute half-integral weight L-functionshttps://davidlowryduda.com/using-lcalc-to-compute-half-integral-weight-l-functionsDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/using-lcalc-to-compute-half-integral-weight-l-functionsTue, 09 Oct 2018 03:14:15 +0000Extra comparisons in Python's timsorthttps://davidlowryduda.com/extra-comparisons-in-pythons-timsortDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/extra-comparisons-in-pythons-timsortFri, 14 Sep 2018 03:14:15 +0000Speed talks should be at every conferencehttps://davidlowryduda.com/speed-talks-should-be-at-every-conferenceDavid Lowry-Duda<p>I recently attended Building Bridges 4, an automorphic forms summer school and workshop. A major goal of the conference is to foster communication and relationships between researchers from North America and Europe, especially junior researchers and graduate students.</p>
<p>It was a great conference, and definitely one of the better conferences that I've attended. What made it so good? For one thing, it was in Budapest, and I love Budapest. Many of the main topics were close to my heart, which is a big plus.</p>
<p>But what I think really set it apart was that there were lots of relatively short talks, and almost everyone attended almost every talk.<sup>1</sup>
<span class="aside"><sup>1</sup>although this year there were a few parallel sessions, so sometimes the group split in half. The reason for this is that every attendee was encouraged to give a talk, which is an excellent idea, and the fact that there were enough to require parallel sessions is to be applauded.</span></p>
<p>The amount of time allotted to a talk carries extreme power in deciding what sort of talk it will be. A typical hour-long seminar talk is long enough to give context, describe a line of research leading to a set of results, discuss how these results fit into the literature, and even to give a non-rushed description of how something is proved. Sometimes a good speaker will even distill a few major ideas and discuss how they are related. A long talk can have multiple major ideas (although just one presented very well can make a good talk too).</p>
<p>In comparison, 50, 40, and 30 minute talks require much more discipline. As the amount of time decreases, the number of ideas that can be inserted into a talk decreases. And this relationship is not linear! Thirty minutes is just about long enough to describe one idea pretty well, and to do anything more is very hard.<sup>2</sup>
<span class="aside"><sup>2</sup>... even though it can be very tempting.</span></p>
<p>Something interesting happens for shorter talks. For 20 minute, 15 minute, and 10 minute talks, the limitation almost serves as a source of inspiration.<sup>3</sup>
<span class="aside"><sup>3</sup>In the same way as "Necessity is the mother of invention".</span>
Being forced to focus on what's important is a powerful organizing force.</p>
<p>The median talk length was 20 minutes, which is a very comfortable number. This is long enough to state a result and give context. It's also long enough to tempt speakers into describing methodology behind a proof, but not long enough to effectively teach someone how the proof works.</p>
<p>An extraordinary aspect of a 20 minute talk is also that it's short enough to pay attention to, even if it's only an okay talk. It is perhaps not a surprise to most conference goers that most talks are not so great. To be a skilled orator is to be exceptional.</p>
<p>At Building Bridges, I was introduced to math <em>speed talks</em>. These are two minute talks. I've seen many programming <em>lightning talks</em> (often used to plug a particular product or solution to a common programming problem), but these math <em>speed talks</em> were different.</p>
<p>People used their two minutes to introduce an idea, or a result. And they either chose to give the broadest possible context, or a singular idea in the proof.</p>
<p>People were talking about <em>real mathematics</em> in <strong>two minutes</strong>. And I loved it.</p>
<p>Simply having a task where you distill some real mathematics into a two minute coherent description is worthwhile. <em>What's important? What do you really want to say? Why?</em></p>
<p>Two minutes is so short that it feels silly. And silly means that it doesn't feel dangerous or scary, and thus many people felt willing to give it a try. At Building Bridges, the organizers gamified the speed talks, so that getting the closest to 2 minutes was rewarded with applause and going over two minutes resulted in a buzzer going off. It was a game, and it was <strong>fun</strong>. It was encouraging.</p>
<p>I firmly support any activity that encourages people who normally don't speak so much, especially students and junior researchers. You learn a lot by giving a talk, even if it's only a two minute talk.<sup>4</sup>
<span class="aside"><sup>4</sup>Of course, giving only two minute talks would be bad. You would learn an incomplete set of skills. But you might get over stage fright, and that's a big enough hump.</span></p>
<p>This conference had 19 (I think) speed talks over a three day stretch. They were given in clumps after the last regular talk each day. Since people were there for the big talk, everyone attended the speed talks. This is also important! In conferences like the Joint Math Meetings, where there might even be something like speed talks, it's essentially impossible to pay attention since there are too many people in too many places and you never can step in the same river twice. Here, speed talks were given on the same stage as long talks, to the same audience, and with the same equipment.</p>
<p>Every conference should have speed talks. And they should be treated as first-class talks, with the exception that they are irrefutably silly.</p>
<p>Go forth and spread the speed talk gospel.</p>https://davidlowryduda.com/speed-talks-should-be-at-every-conferenceThu, 06 Sep 2018 03:14:15 +0000Seeing color shouldn't feel like a superpowerhttps://davidlowryduda.com/seeing-colorDavid Lowry-Duda<p>In the last month, I have found myself pair programming with three different people. All three times involved working on the <a href="http://www.lmfdb.org/">LMFDB</a>. I rarely pair program outside a mentor-mentee or instructor-student situation.<sup>1</sup>
<span class="aside"><sup>1</sup>and to be fair, one of these was mostly me showing someone how to contribute to the LMFDB.</span></p>
<p>This is fun. It's fun seeing other people's workflows. (In these cases, it happened to be that the other person was usually the one at the keyboard and typing, and I was backseat driving). I live in the terminal, subscribe to the Unix-is-my-IDE general philosophy: vim is my text editor; a mixture of makefiles, linters, and fifos with tmux perform automated building, testing, and linting; git for source control; and a medium-sized but consistently growing set of homegrown bash/python/c tools and scripts make it fun and work how I want.</p>
<p>I'm distinctly interested in seeing tools other people have made for their own workflows. Those scripts that aren't polished, but get the work done. There is a whole world of git-hooks and aliases that amaze me.</p>
<p>But my recent encounters with pair programming exposed me to a totally different and unexpected experience: two of my programming partners were color blind.<sup>2</sup>
<span class="aside"><sup>2</sup>Coincidence.</span></p>
<p>At first, I didn't think much of it. I had thought that you might set some colorblind-friendly colorschemes, and otherwise configure your way around it. But as is so often the case with accessibility problems, I underestimated both the number of challenges and the difficulty in solving them (lousy but true aside: <strong>most companies almost completely ignore problems with accessibility</strong>).</p>
<p>I first noticed differences while trying to fix bugs and review bugfixes in the LMFDB. We use Travis CI for automated testing, and we were examining a build that had failed. We brought up the Travis CI interface and scrolled through the log. I immediately point out the failure, since I see something like this.<sup>3</sup>
<span class="aside"><sup>3</sup>It was a different set of failures somewhere mid-test. I just took the most recent failing build as examples.</span></p>
<p><a href="/wp-content/uploads/2018/07/rb_blind_compare.png">
<img class="aligncenter wp-image-2649 size-full" src="/wp-content/uploads/2018/07/rb_blind_compare.png" alt="an image from Travis CI showing &quot;FAIL&quot; in red and &quot;PASS&quot; in green." width="658" height="482" /></a><em>How do you know something failed? </em>asks John, my partner for the day. <em>Oh, it's because the output is colored, isn't it? I didn't know.</em> With the help of the color-blindness.com <a href="https://www.color-blindness.com/coblis-color-blindness-simulator/">color-blindness simulator</a>, I now see that John saw something like <a href="/wp-content/uploads/2018/07/rg_blind.png">
<img class="aligncenter wp-image-2650 size-full" src="/wp-content/uploads/2018/07/rg_blind.png" alt="an image from Travis CI that has been altered to appear with contrast as a red-green colorblind person might see it. Now &quot;FAIL&quot; and &quot;PASS&quot; appear in essentially the same shade." width="775" height="580" /></a>With red-green colorblindness, there is essentially no difference in the shades of PASSED and FAILED. That's sort of annoying.</p>
<p>We'd make a few changes, and then rerun some tests. Now we're running tests in a terminal, and the test logs are scrolling by. We're chatting about emacs wizardry (or c++ magic, or compiler differences between gcc and clang, or something), and I point out that we can stop the tests since three tests have already failed.</p>
<p>He stared at me a bit dumbfoundedly. It was like I had superpowers. I could recognize failures without paying almost any attention, since flashes of red stand out.</p>
<p>But if you don't recognize differences in color, how would you even know that the terminal outputs different colors for PASSED and FAILED? (We use pytest, which does). A quick look for different colorschemes led to frustration, as there are different sorts of colorblindness and no single solution that will work for everyone (and changing colorschemes is sort of annoying anyway).<sup>4</sup>
<span class="aside"><sup>4</sup>This leads to small but significant differences. Our test suite takes approximately 15 minutes to run in full, and a nontrivial amount of time to run in part. Early error recognition can save minutes each time, and minutes add up.</span></p>
<p>I should say that the Travis team has made some accessibility improvements for colorblind users in the past. The build-passing and build-failing icons used to be little circles that were red or green, as shown here.</p>
<p><a href="/wp-content/uploads/2018/07/old_travis_pass_fail.png"><img class="size-full wp-image-2651 aligncenter" src="/wp-content/uploads/2018/07/old_travis_pass_fail.png" alt="" width="185" height="432" /></a>That means the build status was effectively invisible to colorblind users. After <a href="https://github.com/travis-ci/travis-ci/issues/754">an issue was raised and discussed</a>, they moved to the current green-checkmark-circle for passing and red-exed-circle for failing, which is a big improvement.</p>
<p>The colorscheme used for Travis CI's online logs is based on the <a href="https://github.com/arcticicestudio/nord">nord</a> color palette, and there is no colorscheme-switching option. It's a beautiful and well-researched theme <em>for me</em>, but not for everybody.</p>
<p>The colors on the page are controllable by CSS, but not in a uniform way that works on many sites. (Or at least, not to my knowledge. I would be interested if someone else knew more about this and knew a generic approach. The people I was pair-programming with didn't have a good solution to this problem).</p>
<p>Should you really need to write your own solution to every colorblind accessibility problem?</p>
<p>In the next post, I'll give a (lousy but functional) bookmarklet that injects CSS into the page to see Travis CI FAILs immediately.</p>https://davidlowryduda.com/seeing-colorSat, 28 Jul 2018 03:14:15 +0000A bookmarklet to inject colorblind friendly CSS into Travis CIhttps://davidlowryduda.com/a-bookmarklet-to-inject-colorblind-friendly-css-into-travis-ciDavid Lowry-Duda<p>In my <a href="/seeing-color/">previous post</a>, I noted that the ability to see in color gave me an apparent superpower in quickly analyzing Travis CI and pytest logs.</p>
<p>I wondered: <em>how hard is it to use colorblind friendly colors here?</em></p>
<p>I had in the back of my mind the thought of the next time I sit down and pair program with someone who is colorblind (which will definitely happen). Pair programming is largely about sharing experiences and ideas, and color disambiguation shouldn't be a wedge.</p>
<p>I decided that loading customized CSS is the way to go. There are different ways to do this, but an easy method for quick replicability is to create a bookmarklet that adds CSS into the page. So, I did that.</p>
<p>You can get that <a href="/static/travis-colorblind.html">bookmarklet here</a>. (Due to very sensible security reasons, Wordpress doesn't want to allow me to provide a link which is actually a javascript function. So I make it available on a static, handwritten page).<sup>1</sup>
<span class="aside"><sup>1</sup>I suppose one could interpret this as a not so subtle hint that javascript injection can be unsafe. Fortunately this bookmark is very transparent.</span></p>
<p>Here's how it works. A Travis log looks typically like this:</p>
<p><a href="/wp-content/uploads/2018/07/travis_pre_bookmark.png"><img class="size-full wp-image-2657 aligncenter" src="/wp-content/uploads/2018/07/travis_pre_bookmark.png" alt="" width="583" height="121" /></a></p>
<p>After clicking on the bookmarklet, it looks like</p>
<p><a href="/wp-content/uploads/2018/07/travis_post_bookmark.png"><img class="size-full wp-image-2656 aligncenter" src="/wp-content/uploads/2018/07/travis_post_bookmark.png" alt="" width="568" height="141" /></a>This is not beautiful, but it works and it's very noticeable. Nonetheless, when the goal is just to be able to quickly recognize whether errors are occurring, or to recognize exceptional lines on a quick scroll-by, the black-text-on-white-box wins the standout crown.</p>
<p>The LMFDB uses pytest, which conveniently produces error summaries at the end of the test. (We used to use nosetest, and we hadn't set it up to have nice summaries before transitioning to pytest). This bookmark will also affect the error summary, so that it now looks like</p>
<p><a href="/wp-content/uploads/2018/07/travis_error_post_bookmark.png"><img class="size-full wp-image-2658 aligncenter" src="/wp-content/uploads/2018/07/travis_error_post_bookmark.png" alt="" width="590" height="308" /></a>Again, I would say this is not beautiful, but definitely noticeable.</p>
<hr />
<p>As an aside, I also looked through the variety of colorschemes that I have collected over the years. And it turns out that 100 percent of them are unkind to colorblind users, with the exception of the monotone or monochromatic schemes (which are equal in the <a href="http://www.tnellen.com/westside/harrison.pdf">Harrison Bergeron sense</a>).</p>
<p>We should do better.</p>https://davidlowryduda.com/a-bookmarklet-to-inject-colorblind-friendly-css-into-travis-ciSat, 28 Jul 2018 03:14:15 +0000Notes from a Talk at Building Bridges 4https://davidlowryduda.com/notes-from-a-talk-at-building-bridges-4David Lowry-Duda<p>On 18 July 2018 I gave a talk at the 4th Building Bridges Automorphic Forms Workshop, which is hosted at the Renyi Institute in Budapest, Hungary this year. In this talk, I spoke about counting points on hyperboloids, with a certain focus on counting points on the three dimensional hyperboloid</p>
<p>$$\begin{equation} X^2 + Y^2 = Z^2 + h \end{equation}$$</p>
<p>for any fixed integer $h$.</p>
<p>I gave a similar talk at the 32nd Automorphic Forms Workshop in Tufts in March. I don't say this during my talk, but a big reason for giving these talks is to continue to inspire me to finish the corresponding paper. (There are still a couple of rough edges that need some attention).</p>
<p>The methodology for the result relies on the spectral expansion of half-integral weight modular forms. This is unfriendly to those unfamiliar with the subject, and particularly mysterious to students. But there is a nice connection to a topic discussed by Arpad Toth during the previous week's associated summer school.</p>
<p>Arpad sketched a proof of the spectral decomposition of holomorphic modular cusp forms on $\Gamma = \mathrm{SL}(2, \mathbb{Z})$. He showed that
$$\begin{equation} L^2(\Gamma \backslash \mathcal{H}) = \textrm{cuspidal} \oplus \textrm{Eisenstein}, \tag{1}
\end{equation}$$
where the <em>cuspidal</em> contribution comes from Maass forms and the <em>Eisenstein</em> contribution comes from line integrals against Eisenstein series.</p>
<p>The typical Eisenstein series $$\begin{equation} E(z, s) = \sum_{\gamma \in \Gamma_\infty \backslash \Gamma} \textrm{Im}(\gamma z)^s \end{equation}$$ only converges for $\mathrm{Re}(s) > 1$, and the initial decomposition in $(1)$ implicitly has $s$ in this range.</p>
<p>To write down the integrals appearing in the Eisenstein spectrum explicitly, one normally shifts the line of integration to $1/2$. As Arpad explained, classically this produces a pole at $s = 1$ (which is the constant function).</p>
<p>In half-integral weight, the Eisenstein series has a pole at $s = 3/4$, with the standard theta function</p>
<p>$$\begin{equation} \theta(z) = \sum_{n \in \mathbb{Z}} e^{2 \pi i n^2 z} \end{equation}$$</p>
<p>as the residue. (More precisely, it's a constant times $y^{1/4} \theta(z)$, or a related theta function for $\Gamma_0(N)$). I refer to this portion of the spectrum as <em>the residual spectrum</em>, since it comes from often-forgotten residues of Eisenstein series. Thus the spectral decomposition for half-integral weight objects is a bit more complicated than the normal case.</p>
<p>When giving talks involving half-integral weight spectral expansions to audiences including non-experts, I usually omit description of this. But for those who attended the summer school, it's possible to at least recognize where these additional terms come from.</p>
<p>The slides for this talk are available <a href="/wp-content/uploads/2018/07/BB18.pdf">here</a>.</p>https://davidlowryduda.com/notes-from-a-talk-at-building-bridges-4Wed, 18 Jul 2018 03:14:15 +0000Splitting Easy MathSE Questions into a New NoviceMathSE Is a Bad Ideahttps://davidlowryduda.com/splitting-mathse-into-novicemathse-is-a-bad-ideaDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/splitting-mathse-into-novicemathse-is-a-bad-ideaFri, 11 May 2018 03:14:15 +0000Ghosts of Forums Pasthttps://davidlowryduda.com/ghosts-of-forums-pastDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/ghosts-of-forums-pastTue, 01 May 2018 03:14:15 +0000Challenges facing community cohesion (and Math.StackExchange in particular)https://davidlowryduda.com/challenges-facing-community-cohesion-and-math-stackexchange-in-particularDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/challenges-facing-community-cohesion-and-math-stackexchange-in-particularTue, 24 Apr 2018 03:14:15 +0000Paper: A Shifted Sum for the Congruent Number Problemhttps://davidlowryduda.com/paper-announcement-a-shifted-sum-for-the-congruent-number-problemDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. 
Please view it directly at the url.https://davidlowryduda.com/paper-announcement-a-shifted-sum-for-the-congruent-number-problemFri, 13 Apr 2018 03:14:15 +0000Cracking Codes with Python: A Book Reviewhttps://davidlowryduda.com/cracking-codes-with-python-a-book-reviewDavid Lowry-Duda<p>How do you begin to learn a technical subject?</p>
<p>My first experience in "programming" was following a semi-tutorial on how to patch the Starcraft exe in order to make it understand replays from previous versions. I was about 10, and I cobbled together my understanding from internet mailing lists and chatrooms. The documentation was awful and the original description was flawed, and to make it worse, I didn't know anything about any sort of programming yet. But I trawled these lists and chatroom logs and made it work, and learned a few things. Each time Starcraft was updated, the old replay system broke completely and it was necessary to make some changes, and I got pretty good at figuring out what changes were necessary and how to perform these patches.</p>
<p>On the other hand, my first formal experience in programming was taking a course at Georgia Tech many years later, in which a typical activity would revolve around an exciting topic like concatenating two strings or understanding object polymorphism. These were dry topics presented to us dryly, but I knew that I wanted to understand what was going on and so I suffered the straight-faced-ness of the class and used the course as an opportunity to build some technical depth.</p>
<p>Now I recognize that these two approaches cover most first experiences learning a technical subject: a motivated survey versus monographic study. At the heart of the distinction is a decision to view and alight on many topics (but not delving deeply in most) or to spend as much time as is necessary to understand completely each topic (and hence to not touch too many different topics). Each has their place, but each draws a very different crowd.</p>
<p>The book <em>Cracking Codes with Python: An Introduction to Building and Breaking Ciphers</em> by Al Sweigart<sup>1</sup>
<span class="aside"><sup>1</sup> Side note: this is the fifth book I've been asked to review, and by far the most approachable</span>
is very much a motivated flight through various topics in programming and cryptography, and not at all a deep technical study of any individual topic. A more accurate (though admittedly less beckoning) title might be <em>An Introduction to Programming Concepts Through Building and Breaking Ciphers in Python.</em> The main goal is to promote programmatical thinking by exploring basic ciphers, and the medium happens to be python.</p>
<p>But ciphers are cool. And breaking them is cool. And if you think you might want to learn something about programming and you might want to learn something about ciphers, then this is a great book to read.</p>
<p>Sweigart has a knack for writing approachable descriptions of what's going on without delving into too many details. In fact, in some sense Sweigart has already written this book before: his other books <em>Automate the Boring Stuff with Python</em> and <em>Invent your own Computer Games with Python</em> are similarly survey material using python as the medium, though with different motivating topics.</p>
<p>Each chapter of this book is centered around exploring a different aspect of a cipher, and introduces additional programming topics to do so. For example, one chapter introduces the classic Caesar cipher, as well as the "if", "else", and "elif" conditionals (and a few other python functions). Another chapter introduces brute-force breaking the Caesar cipher (as well as string formatting in python).</p>
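<p>To give a flavor of what that chapter covers, here is a minimal Caesar cipher sketch. (This is my own illustrative code, not code taken from the book, which builds up its version somewhat differently.)</p>

```python
SYMBOLS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def caesar(message, key):
    # shift each letter by `key` places, preserving case and
    # passing every other character through unchanged
    out = []
    for ch in message:
        if ch.upper() in SYMBOLS:
            idx = (SYMBOLS.index(ch.upper()) + key) % len(SYMBOLS)
            shifted = SYMBOLS[idx]
            out.append(shifted if ch.isupper() else shifted.lower())
        else:
            out.append(ch)
    return "".join(out)

ciphertext = caesar("Attack at dawn.", 3)
assert ciphertext == "Dwwdfn dw gdzq."
assert caesar(ciphertext, -3) == "Attack at dawn."
```

<p>Decryption is just encryption with the negated key, which is what makes brute-forcing all twenty-six shifts so easy.</p>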
<p>In each chapter, Sweigart begins by giving a high-level overview of the topics in that chapter, followed by python code which accomplishes the goal of the chapter, followed by a detailed description of what each block of code accomplishes. Readers get to see fully written code that does nontrivial things very quickly, but on the other hand the onus of code generation is entirely on the book and readers may have trouble adapting the concepts to other programming tasks. (But remember, this is more survey, less technical description). Further, Sweigart uses a number of good practices in his code that implicitly encourages good programming behaviors: modules are written with many well-named functions and well-named variables, and sufficiently modularly that earlier modules are imported and reused later.</p>
<p>But this book is not without faults. As with any survey material, one can always disagree on what topics are or are not included. The book covers five classical ciphers (Caesar, transposition, substitution, Vigenere, and affine) and one modern cipher (textbook-RSA), as well as the write-backwards cipher (to introduce python concepts) and the one-time pad (presented oddly as a Vigenere cipher whose key is the same length as the message). For some unknown reason, Sweigart chooses to refer to RSA almost everywhere as "the public key cipher", which I find both misleading (there are other public key ciphers) and giving insufficient attribution (the cipher is implemented in chapter 24, but "RSA" appears only once, as a footnote, in that chapter; hopefully the reader was paying close attention, as otherwise it would be rather hard to find out more about it).</p>
<p>Further, the choice of python topics (and their order) is sometimes odd. In truth, this book is almost language agnostic, and one could very easily adapt the presentation to any other scripting language.</p>
<p>In summary, this book is an excellent resource for the complete beginner who wants to learn something about programming and wants to learn something about ciphers. After reading this book, the reader will be a mid-beginner student of python (knee-deep is apt) and well-versed in classical ciphers. Should the reader feel inspired to learn more python, then he or she would probably feel comfortable diving into a tutorial or reference for their area of interest (like Full Stack Python if interested in web dev, or Python for Data Analysis if interested in data science). Or he or she might dive into a more complete monograph like Dive into Python or the monolithic Learn Python. Many fundamental topics (such as classes and objects, list comprehensions, data structures or algorithms) are not covered, and so "advanced" python resources would not be appropriate.</p>
<p>Further, should the reader feel inspired to learn more about cryptography, then I recommend that he or she consider Cryptanalysis by Gaines, which is a fun book aimed at diving deeper into classical pre-computer ciphers; a slightly heavier but still fun resource is "The Codebreakers" by Kahn. For much further cryptography, it's necessary to develop a bit of mathematical maturity, which is its own hurdle.</p>
<p>This book is not appropriate for everyone. An experienced python programmer could read this book in an hour, skipping the various descriptions of how python works along the way. An experienced programmer who doesn't know python could similarly read this book in a lazy afternoon. Both would probably do better reading a more advanced overview of either cryptography or python, based on what originally drew them to the book.</p>https://davidlowryduda.com/cracking-codes-with-python-a-book-reviewMon, 09 Apr 2018 03:14:15 +0000Notes from a talk at Tufts, Automorphic Forms Workshophttps://davidlowryduda.com/notes-from-a-talk-at-tufts-automorphic-forms-workshopDavid Lowry-Duda<p>On 19 March I gave a talk at the 32nd Automorphic Forms Workshop, which is hosted by Tufts this year. The content of the talk concerned counting points on hyperboloids, and in particular counting points on the three-dimensional hyperboloid</p>
<p>$$\begin{equation}
X^2 + Y^2 = Z^2 + h
\end{equation}$$</p>
<p>for any fixed integer $h$. But thematically, I wanted to give another concrete example of using modular forms to compute some sort of arithmetic data, and to mention how the apparently unrelated topic of spectral theory appears even in such an arithmetic application.</p>
<p>Somehow, starting from counting points on $X^2 + Y^2 = Z^2 + h$ (which appears simple enough on its own that I could probably put this in front of an elementary number theory class and they would feel comfortable experimenting away on the topic), one gets to very scary-looking expressions like</p>
<p>$$\begin{equation}
\sum_{t_j}
\langle P_h^k, \mu_j \rangle
\langle \theta^2 \overline{\theta} y^{3/4}, \mu_j \rangle +
\sum_{\mathfrak{a}}\int_{(1/2)}
\langle P_h^k, E_h^k(\cdot, u) \rangle
\langle \theta^2 \overline{\theta} y^{3/4}, E_h^k(\cdot, u) \rangle du,
\end{equation}$$</p>
<p>which is full of lots of non-obvious symbols and is generically intimidating.</p>
<p>Part of the theme of this talk is to give a very direct idea of how one gets to the very complicated spectral expansion from the original lattice-counting problem. Stated differently, perhaps part of the theme is to describe a simple-looking nail and a scary-looking hammer, and to show that the hammer actually works quite well in this case.</p>
<p>The slides for this talk are <a href="/wp-content/uploads/2018/03/Hyperboloids.pdf">available here</a>.</p>https://davidlowryduda.com/notes-from-a-talk-at-tufts-automorphic-forms-workshopWed, 21 Mar 2018 03:14:15 +0000Hosting a Flask App on WebFaction on a Non-root Domainhttps://davidlowryduda.com/hosting-a-flask-app-on-webfaction-on-a-non-root-domainDavid Lowry-Duda<p>Since I came to Warwick, I've been working extensively on the <a href="http://www.lmfdb.org/">LMFDB</a>, which uses python, sage, flask, and mongodb at its core. Thus I've become very familiar with flask. Writing a simple flask application is very quick and easy. So I thought it would be a good idea to figure out how to deploy a flask app on the server which runs this website, which is currently at WebFaction.</p>
<p>In short, it was not too hard, and now the app is set up for use. (It's not a public tool, so I won't link to it).</p>
<p>But there were a few things that I had to figure out which I would quickly forget. Following the variety of information I found online, the only nontrivial aspect was configuring the site to run on a non-root domain (like <code>davidlowryduda.com/subdomain</code> instead of at <code>davidlowryduda.com</code>). I'm writing this so as to not need to figure this out when I write and host more flask apps. (Which I'll almost certainly do, as it's so straightforward).</p>
<p>There are some uninteresting things one must do on WebFaction.</p>
<ol>
<li>Log into your account.</li>
<li>Add a new application of type <code>mod_wsgi</code> (and the desired version of python, which is hopefully 3.6+).</li>
<li>Add this application to the desired website and subdomain in the WebFaction control panel.</li>
</ol>
<p>After this, WebFaction will set up a skeleton "Hello World" mod_wsgi application with many reasonable server setting defaults. The remainder of the setup is done on the server itself.</p>
<p>In <code>~/webapps/application_name</code> there will now appear</p>
<p><pre><code class="bash">
apache2/ # Apache config files and bin
htdocs/ # Default location where Apache looks for the app
</code></pre></p>
<p>We won't change that structure. In htdocs<sup>1</sup>
<span class="aside"><sup>1</sup>I think htdocs is so named because it once stood for HyperText DOCuments. A bit of trivia.</span>
there is a file <code>index.py</code>, which is where apache expects to find a python wsgi application called <code>application</code>. We will place the flask app along this structure and point to it in <code>htdocs/index.py</code>.</p>
<p>Usually I will use a virtualenv here. So in <code>~/webapps/application_name</code>, I will run something like <code>virtualenv flask_app_venv</code> and then activate it with <code>source flask_app_venv/bin/activate</code>. Then pip install flask and whatever other python modules are necessary for the application to run. We will configure the server to use this virtual environment to run the app in a moment.</p>
<p>Copy the flask app, so that the resulting structure looks something like</p>
<p><pre><code class="bash">~/webapps/application_name:
- apache2/
- htdocs/
- flask_app_venv/
- flask_app/      # My flask app
  - config.py
  - libs/
  - main/
    - static/
    - templates/
    - __init__.py
    - views.py
    - models.py
</code></pre></p>
<p>I find it conceptually easiest if <code>flask_app/main/__init__.py</code> directly contains the flask <code>app</code>, so that it can be referenced by name in <code>htdocs/index.py</code>. The app can be made elsewhere (for instance, in a file like <code>flask_app/main/app.py</code>, which appears to be a common structure), but I assume that it is at least imported in <code>__init__.py</code>.</p>
<p>For example, <code>__init__.py</code> might look something like</p>
<p><pre><code class="python"># application_name/flask_app/main/__init__.py
# ... other import statements from project if necessary
from flask import Flask
app = Flask(__name__)
app.config.from_object('config')
# Importing the views for the rest of our site
# We do this here to avoid circular imports
# Note that I call it "main" where many call it "app"
from main import views
if __name__ == '__main__':
    app.run()
</code></pre></p>
<p>The Flask constructor returns exactly the sort of wsgi application that apache expects. With this structure, we can edit the <code>htdocs/index.py</code> file to look like</p>
<p><pre><code># application_name/htdocs/index.py
import sys
# append flask project files
sys.path.append('/home/username/webapps/application_name/flask_app/')
# launching our app
from main import app as application
</code></pre></p>
<p>Now the server knows the correct wsgi_application to serve.</p>
<p>We must configure it to use our python virtual environment (and we'll add a few additional convenience pieces). We edit <code>/apache2/conf/httpd.conf</code> as follows. Near the top of the file, certain modules are loaded. Add in the alias module, so that the modules look something like</p>
<p><pre><code class="apache">
#... other modules
LoadModule wsgi_module modules/mod_wsgi.so
LoadModule alias_module modules/mod_alias.so # <-- added
</code></pre></p>
<p>This allows us to alias the root of the site. Since all site functionality is routed through <code>htdocs/index.py</code>, we want to think of the root <code>/</code> as beginning with <code>/htdocs/index.py</code>. At the end of the file</p>
<p><pre><code class="apache">
Alias / /home/username/webapps/application_name/htdocs/index.py/
</code></pre></p>
<p>We now set the virtual environment to be used properly. There will be a set of lines containing names like <code>WSGIDaemonProcess</code> and <code>WSGIProcessGroup</code>. We edit these to refer to the correct python. WebFaction will have configured <code>WSGIDaemonProcess</code> to point to a local version of python by setting the python-path. Remove that, making that line look like</p>
<p><pre><code>
WSGIDaemonProcess application_name processes=2 threads=12
</code></pre></p>
<p>(or similar). We set the python path below, adding the line</p>
<p><pre><code>
WSGIPythonHome /home/username/webapps/application_name/flask_app_venv
</code></pre></p>
<p>I believe that this could also actually be done by setting python-path in <code>WSGIDaemonProcess</code>, but I find this more aesthetically pleasing.</p>
<p>We must also modify the <code>Directory</code> section. Edit it to look like</p>
<p><pre><code>
<Directory /home/username/webapps/application_name/htdocs>
AddHandler wsgi-script .py
RewriteEngine On # <-- added
RewriteBase / # <-- added
WSGIScriptReloading On # <-- added
</Directory>
</code></pre></p>
<p>It may very well be that I don't use the RewriteEngine at all, but if I do then this is where it's done. Script reloading is a nice convenience, especially while reloading and changing the app.</p>
<p>I note that it may be convenient to add an additional alias for static file hosting,</p>
<p><pre><code class="apache">
Alias /static/ /home/username/webapps/application_name/flask_app/main/static/
</code></pre></p>
<p>though I have not used this so far. (I get the same functionality through controlling the flask views appropriately).</p>
<p>The rest of this file has been setup by WebFaction for us upon creating the wsgi application.</p>
<h2>If the application is on a non-root domain...</h2>
<p>If the application is to be run on a non-root domain, such as <code>davidlowryduda.com/subdomain</code>, then there is currently a problem. In flask, when using url getters like <code>url_for</code>, urls will be returned as though there is no subdomain. And thus all urls will be incorrect. It is necessary to alter provided urls in some way.</p>
<p>The way that worked for me was to insert a tiny bit of middleware in the wsgi_application. Alter <code>htdocs/index.py</code> to read</p>
<p><pre><code>#application_name/htdocs/index.py
import sys
# append flask project files
sys.path.append('/home/username/webapps/application_name/flask_app/')
# subdomain url rerouting middleware
from webfaction_middleware import Middleware
from main import app
# set app through middleware
application = Middleware(app)
</code></pre></p>
<p>Now of course we need to write this middleware.</p>
<p>In <code>application_name/flask_app</code>, I create a file called <code>webfaction_middleware.py</code>, which reads</p>
<p><pre><code># application_name/flask_app/webfaction_middleware.py
class Middleware(object):  # python2 aware
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        app_url = '/subdomain'
        if app_url != '/':
            environ['SCRIPT_NAME'] = app_url
        return self.app(environ, start_response)
</code></pre></p>
<p>I now have a template file in which I keep <code>app_url = '/'</code> so that I can forget this and not worry, but that is where the subdomain url is prepended. <em>Note that the leading slash is necessary.</em> When I first tried using this, I omitted the leading slash. The application worked sometimes, and horribly failed in some other places. Some urls were correctly constructed, but most were not. I didn't try to figure out which ones were doomed to fail — but it took me an embarrassingly long time to realize that prepending a slash solved all problems.</p>
<p>The magical names <code>environ</code> and <code>start_response</code> appear because the flask app is a wsgi application, and this is the generic api of wsgi applications.</p>
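<p>To see concretely what the middleware changes, here is a self-contained sketch (a dummy app stands in for the flask app; this demonstration is mine, not part of the original setup) that drives the wrapped application by hand:</p>

```python
# A dummy WSGI app standing in for the flask app: it just reports
# the SCRIPT_NAME and PATH_INFO it receives.
def dummy_app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    body = "SCRIPT_NAME=%r PATH_INFO=%r" % (
        environ.get('SCRIPT_NAME', ''), environ.get('PATH_INFO', ''))
    return [body.encode('utf-8')]

class Middleware(object):
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        app_url = '/subdomain'
        if app_url != '/':
            environ['SCRIPT_NAME'] = app_url
        return self.app(environ, start_response)

def call(app, path):
    """Invoke a WSGI app directly with a minimal environ dict."""
    def start_response(status, headers):
        pass
    environ = {'REQUEST_METHOD': 'GET', 'PATH_INFO': path, 'SCRIPT_NAME': ''}
    return b''.join(app(environ, start_response)).decode('utf-8')

print(call(dummy_app, '/about'))
# SCRIPT_NAME='' PATH_INFO='/about'
print(call(Middleware(dummy_app), '/about'))
# SCRIPT_NAME='/subdomain' PATH_INFO='/about'
```

<p>Flask reads <code>SCRIPT_NAME</code> when constructing urls, which is why setting it is enough for <code>url_for</code> to prepend the subdomain.</p>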
<h2>Now it's ready</h2>
<p>Restart the apache server (<code>apache2/bin/restart</code>) and go. Note that when incrementally making changes above, some changes can take a few minutes to fully propagate. It's only doing it the first time which takes some thought.</p>
<p>Today I give a talk on counting lattice points on one-sheeted hyperboloids. These are the shapes described by
$$ X_1^2 + \cdots + X_{d-1}^2 = X_d^2 + h,$$
where $h > 0$ is a positive integer. The question is: how many lattice points $x$ are on such a hyperboloid with $| x |^2 \leq R$; or equivalently, how many lattice points are on such a hyperboloid and contained within a ball of radius $\sqrt R$ centered at the origin?</p>
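<p>As a quick sanity check on the counting problem (my own sketch, not part of the talk), one can count these lattice points by brute force for small $h$ and $R$ in the case $d = 3$:</p>

```python
import math

def hyperboloid_count(h, R):
    """Count integer points (x, y, z) with x^2 + y^2 = z^2 + h
    and x^2 + y^2 + z^2 <= R."""
    count = 0
    b = math.isqrt(R) + 1
    for x in range(-b, b + 1):
        for y in range(-b, b + 1):
            z2 = x * x + y * y - h
            if z2 < 0:
                continue
            z = math.isqrt(z2)
            if z * z != z2:
                continue
            # count both signs of z, but only once when z == 0
            for zz in ({z, -z} if z else {0}):
                if x * x + y * y + zz * zz <= R:
                    count += 1
    return count

print(hyperboloid_count(h=1, R=100))  # 116
```

<p>Brute force is fine for experimentation, but its cost grows linearly in $R$ per axis; the modular-forms machinery below is what gives actual asymptotics.</p>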
<p>I describe my general approach of transforming this into a question about the behavior of modular forms, and then using spectral techniques from the theory of modular forms to understand this behavior. This becomes a question of understanding the shifted convolution Dirichlet series
$$ \sum_{n \geq 0} \frac{r_{d-1}(n+h)r_1(n)}{(2n + h)^s}.$$
Ultimately this comes from the modular form $\theta^{d-1}(z) \overline{\theta(z)}$, where
$$ \theta(z) = \sum_{m \in \mathbb{Z}} e^{2 \pi i m^2 z}.$$</p>
<p>Here are the <a href="/wp-content/uploads/2018/01/Hyperboloids.pdf">slides for this talk</a>. Note that this talk is based on chapter 5 of my thesis.</p>https://davidlowryduda.com/slides-from-a-talk-at-the-joint-math-meetings-2018Wed, 10 Jan 2018 03:14:15 +0000We begin bombing Korea in five minutes: Parallels to Reagan in 1984https://davidlowryduda.com/we-begin-bombing-korea-in-five-minutes-parallels-to-reagan-in-1984David Lowry-Duda<p>On a day when President and Commander-in-Chief Donald Trump tweets belligerent messages aimed at North Korea, I ask: "Have we seen anything like this ever before?" In fact, we have. Let's review a tale from Reagan.</p>
<p>August 11, 1984: President Reagan is preparing for his weekly NPR radio address. The opening line of his address was to be</p>
<blockquote>My fellow Americans, I'm pleased to tell you that today I signed legislation that will allow student religious groups to begin enjoying a right they've too long been denied — the freedom to meet in public high schools during nonschool hours, just as other student groups are allowed to do.<sup>1</sup>
<span class="aside"><sup>1</sup>The whole address can be read <a href="https://www.reaganlibrary.gov/sites/default/files/archives/speeches/1984/81184a.htm">here</a>.</span>
</blockquote>
<p>During the sound check, President Reagan joked</p>
<blockquote>My fellow Americans, I'm pleased to tell you today that I've signed legislation that will outlaw Russia forever. We begin bombing in five minutes.</blockquote>
<p>[audio mp3="http://davidlowryduda.com/wp-content/uploads/2018/01/ReaganBombsRussia.mp3"][/audio]</p>
<p>This was met with mild chuckles from the audio technicians, and it wasn't broadcast intentionally. But it was leaked, and reached the Russians shortly thereafter.</p>
<p>They were not amused.</p>
<p>The Soviet army was placed on alert once word of Reagan's joke reached them. They dropped their alert later, presumably when the bombing didn't begin. Over the next week, this gaffe drew a lot of attention. Here is NBC's Tom Brokaw addressing "the joke heard round the world":</p>
<iframe src="https://www.youtube.com/embed/bN5wL1nw7XA" width="640" height="360" frameborder="0" allowfullscreen="allowfullscreen"></iframe>
<p>The Pittsburgh Post-Gazette <a href="https://news.google.com/newspapers?nid=1129&dat=19840816&id=6NVRAAAAIBAJ&sjid=Hm4DAAAAIBAJ&pg=3722,4149076">ran an article</a> containing some of the Soviet responses five days later, on 16 August 1984.<sup>2</sup>
<span class="aside"><sup>2</sup>And amazingly, google news has direct and free access to this article.</span>
Similar articles ran in most major US newspapers that week, including the New York Times (which apparently retyped or OCR'd these statements, and <a href="http://www.nytimes.com/1984/08/16/world/texts-of-statements-by-us-and-soviet-on-jest.html">these are now available on their site</a>).</p>
<p>The major Russian papers Pravda and Izvestia, as well as the Soviet News Agency TASS, all decried the President's remarks. Of particular note are two paragraphs from TASS. The first is reminiscent of many responses on Twitter today,</p>
<blockquote>Tass is authorized to state that the Soviet Union deplores the U.S. President's invective, unprecedentedly hostile toward the U.S.S.R. and dangerous to the cause of peace.</blockquote>
<p>The second is a bit chilling, especially with modern context,</p>
<blockquote>This conduct is incompatible with the high responsibility borne by leaders of states, particularly nuclear powers, for the destinies of their own peoples and for the destinies of mankind.</blockquote>
<p>In 1984, an accidental microphone gaffe on behalf of the President led to public outcry both foreign and domestic; Soviet news outlets jumped on the opportunity to include additional propaganda<sup>3</sup>
<span class="aside"><sup>3</sup>which I do not bother repeating, but some can be read on the second page of the Pittsburgh Post-Gazette or the New York Times articles linked above</span>
. It is easy to confuse some of Donald Trump's deliberate actions today with others' mistakes. I hope that he knows what he is doing.</p>https://davidlowryduda.com/we-begin-bombing-korea-in-five-minutes-parallels-to-reagan-in-1984Thu, 04 Jan 2018 03:14:15 +0000A Jupyter Notebook from a SageMath tutorialhttps://davidlowryduda.com/a-jupyter-notebook-from-a-sagemath-tutorialDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/a-jupyter-notebook-from-a-sagemath-tutorialThu, 02 Nov 2017 03:14:15 +0000Having no internet for four half weeks isn't necessarily all badhttps://davidlowryduda.com/having-no-internet-for-four-half-weeks-inst-necessarily-all-badDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/having-no-internet-for-four-half-weeks-inst-necessarily-all-badSat, 07 Oct 2017 03:14:15 +0000A Short Note on Gaps Between Powers of Consecutive Primeshttps://davidlowryduda.com/on-gaps-between-powers-of-consecutive-primesDavid Lowry-Duda<h2>Introduction</h2>
<p>The primary purpose of this note is to collect a few hitherto unnoticed or unpublished results concerning gaps between powers of consecutive primes. The study of gaps between primes has attracted many mathematicians and led to many deep realizations in number theory. The literature is full of conjectures, both open and closed, concerning the nature of primes.</p>
<p>In a series of stunning developments, Zhang, Maynard, and Tao<sup>1</sup>
<span class="aside"><sup>1</sup>James Maynard. Small gaps between primes. Ann. of Math. (2), 181(1):383–413, 2015.</span>
<sup>2</sup>
<span class="aside"><sup>2</sup> Yitang Zhang. Bounded gaps between primes. Ann. of Math. (2), 179(3):1121–1174, 2014.</span>
made the first major progress towards proving the prime $k$-tuple conjecture, and successfully proved the existence of infinitely many pairs of primes differing by a fixed number. As of now, the best known result is due to the massive collaborative Polymath8 project,<sup>3</sup>
<span class="aside"><sup>3</sup> D. H. J. Polymath. Variants of the {S}elberg sieve, and bounded intervals containing many primes. Res. Math. Sci., 1:Art. 12, 83, 2014.</span>
which showed that there are infinitely many pairs of primes of the form $p, p+246$. In the excellent expository article, <sup>4</sup>
<span class="aside"><sup>4</sup> Andrew Granville. Primes in intervals of bounded length. Bull. Amer. Math. Soc. (N.S.), 52(2):171–222, 2015.</span>
Granville describes the history and ideas leading to this breakthrough, and also discusses some of the potential impact of the results. This note should be thought of as a few more results following from the ideas of Zhang, Maynard, Tao, and the Polymath8 project.</p>
<p>Throughout, $p_n$ will refer to the $n$th prime number. In a paper, <sup>5</sup>
<span class="aside"><sup>5</sup> Dorin Andrica. Note on a conjecture in prime number theory. Studia Univ. Babe\c s-Bolyai Math., 31(4):44–48, 1986.</span>
Andrica conjectured that
\begin{equation}\label{eq:Andrica_conj}
\sqrt{p_{n+1}} - \sqrt{p_n} < 1
\end{equation}
holds for all $n$. This conjecture, and related statements, is described in Guy's Unsolved Problems in Number Theory.
<sup>6</sup>
<span class="aside"><sup>6</sup> Richard K. Guy. Unsolved problems in number theory. Problem Books in Mathematics. Springer-Verlag, New York, third edition, 2004.</span>
It is quickly checked that this holds for primes up to $4.26 \cdot 10^{8}$ in <a href="http://www.sagemath.org/">sagemath</a></p>
<p><pre><code class="python"># Sage version 8.0.rc1
# started with `sage -ipython`
# sage has pari/GP, which can generate primes super quickly
from sage.all import primes_first_n
# import izip since we'll be zipping a huge list, and sage uses python2 which has
# non-iterable zip by default
from itertools import izip
# The magic number 23150000 appears because pari/GP can't compute
# primes above 436273290 due to fixed precision arithmetic
ps = primes_first_n(23150000) # This is every prime up to 436006979
# Verify Andrica's Conjecture for all prime pairs up to 436006979
gap = 0
for a, b in izip(ps[:-1], ps[1:]):
    if b**.5 - a**.5 > gap:
        A, B, gap = a, b, b**.5 - a**.5
        print(gap)
print("")
print(A)
print(B)
</code></pre></p>
<p>In approximately 20 seconds on my machine (so it would not be hard to go much higher, except that I would have to go beyond pari/GP to generate primes), this completes and prints out the following output.</p>
<p><pre><code class="python">0.317837245196
0.504017169931
0.670873479291
7
11
</code></pre></p>
<p>Thus the largest value of $\sqrt{p_{n+1}} - \sqrt{p_n}$ was merely $0.670\ldots$, and occurred on the gap between $7$ and $11$.</p>
<p>So it appears very likely that the conjecture is true. However it is also likely that new, novel ideas are necessary before the conjecture is decided.</p>
<p>Andrica's Conjecture can also be stated in terms of prime gaps. Let $g_n = p_{n+1} - p_n$ be the gap between the $n$th prime and the $(n+1)$st prime. Then Andrica's Conjecture is equivalent to the claim that $g_n < 2 \sqrt{p_n} + 1$. In this direction, the best known result is due to Baker, Harman, and Pintz, <sup>7</sup>
<span class="aside"><sup>7</sup> R. C. Baker, G. Harman, and J. Pintz. The difference between consecutive primes. {II}. Proc. London Math. Soc. (3), 83(3):532–562, 2001. </span>
who show that $g_n \ll p_n^{0.525}$.</p>
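<p>The prime-gap form of the conjecture is also easy to test numerically. Here is a quick pure-python check (my own sketch, independent of the sage snippet above):</p>

```python
import math

def primes_below(n):
    """Simple sieve of Eratosthenes returning all primes below n."""
    sieve = bytearray([1]) * n
    sieve[0] = sieve[1] = 0
    for i in range(2, math.isqrt(n) + 1):
        if sieve[i]:
            sieve[i * i::i] = bytearray(len(range(i * i, n, i)))
    return [i for i, flag in enumerate(sieve) if flag]

ps = primes_below(10**6)
# Andrica in gap form: g_n < 2*sqrt(p_n) + 1 for consecutive primes p, q
assert all(q - p < 2 * math.sqrt(p) + 1 for p, q in zip(ps, ps[1:]))
print("Andrica's inequality holds for all primes below 10^6")
```

<p>Of course this verifies nothing asymptotically; it only reinforces the numerical evidence already described.</p>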
<p>In 1985, Sándor<sup>8</sup>
<span class="aside"><sup>8</sup> József Sándor. On certain sequences and series with applications in prime number theory. Gaz. Mat. Met. Inf, 6:1–2, 1985.</span>
proved that \begin{equation}\label{eq:Sandor} \liminf_{n \to \infty} \sqrt[4]{p_n} (\sqrt{p_{n+1}} - \sqrt{p_n}) = 0. \end{equation} The close relation to Andrica's Conjecture \eqref{eq:Andrica_conj} is clear. The first result of this note is to strengthen this result.</p>
<blockquote>
<div class="theorem">
<strong>Theorem</strong>
Let $\alpha, \beta \geq 0$, and $\alpha + \beta < 1$. Then
\begin{equation}\label{eq:main}
\liminf_{n \to \infty} p_n^\beta (p_{n+1}^\alpha - p_n^\alpha) = 0.
\end{equation}
</div></blockquote>
<p>We prove this theorem below. Choosing $\alpha = \frac{1}{2}, \beta = \frac{1}{4}$ verifies Sándor's result \eqref{eq:Sandor}. But choosing $\alpha = \frac{1}{2}, \beta = \frac{1}{2} - \epsilon$ for a small $\epsilon > 0$ gives stronger results.</p>
<p>This theorem leads naturally to the following conjecture.</p>
<blockquote>
<div class="conjecture">
<strong>Conjecture</strong>
For any $0 \leq \alpha < 1$, there exists a constant $C(\alpha)$ such that
\begin{equation}
p_{n+1}^\alpha - p_{n}^\alpha \leq C(\alpha)
\end{equation}
for all $n$.
</div></blockquote>
<p>A simple heuristic argument, given in the last section below, shows that this Conjecture follows from Cramer's Conjecture.</p>
<p>It is interesting to note that there are generalizations of Andrica's Conjecture. One can ask what the smallest $\gamma$ is such that
\begin{equation}
p_{n+1}^{\gamma} - p_n^{\gamma} = 1
\end{equation}
has a solution. This is known as the Smarandache Conjecture, and it is believed that the smallest such $\gamma$ is approximately
\begin{equation}
\gamma \approx 0.5671481302539\ldots
\end{equation}
The digits of this constant, sometimes called 'the Smarandache constant,' are the contents of sequence A038458 on the OEIS. It is possible to generalize this question as well.</p>
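<p>The constant can be recovered numerically. Assuming (as is believed) that the extremal pair of consecutive primes is $(113, 127)$, one solves $127^\gamma - 113^\gamma = 1$ by bisection:</p>

```python
def solve_gamma(p, q, tol=1e-13):
    """Solve q**g - p**g = 1 for g by bisection; for q > p > 1 the
    left-hand side is increasing in g, so bisection applies."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if q**mid - p**mid < 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(solve_gamma(113, 127))  # ≈ 0.5671481302539
```

<p>The same routine answers instances of the Open Question below for other values of $C$: replace the threshold <code>1</code> by <code>C</code>.</p>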
<blockquote>
<div class="conjecture">
<strong>Open Question</strong>
For any fixed constant $C$, what is the smallest $\alpha = \alpha(C)$ such that
\begin{equation}
p_{n+1}^\alpha - p_n^\alpha = C
\end{equation}
has solutions? In particular, how does $\alpha(C)$ behave as a function of $C$?
</div></blockquote>
<p>This question does not seem to have been approached in any sort of generality, aside from the case when $C = 1$.</p>
<h2>Proof of Theorem</h2>
<p>The idea of the proof is very straightforward. We estimate \eqref{eq:main} across prime pairs $p, p+246$, relying on the recent proof from Polymath8 that infinitely many such primes exist.</p>
<p>Fix $\alpha, \beta \geq 0$ with $\alpha + \beta < 1$. Applying the mean value theorem of calculus on the function $x \mapsto x^\alpha$ shows that
\begin{align}
p^\beta \big( (p+246)^\alpha - p^\alpha \big) &= p^\beta \cdot 246 \alpha q^{\alpha - 1} \\
&\leq p^\beta \cdot 246 \alpha p^{\alpha - 1} = 246 \alpha p^{\alpha + \beta - 1}, \label{eq:bound}
\end{align}
for some $q \in [p, p+246]$. Passing to the inequality in the second line is done by realizing that $q^{\alpha - 1}$ is a decreasing function in $q$. As $\alpha + \beta - 1 < 0$, the bound \eqref{eq:bound} goes to zero as $p \to \infty$.</p>
<p>Therefore
\begin{equation}
\liminf_{n \to \infty} p_n^\beta (p_{n+1}^\alpha - p_n^\alpha) = 0,
\end{equation}
as was to be proved.</p>
<h2>Further Heuristics</h2>
<p>Cramer's Conjecture states that there exists a constant $C$ such that for all sufficiently large $n$,
\begin{equation}
p_{n+1} - p_n < C(\log n)^2.
\end{equation}
Thus for a sufficiently large prime $p$, the subsequent prime is at most $p + C (\log p)^2$. Performing a similar estimation as above shows that
\begin{equation}
(p + C (\log p)^2)^\alpha - p^\alpha \leq C (\log p)^2 \alpha p^{\alpha - 1} =
C \alpha \frac{(\log p)^2}{p^{1 - \alpha}}.
\end{equation}
As the right hand side vanishes as $p \to \infty$, we see that it is natural to expect that the main Conjecture above is true. More generally, we should expect the following, stronger conjecture.</p>
<blockquote>
<div class="conjecture">
<strong>Conjecture'</strong>
For any $\alpha, \beta \geq 0$ with $\alpha + \beta < 1$, there exists a constant $C(\alpha, \beta)$ such that
\begin{equation}
p_n^\beta (p_{n+1}^\alpha - p_n^\alpha) \leq C(\alpha, \beta).
\end{equation}
</div></blockquote>
<h2>Additional Notes</h2>
<p>I wrote this note in between waiting in never-ending queues while I sort out my internet service and other mundane activities necessary upon moving to another country. I had just read some papers on the arXiv, and I noticed a paper which referred to unknown statuses concerning Andrica's Conjecture. So then I sat down and wrote this up.</p>
<p>I am somewhat interested in qualitative information concerning the Open Question in the introduction, and I may return to this subject unless someone beats me to it.</p>
<p>This note is (mostly, minus the code) <a href="/wp-content/uploads/2017/09/consecprimes.pdf">available as a pdf</a> and will shortly appear on the arXiv. This was originally written in LaTeX and converted for display on this site using a set of tools I've written based around <a href="https://github.com/davidlowryduda/latex2jax">latex2jax</a>, which is available on my github.</p>https://davidlowryduda.com/on-gaps-between-powers-of-consecutive-primesFri, 22 Sep 2017 03:14:15 +0000Sage Days 87 Demo: Interfacing between sage and the LMFDBhttps://davidlowryduda.com/sage-days-87-demo-interfacing-between-sage-and-the-lmfdbDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/sage-days-87-demo-interfacing-between-sage-and-the-lmfdbThu, 20 Jul 2017 03:14:15 +0000Paper: Second moments in the generalized Gauss circle problemhttps://davidlowryduda.com/second-moments-in-the-generalized-gauss-circle-problemDavid Lowry-Duda<p>This is joint work with Thomas Hulse, Chan Ieong Kuan, and Alexander Walker. This is a natural successor to our previous work (see their announcements: <a href="/paper-the-second-moments-of-sums-of-fourier-coefficients-of-cusp-forms/">one</a>, <a href="/papershort-interval-averages-of-sums-of-fourier-coefficients-of-cusp-forms/">two</a>, <a href="/paper-sign-changes-of-coefficients-and-sums-of-coefficients-of-cusp-forms/">three</a>) concerning bounds and asymptotics for sums of coefficients of modular forms.</p>
<p>We now have a variety of results concerning the behavior of the partial sums</p>
<p>$$ S_f(X) = \sum_{n \leq X} a(n) $$</p>
<p>where $f(z) = \sum_{n \geq 1} a(n) e(nz)$ is a GL(2) cuspform. The primary focus of our previous work was to understand the Dirichlet series</p>
<p>$$ D(s, S_f \times S_f) = \sum_{n \geq 1} \frac{S_f(n)^2}{n^s} $$</p>
<p>completely, give its meromorphic continuation to the plane (this was the major topic of the first paper in the series), and to perform classical complex analysis on this object in order to describe the behavior of $S_f(n)$ and $S_f(n)^2$ (this was done in the first paper, and was the major topic of the second paper of the series). One motivation for studying this type of problem is that bounds for $S_f(n)$ are analogous to bounds for the error term in the lattice point discrepancy for circles.</p>
<p>That is, let $S_2(R)$ denote the number of lattice points in a circle of radius $\sqrt{R}$ centered at the origin. Then we expect that $S_2(R)$ is approximately the area of the circle, plus or minus some error term. We write this as</p>
<p>$$ S_2(R) = \pi R + P_2(R),$$</p>
<p>where $P_2(R)$ is the error term. We refer to $P_2(R)$ as the "lattice point discrepancy" — it describes the discrepancy between the number of lattice points in the circle and the area of the circle. Determining the size of $P_2(R)$ is a very famous problem called the Gauss circle problem, and it has been studied for over 200 years. We believe that $P_2(R) = O(R^{1/4 + \epsilon})$, but that is not known to be true.</p>
<p>The Gauss circle problem can be cast in the language of modular forms. Let $\theta(z)$ denote the standard Jacobi theta series,</p>
<p>$$ \theta(z) = \sum_{n \in \mathbb{Z}} e^{2\pi i n^2 z}.$$</p>
<p>Then</p>
<p>$$ \theta^2(z) = 1 + \sum_{n \geq 1} r_2(n) e^{2\pi i n z},$$</p>
<p>where $r_2(n)$ denotes the number of representations of $n$ as a sum of $2$ (positive or negative) squares. The function $\theta^2(z)$ is a modular form of weight $1$ on $\Gamma_0(4)$, but it is not a cuspform. However, the sum</p>
<p>$$ \sum_{n \leq R} r_2(n) = S_2(R),$$</p>
<p>and so the partial sums of the coefficients of $\theta^2(z)$ indicate the number of lattice points in the circle of radius $\sqrt R$. Thus $\theta^2(z)$ gives access to the Gauss circle problem.</p>
<p>More generally, one can consider the number of lattice points in a $k$-dimensional sphere of radius $\sqrt R$ centered at the origin, which should approximately be the volume of that sphere,</p>
<p>$$ S_k(R) = \mathrm{Vol}(B(\sqrt R)) + P_k(R) = \sum_{n \leq R} r_k(n),$$</p>
<p>giving a $k$-dimensional lattice point discrepancy. For large dimension $k$, one should expect that the circle problem is sufficient to give good bounds and understanding of the size and error of $S_k(R)$. For $k \geq 5$, the true order of growth for $P_k(R)$ is known (up to constants).</p>
<p>Therefore it happens to be that the small (meaning 2 or 3) dimensional cases are both the most interesting, given our predilection for 2 and 3 dimensional geometry, and the most enigmatic. For a variety of reasons, the three dimensional case is very challenging to understand, and is perhaps even more enigmatic than the two dimensional case.</p>
<p>Strong evidence for the conjectured size of the lattice point discrepancy comes in the form of mean square estimates. By looking at the square, one doesn't need to worry about oscillation from positive to negative values. And by averaging over many radii, one hopes to smooth out some of the individual bumps. These mean square estimates take the form</p>
<p>$$\begin{align}
\int_0^X P_2(t)^2 dt &= C X^{3/2} + O(X \log^2 X) \\
\int_0^X P_3(t)^2 dt &= C' X^2 \log X + O(X^2 \sqrt{\log X}).
\end{align}$$</p>
<p>These indicate that the average size of $P_2(R)$ is $R^{1/4}$, and that the average size of $P_3(R)$ is $R^{1/2}$. In the two dimensional case, notice that the error term in the mean square asymptotic has pretty significant separation. It has essentially a $\sqrt X$ power-savings over the main term. But in the three dimensional case, there is no power separation. Even with significant averaging, we are only just capable of distinguishing a main term at all.</p>
<p>It is also interesting, but for more complicated reasons, that the main term in the three dimensional case has a log term within it. This is unique to the three dimensional case. But that is a description for another time.</p>
<p>In a <a href="https://arxiv.org/abs/1703.10347">paper that we recently posted to the arxiv</a>, we show that the Dirichlet series</p>
<p>$$ \sum_{n \geq 1} \frac{S_k(n)^2}{n^s} $$</p>
<p>and</p>
<p>$$ \sum_{n \geq 1} \frac{P_k(n)^2}{n^s} $$</p>
<p>for $k \geq 3$ have understandable meromorphic continuation to the plane. Of particular interest is the $k = 3$ case, of course. We then investigate smoothed and unsmoothed mean square results. In particular, we prove the following result.</p>
<blockquote><strong>Theorem</strong>
$$\begin{align} \int_0^\infty P_k(t)^2 e^{-t/X} dt &= C_3 X^2 \log X + C_4 X^{5/2} \\ &\quad + C_k X^{k-1} + O(X^{k-2}) \end{align}$$</blockquote>
<p>In this statement, the term with $C_3$ only appears in dimension $3$, and the term with $C_4$ only appears in dimension $4$. This should really be thought of as saying that we understand the Laplace transform of the square of the lattice point discrepancy as well as can be desired.</p>
<p>We are also able to improve the sharp second mean in the dimension 3 case, showing in particular the following.</p>
<blockquote><strong>Theorem</strong>
There exists $\lambda > 0$ such that
$$\int_0^X P_3(t)^2 dt = C X^2 \log X + D X^2 + O(X^{2 - \lambda}).$$</blockquote>
<p>We do not actually compute what we might take $\lambda$ to be, but we believe (informally) that $\lambda$ can be taken as $1/5$.</p>
<p>The major themes behind these new results are already present in the first paper in the series. The new ingredient involves handling the behavior of non-cuspforms at the cusps on the analytic side, and handling the apparent main terms (in this case, the volume of the ball) on the combinatorial side.</p>
<p>There is an additional difficulty that arises in the dimension 2 case which makes it distinct. But soon I will describe a different forthcoming work in that case.</p>https://davidlowryduda.com/second-moments-in-the-generalized-gauss-circle-problemFri, 26 May 2017 03:14:15 +0000How fat would we have to get to balance carbon emissions?https://davidlowryduda.com/how-fat-would-we-have-to-get-to-balance-carbon-emissionsDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/how-fat-would-we-have-to-get-to-balance-carbon-emissionsSun, 21 May 2017 03:14:15 +0000Slides from a Dissertation Defensehttps://davidlowryduda.com/slides-from-a-dissertation-defenseDavid Lowry-Duda<p>I just defended my dissertation. Thank you to Jeff, Jill, and Dinakar for being on my Defense Committee. In this talk, I discuss some of the ideas and follow-ups on my <a href="/wp-content/uploads/2017/04/thesis.pdf">thesis</a>. I'll also take this moment to include the dedication in my thesis.</p>
<p><a href="/wp-content/uploads/2017/04/ThesisDedication.png"><img class="wp-image-2316 aligncenter" src="/wp-content/uploads/2017/04/ThesisDedication.png" alt="ThesisDedication" width="436" height="217" /></a>Here are <a href="/wp-content/uploads/2017/04/DissertationDefense.pdf">the slides</a> from my defense.</p>
<p>After the defense, I gave Jeff and Jill a <a href="/wp-content/uploads/2017/04/family_tree.pdf">poster of our family tree</a>. I made this using data from Math Genealogy, which has so much data.</p>
<p><a href="/wp-content/uploads/2017/04/ThesisFrontPage.png"><img class="wp-image-2315 aligncenter" src="/wp-content/uploads/2017/04/ThesisFrontPage-243x300.png" alt="ThesisFrontPage" width="320" height="395" /></a></p>https://davidlowryduda.com/slides-from-a-dissertation-defenseFri, 21 Apr 2017 03:14:15 +0000Experimenting with latex2html5: PSTricks to HTML interactivityhttps://davidlowryduda.com/experimenting-with-latex2html5-pstricks-to-html-interactivityDavid Lowry-Duda<p>I recently learned about <a href="http://latex2html5.com/">latex2html5</a>, a javascript library which allows one to write LaTeX and PSTricks to produce interactive objects on websites. At its core, it functions in a similar way to MathJax, which is what I use to generate mathematics on this (and my other) sites. As an example of MathJax, I can write the following.</p>
<p>$$ \int_0^1 f(x) dx = F(1) - F(0). $$</p>
<p>The dream of latex2html5 is to be able to describe a diagram using the language of PSTricks inside LaTeX, throw in a bit of sugar to describe how interactivity should work on the web, and then render this to a beautiful svg using javascript.</p>
<p>Unfortunately, I did not try to make this work on Wordpress (as Wordpress is a bit finicky about how it interacts with javascript). So instead, I wrote a more detailed description about latex2html5, including some examples and some criticisms, on my non-Wordpress website <a href="http://david.lowryduda.com/onlatex2html5.html">david.lowryduda.com</a>.</p>
<p><strong>EDITED LATER</strong>: As I've transitioned away from Wordpress, I can now include
this here. This is done in <a href="/pstricks-test">this post</a>.</p>https://davidlowryduda.com/experimenting-with-latex2html5-pstricks-to-html-interactivityTue, 11 Apr 2017 03:14:15 +0000pstrickshttps://davidlowryduda.com/pstricks-testDavid Lowry-Duda<div class="content">
<div class="main">
I recently learned about <a href="http://latex2html5.com/">latex2html5</a>, a javascript library which
allows one to write LaTeX and PSTricks to produce interactive objects on websites.<sup>1</sup>
<span class="aside"><sup>1</sup>This is part of a larger project called <a href="https://github.com/pyramation/Mathapedia">Mathapedia</a>,
which is an interesting endeavor that I haven't played around with too much yet. But more importantly,
the author notes <a href="https://news.ycombinator.com/item?id=6758992">elsewhere</a> that this is a
proof of concept for a deeper project. So it is to be expected that this is a bit incomplete, and from
this point of view this is a great proof of concept.</span>
At its core, it
functions in a similar way to MathJax, which is what I use to generate
mathematics on this (and my other) sites.
As an example of MathJax, I can write the following.<sup>2</sup>
<span class="aside"><sup>2</sup>This is used all over
this site.</span>
\begin{equation*}
\int_0^1 f(x) dx = F(1) - F(0).
\end{equation*}
With latex2html5, it is very easy to produce interactive diagrams such as the following: (if you do not
see a diagram, it may be necessary to reload the page once — I haven't quite figured that out yet)
<div class="pstricks">
<script type="text/latex">
\begin{pspicture}(-3,-3)(3,3)
\pscircle(0,0){2}
\userline[linewidth=1.5 pt, linecolor=red]{->}(0,0)(2,2)
\end{pspicture}
</script>
</div>
(move your mouse
around near the circle: the line will follow your mouse):
This is created with the extremely simple PSTricks code:
<pre>
\begin{pspicture}(-3,-3)(3,3)
\pscircle(0,0){2}
\userline[linewidth=1.5 pt, linecolor=red]{->}(0,0)(2,2)
\end{pspicture}
</pre>
The library is not perfect, however. For instance, I cannot find a way within the library itself to make
the circle white.
One nice feature of MathJax is that it plays very nicely with css.
There are color features in the PSTricks library that latex2html5 bases its syntactical commands on,
but not all of these features are implemented.
However, one can make some very nice looking interactive diagrams, such as the following:<sup>3</sup>
<span class="aside"><sup>3</sup>It turns out that this doesn't play well with modern mathjax. The console will now show that this next script raises an error. I've removed mathjax interaction in the following script, so there are no longer labels.</span>
<div class="pstricks">
<script type="text/latex">
\begin{pspicture}(-5,-5)(5,5)
\psline{->}(0,-3.75)(0,3.75)
\psline{->}(-3.75,0)(3.75,0)
\pscircle(0,0){ 3 }
\userline[linewidth=1.5 pt]{->}(1.500,0.000)(2.121,2.121)
\userline[linewidth=1.5 pt,linecolor=white]{->}(0,0.000)(2.121,2.121){(x>0) ? 3 * cos( atan(-y/x) ) :
-3 * cos( atan(-y/x) ) }{ (x>0) ? -3 * sin( atan(-y/x) ) : 3 * sin( atan(-y/x) )}
\userline[linewidth=1.5 pt,linestyle=dashed](-1.500,0.000)(2.121,2.121){x}{0}{x}{y}
\userline[linewidth=1.5 pt,linestyle=dashed](-1.500,0.000)(2.121,2.121){0}{y}{x}{y}
\psline{<->}(-3,-4)(1.5,-4)
\psline{<->}(1.5,-4)(3,-4)
\psline[linestyle=dashed](3,-4.5)(3,0)
\psline[linestyle=dashed](-3,-4.5)(-3,0)
\psline[linestyle=dashed](1.5,-4.5)(1.5,0)
\end{pspicture}
</script>
</div>
This corresponds to the following PSTricks code.
<pre>
\begin{pspicture}(-5,-5)(5,5)
\rput(0.3,3.75){ $Im$ }
\psline{->}(0,-3.75)(0,3.75)
\rput(3.75,0.3){ $Re$ }
\psline{->}(-3.75,0)(3.75,0)
\pscircle(0,0){ 3 }
\rput(2.3,1){$e^{i\omega}-\alpha$}
\userline[linewidth=1.5 pt]{->}(1.500,0.000)(2.121,2.121)
\userline[linewidth=1.5 pt,linecolor=white]{->}(0,0.000)(2.121,2.121){(x>0) ? 3 * cos( atan(-y/x) ) :
-3 * cos( atan(-y/x) ) }{ (x>0) ? -3 * sin( atan(-y/x) ) : 3 * sin( atan(-y/x) )}
\userline[linewidth=1.5 pt,linestyle=dashed](-1.500,0.000)(2.121,2.121){x}{0}{x}{y}
\userline[linewidth=1.5 pt,linestyle=dashed](-1.500,0.000)(2.121,2.121){0}{y}{x}{y}
\rput(-0.75,-4.25){$1+\alpha$}
\rput(2.25,-4.25){$1-\alpha$}
\psline{<->}(-3,-4)(1.5,-4)
\psline{<->}(1.5,-4)(3,-4)
\psline[linestyle=dashed](3,-4.5)(3,0)
\psline[linestyle=dashed](-3,-4.5)(-3,0)
\psline[linestyle=dashed](1.5,-4.5)(1.5,0)
\end{pspicture}
</pre>
I've played around quite a bit with these interactive pieces now, seeing what colors one can change and which
pieces play better or worse together.
Overall, this strikes me as one of the easiest ways for those familiar with LaTeX and PSTricks (like me and
many other mathematicians) to produce interactive images on the web.
The library is written well enough for me to easily integrate it into the flavor of this site, minus the color
problems — which is a pretty serious detractor.
It was designed for people to write much more in LaTeX, with direct transition, but simply enough to work
in other contexts.
However, there are some aspects that are truly broken.
Consider the following PSTricks code.
<pre>
\begin{pspicture}(-3,-3)(3,3)
\pscircle[linecolor=white](0,0){2}
\psarc[fillcolor=white](0,0){2}{215}{0}
\userline[linewidth=1.5 pt, linecolor=white]{->}(0,0)(2,2)
\end{pspicture}
</pre>
This should create a white circle, and part of this circle should be filled in with white, and there is a line
that follows the mouse around.
Instead, we get the following (totally broken) image.<sup>4</sup>
<span class="aside"><sup>4</sup>Where did the color blue come from? Unfortunately, there doesn't seem to be clear documentation on what aspects of PSTricks are or are not covered, so it is not a priori possible to determine what should or shouldn't work.</span>
<div class="pstricks">
<script type="text/latex">
\begin{pspicture}(-3,-3)(3,3)
\pscircle[linecolor=white](0,0){2}
\psarc[fillcolor=white](0,0){2}{215}{0}
\userline[linewidth=1.5 pt, linecolor=white]{->}(0,0)(2,2)
\end{pspicture}
</script>
</div>
<script type="text/javascript">
LaTeX2HTML5.init();
</script>
I have hope for this project.
I think this method of enabling interactivity is particularly clean (if you
know pstricks), and I would love to see this work beautifully.
</div></div>https://davidlowryduda.com/pstricks-testTue, 11 Apr 2017 03:14:15 +0000Slides from a Talk at the Dartmouth Number Theory Seminarhttps://davidlowryduda.com/slides-from-a-talk-at-the-dartmouth-number-theory-seminarDavid Lowry-Duda<p>I recently gave a talk at the Dartmouth Number Theory Seminar (thank you Edgar for inviting me and to Edgar, Naomi, and John for being such good hosts). In this talk, I described the recent successes we've had working with variants of the Gauss Circle Problem.</p>
<p>The story began when (with Tom Hulse, Chan Ieong Kuan, and Alex Walker — and with helpful input from Mehmet Kiral, Jeff Hoffstein, and others) we introduced and studied the Dirichlet series
$$\begin{equation}
\sum_{n \geq 1} \frac{S(n)^2}{n^s}, \notag
\end{equation}$$
where $S(n)$ is a sum of the first $n$ Fourier coefficients of an automorphic form on GL(2). We've done this successfully with a variety of automorphic forms, leading to new results for averages, short-interval averages, sign changes, and mean-square estimates of the error for several classical problems. Many of these papers and results have been discussed in other places on this site.</p>
<p>Ultimately, the problem becomes acquiring sufficiently detailed understandings of the spectral behavior of various forms (or more correctly, the behavior of the spectral expansion of a Poincare series against various forms).
We are continuing to research and study a variety of problems through this general approach.</p>
<p>The slides for this talk are <a href="/wp-content/uploads/2017/03/OnSomeProblemsRelatedToGaussCircleProblem-Dartmouth-2017.pdf">available here</a>.</p>https://davidlowryduda.com/slides-from-a-talk-at-the-dartmouth-number-theory-seminarWed, 29 Mar 2017 03:14:15 +0000Smooth Sums to Sharp Sumshttps://davidlowryduda.com/smooth-sums-to-sharp-sums-1David Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/smooth-sums-to-sharp-sums-1Sat, 11 Mar 2017 03:14:15 +0000About 2017https://davidlowryduda.com/about-2017David Lowry-Duda<p>While idly thinking while heading back from the office, and then more later while thinking after dinner with my academic little brother Alex Walker and my future academic little sister-in-law Sara Schulz, we began to think about $2017$, the number.</p>
<h2>General Patterns</h2>
<ul>
<li>2017 is a prime number. 2017 is the 306th prime. The 2017th prime is 17539.</li>
<li>As 2011 is also prime, we call 2017 a <a href="https://en.wikipedia.org/wiki/Sexy_prime">sexy prime</a>.</li>
<li>2017 can be written as a sum of two squares,
$$ 2017 = 9^2 +44^2,$$
and this is the only way to write it as a sum of two squares.</li>
<li>Similarly, 2017 appears as the hypotenuse of a primitive Pythagorean triangle,
$$ 2017^2 = 792^2 + 1855^2,$$
and this is the only such right triangle.</li>
<li>2017 is uniquely identified as the first odd prime that leaves a remainder of $2$ when divided by $5$, $13$, and $31$. That is,
$$ 2017 \equiv 2 \pmod {5, 13, 31}.$$</li>
<li>In different bases,
$$ \begin{align} (2017)_{10} &= (2681)_9 = (3741)_8 = (5611)_7 = (13201)_6 \notag \\ &= (31032)_5 = (133201)_4 = (2202201)_3 = (11111100001)_2 \notag \end{align}$$
The base $2$ and base $3$ expressions are sort of nice, including repetition.</li>
</ul>
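<p>Most of these patterns are quick to check by machine. Here is a small Python sketch (the helper names are mine) verifying the primality, the prime index, the unique two-square representation, and a couple of the base expansions:</p>

```python
import math

def is_prime(n):
    """Trial division, fine for numbers this small."""
    if n < 2:
        return False
    return all(n % d for d in range(2, math.isqrt(n) + 1))

def two_square_reps(n):
    """All unordered representations n = a^2 + b^2 with 0 <= a <= b."""
    reps = []
    for a in range(math.isqrt(n // 2) + 1):
        b2 = n - a * a
        b = math.isqrt(b2)
        if b * b == b2:
            reps.append((a, b))
    return reps

def to_base(n, b):
    """Digit string of n in base b (for b <= 10)."""
    digits = ""
    while n:
        digits = str(n % b) + digits
        n //= b
    return digits

if __name__ == "__main__":
    print(is_prime(2017), is_prime(2011))                  # the sexy prime pair
    print(sum(1 for k in range(2, 2018) if is_prime(k)))   # 2017 is the 306th prime
    print(two_square_reps(2017))                           # only 9^2 + 44^2
    print(to_base(2017, 2), to_base(2017, 3))
```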
<h2>Counting to 20</h2>
<p>$$\begin{array}{ll}
1 = 2\cdot 0 + 1^7 & 11 = 2 + 0! + 1 + 7 \\
2 = 2 + 0 \cdot 1 \cdot 7 & 12 = 20 - 1 - 7 = -2 + (0! + 1)\cdot 7 \\
3 = (20 + 1)/7 = 20 - 17 & 13 = 20 - 1 \cdot 7 \\
4 = -2 + 0 - 1 + 7 & 14 = 20 - (-1 + 7) \\
5 = -2 + 0\cdot 1 + 7 & 15 = -2 + 0 + 17 \\
6 = -2 + 0 + 1 + 7 & 16 = -(2^0) + 17 \\
7 = 2^0 - 1 + 7 & 17 = 2\cdot 0 + 17 \\
8 = 2 + 0 - 1 + 7 & 18 = 2^0 + 17 \\
9 = 2 + 0\cdot 1 + 7 & 19 = 2\cdot 0! + 17 \\
10 = 2 + 0 + 1 + 7 & 20 = 2 + 0! + 17.
\end{array}$$</p>
<p>In each expression, the digits $2, 0, 1, 7$ appear, in order, with basic mathematical symbols. I wonder what the first number is that can't be nicely expressed (subjectively, of course)?</p>
<h2>Iterative Maps on 2017</h2>
<p>Now let's look at less-common manipulations with numbers.</p>
<ul>
<li>The digit sum of $2017$ is $10$, which has digit sum $1$.</li>
<li>Take $2017$ and its reverse, $7102$. The difference between these two numbers is $5085$. Repeating gives $720$. Continuing, we get
$$ 2017 \mapsto 5085 \mapsto 720 \mapsto 693 \mapsto 297 \mapsto 495 \mapsto 99 \mapsto 0.$$
So it takes seven iterations to hit $0$, where the iteration stabilizes.</li>
<li>Take $2017$ and its reverse, $7102$. Add them. We get $9119$, a palindromic number. Continuing, we get
$$ \begin{align} 2017 &\mapsto 9119 \mapsto 18238 \mapsto 101519 \notag \\ &\mapsto 1016620 \mapsto 1282721 \mapsto 2555542 \mapsto 5011094 \mapsto 9912199. \notag \end{align}$$
It takes one map to get to the first palindrome, and then seven more maps to get to the next palindrome. Another five maps would yield the next palindrome.</li>
<li>Rearrange the digits of $2017$ into decreasing order, $7210$, and subtract the digits in increasing order, $0127$. This gives $7083$. Repeating once gives $8352$. Repeating again gives $6174$, at which point the iteration stabilizes. This is called <a href="https://en.wikipedia.org/wiki/6174_(number)">Kaprekar's Constant</a>.</li>
<li>Consider Collatz: If $n$ is even, replace $n$ by $n/2$. Otherwise, replace $n$ by $3\cdot n + 1$. On $2017$, this gives
$$\begin{align}
2017 &\mapsto 6052 \mapsto 3026 \mapsto 1513 \mapsto 4540 \mapsto \notag \\
&\mapsto 2270 \mapsto 1135 \mapsto 3406 \mapsto 1703 \mapsto 5110 \mapsto \notag \\
&\mapsto 2555 \mapsto 7666 \mapsto 3833 \mapsto 11500 \mapsto 5750 \mapsto \notag \\
&\mapsto 2875 \mapsto 8626 \mapsto 4313 \mapsto 12940 \mapsto 6470 \mapsto \notag \\
&\mapsto 3235 \mapsto 9706 \mapsto 4853 \mapsto 14560 \mapsto 7280 \mapsto \notag \\
&\mapsto 3640 \mapsto 1820 \mapsto 910 \mapsto 455 \mapsto 1366 \mapsto \notag \\
&\mapsto 683 \mapsto 2050 \mapsto 1025 \mapsto 3076 \mapsto 1538 \mapsto \notag \\
&\mapsto 769 \mapsto 2308 \mapsto 1154 \mapsto 577 \mapsto 1732 \mapsto \notag \\
&\mapsto 866 \mapsto 433 \mapsto 1300 \mapsto 650 \mapsto 325 \mapsto \notag \\
&\mapsto 976 \mapsto 488 \mapsto 244 \mapsto 122 \mapsto 61 \mapsto \notag \\
&\mapsto 184 \mapsto 92 \mapsto 46 \mapsto 23 \mapsto 70 \mapsto \notag \\
&\mapsto 35 \mapsto 106 \mapsto 53 \mapsto 160 \mapsto 80 \mapsto \notag \\
&\mapsto 40 \mapsto 20 \mapsto 10 \mapsto 5 \mapsto 16 \mapsto \notag \\
&\mapsto 8 \mapsto 4 \mapsto 2 \mapsto 1 \notag
\end{align}$$
It takes $69$ steps to reach the seemingly inevitable $1$. This is much shorter than the $113$ steps necessary for $2016$ or the $113$ (yes, same number) steps necessary for $2018$.</li>
<li>Consider the digits $2,1,7$ (in that order). To generate the next number, take the units digit of the product of the previous $3$. This yields
$$2,1,7,4,8,4,8,6,2,6,2,4,8,4,\ldots$$
This immediately jumps into a periodic pattern of length $8$, but $217$ is not part of the period. So this is preperiodic.</li>
<li>Consider the digits $2,0,1,7$. To generate the next number, take the units digit of the sum of the previous $4$. This yields
$$ 2,0,1,7,0,8,6,1,5,0,2,8,\ldots, 2,0,1,7.$$
After 1560 steps, this produces $2,0,1,7$ again, yielding a cycle. Interestingly, the loop starting with $2018$ and $2019$ also repeat after $1560$ steps.</li>
<li>Take the digits $2,0,1,7$, square them, and add the result. This gives $2^2 + 0^2 + 1^2 + 7^2 = 54$. Repeating, this gives
$$ \begin{align} 2017 &\mapsto 54 \mapsto 41 \mapsto 17 \mapsto 50 \mapsto 25 \mapsto 29 \notag \\ &\mapsto 85 \mapsto 89 \mapsto 145 \mapsto 42 \mapsto 20 \mapsto 4 \notag \\ &\mapsto 16 \mapsto 37 \mapsto 58 \mapsto 89\notag\end{align}$$
and then it reaches a cycle.</li>
<li>Take the digits $2,0,1,7$, cube them, and add the result. This gives $352$. Repeating, we get $160$, and then $217$, and then $352$. This is a very tight loop.</li>
</ul>
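<p>These iterations are pleasant to replay in code. A minimal sketch of two of them — the reverse-and-subtract map and the digit-squaring map — might look like the following (function names are my own):</p>

```python
def reverse_subtract_orbit(n, max_steps=1000):
    """Iterate n -> |n - reverse(n)| until reaching 0 (or max_steps)."""
    orbit = [n]
    while orbit[-1] != 0 and len(orbit) <= max_steps:
        m = orbit[-1]
        orbit.append(abs(m - int(str(m)[::-1])))
    return orbit

def digit_square_step(n):
    """Sum of the squares of the digits of n."""
    return sum(int(d) ** 2 for d in str(n))

if __name__ == "__main__":
    print(reverse_subtract_orbit(2017))   # seven iterations down to 0
    print(digit_square_step(2017))        # 4 + 0 + 1 + 49 = 54
```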
<h2>A Few Matrices</h2>
<ul>
<li>One can make $2017$ from determinants of basic matrices in a few ways. For instance,
$$ \begin{align}
\left \lvert \begin{pmatrix} 1&2&3 \\ 4&6&7 \\ 5&8&9 \end{pmatrix}\right \rvert &= 2, \qquad
\left \lvert \begin{pmatrix} 1&2&3 \\ 4&5&6 \\ 7&8&9 \end{pmatrix}\right \rvert &= 0\notag \\
\left \lvert \begin{pmatrix} 1&2&3 \\ 4&7&6 \\ 5&9&8 \end{pmatrix}\right \rvert &= 1 , \qquad
\left \lvert \begin{pmatrix} 1&2&3 \\ 4&5&7 \\ 6&8&9 \end{pmatrix}\right \rvert &= 7\notag
\end{align}$$
The matrix with determinant $0$ has the numbers $1$ through $9$ in the most obvious configuration. The other matrices are very close in configuration.</li>
<li>Alternately,
$$ \begin{align}
\left \lvert \begin{pmatrix} 1&2&3 \\ 5&6&9 \\ 4&8&7 \end{pmatrix}\right \rvert &= 20 \notag \\
\left \lvert \begin{pmatrix} 1&2&3 \\ 6&8&9 \\ 5&7&4 \end{pmatrix}\right \rvert &= 17 \notag
\end{align}$$
So one can form $20$ and $17$ separately from determinants.</li>
<li>One cannot make $2017$ from a determinant using the digits $1$ through $9$ (without repetition).</li>
<li>If one uses the digits from the first $9$ primes, it is interesting that one can choose configurations with determinants equal to $2016$ or $2018$, but there is no such configuration with determinant equal to $2017$.</li>
</ul>https://davidlowryduda.com/about-2017Sat, 21 Jan 2017 03:14:15 +0000Revealing zero in fully homomorphic encryption is a Bad Thinghttps://davidlowryduda.com/revealing-zero-in-fully-homomorphic-encryption-is-a-bad-thingDavid Lowry-Duda<p>When I was first learning number theory, cryptography seemed really fun and really practical. I thought elementary number theory was elegant, and that cryptography was an elegant application. As I continued to learn more about mathematics, and in particular modern mathematics, I began to realize that decades of instruction and improvement (and perhaps of more useful points of view) have simplified the presentation of elementary number theory, and that modern mathematics is less elegant in presentation.</p>
<p>Similarly, as I learned more about cryptography, I learned that though the basic ideas are very simple, their application is often very inelegant. For example, the basis of RSA follows immediately from Euler's Theorem as learned while studying elementary number theory, or alternately from Lagrange's Theorem as learned while studying group theory or abstract algebra. And further, these are very early topics in these two areas of study!</p>
<p>But a naive implementation of RSA is doomed (For that matter, many professional implementations have their flaws too). Every now and then, a very clever expert comes up with a new attack on popular cryptosystems, generating new guidelines and recommendations. Some guidelines make intuitive sense [e.g. don't use too small of an exponent for either the public or secret keys in RSA], but many are more complicated or designed to prevent more sophisticated attacks [especially side-channel attacks].</p>
<p>In the summer of 2013, I participated in the ICERM IdeaLab working towards more efficient homomorphic encryption. We were playing with existing homomorphic encryption schemes and trying to come up with new methods. One guideline that we followed is that an attacker should not be able to recognize an encryption of zero. This seems like a reasonable guideline, but I didn't really understand why, until I was chatting with others at the 2017 Joint Mathematics Meetings in Atlanta.</p>
<p>It turns out that revealing zero isn't just against generally sound advice. Revealing zero is a capital B capital T Bad Thing.</p>
<h2>Basic Setup</h2>
<p>For the rest of this note, I'll try to identify some of this reasoning.</p>
<p>In a typical cryptosystem, the basic setup is as follows. Andrew has a message that he wants to send to Beatrice. So Andrew converts the message into a list of numbers $M$, and uses some sort of encryption function $E(\cdot)$ to encrypt $M$, forming a ciphertext $C$. We can represent this as $C = E(M)$. Andrew transmits $C$ to Beatrice. If an eavesdropper Eve happens to intercept $C$, it should be very hard for Eve to recover any information about the original message from $C$. But when Beatrice receives $C$, she uses a corresponding decryption function $D(\cdot)$ to decrypt $C$, $M = D(C)$.</p>
<p>Often, the encryption and decryption techniques are based on number theoretic or combinatorial primitives. Some of these have extra structure (or at least they do in basic implementations). For instance, the RSA cryptosystem involves a public exponent $e$, a public modulus $N$, and a private exponent $d$. Andrew encrypts the message $M$ by computing $C = E(M) \equiv M^e \bmod N$. Beatrice decrypts the message by computing $M = C^d \equiv M^{ed} \bmod N$.</p>
<p>Notice that in the RSA system, given two messages $M_1, M_2$ and corresponding ciphertexts $C_1, C_2$, we have that
\begin{equation}
E(M_1 M_2) \equiv (M_1 M_2)^e \equiv M_1^e M_2^e \equiv E(M_1) E(M_2) \pmod N. \notag
\end{equation}
The encryption function $E(\cdot)$ is a group homomorphism. This is an example of extra structure.</p>
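<p>The multiplicative structure is easy to see with toy numbers. The sketch below uses tiny, completely insecure parameters purely to show the mechanics (it relies on Python 3.8+, where <code>pow(e, -1, phi)</code> computes a modular inverse):</p>

```python
# Toy RSA with tiny, insecure parameters -- for illustration only.
p, q = 61, 53
N = p * q                 # public modulus
phi = (p - 1) * (q - 1)
e = 17                    # public exponent, coprime to phi
d = pow(e, -1, phi)       # private exponent (modular inverse, Python 3.8+)

def encrypt(M):
    return pow(M, e, N)

def decrypt(C):
    return pow(C, d, N)

if __name__ == "__main__":
    M1, M2 = 7, 11
    # the homomorphic property: E(M1 * M2) == E(M1) * E(M2) mod N
    print(encrypt((M1 * M2) % N) == (encrypt(M1) * encrypt(M2)) % N)
```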
<p>A fully homomorphic cryptosystem has an encryption function $E(\cdot)$ satisfying both $E(M_1 + M_2) = E(M_1) + E(M_2)$ and $E(M_1M_2) = E(M_1)E(M_2)$ (or more generally an analogous pair of operations). That is, $E(\cdot)$ is a ring homomorphism.</p>
<p>This extra structure allows for (a lot of) extra utility. A fully homomorphic $E(\cdot)$ would allow one to perform meaningful operations on encrypted data, even though you can't read the data itself. For example, a clinic could store (encrypted) medical information on an external server. A doctor or nurse could pull out a cellphone or tablet with relatively little computing power or memory and securely query the medical data. Fully homomorphic encryption would allow one to securely outsource data infrastructure.</p>
<p>A different usage model suggests that we use a different mental model. So suppose Alice has sensitive data that she wants to store for use on EveCorp's servers. Alice knows an encryption method $E(\cdot)$ and a decryption method $D(\cdot)$, while EveCorp only ever has mountains of ciphertexts, and cannot read the data [even though they have it].</p>
<h2>Why revealing zero is a Bad Thing</h2>
<p>Let us now consider some basic cryptographic attacks. We should assume that EveCorp has access to a long list of plaintext messages $M_i$ and their corresponding ciphertexts $C_i$. Not everything, but perhaps from small leaks or other avenues. Among the messages $M_i$ it is very likely that there are two messages $M_1, M_2$ which are relatively prime. Then an application of the Euclidean Algorithm gives a linear combination of $M_1$ and $M_2$ such that
\begin{equation}
M_1 x + M_2 y = 1 \notag
\end{equation}
for some integers $x,y$. Even though EveCorp doesn't know the encryption method $E(\cdot)$, since we are assuming that they have access to the corresponding ciphertexts $C_1$ and $C_2$, EveCorp has access to an encryption of $1$ using the ring homomorphism properties:
\begin{equation}\label{eq:encryption_of_one}
E(1) = E(M_1 x + M_2 y) = x E(M_1) + y E(M_2) = x C_1 + y C_2.
\end{equation}
By multiplying $E(1)$ by $m$, EveCorp obtains a plaintext–ciphertext pair for any message $m$ of its choosing.</p>
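<p>The number-theoretic step here is just the extended Euclidean algorithm. A minimal sketch in Python (the "leaked" plaintext values below are invented for illustration):</p>

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = extended_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

# Two leaked plaintexts that happen to be relatively prime (made-up values).
M1, M2 = 1234567, 7654321
g, x, y = extended_gcd(M1, M2)
assert g == 1
assert M1 * x + M2 * y == 1
# In a ring-homomorphic scheme, x*C1 + y*C2 would then be an encryption of 1.
```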
<p>Now suppose that EveCorp can always recognize an encryption of $0$. Then EveCorp can mount a variety of attacks exposing information about the data it holds.</p>
<p>For example, EveCorp can test whether a particular message $m$ is contained in the encrypted dataset. First, EveCorp generates a ciphertext $C_m$ for $m$ by multiplying $E(1)$ by $m$, as in \eqref{eq:encryption_of_one}. Then for each ciphertext $C$ in the dataset, EveCorp computes $C - C_m$. If $m$ is contained in the dataset, then $C - C_m$ will be an encryption of $0$ for the $C$ corresponding to $m$. EveCorp recognizes this, and now knows that $m$ is in the data. To be more specific, perhaps a list of encrypted names of medical patients appears in the data, and EveCorp wants to see if JohnDoe is in that list. If they can recognize encryptions of $0$, then EveCorp can access this information.</p>
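<p>A simpler cousin of this attack works whenever encryption is deterministic, as in unpadded RSA: anyone holding the public key can encrypt a guess and compare ciphertexts directly. A toy sketch (all values invented, toy key sizes):</p>

```python
# Toy dictionary attack on deterministic (unpadded) RSA.
N, e = 3233, 17   # toy public key (p = 61, q = 53); insecure sizes

# Stored ciphertexts of a few records (the plaintexts are made up).
dataset = [pow(m, e, N) for m in [42, 1234, 2718]]

def appears_in_dataset(guess):
    """Test membership by encrypting the guess and comparing ciphertexts."""
    return pow(guess, e, N) in dataset

assert appears_in_dataset(1234)      # 1234 is in the encrypted data
assert not appears_in_dataset(999)   # 999 is not
```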
<p>And thus it is unacceptable for external entities to be able to consistently recognize encryptions of $0$.</p>
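<p>One standard defense is to randomize encryption while keeping the homomorphic structure. A deliberately insecure toy illustrates the idea, loosely in the spirit of homomorphic encryption over the integers (every parameter here is invented and offers no security at all):</p>

```python
import random

# Toy randomized, homomorphic "encryption": ciphertexts are m + p*r for
# a secret integer p and fresh random r, so the same message encrypts
# differently each time, yet reduction mod p recovers m (as long as
# messages, sums, and products stay below p).

p_secret = 1_000_003  # the secret key (toy-sized)

def E(m):
    r = random.randrange(1, 10**6)  # fresh randomness each call
    return m + p_secret * r

def D(c):
    return c % p_secret

assert D(E(42)) == 42                 # decryption inverts encryption
assert D(E(3) + E(4)) == 3 + 4        # additive homomorphism
assert D(E(3) * E(4)) == 3 * 4        # multiplicative homomorphism
```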
<p>Up to now, I've been a bit loose by saying "an encryption of zero" or "an encryption of $m$". The reason for this is that to protect against recognition of encryptions of $0$, some entropy is added to the encryption function $E(\cdot)$, making it multivalued. So if we have a message $M$ and we encrypt it once to get $E(M)$, and we encrypt $M$ later and get $E'(M)$, it is often not true that $E(M) = E'(M)$, even though they are both encryptions of the same message. But these systems are designed so that $D(E(M)) = D(E'(M)) = M$, so that the added entropy doesn't interfere with decryption.</p>https://davidlowryduda.com/revealing-zero-in-fully-homomorphic-encryption-is-a-bad-thingSat, 07 Jan 2017 03:14:15 +0000Teachinghttps://davidlowryduda.com/teachingDavid Lowry-Duda<h1>Teaching</h1>
<p>I am not currently teaching.</p>
<h2>Past Teaching</h2>
<ul>
<li>Fall 2016: <a href="/math-100-fall-2016">Math 100</a> Calculus II at Brown University</li>
<li>Spring 2016: <a href="/introduction-to-number-theory">Math 42</a> Elementary
Number Theory at Brown University, which I designed and taught. This course
ended with final projects. This was great! <a href="/math-42-spring-2016-student-showcase/">Check them out here.</a></li>
<li>Summer 2015: Summer Number Theory for high school students at Summer@Brown</li>
<li>Fall 2014: <a href="/math-170/">Math 170</a> Advanced Placement Calculus II at Brown
University.</li>
<li>Summer 2014: Summer Number Theory for high school students at Summer@Brown</li>
<li>Summer 2013: Summer Number Theory for high school students at Summer@Brown</li>
<li>Summer 2013: Precalculus</li>
</ul>
<p>I was previously a TA (many times) for several courses, both as an undergrad
and twice at the beginning of grad school. I didn't save my supplementary
teaching materials for most of these courses.</p>
<ul>
<li>Math 90, Calculus I at Brown University</li>
<li>Math 100, Calculus II at Brown University</li>
<li>Math 1501, Calculus I at Georgia Tech</li>
<li>Math 1502, Calculus II and Linear Algebra at Georgia Tech</li>
<li>Math 2401, Multivariable Calculus at Georgia Tech</li>
</ul>
<h2>Teaching Notes</h2>
<p>See <a href="/teaching-notes/">here</a> for a few supplementary notes I've written.</p>
<h2>Supervision</h2>
<ul>
<li>
<p>Summer 2022: I supervised four dedicated high school students at PROMYS on a
project I call <em>Königsberg Pseudoprimes</em>. We'll have a project report in the
future.</p>
</li>
<li>
<p>Summer 2021: I supervised three dedicated high school students at PROMYS,
culminating in <a href="/project-report-on-prime-sums/">their project</a>.</p>
</li>
<li>
<p>MSc Student Andrew Darlington at the University of Warwick (2019) on
<em>Half-integral weight modular forms</em>.</p>
</li>
<li>
<p>Undergraduate research projects at the University of Warwick (2018): Andrew
Darlington, Eleri Williams.</p>
</li>
</ul>https://davidlowryduda.com/teachingSun, 01 Jan 2017 03:14:15 +0000Math 100 - Concluding Remarkshttps://davidlowryduda.com/math-100-fall-2016-concluding-remarksDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/math-100-fall-2016-concluding-remarksWed, 21 Dec 2016 03:14:15 +0000Computing pi with tools from calculushttps://davidlowryduda.com/computing-pi-with-calculusDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/computing-pi-with-calculusSun, 04 Dec 2016 03:14:15 +0000Series convergence tests with prototypical exampleshttps://davidlowryduda.com/series-convergence-with-examplesDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/series-convergence-with-examplesTue, 25 Oct 2016 03:14:15 +0000Teaching Noteshttps://davidlowryduda.com/teaching-notesDavid Lowry-Duda<p>When I teach, I frequently write supplementary notes for my students. Some
notes which have appeared to be particularly effective are listed here.</p>
<ul>
<li><a href="/an-intuitive-introduction-to-calculus/">An Intuitive Introduction to Calculus</a></li>
<li><a href="/an-intuitive-overview-of-taylor-series/">An Intuitive Overview of Taylor Series</a></li>
<li><a href="/series-convergence-with-examples/">Series convergence tests with examples</a></li>
<li>A <a href="/math-420-supplement-on-gaussian-integers/">first</a>
and <a href="/math-420-supplement-on-gaussian-integers-ii/">second</a>
supplemental set of notes on the Gaussian integers.</li>
</ul>
<p>These notes lie a bit outside the typical curriculum.</p>
<ul>
<li><a href="/a-brief-notebook-on-cryptography/">A Brief Notebook on Cryptography</a></li>
<li><a href="/trigonometric-and-related-substitutions-in-integrals/">Trigonometric and related substitutions in integrals</a></li>
</ul>
<p>I've also written expository papers that dive deeper into undergraduate math
topics. See more at</p>
<ul>
<li><a href="/on-functions-whose-mean-value-abscissas-are-midpoints/">On functions whose mean-value abscissas are midpoints</a></li>
<li><a href="/paper-continuous-choices-mvt/">When are there continuous choices for the mean value abscissa?</a></li>
</ul>https://davidlowryduda.com/teaching-notesSun, 16 Oct 2016 03:14:15 +0000A notebook preparing for a talk at Quebec-Mainehttps://davidlowryduda.com/a-notebook-preparing-for-talk-at-quebec-maineDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/a-notebook-preparing-for-talk-at-quebec-maineTue, 04 Oct 2016 03:14:15 +0000Math 100 - Completing the partial fractions example from classhttps://davidlowryduda.com/math-100-completing-the-partial-fractions-example-from-classDavid Lowry-Duda<h3>An Unfinished Example</h3>
<p>At the end of class today, someone asked if we could do another example of a partial fractions integral involving an irreducible quadratic. We decided to look at the integral</p>
<p>$$ \int \frac{1}{(x^2 + 4)(x+1)}dx. $$
Notice that ${x^2 + 4}$ is an irreducible quadratic polynomial. So when setting up the partial fraction decomposition, we treat the ${x^2 + 4}$ term as a whole.</p>
<p>So we seek to find a decomposition of the form</p>
<p>$$ \frac{1}{(x^2 + 4)(x+1)} = \frac{A}{x+1} + \frac{Bx + C}{x^2 + 4}. $$
Now that we have the decomposition set up, we need to solve for ${A,B,}$ and ${C}$ using whatever methods we feel most comfortable with. Multiplying through by ${(x^2 + 4)(x+1)}$ leads to</p>
<p>$$ 1 = A(x^2 + 4) + (Bx + C)(x+1) = (A + B)x^2 + (B + C)x + (4A + C). $$
Matching up coefficients leads to the system of equations</p>
<p>$$\begin{align}
0 &= A + B \\
0 &= B + C \\
1 &= 4A + C.
\end{align}$$
So we learn that ${A = -B = C}$, and ${A = 1/5}$. So ${B = -1/5}$ and ${C = 1/5}$.</p>
<p>Together, this means that</p>
<p>$$ \frac{1}{(x^2 + 4)(x+1)} = \frac{1}{5}\frac{1}{x+1} + \frac{1}{5} \frac{-x + 1}{x^2 + 4}. $$
Recall that if you wanted to, you could check this decomposition by finding a common denominator and checking through.</p>
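<p>If you'd rather check numerically than algebraically, a quick Python sketch comparing both sides at a few sample points:</p>

```python
# Compare the original fraction with its partial fraction decomposition
# at several sample points (avoiding the pole at x = -1).
def original(x):
    return 1 / ((x**2 + 4) * (x + 1))

def decomposed(x):
    return (1/5) / (x + 1) + (1/5) * (-x + 1) / (x**2 + 4)

for t in [0.0, 0.5, 2.0, -3.0, 10.0]:
    assert abs(original(t) - decomposed(t)) < 1e-12
```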
<p>Now that we have performed the decomposition, we can return to the integral. We now have that</p>
<p>$$ \int \frac{1}{(x^2 + 4)(x+1)}dx = \underbrace{\int \frac{1}{5}\frac{1}{x+1}dx}_ {\text{first integral}} + \underbrace{\int \frac{1}{5} \frac{-x + 1}{x^2 + 4} dx.}_ {\text{second integral}} $$
We can handle both of the integrals on the right hand side.</p>
<p>The first integral is</p>
<p>$$ \frac{1}{5} \int \frac{1}{x+1} dx = \frac{1}{5} \ln (x+1) + C. $$</p>
<p>The second integral is a bit more complicated. It's good to check whether there is a simple ${u}$-substitution, since there is an ${x}$ in the numerator and an ${x^2}$ in the denominator. But unfortunately, this integral needs to be further broken into two pieces that we know how to handle separately.</p>
<p>$$ \frac{1}{5} \int \frac{-x + 1}{x^2 + 4} dx = \underbrace{\frac{-1}{5} \int \frac{x}{x^2 + 4}dx}_ {\text{first piece}} + \underbrace{\frac{1}{5} \int \frac{1}{x^2 + 4}dx.}_ {\text{second piece}} $$</p>
<p>The first piece is now a ${u}$-substitution problem with ${u = x^2 + 4}$. Then ${du = 2x dx}$, and so</p>
<p>$$ \frac{-1}{5} \int \frac{x}{x^2 + 4}dx = \frac{-1}{10} \int \frac{du}{u} = \frac{-1}{10} \ln u + C = \frac{-1}{10} \ln (x^2 + 4) + C. $$</p>
<p>The second piece is one of the classic trig substitutions. So we draw a triangle.</p>
<p><a href="/wp-content/uploads/2016/09/triangle.png"><img class="size-full wp-image-2107 aligncenter" src="/wp-content/uploads/2016/09/triangle.png" alt="triangle" width="466" height="296" /></a></p>
<p>In this triangle, thinking of the bottom-left angle as ${\theta}$ (sorry, I forgot to label it), we have that ${2\tan \theta = x}$, so that ${2 \sec^2 \theta \, d \theta = dx}$. The hypotenuse (the "hard part" of the triangle) is ${2\sec \theta = \sqrt{x^2 + 4}}$.</p>
<p>Going back to our integral, we can think of ${x^2 + 4}$ as ${(\sqrt{x^2 + 4})^2}$ so that ${x^2 + 4 = (2 \sec \theta)^2 = 4 \sec^2 \theta}$. We can now write our integral as</p>
<p>$$ \frac{1}{5} \int \frac{1}{x^2 + 4}dx = \frac{1}{5} \int \frac{1}{4 \sec^2 \theta} 2 \sec^2 \theta d \theta = \frac{1}{5} \int \frac{1}{2} d\theta = \frac{1}{10} \theta. $$
As ${2 \tan \theta = x}$, we have that ${\theta = \text{arctan}(x/2)}$. Inserting this into our expression, we have</p>
<p>$$ \frac{1}{5} \int \frac{1}{x^2 + 4} dx = \frac{1}{10} \text{arctan}(x/2) + C. $$</p>
<p>Combining the first integral and the first and second parts of the second integral together (and combining all the constants ${C}$ into a single constant, which we also denote by ${C}$), we reach the final expression</p>
<p>$$ \int \frac{1}{(x^2 + 4)(x + 1)} dx = \frac{1}{5} \ln (x+1) - \frac{1}{10} \ln(x^2 + 4) + \frac{1}{10} \text{arctan}(x/2) + C. $$</p>
<p>And this is the answer.</p>
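<p>A good habit is to double-check an antiderivative by differentiating it. Here's a quick numerical check in Python, comparing a central-difference derivative of our answer against the integrand:</p>

```python
import math

def F(x):
    """The antiderivative found above (dropping the constant C)."""
    return (math.log(x + 1) / 5
            - math.log(x**2 + 4) / 10
            + math.atan(x / 2) / 10)

def integrand(x):
    return 1 / ((x**2 + 4) * (x + 1))

# A central-difference approximation of F'(x) should match the integrand.
h = 1e-6
for t in [0.5, 1.0, 3.0]:
    numeric_derivative = (F(t + h) - F(t - h)) / (2 * h)
    assert abs(numeric_derivative - integrand(t)) < 1e-6
```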
<h3>Other Notes</h3>
<p>If you have any questions or concerns, please let me know. As a reminder, I have office hours on Tuesday from 9:30–11:30 (or perhaps noon) in my office, and I highly recommend attending the Math Resource Center in the Kassar House from 8pm-10pm, offered Monday-Thursday. [Especially on Tuesdays and Thursdays, when there tend to be fewer people there].</p>
<p>On my course page, I have linked to two additional resources. One is to Paul's Online Math notes for partial fraction decomposition (which I think is quite a good resource). The other is to the Khan Academy for some additional worked through examples on polynomial long division, in case you wanted to see more worked examples. This note can also be found on my website, or in <a href="/wp-content/uploads/2016/09/irred_quadratic_partial_fracs.pdf">pdf form</a>.</p>
<p>Good luck, and I'll see you in class.</p>https://davidlowryduda.com/math-100-completing-the-partial-fractions-example-from-classThu, 29 Sep 2016 03:14:15 +0000Math 100 - Fall 2016https://davidlowryduda.com/math-100-fall-2016David Lowry-Duda<h1>Math 100 - Fall 2016 </h1>
<p>This is the course page for David Lowry-Duda's students in Math 100, Fall 2016, at Brown University. Note that there is a main webpage for all Math 100 courses taught this semester, located at <a href="https://sites.google.com/a/brown.edu/fa16-math0100/">https://sites.google.com/a/brown.edu/fa16-math0100/</a>. Homeworks for the course will be posted and updated there — this site is for any additional notes or materials provided to this class.</p>
<h3>Additional Materials</h3>
<p>Any additional notes or materials that I will provide will be linked to here.</p>
<ol>
<li><a href="/?p=1259">An Intuitive Introduction to Calculus</a>, which reviews some of the basic and big ideas of calculus leading into Math 100.</li>
<li>Paul's Online Math Notes have good supplementary material on the use of <a href="http://tutorial.math.lamar.edu/Classes/CalcII/PartialFractions.aspx">partial fractions in integration</a>.</li>
<li>The Khan Academy is a source of some more guided examples on <a href="https://www.khanacademy.org/math/algebra2/arithmetic-with-polynomials/long-division-of-polynomials/v/polynomial-division">polynomial long division</a>.</li>
<li>An additional note on a partial fraction integral example we started in class, <a href="/?p=2106">on this site</a> and in <a href="/wp-content/uploads/2016/09/irred_quadratic_partial_fracs.pdf">pdf form</a>.</li>
<li><a href="/series-convergence-with-examples/">Series Convergence Tests with Prototypical Examples</a>. [Also a <a href="/wp-content/uploads/2016/10/SeriesConvergenceTechniques.pdf">direct link to the note</a> as a pdf, and a link to <a href="/wp-content/uploads/2016/10/JustTheTests.pdf">just the statements of the convergence tests</a>].</li>
<li><a href="/an-intuitive-overview-of-taylor-series/">An Intuitive Overview of Taylor Series</a>, introducing our next big topic.</li>
</ol>
<h3>Administrative Details</h3>
<p>Instructor Name: David Lowry-Duda
Email Address: djlowry@math.brown.edu
Websites: <a href="https://sites.google.com/a/brown.edu/fa16-math0100/">https://sites.google.com/a/brown.edu/fa16-math0100/</a> and <a href="http://davidlowryduda.com/">davidlowryduda.com</a>
Homework Site: <a href="https://sites.google.com/a/brown.edu/fa16-math0100/homework">https://sites.google.com/a/brown.edu/fa16-math0100/homework</a>
Office Hours: Tuesdays from 9:30am to 11:30am in Kassar House 010, or by appointment.
Class: TR 1:00-2:20PM in Barus & Holley 159.</p>https://davidlowryduda.com/math-100-fall-2016Wed, 07 Sep 2016 03:14:15 +0000Paper: On Functions Whose Mean Value Abscissas are Midpoints, with Connections to Harmonic Functions (with Paul Carter)https://davidlowryduda.com/on-functions-whose-mean-value-abscissas-are-midpointsDavid Lowry-Duda<p>This is joint work with Paul Carter.<span class="aside">We completed this while on a cross-country drive as we moved the newly minted Dr. Carter from Brown to Arizona.</span></p>
<p>I've had a longtime fascination with the standard mean value theorem of calculus.</p>
<blockquote><strong>Mean Value Theorem</strong>
Suppose $f$ is continuous on $[a,b]$ and differentiable on $(a,b)$. Then there is some $c \in (a,b)$ such that
\begin{equation}
\frac{f(b) - f(a)}{b-a} = f'(c).
\end{equation}</blockquote>
<p>The idea for this project started with a simple question: what happens when we interpret the mean value theorem as a differential equation and try to solve it? As stated, this is too broad. To narrow it down, we might specify some restriction on the $c$, which we refer to as the <em>mean value abscissa</em>, guaranteed by the Mean Value Theorem.</p>
<p>So I thought to try to find functions satisfying
\begin{equation}
\frac{f(b) - f(a)}{b-a} = f' \left( \frac{a + b}{2} \right)
\end{equation}
for all $a$ and $b$ as a differential equation. In other words, let's try to find all functions whose mean value abscissas are midpoints.</p>
<p>This looks like a differential equation, which I only know some things about. But my friend and colleague Paul Carter knows a lot about them, so I thought it would be fun to ask him about it.</p>
<p>He very quickly told me that it's essentially impossible to solve this from the perspective of differential equations. But like a proper mathematician with applied math leanings, he thought we should explore some potential solutions in terms of their Taylor expansions. Proceeding naively in this way very quickly leads to the answer that those (assumed smooth) solutions are precisely quadratic polynomials.</p>
<p>It turns out that was too simple. It was later pointed out to us that verifying that quadratic polynomials satisfy the midpoint mean value property is a common exercise in calculus textbooks, including the one we use to teach from at Brown. Digging around a bit reveals that this was even known (in geometric terms) to Archimedes.</p>
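<p>The verification itself is one line of algebra; here is the same check done numerically in Python, for a quadratic with arbitrarily chosen coefficients:</p>

```python
# Check the midpoint mean value property for an arbitrary quadratic:
# the secant slope from a to b equals the derivative at the midpoint.
def f(x):
    return 3 * x**2 - 2 * x + 7

def f_prime(x):
    return 6 * x - 2

for a, b in [(0.0, 1.0), (-2.0, 5.0), (1.5, 1.6)]:
    secant_slope = (f(b) - f(a)) / (b - a)
    assert abs(secant_slope - f_prime((a + b) / 2)) < 1e-9
```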
<p>So I thought we might try to go one step higher, and see what's up with
\begin{equation}\label{eq:original_midpoint}
\frac{f(b) - f(a)}{b-a} = f' (\lambda a + (1-\lambda) b), \tag{1}
\end{equation}
where $\lambda \in (0,1)$ is a weight. So let's find all functions whose mean value abscissas are weighted averages. A quick analysis with Taylor expansions shows that (assumed smooth) solutions are precisely linear polynomials, except when $\lambda = \frac{1}{2}$ (in which case we're looking back at the original question).</p>
<p>That's a bit odd. It turns out that the midpoint itself is distinguished in this way. Why might that be the case?</p>
<p>It is beneficial to look at the mean value property as an integral property instead of a differential property,
\begin{equation}
\frac{1}{b-a} \int_a^b f'(t) dt = f'\big(c(a,b)\big).
\end{equation}
We are currently examining cases when $c = c_\lambda(a,b) = \lambda a + (1-\lambda) b$. We can see the right-hand side is differentiable by differentiating the left-hand side directly. Since any point can be a weighted midpoint, one sees that $f$ is at least twice-differentiable. One can actually iterate this argument to show that any $f$ satisfying one of the weighted mean value properties is actually smooth, justifying the Taylor expansion analysis indicated above.</p>
<p>An attentive eye might notice that the midpoint mean value theorem, written as the integral property
\begin{equation}
\frac{1}{b-a} \int_a^b f'(t) dt = f' \left( \frac{a + b}{2} \right)
\end{equation}
is exactly the one-dimensional case of the harmonic mean value property, usually written
\begin{equation}
\frac{1}{\lvert B_h \rvert} \int_{B_h(x)} g(t) \, dV = g(x).
\end{equation}
Here, $B_h(x)$ is the ball of radius $h$ and center $x$. Any harmonic function satisfies this mean value property, and any function satisfying this mean value property is harmonic.</p>
<p>From this viewpoint, functions satisfying our original midpoint mean value property \eqref{eq:original_midpoint} have harmonic derivatives. But the only one-dimensional harmonic functions are affine functions $g(x) = cx + d$. This gives immediately that the set of solutions to \eqref{eq:original_midpoint} are quadratic polynomials.</p>
<p>The weighted mean value property can also be written as an integral property. Trying to connect it similarly to harmonic functions led us to consider functions satisfying
\begin{equation}
\frac{1}{\lvert B_h \rvert} \int_{B_h(x)} g(t) \, dV = g(c_\lambda(x,h)),
\end{equation}
where $c_\lambda(x,h)$ should be thought of as some distinguished point in the ball $B_h(x)$ with a weight parameter $\lambda$. More specifically, we ask:</p>
<p>Are there <em>weighted</em> harmonic functions corresponding to a <em>weighted</em> harmonic mean value property?
In one dimension, the answer is no, as seen above. But there are many more multivariable harmonic functions [in fact, I've never thought of harmonic functions on $\mathbb{R}^1$ until this project, as they're too trivial]. So maybe there are <em>weighted</em> harmonic functions in higher dimensions?</p>
<p>This ends up being the focus of the latter half of our paper. Unexpectedly (to us), an analogous methodology to our approach in the one-dimensional case works, with only a few differences.</p>
<p>It turns out that <em>no</em>, there are no <em>weighted</em> harmonic functions on $\mathbb{R}^n$ other than trivial extensions of harmonic functions from $\mathbb{R}^{n-1}$.</p>
<p>Harmonic functions are very special, and even more special than we had thought. The paper is a fun read, and can be found <a href="http://arxiv.org/abs/1608.02558">on the arxiv</a> now. It has been accepted and will appear in American Mathematical Monthly.</p>https://davidlowryduda.com/on-functions-whose-mean-value-abscissas-are-midpointsFri, 12 Aug 2016 03:14:15 +0000Paper: Sign Changes of Coefficients and Sums of Coefficients of Cusp Formshttps://davidlowryduda.com/paper-sign-changes-of-coefficients-and-sums-of-coefficients-of-cusp-formsDavid Lowry-Duda<p>This is joint work with Thomas Hulse, Chan Ieong Kuan, and Alex Walker, and is another sequel to our previous work. This is the third in a <a href="/?p=1869">trio</a> of <a href="/?p=1883">papers</a>, and completes an answer to a question posed by our advisor Jeff Hoffstein two years ago.</p>
<p>We have just uploaded <a href="http://arxiv.org/abs/1606.00067">a preprint</a> to the arXiv giving conditions that guarantee that a sequence of numbers contains infinitely many sign changes. More generally, if the sequence consists of complex numbers, then we give conditions that guarantee sign changes in a <em>generalized</em> sense.</p>
<p>Let $\mathcal{W}(\theta_1, \theta_2) := \{ re^{i\theta} : r \geq 0, \theta \in [\theta_1, \theta_2]\}$ denote a wedge of the complex plane.</p>
<p>Suppose $\{a(n)\}$ is a sequence of complex numbers satisfying the following conditions:</p>
<ol>
<li>$a(n) \ll n^\alpha$,</li>
<li>$\sum_{n \leq X} a(n) \ll X^\beta$,</li>
<li>$\sum_{n \leq X} \lvert a(n) \rvert^2 = c_1 X^{\gamma_1} + O(X^{\eta_1})$,</li>
</ol>
<p>where $\alpha, \beta, c_1, \gamma_1$, and $\eta_1$ are all real numbers $\geq 0$. Then for any $r$ satisfying $\max(\alpha+\beta, \eta_1) - (\gamma_1 - 1) < r < 1$, the sequence $\{a(n)\}$ has at least one term outside any wedge $\mathcal{W}(\theta_1, \theta_2)$ with $0 \leq \theta_2 - \theta_1 < \pi$ for some $n \in [X, X+X^r)$ for all sufficiently large $X$.</p>
<p>These wedges can be thought of as just slightly smaller than a half-plane. For a complex number to escape a half-plane is analogous to a real number changing sign. So we should think of this result as guaranteeing a sort of sign change in intervals of width $X^r$ for all sufficiently large $X$.</p>
<p>The intuition behind this result is very straightforward. If the sum of coefficients is small while the sum of the squares of the coefficients are large, then the sum of coefficients must experience a lot of cancellation. The fact that we can get quantitative results on the number of sign changes is merely a task of bookkeeping.</p>
<p>Both the statement and proof are based on very similar criteria for sign changes when ${a(n)}$ is a sequence of real numbers first noticed by Ram Murty and Jaban Meher. However, if in addition it is known that</p>
<p>\begin{equation}
\sum_{n \leq X} (a(n))^2 = c_2 X^{\gamma_2} + O(X^{\eta_2}),
\end{equation}</p>
<p>and that $\max(\alpha+\beta, \eta_1, \eta_2) - (\max(\gamma_1, \gamma_2) - 1) < r < 1$, then generically both sequences ${\text{Re} (a(n)) }$ and ${ \text{Im} (a(n)) }$ contain at least one sign change for some $n$ in $[X , X + X^r)$ for all sufficiently large $X$. In other words, we can detect sign changes for both the real and imaginary parts in intervals, which is a bit more special.</p>
<p>It is natural to ask for even more specific detection of sign changes. For instance, knowing specific information about the distribution of the arguments of $a(n)$ would be interesting, and very closely related to the Sato-Tate Conjectures. But we do not yet know how to investigate this distribution.</p>
<p>In practice, we often understand the various criteria for the application of these two sign changes results by investigating the Dirichlet series
\begin{align}
&\sum_{n \geq 1} \frac{a(n)}{n^s} \\
&\sum_{n \geq 1} \frac{S_f(n)}{n^s} \\
&\sum_{n \geq 1} \frac{\lvert S_f(n) \rvert^2}{n^s} \\
&\sum_{n \geq 1} \frac{S_f(n)^2}{n^s},
\end{align}
where
\begin{equation}
S_f(n) = \sum_{m \leq n} a(m).
\end{equation}</p>
<p>In the case of holomorphic cusp forms, the two previous joint projects with this group investigated exactly the Dirichlet series above. In the paper, we formulate some slightly more general criteria guaranteeing sign changes based directly on the analytic properties of the Dirichlet series involved.</p>
<p>In this paper, we apply our sign change results to our previous work to show that $S_f(n)$ changes sign in each interval $[X, X + X^{\frac{2}{3} + \epsilon})$ for sufficiently large $X$. Further, if there are coefficients with $\text{Im} a(n) \neq 0$, then the real and imaginary parts each change signs in those intervals.</p>
<p>We apply our sign change results to single coefficients of $\text{GL}(2)$ cusp forms (and specifically full integral weight holomorphic cusp forms, half-integral weight holomorphic cusp forms, and Maass forms). In large part these are minor improvements over folklore and what is known, except for the extension to complex coefficients.</p>
<p>We also apply our sign change results to single isolated coefficients $A(1,m)$ of $\text{GL}(3)$ Maass forms. This seems to be a novel result, and adds to the very sparse literature on sign changes of sequences associated to $\text{GL}(3)$ objects. Murty and Meher recently proved a general sign change result for $\text{GL}(n)$ objects which is similar in feel.</p>
<p>As a final application, we also consider sign changes of partial sums of $\nu$-normalized coefficients. Let
\begin{equation}
S_f^\nu(X) := \sum_{n \leq X} \frac{a(n)}{n^{\nu}}.
\end{equation}
As $\nu$ gets larger, the individual coefficients $a(n)n^{-\nu}$ become smaller. So one should expect the sign-change behaviour of $\{S_f^\nu(n)\}$ to change with $\nu$. And in particular, as $\nu$ gets very large, the number of sign changes of $S_f^\nu$ should decrease.</p>
<p>Interestingly, in the case of holomorphic cusp forms of weight $k$, we are able to show that there are sign changes of $S_f^\nu(n)$ in intervals even for normalizations $\nu$ a bit above $\nu = \frac{k-1}{2}$. This is particularly interesting as $a(n) \ll n^{\frac{k-1}{2} + \epsilon}$, so for $\nu > \frac{k-1}{2}$ the coefficients are <em>decreasing</em> with $n$. We are able to show that when $\nu = \frac{k-1}{2} + \frac{1}{6} - \epsilon$, the sequence $\{S_f^\nu(n)\}$ has at least one sign change for $n$ in $[X, 2X)$ for all sufficiently large $X$.</p>
<p>It may help to consider a simpler example to understand why this is surprising. Consider the classic example of a sequence $b(n)$, where $b(n) = 1$ or $b(n) = -1$, randomly, with equal probability. Then the expected size of the sums of $b(n)$ is about $\sqrt n$. This is an example of <em>square-root cancellation</em>, and such behaviour is a common point of comparison. Similarly, the number of sign changes of the partial sums of $b(n)$ is also expected to be about $\sqrt n$.</p>
<p>Suppose now that $b(n) = \frac{\pm 1}{\sqrt n}$. If the first term is $1$, then it takes more than the second term being negative to make the overall sum negative. And if the first two terms are positive, then it would take more than the following three terms being negative to make the overall sum negative. So sign changes of the partial sums are much rarer. In fact, they're exceedingly rare, and one might barely detect more than a dozen through computational experiment (although one should still expect infinitely many).</p>
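<p>This contrast is easy to observe numerically. The following sketch (with an arbitrary fixed seed) counts sign changes of the partial sums in both cases; typically the flat $\pm 1$ walk changes sign far more often:</p>

```python
import math
import random

def count_sign_changes(terms):
    """Count sign changes in the sequence of partial sums of `terms`."""
    changes, total, prev_sign = 0, 0.0, 0
    for t in terms:
        total += t
        sign = (total > 0) - (total < 0)
        if sign != 0:
            if prev_sign != 0 and sign != prev_sign:
                changes += 1
            prev_sign = sign
    return changes

random.seed(1)  # arbitrary seed, for reproducibility
signs = [random.choice([-1, 1]) for _ in range(10_000)]
flat = signs
decaying = [s / math.sqrt(k) for k, s in enumerate(signs, start=1)]

# Typically the flat walk shows many more sign changes than the
# walk with decaying terms +-1/sqrt(n).
print(count_sign_changes(flat), count_sign_changes(decaying))
```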
<p>This regularity, in spite of the decreasing size of the individual coefficients $a(n)n^{-\nu}$, suggests an interesting regularity in the sign changes of the individual $a(n)$. We do not know how to understand or measure this effect or its regularity, and for now it remains an entirely qualitative observation.</p>
<p>For more details and specific references, see <a href="http://arxiv.org/abs/1606.00067">the paper</a> on the arXiv.</p>https://davidlowryduda.com/paper-sign-changes-of-coefficients-and-sums-of-coefficients-of-cusp-formsFri, 03 Jun 2016 03:14:15 +0000Math 42 Spring 2016 Student Showcasehttps://davidlowryduda.com/math-42-spring-2016-student-showcaseDavid Lowry-Duda<p>This spring, I taught Math 42: An Introduction to Elementary Number Theory at Brown University. An important aspect of the course was the final project. In these projects, students either followed up on topics that interested them from the semester, or chose and investigated topics related to number theory. Projects could be done individual or in small groups.</p>
<p>I thought it would be nice to showcase some excellent student projects from my class. Most of the projects were quite good, and some showed extraordinary effort. Some students really dove in and used this as an opportunity to explore and digest a topic far more thoroughly than could possibly be expected from an introductory class such as this one. With the students' permission, I've chosen five student projects (in no particular order) for a blog showcase (impressed by similar <a href="http://www.scottaaronson.com/blog/?p=515">sorts</a> of showcases from Scott Aaronson).</p>
<ul>
<li><p><a href="/wp-content/uploads/2016/05/NunezShaw-FinalProject.pdf"><em>Factorization Techniques</em>, by Elvis Nunez and Chris Shaw</a>. In this project, Elvis and Chris look at Fermat Factorization, which looks to factor $n$ by expressing $n = a^2 - b^2$. Further, they investigate improvements to Fermat's Algorithm by Dixon and Kraitchik. Following this line of investigation leads to the development of the modern quadratic sieve and factor base methods of factorization.</p></li>
<li><p><a href="/wp-content/uploads/2016/05/Riemer-FinalProject.pdf"><em>Pseudoprimes and Carmichael Numbers</em>, by Emily Riemer</a>. Fermat's Little Theorem is one of the first "big idea" theorems we encounter in the course, and we came back to it again and again throughout. Emily explored Fermat's Little Theorem as a primality test, leading to pseudoprimes, strong pseudoprimes, and Carmichael numbers. [As an aside, one of her references concerning Carmichael numbers was a set of notes from an algebraic number theory class taught by Matt Baker, who first got me interested in number theory].</p></li>
<li><p><a href="/wp-content/uploads/2016/05/LahnSpiegel-FinalProject.pdf"><em>Continued Fractions and Pell's Equation</em>, by Max Lahn and Jonathan Spiegel</a>. As it happened, I did not have time to teach continued fractions in the course. So Max and Jonathan decided to look at them on their own. They explore some ideas related to the convergence of continued fractions and see how one uses continued fractions to solve Pell's Equation.</p></li>
<li><p><a href="/wp-content/uploads/2016/05/HuLong-FinalProject.pdf"><em>Quantum Computing</em>, by Edward Hu and Chris Long</a>. Edward and Chris explore quantum computing with particular emphasis towards gaining some idea of how Shor's factorization algorithm works. For some of the more complicated ideas, like the quantum Fourier transform, they make use of heuristic and analogy to convey the main ideas.</p></li>
<li><p><a href="/wp-content/uploads/2016/05/GroosSchudrowitzBerglund-FinalProject.pdf"><em>Fermat's Last Theorem</em>, by Dylan Groos, Natalie Schudrowitz, and Kenneth Berglund</a>. Dylan, Natalie, and Kenneth provide a historical look at attacks on Fermat's Last Theorem. They examine proofs for $n=4$ and Sophie Germain's remarkable advances. They also touch on elliptic curves and modular forms, hinting at some of the deep ideas lying beneath the surface.</p></li>
</ul>https://davidlowryduda.com/math-42-spring-2016-student-showcaseWed, 25 May 2016 03:14:15 +0000Math 42 - Concluding Remarkshttps://davidlowryduda.com/math-42-concluding-remarksDavid Lowry-Duda<p>As this semester draws to an end, it is time to reflect on what we've done. What worked well? What didn't work well? What would I change if I taught this course again?</p>
<h2>Origins of the course</h2>
<p>This course was created by my advisor, Jeff Hoffstein, many years ago in order to offer a sort of bridge between high school math and "real math." The problem is that in primary and secondary school, students are not exposed to the grand, modern ideas of mathematics. They are forced to drill exercises and repeat formulae. Often, the greatest exposure to mathematical reasoning is hidden among statements of congruent triangles and Side-Angle-Side theorems. Most students arrive at university thinking that math is over and done with. What else could there possibly remain to do in math?</p>
<p>Math 42 was designed to attract nonscience majors, especially those not intending to pursue the standard calculus sequence, and to convince them to study some meaningful mathematics. Ideally, students begin to think mathematically and experience some of the thrill of independent intellectual discovery.</p>
<p>It is always a bit surprising to me that so many students find their way into this class each spring. This class does not have a natural lead-in, it satisfies no prerequisites, and it is not in the normal track for math concentrators. One cannot even pretend to make the argument that number theory is a useful day-to-day skill. Yet number theory has a certain appeal... there are so many immediate and natural questions. It is possible to get a hint that there is something deep going on within the first two classes.</p>
<p>Further, I think there is something special about the first homework assigned in a course. Homeworks send a really strong signal about the content of a course. I want this course to be more about the students exploring, asking questions, and experimenting than about repeating the same old examples and techniques from the class. So the first several questions on the first homework are dedicated to open-ended exploration.</p>
<p>There are side effects to this approach. Open ended exploration is uncertain, and therefore scary. I hope that it's intriguing enough (and different enough) that students push through initial discomfort, but I'm acutely aware that this can be an intimidating time. Perhaps in another time, students were more comfortable with uncertainty — but that is a discussion for later. I'm pretty sure that many fears are assuaged after the first week, once the idea sinks in that it is okay to not know what you're going to learn before you learn it. In fact, it's more fun that way! (One also learns much more rapidly).</p>
<p>My approach to this course is strongly influenced by my experiences teaching number theory to high schoolers as part of the Summer@Brown program for the past several years. During my first summer teaching that course, I co-taught it with Jackie Anderson, who is an excellent and thoughtful instructor. I also strongly draw on the excellent textbook <em>A Friendly Introduction to Number Theory</em> by Joe Silverman, written specifically for this course about 20 years ago.</p>
<h2>Trying final projects</h2>
<p>I am still surprised each time I teach this course. I tried one major new thing in this course that I've thought about for a long time — students were required to do a final project on a topic of their choice. It turns out that this is a great idea, and I would absolutely do it again. The paths available to the students really open up in the second half of the course. With a few basic tools (in particular once we've mastered modular arithmetic, linear congruences, and the Chinese Remainder Theorem) the number of deeply interesting <strong>and</strong> accessible topics is huge. There is some great truth here, about how a few basic structural tools allow one to explore all sorts of new playgrounds.</p>
<p>A final project allows students to realize this for themselves. It also fits in with the motif of experimentation and self-discovery that pervades the whole course. The structural understanding from the first half of the course is enough to pursue some really interesting, and occasionally even useful, topics. More importantly, they learn that they are capable of finding, learning, and understanding complex mathematical ideas on their own. And usually, they enjoy it, since it's fun to learn cool things.</p>
<p>For various reasons, I had thought it would be a good idea to offer students an alternative to final projects, in the form of a somewhat challenging final exam. In hindsight, I now think this was not such a good idea. Students who do not perform final projects miss out on a sort of representative capstone experience in the course.</p>
<p>There are a few other things that I would have done differently, if given the chance. I would ask students to be on the lookout for topics and groups much earlier, perhaps about a third of the way into the semester (instead of about two thirds of the way into the course). My students did an extraordinary job at their projects this semester. But I think with some additional rumination time, more groups would pursue projects based on their own particular interests. Or perhaps not — I'll see next time.</p>
<p>Several of my students who really dove into their final projects have agreed to have their work showcased here (which is something that I'll get back to in a later note). This means that students in later courses will have something to refer back to. [Whether this is a good thing remains to be seen, but I suspect it's for the better].</p>
<h2>Interesting correlations</h2>
<p>In these concluding notes, I often like to try to draw correlations between certain patterns of behaviour and success in the course. I'm very often interested in the question of how early on in a course one can accurately predict a final grade. In calculus courses, it seems one can very accurately predict a final grade using only the first midterm grade.</p>
<p>In this course, fewer such correlations are meaningful. Most notably, a large percentage of the class took the course as pass/fail. [I can't blame them, as this course is supposed to draw students a bit out of their comfort zone into a topic they know little about]. This distorts the entire incentive structure of the course in relation to other demands of college life.</p>
<p>There is a very strong correlation between taking the class for a grade and receiving a high numeric grade at the end. I think this comes largely from two causes: students who are more confident with the subject material coming into the course decide to take it for a grade, and then perform in line with their expectations; and taking a class for pass/fail creates an incentive structure with high emphasis on learning enough material to pass, but not necessarily mastering all the material.</p>
<p>In sharp contrast, in this course there is a pretty strong correlation between homework grades and overall performance. While this may seem obvious, it has absolutely not been my experience in calculus courses: there, poor homework grades correlate extremely strongly with poor final grades, but strong homework grades have had almost no correlation with strong performance.</p>
<p>I think that a reason why homework might be a better predictor in this course is that homework is harder. There are always open-ended problems, and every homework had at least one or two problems designed to take a lot of experimentation and thought. Students who did well on the homework put in that experimentation and thinking time, reflecting better study habits, higher commitment, and more grit (like in <a href="https://www.ted.com/talks/angela_lee_duckworth_the_key_to_success_grit?language=en">this TED talk</a>).</p>
<p>Finally, there is an extremely high correlation with students attending office hours and strong performance in the course. It will always remain a mystery to me why more students don't take advantage of office hours. [It might be that this is also a measure of other characteristics, such as commitment, study habits, and grit].</p>
<h2>Don't forget the coffee</h2>
<p>This was my favorite course I've taught at Brown. On the flipside, I think my students enjoyed this class more than any other class I've taught at Brown. This is one of those courses that rejuvenates the soul.</p>
<p>While returning home after one of the final classes, I flipped on NPR and listened to Innovation Hub. On the program was Steven Strogatz, a well-known mathematician and expositor, talking about his general dislike of the Calculus-Is-The-Pinnacle-Of-Mathematics style approach that is so common today in high schools and colleges. The program in particular can be found <a href="http://blogs.wgbh.org/innovation-hub/2016/5/6/full-show-status-quo/">here</a>.</p>
<p>He argues that standard math education is missing some important topics, especially related to financial numeracy. But he also argues that the current emphasis is not on the beauty or attraction of mathematics, but on a very particular set of applications [and in particular, towards creating rockets].</p>
<p>While this course isn't perfect, I do think that it is the sort of course that Strogatz would approve of — somewhat like a Survey of Shakespeare course, but in mathematics.</p>https://davidlowryduda.com/math-42-concluding-remarksFri, 20 May 2016 03:14:15 +0000Math 420 - Supplement on Gaussian integershttps://davidlowryduda.com/math-420-supplement-on-gaussian-integersDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/math-420-supplement-on-gaussian-integersSun, 10 Apr 2016 03:14:15 +0000A brief notebook on cryptographyhttps://davidlowryduda.com/a-brief-notebook-on-cryptographyDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/a-brief-notebook-on-cryptographyTue, 22 Mar 2016 03:14:15 +0000Motivating a change of variables in $\int \log x / (x^2 + ax + b) dx$https://davidlowryduda.com/motivating-a-change-of-variables-in-int-log-x-x2-ax-b-dxDavid Lowry-Duda<p>We will consider the improper definite integral ${\int_0^\infty \frac{\log x}{x^2 + ax + b}dx}$ for ${a,b > 0}$ (to guarantee convergence). This can be done in many ways, but the purpose of this brief note is to motivate a particular way of writing integrals to look for symmetries to exploit while evaluating them.</p>
<p>Before we begin, let us note something special about integrals of the form</p>
<p>$$ \int_0^\infty f(x) \frac{dx}{x}. \tag{1}$$
Under the change of variables ${x \mapsto \frac{1}{x}}$, we see that</p>
<p>$$ \int_0^\infty f(x) \frac{dx}{x} = \int_0^\infty f(1/x) \frac{dx}{x}. \tag{2}$$
And under the change of variables ${x \mapsto \alpha x}$, we see that</p>
<p>$$ \int_0^\infty f(x) \frac{dx}{x} = \int_0^\infty f(\alpha x) \frac{dx}{x}. \tag{3}$$
In other words, the integral is <em>almost</em> invariant under these changes of variables — only the integrand ${f(x)}$ is affected while the bounds of integration and the measure ${\frac{dx}{x}}$ remain unaffected.</p>
<p>In fact, the measure ${\frac{dx}{x}}$ is the Haar measure associated to the multiplicative group ${\mathbb{R}_+}$, so this invariance is not an accident. When working with integrals over the positive real line, it can often be fruitful to explicitly write the integral against ${\frac{dx}{x}}$ before attempting symmetry arguments.</p>
<p>Here, we rewrite our integral as</p>
<p>$$ \int_0^\infty \frac{\log x}{x + a + \frac{b}{x}} \frac{dx}{x}. \tag{4}$$
The denominator is clearly invariant under the map ${x \mapsto \frac{b}{x}}$, while ${\log x}$ becomes ${\log(\frac{b}{x}) = \log b - \log x}$. Along with the special property above, this means that</p>
<p>$$ \int_0^\infty \frac{\log x}{x^2 + ax + b}dx = \int_0^\infty \frac{\log b - \log x}{x^2 + ax + b} dx. \tag{5}$$
Adding our original integral to both sides, we see that</p>
<p>$$ \int_0^\infty \frac{\log x}{x^2 + ax + b} dx = \frac{\log b}{2} \int_0^\infty \frac{1}{x^2 + ax + b}dx. \tag{6}$$</p>
<p>The remaining integral is routine, if not entirely pleasant, to evaluate. Generally, one can complete the square and then argue either by partial fractions or by trig substitution (alternatively, always use partial fractions and allow some complex numbers; or use a hyperbolic trig substitution; etc.). Let ${c = b - \frac{a^2}{4}}$, which arises naturally when completing the square in the denominator. If ${c = 0}$, then the change of variables ${x \mapsto x - \frac{a}{2}}$ transforms our integral into</p>
<p>$$ \frac{\log b}{2} \int_{a/2}^\infty \frac{dx}{x^2} = \frac{\log b}{a}. \tag{7}$$</p>
<p>When ${c \neq 0}$, performing the change of variables ${x \mapsto \sqrt{\lvert c \rvert} x - \frac{a}{2}}$ transforms our integral into</p>
<p>$$ \frac{\log b}{2\sqrt{\lvert c \rvert}} \int_{\frac{a}{2\sqrt{\lvert c \rvert}}}^\infty \frac{dx}{x^2 + 1} = \frac{\log b}{2\sqrt{\lvert c \rvert}} \left(\frac{\pi}{2} - \arctan\left(\frac{a}{2\sqrt{\lvert c \rvert}}\right)\right) \tag{8}$$
when ${c > 0}$, or</p>
<p>$$ \frac{\log b}{2\sqrt{\lvert c \rvert}} \int_{\frac{a}{2\sqrt{\lvert c \rvert}}}^\infty \frac{dx}{x^2 - 1} = \frac{\log b}{4\sqrt{\lvert c \rvert}} \log\frac{a + 2\sqrt{\lvert c \rvert}}{a - 2\sqrt{\lvert c \rvert}} \tag{9}$$
when ${c < 0}$.</p>https://davidlowryduda.com/motivating-a-change-of-variables-in-int-log-x-x2-ax-b-dxSat, 12 Mar 2016 03:14:15 +0000Math 420 - Second Weekhttps://davidlowryduda.com/math-420-second-week-homeworkDavid Lowry-Duda<p>Firstly, we have three administrative notes.</p>
<ol>
<li>I've posted the second homework set. You can find it <a href="/wp-content/uploads/2016/02/HW2.pdf">here</a>.</li>
<li>I've also written solutions to the first homework set. You can find those <a href="/wp-content/uploads/2016/02/HW1_sols.pdf">here</a>.</li>
<li>After feedback from the first week, I'm setting stable office hours. My office hours will be from 1-3pm on Monday and 2:30-3:30pm on Tuesday (immediately following our class). [Or we can set up an appointment].</li>
</ol>
<p>I'll see you on Tuesday, when we will continue to talk about the Euclidean Algorithm and greatest common divisors.</p>https://davidlowryduda.com/math-420-second-week-homeworkThu, 04 Feb 2016 03:14:15 +0000Math 420 - First week homework and referenceshttps://davidlowryduda.com/math-420-first-week-homework-and-referencesDavid Lowry-Duda<p>Firstly, there are three administrative notes.</p>
<ol>
<li>I've posted the first homework set. This is due on Thursday, and you can find it <a href="/wp-content/uploads/2016/01/HW1.pdf">here</a>.</li>
<li>I haven't set official office hour times yet. But I will have office hours on Monday from noon to 2pm on Monday, 1 Feb 2016, in my office at the Science Library.</li>
<li>If you haven't yet, I encourage you to read the <a href="/wp-content/uploads/2016/01/math42Spring2016syllabus.pdf">syllabus</a>.</li>
</ol>
<p>We mentioned several good and interesting "number theoretic" problems in class
today. I'd like to remind you of some of them, and link you to some additional
places for information.</p>
<h3>Pythagorean Theorem</h3>
<p>We've found all primitive Pythagorean triples in integers, which is a very nice theorem for an hour. But I also mentioned some of the history of the Pythagorean Theorem and the significance of numbers and number theory to the Greeks.</p>
<p>I told the class a story about how the Pythagorean student who revealed that there were irrational numbers was stoned. This is apocryphal. In fact, there is little exact record, but his name was Hippasus and <a href="https://en.wikipedia.org/wiki/Hippasus" target="_blank" rel="noopener">it is more likely that he was drowned</a> for releasing this information.</p>
<p>For this and other reasons, the Pythagorean school of thought <a href="https://en.wikipedia.org/wiki/Pythagoreanism#Two_schools" target="_blank" rel="noopener">split into two sects</a>, one from Pythagoras and one from Hippasus.</p>
<h3>Goldbach's Conjecture</h3>
<p>Is it the case that every even integer greater than $2$ is the sum of two primes? We think so. But we do not know.</p>
<p>I mentioned the Ternary Goldbach Conjecture, also known as the Weak Goldbach Conjecture, which says that every odd integer greater than $5$ is the sum of three primes. This was proved very recently. If you're interested in what a mathematical paper looks like, you can give <a href="http://arxiv.org/abs/1312.7748" target="_blank" rel="noopener">this paper</a> a look. [Do not expect to be able to understand the paper — but it is interesting what sorts of tools can be used towards number theory]</p>
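<p>Both forms of the conjecture are easy to test for small numbers. Here is a quick brute-force check of the binary version up to $10^4$; this snippet is my own illustration and is not part of the course notes.</p>

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: return the set of primes <= n."""
    sieve = [False, False] + [True] * (n - 1)
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for m in range(p * p, n + 1, p):
                sieve[m] = False
    return {i for i, is_p in enumerate(sieve) if is_p}

P = primes_up_to(10 ** 4)
# Every even n in [4, 10^4] should be a sum of two primes.
print(all(any(n - p in P for p in P) for n in range(4, 10 ** 4 + 1, 2)))
```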
<h3>Fermat's Last Theorem</h3>
<p>Are there nontrivial integer solutions to $X^n + Y^n = Z^n$ where $n \geq 3$?</p>
<p>This is one of the most storied and studied problems in mathematics. I think this has to do with how simple the statement looks. Further, we managed to fully classify all solutions when $n = 2$ in one class period. It doesn't seem like it should be too hard to extend that to other exponents, does it?</p>
<p>If time and interest permits, we will return to this topic at the end of the course. There is no way that we could present a proof, or even fully motivate the proof. But we might be able to say a few words about how progress towards the theorem spurred and created mathematics, and maybe we can give a hint of the breadth of the ideas used to finally produce a proof.</p>
<h3>Twin Prime Conjecture</h3>
<p>Are there infinitely many primes $p$ such that $p+2$ is also prime? We think so, but we don't know. Two years ago, we had absolutely no idea at all. Then Yitang Zhang had a brilliant idea (and not much later a graduate student named James Maynard had another brilliant idea) which allowed some sort of progress.</p>
<p>This culminated with the Polymath8 Project <a href="http://michaelnielsen.org/polymath1/index.php?title=Bounded_gaps_between_primes" target="_blank" rel="noopener">Bounded Gaps Between Primes</a>. Math can be a social sport, and the polymath projects are massively collaborative online and open projects towards math problems. They're still a bit new, and a bit experimental. But Polymath8 is certainly extremely successful.</p>
<p>What is known is that there exists at least one even number $H \leq 246$ such that $p$ and $p + H$ are both prime infinitely often. In fact, James Maynard showed that you can make more complicated ensembles of prime distances.</p>
<p>The ideas that led to this result can likely be sharpened to give better results, but actually proving that there are infinitely many twin primes is almost certainly going to require a brand new idea and methodology.</p>
<p>The best related result comes from Chinese mathematician Chen Jingrun, who <a href="https://en.wikipedia.org/wiki/Chen%27s_theorem" target="_blank" rel="noopener">proved that</a> every sufficiently large even integer can be written either as a sum of two primes, or as a sum of a prime and a number with exactly two prime factors. Although this seems very close, it is also likely that this idea cannot be sharpened further.</p>
<h3>Writing Numbers as Sums of Squares, Cubes, and So On</h3>
<p>Can every positive integer be written as the sum of three squares? What about four squares? More generally, is there a number $n$ so that every positive integer can be written as a sum of at most $n$ squares?</p>
<p>Similarly, is there a number $n$ so that every positive integer can be written as a sum of at most $n$ cubes? What about fourth powers?</p>
<p>These problems are all associated to something called <strong>Waring's Problem</strong>, about which much is known and much is unknown.</p>
<p>We also asked which primes can be written as a sum of two squares. Although we might have a hard time finding those primes right now, you might try to show that if $p$ is a prime that can be written as a sum of two squares, then either $p$ is $2$, or $p = 4z + 1$ for some integer $z$. The reasoning is very similar to some of the reasoning done in class today.</p>
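<p>The pattern in that exercise can be seen by computer search. The little script below (my own sketch, not part of the course materials) confirms that every prime below $1000$ expressible as a sum of two squares is either $2$ or of the form $4z + 1$.</p>

```python
def is_prime(n):
    """Trial division primality test, fine for small n."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def is_sum_of_two_squares(n):
    """Check whether n = a^2 + b^2 for some integers a, b >= 0."""
    a = 0
    while a * a <= n:
        b = int((n - a * a) ** 0.5)
        if b * b == n - a * a:
            return True
        a += 1
    return False

two_square_primes = [p for p in range(2, 1000)
                     if is_prime(p) and is_sum_of_two_squares(p)]
print(all(p == 2 or p % 4 == 1 for p in two_square_primes))
```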
<h3>Max's Conjecture</h3>
<p>For primitive Pythagorean triples $(a,b,c)$ with $a^2 + b^2 = c^2$, we showed that we can restrict our attention to cases where $a$ is odd, $b$ is even, and $c$ is odd. Max conjectured that those $c$ on the right are always of the form $4k + 1$ for some $k$, or equivalently $c$ is always an integer that leaves remainder $1$ after being divided by $4$.</p>
<p>We didn't return to this in class, but we can now. First, note that since $c$ is odd, we can write $c$ as $2z + 1$ for some $z$. But we can do more. We can actually write $c$ as either $4z + 1$ or $4z + 3$. <em>(Can you prove this?)</em></p>
<p>Max conjectured that it is always the case that $c = 4z + 1$. So we might ask, "What if $c = 4z + 3$?"</p>
<p>Writing $a = 2x + 1$ and $b = 2y$, we get the equation</p>
<p>$$ \begin{align}
a^2 + b^2 &= c^2 \\
4x^2 + 4x + 1 + 4y^2 &= 16z^2 + 24z + 9, \end{align}$$</p>
<p>which can be rewritten as
$$ 4x^2 + 4x + 4y^2 = 16z^2 + 24z + 8.$$
Dividing by $4$ gives $x^2 + x + y^2 = 4z^2 + 6z + 2$. Since $x^2 + x = x(x+1)$ is always even, $y$ must be even, and so $4$ divides $b = 2y$. By itself this is not yet a contradiction; the non-primitive triple $(9, 12, 15)$ really does have $c = 15 = 4 \cdot 3 + 3$, so primitivity has to enter the argument. In a primitive triple we have $a^2 = c^2 - b^2 = (c - b)(c + b)$, where $c - b$ and $c + b$ are coprime odd numbers, so each must itself be an odd perfect square. Every odd square leaves remainder $1$ upon division by $8$, so $2c = (c - b) + (c + b) \equiv 2 \bmod 8$, which forces $c \equiv 1 \bmod 4$. <em>(Fill in the details; the reasoning is very similar to some questions we asked in class.)</em></p>
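<p>Max's conjecture can also be probed empirically. The following sketch (mine, not from the course) generates primitive triples via Euclid's formula ${a = m^2 - n^2}$, ${b = 2mn}$, ${c = m^2 + n^2}$ and confirms the pattern:</p>

```python
from math import gcd

def primitive_triples(limit):
    """Primitive Pythagorean triples (a, b, c) with a odd, b even, c <= limit,
    via Euclid's formula with m > n coprime and of opposite parity."""
    for m in range(2, int(limit ** 0.5) + 1):
        for n in range(1, m):
            if (m - n) % 2 == 1 and gcd(m, n) == 1:
                c = m * m + n * n
                if c <= limit:
                    yield m * m - n * n, 2 * m * n, c

triples = list(primitive_triples(10 ** 4))
# Every hypotenuse should leave remainder 1 upon division by 4.
print(all(c % 4 == 1 for _, _, c in triples))
```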
<p>So Max's Conjecture is true, and every number appearing as $c$ in a primitive Pythagorean triple is of the form $c = 4z + 1$ for some integer $z$.</p>https://davidlowryduda.com/math-420-first-week-homework-and-referencesThu, 28 Jan 2016 03:14:15 +0000Introduction to Number Theoryhttps://davidlowryduda.com/introduction-to-number-theoryDavid Lowry-Duda<h1>Introduction to Number Theory - Spring 2016</h1>
<p>Welcome to the course website for Math 420 An Introduction to Number Theory at
Brown University, spring 2016. On this page, you can find links to the current
syllabus and any relevant new information or notes about the course.</p>
<h2>Basic Course Information</h2>
<ul>
<li>Instructor: David Lowry-Duda</li>
<li>Location: CIT 165</li>
<li>Times: Tuesday and Thursday 1:00-2:20 PM</li>
<li>Instructor Email: djlowry@math.brown.edu</li>
</ul>
<p>The textbook for the course is A Friendly Introduction to Number Theory by
Joseph Silverman, fourth edition. The book is very approachable and you should
find it very different when compared to primary and secondary school math
texts.</p>
<p>A copy of the syllabus for the course can be found <a
href="/wp-content/uploads/2016/01/math42Spring2016syllabus.pdf">here</a>.</p>
<p>If you have any questions, comments, or concerns, feel free to ask me during or
after class, during office hours, or by commenting here or through email.</p>
<h2>Additional Notes and Materials</h2>
<p><a href="/?p=1912">First week note</a>, plus some references on statements made in class.</p>
<p><a href="/?p=1923">Second week announcements</a>.</p>
<p><a href="/?p=1990">Supplemental Note on the Gaussian Integers</a>.</p>
<p><a href="/?p=2005">Supplemental Note on the Gaussian Integers II (About Gaussian Primes and writing $p = a^2 + b^2$).</a></p>
<p><a href="/?p=2035">Student Project Showcase</a>.</p>
<h2>Homework</h2>
<p><a href="/wp-content/uploads/2016/01/HW1.pdf">Homework #1</a>, Due Thursday 4 February 2016. [<a href="/wp-content/uploads/2016/02/HW1_sols.pdf">Solutions</a>]</p>
<p><a href="/wp-content/uploads/2016/02/HW2.pdf">Homework #2</a>, Due Thursday 11 February 2016. [<a href="/wp-content/uploads/2016/01/HW2_sols.pdf">Solutions</a>]</p>
<p><a href="/wp-content/uploads/2016/01/HW3.pdf">Homework #3</a>, Due Thursday 18 February 2016. [<a href="/wp-content/uploads/2016/01/HW3_sols.pdf">Solutions</a>]</p>
<p><a href="/wp-content/uploads/2016/01/HW4.pdf">Homework #4</a>, Due Thursday 25 February 2016, right before the midterm.</p>
<p>No Homework due on 3 March, as you had a midterm. [<a href="/wp-content/uploads/2016/01/midterm_sols.pdf">Solutions</a> to midterm].</p>
<p><a href="/wp-content/uploads/2016/01/HW5.pdf">Homework #5</a>, Due Thursday 10 March 2016. [<a href="/wp-content/uploads/2016/01/HW5_sols.pdf">Solutions</a>]</p>
<p><a href="/wp-content/uploads/2016/01/HW6.pdf">Homework #6</a>, Due Thursday 24 March 2016. [<a href="/wp-content/uploads/2016/01/HW6_sols.pdf">Solutions</a>]</p>
<p><a href="/wp-content/uploads/2016/01/HW7.pdf">Homework #7</a>, Due Thursday 7 April 2016. [<a href="/wp-content/uploads/2016/01/HW7_sols.pdf">Solutions</a>]</p>
<p><a href="/wp-content/uploads/2016/01/FinalProjectDetails.pdf">Final Project Initial Notes</a>, Action Necessary by Monday 4 April 2016.</p>
<p><a href="/wp-content/uploads/2016/01/HW8.pdf">Homework #8</a>, Due Thursday 14 April 2016. [<a href="/wp-content/uploads/2016/01/HW8_sols.pdf">Solutions</a>]</p>
<p><a href="/wp-content/uploads/2016/01/FinalProjectDescription.pdf">Final Project Description</a>. Papers due Tuesday 3 May 2016.</p>
<p><a href="/wp-content/uploads/2016/01/HW9.pdf">Homework #9</a>, Due Thursday 21 April 2016, right before the midterm. This also includes some notes for the midterm.</p>
<p><a href="/wp-content/uploads/2016/01/HWX.pdf">Totally Optional Set of Problems #10</a>. I won't collect these.</p>
<p><a href="/wp-content/uploads/2016/01/presentation_order.pdf">Final Project Presentation Order</a>.</p>https://davidlowryduda.com/introduction-to-number-theoryWed, 20 Jan 2016 03:14:15 +0000Paper: Short-interval averages of sums of Fourier coefficients of Cusp Formshttps://davidlowryduda.com/papershort-interval-averages-of-sums-of-fourier-coefficients-of-cusp-formsDavid Lowry-Duda<p>This is joint work with Thomas Hulse, Chan Ieong Kuan, and Alex Walker, and is
a sequel to our previous paper.</p>
<p>We have just uploaded a <a href="http://arxiv.org/abs/1512.05502">paper</a> to
the arXiv on estimating the average size of sums of Fourier coefficients of
cusp forms over short intervals. (And by "just" I mean before the holidays).
This is the second in a trio of papers that we will be uploading and submitting
in the near future.</p>
<p>Suppose ${f(z)}$ is a weight ${k}$ holomorphic cusp form on
$\text{GL}_2$ with Fourier expansion</p>
<p>$$f(z) = \sum_{n \geq 1} a(n) e(nz).$$</p>
<p>Denote the sum of the first $n$ coefficients of $f$ by
$$S_f(n) := \sum_{m \leq n} a(m). \tag{1}$$
We consider upper bounds for the second moment of ${S_f(n)}$ over short
intervals.</p>
<p>In our earlier work, we mentioned the conjectured bound
$$ S_f(X) \ll X^{\frac{k-1}{2} + \frac{1}{4} + \epsilon}, \tag{2}$$
which we call the 'Classical Conjecture.' There has been some minor progress
towards the classical conjecture in recent years, but ignoring subpolynomial
bounds the best known result is of shape
$$ S_f(X) \ll X^{\frac{k-1}{2} + \frac{1}{3}}. \tag{3}$$</p>
<p>One can also consider how ${S_f(n)}$ behaves on average. Chandrasekharan
and Narasimhan [CN] proved that the Classical Conjecture is true on average by
showing that <a name="eqOnAverageSquares"></a>
$$ \sum_{n \leq X} \lvert S_f(n) \rvert^2 = CX^{k - 1 + \frac{3}{2}} + B(X), \tag{4}$$
where ${B(X)}$ is an error term. Later, Jutila [Ju] improved this result
to show that the Classical Conjecture is true on average over short intervals
of length ${X^{\frac{3}{4} + \epsilon}}$ around ${X}$ by showing
$$ X^{-(\frac{3}{4} + \epsilon)}\sum_{\lvert n - X \rvert < X^{\frac{3}{4} + \epsilon}}
\lvert S_f(n) \rvert^2 \ll X^{\frac{k-1}{2} + \frac{1}{4}}. \tag{5}$$
In fact, Jutila proved a much more complicated set of bounds, but this bound
can be read off from his work.</p>
<p>In our previous paper, we introduced the Dirichlet series
$$ D(s, S_f \times S_f) := \sum_{n \geq 1} \frac{S_f(n) \overline{S_f(n)}}{n^{s + k - 1}} \tag{6}$$
and provided its meromorphic continuation. In this paper, we use the analytic
properties of ${D(s, S_f \times S_f)}$ to prove a short-intervals result
that improves upon the results of Jutila and Chandrasekharan and Narasimhan. In
short, we show the Classical Conjecture holds on average over short intervals
of width ${X^{\frac{2}{3}} (\log X)^{\frac{2}{3}}}$. More formally, we
prove the following.</p>
<div class="theorem">
<p>Suppose either that ${f}$ is a Hecke eigenform or that ${f}$ has
real coefficients. Then
\begin{equation*}
\frac{1}{X^{\frac{2}{3}} (\log
X)^{\frac{2}{3}}} \sum_{\lvert n - X \rvert < X^{\frac{2}{3}} (\log
X)^{\frac{2}{3}}} \lvert S_f(n) \rvert^2 \ll X^{\frac{k-1}{2} + \frac{1}{4}}.
\end{equation*}</p>
</div>
<p>We actually prove an ever so slightly stronger statement. Suppose ${y}$
is the solution to ${y (\log y)^2 = X}$. Then we prove that the Classical
Conjecture holds on average over intervals of width ${X/y}$ around ${X}$.</p>
<p>We also demonstrate improved bounds for short-interval estimates of width as
low as ${X^\frac{1}{2}}$.</p>
<p>There are two major obstructions to improving our result. Firstly, we morally
use the convexity result in the ${t}$-aspect for the size of ${L(\frac{1}{2} + it, f\times f)}$. If we insert the bound from the Lindelöf
Hypothesis into our methodology, the corresponding bounds are consistent with
the Classical Conjecture.</p>
<p>Secondly, we struggle with bounds for the spectral component $$ \sum_j
\rho_j(1) \langle \lvert f \rvert^2 y^k, \mu_j \rangle \frac{\Gamma(s -
\frac{3}{2} - it_j) \Gamma(s - \frac{3}{2} + it_j)}{\Gamma(s-1) \Gamma(s + k -
1)} L(s - \frac{3}{2}, \mu_j) V(X, s) \tag{7}$$
where ${\mu_j}$ are a basis of Maass forms and ${V(X,s)}$ is a term
of rapid decay. For our analysis, we end up bounding by absolute values and are
unable to capture cancellation coming from oscillation. An argument successfully capturing
some sort of stationary phase could significantly improve our bound.</p>
<p>Supposing these two obstructions were handled, the limit of our methodology
would be to show the Classical Conjecture in short-intervals of width ${X^{\frac{1}{2}}}$ around ${X}$. This would lead to better bounds on
individual ${S_f(X)}$ as well, but requires significant improvement.</p>
<p>For more details and specific references, see the paper on the arXiv.</p>https://davidlowryduda.com/papershort-interval-averages-of-sums-of-fourier-coefficients-of-cusp-formsSat, 02 Jan 2016 03:14:15 +0000Paper: The second moments of sums of Fourier coefficients of cusp formshttps://davidlowryduda.com/paper-the-second-moments-of-sums-of-fourier-coefficients-of-cusp-formsDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/paper-the-second-moments-of-sums-of-fourier-coefficients-of-cusp-formsFri, 04 Dec 2015 03:14:15 +0000Estimating the number of squarefree integers up to $X$https://davidlowryduda.com/estimating-the-number-of-squarefree-integers-up-to-xDavid Lowry-Duda<p>I recently wrote an <a
href="http://math.stackexchange.com/a/1323406/9754">answer</a> to a <a
href="http://math.stackexchange.com/q/1323354/9754">question</a> on MSE about
estimating the number of squarefree integers up to $X$. Although the result is
known and not too hard, I very much like the proof and my approach. So I write
it down here.</p>
<p>First, let's see if we can understand why this "should" be true from an
analytic perspective.</p>
<p>We know that
$$ \sum_{n \geq 1} \frac{\mu(n)^2}{n^s} = \frac{\zeta(s)}{\zeta(2s)},$$
and a general way of extracting information from Dirichlet series is to perform a cutoff integral transform (or a type of Mellin transform). In this case, we get that
$$ \sum_{n \leq X} \mu(n)^2 = \frac{1}{2\pi i} \int_{(2)} \frac{\zeta(s)}{\zeta(2s)} X^s \frac{ds}{s},$$
where the contour is the vertical line $\text{Re }s = 2$. By Cauchy's theorem, we shift the line of integration to the left, and the poles we pass contribute the main-order terms. The pole of $\zeta(s)$ at $s = 1$ has residue
$$ \frac{X}{\zeta(2)},$$
so we expect this to be the leading order. Naively, since we know that there are no zeroes of $\zeta(2s)$ on the line $\text{Re } s = \frac{1}{2}$, we might expect to push our line of integration exactly there, leading to an error of $O(\sqrt X)$. But in fact, we know more. We know the zero-free region, which allows us to extend the line of integration ever so slightly inwards, leading to a $o(\sqrt X)$ result (or more specifically, something along the lines of $O(\sqrt X e^{-c (\log X)^\alpha})$, where $\alpha$ and $c$ come from the strength of our known zero-free region).</p>
<p>In this heuristic analysis, I have omitted bounding the top, bottom, and left boundaries of the rectangles of integration. But these can be handled in a similar way as in the proof of the analytic prime number theorem. So we expect the answer to look like
$$ \frac{X}{\zeta(2)} + O(\sqrt X e^{-c (\log X)^\alpha})$$
using no more than the zero-free region that goes into the prime number theorem.</p>
<p>We will now prove this result, but in an entirely elementary way (except that I will refer to a result from the prime number theorem).</p>
<p>We do this in a series of steps.</p>
<div class="lemma">
<p>\begin{equation*}
\sum_{d^2 \mid n} \mu(d)
= \begin{cases}
1 & \text{if } n \text{ is squarefree} \\
0 & \text{else}
\end{cases}
\end{equation*}</p>
</div>
<p><em>Proof.</em> This comes almost immediately upon noticing that this is a multiplicative function, and it's trivial to prove it for prime powers. $\spadesuit$</p>
<p>So to sum up the squarefree numbers up to $X$, we look at
$$ \sum_{n \leq X} \sum_{d^2 \mid n} \mu(d) = \sum_{d^2e \leq X} \mu(d)= \sum_{d^2 \leq X} \mu(d) \left\lfloor \frac{X}{d^2} \right\rfloor.$$</p>
<p>This last expression is written in one of the links in <a
href="http://math.stackexchange.com/a/1323363/9754">Marty's answer</a>, and
they prove it with inclusion-exclusion. I happen to find this derivation more
intuitive, but it's our launching point forwards.</p>
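<p>As a quick sanity check (my own sketch, separate from the argument itself; the helper names are mine), we can compare a direct count of squarefree integers against this last expression:</p>

```python
import math

def mobius_sieve(n):
    """Compute mu(0), ..., mu(n) with a simple sieve."""
    mu = [1] * (n + 1)
    marked = [False] * (n + 1)
    for p in range(2, n + 1):
        if not marked[p]:  # p has no smaller prime factor, so p is prime
            for k in range(p, n + 1, p):
                marked[k] = True
                mu[k] *= -1
            for k in range(p * p, n + 1, p * p):
                mu[k] = 0  # a square factor kills mu
    return mu

def squarefree_count_direct(X):
    # count n <= X with mu(n) != 0, i.e. squarefree n
    mu = mobius_sieve(X)
    return sum(1 for n in range(1, X + 1) if mu[n] != 0)

def squarefree_count_identity(X):
    # the expression sum_{d^2 <= X} mu(d) * floor(X / d^2)
    D = math.isqrt(X)
    mu = mobius_sieve(D)
    return sum(mu[d] * (X // (d * d)) for d in range(1, D + 1))
```

<p>The two counts agree exactly, as they must, and for moderate $X$ they already sit close to $X/\zeta(2)$.</p>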
<p>We approximate the floored bit. Notice that
$$ \left \lfloor \frac{X}{d^2} \right \rfloor = \frac{X}{d^2} + E(X,d)$$
with $\lvert E(X,d) \rvert \leq 1$ (we think of it as the Error of the approximation). So the number of squarefree numbers up to $X$ is
$$ \sum_{d^2 \leq X} \mu(d) \frac{X}{d^2} + \sum_{d^2 \leq X}\mu(d) E(X,d).$$
We look at the two terms separately.</p>
<h3>The first term</h3>
<p>The first term can be approximated by the infinite series plus an error term.
$$ X\sum_{d^2 \leq X} \frac{\mu(d)}{d^2} = X\sum_{d \geq 1} \frac{\mu(d)}{d^2} - X\sum_{d > \sqrt X} \frac{\mu(d)}{d^2} = \frac{X}{\zeta(2)} - X\sum_{d > \sqrt X} \frac{\mu(d)}{d^2}.$$</p>
<p>We must now be a bit careful. If we perform the naive bound $\lvert \mu(d) \rvert \leq 1$, then this last sum is of size $O(\sqrt X)$. That's too big!</p>
<p>So instead, we integrate by parts (Riemann-Stieltjes integration) or equivalently we perform summation by parts to see that
$$ \sum_{d > \sqrt X} \frac{\mu(d)}{d^2} = O\left( \int_{\sqrt X}^\infty \frac{M(t)}{t^3} dt \right)$$
where
$$ M(t) = \sum_{n \leq t} \mu(n).$$
By the prime number theorem, we know that $M(t) = o(t)$. (In fact, the analytic prime number theorem in one of its easy forms gives $M(X) = O(X e^{-c (\log X)^{1/9}})$, which we might use here.) This means that this last term is bounded by
$$ o(\sqrt X)$$
if we just use that $M(t) = o(t)$, or
$$ O(\sqrt X e^{-c (\log X)^{1/9}})$$
if we use more. This completes the treatment of the first term. $\spadesuit$</p>
<h3>Second term</h3>
<p>This is easier now. We again use integration by parts. Notice that
$$ \begin{align}
\sum_{d \leq \sqrt X} \mu(d) E(X,d) &= \sum_{d \leq \sqrt X} (M(d) - M(d - 1))E(X,d) \\
&= M(\lfloor \sqrt X \rfloor) E(X, \lfloor \sqrt X \rfloor) + \sum_{d \leq \sqrt X - 1} M(d) (E(X, d) - E(X, d+1)).
\end{align}$$</p>
<p>By using that $\lvert E(X,d) \rvert \leq 1$ and $M(t) = o(t)$ (or the stronger version), we match the results from the first part. $\spadesuit$</p>
<p>Putting these two results together, we have proven that the number of squarefree integers up to $X$ is
$$\frac{X}{\zeta(2)} + o(\sqrt X)$$
using only that $M(t) = o(t)$, and alternately
$$ \frac{X}{\zeta(2)} + O(\sqrt X e^{-c (\log X)^{1/9}})$$
using a bit of the zero-free region. This completes the proof. $\diamondsuit$</p>https://davidlowryduda.com/estimating-the-number-of-squarefree-integers-up-to-xSat, 13 Jun 2015 03:14:15 +0000Some statistics on the growth of Math.SEhttps://davidlowryduda.com/some-statistics-on-the-growth-of-math-seDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/some-statistics-on-the-growth-of-math-seSun, 10 May 2015 03:14:15 +0000One integral served two wayshttps://davidlowryduda.com/one-cute-integral-served-two-waysDavid Lowry-Duda<p>Research kicks up, writing kicks back. So in this brief note, we examine a pair
of methods to evaluate an integral. They're both very clever, I think. We seek
to understand $$ I := \int_0^{\pi/2}\frac{\sin(x)}{\sin(x) + \cos(x)} dx $$</p>
<p>We base our first idea on an innocuous-seeming integral identity.</p>
<p>For ${f(x)}$ integrable on ${[0,a]}$, we have $$ \int_0^a f(x) dx =
\int_0^a f(a-x)dx. \tag{1}$$</p>
<p>The proof is extremely straightforward. Perform the substitution ${x
\mapsto a-x}$. The negative sign from the ${dx}$ cancels with the
negative coming from flipping the bounds of integration. ${\diamondsuit}$</p>
<p>Any time we have some sort of relationship that reflects into itself, we have
an opportunity to exploit symmetry. Our integral today is very symmetric. As
${\sin(\tfrac{\pi}{2} - x) = \cos x}$ and ${\cos(\tfrac{\pi}{2} -
x) = \sin x}$, notice that $$ I = \int_0^{\pi/2}\frac{\sin x}{\sin x + \cos
x}dx = \int_0^{\pi/2}\frac{\cos x}{\sin x + \cos x }dx. $$
Adding these two together, we see that $$ 2I = \int_0^{\pi/2}\frac{\sin x + \cos x}{\sin x + \cos x} dx = \frac{\pi}{2}, $$
and so we conclude that $$ I = \frac{\pi}{4}. $$
Wasn't that nice? ${\spadesuit}$</p>
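<p>For the skeptical, the value is easy to confirm numerically. Here is a sketch using a plain midpoint rule (the helper name is mine):</p>

```python
import math

def midpoint_rule(f, a, b, n=100000):
    """Approximate the integral of f over [a, b] with n midpoint samples."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

I = midpoint_rule(lambda x: math.sin(x) / (math.sin(x) + math.cos(x)),
                  0.0, math.pi / 2)
# I is very close to pi/4 = 0.785398...
```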
<p>Let's show another clever argument. Now we rely on a classic across all
mathematics: add and subtract the same thing. \begin{align} I =
\int_0^{\pi/2}\frac{\sin x}{\sin x + \cos x}dx &= \frac{1}{2}
\int_0^{\pi/2} \frac{2\sin x + \cos x - \cos x}{\sin x + \cos x}dx \\
&= \frac{1}{2} \int_0^{\pi/2} \frac{\sin x + \cos x}{\sin x + \cos x}dx + \frac{1}{2}\int_0^{\pi/2}\frac{\sin x - \cos x}{\sin x + \cos x}dx. \end{align} The first term is easy, and evaluates to ${\tfrac{\pi}{4}}$. How do we handle the second term? In fact, we can explicitly write down its antiderivative. Notice that ${\sin x - \cos x = -\frac{d}{dx} (\sin x + \cos x)}$, and so the last term is of the form $$ -\frac{1}{2}\int_0^{\pi/2} \frac{f'(x)}{f(x)}dx $$
where ${f(x) = \sin x + \cos x}$. You may or may not remember that ${\frac{f'(x)}{f(x)}}$ is the logarithmic derivative of ${f(x)}$, or rather what you get if you differentiate ${\log f(x)}$. As we are integrating the derivative of ${\log f(x)}$, we see that $$ -\frac{1}{2} \int_0^{\pi/2}\frac{f'(x)}{f(x)}dx = -\frac{1}{2} \ln f(x) \bigg\rvert_0^{\pi/2}, $$
which for us is $$ -\frac{1}{2} \ln(\sin x + \cos x) \bigg\rvert_0^{\pi/2} = -\frac{1}{2} \left( \ln(1) - \ln(1) \right) = 0. $$</p>
<p>Putting these two together, we see again that ${I = \frac{\pi}{4}}$. ${\spadesuit}$</p>https://davidlowryduda.com/one-cute-integral-served-two-waysThu, 20 Nov 2014 03:14:15 +0000Friendly introduction to sieves with a look towards recent progresshttps://davidlowryduda.com/friendly-introduction-to-sieves-with-a-look-towards-progress-on-the-twin-primes-conjectureDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/friendly-introduction-to-sieves-with-a-look-towards-progress-on-the-twin-primes-conjectureSun, 16 Nov 2014 03:14:15 +0000The gamma function, beta function, and duplication formulahttps://davidlowryduda.com/the-gamma-function-beta-function-and-duplication-formulaDavid Lowry-Duda<p>The title might as well continue – <em>because I constantly forget them and
hope that writing about them will make me remember.</em> At least afterwards
I'll have a centralized repository for my preferred proofs, regardless.</p>
<p>In this note, we will play with the Gamma and Beta functions and eventually get
to Legendre's Duplication formula for the Gamma function. This is part
reference, so I first will write the results themselves.</p>
<p><b>1. Results </b></p>
<p>We define the Gamma function for ${s > 0}$ by $$ \Gamma(s) := \int_0^\infty t^s e^{-t} \frac{dt}{t}. \tag{1}$$
Similarly, we define the Beta function by $$ B(a,b) := \int_0^1 t^{a-1}(1-t)^{b-1}dt \tag{2}$$
for ${a, b > 0}$.</p>
<p>From these definitions, it is not so obvious that these two functions are intimately related - but they are! In fact, <a name="propgammaisbeta"></a> $$ \frac{\Gamma(x)\Gamma(y)}{\Gamma(x + y)} = B(x,y) \tag{3}$$</p>
<p>Evaluating the Gamma function at integers is easy. We can use the relation with the Beta function to evaluate it at half-integers too. <a name="propgammahalf"></a> $$ \Gamma(\frac{1}{2}) = \sqrt \pi \tag{4}$$</p>
<p>Finally, we can relate the values at half-integers and integers in an intimate way. <a name="thmduplicationformula"></a> $$ \Gamma(z)\Gamma(z + 1/2)=2^{1-2z}\sqrt{\pi}\Gamma(2z) \tag{5}$$
for ${\text{Re}(z) > 0}$.</p>
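<p>Before the proofs, a numerical spot-check of (3), (4), and (5) is reassuring. The sketch below uses Python's <code>math.gamma</code> together with a crude midpoint quadrature of my own for the Beta integral:</p>

```python
import math

def beta_numeric(a, b, n=200000):
    """Midpoint-rule approximation of B(a,b) = int_0^1 t^(a-1)(1-t)^(b-1) dt.
    Crude, and only sensible for a, b >= 1 where the integrand stays bounded."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += t ** (a - 1) * (1 - t) ** (b - 1)
    return h * total

# (3): B(x, y) = Gamma(x) Gamma(y) / Gamma(x + y)
x, y = 2.5, 3.5
lhs3 = beta_numeric(x, y)
rhs3 = math.gamma(x) * math.gamma(y) / math.gamma(x + y)

# (4): Gamma(1/2) = sqrt(pi)
gap4 = abs(math.gamma(0.5) - math.sqrt(math.pi))

# (5): Gamma(z) Gamma(z + 1/2) = 2^(1-2z) sqrt(pi) Gamma(2z)
z = 1.75
lhs5 = math.gamma(z) * math.gamma(z + 0.5)
rhs5 = 2 ** (1 - 2 * z) * math.sqrt(math.pi) * math.gamma(2 * z)
```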
<p><b> 1.1. Proof of 3 </b></p>
<p>We begin by writing down a different representation of the Beta function.</p>
<p>$$ B(a,b) = \int_0^\infty \frac{u^a}{(1+u)^{a+b}}\frac{du}{u}, $$
which is in terms of the Haar measure and is generally more agreeable. <em>Proof:</em> Consider the (un-inspired) substitution ${u = \frac{t}{1-t}}$, or equivalently ${t = \frac{u}{1+u}}$. Then the bounds ${0 \mapsto 0}$ and ${1 \mapsto \infty}$, and the integrand transforms exactly into the form in the proposition. $\Box$</p>
<p>We will also want a different representation of the Gamma function. $$ \int_0^\infty e^{-pt} t^z \frac{dt}{t} = \frac{\Gamma(z)}{p^z}. $$
<em>Proof:</em> This comes quite quickly. Performing the change of variables ${s = pt}$ in the integral definition of the Gamma function pops out the extra ${p^z}$ factor and gives this form of the integral. $\Box$</p>
<p>We can now put these together. Rearranging the lemma above gives $$ \frac{1}{p^z} = \frac{1}{\Gamma(z)}\int_0^\infty e^{-pt}t^{z-1}dt. $$
Thinking of ${p = 1+u}$ and ${z = a+b}$, we can substitute this expression inside the lemma-given integral expression for the Beta function.
$$\begin{align} B(a,b) &= \frac{1}{\Gamma(a+b)}\int_0^\infty e^{-t}t^{a+b-1}dt\int_0^\infty e^{-ut}u^{a-1}du \\
&= \frac{\Gamma(a)}{\Gamma(a+b)}\int_0^\infty e^{-t}t^{b-1}dt \\
&= \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}, \end{align}$$
where passing to the second line evaluates the inner ${u}$-integral as ${\Gamma(a)/t^a}$ via the lemma, cancelling ${a}$ powers of ${t}$ in the first integral. This completes the proof of Prop <a href="#propgammaisbeta">3</a>.</p>
<p><b> 1.2. Proof of Prop <a href="#propgammahalf">4 </a></b></p>
<p>We begin with another integral representation of the Beta function. $$ B(a,b) = 2\int_0^{\pi/2}(\cos u)^{2a-1}(\sin u)^{2b - 1}du $$
for ${a,b}$ with positive real part. <em>Proof:</em> This comes immediately from the change of variables ${t = \cos^2 u}$ in the integral definition of the Beta function. It's necessary to flip the bounds of integration to cancel the negative sign from the sign of the change of variables. $\Box$</p>
<p>In this form, it is particularly easy to see that ${B(\frac{1}{2}, \frac{1}{2}) = \pi}$, since we integrate the constant function ${1}$ from ${0}$ to ${\pi/2}$ and multiply the result by ${2}$. And from Prop <a href="#propgammaisbeta">3</a>, we know that ${B(\frac{1}{2}, \frac{1}{2}) = \Gamma(\frac{1}{2})^2}$ (as ${\Gamma(1) = 1}$).</p>
<p>Thus ${\Gamma(\frac{1}{2}) = \sqrt \pi}$, and we know it's the positive square root because ${\Gamma(\frac{1}{2})}$ is clearly positive. This completes the proof.</p>
<p>This is my favorite proof, as it uses neither complex analysis nor multivariable integration - both of which are dear to my heart, but separate from the pleasant theory of the Gamma function.</p>
<p><b> 1.3. Proof of Theorem <a href="#thmduplicationformula">5</a></b></p>
<p>Start from $$ \frac{\Gamma(z)\Gamma(z)}{\Gamma(2z)} = B(z,z) = \int_0^1 u^{z-1}(1-u)^{z-1}du. $$
Perform the substitution ${u = \frac{1+x}{2}}$, so that ${du = dx/2}$. Using the symmetry of the resulting integrand under ${x \mapsto -x}$, this transforms the above integral into $$ 2^{1-2z}\cdot 2\int_0^1 (1-x^2)^{z-1}dx. $$
We recognize this through yet another integral representation of the Beta function: $$ B(m,n) = 2\int_0^1 x^{2m - 1}(1-x^2)^{n-1}dx. $$
<em>Proof:</em> This is immediate upon the change of variables ${t = x^2}$ in the defining integral for the Beta function. $\Box$</p>
<p>This allows us to recognize the integral above $$ 2^{1-2z}\cdot 2\int_0^1 (1-x^2)^{z-1}dx = 2^{1-2z}B(\frac{1}{2}, z). $$
Rewriting in terms of Gammas, $$ B(\frac{1}{2}, z) = \frac{\Gamma(\frac{1}{2})\Gamma(z)}{\Gamma(z + \frac{1}{2})}. $$
In total, we have that $$ \frac{\Gamma(z)\Gamma(z)}{\Gamma(2z)} = 2^{1-2z}\frac{\Gamma(\frac{1}{2})\Gamma(z)}{\Gamma(z + \frac{1}{2})}. $$
Rearranging, and using that ${\Gamma(1/2)= \sqrt \pi}$ as above, we see that $$ \Gamma(2z) = \frac{2^{2z-1}}{\sqrt \pi} \Gamma(z)\Gamma(z + \frac{1}{2}), $$
which is what we wanted to show.</p>https://davidlowryduda.com/the-gamma-function-beta-function-and-duplication-formulaWed, 12 Nov 2014 03:14:15 +0000Notes from a talk - On the Mean Value Theoremhttps://davidlowryduda.com/notes-from-a-talk-on-the-mean-value-theoremDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/notes-from-a-talk-on-the-mean-value-theoremWed, 05 Nov 2014 03:14:15 +0000Three Conundrums on Infinityhttps://davidlowryduda.com/three-conundrums-on-infinityDavid Lowry-Duda<p>In this short post, we introduce three conundrums dealing with infinity. This
is inspired by my calculus class, where we explore various confusing and
confounding aspects of infinity and find them often counterintuitive, sometimes
mindbending.</p>
<p><b> Order Matters </b></p>
<p>Consider the alternating unit series $$ \sum_{n \geq 0} (-1)^n. $$
We want to try to understand its convergence. If we write out the first several terms, it looks like $$ 1 - 1 + 1 - 1 + 1 - 1 + \cdots $$
What if we grouped the terms while we were summing them? Perhaps we should group them like so, $$ (1 - 1) + (1 - 1) + (1 - 1) + \cdots = 0 + 0 + 0 + \cdots $$
so that the sum is very clearly ${0}$. Adding infinitely many zeroes certainly gives zero, right?</p>
<p>On the other hand, what if we group the terms like so, $$ 1 + (-1 + 1) + (-1 + 1) + \cdots = 1 + 0 + 0 + \cdots $$
which is very clearly ${1}$. After all, adding ${1}$ to infinitely many zeroes certainly gives one, right?</p>
<p>A related, perhaps deeper paradox is one we mentioned in class. For conditionally convergent series like the alternating harmonic series $$ \sum_{n = 1}^\infty \frac{(-1)^n}{n}, $$
if we are allowed to rearrange the terms then we can have the series <em>sum to any number that we want</em>. This is called the <a href="http://www.en.wikipedia.org/wiki/Riemann_series_theorem">Riemann Series Theorem</a>.</p>
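<p>The rearrangement behind the Riemann Series Theorem is completely constructive: add positive terms until the partial sum passes the target, then add negative terms until it falls back below, and repeat. Here is a sketch of that greedy procedure (the target value and names are my own choices):</p>

```python
def rearranged_partial_sum(target, n_terms=100000):
    """Greedily reorder the terms +-1/n of the alternating harmonic series
    so that the partial sums converge to `target`."""
    pos = (1.0 / n for n in range(1, 10**9, 2))    # +1, +1/3, +1/5, ...
    neg = (-1.0 / n for n in range(2, 10**9, 2))   # -1/2, -1/4, ...
    s = 0.0
    for _ in range(n_terms):
        # below the target: spend a positive term; above it: a negative one
        s += next(pos) if s <= target else next(neg)
    return s

s = rearranged_partial_sum(1.5)  # far from log 2 = 0.693..., yet attainable
```

<p>Each crossing of the target overshoots by at most the size of the last term used, and the terms shrink to zero, so the partial sums converge to the target.</p>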
<p><b> The Thief and the King </b></p>
<p>A very wealthy king keeps gold coins in his vault, but a sneaky thief knows how to get in. Suppose that each day, the king puts two more gold coins into the vault. And each day, the thief takes one gold coin out (so that the king won't notice that the vault is empty). After infinitely many days, how much gold is left in the vault?</p>
<p>Suppose that the king numbers each coin. So on day 1, the king puts in coins labelled 1 and 2, and on day 2 he puts in coins labelled 3 and 4, and so on. What if the thief steals the odd numbered coin each day? Then at the end of time, the king has all the even coins.</p>
<p>But what if instead, the thief steals from the bottom. So he first steals coin number 1, then number 2, and so on. At the end of time, no coin is left in the vault, since for any number ${n}$, the ${n}$th coin has been taken by the thief.</p>
<p><b> Prevalence of Rarity </b></p>
<p>When I drove to Providence this morning, the car in front of me had the license plate 637RB2. Think about it - out of the approximately ${10\cdot10\cdot10\cdot26\cdot 26 \cdot 10 = 6760000}$ possibilities, I happened across this one. Isn't that amazing! How could something so rare happen to me?</p>
<p>Amazingly, something just as rare happened last time I drove to Providence too!</p>https://davidlowryduda.com/three-conundrums-on-infinityWed, 29 Oct 2014 03:14:15 +0000Continuity of the Mean Value Abscissahttps://davidlowryduda.com/continuity-of-the-mean-valueDavid Lowry-Duda<p><b>1. Introduction </b></p>
<p>When I first learned the mean value theorem as a high schooler, I was thoroughly unimpressed. Part of this was because it's just like Rolle's Theorem, which feels obvious. But I think the greater part is because I thought it was useless. And I continued to think it was useless until I began my first proof-oriented treatment of calculus as a second year at Georgia Tech. Somehow, in the intervening years, I learned to value intuition and simple statements.</p>
<p>I have since completely changed my view on the mean value theorem. I now consider essentially all of one variable calculus to be the Mean Value Theorem, perhaps in various forms or disguises. In my earlier note <a href="/?p=1259">An Intuitive Introduction to Calculus</a>, we state and prove the Mean Value Theorem, and then show that we can prove the Fundamental Theorem of Calculus with the Mean Value Theorem and the Intermediate Value Theorem (which also felt silly to me as a high schooler, but which is not silly).</p>
<p>In this brief note, I want to consider one small aspect of the Mean Value
Theorem: can the 'mean value' be chosen continuously as a function of the
endpoints? To state this more clearly, first recall the theorem:</p>
<p>Suppose ${f}$ is a differentiable real-valued function on an interval ${[a,b]}$. Then there exists a point ${c}$ between ${a}$ and ${b}$ such that $$ \frac{f(b) - f(a)}{b - a} = f'(c), \tag{1}$$
which is to say that there is a point where the slope of ${f}$ is the same as the average slope from ${a}$ to ${b}$.</p>
<p>What if we allow the interval to vary? Suppose we are interested in a differentiable function ${f}$ on intervals of the form ${[0,b]}$, and we let ${b}$ vary. Then for each choice of ${b}$, the mean value theorem tells us that there exists ${c_b}$ such that $$ \frac{f(b) - f(0)}{b} = f'(c_b). $$
Then the question we consider today is, as a function of ${b}$, can ${c_b}$ be chosen continuously? We will see that we cannot, and we'll see explicit counterexamples.</p>
<p><b>2. A Counterexample </b></p>
<p>For ease, we will restrict ourselves to intervals of the form ${[0,b]}$, as mentioned above. A particularly easy counterexample is given by $$ f(x) = \begin{cases} x^2 - 2x & x \leq 1\\ -1 & 1 \leq x \leq 2\\ x^2 - 4x + 3 & x \geq 2 \end{cases} $$
This is a flattened parabola, that is, a parabola with a flattened middle section.</p>
<p><img class="size-full wp-image-1773 aligncenter" src="/wp-content/uploads/2014/10/continuity_of_mean_value_basicpic.png" alt="continuity_of_mean_value_basicpic" width="500" height="300" /></p>
<p>Clearly, the slope of the function ${f}$ is negative until ${x =
1}$, where it is ${0}$. It becomes (and stays) positive at ${x =
2}$. So if you consider intervals ${[0,b]}$ as ${b}$ is varying,
since ${f(b) < 0}$ for ${b < 3}$, we must have that ${c_b}$ is at a point where ${f'(c_b) < 0}$, meaning that ${c_b
\in [0, 1]}$. But as soon as ${b > 3}$, we have ${f(b) > 0}$, and
${c_b}$ must be a point with ${f'(c_b) > 0}$, meaning that
${c_b \in [2,b]}$.</p>
<p>In particular, ${c_b}$ jumps from at most ${1}$ to at least ${2}$ as ${b}$ passes from the left of ${3}$ to the right of ${3}$. So there is no way to choose the ${c_b}$ values locally in a
neighborhood of ${3}$ to make the mean values continuous there.</p>
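<p>We can watch this jump numerically. The sketch below (function names mine) grid-searches for an abscissa $c_b$ whose numerically estimated derivative best matches the secant slope:</p>

```python
def f(x):
    # the flattened parabola from the text
    if x <= 1:
        return x * x - 2 * x
    if x <= 2:
        return -1.0
    return x * x - 4 * x + 3

def mean_value_abscissa(b, grid=20000, h=1e-6):
    """Grid-search for c in (0, b) minimizing |f'(c) - secant slope|."""
    slope = (f(b) - f(0)) / b
    best_c, best_err = None, float("inf")
    for i in range(1, grid):
        c = b * i / grid
        deriv = (f(c + h) - f(c - h)) / (2 * h)  # central difference
        err = abs(deriv - slope)
        if err < best_err:
            best_c, best_err = c, err
    return best_c

left = mean_value_abscissa(2.999)   # lands near 1, on the left branch
right = mean_value_abscissa(3.001)  # jumps to near 2, on the right branch
```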
<p>In the gif below, we have animated the process. The red line is the secant line
passing through ${(0, f(0))}$ and ${(b, f(b))}$. The two red dots
indicate the two points of intersection. The green line is the line guaranteed
by the mean value theorem. The small green dot is, in particular, what we've
been calling ${c_b}$: the guaranteed mean value. Notice that it jumps
when the red dot passes ${3}$. That is the essence of this proof.</p>
<p><img class="size-full wp-image-1774 aligncenter" src="/wp-content/uploads/2014/10/continuity_of_mean_value_gif.gif" alt="continuity_of_mean_value_gif" width="500" height="300" /></p>
<p>Further, although this example is not smooth, it is easy to see that if we
'smoothed' off the connection between the parabola and the straight line, like
through the use of bump functions, then the spirit of this counterexample
works, and not even smooth functions have locally continuous choices of the
mean value.</p>https://davidlowryduda.com/continuity-of-the-mean-valueWed, 22 Oct 2014 03:14:15 +0000Review of How Not to be Wrong by Jordan Ellenberghttps://davidlowryduda.com/review-of-how-not-to-be-wrong-by-jordan-ellenbergDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/review-of-how-not-to-be-wrong-by-jordan-ellenbergTue, 14 Oct 2014 03:14:15 +0000Another proof of Wilson's Theoremhttps://davidlowryduda.com/another-proof-of-wilsons-theoremDavid Lowry-Duda<p>While teaching a largely student-discovery style elementary number theory
course to high schoolers at the Summer@Brown program, we were looking for
instructive but interesting problems to challenge our students. By we, I mean
Alex Walker, my academic little brother, and me. After a bit of experimentation
with generators and orders, we stumbled across a proof of Wilson's Theorem,
different than the standard proof.</p>
<p>Wilson's theorem is a classic result of elementary number theory, and is used
in some elementary texts to prove Fermat's Little Theorem, or to introduce
primality testing algorithms that give no hint of the factorization.</p>
<blockquote><b>Theorem 1 (Wilson's Theorem)</b> <em> For a prime number ${p}$, we have $$ (p-1)! \equiv -1 \pmod p. \tag{1}$$
</em></blockquote>
<p>The theorem is clear for ${p = 2}$, so we only consider proofs for 'odd primes ${p}$.'</p>
<p>The standard proof of Wilson's Theorem included in almost every elementary number theory text starts with the factorial ${(p-1)!}$, the product of all the units mod ${p}$. Then as the only elements which are their own inverses are ${\pm 1}$ (as ${x^2 \equiv 1 \pmod p \iff p \mid (x^2 - 1) \iff p\mid x+1}$ or ${p \mid x-1}$), every other element in the factorial pairs with its distinct inverse to contribute ${1}$, leaving only the factor ${-1}$. Thus ${(p-1)! \equiv -1 \pmod p}$. $\diamondsuit$</p>
<p>Now we present a different proof.</p>
<p>Take a primitive root ${g}$ of the unit group ${(\mathbb{Z}/p\mathbb{Z})^\times}$, so that each number ${1, \ldots, p-1}$ appears exactly once in ${g, g^2, \ldots, g^{p-1}}$. Recalling that ${1 + 2 + \ldots + n = \frac{n(n+1)}{2}}$ (a great example of classical pattern recognition in an elementary number theory class), we see that multiplying these together gives ${(p-1)!}$ on the one hand, and ${g^{(p-1)p/2}}$ on the other.</p>
<p>Now ${g^{(p-1)/2}}$ is a solution to ${x^2 \equiv 1 \pmod p}$, and it is not ${1}$ since ${g}$ is a generator and thus has order ${p-1}$. So ${g^{(p-1)/2} \equiv -1 \pmod p}$, and raising ${-1}$ to an odd power yields ${-1}$, completing the proof. $\diamondsuit$</p>
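<p>Both proofs are easy to sanity-check by computer for small primes. Here is a sketch (helper names mine):</p>

```python
def factorial_mod(p):
    """Compute (p-1)! mod p."""
    acc = 1
    for k in range(1, p):
        acc = acc * k % p
    return acc

def primitive_root(p):
    """Smallest generator of the unit group mod prime p, by brute force."""
    for g in range(2, p):
        if len({pow(g, k, p) for k in range(1, p)}) == p - 1:
            return g

wilson_7 = factorial_mod(7)  # 720 mod 7 = 6, i.e. -1 mod 7
g7 = primitive_root(7)       # 3 is the smallest generator mod 7
```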
<p>After posting this, we have since seen that this proof is suggested in a problem in Ireland and Rosen's extremely good number theory book. But it was pleasant to see it come up naturally, and it's nice to suggest to our students that you can stumble across proofs.</p>
<p>It may be interesting to question why ${x^2 \equiv 1 \pmod p \iff x \equiv \pm 1 \pmod p}$ appears in a fundamental way in both proofs.</p>
<p>This post appears on the author's personal website <a href="http://davidlowryduda.com">davidlowryduda.com</a> and on the Math.Stackexchange Community Blog <a href="http://math.blogoverflow.com">math.blogoverflow.com</a>. It is also available in pdf <a href="/wp-content/uploads/2014/09/WilsonsTheoremMM.pdf">note</a> form. It was typeset in \TeX, hosted on Wordpress sites, converted using the utility <a href="http://github.com/davidlowryduda/mse2wp">github.com/davidlowryduda/mse2wp</a>, and displayed with MathJax.</p>https://davidlowryduda.com/another-proof-of-wilsons-theoremTue, 07 Oct 2014 03:14:15 +0000Trigonometric and related substitutions in integralshttps://davidlowryduda.com/trigonometric-and-related-substitutions-in-integralsDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/trigonometric-and-related-substitutions-in-integralsMon, 29 Sep 2014 03:14:15 +0000A bit more about partial fraction decompositionhttps://davidlowryduda.com/a-bit-more-about-partial-fraction-decompositionDavid Lowry-Duda<p>This is a short note written for my students in Math 170, talking about partial
fraction decomposition and some potentially confusing topics that have come up.
We'll remind ourselves what partial fraction decomposition is, and unlike the
text, we'll prove it. Finally, we'll look at some pitfalls in particular. All
this after the fold.</p>
<p><b>1. The Result Itself </b></p>
<p>We are interested in <em>rational functions</em> and their integrals. Recall that a polynomial ${f(x)}$ is a function of the form ${f(x) = a_nx^n + a_{n-1}x^{n-1} + \cdots + a_1x + a_0}$, where the ${a_i}$ are constants and ${x}$ is our "indeterminate" – and which we commonly imagine standing for a number (but this is not necessary).</p>
<p>Then a rational function ${R(x)}$ is a ratio of two polynomials ${p(x)}$ and ${q(x)}$, $$ R(x) = \frac{p(x)}{q(x)}. $$</p>
<p>Then the big result concerning partial fractions is the following:</p>
<p>If ${R(x) = \dfrac{p(x)}{q(x)}}$ is a rational function and the degree of ${p(x)}$ is less than the degree of ${q(x)}$, and if ${q(x)}$ factors into $$q(x) = (x-r_1)^{k_1}(x-r_2)^{k_2} \dots (x-r_l)^{k_l} (x^2 + a_{1,1}x + a_{1,2})^{v_1} \ldots (x^2 + a_{m,1}x + a_{m,2})^{v_m}, $$
then ${R(x)}$ can be written as a sum of fractions of the form ${\dfrac{A}{(x-r)^k}}$ or ${\dfrac{Ax + B}{(x^2 + a_1x + a_2)^v}}$, where in particular</p>
<ul>
<li>If ${(x-r)}$ appears in the denominator of ${R(x)}$, then there is a term ${\dfrac{A}{x - r}}$</li>
<li>If ${(x-r)^k}$ appears in the denominator of ${R(x)}$, then there is a collection of terms $$ \frac{A_1}{x-r} + \frac{A_2}{(x-r)^2} + \dots + \frac{A_k}{(x-r)^k} $$</li>
<li>If ${x^2 + ax + b}$ appears in the denominator of ${R(x)}$, then there is a term ${\dfrac{Ax + B}{x^2 + ax + b}}$</li>
<li>If ${(x^2 + ax + b)^v}$ appears in the denominator of ${R(x)}$, then there is a collection of terms $$ \frac{A_1x + B_1}{x^2 + ax + b} + \frac{A_2 x + B_2}{(x^2 + ax + b)^2} + \dots + \frac{A_v x + B_v}{(x^2 + ax + b)^v} $$</li>
</ul>
<p>where in each of these, the capital ${A}$ and ${B}$ represent some constants that can be solved for through basic algebra.</p>
<p>I state this result this way because it is the one that leads to integrals that we can evaluate. But in principle, this theorem can be restated in a couple different ways.</p>
<p>Let's parse this theorem through an example.</p>
<p>Consider the rational function ${\frac{1}{x(x+1)^2}}$. The terms that
appear in the denominator are ${x}$ and ${(x + 1)^2}$. The ${x}$ part contributes an ${\dfrac{A}{x}}$ term. The ${(x + 1)^2}$
part contributes a ${\dfrac{B}{x+1} + \dfrac{C}{(x+1)^2}}$ pair of terms.
So we know that $$\frac{1}{x(x+1)^2} = \frac{A}{x} + \frac{B}{x+1} +
\frac{C}{(x+1)^2},$$
and we want to find out what ${A, B, C}$ are. Clearing denominators yields $$ 1 = A(x+1)^2 + Bx(x+1) + Cx = (A + B)x^2 + (2A + B + C)x + A,$$
and comparing coefficients of the polynomial ${1}$ and ${(A + B)x^2 + (2A + B + C)x + A}$ gives immediately that ${A = 1}$, ${B = -1}$, and ${C = -1}$. So $$ \frac{1}{x(x+1)^2} = \frac{1}{x} + \frac{-1}{x+1} + \frac{-1}{(x+1)^2}. $$
It is easy (and recommended!) to check these by adding up the terms on the right and making sure you get the term on the left.</p>
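<p>That recommended check is easy to automate; a short sketch:</p>

```python
# Verify 1/(x(x+1)^2) = 1/x - 1/(x+1) - 1/(x+1)^2 at a handful of points.
# (Two rational functions agreeing at enough points must be equal.)
def lhs(x):
    return 1 / (x * (x + 1) ** 2)

def rhs(x):
    return 1 / x - 1 / (x + 1) - 1 / (x + 1) ** 2

samples = [0.5, 2.0, 3.7, -0.25, 10.0]
max_gap = max(abs(lhs(x) - rhs(x)) for x in samples)
# max_gap is at the level of floating-point roundoff
```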
<p><b>2. Common Pitfalls </b></p>
<p>Very often in math classes, students are "lied to" in one of two ways: either results are stated that are far weaker than normal, or things are said about the impossibility of doing something... when it's actually possible. For example, middle school teachers might often say that taking the square root of negative numbers "isn't allowed" or "doesn't mean anything," when really there is a several hundred year tradition of doing just that. (On the other hand, things are much more complicated in some ways once we allow ${\sqrt{-1}}$, so it makes sense to defer its treatment.)</p>
<p>Perhaps because of this, students often try to generalize the statement of partial fractions, which applies to <em>rational</em> functions, to other types of functions. But it is <em>very important</em> to remember that partial fractions works for rational functions, i.e. for ratios of polynomials. So if you have ${\dfrac{1}{x\sqrt{x-1}}}$, you cannot naively apply the partial fractions algorithm, as ${x\sqrt{x - 1}}$ is not a polynomial.</p>
<p>As an aside, we can be a bit clever. If you call ${y = \sqrt{x - 1}}$, so that ${y^2 + 1 = x}$, then we see that ${\dfrac{1}{x\sqrt{x - 1}} = \dfrac{1}{y(y^2+1)}}$, which you <em>can</em> approach with partial fractions. You should check that $$ \dfrac{1}{y(y^2 + 1)} = \dfrac{1}{y} - \frac{y}{y^2 + 1}, $$
so that $$ \dfrac{1}{x\sqrt{x - 1}} = \dfrac{1}{\sqrt{x - 1}} - \dfrac{\sqrt{x-1}}{x}.$$
So while <em>something</em> is possible here, it's not a naive application of partial fractions.</p>
<p>Similarly, if you have something like ${\dfrac{\sin \theta}{\cos^2 \theta + \cos \theta}}$, you cannot apply partial fractions because you are not looking at a rational function.</p>
<p>There's another common danger, which has to do with what you assume is true. For example, if you assume that you <em>can</em> use partial fractions on ${\dfrac{1}{x\sqrt{x-1}}}$ (which you cannot!), then you might do something like <a name="eqpitfall"></a>$$ \dfrac{1}{x\sqrt{x-1}} = \frac{A}{x} + \frac{B}{\sqrt{x-1}}, \tag{1}$$
so that clearing denominators gives $$ 1 = A\sqrt{x - 1} + B x $$
You might then think that setting ${x = 1}$ shows that ${B = 1}$, and setting ${x = 10}$ gives ${1 = 3A + 10B = 3A + 10}$, meaning that ${A = -3}$. But then ${-3\sqrt{x-1} + x}$ would equal the constant ${1}$ for every ${x}$, which is clearly nonsense. And the issue here is that the initial equation <a href="#eqpitfall">(1)</a> is not true - starting with faulty assumptions gets you nowhere.</p>
<p>A key thing to remember is that you can always check your work by just adding together the final decomposition after finding a common denominator! And if you have a good feel for functions, you should be able to realize that no linear combination of ${\sqrt{x - 1}}$ and ${x}$ will ever be the constant ${1}$ - so that final equality will never be possible.</p>
<p><b>3. A Proof (more or less) </b></p>
<p>Giving the proof for the repeated factor part is annoying, but very similar to the non-repeated root case. Suppose that we have a number ${r}$ and a polynomial ${q(x)}$ such that ${q(r) \neq 0}$. Under these assumptions, we will show that there is a polynomial ${p(x)}$ of degree less than ${q(x)}$ and a number ${A}$ such that $$ \frac{1}{q(x) (x-r)} = \frac{p(x)}{q(x)} + \frac{A}{x-r}. $$
This is clearly equivalent to finding a polynomial ${p(x)}$ and ${A}$ such that $$ 1 = p(x)(x-r) + Aq(x) $$
We want this as an equality of polynomials, meaning it holds for all ${x}$. So in particular, it should hold when ${x = r}$, leading us to the equality $$ 1 = Aq(r), $$
which can be rewritten as $$ A = \frac{1}{q(r)} $$
as ${q(r) \neq 0}$. So we have found ${A}$.</p>
<p>We are left with ${p(x)(x-r) = 1 - Aq(x)}$. By our choice of ${A}$, we see that the right hand side is ${0}$ when ${x = r}$, so that the right hand side has ${x - r}$ as a factor. So ${1 - Aq(x) = N(x)(x-r)}$ for some polynomial ${N}$ of degree smaller than the degree of ${q(x)}$. (We have used the Factor Theorem here, which says that if ${a}$ is a root of ${p(x)}$, then ${p(x) = p_1(x)(x-a)}$ for a smaller degree polynomial ${p_1(x)}$). Choosing ${p(x)}$ to be this ${N(x)}$ gives us this equality as well, so that we have found a satisfactory ${A}$ and ${p(x)}$.</p>
<p>This lets us peel off the (non-repeating) factors of the denominator one at a time, one after the other, to prove the theorem for cases without repeated roots. The case with repeated roots is essentially the exact same, and would be a reasonable thing to try to prove on your own. (Hint: there will be a point when you might want to divide everything by ${x - r}$).</p>
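<p>The construction in the proof is completely explicit, so we can actually run it. Below is a small Python sketch (the choices ${q(x) = x^2 + 1}$ and ${r = 1}$, and the variable names, are my own illustration) following the recipe ${A = 1/q(r)}$ and ${p(x)(x - r) = 1 - Aq(x)}$:</p>

```python
# Follow the proof with q(x) = x^2 + 1 and r = 1, so q(r) = 2 != 0.
q = lambda x: x * x + 1.0
r = 1.0
A = 1.0 / q(r)                   # A = 1/q(r) = 1/2
p = lambda x: -(x + 1.0) / 2.0   # since 1 - A*q(x) = (1 - x^2)/2 = p(x)*(x - r)

# Check 1/(q(x)(x - r)) == p(x)/q(x) + A/(x - r) away from x = r.
for x in [0.0, 2.0, 3.5, -4.0]:
    left = 1.0 / (q(x) * (x - r))
    right = p(x) / q(x) + A / (x - r)
    assert abs(left - right) < 1e-12
```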
<p><b>4. Conclusion </b></p>
<p>So that's that about partial fractions. If there are any questions, feel free to let me know. This post was typeset in the LaTeX typesetting language, hosted on a Wordpress site <a title="A bit more about partial fraction decomposition" href="/?p=1711">davidlowryduda.com</a>, and displayed there with MathJax. This can also be found in <a href="/wp-content/uploads/2014/09/errors_in_partial_fraction_decomposition.pdf">pdf note</a> form, and the conversion from note to Wordpress is done using a customized version of latex2wp that I call mse2wp, located at <a title="My github" href="http://github.com/davidlowryduda/mse2wp">github.com/davidlowryduda/mse2wp</a>.</p>
<p>Thank you, and I'll see you in class.</p>https://davidlowryduda.com/a-bit-more-about-partial-fraction-decompositionMon, 22 Sep 2014 03:14:15 +0000Math 100 - Concluding Remarkshttps://davidlowryduda.com/math-100-fall-2013-concluding-remarksDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/math-100-fall-2013-concluding-remarksMon, 30 Dec 2013 03:14:15 +0000On the identity $1 + 2 + \ldots = -1/12$https://davidlowryduda.com/response-to-bnelo12s-question-on-redditDavid Lowry-Duda<p>bnelo12 <a href="http://www.reddit.com/r/math/comments/1so784/does_the_riemannzeta_function_give_any/">writes</a> (slightly paraphrased)</p>
<blockquote>Can you explain exactly how ${1 + 2 + 3 + 4 + \ldots = - \frac{1}{12}}$ in the context of the Riemann ${\zeta}$ function?</blockquote>
<p>We are going to approach this problem through a related problem that is easier to understand at first. Many are familiar with summing geometric series</p>
<p align="center">$\displaystyle g(r) = 1 + r + r^2 + r^3 + \ldots = \frac{1}{1-r}, $</p>
<p>which makes sense as long as ${|r| < 1}$. But if you're not familiar with this, let's see how we do that. Let ${S(n)}$ denote the sum of the terms up to ${r^n}$, so that</p>
<p align="center">$\displaystyle S(n) = 1 + r + r^2 + \ldots + r^n. $</p>
<p>Then for a finite ${n}$, ${S(n)}$ makes complete sense. It's just a sum of a few numbers. What if we multiply ${S(n)}$ by ${r}$? Then we get</p>
<p align="center">$\displaystyle rS(n) = r + r^2 + \ldots + r^n + r^{n+1}. $</p>
<p>Notice how similar this is to ${S(n)}$. It's very similar, but missing the first term and containing an extra last term. If we subtract them, we get</p>
<p align="center">$\displaystyle S(n) - rS(n) = 1 - r^{n+1}, $</p>
<p>which is a very simple expression. But we can factor out the ${S(n)}$ on the left and solve for it. In total, we get</p>
<p align="center">$\displaystyle S(n) = \frac{1 - r^{n+1}}{1 - r}. \ \ \ \ \ (1)$</p>
<p>This works for any natural number ${n}$. What if we let ${n}$ get arbitrarily large? Then if ${|r|<1}$, then ${|r|^{n+1} \rightarrow 0}$, and so we get that the sum of the geometric series is</p>
<p align="center">$\displaystyle g(r) = 1 + r + r^2 + r^3 + \ldots = \frac{1}{1-r}. $</p>
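<p>We can watch both the closed form and the limit happen numerically. Here's a quick Python check (my own, purely for illustration):</p>

```python
# Partial sums S(n) = 1 + r + r^2 + ... + r^n versus the closed form.
def S(n, r):
    return sum(r**k for k in range(n + 1))

def closed_form(n, r):
    return (1 - r**(n + 1)) / (1 - r)

r = 0.5
for n in [1, 5, 20]:
    assert abs(S(n, r) - closed_form(n, r)) < 1e-12

# For |r| < 1 the partial sums approach 1/(1 - r): here 1/(1 - 0.5) = 2.
assert abs(S(200, r) - 2.0) < 1e-12
```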
<p>But this looks like it makes sense for almost any ${r}$, in that we can plug in any value for ${r}$ that we want on the right and get a number, unless ${r = 1}$. In this sense, we might say that ${\frac{1}{1-r}}$ <b>extends</b> the geometric series ${g(r)}$, in that whenever ${|r|<1}$, the geometric series ${g(r)}$ agrees with this function. But this function makes sense <b>in a larger domain</b> than ${g(r)}$.</p>
<p>People find it convenient to abuse notation slightly and call the new function ${\frac{1}{1-r} = g(r)}$, (i.e. use the same notation for the extension) because any time you might want to plug in ${r}$ when ${|r|<1}$, you still get the same value. But really, it's not true that ${\frac{1}{1-r} = g(r)}$, since the domain on the left is bigger than the domain on the right. This can be confusing. It's things like this that cause people to say that</p>
<p align="center">$\displaystyle 1 + 2 + 4 + 8 + 16 + \ldots = \frac{1}{1-2} = -1, $</p>
<p>simply because ${g(2) = -1}$. This is conflating two different ideas. What this means is that the function that extends the geometric series takes the value ${-1}$ when ${r = 2}$. But this has nothing to do with actually summing up the powers of ${2}$ at all.</p>
<p>So it is with the ${\zeta}$ function. Even though the ${\zeta}$ function only makes sense at first when ${\text{Re}(s) > 1}$, people have extended it for almost all ${s}$ in the complex plane. It just so happens that the great functional equation for the Riemann ${\zeta}$ function that relates the right and left half planes (across the line ${\text{Re}(s) = \frac{1}{2}}$) is</p>
<p align="center">$\displaystyle \pi^{\frac{-s}{2}}\Gamma\left( \frac{s}{2} \right) \zeta(s) = \pi^{\frac{s-1}{2}}\Gamma\left( \frac{1-s}{2} \right) \zeta(1-s), \ \ \ \ \ (2)$</p>
<p>where ${\Gamma}$ is the gamma function, a sort of generalization of the factorial function. If we solve for ${\zeta(1-s)}$, then we get</p>
<p align="center">$\displaystyle \zeta(1-s) = \frac{\pi^{\frac{-s}{2}}\Gamma\left( \frac{s}{2} \right) \zeta(s)}{\pi^{\frac{s-1}{2}}\Gamma\left( \frac{1-s}{2} \right)}. $</p>
<p>If we stick in ${s = 2}$, we get</p>
<p align="center">$\displaystyle \zeta(-1) = \frac{\pi^{-1}\Gamma(1) \zeta(2)}{\pi^{\frac{1}{2}}\Gamma\left( \frac{-1}{2} \right)}. $</p>
<p>We happen to know that ${\zeta(2) = \frac{\pi^2}{6}}$ (this is the famous Basel problem) and that ${\Gamma(\frac{1}{2}) = \sqrt \pi}$. We also happen to know that in general, ${\Gamma(t+1) = t\Gamma(t)}$ (it is partially in this sense that the ${\Gamma}$ function generalizes the factorial function), so that ${\Gamma(\frac{1}{2}) = \frac{-1}{2} \Gamma(\frac{-1}{2})}$, or rather that ${\Gamma(\frac{-1}{2}) = -2 \sqrt \pi.}$ Finally, ${\Gamma(1) = 1}$ (on integers, it agrees with the one-lower factorial).</p>
<p>Putting these together, we get that</p>
<p align="center">$\displaystyle \zeta(-1) = \frac{\pi^2/6}{-2\pi^2} = \frac{-1}{12}, $</p>
<p>which is what we wanted to show. ${\diamondsuit}$</p>
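<p>All of the ingredients above are available in Python's standard library, so we can reassemble the computation numerically as a sanity check (not a proof — my own sketch):</p>

```python
import math

# Reassemble zeta(-1) from the same ingredients: zeta(2) = pi^2/6,
# Gamma(1) = 1, and Gamma(-1/2) = -2*sqrt(pi), all via the stdlib.
zeta2 = math.pi**2 / 6
assert abs(math.gamma(-0.5) + 2 * math.sqrt(math.pi)) < 1e-12

numerator = math.pi**-1 * math.gamma(1.0) * zeta2
denominator = math.pi**0.5 * math.gamma(-0.5)
assert abs(numerator / denominator - (-1.0 / 12.0)) < 1e-12
```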
<p>The information I quoted about the Gamma function and the zeta function's functional equation can be found on Wikipedia or any introductory book on analytic number theory. Evaluating ${\zeta(2)}$ is a classic problem that has been solved in many ways, but is most often taught in a first course on complex analysis or as a clever iterated integral problem (you can prove it with Fubini's theorem). Evaluating ${\Gamma(\frac{1}{2})}$ is rarely done and is sort of a trick, usually done with Fourier analysis.</p>
<p>As usual, I have also created a paper version. You can find that <a href="/wp-content/uploads/2013/12/redditReBnelo12P.pdf">here</a>.</p>https://davidlowryduda.com/response-to-bnelo12s-question-on-redditThu, 12 Dec 2013 03:14:15 +0000An intuitive overview of Taylor serieshttps://davidlowryduda.com/an-intuitive-overview-of-taylor-seriesDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/an-intuitive-overview-of-taylor-seriesSat, 16 Nov 2013 03:14:15 +0000Understanding $\int_{-\infty}^\infty \frac{\mathrm{d}t}{(1 + t^2)^n}$https://davidlowryduda.com/response-to-fattybakes-question-on-redditDavid Lowry-Duda<p>We <a href="http://www.reddit.com/r/math/comments/1qcor1/how_does_one_prove_something_like_this/">want to</a> understand the integral</p>
<p align="center">$\displaystyle \int_{-\infty}^\infty \frac{\mathrm{d}t}{(1 + t^2)^n}. \ \ \ \ \ (1)$</p>
<p>Although fattybake mentions the residue theorem, we won't use that at all.
Instead, we will be very clever.</p>
<p>We will do a technique that was once very common (up until the 1910s or so), but is much less common now: let's multiply by ${\displaystyle \Gamma(n) = \int_0^\infty u^n e^{-u} \frac{\mathrm{d}u}{u}}$. This yields</p>
<p align="center">$\displaystyle \int_0^\infty \int_{-\infty}^\infty \left(\frac{u}{1 + t^2}\right)^n e^{-u}\mathrm{d}t \frac{\mathrm{d}u}{u} = \int_{-\infty}^\infty \int_0^\infty \left(\frac{u}{1 + t^2}\right)^n e^{-u} \frac{\mathrm{d}u}{u}\mathrm{d}t, \ \ \ \ \ (2)$</p>
<p>where I interchanged the order of integration because everything converges
really really nicely. Do a change of variables, sending ${u \mapsto
u(1+t^2)}$. Notice that my nicely behaving measure ${\mathrm{d}u/u}$
completely ignores this change of variables, which is why I write my ${\Gamma}$ function that way. Also be pleased that we are squaring ${t}$,
so that this is positive and doesn't mess with where we are integrating. This
leads us to</p>
<p align="center">$\displaystyle \int_{-\infty}^\infty \int_0^\infty u^n e^{-u - ut^2} \frac{\mathrm{d}u}{u}\mathrm{d}t = \int_0^\infty \int_{-\infty}^\infty u^n e^{-u - ut^2} \mathrm{d}t\frac{\mathrm{d}u}{u},$</p>
<p>where I change the order of integration again. Now we have an inner ${t}$
integral that we can do, as it's just the standard Gaussian integral (google
this if this doesn't make sense to you). The inner integral is</p>
<p align="center">$\displaystyle \int_{-\infty}^\infty e^{-ut^2} \mathrm{d}t = \sqrt{\pi / u}. $</p>
<p>Putting this into the above yields</p>
<p align="center">$\displaystyle \sqrt{\pi} \int_0^\infty u^{n-1/2} e^{-u} \frac{\mathrm{d}u}{u}, \ \ \ \ \ (4)$</p>
<p>which is exactly the definition for ${\Gamma(n-\frac12) \cdot \sqrt \pi}$.</p>
<p>But remember, we multiplied everything by ${\Gamma(n)}$ to start with. So we divide by that to get the result:</p>
<p align="center">$\displaystyle \int_{-\infty}^\infty \frac{\mathrm{d}t}{(1 + t^2)^n} = \dfrac{\sqrt{\pi} \Gamma(n-\frac12)}{\Gamma(n)} \ \ \ \ \ (5)$</p>
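<p>As a sanity check of the final formula, here is a short Python sketch comparing a brute-force numerical integral against ${\sqrt{\pi}\,\Gamma(n - \frac12)/\Gamma(n)}$ for small ${n}$ (the truncation and step sizes are my own arbitrary choices):</p>

```python
import math

def numeric_integral(n, T=100.0, steps=200000):
    # Trapezoid rule for 1/(1 + t^2)^n over [-T, T].
    # The tails beyond |t| = T are O(T^(1 - 2n)), negligible here.
    h = 2 * T / steps
    total = 0.0
    for i in range(steps + 1):
        t = -T + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w / (1 + t * t) ** n
    return total * h

for n in [2, 3]:
    exact = math.sqrt(math.pi) * math.gamma(n - 0.5) / math.gamma(n)
    assert abs(numeric_integral(n) - exact) < 1e-4
```

For ${n = 2}$ the exact value is ${\pi/2}$, and for ${n = 3}$ it is ${3\pi/8}$, which the numerical integral matches.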
<p style="text-align: left;" align="center">Finally, a copy of the latex file <a href="/wp-content/uploads/2013/11/redditResponsePost.pdf">itself</a>.</p>https://davidlowryduda.com/response-to-fattybakes-question-on-redditMon, 11 Nov 2013 03:14:15 +0000Math 100 - before second midtermhttps://davidlowryduda.com/math-100-before-second-midtermDavid Lowry-Duda<p>You have a midterm next week, and it's not going to be a cakewalk.</p>
<p>As requested, I'm uploading the last five weeks' worth of worksheets, with (my)
solutions. A comment on the solutions: not everything is presented in full
detail, but most things are presented with most detail (except for the
occasional one that is far far beyond what we actually expect you to be able to
do). If you have any questions about anything, let me know. Even better, ask it
here - maybe others have the same questions too.</p>
<p>Without further ado -</p>
<ul>
<li>Week 6 <a href="/wp-content/uploads/2013/11/fa13-math100-recitation-week061.pdf">worksheet</a> with <a href="/wp-content/uploads/2013/11/week6sols.pdf">solutions</a></li>
<li>Week 7 <a href="/wp-content/uploads/2013/11/fa13-math100-recitation-week071.pdf">worksheet</a> with <a href="/wp-content/uploads/2013/11/week7sols.pdf">solutions</a></li>
<li>Week 8 <a href="/wp-content/uploads/2013/11/fa13-math100-recitation-week081.pdf">worksheet</a> with <a href="/wp-content/uploads/2013/11/week8sols.pdf">solutions</a></li>
<li>Week 9 <a href="/wp-content/uploads/2013/11/fa13-math100-recitation-week091.pdf">worksheet</a> with <a href="/wp-content/uploads/2013/11/week9sols.pdf">solutions</a></li>
<li>Week 10 <a href="/wp-content/uploads/2013/11/fa13-math100-recitation-week101.pdf">worksheet</a> with <a href="/wp-content/uploads/2013/11/week10sols.pdf">solutions</a></li>
</ul>
<p>And since we were unable to go over the quiz in my afternoon recitation today, I'm attaching a worked <a href="/wp-content/uploads/2013/11/week10quizsol.pdf">solution</a> to the quiz as well.</p>
<p>Again, let me know if you have any questions. I will still have my office hours
on Tuesday from 2:30-4:30pm in my office (I'm aware that this happens to be
immediately before the exam - that's not by design).</p>
<p>Study study study!</p>https://davidlowryduda.com/math-100-before-second-midtermThu, 07 Nov 2013 03:14:15 +0000Programminghttps://davidlowryduda.com/programmingDavid Lowry-Duda<h1>Programming</h1>
<p>I develop mathematical and scientific software. I also regularly contribute to
<a href="https://www.gnu.org/philosophy/free-sw.en.html">free</a><sup>1</sup>
<span class="aside"><sup>1</sup>as in speech, not
as in beer.</span>
software that I use.</p>
<p>Many of my contributions can be found through
<a href="https://github.com/davidlowryduda">my github</a>. I am also active on various
development mailing lists.</p>
<h2>Mathematical Software</h2>
<ul>
<li>
<p><a href="https://www.lmfdb.org/">LMFDB</a>, the L-function and modular form database. I
develop the <a href="https://github.com/LMFDB/lmfdb">code</a> running the database and
write software that computes or verifies the underlying data. I am currently
strongly affiliated with the character pages and the Maass form pages, as
well as the images for each modular form.</p>
</li>
<li>
<p><a href="http://sagemath.org/">SageMath</a>, an open source computer algebra system. I
reworked their complex function plotting routines, and have contributed to
various parts of the main library as well. If you have sage trouble or need
help reporting/fixing a bug, you can reach out to me.</p>
</li>
<li>
<p><a href="https://github.com/davidlowryduda/phase_mag_plot">phase_mag_plot</a>, tools to
visualize complex functions in sage. (Associated to my paper <a href="https://arxiv.org/abs/2002.05234">Visualizing
Modular Forms</a>. See also the <a href="/phase_mag_plot-a-sage-package-for-plotting-complex-functions/">announcement
page</a> and
<a href="/colormapplot/">followup page</a>). Note that <strong>this has now been incorporated
into sage</strong>.</p>
</li>
<li>
<p><a href="https://github.com/jwbober/conrey-dirichlet-characters">conrey-dirichlet-characters</a>,
sage/cython code for working with Dirichlet characters using the Conrey
indexing system. This library is primarily written by Jonathan Bober, but
I've helped maintain it, in particular when sage transitioned from python2 to
python3. (I also used this code to generate data for the LMFDB).</p>
</li>
</ul>
<p>I've contributed much less regularly to a wide variety of other pieces of
scientific software, including <a href="https://arblib.org/">arb</a>,
<a href="https://github.com/JohnCremona/eclib">eclib</a>,
<a href="https://github.com/ipython/ipython">ipython</a>, and
<a href="https://matplotlib.org/">matplotlib</a>. These are all great pieces of software
and I encourage others to check them out.</p>
<h2>Other Software</h2>
<ul>
<li>
<p><a href="https://github.com/davidlowryduda/tld">tld</a>: a minimal list manager for
people who want to do things, but with a little flexibility. I use this
behind the scenes for all sorts of things that aren't public-facing.</p>
</li>
<li>
<p><a href="https://github.com/davidlowryduda/simple_address_book.py">simple_address_book</a>:
a minimal address book that works with mutt. It's similar to <code>abook</code>, but
simpler, less featureful, <strong>maintained</strong>, and in python.</p>
</li>
<li>
<p><a href="https://github.com/davidlowryduda/latex2jax">latex2jax</a>: a python script to
turn texfiles into Wordpress-compatible html. I used this for years to write
math for this site. But I no longer use Wordpress, and I've built different
tooling to generate notes for this site.</p>
</li>
<li>
<p><a href="https://github.com/VundleVim/Vundle.vim">Vundle</a>: a plugin manager for
vim. (<em>Unfortunately, this is now <a href="https://github.com/VundleVim/Vundle.vim/issues/955#issuecomment-951388817">essentially
abandonware</a>.
But I still use it</em>).</p>
</li>
</ul>https://davidlowryduda.com/programmingThu, 24 Oct 2013 03:14:15 +0000Researchhttps://davidlowryduda.com/researchDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/researchSun, 13 Oct 2013 03:14:15 +0000Research Noteshttps://davidlowryduda.com/research-notesDavid Lowry-Duda<p>Currently blank.</p>https://davidlowryduda.com/research-notesSun, 13 Oct 2013 03:14:15 +0000Math 100 - Week 4https://davidlowryduda.com/math-100-week-4David Lowry-Duda<p>This is a post for my math 100 calculus class of fall 2013. In this post, I
give the 4th week's recitation worksheet. More pertinently, we will also go
over the most recent quiz and common mistakes. Trig substitution, it turns out,
is not so easy.</p>
<p>Before we hop into the details, I'd like to encourage you all to make use of each
other, your professor, your TA, and the MRC in preparation for the first
midterm (next week!).</p>
<p><b>1. The quiz </b></p>
<p>There were two versions of the quiz this week, but they were very similar. Both asked about a particular trig substitution</p>
<p align="center">$\displaystyle \int_3^6 \sqrt{36 - x^2} \mathrm{d} x $</p>
<p>And the other was</p>
<p align="center">$\displaystyle \int_{2\sqrt 2}^4 \sqrt{16 - x^2} \mathrm{d}x. $</p>
<p>They are very similar, so I'm only going to go over one of them. I'll go over the first one. We know we are to use trig substitution. I see two ways to proceed: either draw a reference triangle (which I recommend), or think through the Pythagorean trig identities until you find the one that works here (which I don't recommend).</p>
<p>We see a ${\sqrt{36 - x^2}}$, and this is hard to deal with. Let's draw a right triangle that has ${\sqrt{36 - x^2}}$ as a side. I've drawn one below. (Not fancy, but I need a better light).</p>
<p align="center"><img alt="" src="/wp-content/uploads/2013/09/week2triangle.jpg" width="200" /></p>
<p>In this picture, note that ${\sin \theta = \frac{x}{6}}$, or that ${x = 6 \sin \theta}$, and that ${\sqrt{36 - x^2} = 6 \cos \theta}$. If we substitute ${x = 6 \sin \theta}$ in our integral, this means that we can replace our ${\sqrt{36 - x^2}}$ with ${6 \cos \theta}$. But this is a substitution, so we need to think about ${\mathrm{d} x}$ too. Here, ${x = 6 \sin \theta}$ means that ${\mathrm{d}x = 6 \cos \theta \, \mathrm{d}\theta}$.</p>
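<p>If you want to double-check the reference triangle, here is a tiny Python verification of the substitution identities (my own check — not something you'd do on a quiz):</p>

```python
import math

# With x = 6*sin(theta) and theta in (0, pi/2), we have cos(theta) >= 0,
# so sqrt(36 - x^2) really is 6*cos(theta).
for theta in [0.1, 0.5, 1.0, 1.4]:
    x = 6 * math.sin(theta)
    assert abs(math.sqrt(36 - x * x) - 6 * math.cos(theta)) < 1e-12
```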
<p><em>Some people used the wrong trig substitution, meaning they used ${x = \tan \theta}$ or ${x = \sec \theta}$, and got stuck. It's okay to get stuck, but if you notice that something isn't working, it's better to try something else than to stare at the paper for 10 minutes. Other people use ${x = 6 \cos \theta}$, which is perfectly doable and parallel to what I write below.</em></p>
<p><em>Another common error was people forgetting about the ${\mathrm{d}x}$ term entirely. But it's important!</em>.</p>
<p>Substituting these into our integral gives</p>
<p align="center">$\displaystyle \int_{?}^{??} 36 \cos^2 (\theta) \mathrm{d}\theta, $</p>
<p>where I have included question marks for the limits because, as after most substitutions, they are different. You have a choice: you might go on and put everything back in terms of ${x}$ before you give your numerical answer; or you might find the new limits now.</p>
<p><em>It's not correct to continue writing down the old limits. The variable has changed, and we really don't want ${\theta}$ to go from ${3}$ to ${6}$.</em></p>
<p>If you were to find the new limits, then you need to consider: if ${x=3}$ and ${\frac{x}{6} = \sin \theta}$, then we want a ${\theta}$ such that ${\sin \theta = \frac{3}{6}= \frac{1}{2}}$, so we might use ${\theta = \pi/6}$. Similarly, when ${x = 6}$, we want ${\theta}$ such that ${\sin \theta = 1}$, like ${\theta = \pi/2}$. <em>Note that these were two arcsine calculations, which we would have to do even if we waited until after we put everything back in terms of ${x}$ to evaluate</em>.</p>
<p><em>Some people left their answers in terms of these arcsines. As far as mistakes go, this isn't a very serious one. But this is the sort of simplification that is expected of you on exams, quizzes, and homeworks. In particular, if something can be written in a much simpler way through the unit circle, then you should do it if you have the time.</em></p>
<p>So we could rewrite our integral as</p>
<p align="center">$\displaystyle \int_{\pi/6}^{\pi/2} 36 \cos^2 (\theta) \mathrm{d}\theta. $</p>
<p>How do we integrate ${\cos^2 \theta}$? We need to make use of the identity ${\cos^2 \theta = \dfrac{1 + \cos 2\theta}{2}}$. <b>You should know this identity for this midterm</b>. Now we have</p>
<p align="center">$\displaystyle 36 \int_{\pi/6}^{\pi/2}\left(\frac{1}{2} + \frac{\cos 2 \theta}{2}\right) \mathrm{d}\theta = 18 \int_{\pi/6}^{\pi/2}\mathrm{d}\theta + 18 \int_{\pi/6}^{\pi/2}\cos 2\theta \mathrm{d}\theta. $</p>
<p>The first integral is extremely simple and yields ${6\pi}$. The second integral has antiderivative ${\dfrac{\sin 2 \theta}{2}}$ (<em>Don't forget the ${2}$ on bottom!</em>), and we have to evaluate ${\big[9 \sin 2 \theta \big]_{\pi/6}^{\pi/2}}$, which gives ${-\dfrac{9 \sqrt 3}{2}}$. <b>You should know the unit circle sufficiently well to evaluate this for your midterm</b>.</p>
<p>And so the final answer is ${6 \pi - \dfrac{9 \sqrt 3}{2} \approx 11.0553}$. (You don't need to be able to do that approximation).</p>
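<p>One nice thing about definite integrals is that you can always sanity-check them numerically. Here's a short Python check of that answer using a crude midpoint rule (my own aside, not exam material):</p>

```python
import math

# Midpoint-rule estimate of the quiz integral, int_3^6 sqrt(36 - x^2) dx.
steps = 100000
h = 3.0 / steps
total = h * sum(
    math.sqrt(36 - (3 + (i + 0.5) * h) ** 2) for i in range(steps)
)

exact = 6 * math.pi - 9 * math.sqrt(3) / 2
assert abs(exact - 11.0553) < 1e-3
assert abs(total - exact) < 1e-5
```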
<p>Let's go back a moment and suppose you didn't re-evaluate the limits once you substituted in ${\theta}$. Then, following the same steps as above, you'd be left with</p>
<p align="center">$\displaystyle 18 \int_{?}^{??}\mathrm{d}\theta + 18 \int_{?}^{??}\cos 2\theta \mathrm{d}\theta = \left[ 18 \theta \right]_?^{??} + \left[ 9 \sin 2 \theta \right]_?^{??}. $</p>
<p>Since ${\frac{x}{6} = \sin \theta}$, we know that ${\theta = \arcsin (x/6)}$. This is how we evaluate the left integral, and we are left with ${[18 \arcsin(x/6)]_3^6}$. This means we need to know the arcsine of ${1}$ and ${\frac 12}$. These are exactly the same two arcsine computations that I referenced above! Following them again, we get ${6\pi}$ as the answer.</p>
<p>We could do the same for the second part, since ${\sin ( 2 \arcsin (x/6))}$ when ${x = 3}$ is ${\sin (2 \arcsin \frac{1}{2} ) = \sin (2 \cdot \frac{\pi}{6} ) = \frac{\sqrt 3}{2}}$; and when ${x = 6}$ we get ${\sin (2 \arcsin 1) = \sin (2 \cdot \frac{\pi}{2}) = \sin (\pi) = 0}$.</p>
<p>Putting these together, we see that the answer is again ${6\pi - \frac{9\sqrt 3}{2}}$.</p>
<p>Or, throwing yet another option out there, we could do something else (a little bit wittier, maybe?). We have this ${\sin 2\theta}$ term to deal with. You might recall that ${\sin 2 \theta = 2 \sin \theta \cos \theta}$, the so-called double-angle identity.</p>
<p>Then ${9 \sin 2\theta = 18 \sin \theta \cos \theta}$. Going back to our reference triangle, we know that ${\cos \theta = \dfrac{\sqrt{36 - x^2}}{6}}$ and that ${\sin \theta = \dfrac{x}{6}}$. Putting these together,</p>
<p align="center">$\displaystyle 9 \sin 2 \theta = \dfrac{ x\sqrt{36 - x^2} }{2}. $</p>
<p>When ${x=6}$, this is ${0}$. When ${x = 3}$, we have ${\dfrac{ 3\sqrt {27}}{2} = \dfrac{9\sqrt 3}{2}}$.</p>
<p>And fortunately, we get the same answer again at the end of the day. (phew).</p>
<p><b>2. The worksheet </b></p>
<p>Finally, here is the <a href="/wp-content/uploads/2013/09/fa13-math100-recitation-week04.pdf">worksheet</a> for the day. I'm working on their solutions, and I'll have that up by late this evening (sorry for the delay).</p>
<p>Ending tidbits - when I was last a TA, I tried to see what were the good predictors of final grade. Some things weren't very surprising - there is a large correlation between exam scores and final grade. Some things were a bit surprising - low homework scores correlated well with low final grade, but high homework scores didn't really have a strong correlation with final grade at all; attendance also correlated weakly. But one thing that really stuck with me was the first midterm grade vs final grade in class: it was really strong. For a bit more on that, I refer you to my <a title="Math 90: Concluding Remarks" href="http://mixedmath.wordpress.com/2012/12/30/math-90-concluding-remarks/">final post</a> from my Math 90 posts.</p>https://davidlowryduda.com/math-100-week-4Sat, 28 Sep 2013 03:14:15 +0000Math 100 - Week 3 and pre-midtermhttps://davidlowryduda.com/math-100-week-3-and-pre-midtermDavid Lowry-Duda<p>This is a post for my Math 100 class of fall 2013. In this post, I give the
first three weeks' worksheets from recitation and the set of solutions to week
three's worksheet, as well as a few administrative details.</p>
<p>Firstly, here is the recitation work from the first three weeks:</p>
<ol>
<li><em>(there was no recitation the first week)</em></li>
<li>A <a href="/wp-content/uploads/2013/09/fa13-math100-recitation-week02.pdf">worksheet</a> focusing on review.</li>
<li>A <a href="/wp-content/uploads/2013/09/fa13-math100-recitation-week03.pdf">worksheet</a> focusing on integration by parts and u-substitution, with <a href="/wp-content/uploads/2013/09/w3_worksheet_solutions.pdf">solutions</a>.</li>
</ol>
<p>In addition, I'd like to remind you that I have office hours from 2-4pm (right
now) in Kassar 018. I've had multiple people set up appointments with me
outside of these hours, which I'm tempted to interpret as suggesting that I
change when my office hours are. If you have a preference, let me know, and
I'll try to incorporate it.</p>
<p>Finally, there will be an exam next Tuesday. I've been getting a lot of emails
about what material will be on the exam. The answer is that everything you have
learned up to now and by the end of this week is fair game for exam material.
<strong>This also means there could be exam questions on material that we have
not discussed in recitation</strong>. So be prepared. However, I will be
setting aside a much larger portion of recitation this Thursday for questions
than normal. So come prepared with your questions.</p>
<p>Best of luck, and I'll see you in class on Thursday.</p>https://davidlowryduda.com/math-100-week-3-and-pre-midtermTue, 24 Sep 2013 03:14:15 +0000Happy Birthday to the Science Guyhttps://davidlowryduda.com/bill-nye-pbsDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/bill-nye-pbsTue, 10 Sep 2013 03:14:15 +0000An intuitive introduction to calculushttps://davidlowryduda.com/an-intuitive-introduction-to-calculusDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/an-intuitive-introduction-to-calculusSat, 07 Sep 2013 03:14:15 +0000Twenty Mathematicians, Two Hard Problems, One Week, IdeaLab2013https://davidlowryduda.com/idealab2013-iDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/idealab2013-iFri, 02 Aug 2013 03:14:15 +0000Chinese Remainder Theorem -SummerNThttps://davidlowryduda.com/chinese-remainder-theorem-summerntDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/chinese-remainder-theorem-summerntTue, 09 Jul 2013 03:14:15 +0000Notes on the first week - SummerNThttps://davidlowryduda.com/notes-on-the-first-week-summerntDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. 
Please view it directly at the url.https://davidlowryduda.com/notes-on-the-first-week-summerntMon, 01 Jul 2013 03:14:15 +0000A proof from the first sheet - SummerNThttps://davidlowryduda.com/a-proof-from-the-first-sheet-summerntDavid Lowry-Duda<p>In class today, we were asked to explain what was wrong with the following
proof:</p>
<div class="claim">
<p>As $x$ increases, the function
\begin{equation}
f(x)=\frac{100x^2+x^2\sin(1/x)+50000}{100x^2}
\end{equation}
approaches (gets arbitrarily close to) 1.</p>
</div>
<div class="proof">
<p>Look at values of $f(x)$ as $x$ gets larger and larger.
$$f(5) \approx 21.002$$
$$f(10)\approx 6.0010$$
$$f(25)\approx 1.8004$$
$$f(50)\approx 1.2002$$
$$f(100) \approx 1.0501$$
$$f(500) \approx 1.0020$$
These values are clearly getting closer to 1. QED</p>
</div>
<p>Of course, this is incorrect. Choosing a couple of numbers and thinking there
might be a pattern does not constitute a proof.</p>
<p>But on a related note, these sorts of questions (where you observe a pattern
and seek to prove it) can sometimes lead to strongly suspected conjectures,
which may or may not be true. Here's an interesting one (with a good picture <a
href="http://spikedmath.com/449.html">over at SpikedMath</a>):</p>
<blockquote>
<p>Draw $2$ points on the circumference of a circle, and connect them with
a line. How many regions is the circle divided into? (two). Draw another
point, and connect it to the previous points with a line. How many regions
are there now? Draw another point, connecting to the previous points with
lines. How many regions now? Do this once more. Do you see the pattern? You
might even begin to formulate a belief as to why it's true.</p>
<p>But then draw one more point and its lines, and carefully count the number of
regions formed in the circle. How many regions now? (It doesn't fit the obvious
pattern).</p>
</blockquote>
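<p>For the curious: the counts in that puzzle follow the known formula ${1 + \binom{n}{2} + \binom{n}{4}}$ for ${n}$ points in general position (this aside and the code are mine, not part of the original problem sheet), and a few lines of Python make the broken pattern vivid:</p>

```python
from math import comb

# Number of regions for n points on a circle in general position
# (no three chords meeting at a single interior point).
def regions(n):
    return 1 + comb(n, 2) + comb(n, 4)

counts = [regions(n) for n in range(1, 7)]
# The counts double for a while... and then they don't.
assert counts == [1, 2, 4, 8, 16, 31]
```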
<p>So we know that the presented proof is incorrect. But lets say we want to know
if the statement is true. How can we prove it? Further, we want to prove it
without calculus - we are interested in an <em>elementary</em> proof. How
should we proceed?</p>
<p>Firstly, we should say something about <a
href="http://en.wikipedia.org/wiki/Radian">radians</a>. Recall that at an angle
$\theta$ (in radians) on the unit circle, the arc-length subtended by the
angle $\theta$ is exactly $\theta$ (in fact, this is the defining
attribute of radians). And the value $\sin \theta$ is exactly the height,
or rather the $y$ value, of the part of the unit circle at angle $\theta$. It's annoying to phrase, so we look for clarification at the hastily
drawn math below:</p>
<figure class="center shadowed">
<img src="/wp-content/uploads/2013/06/screenshot-from-2013-06-24-123053.png"
width="500" />
<figcaption class="left">
The arc length subtended by $\theta$ has length $\theta$. The value of $\sin \theta$ is
the length of the vertical line in black.
</figcaption>
</figure>
<p>Note in particular that the arc length is longer than the value of $\sin
\theta$, so that $\sin \theta < \theta$. (This relies critically on the
fact that the angle is positive.) Further, we see that this is true for all
small, positive $\theta$. So it will be true that for large, positive
$x$, we'll have $\sin \frac{1}{x} < \frac{1}{x}$. For those of you
who know a bit more calculus, you might know that in fact, $\sin(\frac{1}{x}) = \frac{1}{x} - \frac{1}{3!\,x^3} + O(\frac{1}{x^5})$, which is a
more precise statement.</p>
<p>What do we do with this? Well, I say that this allows us to finish the proof.</p>
<p style="text-align:center;">$\dfrac{100x^2 + x^2 \sin(1/x) + 50000}{100x^2} \leq \dfrac{100x^2 + x + 50000}{100x^2} = 1 + \dfrac{1}{100x} + \dfrac{50000}{100x^2},$</p>
<p>and it <em>is</em> clear that the last two terms go to zero as $x$ increases. $\spadesuit$</p>
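<p>As a quick numerical sanity check (not a proof, of course; numerics were exactly the trap the fake proof fell into), one can evaluate $f$ and the key inequality $\sin(1/x) < 1/x$ directly. This sketch is my own addition:</p>

```python
from math import sin

def f(x: float) -> float:
    """The function from the problem sheet."""
    return (100 * x**2 + x**2 * sin(1 / x) + 50000) / (100 * x**2)

for x in [10, 25, 50, 100, 500, 10**6]:
    assert sin(1 / x) < 1 / x  # the inequality read off the figure
    print(x, f(x))             # values decrease toward 1
```

<p>The printed values reproduce the ones quoted at the start of the post, and the asserts confirm the bound that drives the honest proof.</p>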
<p>Finally, I'd like to remind you about the <a title="Summer Number Theory"
href="http://mixedmath.wordpress.com/summer-number-theory/">class webpage</a>.
We don't use it often, but it is an avenue for extra information or to ask
additional questions.</p>https://davidlowryduda.com/a-proof-from-the-first-sheet-summerntMon, 24 Jun 2013 03:14:15 +0000Summer Number Theory 2013https://davidlowryduda.com/summer-number-theoryDavid Lowry-Duda<p>Welcome to the page for Summer@Brown 2013 Number Theory with David Lowry-Duda!</p>
<p>This is the course website. Here, there (were — back in 2013) copies of the
syllabus, problem sets, exams, etc., as well as basic information about the
course.</p>
<p>But much more importantly, at the bottom of the page there is a comment
section, where I encourage you to write as many comments as you want. If you
have a question, concern, response, idea, or perhaps even a topic you'd really
like to go over, leave a comment! If this is your first time visiting this
page, <strong>leave a comment below</strong> so that you'll be able to comment
in the future (I moderate first-comments).</p>
<p>Course Information: Number Theory: An Introduction to Higher Mathematics
Instructor: David Lowry-Duda
Email: djlowry@math.brown.edu
Syllabus: available <a href="/wp-content/uploads/2013/06/syllabus.pdf">here</a></p>
<p>Links to discussion pages:</p>
<ul>
<li>A (real and correct) <a title="A proof from the first sheet (SummerNT)"
href="/a-proof-from-the-first-sheet-summernt/">proof</a>
of the $\sin$ problem from Day 1</li>
<li><a title="Notes on the first week (SummerNT)"
href="/notes-on-the-first-week-summernt/">Highpoints</a>
from the first week</li>
<li>A <a title="Chinese Remainder Theorem (SummerNT)"
href="/chinese-remainder-theorem-summernt/">note</a>
on the Chinese Remainder Theorem, with two applications we didn't talk
about in class</li>
</ul>https://davidlowryduda.com/summer-number-theorySun, 23 Jun 2013 03:14:15 +0000Recent developments in twin primes, Goldbach, and Open Accesshttps://davidlowryduda.com/recent-developments-in-twin-primes-goldbach-and-open-accessDavid Lowry-Duda<p>It has been a busy two weeks all over the math community. Well, at least it
seemed so to me. Some of my friends have defended their theses and need only to
walk to receive their PhDs; I completed my topics examination, Brown's take on
an oral examination; and I've given a trio of math talks.</p>
<p>Meanwhile, there have been developments in a relative of the Twin Primes
conjecture, the Goldbach conjecture, and Open Access math journals.</p>
<h2>1. Twin Primes Conjecture</h2>
<p>The Twin Primes Conjecture states that there are infinitely many primes $p$ such that $p+2$ is also a prime, and falls under the more general <a
href="http://en.wikipedia.org/wiki/Polignac%27s_conjecture">Polignac's
Conjecture</a>, which says that for any even $n$, there are infinitely
many primes $p$ such that $p+n$ is also prime. This is another one
of those problems that is easy to state but seems <strong>tremendously</strong>
hard to solve. But recently, Dr. Yitang Zhang of the University of New
Hampshire has submitted a paper to the Annals of Mathematics (one of the most
respected and prestigious journals in the field). The paper is reputedly
extremely clear (in contrast to other recent monumental papers in number
theory, e.g. the phenomenally technical papers of Mochizuki on the ABC
conjecture), and the word on the street is that it went through the entire
review process in less than one month. At this time, there is no publicly
available preprint, so I have not had a chance to look at the paper. But word
is <a
href="https://plus.google.com/u/0/114134834346472219368/posts/XESxA9bL5um">spreading
</a>that credible experts have already carefully reviewed the paper and found
no serious flaws.</p>
<p>Dr. Zhang's paper proves that there are infinitely many primes that have a
corresponding prime at most $70000000$ or so away. And thus in particular
there is at least one number $k$ such that there are infinitely many
primes such that both $p$ and $p+k$ are prime. I did not think that
this was within the reach of current techniques. But it seems that Dr. Zhang
built on top of the <a href="http://arxiv.org/abs/math/0508185">work</a> of
Goldston, Pintz, and Yildirim to get his result. Further, it seems that
optimization of the result will occur and the difference will be brought way
down from $70000000$. However, as <a
href="http://mathoverflow.net/questions/131185/philosophy-behind-yitang-zhangs-work-on-the-twin-primes-conjecture/131188#131188">indicated
</a>by <a href="http://mathoverflow.net/users/630/mark-lewko">Mark Lewko</a> on
<a href="http://mathoverflow.net">MathOverflow</a>, this proof will probably
not extend naturally to a proof of the Twin Primes conjecture itself.
Optimally, it might prove that there are infinitely many primes $p$ such that
$p$ and $p+16$ are both prime (which is still amazing).</p>
<p>One should look out for his paper in an upcoming issue of the Annals.</p>
<h2>2. Goldbach Conjecture</h2>
<p>I feel strangely tied to the Goldbach Conjecture, as I get far more traffic,
emails, and spam concerning my <a title="The danger of confusing cosets
and numbers"
href="http://mixedmath.wordpress.com/2012/08/24/reviewing-goldbach/">previous
post</a> on an erroneous proof of Goldbach than on any other topic I've written
about. About a year ago, I <a title="Three number theory bits: One elementary,
the 3-Goldbach, and the ABC conjecture"
href="http://mixedmath.wordpress.com/2012/06/15/three-number-theory-bits-one-elementary-the-3-goldbach-and-the-abc-conjecture/">wrote
briefly</a> about progress that Dr. Harald Helfgott had made towards the
3-Goldbach Conjecture. This conjecture states that every odd integer greater
than five can be written as the sum of three primes. (This is another easy to
state problem that is not at all easy to approach).</p>
<p>One week ago, Helfgott posted a <a
href="http://arxiv.org/abs/1305.2897">preprint </a>to the arxiv that claims to
complete his previous work and prove 3-Goldbach. Further, he uses the <a
href="http://en.wikipedia.org/wiki/Hardy%E2%80%93Littlewood_circle_method">circle
method</a> and good old L-functions, so I feel like I should read over it more
closely to learn a few things as it's very close to my field. (Further still,
he's a Brandeis alum, and now that my wife will be a grad student at Brandeis I
suppose I should include it in my umbrella of self-association). While I cannot
say that I read the paper, understood it, and affirm its correctness, I can say
that the method seems right for the task (related to the 10th and most subtle
of Scott Aaronson's <a href="http://www.scottaaronson.com/blog/?p=304">list
</a>that I love to quote).</p>
<p>An interesting side bit to Helfgott's proof is that it only works for numbers
larger than $10^{30}$ or so. Fortunately, he's also given a <a
href="http://arxiv.org/abs/1305.3062">computer proof</a> on the arxiv, with
David Platt, covering the numbers below that bound. $10^{30}$ is really, really,
really big, so even that is a very slick bit of work.</p>
<h2>3. FoM has opened</h2>
<p>I <a title="Math journals and the fight over open access"
href="http://mixedmath.wordpress.com/2013/01/18/side-of-journal/">care
</a>about open access. Fortunately, so do many of the big names. Two of the big
attempts to create a good, strong set of open access math journals have just
released their first articles. The<a
href="http://journals.cambridge.org/action/displaySpecialPage?pageId=3896">
Forum of Mathematics</a> <a
href="http://journals.cambridge.org/action/displayJournal?jid=FMS">Sigma
</a>and <a
href="http://journals.cambridge.org/action/displayJournal?jid=FMP">Pi
</a>journals have each released a paper on algebraic and complex geometry. And
they're completely open! I don't know what it takes for a journal to get off
the ground, but I know that it starts with people reading its articles. So read
up!</p>
<p>The two articles are</p>
<div style="background:#FFFFFF;margin:0 10px 10px 0;padding:0 10px 0 0;text-align:left;font-family:Arial, Helvetica, sans-serif;line-height:1em;">
<div style="font-size:11px;padding:0 0 10px;font-weight:bold;color:#045989;">GENERIC VANISHING THEORY VIA MIXED HODGE MODULES</div>
<div style="font-size:11px;"><b>MIHNEA POPA and CHRISTIAN SCHNELL.</b>
<a href="http://journals.cambridge.org/action/displayJournal?jid=FMS">Forum of Mathematics, Sigma</a>, Volume 1, e1.
<a href="http://journals.cambridge.org/action/displayAbstract?aid=8919208">http://journals.cambridge.org/action/displayAbstract?aid=8919208</a></div>
and, in Pi,
</div>
<div style="background:#FFFFFF;margin:0 10px 10px 0;padding:0 10px 0 0;text-align:left;font-family:Arial, Helvetica, sans-serif;line-height:1em;">
<div style="font-size:11px;padding:0 0 10px;font-weight:bold;color:#045989;">$p$-ADIC HODGE THEORY FOR RIGID-ANALYTIC VARIETIES</div>
<div style="font-size:11px;"><b>PETER SCHOLZE.</b>
<a href="http://journals.cambridge.org/action/displayJournal?jid=FMP">Forum of Mathematics, Pi</a>, Volume 1, e1.
<a href="http://journals.cambridge.org/action/displayAbstract?aid=8920245">http://journals.cambridge.org/action/displayAbstract?aid=8920245</a></div>
</div>https://davidlowryduda.com/recent-developments-in-twin-primes-goldbach-and-open-accessTue, 21 May 2013 03:14:15 +0000Calculations with a Gauss-type sumhttps://davidlowryduda.com/calculations-with-a-gauss-type-sumDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/calculations-with-a-gauss-type-sumWed, 24 Apr 2013 03:14:15 +0000A Book Review of Count Down - The race for beautiful solutions at the IMOhttps://davidlowryduda.com/book-review-count-downDavid Lowry-Duda<p>I read a lot of popular science and math books. Scientific and mathematical
exposition to the public is a fundamental task that must be done; but for some
reason, it is simply not getting done well enough. One day, perhaps I'll write
expository (i.e. for non-math folk) math. But until then, I read everything I
can. And since I read so many of them, I thought I should share what I think.</p>
<p>Today, I consider the book <a
href="http://www.amazon.com/Count-Down-Beautiful-International-Mathematical/dp/B0044KN1XU">Count
Down: The Race for Beautiful Solutions at the International Mathematics
Olympiad,</a> by Steve Olson.</p>
<figure class="center">
<img src="/wp-content/uploads/2013/02/cd.jpg" width="200" />
</figure>
<p>It is no secret that math and mathematicians traditionally have a bad rap in
the States (see, for example, this sardonically appropriate <a
href="http://blogs.scientificamerican.com/degrees-of-freedom/2011/10/15/we-hate-math-say-4-in-10-a-majority-of-americans/">newspaper
clipping</a> at Scientific American). Part of the fault lies in the great
disparity between the math taught in primary and secondary schools and the math
that mathematicians actually <em>do</em>. But part of the fault also falls upon
mathematicians, who often seem to not care if anyone understands their trade,
why it's interesting, and/or why it's important. Whatever the reason,
stereotypes about math, scientists and mathematicians tend to be negative,
casting mathematicians as socially awkward nerds. Images from popular culture
further these stereotypes. The movie <em>A Beautiful Mind</em>, where the protagonist
(and perhaps the antagonist) is the schizophrenic mathematician <a
href="http://en.wikipedia.org/wiki/John_Forbes_Nash,_Jr.">John Nash</a>, was
less about detailing Dr. Nash's tragic struggle and eventual <a
href="http://en.wikipedia.org/wiki/Pyrrhic_victory">pyrrhic victory</a> against
schizophrenia and more about aggrandizing Nash's delusions into an action film
and conflating madness and mathematics along the way. The widely successful
show <a href="http://www.cbs.com/shows/big_bang_theory/">The Big Bang
Theory</a>, a sitcom about three physicists and an engineer, portrays
scientists as socially inept, comic-loving geeks whose work is indecipherable
to the 'normal populace', represented by the leading actress Kaley Cuoco as the
waitress Penny.</p>
<p>In his book Count Down, Steve Olson tries to rectify these misconceptions. The
introduction to the book is about these misconceptions, and about the problems
facing the sciences in popular media. My wife happens to have read the book at
the same time as me; after she read the introduction, she even told me that she
felt a bit guilty for enjoying The Big Bang Theory so much. But after the
introduction, Olson goes into the main subject matter of the book: following
the six students on the 2001 US IMO team through some of the process of
preparing for and taking the Olympiad exam, including looking at the six
problems and parts of their solutions.</p>
<p>Olson saw a natural bijection between the six competitors and the six problems.
The book is structured so that each student has their own chapter, wherein a
particular positive characteristic is spoken about, attributed to the student,
and somehow reflected in that student's solution to one of the problems. The
characteristics are: Inspiration, Direction, Insight, Competitiveness, Talent,
Creativity, and Breadth (and A Sense of Wonder, if that counts).</p>
<p>While this is morally equivalent to <a
href="http://en.wikipedia.org/wiki/Priming_%28psychology%29">priming</a>, it is
a nice change. Such positive language and barefaced accolades tend to be
reserved for sports stars and action heroes. It doesn't hurt to try to cast
people who do math in a more positive light. Olson does his best to show that
the six kids on the IMO team are largely ordinary teens with a healthy degree
of curiosity and enough discipline to sit down and do some real work. He
describes the teens as having "casual good looks," liking games of fast wit and
ultimate frisbee, having an "easygoing nature" or being "unnervingly calm." A
few times, he writes about former IMO competitor Melanie Wood, and he describes
her as "an attractive, green-eyed, vivacious blond."</p>
<p>Olson does a great job of talking about tangents related to the Olympians and
their pasts, or to the IMO overall. He continually returns to mathematical
giants, like Andrew Wiles or Martin Gardner, and ideas about genius and talent.
He alludes to and provides further information about a mountain of different
sources. Olson is clearly knowledgeable and passionate himself.</p>
<p>Unfortunately, these overarching themes do not always play well with each
other. Although the book is purportedly about the six students and their story
taking the olympiad, there is so much tangential material that the kids are
largely left out. Further, in his struggle to present the olympians as largely
ordinary people as opposed to math-geeks, Olson leaves out much of the detail
that shows how interesting the olympians really are. At the end of the day, we
know that the six American teenagers selected to compete on the International
Math Olympiad have some sort of story - many hours of hard work and dedication,
some teacher, group of teachers, or mentor who pushed and helped. But we were
never privy to this through this book.</p>
<p>Instead, we heard brief testimony from teachers saying that the students were
good at many subjects. One of the olympians "was interested in history" and "a
good writer." Another teacher said that he "had to think of things to keep him
busy." These statements, where a teacher tasked with the education of a
precocious young teen realizes that the student teaches him just as much as he
teaches the student, are trite. But it would be tremendously interesting to
know about the teens, and to see how the teachers actually overcame the task of
educating such a quick-learner.</p>
<p>Each student is associated with a particular attribute and a particular
problem. While the attributes are generically good, they also seem generic
enough that there was no reason for the given associations between students and
attributes. Did young Oaz Nir lack talent, or creativity, or competitiveness?
Clearly he did not, as he was one of the better members of the team. The more
and more I read through the book, the more and more I felt this gimmick
detracted from 'the real story.'</p>
<p>When I was in middle and high school, I typically thought that math was boring
and easy. My friends and I often finished our assignments early, and in general
the teachers had nothing else for us to do. So we did other classes' homework,
or played chess, or the like. (This is not completely true - in eighth grade, I
had a math teacher named Mr. O'Brien, who kept me engaged with puzzles like the
Towers of Hanoi and whatnot. I'm not sure, but I suspect that I would not be a
mathematician were it not for him. And my senior year of high school was also
different). It turns out that my middle school also had a MathCounts program -
a program that Olson frequently mentions as 'an in' into the world of
competitive math olympiads - but I never knew about it. There is something to
be said for good educators, and I think Olson missed a big opportunity to
highlight the teens, their families, and their educators.</p>
<p>Count Down also presented sketch solutions to the six IMO problems, inherently
difficult problems; but the presentation is very approachable. There are many
times where Olson gives the heart of the proof but decides to omit the
computation, and I think he chose the exact right amount of rigor in his
proofs. He includes appendices in the back with additional exposition on the
proofs, but even they do not include all the computation. For example, in the
appendix for Problem 4, a question about a sum over a permutation group, he
gives the heart of the proof and skips through the tedium with "... through
some fancy calculating, you can show that this sum cannot be evenly divided
by..." He uses similar phrases throughout, but in my opinion, he captures the
essence of the proofs.</p>
<p>More importantly, the reader comes away with the feeling that
<em>he </em>understands the proofs as well. My wife did. This is perhaps
Olson's greatest success: challenging math does not <em>feel hard</em> in this
book. It's understandable, and feels more like a puzzle. It seems possible that
the reader steps away with the idea that math could be entertaining, and does
not have to be hard. As a math educator, I see students who have convinced
themselves that they will fail before any assignment is assigned because they
know that they "are not good at math" or that "math is hard." It is not easy to
overcome.</p>
<p>Overall, I thought Count Down was an entertaining read and I would recommend it
to others.</p>https://davidlowryduda.com/book-review-count-downSat, 16 Feb 2013 03:14:15 +0000Hurwitz zeta is a sum of Dirichlet $L$-functions, and vice-versahttps://davidlowryduda.com/hurwitz-zeta-is-a-sum-of-dirichlet-l-functions-and-vice-versaDavid Lowry-Duda<p>At least three times now, I have needed to use that Hurwitz Zeta functions are
a sum of L-functions and its converse, only to have forgotten how it goes. And
unfortunately, the current wikipedia article on the Hurwitz Zeta function has a
mistake, omitting the $\varphi$ term (although it will soon be corrected).
Instead of re-doing it each time, I write this detail here.</p>
<p>The <a href="http://en.wikipedia.org/wiki/Hurwitz_zeta_function">Hurwitz zeta function</a>,
for complex ${s}$ and real ${0 < a \leq 1}$ is ${\zeta(s,a) := \displaystyle \sum_{n = 0}^\infty \frac{1}{(n + a)^s}}$.
A <a href="http://en.wikipedia.org/wiki/Dirichlet_L-function">Dirichlet $L$-function</a>
is a function ${L(s, \chi) = \displaystyle \sum_{n = 1}^\infty \frac{\chi
(n)}{n^s}}$, where ${\chi}$ is a Dirichlet character. This note contains
a few proofs of the following relations:</p>
<div class="lemma">
<p>\begin{equation}
\zeta(s, l/k) = \frac{k^s}{\varphi (k)} \sum_{\chi \mod k} \bar{\chi} (l) L(s, \chi) \tag{1}
\end{equation}
\begin{equation}
L(s, \chi) = \frac{1}{k^s} \sum_{n = 1}^k \chi(n) \zeta(s, \frac{n}{k}) \tag{2}
\end{equation}</p>
</div>
<div class="proof">
<p>We start by considering ${L(s, \chi)}$ for a Dirichlet Character ${\chi \mod k}$. We multiply by ${\bar{\chi}(l)}$ for some ${l}$
that is relatively prime to ${k}$ and sum over the different ${\chi
\mod k}$ to get
\begin{equation*}
\sum_\chi \bar{\chi}(l) L(s,\chi).
\end{equation*}
We then expand the L-function and sum over ${\chi}$ first.
\begin{equation*}
\sum_\chi \bar{\chi}(l) L(s,\chi)= \sum_\chi \bar{\chi} (l) \sum_n \frac{\chi(n)}{n^s} = \sum_n \sum_\chi \left( \bar{\chi}(l) \chi(n) \right) n^{-s}
\end{equation*}
\begin{equation*}
= \sum_{\substack{ n > 0 \\ n \equiv l \mod k}} \varphi(k) n^{-s}.
\end{equation*}
In this last line, we used a fact commonly referred to as the
<a href="http://en.wikipedia.org/wiki/Character_theory#Orthogonality_relations">Orthogonality of Characters</a>, which says exactly that
\begin{equation*}
\sum_{\chi \mod k} \bar{\chi}(l) \chi(n) =
\begin{cases}
\varphi(k) & n \equiv l \mod k
\\
0 & \text{else} \end{cases}.
\end{equation*}</p>
<p>What are the values of ${n > 0, n \equiv l \mod k}$? They start ${l, k + l, 2k+l, \ldots}$. If we were to factor out a ${k}$, we would get
${l/k, 1 + l/k, 2 + l/k, \ldots}$. So we continue to get</p>
<p>\begin{equation}
\sum_{\substack{ n > 0 \\ n \equiv l \mod k}} \varphi(k) n^{-s} =
\varphi(k) \sum_n \frac{1}{k^s} \frac{1}{(n + l/k)^s} =
\frac{\varphi(k)}{k^s} \zeta(s, l/k) \tag{3}
\end{equation}</p>
<p>Rearranging the sides, we get that
\begin{equation}
\zeta(s, l/k) = \frac{k^s}{\varphi(k)} \sum_{\chi \mod k} \bar{\chi}(l) L(s, \chi)
\end{equation}
To write ${L(s,\chi)}$ as a sum of Hurwitz zeta functions, we multiply by
${\chi(l)}$ and sum across ${l}$. Since ${\chi(l)
\bar{\chi}(l) = 1}$, the sum over characters on the right collapses, yielding a factor of
${\varphi(k)}$ since there are ${\varphi(k)}$ characters ${\mod k}$. $\Box$</p>
</div>
<p>I'd like to end by noting that the exact same idea can be used first to show that an
L-function is a sum of Hurwitz zeta functions and then to conclude the converse
using the heart of the idea behind equation 3.</p>
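<p>As a sanity check on equation (2), here is a short numerical sketch. This is my own illustration, not part of the original note: it uses a naive truncated sum with an integral tail estimate for the Hurwitz zeta, and the nontrivial character mod $4$, for which $L(2, \chi)$ is Catalan's constant $\approx 0.9159655$:</p>

```python
def hurwitz_zeta(s: float, a: float, N: int = 10**5) -> float:
    """Naive Hurwitz zeta for s > 1: a truncated sum plus an integral
    estimate of the tail, accurate enough for a sanity check."""
    head = sum(1.0 / (n + a)**s for n in range(N))
    tail = (N + a)**(1.0 - s) / (s - 1.0)
    return head + tail

# Nontrivial character mod 4: chi(1) = 1, chi(3) = -1, chi(even) = 0.
chi = {1: 1, 3: -1}
s, k = 2.0, 4

# Right side of (2): k^{-s} * sum over n of chi(n) * zeta(s, n/k).
rhs = sum(c * hurwitz_zeta(s, n / k) for n, c in chi.items()) / k**s

# Left side: the Dirichlet series for L(s, chi) summed directly.
lhs = sum((-1)**m / (2 * m + 1)**s for m in range(10**5))

print(lhs, rhs)  # both approximate Catalan's constant 0.9159655...
```

<p>For serious use a library implementation of the Hurwitz zeta (mpmath provides one) would be preferable to this naive sum, but the two sides agree to many decimal places even here.</p>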
<p>Further, this document was typed up using latex2wp, which I cannot recommend
highly enough.<sup>1</sup>
<span class="aside"><sup>1</sup>This was originally true, but is no longer true. In a
reorganization this post was re-set from the original latex into a format
better suited for the web.</span></p>https://davidlowryduda.com/hurwitz-zeta-is-a-sum-of-dirichlet-l-functions-and-vice-versaFri, 08 Feb 2013 03:14:15 +0000Math journals and the fight over open accesshttps://davidlowryduda.com/side-of-journalDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/side-of-journalFri, 18 Jan 2013 03:14:15 +0000Are the calculus MOOCs any good - after week 1https://davidlowryduda.com/are-the-calculus-moocs-any-good-after-week-1David Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/are-the-calculus-moocs-any-good-after-week-1Sat, 12 Jan 2013 03:14:15 +0000Are the calculus MOOCs any good?https://davidlowryduda.com/are-the-calculus-moocs-any-goodDavid Lowry-Duda<p>I like the idea of massive online collaboration in math. For example, I am a
big supporter of the ideas of the <a
href="http://polymathprojects.org/">polymath projects</a>. I contribute to
wikis and to <a href="http://www.sagemath.org/">Sage</a> (which I highly
recommend to everyone as an alternative to the M's: Maple, Mathematica, MatLab,
Magma). Now, there are <a
href="http://en.wikipedia.org/wiki/Massive_open_online_course">MOOC</a>s
(Massive open online courses) in many subjects, but in particular there are a
growing number of math MOOCs (a more or less complete list of MOOCs can be
found <a href="http://www.mooc-list.com/">here</a>). The idea of a MOOC is to
give people all over the world the opportunity to a good, diverse, and free
education.</p>
<p>I've looked at a few MOOCs in the past. I've taken a few <a
href="https://www.coursera.org/">Coursera </a>and <a
href="http://www.udacity.com/">Udacity </a>courses, and I have mixed reviews.
Actually, I've been very impressed with the Udacity courses I've taken. They
have a good polish. But there are only a couple dozen - it takes time to get
quality. There are hundreds of Coursera courses, though there is some overlap.
But I've been pretty unimpressed with most of them.</p>
<p>But there are two calculus courses being offered this semester (right now)
through Coursera. I've been a teaching assistant for calculus many times, and
there are things that I like and others that I don't like about my past
experiences. Perhaps the different perspective from a MOOC will lead to a
better form of calculus instruction?</p>
<p>There will be no teaching assistant led recitation sections, as the 'standard
university model' might suggest. Will there be textbooks? In both, there are
textbooks, or at least lecture notes (I'm not certain of their format yet). And
there will be lectures. But due to the sheer size of the class, it's much more
challenging for the instructors to answer individual students' questions. There
is a discussion forum which essentially means that students get to help each
other (I suppose that people like me, who know calculus, can also help people
through the discussion forums). So in a few ways, this turns what I have
come to think of as the traditional model of calculus instruction on its head.</p>
<p>And this might be a good thing! (Or it might not!) Intro calculus instruction
has not really changed much in decades, since before the advent of computers
and handheld calculators. It would make sense that new tools might mean that
teaching methods should change. But I don't know yet.</p>
<p>So I'll be looking at the two courses this semester. The <a
href="https://www.coursera.org/course/calc1">first </a>is being offered by Dr.
Jim Fowler and is associated with Ohio State University. It's an
introductory-calculus course. The <a
href="https://www.coursera.org/course/calcsing">second </a>is being offered by
Dr. Robert Ghrist and is associated with the University of Pennsylvania. It's
sort of a funny class - it's designed for people who already know some
calculus. In particular, students should know what derivatives and integrals
are. There is a diagnostic test that involves taking a limit, computing some
derivatives, and computing an integral (and some precalculus problems as well).
Dr. Ghrist says that his course assumes that students have taken a high school
AP Calculus AB course or the equivalent. So it's not quite fair to compare the
two classes, as they're not on equal footing.</p>
<p>But I can certainly see what I think of the MOOC model for Calculus instruction.</p>https://davidlowryduda.com/are-the-calculus-moocs-any-goodTue, 08 Jan 2013 03:14:15 +0000Math 90 - Concluding Remarkshttps://davidlowryduda.com/math-90-concluding-remarksDavid Lowry-Duda<p>All is said and done with Math 90 for 2012, and the year is coming to a close.
I wanted to take this moment to write a few things about the course, what
seemed to go well and what didn't, and certain trends in the course. that I
think are interesting and illustrative.</p>
<p>First, we might just say some of the numbers. Math 90 is offered only as
pass/fail, with the possibility of 'passing with distinction' if you did
exceptionally well (I'll say what that meant here, though who knows what it
means in general). We had four people fail, three people 'pass with
distinction,' and everyone else got a passing mark. Everything else will be
after the fold.</p>
<p>The most common question asked of me this semester was what grade would merit a
passing mark for the class. I didn't know what to say, and so throughout the
semester I gave the only information that I could guarantee: a 70 would pass.
How accurate was I? Not so accurate, it turns out.</p>
<p>A hard 56% was the cutoff for a passing grade. The overall class mean was a 71,
with a 70 median. Our class mean was a 71, with a 73 median. The standard
deviation was almost a full 14 points, which is a little bit crazy. So this
means that the cutoff for passing was approximately one standard deviation
below the mean. In a world with a perfectly normal distribution, we would
expect something like 84% of the class to pass from this statistic, and in fact
88% actually passed.</p>
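<p>That 84% figure is just the standard normal tail above $-1$ standard deviation, which is quick to confirm with the standard library (my own check, not from the course records):</p>

```python
from math import erf, sqrt

def normal_cdf(z: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Fraction of a normal population lying above a cutoff one standard
# deviation below the mean.
frac_passing = 1.0 - normal_cdf(-1.0)
print(round(100 * frac_passing, 1))  # about 84.1
```

<p>So the observed 88% pass rate is only a few points above what a perfectly normal distribution would predict.</p>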
<p>It seemed to me that the 'goal grade' of the class was a 70%, a byproduct of
being only pass-fail. It's hard to say, but I think the quality of the course
suffered a bit from it. But the purpose of making Math 90 pass-fail is to
reduce the number of students repeating calculus in order to pad their grade.
Which is more important? It's hard to say.</p>
<figure class="center shadowed">
<img src="/wp-content/uploads/2012/12/chart_3.png" width="450"/>
<figcaption class="left">
The horizontal labels indicate standard deviations above or below the mean on
the first midterm. The vertical indicate standard deviations from the mean of
the final grade.
</figcaption>
</figure>
<p>Interestingly, the first midterm was an incredibly accurate indicator of course
performance. I've attached a picture at right indicating how unbelievably
linear and strong the correlation between first midterm performance and final
grade is.</p>
<p>We always tell students to really work on their homework, so I think it's
natural to see how well one's homework grade served as a predictor of one's
final grade. And the answer might surprise - good homework grades had a weak
correlation with good final grade. But a bad homework grade had a pretty strong
correlation with bad final grade. In other words, if you didn't do the homework
well, you didn't do well in the class. But some people managed to complete the
homework without mastering the material, so to speak.</p>
<figure class="center shadowed">
<img src="/wp-content/uploads/2012/12/chart_7.png"
width="450px" />
</figure>
<p>That graphic is also included here, with homework on horizontal and final grade
on vertical.</p>
<p>Passes with distinction were given to the students who got over 2 standard
deviations above the average, more or less. Interestingly, two of the three
'distinguished' people were the two most frequent visitors to my office hours
(the third never visited).</p>
<p>Recitation grade and recitation attendance had a very weak but positive
correlation to final grade. I wish we had data on lecture attendance, as that
would likely serve as a slightly stronger indicator than recitation attendance.</p>https://davidlowryduda.com/math-90-concluding-remarksSun, 30 Dec 2012 03:14:15 +0000Math 90 - Week 11https://davidlowryduda.com/math-90-week-11-and-midterm-solutionsDavid Lowry-Duda<p>We had a midterm this week, and did more review during recitation. The
solutions are now available.</p>
<p><a href="/wp-content/uploads/2012/11/midterm2sols.pdf">Solutions</a>
to the midterm, and a copy of the midterm itself (actually just a
<a href="/wp-content/uploads/2012/11/fa12-math90-midterm2-draft.pdf">draft</a>).
If you have any questions, please let me know. Other than that, we are working towards integrals. Hurray!</p>https://davidlowryduda.com/math-90-week-11-and-midterm-solutionsSun, 18 Nov 2012 03:14:15 +0000An application of Mobius inversion to certain Asymptoticshttps://davidlowryduda.com/an-application-of-mobius-inversion-to-certain-asymptotics-iDavid Lowry-Duda<p>In this note, I consider an application of generalized Mobius Inversion to
extract information from arithmetical sums with asymptotics of the form $\displaystyle \sum_{nk^j \leq x} f(n) = a_1x + O(x^{1 - \epsilon})$ for a fixed
$j$ and a constant $a_1$, so that the sum is over both $n$
and $k$. We will see that $\displaystyle \sum_{nk^j \leq x} f(n) =
a_1x + O(x^{1-\epsilon}) \iff \sum_{n \leq x} f(n) = \frac{a_1x}{\zeta(j)} +
O(x^{1 - \epsilon})$.</p>
<p>For completeness, let's prove the Mobius Inversion formula. Suppose we have an
arithmetic function $\alpha(n)$ that has <a
href="http://en.wikipedia.org/wiki/Dirichlet_convolution">Dirichlet inverse</a>
$\alpha^{-1}(n)$, so that $\alpha * \alpha^{-1} (n) = [n = 1] =
\begin{cases} 1 & \text{if } n = 1 \\ 0 & \text{else}
\end{cases}$, where I use $[n = 1]$ to denote the indicator function of
the condition $n = 1$.</p>
<p>Then if $F(x)$ and $G(x)$ are complex-valued functions on $[1, \infty)$, then</p>
<blockquote>
<p><strong>Mobius Inversion </strong></p>
<p>$\displaystyle F(x) = \sum_{n \leq x} \alpha(n) G\left(\frac{x}{n}\right) = \alpha * G(x)$
if and only if
$\displaystyle G(x) = \sum_{n \leq x} \alpha^{-1}(n)
F\left(\frac{x}{n}\right) = \alpha^{-1} * F(x)$</p>
</blockquote>
<p>Suppose that $\displaystyle F(x) = \sum_{n \leq x}\alpha(n)
G\left(\frac{x}{n}\right)$. Then $\displaystyle \sum_{n \leq x}
\alpha^{-1}(n) F\left(\frac{x}{n}\right) = \sum_{n \leq x} \alpha^{-1}(n)
\sum_{m \leq x/n} \alpha(m)G\left(\frac{x}{mn}\right) =
\sum_{n \leq x} \sum_{m \leq x/n} \alpha^{-1}(n) \alpha (m)
G\left(\frac{x}{mn}\right)$.</p>
<p>Let's collect terms. For any given $d$, the number of times $G\left(\frac{x}{d}\right)$ will occur will be one for every factorization of
$d$ (that is, one time for every way of writing $mn = d$). So
reorganizing the sum, we get that it's equal to $\displaystyle \sum_{d
\leq x} G\left(\frac{x}{d}\right) \sum_{e | d} \alpha(e)\alpha^{-1}\left(
\frac{d}{e}\right) = \sum_{d \leq x} G\left(\frac{x}{d}\right) (\alpha *
\alpha^{-1})(d)$. Since we know that $\alpha$ and $\alpha^{-1}$ are
Dirichlet inverses, the only term that survives is when $d = 1$. So we
have, after the dust settles, that our sum is nothing more than $G(x)$.
And the converse is the exact same argument. $\diamondsuit$.</p>
<p>We know some Dirichlet inverses already. If $u(n)$ is the function
sending everything to $1$, i.e. $u(n) \equiv 1$, then we know that
$\displaystyle \mu * u(n) = \sum_{d | n} \mu(d) = [n = 1]$, where $\mu(n)$ is the <a
href="http://en.wikipedia.org/wiki/M%C3%B6bius_function">Mobius function</a>.
This means that $\displaystyle \sum_n \frac{u(n)}{n^s} \sum_m
\frac{\mu(m)}{m^s} = \zeta(s) \sum \frac{\mu(m)}{m^s} = 1$. This also means
that we know that $\displaystyle \sum \frac{\mu(n)}{n^s} =
\frac{1}{\zeta(s)}$.</p>
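<p>This identity is easy to check numerically. Here's a short Python sketch (my addition) that computes $\mu(n)$ by trial division and verifies that $\sum_{d \mid n} \mu(d) = [n = 1]$ for small $n$:</p>

```python
def mobius(n):
    """Compute the Mobius function mu(n) by trial division."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:      # a squared prime factor: mu vanishes
                return 0
            result = -result    # one more distinct prime factor
        p += 1
    return -result if n > 1 else result

def mu_sum_over_divisors(n):
    """Sum of mu(d) over the divisors d of n."""
    return sum(mobius(d) for d in range(1, n + 1) if n % d == 0)

# sum_{d | n} mu(d) should be 1 for n = 1 and 0 otherwise.
assert all(mu_sum_over_divisors(n) == (1 if n == 1 else 0)
           for n in range(1, 200))
```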
<p>Let me take a brief aside to mention a particular notation I endorse: the <a
href="http://en.wikipedia.org/wiki/Iverson_bracket">Iverson bracket
notation</a>, which says that $[P] = \begin{cases} 1 & \text{if } P
\text{ true} \\ 0 & \text{else} \end{cases}$. Why? Well, a while
ago, I really liked to use the bolded 1 indicator function, but then I started
doing things with modules and representations, and suddenly bolded 1 became a
multiplicative identity. This is done in Artin's algebra text too, which I like
(but did not learn out of). Then I really liked the $\chi$-style indicator
notation, until it became the default letter for group characters. So I've
turned to the Iverson Bracket, which saves space compared to the other two
anyway.</p>
<p>Back to Dirichlet inverses. Some are a bit more challenging to find. We might
say, let's find the Dirichlet inverse of the function $[n = a^j]$ for a
fixed $j$, i.e. the $j$th-power indicator function. This is one of
those things that we'll be using later, and I'm acting like one of those
teachers who just happens to consider the right thing at the right time.</p>
<div class="claim">
<p>The Dirichlet inverse of the function $[n = a^j]$
is the function $[n = a^j]\mu(a)$, which through slightly abusive
notation is to say the function that is zero on non-$j$th-powers, and
takes the value of $\mu(a)$ when $n = a^j$.</p>
</div>
<p><em>Possible Motivation of claim: </em>This doesn't need to have descended as a
gift from the gods. Examining $[n = a^j]$ on powers makes it seem like we
want a $\mu$-like function, and a little computation would give a good
suggestion as to what to try.</p>
<p><em>Proof:</em>
Both sides are $1$ on $1$. That's a good start. Note also that both
sides are zero on non-$j$-th-powers, so we only consider $j$th
powers. For some $m > 1$, consider $n = m^j$. Then $\displaystyle \sum_{d | m^j} [d = a^j]\left[\frac{m^j}{d} = b^j\right]\mu(b) =
\sum_{e^j | m^j} \left[\frac{m^j}{e^j} = b^j\right]\mu(b) =
\sum_{e | m} \mu\left(\frac{m}{e}\right) = \sum_{e|m} \mu(e) = 0$ as
$m > 1$, using that $e^j \mid m^j$ exactly when $e \mid m$. So we are done. $\diamondsuit$</p>
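<p>For the skeptical, here's a numerical check of the claim (my sketch, specialized to $j = 2$); it convolves the two functions directly and confirms the result is the identity $[n = 1]$:</p>

```python
def jth_root(n, j):
    """Return a if n == a**j for a positive integer a, else None."""
    a = round(n ** (1.0 / j))
    for cand in (a - 1, a, a + 1):     # guard against float rounding
        if cand >= 1 and cand ** j == n:
            return cand
    return None

def mobius(n):
    """Mobius function mu(n), by trial division."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def convolve_at(n, j=2):
    """Dirichlet convolution of [n = a^j] with [n = a^j] mu(a), at n."""
    total = 0
    for d in range(1, n + 1):
        if n % d == 0:
            a, b = jth_root(d, j), jth_root(n // d, j)
            if a is not None and b is not None:
                total += mobius(b)
    return total

# The convolution should be the identity [n = 1].
assert all(convolve_at(n) == (1 if n == 1 else 0) for n in range(1, 300))
```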
<p>That's sort of nice. Let's go on to the main bit of the day.</p>
<div class="lemma">
<p>$\displaystyle \sum_{nk^j \leq x} f(n) = \sum_{n
\leq x} f * [n = k^j \text{ for some } k] (n)$.</p>
</div>
<p><em>Proof of lemma:</em>
$\displaystyle \sum_{nk^j \leq x} f(n) = \sum_{m \leq x} \sum_{nk^j
= m} f(n) = \sum_{m \leq x} \sum_{d | m} [d =
k^j]f\left(\frac{m}{d}\right) = \sum_{m \leq x} f * [m = k^j] (m)$ $\diamondsuit$</p>
<div class="proposition">
<p>$\displaystyle \sum_{nk^j \leq x} f(n) =
a_1x + O(x^{1-\epsilon}) \iff \sum_{n \leq x} f(n) = \frac{a_1x}{\zeta(j)} +
O(x^{1 - \epsilon})$, where $0 < \epsilon < 1 - \frac{1}{j}$.</p>
</div>
<p><em>Proof:</em> We now know that $\displaystyle F(x) = \sum_{nk^j \leq x}
f(n) = \sum_{n \leq x} f * [n = k^j] (n) = \sum_{mn \leq
x} [m = k^j] f(n) = \sum_{m \leq x} [m = k^j] \sum_{n \leq x/m} f(n)$, which is
of the form $\displaystyle F(x) = \sum \alpha (n)G(x/n)$. So by Mobius
inversion, $\displaystyle F(x) = \sum_{m \leq x} [m = k^j] \sum_{n \leq
x/m} f(n) = a_1x + O(x^{1 - \epsilon}) \iff \sum_{n
\leq x} f(n) = \sum_{n \leq x} [n = k^j]\mu(k) F(x/n)$.</p>
<p>Let's look at this last sum. $\displaystyle \sum_{n \leq x} [n =
k^j]\mu(k) F(x/n) = \sum_{n^j \leq x} \mu(n) F(x/n^j)
= \sum_{n^j \leq x} \mu(n) \left( a_1\frac{x}{n^j} + O\left(
\frac{x^{1 - \epsilon}}{n^{j(1 - \epsilon)}}\right)\right)
= \sum_{n^j \leq x} \mu(n)a_1 \frac{x}{n^j} + \sum_{n^j \leq x}
\mu(n)O\left( \frac{x^{1 - \epsilon}}{n^{j(1 - \epsilon)}}\right)
= a_1 x \sum_{n^j \leq x} \frac{\mu(n)}{n^j} + \sum_{n^j \leq x}
\mu(n)O\left( \frac{x^{1 - \epsilon}}{n^{j(1 - \epsilon)}}\right)$.</p>
<p>For the main term, recall that we know that $\displaystyle
\frac{1}{\zeta(s)} = \sum_n \frac{\mu(n)}{n^s}$, so that (asymptotically) we
know that $\displaystyle \sum_{n^j \leq x} \frac{\mu(n)}{n^j} \to
\frac{1}{\zeta(j)}$.</p>
<p>For the error term, note that $|\mu(n)| \leq 1$, so $\displaystyle
\Big|\sum_{n^j \leq x} \mu(n)O\left( \frac{x^{1 - \epsilon}}{n^{j(1 -
\epsilon)}}\right)\Big| \leq O\left( x^{1 - \epsilon}
\sum \frac{1}{n^{j(1 - \epsilon)}}\right) = O(x^{1 - \epsilon})$, where the
convergence of this last sum is guaranteed by the condition that $\epsilon$
is not too big (in a sense, we can't maintain an arbitrarily small error).</p>
<p>Every step is reversible, so putting it all together we get $\displaystyle \sum_{nk^j \leq x} f(n) = a_1x + O(x^{1-\epsilon}) \iff \sum_{n
\leq x} f(n) = \frac{a_1x}{\zeta(j)} + O(x^{1 - \epsilon})$, as we'd wanted.
$\diamondsuit$</p>
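<p>To see the proposition in action numerically (a sketch of mine, not part of the original argument), take $f \equiv 1$ and $j = 2$. Then $\sum_{n \leq x} f(n) = x + O(1)$, so we should find $\sum_{nk^2 \leq x} 1 = \sum_{k} \lfloor x/k^2 \rfloor \approx \zeta(2) x$:</p>

```python
from math import isqrt, pi

def double_sum(x):
    """Count pairs (n, k) of positive integers with n * k**2 <= x."""
    return sum(x // (k * k) for k in range(1, isqrt(x) + 1))

x = 10**6
ratio = double_sum(x) / x
# The ratio should approach zeta(2) = pi^2 / 6, about 1.6449.
assert abs(ratio - pi**2 / 6) < 0.01
```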
<p>In general, if one has error term $O(x^\alpha)$ for some $\alpha<1$, then this process will yield an error term $O(x^{\alpha'})$
with $\alpha' \geq \alpha$, but it will still be true that $\alpha'
< 1$. I have much more to say about applying Mobius Inversion to asymptotics
of this form, and will follow this up in another note.</p>https://davidlowryduda.com/an-application-of-mobius-inversion-to-certain-asymptotics-iThu, 08 Nov 2012 03:14:15 +0000Math 90 - Week 9https://davidlowryduda.com/math-90-week-10David Lowry-Duda<p>We deviated from our regular course of action this week, so we did not have
preset examples to do in classes. So instead, I will say a few things, and this
post can be the new place for questions.</p>
<p>We will have a test next week, but I will not have office hours on Monday and I
might miss morning recitation. I've asked Tom to be ready to step in, so
(barring acts of the gods) there will be class next week. I will still be in
MRC next week, and I'm planning on having office hours next Wednesday instead.
<strong>I will be holding additional office hours, in my office, from 11am to
2pm on Wednesday.</strong></p>
<p>Usually we preview the homework in class, but since we didn't do that this time
and I will not be available to help you with the homework before you have to
turn it in, I especially encourage you to use this space to ask any questions you
have here.</p>https://davidlowryduda.com/math-90-week-10Wed, 07 Nov 2012 03:14:15 +0000Math 90 - Week 8https://davidlowryduda.com/math-90-week-8David Lowry-Duda<p>Today, we had a set of problems as usual, and a quiz! (And I didn't tell you
about the quiz, even though others did, so I'm going to pretend that it was a
pop quiz!) Below, you'll find the three problems, their solutions, and a
worked-out quiz.</p>
<p>We had three questions in recitation.</p>
<ol>
<li>A function $f(x)$ is continuous on $[1,5]$ and differentiable on $(1,5)$. We happen to know that $f'(x) > 2$.
<ol>
<li>What can you conclude about the function from the Mean Value Theorem?</li>
<li>Use the Mean Value Theorem to show that $f(5) > f(1) + 8$.</li>
</ol>
</li>
<li>Consider the following function $f(x) = \begin{cases} x^2 + 1 &
\text{if } x < -2 \\ x^3 + 13 & \text{if } -2 \leq x < 0 \\ x^4 + 13 & \text{if } 0 < x < 1 \\ -14x + 28 & \text{else } \end{cases}$.
<ol>
<li>Without using derivatives, determine where $f(x)$ is increasing or decreasing.</li>
<li>Check by taking derivatives.</li>
<li>Identify the local maxes and mins of the function $f(x)$.</li>
</ol>
</li>
<li>Consider the function $g(x) = 3x^5 - 25x^3 + 60x + 2$.
<ol>
<li>Show that $-2, -1, 1, 2$ are the critical points of $g(x)$.</li>
<li>What are the local minima and maxima of $g(x)$?</li>
<li>Compute $g''(x)$ for each of the critical points of $g(x)$.</li>
<li>Do you notice a pattern between the second derivatives at the minima and maxima?</li>
</ol>
</li>
</ol>
<p>I'll consider the quiz after these problems.</p>
<h4>Question 1</h4>
<p>This question is designed around the mean-value theorem. The mean-value theorem
states that if you have a function $f(x)$ that is continuous on an
interval $[a,b]$ and differentiable on $(a,b)$, then there is a
$c$ in the interval $(a,b)$ such that $f'(c) = \dfrac{f(b) -
f(a)}{b-a}$. In other words, the "average slope" gets hit by the derivative.</p>
<p>So in this problem, from the mean value theorem, we know that $\dfrac{f(5)
- f(1)}{5-1} = f'(c)$ for some $c$ in $(1,5)$. We know in addition
that $f'(x) > 2$ always. So in fact, the mean value theorem tells us
that $\dfrac{f(5) - f(1)}{5-1} = f'(c) > 2$, or that $f(5) - f(1)
> 2\cdot (4)$. Rewriting this, we get that $f(5) > 8 + f(1)$, which
is exactly what we were trying to show.</p>
<h4>Question 2</h4>
<p>Without using derivatives!</p>
<p>We'll consider each function-piece in turn. First, we have $x^2 + 1$ on
$(-\infty, -2)$. The thing that changes with $x$ here is $x^2$, and the size of $x^2$ depends only on the magnitude of $x$:
bigger input magnitude yields bigger output magnitude. So as $x$ is
increasing from $-\infty$ to $-2$, it is <em>decreasing</em> in magnitude,
and thus $x^2$ is decreasing. In a different way, we know that $x^2$ is a parabola with vertex above $x = 0$. It's decreasing for all
negative numbers and increasing on all positive numbers. So we have one part
down.</p>
<p>Next, we have $x^3 + 13$ on $[-2, 0)$. Our previous thought process
doesn't quite work now. $x^3$ will take negative numbers to more negative
numbers, and the larger the input (in magnitude), the larger the output. So as
$x$ is increasing from $-2$ to $0$, it is decreasing in
magnitude. This means that $x^3$ will be <em>increasing</em>, as the negative
numbers coming out will also be decreasing in magnitude. Thus $x^3$ is
increasing here. In a different way, we know the graph of $x^3$, and it's
always increasing.</p>
<p>$x^4 + 13$ behaves a lot like a parabola. Now we're in positive numbers,
which match our intuition a lot better than negative numbers. As the magnitude
of $x$ increases, so does the magnitude of $x^4$. So this function is
also <em>increasing.</em></p>
<p>Finally, we have a line. Thank goodness, a line! This line has negative slope,
so it's decreasing always.</p>
<p>Let's check using derivatives. The derivative of $x^2 + 1$ is $2x$,
and we're only looking at negative $x$. Thus $2x$ is always negative, and
so the original function is decreasing. The derivative of $x^3 + 13$ is
$3x^2$. This is always nonnegative, so the original function is
increasing. The derivative of $x^4 + 13$ is $4x^3$. We're now on
positive $x$, so $4x^3$ is always positive, and so the original
function is increasing. Finally, the derivative of our line $-14x + 28$
is $-14$, which is negative, and so the original function is decreasing.
So we were right above - good.</p>
<p>When we are trying to identify the local maxima and minima, it's tempting to
just try to use (what you're about to learn:) the first derivative test:
finding the derivatives and setting equal to zero. But that won't work here.
Notice that all the derivatives we found are zero only at $0$. But our
function isn't differentiable everywhere. Instead, we should use the fact that
we now know when the function is increasing and decreasing. A local maximum
will occur when the function increases, and then decreases. A local minimum
will occur when the function decreases, then increases. We know that the
function is decreasing until $x = -2$, after which the function is
increasing. So there is a local min at $x = -2$. The function goes from
increasing to decreasing at $x = 1$ too. So we know that $x = 1$
will be a local maximum. We have found all the times when the function is
changing from increasing to decreasing, or from decreasing to increasing, so
we're done.</p>
<blockquote>
<p><strong>As an aside:</strong></p>
<p>This is really the intuition behind the first derivative test, too. The idea of
the first derivative test is that maxes and mins will occur when the derivative
is zero or on the boundary, so long as our function is differentiable. Why is
this the case? If our function is differentiable, then the slope of the
function changes from positive to negative (i.e. the function changes from
increasing to decreasing) when the derivative is zero. But here, our function
isn't differentiable everywhere, so we need to be a bit wittier.</p>
</blockquote>
<h4>Question 3</h4>
<p>So we are now looking at $g(x) = 3x^5 - 25x^3 + 60x + 2$. Let's
differentiate: we find that $g'(x) = 15x^4 - 75x^2 + 60$. If we plug in
$x = -2, -1, 1, 2$, we get zero. Yay! In fact, we have that $g'(x)
= 15(x+2)(x-2)(x+1)(x-1)$. These are the places where the derivative is zero,
so we expect them to be our max and min candidates. By either plugging in
numbers or looking at when the function is increasing/decreasing, or by
realizing that this is a positive quintic with 4 extrema (if this makes sense -
awesome; if not - don't worry), we have maxes at $x = -2, 1$ and mins at
$x = -1, 2$.</p>
<p>The second derivative is $g''(x) = 60x^3 - 150x$. The idea behind the
second half of this problem is to motivate the second derivative test (which
you'll learn shortly). It just so happens that when you compute the second
derivative at these four points, it's negative at the two maxima and positive
at the two minima. This might lead you to make the conjecture that at local
maxima, the second derivative is always negative; and at local minima, the
second derivative is always positive. But <em>this would be wrong</em>.</p>
<p>The converse, however, is true. If the second derivative is negative at a
critical point, then that point is a local maximum. If it's positive at a
critical point, then that point is a local minimum.</p>
<blockquote>
<p>Why is this true? The second derivative $g''$ is the <em>first
derivative</em> of the first derivative. This means that when the second
derivative is positive, the first derivative is increasing. And when it's
negative, the first derivative is decreasing. So at a critical point, i.e.
when the first derivative is zero, if the second derivative is positive, this
means that the first derivative is increasing. Since the first derivative is
zero and increasing, this means that it was just negative and is about to be
positive. But <em>this</em> means that the original function was decreasing
and then increasing, i.e. that there is a local min. So when the second
derivative is positive at a critical point, we get a local min. This type of
reasoning works for when the second derivative is negative at a critical
point too. I encourage you to try it!</p>
</blockquote>
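<p>The pattern from the last two parts of Question 3 can also be checked directly. Here's a tiny Python sketch (my addition) confirming the sign of $g''$ at each critical point:</p>

```python
def g_prime(x):
    """g'(x) = 15x^4 - 75x^2 + 60, for g(x) = 3x^5 - 25x^3 + 60x + 2."""
    return 15 * x**4 - 75 * x**2 + 60

def g_double_prime(x):
    """g''(x) = 60x^3 - 150x."""
    return 60 * x**3 - 150 * x

# All four points are critical points of g.
assert all(g_prime(x) == 0 for x in (-2, -1, 1, 2))
# Negative second derivative at the maxima (x = -2 and x = 1)...
assert g_double_prime(-2) < 0 and g_double_prime(1) < 0
# ...and positive at the minima (x = -1 and x = 2).
assert g_double_prime(-1) > 0 and g_double_prime(2) > 0
```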
<p>I'll post up the quiz and its solutions shortly, but in a separate post. This
one is very long as is, I think. I'd like to finish by reminding you all that I
will not be in MRC next week. In addition, something was left in the classroom.
If you're missing something and you think you left it, I am holding on to it -
so let me know. (I'll give it to some lost and found office if I don't hear
anything).</p>https://davidlowryduda.com/math-90-week-8Wed, 24 Oct 2012 03:14:15 +0000Math 90 - Week 8 Quizhttps://davidlowryduda.com/math-90-week-8-quizDavid Lowry-Duda<p>There was a quiz this week - in this post, we consider the solutions, common mistakes, and the distribution.</p>
<p>The quiz was as follows:</p>
<p>A girl flies a kite that stays a constant 200 feet above the ground. The wind carries it away from her at 20 feet per second. We were first asked to draw a picture.</p>
<figure class="center shadowed">
<img src="/wp-content/uploads/2012/10/drawing.png"
width="236" height="250"/>
<figcaption class="left">
The girl and the kite from the problem. Notice "distance".
</figcaption>
</figure>
<p>I have included a picture at the right, excluding the initial position (which
is not really useful to this problem). In addition, we know that $x'(t) =
20$. Almost everyone drew the picture correctly, so I won't belabor this point
too much.</p>
<p>We were then asked to give an equation for the square of the distance from the
girl to the kite. The question explicitly asks for the square of the distance.
This is a right triangle, so we write $d(t)^2 = 200^2 + x(t)^2$, from the
Pythagorean Theorem.</p>
<p>Finally, we were asked to compute the rate at which the girl must let out
string when the kite was 300 feet away from the girl. The most common (and
often, only) mistake on the test was to interpret this to mean that $x =
300$ at this time. But the distance between two things is the length of the
straight line between them unless stated otherwise, so we actually care about
the time when $d = 300$.</p>
<p>So we have a triangle with hypotenuse $300$ and one leg $200$, so
the other leg is of length $\sqrt{300^2 - 200^2}=100 \sqrt 5=x$.</p>
<p>Now we put it all together. From $d(t)^2 = 200^2 + x(t)^2$, we
differentiate to get $2d(t)d'(t) = 2x(t)x'(t)$. We are looking to find
$d'(t)$ when $d(t) = 300$, and we found that at this time $x(t) = 100\sqrt{5}$. We also know that $x'(t) = 20$ at all times. Plugging
these in, we get that $2 \cdot 300 \cdot d'(t) = 2 \cdot 100\sqrt 5 \cdot 20$, or
that $d'(t)=\dfrac{200 \sqrt 5 \cdot 20}{600} = \dfrac{20 \sqrt 5}{3}$ feet
per second.</p>
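<p>If you'd like to double-check the arithmetic, here's a quick Python sketch (mine, not part of the quiz) comparing the answer to a numerical derivative of the distance function:</p>

```python
from math import sqrt

def d(t):
    """Distance from the girl to the kite at time t, with x(t) = 20t feet."""
    return sqrt(200**2 + (20 * t) ** 2)

t0 = 5 * sqrt(5)            # the time when d(t) = 300
assert abs(d(t0) - 300) < 1e-9

h = 1e-6
numeric = (d(t0 + h) - d(t0 - h)) / (2 * h)   # central difference
exact = 20 * sqrt(5) / 3                       # the answer derived above
assert abs(numeric - exact) < 1e-5
```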
<p>The class did really, really well on this quiz. I was impressed. The vast
majority of the class made at least an 8. The most common mistake was to
mistake which leg was implied by "distance from the kite to the girl."</p>https://davidlowryduda.com/math-90-week-8-quizWed, 24 Oct 2012 03:14:15 +0000Math 90 - Week 7https://davidlowryduda.com/math-90-week-7David Lowry-Duda<p>I haven't quite yet finished writing up the solutions to the problems we did in
class yesterday. But I wanted to go ahead and make the solutions to the test
available. They can be found <a
href="/wp-content/uploads/2012/10/math90test1sols.pdf">here</a>.</p>
<p>But please note that <strong>there is an error in the key!</strong> In
particular, on problem 7(b), I forgot that we only care about $t \geq 0$.
So the final answer should not include $t = 1/2$.</p>
<p>We considered three basic questions today. Two were related rates problems, and
one was a preview of thinking of the extrema of graph, the zeroes of
derivatives, and the extreme-value theorem. Unless there are any questions,
I'll just go over the two related rates problems.</p>
<ol>
<li>The surface area of a cube is increasing at $72$ cm^2/s. At what rate is the side length increasing when the surface area is $96$ cm^2? At what rate is the volume increasing at that time?</li>
<li>Charlie is testing a new candy: the ever-stretching Laffy-Taffy. So he steps into his glass elevator with one end of the Laffy taffy in his hand. An oompa-loompa stands outside the elevator holding the other end, 4 meters away from Charlie. Charlie hits the button, and rises at 100 m/s. At what rate does the angle of inclination (from the oompa-loompa's perspective) change when the elevator is 4 m high? At what rate is the Laffy-Taffy stretching at that time?</li>
</ol>
<h4>Questions 1:</h4>
<p>Although I don't do it here - I still recommend that the first thing you do is
draw a picture. Here, we have a cube. We know how quickly the surface area is
changing. We want to know how quickly the side length is changing. How do we
relate surface area to side length?</p>
<p>Well - each side of a cube is a square with side length $s(t)$, where
$s(t)$ is the side length of the cube at time $t$. Since a cube has
six faces, this means that at time $t$, a cube has surface area $A(t) = 6s(t)^2$. This formula relates side length to surface area, so it's the
exact formula that we are seeking. Differentiating both sides with respect to
$t$, we get that $A'(t) = 12s(t)s'(t)$.</p>
<p>At our particular moment in time, we know that $A'(t) = 72$. We want to
know $s'(t)$ But we also need to know $s(t)$. Do we know this?
Well, we know that the surface area at this time is $96$ cm^2. This means
that $6s^2 = 96$, so $s = 4$ in our situation. Thus from $A'(t) = 12s(t)s'(t)$,
we get $72 = 12 \cdot 4 s'$, or that $s' =
3/2$ cm/s.</p>
<p>Now we want to know how quickly the volume is changing. Well - the volume of a
cube satisfies $V(t) = s(t)^3$, so that $V'(t) = 3s(t)^2s'(t)$.
Since we just calculated $s(t)$ and $s'(t)$, we know that $V'
= 3 \cdot 4^2 \cdot 3/2 = 72$ cm^3/s.</p>
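<p>As a sanity check (my own sketch, not part of the recitation), we can model the surface area near this moment as $A(t) = 96 + 72t$, recover $s$ and $V$ from it, and differentiate numerically:</p>

```python
def s(t):
    """Side length when the surface area is A(t) = 96 + 72t cm^2."""
    return ((96 + 72 * t) / 6) ** 0.5

def V(t):
    """Volume of the cube at time t."""
    return s(t) ** 3

h = 1e-6
s_rate = (s(h) - s(-h)) / (2 * h)   # ds/dt at the moment A = 96
V_rate = (V(h) - V(-h)) / (2 * h)   # dV/dt at the same moment
assert abs(s_rate - 3 / 2) < 1e-6
assert abs(V_rate - 72) < 1e-4
```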
<h4>An aside</h4>
<p>As an aside - there is a common mathematical fallacy that comes up related to
this concept. We know that volume is 3-dimensional and length is 1-dimensional,
but we are so accustomed to cubing length to get volume that we often expect
that we should cube the rate of length-change to get the rate of volume-change.
<strong>But this isn't how it works!</strong> The ability to understand how to
estimate the rate of change of something from a known or at least an estimate
of another thing's rate of change is a fundamental task that we do all the
time. Understanding that this isn't a naive process might lead to a greater
grasp of numeracy (which I always emphasize is important).</p>
<h4>Question 2</h4>
<p>If someone asks me to, I will upload a picture describing this question. But at
the moment, I don't. Let's let $h(t)$ describe the height of the elevator
at time $t$, so that we know $h'(t)$ and we want to know things
when $h(t) = 4$. If $\theta$ denotes the angle of inclination, then
we want to know $\theta'(t)$.</p>
<p>Since the oompa-loompa stands $4$ meters away from the base of the
elevator at the start, we can set up a triangle with base $4$ meters,
height $h(t)$, and with hypotenuse equal to the length of the
Laffy-Taffy. Then $\tan \theta = \dfrac{h(t)}{4}$. Differentiating both
sides with respect to $t$, we get that $\sec^2 (\theta(t)) \theta'(t)
= h'(t)/4$. So at our time in question, we know that $h'(t) = 100$. What
is $\theta(t)$? Well, since we're interested in what happens when $h(t) = 4$,
our $\theta$ is $\pi/4$, since in the triangle the height
is equal to the width. And since $\cos \pi/4 = 1/(\sqrt 2)$, we know that
$\sec^2 \pi/4 = 2$. So we have that $2 \theta ' = 100/4$, or rather
$\theta' = 100/8$ radians per second. That's pretty speedy.</p>
<p>We also want to know how quickly the Laffy-Taffy is stretching. Let $l(t)$ denote the length of the Laffy-Taffy. Then from our triangle, we know
that $l(t) = \sqrt{ 16 + h(t)^2}$. Differentiating, we see that $l'(t) = \frac{1}{2} (16 + h(t)^2)^{-1/2} 2h(t) h'(t)$. Plugging everything in that
we know, we get that $l'(t) = \frac{1}{2} (16 + 16)^{-1/2} \cdot 2 \cdot 4
\cdot 100$. And simplifying yields the answer.</p>
<h4>Final Comments</h4>
<p>I've said it in class, but it merits resaying: related rates is an application
of the chain rule, and it comes up all the time. In many ways, it is one of the
bread-and-butter questions of the course. A related rates problem will
certainly appear on the next midterm, and one will appear on the final.</p>
<p>They're also one step further removed from standard arithmetic, in the sense
that they're often given as word problems without a designated path to the
solution. Often, you must relate the various values yourself (as if it were a
real problem). This can be hard to set up or to visualize. If you have any
trouble, let me know, and we'll see what we can do.</p>
<p>It was a pleasure to lecture to you all on Wednesday, but I'm sure you're happy
to be back in the capable hands of Tom.</p>
<p>Finally, I will be in the MRC on both Monday and Tuesday of next week. But I
won't be in the MRC at all on the following Tuesday.</p>
<p>Good luck, and have a good weekend.</p>https://davidlowryduda.com/math-90-week-7Wed, 17 Oct 2012 03:14:15 +0000Math 90 - Week 5https://davidlowryduda.com/math-90-week-5David Lowry-Duda<p>A few administrative notes before we review the day's material: I will not be
holding office hours this Wednesday. And there are no classes next Monday, when
my usual set of office hours are. But I've decided to do a sort of experiment:
I don't plan on reviewing for the exam specifically next week, but a large
portion of the class has said that they would come to office hours on Monday if
I were to have them. So I'm going to hold them to that - I'll be in Kassar
House 105 (the MRC room) from 7-8:30 (or so, later perhaps if there are a lot
of questions), and this will dually function as my office hours and a sort of
review session.</p>
<p>But this comes with a few strings attached: firstly, I'll be willing to answer
any question, but I'm not going to prepare a review; secondly, if there is poor
turnout, then this won't happen again. Alrighty!</p>
<p>The topic of the day was differentiation! The three questions of the day were -</p>
<ol>
<li>Differentiate the following functions:
<ol type="i">
<li>$e^x$</li>
<li>$e^{e^x}$</li>
<li>$e^{e^{e^x}}$</li>
<li>$\sin x$</li>
<li>$\sin (\sin x)$</li>
<li>$\sin (\sin (\sin x))$</li>
</ol>
</li>
<li>A particle moves along a line with its position described by the function $s(t) = a_0t^2 + a_1t + a_2$. We know that its acceleration is always $20$ m/s/s, that its velocity at $t = 1$ is $-10$ m/s, and that its position at $t = 2$ is $20$ m. What are $a_0, a_1, a_2$?</li>
<li>Given that $u(x) = x^2 + x + 2$, what are the following:
<ol type="i">
<li>$\frac{d}{dx} (u(x))^2$</li>
<li>$\frac{d}{dx} (u(x))^n$</li>
<li>$\frac{d}{dx} (5 + x^3)^{-3}$</li>
<li>$\frac{d}{dx} ((u(x))^n)^m$</li>
</ol>
</li>
</ol>
<h4>Question 1</h4>
<p>This is all about the chain rule. Please note that this is a big deal, so if
you have any trouble at all with the chain rule, seek extra help. The
derivative of $e^x$ is $e^x$. To compute the derivative of $e^{e^x}$, we might think of $u(x) = e^x$, so that we have $e^u$.
The derivative of $e^u$ will be $e^u u'$, which gives us $e^{e^x}e^x$. Let's look at the other way of understanding the chain rule to
compute the derivative of $e^{e^{e^x}}$. The "outer function" is $e^{(\cdot)}$. Its derivative is just itself. The first "inner function" is
$e^{e^x}$. We have just computed its derivative above (it's $e^{e^x} e^x$). So we multiply them together to get $e^{e^{e^x}}e^{e^x}e^x$.</p>
<p>Similarly, the derivative of $\sin x$ is $\cos x$. The derivative of
$\sin \sin x$ requires the chain rule. On the one hand, the outer function is
$\sin$, and the derivative of $\sin$ is $\cos$. So we know we will
have a $\cos (\sin x)$ in the answer. The inner function is also $\sin x$, so we need to multiply by its derivative. The final answer will be
$\cos (\sin x )\cos x$. To compute the derivative of $\sin \sin
\sin x$, we again use the chain rule. I will again use helper functions, to
illustrate their use. We might call $u(x) = \sin \sin x$, so that we are
computing the derivative of $\sin (u)$. Then we get $\cos u u'$. We
happen to have computed $u'$ just a moment ago, so the final answer is
$\cos (\sin \sin x) \cos( \sin x) \cos x$.</p>
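<p>A handy way to double-check chain-rule computations like these (a sketch of mine, not from the recitation) is to compare against a numerical derivative:</p>

```python
from math import sin, cos

def f(x):
    return sin(sin(sin(x)))

def f_prime(x):
    """The chain rule: cos(sin(sin x)) * cos(sin x) * cos(x)."""
    return cos(sin(sin(x))) * cos(sin(x)) * cos(x)

x0, h = 0.7, 1e-6
numeric = (f(x0 + h) - f(x0 - h)) / (2 * h)   # central difference
assert abs(numeric - f_prime(x0)) < 1e-8
```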
<h4>Question 2</h4>
<p>The key idea of this question is to remember that the function $s(t)$
gives position at time $t$. So its derivative gives a result in terms of
position per time, the velocity. And the derivative of velocity will give a
result in terms of position per time per time, or acceleration. So the velocity
of our particle is $2a_0t + a_1$, and the acceleration is $2a_0$.
Since we know that the acceleration is always $20$, we know that $2a_0 = 20$ so that $a_0 = 10$. The velocity at $t = 1$ is $-10$, so we know that $2(10)(1) + a_1 = -10$, so that $a_1 = -30$.
Finally, our position at time $t = 2$ is $20$, so that $4(10)
+ 2(-30) + a_2 = 20$, so that $a_2 = 40$. I used different numbers
between the two classes, so don't pay too much attention if the exact details
are different between one class and the other.</p>
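<p>The three conditions form a triangular system that can be solved line by line; in Python (a sketch with the numbers used above):</p>

```python
# Acceleration: 2*a0 = 20.
a0 = 20 / 2
# Velocity at t = 1: 2*a0*1 + a1 = -10.
a1 = -10 - 2 * a0
# Position at t = 2: 4*a0 + 2*a1 + a2 = 20.
a2 = 20 - (4 * a0 + 2 * a1)
assert (a0, a1, a2) == (10.0, -30.0, 40.0)
```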
<h4>Question 3</h4>
<p>This is more about the chain-rule! This is sort of an explicit example of
helper functions. We first want to compute the derivative of $u(x)^2$. By
the chain rule, this will be $2u(x)u'(x)$. What is $u'(x)$? It's
$2x + 1$. So the derivative of $u(x)^2$ is $2(x^2 + x + 2)(2x
+ 1)$. This is a single case of the slightly more general $u(x)^n$. Here,
the power rule tells us that the derivative will be $nu(x)^{n-1}u'(x)$,
which is $n (x^2 + x + 2)^{n-1}(2x + 1)$.</p>
<p>The idea behind the third question is to see if we can work out the same sort
of idea, but without starting with a helper function. (It's perfectly fine to
always use helper functions to use the chain rule - that's not a problem at
all). The derivative of $(5 + x^3)^{-3}$ will be $-3(5 +
x^3)^{-4}(3x^2)$. If we want to see the use of helper functions, call $v(x) = 5 + x^3$, so that we are computing the derivative of $v^{-3}$. The
derivative will be $-3v^{-4}v'$, which is exactly what we have above.</p>
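<p>The same difference-quotient trick (again my addition, not the original text) confirms the derivative of $(5 + x^3)^{-3}$:</p>

```python
def f(x):
    return (5 + x**3) ** -3

def f_prime(x):
    # chain rule: -3 (5 + x^3)^{-4} * (3x^2)
    return -3 * (5 + x**3) ** -4 * (3 * x**2)

def numeric_derivative(g, x, h=1e-6):
    # central difference quotient approximates g'(x)
    return (g(x + h) - g(x - h)) / (2 * h)

for x in [0.5, 1.0, 2.0]:
    assert abs(f_prime(x) - numeric_derivative(f, x)) < 1e-6
```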
<p>I look forward to seeing some of you on Monday, and happy studying!</p>https://davidlowryduda.com/math-90-week-5Wed, 03 Oct 2012 03:14:15 +0000Math 90 - Week 4https://davidlowryduda.com/math-90-week-4David Lowry-Duda<p>It was quiz-day!</p>
<p>The class did pretty well on the quiz. I wrote the quiz, and I'm pleased with
the skill-level demonstrated. The average was about a 77%, and the median was
an 80%. (For stat-witty folk: the mean falling below the median means the
lower scores pulled the average down.)</p>
<p>We had three questions in recitation.</p>
<ol type="I">
<li>Let's prove the power rule!
<ol type="1">
<li>Show that $z^n - x^n=$ $(z-x)(z^{n-1} + z^{n-2}x + \ldots + z^2x^{n-3} + zx^{n-2} + x^{n-1})$</li>
<li>Compute $\lim_{z \to x} \dfrac{z^n - x^n}{z - x}$</li>
<li>If $f(x) = x^n$, what is $f'(x)$?</li>
</ol>
</li>
<li>We had a classic problem that's based on the following question: A man starts to climb a path on a mountain at noon and ends at 8pm. He sleeps overnight on the mountain. At noon the next day, he climbs down the same path, reaching the bottom at 8pm. Show that there is a time of day where the man is at the same spot on the mountain.</li>
<li>A continuity question that was taken from the homework.</li>
</ol>
<p>Let's look at the solutions to the first two -</p>
<h4>Question 1</h4>
<p>The first part of the first problem gave a lot of heartache, so in recitation I
said to assume it and move on. But let's look at the solution anyway. We want
to show that $(z^n - x^n) = (z-x)(z^{n-1} + z^{n-2}x + \ldots + zx^{n-2}
+ x^{n-1})$. From what I could tell, the primary source of confusion came from
the ... bit. To understand that, let's consider a few quick examples.</p>
<ul>
<li>$1, 2, 3, \ldots, 7, 8, 9$ is the same as $1, 2, 3, 4, 5, 6, 7, 8, 9$</li>
<li>$1, 2, 3, \ldots, n-2, n-1, n$ is the same idea: all numbers from $1$ to $n$ in order</li>
<li>$x^7 + x^6 + \ldots + x^2 + x^1$ is the same as $x^7 + x^6 + x^5 + x^4 + x^3 + x^2 + x^1$</li>
<li>$x^{n-1} + x^{n-2} + \ldots + x^3 + x^2 + x^1$ is the same idea: the sum of the powers of $x$ from the $(n-1)$st power down to the $1$st power</li>
</ul>
<p>The idea here is to distribute out on the right. So $(z-x)(z^{n-1} +
z^{n-2}x + \ldots + zx^{n-2} + x^{n-1}) = $ $(z^n + z^{n-1}x + \ldots +
z^2x^{n-2} + zx^{n-1})$ $- (z^{n-1}x + z^{n-2}x^2 + \ldots + zx^{n-1} +
x^n)$. Every positive term of the first group except $z^n$ cancels with a
matching negative term of the second group, and every negative term except
$-x^n$ is cancelled in turn. So we are left with $z^n - x^n$.</p>
<p>We can use this to compute the limit. Since $(z^n - x^n) = (z-x)(z^{n-1}
+ z^{n-2}x + \ldots + zx^{n-2} + x^{n-1})$, we can say that $\dfrac{(z^n
- x^n)}{(z-x)} = \dfrac{(z-x)}{z-x}(z^{n-1} + z^{n-2}x + \ldots + zx^{n-2} +
x^{n-1})$. Then in the limit, we can just let $z \to x$, and we have
$n$ copies of $x^{n-1}$. So the limit is $n x^{n-1}$.</p>
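<p>The factorization itself is easy to spot-check with a few lines of Python (not from the original post); with integer inputs the identity holds exactly:</p>

```python
def factored_form(z, x, n):
    # (z - x) * (z^{n-1} + z^{n-2} x + ... + z x^{n-2} + x^{n-1})
    return (z - x) * sum(z ** (n - 1 - k) * x ** k for k in range(n))

for n in range(1, 10):
    for z in range(-3, 4):
        for x in range(-3, 4):
            assert factored_form(z, x, n) == z ** n - x ** n

# and at z = x, the long factor collapses to n copies of x^{n-1}
x, n = 5, 4
assert sum(x ** (n - 1 - k) * x ** k for k in range(n)) == n * x ** (n - 1)
```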
<p>Finally, if we recall that $\lim_{h \to 0} \dfrac{f(x + h) - f(x)}{h}$ and
$\lim_{z \to x} \dfrac{f(z) - f(x)}{z-x}$ are both definitions
(equivalent definitions) of the derivative at the point $x$, then we see
that the calculation we've just done is, in fact, the derivative.</p>
<p>This is known as the <strong>Power Rule</strong> and is a nice simplification.</p>
<h4>Question 2</h4>
<p>This question is an Intermediate Value Theorem question. If we let $f(t)$
be the position on the trail on the first day, so that $f(\text{noon}) =
0$, and if we let $g(t)$ be the position on the second day, so that
$g(\text{noon}) = \text{top of mountain}$, then we can consider the (not
at all immediately obvious) function $h(t) = g(t) - f(t)$. This is a
difference of two continuous functions, so it is continuous. At noon, it's
positive. At 8pm, it's negative. By the intermediate value theorem, since
$0$ is between negative and positive numbers, we know there is a $t_0$ such that $h(t_0) = 0$. This is precisely the statement that the
position is the same. So regardless of how the man climbs or descends the
mountain, there will always be a time of day when he is at the same
place.</p>
<p>This is another example of the idea that the great difficulty of this sort of math isn't the calculation and arithmetic aspects, but having the knowledge to see when it can be applied.</p>
<h4>Question 3</h4>
<p>There were a few homework questions that were a lot like this question, and
this was perhaps the most boring of the questions. If there is anything
unclear, let me know and I'll expand this problem.
So now, perhaps the part that you were waiting for - let's go over the quiz!</p>
<p>The quiz had two problems:</p>
<ol>
<li>Find with justification the following limit: $\lim_{x \to \infty} \dfrac{2\sqrt{x} - x^{-1}}{3x - 7}$</li>
<li>At $t$ seconds after liftoff, the height of a rocket is $2t^2$ feet. By explicitly using the definition of a derivative, determine how fast the rocket is climbing $5$ seconds after liftoff. In addition, no credit will be given for using only the power rule.</li>
</ol>
<h4>Solutions to the Quiz</h4>
<p>There were many ways to approach the first problem. A basic result on
horizontal asymptotes states that for a ratio of power functions, if the power
on the bottom is bigger than the power on the top (here $1$ on the bottom
versus $\frac{1}{2}$ on top), then there is a horizontal asymptote at $y = 0$.
This means that the limit as $x \to \infty$ is $0$.</p>
<p>The most common solution was to divide the top and the bottom by $x$.
This would give $\dfrac{\frac{2 \sqrt x}{x}- x^{-2}}{3 - \frac{7}{x}}$.
Now, in looking at the limit as $x \to \infty$, we get something that
looks like $\dfrac{0 - 0}{3 - 0}$, which we can safely evaluate as $0$.</p>
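<p>A numerical check of the quiz limit (my addition, not part of the original solutions): evaluating the expression at large $x$ shows it shrinking toward $0$:</p>

```python
def f(x):
    # the quiz expression (2*sqrt(x) - 1/x) / (3x - 7)
    return (2 * x ** 0.5 - x ** -1) / (3 * x - 7)

# the limit as x -> infinity is 0, so values shrink as x grows
assert abs(f(10 ** 6)) < 1e-3
assert abs(f(10 ** 12)) < 1e-5
assert abs(f(10 ** 12)) < abs(f(10 ** 6))
```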
<p>These were the two acceptable solution routes. Let's talk about some of the errors.</p>
<p>One error came up that worried me, because it's not a skill that we're going to
spend any classtime on. $\dfrac{2 \sqrt x - x^{-1}}{3x - 7} \neq \dfrac{2
\sqrt x}{3x - 7 - x}$. <strong>That is not how that works</strong>. The thing
that people must have been thinking of is that $x^{-1} = \frac{1}{x}$, so
that if you had something like $\dfrac{y^2 x^{-2}}{z^2}$, then you could
rewrite it like $\dfrac{y^2}{x^2z^2}$. But all this means is that you
could rewrite $\dfrac{2 \sqrt x - x^{-1}}{3x - 7}$ as $\dfrac{2
\sqrt x - \frac{1}{x}}{3x - 7}$. And that is that.</p>
<p>Other than that, there were a few algebra errors here and there, and that
happens sometimes. For grading, my standard rubric was: a point for the right
answer, up to 2 points for valid justification, and up to 2 points for correct
algebraic manipulations towards the solution.</p>
<p>The second problem had a lot less variability:</p>
<p>This is a derivative problem. The derivative of $f(t)$ at the point
$t$ is $\lim_{h \to 0} \dfrac{f(t+h) - f(t)}{h}$. We can plug in
$t = 5$ either before or after; I will plug it in after. $\lim_{h \to 0}
\dfrac{2(t+h)^2 - 2t^2}{h} = \lim_{h \to 0} \dfrac{2t^2 + 4th +
2h^2 - 2t^2}{h} = $ $\lim_{h \to 0} \dfrac{(4t + 2h)h}{h} = 4t$.</p>
<p>So at $t = 5$, we have that the speed of the rocket is $4(5) = 20$
feet per second.</p>
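<p>Here is a small numerical version of the same computation (not in the original quiz solutions): shrinking $h$ in the difference quotient drives the estimate toward $20$ feet per second:</p>

```python
def height(t):
    # height in feet at t seconds after liftoff
    return 2 * t ** 2

def difference_quotient(t, h):
    return (height(t + h) - height(t)) / h

# exact algebra gives 4t + 2h, so at t = 5 the estimate is 20 + 2h
for h in [0.1, 0.001, 1e-6]:
    assert abs(difference_quotient(5, h) - 20) < 3 * h
```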
<p>Common errors were not remembering that the limit was as $h \to 0$ or
taking $h \to 5$. This surprised me, as we evaluated 2 derivatives in
recitation prior to the quiz, and we talked explicitly about the definition of
the derivative. The only systematic error that worried me was thinking that
$f(t + h) = 2t^2 + h$. This indicates a certain lack of fluency with the
idea of a function - if this describes you, I highly recommend coming and
talking to either Tom or me during our office hours. This is a very important
concept to have nailed down.</p>
<p>My standard grading rubric was as follows: 1 point for the right answer, 1
point for knowing and explicitly using the definition of a derivative
(including the proper limit), 1 point for properly plugging in the right things
into the function (i.e. $f(t + h) = 2(t + h)^2$ and so on), 1 point for
doing the necessary algebraic manipulations to evaluate the limit, and 1 point
for evaluating the limit.</p>
<p>I'd like to remind you that it's much easier for me to grade your work if you
write your work cleanly and by clearly indicating your logic.</p>
<p>I'll see you in recitation next week. Little bonus - Tom may be observing
recitation for some or all of it too. Yay!</p>https://davidlowryduda.com/math-90-week-4Fri, 28 Sep 2012 03:14:15 +0000Math 90 - Week 3https://davidlowryduda.com/math-90-week-3David Lowry-Duda<p>First and foremost: There is a quiz next week during recitation! <em>What is it
over?</em> you might ask. Any material from any of the first three homework
sets (i.e. all material covered in lecture up to Tuesday September 25th) will
be fair game.</p>
<p>Someone asked me in office hours this week: "Why do you end your posts with
something about a 'fold'?" I mention a 'fold' to indicate that there is more to
the post, and that you should click on the name of the post or on the
not-so-subtle (more...) at the bottom for the rest. Fittingly, the rest is
after the fold:<sup>1</sup>
<span class="aside"><sup>1</sup>This made more sense pre site-reorg.</span></p>
<p>We had three questions in recitation.</p>
<ol type="I">
<li>Compute the following limits:
<ol type="1">
<li>Given that $\frac{1}{2} - \frac{x}{6} \leq \frac{e^{-x} + x - 1}{x^2} \leq \frac{1}{2}$ for small $x$, evaluate $\lim_{x \to 0} \dfrac{e^{-x} + x - 1}{x^2}$</li>
<li>$\lim_{x \to 0} \dfrac{\sin x}{x} $ with justification.</li>
</ol>
</li>
<li>Consider the function $f(x) = \dfrac{2x^2}{\sqrt{4x^2 + 7} - \sqrt{7}}$.
<ol type="1">
<li>What is the domain of $f(x)$?</li>
<li>What is $\lim_{x \to 0} f(x)$?</li>
<li>Can $f(x)$ be "extended" to a continuous function of all of the real line?</li>
</ol>
</li>
<li>Cookie dough is poured into Purgatory Chasm at a rate of $3t^2$ gallons after $t$ hours.
<ol type="1">
<li>What is the average rate of cookie dough pumped per hour between $t = 2$ and $t = 4$ hours?</li>
<li>What is the average between $t = 2$ and $t = 3$ hours?</li>
<li>What is the "instantaneous" rate of flow at $t = 2$ hours?</li>
</ol>
</li>
</ol>
<p>It seemed that a lot of people had a bit more trouble with this than last week,
which is fine! And one of these is a whole lot harder than the other two. Let's
look at the solutions:</p>
<h4>Question 1</h4>
<p>This question is designed around the sandwich theorem. This says that if $g(x) < f(x) < h(x)$ and $\lim_{x \to c} g(x) = \lim_{x \to c} h(x)
= L$ (note both the lower and upper bounds), then we must also have that $\lim_{x \to c} f(x) = L$. It's literally 'sandwiched' between the lower and
upper bounds, so it has nowhere to go.</p>
<p>So to do the first question, we see that it's been delivered to us. Noting that
the limit of the lower and upper functions are both $\frac{1}{2}$, we
conclude by the sandwich theorem that our limit is also $\frac{1}{2}$.</p>
<p>As for the second part, it's not at all immediately obvious that we use the
sandwich theorem to do this. In fact, it's not at all clear how to do this at
all. So I helped set it up. Let's look at the setup now, too.</p>
<figure class="center shadowed">
<img src="/wp-content/uploads/2012/09/sinxoverx.png" width="500" height="447" />
</figure>
<p>The picture of what's going on is above, and it builds off of what we know from
our unit circle. We use the sandwich theorem on the <em>areas</em> at the
right. In particular, the area of the brown triangle is $\frac{1}{2} \cos
\theta \sin \theta$. The area of the pie-piece containing both the brown and
the green parts is $\frac{1}{2} \theta$ (do you remember why?). The area
of the large triangle, including the brown, green, and blue is $\frac{1}{2} \tan \theta$.</p>
<p>Thus for small $\theta$, we have that $\sin \theta \cos \theta <
\theta < \tan \theta$. I would like to divide through by $\sin
\theta$, but a complication arises: what if it's negative?</p>
<p>So we first compute $\lim_{\theta \to 0^+} \dfrac{\sin \theta}{\theta}$,
where since I've restricted to positive and small $\theta$, I know that
$\sin \theta$ is positive. Then I know that</p>
<p>$\cos \theta < \dfrac{\theta}{\sin \theta} < \dfrac{\tan
\theta}{\sin \theta} = \dfrac{1}{\cos \theta}$</p>
<p>But now we know that the limit of the left and right as $\theta \to 0^+$
is $1$, and so we know that $\lim_{\theta \to 0^+}
\dfrac{\theta}{\sin \theta} = 1$; taking reciprocals, $\lim_{\theta \to 0^+}
\dfrac{\sin \theta}{\theta} = 1$ as well.</p>
<p>Computing the left-hand limit is similar. For small negative $\theta$,
every quantity in $\sin \theta \cos \theta < \theta < \tan \theta$ changes
sign, so the inequality reverses; dividing through by the now-negative
$\sin \theta$ flips it back to $\cos \theta < \dfrac{\theta}{\sin \theta} <
\dfrac{1}{\cos \theta}$. But then the same reasoning holds!</p>
<p>So we can conclude that $\lim_{\theta \to 0} \dfrac{\sin \theta}{\theta} = 1$.
I'd like to point out that this is needed to understand the derivative of sine
(and cosine, really). Some people were tempted to "use L'Hopital" (if you don't
know what that means, don't worry - it's a more advanced calculus skill that we
haven't gone anywhere near yet), but this requires knowing the derivative of
sine, which requires this limit. In other words, that's circular! (get it?
sine, circular?) So this is somehow good.</p>
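<p>The sandwich inequality is easy to probe numerically (a check of my own, not from the original notes); note it holds for small $\theta$ of either sign:</p>

```python
import math

for theta in [0.5, 0.1, -0.1, -0.5]:
    ratio = theta / math.sin(theta)
    # the sandwich: cos(theta) < theta/sin(theta) < 1/cos(theta)
    assert math.cos(theta) < ratio < 1 / math.cos(theta)

# both bounds go to 1, so sin(theta)/theta -> 1 as well
assert abs(math.sin(1e-8) / 1e-8 - 1) < 1e-10
```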
<h4>Question 2</h4>
<p>There are many nice elements to this question. What is the domain? We see that
the $x$ terms are squared within the square roots, so there will never be any
negatives under the radicals. But we need to worry about the denominator of the
fraction being zero, so we need $x \neq 0$. This gives us our domain: all
$x$ such that $x \neq 0$.</p>
<p>To compute the limit, we "rationalize the denominator." In other words, we look
at $\dfrac{2x^2}{\sqrt{4x^2 + 7} - \sqrt 7} \cdot \dfrac{\sqrt{4x^2 + 7}
+ \sqrt 7}{\sqrt{4x^2 + 7} + \sqrt 7}$. With this, we can compute the limit
using the "obvious methods."</p>
<p>We get $\dfrac{2x^2 (\sqrt{4x^2 + 7} + \sqrt 7)}{4x^2} = \dfrac{\sqrt{4x^2
+ 7} + \sqrt{7}}{2}$. So when we take the limit as $x \to 0$, we get
$\dfrac{\sqrt 7 + \sqrt 7}{2} = \sqrt{7}$.</p>
<p>Now, when we ask "can we extend this function to a continuous function," we are
asking if we can 'plug any holes.' Our function has a hole at $x = 0$,
but our function also has a limit at $x = 0$. Recall that a function is
continuous at a point if two things hold: the function has a value at that
point, and the limit of the function at that point exists and is equal to that
value. So if we define our function to be $\sqrt 7$ when $x = 0$,
then we will have 'plugged our hole' and thus will have a continuous function
of $x$.</p>
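<p>We can watch the function approach its limit numerically (this check is my own addition): values of $f$ near $x = 0$ hug $\sqrt 7$ from both sides of the hole:</p>

```python
import math

def f(x):
    # the function from the problem; undefined at x = 0
    return 2 * x ** 2 / (math.sqrt(4 * x ** 2 + 7) - math.sqrt(7))

target = math.sqrt(7)
for x in [0.1, -0.1, 1e-3, -1e-3]:
    assert abs(f(x) - target) < 0.02
# the closer x is to 0, the closer f(x) is to sqrt(7)
assert abs(f(1e-3) - target) < 1e-5
```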
<h4>Question 3</h4>
<p>There were a few homework questions that were a lot like this question, and
this was perhaps the most boring of the questions. If there is anything
unclear, let me know and I'll expand this problem.</p>
<p>Remember that we have a quiz next week! And also feel free to come to my office
hours, or those of Tom.</p>https://davidlowryduda.com/math-90-week-3Wed, 19 Sep 2012 03:14:15 +0000Math 90 - Week 2https://davidlowryduda.com/math-90-week-2David Lowry-Duda<p>Yes, although it's the second week of class, this is after the first recitation.</p>
<p>In addition, I can now announce my office hours. They are from 7-8PM on Monday
in Kassar House room 105 and 12:30-1:30PM on Wednesday in my office (number 18,
my name is on the door) in the basement of the Kassar House. To get in the
building on Monday, you should look at the <a
href="http://www.math.brown.edu/mrc/">MRC page</a>, and in particular at this
(shaky youtube) <a href="http://www.youtube.com/watch?v=WKY4rS48RZ4">video</a>.</p>
<p>After my evening recitation, I'll post up the problems I gave out in class and
their solutions. Please feel free to ask any questions you want here. The
details are included after the fold:</p>
<p>In class today, I asked the following questions:</p>
<ol type="I">
<li>In 1960, the Australian jackalope population was $200000$, but has reduced by a third every $4$ years.
<ol type="1">
<li>How many jackalopes are there $t$ years after the year 1960?</li>
<li>How many jackalopes are there now?</li>
<li>When will there only be a single, very lonely jackalope?</li>
</ol>
</li>
<li>Simplify the following:
<ol type="1">
<li>$\log_2 \left( e^{\ln (\ln e^4)}\right)$</li>
<li>$\ln \left( e^{\log 100 - \log 1000} \right)$</li>
<li>$3^{\frac{\ln 1138}{\ln 3}}$ (it may be hard to tell, but that's an exponent)</li>
</ol>
</li>
<li>Calculate the inverse functions of:
<ol type="1">
<li>$y = \dfrac{\sqrt x}{2 \sqrt x - 4}$</li>
<li>$f(x) = \ln (x - 2) - \ln (x + 5)$</li>
</ol>
</li>
</ol>
<p>And you all did great! Let's go over the solutions, one by one.</p>
<h4>Question I</h4>
<p>This is a classic form of exponential growth and decay. We expect the answer to
be of the form $p(t) = ab^{rt}$ for some $a,b,r$. There are a few different
ways of going about this, but we'll just focus on one. We start with a
population of $200000$ jackalopes, but after $4$ years, only two-thirds of the
original population remains. If our time-measurement unit were groups of four
years, then we might say something like $p(t) = 200000 \left( \frac{2}{3}
\right)^{t}$. But we want to be more precise - our time-measurement unit will
be years. So instead we use $p(t) = 200000 \left( \frac{2}{3} \right)^{t/4}$.</p>
<p>Does this make sense? Yes - after $4$ years, we have only $2/3$ of the
population remaining. After another four years, our population decreases by
another third. So it is of the form we want and passes our intuition check.</p>
<p>The two remaining parts are relatively simple from the formula $p(t) = 200000
\left( \frac{2}{3} \right)^{t/4}$. Here, $t$ represents <em>years since
1960</em>, so when we ask how many jackalopes there are now, we want to know
$p(52) = 200000 \left( \frac{2}{3} \right) ^{52/4} \approx 1028$ jackalopes.
When we ask when there will be only one jackalope, we want to solve for $t$ in
$1 = 200000 \left( \frac{2}{3} \right) ^ {t/4}$.</p>
<p>To do this, since we have our $t$ in the exponent, we divide by $200000$, take
the natural log of both sides, and simplify. $\ln (1/200000) = (t/4) \ln (2/3)
\implies t = 4 \ln (1/200000) / \ln (2/3) \approx 120$ years. Poor jackalope.</p>
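<p>All three answers are a two-line computation in Python (not part of the original post, but a useful habit for checking arithmetic):</p>

```python
import math

def p(t):
    # jackalope population t years after 1960
    return 200000 * (2 / 3) ** (t / 4)

assert round(p(52)) == 1028          # population "now" (in 2012)

t_lonely = 4 * math.log(1 / 200000) / math.log(2 / 3)
assert abs(t_lonely - 120) < 1       # about 120 years after 1960
assert abs(p(t_lonely) - 1) < 1e-9   # exactly one jackalope then
```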
<h4>Question II</h4>
<p>This is the test of our understanding of logarithmic and exponential arithmetic
rules (this may feel useless, but it's something that will come up a lot in
this course, so get these rules down). In particular, we're going to use that
the exponential and log functions are inverses of each other (so $e^{\ln x} = x$
and $\ln e^x = x$ for $x$ in appropriate domains) a lot.</p>
<p>So let's look at the first one: $\log_2 \left( e^{\ln (\ln e^4)}\right)$.
First, note that $\ln e^4 = 4$, and $e^{\ln 4} = 4$. So we are left with
$\log_2 4$, which is $2$.</p>
<p>The second one is both trickier and easier, because it is tempting to use
many more rules than are necessary. One might proceed naively by writing $\ln
\left( e^{\log 100 - \log 1000} \right)$ $= \ln e^{\log (100/1000)}$ $= \log
(100/1000) = \log (1/10) = \log (10^{-1}) = -1$. And that isn't wrong - so
that's great. Or you might also realize that $\log 100 = 2$ and that $\log 1000
= 3$, so we are asking $\ln e^{2 - 3} = -1$.</p>
<p>The last is a bit different. We're going to do it in two different ways. We
might try to cancel out the $\ln 3$ in the denominator of the exponent. One way
to do this is to write $3$ as $e^{\ln 3}$ (this is still using that the
exponential and logs are inverses, but this time we are going from "simple" to
"complicated" in a sense). Then $(3)^{\ln 1138/\ln 3} = e^{(\ln 3)(\ln 1138 /
\ln 3)} = e^{\ln 1138} = 1138$.</p>
<p>Another way to do this is to remember the <a
href="http://en.wikipedia.org/wiki/Logarithm#Change_of_base">change-of-base
formula</a>, which states that $\ln 1138/\ln 3 = \log_3 1138$, so that we can
use that $3^x$ and $\log_3 x$ are inverses to conclude that $3^{\log_3 1138} =
1138$.</p>
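<p>All three simplifications can be verified with the standard library's <code>math</code> module (a check I've added; floating point means we compare up to small tolerances rather than exactly):</p>

```python
import math

# log_2( e^{ln(ln e^4)} ): inner ln e^4 = 4, then e^{ln 4} = 4, and log_2 4 = 2
v1 = math.log2(math.exp(math.log(math.log(math.exp(4)))))
assert abs(v1 - 2) < 1e-9

# ln( e^{log 100 - log 1000} ) with base-10 logs: the exponent is 2 - 3 = -1
v2 = math.log(math.exp(math.log10(100) - math.log10(1000)))
assert abs(v2 - (-1)) < 1e-9

# 3^{ln 1138 / ln 3} = 3^{log_3 1138} = 1138
v3 = 3 ** (math.log(1138) / math.log(3))
assert abs(v3 - 1138) < 1e-6
```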
<h4>Question III</h4>
<p>This was similar to the problem on the homework that seemed to give the most
people the most trouble. Let's proceed naively: $y = \dfrac{\sqrt x}{2 \sqrt x
- 4}$, so $(2 \sqrt x - 4) y = \sqrt x$. Distributing, we get $2y \sqrt x - 4y
- \sqrt x = 0$. Factor out $\sqrt x$, we get $\sqrt x (2y - 1) = 4y$. Divide to
get $\sqrt{x} = \dfrac{4y}{2y-1}$, and we can square to find $x =
\dfrac{16y^2}{(2y-1)^2}$. And this "works," sort of. Why don't we specify the
domain and range, as in the homework problem? What is the domain of the
original function? $x \geq 0$ is necessary, as real square roots can't take in
negative numbers. And $x \neq 4$, as the denominator can't be zero. But
everything else is fine.</p>
<p>What about its range?</p>
<p>To do this, we should graph the function. The graph looks like the graph
below<sup>1</sup>
<span class="aside"><sup>1</sup>Unfortunately, this image was lost.</span></p>
<p>It's very similar to a rational function. But there is a
key difference: the domain doesn't extend to any negative value of $x$. There
is a horizontal asymptote at $y = \frac{1}{2}$, and every value $y >
\frac{1}{2}$ is in the range. But the left side stops at height $0$, and so
the range is $(-\infty,0]\cup\left(\frac{1}{2},\infty\right)$.</p>
<p>The domain and range for the inverse are just the domain and range of the
original function, but flipped. So there we are.</p>
<p>For the second, we won't find the domain and range here. It's a big pain. So
let's consider $y = \ln(x-2) - \ln(x+5)= \ln \left( \dfrac{x-2}{x+5} \right)$.
How do we get rid of the $\ln$? We exponentiate!</p>
<p>$e^y = \dfrac{x-2}{x+5}$, so $(x+5)e^y = x-2$. Distributing and bringing it all
to one side: $xe^y + 5e^y - x + 2 = 0$, so $x(e^y - 1) = -2 - 5e^y$. Dividing,
we conclude: $x = \dfrac{-2-5e^y}{e^y - 1}$</p>
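<p>A composition check (my addition, not from the original post) confirms both inverse formulas: feeding $f$'s output into the candidate inverse recovers the input, and likewise for the logarithmic example:</p>

```python
import math

def f(x):
    return math.sqrt(x) / (2 * math.sqrt(x) - 4)

def f_inv(y):
    # the inverse found above: x = 16 y^2 / (2y - 1)^2
    return 16 * y ** 2 / (2 * y - 1) ** 2

def g(x):
    return math.log(x - 2) - math.log(x + 5)

def g_inv(y):
    # the inverse found above: x = (-2 - 5 e^y) / (e^y - 1)
    return (-2 - 5 * math.exp(y)) / (math.exp(y) - 1)

for x in [9.0, 16.0, 25.0]:      # domain points with sqrt(x) != 2
    assert abs(f_inv(f(x)) - x) < 1e-9
for x in [3.0, 10.0, 100.0]:     # domain points with x > 2
    assert abs(g_inv(g(x)) - x) < 1e-9
```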
<p>And there we have it.</p>
<p>As (will become, but which I will pretend) is usual, if you have any questions
on this week's homework, please feel free to comment below. And if you haven't
commented on the last week's post, please do so.</p>https://davidlowryduda.com/math-90-week-2Tue, 11 Sep 2012 03:14:15 +0000Math 90 - Week 1https://davidlowryduda.com/math-90-week-1David Lowry-Duda<p>This is a post related to how I plan to conduct my [Math 90] TA sessions. I
would like to use this space as a supplement to the class work. Each Tuesday
night, after my recitations, I will post my worksheets and their solutions
under a new page. That page will serve as a comment-forum for any questions
students may have over that week. I will answer any comments posted here
periodically throughout the week. It is also possible I may post additional,
supplementary materials here if I feel it necessary.</p>
<blockquote>
<p><strong>Now I ask that my students please do the following:</strong></p>
<p>Below, you'll see a comment form. Write a comment using your name (this will
be displayed by each comment you make), your email address (this is not
displayed publicly), and a comment. Write anything you'd like. If you need a
prompt, write what you want to get out of this course, or ask me a question.</p>
<p>Alternatively, write a comment on this post!</p>
</blockquote>
<p>I look forward to seeing all of you in class. Please note that it will always
be easy to check out my [Math 90] <a title="Math 90"
href="/?p=710">posthead </a> by clicking on Math
90 in my pages menu at the top left, or by remembering that link. There will be
links to the different pages from the posthead, once there are different pages
to link to.</p>
<p>Link to Form<sup>1</sup>
<span class="aside"><sup>1</sup>This link has been removed and no longer exists.</span></p>https://davidlowryduda.com/math-90-week-1Tue, 04 Sep 2012 03:14:15 +0000Math 90 - Fall 2012https://davidlowryduda.com/math-90David Lowry-Duda<p>This is the fall 2012 Math 90 Introductory Calculus I posthead for David
Lowry's TA sections (which should be those in section 1 with Hulse).
<strong>This is not the main site for the whole course</strong> (which can be
found at <a
href="https://sites.google.com/a/brown.edu/fa12-math0090/">https://sites.google.com/a/brown.edu/fa12-math0090/</a>),
but it will contain helpful bits and is a good venue through which you can ask
questions.</p>
<p>In particular, the posts that have been put up so far can be found under the <a
href="http://mixedmath.wordpress.com/category/brown-university/math-90/">Math
90 </a>tag category. <strong>If this is your first time visiting, and you are
one of my students, please go to the <a title="Math 90: Week 1"
href="http://mixedmath.wordpress.com/2012/09/04/math-90-week-1/">Math 90: Week
1</a> page and leave a comment.</strong></p>
<p>Here are links to the pages themselves:</p>
<ul>
<li><a title="Math 90: Week 1" href="/math-90-week-1/">Week 1</a></li>
<li><a title="Math 90: Week 2" href="/math-90-week-2/">Week 2</a></li>
<li><a title="Math 90: Week 3" href="/math-90-week-3/">Week 3</a></li>
<li><a title="Math 90: Week 4" href="/math-90-week-4/">Week 4</a></li>
<li><a title="Math 90: Week 5" href="/math-90-week-5/">Week 5</a></li>
<li><a title="Math 90: Week 7" href="/math-90-week-7/">Week 7</a> (including test solutions)</li>
<li><a title="Math 90: Week 8" href="/math-90-week-8/">Week 8</a> (and the separate <a title="Math 90: Week 8 Quiz" href="/math-90-week-8-quiz/">quiz solutions</a>)</li>
<li><a title="Math 90: Week 10" href="/math-90-week-10/">Week 10</a></li>
<li><a title="Math 90: Week 11 and Midterm Solutions"
href="/math-90-week-11-and-midterm-solutions/">Week 11 </a>(including test solutions)</li>
<li><a title="Math 90: Concluding Remarks" href="/math-90-concluding-remarks/">Concluding Remarks</a></li>
</ul>
<p>And now, the administrative details (the rest can be found on the <a
href="https://sites.google.com/a/brown.edu/fa12-math0090/">main course
website</a>).</p>
<blockquote>
<p>TA Name: David Lowry</p>
<p>email address: djlowry [at] math [dot] brown.edu (although please only use email for private communication - math questions can be asked here, and others can benefit from their openness).</p>
<p>Instructor Name: Thomas Hulse</p>
</blockquote>https://davidlowryduda.com/math-90Tue, 28 Aug 2012 03:14:15 +0000The danger of confusing cosets and numbershttps://davidlowryduda.com/reviewing-goldbachDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/reviewing-goldbachFri, 24 Aug 2012 03:14:15 +0000An elementary proof of when 2 is a quadratic residuehttps://davidlowryduda.com/an-elementary-proof-of-when-2-is-a-quadratic-residueDavid Lowry-Duda<p>This has been a week of asking and answering questions from emails, as far as I
can see. I want to respond to two questions publicly, though, as they've some
interesting value. So this is the first of a pair of blog posts. One is a short
and sweet elementary proof of when $2$ is a quadratic residue of a prime,
responding to Moschops's comments on an <a title="Dancing ones PhD"
href="/?p=667">earlier blog post</a>. But to continue
my theme of some good and some bad, I'd also like to consider the latest
"proof" of the Goldbach conjecture (which I'll talk about in the next post
tomorrow).</p>
<p>In the MSE question <a
href="http://math.stackexchange.com/questions/180002/legendre-symbol-second-supplementary-law">Legendre
Symbol Second Supplementary Law</a> from the user <a
href="http://math.stackexchange.com/users/31462/yola">Yola</a>, I gave a <a
href="http://math.stackexchange.com/a/180022/9754">completely elementary
answer</a>. The question is to evaluate $\left( \frac{2}{p} \right)$, where we
are considering the Legendre Symbol. Moschops has asked me to clarify some of
the steps, so let's give a complete working.</p>
<p>We are assuming that $p$ is an odd prime. Let $s = (p-1)/2$. Now consider the
$s$ different equations</p>
<p style="text-align: center;">$1 = (-1)(-1)$
$2 = (2)(-1)^{2}$
$3 = (-3)(-1)^3$
$\dots$
$s = (\pm s)(-1)^s$</p>
<p>where we choose the sign on $s$ so that it multiplies with $(-1)^s$ to give
$s$. We want to multiply these $s$ equations together. The left side is very
easy to understand, and we just get $s!$. The right is a bit harder, as we have
both numbers and signs. Let us first deal with the numbers, ignoring the
$(-1)^{\text{stuff}}$ terms. In particular, we have a positive $2, 4, 6, \dots,
s $ and some negative odd numbers. In fact, if we think a bit harder, then
because $s = (p-1)/2$, we'll see that we have exactly half of the even numbers
less than $p$.</p>
<p>Here's the big trick. Note also that $2s \equiv -1 \mod p$ (so the largest even
number less than $p$ is the same as the smallest negative odd number), $2(s-1)
\equiv -3$ (the second largest even number less than $p$), and so on. Then we
see that our odd negative numbers are the same as the <em>other</em> half of
the even numbers less than $p$. This is the big trick - the slick idea. This
means that $(-1)(2)(-3)(4) \dots (s) \equiv (2)(4)(6) \dots (p-3)(p-1) \equiv
2^s s! \mod p$ (to see this last equivalence, imagine that we associate each
term of the factorial with one of the powers of two, so that we only get even
factorial terms).</p>
<p>So we have evaluated the "numbers" portion of the product of the right hand
side. We still need to consider the "signs" portion, i.e. the product of the
$(-1)^{\text{stuff}}$ terms. This is not so bad, as $(-1)^{1 + 2 + \dots + s} =
(-1)^{s(s+1)/2}$. So the complete product of the RHS terms is $2^s s!
(-1)^{s(s+1)/2}$ and the product of the LHS is $s!$ (all done mod $p$).</p>
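<p>At this point a quick computational sanity check is reassuring (this snippet is mine, not part of the original argument): by Euler's criterion, $2^{(p-1)/2} \bmod p$ tells us whether $2$ is a quadratic residue, and we can compare it against the sign $(-1)^{(p^2-1)/8}$ for small odd primes:</p>

```python
def two_is_qr(p):
    # Euler's criterion: 2 is a quadratic residue mod an odd prime p
    # exactly when 2^{(p-1)/2} = 1 (mod p)
    return pow(2, (p - 1) // 2, p) == 1

for p in [3, 5, 7, 11, 13, 17, 19, 23, 29, 31]:
    # the rule being derived: (2/p) = (-1)^{(p^2 - 1)/8}
    predicted = ((p * p - 1) // 8) % 2 == 0
    assert two_is_qr(p) == predicted
```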
<p>Thus we are left with $2^s \equiv (-1)^{s(s+1)/2} = (-1)^{(p^2-1)/8}$, which is
the standard rule. Many thanks to Moschops for the question - I hope this
helps. At least now we have a clear comment thread for questions.</p>https://davidlowryduda.com/an-elementary-proof-of-when-2-is-a-quadratic-residueThu, 23 Aug 2012 03:14:15 +0000Dancing ones PhDhttps://davidlowryduda.com/dancing-ones-phdDavid Lowry-Duda<p>In my dealings with the internet this week, I am reminded of a quote by <a
href="http://en.wikipedia.org/wiki/William_Arthur_Ward">William Arthur
Ward</a>, the professional inspirator:</p>
<blockquote>We can throw stones, complain about them, stumble on them, climb
over them, or build with them.</blockquote>
<p>In particular, I have been notified by two different math-related things.
Firstly, most importantly and more interestingly, my friend Diana Davis created
a video entry for the "<a href="http://gonzolabs.org/dance/">Dance your
PhD</a>" contest. It's about <em>Cutting Sequences on the Double Pentagon</em>,
and you can (and should) look at it <a href="http://vimeo.com/47049144">on
vimeo</a>. It may even be the first math dance-your-PhD entry! You might even
notice that I'm in the video, and am even waving madly (I had thought it
surreptitious at the time) around 3:35.</p>
<p>That's the positive one, the "Building with the Internet," a creative use of
the now-common-commodity. After the fold is the travesty.</p>
<p>On the other hand, I have also been nominated for a blog award... but not in a
good way. I received this email and notification:</p>
<blockquote>
<p>Hi there,
An article you wrote in 2011 titled <strong>2401: Additional Examples for Test
3</strong> has earned your blog a nomination for a Fascination Award: 2012's
Most Fascinating Middle School Teacher blog.</p>
<p>The comments posted in response to your post prove that your content not only
inspires your audience, but it also creates discussion around your posts, both
of which are requirements for the nomination of a Fascination award.</p>
<p>As a nominee of this award, you have full permission to display the "Nominated" emblem on your website. To learn more about the contest, the rules, or the prizes, click here:
2012 Fascination Awards Rules & Prizes.</p>
<p>To get started:
<ol>
<li>Accept your nomination by replying to this email by August 15 (11:59 PM EST).</li>
<li>Claim your "Nominated" badge to display on your blog: Nominated Badge</li>
</ol>
Voting begins August 18th at 1:01 AM (EST). The blog with the most votes by
August 25th at 11:59 PM (EST) will win the grand prize, a $100 restaurant
gift card.</p>
<p>Good luck and thank you for your participation!
Matthew Pelletier
Director of Public Relations
Accelerated Degree Programs</p>
</blockquote>
<p>That's right - I am a fantastic middle-school science teacher, as demonstrated
by <a
href="http://mixedmath.wordpress.com/2011/04/14/2401-additional-examples-for-test-3/">my
previous blog post</a> when I was a TA for multivariable calculus at Georgia
Tech. And what was in that post? It's a series of links to the Khan Academy,
and my students asked me five questions in the comments. One might have hoped
that the Georgia Tech or Multivariable Calculus tags (or the word "integral" in
the post, or the explicitly done and written integrals in the comments which
<em>Accelerated Degree Programs</em> seems to have read so closely) would have tipped
them off.</p>
<p>Either Georgia Tech's standards are really low these days, it takes an
extraordinary amount of effort to become a 'good middle school science teacher
blog,' or this is just another site that's senselessly trying to improve their
rating by spamming out things that link to their site. (I have removed all
links to their site from the email so as to not actually boost their <a
href="http://en.wikipedia.org/wiki/Search_engine_optimization">SEO</a>
attempt).</p>
<p>You might ask, what do they do? They seem to charge people about a hundred
thousand dollars or so to give them the privilege of attending an online degree
program, perhaps getting a bachelor's in 18 months. And what were they going to
give me? They were going to let me display</p>
<figure class="center">
<img src="/wp-content/uploads/2012/08/nominated-middle-school-teacher.png"
width="166" />
</figure>
<p>Or, if I won, they would let me display</p>
<figure class="center">
<img src="/wp-content/uploads/2012/08/winner-middle-school-teacher.png"
width="165" />
</figure>
<p>I have again removed the linking aspects of these two photos, so that these are
merely .pngs hanging out. Whoa - that's... exciting... or something. It's a
little worse, as when I didn't respond to 'accept my nomination,' I was emailed
by Matthew Pelletier again! They are insistent, at least. And that's the only
good thing I have to say about them.</p>https://davidlowryduda.com/dancing-ones-phdSat, 11 Aug 2012 03:14:15 +0000Precalculus Supplement - Synthetic Divisionhttps://davidlowryduda.com/precalculus-supplement-synthetic-divisionDavid Lowry-Duda<p>I think it is a sign.</p>
<p>In the question <a href="http://math.stackexchange.com/q/171191/9754">How does
Synthetic Division Work?</a> by the user <a
href="http://math.stackexchange.com/users/26649/riddler">Riddler</a> on
math.stackexchange, Riddler says that he's never seen a proof of Synthetic
Division. This gave me a great case of Mom's Corollary (the generalization of
the fact that when mothers tell you something, you are often reminded of
specific cases of it within three days - at least with my mom), as it came up
with a student whom I'm tutoring. It turns out many of my students haven't
liked synthetic division. I chatted with some of the other Brown grads, and in
general, they didn't like synthetic division either.</p>
<p>It was one of those things that was taught to us before we thought about why
different things worked. Somehow, it wasn't interesting or useful enough to be
revisited. But let's take a look at synthetic division after the fold:</p>
<p><a href="http://en.wikipedia.org/wiki/Synthetic_division">Synthetic
division</a> is a specialized method for dividing a monic polynomial (leading
coefficient $1$) by a linear factor of the form $x - a$. The reason why
some people like synthetic division is because it can be done very quickly,
although as we'll see below, we are really just optimizing some of the steps
from doing regular <a
href="http://en.wikipedia.org/wiki/Polynomial_long_division">polynomial long
division</a>. The rule for synthetic division is best seen through example:</p>
<blockquote><strong>Synthetic Division Algorithm</strong>:
Say we have the polynomial $x^3 - 12x^2 - 42$ and we want to divide it by $x - 3$. Then we first write out the coefficients of the polynomial to be divided like this:
<p style="text-align:center;">$\begin{array}{c|cccc} & 1& -12 & 0 & -42 \\ \phantom{x} \\ \hline \end{array}$</p>
<p style="text-align:left;">Since we are dividing by $x - 3$, we will write out a $3$ to the left on the second line. Note that from $x - 3$ we got a $3$ and not a $-3$. This is important, and could mess up a whole lot of computation.</p>
<p style="text-align:center;">$\begin{array}{c|cccc} & 1& -12 & 0 & -42 \\ 3 \\ \hline \end{array}$</p>
<p style="text-align:left;">Now we have set up the polynomial division, and we just carry out the following steps: Copy down the first coefficient below the bar. Here, we drop down a $1$ like so:</p>
<p style="text-align:center;">$\begin{array}{c|cccc} & 1& -12& 0 & -42 \\ 3 \\ \hline & 1 \end{array}$</p>
<p style="text-align:left;">We then multiply the dropped number by the $3$ and place it in the next column. So beneath the $-12$, we now have a $3$. We then add in that column, so that we get a $-9$. Our diagram now looks like:</p>
<p style="text-align:center;">$\begin{array}{c|cccc} & 1& -12& 0 & -42 \\ 3 & & 3& \\ \hline & 1 &-9& \end{array}$</p>
<p style="text-align:left;">We now repeat to the end of the diagram. The completed diagram for this example will look like</p>
<p style="text-align:center;">$\begin{array}{c|cccc} & 1& -12& 0 & -42 \\ 3 & & 3& -27 & -81\\ \hline & 1 &-9& -27 & -123 \end{array}$</p>
<p style="text-align:left;">Ok, so now what? It turns out that this last line contains our answer. The last term gives the remainder, the next-to-last gives the constant term, then the linear term, then the quadratic (and so on, if there were more terms). So here, the remainder is $-123$, contributing $\frac{-123}{x-3}$; our constant term is $-27$, our linear coefficient is $-9$, and our quadratic coefficient is $1$. So the answer is $x^2 - 9x - 27 - \frac{123}{x-3}$. Multiplying it out, we even see that it's correct.</p>
</blockquote>
<p style="text-align:left;">The general method is very similar. You drop down the first coefficient, multiply it by the left term and place the product in the next column, add that column, multiply the sum by the left term and carry it to the next column, and so on until you're out of columns. Let's look at another, but let's cheat to see what it will look like. We know that if we divide $(x-1)(x+1)(x-2) = x^3 - 2x^2 - x + 2$ by $(x-2)$ we should get $x^2 - 1$. The synthetic division diagram looks like</p>
<p style="text-align:center;">$\begin{array}{c|cccc} & 1 &-2 &-1 &2 \\ 2 & & 2 & 0 &-2 \\ \hline & 1 & 0 & -1 & 0 \end{array}$</p>
<p style="text-align:left;">And this is exactly correct, as our answer has no remainder and gives $x^2 - 1$. What if we were to divide it by $(x + 1)$ instead? This is a bit different, as you should note that we don't just put a $1$ out to the left. Synthetic division only works on divisors of the form $(x - a)$, so we write $(x + 1) = (x - (-1))$. Then the synthetic division diagram looks like:</p>
<p style="text-align:center;">$\begin{array}{c|cccc} & 1 &-2 &-1 &2 \\
-1 & & -1 & 3 &-2 \\
\hline & 1 & -3 & 2 & 0 \end{array}$</p>
<p style="text-align:left;">And this is again correct. That's handy. There is a way to do a synthetic-like division for dividing by quadratics, etc., but it's much longer. The greatest strength of synthetic division is that it's very compact, and if you know how it's done, it can be done very, very quickly. Combined with bits like the Rational Root Theorem and Factor Theorem, it can speed up the process of factoring and finding roots too.</p>
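<p style="text-align:left;">The column operations above are easy to put into a few lines of code. Here is a minimal Python sketch (the function name and interface are my own, not anything standard); it takes the coefficients from the leading term down and returns the quotient coefficients and the remainder:</p>

```python
def synthetic_division(coeffs, a):
    """Divide a monic-or-not polynomial by (x - a) via synthetic division.

    coeffs lists coefficients from the leading term down, so
    x^3 - 12x^2 - 42 is [1, -12, 0, -42].
    Returns (quotient_coefficients, remainder).
    """
    out = [coeffs[0]]          # drop the first coefficient down
    for c in coeffs[1:]:
        # multiply the last dropped value by a and add the next column
        out.append(c + a * out[-1])
    return out[:-1], out[-1]

# The worked example: (x^3 - 12x^2 - 42) / (x - 3)
quotient, remainder = synthetic_division([1, -12, 0, -42], 3)
print(quotient, remainder)  # [1, -9, -27] -123
```

<p style="text-align:left;">Running it on the worked example reproduces the bottom row of the diagram: quotient $x^2 - 9x - 27$ and remainder $-123$.</p>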
<p style="text-align:left;">All that is well and good, but this might leave a gap in the pit of your stomach, or perhaps a pit in the gap of your stomach. Why does it work? Let's see why:</p>
<p style="text-align:left;">Let's go back to the original problem of dividing $x^3 - 12x^2 - 42$ by $x - 3$, and write it out using long division.</p>
<p style="text-align:center;">$\begin{array}{cccc|ccc}
x^3& -12x^2 & 0x & -42 & & x & -3 \\
\hline
-x^2(x & -3)&&& x^2 \\
0&-9x^2&0x &&\\
& -9x(x& -3) && & -9x\\
&0&-27x&-42&& \\
&&-27(x & -3) &&&-27 \\
&&&-123&&
\end{array}$</p>
<p style="text-align:left;">And so we again get $x^2 - 9x - 27 - \frac{123}{x-3}$. But why did this give the same answer? Let's look at the algorithm again.</p>
<p style="text-align:left;">It's clear that the first coefficient will be a $1$, because it's a monic polynomial. So this is no shock. To see what the next line is, we multiply $(x-3)$ by $x^2$ to get $x^3 - 3x^2$. Of course, we knew the cubic terms would cancel (that's why we chose to multiply by $x^2$), so we only need to pay attention to the $-3x^2$. But since we're dividing by $(x-3)$ - in particular, since there is a $-3$ rather than a $+3$ - carrying out the arithmetic leads to us <em>adding</em> $3x^2$ on the next line. This is why, after we switch the sign on the divisor's constant term, we just multiply and add.</p>
<p style="text-align:left;">And this directly gives us the next coefficient, because we are again dealing with division by a monic polynomial $x-3$ (so there is no leading-coefficient problem). Put another way, synthetic division is a clever way of combining two lines of the polynomial long division into one step: 'multiply the number at the bottom of the column by $a$ and add.' Here $a$ is the negative of the divisor's constant term - the sign switch noted above. The multiplication corresponds to finding the result of the last multiplication in the long division while ignoring the leading term, because it will cancel out; the addition corresponds to carrying out the subtraction in the long division, with the sign already flipped.</p>
<p style="text-align:left;">If you write out a couple side-by-side, a rigorous proof becomes very clear, though perhaps not fun to write.</p>https://davidlowryduda.com/precalculus-supplement-synthetic-divisionThu, 09 Aug 2012 03:14:15 +0000A MSE collection - topology book referenceshttps://davidlowryduda.com/a-mse-collection-topology-book-referencesDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/a-mse-collection-topology-book-referencesThu, 21 Jun 2012 03:14:15 +0000Three number theory bits - One elementary, the 3-Goldbach, and ABChttps://davidlowryduda.com/three-number-theory-bitsDavid Lowry-Duda<p>I've come to realize that I'm always tempted to start my posts with "Recently,
I've..." or "So and so gave me such and such a problem..." or "I happened
across this on..." It is as if my middle school English teachers (all of whom
were excellent) succeeded so well in forcing me to transition from one idea to
the next that I can't help it even today. But, my respect for my middle school
teachers aside, I think I'm going to try to <em>avoid</em> that here, and just
sort of jump in.</p>
<p>Firstly, as announced at <a
href="http://terrytao.wordpress.com/2012/06/04/two-polymath-projects/">Terry
Tao's Blog</a>, two new polymath items are on the horizon. There is a <a
href="http://polymathprojects.org/2012/06/03/polymath-proposal-the-hot-spots-conjecture-for-acute-triangles/">new
polymath proposal</a> at the polymath blog on the "Hot Spots Conjecture",
proposed by <a href="http://www.math.missouri.edu/~evanslc/Polymath/">Chris
Evans</a>, and that has already expanded beyond the proposal post into its <a
href="http://polymathprojects.org/2012/06/12/polymath7-research-thread-1-the-hot-spots-conjecture/">first
research discussion post</a>. (To prevent clutter and to maintain a certain
level of organization, the discussion gets cut up into 100-comment-size chunks
or so, and someone summarizes some of the key points in the header each time. I
think it's a brilliant model). And the mini-polymath organized around the IMO
will happen at <a
href="http://michaelnielsen.org/polymath1/index.php?title=Imo_2012">the
wiki</a> starting on July 12.</p>
<p>Now, onto some number theory -</p>
<p>One of the few complaints I have about my undergraduate education at Georgia
Tech was how often a core group of concepts came up. In perhaps ten of my
classes, I learned about the Euclidean algorithm (I learned about equivalence
relations in even more). The idea is so old-hat to me now that I really like
it when I see it come up in places I don't immediately expect.</p>
<h4>The Problem</h4>
<blockquote>Show that $\gcd (a^m - 1,a^n - 1) = a^{\gcd (m,n)} - 1$</blockquote>
<p>Let's look at a couple of solutions. One might see that the Euclidean algorithm
works directly on the exponents. In fact, $\gcd (a, a^m - 1) = 1$ for all
$m$, and so we have that (assuming $n > m$ wlog)</p>
<p>$$\gcd (a^n-1,a^m-1) = \gcd (a^n - 1, a^n - a^{n-m} ) = \gcd (a^{n-m} -1, a^m - 1)$$</p>
<p>So one could continue to subtract one exponent from
the other, and then switch which exponent we're reducing, and so on, literally
performing the Euclidean algorithm on the exponents. But there's a pleasant way
of visualizing this. As $(a-1)|(a^n-1),(a^m-1),(a^{\gcd(m,n)} - 1)$, we
can look instead at $\gcd \left(\dfrac{a^n - 1}{a-1},
\dfrac{a^m-1}{a-1}\right)$. To work a concrete example, we might look at $\gcd \left(\dfrac{a^5 - 1}{a-1}, \dfrac{a^2-1}{a-1}\right)$, or $\gcd (1
+ a + a^2 + a^3 + a^4, 1 + a)$. The first, in this case, has $5$ terms,
and the second has $2$ terms, the same as the original exponents. Multiplying
$1 + a$ by $a^3$ and subtracting it from $1 + a + a^2 + a^3 + a^4$
leaves $1 + a + a^2$. In particular, it is very clear
that we can remove $m$ terms at a time from the $n$ terms, and that
this can be rotated. I really like this type of answer for a few reasons: it
was not immediately obvious to me that the Euclidean algorithm would play much
of a role, and this argument is independent of $a$ being a number (i.e. it
works in rings of polynomials). $\diamondsuit$</p>
<p>Another, essentially different way of solving this problem is to show that all
common divisors of $a^m - 1$ and $a^n - 1$ are divisors of $a^{\gcd(m,n)} - 1$. Suppose $d|(a^m - 1), (a^n - 1)$. Then $a^m
\equiv a^n \equiv 1 \mod d$, so that in particular $\mathrm{ord}_d(a) \mid m$ and $\mathrm{ord}_d(a) \mid n$, where $\mathrm{ord}_d(a)$ denotes the multiplicative order of $a$ modulo $d$.
But then $\mathrm{ord}_d(a) \mid \gcd(m,n)$, so in particular $a^{\gcd(m,n)} \equiv 1 \mod d$. Each of these steps is reversible,
so that any divisor of $a^{\gcd(m,n)} - 1$ is a common divisor of $a^m - 1$ and $a^n - 1$ as well. Thus we have that $\gcd(a^m - 1, a^n - 1)
= a^{\gcd(m,n)} - 1$, as desired $\diamondsuit$</p>
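<p>If you want to convince yourself numerically before (or after) reading the proofs, the identity is easy to spot-check in a few lines of Python. This is a quick sanity check, not a proof, and the function name is my own:</p>

```python
from math import gcd

def check_gcd_identity(a, m, n):
    # numerically verify gcd(a^m - 1, a^n - 1) == a^gcd(m, n) - 1
    return gcd(a**m - 1, a**n - 1) == a**gcd(m, n) - 1

print(all(check_gcd_identity(a, m, n)
          for a in range(2, 8)
          for m in range(1, 12)
          for n in range(1, 12)))  # True
```

<p>Since the identity is a theorem, the check passes for every choice of $a \geq 2$ and positive $m, n$; the ranges above are just a small sample.</p>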
<p>Is it forgiveable that Georgia Tech taught me the Euclidean algorithm in so
many of my classes? Although I complain, there was reason. There is a healthy
lack of duplication of classes between the different schools and colleges. So
programmers might take combinatorics, engineers might take prob/stat, anyone
might take intro to elementary number theory or, if they were daring, abstract
algebra, and mathies themselves would learn about it in the closest thing Tech
has to an intro-to-proofs class, the dedicated linear algebra course (called
abstract vector spaces). All of these teach the Euclidean algorithm (and most
teach combinations/permutations and equivalence relations, too), but there was
a general sense that classes were self-contained. Thus it was easy to take
classes out-of-major.</p>
<p>Brown graduate mathematics does not have this self-containment. I understand
this, and I doubt that any graduate math school would. Why reinvent the wheel?
But it was one of the few times when I transitioned to a new school and
actually had a different learning experience (maybe the only). This removes me
from a seemingly key component of Brown undergraduate life - the open
curriculum, also designed to allow students to take classes
out-of-concentration. So when I'm asked to comment on Brown undergraduate life
or the undergraduate math program (and I have been asked), I really don't have
anything to say. It makes me feel suddenly older, yet not any wiser. Go
figure.</p>
<p>Digression aside, I wanted to talk about progress on two conjectures. Firstly,
the <a href="http://en.wikipedia.org/wiki/Goldbach's_conjecture">Goldbach
conjecture</a>. The Goldbach conjecture states that <strong>Every even integer
greater than $2$ can be expressed as the sum of two primes. </strong>The
so-called 'Ternary Goldbach conjecture' (sometimes called the <a
href="http://en.wikipedia.org/wiki/Goldbach's_weak_conjecture">weak Goldbach
conjecture</a>) states that <strong>Every odd number greater than $7$ can
be expressed as the sum of three primes. </strong></p>
<p>It is known that every odd number greater than $1$ is the sum of at most
five primes (link to arxiv, Terry Tao's <a
href="http://arxiv.org/abs/1201.6656">paper</a>). On 23 May, Harald Helfgott
posted a <a href="http://arxiv.org/abs/1205.5252">paper</a> on the arxiv that
makes a lot of progress towards the Ternary Goldbach. In particular, his
abstract states:</p>
<blockquote>
The ternary Goldbach conjecture states that every odd number $n\geq 7$ is
the sum of three primes. The estimation of sums of the form $\sum_{p\leq
x} e(\alpha p)$, $\alpha = a/q + O(1/q^2)$, has been a central part of
the main approach to the conjecture since (Vinogradov, 1937). Previous work
required $q$ or $x$ to be too large to make a proof of the
conjecture for all $n$ feasible.
The present paper gives new bounds on minor arcs and the tails of major arcs.
For $q\geq 4\cdot 10^6$, these bounds are of the strength needed to solve
the ternary Goldbach conjecture. Only the range $q\in \lbrack 10^5,
4\cdot 10^6\rbrack$ remains to be checked, possibly by brute force, before the
conjecture is proven for all $n$.
The new bounds are due to several qualitative improvements. In particular, this
paper presents a general method for reducing the cost of Vaughan's identity, as
well as a way to exploit the tails of minor arcs in the context of the large
sieve.
</blockquote>
<p>Pretty slick.</p>
<p>Finally, and this is complete hearsay, it is rumored that the <a href="http://en.wikipedia.org/wiki/Abc_conjecture">ABC conjecture</a> might have been solved. I read of potential progress by S. Mochizuki over at the <a href="http://sbseminar.wordpress.com/2012/06/12/abc-conjecture-rumor-2/">Secret Blogging Seminar</a>. To be honest, I don't really know much about the conjecture. But as they said at the Secret Blogging Seminar: "My understanding is that blogs are for such things." At least sometimes.</p>https://davidlowryduda.com/three-number-theory-bitsFri, 15 Jun 2012 03:14:15 +0000A MSE Collection - a list of integralshttps://davidlowryduda.com/a-mse-collection-a-list-of-basic-integralsDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/a-mse-collection-a-list-of-basic-integralsMon, 11 Jun 2012 03:14:15 +0000A pigeon for every hole, and then one (sort of)https://davidlowryduda.com/a-pigeon-for-every-holeDavid Lowry-Duda<p>There is a certain pattern to learning mathematics that I got used to in
primary and secondary school. It starts like this: first, there are only
positive numbers. We have 3 apples, or 2 apples, or maybe 0 apples, and that's
that. Sometime after realizing that 100 apples is a lot of apples (I'm sure
that's how my 6 year old self would have thought of it), we learn that we might
have a negative number. That's how I learned that they don't always tell us
everything, and that sometimes the things that they do tell us have silly
names.</p>
<p>We know how the story goes - for a while, there aren't remainders in division.
Imaginary numbers don't exist. Under no circumstance can we divide or multiply
by infinity, or divide by zero. And this doesn't go away: in my calculus
courses (and the ones I've helped instruct), almost every function is
continuous (at least mostly) and continuity is equivalent to 'being able to
draw it without lifting a pencil.' It would be absolutely impossible to
conceive of a function that's continuous precisely at the irrationals, for
instance (and let's not talk about $G_\delta$ or $F_\sigma$). And
so the pattern goes on.</p>
<p>So when I hit my first class where I learned and used the pigeon-hole principle
regularly (which I think was my combinatorics class? Michelle - if you're
reading this, perhaps you remember), I thought the name "pigeon-hole" was
another one of those names that will get tossed. And I was wrong.</p>
<p>I was in a seminar today, listening to someone talk about improving results
related to equidistribution theorems, approximating reals by rationals, and...
the Dirichlet Box Principle. And there was much talking of pigeons and their
holes (albeit a bit stranger, and far more ergodic-sounding than what I first
learned on).</p>
<p>Not knowing much ergodic theory (or any at all, really), I began to think about
a related problem. A standard application of pigeonholing is to show that any
real number can be approximated to arbitrary accuracy by a rational $\frac{p}{q}$. What if we restricted our $p,q$ to be prime? I.e., are
prime ratios dense in (say) $\mathbb{R}^+$?</p>
<p>So I seek to answer that question in a few different ways. It's nice to come
across problems that can be done, but that I haven't done before. We'll do
three (somewhat) independent proofs.</p>
<h4>First Method: Brute Force</h4>
<p>The Prime Number Theorem <a
href="http://en.wikipedia.org/wiki/Prime_number_theorem">(wiki)</a> asserts
that $\pi(n) \sim \frac{n}{\log n}$, and correspondingly that the $n$th
prime $p_n \approx n \log n$. So then we might hope that if $\frac{n \log n}{m \log m}$ is dense in $\mathbb{R}^+$, prime ratios would
be dense too. Fortunately, showing that $\frac{n \log n}{m \log m}$ is
dense is straightforward. For the rest, we use this proposition:</p>
<div class="proposition">
<p>If $p_n \sim q_n$, then $\frac{p_n}{p_m}$ is dense iff $\frac{q_n}{q_m}$ is dense.</p>
</div>
<div class="proof">
<p><em>Proof</em> : Since $p_n \sim q_n$, for any $\epsilon > 0$
there is some $N$ s.t. for all $n,m > N$, we have that $|1
- \frac{p_n}{q_n}| < \epsilon$. Thus $1 - \epsilon <
\frac{p_n}{q_n} < 1 + \epsilon$. Similarly, we can say that $1 -
\epsilon < \frac{q_m}{p_m} < 1 + \epsilon$.</p>
<p>Putting these together, we see that
$$(1 - \epsilon)^2 < \frac{p_n}{p_m} \frac{q_m}{q_n} < (1 + \epsilon)^2$$
$$\frac{p_m}{p_n}(1-\epsilon) ^2< \frac{q_m}{q_n} < \frac{p_m}{p_n} (1 + \epsilon)^2$$
If $\frac{p_m}{p_n}$ is dense, then in particular for any real number
$r$, we can choose some $n,m > N$ s.t. $r (1 - \epsilon)
< \frac{p_m}{p_n} < r(1 + \epsilon)$. Putting this together again, we see
that
$$r(1-\epsilon)^3 < \frac{q_m}{q_n} < r(1 + \epsilon)^3$$
And so $q_n/q_m$ is dense as well. The proof of the converse is identical. $\diamondsuit$</p>
</div>
<p>Now that we have that, we ask: is it true that $\frac{n \log n}{m \log
m}$ is dense? In short, yes. Now that we've gotten rid of the prime number
restriction, this is far simpler. So I leave it out - but leave a comment if
it's unclear.</p>
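<p>Though the proof is left out, the claim is easy to explore numerically. Here is a short Python sketch (the names are my own) which, for a fixed $m$, binary-searches for the $n$ making $\frac{n \log n}{m \log m}$ closest to a target, using the fact that $n \log n$ is increasing for $n \geq 3$. The achievable error shrinks as $m$ grows, which is the heart of the density claim:</p>

```python
import math

def nlogn_ratio_near(r, m):
    """Pick n >= 3 making (n log n)/(m log m) as close to r > 0 as
    possible, by binary search (n log n is increasing for n >= 3)."""
    target = r * m * math.log(m)
    lo, hi = 3, 6
    while hi * math.log(hi) < target:   # bracket the target
        hi *= 2
    while hi - lo > 1:                  # then bisect
        mid = (lo + hi) // 2
        if mid * math.log(mid) < target:
            lo = mid
        else:
            hi = mid
    n = min((lo, hi), key=lambda k: abs(k * math.log(k) - target))
    return n, abs(n * math.log(n) / (m * math.log(m)) - r)

for m in (10, 100, 1000, 10000):
    n, err = nlogn_ratio_near(math.pi, m)
    print(m, n, err)  # the error shrinks roughly like (log n)/(m log m)
```

<p>The error is bounded by the gap between consecutive values of $n \log n$, about $\log n$, divided by $m \log m$, and this tends to $0$ as $m \to \infty$ - which is exactly why the ratios are dense.</p>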
<h4>Method 2: Closer to Proper Pigeonholing</h4>
<p>In a paper by Dusart <a title="Estimates of some functions over primes"
href="http://arxiv.org/abs/1002.0442">(link to arxiv)</a>, Estimates of Some
Functions over Primes without R.H., it is proved that for $x > 400
000$, there is always a prime in the interval $[x, x(1 + \frac{1}{25
\log^2 x})]$. We can use this to show the density of prime ratios as well. In
fact, let's be a little slick. If prime ratios are dense in the rationals, then
since the rationals are dense in the reals we'll be set. So suppose we wanted
to get really close to some rational $\frac{a}{b}$. Then consider pairs
of intervals $[an, an(1 + \frac{1}{25 \log^2 an})], [bn, bn(1 +
\frac{1}{25 \log^2 bn})]$ for $n$ large enough that $an, bn > 400
000$. We know there are primes $p_n, q_n$ in each pair of intervals.</p>
<p>Then our result follows from the fact that $\displaystyle \lim
\frac{an}{bn} = \frac{a}{b} = \lim \frac{an(1 + \frac{1}{25 \log ^2 an})}{bn(1
+ \frac{1}{25 \log ^2 bn})}$ and the sandwich theorem.</p>
<p>We pause for an aside: a friend of mine today, after the colloquium, was asking
about why the pigeonhole principle was called the pigeonhole principle. Why not
the balls in baskets lemma? Or the sock lemma or the box principle (which it is
also called, but with far less regularity as far as I can tell)? So we
considered calling it the sandwich theorem: if I have 4 different meats, but
only enough bread for 3 sandwiches, then one sandwich will get at least 2
meats. What if we simply called every theorem the sandwich theorem, and came up
with some equally unenlightening metaphorical explanation? Oof - deliberate
obfuscation.</p>
<h4>Method 3: First Principles</h4>
<p>We do not actually need the extreme power of Dusart's bound (which is not to
say it's not a great result - it is). In fact, we need nothing other than the
prime number theorem itself.</p>
<div class="lemma">
<p>For any $\epsilon > 0$, there exists some number $N$ s.t. for
all $x > N$, there is a prime in the interval $[x,
x(1+\epsilon)]$.</p>
</div>
<div class="proof">
<p>Directly use the prime number theorem to show that $\lim_{n \to \infty} \frac{\pi(n(1 +
\epsilon))}{\pi(n)} = 1+\epsilon > 1$; in particular, $\pi(n(1+\epsilon)) > \pi(n)$ for all sufficiently large $n$, so the interval contains a prime.</p>
</div>
<div class="proposition">
<p>Prime ratios are dense in the positive reals.</p>
</div>
<div class="proof">
<p>For any real $r$ and $\epsilon > 0$, we want primes $p,q$
s.t. $|p/q - r| < \epsilon$, or equivalently $qr - q\epsilon
< p < qr + q\epsilon$. Then let $\epsilon' = \epsilon/r$. Let
$N = N(\epsilon')$ be the bound from the last lemma, and let $q$ be
any prime with $qr > N$. Then since there's a prime $p$ in $[x, x(1
+ \epsilon')] = [qr, qr + q\epsilon]$ when $x = qr$, the proof is complete. $\diamondsuit$</p>
</div>
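<p>The proof above is effective enough to turn into a brute-force search. The following Python sketch (entirely my own, and deliberately naive - trial division is plenty for searches this small) finds primes $p, q$ with $|p/q - r| < \epsilon$ for a given target $r$:</p>

```python
import math

def is_prime(k):
    """Trial division - fine for the small searches here."""
    if k < 2:
        return False
    return all(k % d for d in range(2, int(k**0.5) + 1))

def prime_ratio_near(r, eps):
    """Find primes p, q with |p/q - r| < eps by brute force:
    for each prime q in turn, look for a prime p strictly inside
    the interval (q(r - eps), q(r + eps))."""
    q = 2
    while True:
        if is_prime(q):
            lo = int(q * (r - eps)) + 1
            hi = int(q * (r + eps))
            for p in range(max(lo, 2), hi + 1):
                if is_prime(p):
                    return p, q
        q += 1

p, q = prime_ratio_near(math.pi, 0.01)
print(p, q)  # 167 53, and 167/53 = 3.1509... is within 0.01 of pi
```

<p>For $r = \pi$ and $\epsilon = 0.01$ the first success is $\frac{167}{53} \approx 3.1509$; density guarantees the search always terminates, though for tiny $\epsilon$ the required $q$ grows.</p>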
<p>To end, I wanted to note a related, cool result. If $P$ is the set of
primes, then $\sin P$ is dense in $[-1,1]$. This is less trivial,
but it follows from a result of Vinogradov saying that the sequence of prime
multiples of a fixed irrational number is equidistributed modulo 1. And this is
not at all immediately obvious to me.</p>https://davidlowryduda.com/a-pigeon-for-every-holeThu, 26 Apr 2012 03:14:15 +0000Precalculus Supplementhttps://davidlowryduda.com/precalculus-supplementDavid Lowry-Duda<p>It is here that I will be linking to any supplementary materials for the
precalculus study this summer.</p>
<p>Here is a copy of the syllabus: <a
href="/wp-content/uploads/2012/04/precalculus-plan.pdf">Precalculus
plan</a></p>
<p>At the end of the first week, there was an exam covering Appendix B, sections
1.1-1.6, and sections 2.1-2.5. That was <a
href="/wp-content/uploads/2012/07/precalc-exam1.pdf">this
exam.</a></p>
<p>I have written up a set of exercises that expand on the first week's lessons.
These are not at all required, and no material from these <a
href="/wp-content/uploads/2012/07/problemset1.pdf">exercises</a>
will be on any future test or quiz. But I recommend attempting them - they're
good for you.</p>
<p>At the end of the second week, there was an exam covering material from
chapters 3 and 4. That was <a
href="/wp-content/uploads/2012/07/precalcexam2.pdf">this
exam</a>.</p>
<p>We are about to finish our third week, and are over halfway through. I have
started to write up a resource aimed at reviewing the first three weeks'
material and developing speed in solving. I will add to it on Friday evening
and finish it by Saturday afternoon. These <strong>are required</strong>,
unlike the previous exercises. You can find the draft <a
href="/wp-content/uploads/2012/07/hwset3.pdf">here</a>.</p>
<p>At the end of the third week, there was an exam covering material from chapters
4-6. That was this <a
href="/wp-content/uploads/2012/04/precalcExam3.pdf">exam</a>.</p>
<p>I have updated the exercise list noticeably. The updated draft can be found <a
href="/wp-content/uploads/2012/08/hwset31.pdf">here</a>.
I want to draw attention to the particularly long exercise 2.27. Depending on
how you want to approach it, you might want to start on it early.</p>
<p>At the end of the fourth week, there was an exam on conics. That was this <a
href="/wp-content/uploads/2012/08/precalcexam4.pdf">exam</a>.</p>
<p>I've updated it once more - but I lied about this being the last update.
There's one more. Monday evening update - <a
href="/wp-content/uploads/2012/08/hwset3mod.pdf">here</a>.</p>
<p>The end is almost here! I should note that I've put something on synthetic
division on the <a title="Precalculus Supplement: Synthetic Division"
href="/2012/08/09/precalculus-supplement-synthetic-division/">main
part of my blog</a>. I wanted to use colors in the long division - it makes it
really clear - but WordPress doesn't have all the powers of $\LaTeX$ (the
markup language I use to make things mathy-looking). So, it's not quite as good
as I would have hoped. So it goes!</p>https://davidlowryduda.com/precalculus-supplementThu, 26 Apr 2012 03:14:15 +0000From the exchange - is it unheard of to like math but hate proofs?https://davidlowryduda.com/from-the-exchangeDavid Lowry-Duda<p>A flurry of activity at <a title="MSE"
href="http://math.stackexchange.com">Math.Stackexchange</a> was just enough to
rouse me from my blogging slumber. Last week, the following question was
posted.</p>
<blockquote>
<p>I have enjoyed math throughout my years of education (now a first year math
student in a post-secondary institute) and have done well–relative to the
amount of work I put in–and concepts learned were applicable and
straight-to-the-point. I understand that to major in any subject past high
school means to dive deeper into the unknown void of knowledge and learn the
"in's and out's" of said major, but I really hate proofs–to say the least.</p>
<p>I can do and understand Calculus, for one reason is because I find it
interesting how you can take a physics problem and solve it mathematically.
It's applications to real life are unlimited and the simplicity of that idea
strikes a burning curiosity inside, so I have come to realize that I will take
my Calculus knowledge to it's extent. Additionally, I find Linear Algebra to be
a little more tedious and "Alien-like", contrary to popular belief, but still
do-able nonetheless. Computer Programming and Statistics are also interesting
enough to enjoy the work and succeed to my own desire. Finally, Problems,
Proofs and Conjectures–that class is absolutely dreadful.</p>
<p>Before I touch upon my struggle in this course, let me briefly establish my
understanding of life thus far in my journey and my future plans: not
everything in life is sought after, sometimes you come across small sections in
your current chapter in which you must conquer in order to accomplish the
greater goal. I intend to complete my undergraduate degree and become a math
teacher at a high school. This career path is a smart choice, I think, seeing
as how math teachers are in demand, and all the elder math teachers just put
the students to sleep (might as well bring warm milk and cookies too). Now on
that notion and humour aside, let us return to Problems, Proofs and Conjectures
class.</p>
<p>Believe me, I am not trying to butcher pure math in any way, because it
definitely requires a skill to be successful without ripping your hair out.
Maybe my brain is wired to see things differently (most likely the case), but I
just do not understand the importance of learning these tools and techniques
for proving theorems, and propositions or lemmas, or whatever they are formally
labelled as, and how they will be beneficial to us in real life. For example,
when will I ever need to break out a white board and formally write the proof
to show the N x N is countable? I mean, let's face it, I doubt the job market
is in dire need for pure mathematicians to sit down and prove more theorems
(I'm sure most of them have already been proven anyways). The only real
aspiring career path of a pure mathematician, in my opinion, is to obtain a PHd
and earn title of Professor (which would be mighty cool), but you really have
to want it to get it–not for me.</p>
<p>Before I get caught up in this rant, to sum everything up, I find it very
difficult to study and understand proofs because I do not understand it's
importance. It would really bring peace and definitely decrease my stress
levels if one much more wise than myself would elaborate on the importance of
proofs in mathematics as a post-secondary education. More simply, do we really
need to learn it? Should my decision to pursue math be revised? Perhaps the
answer will motivate me to embrace this struggle.</p>
</blockquote>
<p>I happened to be the first to respond (the original question and answer can be
found <a href="http://math.stackexchange.com/q/129522/9754">here</a>), and I'm a
bit fond of my answer, so I reproduce it below.</p>
<blockquote>
<p>I mean, let's face it, I doubt the job market is in dire need for pure
mathematicians to sit down and prove more theorems (I'm sure some of them have
already been proven anyways).</p>
</blockquote>
<p>The idea that most of 'math' has already been solved, discovered, found,
founded, or whatnot is nonsense, a misconception arising from a system of
education focusing on mastering arithmetic skills rather than performing
mathematical thought.</p>
<p>It seems to me that what you like to do is arithmetic. I don't mean this
diminutively - I mean that it sounds to me that you like coming across problems
where a prescribed solution or apparent solution method exists, and then you
carry out that solution. In my undergraduate days, when I was surrounded by the
engineers at Georgia Tech, this was a common attitude. A stunning
characteristic of many of the multivariable calculus classes and differential
equations classes at Tech, which were almost exclusively aimed at the
overwhelming majority of engineers at the school, was that many concepts were
presented without any proof. And for many of the students, whose interest in
understanding why what they were taught was true had been dulled by years of
largely mindless arithmetic, or whose trust was so great that they willingly
gave up the responsibility of verification to others (for better or for worse),
this was fine. And in this way, two of the four semesters of calculus at Tech
are largely arithmetic as well.</p>
<p>But as a mathematician (rather, as a sprouting mathematician), I draw a
distinction between mathematics and arithmetic. To approach a problem and come
up with a solution that you do not understand is not mathematics, nor is the
act of regurgitating formulae on patterned questions that come off a template.
While arithmetic skills and computation are important (and despite their
emphasis in primary and secondary schooling, still largely weak enough for a
vague innumeracy to be prevalent and, unfortunately enough, sometimes even
acceptable), they are not at the heart of mathematics. The single most
important question in math is <em>why?</em></p>
<p>I agree with one of the comments above, identifying two questions here: <em>Why
are proofs important to mathematics?</em> and <em>How would being able to prove
theorems make my life better?</em></p>
<p>For the former, I can only say that a mathematics without proofs isn't really
mathematics at all. What is it that you think mathematicians do all the time? I
assure you, we are not constantly computing larger and larger sums. Nor are we
coming up with more formulae, eagerly awaiting numeric inputs. A mathematician
finds an intriguing problem and then tries to answer it. The funny thing about
most things is that they're really complicated, and so most mathematicians must
go through some sort of process of repeated modelling and approximation. It is
in our nature to make as few assumptions as necessary to answer the problem at
hand, and this sometimes leads to more complications. And sometimes, we fail.
Other times, we don't.</p>
<p>But then, any self-respecting scholar (let alone a mathematician) who is also
interested in the problem gets to ask why the answer is, in fact, an answer.
Not everything is so simple as verifying arithmetic details. So everything must
be proven. And then some other mathematician might come around, assume more or
less, and come up with a different proof, or a different model, or a different
approximation, or an entirely different way of viewing the problem. And this is
exciting to a mathematician - it leads to connections in an increasingly
unwieldy field.</p>
<p>In response to your assumption that most theorems have been done by now, I
mention the quick fact that no mathematician alive could ever hope to learn a
respectable fraction of the amount of math that has been done. This is a very
big deal, and is strange. Just a few hundred years ago, men like Gauss or
Leibniz were familiar with the vast majority of the then-modern mathematics.
It's hard to express how vast mathematics is to someone who isn't familiar with
any of the content of mathematics, but ask around and you'll find that it's,
well, huge.</p>
<p>Finally, for the second question: in all honesty, the ability to prove the
theorems of calculus or linear algebra might not be fundamentally important to
the way you live your life. But to lack any concept of proof is to allow
yourself to be completely consumed by not only innumeracy, but also
irresponsibility. In particular, I would find it completely inappropriate for
someone who disdains proofs to teach math in secondary school. This would
place yet another cog in the machine that creates generations of students who
think that math is just a big ocean of formulas and mathematicians are the
fishermen, so that when someone needs a particularly big formula they ask a
mathematician to go and fish it out. More concretely, it is in secondary school
where most people develop their abilities to synthesize information and make
evaluative decisions. It is a fact of our society that numbers play an
important role in conveying information, and understanding their manipulation
is just as important as having the technical skills to undertake the
manipulation itself when confronted with the task of interpreting their
meaning. And this means that when a student asks their math teacher what
something means, that teacher had better have a good idea.</p>https://davidlowryduda.com/from-the-exchangeMon, 16 Apr 2012 03:14:15 +0000Education Musingshttps://davidlowryduda.com/education-musingsDavid Lowry-Duda<p>Every year a bit before Christmas, a particular 8th grade science teacher from
my old middle school holds a Christmas Party. Funnily enough, she was not my
science teacher (but she was my science olympiad coach), even though I learned
much science from her. She has a reputation for being strict and rigorous
(which she deserves), and in general she was one of those teachers that people
know are simply <em>good</em>.</p>
<p>Many of the other <em>good</em> middle school 8th grade teachers come to this
party every year as well. This is always particularly interesting to me,
because although I've changed a whole lot since my 8th grade year, there is a
clear line connecting me to then. It was my 8th grade math teacher, teaching me
geometry, who made me first like learning math in school. Back then, my school
was not afraid of letting the highest-level students learn more and deeper
material than others, and it was 8th grade where this difference really became
pronounced for me.</p>
<p>Every now and then, I reminisce about my primary and secondary education. Was
mine good? I think it was better than most, but certainly not the best. A
bigger question to me is always: what is the purpose of public education? Is it
to establish a certain minimum level of knowledge, or to create a basic
universal level of civic virtue? Most of my close friends and I were bored
throughout primary and secondary (and a lot of college, for that matter)
education. I was thinking of these sorts of things when I saw my old middle
school teachers at this Christmas Party.</p>
<p>It turns out my math teacher, whom I should perhaps credit with directing me
towards math as a career, no longer teaches math. He hasn't retired or anything
- he simply doesn't teach math anymore. Why not? He loved teaching math, and he
tells me he plans on going back "once the curriculum settles." The problem is
that right now, the math curriculum is changing too often, and each time the
teachers have to go back and pass some sort of certification. It used to be
simple - there was a three-track system (i.e. an advanced, intermediate, and
basic level for each grade). The advanced had pre-algebra in 6th grade, algebra
in 7th, and geometry in 8th. In high school, this would continue to algebra II,
AP Statistics, analysis (fancy word for pre-calc), AP Calc. So in 8th grade,
students would be taking geometry on the advanced track, algebra on the
intermediate, or pre-algebra on the basic track. (The three-track course broke
down in high schools - seniors might take AP Calc AB or BC, or AP Stat, or
pre-calc, or a version of algebra III, or perhaps something else).</p>
<p>A few changes later brings us to the current 'integrated curriculum.' So in 6th
grade, one learns either 'advanced 6th grade math' or 'basic 6th grade math.'
It's 2-track and fully integrated (whatever that really means). As my teacher
tells it, some thrive under the system and some flounder, as with every system.
One of the greater problems is that the advanced track is more or less
as hard as before (let's set aside the imprecision of this
statement), while it is more challenging to teach interesting classes to the
lower track. In his words: "There's not much difference between an upper
algebra student and a geometry student. Perhaps the algebra student just moved
into town and so wasn't yet able to transfer up to an appropriate level. Now,
he's stuck at the lowest class, and he's bored. How are we to make it at all
interesting to him?"</p>
<p>Behind this story lies the idea that schools should keep their students
interested. Is that something that public schools should do? Almost every
mathematician or math PhD candidate I have talked to says that they became
mathematicians 'in spite of' the math of their schooling. I frequently say that
I learned 13 years of arithmetic before I learned any math. I firmly believe
that the current public education system allows innumerate students to
unabashedly enter the 'real world.' As Paulos says in his books on Innumeracy,
people can publicly admit that they are 'not numbers people,' i.e. that they
are bad at math and arithmetic, and not feel ashamed. It also means that dinner
conversation is a lot harder - so many people are poisoned against math.</p>
<p>Another first-year graduate student at Brown named Paul and I talk about some
of the basic ideas of math education and testing frequently. Usually, this
conversation centers around the GRE and other ETS tests. Paul and I once talked
with a foreign grad student who spoke English as a second language, and his GRE
strategy. The Verbal GRE section is often formidable to English speakers, let
alone foreigners. He revealed to us that his strategy was to memorize a list of
10000 or so words so that he could answer all the vocab-based questions, and
answered 'c' to everything else. In fact, he and his friends practiced
determining whether a question was vocab and coloring the bubble 'c' as quickly
as possible if it was not. And he did very well.</p>
<p>Paul and I have a theory, or at least an interesting experiment. If we were to
read application materials of students, leaving out test scores, how well could
we predict general GRE and GRE Math subject scores? Now we just need to get
access to those materials... that's a bit challenging. But it sounds like a
great experiment. How worthwhile are those scores, really?</p>
<p>But these are just brief musings on math education. A good friend of mine named
John Kosh and I had a great conversation about education once. We talked about
a thought experiment: what would happen if for the first 5 years of school,
reading, writing, and critical thinking were the only educational goals? No
explicit math, history, geography, etc. We weren't saying this because we
thought it should be done. Instead, we thought that critical thinking skills
are sorely under-emphasized, and it's an interesting idea. Would it be so bad
for a mathematical education? I sort of suspect not. There is the problem that
even basic reading and critical thinking skills require a certain amount of
math, so it's not as though all math is forgone. I'm getting ahead of myself. The
point is that we were able to come up with a system that addressed each of the
shortcomings that we identified of the current system, but were of course
unable to examine any new problems. That is the way with these - it's much
easier to identify errors than to fix problems.</p>
<p>And so it is with the ETS and the GRE, in my opinion.</p>
<p>Anyhow, I wish anyone who read this far a Merry Christmas and, if I don't
update before, a Happy New Year.</p>https://davidlowryduda.com/education-musingsSat, 24 Dec 2011 03:14:15 +0000Tiontobl - a combinatorial gamehttps://davidlowryduda.com/tiontobl-a-combinatorial-gameDavid Lowry-Duda<p>As a sophomore at Georgia Tech, I took a class on Combinatorial Game Theory
with two good friends, David Hollis (now at <a
href="http://recklessabandonlabs.com/">Reckless Abandon Labs</a>, which he
founded) and Michelle Delcourt (now working towards her PhD at UIUC). As a
final project, we were supposed to analyze a game combinatorially. The three of
us ended up creating a game, called Tiontobl, and we wrote a brief paper. We
submitted it to the journal Integers, but we were asked to revise and expand
part of the paper. At some point in time, we'll finish revising the paper and
submit it again (it's harder now, since we're split across the country - but it
will happen).</p>
<p>Nonetheless, I was talking about it the other day, and I thought I should put
the current paper out there.</p>
<p>The paper can be found here (<a href="/wp-content/uploads/2011/12/tiontobl.pdf">tiontobl</a>).</p>https://davidlowryduda.com/tiontobl-a-combinatorial-gameSat, 10 Dec 2011 03:14:15 +0000Mapping class groups summaryhttps://davidlowryduda.com/mapping-class-groupsDavid Lowry-Duda<p>I recently wrote up a small article on a talk on Mapping Class Groups. This
isn't quite the final draft, but it is what it is.</p>
<p>EDIT: I have updated this paper, and I think all of the small errors have been corrected. I hope so, anyway.</p>
<p><a href="/wp-content/uploads/2011/12/mappingclassgroups1.pdf">mappingClassGroups</a></p>https://davidlowryduda.com/mapping-class-groupsTue, 06 Dec 2011 03:14:15 +0000Ghostwritten Wordhttps://davidlowryduda.com/ghostwritten-wordDavid Lowry-Duda<p>I've just learned of the concept of ghostwriting, and I'm stunned.</p>
<p>A friend and fellow grad student of mine cannot believe that I've made it this
far without imagining it to be possible. I asked around, and I realized that I
was one of the few who wasn't familiar with ghostwriting.</p>
<p>Before I go on, I should specify exactly what I mean. By 'ghostwriting,' I
don't mean situations where the President or another statesman gives a speech
that they didn't write themselves, but that was instead written by a
ghostwriter. That makes a lot of sense to me. I refer to the cases where a
student goes to a person or service, gives them their assignment, and pays for
it to be completed. And by assignment, I don't just mean 20 optimization
problems in one variable calculus. I mean things like 20 page term papers on
the parallels between the Meiji Restoration and the American Occupation in Japan,
or 50 page theses, or (so it's claimed by some) doctoral dissertations.</p>
<p>This felt like nonsense to me when I first heard it, but it also caught my eye.
I brought it up with another good friend of mine, and he referred me to an <a
href="http://chronicle.com/article/The-Shadow-Scholar/125329/">article at the
Chronicle</a>, called The Shadow Scholar. It's a stylized auto-documentary by a
ghostwriter, claiming to have written thousands of pages of essays for students
in the last year. Perhaps better than the essay were the comments. There are a
lot of them, and they partly track my own initial thoughts towards
ghostwriting.</p>
<p>At first, people were angry. <em>This should be illegal!</em> - they would
type. Is it illegal? While it is tempting to immediately turn to ethics, we
should not get ahead of ourselves. It is not, as far as I can tell, illegal in
general. One might assume that it constitutes some sort of fraud, or some sort
of implied copyright infringement or something. It certainly is an abuse of
intellectual property, but that doesn't make it against the law to ghostwrite.
Instead, all the problems are on the student's side - it is almost certain that
the student is violating some sort of school code. Even if not considered
direct plagiarism, a student passing off others' ideas as his own is often
grounds for serious punishment. So one would expect that students don't do it.
I would. But I was wrong.</p>
<p>This concept is so old that Wikipedia has multiple pages on related concepts.
The general concept behind ghostwriting is evidently referred to as <a
href="http://en.wikipedia.org/wiki/Contract_cheating">Contract
Cheating</a> (For that matter, there is also a <a
href="http://en.wikipedia.org/wiki/Ghostwriting#Academic">Ghostwriting</a> wiki
page).</p>
<p>Suddenly, I doubted many things. This is one of those things that surprised me,
and I couldn't wrap my head around it. That is part of the reason why I write
about it now - to wrap my head around it. How widespread is it? In The Shadow
Scholar, the ghostwriter makes their living as a ghostwriter. According to his
claim, it pays more (greater than $60,000) than what the average grade-school
teacher makes. I looked for more. <a
href="http://open.salon.com/blog/holly_robinson/2011/11/18/writer_for_hire">Here
</a>(from open.salon.com) and <a
href="http://www.thesmartset.com/article/article10100801.aspx">here</a> (from
thesmartset.com) are more self-told tales of ghostwriters. But in particular,
they were all writers, and so they wrote essays for students.
In fact, some worked for companies - whole companies of ghostwriters. Wikipedia
beat me here, too - these are known as <a
href="http://en.wikipedia.org/wiki/Essay_mill">"Essay Mills"</a>. The phrase
'essay mill' is clearly meant to be suggestive. How easy are they to find?
Unfortunately, since they're apparently legal, there is no problem with them
advertising openly. They are super easy to find. Disappointingly so. It turns
out that all my friends who were stunned that I hadn't heard of essay mills...
were right to be stunned. And the fact that there are so many is merely
evidence of high demand. And prices are high.
In fact, I would have thought the prices high enough to be dissuasive too. To
pay $10 to $20 a page is unbelievable - an incentive to do your own work, at
the least. Unfortunately, it's also an incentive to work at an essay mill.
So this made me ask: are there math mills, too? Yes. Unfortunately, they're
very easy to find, too. On the first one I came across, I found that I could see
some of the open questions. And I was curious. So I opened one up, and it
was a mechanics question about springs.
A student was offering $2 for anyone who could answer a question: there was
this spring with spring constant k, and there was this weight of mass m on top
of it. How much is it compressed? I read this, and I think... this is a one
liner. So I send the guy a message - <em>Do you know Hooke's Law?</em> He
responded, <em>i think weve covered itin class. does that help? Can you plz hlp
me?</em></p>
<p>So I write him a message telling him the law, and that if you plug in numbers
it says that you divide. And then, the site credits me with $2. What?</p>
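<p>For completeness, the one-liner in question looks something like this (only the equilibrium condition $kx = mg$ comes from Hooke's law; the numeric values of $m$ and $k$ below are made up for illustration and are not from the original question):</p>

```python
# Hooke's law at equilibrium: the spring force balances the weight,
# k * x = m * g, so the compression is x = m * g / k.
m = 2.0    # mass in kg (illustrative value, not from the original question)
k = 500.0  # spring constant in N/m (illustrative value)
g = 9.8    # gravitational acceleration in m/s^2
x = m * g / k   # compression in meters, about 0.039 m
```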
<p>Now, I'm even more caught in this trouble. I contributed, in a little way, to
this terrible thing. I have carefully avoided talking about the ethics of
ghostwriting, because I wanted to talk about other things first. But now, to be
clear, I think it's terrifying. That there is such high demand is evidence to
me of a great lack of ethical strength or moral integrity on behalf of
students. It's also evidence of a great lack of inspiration. Many people
treasure education - it's priceless (excepting of course the large price we put
on it). An education can change a whole lot about a life. But for someone to
subscribe to a ghostwriter is to go completely against that.</p>
<p>I've read several different articles on ghostwriting now, and many comment
threads on these articles. The best ones, in the sense that I found them most
educational, are those where the following happens: a ghostwriter writes the
article with the same dark, insinuating tone behind the name 'essay mill' and
gloats a bit about how they make more money than some professional educator;
people become appalled and, more or less, insult them; the ghostwriter defends
himself and his career; and then one or both sides accuses the educational
system as having failed these students. I think The Shadow Scholar is an
excellent example of this.</p>
<p>I deem it immediately ethically obvious that it is wrong for a student to use
such a service. At first, I was not so convinced about the moral and ethical
situation from the ghostwriter's point of view. It's not illegal, and so supply
of ghostwriters is free to rise to demand. Some might even say it would be
terrible for a free market to behave in any other way. I talked with yet
another fellow grad student, and we chatted about some of the quandaries of
ghostwriting. Many of the grad students I know tutor to get a little extra
cash, and one of the problems is that we are often paid to help people with
their homework. And that's related. Where is the line?</p>
<p>But instead of justifying ghostwriting, I think that this experience has made
me really ponder the ethics of being a good tutor. In essence, tutors should
elucidate. Perhaps this is very obvious. It was for me, and then it wasn't for
a bit, and now it is again. This is sort of like John Stuart Mill's <em>On
Liberty</em>, where he argues for the importance of a free marketplace of ideas so
that out of conflict, the 'good ideas' can be revealed.</p>
<p>So I am completely against ghostwriting, and even more against students willing
to use it. So I think it is most natural to then ask: what should be done about
it? This is not so easy, I think. Especially because, though I think
ghostwriting is wrong, I also place the fault largely with the student.</p>
<p>Large-scale detection is impractical in the sense that we cannot possibly hope
to catch every plagiarized case. It's also impractical to demand all work to be
done in class (I think). I'll be on the lookout for this when I teach. I'll be
sure to have times when students must demonstrate some sort of working
knowledge in class.</p>
<p>Really, I think this comes down to a simple philosophy. A student who goes to
college should want to learn about whatever it is that they're doing. A college
shouldn't give a student a degree unless they are proficient in whatever that
degree guarantees.</p>https://davidlowryduda.com/ghostwritten-wordFri, 02 Dec 2011 03:14:15 +0000Points under a parabolahttps://davidlowryduda.com/points-under-parabolaDavid Lowry-Duda<p>In this, I present a method of quickly counting the number of lattice points
below a quadratic of the form ${y = \frac{a}{q}x^2}$. In particular, I
show that, knowing the number of lattice points in the interval ${[0,q-1]}$, we have a closed form for the number of lattice points in any
interval ${[hq, (h+1)q - 1]}$. This method was inspired by the
collaborative <a
href="http://michaelnielsen.org/polymath1/index.php?title=Finding_primes">Polymath4
Finding Primes Project</a>, and in particular the guidance of Dr. Croot from
Georgia Tech.</p>
<p><strong>1. Intro </strong></p>
<p>Suppose we have the quadratic ${f(x) = \frac{a}{q}x^2}$. In short, we separate the lattice points into regions and find a relationship between the number of lattice points in one region and the number of lattice points in other regions. Unfortunately, the width of each region is ${q}$, so this does not always guarantee much time-savings.</p>
<p>This came up while considering <a name="eq sum"></a></p>
<p align="center">$\displaystyle \sum_{d \leq x \leq m} \left\lfloor \frac{N}{x} \right\rfloor \ \ \ \ \ (1)$</p>
<p><a name="eq sum"></a></p>
<p> </p>
<p><a name="eq sum"></a></p>
<p>In particular, suppose we write ${x = d + n}$, so that we have ${\left\lfloor \dfrac{N}{d + n} \right\rfloor}$. Then, expanding ${\dfrac{N}{d+n}}$ like ${\dfrac{1}{x}}$, we see that <a name="eqexpandedSum"></a></p>
<p align="center">$\displaystyle \frac{N}{d+n} = \frac{N}{d} - \frac{N}{d^2} (n - d) + O\left(\frac{N}{d^3} \cdot (n-d)^2 \right) \ \ \ \ \ (2)$</p>
<p><a name="eqexpandedSum"></a></p>
<p> </p>
<p><a name="eqexpandedSum"></a></p>
<p>And correspondingly, we have that <a name="eqexpandedSum2"></a></p>
<p align="center">$\displaystyle \sum \left\lfloor \frac{N}{d+n} \right\rfloor = \sum \left\lfloor \frac{N}{d} - \frac{N}{d^2} (n - d) + O\left(\frac{N}{d^3} \cdot (n-d)^2 \right) \right\rfloor \ \ \ \ \ (3)$</p>
<p><a name="eqexpandedSum2"></a></p>
<p> </p>
<p><a name="eqexpandedSum2"></a></p>
<p>Now, I make a great, largely unfounded leap. This is <em>almost</em> like a quadratic, so what if it were? And then, what if that quadratic were tremendously simple, with no constant nor linear term, and with the only remaining term having a rational coefficient? Then what could we do?</p>
<p><strong>2. The Method </strong></p>
<p>We want to find the number of lattice points under the quadratic ${y = \frac{a}{q}x^2}$ in some interval. First, note that <a name="eqrecRelation"></a></p>
<p align="center">$\displaystyle \left\lfloor \frac{a}{q} (x+q)^2 \right\rfloor = \left\lfloor \frac{a}{q} (x^2 + 2xq + q^2) \right\rfloor = \left\lfloor \frac{a}{q}x^2 \right\rfloor + 2ax + aq \ \ \ \ \ (4)$</p>
<p><a name="eqrecRelation"></a></p>
<p> </p>
<p><a name="eqrecRelation"></a></p>
<p>Then we can sum over an interval of length q, and we'll get a relationship with the next interval of length q. In particular, this means that <a name="eqsumRec"></a></p>
<p align="center">$\displaystyle \sum_{x=0}^{q-1} \left\lfloor\frac{a}{q}x^2\right\rfloor = \sum_{x=q}^{2q-1} \left\lfloor \frac{a}{q}x^2 \right\rfloor - \sum_{x=0}^{q-1} (2ax + aq) \ \ \ \ \ (5)$</p>
<p><a name="eqsumRec"></a></p>
<p> </p>
<p><a name="eqsumRec"></a> Now I adopt the notation ${S_{a,b} := \sum_{x = a}^b \left\lfloor \frac{a}{q}x^2 \right\rfloor}$, so that we can rewrite equation <a href="#eqsumRec">5</a> as</p>
<p align="center">$\displaystyle S_{0,q-1} = S_{q, 2q-1} - \sum_0^{q-1} (2ax + aq) $</p>
<p>Of course, we quickly see that we can write the right sum in closed form. So we get <a name="eqsumRed"></a></p>
<p align="center">$\displaystyle S_{0,q-1} = S_{q, 2q-1} - a(q-1)(q) - aq^2 \ \ \ \ \ (6)$</p>
<p><a name="eqsumRed"></a></p>
<p> </p>
<p><a name="eqsumRed"></a> We can extend this by noting that ${\frac{a}{q}(x + hq)^2 = \frac{a}{q}x^2 + 2ahx + ahq}$, so that <a name="eqsumExt"></a></p>
<p align="center">$\displaystyle S_{0,q-1} = S_{hq, (h+1)q-1} - \sum_0^{q-1}(2ahx + ahq) \ \ \ \ \ (7)$</p>
<p><a name="eqsumExt"></a></p>
<p> </p>
<p><a name="eqsumExt"></a> Extending to multiple intervals at once, we get <a name="eqsumFin"></a></p>
<p>$\lambda S_{0,q-1} = \sum_{h=1}^\lambda \left( S_{hq, (h+1)q - 1} - h\sum_0^{q-1}(2ax + aq) \right) $</p>
<p>$S_{q, (\lambda + 1)q-1}-$ $\sum_{h=1}^{\lambda} $ $h \left(\sum_0^{q-1} (2ax + aq) \right) $</p>
<p>$S_{q, (\lambda + 1)q-1}-$ $\frac{\lambda (\lambda +1)}{2}[aq(q+1) + aq^2]$</p>
<p>So, in short, if we know the number of lattice points under the parabola on the interval ${[0,q-1]}$, then we know in ${O(1)}$ time the number of lattice points under the parabola on an interval ${[0,(\lambda + 1)q-1]}$.</p>
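<p>To make the claimed savings concrete, here is a minimal Python sketch (written for this note, not part of any original implementation). It computes the base sum in one $O(q)$ pass and then extends to the interval ${[0, (\lambda+1)q - 1]}$ with $O(1)$ further arithmetic, using the exact identity $\left\lfloor \frac{a}{q}(x+hq)^2 \right\rfloor = \left\lfloor \frac{a}{q}x^2 \right\rfloor + 2ahx + ah^2q$:</p>

```python
def S(a, q, lo, hi):
    """Directly sum floor(a * x^2 / q) for x in [lo, hi]."""
    return sum((a * x * x) // q for x in range(lo, hi + 1))

def S_fast(a, q, lam):
    """Count the sum over [0, (lam+1)q - 1] from one O(q) pass.

    Uses floor(a/q (x + h q)^2) = floor(a/q x^2) + 2ahx + a h^2 q,
    summed over x = 0..q-1 and h = 1..lam.
    """
    base = S(a, q, 0, q - 1)                       # the single O(q) pass
    sum_h = lam * (lam + 1) // 2                   # sum of h for h = 1..lam
    sum_h2 = lam * (lam + 1) * (2 * lam + 1) // 6  # sum of h^2 for h = 1..lam
    return (lam + 1) * base + a * q * (q - 1) * sum_h + a * q * q * sum_h2

assert S_fast(3, 7, 5) == S(3, 7, 0, 6 * 7 - 1)
```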
<p>Unfortunately, when I have tried to take this method back to the Polymath4-type problem, I haven't yet been able to rein in the error terms. But I suspect that there is more to be done using this method.</p>https://davidlowryduda.com/points-under-parabolaThu, 24 Nov 2011 03:14:15 +0000Two short problemshttps://davidlowryduda.com/two-short-problemsDavid Lowry-Duda<p>A brief post today:</p>
<p>I was talking about an algebraic topology problem from Hatcher's book (available freely on <a href="http://www.math.cornell.edu/~hatcher/AT/ATchapters.html">his website</a>) with two of my colleagues. In short, we were finding the fundamental group of some terrible space, and we thought that there might be a really slick almost entirely algebraic way to do a problem. We had a group $G$ and the exact sequence $0 \to \mathbb{Z} \to G \to \mathbb{Z} \to 0$, in short, and we wondered what we could say about $G$. Before I go on, I mention that we had been working on things all day, and we were a bit worn. So the calibre of our techniques had gone down.</p>
<p>In particular, we could initially think of only two examples of such a $G$, and we could show one of them didn't work. Of the five of us there, two of us thought that there might be a whole family of nonabelian groups that we were missing, but we couldn't think of any. And if none of us could think of any, could there be any? At the time, we decided no, more or less. So $G \approx Z \times Z$, which is what we wanted in the sense that we sort of knew this was the correct answer. As is often the case, it is very easy to rationalize poor work if the answer that results is the correct one.</p>
<p>We later made our work much better (in fact, we can now show that our group in question is abelian, or calculate it in a more geometric way). But this question remained - what counterexamples are there? There is a nonabelian group satisfying that exact sequence! But I'll leave this question for a bit -</p>
<blockquote>Find a nonabelian group (or family of groups) that satisfy $0 \to \mathbb{Z} \to G \to \mathbb{Z} \to 0$</blockquote>
<p>The second quick problem of this post. It's found in Ahlfors - Find a closed form for $\displaystyle \sum_{n \in \mathbb{Z}} \frac{1}{z^3 - n^3}$.</p>
<p>When I first had to do this, I failed miserably. I had all these divergent series about, and it didn't go so well. I try to factor $z^3 - n^3 = (z - n)(z - \omega n)(z - \omega^2 n)$, use partial fractions, and go. And... it's not so fruitful. You get three terms, each of which diverge (if taken independently from each other) for a given $z $. And you can do really possibly-witty things, like find functions that have the same poles and try to match the poles, and such. But the divergence makes things hard to deal with. But if you do $-(n^3 - z^3) = -[(n-z)(n-\omega z)(n - \omega^2 z)]$, everything works out very nicely. That's the thing with complex numbers - the 'natural factorization' may not always be unique.</p>https://davidlowryduda.com/two-short-problemsMon, 21 Nov 2011 03:14:15 +0000Datasetshttps://davidlowryduda.com/datasetsDavid Lowry-Duda<h1> Datasets </h1>
<p>Having good access to data is important. At various times I have collected,
cleaned, or acquired data that I found helpful, useful, or rare. I share these
here.</p>
<h5>Eurovision Song Contest Lyrics</h5>
<p>Although this certainly exists out there, I have a plaintext repository of the
lyrics from the Eurovision Song Contest at <a
href="https://github.com/davidlowryduda/EurovisionLyrics">https://github.com/davidlowryduda/EurovisionLyrics</a>.
In fact, this is step one of a project with my wife, and I left a bit of html
because I was feeling lazy (it's easy to fix, and the hard regex was done
already). If at any point this presents a problem to you, let me know and I
can clean it up.</p>
<h5>Political Terror Scale</h5>
<p>From the <a href="http://www.politicalterrorscale.org/index.php">Political
Terror Scale</a>, there are the <a
href="http://www.politicalterrorscale.org/download.php">PTS trends</a> (<a
href="/wp-content/uploads/2011/10/pts-2008-trends-10-09b.xlsx">PTS
2008 trends 10-09b</a>, as of 2008). This is primarily concerned with human
rights, and uses data from <a
href="http://www.amnesty.org/en/human-rights">Amnesty International</a> and the
<a href="http://www.state.gov/g/drl/rls/hrrpt/">US State Department</a>. The
great bit about PTS is that it transforms much of the raw data into a
comparable format. Unfortunately, there is a bit of subjectivity in the system
and almost all the data is simply ordinal.</p>
<p>However, I did once cowrite a paper (<a
href="/wp-content/uploads/2011/10/the-end-modlow.docx">Why
Democracies Repress</a>) using this data, for an empirical methods project back
in the day. This is another thing that I'll follow up on and write a paper with
a more complete and better method, sometime (don't read it too closely - it
was a bit exploratory).</p>
<h5>International Crisis Behavior</h5>
<p>In a similar vein, there is much information on the <a
href="http://www.cidcm.umd.edu/">International Crisis Behavior</a> site, and in
particular there is a large set of <a
href="http://www.cidcm.umd.edu/icb/data/">data</a> available. The idea is that
many interactions between different countries have been broken up and
categorized. I used this data as well to write the paper linked above.</p>
<h5>Mungoagoa Water Analysis</h5>
<p>I came across this when a friend from the Georgia Tech chapter of Engineers
Without Borders asked for a little statistical help. As far as I know, this is
the only public copy of this data.</p>
<p>A group of people went door to door and analyzed some of the hygienic practices
of local people in the village of Mungoagoa in 2009. They used a questionnaire
(<a
href="/wp-content/uploads/2011/10/hygiene-questionaire.docx">Hygiene
Questionnaire</a>) to collate their data. The full record is available here (<a
href="/wp-content/uploads/2011/10/mungoagoa-survey1.xls">MUNGOAGOA
SURVEY</a>). If interested, I have a copy of this data in SPSS format, but
organized in more convenient ways.</p>
<p>Please let me know if there is anything you think should make it here to this list.</p>https://davidlowryduda.com/datasetsMon, 31 Oct 2011 03:14:15 +0000A new toy problemhttps://davidlowryduda.com/a-new-toy-problemDavid Lowry-Duda<p>First, a short math question from Peter:</p>
<blockquote><strong>Question: </strong>What is the coefficient of $x^{12}$ in the simplified expression of $(a-x)(b-x) \dots (z-x)$?</blockquote>
<p>I often hate these questions, but this one gave me a laugh. Perhaps it was just
at the right time.</p>
<p>A police car passed me the other day with sirens wailing, and I was reminded
full-on of the Doppler effect. But the siren happened to agree with a song I
was whistling to myself at the time, and this made me wonder - suppose we had a
piece of music (or to start, a scale) that we wanted to hear, and we stood in
the middle of a perfectly straight train track. Now suppose the train had on it
a very loud (so that we could hear it no matter how far away it was) siren that
always held the same pitch. If the train moved so that via the Doppler effect,
we heard the song (or scale), what would its possible movements look like? How
far away would it have to be to not run us over?</p>
<p>Some annoying things come up like the continuity of the velocity and pitch, so
we might further specify that we have some sort of time interval. So we have a
scale, and the note changes every second. And perhaps we want the train to have
the exact right pitch at the start of every second (so that it would have
constant acceleration, I believe - not so exciting). Or perhaps we are a bit
looser, and demand only that the train hit the correct pitch each second. Or
perhaps we let it have instantaneous acceleration - I haven't played with the
problem yet, so I don't really know. I'm just throwing out the idea because I
liked it, and perhaps I'll play with it sometime soon.</p>
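<p>To make the train version concrete, here is a small numerical sketch of my own (not from any worked solution), using the textbook Doppler formula for a stationary listener, $f_{\mathrm{obs}} = f_{\mathrm{src}} \cdot c/(c + v)$, where $v$ is the source's velocity directly away from the listener; the speed of sound and the equal-tempered scale ratios below are assumed values.</p>

```python
# Sketch: what radial velocity must the train have so that its fixed-pitch
# siren is heard at a given pitch ratio?  Assumes the standard Doppler
# formula f_obs = f_src * c / (c + v), with v > 0 meaning "receding".
c = 343.0  # speed of sound in m/s (assumed room-temperature value)

def required_velocity(rho):
    """Radial velocity (m/s, positive = receding) producing pitch ratio rho."""
    return c * (1.0 / rho - 1.0)

# ratios of an equal-tempered major scale relative to the tonic
scale = [2 ** (k / 12) for k in (0, 2, 4, 5, 7, 9, 11, 12)]
for rho in scale:
    print(f"ratio {rho:.3f} -> v = {required_velocity(rho):+8.1f} m/s")
```

Notes going up from the tonic come out as negative (approaching) velocities, which already hints at why the "don't run us over" constraint is the interesting part of the problem.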
<p>Now, the reason I like it is because we can go up a level. Suppose we have a
car instead, and we're in an infinitely large, empty, parking lot (or perhaps
not empty - that'd be interesting). Suppose the car had a siren that wailed a
constant pitch, too. What do the possible paths of the car look like? How does
one minimize the distance the car travels, or how far from us it gets, or how
fast it must go (ok - this one isn't as hard as the previous 2)? It's more
interesting, because there's this whole other dimension thing going on.</p>
<p>And even better: what about a plane? I sit on the ground and a plane flies
overhead. What do its paths look like?</p>
<p>All together, this sounds like there could be a reasonable approach to some
aspect of this problem. Under the name, "Planes, trains, and automobiles" - or
perhaps in order - "Trains, automobiles, and planes," this could be a humorous
and fun article for something like Mathematics Magazine or AMMonthly. Or it
might be really hard. I don't know - I haven't played with it yet. I can only
play with so many things at a time, after all.</p>https://davidlowryduda.com/a-new-toy-problemSun, 23 Oct 2011 03:14:15 +0000Reading Mathhttps://davidlowryduda.com/reading-mathDavid Lowry-Duda<p>First, a recent gem from MathStackExchange:</p>
<blockquote><strong>Task:</strong> Calculate $\displaystyle \sum_{i =
1}^{69} \sqrt{ \left( 1 + \frac{1}{i^2} + \frac{1}{(i+1)^2} \right) }$ as
quickly as you can with pencil and paper only.</blockquote>
<p>Yes, this is just another cute problem that turns out to have a very pleasant
solution. Here's how this one goes. (If you're interested - try it out. There's
really only a few ways to proceed at first - so give it a whirl and any idea
that has any promise will probably be the only idea with promise).</p>
<p>Looking at $1 + \frac{1}{i^2} + \frac{1}{(1+i)^2}$, find a common
denominator and add to get $\dfrac{i^4 + 2i^3 + 3i^2 + 2i +
1}{i^2(i+1)^2} = \dfrac{(i^2 + i + 1)^2}{i^2(i+1)^2}$. Aha - it's a perfect
square, so we can take its square root, and now the calculation is very
routine, almost.</p>
<p>The next clever idea is to say that $\dfrac{ (i^2 + i + 1)}{i(i+1)} =
\dfrac{(i^2 + 2i + 1)}{i(i+1)} - \dfrac{i}{i(i+1) }$, which we can rewrite as
$\dfrac{(i+1)^2}{i(i+1)} - \dfrac{1}{i+1} = 1 + \dfrac{1}{i} -
\dfrac{1}{i+1}$. So it telescopes and behaves very, very nicely. In particular,
we get $69 + 1 - \frac{1}{70}$.</p>
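<p>Since every step above is exact rational arithmetic, the telescoping answer is easy to double-check by machine. Here is a quick sketch (the variable names are my own):</p>

```python
from fractions import Fraction

# Verify that 1 + 1/i^2 + 1/(i+1)^2 is the square of (i^2+i+1)/(i(i+1)),
# and that the sum of the square roots telescopes to 69 + 1 - 1/70.
total = Fraction(0)
for i in range(1, 70):
    radicand = 1 + Fraction(1, i**2) + Fraction(1, (i + 1) ** 2)
    root = Fraction(i**2 + i + 1, i * (i + 1))
    assert root**2 == radicand  # the perfect-square step
    total += root

print(total)  # 4899/70, i.e. 70 - 1/70
```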
<p>With that little intro out of the way, I get into my main topics of the day.
I've been reading a lot of different papers recently. The collection of
journals that I have access to at Brown is a little different than the
collection I used to get at Tech. And I mean this in two senses: firstly, there
are literally different journals and databases to read from (the print
collections are surprisingly comparable - I didn't realize how good of a math
resource Tech's library really was). But in a second sense, the amount of math
that I comprehend is greater, and the amount of time I'm willing to spend on a
paper to develop the background is greater as well.</p>
<p>That aside, I revisited a topic that I used to think about all the time at the
start of my undergraduate studies: math education. It turns out that there are
journals dedicated solely to math education, <a
href="http://www.springerlink.com/content/t6l8734012v2/">see here for
example</a>. And almost all the journals are either on JSTOR or have
open-access straight from Springerlink, which is great. I have no intention of
becoming a high school teacher or anything, but I became interested as soon as
I began to come across people with radically different high school experiences
from mine.</p>
<p>My high school tried to protect its students, sometimes in ways that I didn't
like. It was the sort of place that, in short, held me back in the following
sense: they wouldn't let anyone take 'too hard' of a course-load for fear that
they would overwork themselves and therefore fail, or do poorly, or overstress,
in everything. In more direct terms, this meant that you had to petition to
take 3 AP classes and had to really work to take 4. Absolutely no one was
allowed to take more than 4 in one school year - so that many of my friends had
to choose what science to take. Those of us who were willing all had sort of
the same schedule in mind - if you did an art (band/choir/orchestra, usually),
then in 10th grade you took AP Statistics, 11th AP Language, 12th AP Lit, AP
Calc, AP (foreign language or Gov or European History or Econ), and an AP
science - if no art, then you could take an additional AP science in 11th
grade. At least, that's how it worked while I was around.</p>
<p>So the big decisions were always around the senior year. For me, I had to ask:
should I take AP Chem or AP Physics? (I ended up taking Physics, which was
great - it was the curiosity and intuition from mechanics that led to me
becoming a mathematician now). Many of my friends asked the same sort of
questions. And it was very annoying - I hate the idea that the school holds us
back, ever. It also turned out that one of my classes was terrible. I was so
annoyed that one of my four choices ended up being bad that I wrote an
embarrassing letter (which I regret to this day).</p>
<p>In short, I felt slighted by the system, and I've considered the system ever
since. One of the articles I read was about the general idea that the sciences
taught in schools and even at entry-undergraduate level in college are
fundamentally different in both motivation and skill set from the ideas held by
scientists and those who advance those subjects. The interesting part about
the article was the amount of feedback that the journal received - enough to
merit multiple copies of letters back and forth to make it to the next
printings of the journal.</p>
<p>That particular article was very careful to simply assert that the current
paths of education in the sciences and the sciences themselves are different,
as opposed to positing that any particular idea or method is above or better
than any other. But of course, it's perhaps the most natural response. Should
they be different? Why does one learn math or the sciences in school? For that
matter, why does one learn history (also oblique and hard to answer, but
something that I maintain is important for at least the reason that it was the
only substitute I ever had for an ethics class in my primary and secondary
education).</p>
<p>These are hard questions, and ones I'm not willing to directly address here at
this time. But I will quickly note that in both Tech and Brown, I am stunned at
how many people lack any sort of intuition for the four basic operations. (I
once tutored someone who, upon being asked what 748 times 342 was, responded
that it didn't even matter because <em>"math was made up at that point. It's
not like someone has sat down and counted that high."</em> Oof. That hurts.)
Let's not even talk about being able to add or subtract fractions. As a worker
at the 'Math Resource Center,' I've learned that about a quarter of the time,
helping people with their calculus classes is really a matter of helping these
people manipulate fractions. So if the purpose of primary and secondary
education is to get people to understand arithmetic operations and fractions,
it's not doing so well. John Allen Paulos should write yet another book,
perhaps (Innumeracy is a good read).</p>
<p>Should they be different? That is, is there much reason for the sciences and
the education of the sciences to align in method and motivation? I'm not
certain, but perhaps they shouldn't pretend to be the same. I only ever learned
arithmetic, as opposed to math, throughout my primary and secondary education
with 2 exceptions: geometry (which had a surprisingly large logic content for
me, and introduced me to interesting ideas) and calculus. Calling it math is a
disservice - as Paulos mentions in his books, the general negativity towards
math allows people to claim innumeracy (<em>"I'm not really a numbers
person"</em>) with pride - no one would ever say that they weren't very good
with letters. But reading <em>is useful</em>, or rather widely recognized to be
useful and expressive.</p>
<p>I end by mentioning that I think it is more important to come across real ideas
of science and math at an early age, say elementary school, than in middle school.
In elementary and middle school, there really isn't much difference between
the maths and the sciences, so I clump them together. But in my mind, the
initial goals of science and math education should be to spark creativity and
wonder, while English and reading courses stress critical thinking (somehow,
math, science, and English all get the boring end of the stick while reading
gets full hold over the realm of creativity - how backwards I must be).</p>
<p>But those 4th graders whose teacher guided them towards the bee research, <a
href="http://www.wired.com/wiredscience/2010/12/kids-study-bees/">that has now
been published under the 4th graders' names</a> - don't you think that their
view of science will be a much happier and, ultimately, more accurate one? Exciting,
collaborative, uncertain with a scientific method-based structure. But then
again, perhaps the lesson that my friends and I learned from our own high
school is the most relevant: if you want to do something, then don't let others
stand in your way. A little motivation and discipline goes a long way.</p>https://davidlowryduda.com/reading-mathSat, 22 Oct 2011 03:14:15 +0000A month, you say?https://davidlowryduda.com/a-month-you-sayDavid Lowry-Duda<p>Much has changed in the last month.</p>
<p>I moved to Rhode Island and began grad school. That's a pretty big change. I'm
suddenly much more focused in my studies again (undeniably a good thing),
figuring out what I will do. Solid.</p>
<p>And I'm struggling to get acquainted with the curious structure of classes at
Brown. There are many, many calculus classes here, for example. As a tutor, I'm
somewhat expected to know these things. Classes on differentiation,
integration, fast-paced and (maybe) slow-paced versions, calc I but with
vectors incorporated - all of these fell under the blanket heading of Calc I at
Georgia Tech. And my feelings are mixed. It's an interesting idea. The general
freedom to make mistakes at Brown is something that I firmly stand behind,
though.</p>
<p>But there is one thing that I think is very poorly done - why is there not more
interdepartmental cooperation? Brown is an ivy, and we're close to many other
schools that are excellent at many things in math. Why is there no form of
cooperation between these universities? This is something that I absolutely
must change. Somehow. I'll work on that.</p>
<p>Ok, let's actually do some math.</p>
<p>I recently came across a <a title="The Fundamental Theorem of Algebra: A Most
Elementary Proof" href="http://arxiv.org/abs/1109.1459" target="_blank">fun
paper</a>, The Fundamental Theorem of Algebra: A Most Elementary Proof, by
Oswaldo Rio Branco de Oliveira on proving the Fundamental Theorem of Algebra
with no bells, whistles, or ballyhoo in general. All that is assumed is the
Bolzano-Weierstrass Theorem and that polynomials are continuous. Here is the
gist of the proof.</p>
<div class="theorem">
<p>Let $P(z) = a_0 + a_1 z + ... + a_n z^n, a_n \not = 0,$ be a complex polynomial
with degree $n \geq 1$. Then P
has a zero.</p>
</div>
<div class="proof">
<p>We have that $|P(z)| \geq |a_n| |z|^n -
|a_{n-1}||z|^{n-1} - \cdots - |a_0|$, and so $\lim_{|z| \to \infty}
|P(z)| = \infty$. By continuity, $|P|$ has a global minimum at some $z_0
\in \mathbb{C}$. We suppose wlog that $z_0 = 0$. Then $|P(z)|^2 -
|P(0)|^2 \geq 0 \forall z \in \mathbb{C}$. Then we may write $P(z) = P(0)
+ z^k Q(z)$ for some $k \in \{1, ..., n \}$, and where $Q$ is a
polynomial and $Q(0) \neq 0$ (the idea being that one factored that part
out already).</p>
<p>Picking some $\zeta \in \mathbb{C}$, substituting $z = r \zeta, r
\geq 0$ into the above inequality, and dividing by $r^k$, we get: $2
\mathrm{Re} [ \overline{P(0)} \zeta ^k Q(r \zeta)] + r^k |\zeta ^k Q(r \zeta
)|^2 \geq 0 \forall r > 0, \forall \zeta$. The left side is a continuous
function of r for nonnegative r, and so taking the limit as $r \to 0$,
one finds $2 \mathrm{Re} [ \overline{P(0)} \zeta ^k Q(0)] \geq 0, \forall
\zeta$.</p>
<p>Now suppose $\alpha := \overline{P(0)}Q(0) = a + b i$. For $k$ odd,
setting $\zeta = \pm 1$ and $\zeta = \pm i$ in this inequality lets
us conclude that $a = b = 0$. So then we have $P(0) = 0$, and the
odd case is complete. Now before I go on, I give a brief lemma, which I'll not
prove here. But it just requires using binomial expansions and keeping track of
lots of exponents and factorials.</p>
<div class="lemma" data-text="Credited to Estermann">
<p>For $\zeta = \left( 1 + \frac{i}{k} \right)^2$ and $k \geq 2$, even, we have
that $\mathrm{Re}[\zeta ^k] < 0 < \mathrm{Im} [\zeta
^k]$.</p>
</div>
<p>For k even, we don't have the handy cancellation that we used above. But let us
choose $\zeta$ as in this lemma, and write $\zeta ^k = x + iy;
\quad x < 0, y > 0$. Then we can substitute $\zeta ^ k$ and $\overline{
\zeta ^k}$ in the inequality, and a little work shows that $\mathrm{Re}[\alpha
(x \pm iy)] = ax \mp by \geq 0$. So $ax \geq 0$ and
since $x < 0$, we see $a \leq 0$. But then $a = 0$.
Similarly, we get $b = 0$ after considering $\mp by \geq 0$. Then we
again see that $P(0) = 0$, and the theorem is proved as long as you believe
Estermann's Lemma.</p>
</div>
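<p>Estermann's lemma is easy to spot-check numerically before believing it. Here is a quick sketch (a sanity check, of course, not a proof):</p>

```python
# Check Estermann's lemma: for zeta = (1 + i/k)^2 with k >= 2 even,
# Re[zeta^k] < 0 < Im[zeta^k].  (As k grows, zeta^k tends to e^{2i},
# which indeed has negative real and positive imaginary part.)
for k in range(2, 101, 2):
    zeta = (1 + 1j / k) ** 2
    w = zeta ** k
    assert w.real < 0 < w.imag, (k, w)
print("Re[zeta^k] < 0 < Im[zeta^k] holds for all even k up to 100")
```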
<p>I like it when such results have relatively simple proofs. The first time I
came across the FTA, we used lots of machinery to prove it. Some integration
and differentiation on series, in particular.</p>
<p>And now that I'm vaguely settled and now that I see new things routinely,
perhaps I'll update this more.</p>https://davidlowryduda.com/a-month-you-sayWed, 21 Sep 2011 03:14:15 +0000Fermat Factorization IIhttps://davidlowryduda.com/fermat-factorization-iiDavid Lowry-Duda<p>In this post, we will look into some of the more intricate refinements of
Fermat's Method of Factorization.</p>
<p>Recall the general idea of Fermat's method for factoring a number $N$ : we seek
to write $N$ as a difference of two squares, $N = a^2 - b^2 = (a+b)(a-b) \quad
; (a+b), (a-b) > 1$. To do this, we start guessing, $a = \lceil \sqrt N
\rceil, \; a = \lceil \sqrt N \rceil + 1, \; \ldots$. We then test whether $a^2 - N$ is a
square or not.</p>
<h3><span style="text-decoration: underline;">A Method due to R. Sherman Lehman</span></h3>
<p>Our first improvement allows $N = pq$ to be factored in $O(N^{1/3})$ operations
(i.e. addition, subtraction, multiplication, division, and taking a square
root).</p>
<p>We will need two lemmas before we can introduce the improvement.</p>
<p>For a positive integer $r$, let $S_r$ be the sequence of rational numbers
$\frac{a}{b}; \; \; 0 \leq a \leq b, \; b > 0, \; ab \leq r; \; \;
\mathrm{gcd} (a,b) = 1$. And let this sequence be arranged in order, from
smallest to largest. So, for example, $S_{15}$ would be</p>
<p>$\frac{0}{1}, \; \frac{1}{15}, \; \frac{1}{14}, \; \frac{1}{13}, \;
\frac{1}{12}, \; \frac{1}{11}, \; \frac{1}{10}, \; \frac{1}{9}, \; \frac{1}{8},
\; \frac{1}{7}, \; \frac{1}{6}, \; \frac{1}{5}, \; \frac{1}{4}, \;
\frac{2}{7}, \; \frac{1}{3}, \; \frac{2}{5}, \; \frac{1}{2}, \; \frac{3}{5}, \;
\frac{2}{3}, \; \frac{3}{4}, \; \frac{1}{1}$</p>
<p>This is a sort of relative of the Farey Series of order $n$. I will refer to it as a Semi-Farey Series (I made this up, but so be it). Now for the statement of our first lemma.</p>
<div class="lemma">
<p>If $\frac{a}{b}$ and $\frac{a'}{b'}$ are two successive terms of $S_r$, then we have that $a'b - ab' = 1$ and $(a+a')(b + b') > r$.</p>
</div>
<div class="proof">
<p>Note that we can generate the Semi-Farey Series of order r in the following
manner. Start with $\frac{0}{1}$ and $\frac{1}{1}$. Between two successive
terms of the sequence thus generated, say $\frac{a}{b}$ and $\frac{a'}{b'}$,
insert their 'mediant' $\frac{a + a'}{b + b'}$ whenever $(a+a')(b + b') \leq
r$. Note that this term is necessarily between the two initial fractions, and
is also in reduced form (exactly because $a'b - ab' = 1$, it turns out - it's
an iff relation). This is a known property of Farey Sequences, so I will not
include the proof in this write-up (as always, I can clarify through
comments/edits - I still feel odd leaving proofs 'as exercises').$\diamondsuit$</p>
</div>
<p>We will break up the interval $[0, 1]$ with divisions corresponding to the
points of $S_r$. In fact, each point of $S_r$ will correspond to the mediant
before and after it, also known as the points that precede and succeed it in
the series $S_{r+1}$. If $\frac{a'}{b'}, \; \frac{a}{b}, \; \frac{a''}{b''}$
are three successive terms of $S_r$, then we will associate the subinterval</p>
<p>$$\left[ \dfrac{a + a'}{b + b'} , \dfrac{a + a''}{b + b''} \right]$$</p>
<p>By Lemma 1, we have that</p>
<p>$$\dfrac{a + a'}{b + b'} = \dfrac{a}{b} - \dfrac{1}{b(b+b')}, \qquad \dfrac{a + a''}{b + b''} = \dfrac{a}{b} + \dfrac{1}{b(b+b'')} \tag{1}$$</p>
<p>This brings us to our second lemma.</p>
<div class="lemma">
<p>If $\alpha$ is in the subinterval corresponding to $\frac{a}{b}$ in the partition of $S_r$ described above, then
$$\frac{a}{b} \left( 1 - \delta \left(1 + \tfrac{1}{4} \delta ^2\right) ^{1/2} + \tfrac{1}{2}
\delta ^2 \right) \leq \alpha \leq \frac{a}{b} \left(1 + \delta \left(1 + \tfrac{1}{4} \delta
^2\right) ^{1/2} + \tfrac{1}{2} \delta ^2 \right)$$
where $\delta = (ab(r + 1))^{-1/2}$.</p>
</div>
<div class="proof">
<p>Again, let $\frac{a'}{b'}$ and $\frac{a''}{b''}$ be the terms preceding and following $\frac{a}{b}$, respectively, in $S_r$. Suppose also that $\alpha$ is in the interval corresponding to $\frac{a}{b}$ with $1 \leq a \leq b$. Then by (1) and Lemma 1,
$$r + 1 \leq (a + a')(b + b') = \dfrac{a + a'}{b + b'} (b + b')^2 =
\dfrac{a}{b} (b + b')^2 - \dfrac{b + b'}{b} \tag{2}$$
and
$$r + 1 \leq \frac{a}{b} (b + b'')^2 + \frac{b + b''}{b} \tag{3}$$</p>
<p>If we use (2), solve for $(b + b')$ using the quadratic formula, and remember that $b + b' > 0$, we get
$$b + b' \geq \dfrac{1 + \sqrt{1 + 4ab(r + 1)}}{2a}$$
and
$$\dfrac{1}{b(b + b')} \leq \dfrac{a}{b} \cdot \dfrac{2}{1 + \sqrt{1 + 4ab(r+1)} } =
\dfrac{a}{b}\left( \delta \left( 1 + \tfrac{1}{4} \delta^2 \right)^{1/2} - \tfrac{1}{2} \delta^2 \right)$$</p>
<p>With (3), we get
$$b + b'' \geq \dfrac{-1 + \sqrt{ 1 + 4ab(r + 1) } }{2a}$$
and
$$\dfrac{1}{b(b + b'')} \leq \dfrac{a}{b} ( \frac{1}{2} \delta^2 + \delta \sqrt{1 + \frac{1}{4} \delta^2} )$$</p>
<p>These complete the proof of the second lemma.$\diamondsuit$</p>
</div>
<p>We are now ready to present and prove this method. The main idea is that we
will now be looking at $x^2 - y^2 = 4kn, \; k = ab; \; 1 \leq k \leq r$. We
will divide up the unit interval according to our scheme above.</p>
<div class="theorem">
<p>Suppose that $n$ is a positive odd integer and $r$ is an integer with $1 \leq r \leq \sqrt n$. If $n = pq$, with $p, q$ both primes and $\sqrt{ \dfrac{n}{r+1} } < p \leq \sqrt n$, then there are nonnegative integers $x$, $y$, and $k$ s.t.</p>
$$x^2 - y^2 = 4kn, \quad 1 \leq k \leq r$$
$$x \equiv k + 1 \mod 2$$
$$x \equiv k + n \mod 4 \quad \text{if } k \text{ is odd}$$
$$0 \leq x - \sqrt{4kn} \leq \dfrac{1}{4(r+1)} \sqrt{ \dfrac{n}{k} }$$
and, very importantly,
$$p = \mathrm{min} ( \mathrm{gcd} (x + y, n), \; \mathrm{gcd} (x - y, n) )
\tag{*}$$
<p>And if $n$ is a prime, then there are no integers satisfying these requirements.</p>
</div>
<p>There is a detail in my proof of this theorem that is hanging me up a bit, so
I'll have to return to it. But let's take that for granted for a moment, and
see how fast we can obtain a factorization of a number $n = pq$ with $p, q$
primes.</p>
<p>First, there are $O( \left( \dfrac{n}{r} \right) ^{1/2} )$ divisions done to
eliminate any small factors less than $\left( \dfrac{n}{r+1} \right) ^ {1/2}$.
Counting the elementary operations, and assuming that the extraction of a root
is one operation, we get $\sum _{1 \leq k \leq r} O\left( \frac{1}{r}
\left(\frac{n}{k}\right)^{1/2} + 1\right)$ operations. Let $r = \lfloor 0.1 \cdot n^{1/3}
\rfloor$. This is not quite optimal, but it's pretty close. Then we see that
$O(n^{1/3})$ elementary operations are necessary.</p>
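<p>To see how the pieces fit together, here is a rough implementation sketch of the overall procedure in Python. The interval widths are simplified (padded a little rather than taken from the exact bound in the theorem), and the function name is my own, so treat this as an illustration rather than a faithful, proved-correct version of Lehman's algorithm:</p>

```python
import math

def lehman(n):
    """Sketch: factor odd n in roughly O(n^(1/3)) steps.  Phase 1 is trial
    division up to n^(1/3); phase 2 searches x^2 - y^2 = 4kn for 1 <= k <= r."""
    r = int(round(n ** (1 / 3)))
    # phase 1: trial division removes any factor below n^(1/3)
    for d in range(2, r + 1):
        if n % d == 0:
            return d
    # phase 2: for each k, scan the short interval of x just above sqrt(4kn)
    for k in range(1, r + 1):
        fourkn = 4 * k * n
        x0 = math.isqrt(fourkn)
        if x0 * x0 < fourkn:
            x0 += 1
        # interval length ~ n^(1/6) / (4 sqrt(k)), padded slightly
        width = int(n ** (1 / 6) / (4 * math.sqrt(k))) + 1
        for x in range(x0, x0 + width + 1):
            y2 = x * x - fourkn
            y = math.isqrt(y2)
            if y * y == y2:
                g = math.gcd(x + y, n)
                if 1 < g < n:
                    return g
    return n  # no factor found: n is prime

print(lehman(10403))  # 103 (since 10403 = 101 * 103)
```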
<p>And this can still be improved.</p>https://davidlowryduda.com/fermat-factorization-iiSun, 07 Aug 2011 03:14:15 +0000From the Exchangehttps://davidlowryduda.com/exchangeDavid Lowry-Duda<p>I speak of Math Stackexchange frequently for two reasons: because it is
fantastically interesting and because I waste inordinate amounts of time on it.
But I would like to again share some of the more interesting things from the
exchange here.</p>
<p>Firstly, in my last <a title="Factoring II"
href="/?p=245">post on factoring</a>, I spoke of
Sophie Germain's identity. I've had a case of Mom's Corollary with this - a
question was recently asked on MathSE to "prove that $x^4 + 4$ is
composite for positive integer x." How is this done? In one step, as $x^4
+ 4 = (x^2 + 2x + 2)(x^2 - 2x + 2)$. There is the minor task of recognizing that
for integers $x > 1$, $x^2 - 2x + 2 > 1$, and so the factorization
is nontrivial (the case $x = 1$ must be excluded, since $1^4 + 4 = 5$ is prime).</p>
<p>Some may be thinking, "What is Mom's Corollary?" Mom's Corollary is a situation
named by my high school English Teacher, Dr. Covel. It is astounding how often
one comes across a new concept right after one has learnt it.
In other words, when your mother tells you something, it's surprising how often
her advice will come up within the next 3 days. When it does - it's a Mom's
Corollary case.</p>
<p>Secondly, there was a question based on an old GRE question.</p>
<blockquote>"A total of x feet of fencing is to form 3 sides of a level rectangular yard. What is the maximum area in terms of x?"</blockquote>
<p>This is not a hard question except that it defies our normal idea of
associating optimal areas with squares and circles. But as opposed to doing the
typical sort of optimization route, the MathSE user Jonas Meyer gives a
solution that allows our intuition to soar. The idea is, to 'place a mirror'
next to the missing side of the rectangular yard. Then the problem becomes to
maximize the area in terms of 2x, and to translate it back to the 1x case. I
love it when people see such symmetries and shortcuts in problems. (It's now a
square - super handy).</p>
<p>Thirdly, I learned of <a title="Sinc Sums"
href="http://web.cs.dal.ca/%7Ejborwein/sinc-sums.pdf">a certain paper</a> with
some really interesting identities. It is widely known that $\displaystyle \int _0 ^{\infty} \frac{\sin{(x)}}{x} \mathrm{d} x =
\frac{\pi}{2}$. It is not as well known that $\displaystyle \int _0
^{\infty} \left( \frac{ \sin{(x)} }{x} \right) ^2 \mathrm{d}x = \frac{\pi}{2}$
as well. But this paper references the following, absolutely nonintuitive to me
fact:</p>
<p style="padding-left: 30px;">$\displaystyle \int _0 ^{\infty} \frac{
\sin{(x)} } {x} \mathrm{d} x = $ $\displaystyle \int _0 ^{\infty} \left(
\frac{ \sin{(x)} }{x} \right) ^2 \mathrm{d}x = \frac{\pi}{2} = $ $\displaystyle \sum_{n = 1} ^ {\infty} \frac{\sin{( n)} } {n}+ \frac{1}{2} =
\displaystyle \sum_{n = 1} ^ {\infty} \left( \frac{\sin{( n)} } {n} \right) ^2
+ \frac{1}{2}$.</p>
<p>And therefore, also that:</p>
<p style="padding-left: 30px;">$\displaystyle \int _{- \infty} ^{\infty} \frac{ \sin{(x)} } {x} \mathrm{d} x = $ $\displaystyle \int _{- \infty} ^{\infty} \left( \frac{ \sin{(x)} }{x} \right) ^2 \mathrm{d}x = $ $\displaystyle \sum_{n = - \infty } ^ {\infty} \frac{\sin{( n)} } {n} = \displaystyle \sum_{n = - \infty } ^ {\infty} \left( \frac{\sin{( n)} } {n} \right) ^2 = \pi$.</p>
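<p>These identities are easy to test numerically. The sketch below checks the one-sided versions with partial sums; the cutoff is arbitrary, and the slow (conditional) convergence of the first sum limits the attainable precision:</p>

```python
import math

# Partial sums of sum sin(n)/n and sum (sin(n)/n)^2 for n = 1..N;
# both should sit near pi/2 - 1/2.
N = 200_000  # arbitrary cutoff
s1 = sum(math.sin(n) / n for n in range(1, N + 1))
s2 = sum((math.sin(n) / n) ** 2 for n in range(1, N + 1))

print(abs(s1 + 0.5 - math.pi / 2))  # small
print(abs(s2 + 0.5 - math.pi / 2))  # small
```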
<p>Finally, I happened to read a post on whether $\infty$ was an odd or an
even number. Okay, this is silly, and not really relevant to understanding the
concept of $\infty$. But instead of the - infinity is not a number -
response, one of my favorite responses was along the following lines: an even
number is a number that can be paired off into equal subdivisions. So in a
sense, infinity is even, as it can certainly be paired off (associate $n$
with $n + 1$). Of course, it is also the case that $\omega$ and $\omega + 1$
have the same cardinality.</p>https://davidlowryduda.com/exchangeThu, 28 Jul 2011 03:14:15 +0000Factoring IIIhttps://davidlowryduda.com/factoring-iiiDavid Lowry-Duda<p>This is a continuation of my last factoring post. I thought that I might have
been getting ahead of myself, as I left out a relatively simple idea that led
to great gains in factorization. The key of this post is Fermat's
Factorization, and a continuation by R. Lehman.</p>
<p>The main idea here is very simple. When a number $N$ can be written as
the difference of two squares, i.e. $N = a^2 - b^2$, then we can factor
$N$ as $N = (a+b)(a-b)$. And as long as neither are 1, it yields a
proper factorization.</p>
<p>Now I mention a brief aside, not relevant to Fermat Factorization. There is
another, lesser-known identity called Sophie Germain's Identity:
$a^4 + 4 b^4 = ( (a+b)^2 + b^2)((a - b)^2 +b^2) =$
$(a^2 + 2ab + 2b^2)(a^2 - 2ab + 2b^2) $
By the way, a Sophie Germain prime is a prime $p$ such that $2p +
1$ is also a prime. It is unknown whether or not there are infinitely many
Sophie Germain primes.</p>
<p>Now, the idea of this factorization method is to try different values of $
a$ in the hope that $a^2 - N = c$ is a square, i.e. $c = b^2$ for
some integer $b$. So the algorithm proper might be like:</p>
<blockquote>
<ol>
<li>Set $a = \lceil \sqrt N \rceil$</li>
<li>$c = a^2 - N$</li>
<li>Test if c is a square. If it is, look at $a - \sqrt c$ and $a + \sqrt c$. If not, go to step 4.</li>
<li>Increase a by 1 and set $c = a^2 - N $. Return to step 3.</li>
</ol>
</blockquote>
<p>Of course, one should end the algorithm sometime if no factorization is found.
So perhaps one should use a counter. But that's beside the point.</p>
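<p>The steps above, counter and all, can be sketched in a few lines (the function name and the `max_steps` cutoff are my own):</p>

```python
import math

def fermat_factor(n, max_steps=10**6):
    """Fermat's method for odd n: find a with a^2 - n a perfect square,
    giving n = (a - b)(a + b)."""
    a = math.isqrt(n)
    if a * a < n:
        a += 1           # a = ceil(sqrt(n))
    for _ in range(max_steps):
        c = a * a - n
        b = math.isqrt(c)
        if b * b == c:   # step 3: c is a perfect square
            return a - b, a + b
        a += 1           # step 4: increase a and try again
    return None          # gave up after max_steps iterations

print(fermat_factor(10403))  # (101, 103)
```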
<p>Clearly, this algorithm works best when the factors are about the size of
$\sqrt N$. Done naively, Fermat's Algorithm works slower than trial
division in the worst case. But there is a quick way to combine Fermat's
Algorithm and trial division that is faster than doing either separately. This
is only the first of a few possible optimizations.</p>
<p>Let's look at an example. Suppose we want to factor $N = 123456789123$
(astoundingly enough, 1234567891 is prime, and was going to be my first
example). One sees that $\lceil \sqrt N \rceil = 351365$. So with trial
division, we would need to test numbers up to 351365. But let's do a couple of
iterations of Fermat's Method:</p>
<p>Step:</p>
<ol>
<li>$351365^2 - N = 574102$; and $\sqrt{574102} = 757.7$</li>
<li>$351366^2 - N = 1276833$; and $\sqrt{1276833} = 1129.97$</li>
<li>$351367^2 - N = 1979566$ and $\sqrt{1979566} = 1406.97$</li>
<li>$351368^2 - N = 2682301$ and $\sqrt{2682301} = 1637.77$</li>
<li>...</li>
</ol>
<p>So no result yet, it might seem. But Fermat's Method does not miss factors in
the range that it sweeps through. Thus from these 4 iterations, we have looked
for factors as low as $351368 - 1637.77 = 349730$. So in just 4
iterations, we narrowed the range to be tested by trial division from everything
between 1 and 351365 to everything between 1 and 349730. These four iterations
thus eliminated more than 1600 of the most expensive trial division candidates,
over a hundred of which are prime.</p>
<p>In general, if one chooses a bound $d > \sqrt N$ and uses Fermat to find
factors between $\sqrt N$ and $d$, then the resulting necessary
bound for trial division is $d - \sqrt{d^2 - N}$.</p>
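In the worked example, four iterations brought $a$ up to $d = 351368$; a quick computation (a sketch, with variable names mine) confirms the resulting bound:

```python
from math import sqrt

# Hybrid Fermat + trial division: after running Fermat's method up to
# a = d, only trial divisors below d - sqrt(d^2 - N) remain untested.
N = 123456789123
d = 351368                   # value of a after the 4 iterations above
bound = d - sqrt(d * d - N)
print(round(bound))          # 349730, matching the text
```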
<p>There is another easy modification to speed up this test, involving a sieve.
Let's make a couple of observations.</p>
<h3> First Note </h3>
<p>Note that every square is congruent to $0, 1, 4, 5, 9,\; \mathrm{or} \; 16 \mod 20$. Suppose we have calculated a particular value of $a^2 - N \mod 20$. These residues are periodic in $a$: they repeat each time $a$ increases by 10. So calculating a few values of $a^2 - N$ allows us to decide which values of $a$ to continue to calculate. This leads to calculating only a fraction of the $a$ values and a fraction of the subsequent square root calculations.</p>
<p>There is nothing special about 20, really. Any modulus will do: 10, 20, or (for larger moduli) a prime power. Or one can combine a few moduli with the Chinese Remainder Theorem. Unfortunately, this only changes the overall time complexity by a constant factor.</p>
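A quick sketch of the sieve idea (the helper is my own): compute which residues of $a \bmod 20$ can possibly make $a^2 - N$ a square.

```python
def candidate_residues(N, modulus=20):
    """Residues of a (mod modulus) for which a^2 - N can be a square."""
    squares = {b * b % modulus for b in range(modulus)}
    return sorted(a for a in range(modulus)
                  if (a * a - N) % modulus in squares)

# Squares mod 20 take only the six values quoted above.
print(sorted({b * b % 20 for b in range(20)}))   # [0, 1, 4, 5, 9, 16]
# For N = 123456789123, only 4 of the 20 residues survive:
print(candidate_residues(123456789123))          # [2, 8, 12, 18]
```

So for this $N$, only a fifth of the $a$ values need a square-root test at all.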
<h3> Second Note </h3>
<p>While it is clear that Fermat's Method works best for factors near $\sqrt
N$, that doesn't happen too often. But if one can somehow divine (or guess) an
approximate ratio of the two factors $p$ and $q$, say something like $
\frac{p}{q}$, then one can choose a rational number $\frac{l}{m}$ near
that value and then perform Fermat's Method on $Nlm = pm \cdot ql$, which
should pull out the factors $pm \; \mathrm{and} \; ql$ first.</p>
<p>This is turning out to be longer than I was expecting, so I'm splitting this
post into 2 (that way I can finish the other half tomorrow). In the next, we
will continue to improve upon Fermat's Factorization: first by splitting up
the interval wittily, and second by coming up with a method that heuristically
splits a composite number $n$ in $O(n^{1/4 + \epsilon})$ steps by using a
slight generalization. Finally, I have heard of an improvement due to Grenier that
allows polynomial time factorization if the primes are relatively close to each
other.</p>https://davidlowryduda.com/factoring-iiiTue, 26 Jul 2011 03:14:15 +0000Factoring IIhttps://davidlowryduda.com/factoring-iiDavid Lowry-Duda<p>In continuation of my <a title="Factoring I"
href="/?p=151">previous post on factoring</a>, I
continue to explore these methods. Moving on from Pollard's $\rho$ method, we now
consider Pollard's p-1 algorithm.</p>
<p>Before we consider the algorithm proper, let's consider some base concepts.</p>
<p>Firstly, I restate Euler's Theorem (of course, Euler was prolific, and so there
are altogether too many things called Euler's Theorem - but so it goes):</p>
<blockquote>
If $\mathrm {gcd} (a,n) = 1$, i.e. if a and n are relatively
prime, then $a^{\phi (n)} \equiv 1 \mod n$, where $\phi (n)$ is the
Euler Totient Function or Euler's Phi Function (perhaps as opposed to someone
else's phi function?). As a corollary, we get Fermat's Little Theorem, which
says that if $\mathrm {gcd} (a,p) = 1$, with p a prime, then $
a^{p-1} \equiv 1 \mod p$.
</blockquote>
<p>The second base concept is called smoothness. A number is called B-smooth if
none of its prime factors is greater than B. A very natural question might be:
why do we always use the letter B? I have no idea. Another good question might
be: what's an example? Well, the number $24 = 2^3 \cdot 3$ is 3-smooth
(and 4-smooth, 5-smooth, etc). We call a number B-power smooth if all prime
powers $p_i ^{n_i}$ dividing the number are less than or equal to B. So
24 is 8-power smooth, but not 7-power smooth. Note also that in this case, the
number divides $\mathrm{lcm} (1, 2, 3, \ldots, B)$.</p>
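A small sketch to make the definition concrete (my own helper, not from the original post):

```python
def is_power_smooth(n, B):
    """True iff every prime power p^k exactly dividing n is at most B."""
    d, m = 2, n
    while d * d <= m:
        if m % d == 0:
            pk = 1
            while m % d == 0:
                m //= d
                pk *= d           # pk becomes the full power of d dividing n
            if pk > B:
                return False
        d += 1
    return m <= B                 # leftover m is 1 or a single prime

# 24 = 2^3 * 3 is 8-power smooth but not 7-power smooth:
print(is_power_smooth(24, 8), is_power_smooth(24, 7))   # True False
```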
<p>Pollard's (p-1) algorithm is called the "p-1" algorithm because it is a
specialty factorization algorithm. Suppose that we are trying to factorize a
number N and we choose a positive integer B, and that there is a prime divisor
p of N such that p-1 is B-power smooth (we choose B beforehand - it can't be
too big or the algorithm becomes computationally intractable). Now for a
positive integer a coprime to p, we get $a^{p-1} \equiv 1 \mod p$. Since
p-1 is B-power smooth, we know that $(p-1) | m = \mathrm{lcm} (1, 2, ...,
B)$. Thus $a^m \equiv 1 \mod p$, or rather $p | (a^m - 1)$.</p>
<p>Thus $ \mathrm{gcd} (a^m - 1, N) \geq p > 1$. And so one hopes that this
factor is nontrivial and proper.</p>
<p>This is the key idea behind the entire algorithm.</p>
<blockquote>
<p>Pollard's (p-1) Algorithm</p>
<ol>
<li>Choose a smoothness bound B (often something like $10^6$)</li>
<li>Compute $m = \mathrm{lcm} (1, 2, ..., B)$</li>
<li>Set a = 2.</li>
<li>Compute $x = a^m - 1 \mod N$ and $g = \mathrm{gcd} (x, N)$</li>
<li>If we've found a nontrivial factor, then that's grand. If not, and if $a < 20$ (say), then replace a by a+1 and go back to step 4.</li>
</ol>
</blockquote>
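The blockquote translates almost line for line into Python (a sketch; I cap $a$ at 20 as the text suggests, and the function names are mine):

```python
from math import gcd

def lcm_upto(B):
    """m = lcm(1, 2, ..., B)."""
    m = 1
    for k in range(2, B + 1):
        m = m * k // gcd(m, k)
    return m

def pollard_p_minus_1(N, B=10**3):
    """Pollard's p-1 algorithm, following the steps in the blockquote."""
    m = lcm_upto(B)                    # step 2
    for a in range(2, 21):             # steps 3 and 5
        g = gcd(pow(a, m, N) - 1, N)   # step 4: gcd(a^m - 1 mod N, N)
        if 1 < g < N:
            return g                   # nontrivial factor: that's grand
    return None
```

For example, `pollard_p_minus_1(299, B=4)` returns 13, since $13 - 1 = 12$ is 4-power smooth while $23 - 1 = 22$ is not.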
<p>So how fast is the algorithm? Well, computing the lcm will likely take
something on the order of $O(B \log_2 B)$ complexity (using the Sieve
of Eratosthenes, for example). The modular exponentiation will take $O( \;
(\log_2 N)^2)$ time. Calculating the gcd takes only $O( \; (\log_2 N)^3)$,
so the overall algorithm takes $O(B \log_2 B \cdot (\log_2 N)^2 +
(\log_2 N)^3)$ or so. In other words, it's only efficient if B is small; but
when B is small, fewer primes can be found.</p>
<p>I've read that Dixon's Theorem guarantees that there is a probability of about
$\dfrac{1}{27}$ that a value of B the size of $N^{1/6}$ will yield a
factorization. Unfortunately, this grows untenably large. In practice, this
algorithm is an excellent way to eliminate the possibility of small factors (or
to successfully divide out small factors). Using $B = 10^6$ will identify
smaller factors and allows different techniques, such as the elliptic curve
factorization algorithm, to try to find larger factors.</p>
<p>I'll expand on more factoring later.</p>https://davidlowryduda.com/factoring-iiMon, 25 Jul 2011 03:14:15 +0000Prime rich and prime poorhttps://davidlowryduda.com/prime-rich-and-prime-poorDavid Lowry-Duda<p>A short excursion -</p>
<p>The well-known Euler's Polynomial $x^2 - x + 41$ generates primes at
each of the first 40 natural numbers. It is sometimes called a <em>prime-rich
polynomial</em>. There are many such polynomials, and although Euler's
Polynomial is perhaps the best-known, it is not the best. The best that I have
heard of is $(x^5 - 133 x^4 + 6729 x^3 - 158379 x^2 + 1720294x -
6823316)/4$, which generates 57 primes. But this morning, I was reading an
article on Ulam's Spiral when I heard of the opposite - a prime-poor
polynomial. The polynomial $x^{12} + 488669$ doesn't produce a prime
until $x = 616980$. Who knew?</p>
<p>And to give them credit, that prime-rich polynomial was first discovered by
Jaroslaw Wroblewski & Jean-Charles Meyrignac in one of Al Zimmerman's
Programming Contests (before being found by a few other teams too).</p>https://davidlowryduda.com/prime-rich-and-prime-poorMon, 25 Jul 2011 03:14:15 +0000Giving Journalshttps://davidlowryduda.com/giving-journalsDavid Lowry-Duda<p>Firstly, I wanted to note that keeping a frequently-updated blog is hard. It
has its own set of challenges that need to be overcome. Bit by bit.</p>
<p>But today, I talk about a sort of funny experience. Suppose for a moment that
you had acquired a set of low-level math journals throughout the undergrad
days, journals like the College Mathematics Journal, Mathematics Magazine, etc.
Presuming that you didn't want to keep them in graduate school (I don't -
they're heavy and I have online access), what would you do with them?</p>
<p>This was how I found myself after I graduated from Tech. At first, I thought
that either a professor or student in the math department would want them - but
they didn't. Those that cared to read them already had access to them, and
those that didn't already read them didn't want to (they're more fun reading
than particularly educational, after all). Next I tried a couple of local
libraries. Each has a relatively good selection of material on high-school
level science and math subjects, so I thought it would be reasonable. But the
people in charge of accepting materials at the libraries were of the opinion
that no one would ever come to them to learn math. Having material that
requires the knowledge of calculus (which is really the only pre-requisite for
most of these journals) was considered absolutely beyond the reach of the
normal populace.</p>
<p>This struck me. When I was in high school, I read a lot. And I went to the
library a lot. At the time, I was not nearly so dedicated to the idea of
becoming a mathematician - that didn't strike me until halfway through my
senior year. I didn't know what I wanted, and so I read lots of diverse things
from lots of subject areas, and I loved that the library allowed me access to
the next level of material, whatever that might be. So I think of these
journals as a sort of bridge between grade school calculus and arithmetic on
the one hand, and college math and research on the other. This is a big gap! I think that one
of the big reasons that math has such bad connotations is that most people
think it as merely arithmetic. To be honest, I didn't know what math was until
I went to Georgia Tech. It is lucky that I liked what I went to Tech to do -
but ultimately a fluke.</p>
<p>So the fact that the libraries wouldn't accept these journals rubbed me the
wrong way, just like the common misperception (which, ironically, is caught by
my inbuilt spell-checker) that math and arithmetic are one and the same. There
is a quote that I have liked since I first heard it: "Children are capable of
an enormous amount, and the problem with our educational system is the
grown-ups." It's from the documentary <em>The Lottery</em>.</p>
<p>The next place I tried was a used-bookstore where I've gone for many years,
and that has consistently had a few textbooks in stock. It was sort of a long
shot, and it is not so surprising to me that they didn't take them (they sort
of feel like any other periodical, which they also don't take). Well, how
troublesome.</p>
<p>I finally talked to my old high school, and my old calculus teacher. In
hindsight, I should have tried this before I went to the bookstore. I thought
they could either go to the library there, or that my teacher would be willing
to have them himself. And as if it were obvious, they accepted them
immediately. They were excited.</p>
<p>So was I. I had thought it would be a trivial matter - they are good, fun
journals. I loved them, and so would others. But it was not so easy. Go figure.</p>https://davidlowryduda.com/giving-journalsSat, 23 Jul 2011 03:14:15 +0000The Collatz Conjecture - recent development?https://davidlowryduda.com/the-collatz-conjecture-recent-developmentDavid Lowry-Duda<p>On his <a href="http://www.johndcook.com/blog/2011/06/01/collatz-3n-1-conjecture-solved/">site</a>,
John D. Cook recently publicized a <a
href="http://preprint.math.uni-hamburg.de/public/papers/hbam/hbam2011-09.pdf">paper</a>
by Gerhard Opfer that claimed to solve the Collatz Conjecture. The Collatz
Conjecture is simple to state:</p>
<blockquote>Collatz (or the 3n + 1 conjecture):
Starting at any number do the following: if n is even, divide by 2; if n is odd, multiply by 3 and add 1.
The conjecture states that no matter what positive integer you start at, you will end up at 1 (the so-called 1-4-2 loop).</blockquote>
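The map itself is a one-liner; here is a small sketch (the function name is mine):

```python
def collatz_steps(n):
    """Count steps for n to reach 1 under the Collatz map."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# 27 is famously stubborn for such a small starting value:
print(collatz_steps(27))   # 111
```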
<p>At first, I had high hopes for the paper. It seems relatively well-written and
was submitted to the Mathematics of Computation, a very respectable journal. I
even sent out a brief email about the paper. But the paper is flawed. The
problem, I think, can be succinctly summarized by the following: he relies on
the assumption that starting with any number $ n_0$, one will eventually hit a
number that is less than $ n_0$. When stated like this, it seems obvious that
there is a problem, but he only relied on that one number (rather than the
apparent infinite descent that could follow). The exact problem occurs with his
'annihilation argument' on page 11 of the pdf above. He more or less states
that one can start at 1 and reach every number by doing a sort of reverse
Collatz function (he's actually a bit wittier than that), but does not prove
it.</p>
<p>More commentary can be found on <a
href="http://www.reddit.com/r/math/comments/hpn3g/learned_gentlemen_of_rmath_im_counting_on_you/c1xo3qq">reddit</a>, reddit <a href="http://www.reddit.com/r/math/comments/hqqph/collatz_3n_1_conjecture_solved/c1xp6iu">again</a>,
and on <a href="http://math.stackexchange.com/questions/43051/collatz-finally-solved/43082#43082">
math.SE</a> (a question protected by Qiaochu Yuan - go him).</p>
<p>I use this as an intro to a sort of joke that goes around mathematician's
circles. A while back, Sean Carroll wrote up 'The Alternative-Science
Respectability Checklist,' and it's <em>awesome</em>. Find it <a
href="http://blogs.discovermagazine.com/cosmicvariance/2007/06/19/the-alternative-science-respectability-checklist/">here</a>.
It turns out that Scott Aaronson wrote up a similar <a
href="http://www.scottaaronson.com/blog/?p=304">article</a>, inspired by Sean
Carroll, that is titled "Ten Signs a Claimed Mathematical Breakthrough is
Wrong."</p>
<p>His inspiration was the age-old problem that simply stated problems encourage
generations of people to attack them, and frequently to think that they have
made progress. So he asks:</p>
<blockquote><em>Suppose someone sends you a complicated solution to a famous
decades-old math problem, like P vs. NP. How can you decide, in ten minutes or
less, whether the solution is worth reading?</em></blockquote>
<p>And thus his 10 signs were created. I happen to have heard a few people say
that this most recent paper on the Collatz Conjecture only failed three: #6
(The paper jumps into technicalities without presenting a new idea), #8 (The
paper wastes lots of space on standard material), and #10 (The techniques just
seem too wimpy for the problem at hand). {though perhaps #8 is debatable -
some say it's related to a different convention of writing papers, but I don't
know about any of that}</p>
<p>In my experience, I rely mostly on #1 (it's not written in $\TeX$), #4 (it
conflicts with some impossibility result), and #7 (it doesn't build on any
previous work). But both of these articles are very funny, though not exactly
precise nor entirely true.</p>https://davidlowryduda.com/the-collatz-conjecture-recent-developmentMon, 06 Jun 2011 03:14:15 +0000Factoring Ihttps://davidlowryduda.com/factoring-iDavid Lowry-Duda<p>I remember when I first learnt that testing for primality is in P (as noted in
the paper <a href="http://www.cse.iitk.ac.in/users/manindra/algebra/primality_v6.pdf"
target="_blank">PRIMES is in P</a>, which explains the AKS algorithm). Some
time later, I was talking with a close friend of mine (who has since received
his bachelors in Computer Science). He had thought it was hard to believe that
it was possible to determine whether a number was prime without factoring that
number. That's pretty cool. The AKS algorithm doesn't even rely on anything
really deep - it's just a clever application of many (mostly) elementary
results. Both of us were well aware of the fact that numbers are hard, as an
understatement, to factor. My interest in factoring algorithms has suddenly
surged again, so I'm going to look through some factoring algorithms (other
than my <a title="An interesting (slow) factoring algorithm"
href="/?p=129" target="_blank">interesting
algorithm</a>, that happens to be terribly slow).</p>
<p>The most fundamental of all factoring algorithms is to simply try lots of
factors. One immediately sees that one only needs to try prime numbers up to
the size $ \sqrt{n}$. Of course, there is a problem - in order to only
trial divide by primes, one needs to know which numbers are primes. So if one
were to literally only divide by primes, one would either maintain a Sieve of
Eratosthenes or perform something like AKS on each number to see if it's prime.
Or one could perform Miller-Rabin a few times (with different bases) to try
'almost only' primes (not in the measurable sense). The prime number theorem
says that $ \pi (n) \sim \dfrac{n}{\log n}$, and so one would expect about
$ O(\sqrt{n})$ or more bit operations. This is why we call this the
trivial method.</p>
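For reference, the trivial method in Python (a sketch; this version divides by every integer rather than only primes, which is the simplest way to implement it):

```python
from math import isqrt

def trial_division(n):
    """Factor n by trying every candidate divisor up to sqrt(n)."""
    factors = []
    d = 2
    while d <= isqrt(n):
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)    # whatever remains is prime
    return factors

print(trial_division(8051))  # [83, 97]
```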
<p>The first major improvement didn't come about until 1975, when John Pollard
proposed a new algorithm, commonly referred to as the Pollard-$ \rho $
algorithm. Already, the complexity of the algorithm is far more intense than
the previous. The main idea of the algorithm is the following observation: if
we are trying to factor $ n$ and $ d$ is a relatively small factor of
$ n$, then it is likely there exist two numbers $ x_i$ and $
x_j$ such that $ d|(x_i - x_j)$ but $ n \nmid (x_i - x_j)$, so that
$ \gcd(x_i - x_j, n) > 1$ and a factor has been found. But how is this
implemented if we don't know what $ d|n$?</p>
<p>That's a very interesting question, and it has a sort of funny answer. Perhaps
the naive way to use this idea would be to pick a random number $ a$ and
another random number $ b$. Then check to see if $ \gcd(a-b,n) >
1$. If it's not, try a random number $ c$, and check $ \gcd(c-a,n)$
and $ \gcd(c-b,n)$, and so on. This is of course very time-consuming, as
on the jth step one needs to do j-1 gcd tests.</p>
<p>But we can be (far) wittier. This is one of those great insights that made me
say - whoa, now there's an idea - when I first heard it. Suppose that instead,
we have some polynomial $ f(x)$ that we will use to pick our random
numbers, i.e. we will choose our next random number $ x_n$ by the
iterative method $ x_n = f(x_{n-1})$. Then if we hit a point where $
x_j \equiv x_k \mod{d}$, with $ k < j$, then we will also have that
$ f(x_j) \equiv f(x_k) \mod{d}$. This is how the method got its name -
after a few random numbers, the sequence will loop back in on itself just like
the letter $\rho$.</p>
<p>Of course, the sequence couldn't possibly go on for more than d numbers without
having some repeat mod d. But the greatest reason why we use this function is
because it allows us to reduce the number of gcd checks we need to perform.
Suppose that the 'length' of the loop is $l$: i.e. if $ x_i \equiv x_j$,
with $ j > i$, then $l$ is the smallest positive integer such that $
x_{j+l} \equiv x_j \equiv x_i$. Also suppose that the loop starts at the $m$th
random number. Then if we are at the $k$th number, with $k \geq m$ and $l|k$, then we
are 'in the loop' so to speak. And since $ l|k$, we have $l|2k$ as well. So then
$ x_{2k} \equiv x_k \mod{d}$, and so $ \gcd(x_{2k} - x_k, n) > 1$.</p>
<p>Putting this together means that we should check $ \gcd(x_k - x_{k/2}, n)$
for every even k, and that's that. Now we do not have to do k-1 gcd
calculations on the kth number, but instead one gcd calculation on every other
random number. We left out the detail about the polynomial $ f(x)$, which
might seem a bit problematic. But most of the time, we just choose a polynomial
of the form $f(x) = x^2 + a$, where $ a \not \equiv 0, -2 \mod{n}$. (This
just prevents hitting a degenerate sequence 1,1,1,1,1...).</p>
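Putting the pieces together, here is a sketch of the tortoise-and-hare version (my own implementation of the idea above; on failure one retries with a different $a$):

```python
from math import gcd

def pollard_rho(n, a=1, x0=2):
    """Pollard's rho with Floyd cycle detection and f(x) = x^2 + a."""
    f = lambda x: (x * x + a) % n
    tortoise = hare = x0
    while True:
        tortoise = f(tortoise)              # x_k
        hare = f(f(hare))                   # x_{2k}
        g = gcd(abs(hare - tortoise), n)
        if g == n:
            return None                     # failure: retry with another a
        if g > 1:
            return g

print(pollard_rho(8051))   # 97
```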
<p>Of course, there are a few additional improvements that can be made. This,
which I have heard called the "Tortoise and the Hare" approach (named after the
slow moving $ x_i$ being compared to the fast moving $ x_{2i}$), is
not the only way of finding cycles. There is a method called Brent's Variant
that finds cycles in a different way that reduces the number of modular
multiplications. The key idea in his is to have the 'tortoise' sit at $
x_{2^i}$ and compare to the 'hare' who moves from $ x_{2^i + 1}$ up to
$ x_{2^{i+1}}$. Then the tortoise sits at the next 2 power. The main idea
of the savings is that at each step, Brent's algorithm only needs to evaluate
f(x) once, while implementing Pollard's algorithm requires 3 (one for the
tortoise, two for the hare).</p>
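Brent's variant as described might be sketched like so (my implementation, without the batched gcd trick):

```python
from math import gcd

def brent_rho(n, a=1, max_steps=10**6):
    """Pollard's rho with Brent's cycle detection (a sketch).

    The tortoise parks at x_{2^i} while the hare walks from
    x_{2^i + 1} up to x_{2^{i+1}}; f is evaluated once per step.
    """
    f = lambda x: (x * x + a) % n
    tortoise = hare = 2
    power, steps = 1, 0
    while steps < max_steps:
        tortoise = hare                 # park the tortoise at x_{2^i}
        for _ in range(power):
            hare = f(hare)              # the hare advances one step
            steps += 1
            g = gcd(abs(hare - tortoise), n)
            if g == n:
                return None             # bad luck: retry with another a
            if g > 1:
                return g
        power *= 2                      # tortoise jumps to the next power of 2
    return None
```

For example, factoring $10403 = 101 \cdot 103$ succeeds for some small value of $a$.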
<p>In addition, one might not want to perform Euclid's Algorithm after each step.
Instead, one might do 100 steps at a time, and then perform Euclid's algorithm
on the product (effectively replacing 99 gcd computations by 99 modular
multiplications, which saves time).</p>
<p>There is also undoubtedly an art to choosing the polynomial well, but I don't
know it. Fortunately, this sort of algorithm can easily be implemented in
parallel with other polynomials. Unfortunately, although it picks up small
factors quickly, its worst-case running time is poor. In fact, as far as I know, its big-O running time isn't fully
proven. The sudden jump in factoring!</p>https://davidlowryduda.com/factoring-iFri, 27 May 2011 03:14:15 +0000Visiting Gaussian Quadraturehttps://davidlowryduda.com/visiting-gaussian-quadratureDavid Lowry-Duda<p>I frequent math.stackexchange these days (if you haven't heard of it, you
should go check <a title="Math SE" href="http://math.stackexchange.com">it</a>
out), and every once in a while I get stunned by a solution or a thoughtful
question. When I took my Numerical Analysis class during my last semester as an
undergrad (last semester, woo hoo!), I remember coming up against Gaussian
Quadrature estimates for integration. It's a very cool thing, but the system of
equations to solve seems very challenging - in fact, it feels like one must use
a numerical approximation method to solve them. While I don't have any big
qualms with numerical solutions, I much prefer exact solutions. Here is the
best method I've seen in solving these (this is for 3 points, but we see how it
could be used for 1,2, and 4 points as well), and all credit must be given to
the user Aryabhatta at Math SE, from <a
href="http://math.stackexchange.com/questions/13174/solving-a-peculiar-system-of-equations">this
post</a>.</p>
<p>The task is easy to state: solve the following system:
$$ \begin{align*}
a + b + c &= m_0 \\
ax + by + cz &= m_1 \\
ax^2 + by^2 + cz^2 &= m_2 \\
ax^3 + by^3 + cz^3 &= m_3 \\
ax^4 + by^4 + cz^4 &= m_4 \\
ax^5 + by^5 + cz^5 &= m_5
\end{align*}$$
We are to solve for x, y, z, a, b, and c; the $m_i$ are given. This is
unfortunately nonlinear. And when I first came across such a nonlinear system,
I barely recognized that it would be so annoying to solve. It would
seem that for too many years, the solutions to most of the questions that I've
come across were too pretty to demand such 'vulgar' attempts to solve
them. Anyhow, one could use iterative methods to arrive at a solution. Or one
could use the Golub-Welsch Algorithm (which I also discovered at Math SE). One
could use resultants, which I did in my class. Or one could be witty.</p>
<p>Let's introduce three new variables. Let x, y, and z be the roots of $ t^3 +
pt^2 + qt + r$. Then we have
$$\begin{align*}
x^3 + px^2 + qx + r = 0\\
y^3 + py^2 + qy + r = 0\\
z^3 + pz^2 + qz + r = 0
\end{align*}$$
Multiply equation (1) by $ a$, equation (2) by $ b$, and equation (3) by $ c$
and add. Then we get
$$ m_3 + pm_2 + qm_1 + rm_0 = 0 $$
Multiply equation (1) by $ ax$, equation (2) by $ by$, and equation (3) by $
cz$ and add. Then we get
$$ m_4 + pm_3 + qm_2 + rm_1 = 0 $$
Finally (you might have guessed it) multiply equation (1) by $ ax^2$, equation
(2) by $ by^2$, and equation (3) by $ cz^2$ and add. Then we get
$$ m_5 + pm_4 + qm_3 + rm_2 = 0 $$
Now (4),(5), (6) is just a set of 3 linear equations in terms of the variables
p, q, r. Solving them yields our cubic. We can then solve the cubic (perhaps
using Cardano's formula, etc.) for x, y, and z. And once we know x, y, and z we
have only a linear system to solve to find the weights a, b, and c. That's way
cool!</p>https://davidlowryduda.com/visiting-gaussian-quadratureFri, 27 May 2011 03:14:15 +0000Daily Math in Zagrebhttps://davidlowryduda.com/zagrebDavid Lowry-Duda<p>So I'm in Zagreb now, and naturally this means that I've not updated this blog
in a while. But this is not to say that I haven't been doing math! In fact,
I've been doing lots, even little things to impress the girl. 'Math to
impress the girl?' you might say, a little insalubriously. Yes! Math to
impress the girl!</p>
<p>She is working on finishing her last undergrad thesis right now, which is what
brings us to Croatia (she works, I play – the basis for a strong relationship,
I think... but I'm on my way to becoming a mathematician, which isn't really so
different to play). After a few 'average' days of thesis writing, she has one
above and beyond successful day. This is good, because she is very happy on
successful days and gets dissatisfied if she has a bad writing day. So what
does a knowledgeable and thoughtful mathematician do? It's time for a
mathematical interlude -</p>
<h4>Gambling and Regression to the Mean</h4>
<p>There is a very well-known fallacy known as the Gambler's Fallacy, which is
best explained through examples. This is the part of our intuition that sees a
Roulette table spin red 10 times in a row and thinks, 'I bet it will spin black
now, to 'catch up.' ' Or someone tosses heads 10 times in a row, and we might
start to bet that it's more likely than before to toss tails now. Of course,
this is fallacious thinking – neither roulette nor coins has any memory. They
don't 'remember' that they're on some sort of streak, and they have the same
odds from one toss to another (which we assume to be even – conceivably the
coin is double-sided, or the Roulette wheel is flat and needs air, or
something).</p>
<p>The facts that flipping a coin always has about even odds and that the odds
of Roulette are equally against the gambler on every spin are what allow
casinos to expect to make money. This also distinguishes them from games with 'memory,' such as
blackjack (I happen to think that Bringing Down the House is a fun read). But
that's another story.</p>
<p>But the related concept of 'Regression to the Mean' holds more truth – this
says that the means of various sets of outcomes should eventually approximate
the expected mean (perhaps called the 'actual mean' – flipping a coin should
have about half heads and half tails, for instance). So if someone flips a coin
20 times and gets heads all 20 times, we would expect them to get fewer than 20
heads in the next 20 throws. Note, I didn't say that tails are more likely than
heads!</p>
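A quick simulation sketch of the distinction (the numbers here are my own illustration): a streak carries no memory, yet the long-run average still settles near one half.

```python
import random

random.seed(42)   # reproducible flips

# 100,000 fair coin flips: 1 for heads, 0 for tails.
flips = [random.randint(0, 1) for _ in range(100_000)]

# The overall mean regresses toward 1/2 ...
print(sum(flips) / len(flips))   # close to 0.5
# ... even though any particular 20-flip window can be lopsided, and a
# streak of heads never changes the odds of the next flip.
```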
<h4>Back to the Girl</h4>
<p>So how does this relate? I anticipated that the next day of writing would not
be as good as the previous, and that she might accordingly be a bit
disappointed with herself for it. And, the next day – she was! But alas, I came
prepared with sour cherry juice (if you've never had it, you're missing out),
and we picked up some strawberries. Every day is better if it includes sour
cherry juice and strawberries.</p>https://davidlowryduda.com/zagrebTue, 24 May 2011 03:14:15 +0000Integration by Partshttps://davidlowryduda.com/integration-by-partsDavid Lowry-Duda<p>I suddenly have college degrees to my name. In some sense, I think that I
should feel different - but all I've really noticed is that I've much less to
do. Fewer deadlines, anyway. So now I can blog again! Unfortunately, I won't
quite be able to blog as much as I might like, as I will be traveling quite a
bit this summer. In a few days I'll hit Croatia.</p>
<p>Georgia Tech is magnificent at helping its students through their first few
tough classes. Although the average size of each of the four calculus classes
is around 150 students, they are broken up into 30 person recitations with a TA
(usually a good thing, but no promises). Some classes have optional 'Peer Led
Undergraduate Study' programs, where TA-level students host additional hours to
help students master exercises over the class material. There is free tutoring
available in many of the freshman dorms on most, if not all, nights of
the week. If that doesn't work, there is also free tutoring available from the
Office of Minority Education or the Department of Success Programs - the host
of the so-called 1-1 Tutoring program (I was a tutor there for two years). One
can schedule 1-1 appointments between 8 am and something like 9 pm, and you can
choose your tutor. For the math classes, each professor and TA holds office
hours, and there is a general TA lounge where most questions can be answered,
regardless of whether one's TA is there. Finally, there is also the dedicated
'Math Lab,' a place where 3-4 highly educated math students (usually math grad
students, though there are a couple of math seniors) are available each hour
between 10 am and 4 pm (something like that - I had Thursday from 1-2 pm, for
example). It's a good theory.</p>
<p>During Dead Week, the week before finals, I had a group of Calc I students
during my Math Lab hour. They were asking about integration by parts - when in
the world is it useful? At first, I had a hard time saying something that they
accepted as valuable - it's an engineering school, and the things I find
interesting do not appeal to the general engineering population of Tech. I
thought back during my years at Tech (as this was my last week as a student
there, it put me in a very nostalgic mood), and I realized that I associate IBP
most with my quantum mechanics classes with Dr. Kennedy. In general, the way to
solve those questions was to find some sort of basis of eigenvectors, normalize
everything, take more inner products than you want, integrate by parts until it
becomes meaningful, and then exploit as much symmetry as possible. Needless to
say, that didn't satisfy their question.</p>
<p>There are the very obvious answers. One derives Taylor's formula and error with
integration by parts:</p>
<p>$\begin{array}{rl}
f(x) &= f(0) + \int_0^x f'(x-t) \,dt\\
&= f(0) + xf'(0) + \displaystyle \int_0^x tf''(x-t)\,dt\\
&= f(0) + xf'(0) + \frac{x^2}2f''(0) + \displaystyle \int_0^x \frac{t^2}2 f'''(x-t)\,dt
\end{array}
$ ... and so on.</p>
<p>But in all honesty, Taylor's theorem is rarely used to estimate values of a
function by hand, and arguing that it is useful to know at least the bare bones
of the theory behind one's field is an uphill battle. This would prevent me
from mentioning the derivation of the <a title="Euler-Maclaurin"
href="http://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula#Derivation_by_mathematical_induction">Euler-Maclaurin
formula</a> as well.</p>
<p>I appealed to aesthetics: Taylor's Theorem says that $ \displaystyle
\sum_{n\ge0} x^n/n! = e^x$, but repeated integration by parts yields that $
\displaystyle \int_0^\infty x^n e^{-x} dx=n!$. That's sort of cool - and not as
obvious as it might appear at first. Although I didn't mention it then, we also
have the pretty result that n integration by parts applied to $ \displaystyle
\int_0^1 \dfrac{ (-x\log x)^n}{n!} dx = (n+1)^{-(n+1)}$. Summing over n, and
remembering the Taylor expansion for $ e^x$, one gets that $ \displaystyle
\int_0^1 x^{-x} dx = \displaystyle \sum_{n=1}^\infty n^{-n}$.</p>
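That last identity (the 'sophomore's dream') can be checked numerically; here is a midpoint-rule sketch (the grid size and truncation point are arbitrary choices of mine):

```python
from math import exp, log

def f(x):
    """x^{-x} = exp(-x log x), extended continuously by f(0) = 1."""
    return exp(-x * log(x)) if x > 0 else 1.0

N = 100_000
integral = sum(f((k + 0.5) / N) for k in range(N)) / N   # midpoint rule
series = sum(n ** (-n) for n in range(1, 25))            # sum of n^{-n}
print(integral, series)   # both close to 1.29129
```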
<p>Finally, I decided to appeal to that part of the student that wants only to do
well on tests. Then for a differentiable function $ f$ and its inverse $
f^{-1}$, we have that:
$$ \int f(x)\,dx = xf(x) - \int xf'(x)\,dx = xf(x) - \int f^{-1}(f(x))f'(x)\,dx = xf(x) - \int f^{-1}(u)\,du. $$
In other words, knowing the integral of $ f$ gives the integral of $ f^{-1}$
very cheaply, and this is why we use integration by parts to integrate things
like $ \ln x$, $ \arctan x$, etc. Similarly, one gets the reduction formulas
necessary to integrate $ \sin^n (x)$ or $ \cos^n (x)$. If one believes that
being able to integrate things is useful, then these are useful. There is of
course the other class of functions such as $ \cos(x)\sin(x)$ or $ e^x
\sin(x)$, where one integrates by parts twice and solves for the integral. I
still think that's really cool - sort of like getting something for nothing.</p>
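<p>These antiderivatives are easy to double-check with a computer algebra system. A small sketch using sympy (my tooling choice here, not something from the original discussion):</p>

```python
import sympy as sp

# Verify the antiderivatives that integration by parts produces for ln x
# and arctan x, as discussed above.
x = sp.symbols('x', positive=True)
assert sp.simplify(sp.integrate(sp.log(x), x) - (x*sp.log(x) - x)) == 0
assert sp.simplify(sp.integrate(sp.atan(x), x)
                   - (x*sp.atan(x) - sp.log(1 + x**2)/2)) == 0
print("both antiderivatives check out")
```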
<p>And at the end of the day, they were satisfied. But this might be the crux of
the problem that explains why so many Tech students, despite having so many
resources for success, still fail - they have to trudge through a whole lot of
'useless theory' just to get to the 'good stuff.' Unfortunately, it's hard to
say what is 'useless' and what is 'good', and these answers aren't uniform from
person to person.</p>https://davidlowryduda.com/integration-by-partsThu, 12 May 2011 03:14:15 +0000Towards an expression for pihttps://davidlowryduda.com/towards-an-expression-for-piDavid Lowry-Duda<p>We start with $ \cos( \dfrac{\xi}{2})\cos(\dfrac{\xi}{4}) ...
\cos(\dfrac{\xi}{2^n})$. Recall the double angle identity for sin: $ \sin
2 \theta = 2\sin \theta \cos \theta $. We will use this a lot.</p>
<p>Multiply our expression by $ \sin(\dfrac{\xi}{2^n})$. Then we have
$$ \cos( \dfrac{\xi}{2})\cos(\dfrac{\xi}{4}) \cdots \cos(\dfrac{\xi}{2^n})\sin(\dfrac{\xi}{2^n})$$
Using the double angle identity, we can reduce this:
$$ = \dfrac{1}{2} \cos( \dfrac{\xi}{2})\cos(\dfrac{\xi}{4}) \cdots \cos(\dfrac{\xi}{2^{n-1}})\sin(\dfrac{\xi}{2^{n-1}}) =$$
$$ = \dfrac{1}{4} \cos( \dfrac{\xi}{2})\cos(\dfrac{\xi}{4}) \cdots \cos(\dfrac{\xi}{2^{n-2}})\sin(\dfrac{\xi}{2^{n-2}}) =$$
$$ \ldots $$
$$ = \dfrac{1}{2^{n-1}}\cos(\xi / 2)\sin(\xi / 2) = \dfrac{1}{2^n}\sin(\xi)$$
So we can rewrite this as
$$ \cos( \dfrac{\xi}{2})\cos(\dfrac{\xi}{4}) \cdots \cos(\dfrac{\xi}{2^n}) = \dfrac{\sin \xi}{2^n \sin( \dfrac{\xi}{2^n} )} \quad \text{for } \xi \neq k \pi.$$
Because we know that $ \lim_{x \to 0} \dfrac{\sin x}{x} = 1$, we see that
$\lim_{n \to \infty} \dfrac{\xi / 2^n}{\sin(\xi / 2^n)} = 1$. So, letting $n \to \infty$, we see that
$$ \cos( \dfrac{\xi}{2})\cos(\dfrac{\xi}{4}) \cdots = \dfrac{\sin \xi}{\xi},$$
$$ \xi = \dfrac{\sin(\xi)}{\cos(\dfrac{\xi}{2})\cos(\dfrac{\xi}{4})\cdots}$$
Now we set $ \xi := \pi /2$, recalling the half-angle identity $ \cos(\xi / 2 ) = \sqrt{ 1/2 + 1/2 \cos \xi}$. What do we get?
$$ \dfrac{\pi}{2} = \dfrac{1}{\sqrt{\tfrac12}\, \sqrt{ \tfrac12 + \tfrac12 \sqrt{\tfrac12} }\, \sqrt{\tfrac12 + \tfrac12 \sqrt{ \tfrac12 + \tfrac12 \sqrt{\tfrac12}}} \cdots}$$
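<p>One can watch the partial products converge numerically. A quick Python sketch (my own check, assuming nothing beyond the derivation above):</p>

```python
import math

# The partial products of cos(xi/2)cos(xi/4)... at xi = pi/2 should approach
# sin(xi)/xi = 2/pi, by the limit computed above.
xi = math.pi / 2
product = 1.0
for k in range(1, 40):
    product *= math.cos(xi / 2**k)
print(abs(product - 2 / math.pi))  # essentially zero in double precision
```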
This is pretty cool. It's called Vieta's Formula for $ \dfrac{\pi}{2}$. It's
also one of the oldest infinite products.</p>https://davidlowryduda.com/towards-an-expression-for-piWed, 20 Apr 2011 03:14:15 +0000Khan an effective supplement, a surveyhttps://davidlowryduda.com/khan-an-effective-supplementDavid Lowry-Duda<p>To My Calc III students:</p>
<p>So I have referenced the Khan Academy as a supplement every now and then. Is it useful?</p>
<p>In addition, it's that survey time of year! You should fill out this <a
href="https://surveys.oit.gatech.edu/limesurvey/index.php?sid=79272&lang=en">survey
</a>for this course.</p>https://davidlowryduda.com/khan-an-effective-supplementWed, 20 Apr 2011 03:14:15 +00002401 - Additional examples for test 3https://davidlowryduda.com/2401-additional-examples-for-test-3David Lowry-Duda<p>In the past, I have talked about how good a supplemental source of information
the Khan Academy is. Again, it is supplementary. But it seems to have lots of
fully worked and fully explained examples of the concepts of chapter 17 and
chapter 18 (sections 1 through 4) – the topics for your next exam. I have
placed the relevant links below.</p>
<p>Double Integrals</p>
<ul>
<li><a href="http://www.khanacademy.org/video/double-integral-1?playlist=Calculus">I</a></li>
<li><a href="http://www.khanacademy.org/video/double-integrals-2?playlist=Calculus">II</a></li>
<li><a href="http://www.khanacademy.org/video/double-integrals-3?playlist=Calculus">III</a></li>
<li><a href="http://www.khanacademy.org/video/double-integrals-4?playlist=Calculus">IV</a></li>
<li><a href="http://www.khanacademy.org/video/double-integrals-5?playlist=Calculus">V</a></li>
<li><a href="http://www.khanacademy.org/video/double-integrals-6?playlist=Calculus">VI</a></li>
</ul>
<p>Triple Integrals</p>
<ul>
<li><a href="http://www.khanacademy.org/video/triple-integrals-1?playlist=Calculus">I</a></li>
<li><a href="http://www.khanacademy.org/video/triple-integrals-2?playlist=Calculus">II</a></li>
<li><a href="http://www.khanacademy.org/video/triple-integrals-3?playlist=Calculus">III</a></li>
</ul>
<p>Line Integrals</p>
<ul>
<li><a href="http://www.khanacademy.org/video/introduction-to-the-line-integral?playlist=Calculus">I</a></li>
<li><a href="http://www.khanacademy.org/video/line-integral-example-1?playlist=Calculus">II</a></li>
<li><a href="http://www.khanacademy.org/video/line-integral-example-2–part-1?playlist=Calculus">III</a></li>
<li><a href="http://www.khanacademy.org/video/line-integral-example-2–part-2?playlist=Calculus">IV</a></li>
</ul>
<p>Clever Line Integrals</p>
<ul>
<li><a href="http://www.khanacademy.org/video/line-integrals-and-vector-fields?playlist=Calculus">I</a></li>
<li><a href="http://www.khanacademy.org/video/using-a-line-integral-to-find-the-work-done-by-a-vector-field-example?playlist=Calculus">II</a></li>
<li><a href="http://www.khanacademy.org/video/parametrization-of-a-reverse-path?playlist=Calculus">III</a></li>
<li><a href="http://www.khanacademy.org/video/scalar-field-line-integral-independent-of-path-direction?playlist=Calculus">IV</a></li>
<li><a href="http://www.khanacademy.org/video/vector-field-line-integrals-dependent-on-path-direction?playlist=Calculus">V</a></li>
<li><a href="http://www.khanacademy.org/video/path-independence-for-line-integrals?playlist=Calculus">VI</a></li>
<li><a href="http://www.khanacademy.org/video/closed-curve-line-integrals-of-conservative-vector-fields?playlist=Calculus">VII</a></li>
<li><a href="http://www.khanacademy.org/video/example-of-closed-line-integral-of-conservative-field?playlist=Calculus">VIII</a></li>
<li><a href="http://www.khanacademy.org/video/second-example-of-line-integral-of-conservative-vector-field?playlist=Calculus">IX</a></li>
</ul>
<p>As always, if you have any questions let me know. I will be hosting a review
session in the Math Lab at 5 - come prepared and with questions. I suspect
we'll be focusing on the iterated integrals of Chapter 17. Good luck!</p>https://davidlowryduda.com/2401-additional-examples-for-test-3Thu, 14 Apr 2011 03:14:15 +0000Slow factoring algorithm IIhttps://davidlowryduda.com/slow-factoring-algorithm-iiDavid Lowry-Duda<p>I was considering the algorithm described in the <a title="An interesting (slow)
factoring algorithm" href="/an-interesting-slow-factoring-algorithm">parent post</a>,
and realized suddenly that the possible 'clever method' to speed up the
algorithm is complete nonsense. In particular, this simply reduces to trial
division (except slightly obscured, so still slower). But the partition thing
is still pretty cool, I think.</p>https://davidlowryduda.com/slow-factoring-algorithm-iiWed, 13 Apr 2011 03:14:15 +0000Math 420 - Supplement on Gaussian integers IIhttps://davidlowryduda.com/math-420-supplement-on-gaussian-integers-iiDavid Lowry-DudaThis post is larger than 10000 bytes, which is above the limit for this RSS feed. Perhaps it is long or has embedded images or code. Please view it directly at the url.https://davidlowryduda.com/math-420-supplement-on-gaussian-integers-iiWed, 13 Apr 2011 03:14:15 +0000Test Solutionhttps://davidlowryduda.com/test-solutionDavid Lowry-Duda<p>Due to the amount of confusion and the large number of emails, I have written
up the solution to Problem 1 from Test 2.</p>
<div class="question">
<p>Determine the path of steepest descent along the surface $ z = 2 + x + 2y - x^2
- 3y^2 $ from the point $ (0,0,2)$.</p>
</div>
<p>There are a few things to note - the first thing we must do is find which
direction points 'downwards' the most. So we note that for a function $ f(x,y)
= z, $ we know that $ \nabla f $ points 'upwards' the most at all points where
it isn't zero. So at any point $ P, $ we go in the direction $ -\nabla f.$</p>
<p>The second thing to note is that we seek a path, not a direction. So let us
take a curve that parametrizes our path:
$$ {\bf C} (t) = x(t) \hat{i} + y(t) \hat{j}.$$</p>
<p>So $ -\nabla f = (2x -1)\hat{i} + (6y -2)\hat{j}.$
As the velocity of the curve points in the direction of the curve, our path satisfies:
\begin{align}
x'(t) &= 2x(t) - 1; x(0) = 0 \\
y'(t) &= 6y(t) - 2; y(0) = 0
\end{align}</p>
<p>These are two ODEs that we can solve by separation of variables (something that
is, in theory, taught in 1502 - for more details, look at chapter 9 in Salas,
Hille, and Etgen). Let's solve the y one:
$$y' = 6y - 2$$
$$\frac{dy}{dt} = 6y - 2$$
$$\frac{dy}{6y-2} = dt$$
$$\tfrac{1}{6}\ln(6y-2) = t + k$$
for a constant $k$
$$6y - 2 = e^{6t + 6k} = Ae^{6t}$$
for a constant $A = e^{6k}$
$$y = Ae^{6t} + 1/3$$
for a new constant $A$ (absorbing the factor of $6$)
$$y(0) = 0 \Rightarrow A = -1/3.$$
<p>Solving both yields
\begin{align}
x &= \frac{1}{2} -\frac{1}{2} e^{2t} \\
y &= \frac{1}{3} - \frac{1}{3} e^{6t}
\end{align}</p>
<p>Now let's get rid of the $t$. Note that $ (1 - 3y) = e^{6t}$ and $ (1 - 2x) =
e^{2t}$. Using these together, we can get rid of $t$ by noting that $
\dfrac{1 - 3y}{(1 - 2x)^3} = 1.$ Rewriting, we get $ 3y = (2x-1)^3 + 1.$</p>
<p>So the path is given by $ 3y = (2x-1)^3 + 1$</p>
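<p>As a quick sanity check (my addition, not part of the graded solution), one can plug the closed-form solutions back into the ODEs and the curve equation at a few values of $t$:</p>

```python
import math

# x(t) = 1/2 - e^{2t}/2 and y(t) = 1/3 - e^{6t}/3 should satisfy the
# steepest-descent ODEs and lie on the curve 3y = (2x - 1)^3 + 1.
for t in [0.0, 0.1, 0.5, 1.0]:
    x = 0.5 - 0.5 * math.exp(2 * t)
    y = 1 / 3 - (1 / 3) * math.exp(6 * t)
    assert math.isclose(2 * x - 1, -math.exp(2 * t))   # so x'(t) = 2x - 1
    assert math.isclose(3 * y, (2 * x - 1) ** 3 + 1, abs_tol=1e-9)
print("the solution curve checks out")
```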
<p>Good luck on your next test!</p>https://davidlowryduda.com/test-solutionFri, 08 Apr 2011 03:14:15 +0000Blue Eyes and Brown Eyeshttps://davidlowryduda.com/blue-eyesDavid Lowry-Duda<p>This is a puzzle I heard on a much smaller level while I was in my freshmen
year of college. Georgia Tech has a high school mathematics competition every
spring for potential incoming students. The competition comes in rounds - and
those that don't make it to the final rounds can attend fun mathematical talks.
I was helping with the competition and happened to be at a talk on logic
puzzles, and this came up.</p>
<p>I bring it up now because it has raised a lot of ruckus at Terry Tao's <a
href="http://terrytao.wordpress.com/2011/04/07/the-blue-eyed-islanders-puzzle-repost/#comment-51498">blog</a>.
It doesn't seem so peculiar to me, but the literally hundreds of comments at
Terry's blog made me want to spread it some more. There is something about this
puzzle that makes people doubt the answer.</p>
<p>I have reposted the puzzle itself, as written by Terry. But for his included
potential 'solutions,' I direct you back to his blog. Of course, the hundreds
of comments there also merit attention.</p>
<p>Terry's puzzle:</p>
<blockquote>
<p>There is an island upon which a tribe resides. The tribe consists of 1000
people, with various eye colours. Yet, their religion forbids them to know
their own eye color, or even to discuss the topic; thus, each resident can
(and does) see the eye colors of all other residents, but has no way of
discovering his or her own (there are no reflective surfaces). If a
tribesperson does discover his or her own eye color, then their religion
compels them to commit ritual suicide at noon the following day in the village
square for all to witness. All the tribespeople are highly logical and devout,
and they all know that each other is also highly logical and devout (and they
all know that they all know that each other is highly logical and devout, and
so forth).</p>
<p>Of the 1000 islanders, it turns out that 100 of them have blue eyes and 900 of
them have brown eyes, although the islanders are not initially aware of these
statistics (each of them can of course only see 999 of the 1000 tribespeople).</p>
<p>One day, a blue-eyed foreigner visits the island and wins the complete
trust of the tribe.</p>
<p>One evening, he addresses the entire tribe to thank them for their hospitality.</p>
<p>However, not knowing the customs, the foreigner makes the mistake of
mentioning eye color in his address, remarking “how unusual it is to see
another blue-eyed person like myself in this region of the world”.</p>
<p>What effect, if anything, does this <em>faux pas</em> have on the tribe?</p>
</blockquote>
<p>The islanders are highly logical.
<sup>1</sup>
<span class="aside"><sup>1</sup>For the purposes of this logic puzzle, “highly logical” means that any
conclusion that can logically deduced from the information and observations
available to an islander, will automatically be known to that islander.</span></p>
<p>An essentially equivalent version of the logic puzzle is <a
href="http://xkcd.com/blue_eyes.html">also given at the xkcd web site</a>.
Many other versions of this puzzle can be found in many places.</p>https://davidlowryduda.com/blue-eyesFri, 08 Apr 2011 03:14:15 +0000On least squares - a question from reddithttps://davidlowryduda.com/a-response-to-ftyous-question-on-redditDavid Lowry-Duda<p>FtYou <a
href="http://www.reddit.com/r/math/comments/1wkt68/least_square_approximation_with_basis_functions/">writes</a></p>
<blockquote>Hello everyone ! There is a concept I have a hard time getting my head wrap around. If you have a Vector Space V and a subspace W, I understand that you can find the least square vector approximation from any vector in V to a vector in W. And this correspond to the projection of V to the subspace W. Now , for data fitting ... Let's suppose you have a bunch of points (xi, yi) where you want to fit a set a regressors so you can approximate yi by a linear combination of the regressors lets say ( 1, x, x2 ... ). What Vector space are we talking about ? If we consider the Vector space of function R -> R, in what subspace are we trying to map these vectors ?
I have a hard time merging these two concepts of projecting to a vector space and fitting the data. In the latter case what vector are we using ? The functions ? If so I understand the choice of regressors ( which constitute a basis for the vector space ) But what's the role of the (xi,yi) ?
I want to point out that I understand completely how to build the matrices to get Y = AX and solving using least square approx. What I miss is the big picture. The linear algebra picture. Thanks for any help !</blockquote>
<p>We'll go over this by closely examining and understanding an example. Suppose we have the data points ${(x_i, y_i)}$</p>
<p align="center">$\displaystyle \begin{cases} (x_1, y_1) = (-1,8) \\ (x_2, y_2) = (0,8) \\ (x_3, y_3) = (1,4) \\ (x_4, y_4) = (2,16) \end{cases}, $</p>
<p>and we have decided to try to find the best fitting quadratic function. What do we mean by best-fitting? We mean that we want the one that approximates these data points the best. What exactly does that mean? We'll see that before the end of this note - but in linear algebra terms, we are projecting on to some sort of vector space - we claim that projection is the ''best-fit'' possible.</p>
<p>So what do we do? A generic quadratic function is ${f(t) = a + bt + ct^2}$. Intuitively, we apply what we know. Then the points above become</p>
<p align="center">$\displaystyle \begin{cases} f(-1) = a - b + c = 8 \\ f(0) = a = 8 \\ f(1) = a + b + c = 4 \\ f(2) = a + 2b + 4c = 16 \end{cases}, $</p>
<p>and we want to find the best ${[a b c]}$ we can that ''solves'' this. Of course, this is a matrix equation:</p>
<p align="center">$\displaystyle \begin{pmatrix} 1 & -1 & 1 \\ 1 & 0 & 0 \\ 1 & 1 & 1 \\ 1 & 2 & 4 \end{pmatrix} \begin{pmatrix} a \\ b \\ c \end{pmatrix} = \begin{pmatrix} 8 \\ 8 \\ 4 \\ 16 \end{pmatrix}. $</p>
<p>And so you see how the algorithm would complete this. But now let's get down the ''linear algebra picture,'' as you say.</p>
<p>We know that quadratic polynomials ${f(t) = a + bt + ct^2}$ are a three dimensional vector space (which I denote by ${P_2}$) spanned by ${1, t, t^2}$. We know we have four data points, so we will define a linear transformation ${A}$ to be the transformation taking a quadratic polynomial ${f}$ to ${\mathbb{R}^4}$ by evaluating ${f}$ at ${-1, 0, 1, 2}$ (i.e. the ${x_i}$). In other words,</p>
<p align="center">$\displaystyle A : P_2 \longrightarrow \mathbb{R}^4 $</p>
<p>where</p>
<p align="center">$\displaystyle A(f) = \begin{pmatrix} f(-1) \\ f(0) \\ f(1) \\ f(2) \end{pmatrix}. $</p>
<p>We interpret ${f}$ as being given by three coordinates, ${a, b, c \in \mathbb{R}^3}$, so we can think of ${A}$ as a linear transformation from ${\mathbb{R}^3 \longrightarrow \mathbb{R}^4}$. In fact, ${A}$ is nothing more than the matrix we wrote above.</p>
<p>Then a solution to</p>
<p align="center">$\displaystyle A^T A \begin{pmatrix} a \\ b \\ c \end{pmatrix} = A^T \begin{pmatrix} 8 \\ 8 \\ 4 \\ 16 \end{pmatrix} $</p>
<p>is the projection of the space of quadratic polynomials on ${\mathbb{R}^4}$ (which in this case is the space of evaluations of quadratic polynomials at four different points). If ${f^* }$ is the found projection, and I denote the ${y_i}$ coordinate vector as ${y^ *}$, then this projection minimizes</p>
<p align="center">$\displaystyle || y^* - Af^*||^2 = (y_1 - f^*(x_1))^2 + \ldots + (y_4 - f^*(x_4))^2, $</p>
<p>and it is in this sense that we mean we have the ''best-fit.'' (This is roughly interpreted as the distances between the ${y_i}$ and ${f^*(x_i)}$ are minimized; really, it's the sum of the squares of the distances - hence ''Least-Squares'').</p>
<p>So in short: ${A}$ is a matrix evaluating quadratic polynomials at different points. The columns vectors correspond to a basis for the space of quadratic polynomials, ${1, t, t^2}$. The codomain is ${\mathbb{R}^4}$, coming from the evaluation of the input polynomial at the four different ${x_i}$. The projection of the set of quadratic polynomials onto their evaluation space minimizes the sum of the squares of the distances between ${f(x_i)}$ and ${y_i}$.</p>
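<p>For concreteness, here is the running example in code. This is a sketch of mine using numpy (the discussion above is tool-agnostic); <code>np.linalg.lstsq</code> solves the same normal equations ${A^TA x = A^T y}$ described above.</p>

```python
import numpy as np

# The four data points from the example, and the matrix A whose columns
# are the basis 1, t, t^2 evaluated at the x_i.
x = np.array([-1.0, 0.0, 1.0, 2.0])
y = np.array([8.0, 8.0, 4.0, 16.0])
A = np.column_stack([np.ones_like(x), x, x**2])

coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coeffs)  # a = 5, b = -1, c = 3, i.e. the best fit is f(t) = 5 - t + 3t^2
```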
<p>Does that make sense?</p>https://davidlowryduda.com/a-response-to-ftyous-question-on-redditFri, 08 Apr 2011 03:14:15 +0000Math 170 - Fall 2014https://davidlowryduda.com/math-170David Lowry-Duda<h1>Math 170 Fall 2014</h1>
<p>This is the fall 2014 Math 170 Calculus II page for David Lowry-Duda’s
section. <strong>This is not the main site for the whole course</strong> (which
can be found at <a
href="https://sites.google.com/a/brown.edu/fa14-math0170/section-2">https://sites.google.com/a/brown.edu/fa14-math0170/section-2</a>),
but it will contain helpful bits and is a good venue through which you can ask
questions.</p>
<p>A few things that might interest you so far:</p>
<ol>
<li><a title="An intuitive introduction to calculus" href="/?p=1259">An intuitive introduction to calculus</a>, which contains a brief overview of first semester calculus. I wrote this for students in Brown's Math 100 as a sort of review.</li>
<li><a title="An Intuitive Overview of Taylor Series" href="/?p=1520">An intuitive overview of Taylor series</a>, which tries to give a better motivated intro to Taylor series than is given in Thomas' Calculus.</li>
<li>A <a title="A bit more about partial fraction decomposition" href="/?p=1711">short note on partial fractions</a>.</li>
</ol>
<p>In addition, I've taught twice before at Brown, and I've looked through some
data on these courses. Two trends are emerging. Firstly, there is an extremely
high correlation between performance on the first midterm and the overall final
grade - far larger than you would think. I interpret this to mean that these
courses are cumulative and unfriendly to students who fall behind early on -
so <strong>do your best to not fall behind</strong>. Secondly, students who do
a poor job on their homework do a poor job overall (but students who do well on
homework don't necessarily do well overall, interestingly).</p>
<p>You can find the data itself and my interpretations here:</p>
<ol>
<li><a title="Math 90: Concluding Remarks" href="/?p=988">Concluding Remarks for Math 90</a></li>
<li><a title="Math 100 Fall 2013: Concluding Remarks" href="/?p=1593">Concluding Remarks for Math 100</a></li>
</ol>
<p>What will the Concluding Remarks for Math 170 say?</p>
<p>And now, the administrative details (the rest can be found on the main course website).</p>
<blockquote>Instructor Name: David Lowry-Duda
Email address: djlowry [at] math [dot] brown.edu (although please only use email for private communication – math questions can be asked here, and others can benefit from their openness).
Office hours: 10:30-11:30AM on Mondays and 3:00-4:00PM on Fridays in Kassar 018 (the basement)
MRC: Free math tutoring is given at the Kassar House, room 105, each week on Monday-Thursday from 8-10pm. It's free, there are many students learning and tutors tutoring, and there's very little reason not to go. I heavily encourage you to go.</blockquote>https://davidlowryduda.com/math-170Fri, 08 Apr 2011 03:14:15 +0000An interesting (slow) factoring algorithmhttps://davidlowryduda.com/an-interesting-slow-factoring-algorithmDavid Lowry-Duda<p>After my previous posts (<a title="Perfect Partitions"
href="/perfect-partitions">I</a>, <a title="Perfect Partitions II"
href="/perfect-partitions-ii">II</a>) on perfect partitions of
numbers, I continued to play with the relationship between compositions and
partitions of different numbers. I ended up stumbling across the following
idea: which numbers can be represented as a sum of consecutive positive
integers? This seems to be another well-known question, but I haven't come
across it before.</p>
<p>First of all, I note that without the restriction to positive integers, this
problem is trivial - every integer n is clearly the sum of the integers $
-(n-1), -(n-2), ..., 0, 1, ..., n-1, n$. So we restrict ourselves to sums of
consecutive positive integers instead.</p>
<p>Here is the claim:</p>
<blockquote>A number $ n$ can be written as the sum of consecutive positive integers if and only if $ n$ is not a 2-power.</blockquote>
<p>I will present two different proofs of the claim.</p>
<h3>First Proof</h3>
<p>I think of this as a clever proof. Consider the averages of a consecutive set
of positive integers. The average is the sum of the first and last number,
divided by 2. Further, the sum of the sequence is the product of the average
and the number of elements. We have two possibilities - we either have an even
number of elements that we are summing over, or an odd number.</p>
<p>First suppose there are an even number of elements. In this case, either the
first element or the last element is odd, and the other is even. Then the sum
of the series is the product of an even integer and some number that has a
fractional part of 1/2 (as the average of an odd element and an even element
has a half-integer fractional part). If we divide the first term by 2 and
multiply the second term by 2, and we note that twice a number with a
half-integer fractional part is odd, then we see that the overall sum has an
odd factor.</p>
<p>Now suppose that there are an odd number of elements. In this case, the average
of the first and last integers is just an integer, and so the overall sum
contains an odd factor.</p>
<p>So we have that no 2-power is a sum of a consecutive set of positive integers,
as they are the numbers that contain no odd factors. We can explicitly
construct a consecutive set of positive integers that sum to every other
integer. Suppose the integer has an odd factor, and so is of the form $
(2k+1)*(n)$. Then this number is the sum of the integers $ n-k, n-k+1, ... ,
n+k-1, n + k$.</p>
<p>And so we are done.</p>
<h3>Second Proof</h3>
<p>This is what I consider a very suggestive proof, and is the source of
inspiration for the aforementioned terrible factoring algorithm. In this, we
will duplicate the work of Wai Yan Pong, a professor at California State
University. We will establish a one to one correspondence between the odd
factors of a number and the consecutive sets of positive integers that sum to
that number.</p>
<p>Suppose $ k$ is an odd factor of $ n$. Then since the sum of the integers from
$ -\dfrac{k-1}{2}$ to $ \dfrac{k-1}{2}$ is 0, we can add $ n/k$ to each of
them so that the overall sum is $ n$. If $ \dfrac{k-1}{2} <
\dfrac{n}{k}$, then all the terms in the sum are positive and so we have a set
of consecutive positive integers of length $k$. If $ \dfrac{k-1}{2} \geq
\dfrac{n}{k}$, then the set starts with either a zero or a negative number,
which means that after dropping the zero and cancelling the negative terms with
their positive counterparts, we are left with the sequence $ \dfrac{k-1}{2} - \dfrac{n}{k} + 1,
\ldots, \dfrac{n}{k} + \dfrac{k-1}{2}$. This has an even number of terms (since we disregarded the
0) and has length $ \dfrac{2n}{k}$.</p>
<p>Now we consider the other direction. Suppose that $ k + 1, k+2, k+3, ..., k+ m$
is a series of consecutive integers that sums to the integer n. Since the sum
is n, we have that $ m*(2k + m + 1)/2 = n$. Either $ m$ or $ 2k+m+1$ is an odd
factor of n, and it can be verified that the set of consecutive integers is the
one associated with this factor, similar to the work above.</p>
<p>So we have a one to one correspondence between the sets of consecutive positive
integers that add up to a number $n$ and the odd factors of $n$. In particular, if one
finds such a set of even length l, then the number l divides 2n (and as long as
the length is more than 2, this reveals a nontrivial factor of n). And if the
length is odd, then l is a factor of n (and as long as it's not the trivial
sequence, i.e. n itself, it leads to a nontrivial factor).</p>
<h3>The interesting(ly terrible) factoring algorithm</h3>
<p>And now we arrive at the promised factoring algorithm (for an odd number - just
go ahead and divide all the 2's out). Of course, we can already see that it's
one of the least efficient factoring algorithms conceivable. How does one do
this as 'efficiently' as possible? One might think that you should start at 1
and add up numbers until one either reaches n or passes n - if one hits n,
we've found a factor, and if not, then start at 2 and move on. Conceivably, one
might need to do this until one hits about n/2 - far worse than trial division.</p>
<p>But this is not to despair! There are potentially interesting things that can
be done modulo the lengths of a particular series. Suppose one notates the sum
of the first $l$ terms, $1 + 2 + \cdots + l = l(l+1)/2$, by $ S_l$, in the style of other partial sums. What one
should do is check to see if $ n \equiv S_l \pmod{l}$. If it is not, then it is
impossible for any sequence of length $l$ to sum to $n$, and thus impossible for $l$
(if $l$ is odd) or $l/2$ (if $l$ is even) to divide $n$. Further, one only needs to
check prime lengths and twice prime lengths. Similarly, one needs only to check
up to $\sqrt{n}$ (and twice $\sqrt{n}$) - so this is actually roughly on par with trial
division.</p>
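<p>To make the criterion concrete, here is a short Python sketch (mine, not from the original post) that recovers odd factors of $n$ from runs of consecutive positive integers summing to $n$:</p>

```python
def consecutive_sum_factors(n):
    """Odd factors of n found via consecutive-sum representations.

    A run of length l starting at a + 1 sums to l*a + l*(l+1)/2, so n is
    a sum of l consecutive positive integers iff l*(l+1)/2 <= n and
    n is congruent to l*(l+1)/2 mod l.  By the correspondence above, an
    odd-length run gives the odd factor l; an even-length run gives 2n/l.
    """
    factors = set()
    l = 2  # l = 1 is the trivial run consisting of n alone
    while l * (l + 1) // 2 <= n:
        if (n - l * (l + 1) // 2) % l == 0:
            factors.add(l if l % 2 == 1 else 2 * n // l)
        l += 1
    return sorted(factors)

print(consecutive_sum_factors(15))  # [3, 5, 15]: runs 4+5+6, 1+...+5, 7+8
print(consecutive_sum_factors(8))   # []: powers of 2 have no such runs
```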
<p>It is more interesting and efficient than it might appear! Of course, this is
still slow. Very slow. But it's very interesting.</p>https://davidlowryduda.com/an-interesting-slow-factoring-algorithmThu, 07 Apr 2011 03:14:15 +0000An even later pi day posthttps://davidlowryduda.com/an-even-later-pi-day-postDavid Lowry-Duda<p>In my <a title="A late pi day post" href="/a-late-pi-day-post">post
</a>dedicated to pi day, I happened to refer to a musical interpretation of pi.
This video (while still viewable from the link I gave) has been forced off of
YouTube due to a copyright claim. The video includes an interpretation by
Michael Blake, a funny and avid YouTube artist. The copyright claim comes from
Lars Erickson - he apparently says that he created a musical creation of pi
first (and... I guess therefore no others are allowed...). In other words, it
seems very peculiar.</p>
<p>I like Vi Hart's <a href="http://www.youtube.com/user/Vihart#p/u/1/XJtLSLCJKHE"
target="_blank">treatment </a>of the copyright claim. For completeness, here is
Blake's <a
href="http://www.youtube.com/user/michaeljohnblake#p/a/u/1/9pOd-8AZC-k"
target="_blank">response</a>.</p>https://davidlowryduda.com/an-even-later-pi-day-postThu, 07 Apr 2011 03:14:15 +0000Math subject organizationhttps://davidlowryduda.com/math-subject-organizationDavid Lowry-Duda<p>This is a brief description of how my mathematical blog posts will be
categorized. I use roughly the same description as used by the <a
href="http://arxiv.org/">arxiv</a>. This means that the possible areas are the
following:</p>
<ul>
<li>Algebraic Geometry (math.AG)</li>
<li>Algebraic Topology (math.AT)</li>
<li>Analysis of PDEs (math.AP)</li>
<li>Classical Analysis and ODEs (math.CA)</li>
<li>Category Theory (math.CT)</li>
<li>Combinatorics (math.CO)</li>
<li>Commutative Algebra (math.AC)</li>
<li>Complex Variables (math.CV)</li>
<li>Differential Geometry (math.DG)</li>
<li>Dynamical Systems (math.DS)</li>
<li>Functional Analysis (math.FA)</li>
<li>General Mathematics (math.GM)</li>
<li>General Topology (math.GN)</li>
<li>Geometric Topology (math.GT)</li>
<li>Group Theory (math.GR)</li>
<li>History and Overview (math.HO)</li>
<li>Information Theory (math.IT)</li>
<li>K-Theory and Homology (math.KT)</li>
<li>Logic (math.LO)</li>
<li>Mathematical Physics (math.MP)</li>
<li>Metric Geometry (math.MG)</li>
<li>Numerical Analysis (math.NA)</li>
<li>Number Theory (math.NT)</li>
<li>Operator Algebras (math.OA)</li>
<li>Optimization and Control (math.OC)</li>
<li>Probability (math.PR)</li>
<li>Quantum Algebra (math.QA)</li>
<li>Representation Theory (math.RT)</li>
<li>Rings and Algebras (math.RA)</li>
<li>Spectral Theory (math.SP)</li>
<li>Statistics (math.ST)</li>
<li>Symplectic Geometry (math.SG)</li>
</ul>
<p>Of course, I almost certainly won't refer to all of them, but I will use many.
I will also use math.REC for recreational mathematics, which I do all the time.</p>https://davidlowryduda.com/math-subject-organizationWed, 06 Apr 2011 03:14:15 +0000Perfect Partitions IIhttps://davidlowryduda.com/perfect-partitions-iiDavid Lowry-Duda<p>In continuation of <a title="Perfect Partitions"
href="/perfect-partitions" target="_blank">my previous post</a> on
perfect partitions, I seek to extend the previous result to all numbers, not
just one less than a prime power.</p>
<p>Previously, we had:</p>
<blockquote>
<p>The number of perfect partitions of the number $p^\alpha -1$ is the
same as the number of compositions of the number $\alpha$.</p>
</blockquote>
<p>Today, we will find a relation for the number of perfect partitions of any
positive integer $ n $. The first thing we note is that every number n can be
written uniquely as:
$$ n = {p_1}^{\alpha _1}{p_2}^{\alpha _2} \cdots {p_n}^{\alpha_n} - 1,$$
where the $p_i$ are distinct primes.</p>
<p>Parallel to the last result, we can see that the number of perfect partitions
of the number $ {p_1}^{\alpha _1}{p_2}^{\alpha _2} \cdots {p_n}^{\alpha_n} - 1$
is the same as the number of ways to factor</p>
<p>$$ \dfrac{x^{{p_1}^{\alpha _1}{p_2}^{\alpha _2} \cdots {p_n}^{\alpha_n}} - 1}{x - 1}. $$</p>
<p>This can immediately be proved by extending the previous post: still use finite
geometric series and the positivity of the terms to show that each exponent can
be reached in exactly one way. The different factors would again be of the
form:</p>
<p>$$ \dfrac{1 - x^\gamma}{1-x^\delta} $$ where $ \delta | \gamma $.</p>
<p>In this fashion, we see that the number of perfect partitions of $n$ is the same as
the number of ordered factorizations of $ n + 1 = {p_1}^{\alpha _1}{p_2}^{\alpha _2} \cdots
{p_n}^{\alpha_n}$ into factors greater than 1. But we haven't really considered this yet. How many ordered
factorizations are there? This is unfortunately beyond the scope of this post,
but Sloane's has that sequence <a href="http://oeis.org/A074206">here</a>, and
there is a Dirichlet generating function: $ f(s) = \dfrac{1}{2 - \zeta(s)}$.</p>
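<p>The count of ordered factorizations satisfies a simple recursion: an ordered factorization of $m$ is a first factor $d &gt; 1$ followed by an ordered factorization of $m/d$. A small Python sketch (my own addition), using the convention $H(1) = 1$ as in the OEIS entry:</p>

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def ordered_factorizations(m):
    """Ordered factorizations of m into factors > 1 (OEIS A074206), which
    by the discussion above counts the perfect partitions of m - 1."""
    if m == 1:
        return 1
    return sum(ordered_factorizations(m // d)
               for d in range(2, m + 1) if m % d == 0)

# For a prime power m = p^a this reduces to compositions of a, i.e. 2^(a-1):
print(ordered_factorizations(8))   # 4, matching 2^(3-1) for 8 = 2^3
print(ordered_factorizations(12))  # 8
```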
<p>As an aside that made more sense in the original post, I consider the number of
compositions of a number. How many compositions are there for a number n?</p>
<p>I hadn't seen this before, but upon consideration I see that it is a very
simple exercise that one might encounter. Imagine we are thinking of the number
of compositions of $n$. Then $ n = 1 + 1 + \cdots + 1$. But then each '+' symbol
might be replaced by a separator, so that for example $ 1 + 1 + 1$ might be $ 1
+ 1, 1 = 2, 1$. So we see that there are always $ 2^{n-1} $ different
compositions of the number $n$. So we now know the number of compositions of each
number n!</p>https://davidlowryduda.com/perfect-partitions-iiThu, 31 Mar 2011 03:14:15 +0000A bag's journey in search of its ownerhttps://davidlowryduda.com/a-bags-journeyDavid Lowry-Duda<p>This is the story of a bag,<br />
who lost its owner and trav'led the whole world!<br />
And though it left with lots o' tags attached,<br />
She absolutely lost it, when she flied. </p>
<p>How many days would it be?<br />
She arrived with hope, but found only tears.<br />
The bag just disappeared,<br />
so she flew in without any gear.<br />
But she gets a call the next morning,<br />
"Where are you, your bag is right here!"<br />
Thousands of miles afar.<br />
When she looks in the mirror so how does she choose?<br />
The same clothes worn day after day.<br />
When travelling homeward bound,<br />
her bag seems never to be found. </p>
<p>This is the story of a bag,<br />
who lost its owner and trav'led the whole world!<br />
And though it left with lots o' tags attached,<br />
She absolutely lost it, when she flied. </p>
<p>[loosely to "Story of a Girl"]</p>
<p>This is one of those strange stories - girl gets ready for flight from Atlanta
to New York to Prague, girl ends up going Atlanta to Norfolk to New York to
Prague, but bag ends up going Atlanta to New York to Atlanta to New York to
Prague to New York to Atlanta to a warehouse to Atlanta to New York to Prague
to Krakow... you know, the typical story. To be fair, the flight change through
Norfolk as opposed to a direct to New York was last minute, and it makes sense
for the bag to have been detained in New York. Perhaps it would make it on a
later flight to Prague - such is life.</p>
<p>But nothing so simple occurred. The bag makes it to Prague, and when the girl
notes that the bag should be sent to her home, one might expect the story to
end. Instead, the bag ends up back in New York, then back in Atlanta. Of
course, girl doesn't know this - it's all a big mystery (as she borrows
friends' clothing, of course). Fortunately, a Delta worker named Carl (I think)
finds this bag and its tag in this warehouse, looks it up and calls girl. Girl
asks for it to be shipped to her - no problem, he says. Carl is very good at
his job, I think, and I commend him. Unfortunately, the bag gets to Prague
again and somehow whatever instructions were once somehow connected to the bag
are lost. So now someone at Prague calls up girl - what do you want to do with
this bag? So the bag goes to Krakow, but that's okay. That's where the girl
found the bag.</p>
<p>A very logical route, one might say.</p>https://davidlowryduda.com/a-bags-journeyTue, 29 Mar 2011 03:14:15 +0000Perfect Partitionshttps://davidlowryduda.com/perfect-partitionsDavid Lowry-Duda<p>I was playing with a variant of my <a title="Containers of Water: Maybe an
interesting question." href="/water" target="_blank">Containers of Water question</a> where we were instead
interested in solid weights on a scale. It occurred to me that, as is often the
case, I should consider easier problems first and see what comes of them.
Ultimately, this led me down a path towards the idea of a 'perfect partition'
and a few papers published in the late 1800s by MacMahon. Here's how that went:</p>
<p>Consider the vastly easier problem: you have n stones (each of some integer
weight) on a balancing scale (a classic balance, so that one kilogram on each
side will cause the scale to be 'balanced'). How many different integer weights
can you correctly measure with these n stones? Alternately, for each n, what
stones maximize the number of weights one can properly measure (for this post,
we want to measure each of the weights $ 1, \ldots, N$ for some $ N$).</p>
<p>Say for example that you have 2 stones, a 1-stone and a 3-stone (to take from
my containers of water notation - this means one weighs 1 kg and the other
weighs 3 kg). Then it is possible to measure out 1, 2, 3, and 4 kg on the
scale. To measure 1 kg, simply measure against the 1-stone. To measure 2 kg,
measure the 1-stone against the 3-stone (i.e. place the 1-stone on one side and
the 3-stone on the other, so that 2 kg will balance the scale). 3 and 4 follow.</p>
<p>In one direction, it is very easy to construct maximal sets of stones. Let us
distinguish the two sides of the balance so that there is a Load side (where we
put weights to weigh against) and a Measure side. In our previous example, we
might have (3 in Load ; 1 in Measure) and we can properly measure 2 kg. So we
also expect the Load-weight to be greater than the Measure-weight, so that we
can actually measure a positive weight (no negative weights here). So if there
is exactly 1 stone, we require it to be a 1-stone. Now say we add an n-stone
for some n > 1: we can get up to 4 different possible measurements. We can
see this as follows:</p>
<p style="padding-left:30px;">(Load; Measure) (1; 0) , (1, n; 0) , (n; 1) , (n; 0)</p>
<p>In fact, this can be generalized to a recurrence relation. Suppose we have a
set S of weights whose total weight is W and which can measure N different
measurements. Then by adding a new weight w > W, we can get at most 3N + 1
different measurements.</p>
<p style="padding-left:30px;">(Load; Measure) <br>
(S; 0) This has N different measurements corresponding to the assumption <br>
(S, w; 0) This also has N different measurements, except w more than above <br>
(w; S) This also has N different measurements, except w less than the first <br>
(w; 0) This measures only the weight w. <br></p>
<p>It also turns out that this bound is attainable. Simply start with 1 and add 3,
then $ 2 \cdot (3+1) + 1 = 9$, and so on. You always add twice the previous total
weight plus 1. And it is quickly apparent that this is maximal as there is no
overlap. It is cool, though, that this means that with weights of size 1, 3, 9,
27, and 81, one can measure every weight up to 121 kg. And it's unique!</p>
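To see this concretely, one can enumerate every possible measurement by brute force. In the sketch below (the function name is mine), each stone gets a coefficient $+1$, $-1$, or $0$ according to whether it sits on the Load side, the Measure side, or off the scale, and the measured weight is the signed sum. It checks that the weights 1, 3, 9, 27, and 81 measure every integer from 1 to 121 in exactly one way.

```python
from itertools import product
from collections import Counter

def measurable(stones):
    # coefficient +1: stone on the Load side; -1: on the Measure side;
    # 0: off the scale.  The measured weight is the signed sum.
    counts = Counter()
    for signs in product((-1, 0, 1), repeat=len(stones)):
        total = sum(s * w for s, w in zip(signs, stones))
        if total > 0:
            counts[total] += 1
    return counts

counts = measurable([1, 3, 9, 27, 81])
assert set(counts) == set(range(1, 122))     # every weight from 1 to 121
assert all(v == 1 for v in counts.values())  # each in exactly one way
```

This uniqueness is exactly the uniqueness of balanced-ternary representations.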
<p>But there is a more interesting question here, closely related to the
uniqueness of the measurements up to 121 done with the weights 1, 3, 9, 27, and
81 (or, for a better parallel, the weights 1, 1, 3, 3, 9, 9, 27, 27, 81, and 81
for the weight 242 - actually a perfect partition). And this is the true topic
of the day: perfect partitions.</p>
<p>An English mathematician by the name of Percy Alexander MacMahon came up with
the idea of a perfect partition in his first paper: <em>Certain Special
Partitions of Numbers</em>, published in 1886 in the Quarterly Journal of
Mathematics, pg. 367-373. A <a
href="http://mathworld.wolfram.com/PerfectPartition.html"
target="_blank">perfect partition</a> of a number n is a partition whose
elements uniquely generate any number in (1, ..., n). For example, (12) is a
perfect partition of 3, and (122) is a perfect partition of 5. MacMahon also
established quick ways of determining the number of perfect partitions for a
number. Unfortunately, as neither the Quarterly Journal of Mathematics nor the
collected works of MacMahon (available from MIT Press) were at hand, I found
the proofs behind these results to be elusive. Fortunately, Goulden and
Jackson's book Combinatorial Enumeration has a key hint in their problem
2.5.12. I present a set of proofs for these very interesting results.</p>
<div class="claim">
<p>a number $ p^{\alpha} - 1 $, where p is prime, has as many perfect partitions
as there are compositions of $ \alpha$. Note that a composition of a number is a
way of writing it as an ordered sum of positive integers. For example, some
compositions of 5 are (11111), (14), (23), (5), (32), etc.</p>
</div>
<h4>Proof</h4>
<p>So consider a number of the form $ p^{\alpha} - 1$, where p is a prime. Note
that every number has the trivial perfect partition of units, i.e. any number n
has as a perfect partition $ (1 ... 1)$ with n 1's. Note also the relation:
$$ \dfrac{1 - x^{p^{\alpha}}}{1-x} = 1 + x + x^2 + x^3 + \cdots + x^{p^\alpha - 1}.$$</p>
<p>This relation corresponds to the perfect partition (1 ... 1) with n 1's. But
there's more - consider the special case where $ \alpha = 3$.
$$ \dfrac{1 - x^{p^3}}{1-x} = \dfrac{1 - x^{p^3}}{1-x^{p^2}} \cdot \dfrac{1-x^{p^2}}{1-x} = 1 + x + x^2 + x^3 + \cdots + x^{p^3 -1}$$
or
$$ (1 + x^{p^2} + x^{2p^2} + \cdots + x^{p^3 - p^2}) \cdot (1 + x + x^{2} + \cdots + x^{p^2 - 1})$$
$$ = 1 + x + x^2 + x^3 + \cdots + x^{p^3 -1}. $$</p>
<p>In considering the multiplications of the product, we see that each of the
exponents $ 0, 1, 2, \ldots, p^3 -1$ arises in exactly one way as a sum of one
exponent taken from each factor. This is to say that the possible exponents form
a perfect partition. So, in this example, from the $ \dfrac{1 - x^{p^3}}{1-x^{p^2}}$ term
we take the partition element $ p^2$. How many do we have? We have as many as
are possible from the product, which is to say up to p - 1 of them. From the $
\dfrac{1-x^{p^2}}{1-x}$ term, we take the partition element 1 (just as from the
trivial example). How many? From the product, we determine there are $ p^2 -1$
of them. So this shows that $ ((p^2)^{p-1} {1}^{p^2 -1})$ is a perfect
partition of $ p^3 -1$, where $ 1^{p^2 - 1}$ denotes that there are $ p^2 - 1$
1's included in the partition (do a quick check - it's perfect, and they add up
to $ p^3 -1$).</p>
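That quick check can also be done by machine for the smallest case. Taking $p = 2$ and $\alpha = 3$, the partition above is $((p^2)^{p-1} 1^{p^2-1}) = (4\,1\,1\,1)$, a claimed perfect partition of $p^3 - 1 = 7$. The sketch below (function name mine) verifies that every number from 1 to 7 is a sub-sum of this multiset in exactly one way.

```python
from itertools import product
from collections import Counter

def is_perfect_partition(parts, n):
    # count, for each target 1..n, the submultisets of `parts` summing
    # to it; a perfect partition hits each target exactly once
    counts = Counter()
    distinct = sorted(set(parts))
    maxes = [parts.count(p) for p in distinct]
    # choose how many copies of each distinct part to include
    for choice in product(*(range(m + 1) for m in maxes)):
        total = sum(c * p for c, p in zip(choice, distinct))
        if 0 < total <= n:
            counts[total] += 1
    return all(counts[k] == 1 for k in range(1, n + 1))

assert is_perfect_partition([4, 1, 1, 1], 7)       # ((2^2)^1 1^3) for 2^3 - 1
assert not is_perfect_partition([3, 2, 1, 1], 7)   # 3 = 3 and 3 = 2 + 1
```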
<p>In fact, each of the factorizations of $ \dfrac{1 - x^{p^3}}{1-x}$ yields a
different perfect partition. Here, I show the correspondence.
$$ \dfrac{1 - x^{p^3}}{1-x^{p^2}} \cdot \dfrac{1 - x^{p^2}}{1-x^p} \cdot \dfrac{1 -
x^{p}}{1-x} \Longleftrightarrow ( (p^2)^{p-1}p^{p-1}1^{p-1})$$
$$ \dfrac{1 - x^{p^3}}{1-x^{p^2}} \cdot \dfrac{1 - x^{p^2}}{1-x} \Longleftrightarrow ((p^2)^{p-1}1^{p^2 -1})$$
$$ \dfrac{1 - x^{p^3}}{1-x^{p}} \cdot \dfrac{1 - x^{p}}{1-x} \Longleftrightarrow (p^{p^2-1}1^{p-1})$$
$$ \dfrac{1 - x^{p^3}}{1-x} \Longleftrightarrow (1^{p^3-1})$$</p>
<p>That strikes me as being pretty cool already, but we aren't done. In each
factor, let's note the difference between the exponents of p in the numerator
and denominator. For example, we see
$$ \dfrac{1 - x^{p^3}}{1-x^{p^2}} \cdot \dfrac{1 - x^{p^2}}{1-x^p} \cdot \dfrac{1 - x^{p}}{1-x} \mapsto [111]$$
$$ \dfrac{1 - x^{p^3}}{1-x^{p^2}} \cdot \dfrac{1 - x^{p^2}}{1-x} \mapsto [12]$$
and so on. It is quickly apparent that every different factoring yields a
different composition of the number 3, and therefore corresponds to a different
perfect partition of $ p^3-1$. Of course, this quickly generalizes, so that
when we have $ p^{\alpha}-1$, and we write
$$ \dfrac{1 - x^{p^{\alpha}}}{1-x^{p^{\beta}}} \cdot \dfrac{1 - x^{p^{\beta}}}{1-x^{p^{\gamma}}} \cdot \dfrac{1 - x^{p^{\gamma}}}{1-x}$$
where $ \alpha > \beta > \gamma > 0$ (and extended for as many factors as
necessary in the logical manner).</p>
<p>And this corresponds to a perfect partition and the composition of $ \alpha :
[(\alpha - \beta)(\beta - \gamma)(\gamma)]$ (again, extended for as many
factors as necessary). Conversely, each composition $ [\xi _1 \xi _2 \ldots \xi _k ]$
of $ \alpha$ arises this way: its successive partial sums give the exponents, so
that it corresponds to the factoring</p>
<p>$$ \dfrac{1 - x^{p^\alpha}}{1-x^{p^{\alpha - \xi _1}}} \cdot \dfrac{1 - x^{p^{\alpha -
\xi _1} } }{1-x^{p^{\alpha - \xi _1 - \xi_2} } }
\cdot (\cdots) \cdot
\dfrac{1 - x^{p^{\alpha - \xi _1 - \cdots - \xi_{k-1}}}}{1-x}. $$</p>
<p>Thus there is a bijection between the factorings of $ \dfrac{1 -
x^{p^{\alpha}}}{1-x}$ and the perfect partitions of $ p^\alpha -1$, and another
with the compositions of the number $ \alpha $. Thus we have proven that the
number of perfect partitions of the number $ p^\alpha -1 $ is the same as the
number of compositions of the number $ \alpha $. And we are done with this
claim!</p>https://davidlowryduda.com/perfect-partitionsMon, 28 Mar 2011 03:14:15 +0000About 3 weeks behind silly hypehttps://davidlowryduda.com/about-3-weeks-behindDavid Lowry-Duda<p>On 8 March 2011, Dr. Thomas Weiler and graduate fellow Chiu Man Ho of
Vanderbilt put a <a href="http://arxiv.org/PS_cache/arxiv/pdf/1103/1103.1373v1.pdf"
target="_blank">paper</a> about another possibility of achieving a sort of time
travel. This apparently got a good deal of press at the time, as both CBS and
<a href="http://www.upi.com/Science_News/2011/03/15/Physicists-propose-collider-time-travel/UPI-51411300243945/"
target="_blank">UPI </a>actually picked up the story and ran with it.</p>
<p>Why do I mention this? It is most certainly not because I have a dream or hope
of time travel - quite the opposite really. In the past, I have talked of how
surprised I was at the lack of hype coming out of the <a
href="http://en.wikipedia.org/wiki/Lhc" target="_blank">LHC</a>. The last
terrible bit I heard was a sort of rogue media assault on the possibility of
the LHC creating a black hole and thereby destroying everything! But that was
years ago and not stirred up by the high energy physics community.
<sup>1</sup>
<span class="aside"><sup>1</sup>As an aside, it did provide the very comical <a
href="http://hasthelargehadroncolliderdestroyedtheworldyet.com/"
target="_blank">http://hasthelargehadroncolliderdestroyedtheworldyet.com/</a>,
which includes a very simple answer and funny source code. By the way, no - as
far as we can tell, it hasn't yet destroyed the world.</span></p>
<p>Let's be clear - I don't think that hype is bad. Dr. Weiler himself noted the
speculative interest in this idea and that it's perhaps not the most likely
theory. And it doesn't contradict <a href="http://en.wikipedia.org/wiki/M-theory" target="_blank">M-Theory</a>,
apparently. I know nothing of this, so I can't comment. But I can say that such
fanciful papers are wonderful. This sort of free form play is liberating, and
exactly the same sort of thing that drew me into science. What can we say about
the world around us that goes along with what we know? Whether it's correct or
not is something that can be explored, but it's just an idea.</p>
<p>For those who don't want to read the article, Dr. Weiler and Chiu Man Ho allude
to the possibility of transferring Higgs singlets (a relative to the
as-yet-only-hypothesized Higgs Boson) to a previous time. So no, unfortunately
we cannot yet fix the problems of our past.</p>
<p>Nonetheless, there should be more hype about the LHC. The test schedule on the
collider is becoming more intense all the time. Very exciting.</p>https://davidlowryduda.com/about-3-weeks-behindSun, 27 Mar 2011 03:14:15 +0000A late pi day posthttps://davidlowryduda.com/a-late-pi-day-postDavid Lowry-Duda<p>As this blog started after March 14th, it hasn't paid the proper amount of
attention to $ \pi $. I only bring this up because I have just been introduced
to Christopher Poole's intense dedication to $ \pi $. It turns out that
Christopher has set up a $ \pi $-phone, i.e. a phone number that you can call
if you want to hear $ \pi $. It will literally read out the digits of $ \pi $ to
you. I've only heard the first 20 or so digits, but perhaps the more
adventurous reader will find out more. The number is 253 243-2504. Call it if
you are ever in need of some $ \pi $.</p>
<p>Of course, I can't leave off on just that - I should at least mention two other
great $ \pi $-day attractions (late as they are). Firstly, anyone unfamiliar with
the $ \tau $ movement should <a href="http://tauday.com/" target="_blank">read
up on it</a> or check out Vi Hart's<a
href="http://www.youtube.com/watch?v=jG7vhMMXagQ" target="_blank"> pleasant
video</a>. I also think it's far more natural to relate the radius to the
circumference rather than the diameter to the circumference (but it would mean
that the area formula becomes less pleasant than $ \pi r^2 $).</p>
<p>Finally, there is a great <a href="http://bcove.me/59mhi5v8"
target="_blank">musical interpretation</a> and celebration of $ \pi $. What if
you did a round (or fugue) based on interpreting each digit of $ \pi$ as a
different musical note? Well, now you can find out!</p>
<p>Until $ \tau $ day!</p>https://davidlowryduda.com/a-late-pi-day-postSat, 26 Mar 2011 03:14:15 +0000Containers of Water IIhttps://davidlowryduda.com/containers-of-water-iiDavid Lowry-Duda<p>In a <a href="/containers-of-water">previous post</a> I
considered the following two questions:</p>
<div class="question">
<p>What sets $ S $ maximize $ |{\bf F}(S;p)| $ for various $ p$?
What sets $ S $ maximize $ \lfloor {\bf F}(S; p) \rfloor $ for various $ p$?</p>
</div>
<p>I then changed the first question, which I think is perhaps not so interesting, to the following:</p>
<div class="question">
<p>What sets $S$ maximize $|{\bf F}(S;p)|_c$, where $|\cdot|_c$ denotes the largest connected interval of results?</p>
</div>
<p>Let's explore a few cases to see what these answers might look like.</p>
<h3>One Bottle</h3>
<p>With only one bottle, the game is simple. The bottle is either full or empty,
and so there is little to explore. Clearly, any bottle will maximize $ |{\bf
F}|_c$ and choosing a bottle of size 1 will maximize $ \lfloor {\bf F}
\rfloor$, at 2.</p>
<h3>Two Bottles</h3>
<p>With two bottles, the game already becomes interesting. If we restrict
ourselves to only 1 pour, then we would want a 1-bottle and a 2-bottle so that
$ \lfloor {\bf F} \rfloor = 3.$ But say we have 2 pours with which to work - it
might seem like a good idea to stay with the 1-bottle and the 2-bottle, as we
could fill them both so that $ \lfloor {\bf F}(1, 2; 2) \rfloor = 4,$ but this
is not optimal. Instead, consider a 1-bottle and a 3-bottle. We could fill the
1-bottle or we could fill the 3-bottle with our first pour. With a full
3-bottle, we could pour it into the 1-bottle to get 2 liters left in the
3-bottle. Or we can fill both bottles, getting 4 liters. So we know that $
\lfloor {\bf F} (S;2) \rfloor \geq 5.$</p>
<p>Is that the maximum? How might we know? I claim that yes, 5 is the maximum for
2 bottles and 2 pours. How might we see this? Suppose that we have one a-bottle
and one b-bottle, and that b > a. Then with our first pour, we can either
fill the a-bottle or fill the b-bottle. With our second pour, we might fill the
other bottle, or we might pour one bottle into the other. In other words, we
can get the amounts a, b, a+b, and b-a (by pouring the b-bottle into the
a-bottle, yielding a full a-bottle and the amount b-a in the b-bottle). Note
that pouring the a-bottle into the b-bottle doesn't accomplish anything in this
case. As these are all the possibilities, at most 4 amounts can be made. Thus
5 is, in fact, the maximum.</p>
<p>One might ask, is this the unique set that guarantees 4 consecutive amounts
that can be filled? Yes, and here's a sketch of why. If the smaller number is
3 or greater, then when it's added to the larger number a gap of more than 4
emerges. So the smaller number is 1 or 2. But then the larger number can't be
larger than 4. Then there are only a few possibilities left.</p>
<p>Let's look at one more case for 2 bottles as further evidence of what an
interesting set of questions we have. With three pours, the possible amounts
are a, b, a + b, b-a, and a + a (gotten from filling the a-bottle, pouring it
into the b-bottle, and filling the a-bottle again). Upon a little
consideration, we see that one 2-bottle and one 3-bottle gives the resulting
amounts 1, 2, 3, 4, and 5 - the maximal set.</p>
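These small cases can be checked exhaustively by a short search. The sketch below (names mine) does a breadth-first search over bottle states, where a pour fills a bottle from the source, empties a bottle into the source, or pours one bottle into another. I count as "measurable" any sum of a subset of bottle contents in a reachable state; that reading of the rules is my assumption.

```python
from itertools import combinations

def amounts(sizes, pours):
    # breadth-first search over tuples of bottle contents
    states = {tuple(0 for _ in sizes)}
    for _ in range(pours):
        new = set()
        for st in states:
            for i in range(len(sizes)):
                new.add(st[:i] + (sizes[i],) + st[i+1:])  # fill bottle i
                new.add(st[:i] + (0,) + st[i+1:])         # empty bottle i
                for j in range(len(sizes)):
                    if i != j:                            # pour i into j
                        t = min(st[i], sizes[j] - st[j])
                        s = list(st)
                        s[i], s[j] = s[i] - t, s[j] + t
                        new.add(tuple(s))
        states |= new
    result = set()
    for st in states:
        for r in range(1, len(sizes) + 1):
            for combo in combinations(st, r):
                result.add(sum(combo))
    result.discard(0)
    return result

assert amounts((1, 3), 2) == {1, 2, 3, 4}   # the two-bottle, two-pour maximum
assert amounts((2, 3), 3) == {1, 2, 3, 4, 5}  # three pours: the maximal set
```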
<p>I haven't yet looked into other bottle amounts that yield 5 consecutive
results, nor into the cases where there are more pours.</p>
<p>Actually, on second thought, the case for four or more pours is very easy. The
thing is that the only way for the additional pours to matter is if the
b-bottle is large enough to hold many multiples of the a-bottle. But in this
case, the only way that consecutive amounts can be achieved is if the smaller
amount is 1. Thus for any odd number of pours 2n+1 = p and large bottle of size
n, one can get all amounts from 1 to p+1 in at most p pours. But as my first
post demonstrated, this is also perhaps not optimal.</p>
<p>I might think about this later.</p>
<h3>Three Bottles</h3>
<p>For three bottles, the game quickly becomes more complicated. For 1 pour, of
course, it's trivial and the set 1, 2, 3 maximizes $ \lfloor {\bf F} \rfloor$
at 4.</p>
<p>For 2 pours, it seems that the sets (1, 3, 6) and (2, 4, 5) both achieve $
\lfloor {\bf F} \rfloor = 8$, but perhaps neither is optimal. With bottles of size a,
b, and c, with c > b > a, the possibilities are a, b, c, a+b, a+c, b+c,
c-a, c-b, and b-a. In this sense, we could hope for 9 consecutive amounts. Is
it possible?</p>
<p>3 pours presents interesting opportunities, but I haven't looked into them
fully, either.</p>https://davidlowryduda.com/containers-of-water-iiThu, 24 Mar 2011 03:14:15 +00002401 - Missing recitationhttps://davidlowryduda.com/2401-missing-recitationDavid Lowry-Duda<p>As I went to visit Brown on the 16th-18th, I had my friend Matt cover
recitation. As we have started considering double and triple integrals, and
iterated integrals in particular, I thought I could point out a very good site
for brushing up on material. I think it can act as a wonderful supplement to
the lecture and recitation material. The Khan Academy is an online information
center built around the idea that video presentations and video lectures can
give intuition without adding any pressure - you can rewatch anything you've
missed, repeat important parts, etc. all without feeling like you're wasting
someone's time. While most of the Khan material is aimed at primary and
secondary school, they happen to have a multivariable calculus section
(although it's far from sufficient for Tech's course material - don't think of this
as a replacement, but instead as merely a supplementary way to build
intuition).</p>
<p>Here are links to the Khan Academy website, and links to 5 lectures that I
think are relevant to material I would have covered.</p>
<ol>
<li><a href="http://www.khanacademy.org" target="_blank">Khan website</a></li>
<li><a href="http://www.khanacademy.org/video/double-integral-1" target="_blank">Intro to iterated integrals</a></li>
<li><a href="http://www.khanacademy.org/video/double-integrals-2" target="_blank">II</a></li>
<li><a href="http://www.khanacademy.org/video/double-integrals-3" target="_blank">III </a></li>
<li><a href="http://www.khanacademy.org/video/double-integrals-4"
target="_blank">IV </a>, and</li>
<li><a href="http://www.khanacademy.org/video/double-integrals-5" target="_blank">V</a></li>
</ol>
<p>I encourage you to check them out before we meet again for recitation.</p>https://davidlowryduda.com/2401-missing-recitationWed, 23 Mar 2011 03:14:15 +0000Containers of Water - maybe an interesting questionhttps://davidlowryduda.com/waterDavid Lowry-Duda<p>Consider the old middle-school type puzzle question: Can you measure 6 quarts
of water using only a 4 quart bottle and a 9 quart bottle? Yes, you can, if
you're witty. Fill the 9 quart bottle. Then fill the 4 quart bottle from the 9
quart bottle, leaving 5 quarts in the 9-bottle. Empty the 4-bottle, and fill it
again from the 9-bottle, leaving only 1 quart in the 9-bottle. Again empty the
4-bottle, and pour the 1 quart from the 9-bottle into it. Now fill the 9-bottle
one last time and pour into the 4-bottle until it's full - at this time, the
4-bottle has 4 quarts and the 9 bottle has 6 quarts. Aha!</p>
<p>But consider the slightly broader question: how many values can you get? We see
we can get 1, 4, 5, 6, and 9 already. We can clearly get 8 by filling the
4-bottle twice and pouring this bottle into the 9. With this, we could get 3 by
filling the 4-bottle again and then pouring as much as possible into the
9-bottle when it's filled with 8 - as this leaves 3 quarts in the 4-bottle.
Finally, we can get 2 by taking 6 quarts and trying to pour it into an empty
4-bottle. (With 2, we get 7 by putting the 2 into the 4-bottle and then trying
to pour a full 9-bottle into the 4-bottle). So we can get all numbers from 1 to
9. If we were really picky, we could note that we can then easily extend this
to getting all numbers from 0 to 13 quarts, albeit not all in one container.</p>
<p>If you go through these, it turns out it is possible to get any number between
0 and 13 quarts with a 4-bottle and a 9-bottle in at most 10 pours, where a
pour includes filling a bottle from the water source (presumably infinite),
pouring from one bottle into another, or emptying the water into the water
source. Now I propose the following question:</p>
<div class="question">
<p>Given only 2 bottles (of an integer size) and up to 10 pours, what is the
largest N s.t. we can measure out 0, ... , N quarts (inclusive)?</p>
<p>As is natural, let's extend this a bit further. For any subset of the natural
numbers $ S$ and for a number of pours $ p$, define
$ {\bf F}(S; p) :=$ the set $ R$ of possible results after using at most $ p $
pours from containers of sizes in $ S$</p>
For example, we saw above that $ {\bf F}(4,9;10) = (0, 1, \ldots, 13).$ But is
this maximal for two containers and 10 pours? If we only allow 5 pours, the set
$ R$ reduces to size 7; and if we consider the smallest result not attainable,
we get 2 quarts. That is very small! I'm tempted to use $ \lfloor {\bf F}
\rfloor$ to denote the smallest unattainable positive integer amount, but
that's only a whim.</p>
<p>Ultimately, this leads to two broad, open (as far as I know) questions:</p>
<div class="question">
<p>What sets $ S $ maximize $ |{\bf F}(S;p)| $ for various $ p$?
What sets $ S $ maximize $ \lfloor {\bf F}(S; p) \rfloor $ for various $ p$?</p>
</div>
<p>Perhaps these have already been explored? I don't even know what they might be called.</p>
</div>https://davidlowryduda.com/waterTue, 22 Mar 2011 03:14:15 +0000Abouthttps://davidlowryduda.com/aboutDavid Lowry-Duda<figure class="float-right" style="width:350px;">
<img src="/wp-content/uploads/2023/07/beach.jpg" width="300"
alt="A photo of David on a beach" title="Me on a beach. Photo by Magda Duda." />
</figure>
<p>I'm David Lowry-Duda. I'm a Senior Research Scientist at
<a href="https://icerm.brown.edu/">ICERM</a>.</p>
<p>I study mathematics. My research falls mostly inside number theory, arithmetic
geometry, cryptography, and computation. I study these topics using tools from
modular forms, complex analysis, Fourier analysis, algebraic geometry, and
software. I devote a lot of my time to developing research math software.</p>
<p>I'm fortunate to be supported by the
<a href="https://simonscollab.icerm.brown.edu/">Simons Collaboration in Arithmetic Geometry, Number Theory, and
Computation</a>.</p>
<h3>Mathematical Potential</h3>
<p>I subscribe to the axioms laid out<sup>1</sup>
<span class="aside"><sup>1</sup>
included in <a href="https://www.ams.org/publications/journals/notices/201610/rnoti-p1164.pdf">Todas Cuentan</a>
in the AMS Notices</span>
by Federico Ardila:</p>
<ol>
<li>Mathematical potential is distributed equally among different groups,
irrespective of geographic, demographic, and economic boundaries.</li>
<li>Everyone can have joyful, meaningful, and empowering mathematical experiences.</li>
<li>Mathematics is a powerful, malleable tool that can be shaped and used
differently by various communities to serve their needs.</li>
<li>Every student deserves to be treated with dignity and respect.</li>
</ol>
<h3>Previous Studies</h3>
<p>I studied at Georgia Tech for undergrad. I studied applied mathematics,
international affairs, and modern languages. I was not immediately set on
mathematics and instead took my time to determine what I liked. I studied
abroad in Spain and Mexico, worked in the European Parliament in Brussels, and
then attended the Budapest Semester in Mathematics program. By the end, my
interests favored math and I began to focus on number theory.</p>
<p>I went to Brown for graduate school, studying analytic number theory. I began
to work on the <a href="http://lmfdb.org/">LMFDB</a> during my postdoc at the University
of Warwick, which is when I began to incorporate computational number theory
into my work.</p>
<h3>About this site</h3>
<p>I write about research, teaching, programming, math, and what interests me.
I include discussions about my research and related work. Some of these posts
are aimed at my collaborators. There are many mathematical tidbits and bits of
context that are useful to know, but that don't fit in papers.</p>
<p>When I'm teaching, I use this site to distribute supplemental materials. These
remain available for everyone — I hope they're useful.</p>
<p>This site has no ads, no tracking scripts, no tracking pixels, and almost no
external scripts (except mathjax or an occasional visualization library).</p>
<p>The photo at the top of this page is from my sister-in-law-in-law, Magda.</p>
<p>If you enjoy my work and wish to support this site in a small way, you might
like to <a href="https://www.buymeacoffee.com/davidlowryduda">buy me a coffee</a>.</p>
<h3>Contacting me</h3>
<p>If you have a question or comment about my work or site, send me an email at
<a href="mailto:david@lowryduda.com">david@lowryduda.com</a>. If you prefer
official-sounding emails,
<a href="mailto:davidlowryduda@brown.edu">davidlowryduda@brown.edu</a> also reaches me.</p>
<p>I'm a <a href="https://math.stackexchange.com/users/9754/davidlowryduda">moderator</a> at
<a href="https://math.stackexchange.com/">math.stackexchange</a> and am present in various
other fora and sites as something similar to "mixedmath" or "davidlowryduda".
<strong>Please note that I don't respond to moderator inquiries here or over email</strong>,
but I am <em>very happy</em> to talk about math.</p>
<p>I have a PGP key<sup>2</sup>
<span class="aside"><sup>2</sup>see <a href="https://xkcd.com/1181/">relevant xkcd</a>
</span>
associated to <code>davidlowryduda@davidlowryduda.com</code> available from
keyservers with fingerprint <code>8369 7536 2D4F 19C1 8357 DDBD 42E1 5895 BF7F
0291</code>. I'm also davidlowryduda on <a href="https://keybase.io/davidlowryduda">keybase</a>.</p>
<h3>Disclaimer</h3>
<p>Disclaimer: Any opinions, findings, and conclusions or recommendations
expressed in this material are those of the author(s) and do not necessarily
reflect the views of the Simons Collaboration, National Science Foundation, or
any other institution.</p>https://davidlowryduda.com/aboutSun, 20 Mar 2011 03:14:15 +0000Math - dealing with sin (and cos) every dayhttps://davidlowryduda.com/first-postDavid Lowry-Duda<p>I start this blog just as I am finishing up my undergraduate degrees and
heading for grad school in math. I will keep a record of interesting problems
and facts that come up along the way, as well as listing open problems (as far
as I can tell) and updates on my research.</p>
<p>To start, I give credit to a far more famous David:</p>
<p>"Mathematics is a game played according to certain simple rules with
meaningless marks on paper." - David Hilbert</p>https://davidlowryduda.com/first-postSun, 20 Mar 2011 03:14:15 +0000Fun limithttps://davidlowryduda.com/fun-limitDavid Lowry-Duda<p>Recently, a friend of mine, Chris, posed the following question to me:</p>
<p>Consider the sequence of functions, $ f_0 (x) = x, f_1 (x) = \sin (x),
f_2 (x) = \sin{(\sin (x)) }.$ For what values $ x \in {\bf R}$ does the
limit of this sequence exist, and what is that limit?</p>
<p>After a few moments, it is relatively easy to convince oneself that for all $ x
$, this sequence converges to $ 0 $, but a complete proof seemed tedious. Chris
then told me to consider the concept of fixed points and a simple solution
would arise.</p>
<p>If such a sequence were to converge to a limit, then it could only do so at a
fixed point of that sequence, i.e. a point $ x$ such that $ f_1 (x) = f_2 (x) =
\cdots = f_n (x) = \cdots = L$, and in that case, the limit would be $ L $. What are
the fixed points of the $ \sin $ composition? Only $ 0 $! Then it takes only the
simple exercise to see that the sequence does in fact have a limit for every x
(one might split the cases for positive and negative angles, in which case one
has a decreasing/increasing sequence that is bounded below/above for example).</p>
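A quick numerical experiment agrees with this (the function name is mine; the convergence rate, roughly $\sqrt{3/n}$, is a classical fact I state without proof):

```python
import math

def iterate_sin(x, n):
    # compute f_n(x) = sin(sin(...sin(x)...)), n applications of sin
    for _ in range(n):
        x = math.sin(x)
    return x

# every starting point drifts slowly toward the unique fixed point 0
for x0 in (3.0, 1.0, -2.5):
    assert abs(iterate_sin(x0, 10000)) < 0.02
```

The slowness of the convergence is itself a fun follow-up exercise.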
<p>A cute little exercise, I think.</p>https://davidlowryduda.com/fun-limitSun, 20 Mar 2011 03:14:15 +0000Pizzahttps://davidlowryduda.com/pizzaDavid Lowry-Duda<p>Suppose you had a pizza, and you were asked to find the volume of the pizza.</p>
<p>Let's assume, with huge loss of generality, that our pizza is approximately a
right cylinder, and call its height $a$ and its radius $z$ (it's well known that
pizzas cover everything from a to z, so this is appropriate). The volume of a
right cylinder is $\pi \cdot \mathrm{radius}^2 \cdot \mathrm{height}$, which in this case is</p>
<p>\begin{equation} \pi z z a. \end{equation}</p>https://davidlowryduda.com/pizzaSun, 20 Mar 2011 03:14:15 +0000Mathhttps://davidlowryduda.com/mathDavid Lowry-Duda<p>Most of this site is filled with topics related to my interests in mathematics.
For other mathematicians, I suspect the best way to rapidly see what sort of
mathematics I'm interested in is to see <a href="/research/">my research page</a>, where I
link to preprints of my work and link to related pages on talks and research
discussion.</p>
<p>More broadly, see <a href="/categories/Math/">posts under the Math category</a>.</p>
<p>Below, I give a broad introduction to my research.<span class="aside">written for a
scientifically literate layperson</span></p>
<h1>Introduction to my Research</h1>
<p>I study number theory and arithmetic geometry. These might sound like two
different subjects, but they are very strongly connected. Concretely, one problem I'm
interested in is the following.</p>
<div class="question">
<p>If you draw a circle of radius $R$ on a sheet of graph paper, how many little
boxes of the graph paper are contained inside the circle?</p>
</div>
<p>One way to study this problem is to <em>experimentally</em> model it: choose lots of
different radii $R$ and try to detect a pattern. Performing this experiment
suggests that there are approximately $\pi R^2$ boxes, if we suppose that
the boxes are squares of side length $1$. Then the natural question becomes
<em>how close to $\pi R^2$ is it?</em></p>
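<p>This experiment is easy to reproduce. The sketch below (my illustration, not part of the original text) counts the lattice points $(m, n)$ with $m^2 + n^2 \le R^2$, a standard stand-in for counting grid boxes (the two counts differ only by a boundary term of size roughly $R$), and compares the count to $\pi R^2$:</p>

```python
import math

def lattice_points_in_circle(R):
    """Count integer points (m, n) with m*m + n*n <= R*R."""
    R_int = int(R)
    count = 0
    for m in range(-R_int, R_int + 1):
        # In column m, the valid n satisfy |n| <= sqrt(R^2 - m^2).
        width = math.isqrt(R_int * R_int - m * m)
        count += 2 * width + 1
    return count

for R in (10, 100, 1000):
    N = lattice_points_in_circle(R)
    print(R, N, N - math.pi * R * R)  # the error stays remarkably small
```

<p>For $R = 10$ the count is $317$ against $\pi R^2 \approx 314.16$, and the discrepancy grows far more slowly than the boundary length $2\pi R$ would suggest, which is exactly the mystery described above.</p>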
<p>Further experimentation suggests that $\pi R^2$ is actually a very good
approximation,<sup>1</sup>
<span class="aside"><sup>1</sup>in the sense that the size of the error is no more than
$\sqrt{R}$ or so.</span>
but despite our efforts we don't actually <em>know</em> if
this is true or not. Interestingly, it's possible to slightly change this
problem in many ways and get very similar answers. If we use rectangular grids
instead of square grids, but the area of the rectangles is also $1$, then
essentially similar results seem to hold. Or if we scale up an ellipse (or
almost any reasonable curved shape) instead of a circle, again the results look
very similar. But if we use a shape with any straight lines instead of a
circle, then the behavior changes radically.</p>
<p>Problems like this have been studied for their intrinsic interest for thousands
of years. The similarities in behavior in different situations suggest
underlying structure, and I try to investigate that structure.</p>
<p>Number theory is a funny subject. Unlike many other areas of math, it's named
after the <em>subject matter</em> instead of the <em>tools used</em>. It's possible to study
the same problem from many different points of view.</p>
<p>For example, it turns out that there is a function closely related to the
Riemann zeta function
\begin{equation*}
\zeta(s) = \sum_{n = 1}^\infty \frac{1}{n^s}
\end{equation*}
that encodes information about the number of graph paper squares contained
inside a circle of radius $R$. This information is encoded in how the function
behaves as the variable $s$ varies. This type of relationship was first noticed in the
mid 1800s, when Riemann described how one could use the behavior of $\zeta(s)$
as a function of $s$ to count the number of prime numbers up to $X$.</p>
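<p>One can get a concrete feel for $\zeta(s)$ numerically. The snippet below (an illustration of mine, not from the original text) sums the series at $s = 2$, where Euler famously showed that $\zeta(2) = \pi^2/6$:</p>

```python
import math

def zeta_partial(s, terms):
    """Partial sum of the Dirichlet series defining zeta(s)."""
    return sum(1 / n**s for n in range(1, terms + 1))

# The tail beyond N terms is about 1/N, so a million terms gives
# roughly six digits of agreement with pi^2 / 6.
print(zeta_partial(2, 10**6), math.pi**2 / 6)
```

<p>Of course, the analytic information Riemann exploited lives in $\zeta(s)$ as a function of a complex variable, far beyond what a partial sum shows, but the series itself is this concrete.</p>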
<p>The "name" of the function associated to the circle problem above is
\begin{equation*}
L(s) := \sum_{n = 1}^\infty \frac{r_2(n)}{n^s},
\end{equation*}
where $r_2(n)$ means the number of ways of writing $n$ as a sum of $2$ squares.</p>
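<p>The coefficients $r_2(n)$ are completely concrete. A brute-force sketch (my addition, assuming nothing beyond the definition just given) computes them and checks that their partial sums recover the lattice-point count for the circle of radius $R$, which is what ties $L(s)$ to the circle problem:</p>

```python
import math

def r2(n):
    """Ways to write n as a^2 + b^2, counting signs and order."""
    count = 0
    for a in range(-math.isqrt(n), math.isqrt(n) + 1):
        b_squared = n - a * a
        b = math.isqrt(b_squared)
        if b * b == b_squared:
            # (a, b) and (a, -b) are distinct representations when b > 0.
            count += 2 if b > 0 else 1
    return count

# r2(1) = 4 from (±1, 0) and (0, ±1); r2(5) = 8 from (±1, ±2) and (±2, ±1).
print([r2(n) for n in range(1, 11)])

# Summing r2(n) over n <= R^2 counts the nonzero lattice points inside
# the circle of radius R; adding 1 for the origin gives the full count.
R = 10
print(1 + sum(r2(n) for n in range(1, R * R + 1)))
```

<p>For $R = 10$ this recovers the count $317$, so the Dirichlet series coefficients of $L(s)$ really do carry the circle-problem data.</p>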
<p>A common thread in my research is to associate functions like $\zeta(s)$ or
$L(s)$ to some mathematical question and then to use complex analysis to study
this function, and thus learn about the question.</p>
<p>It turns out that "functions like $\zeta(s)$ or $L(s)$" are often called
$L$-functions, and there is a very deep set of ideas linking problems in
arithmetic and geometry to the analytic behaviors of $L$-functions.</p>
<p>Compared to many other number theorists who study similar subject matter, my
points of view are more <em>complex analytic</em> (meaning that I extract information
from the behavior of the variable $s$ using complex analysis), <em>real analytic</em>
(meaning that I also use tools from Fourier analysis and real analysis), and
<em>computational</em>. For the last several years, I've spent a lot of time computing
$L$-functions that are likely to be relevant to arithmetic applications that we
haven't considered yet.</p>
<p>These objects are computed and described in the $L$-function and modular form
database (<a href="https://lmfdb.org">LMFDB</a>).</p>
<hr>
<p>My <a href="/talk-how-computation-and-experimentation-inform-research/">slides from a talk about how computation informs
research</a> give
additional detail on many of the topics broached here.</p>https://davidlowryduda.com/mathSat, 01 Jan 2000 03:14:15 +0000