We use `lcalc` to compute the values of half-integral weight $L$-functions.
We will be using lcalc through sage. Unfortunately, we are going to be using some functionality which sage doesn’t expose particularly nicely, so it will feel a bit silly. Nonetheless, using sage’s distribution will prevent us from needing to compile it on our own (and there are a few bugfixes present in sage’s version).

Some $L$-functions are inbuilt into lcalc, but not half-integral weight $L$-functions. So it will be necessary to create a datafile containing the data that lcalc will use to generate its approximations. In short, this datafile will describe the shape of the functional equation and give a list of coefficients for lcalc to use.

It is assumed that the $L$-function is normalized in such a way that

$$\begin{equation}

\Lambda(s) = Q^s L(s) \prod_{j = 1}^{A} \Gamma(\gamma_j s + \lambda_j) = \omega \overline{\Lambda(1 - \overline{s})}.

\end{equation}$$

This involves normalizing the functional equation to be of shape $s \mapsto 1-s$. Also note that $Q$ will be given as a real number.

An annotated version of the datafile you should create looks like this:

```
2 # 2 means the Dirichlet coefficients are reals
0 # 0 means the L-function isn't a "nice" one
10000 # 10000 coefficients will be provided
0 # 0 means the coefficients are not periodic
1 # num Gamma factors of form \Gamma(\gamma s + \lambda)
1 # the \gamma in the Gamma factor
1.75 0 # \lambda in Gamma factor; complex valued, space delimited
0.318309886183790 # Q. In this case, 1/pi
1 0 # real and imaginary parts of omega, sign of func. eq.
0 # number of poles
1.000000000000000 # a(1)
-1.78381067250408 # a(2)
... # ...
-0.622124724090625 # a(10000)
```

If there is an error, lcalc will usually fail silently. (Bummer.) Note that in practice, **datafiles should only contain numbers and should not contain comments.** This annotated version is for convenience, not for use.

You can find a copy of the datafile for the unique half-integral weight cusp form of weight $9/2$ on $\Gamma_0(4)$ here. This uses the first 10000 coefficients — it’s surely possible to use more, but this was the setup I first tested.

In order to create datafiles for other cuspforms, it is necessary to compute the coefficients (presumably using magma or sage) and then to populate a datafile. A good exercise would be to recreate this datafile using sage-like methods.

One way to create this datafile is to explicitly write down the q-expansion of the modular form, if we happen to know a convenient expression. For us, $f = \eta(2z)^{12} \theta(z)^{-3}$, so one way to create the coefficients is to do something like the following.

```
num_coeffs = 10**5 + 1
weight = 9.0 / 2.0
R.<q> = PowerSeriesRing(ZZ)
theta_expansion = theta_qexp(num_coeffs)
# Note that qexp_eta omits the q^(1/24) factor
eta_expansion = qexp_eta(ZZ[['q']], num_coeffs + 1)
eta2_coeffs = []
for i in range(num_coeffs):
    if i % 2 == 1:
        eta2_coeffs.append(0)
    else:
        eta2_coeffs.append(eta_expansion[i//2])
eta2 = R(eta2_coeffs)
g = q * ( (eta2)**4 / (theta_expansion) )**3
coefficients = g.list()[1:]  # skip the 0 coeff
print(coefficients[:10])
normalized_coefficients = []
for idx, elem in enumerate(coefficients, 1):
    normalized_coeff = 1.0 * elem / (idx ** (.5 * (weight - 1)))
    normalized_coefficients.append(normalized_coeff)
print(normalized_coefficients[:10])
```
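To go from normalized coefficients to an lcalc datafile, one can write the header and coefficient list directly. Here is a minimal sketch in plain Python — the header values copy the weight-$9/2$ example above, `normalized_coefficients` is a placeholder list standing in for the actual coefficients, and the filename is just an example:

```python
import math

# Placeholder: in practice this is the long list computed above.
normalized_coefficients = [1.0, -1.78381067250408, -0.622124724090625]

header = [
    "2",                                # Dirichlet coefficients are real
    "0",                                # the L-function isn't a "nice" one
    str(len(normalized_coefficients)),  # number of coefficients provided
    "0",                                # coefficients are not periodic
    "1",                                # one Gamma factor
    "1",                                # gamma in Gamma(gamma*s + lambda)
    "1.75 0",                           # lambda; real and imaginary parts
    "{:.15f}".format(1 / math.pi),      # Q; here 1/pi
    "1 0",                              # omega, sign of the functional equation
    "0",                                # number of poles
]

with open("g1_lcalcfile.txt", "w") as f:
    f.write("\n".join(header) + "\n")
    for coeff in normalized_coefficients:
        f.write("{:.15f}\n".format(coeff))
```

Remember that the real file must contain only the numbers, with no comments.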

Suppose that you have a datafile called `g1_lcalcfile.txt` (for example). Then to use this from sage, you point lcalc within sage to this file. This can be done through calls such as

```
# Computes L(0.5 + 0i, f)
lcalc('-v -x0.5 -y0 -Fg1_lcalcfile.txt')
# Computes L(s, f) from 0.5 to (2 + 7i) at 1000 equally spaced samples
lcalc('--value-line-segment -x0.5 -y0 -X2 -Y7 --number-samples=1000 -Fg1_lcalcfile.txt')
# See lcalc.help() for more on calling lcalc.
```

The key in these is to pass along the datafile through the `-F` argument.

On the `comp.lang.python` mailing list, I saw an interesting question concerning the behavior of the default sorting algorithm in python. This led to this post.
Python uses timsort, a clever hybrid sorting algorithm borrowing ideas from merge sort and (binary) insertion sort. A major idea in timsort is to use the structure of naturally occurring runs (consecutive elements in the list that are either monotone increasing or monotone decreasing) when sorting.

Let’s look at the following simple list.

`10, 20, 5`

A simple sorting algorithm is *insertion sort*, which just advances through the list and inserts each number into the correct spot. More explicitly, insertion sort would

- Start with the first element, `10`. As a list with one element, it is correctly sorted tautologically.
- Now consider the second element, `20`. We insert this into the correct position in the already-sorted list of previous elements. Here, that just means that we verify that `20 > 10`, and now we have the sorted sublist consisting of `10, 20`.
- Now consider the third element, `5`. We want to insert this into the correct position in the already-sorted list of previous elements. A naively easy way to do this is to scan the list from the right or from the left, and insert into the correct place. For example, scanning from the right would mean that we compare `5` to the last element of the sublist, `20`. As `5 < 20`, we shift left and compare `5` to `10`. As `5 < 10`, we shift left again. As there is nothing left to compare against, we insert `5` at the beginning, yielding the sorted list `5, 10, 20`.

How many comparisons did this take? This took `20 > 10`, `5 < 20`, and `5 < 10`. This is three comparisons in total.

We can see this programmatically as well. Here is one implementation of insertion_sort, as described above.

```
def insertion_sort(lst):
    '''
    Sorts `lst` in-place. Note that this changes `lst`.
    '''
    for index in range(1, len(lst)):
        current_value = lst[index]
        position = index
        while position > 0 and lst[position - 1] > current_value:
            lst[position] = lst[position - 1]
            position = position - 1
        lst[position] = current_value
```

Let’s also create a simple `Number` class, which is just like a regular number, except that anytime a comparison is done it prints out the comparison. This will count the number of comparisons made for us.

```
class Number:
    def __init__(self, value):
        self.value = value

    def __str__(self):
        return str(self.value)

    def __repr__(self):
        return self.__str__()

    def __lt__(self, other):
        if self.value < other.value:
            print("{} < {}".format(self, other))
            return True
        print("{} >= {}".format(self, other))
        return False

    def __eq__(self, other):
        if self.value == other.value:
            print("{} = {}".format(self, other))
            return True
        return False

    def __gt__(self, other):
        return not ((self == other) or (self < other))

    def __le__(self, other):
        return (self < other) or (self == other)

    def __ge__(self, other):
        return not (self < other)

    def __ne__(self, other):
        return not (self == other)
```

With this class and function, we can run

```
lst = [Number(10), Number(20), Number(5)]
insertion_sort(lst)
print(lst)
```

which will print

```
10 < 20
20 >= 5
10 >= 5
[5, 10, 20]
```

These are the three comparisons we were expecting to see.

Returning to python’s timsort — what happens if we call python’s default sorting method on this list? The code

```
lst = [Number(10), Number(20), Number(5)]
lst.sort()
print(lst)
```

prints

```
20 >= 10
5 < 20
5 < 20
5 < 10
[5, 10, 20]
```

There are *four* comparisons! And weirdly, the method checks that `5 < 20` twice in a row. What’s going on there?^{1}

This was at the core of the thread on comp.lang.python. Why are there extra comparisons in cases like this?

Poking around the implementation of timsort taught me a little bit more about timsort.^{2}

Timsort approaches this sorting task in the following way.

- First, timsort tries to identify how large the first run within the sequence is. So it keeps comparing terms until it finds one that is out of order. In this case, it compares `20` to `10` (finding that `20 > 10`, and thus the run is increasing), and then compares `5` to `20` (finding that `5 < 20`, and thus that `5` is not part of the same run as `10, 20`). Now the run is identified, and there is one element left to incorporate.
- Next timsort tries to insert `5` into the already-sorted run. It is more correct to say that timsort attempts to do a binary insertion, since one knows already that the run is sorted.^{3} In this binary insertion, timsort will compare `5` with the middle of the already-sorted run `10, 20`. But this is a list of length 2, so what is its middle element? It turns out that timsort takes the latter element, `20`, in this case. As `5 < 20`, timsort concludes that `5` should be inserted somewhere in the first half of the run `10, 20`, and not in the second half.
- Of course, the first half consists entirely of `10`. Thus the remaining comparison is to check that `5 < 10`, and now the list is sorted.

We count^{4} all four of the comparisons. The doubled comparison is due to the two tasks of checking whether `5` is in the same run as `10, 20`, and then of deciding through binary insertion where to place `5` in the smaller sublist of `10, 20`.
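These two steps can be reproduced with a small self-contained sketch. This is a simplified model of the behavior described above, not CPython's actual implementation; the `less` helper and the comparison log are mine:

```python
comparisons = []  # log of every (a, b) pair compared via a < b

def less(a, b):
    comparisons.append((a, b))
    return a < b

lst = [10, 20, 5]

# Step 1: find the length of the initial ascending run.
run_len = 1
while run_len < len(lst) and not less(lst[run_len], lst[run_len - 1]):
    run_len += 1  # compares 20 vs 10; then 5 vs 20 ends the run

# Step 2: binary-insert the next element into the sorted run lst[:run_len].
x = lst[run_len]
lo, hi = 0, run_len
while lo < hi:
    mid = (lo + hi) // 2  # for the run [10, 20], mid picks the latter element
    if less(x, lst[mid]):
        hi = mid          # compares 5 vs 20 (again), then 5 vs 10
    else:
        lo = mid + 1

sorted_prefix = lst[:run_len]
sorted_prefix.insert(lo, x)

print(sorted_prefix)     # [5, 10, 20]
print(len(comparisons))  # 4, with the 5-vs-20 comparison appearing twice
```

Running this shows the same four comparisons as the `Number` experiment, with `5` compared to `20` once as the run-boundary check and once as the binary-search midpoint.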

Now that we’ve identified a doubled comparison, we might ask *Why is it this way?* Is this something that should change?

The short answer is *it doesn’t really matter.* A longer answer is that avoiding the doubled comparison in general would itself cost extra comparisons: the duplication only occurs when the last element of the run agrees in value with the central value of the run (which may occur for longer lists if there are repeated values). Detecting this would probably either involve comparing the last element of the run with the central value (one extra comparison, so nothing is really saved anyway), or perhaps adding another data structure like a skip list (which seems sufficiently more complicated to not be worth the effort). Or it would only apply when sorting really short lists, in which case there isn’t much to worry about.

Learning a bit more about timsort made me realize that I could probably learn a lot by really understanding an implementation of timsort, or even a slightly simplified implementation. It’s a nice reminder that one can choose to optimize for certain situations or behaviors, and this might not cover all cases perfectly — and that’s ok.

]]>It was a great conference, and definitely one of the better conferences that I’ve attended. What made it so good? For one thing, it was in Budapest, and I love Budapest. Many of the main topics were close to my heart, which is a big plus.

But what I think really set it apart was that there were lots of relatively short talks, and almost everyone attended almost every talk.^{1}

The amount of time allotted to a talk carries extreme power in deciding what sort of talk it will be. A typical hour-long seminar talk is long enough to give context, describe a line of research leading to a set of results, discuss how these results fit into the literature, and even to give a non-rushed description of how something is proved. Sometimes a good speaker will even distill a few major ideas and discuss how they are related. A long talk can have multiple major ideas (although just one presented very well can make a good talk too).

In comparison, 50, 40, and 30 minute talks require much more discipline. As the amount of time decreases, the number of ideas that can be inserted into a talk decreases. And this relationship is not linear! Thirty minutes is just about long enough to describe one idea pretty well, and to do anything more is very hard.^{2}

Something interesting happens for shorter talks. For 20 minute, 15 minute, and 10 minute talks, the limitation almost serves as a source of inspiration.^{3} Being forced to focus on what’s important is a powerful organizing force.

The median talk length was 20 minutes, which is a very comfortable number. This is long enough to state a result and give context. It’s also long enough to tempt speakers into describing methodology behind a proof, but not long enough to effectively teach someone how the proof works.

An extraordinary aspect of a 20 minute talk is also that it’s short enough to pay attention to, even if it’s only an okay talk. It is perhaps not a surprise to most conference goers that most talks are not so great. To be a skilled orator is to be exceptional.

At Building Bridges, I was introduced to math *speed talks*. These are two minute talks. I’ve seen many programming *lightning talks* (often used to plug a particular product or solution to a common programming problem), but these math *speed talks* were different.

People used their two minutes to introduce an idea, or a result. And they either chose to give the broadest possible context, or a singular idea in the proof.

People were talking about *real mathematics* in **two minutes**. And I loved it.

Simply having a task where you distill some real mathematics into a two minute coherent description is worthwhile. *What’s important? What do you really want to say? Why?*

Two minutes is so short that it feels silly. And silly means that it doesn’t feel dangerous or scary, and thus many people felt willing to give it a try. At Building Bridges, the organizers gamified the speed talks, so that getting the closest to 2 minutes was rewarded with applause and going over two minutes resulted in a buzzer going off. It was a game, and it was **fun**. It was encouraging.

I firmly support any activity that encourages people who normally don’t speak so much, especially students and junior researchers. You learn a lot by giving a talk, even if it’s only a two minute talk.^{4}

This conference had 19 (I think) speed talks over a three day stretch. They were given in clumps after the last regular talk each day. Since people were there for the big talk, everyone attended the speed talks. This is also important! In conferences like the Joint Math Meetings, where there might even be something like speed talks, it’s essentially impossible to pay attention since there are too many people in too many places and you never can step in the same river twice. Here, speed talks were given on the same stage as long talks, to the same audience, and with the same equipment.

Every conference should have speed talks. And they should be treated as first-class talks, with the exception that they are irrefutably silly.

Go forth and spread the speed talk gospel.

]]>I wondered: *how hard is it to use colorblind friendly colors here?*

I had in the back of my mind the thought of the next time I sit down and pair program with someone who is colorblind (which will definitely happen). Pair programming is largely about sharing experiences and ideas, and color disambiguation shouldn’t be a wedge.

I decided that loading customized CSS is the way to go. There are different ways to do this, but an easy method for quick replicability is to create a bookmarklet that adds CSS into the page. So, I did that.

You can get that bookmarklet here. (Due to very sensible security reasons, WordPress doesn’t want to allow me to provide a link which is actually a javascript function. So I make it available on a static, handwritten page).^{1}

Here’s how it works. A Travis log typically looks like this:

After clicking on the bookmarklet, it looks like

This is not beautiful, but it works and it’s very noticeable. Nonetheless, when the goal is just to be able to quickly recognize if errors are occurring, or to recognize exceptional lines on a quick scroll-by, the black-text-on-white-box wins the standout crown.

The LMFDB uses pytest, which conveniently produces error summaries at the end of the test run. (We used to use nosetest, and we hadn’t set it up to have nice summaries before transitioning to pytest.) This bookmarklet will also affect the error summary, so that it now looks like

Again, I would say this is not beautiful, but definitely noticeable.

As an aside, I also looked through the variety of colorschemes that I have collected over the years. And it turns out that 100 percent of them are unkind to colorblind users, with the exception of the monotone or monochromatic schemes (which are equal in the Harrison Bergeron sense).

We should do better.

]]>This is fun. It’s fun seeing other people’s workflows. (In these cases, it happened to be that the other person was usually the one at the keyboard and typing, and I was backseat driving). I live in the terminal, subscribe to the Unix-is-my-IDE general philosophy: vim is my text editor; a mixture of makefiles, linters, and fifos with tmux perform automated building, testing, and linting; git for source control; and a medium-sized but consistently growing set of homegrown bash/python/c tools and scripts make it fun and work how I want.

I’m distinctly interested in seeing tools other people have made for their own workflows. Those scripts that aren’t polished, but get the work done. There is a whole world of git-hooks and aliases that amaze me.

But my recent encounters with pair programming exposed me to a totally different and unexpected experience: two of my programming partners were color blind.^{2}

At first, I didn’t think much of it. I had thought that you might set some colorblind-friendly colorschemes, and otherwise configure your way around it. But as is so often the case with accessibility problems, I underestimated both the number of challenges and the difficulty in solving them (lousy but true aside: **most companies almost completely ignore problems with accessibility**).

I first noticed differences while trying to fix bugs and review bugfixes in the LMFDB. We use Travis CI for automated testing, and we were examining a build that had failed. We brought up the Travis CI interface and scrolled through the log. I immediately pointed out the failure, since I saw something like this.^{3}

*How do you know something failed?* asks John, my partner for the day. *Oh, it’s because the output is colored, isn’t it? I didn’t know.* With the help of the color-blindness.com color-blindness simulator, I now see that John saw something like

With red-green colorblindness, there is essentially no difference in the shades of PASSED and FAILED. That’s sort of annoying.

We’d make a few changes, and then rerun some tests. Now we were running tests in a terminal, with the test logs scrolling by. We’re chatting about emacs wizardry (or c++ magic, or compiler differences between gcc and clang, or something), and I point out that we can stop the tests since three tests have already failed.

He stared at me a bit dumbfounded. It was like I had superpowers. I could recognize failures almost without paying attention, since flashes of red stand out.

But if you don’t recognize differences in color, how would you even know that the terminal outputs different colors for PASSED and FAILED? (We use pytest, which does). A quick look for different colorschemes led to frustration, as there are different sorts of colorblindness and no single solution that will work for everyone (and changing colorschemes is sort of annoying anyway).^{4}

I should say that the Travis team has made some accessibility improvements for colorblind users in the past. The build-passing and build-failing icons used to be little circles that were red or green, as shown here.

That means the build status was effectively invisible to colorblind users. After an issue was raised and discussed, they moved to the current green-checkmark-circle for passing and red-exed-circle for failing, which is a big improvement.

The colorscheme used for Travis CI’s online logs is based on the nord color palette, and there is no colorscheme-switching option. It’s a beautiful and well-researched theme *for me*, but not for everybody.

The colors on the page are controllable by CSS, but not in a uniform way that works on many sites. (Or at least, not to my knowledge. I would be interested if someone else knew more about this and knew a generic approach. The people I was pair-programming with didn’t have a good solution to this problem).

Should you really need to write your own solution to every colorblind accessibility problem?

In the next post, I’ll give a (lousy but functional) bookmarklet that injects CSS into the page to see Travis CI FAILs immediately.

]]>$$\begin{equation} X^2 + Y^2 = Z^2 + h \end{equation}$$

for any fixed integer $h$.

I gave a similar talk at the 32nd Automorphic Forms Workshop in Tufts in March. I don’t say this during my talk, but a big reason for giving these talks is to continue to inspire me to finish the corresponding paper. (There are still a couple of rough edges that need some attention).

The methodology for the result relies on the spectral expansion of half-integral weight modular forms. This is unfriendly to those unfamiliar with the subject, and particularly mysterious to students. But there is a nice connection to a topic discussed by Arpad Toth during the previous week’s associated summer school.

Arpad sketched a proof of the spectral decomposition of holomorphic modular cusp forms on $\Gamma = \mathrm{SL}(2, \mathbb{Z})$. He showed that

$$\begin{equation} L^2(\Gamma \backslash \mathcal{H}) = \textrm{cuspidal} \oplus \textrm{Eisenstein}, \tag{1}

\end{equation}$$

where the *cuspidal* contribution comes from Maass forms and the *Eisenstein* contribution comes from line integrals against Eisenstein series.

The typical Eisenstein series $$\begin{equation} E(z, s) = \sum_{\gamma \in \Gamma_\infty \backslash \Gamma} \textrm{Im}(\gamma z)^s \end{equation}$$ only converges for $\mathrm{Re}(s) > 1$, and the initial decomposition in $(1)$ implicitly has $s$ in this range.

To write down the integrals appearing in the Eisenstein spectrum explicitly, one normally shifts the line of integration to $1/2$. As Arpad explained, classically this produces a pole at $s = 1$ (which is the constant function).

In half-integral weight, the Eisenstein series has a pole at $s = 3/4$, with the standard theta function

$$\begin{equation} \theta(z) = \sum_{n \in \mathbb{Z}} e^{2 \pi i n^2 z} \end{equation}$$

as the residue. (More precisely, it’s a constant times $y^{1/4} \theta(z)$, or a related theta function for $\Gamma_0(N)$). I refer to this portion of the spectrum as *the residual spectrum*, since it comes from often-forgotten residues of Eisenstein series. Thus the spectral decomposition for half-integral weight objects is a bit more complicated than the normal case.

When giving talks involving half-integral weight spectral expansions to audiences including non-experts, I usually omit description of this. But for those who attended the summer school, it’s possible to at least recognize where these additional terms come from.

The slides for this talk are available here.

]]>This is the final chapter in my series about the state of internet fora, and Math.SE and StackOverflow in particular. The previous chapters are Challenges Facing Community Cohesion and Ghosts of Forums Past. Unlike the previous entries, this also sits on Meta.Math.SE (and was posted there a week before here). (As I write this as a moderator of Math.SE, I refer to the Math.SE community as “we”, “us”, and “our” community).

A couple of weeks ago, there was a proposal on meta.Math.SE to introduce a third level of math site to the SE network. Many members of the MathSE community have reacted very positively to this proposal, to the extent that even some of the moderators have considered throwing their weight behind it.

But *a NoviceMathSE site would be doomed to fail, and such a separation would not solve the underlying problems facing the site.*

To explain my point of view, we need to examine more closely the arguments in favor of NoviceMathSE.

In the proposal itself, the goal is stated to

act as a place where students are solving their homeworks all together.

- Some students will learn a lot by answering their friends questions. They are usually discouraged to write answers on MSE since usually language and notation is less formal.
- Discussion between them might be more helpful than a discussion where one stays very formal.
- They will put more effort on questions, since usually on MSE even a challenging/tricky questions gets a hint immediately.

Encouraging lots of discussion between students solving homework together is a mixture between subjectivity and localization, two things SE tends to avoid.

Maybe someone could create a tool where a school/college/university course would have an SE-like forum/Q&A allowing students to work together in an SE-like framework. This style of tool is already used in some MOOCs to facilitate learning environments (especially since the ratio of students to instructors can be enormous). Some MOOCs reset the forums each term/year to foster additional rounds of student involvement. I don’t know if this sort of tool already exists (if not, then maybe someone should go make one).

This sort of tool belongs there, not on the SE network.

**But I think much of the positive reaction to the proposal wasn’t for exactly the same proposal as in the OP, but instead for the thought of adding a lower-level Math Q&A.**

For this reason (and because certainly SE would not want to be explicitly viewed as a place where students go to get their homework done for them), I refer to the potential site as NoviceMathSE instead of HWMathSE. (I note that Jyrki has suggested calling it MathTutoringSE, which is also better than HWMathSE).

The proposal asks about “a third level of math site”. Implicitly stated in this proposal is the distinction between Math.StackExchange and MathOverflow as being a difference of the level of the question. But this is not an accurate description of the differences.

MathOverflow is not an ordinary member of the StackExchange network. MathOverflow is run by a non-profit organization which has an agreement with SE to host their site. It did not start through the typical experimental-beta-public StackExchange model, and does not have the same culture (or even all the same rules) as the rest of the StackExchange sites.

It is more appropriate to compare MathOverflow with PhysicsOverflow, which is separate from the StackExchange network.

In essence, MathOverflow has content that is interesting to research mathematicians. This consists largely of research level mathematics, but sometimes it also consists of essentially basic questions that are of interest to mathematicians. This is exactly how MO was founded (it’s older than MathSE).

It is not true that once a question hits a certain level of difficulty, it should be asked on MathOverflow instead of MathSE. Instead it is the audiences that are different.

With this in mind, it is not appropriate to think of creating another math site as something making a three-step trinity of NoviceMathSE, MathSE, MathOverflow.

The goal makes sense. Right now, most of the noise on MathSE comes from low-level questions. The major intent behind this proposal is to raise the ratio of signal to noise on MathSE by removing most of the noise.

But this cannot hope to work, because we cannot achieve consensus on how to distinguish “signal” from “noise”. There are already endless disagreements on what is on-topic or off-topic. It is unreasonable to expect MathSE to be able to draw a clear line on what is on-topic and what is off-topic now.

I cannot begin to imagine the moderating headache that would come from attempting to identify and close these questions amidst the various sources of ensuing community backlash. It would be one thing if MathSE had consensus on the various choices facing it, but this is not the case.

More worrying to me is that this proposal seems to be supported most strongly by users who want to *dump bad questions somewhere else.* (It is possible that I am misinterpreting this, but I don’t think so.)

Such a site is doomed to fail. It would indeed be full of noise. There would be fewer experts there because there are fewer interesting questions, and novices would often prefer to not post there because there would be fewer experts there. Users want good answers, and depending on novices to help other novices is more appropriate for peer-learning environments than a SE Q&A.

One of the major reasons the SE model has been effective is that each site is created to be a place with very high quality content, where experts want to answer interesting questions, and where people looking for good answers can find good, accurate information.

Yes, migrating lower quality questions to NoviceMathSE from MathSE might improve the condition of MathSE, but the signal/noise ratio of NoviceMathSE would almost certainly spiral out of control towards 0 and the site would fail.

We cannot expect to migrate all the lower-quality content (assuming we could even identify what that means) to another site. **If the goal is to remove lower-quality content, then the appropriate course of action is to try to find a way of identifying and removing it. Why bother trying to find somewhere else to dump it?**

Many of the comments and posts in favor of a NoviceMath.SE seem to want it to exist in order to solve problems of low quality content on Math.SE. It is unreasonable for a group of *us* to try to create a site for *some other group*. That is, it doesn’t make sense for a group of MathSE members to decide on a site that other people should go and populate.

If a group of people want to make NoviceMathSE (or some variant thereof) happen **and be a part of that new community**, then it would be a good idea for them to step forward and begin establishing what they want and what they’re missing from Math.SE. This is how new communities are established. Too much of the discussion essentially concerns ghettoizing low quality questions. This is against all principles of self determination on the network.

But I think there are some other ways to improve the quality of MathSE that don’t rely on fragmenting the community.

- Implement a Triage queue here. StackOverflow has a special review queue called “Triage”. The goal is to quickly sort potentially problematic posts into categories that can be routed elsewhere. In short, questions are sorted into three categories: Looks Ok (where it goes to the front page), Should be Improved (where it has limited visibility on the front page and goes into a help and improvement queue), and Unsalvageable (where it goes to mod review or a close/delete queue).
- Consider creating an Ask a Question Template (like the one being experimented with on SO). It is a hard question to determine what someone might put into a question template, but it may just work.
- Improve awareness of the ability to *favorite* and *ignore* tags, and to *hide questions from ignored tags*. Did you know that you can not only favorite tags, but you can ignore them? And did you know that you can hide questions from ignored tags? This seems to be little-known, but the fact is that every additional method of filtering towards content that you prefer is better.

But I should note that these come with caveats. The Triage Queue is resource intensive. SE has declined to implement it on other sites in the past because it requires tweaking lots of Machine Learning algorithms (i.e. lots of maybe continuous work) and it requires many people looking at review queues to identify questions quickly. As noted here, triage was tailored to the needs of StackOverflow. This doesn’t preclude its use elsewhere, but that’s a discussion which needs to be had separately. Fortunately, triage makes sense on the largest sites on the network, and Math.SE certainly fits that bill (second largest on the network).

An Ask-a-question template is somewhat complicated, since there are many different questions that can be asked. But in the AB testing on StackOverflow there has been some success. I think it may be beneficial to try to develop a template on Math.SE and proceed with some AB testing as well. (The worst that happens is that it doesn’t work, right?).

In this chapter I focus more on Math.SE and StackOverflow. Math.SE is now experiencing growing pains and looking for solutions. But many users of Math.SE have little involvement in the rest of the StackExchange network and are mostly unaware of the fact that StackOverflow has already encountered and passed many of the same trials and tribulations (with varying degrees of success).

Thinking more broadly, many communities have faced these same challenges. Viewed from the perspective of the last chapter, it may appear that there are only a handful of tools a community might use to try to retain group cohesion. Yet it is possible to craft clever mixtures of these tools synergistically. The major reason the StackExchange model has succeeded where other fora have stalled (or failed outright) is its innovative implementation of community cohesion strategies while still allowing essentially anyone to visit the site.

Slashdot^{1} popularized the idea of associating imaginary internet points with different users. It was called *karma*. You gained karma if other users rated your comments or submissions well, and lost karma if they rated your posts poorly. But perhaps most importantly, each user could set a threshold for the minimum score of content to see. Thus if people set reasonable thresholds and you post crap, then most people won’t even see it after it’s scored badly.

What reputation and karma do is send a message that this is a community with norms, it’s not just a place to type words onto the internet. (That would be 4chan.) We don’t really exist for the purpose of letting you exercise your freedom of speech. You can get your freedom of speech somewhere else.

Astoundingly, karma even contributes to some sort of community cohesion when there are no benefits or detriments to having karma. See reddit, where karma is on the one hand almost worthless^{3}, and on the other hand highly valued. Even sought after.

Seriously, if you look you can find thousands upon thousands of people asking how to get more reddit karma (and a much smaller number asking what it’s good for).^{4}

I credit StackOverflow with popularizing the idea that imaginary internet points can be used as a formal (rather than informal) indicator of community standing. SO even calls it “reputation”. As a user gains more reputation points, they are given more peer moderation abilities. A user gains the ability to upvote^{5}, downvote, edit any post, close/reopen posts, or even delete/undelete posts.

This has worked astoundingly well. But it’s not perfect.

Very often I hear the same sort of story. A new user comes to ask a question, but it gets downvoted and negatively commented immediately. Then a *moderator* comes in and closes or deletes the question. And if even the mods are against new users, then what are they to do? That isn’t so welcoming, is it?

I frequently look into these cases and find a slightly different backstory. What usually happens is that several very high reputation users decided to close/delete the question with a somewhat minimal comment, such as *This is a duplicate of [this other question]* or *What have you tried?* or *RTFM*. The source of the confusion is that these high rep users have lots of moderator powers that new users don’t have. At first I thought that this distinction was important: it’s not the mods that are unwelcoming to new users; it’s just some high rep users.

But then I realized that to a new user, this distinction is completely meaningless. The typical new user doesn’t care about their own reputation or badges or even the community itself — they just want an answer to a question. And any obstacle in their way (like reading a *How to Ask a Question* page or comments saying *Use MathJax*) is a pitfall to be navigated on the way to that goal. The fact is that they wanted help with something, went to get some help, only to feel like they were shut down.

This has been a major complaint about StackOverflow for years. In 2012 StackOverflow tried to reform new user culture through their Summer of Love initiative (Summer of Love, aka the Hunting of the Snark. Goal: keep SO welcoming and friendly *without* lowering standards).^{6}

Did it work? Not quite. In fact, the opening post on SO’s meta site^{7} generated so much bickering and negative commentary that it was deleted.

Other ideas were tried, but they had at most temporary success. A few years later John Slegers wrote a highly viewed post The decline of Stack Overflow, documenting the standard negative first impression received by new users, and how even older users can be at the whims of *Privileged Trolls*. These Privileged Trolls are those high reputation users who use their powers extensively.

Why doesn’t StackOverflow ban/suspend/quell the class of Privileged Trolls? In short, it’s because they’re not wrong. Most often these so-called Privileged Trolls are seeking to combat low quality questions and the existence of Help Vampires^{8}.

There exists a class of high reputation user who is very frequently on the site, has a corner that they care about, and is very familiar with the majority of content in that corner. We might optimistically call them a *Caretaker* instead of a Privileged Troll.

A typical bad scenario might go as follows. A new user comes and asks *Why is this python program hanging?*. A Caretaker sees the question, recognizes that the user was trying to pull stdin from an IDE, and then marks the question as a duplicate of some question about how to get around this. In the abstract, does this answer the question? Yes. But to the new user who is following some tutorial and doesn’t even know what `import` means yet, this is probably unhelpful or confusing.

On Math.SE, this problem might be further exacerbated by the fact that there are high-powered mathematical results that quash all sorts of weaker statements. But having your introductory real analysis question about *How do I show that this function is integrable?* closed as a duplicate of some question stating that *any function which is almost everywhere continuous is Riemann integrable* is most definitely unhelpful (and yet this sort of thing definitely occurs).

Caretakers are trying to maintain high site quality. One aspect of quality is the ratio of signal to noise, and the existence of a vast number of duplicate questions is a source of noise. Using the enumeration from this answer to the meta.SO question Why is StackOverflow so negative of late?:

Basically there are 4 camps of users on Stack Overflow:

- The “caretakers” who want to keep the site clean and with good content.
- The “help vampires” who flood the site with bad/duplicate questions who only want their question answered and care nothing for the site.
- The “repwhores” who answer everything they can (or can’t).
- The ones who no longer give a shit.
For the most part:

2 and 3 love each other. They should get married.

1 hates 2 because they’re flooding the site making good questions impossible to find.

1 hates 3 because they’re encouraging 2 to keep going.

2 hates 1 because 1 constantly downvotes/closes/deletes/flames 2.

3 hates 1 because they keep closing/deleting the questions that 3 likes to answer.

1 and 3 have all the moderation powers, but only 1 cares to use them.

4 is sitting on the sideline shaking their heads…

1 hates 4 because 4 isn’t helping the situation.

With so much hate, there’s going to be conflict.

There are too many moderators (both true mods and very-high-rep users) for a single common viewpoint to dominate the others. And from my point of view, a central division is over the purpose of StackOverflow. Is it to

- Quickly get people great answers to their programming questions, or to
- Serve as a repository of useful programming knowledge.

In many ways these work together. Providing great answers to new questions serves both. But repeatedly answering the same question (especially with slightly different answers) makes the site less useful as a repository of knowledge — a visiting user may need to check several variants of a question to find an answer that works for them. Why not just ask another variant instead, adding to the tidal wave of similar questions? Conversely, requiring users to interpret a canonical question and answer in their own situation is annoying, especially to novice users who don’t know enough to recognize alternate phrasings of the same topic.

I believe the intent of the site was the latter, but somehow a large minority of users much prefer the former.

On Math.SE, there is perhaps a third category. One can ask whether the purpose is to

- Teach people mathematics,
- Answer mathematical problems from all levels, or to
- Serve as a repository of useful mathematical knowledge.

I think the reason why Math.SE cares so much about teaching mathematics is that many of the veteran users are (or have been) educators (teachers, professors, teaching assistants, lecturers, etc.). But similarly to the StackOverflow case, I believe the founders had the last option in mind, but frequent answerers are often interested in actually teaching people mathematics.

Despite the apparent difference, most cultural problems appear to be the same. Or rather, since Math.SE is a bit younger and a bit smaller than SO (but still the second largest site on the StackExchange network), the cultural problems facing Math.SE are a mix of the current problems facing SO and the problems from a few years ago.

Let us now dive into parallel responses between StackOverflow and Math.SE.

- Why is “Can someone help me?” not an actual question? Response summary: the site is intended to create a knowledge repository of solutions to programming problems. When you ask a question, make sure you *actually ask a question.*
- Why the backlash against poor questions? Response summary: bad questions are noise while good questions and answers are the signal. If the signal is drowned out by the noise, then people interested in answering questions go away, leaving behind people asking questions.
- Can we adopt a stop-whining-about-bad-questions policy? Response summary: No. Bad questions = noise. Constantly seeing the same question will lead to people not answering anymore and leaving.^{9}
- Should trivial re-occurring questions really be answered? The response is complicated. As long as answers to these questions will be upvoted, there are incentives to answer them (and therefore the asker, even if downvoted, will probably get the answer they were looking for). Some suggest downvoting answers to bad questions to remove the incentive structure. But that’s quite a complicated thought process. It is also noted that there is a dichotomy between the Atwood Keep Question Quality Really High to Optimize for Pearls policy and the Spolsky Ask Any Question As Long As It Hasn’t Been Asked policy.^{10}
- Should one give advice on off topic questions? The upvoted response is to downvote and close off-topic questions, and to *absolutely not help or advise* as this incentivizes poor questions.
- Off topic questions have to be cleared out of the way, but NOT via closure. The theme of this post is that the current reputation system incentivizes people to answer poor questions, which in turn incentivizes people to ask poor questions. The responses have an interesting theme: most say that hoping for an ideal site where people don’t answer low-quality questions is probably a waste of time (perhaps even counterproductive), even though there is definitely a real problem there. Others advise users to downvote low-quality posts.
- Should SO be awarding As for effort? This is really about people asking questions and others saying “This doesn’t show enough effort to merit a good response”, and the related viewpoint that questions with lots of effort shown do deserve a good answer. The answers hit a really wide set of contradictory opinions, and reading this question and its answers gives good insight into different trains of thought on the topic.
- How to ask and answer homework questions?

From these topics, you may get the impression that there is a central response to downvote good answers to low-quality questions, as that is frequently advertised as a central method of maintaining high-quality content. But then you read Is it okay to downvote answers to bad questions? and see that the overwhelmingly upvoted response there is *No, it’s not okay to downvote good answers to bad questions.* The subtler issue here is that *as long as users don’t engage in vote fraud, they can vote however they want.* There is also a rebuttal by Brad Larson noting that targeting downvotes at people who answer low quality questions will most likely drive those frequent answerers away (definitely undesirable); further, he doesn’t believe the assumption that making people stop answering bad questions will make bad questions stop coming.^{11}

Thus both identifying low-quality content and deciding how to prevent it are almost entirely unresolved questions. In practice, there are people who downvote low-quality questions and answers to low-quality questions (Caretakers), people who upvote them, and people who answer them, with the dynamics described by the various user camps above.

It should be noted that on Math.SE, the vast majority of low-quality questions (and indeed, the majority of all questions) are from students of mathematics trying to learn new material. A typical question comes either from a problem suggested or assigned by an instructor, or from a math book that someone is trying to understand or whose exercises they are trying to solve. So on Math.SE there is a big conflation between “homework” questions, “cut-and-paste” questions, and “low-quality” questions.

With that noted, these problems (and mostly their suggested responses) also appear independently on Math.SE.

- Why isn’t more being done to avoid facilitating copy paste homework questions?
- Can I try to tell experienced users to not answer bad questions?
- What to do when other users answer low quality questions?
- Dealing with zero effort questions
- How to deal with just-google-it questions
- Have the questions on Math.SE changed in quality?

As with SO, many responses suggest downvoting low-quality posts more; most agree that there really is a problem, but that the problem may not be solvable. Trying to prevent experienced users from answering bad questions may be a waste of effort (or a noble effort), and such answers should be ignored (or upvoted, or downvoted).

And if you took away from this a recurring suggestion to downvote or delete low-quality content, then you would find yourself going against the (upvoted and respected) thought process behind the answers to Downvoting complete solutions.

There simply isn’t consensus on these issues, or on what the purpose of Math.SE is.

One major takeaway from the above discussion is that there are real problems facing Math.SE and SO, and these problems stem from underlying problems that are essentially unresolved. There isn’t consensus on the purpose of the site or how to deal with low quality questions (or even if they’re a real problem).

Does that mean that trying to resolve these problems is a waste of time? No! In fact StackOverflow has implemented a variety of tools not (yet) present on other sites in the network that can help with some of these problems. (And these don’t have anything to do with the recent StackExchange blog post suggesting ways to make SO a more welcoming community.)

A recent suggestion that gained some traction on meta.Math.SE was to introduce another site to the network where novice mathematical questions are welcome.

In the next chapter, I will say why I think *NoviceMath.SE is a bad idea* (but that there are some changes that can be made now that will relieve some of the tension on the site).

Now with some perspective as a frequent contributor/user/moderator of various online newsgroups and fora, I want to take a moment to examine the current state of Math.SE.

To a certain extent, this is inspired by Joel Spolsky’s series of posts on StackOverflow (which he is currently writing and sending out). But this is also inspired by recent discussion on Meta.Math.SE. As I began to collect my thoughts to make a coherent reply, I realized that I have a lot of thoughts, and a lot to say.

So this is chapter one of a miniseries of writings on internet fora, and Math.SE and StackOverflow in particular.

I fondly remember the beginning, when it was possible to read every question and answer that was posted on Math.SE.^{1} I’m not saying this was a good idea, but I was learning lots of middle undergraduate math and this sort of math dominated the site. It felt particularly relevant.

Further, it was so vastly superior to the alternatives. Before Math.SE, there were other math fora and discussion boards. There were the Usenet newsgroups (which were message boards and should be thought of more as fora, less as a source of news), the Art of Problem Solving forums, and mymathforum. Maybe there were more, but these were what I knew.

These were each good in their own way. Usenet started a revolution but was ephemeral. If you didn’t store the history yourself, you needed to hope that someone else was archiving the newsgroup you were interested in and had some way of letting you access it.^{2} The more static fora like mymathforum and AoPS were easier to jump into and browse (a big plus), but they depended entirely on a small group of moderators to police the community. That’s a lot of work for a few people, and there was a lot of noise.

There’s a problem that hit the older fora. When communities grow to a certain size, the ratio of signal to noise plummets. Maybe this is closely related to Dunbar’s Number?^{3} The point is that it’s frequently a sudden freefall. Abruptly there is almost no signal, just noise.

How do online communities fight Dunbar’s Number? There are only a few frequently used techniques.

- *Moderators* remove, delete, kick, ban, and mute content and users. This is perhaps the most common technique, and it can be very effective. This is how it works on IRC and in traditional fora like mymathforum and AoPS. And there is a core of special moderators on StackOverflow, Math.SE, reddit, Slashdot, etc. But as the community gets large, one needs more moderators, and if the core moderator group gets too big then the moderators can suffer from infighting.
- *Peer Moderators* can be used (or peer moderation abilities can be earned). On Slashdot, digg, reddit, and hackernews, the community relies on general users to enforce (and create) community rules and guidelines. Good content rises to the top, while bad/unwelcome content sits or sinks. A great innovation in the StackExchange model is that there is extensive peer moderation, but as users gain clout (read: reputation) within the community, they gain more and more powerful moderation capabilities. It is almost like having a much larger group of core moderators. This has proved to be extremely effective, especially when a community has a strong identity. On the other hand, since the direction of the community is enforced most often by community members, it may veer off in unexpected ways. What if a corner of your community goes in the direction of intolerance and hate-speech? A few years ago reddit shut down five subreddits under a new anti-harassment policy, including the “Fat People Hate” subreddit. Many in the community felt this went against the (faux) democracy of reddit.^{4}
- *Use Membership Requirements* to keep membership low and controlled. On the one hand, this is what secret clubs and societies do. Or country clubs which charge high membership fees. But college fraternities and sororities also enforce membership requirements, even if they’re wholly implicit. Some mailing lists also let anyone subscribe, but only a privileged or controlled group can post to the list. Sometimes this works. Sometimes groups bicker about what membership requirements really should be, especially if they’re subjective or implicit.^{5}
- *Create subcommunities, or secede and create a side community* to maintain a strong group around a strong vision. Many fora have various individual discussion topics or discussion boards which different groups of people focus on. Reddit uses subreddits^{6} to an enormous degree of success. The StackExchange network has different SE sites (like StackOverflow and Math.SE, or perhaps more meaningfully the dichotomy between StackOverflow/SoftwareEngineering.SE or Math.SE/MathOverflow^{7}). These are subcommunities. For secessions, I think of “Quit Digg Day” on August 30, 2010, when many users flocked to the very young reddit after unappetizing digg changes. Centrally created subcommunities serve to divide the overall community into smaller groups, but once created it’s usually not effective to try to create further subsubcommunities. When splitting off from the old community, there are odd dynamics at play. On the one hand, you hope enough like-minded people follow to make a vibrant community. But you don’t want everyone to come, since then nothing would change. So these splits are usually somewhat secretive, or maybe the new community will enforce stronger membership requirements, etc. This might work for a while. But often it’s only a matter of time before the new community becomes exactly like the original community^{8}, or the more stringent requirements and the passage of time lead to dwindling communities which don’t benefit from the easy access and random internet encounters that led to their original success.

In terms of tools that online communities use to defeat Dunbar’s Number, that’s about it. Hopefully that’s enough — hopefully there is some combination of these methods that works. Otherwise, it’s all noise and no signal.

What does all noise, no signal look like? Most of the old usenet groups still exist. The main math one is sci.math, and it (perhaps astoundingly) has really high volume even today. But it’s a mostly barren wasteland now. Look at this shot of the most recent content today.

People ask for solutions manuals, complain to Joel Spolsky and Jeff Atwood about something on StackExchange (?), ask about contracts with the devil, and say that Terry Tao failed some math test. In other words, utter nonsense. It’s maybe not all bad, but the signal to noise ratio is so terrible that it almost certainly drives away many many people (including me — I certainly don’t read sci.math anymore).^{9}

As communities get larger, not everyone can agree on what “noise” even means. In mailing lists, current event discussion groups, book clubs, and other communities where discussion revolves around whatever is “current” (and whatever is “current” is constantly changing), this can be less of a problem. But on support lists or Q&A sites like Math.SE or StackOverflow, there is a large class of users who have been around for a while and don’t want to keep answering the same questions over and over, and there is a large class of users who have recently come across something they want/need help on and really just want to find an answer.^{10}

Maybe it’s impossible for any community to be stable forever. It seems like one might conjecture a Second Law of Community Thermodynamics: the total entropy of a community will always increase until heat death. Further, heat death can have two forms: the “hot” form is spam death, where all signal is overrun by noise, and the “cold” form where any meaningful voices abandon the community, leaving only vacuum noise.^{11}

But they’re sure fun while they work, so it’s not like that’s going to stop anyone.

This is the end of the first chapter on community-building and maintenance. In the next chapter, I’ll focus a bit more on Math.SE and StackOverflow, and more specifically on how Math.SE should consider the *Ghosts of Forums Past*.

We consider some triangles. There are many right triangles, such as the triangle with sides $(3, 4, 5)$ or the triangle with sides $(1, 1, \sqrt{2})$. We call a right triangle *rational* when all its side lengths are rational numbers. For illustration, $(3, 4, 5)$ is rational, while $(1, 1, \sqrt{2})$ is not. $\DeclareMathOperator{\sqfree}{sqfree}$

There is mythology surrounding rational right triangles. According to legend, the ancient Greeks, led both philosophically and mathematically by Pythagoras (who was the first person to call himself a philosopher and essentially the first to begin to distill and codify mathematics), believed all numbers and quantities were ratios of integers (rational). When a disciple of Pythagoras named Hippasus found that the side lengths of the right triangle $(1, 1, \sqrt{2})$ were not rational multiples of each other, the other followers of Pythagoras killed him by casting him overboard while at sea for having produced an element which contradicted the gods. (It is with some irony that we now attribute this as a simple consequence of the Pythagorean Theorem.)

This mythology is uncertain, but what is certain is that even the ancient Greeks were interested in studying rational right triangles, and they began to investigate what we now call the Congruent Number Problem. By the year 972 the CNP appears in Arabic manuscripts in (essentially) its modern formulation. The *Congruent Number Problem* (CNP) may be the oldest unresolved math problem.

We call a positive rational number $t$ *congruent* if there is a rational right triangle with area $t$. The triangle $(3,4,5)$ shows that $6 = 3 \cdot 4 / 2$ is congruent. The CNP is to describe all congruent numbers. Alternately, the CNP asks whether there is an algorithm to show definitively whether or not $t$ is a congruent number for any $t$.

We can reduce the problem to a statement about integers. If the rational number $t = p/q$ is the area of a triangle with legs $a$ and $b$, then the triangle with legs $aq$ and $bq$ has area $tq^2 = pq$. It follows that to every rational number there is an associated squarefree integer for which either both are congruent or neither is congruent. Further, if $t$ is congruent, then $ty^2$ and $t/y^2$ are congruent for any integer $y$.

We may also restrict to integer-sided triangles if we allow ourselves to look for those triangles with squarefree area $t$. That is, if $t$ is the area of a triangle with rational sides $a/A$ and $b/B$, then $tA^2 B^2$ is the area of the triangle with integer sides $aB$ and $bA$.
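These reductions are easy to make concrete. The following sketch (plain Python; the `squarefree_part` helper is my own illustration, not code from the paper) clears denominators for the rational right triangle with legs $3/2$ and $20/3$, which has area $5$:

```python
from fractions import Fraction

def squarefree_part(n):
    """Return the squarefree part of a positive integer n by trial division."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            exp = 0
            while n % d == 0:
                n //= d
                exp += 1
            if exp % 2:          # primes with odd exponent survive
                result *= d
        d += 1
    return result * n            # any leftover factor of n is a single prime

# rational right triangle with legs 3/2 and 20/3 and area 5
legs = (Fraction(3, 2), Fraction(20, 3))
assert legs[0] * legs[1] / 2 == 5

# clear denominators: the integer triangle with legs 3*3 = 9 and 20*2 = 40
a, b = 9, 40
assert a * a + b * b == 41 ** 2          # still a right triangle
assert squarefree_part(a * b // 2) == 5  # area 180 = 5 * 6^2, squarefree part 5
```

The integer triangle $(9, 40, 41)$ has area $180 = 5 \cdot 6^2$, so its squarefree area part is again $5$, as the scaling argument predicts.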

It is in this form that we consider the CNP today.

**Congruent Number Problem.** Given a squarefree integer $t$, does there exist a triangle with integer side lengths such that the squarefree part of the area of the triangle is $t$?

We will write this description a lot, so for a triangle $T$ we introduce the notation

\begin{equation}

\sqfree(T) = \text{The squarefree part of the area of } T.

\end{equation}

For example, the area of the triangle $T = (6, 8, 10)$ is $24 = 6 \cdot 2^2$, and so $\sqfree(T) = 6$. We should expect this, as $T$ is exactly a doubled-in-size $(3,4,5)$ triangle, which also corresponds to the congruent number $6$. Note that this allows us to only consider primitive right triangles.
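The restriction to primitive triangles makes small searches easy. As a quick illustration of my own (not code from the paper), the following enumerates primitive Pythagorean triples via Euclid's parametrization and groups their hypotenuses by $\sqfree(T)$:

```python
from collections import defaultdict
from math import gcd

def squarefree_part(n):
    """Squarefree part of a positive integer n, by trial division."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            exp = 0
            while n % d == 0:
                n //= d
                exp += 1
            if exp % 2:
                result *= d
        d += 1
    return result * n

def primitive_triples(max_u):
    """Primitive Pythagorean triples (u^2 - v^2, 2uv, u^2 + v^2) with
    u > v >= 1, gcd(u, v) = 1, and u - v odd."""
    for u in range(2, max_u + 1):
        for v in range(1, u):
            if gcd(u, v) == 1 and (u - v) % 2 == 1:
                yield u * u - v * v, 2 * u * v, u * u + v * v

hyps = defaultdict(list)
for a, b, c in primitive_triples(30):
    hyps[squarefree_part(a * b // 2)].append(c)

# hypotenuses with sqfree(T) = 6 include 5 (from (3,4,5))
# and 1201 (from (49, 1200, 1201), whose area is 29400 = 6 * 70^2)
print(sorted(hyps[6]))
```

Already for $t = 6$ the hypotenuses grow quickly: after $5$ the next one found is $1201$.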

Let $\tau(n)$ denote the square-indicator function. That is, $\tau(n)$ is $1$ if $n$ is a square, and is $0$ otherwise. Then the main result of the paper is that the sum

\begin{equation}

S_t(X) := \sum_{m = 1}^X \sum_{n = 1}^X \tau(m-n)\tau(m)\tau(nt)\tau(m+n)

\end{equation}

is related to congruent numbers through the asymptotic

\begin{equation}

S_t(X) = C_t \sqrt X + O_t\Big( \log^{r/2} X\Big),

\end{equation}

where

\begin{equation}

C_t = \sum_{h_i \in \mathcal{H}(t)} \frac{1}{h_i}.

\end{equation}

Each $h_i$ is a hypotenuse of a primitive integer right triangle $T$ with $\sqfree(T) = t$. Each hypotenuse will occur in a pair of similar triangles $(a, b, h_i)$ and $(b, a, h_i)$; $\mathcal{H}(t)$ is the family of these hypotenuses, choosing only one triangle from each similar pair. The exponent $r$ in the error term is the rank of the elliptic curve

\begin{equation}

E_t(\mathbb{Q}): y^2 = x^3 - t^2 x.

\end{equation}

What this says is that $S_t(X)$ will have a main term if and only if $t$ is a congruent number, so that computing $S_t(X)$ for sufficiently large $X$ will show whether $t$ is congruent. (In fact, it’s easy to show that $S_t(X) \neq 0$ if and only if $t$ is congruent, so the added value here is the nature of the asymptotic).
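The sum $S_t(X)$ is straightforward to compute by brute force for small $X$. In this sketch of mine I take $\tau(0) = 1$; for squarefree $t$ the diagonal terms $m = n$ contribute nothing, so this choice is harmless:

```python
from math import isqrt

def tau(k):
    """Square-indicator: 1 if k is a perfect square (k >= 0), else 0."""
    if k < 0:
        return 0
    r = isqrt(k)
    return 1 if r * r == k else 0

def S(t, X):
    """S_t(X) = sum over 1 <= m, n <= X of tau(m-n) tau(m) tau(nt) tau(m+n)."""
    return sum(
        tau(m - n) * tau(m) * tau(n * t) * tau(m + n)
        for m in range(1, X + 1)
        for n in range(1, X + 1)
    )

# 6 is congruent: (m, n) = (25, 24) gives the squares 1, 25, 49, with 24*6 = 144
print(S(6, 100) > 0)                     # True
# 1, 2, and 3 are not congruent, so these sums vanish
print(S(1, 100), S(2, 100), S(3, 100))
```

The pair $(m, n) = (25, 24)$ is exactly the arithmetic progression of squares $1, 25, 49$ with common difference $24 = 6 \cdot 2^2$, coming from the $(3, 4, 5)$ triangle.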

We should be careful to note that this does not solve the CNP, since the error term depends in an inexplicit way on the desired number $t$. What this really means is that we do not have a good way of recognizing when the first nonzero term should occur in the double sum. We can only guarantee that for any $t$, understanding $S_t(X)$ for sufficiently large $X$ will allow one to understand whether $t$ is congruent or not.

There are four primary components to this result:

1. There is a bijection between primitive integer right triangles $T$ with $\sqfree(T) = t$ and arithmetic progressions of squares $m^2 - tn^2, m^2, m^2 + tn^2$ (where each term is itself a square).
2. There is a bijection between primitive integer right triangles $T$ with $\sqfree(T) = t$ and points on the elliptic curve $E_t(\mathbb{Q}): y^2 = x^3 - t^2 x$ with $y \neq 0$.
3. If the triangle $T$ corresponds to a point $P$ on the curve $E_t$, then the size of the hypotenuse of $T$ can be bounded below by $H(P)$, the (naive) height of the point on the elliptic curve.
4. Néron (and perhaps Mordell, but I’m not quite fluent in the initial history of the theory of elliptic curves) proved strong (upper) bounds on the number of points on an elliptic curve up to a given height. (In fact, they proved asymptotics which are much stronger than we use.)

In this paper, we use $(1)$ to relate triangles $T$ to the sum $S_t(X)$ and we use $(2)$ to relate these triangles to points on the elliptic curve. Tracking the exact nature of the hypotenuses through these bijections allows us to relate the sum to certain points on elliptic curves. In order to facilitate the tracking of these hypotenuses, we phrase these bijections in slightly different ways than have appeared in the literature. By $(3)$ and $(4)$, we can bound the number and size of the hypotenuses which appear in terms of numbers of points on the elliptic curve up to a certain height. Intuitively this is why the higher the rank of the elliptic curve (corresponding roughly to the existence of many more points on the curve), the worse the error term in our asymptotic.

I would further conjecture that the error term in our asymptotic is essentially best-possible, even though we have thrown away some information in our proof.

We are not the first to note either the bijection between triangles $T$ and arithmetic progressions of squares or between triangles $T$ and points on a particular elliptic curve. The first is surely an ancient observation, but I don’t know who first considered the relation to elliptic curves. But it’s certain that this was a fundamental aspect in Tunnell’s famous work *A Classical Diophantine Problem and Modular Forms of Weight 3/2* from 1983, where he used the properties of the elliptic curve $E_t$ to relate the CNP to the Birch and Swinnerton-Dyer Conjecture.

One statement following from the Birch and Swinnerton-Dyer conjecture (BSD) is that if an elliptic curve $E$ has rank $r$, then the $L$-function $L(s, E)$ has a zero of order $r$ at $1$. The relation between lots of points on the curve and the existence of a zero is intuitive from the approximate relation that

\begin{equation}

L(1, E) \approx \lim_{X \to \infty} \prod_{p \leq X} \frac{p}{\#E(\mathbb{F}_p)},

\end{equation}

so if $E$ has lots and lots of points then we should expect the multiplicands to be very small.
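To get a feel for this product, one can count points on $E_t$ over small prime fields by brute force. This is a naive sketch of my own (serious computations would use sage or pari); it skips $p = 2$ and primes dividing $t$, where the curve has bad reduction:

```python
def count_points(t, p):
    """Number of points on y^2 = x^3 - t^2 x over F_p, including infinity."""
    sq_count = [0] * p
    for y in range(p):
        sq_count[y * y % p] += 1          # how many y give each square value
    total = 1                             # the point at infinity
    for x in range(p):
        total += sq_count[(x ** 3 - t * t * x) % p]
    return total

def small_primes(bound):
    ps = []
    for n in range(2, bound):
        if all(n % q for q in ps if q * q <= n):
            ps.append(n)
    return ps

for t in (1, 5, 6):
    partial = 1.0
    for p in small_primes(200):
        if p == 2 or t % p == 0:          # bad reduction; skip
            continue
        partial *= p / count_points(t, p)
    print(t, partial)       # heuristically smaller for the congruent t = 5, 6
```

The counts always satisfy the Hasse bound $|\#E(\mathbb{F}_p) - (p+1)| \leq 2\sqrt{p}$; the partial products only hint at the limit, since the product converges very slowly (if it converges at all in the naive sense).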

On the other hand, the elliptic curve $E_t: y^2 = x^3 - t^2 x$ has the interesting property that any point with $y \neq 0$ generates a free group of points on the curve. From the bijections alluded to above, a primitive integer right triangle $T$ with $\sqfree(T) = t$ corresponds to a point on $E_t$ with $y \neq 0$, and thus guarantees that there are lots of points on the curve. Tunnell showed that what I described as “lots of points” is actually enough points that $L(1, E)$ must be zero (assuming the relation between the rank of the curve and the value of $L(1, E)$ from BSD).

Tunnell proved that if BSD is true, then $L(1, E_t) = 0$ if and only if $t$ is a congruent number.

Yet for any elliptic curve we know how to compute $L(1, E)$ to guaranteed accuracy (for instance by using Dokchitser’s algorithm). Thus a corollary of Tunnell’s theorem is that BSD implies that there is an algorithm which can be used to determine definitively whether or not any particular integer $t$ is congruent.

This is the state of the art on the congruent number problem. Unfortunately, BSD (or even the somewhat weaker statement, lying between full BSD and the mere equivalence of nonzero rank with the vanishing of $L(1, E)$, that is necessary for Tunnell’s result on the CNP) is quite far from being proven.

In this context, the main result of this paper is not as effective at actually determining whether a number is congruent or not. But it does have the benefit of not relying on any unproven conjecture.

And there are some potential follow-up questions. The sum $S_t(X)$ appears as an integral transform of the multiple Dirichlet series

\begin{equation}

\sum_{m,n} \frac{\tau(m-n)\tau(m)\tau(nt)\tau(m+n)}{m^s n^w}

\approx

\sum_{m,n} \frac{r_1(m-n)r_1(m)r_1(nt)r_1(m+n)}{m^s n^w},

\end{equation}

where $r_1(n)$ is $1$ if $n = 0$, is $2$ if $n$ is a positive square, and is $0$ otherwise. The function $r_1(n)$ appears as the Fourier coefficients of the standard half-integral weight theta function

\begin{equation}

\theta(z)

= \sum_{n \in \mathbb{Z}} e^{2 \pi i n^2 z}

= \sum_{n \geq 0} r_1(n) e^{2 \pi i n z},

\end{equation}

and $S_t(X)$ is a shifted convolution sum coming from some products of modular forms related to $\theta(z)$.
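For concreteness, $r_1(n)$ is trivial to compute directly, and its partial sums admit a quick sanity check against the lattice-point count $\#\{m \in \mathbb{Z} : m^2 \le N\}$ (the function name below is my own):

```python
import math

def r1(n):
    """Fourier coefficients of theta(z): r1(0) = 1, r1(n) = 2 when n is a
    positive perfect square (counting +/- its square root), else 0."""
    if n == 0:
        return 1
    r = math.isqrt(n)
    return 2 if r * r == n else 0

# Partial sums of r1 count the integers m with m^2 <= N,
# of which there are exactly 2*isqrt(N) + 1.
N = 100
assert sum(r1(n) for n in range(N + 1)) == 2 * math.isqrt(N) + 1
```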

It may be possible to gain further understanding of the behavior of $S_t(X)$ (and therefore the congruent number problem) by studying the shifted convolution as coming from theta functions.

I would guess that there is a deep relation to Tunnell’s analysis in his 1983 paper, as in some sense he constructs appropriate products of three theta functions and uses them centrally in his proof. But I do not understand this relationship well enough yet to know whether it is possible to deepen our understanding of the CNP, BSD, or Tunnell’s proof. That is something to explore in the future.
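For reference, the criterion Tunnell extracts from those theta products can be tested directly. For odd squarefree $t$, his theorem states that if $t$ is congruent then $\#\{2x^2 + y^2 + 32z^2 = t\} = \tfrac{1}{2}\#\{2x^2 + y^2 + 8z^2 = t\}$, and assuming BSD the converse holds as well. A brute-force sketch (function names mine):

```python
import math

def reps(n, a, b, c):
    """#{(x, y, z) in Z^3 : a*x^2 + b*y^2 + c*z^2 = n}, by brute force."""
    count = 0
    for x in range(-math.isqrt(n // a), math.isqrt(n // a) + 1):
        m = n - a * x * x
        for y in range(-math.isqrt(m // b), math.isqrt(m // b) + 1):
            k = m - b * y * y
            if k % c == 0:
                z = math.isqrt(k // c)
                if c * z * z == k:
                    count += 1 if z == 0 else 2  # count both z and -z
    return count

def tunnell_test(t):
    """True iff odd squarefree t passes Tunnell's counting criterion."""
    return 2 * reps(t, 2, 1, 32) == reps(t, 2, 1, 8)
```

For example, $t = 5, 7$ pass the test (both are congruent), while $t = 1, 3, 11$ fail it.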
