This is the final chapter in my series about the state of internet fora, and Math.SE and StackOverflow in particular. The previous chapters are Challenges Facing Community Cohesion and Ghosts of Forums Past. Unlike the previous entries, this one also sits on Meta.Math.SE (and was posted there a week before here). (Since I write as a moderator of Math.SE, I refer to the Math.SE community as “we”, “us”, and “our” community.)

A couple of weeks ago, there was a proposal on Meta.Math.SE to introduce a third level of math site to the SE network. Many members of the Math.SE community have reacted very positively to this proposal, to the extent that even some of the moderators have considered throwing their weight behind it.

But a NoviceMathSE site *would be doomed to fail*, and such a separation *would not solve the underlying problems facing the site*.

To explain my point of view, we need to examine more closely the arguments in favor of NoviceMathSE.

In the proposal itself, the stated goal is to *act as a place where students are solving their homeworks all together*.

- Some students will learn a lot by answering their friends’ questions. They are usually discouraged from writing answers on MSE, since their language and notation are less formal.
- Discussion between students might be more helpful than a discussion where one side stays very formal.
- They will put more effort into their questions, since on MSE even a challenging/tricky question usually gets a hint immediately.

Encouraging lots of discussion between students solving homework together is a mixture between subjectivity and localization, two things SE tends to avoid.

Maybe someone could create a tool where a school/college/university course would have an SE-like forum/Q&A allowing students to work together in an SE-like framework. This style of tool is already used in some MOOCs to facilitate learning environments (especially since the ratio of students to instructors can be enormous). Some MOOCs reset the forums each term/year to foster additional rounds of student involvement. I don’t know if this sort of tool already exists (if not, then maybe someone should go make one).

This sort of tool belongs with individual courses, not on the SE network.

**But I think much of the positive reaction to the proposal wasn’t for exactly the same proposal as in the OP, but instead for the thought of adding a lower-level Math Q&A.**

For this reason (and because certainly SE would not want to be explicitly viewed as a place where students go to get their homework done for them), I refer to the potential site as NoviceMathSE instead of HWMathSE. (I note that Jyrki has suggested calling it MathTutoringSE, which is also better than HWMathSE).

The proposal asks about “a third level of math site”. Implicitly stated in this proposal is the distinction between Math.StackExchange and MathOverflow as being a difference of the level of the question. But this is not an accurate description of the differences.

MathOverflow is not an ordinary member of the StackExchange network. MathOverflow is run by a non-profit organization which has an agreement with SE to host their site. It did not start through the typical experimental-beta-public StackExchange model, and does not have the same culture (or even all the same rules) as the rest of the StackExchange sites.

It is more appropriate to compare MathOverflow with PhysicsOverflow, which is separate from the StackExchange network.

In essence, MathOverflow has content that is interesting to research mathematicians. This consists largely of research level mathematics, but sometimes it also consists of essentially basic questions that are of interest to mathematicians. This is exactly how MO was founded (it’s older than MathSE).

It is not true that once a question hits a certain level of difficulty, it should be asked on MathOverflow instead of MathSE. Instead it is the audiences that are different.

With this in mind, it is not appropriate to think of creating another math site as forming a three-step ladder of NoviceMathSE, MathSE, and MathOverflow.

The goal makes sense. Right now, most of the noise on MathSE comes from low-level questions. The major intent behind this proposal is to raise the ratio of signal to noise on MathSE by removing most of the noise.

But this cannot hope to work, because we cannot achieve consensus on how to distinguish “signal” from “noise”. There are already endless disagreements on what is on-topic or off-topic. It is unreasonable to expect MathSE to be able to draw a clear line on what is on-topic and what is off-topic now.

I cannot begin to imagine the moderating headache that would come from attempting to identify and close these questions amidst the various sources of ensuing community backlash. It would be one thing if MathSE had consensus on the various choices facing it, but this is not the case.

More worrying to me is that this proposal seems to be supported most strongly by users who want to *dump bad questions somewhere else.* (It is possible that I am misinterpreting this, but I don’t think so.)

Such a site is doomed to fail. It would indeed be full of noise. There would be fewer experts there because there are fewer interesting questions, and novices would often prefer to not post there because there would be fewer experts there. Users want good answers, and depending on novices to help other novices is more appropriate for peer-learning environments than a SE Q&A.

One of the major reasons the SE model has been effective is that each site is created to be a place with very high quality content, where experts want to answer interesting questions, and where people looking for good answers can find good, accurate information.

Yes, migrating lower quality questions to NoviceMathSE from MathSE might improve the condition of MathSE, but the signal/noise ratio of NoviceMathSE would almost certainly spiral out of control towards 0 and the site would fail.

We cannot expect to migrate all the lower-quality content (assuming we could even identify what that means) to another site. **If the goal is to remove lower-quality content, then the appropriate course of action is to try to find a way of identifying and removing it. Why bother trying to find somewhere else to dump it?**

Many of the comments and posts in favor of a NoviceMath.SE seem to want it to exist in order to solve problems of low quality content on Math.SE. It is unreasonable for a group of *us* to try to create a site for *some other group*. That is, it doesn’t make sense for a group of MathSE members to decide on a site that other people should go and populate.

If a group of people want to make NoviceMathSE (or some variant thereof) happen **and be a part of that new community**, then it would be a good idea for them to step forward and begin establishing what they want and what they’re missing from Math.SE. This is how new communities are established. Too much of the discussion essentially concerns ghettoizing low-quality questions, which goes against all principles of self-determination on the network.

But I think there are some other ways to improve the quality of MathSE that don’t rely on fragmenting the community.

- Implement a Triage queue here. StackOverflow has a special review queue called “Triage”. The goal is to quickly sort potentially problematic posts into categories that can be routed elsewhere. In short, questions are sorted into three categories: Looks Ok (where it goes to the front page), Should be Improved (where it has limited visibility on the front page and goes into a help and improvement queue), and Unsalvageable (where it goes to mod review or a close/delete queue).
- Consider creating an Ask a Question template (like the one being experimented with on SO). It is hard to pin down exactly what should go into such a template, but it may just work.
- Improve awareness of the ability to *favorite* and *ignore* tags, and to *hide questions from ignored tags*. These features seem to be little known, but every additional method of filtering towards the content you prefer is a win.

But I should note that these come with caveats. The Triage queue is resource intensive. SE has declined to implement it on other sites in the past because it requires tweaking lots of machine learning algorithms (i.e. potentially ongoing work), and it requires many people looking at review queues to identify questions quickly. As noted here, triage was tailored to the needs of StackOverflow. This doesn’t preclude its use elsewhere, but that’s a discussion which needs to be had separately. Fortunately, triage makes the most sense on the largest sites on the network, and Math.SE certainly fits that bill (it is the second largest on the network).
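As a toy sketch of the triage idea, the routing described above can be pictured as a tiny function. The three bucket names come from this post; the numeric score and thresholds are entirely invented for illustration and have nothing to do with SE’s actual algorithms:

```python
def triage(question_score: float) -> str:
    """Route a new question into one of three review buckets,
    in the spirit of StackOverflow's Triage queue.

    The bucket names match the description above; the score and
    thresholds are made up for this sketch."""
    if question_score >= 0.7:
        return "Looks Ok"            # shown normally on the front page
    if question_score >= 0.3:
        return "Should be Improved"  # limited visibility; help & improvement queue
    return "Unsalvageable"           # mod review or close/delete queue
```

The point of the real queue is that the sorting is done quickly by human reviewers (guided by machine learning), not by a single score, but the routing outcome is the same three-way split.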

An Ask-a-Question template is somewhat complicated, since there are many different kinds of questions that can be asked. But the A/B testing on StackOverflow has seen some success. I think it may be beneficial to develop a template on Math.SE and proceed with some A/B testing as well. (The worst that happens is that it doesn’t work, right?)

In this chapter I focus more on Math.SE and StackOverflow. Math.SE is now experiencing growing pains and looking for solutions. But many users of Math.SE have little involvement in the rest of the StackExchange network and are mostly unaware of the fact that StackOverflow has already encountered and passed many of the same trials and tribulations (with varying degrees of success).

Thinking more broadly, many communities have faced these same challenges. From the point of view of the last chapter, it may appear that there are only a handful of tools a community might use to try to retain group cohesion. Yet it is possible to craft clever mixtures of these tools synergistically. The major reason the StackExchange model has succeeded where other fora have stalled (or failed outright) is through its innovations on the implementation of community cohesion strategies while still allowing essentially anyone to visit the site.

Slashdot^{1} popularized the idea of associating imaginary internet points with different users. It was called *karma*. You gained karma if other users rated your comments or submissions well, and lost karma if they rated your posts as poor. But perhaps most importantly, each user can set a threshold for the minimum score of content they see. Thus if people have reasonable thresholds and you post crap, then most people won’t even see it after it’s scored badly.
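The threshold mechanism can be sketched in a few lines. The comment data here is hypothetical, and Slashdot’s actual scoring is more elaborate, but the filtering idea is just this:

```python
def visible_comments(comments, my_threshold):
    """Slashdot-style filtering: each reader sets a minimum score,
    and comments rated below that threshold are never shown to them."""
    return [c for c in comments if c["score"] >= my_threshold]

# Hypothetical thread: one good comment, one piece of junk.
thread = [
    {"text": "insightful proof sketch", "score": 5},
    {"text": "FRIST POST!!", "score": -1},
]

# A reader with threshold 1 never even sees the low-scored comment.
visible = visible_comments(thread, 1)
```

The filtering is per-reader, which is the key design choice: nothing is deleted, it just stops being anyone’s problem once it scores badly.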

What reputation and karma do is send a message that this is a community with norms, it’s not just a place to type words onto the internet. (That would be 4chan.) We don’t really exist for the purpose of letting you exercise your freedom of speech. You can get your freedom of speech somewhere else.

Astoundingly, karma even contributes to some sort of community cohesion when there are no benefits or detriments to having karma. See reddit, where karma is on the one hand almost worthless^{3}, and on the other hand highly valued. Even sought after.

Seriously, if you look you can find thousands upon thousands of people asking how to get more reddit karma (and a much smaller number asking what it’s good for).^{4}

I credit StackOverflow with popularizing the idea that imaginary internet points can be used as a formal (rather than informal) indicator of community standing. SO even calls it “reputation”. As a user gains more reputation points, they are given more peer moderation abilities. A user gains the ability to upvote^{5}, downvote, edit any post, close/reopen posts, or even delete/undelete posts.
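That progression can be pictured as a simple lookup table. The thresholds below are illustrative only, not SE’s exact per-site numbers (which vary by site and have changed over time):

```python
# Illustrative privilege ladder; real StackExchange thresholds differ per site.
PRIVILEGE_LADDER = [
    (15, "upvote"),
    (125, "downvote"),
    (2000, "edit any post"),
    (3000, "vote to close/reopen"),
    (10000, "vote to delete/undelete"),
]

def privileges(reputation: int) -> list[str]:
    """Peer-moderation abilities unlocked at a given reputation level."""
    return [name for threshold, name in PRIVILEGE_LADDER if reputation >= threshold]
```

A brand-new user gets an empty list; a very-high-rep user holds nearly the full moderation toolkit, which is exactly the dynamic discussed below.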

This has worked astoundingly well. But it’s not perfect.

Very often I hear the same sort of story. A new user comes to ask a question, but it gets downvoted and negatively commented immediately. Then a *moderator* comes in and closes or deletes the question. And if even the mods are against new users, then what are they to do? That isn’t so welcoming, is it?

I frequently look into these cases and find a slightly different backstory. What usually happens is that several very high reputation users decided to close/delete the question with a somewhat minimal comment, such as *This is a duplicate of [this other question]* or *What have you tried?* or *RTFM*. The source of the confusion is that these high-rep users have lots of moderator powers that new users don’t have. At first I thought that this distinction was important: it’s not the mods that are unwelcoming to new users; it’s just some high-rep users.

But then I realized that to a new user, this distinction is completely meaningless. The typical new user doesn’t care about their own reputation or badges or even the community itself — they just want an answer to a question. Any obstacle in their way (like reading a *How to Ask a Question* page or comments saying *Use MathJax*) is a pitfall to be navigated on the way to that goal. The fact is that they wanted help with something, went to get some help, only to feel like they were shut down.

This has been a major complaint about StackOverflow for years. In 2012 StackOverflow tried to reform new user culture through their Summer of Love initiative (Summer of Love, aka the Hunting of the Snark. Goal: keep SO welcoming and friendly *without* lowering standards).^{6}

Did it work? Not quite. In fact, the opening post on SO’s meta site^{7} generated so much bickering and negative commentary that it was deleted.

Other ideas were tried, but they had at most temporary success. A few years later John Slegers wrote a highly viewed post, The decline of Stack Overflow, documenting the standard negative first impression received by new users, and how even older users can be at the whims of *Privileged Trolls*. These Privileged Trolls are those high reputation users who use their powers extensively.

Why doesn’t StackOverflow ban/suspend/quell the class of Privileged Trolls? In short, it’s because they’re not wrong. Most often these so-called Privileged Trolls are seeking to combat low quality questions and the existence of Help Vampires^{8}.

There exists a class of high reputation user who is very frequently on the site, has a corner that they care about, and is very familiar with the majority of content in that corner. We might optimistically call them a *Caretaker* instead of a Privileged Troll.

A typical bad scenario might go as follows. A new user comes and asks *Why is this python program hanging?*. A Caretaker sees the question, recognizes that the user was trying to pull stdin from an IDE, and then marks the question as a duplicate of some question about how to get around this. In the abstract, does this answer the question? Yes. But to the new user who is following some tutorial and doesn’t even know what `import` means yet, this is probably unhelpful or confusing.

On Math.SE, this problem might be further exacerbated by the fact that there are high-powered mathematical results that quash all sorts of weaker statements. Having your introductory real analysis question *How do I show that this function is integrable?* closed as a duplicate of some question stating that *any function which is almost everywhere continuous is Riemann integrable* is most definitely unhelpful (and yet closures like this certainly occur).

Caretakers are trying to maintain high site quality. One aspect of quality is the ratio of signal to noise, and the existence of a vast number of duplicate questions is a source of noise. Using the enumeration from this answer to the meta.SO question Why is StackOverflow so negative of late?:

Basically there are 4 camps of users on Stack Overflow:

- The “caretakers” who want to keep the site clean and with good content.
- The “help vampires” who flood the site with bad/duplicate questions who only want their question answered and care nothing for the site.
- The “repwhores” who answer everything they can (or can’t).
- The ones who no longer give a shit.
For the most part:

2 and 3 love each other. They should get married.

1 hates 2 because they’re flooding the site making good questions impossible to find.

1 hates 3 because they’re encouraging 2 to keep going.

2 hates 1 because 1 constantly downvotes/closes/deletes/flames 2.

3 hates 1 because they keep closing/deleting the questions that 3 likes to answer.

1 and 3 have all the moderation powers, but only 1 cares to use them.

4 is sitting on the sideline shaking their heads…

1 hates 4 because 4 isn’t helping the situation.

With so much hate, there’s going to be conflict.

There are too many moderators (both true mods and very-high-rep users) for a single common viewpoint to dominate the others. And from my point of view, a central division is over the purpose of StackOverflow. Is it to

- Quickly get people great answers to their programming questions, or to
- Serve as a repository of useful programming knowledge.

In many ways these work together. Providing great answers to new questions serves both. But repeatedly answering the same question (especially with slightly different answers) makes the site less useful as a repository of knowledge — a visiting user may need to check several variants of a question to find an answer that works for them. Why not just ask another variant instead, adding to the tidal wave of similar questions? Conversely, requiring users to interpret a canonical question and answer for their own situation is annoying, especially to novice users who don’t know enough to recognize alternate phrasings of the same topic.

I believe the intent of the site was the latter, but somehow a large minority of users much prefer the former.

On Math.SE, there is perhaps a third category. One can ask whether the purpose is to

- Teach people mathematics,
- Answer mathematical problems from all levels, or to
- Serve as a repository of useful mathematical knowledge.

I think the reason why Math.SE cares so much about teaching mathematics is that many of the veteran users are (or have been) educators (teachers, professors, teaching assistants, lecturers, etc.). But similarly to the StackOverflow case, I believe the founders had the last option in mind, but frequent answerers are often interested in actually teaching people mathematics.

Despite the apparent difference, most cultural problems appear to be the same. Or rather, since Math.SE is a bit younger and a bit smaller than SO (but still the second largest site on the StackExchange network), the cultural problems facing Math.SE are a mix of the current problems facing SO and the problems from a few years ago.

Let us now dive into parallel responses between StackOverflow and Math.SE.

- Why is “Can someone help me?” not an actual question? Response summary: the site is intended to create a knowledge repository of solutions to programming problems. When you ask a question, make sure you *actually ask a question*.
- Why the backlash against poor questions? Response summary: bad questions are noise while good questions and answers are the signal. If the signal is drowned out by the noise, then people interested in answering questions go away, leaving behind only people asking questions.
- Can we adopt a stop-whining-about-bad-questions policy? Response summary: no. Bad questions = noise. Constantly seeing the same question will lead to people not answering anymore and leaving.^{9}
- Should trivial re-occurring questions really be answered? The response is complicated. As long as answers to these questions are upvoted, there are incentives to answer them (and therefore the asker, even if downvoted, will probably get the answer they were looking for). Some suggest downvoting answers to bad questions to remove the incentive, but that is quite a complicated thought process. It is also noted that there is a dichotomy between the Atwood keep-question-quality-really-high-to-optimize-for-pearls policy and the Spolsky ask-any-question-as-long-as-it-hasn’t-been-asked policy.^{10}
- Should one give advice on off-topic questions? The upvoted response is to downvote and close off-topic questions, and to *absolutely not help or advise*, as this incentivizes poor questions.
- Off-topic questions have to be cleared out of the way, but NOT via closure. The theme of this post is that the current reputation system incentivizes people to answer poor questions, which in turn incentivizes people to ask poor questions. The responses have an interesting theme: most say that hoping for an ideal site where people don’t answer low-quality questions is probably a waste of time (perhaps even counterproductive), even though there is definitely a real problem there. Others advise users to downvote low-quality posts.
- Should SO be awarding As for effort? This is really about people asking questions and others saying “This doesn’t show enough effort to merit a good response”, and the related viewpoint that questions with lots of effort shown do deserve a good answer. The answers hit a really wide set of contradictory opinions, and reading this question and its answers gives good insight into different trains of thought on the topic.
- How to ask and answer homework questions?

From these topics, you may get the impression that there is a central response to downvote good answers to low-quality questions, as that is frequently advertised as a central method of maintaining high-quality content. But then you read Is it okay to downvote answers to bad questions? and see that the overwhelmingly upvoted response there is *No, it’s not okay to downvote good answers to bad questions.* But in fact the subtler issue here is that *as long as users don’t engage in vote fraud, they can vote however they want.* There is also a rebuttal by Brad Larson noting that targeting downvotes at people who answer low-quality questions will most likely drive those frequent answerers away (definitely undesirable); further, he doesn’t believe the assumption that making people stop answering bad questions will make bad questions stop coming.^{11}

Thus both identifying low-quality content and deciding how to prevent it are almost entirely unresolved. In practice, there are people who downvote low-quality questions and answers to low-quality questions (Caretakers), people who upvote them, and people who answer them, with the dynamics described by the various user camps above.

It should be noted that on Math.SE, the vast majority of low-quality questions (and indeed, the majority of all questions) are from students of mathematics trying to learn new material. A typical question comes either from a suggested or assigned problem from an instructor, or from a math book that someone is trying to understand or solve an exercise from. So on Math.SE there is a big conflation between “homework”, “cut-and-paste” questions, and “low quality”.

With that noted, these problems (and mostly their suggested responses) also appear independently on Math.SE.

- Why isn’t more being done to avoid facilitating copy-paste homework questions?
- Can I try to tell experienced users to not answer bad questions?
- What to do when other users answer low quality questions?
- Dealing with zero effort questions
- How to deal with just-google-it questions
- Have the questions on Math.SE changed in quality?

As with SO, many responses suggest downvoting low-quality posts more, agree that there really is a problem, but admit that the problem may not be solvable. Trying to prevent experienced users from answering bad questions may be a waste of effort (or a noble effort), and these questions should be ignored (or upvoted, or downvoted).

And if you think there is a recurring suggestion to downvote or delete low-quality content, note that acting on it would go against the (upvoted and respected) thought process behind the answers to Downvoting complete solutions.

There simply isn’t consensus on these issues, or on What the purpose of Math.SE is.

One major takeaway from the above discussion is that there are real problems facing Math.SE and SO, and these problems stem from underlying problems that are essentially unresolved. There isn’t consensus on the purpose of the site or how to deal with low quality questions (or even if they’re a real problem).

Does that mean that trying to resolve these problems is a waste of time? No! In fact StackOverflow has implemented a variety of tools not (yet) present on other sites in the network that can help with some of these problems. (And these don’t have anything to do with the recent StackExchange blog post suggesting making SO a more welcoming community.)

A recent suggestion that gained some traction on meta.Math.SE was to introduce another site to the network where novice mathematical questions are welcome.

In the next chapter, I will say why I think *NoviceMath.SE is a bad idea* (but also that there are some changes that can be made now that will relieve some of the tension on the site).

Now with some perspective as a frequent contributor/user/moderator of various online newsgroups and fora, I want to take a moment to examine the current state of Math.SE.

To a certain extent, this is inspired by Joel Spolsky’s series of posts on StackOverflow (which he is currently writing and sending out). But this is also inspired by recent discussion on Meta.Math.SE. As I began to collect my thoughts to make a coherent reply, I realized that I have a lot of thoughts, and a lot to say.

So this is chapter one of a miniseries of writings on internet fora, and Math.SE and StackOverflow in particular.

I fondly remember the beginning, when it was possible to read every question and answer that was posted on Math.SE.^{1} I’m not saying this was a good idea, but I was learning lots of middle undergraduate math and this sort of math dominated the site. It felt particularly relevant.

Further, it was so vastly superior to the alternatives. Before Math.SE, there were other math fora and discussion boards. There were the Usenet newsgroups (which were message boards and should be thought of more as fora, less as a source of news), the Art of Problem Solving forums, and mymathforum. Maybe there were more, but these were what I knew.

These were each good in their own way. Usenet started a revolution but was ephemeral. If you didn’t store the history yourself, you needed to hope that someone else was archiving the newsgroup you were interested in and had some way of letting you access it.^{2} The more static fora like mymathforum and AoPS were easier to jump into and browse (a big plus), but they depended entirely on a small group of moderators to police the community. That’s a lot of work for a few people, and there was a lot of noise.

There’s a problem that hit the older fora. When communities grow to a certain size, the ratio of signal to noise plummets. Maybe this is closely related to Dunbar’s Number?^{3} The point is that it’s frequently a sudden freefall. Abruptly there is almost no signal, just noise.

How do online communities fight Dunbar’s Number? There are only a few frequently used techniques.

- *Moderators* remove and delete content; kick, ban, and mute users; etc. This is perhaps the most common technique, and it can be very effective. This is how it works on IRC and on traditional fora like mymathforum and AoPS, and there is a core of special moderators on StackOverflow, Math.SE, reddit, Slashdot, etc. But as the community gets large, one needs more moderators, and if the core moderator group gets too big then the moderators can suffer from infighting.
- *Peer moderators* can be used (or peer moderation abilities can be earned). On Slashdot, digg, reddit, and hackernews, the community relies on general users to enforce (and create) community rules and guidelines. Good content rises to the top, while bad/unwelcome content sits or sinks. A great innovation in the StackExchange model is that there is extensive peer moderation, but as users gain clout (read: reputation) within the community, they gain more and more powerful moderation capabilities. It is almost like having a much larger group of core moderators. This has proved to be extremely effective, especially when a community has a strong identity. On the other hand, since the direction of the community is enforced most often by community members, it may veer off in unexpected ways. What if a corner of your community goes in the direction of intolerance and hate speech? A few years ago reddit shut down five subreddits under a new anti-harassment policy, including the “Fat People Hate” subreddit. Many in the community felt this went against the (faux) democracy of reddit.^{4}
- *Membership requirements* keep membership low and controlled. On the one hand, this is what secret clubs and societies do, or country clubs which charge high membership fees. But college fraternities and sororities also enforce membership requirements, even if they’re wholly implicit. Some mailing lists let anyone subscribe, but only a privileged or controlled group can post to the list. Sometimes this works. Sometimes groups bicker about what the membership requirements really should be, especially if they’re subjective or implicit.^{5}
- *Subcommunities and secessions* maintain a strong group around a strong vision. Many fora have individual discussion topics or boards which different groups of people focus on. Reddit uses subreddits^{6} to an enormous degree of success. The StackExchange network has different SE sites (like StackOverflow and Math.SE, or perhaps more meaningfully like the dichotomy between StackOverflow/SoftwareEngineering.SE or Math.SE/MathOverflow^{7}). These are subcommunities. For secessions, I think of “Quit Digg Day” on August 30, 2010, when many users flocked to the very young reddit after unappetizing digg changes. Centrally created subcommunities serve to divide the overall community into smaller groups, but once created it’s usually not effective to try to create further subsubcommunities. When splitting off from the old community, there are odd dynamics at play. On the one hand, you hope enough like-minded people follow to make a vibrant community. But you don’t want everyone to come, since then nothing would change. So these splits are usually somewhat secretive, or maybe the new community will enforce stronger membership requirements, etc. This might work for a while. But often it’s only a matter of time before the new community becomes exactly like the original community^{8}, or the more stringent requirements and the passage of time lead to a dwindling community which doesn’t benefit from the easy access and random internet encounters that led to the original’s success.

In terms of tools that online communities use to defeat Dunbar’s Number, that’s about it. Hopefully that’s enough — hopefully there is some combination of these methods that works. Otherwise, it’s all noise and no signal.

What does all noise, no signal look like? Most of the old usenet groups still exist. The main math one is sci.math, and it (perhaps astoundingly) has really high volume even today. But it’s a mostly barren wasteland now. Look at this shot of the most recent content today.

People ask for solutions manuals, complain to Joel Spolsky and Jeff Atwood about something on StackExchange (?), ask about contracts with the devil, and say that Terry Tao failed some math test. In other words, utter nonsense. It’s maybe not all bad, but the signal to noise ratio is so terrible that it almost certainly drives away many many people (including me — I certainly don’t read sci.math anymore).^{9}

As communities get larger, not everyone can agree on what “noise” even means. In mailing lists, current event discussion groups, book clubs, or other communities where discussion revolves around whatever is “current” (and what counts as “current” is constantly changing), this can be less of a problem. But on support lists or Q&A sites like Math.SE or StackOverflow, there is a large class of users who have been around for a while and don’t want to keep answering the same questions over and over, and there is a large class of users who have recently come across something they want/need help on and really just want to find an answer.^{10}

Maybe it’s impossible for any community to be stable forever. It seems like one might conjecture a Second Law of Community Thermodynamics: the total entropy of a community will always increase until heat death. Further, heat death can have two forms: the “hot” form is spam death, where all signal is overrun by noise, and the “cold” form where any meaningful voices abandon the community, leaving only vacuum noise.^{11}

But they’re sure fun while they work, so it’s not like that’s going to stop anyone.

This is the end of the first chapter on community-building and maintenance. In the next chapter, I’ll focus a bit more on Math.SE and StackOverflow, and more specifically on how Math.SE should consider the *Ghosts of Forums Past*.

We consider some triangles. There are many right triangles, such as the triangle with sides $(3, 4, 5)$ or the triangle with sides $(1, 1, \sqrt{2})$. We call a right triangle *rational* when all its side lengths are rational numbers. For illustration, $(3, 4, 5)$ is rational, while $(1, 1, \sqrt{2})$ is not. $\DeclareMathOperator{\sqfree}{sqfree}$

There is mythology surrounding rational right triangles. According to legend, the ancient Greeks, led both philosophically and mathematically by Pythagoras (who was the first person to call himself a philosopher and essentially the first to begin to distill and codify mathematics), believed all numbers and quantities were ratios of integers (rational). When a disciple of Pythagoras named Hippasus found that the side lengths of the right triangle $(1, 1, \sqrt{2})$ were not rational multiples of each other, the other followers of Pythagoras killed him by casting him overboard while at sea for having produced an element which contradicted the gods. (It is with some irony that we now attribute this as a simple consequence of the Pythagorean Theorem.)

This mythology is uncertain, but what is certain is that even the ancient Greeks were interested in studying rational right triangles, and they began to investigate what we now call the *Congruent Number Problem* (CNP), which may be the oldest unresolved math problem. By the year 972 the CNP appears in Arabic manuscripts in (essentially) its modern formulation.

We call a positive rational number $t$ *congruent* if there is a rational right triangle with area $t$. The triangle $(3,4,5)$ shows that $6 = 3 \cdot 4 / 2$ is congruent. The CNP is to describe all congruent numbers. Alternately, the CNP asks whether there is an algorithm to show definitively whether or not $t$ is a congruent number for any $t$.

We can reduce the problem to a statement about integers. If the rational number $t = p/q$ is the area of a triangle with legs $a$ and $b$, then the triangle with legs $aq$ and $bq$ has area $tq^2 = pq$. It follows that to every rational number there is an associated squarefree integer for which either both are congruent or neither is congruent. Further, if $t$ is congruent, then $ty^2$ and $t/y^2$ are congruent for any integer $y$.

We may also restrict to integer-sided triangles if we allow ourselves to look for those triangles with squarefree area $t$. That is, if $t$ is the area of a triangle with rational sides $a/A$ and $b/B$, then $tA^2 B^2$ is the area of the triangle with integer sides $aB$ and $bA$.

It is in this form that we consider the CNP today.

**Congruent Number Problem.** Given a squarefree integer $t$, does there exist a triangle with integer side lengths such that the squarefree part of the area of the triangle is $t$?

We will write this description a lot, so for a triangle $T$ we introduce the notation

\begin{equation}

\sqfree(T) = \text{The squarefree part of the area of } T.

\end{equation}

For example, the area of the triangle $T = (6, 8, 10)$ is $24 = 6 \cdot 2^2$, and so $\sqfree(T) = 6$. We should expect this, as $T$ is exactly a doubled-in-size $(3,4,5)$ triangle, which also corresponds to the congruent number $6$. Note that this allows us to only consider primitive right triangles.
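As a concrete illustration, here is a small Python sketch (the function names are mine, not from the paper) that computes $\sqfree(T)$ for an integer right triangle given its legs:

```python
def sqfree_part(n):
    """Divide out the largest square divisor of n, leaving its squarefree part."""
    d = 2
    while d * d <= n:
        while n % (d * d) == 0:
            n //= d * d
        d += 1
    return n

def sqfree(a, b):
    """Squarefree part of the area of the right triangle with legs a and b."""
    return sqfree_part(a * b // 2)

print(sqfree(6, 8))  # the (6, 8, 10) triangle: area 24 = 6 * 2^2, so this is 6
```

As expected, the $(3,4,5)$ triangle gives the same value, `sqfree(3, 4) == 6`, consistent with only needing to consider one triangle per similarity class.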

Let $\tau(n)$ denote the square-indicator function. That is, $\tau(n)$ is $1$ if $n$ is a square, and is $0$ otherwise. Then the main result of the paper is that the sum

\begin{equation}

S_t(X) := \sum_{m = 1}^X \sum_{n = 1}^X \tau(m-n)\tau(m)\tau(nt)\tau(m+n)

\end{equation}

is related to congruent numbers through the asymptotic

\begin{equation}

S_t(X) = C_t \sqrt X + O_t\Big( \log^{r/2} X\Big),

\end{equation}

where

\begin{equation}

C_t = \sum_{h_i \in \mathcal{H}(t)} \frac{1}{h_i}.

\end{equation}

Each $h_i$ is a hypotenuse of a primitive integer right triangle $T$ with $\sqfree(T) = t$. Each hypotenuse will occur in a pair of similar triangles $(a,b, h_i)$ and $(b, a, h_i)$; $\mathcal{H}(t)$ is the family of these triangles, choosing only one triangle from each similar pair. The exponent $r$ in the error term is the rank of the elliptic curve

\begin{equation}

E_t(\mathbb{Q}): y^2 = x^3 - t^2 x.

\end{equation}

What this says is that $S_t(X)$ will have a main term if and only if $t$ is a congruent number, so that computing $S_t(X)$ for sufficiently large $X$ will show whether $t$ is congruent. (In fact, it’s easy to show that $S_t(X) \neq 0$ if and only if $t$ is congruent, so the added value here is the nature of the asymptotic).
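Since the statement that $S_t(X) \neq 0$ if and only if $t$ is congruent is easy to check by brute force, here is a quick sanity check (my own code, not from the paper), with $\tau$ implemented exactly as defined above:

```python
from math import isqrt

def tau(n):
    # square-indicator function: 1 if n is a perfect square, 0 otherwise
    if n < 0:
        return 0
    r = isqrt(n)
    return 1 if r * r == n else 0

def S(t, X):
    # the double sum S_t(X) defined above
    return sum(tau(m - n) * tau(m) * tau(n * t) * tau(m + n)
               for m in range(1, X + 1)
               for n in range(1, X + 1))

print(S(6, 100))  # nonzero, as 6 is congruent: (m, n) = (25, 24) contributes
print(S(1, 100))  # 0, as 1 is not a congruent number
```

The pair $(m, n) = (25, 24)$ corresponds to the arithmetic progression of squares $1, 25, 49$ coming from the $(3,4,5)$ triangle.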

We should be careful to note that this does not solve the CNP, since the error term depends in an inexplicit way on the desired number $t$. What this really means is that we do not have a good way of recognizing when the first nonzero term should occur in the double sum. We can only guarantee that for any $t$, understanding $S_t(X)$ for sufficiently large $X$ will allow one to understand whether $t$ is congruent or not.

There are four primary components to this result:

1. There is a bijection between primitive integer right triangles $T$ with $\sqfree(T) = t$ and arithmetic progressions of squares $m^2 - tn^2, m^2, m^2 + tn^2$ (where each term is itself a square).
2. There is a bijection between primitive integer right triangles $T$ with $\sqfree(T) = t$ and points on the elliptic curve $E_t(\mathbb{Q}): y^2 = x^3 - t^2 x$ with $y \neq 0$.
3. If the triangle $T$ corresponds to a point $P$ on the curve $E_t$, then the size of the hypotenuse of $T$ can be bounded below by $H(P)$, the (naive) height of the point on the elliptic curve.
4. Néron (and perhaps Mordell, but I’m not quite fluent in the initial history of the theory of elliptic curves) proved strong (upper) bounds on the number of points on an elliptic curve up to a given height. (In fact, they proved asymptotics which are much stronger than we use.)

In this paper, we use $(1)$ to relate triangles $T$ to the sum $S_t(X)$ and we use $(2)$ to relate these triangles to points on the elliptic curve. Tracking the exact nature of the hypotenuses through these bijections allows us to relate the sum to certain points on elliptic curves. In order to facilitate the tracking of these hypotenuses, we phrase these bijections in slightly different ways than have appeared in the literature. By $(3)$ and $(4)$, we can bound the number and size of the hypotenuses which appear in terms of numbers of points on the elliptic curve up to a certain height. Intuitively this is why the higher the rank of the elliptic curve (corresponding roughly to the existence of many more points on the curve), the worse the error term in our asymptotic.

I would further conjecture that the error term in our asymptotic is essentially best-possible, even though we have thrown away some information in our proof.

We are not the first to note either the bijection between triangles $T$ and arithmetic progressions of squares or between triangles $T$ and points on a particular elliptic curve. The first is surely an ancient observation, but I don’t know who first considered the relation to elliptic curves. But it’s certain that this was a fundamental aspect in Tunnell’s famous work *A Classical Diophantine Problem and Modular Forms of Weight 3/2* from 1983, where he used the properties of the elliptic curve $E_t$ to relate the CNP to the Birch and Swinnerton-Dyer Conjecture.

One statement following from the Birch and Swinnerton-Dyer conjecture (BSD) is that if an elliptic curve $E$ has rank $r$, then the $L$-function $L(s, E)$ has a zero of order $r$ at $1$. The relation between lots of points on the curve and the existence of a zero is intuitive from the approximate relation that

\begin{equation}

L(1, E) \approx \lim_{X \to \infty} \prod_{p \leq X} \frac{p}{\#E(\mathbb{F}_p)},

\end{equation}

so if $E$ has lots and lots of points then we should expect the multiplicands to be very small.

On the other hand, the elliptic curve $E_t: y^2 = x^3 - t^2 x$ has the interesting property that any point with $y \neq 0$ generates a free group of points on the curve. From the bijections alluded to above, a primitive right integer triangle $T$ with $\sqfree(T) = t$ corresponds to a point on $E_t$ with $y \neq 0$, and thus guarantees that there are lots of points on the curve. Tunnell showed that what I described as “lots of points” is actually enough points that $L(1, E)$ must be zero (assuming the relation between the rank of the curve and the value of $L(1, E)$ from BSD).

Tunnell proved that if BSD is true, then $L(1, E_t) = 0$ if and only if $t$ is a congruent number.

Yet for any elliptic curve we know how to compute $L(1, E)$ to guaranteed accuracy (for instance by using Dokchitser’s algorithm). Thus a corollary of Tunnell’s theorem is that BSD implies that there is an algorithm which can be used to determine definitively whether or not any particular integer $t$ is congruent.

This is the state of the art on the congruent number problem. Unfortunately, BSD (or even the somewhat weaker statement relating nonzero rank to the vanishing of $L(1, E)$, which is all that Tunnell’s result for the CNP requires) is quite far from being proven.

In this context, the main result of this paper is not as effective at actually determining whether a number is congruent or not. But it does have the benefit of not relying on any unknown conjecture.

And there are some potential follow-up questions. The sum $S_t(X)$ appears as an integral transform of the multiple Dirichlet series

\begin{equation}

\sum_{m,n} \frac{\tau(m-n)\tau(m)\tau(nt)\tau(m+n)}{m^s n^w}

\approx

\sum_{m,n} \frac{r_1(m-n)r_1(m)r_1(nt)r_1(m+n)}{m^s n^w},

\end{equation}

where $r_1(n)$ is $1$ if $n = 0$ or $2$ if $n$ is a positive square, and $0$ otherwise. Then $r_1(n)$ appears as the Fourier coefficients of the half-integral weight standard theta function

\begin{equation}

\theta(z)

= \sum_{n \in \mathbb{Z}} e^{2 \pi i n^2 z}

= \sum_{n \geq 0} r_1(n) e^{2 \pi i n z},

\end{equation}

and $S_t(X)$ is a shifted convolution sum coming from some products of modular forms related to $\theta(z)$.

It may be possible to gain further understanding of the behavior of $S_t(X)$ (and therefore the congruent number problem) by studying the shifted convolution as coming from theta functions.

I would guess that there is a deep relation to Tunnell’s analysis in his 1983 paper, as in some sense he constructs appropriate products of three theta functions and uses them centrally in his proof. But I do not understand this relationship well enough yet to know whether it is possible to deepen our understanding of the CNP, BSD, or Tunnell’s proof. That is something to explore in the future.

My first experience in “programming” was following a semi-tutorial on how to patch the Starcraft exe in order to make it understand replays from previous versions. I was about 10, and I cobbled together my understanding from internet mailing lists and chatrooms. The documentation was awful and the original description was flawed, and to make it worse, I didn’t know anything about any sort of programming yet. But I trawled these lists and chatroom logs and made it work, and learned a few things. Each time Starcraft was updated, the old replay system broke completely and it was necessary to make some changes, and I got pretty good at figuring out what changes were necessary and how to perform these patches.

On the other hand, my first formal experience in programming was taking a course at Georgia Tech many years later, in which a typical activity would revolve around an exciting topic like concatenating two strings or understanding object polymorphism. These were dry topics presented to us dryly, but I knew that I wanted to understand what was going on and so I suffered the straight-faced-ness of the class and used the course as an opportunity to build some technical depth.

Now I recognize that these two approaches cover most first experiences learning a technical subject: a motivated survey versus monographic study. At the heart of the distinction is a decision either to alight on many topics (without delving deeply into most) or to spend as much time as necessary to completely understand each topic (and hence not touch too many different topics). Each has its place, but each draws a very different crowd.

The book *Cracking Codes with Python: An Introduction to Building and Breaking Ciphers* by Al Sweigart^{1} is very much a motivated flight through various topics in programming and cryptography, and not at all a deep technical study of any individual topic. A more accurate (though admittedly less beckoning) title might be *An Introduction to Programming Concepts Through Building and Breaking Ciphers in Python.* The main goal is to promote programmatical thinking by exploring basic ciphers, and the medium happens to be python.

But ciphers are cool. And breaking them is cool. And if you think you might want to learn something about programming and you might want to learn something about ciphers, then this is a great book to read.

Sweigart has a knack for writing approachable descriptions of what’s going on without delving into too many details. In fact, in some sense Sweigart has already written this book before: his other books *Automate the Boring Stuff with Python* and *Invent your own Computer Games with Python* are similarly survey material using python as the medium, though with different motivating topics.

Each chapter of this book is centered around exploring a different aspect of a cipher, and introduces additional programming topics to do so. For example, one chapter introduces the classic Caesar cipher, as well as the “if”, “else”, and “elif” conditionals (and a few other python functions). Another chapter introduces brute-force breaking the Caesar cipher (as well as string formatting in python).
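In the spirit of those chapters (this sketch is mine, not Sweigart's code), a Caesar cipher and its brute-force break fit in a few lines of python:

```python
ALPHABET = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'

def caesar(text, key):
    # shift each letter by `key` places, leaving other characters alone
    out = []
    for ch in text.upper():
        if ch in ALPHABET:
            out.append(ALPHABET[(ALPHABET.index(ch) + key) % 26])
        else:
            out.append(ch)
    return ''.join(out)

secret = caesar('ATTACK AT DAWN', 13)

# brute force: there are only 26 keys, so try them all and eyeball the output
for key in range(26):
    print(key, caesar(secret, -key))
```

Decrypting with the correct key (here, 13) recovers the plaintext; spotting which line of the brute-force output is English is exactly the exercise the book later automates.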

In each chapter, Sweigart begins by giving a high-level overview of the topics in that chapter, followed by python code which accomplishes the goal of the chapter, followed by a detailed description of what each block of code accomplishes. Readers get to see fully written code that does nontrivial things very quickly, but on the other hand the onus of code generation is entirely on the book and readers may have trouble adapting the concepts to other programming tasks. (But remember, this is more survey, less technical description.) Further, Sweigart uses a number of good practices in his code that implicitly encourage good programming behaviors: modules are written with many well-named functions and well-named variables, and sufficiently modularly that earlier modules are imported and reused later.

But this book is not without faults. As with any survey material, one can always disagree on what topics are or are not included. The book covers five classical ciphers (Caesar, transposition, substitution, Vigenere, and affine) and one modern cipher (textbook-RSA), as well as the write-backwards cipher (to introduce python concepts) and the one-time-pad (presented oddly as a Vigenere cipher whose key is the same length as the message). For some unknown reason, Sweigart chooses to refer to RSA almost everywhere as “the public key cipher”, which I find both misleading (there are other public key ciphers) and giving insufficient attribution (the cipher is implemented in chapter 24, but “RSA” appears only once, as a footnote in that chapter; hopefully the reader was paying attention, as otherwise it would be rather hard to find out more about it).

Further, the choice of python topics (and their order) is sometimes odd. In truth, this book is almost language agnostic, and one could very easily adapt the presentation to almost any other language.

In summary, this book is an excellent resource for the complete beginner who wants to learn something about programming and wants to learn something about ciphers. After reading this book, the reader will be a mid-beginner student of python (knee-deep is apt) and well-versed in classical ciphers. Should the reader feel inspired to learn more python, then he or she would probably feel comfortable diving into a tutorial or reference for their area of interest (like Full Stack Python if interested in web dev, or Python for Data Analysis if interested in data science). Or he or she might dive into a more complete monograph like Dive into Python or the monolithic Learn Python. Many fundamental topics (such as classes and objects, list comprehensions, data structures or algorithms) are not covered, and so “advanced” python resources would not be appropriate.

Further, should the reader feel inspired to learn more about cryptography, then I recommend that he or she consider *Cryptanalysis* by Gaines, which is a fun book aimed at diving deeper into classical pre-computer ciphers; a slightly heavier but still fun resource would be *The Codebreakers* by Kahn. For much further cryptography, it’s necessary to develop a bit of mathematical maturity, which is its own hurdle.

This book is not appropriate for everyone. An experienced python programmer could read this book in an hour, skipping the various descriptions of how python works along the way. An experienced programmer who doesn’t know python could similarly read this book in a lazy afternoon. Both would probably do better reading either a more advanced overview of either cryptography or python, based on what originally drew them to the book.

$$\begin{equation}

X^2 + Y^2 = Z^2 + h

\end{equation}$$

for any fixed integer $h$. But thematically, I wanted to give another concrete example of using modular forms to compute some sort of arithmetic data, and to mention how the perhaps apparently unrelated topic of spectral theory appears even in such an arithmetic application.

Somehow, starting from counting points on $X^2 + Y^2 = Z^2 + h$ (which appears simple enough on its own that I could probably put this in front of an elementary number theory class and they would feel comfortable experimenting away on the topic), one gets to very scary-looking expressions like

$$\begin{equation}

\sum_{t_j}

\langle P_h^k, \mu_j \rangle

\langle \theta^2 \overline{\theta} y^{3/4}, \mu_j \rangle +

\sum_{\mathfrak{a}}\int_{(1/2)}

\langle P_h^k, E_h^k(\cdot, u) \rangle

\langle \theta^2 \overline{\theta} y^{3/4}, E_h^k(\cdot, u) \rangle du,

\end{equation}$$

which is full of lots of non-obvious symbols and is generically intimidating.

Part of the theme of this talk is to give a very direct idea of how one gets to the very complicated spectral expansion from the original lattice-counting problem. Stated differently, perhaps part of the theme is to describe a simple-looking nail and a scary-looking hammer, and show that the hammer actually works quite well in this case.

The slides for this talk are available here.

In short, it was not too hard, and now the app is set up for use. (It’s not a public tool, so I won’t link to it).

But there were a few things that I had to figure out which I would quickly forget. Following the variety of information I found online, the only nontrivial aspect was configuring the site to run on a non-root domain (like `davidlowryduda.com/subdomain` instead of at `davidlowryduda.com`). I’m writing this so as to not need to figure this out again when I write and host more flask apps. (Which I’ll almost certainly do, as it’s so straightforward).

There are some uninteresting things one must do on WebFaction.

- Log into your account.
- Add a new application of type `mod_wsgi` (and the desired version of python, which is hopefully 3.6+).
- Add this application to the desired website and subdomain in the WebFaction control panel.

After this, WebFaction will set up a skeleton “Hello World” mod_wsgi application with many reasonable server setting defaults. The remainder of the setup is done on the server itself.

In `~/webapps/application_name` there will now appear

```
apache2/ # Apache config files and bin
htdocs/ # Default location where Apache looks for the app
```

We won’t change that structure. In htdocs^{1} there is a file `index.py`, which is where apache expects to find a python wsgi application called `application`. We will place the flask app along this structure and point to it in `htdocs/index.py`.

Usually I will use a virtualenv here. So in `~/webapps/application_name`, I will run something like `virtualenv flask_app_venv` and then activate it (out of habit I frequently just source the `flask_app_venv/bin/activate` file). Then pip install flask and whatever other python modules are necessary for the application to run. We will configure the server to use this virtual environment to run the app in a moment.

Copy the flask app, so that the resulting structure looks something like

```
~/webapps/application_name:
- apache2/
- htdocs/
- flask_app_venv/
- flask_app/ # My flask app
- config.py
- libs/
- main/
- static/
- templates/
- __init__.py
- views.py
- models.py
```

I find it conceptually easiest to have `flask_app/main/__init__.py` directly contain the flask `app`, so that I can reference it by name in `htdocs/index.py`. It can be made elsewhere (for instance, perhaps in a file like `flask_app/main/app.py`, which appears to be a common structure), but I assume that it is at least imported in `__init__.py`.

For example, `__init__.py` might look something like

```
# application_name/flask_app/main/__init__.py
# ... other import statements from project if necessary
from flask import Flask
app = Flask(__name__)
app.config.from_object('config')
# Importing the views for the rest of our site
# We do this here to avoid circular imports
# Note that I call it "main" where many call it "app"
from main import views
if __name__ == '__main__':
app.run()
```

The Flask constructor returns exactly the sort of wsgi application that apache expects. With this structure, we can edit the `htdocs/index.py` file to look like

```
# application_name/htdocs/index.py
import sys
# append flask project files
sys.path.append('/home/username/webapps/application_name/my_flask_app/')
# launching our app
from main import app as application
```

Now the server knows the correct wsgi_application to serve.

We must configure it to use our python virtual environment (and we’ll add a few additional convenience pieces). We edit `/apache2/conf/httpd.conf` as follows. Near the top of the file, certain modules are loaded. Add in the alias module, so that the modules look something like

```
#... other modules
LoadModule wsgi_module modules/mod_wsgi.so
LoadModule alias_module modules/mod_alias.so # <-- added
```

This allows us to alias the root of the site. Since all site functionality is routed through `htdocs/index.py`, we want to think of the root `/` as beginning with `/htdocs/index.py`. At the end of the file, add

```
Alias / /home/username/webapps/application_name/htdocs/index.py/
```

We now set the virtual environment to be used properly. There will be a set of lines containing names like `WSGIDaemonProcess` and `WSGIProcessGroup`. We edit these to refer to the correct python. WebFaction will have configured `WSGIDaemonProcess` to point to a local version of python by setting the python-path. Remove that, making that line look like

```
WSGIDaemonProcess application_name processes=2 threads=12
```

(or similar). We set the python path below, adding the line

```
WSGIPythonHome /home/username/webapps/application_name/flask_app_venv
```

I believe that this could also be done by setting python-path in `WSGIDaemonProcess`, but I find this more aesthetically pleasing.

We must also modify the `<Directory>` section. Edit it to look like

```
<Directory /home/username/webapps/application_name/htdocs>
AddHandler wsgi-script .py
RewriteEngine On # <-- added
RewriteBase / # <-- added
WSGIScriptReloading On # <-- added
</Directory>
```

It may very well be that I don't use the RewriteEngine at all, but if I do then this is where it's done. Script reloading is a nice convenience, especially while reloading and changing the app.

I note that it may be convenient to add an additional alias for static file hosting,

```
Alias /static/ /home/your_username/webapps/application_name/app/main/static/
```

though I have not used this so far. (I get the same functionality through controlling the flask views appropriately).

The rest of this file has been setup by WebFaction for us upon creating the wsgi application.

If the application is to be run on a non-root domain, such as `davidlowryduda.com/subdomain`, then there is currently a problem. In flask, when using url getters like `url_for`, urls will be returned as though there is no subdomain, and thus all urls will be incorrect. It is necessary to alter the provided urls in some way.

The way that worked for me was to insert a tiny bit of middleware in the wsgi_application. Alter `htdocs/index.py` to read

```
#application_name/htdocs/index.py
import sys
# append flask project files
sys.path.append('/home/username/webapps/application_name/my_flask_app/')
# subdomain url rerouting middleware
from webfaction_middleware import Middleware
from main import app
# set app through middleware
application = Middleware(app)
```

Now of course we need to write this middleware.

In `application_name/flask_app`, I create a file called `webfaction_middleware.py`, which reads

```
# application_name/flask_app/webfaction_middleware.py
class Middleware(object): # python2 aware
def __init__(self, app):
self.app = app
def __call__(self, environ, start_response):
app_url = '/subdomain'
if app_url != '/':
environ['SCRIPT_NAME'] = app_url
return self.app(environ, start_response)
```

I now have a template file in which I keep `app_url = '/'` so that I can forget this and not worry, but that is where the subdomain url is prepended. *Note that the leading slash is necessary.* When I first tried using this, I omitted the leading slash. The application worked sometimes, and horribly failed in some other places. Some urls were correctly constructed, but most were not. I didn't try to figure out which ones were doomed to fail, but it took me an embarrassingly long time to realize that prepending a slash solved all problems.

The magic names `environ` and `start_response` appear because the flask app is a wsgi_application, and this is the api of wsgi_applications generically.
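One can check the middleware's effect without a server at all, by calling it with a stub wsgi app. (The class is repeated here so the snippet is self-contained; `/subdomain` is the example prefix from above.)

```python
class Middleware(object):  # same as in webfaction_middleware.py above
    def __init__(self, app):
        self.app = app
    def __call__(self, environ, start_response):
        app_url = '/subdomain'
        if app_url != '/':
            environ['SCRIPT_NAME'] = app_url
        return self.app(environ, start_response)

seen = {}

def stub_app(environ, start_response):
    # a minimal wsgi app that records the environ it receives
    seen.update(environ)
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello']

application = Middleware(stub_app)
application({'SCRIPT_NAME': '', 'PATH_INFO': '/page'}, lambda status, headers: None)
print(seen['SCRIPT_NAME'])  # '/subdomain' -- flask's url_for will now prepend it
```

Since flask builds urls from `SCRIPT_NAME`, this is exactly the hook needed.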

Restart the apache server (`/apache2/bin/restart`) and go. Note that when incrementally making changes above, some changes can take a few minutes to fully propagate. It's only doing it the first time which takes some thought.

Gerrymandering has become a recurring topic in the news. The Supreme Court of the US, as well as more state courts and supreme courts, is hearing multiple cases on partisan gerrymandering (all beginning with a case in Wisconsin).

Intuitively, it is clear that gerrymandering is bad. It allows politicians to choose their voters, instead of the other way around. And it allows the majority party to quash minority voices.

But how can one identify a gerrymandered map? To quote Justice Kennedy in his concurrence in the 2004 Supreme Court case Vieth v. Jubelirer:

When presented with a claim of injury from partisan gerrymandering, courts confront two obstacles. First is the lack of comprehensive and neutral principles for drawing electoral boundaries. No substantive definition of fairness in districting seems to command general assent. Second is the absence of rules to limit and confine judicial intervention. With uncertain limits, intervening courts–even when proceeding with best intentions–would risk assuming political, not legal, responsibility for a process that often produces ill will and distrust.

Later, he adds to the first obstacle, saying:

The object of districting is to establish “fair and effective representation for all citizens.” Reynolds v. Sims, 377 U.S. 533, 565—568 (1964). At first it might seem that courts could determine, by the exercise of their own judgment, whether political classifications are related to this object or instead burden representational rights. The lack, however, of any agreed upon model of fair and effective representation makes this analysis difficult to pursue.

From Justice Kennedy’s Concurrence emerges a theme — a “workable standard” of identifying gerrymandering would open up the possibility of limiting partisan gerrymandering through the courts. Indeed, at the core of the Wisconsin gerrymandering case is a proposed “workable standard”, based around the **efficiency gap.**
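The efficiency gap itself is simple to compute. Here is a hedged sketch (my own code, following the standard Stephanopoulos-McGhee definition: a vote is "wasted" if it was cast for a losing candidate, or if it exceeded the half a winner needed):

```python
def efficiency_gap(districts):
    # districts: list of (votes_for_A, votes_for_B) pairs, one per district
    wasted_a = wasted_b = total = 0.0
    for a, b in districts:
        total += a + b
        needed = (a + b) / 2  # votes needed to carry the district
        if a > b:
            wasted_a += a - needed  # A's surplus votes are wasted
            wasted_b += b           # all of B's losing votes are wasted
        else:
            wasted_b += b - needed
            wasted_a += a
    # positive values mean the map wastes more of A's votes than B's
    return (wasted_a - wasted_b) / total

print(efficiency_gap([(75, 25), (60, 40), (43, 57)]))  # 0.02
```

In the example, party A wins two districts but "packs" many of its voters into the first, producing a small positive gap against A.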

In 1971, American economist Thomas Schelling (who later won the Nobel Prize in Economics in 2005) published *Dynamic Models of Segregation* (Journal of Mathematical Sociology, 1971, Vol 1, pp 143–186). He sought to understand why racial segregation in the United States seems so difficult to combat.

He introduced a simple model of segregation suggesting that even if each individual person doesn’t mind living with others of a different race, they might still *choose* to segregate themselves through mild preferences. As each individual makes these choices, overall segregation increases.

I write this post because I wondered what happens if we adapt Schelling’s model to instead model a state and its district voting map. In place of racial segregation, I consider political segregation. Supposing the district voting map does not change, I wondered how the efficiency gap will change over time as people further segregate themselves.

It seemed intuitive to me that political segregation (where people who had the same political beliefs stayed largely together and separated from those with different political beliefs) might correspond to more egregious cases of gerrymandering. But to my surprise, I was (mostly) wrong.

Let’s set up and see the model.

Let us first set up Schelling’s model of segregation. Let us model a state as a grid, where each square on that grid represents a house. Initially ten percent of the houses are empty (White), and the remaining houses are randomly assigned to be either Red or Blue.

We suppose that each person wants a certain percentage ($p$) of their neighbors to be like themselves. By “neighbor”, we mean those in the adjacent squares. We will initially suppose that $p = 0.33$, which is a pretty mild condition: each person doesn’t mind 66 percent of their neighbors being different, so long as there are a couple of similar people nearby.

At each step (which I’ll refer to as a year), if a person is unhappy (i.e. fewer than $p$ percent of their neighbors are like that person) then they leave their house and move randomly to another empty house. Notice that after moving, they may or may not be happy, and they will cause other people to perhaps become happy or become unhappy.
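In code, the unhappiness rule looks like the following (this is my sketch of the rule above, with `None` marking an empty house and `'R'`/`'B'` the two groups):

```python
def is_unhappy(grid, i, j, p=0.33):
    # a resident is unhappy when fewer than a fraction p of their
    # (occupied) neighbors are the same as them
    me = grid[i][j]
    if me is None:
        return False  # empty houses have no preferences
    same = total = 0
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            ni, nj = i + di, j + dj
            if 0 <= ni < len(grid) and 0 <= nj < len(grid[0]):
                if grid[ni][nj] is not None:
                    total += 1
                    if grid[ni][nj] == me:
                        same += 1
    return total > 0 and same / total < p

grid = [['R', 'B', 'B'],
        ['B', 'B', 'B'],
        ['B', 'B', 'B']]
print(is_unhappy(grid, 0, 0))  # True: the lone 'R' has no similar neighbors
print(is_unhappy(grid, 1, 1))  # False: 7 of its 8 neighbors match
```

A simulation step then moves each unhappy resident to a randomly chosen empty house.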

We also introduce a measure of segregation. We call the segregation the value of

$$\text{segregation} = \frac{(\sum \text{number of same neighbors})}{(\sum \text{number of neighbors})},$$

summed across the houses in the grid, and where empty spaces aren’t the same as anything, and don’t count as a neighbor. Thus high segregation means that more people are surrounded only by people like them, and low segregation means people are very mixed. (Note also that 50 percent segregation is considered “very unsegregated”, as it means half your neighbors are the same and half are different).
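The measure described above is easy to compute directly. Here is a minimal sketch (again my own code, with `None` for empty houses and the 8 adjacent squares as neighbors):

```python
def segregation(grid):
    # fraction of (resident, neighbor) adjacencies that match in color;
    # empty houses neither match anything nor count as neighbors
    same = total = 0
    rows, cols = len(grid), len(grid[0])
    for i in range(rows):
        for j in range(cols):
            if grid[i][j] is None:
                continue
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    if di == 0 and dj == 0:
                        continue
                    ni, nj = i + di, j + dj
                    if 0 <= ni < rows and 0 <= nj < cols and grid[ni][nj] is not None:
                        total += 1
                        if grid[ni][nj] == grid[i][j]:
                            same += 1
    return same / total

# two fully separated blocks of color: fairly segregated
grid = [['R', 'R', 'B', 'B'],
        ['R', 'R', 'B', 'B']]
print(segregation(grid))  # 0.75
```

In the example, the only mixed adjacencies are along the central boundary, which is why the measure sits well above the "very unsegregated" value of 0.5.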

To get to a specific example, here is an instance of this model for $400$ spaces in a $20$ by $20$ grid.

Initially, there is a lot of randomness, and segregation is a low $0.50$. After one year, the state looks like:

After another year, it changes a bit more:

From year to year, the changes are small. But already significant segregation is occurring. The segregation measure is now at $0.63$. After another 10 years, we get the following picture:

That map appears extremely segregated, and now the segregation measure is $0.75$. Further, it didn’t even take very long!

Let’s look at a larger model. Here is a 200 by 200 grid. And since we’re working larger, suppose each square “neighbors” the nearest 24, so that a neighborhood around ‘o’ looks like

```
xxxxx
xxxxx
xxoxx
xxxxx
xxxxx
```
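In the code, this neighborhood is implemented as a convolution kernel: a square of ones with a zero at the center. A quick sketch of building the $5 \times 5$ version:

```python
import numpy as np

size = 5
kernel = np.ones((size, size), dtype=int)
kernel[size // 2, size // 2] = 0  # the 'o' square is not its own neighbor
print(int(kernel.sum()))          # 24 neighbors
```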

Then we get:

Initially this looks quite a bit like red, white, and blue static. Going forward a couple of years, we get

Let us now fast forward ten years.

And after another ten years…

As an example of the sorts of things that can happen, we present several 15 second animations of these systems with a variety of initial parameters. In each animation, 30 years pass — one each half second.

We first consider a set of cases where we hold everything fixed except for how large people consider their “neighborhood.” In the following, neighborhood sizes range from your nearest 8 neighbors (those which are 1 step away, counting diagonal steps) to those which are up to 5 steps away. The behavior is a bit different in each one. Note that in the really-large neighborhood case, there are only a few people moving at all.

We next consider the cases where people want neighborhoods that are at least 40 percent like themselves. That is, people want to cluster a bit more. The animations again span the same range of neighborhood sizes.

These are stunning, and sort of beautiful. I am reminded of slime mold growth.

We now up percentage again to $p = 0.5$. That is, people only feel comfortable in neighborhoods with at least 50 percent of occupants similar to themselves. This is actually the parameter that Schelling introduced in his paper and experiments.

The higher comfort factor leads to much quicker convergence to extreme segregation. This is intuitive — high individual segregation concerns leads to quick societal segregation.

We now increase the population to a million people. In the first animation, there is large segregation at the end, but it manifests as a sort of network of red and blue wispy fingers rather than big blobs. Partly this is because there is simply more room. But it’s also a manifestation of the fact that with small neighborhood sizes, one gets mostly local effects.

The second and third animations are pretty astounding to me. In these, people want 40 percent similarity with their neighbors, and their “neighborhoods” consist of everyone within four steps (including diagonal steps) of them. In the second, 55% of the population is Red and 45% is Blue. In the last, a much larger majority is Red, and there are so few Blue people that they all rapidly move around trying to find some base where they feel comfortable. But they never find one.

We should take a moment to say what it is that we are actually trying to measure. Is this supposed to be a perfect model of actual behavior? No.

This note has been examining how some individual incentives, decisions, and perceptions of difference can lead collectively towards greater segregation. Although I have phrased this in terms of political party identification, this analysis is so abstract that it could be applied to any singular distinction.

We should also note that several causes of segregation are omitted from consideration. One is organized action (be it legal or illegal, in good faith or in bad). Another is the set of economic causes behind many separations, such as how the poor are separated from the rich, the unskilled from the skilled, the less educated from the more educated. These lead to separations in job, pastime, residence, and so on. And as political party affiliation correlates strongly with income, and income correlates strongly with where one lives, this is a major factor to omit.

I do not claim that these other sources of discrimination and segregation are less important, but only that I do not know how to model them. And instead I follow Schelling’s line of thought, whereby one looks to see to what extent we might expect individual action to lead to collective outcomes.

Given a Schelling model, we now adapt it to incorporate voting districts. Let us suppose that our square is divided up into (regular rectangular) regions of voters. We will assume a totally polarized voter base, so that Red people will always vote for the Red party and Blue people will always vote for the Blue party. (This is a pretty strong assumption).

Before we describe exactly how we set up the model, let’s look at an example. Given a typical Schelling model, we separate it into (in this case, 10) districts.

Each of the 10 areas vote, giving some tallies. In this case, we have the following table which describes the results of this year’s vote. Districts are numbered from top left to bottom right, sequentially.

District | Blue Vote | Red Vote | Winner | Blue Wasted | Red Wasted | Net Wasted |
---|---|---|---|---|---|---|
0 | 18 | 14 | blue | 3 | 14 | -11 |
1 | 20 | 16 | blue | 3 | 16 | -13 |
2 | 19 | 15 | blue | 3 | 15 | -12 |
3 | 12 | 25 | red | 12 | 12 | 0 |
4 | 18 | 20 | red | 18 | 1 | 17 |
5 | 19 | 15 | blue | 3 | 15 | -12 |
6 | 21 | 13 | blue | 7 | 13 | -6 |
7 | 23 | 15 | blue | 7 | 15 | -8 |
8 | 22 | 15 | blue | 6 | 15 | -9 |
9 | 14 | 23 | red | 14 | 8 | 6 |

“Blue Wasted” refers to a wasted blue vote (similarly for Red). Wasted votes are the key quantity counted by the efficiency gap, the overall measure of gerrymandering used below.

A wasted vote is one that doesn’t contribute to winning an additional election. A vote can be wasted in two different ways. All votes for a losing candidate are wasted, since they didn’t contribute to a win. On the other hand, excess voting for a single candidate is also wasted.

So in District 0, the Blue candidate won and so all 14 Red votes are wasted. The Blue candidate only needed 15 votes to win, but received 18. So there are three excess Blue votes, which means that there are 3 Blue votes wasted.

I adopt the convention (for ease of summing up) that the net wasted votes is the number of Blue wasted votes minus the number of Red wasted votes. So if it is positive, more Blue votes were wasted than Red; and if it is negative, more Red votes were wasted than Blue.

With this example in mind, a rough definition of gerrymandering in a competitive district is to draw lines so that one party has many more wasted votes. In this example, there are 186 Blue voters and 171 Red voters, so it might be expected that approximately half of the winners would be Red and half would be Blue. But in fact there are 7 Blue winners and only 3 Red winners.

And a big reason why is that the overall net wasted number of votes is $-48$, which means that $48$ more Red votes than Blue votes did not contribute to a winning election.

So roughly, more wasted votes corresponds to more gerrymandering. The efficiency gap is defined to be

$$ \text{Efficiency Gap} = \frac{\text{Net Wasted Votes}}{\text{Number of Voters}}.$$

In this case, there are $48$ wasted votes and $357$ voters, so the efficiency gap is $48/357 = 0.134$. This number, 13.4 percent, is very high. The proposed gap to raise flags in gerrymandering cases is 7 percent — any higher, and one should consider redrawing district lines.
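The table’s bookkeeping can be checked directly. Here is a short sketch (using the convention from the example, in which the winner’s wasted votes are the excess beyond one more than the loser) that recomputes the net wasted votes and the efficiency gap:

```python
# (blue, red) vote totals for the ten districts in the table above.
districts = [(18, 14), (20, 16), (19, 15), (12, 25), (18, 20),
             (19, 15), (21, 13), (23, 15), (22, 15), (14, 23)]

def wasted(blue, red):
    """Return (blue_wasted, red_wasted): all losing votes are wasted,
    as is the winner's excess beyond one more than the loser."""
    if blue > red:
        return blue - red - 1, red
    return blue, red - blue - 1

net = sum(bw - rw for bw, rw in (wasted(b, r) for b, r in districts))
voters = sum(b + r for b, r in districts)
print(net, voters, round(abs(net) / voters, 3))  # -48 357 0.134
```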

The efficiency gap is extremely easy to compute, which is a definite plus. But whether it is a good indicator of gerrymandering is more complicated, and is one of the considerations in the Supreme Court case concerning gerrymandering in Wisconsin.

With this example in mind, we are now prepared to describe the model explicitly. The initial setup is the same as in Schelling’s model. A state is a rectangular grid, where each square on this grid represents a house. Unoccupied houses are White. If a Red person occupies a house, then the house is colored Red. If a Blue person occupies a house, then that house is colored Blue. Each year, all the Red people vote for the Red candidate, and all Blue people vote for the Blue candidate, and we can tally the results. We will then measure the efficiency gap.

At the same time, each year people may move as in Schelling’s model. A person is satisfied if at least $p$ percent of their neighbors are similar to them. We will again default to $33$ percent, and a person’s neighbors will be all those people who are adjacent (or, for larger models, perhaps 2-step adjacent, or something of that flavor).

At each step, we can measure the segregation (which we know will increase from Schelling’s model) and the efficiency gap.

At last, we are prepared to investigate the relationship between segregation and the efficiency gap.

In our first simulation, there is an initial segregation of 51% and an initial efficiency gap of 6%. This is pictured below.

As we can see, an increase in segregation corresponds to an increase in the efficiency gap. ^{1}

Let us now consider a second simulation. There are no parameters changed between this and the above simulation, aside from the chance placement of people.

An increase in segregation actually occurs with a decrease in efficiency gap. Further, if we stepped through year to year, we would see that as the state became more segregated, it also lowered its efficiency gap.

At least naively we should no longer expect increased segregation to correspond to an increase in the efficiency gap.

Let’s try a larger simulation. This one is 200 x 200, with 25 districts.

Again, segregation correlates negatively with the efficiency gap.

What if 55 percent of the population is blue? Does this imbalance lead to interesting simulations? We present two such simulations below.

In each of these simulations, there was an initially large efficiency gap. This is fundamentally caused by the relatively equidistributed Red minority, which essentially loses everywhere. We might say that the Red group begins in a *cracked* state. After 20 years, the efficiency gap falls, since segregation has the interesting side effect of relieving the Red people from their diffused state.

In fact, I ran a very large number of simulations with a variety of parameters, and generically, increased segregation tends to correspond to a decrease in the efficiency gap.

More segregation leads to a smaller efficiency gap. Why might this be?

I think one of the major reasons is evident in the last pair of simulations I presented above. Uniform segregation reduces the “cracking” gerrymandering technique. In *cracking*, one tries to divide a larger group into many smaller minorities by splitting them into many districts. This maximizes the number of wasted votes coming from lost elections (as opposed to wasted votes from *packing* lots of people into one district so that they over-win an election). Segregation produces clusters, and these clusters tend to win their local district’s election.
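As an illustration of this effect (with made-up numbers, not taken from the simulations), consider a toy state of 24 Blue and 16 Red voters in four districts. Cracking the Red minority wastes nearly all of its votes, while a clustered map in which Red carries one district wipes out the gap:

```python
def wasted(blue, red):
    """(blue_wasted, red_wasted) under the convention that the winner
    wastes its excess beyond one more vote than the loser."""
    if blue > red:
        return blue - red - 1, red
    return blue, red - blue - 1

def efficiency_gap(districts):
    """Net (blue minus red) wasted votes over all voters."""
    net = sum(bw - rw for bw, rw in (wasted(b, r) for b, r in districts))
    return net / sum(b + r for b, r in districts)

# Same statewide totals (24 Blue, 16 Red) under two hypothetical maps.
cracked = [(6, 4)] * 4                        # Red split evenly, wins nothing
clustered = [(2, 8), (8, 2), (7, 3), (7, 3)]  # Red concentrated, wins one seat

print(efficiency_gap(cracked))    # -0.3: far more Red votes wasted
print(efficiency_gap(clustered))  #  0.0: clustering erases the gap
```

The statewide vote totals are identical in the two maps; only where the Red voters sit changes, and that alone moves the gap from 30 percent to zero.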

The few examples above where high segregation accompanied high efficiency gaps were when the segregated clusters happened to be split by district lines.^{2}

I read many pieces from others while preparing this post. Though I don’t cite any of them explicitly, these works were essential for my preparation.

- A formula goes to court: partisan gerrymandering and the efficiency gap. By Mira Bernstein and Moon Duchin. Available on the arXiv.
- An impossibility theorem for gerrymandering. By Boris Alexeev and Dustin Mixon. Available on the arXiv.
- Flaws in the efficiency gap. By Christopher Chambers, Alan Miller, and Joel Sobel. Available on Christopher Chambers’ site.
- How the New Math of Gerrymandering Works, in the New York Times. By Nate Cohn and Quoctrung Bui. Available at the nytimes.
- The flaw in America’s ‘holy grail’ against gerrymandering, in the Atlantic. By Sam Kean. Available at the atlantic.
- Dynamic Models of Segregation, by Thomas Schelling. In Journal of Mathematical Sociology, 1971, Vol 1, pp. 143–186.

Below, I include the code I used to generate these simulations and images. This code, as well as much of the code I used to generate the particular data above, is available as a jupyter notebook on my github. (But I would mention that, unlike some previous notebooks I’ve made available, this was really a working notebook and isn’t a final product in itself).

The heart of this code is based on code from Allen Downey, presented in his book “Think Complexity.” He generously released his code under the MIT license.

```
"""
Copyright (c) 2018 David Lowry-Duda
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
"""
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import animation
from matplotlib.colors import LinearSegmentedColormap
from scipy.signal import correlate2d
class Schelling:
    """A 2D grid of Schelling agents."""

    options = dict(mode='same', boundary='wrap')

    def __init__(self, n, m=None, p=0.5, empty_prob=0.1, red_prob=0.45, size=3):
        """
        Initialize grid with attributes.

        Args:
            n: (int) number of rows.
            m: (int) number of columns. If None, defaults to n.
            p: (float) ratio of neighbors that makes one feel comfortable.
            empty_prob: (float) probability an initial cell is empty.
            red_prob: (float) probability an initial cell is Red.
            size: (odd int) size of the neighborhood kernel.
        """
        self.p = p
        m = n if m is None else m
        EMPTY, RED, BLUE = 0, 1, 2
        choices = [EMPTY, RED, BLUE]
        probs = [empty_prob, red_prob, 1 - empty_prob - red_prob]
        self.array = np.random.choice(choices, (n, m), p=probs).astype(np.int8)
        self.kernel = self._make_kernel(size)

    def _make_kernel(self, size):
        """
        Construct size*size adjacency kernel.

        Args:
            size: (int) for size of kernel.

        Returns:
            np.array such as (for size=3)
                [[1,1,1],
                 [1,0,1],
                 [1,1,1]]
            In the size=n case, it's an n*n array of ones with a zero at
            the center.
        """
        pad = int((size**2 - 1)/2)
        return np.array([1]*pad + [0] + [1]*pad).reshape((size, size))

    def count_neighbors(self):
        """
        Surveys neighbors of cells.

        Returns:
            This returns the tuple (occupied, frac_red, frac_same)
            where
                occupied: logical array indicating occupied cells.
                frac_red: array containing fraction of red neighbors around each cell.
                frac_same: array containing the fraction of similar neighbors.

        Note:
            Unoccupied cells do not count in neighbors or similarity.
        """
        a = self.array
        EMPTY, RED, BLUE = 0, 1, 2
        # These create np.arrays where each entry is True if condition is true
        red = a == RED
        blue = a == BLUE
        occupied = a != EMPTY
        # count red neighbors and all neighbors
        num_red = correlate2d(red, self.kernel, **self.options)
        num_neighbors = correlate2d(occupied, self.kernel, **self.options)
        # compute fraction of similar neighbors
        frac_red = num_red / num_neighbors
        frac_blue = 1 - frac_red
        frac_same = np.where(red, frac_red, frac_blue)
        # no neighbors is considered the same as no similar neighbors
        frac_same[num_neighbors == 0] = 0
        frac_red[num_neighbors == 0] = 0
        # Unoccupied squares are not similar to anything
        frac_same[occupied == 0] = 0
        return occupied, frac_red, frac_same

    def segregation(self):
        """Computes the average fraction of similar neighbors."""
        occupied, _, frac_same = self.count_neighbors()
        return np.sum(frac_same) / np.sum(occupied)

    def step(self):
        """Executes one time step."""
        a = self.array
        # find the unhappy cells
        occupied, _, frac_same = self.count_neighbors()
        unhappy_locs = locs_where(occupied & (frac_same < self.p))
        # find the empty cells
        empty = a == 0
        num_empty = np.sum(empty)
        empty_locs = locs_where(empty)
        # shuffle the unhappy cells
        if len(unhappy_locs):
            np.random.shuffle(unhappy_locs)
        # for each unhappy cell, choose a random destination
        for source in unhappy_locs:
            i = np.random.randint(len(empty_locs))
            dest = tuple(empty_locs[i])
            # move
            a[dest] = a[tuple(source)]
            a[tuple(source)] = 0
            empty_locs[i] = source
        num_empty2 = np.sum(a == 0)
        assert num_empty == num_empty2
        return


def locs_where(condition):
    """
    Find cells where a logical array is True.

    Args:
        condition: (2D numpy logical array).

    Returns:
        Array with one set of coordinates per row indicating where
        condition was true.

    Example:
        Input is (as np.array)
            [[1,0],
             [1,1]]
        Then the output will be
            [[0,0],[1,0],[1,1]]
        which are the three locations of the nonzero (True) cells.
    """
    return np.transpose(np.nonzero(condition))


def make_cmap(color_dict, vmax=None, name='mycmap'):
    """
    Makes a custom color map.

    Args:
        color_dict: (dict) of form {number:color}.
        vmax: (float) high end of the range. If None, use max value
            from color_dict.
        name: (str) name for map.

    Returns:
        pyplot color map.
    """
    if vmax is None:
        vmax = max(color_dict.keys())
    colors = [(value/vmax, color) for value, color in color_dict.items()]
    cmap = LinearSegmentedColormap.from_list(name, colors)
    return cmap


class SchellingViewer:
    """Generates animated view of Schelling array"""

    # colors from http://colorbrewer2.org/#type=diverging&scheme=RdYlBu&n=5
    colors = ['#fdae61','#abd9e9','#d7191c','#ffffbf','#2c7bb6']
    cmap = make_cmap({0:'white', 1:colors[2], 2:colors[4]})
    options = dict(interpolation='none', alpha=0.8)

    def __init__(self, viewee):
        """
        Initialize.

        Args:
            viewee: (Schelling) object to view
        """
        self.viewee = viewee
        self.im = None
        self.hlines = None
        self.vlines = None

    def step(self, iters=1):
        """Advances the viewee the given number of steps."""
        for i in range(iters):
            self.viewee.step()

    def draw(self, grid=False):
        """
        Draws the array, perhaps with a grid.

        Args:
            grid: (boolean) if True, draw grid lines. If False, don't.
        """
        self.draw_array(self.viewee.array)
        if grid:
            self.draw_grid()

    def draw_array(self, array=None, cmap=None, **kwds):
        """
        Draws the cells.

        Args:
            array: (2D np.array) Array to draw. If None, uses self.viewee.array.
            cmap: colormap to color array.
            **kwds: keywords are passed to plt.imshow as options.
        """
        # Note: we have to make a copy because some implementations
        # of step perform updates in place.
        if array is None:
            array = self.viewee.array
        a = array.copy()
        cmap = self.cmap if cmap is None else cmap
        n, m = a.shape
        plt.axis([0, m, 0, n])
        # Remove tickmarks
        plt.xticks([])
        plt.yticks([])
        options = self.options.copy()
        options['extent'] = [0, m, 0, n]
        options.update(kwds)
        self.im = plt.imshow(a, cmap, **options)

    def draw_grid(self):
        a = self.viewee.array
        n, m = a.shape
        lw = 2 if m < 10 else 1
        options = dict(color='white', linewidth=lw)
        rows = np.arange(1, n)
        self.hlines = plt.hlines(rows, 0, m, **options)
        cols = np.arange(1, m)
        self.vlines = plt.vlines(cols, 0, n, **options)

    def animate(self, frames=20, interval=200, grid=False):
        """
        Creates an animation.

        Args:
            frames: (int) number of frames to draw.
            interval: (int) time between frames in ms.
            grid: (boolean) if True, include grid in drawings.
        """
        fig = plt.figure()
        self.draw(grid=grid)
        anim = animation.FuncAnimation(fig, self.animate_func,
                                       init_func=self.init_func,
                                       frames=frames, interval=interval)
        return anim

    def init_func(self):
        """Called at the beginning of an animation."""
        pass

    def animate_func(self, i):
        """Draws one frame of the animation."""
        if i > 0:
            self.step()
        a = self.viewee.array
        self.im.set_array(a)
        return (self.im,)
```

Then a typical Schelling object can be viewed through a call like

```
grid = Schelling(n=6)
viewer = SchellingViewer(grid)
viewer.draw(grid=True)
```

And here is the code for the District analysis, which sits on top of the Schelling class from above.

```
class Districts(Schelling):
    """A 2D grid of Schelling agents organized into districts."""

    def __init__(self, n, m=None, p=0.5, rows=2,
                 cols=2, empty_prob=0.1, red_prob=0.45, size=3):
        """
        Initialize grid.

        Args:
            n: (int) number of rows in grid.
            m: (int) number of columns in grid. If None, defaults to n.
            p: (float) ratio of neighbors required to feel comfortable.
            rows: (int) number of rows of districts.
            cols: (int) number of columns of districts.
            empty_prob: (float) probability each initial cell is empty.
            red_prob: (float) probability each initial cell is Red.
            size: (odd int) size of neighborhood kernel.

        Note:
            `rows` must divide n, and `cols` must divide m.
            An exception is raised otherwise.
        """
        self.p = p
        self.n = n
        self.m = n if m is None else m
        self.rows = rows
        self.cols = cols
        self.schelling_grid = Schelling(n, m=self.m, p=p,
                                        empty_prob=empty_prob,
                                        red_prob=red_prob, size=size)
        self.array = self.schelling_grid.array
        self.kernel = self.schelling_grid.kernel
        self.row_mult = self.n//self.rows
        self.col_mult = self.m//self.cols
        try:
            assert(self.row_mult*self.rows == self.n)
            assert(self.col_mult*self.cols == self.m)
        except AssertionError:
            raise Exception(("The number of rows and number of columns must"
                             " divide the size of the grid."))
        self.districts = self.make_districts()

    def make_districts(self, array=None):
        """
        Returns array of np.arrays, one for each district.
        """
        if array is None:
            array = self.array
        # double indices works from numpy sugar
        return [array[self.row_mult*i: self.row_mult*(i+1),
                      self.col_mult*j: self.col_mult*(j+1)]
                for i in range(self.rows) for j in range(self.cols)]

    def votes(self, output=False):
        """Count votes in each district."""
        votes = dict()
        if output:
            print("Vote totals\n-----------\n")
        for num, district in enumerate(self.districts):
            IS_RED = 1
            IS_BLUE = 2
            votes[num] = {'red': list(district.flatten()).count(IS_RED),
                          'blue': list(district.flatten()).count(IS_BLUE)}
            if output:
                print("District {}:: Red vote: {}, Blue vote: {}".format(
                    num, votes[num]['red'], votes[num]['blue']))
        return votes

    def tally_votes(self, output=False):
        """Detect winners from votes in each district."""
        tallies = self.votes()
        if output:
            print("Tallying votes\n--------------\n")
        for num, district in enumerate(self.districts):
            dist_tally = tallies[num]
            dist_tally.update(self.determine_winner(dist_tally))
        return tallies

    def determine_winner(self, vote_tally):
        """
        Given a single district's vote_tally, determine the winner.

        Returns:
            A dictionary with the keys
                'winner'
                'red_wasted'
                'blue_wasted'
            computed from vote tally.
        """
        res = dict()
        if vote_tally['red'] > vote_tally['blue']:
            res['winner'] = 'red'
            res['red_wasted'] = vote_tally['red'] - vote_tally['blue'] - 1
            res['blue_wasted'] = vote_tally['blue']
        elif vote_tally['blue'] > vote_tally['red']:
            res['winner'] = 'blue'
            res['blue_wasted'] = vote_tally['blue'] - vote_tally['red'] - 1
            res['red_wasted'] = vote_tally['red']
        else:
            res['winner'] = 'tie'
            res['red_wasted'] = 0
            res['blue_wasted'] = 0
        return res

    def net_wasted_votes_by_district(self):
        """
        Compute net wasted votes in each district.

        Note:
            We adopt the convention that 1 wasted vote means a wasted blue vote,
            while -1 wasted vote means a wasted red vote.
        """
        res = dict()
        tallies = self.tally_votes()
        for num, district in enumerate(self.districts):
            res[num] = tallies[num]['blue_wasted'] - tallies[num]['red_wasted']
        return res

    def net_wasted_votes(self):
        """Sums the net wasted votes across all districts."""
        wasted_by_dist = self.net_wasted_votes_by_district()
        return sum(wasted_by_dist[num] for num in wasted_by_dist.keys())

    def efficiency_gap(self):
        """Absolute net wasted votes divided by the number of voters."""
        return abs(self.net_wasted_votes()) / (np.sum(self.array != 0))

    def votes_to_md_table(self):
        """
        Output votes to a markdown table.

        This is a jupyter notebook convenience method.
        """
        vote_tally = self.tally_votes()
        ret = "|District|Blue Vote|Red Vote|Winner|Blue Wasted|Red Wasted|Net Wasted|\n"
        ret += "|-|-|-|-|-|-|-|\n"
        for i in range(len(vote_tally)):
            district = i
            dist_res = vote_tally[i]
            bv = dist_res['blue']
            bw = dist_res['blue_wasted']
            rv = dist_res['red']
            rw = dist_res['red_wasted']
            nw = bw - rw
            winner = dist_res['winner']
            ret += "|{}|{}|{}|{}|{}|{}|{}|\n".format(district, bv, rv, winner, bw, rw, nw)
        return ret


class District_Viewer(SchellingViewer):
    """Viewer of Schelling District arrays"""

    def __init__(self, districts):
        super().__init__(districts.schelling_grid)
        self.row_multiplier = districts.row_mult
        self.col_multiplier = districts.col_mult

    def draw_grid(self):
        """Draws the district grid lines."""
        a = self.viewee.array
        n, m = a.shape
        lw = 2 if m < 10 else 1
        options = dict(color='white', linewidth=lw)
        rows = self.row_multiplier*np.arange(1, n)
        self.hlines = plt.hlines(rows, 0, m, **options)
        cols = self.col_multiplier*np.arange(1, m)
        self.vlines = plt.vlines(cols, 0, n, **options)
```

The functionality is built on top of Schelling, above. Typical use would look like

```
dgrid = Districts(10, cols=5, p=.2)
viewer = District_Viewer(dgrid)
viewer.draw(grid=True)
dgrid.tally_votes()
```


There are a couple of different ways to take this story. The most common response I have seen is to blame the employee who accidentally triggered the alarm, and to forgive the Governor his error because who could have guessed that something like this would happen? The second most common response I see is a certain shock that the key mouthpiece of the Governor in this situation is apparently Twitter.

There is some merit to both of these lines of thought. Considering them in turn: it is pretty unfortunate that some employee triggered a state of hysteria by pressing an incorrect button (or something to that effect). We always hope that people with responsibilities as great as these (responsibilities on the scale of thermonuclear war) act with extreme caution.

So certainly some blame should be placed on the employee.

As for Twitter, I wonder whether or not some sarcasm was lost between the Governor’s initial remarks and my reading of them in Doug’s article for CNN. It seems likely to me that this comment is meant more as commentary on the status of Twitter as the President’s preferred^{2} medium of communicating with the People. It certainly seems unlikely to me that the Governor would both frequently use Twitter for important public messages *and* forget his Twitter credentials. Perhaps this is code for “I couldn’t get in touch with the person who manages my Twitter account” (because that person was hiding in a bunker?), but that’s not actually important.

When I first read about the false alarm in Hawaii and the follow-up stories, I was immediately reminded of a story I’d read on HackerNews^{3} and reddit^{4} about a junior software developer starting a job at a new company. Bright-eyed and bushy-tailed, the developer begins to set up her^{5} development environment and build some familiarity with the database. Not quite knowing better, the developer used some credentials in the onboarding document given to her, and ultimately accidentally deleted the entire (actual, production) database.

The company immediately panics and blames her. It is her fault that she destroyed the database, and now the company has an enormous loss of data. They don’t have backups, they’re bringing in legal to assess damage, etc.

What is the moral of this cautionary Parable?

It is certainly NOT that one should blame the young developer.

The moral is that the system should not allow people who do not know any better to access (or delete) the production database, and further that there should be backups so that this sort of catastrophic incident cannot occur. Daily database backups and not including production database access credentials in onboarding documents are two steps in the right direction.

In a famous story from IBM,^{6} a junior developer makes a mistake that costs the company 10 million dollars. He walks into the office of Tom Watson, the CEO, expecting to get fired. “Fire you?” Mr Watson asked. “I just spent 10 million educating you.”

The system and culture should be crafted to

- prevent these mistakes,
- quickly correct these mistakes, and
- learn from errors to improve the system and culture.

Stories in the news have thus far focused on inadequate prevention, such as the widely circulated image of poor interface design

(not to be confused with the earlier, even worse, version, which was apparently made-up^{7}), or stories have focused on inadequate ability to quickly correct these mistakes (such as this CNN article indicating that the Governor’s inability to tweet got in the way of quickly restoring peace of mind).

But what I’m interested in is: what will be learned from this mistake, and what changes to the system will be made? And slightly deeper, what led to the previous system?

US Pacific Command and the office of the Governor of Hawaii need to run a complete post-mortem to understand

- what led to this false alarm,
- what led to the nearly forty minutes between understanding there was a false alarm and disseminating this information, and
- what things should be done to address these issues.

Further, this information should be shared widely with the defense and alarm networks throughout the US. Surely Hawaii is not the only state with that (or a similar) setup in place. Can you not imagine this happening in some other state? Other nations might take this as inspiration to reflect on their own disaster-alert systems.

This is a huge opportunity to learn and improve. It may very well be that the poor employee continually makes ridiculous mistakes and should be let go, or it may be that it requires too much concentration to not make an error and the employee can help foolproof the system.

Unfortunately, due to the sensitive nature of this software and scenario, I don’t think that we’ll get to hear about the most important part — what is learned and changed. But it’s still the most important part. It’s the important thing to be learned from this Parable for the Nuclear Age.

Today I give a talk on counting lattice points on one-sheeted hyperboloids. These are the shapes described by

$$ X_1^2 + \cdots + X_{d-1}^2 = X_d^2 + h,$$

where $h > 0$ is a positive integer. The question is: how many lattice points $x$ are on such a hyperboloid with $| x |^2 \leq R$; or equivalently, how many lattice points are on such a hyperboloid and contained within a ball of radius $\sqrt R$ centered at the origin?

I describe my general approach of transforming this into a question about the behavior of modular forms, and then using spectral techniques from the theory of modular forms to understand this behavior. This becomes a question of understanding the shifted convolution Dirichlet series

$$ \sum_{n \geq 0} \frac{r_{d-1}(n+h)r_1(n)}{(2n + h)^s}.$$

Ultimately this comes from the modular form $\theta^{d-1}(z) \overline{\theta(z)}$, where

$$ \theta(z) = \sum_{m \in \mathbb{Z}} e^{2 \pi i m^2 z}.$$
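To sketch where the Dirichlet series comes from (a standard unfolding, stated here informally): expanding the product of theta functions at $z = x + iy$ gives

$$ \theta^{d-1}(z) \overline{\theta(z)} = \sum_{n, m \geq 0} r_{d-1}(n) r_1(m) e^{2\pi i (n - m) x} e^{-2\pi (n + m) y}, $$

so integrating against $e^{-2\pi i h x}$ in $x$ picks out exactly the terms with $n = m + h$,

$$ \int_0^1 \theta^{d-1}(z) \overline{\theta(z)} e^{-2\pi i h x} \, dx = \sum_{m \geq 0} r_{d-1}(m + h) r_1(m) e^{-2\pi (2m + h) y}, $$

and then a Mellin transform in $y$ produces, up to a factor of $\Gamma(s)/(2\pi)^s$, the shifted convolution series above.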

Here are the slides for this talk. Note that this talk is based on chapter 5 of my thesis, and (hopefully) soon a preprint of this chapter ready for submission will appear on the arXiv.
