# Category Archives: Story

## How do we decide how many representatives there are for each state?

The US House of Representatives has 435 voting members (and 6 non-voting members: one each from Washington DC, Puerto Rico, American Samoa, Guam, the Northern Mariana Islands, and the US Virgin Islands). Roughly speaking, the higher the population of a state is, the more representatives it should have.

But what does this really mean?

If we looked at the US Constitution to make this clear, we would find little help. The third clause of Article I, Section II of the Constitution says

> Representatives and direct Taxes shall be apportioned among the several States which may be included within this Union, according to their respective Numbers … The number of Representatives shall not exceed one for every thirty thousand, but each state shall have at least one Representative.

This doesn’t give much clarity.1 In fact, uncertainty surrounding the proper apportionment of representatives led to the first presidential veto.

## The Apportionment Act of 1792

According to the 1790 Census, there were 3199415 free people and 694280 slaves in the United States.2

When Congress sat to decide on apportionment in 1792, they initially computed the total (weighted) population of the United States to be 3199415 + (3/5)⋅694280 = 3615983. They noted that the Constitution says there should be no more than 1 representative for every 30000 people, so they divided the total population by 30000, getting 3615983/30000 ≈ 120.5, and rounded down.

Thus there were to be 120 representatives. If one takes each state and divides their populations by 30000, one sees that the states should get the following numbers of representatives3

| State | Ideal | Rounded down |
| --- | --- | --- |
| Vermont | 2.851 | 2 |
| New Hampshire | 4.727 | 4 |
| Maine | 3.218 | 3 |
| Massachusetts | 12.62 | 12 |
| Rhode Island | 2.281 | 2 |
| Connecticut | 7.894 | 7 |
| New York | 11.05 | 11 |
| New Jersey | 5.985 | 5 |
| Pennsylvania | 14.42 | 14 |
| Delaware | 1.851 | 1 |
| Maryland | 9.283 | 9 |
| Virginia | 21.01 | 21 |
| Kentucky | 2.290 | 2 |
| North Carolina | 11.78 | 11 |
| South Carolina | 6.874 | 6 |
| Georgia | 2.361 | 2 |

But here is a problem: the total number of rounded-down representatives is only 112, so there are 8 more representatives to give out. How did they decide which states to assign these extra representatives to? They chose the 8 states with the largest fractional “ideal” parts:

1. New Jersey (0.985)
2. Connecticut (0.894)
3. South Carolina (0.874)
4. Vermont (0.851)
5. Delaware (0.851)
6. Massachusetts+Maine (0.838)
7. North Carolina (0.78)
8. New Hampshire (0.727)

(Maine was part of Massachusetts at the time, which is why I combine their fractional parts). Thus the original proposed apportionment gave each of these states one additional representative. Is this a reasonable conclusion?
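This largest-remainder procedure (later known as Hamilton’s method) is easy to sketch in code. The following is a minimal sketch, phrased in terms of a fixed total number of seats rather than the fixed divisor of 30000 used in 1792; the state names and figures in the example are hypothetical, not census data.

```python
from math import floor

def hamilton(populations, seats):
    """Largest-remainder apportionment: round every state's ideal seat
    count down, then give the leftover seats to the states with the
    largest fractional parts."""
    total = sum(populations.values())
    # Each state's ideal (fractional) number of seats.
    ideal = {state: seats * pop / total for state, pop in populations.items()}
    # Start by rounding every ideal count down.
    alloc = {state: floor(q) for state, q in ideal.items()}
    leftover = seats - sum(alloc.values())
    # States ordered by the fractional part of their ideal seat count.
    by_fraction = sorted(ideal, key=lambda s: ideal[s] - alloc[s], reverse=True)
    for state in by_fraction[:leftover]:
        alloc[state] += 1
    return alloc
```

For the 1792 proposal the quotas were computed with a fixed divisor of 30000 rather than a fixed house size, but the rounding down and the largest-fraction tie-break are the same.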

Perhaps. But these 8 states each ended up having more than 1 representative for each 30000. Was this limit in the Constitution meant country-wide (so that 120 across the country is a fine number) or state-by-state (so that, for instance, Delaware, which had 59000 total population, should not be allowed to have more than 1 representative)?

There is the other problem that New Jersey, Connecticut, Vermont, New Hampshire, and Massachusetts were undoubtedly Northern states. Thus Southern representatives asked, Is it not unfair that the fractional apportionment favours the North?4

Regardless of the exact reasoning, Secretary of State Thomas Jefferson and Attorney General Edmund Randolph (both from Virginia) urged President Washington to veto the bill, and he did. This was the first use of the presidential veto.

Afterwards, Congress got together and decided to start with 33000 people per representative and to ignore fractional parts entirely. The exact method became known as the Jefferson Method of Apportionment, and it was used in the US until 1830. The subtle part of the method is deciding on the number 33000. In the US, the exact number of representatives sometimes changed from election to election. This number is closely related to the population-per-representative, but both were often chosen through political maneuvering rather than by any exact rule.
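A minimal sketch of Jefferson’s divisor method follows; the state names and figures in any example are hypothetical, and the only real constants are the divisor and the constitutional floor of one seat per state.

```python
from math import floor

def jefferson(populations, divisor):
    """Jefferson's method: divide each state's population by a fixed
    divisor (e.g. 33000 people per seat) and drop the fractional part
    entirely, subject to the constitutional minimum of one seat."""
    return {state: max(1, floor(pop / divisor))
            for state, pop in populations.items()}
```

The subtlety the text describes lives in the divisor: a larger divisor yields a smaller House, so the divisor itself becomes the object of negotiation.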

As an aside, it’s interesting to note that this method of apportionment is widely used in the rest of the world, even though it was abandoned in the US.5 In fact, it is still used in Albania, Angola, Argentina, Armenia, Aruba, Austria, Belgium, Bolivia, Brazil, Bulgaria, Burundi, Cambodia, Cape Verde, Chile, Colombia, Croatia, the Czech Republic, Denmark, the Dominican Republic, East Timor, Ecuador, El Salvador, Estonia, Fiji, Finland, Guatemala, Hungary, Iceland, Israel, Japan, Kosovo, Luxembourg, Macedonia, Moldova, Monaco, Montenegro, Mozambique, Netherlands, Nicaragua, Northern Ireland, Paraguay, Peru, Poland, Portugal, Romania, San Marino, Scotland, Serbia, Slovenia, Spain, Switzerland, Turkey, Uruguay, Venezuela and Wales — as well as in many countries for election to the European Parliament.

## Measuring the fairness of an apportionment method

At the core of different ideas for apportionment is fairness. How can we decide if an apportionment is fair?

We’ll consider this question in the context of the post-1911 United States — after the number of seats in the House of Representatives was established. This number was set at 433, but with the proviso that anticipated new states Arizona and New Mexico would each come with an additional seat.6

So given that there are 435 seats to apportion, how might we decide if an apportionment is fair? Fundamentally, this should relate to the number of people each representative actually represents.

For example, in the 1792 apportionment, the single Delawaran representative was there to represent all 55000 of its population, while each of the two Rhode Island representatives corresponded to 34000 Rhode Islanders. Within the House of Representatives, it was as though the voice of each Delawaran counted only 61 percent as much as the voice of each Rhode Islander.7

The number of people each representative actually represents is at the core of the notion of fairness — but even then, it’s not obvious how to measure it.

Suppose we enumerate the states, so that $S_i$ refers to state $i$. We’ll also denote by $P_i$ the population of state $i$, and we’ll let $R_i$ denote the number of representatives allotted to state $i$.

In the ideal scenario, every representative would represent the exact same number of people. That is, we would have
$$\text{pop. per rep. in state i} = \frac{P_i}{R_i} = \frac{P_j}{R_j} = \text{pop. per rep. in state j}$$

for every pair of states i and j. But this won’t ever happen in practice.

Generally, we should expect $\frac{P_i}{R_i} \neq \frac{P_j}{R_j}$ for every pair of distinct states. If
$$\frac{P_i}{R_i} > \frac{P_j}{R_j}, \tag{1}$$

then we can say that each representative in state i represents more people, and thus those people have a diluted vote.

### Amounts of Inequality

There are lots of pairs of states. How do we actually measure these inequalities? This would make an excellent question in a statistics class (illustrating how one can answer the same question in different, equally reasonable ways) or even a civics class.

A few natural ideas emerge:

• We might try to minimize the differences in constituency size: $\left \lvert \frac{P_i}{R_i} - \frac{P_j}{R_j} \right \rvert$.
• We might try to minimize the differences in per capita representation: $\left \lvert \frac{R_i}{P_i} - \frac{R_j}{P_j} \right \rvert$.
• We might take overall size into account, and try to minimize both the relative constituency size and relative difference in per capita representation.

This last one needs a bit of explanation. Define the relative difference between two numbers x and y to be
$$\frac{\lvert x - y \rvert}{\min(x, y)}.$$

Suppose that for a pair of states, we have that $(1)$ holds, i.e. that representatives in state j have smaller constituencies than in state i (and therefore people in state j have more powerful votes). Then the relative difference in constituency size is
$$\frac{P_i/R_i - P_j/R_j}{P_j/R_j} = \frac{P_i/R_i}{P_j/R_j} - 1.$$

The relative difference in per capita representation is
$$\frac{R_j/P_j - R_i/P_i}{R_i/P_i} = \frac{R_j/P_j}{R_i/P_i} - 1 = \frac{P_i/R_i}{P_j/R_j} - 1.$$

Thus these are the same! By accounting for differences in size by taking relative proportions, we see that minimizing relative difference in constituency size and minimizing relative difference in per capita representation are actually the same.
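This identity is easy to check numerically. Here is a quick sketch with illustrative (not census) figures:

```python
# State i has the larger constituency (diluted votes); state j the smaller.
P_i, R_i = 55000, 1
P_j, R_j = 68000, 2

# Relative difference in constituency size ...
rel_constituency = (P_i / R_i - P_j / R_j) / (P_j / R_j)
# ... and relative difference in per capita representation.
rel_per_capita = (R_j / P_j - R_i / P_i) / (R_i / P_i)

# The two measures agree, exactly as the algebra above shows.
assert abs(rel_constituency - rel_per_capita) < 1e-12
```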

All three of these measures seem reasonable at first inspection. Unfortunately, they all give different apportionments (and all are different from Jefferson’s scheme — though to be fair, Jefferson’s scheme doesn’t seek to minimize inequality and there is no reason to think it should behave the same).

Each of these ideas leads to a different apportionment scheme, and in fact each has a name.

• Minimizing differences in constituency size is the Dean method.
• Minimizing differences in per capita representation is the Webster method.
• Minimizing relative differences between both constituency size and per capita representation is the Hill (or sometimes Huntington-Hill) method.

Further, each of these schemes has been used at some time in US history. Webster’s method was used immediately after the 1840 census, but for the 1850 census the original Alexander Hamilton scheme (the scheme vetoed by Washington in 1792) was used. In fact, the Apportionment Act of 1850 set the Hamilton method as the primary method, and this was nominally used until 1900.8 The Webster method was used again immediately after the 1910 census. Due to claims of incomplete and inaccurate census counts, no apportionment occurred based on the 1920 census.9

In 1929 an automatic apportionment act was passed.10 In it, up to three different apportionment schemes would be provided to Congress after each census, based on a total of 435 seats:

1. The apportionment that would come from whatever scheme was most recently used. (In 1930, this would be the Webster method).
2. The apportionment that would come from the Webster method.
3. The apportionment that would come from the newly introduced Hill method.

When reading congressional discussion from the time, it is good to note that Webster’s method is sometimes called the method of major fractions and Hill’s method is sometimes called the method of equal proportions. Further, in a letter written by Bliss, Brown, Eisenhart, and Pearl of the National Academy of Sciences, Hill’s method was declared to be the recommendation of the Academy.11 From 1930 on, Hill’s method has been used.

### Why use the Hill method?

The Hamilton method led to a few paradoxes and highly counterintuitive behaviors that many representatives found disagreeable. In 1880, a paradox now called the Alabama paradox was noted: when deciding how many representatives should be in the House, it was observed that if the House had 299 members, Alabama would have 8 representatives, but if the House had 300 members, Alabama would have 7 representatives — that is, making one more seat available led to Alabama receiving one fewer seat.

The problem is the fluctuating relationships between the many fractional parts of the ideal number of representatives per state (similar to those tallied in the table in the section The Apportionment Act of 1792).

Another paradox was discovered in 1900, known as the Population paradox. This is a scenario in which a state with a large population and rapid growth can lose a seat to a state with a small population and smaller population growth. In 1900, Virginia lost a seat to Maine, even though Virginia’s population was larger and growing much more rapidly.

In particular, in 1900, Virginia had 1854184 people and Maine had 694466 people, so Virginia had 2.67 times the population of Maine. In 1901, Virginia had 1873951 people and Maine had 699114 people, so Virginia had 2.68 times the population of Maine. And yet Hamilton apportionment would have given 10 seats to Virginia and 3 to Maine in 1900, but 9 to Virginia and 4 to Maine in 1901.

Central to this paradox is that even though Virginia was growing faster than Maine, the rest of the nation was growing faster still, and proportionally Virginia lost more because it was a larger state. But it’s still paradoxical for a state to lose a representative to a second state that is both smaller in population and is growing less rapidly each census.12

The Hill method can be shown to not suffer from either the Alabama paradox or the Population paradox. That it doesn’t suffer from these paradoxical behaviours and that it seeks to minimize a meaningful measure of inequality led to its adoption in the US.13

## Understanding the modern Hill method in practice

Since 1930, the US has used the Hill method to apportion seats for the House of Representatives. But as described above, it may be hard to see how to actually apply the Hill method. Recall that $P_i$ is the population of state $i$, and $R_i$ is the number of representatives allocated to state $i$. The Hill method seeks to minimize
$$\frac{P_i/R_i - P_j/R_j}{P_j/R_j} = \frac{P_i/R_i}{P_j/R_j} - 1$$

whenever $P_i/R_i > P_j/R_j$. Stated differently, the Hill method seeks to guarantee the smallest relative differences in constituency size.

We can work out a different way of understanding this apportionment that is easier to implement in practice.

Suppose that we have allocated all of the representatives to the states, so that state $j$ has $R_j$ representatives, and suppose that this allocation successfully minimizes relative differences in constituency size. Take two different states $i$ and $j$ with $P_i/R_i > P_j/R_j$. (If this isn’t possible, then the allocation is perfect.)

We can ask if it would be a good idea to move one representative from state $j$ to state $i$, since state $j$’s constituencies are smaller. This can be thought of as working with $R_i' = R_i + 1$ and $R_j' = R_j - 1$. If this transfer lessens the inequality then it should be made — but since we are supposing that the allocation already minimizes relative difference in constituency size, the inequality must be at least as large after the transfer. This necessarily means that $P_j/R_j' > P_i/R_i'$ (since otherwise the relative difference is strictly smaller) and
$$\frac{P_jR_i'}{P_iR_j'} - 1 \geq \frac{P_iR_j}{P_jR_i} - 1$$

(since the relative difference must be at least as large). This is equivalent to
$$\frac{P_j(R_i+1)}{P_i(R_j-1)} \geq \frac{P_iR_j}{P_jR_i} \iff \frac{P_j^2}{(R_j-1)R_j} \geq \frac{P_i^2}{R_i(R_i+1)}.$$

As every variable is positive, we can rewrite this as
$$\frac{P_j}{\sqrt{(R_j - 1)R_j}} \geq \frac{P_i}{\sqrt{R_i(R_i+1)}}. \tag{2}$$

We’ve shown that $(2)$ must hold whenever $P_i/R_i > P_j/R_j$ in a system that minimizes relative difference in constituency size. But in fact it must hold for all pairs of states $i$ and $j$.

Clearly it holds if $i = j$, as the denominator on the left is strictly smaller.

If we are in the case when $P_j/R_j > P_i/R_i$, then we necessarily have the chain $P_j/(R_j - 1) > P_j/R_j > P_i/R_i > P_i/(R_i + 1)$. Multiplying the inner and outer inequalities shows that $(2)$ holds trivially in this case.

This inequality shows that the greatest obstruction to being perfectly apportioned as per Hill’s method is the largest fraction
$$\frac{P_i}{\sqrt{R_i(R_i+1)}}$$
being too large. (Some call this term the Hill rank-index).

### An iterative Hill apportionment

This observation leads to the following iterative construction of a Hill apportionment. Initially, assign every state 1 representative (since by the Constitution, each state gets at least one representative). Then, given an apportionment for n seats, we can get an apportionment for n + 1 seats by assigning the additional seat to the state i which maximizes the Hill rank-index $P_i/\sqrt{R_i(R_i+1)}$.

Further, it can be shown that the Hill apportionment is unique (except for ties in the Hill rank-index, which are exceedingly rare in practice).

This is very quickly and easily implemented in code. In a later note, I will share the code I used to compute the various data for this note, as well as an implementation of Hill apportionment.
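In the meantime, here is a minimal sketch of the iterative construction described above (not the code used for the data in this note); any state names and populations in an example are hypothetical.

```python
from math import sqrt

def hill(populations, seats):
    """Iterative Hill (Huntington-Hill) apportionment.

    Each state starts with the constitutionally guaranteed single seat;
    every remaining seat goes to the state currently maximizing the
    Hill rank-index P / sqrt(R * (R + 1)).

    Assumes seats >= number of states.
    """
    alloc = {state: 1 for state in populations}
    for _ in range(seats - len(populations)):
        best = max(alloc,
                   key=lambda s: populations[s] / sqrt(alloc[s] * (alloc[s] + 1)))
        alloc[best] += 1
    return alloc
```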

## Additional notes: Consequences of the 1870 and 1990 Apportionments

### The 1870 Apportionment

Officially, Dean’s method of apportionment has never been used. But it was perhaps used in 1870 without being described as such. Officially, Hamilton’s method was in place and the size of the House was agreed to be 292. But the actual apportionment that occurred agreed with Dean’s method, not Hamilton’s method. Specifically, New York and Illinois were each given one fewer seat than Hamilton’s method would have given, while New Hampshire and Florida were given one additional seat each.

There are many circumstances surrounding the 1870 census and apportionment that make this a particularly convoluted time. Firstly, the US had just experienced its Civil War, where millions of people died and millions others moved or were displaced. Animosity and reconstruction were both in full swing. Secondly, the US passed the 14th amendment in 1868, so that suddenly the populations of Southern states grew as former slaves were finally allowed to be counted fully.

One might think that having two pairs of states swap a representative would be mostly inconsequential. But this difference — using Dean’s method instead of the agreed-upon Hamilton method — changed the result of the 1876 Presidential election. In this election, Samuel Tilden won New York while Rutherford B. Hayes won Illinois, New Hampshire, and Florida. As a result, Tilden received one fewer electoral vote and Hayes one more — and in the end Hayes won with 185 electoral votes to Tilden’s 184.

There is one further factor that makes this yet more convoluted. The 1876 election is perhaps the most disputed presidential election. In Florida, Louisiana, and South Carolina, each party reported that its candidate had won the state. Legitimacy was in question, and it’s widely believed that a deal was struck between the Democratic and Republican parties (see wikipedia and 270 to win). As a result of this deal, the Republican candidate Rutherford B. Hayes gained all disputed votes and removed federal troops (which had been propping up Reconstruction efforts) from the South. This marked the end of the Reconstruction period and allowed the rise of the Democratic Redeemers (and their rampant black voter disenfranchisement) in the South.

### The 1990 Apportionment

Similar in consequence though not in controversy, the apportionment of 1990 influenced the results of the 2000 presidential election between George W. Bush and Al Gore (the 2000 census was not complete before the election took place, so the election occurred with the 1990 electoral college sizes). The modern Hill apportionment method was used, as it has been since 1930. But interestingly, if the originally proposed Hamilton method of 1792 had been used, the electoral college would have been tied at 269.14 If Jefferson’s method had been used, then Gore would have won with 271 votes to Bush’s 266.

These decisions have far-reaching consequences!

## Sources

1. Balinski, Michel L., and H. Peyton Young. Fair representation: meeting the ideal of one man, one vote. Brookings Institution Press, 2010.
2. Balinski, Michel L., and H. Peyton Young. “The quota method of apportionment.” The American Mathematical Monthly 82.7 (1975): 701-730.
3. Bliss, G. A., Brown, E. W., Eisenhart, L. P., & Pearl, R. (1929). Report to the President of the National Academy of Sciences. February, 9, 1015-1047.
4. Crocker, R. House of Representatives Apportionment Formula: An Analysis of Proposals for Change and Their Impact on States. DIANE Publishing, 2011.
5. Huntington, The Apportionment of Representatives in Congress, Transactions of the American Mathematical Society 30 (1928), 85–110.
6. Peskin, Allan. “Was there a Compromise of 1877.” The Journal of American History 60.1 (1973): 63-75.
7. US Census Results
8. US Constitution
9. US Congressional Record, as collected at https://memory.loc.gov/ammem/amlaw/lwaclink.html
10. George Washington’s collected papers, as archived at https://web.archive.org/web/20090124222206/http://gwpapers.virginia.edu/documents/presidential/veto.html
11. Wikipedia on the Compromise of 1877, at https://en.wikipedia.org/wiki/Compromise_of_1877
12. Wikipedia on Arthur Vandenberg, at https://en.wikipedia.org/wiki/Arthur_Vandenberg

## African clawed frog

In the early 1930s, Hillel Shapiro and Harry Zwarenstein, two South African researchers, discovered that injecting a pregnant woman’s urine into an African clawed frog (Xenopus laevis) caused the frog to ovulate within the next 18 hours. This became a common (and apparently reliable) pregnancy test until more modern pregnancy tests started to become available in the 1960s.

Behold the marvels of science! (Unless you’re a frog).

When I first heard this, I was both astounded and… astounded. How would you discover this? How many things were injected into how many animals before someone realized this would happen?

### Sources

• https://en.wikipedia.org/wiki/African_clawed_frog

• Shapiro, Hillel; Zwarenstein, Harry (March 1935). “A test for the early diagnosis of pregnancy”. South African Medical Journal. 9: 202.

• Shapiro, H. A.; Zwarenstein, H. (1934-05-19). “A Rapid Test for Pregnancy on Xenopus lævis”. Nature. 133 (3368): 762

## Before frogs, there were mice

In 1928, the early endocrinologist Bernhard Zondek and the biologist Selmar Aschheim were studying hormones and human biology. As far as I can tell, they hypothesized that hormones associated with pregnancy might be present in pregnant women’s urine. They decided to see if other animals would react to the presence of these hormones, so they went and collected the urine of pregnant women in order to… test their hypothesis.1

It turns out that they were right. The hormone human chorionic gonadotropin (hCG) is produced by the placenta shortly after a woman becomes pregnant, and this hormone is present in the urine of pregnant women. But as far as I can tell, hCG itself wasn’t identified until the 50s — so there was still some guesswork going on. Nonetheless, detecting hCG is at the heart of many home pregnancy tests today. Zondek and Aschheim developed a test (creatively referred to as the Aschheim-Zondek test2) that worked like this:

1. Take a young female mouse between 3 and 5 weeks old. Actually, take about 5 mice, as one should expect that a few of the mice won’t survive long enough for the test to be complete.
2. Inject urine into the bloodstream of each mouse three times a day for three days.
3. Two days after the final injection, kill the surviving mice and dissect them.3
4. If the ovaries are enlarged (i.e. 2-3 times normal size) and show red dots, then the urine comes from a pregnant woman. If the ovaries are merely enlarged, but there are no red dots, then the woman isn’t pregnant.4

In a trial, this test was performed on 2000 different women and had a 98.9 percent successful identification rate.

From this perspective, it’s not as surprising that young biologists and doctors sought to inject pregnant women’s urine into various animals to see what happened. In many ways, frogs were superior to mice, as one doesn’t need to kill the frog to determine if the woman is pregnant.

### Sources

• Ettinger, G. H., G. L. M. Smith, and E. W. McHenry. “The Diagnosis of Pregnancy with the Aschheim-Zondek Test.” Canadian Medical Association Journal 24 (1931): 491–2.
• Evans, Herbert, and Miriam Simpson. “Aschheim-Zondek Test for Pregnancy–Its Present Status.” California and Western Medicine 32 (1930): 145.

## And rabbits too

Maurice Friedman, at the University of Pennsylvania, discovered that one could use rabbits instead of mice. (Aside from the animal, it’s essentially the same test).

Apparently this became a very common pregnancy test in the United States. A common misconception arose that the rabbit’s death indicated pregnancy: people might say that “the rabbit died” to mean that they were pregnant.

But in fact, just like mice, all rabbits used for these pregnancy tests died, as they were dissected.5

### Sources

• Friedman, M. H. (1939). The assay of gonadotropic extracts in the post-partum rabbit. Endocrinology, 24(5), 617-625.

## The Hawaiian Missile Crisis

I read an article from Doug Criss on CNN yesterday with the title “Hawaii’s governor couldn’t correct the false missile alert sooner because he forgot his Twitter password.”1 It turns out that Governor Ige knew within two minutes that the alert was a false alarm, but (in the words of the article) “he couldn’t hop on Twitter and tell everybody — because he didn’t know his password.”

There are a couple of different ways to take this story. The most common response I have seen is to blame the employee who accidentally triggered the alarm, and to forgive the Governor his error because who could have guessed that something like this would happen? The second most common response I see is a certain shock that the key mouthpiece of the Governor in this situation is apparently Twitter.

There is some merit to both of these lines of thought. Considering them in turn: it is pretty unfortunate that some employee triggered a state of hysteria by pressing an incorrect button (or something to that effect). We always hope that people with great responsibilities (like those concerning thermonuclear war) act with extreme caution.

So certainly some blame should be placed on the employee.

As for Twitter, I wonder whether or not a sarcasm filter has been watered down between the Governor’s initial remarks and my reading of them in Doug’s article for CNN. It seems likely to me that this comment is meant more as commentary on the status of Twitter as the President’s preferred2 medium of communicating with the People. It certainly seems unlikely to me that the Governor would both frequently use Twitter for important public messages and forget his Twitter credentials. Perhaps this is code for “I couldn’t get in touch with the person who manages my Twitter account” (because that person was hiding in a bunker?), but that’s not actually important.

## Having no internet for four half weeks isn’t necessarily all bad

I moved to the UK to begin a postdoc with John Cremona at the University of Warwick. And for the last four weeks, I have had no internet at my home. This wasn’t by choice — it’s due to the reluctance of my local gigantitelecom to press a button that says “begin internet service.” I could write more about that, but that’s not the purpose of this note.

The purpose of this note is to describe the large effects of having no internet at my home for the last four weeks. I’m at my home about half the time, leading to the title.

I have become accustomed to having the internet at all times. I now see that many of my habits involved the internet. In the mornings and evenings, I would check HackerNews, longform, and reddit for interesting reads. Invariably there were more interesting-seeming things than I could read, and my Checkout bookmarks list is a growing, hundreds-of-items-long list of maybe-interesting stuff. In the in-between times throughout the day, I would check out a few of these bookmarks.

All in all, I would spend an enormous amount of time reading random interesting tidbits, even though much of this time was spread out in the “in-betweens” in my day.

When I didn’t have internet at my home, I had to fill all those “in-between” moments, as well as my waking and sleeping moments, with something else. Faced with the necessity of doing something, I filled most of these moments with reading books. Made out of paper. (The same sort of books whose sales are rising compared to ebooks, contrary to most predictions a few years ago).

I’d forgotten how much I enjoyed reading a book in large chunks, in very few sittings. I usually have an ebook on my phone that I read during commutes, and perhaps most of my idle reading over the last several years has been in 20 page increments. The key phrase here is “idle reading”. I now set aside time to “actively read”, in perhaps 100 page increments. Reading enables a “flow state” very similar to the sensation I get when mathing continuously, or programming continuously, for a long period of time. I not only read more, but I enjoy what I’m reading more.

As a youth, I would read all the time. Fun fact: at one time, I’d read almost every book in the Star Wars expanded universe. There were over a hundred, and they were all canon (before Disney paved over the universe to make room). I learned to love reading by reading science fiction, and the first novel I remember reading was a copy of Andre Norton’s “The Beastmaster” (… which is great. A part telepath part Navajo soldier moves to another planet. Then it’s a space western. What’s not to love?).

My primary source of books is the library at the University of Warwick. Whether through differences in continental taste or simply a case of different focus, the University Library doesn’t have many books in its fiction collection that I’ve been intending to read. I realize now that most of the nonfiction I read originates on the internet, while much of the fiction I read comes from books. Now, encouraged by a lack of alternatives, I picked up many more and varied nonfiction books than I would otherwise have.

As an unexpected side effect, I found that I would also carefully download some of the articles I had identified as “interesting” a bit before I headed home from the office. Without internet, I read far more of my checkout bookmarks than I did with internet. Weird. Correspondingly, I found that I would spend a bit more time cutting down the false-positive rate — I used to bookmark almost anything that I thought might be interesting but which I wasn’t going to read right then. Now I culled the wheat from the chaff, as harvesting wheat takes time. (Perhaps this is something I should do more often. I recognize that there are services and newsletters that promise to identify great materials, but somehow none of them have worked better for my tastes than hackernews or longform. And both have questionable signal-to-noise ratios.)

The result is that I’ve goofed off reading probably about the same amount of time, but in fewer topics and at greater depth in each. It’s easy to jump from 10 page article to 10 page article online; when the medium is books, things come in larger chunks.

I feel more productive reading a book, even though I don’t actually attribute much to the difference. There may be something to the act of reading contiguously and continuously for long periods of time, though. This correlated with an overall increase in my “chunking” of tasks across continuous blocks of time, instead of loosely multitasking. I think this is for the better.

I now have internet at my flat. Some habits will slide back, but there are other new habits that I will keep. I’ll keep my bedroom computer-free. In the evening, this means I read books before I sleep. In the morning, this means I must leave and go to the other room before I waste any time on online whatevers. Both of these are good. And I’ll try to continue to chunk time.

To end, I’ll note what I read in the last month, along with a few notes about each.

## Fiction

From best to worst.

• The second best was The Lathe of Heaven by Ursula Le Guin. This is some classic fantasy, and it is pretty mindbending. I think the feel of many of Le Guin’s books is very similar — there are many interesting ideas throughout the book, but the book deliberately loses coherence as the flow and fury of the plot reaches a climax. I like The Lathe of Heaven more than A Wizard of Earthsea and about the same as The Left Hand of Darkness, also by Le Guin. I read this book in three sittings.
• I read three of the Witcher books by Andrzej Sapkowski, namely The Sword of Destiny, Blood of Elves, and Time of Contempt. These are fun, not particularly deep reads. There is a taste of moral ambiguity that I like, as it’s different from what I normally find. On the other hand, Sapkowski often uses humor or ambiguity in place of a meaningful, coherent plot. The Sword of Destiny is a collection of short tales, and I think his short tales are better than his novels, entirely because one doesn’t need or expect coherence from short stories.

I’m currently reading The Confusion by Neal Stephenson, book two of the Baroque Cycle. Right now, I am exactly one page in.

## Nonfiction

I rank these from those I most enjoyed to those I least enjoyed.

• How Equal Temperament Ruined Harmony, by Duffin. This was described to me as an introduction to music theory [in fact, I noted it from a comment thread on hackernews somewhere], but really it is a treatise on the history of tuning and temperaments. It turns out that modern equal temperament suffers from many flaws that aren’t commonly taught. When I got back to the office after reading this book, I spent a good amount of time on youtube listening to songs in meantone tuning and just intonation. There is a difference! I read this book in two sittings; it’s short, pretty simple, and generally nice, though there are several long passages that are simply better to skip. Nonetheless I learned a lot.
• A Random Walk Down Wall Street, by Burton Malkiel. I didn’t know too much about investing before reading this book. I wouldn’t actually say that I know too much after reading it either, but the book is about investing. I was warned that reading it would make me think that the only way to really invest is to purchase index funds, and indeed that is the overwhelming (and explicit) takeaway from the book. But I found the book surprisingly readable, and read it very quickly. I find that some of the analysis is biased towards long-term investing, even as a basis of comparison.
• Guesstimation, by Weinstein. Ok, perhaps it is not fair to say that one “reads” this book. It consists of many Fermi-style questions (how-many-golf-balls-does-it-take-to-fill-up-a-football-stadium type questions), followed by their analysis. So I would read a question, sit down and do my own analysis, and then compare it against Weinstein’s. I was stunned at how often the analyses were tremendously similar and arrived at essentially the same order of magnitude. [But not always, that’s for sure. There are also lots of things that I estimate very, very poorly.] There’s a small subgenre of “popular mathematics for the reader who is willing to take out a pencil and paper” (which can’t have a big readership, but which I thoroughly enjoy), and this is a good book within that subgenre. I’m currently working through its sequel.
• Nature’s Numbers, by Ian Stewart. This is a pop math book. Ian Stewart is an emeritus professor at my university, so it seemed appropriate to read something of his. It is a surprisingly fast read (I read it in a single sitting). Stewart is known for writing approachable popular accounts of mathematics, and this book fits that reputation.
• The Structure of Scientific Revolutions, by Thomas Kuhn. This is metascience. I read the first half of this book/essay very quickly, and I struggled through its second half. It came highly recommended, but I found the signal-to-noise ratio to be pretty low. It might be that I wasn’t willing to navigate its careful treading around equivocation. However, I think many of the ideas are good. I don’t know if someone has written a 30-page summary, but I think this should be possible, and it would be a good alternative to the book/essay itself.
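The Fermi-style analysis that Guesstimation encourages fits in a few lines of arithmetic. Here is a minimal sketch of the golf-balls-in-a-stadium question; every input (stadium size, ball diameter, packing fraction) is my own rough assumption for illustration, not Weinstein’s numbers.

```python
import math

# Fermi estimate: how many golf balls fill a football stadium?
# Every input here is a rough assumption for illustration.
stadium_volume = 100 * 60 * 30          # m^3: ~100m x 60m footprint, ~30m deep bowl
ball_radius = 0.0215                    # m: a golf ball is ~4.3 cm in diameter
ball_volume = (4 / 3) * math.pi * ball_radius ** 3
packing_fraction = 0.64                 # random close packing of equal spheres

n_balls = packing_fraction * stadium_volume / ball_volume
print(f"roughly 10^{round(math.log10(n_balls))} golf balls")
```

The point, as in the book, is that only the order of magnitude survives the sloppiness of the inputs.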

I’m now reading Grit, by Angela Duckworth. Another side effect of reading more is that I find myself reading one fiction book, one nonfiction book, and one “simple” book at the same time.

Written while on a bus without internet to Heathrow, minus the pictures (which were added at Heathrow).

## How fat would we have to get to balance carbon emissions?

Let’s consider a ridiculous solution to a real problem. We’re unearthing tons of carbon, burning it, and releasing it into the atmosphere.

Disclaimer: There are several greenhouse gasses, and lots of other things that we’re throwing wantonly into the environment. Considering them makes things incredibly complicated incredibly quickly, so I blithely ignore them in this note.

Such rapid changes have side effects, many of which lead to bad things. That’s why nearly 150 countries ratified the Paris Agreement on Climate Change.1 Even if we assume that all these countries will accomplish what they agreed to (which might be challenging for the US),2 most nations and advocacy groups are focusing only on increasing efficiency and reducing emissions. These are good goals! But what about all the carbon that is already in the atmosphere?3

You know what else is a problem? Obesity! How are we to solve all of these problems?

Looking at this (very unscientific) graph,4 we see that the red isn’t keeping up! Maybe we aren’t using the valuable resource of our own bodies enough! Fat has carbon in it — often over 20% by weight. What if we took advantage of our propensity to become propense? How fat would we need to get to balance last year’s carbon emissions?

That’s what we investigate here.
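To set the scale before diving in, here is my own back-of-envelope version of the question. All figures are rough assumptions of mine (annual emissions around 36 billion tonnes of CO2, a world population around 7.5 billion, and body fat, as triglyceride, roughly 77% carbon by mass), not numbers from this post.

```python
# Back-of-envelope: how much fat per person would store one year's carbon?
# All inputs are rough assumptions for illustration.
co2_emitted = 36e9                        # tonnes of CO2 emitted per year (approx.)
carbon_emitted = co2_emitted * 12 / 44    # carbon is 12/44 of CO2's mass
population = 7.5e9
carbon_per_person = carbon_emitted / population   # ~1.3 tonnes of carbon each
fat_fraction_carbon = 0.77                # triolein, C57H104O6, is ~77% carbon by mass
fat_per_person = carbon_per_person / fat_fraction_carbon
print(f"each person would need to gain about {fat_per_person:.1f} tonnes of fat")
```

That it comes out to tonnes per person, per year, is exactly the kind of ridiculousness this note is after.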


## Happy Birthday to The Science Guy

On 10 July 1917, Donald Herbert Kemske (later known as Donald Jeffry Herbert) was born in Waconia, Minnesota. Back when university educations were a bit more about education and a bit less about establishing a vocation, Donald studied general science and English at La Crosse State Normal College (which is now the University of Wisconsin-La Crosse). But Donald liked drama, and he became an actor. When World War II broke out, Donald joined the US Army Air Forces, flying over 50 missions as a bomber pilot.

After the war, Donald began to act in children’s programs at a radio station in Chicago. Perhaps it was because of his love of children’s education, perhaps it was the sudden visibility of the power of science, as evidenced by the nuclear bomb, or perhaps something else – but Donald had an idea for a tv show based around general science experiments. And so Watch Mr. Wizard was born on 3 March 1951 on NBC. (When I think about it, I’m surprised at how early this was in the life of television programming). Each week, a young boy or a girl would join Mr. Wizard (played by Donald) on a live tv show, where they would be shown interesting and easily-reproducible science experiments.

Watch Mr. Wizard was the first such tv program, and one might argue that its effects are still felt today. A total of 547 episodes of Watch Mr. Wizard aired. By 1956, over 5000 local Mr. Wizard science clubs had been started around the country; by 1965, when the show was cancelled by NBC, there were more than 50000. In fact, my parents have told me of Mr. Wizard and his fascinating programs. Such was the love and reach of Mr. Wizard that on the first episode of Late Night with David Letterman, the guests were Bill Murray, Steve Fessler, and Mr. Wizard. He’s also mentioned in the song Walkin’ on the Sun by Smash Mouth. Were it possible for me to credit the many scientists who certainly owe their start to Mr. Wizard, I would do so here.

I mention this because the legacy of Mr. Wizard was passed down. Don Herbert passed away on June 12, 2007. In an obituary published a few days later, Bill Nye writes that “Herbert’s techniques and performances helped create the United States’ first generation of homegrown rocket scientists just in time to respond to Sputnik. He sent us to the moon. He changed the world.” Reading the obituary, you cannot help but think that Bill Nye was also inspired to start his show by Mr. Wizard.

In fact, 20 years ago today, on 10 September 1993, the first episode of Bill Nye the Science Guy aired on PBS. It’s much more likely that readers of this blog have heard of Bill Nye; even though production of the show halted in 1998, PBS still airs reruns, and it’s commonly used in schools (did you know it won an incredible 19 Emmys?). I, for one, loved Bill Nye the Science Guy, and I still follow him to this day. I think it is impossible to narrow down the source of my initial interest in science, but I can certainly say that Bill Nye furthered my interest in science and experiments. He made science seem cool and powerful. To be clear, I know science is still cool and powerful, but I’m not so sure that’s the popular opinion. (As an aside: I also think math would really benefit from having our own Bill Nye).

## Twenty Mathematicians, Two Hard Problems, One Week, IdeaLab2013

July has been an exciting and busy month for me. I taught number theory 3 hours a day, 5 days a week, for 3 weeks to (mostly) devoted and motivated high school students in the Summer@Brown program. In the middle, I moved to Massachusetts. Immediately after the Summer@Brown program ended, I was given the opportunity to return to ICERM to participate in an experimental program called an IdeaLab.

IdeaLab invited 20 early career mathematicians to come together for a week and to generate ideas on two very different problems: Tipping Points in Climate Systems and Efficient Fully Homomorphic Encryption. Although I plan on writing a bit more about each of these problems and the IdeaLab process in action (at least from my point of view), I should say something about what these are.

Models of Earth’s climate are used all the time: to give daily weather reports, to predict and warn about hurricanes, and to attempt to understand the effects of anthropogenic sources of carbon on long-term climate. As we know from uncertainty in weather reports, these models aren’t perfect. In particular, they don’t currently predict sudden, abrupt changes called “tipping points.” But are tipping points possible? There have been warm periods following ice ages in the past, so it seems that there might be tipping points that aren’t modelled in the system. Understanding these forms the basis for the Tipping Points in Climate Systems project. This project also forms another link in Mathematics of Planet Earth.
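A toy caricature of a tipping point (my own illustration, not anything from the IdeaLab project) is a slow parameter drift through a fold bifurcation: the state tracks a stable equilibrium until that equilibrium abruptly ceases to exist, and the system jumps.

```python
# Toy tipping point: dx/dt = lam + x - x^3, with the forcing lam ramped slowly.
# The lower stable equilibrium vanishes once lam exceeds 2/(3*sqrt(3)) ~ 0.385,
# and the state x jumps abruptly to the upper branch: a "tipping point."
dt, steps = 0.05, 12000
x = -1.2                                # start near the lower stable equilibrium
for k in range(steps):
    lam = -0.6 + 1.2 * k / steps        # slow ramp of the forcing from -0.6 to 0.6
    x += dt * (lam + x - x ** 3)        # forward Euler step
print(f"final state x = {x:.2f}")       # ends on the upper branch, near +1.2
```

Nothing in the forcing itself is abrupt; the jump comes entirely from the geometry of the equilibria, which is what makes such events hard to foresee in far more complicated climate models.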

On the other hand, homomorphic encryption is a topic in modern cryptography. To encrypt a message is to make it hard or impossible for others to read unless they have a ‘key.’ You might think that you wouldn’t want someone holding encrypted data to be able to do anything with it, and in most modern encryption algorithms this is the case. But what if we were able to give Google an encrypted dataset and ask them to perform a search on it? Is it possible to have a secure encryption scheme that would allow Google to run some sort of search algorithm and give us the results, without Google ever understanding the data itself? It may seem far-fetched, but this is exactly the idea behind the Efficient Fully Homomorphic Encryption group. Surprisingly enough, it is possible. But known methods are obnoxiously slow and infeasible. This is why the group was after ‘efficient’ encryption.
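A small taste of the homomorphic idea (my own toy example, nothing to do with the group’s actual schemes): textbook RSA with tiny, wildly insecure parameters is multiplicatively homomorphic, so multiplying two ciphertexts yields a valid ciphertext of the product of the plaintexts, without anyone decrypting along the way.

```python
# Textbook RSA with toy parameters -- insecure, for illustration only.
p, q = 61, 53
n = p * q                      # 3233
e, d = 17, 2753                # e*d = 46801 = 15*(p-1)*(q-1) + 1

def encrypt(m):
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

# Multiply two ciphertexts without ever decrypting them:
c = (encrypt(7) * encrypt(6)) % n
print(decrypt(c))              # prints 42: E(7)*E(6) decrypts to 7*6
```

A *fully* homomorphic scheme supports both addition and multiplication on ciphertexts, which is enough to evaluate arbitrary circuits on encrypted data; that generality is what known constructions pay for in speed.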

So 20 early career mathematicians from all sorts of areas of mathematics gathered to think about these two questions. For the rest of this post, I’d like to talk about the structure and my thoughts on the IdeaLab process. In later posts, I’ll talk about each of the two major topics and what sorts of ideas came out of the process.

## Dancing one’s PhD

In my dealings with the internet this week, I am reminded of a quote by William Arthur Ward, the professional inspirator:

We can throw stones, complain about them, stumble on them, climb over them, or build with them.

In particular, two different math-related things have come to my attention. The first, most important, and most interesting: my friend Diana Davis created a video entry for the “Dance Your PhD” contest. It’s about Cutting Sequences on the Double Pentagon, and you can (and should) watch it on vimeo. It may even be the first math Dance-Your-PhD entry! You might even notice that I’m in the video, waving madly (I had thought it surreptitious at the time) around 3:35.

That’s the positive one, the “building with the internet,” a creative use of the now-common commodity. After the fold is the travesty.


## Ghostwritten Word

I’ve just learned of the concept of ghostwriting, and I’m stunned.

A friend and fellow grad student of mine cannot believe that I’ve made it this far without imagining it to be possible. I asked around, and I realized that I was one of the few who wasn’t familiar with ghostwriting.

Before I go on, I should specify exactly what I mean. By ‘ghostwriting,’ I don’t mean situations where the President or another statesman gives a speech that they didn’t write themselves, but that was instead written by a ghostwriter. That makes a lot of sense to me. I refer to the cases where a student goes to a person or service, hands over their assignment, and pays for it to be completed. And by assignment, I don’t just mean 20 optimization problems in one-variable calculus. I mean things like 20-page term papers on the parallels between the Meiji Restoration and the American occupation of Japan, or 50-page theses, or (so some claim) doctoral dissertations.


Task: Calculate $\displaystyle \sum_{i = 1}^{69} \sqrt{ \left( 1 + \frac{1}{i^2} + \frac{1}{(i+1)^2} \right) }$ as quickly as you can with pencil and paper only.
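(A hint and a sanity check, for the impatient: each summand is a perfect square, since $1 + \frac{1}{i^2} + \frac{1}{(i+1)^2} = \left(1 + \frac{1}{i} - \frac{1}{i+1}\right)^2$, so the sum telescopes to $n + 1 - \frac{1}{n+1}$. A quick verification in Python, comparing the direct floating-point sum against the exact telescoped value:)

```python
from fractions import Fraction
from math import sqrt, isclose

# Each summand is a perfect square:
#   1 + 1/i^2 + 1/(i+1)^2 = ((i^2 + i + 1) / (i*(i+1)))^2
#                         = (1 + 1/i - 1/(i+1))^2,
# so the sum telescopes to n + 1 - 1/(n+1).
n = 69
direct = sum(sqrt(1 + 1 / i**2 + 1 / (i + 1) ** 2) for i in range(1, n + 1))
exact = sum(Fraction(i * i + i + 1, i * (i + 1)) for i in range(1, n + 1))

assert exact == Fraction(n + 1) - Fraction(1, n + 1)
assert isclose(direct, float(exact))
print(exact)   # 4899/70
```

So the answer is $70 - \frac{1}{70} = \frac{4899}{70}$, which is indeed quick with pencil and paper once the square is spotted.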