# Morsels

These are "morsels" — little ideas or results that can be expressed in a short space. They may be exported or extended to full articles at a later time.

1. Convex shadows. Any convex 3D object casts a silhouette whose area depends on the viewing direction. Averaged over all viewing directions, the area of the silhouette is exactly a quarter of the object's surface area.

Why? By dimensional analysis, the size of the silhouette must be proportional to surface area. By subdividing the surface into shadow-casting pieces, you can prove that the constant doesn't depend on shape as long as the shape is convex. So use the example of a unit sphere to compute its value.

The same principle applies in higher dimensions $$n$$. The formulas for sphere surface area and volume involve the gamma function, which simplifies if you consider odd and even dimensions separately. Let $$\lambda_n$$ designate the constant for $$n$$ dimensions. Then:

$$\lambda_{n+1} = \begin{cases}\frac{1}{2^{n+1}} {n \choose n/2} & n\text{ even}\\\frac{1}{\pi}\frac{2^n}{n+1} {n \choose {\lfloor n/2\rfloor} }^{-1} & n\text{ odd}\end{cases}$$

$$\lambda_n = \frac{1}{2},\; \frac{1}{\pi},\; \frac{1}{4},\; \frac{2}{3\pi},\; \frac{3}{16},\; \frac{8}{15\pi},\; \ldots .$$
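A quick Monte Carlo sanity check of $$\lambda_3 = 1/4$$ (a sketch, not part of the argument above): the unit cube has surface area 6, and its silhouette area in direction $$u$$ is $$|u_x|+|u_y|+|u_z|$$, so the average shadow should come out near $$6/4 = 3/2$$.

```python
import math
import random

def random_unit_vector():
    # Gaussian components give a direction uniform over the sphere
    while True:
        v = [random.gauss(0.0, 1.0) for _ in range(3)]
        norm = math.sqrt(sum(x * x for x in v))
        if norm > 1e-12:
            return [x / norm for x in v]

def cube_shadow(u):
    # silhouette area of the unit cube viewed from direction u
    return sum(abs(x) for x in u)

random.seed(0)
samples = 200_000
avg = sum(cube_shadow(random_unit_vector()) for _ in range(samples)) / samples
# unit cube surface area is 6, so avg should be close to 6/4 = 1.5
```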

2. M'aide! One afternoon, you call all your friends to come visit you. If they all depart their houses at once, travel in straight lines towards you, and are uniformly distributed throughout the surrounding area, how many people should you expect to arrive as time goes by?

The expected number of arrivals at each moment is proportional to the time since you called. Imagine a circle, centered on you, whose radius expands at the same rate everyone travels. Whenever the circle engulfs someone's house, that's the exact moment the person arrives at your door. Uniform distribution implies number of new arrivals proportional to circumference; number of cumulative arrivals proportional to area.

The same principle generalizes to higher dimensions $$n$$: the arrival rate grows like $$t^{n-1}$$ and the cumulative count like $$t^n$$.
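A quick simulation of the 2D claim (a sketch; unit travel speed and a disk-shaped neighborhood are assumptions for concreteness):

```python
import math
import random

random.seed(1)
R = 1.0              # radius of the surrounding area
n_friends = 100_000
# sqrt trick: these distances make the points uniform over the disk
dists = [R * math.sqrt(random.random()) for _ in range(n_friends)]

# at unit speed, a friend at distance d arrives at time t = d, so the
# fraction arrived by time t should be the area fraction (t/R)^2
frac_by_half_time = sum(d <= 0.5 for d in dists) / n_friends
# expect about (0.5)^2 = 0.25
```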

3. Connect infinity. Extend the board game Connect Four so that it has infinitely many columns. You win after infinitely many moves if you've constructed an unbroken rightwardly-infinite horizontal sequence of pieces in any row and your opponent hasn't. (You don't need to fill the whole row, and diagonal/vertical chains don't count in this game.) Prove that you can force a draw.

Nailing down the proof is subtle, because blocking strategies for lower rows don't always generalize to higher rows. The general principle is to block your opponent infinitely many times in each row, preventing an unbroken chain. The specific principle is: to stop your opponent from reaching height $$k$$ in a certain column, don't put any pieces there; wait until they build a tower of height $$k-1$$, then put your piece on top.

For each row, choose infinite disjoint subsets of $$\mathbb{N}$$: $$T_1, T_2, T_3, T_4$$. Each $$T_k$$ is a predetermined list of columns where you will interrupt your opponent's towers of height $$k$$. Whenever your opponent moves, if their piece builds a tower of height $$k-1$$ in a column in $$T_k$$, put your next piece on top. Otherwise, put your piece in an empty column of $$T_1$$ to the right of all pieces played so far.

You never put pieces on your opponent's except to block them, and this blocking process cannot be stopped. A key insight is that you can't stymie your opponent by building your own interrupting towers, only by capping theirs: there is no way to build a tower of height 3 or 4 if the other player wants to prevent it.

4. Blackout Turing machines. A "blackout machine" is a Turing machine where every transition prints a 1 on the tape. The halting problem for blackout machines is decidable.

The tape of a blackout machine always consists of a contiguous region of 1s surrounded by infinite blank space on the left and right. Any machine that reads sufficiently many ones in a row must enter a control loop. By measuring the net movement during the loop, you can decide whether the machine will overall go left, right, or stay in place each iteration. Regardless of how many ones are in a row, using modular arithmetic, you can efficiently predict what state the machine will be in when it exits.

By this reasoning, a state-space graph will help us fast-forward the looping behavior of the machine. A q-state machine can enter a loop with period 1, 2, 3, …, or at most q. To anticipate its behavior on any-sized string of ones, it's therefore enough to know the string's length mod 2, mod 3, …, mod q. Thus the graph has a node for each entry side (left or right), each possible state of the machine's control, and each possible value of the string length mod 2, mod 3, …, mod q. By simulating the machine, you can efficiently compute the transitions between nodes of this graph, recording how the machine-and-tape will look whenever the machine exits the string of ones. (Add special nodes for "loops forever within the string of ones" and "halts".)

There are (on the order of) q! nodes in this graph—overkill, but still finite. You can simulate the machine in fast-forward using these graph transitions. After simulating a number of transitions equal to the number of nodes in this graph, the machine must either halt, loop forever within the region of ones, or revisit a node (in which case it loops forever, growing the region of ones).
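The heart of the fast-forwarding step can be sketched directly. Inside a run of ones, every transition reads a 1, so the control state evolves by a fixed map and must cycle within q steps; the net head movement over one cycle tells you whether the machine escapes left, escapes right, or loops in place forever. A sketch (the transition-table format here is made up for illustration):

```python
def loop_behavior(delta, start_state):
    """delta maps state -> (next_state, head_move) while reading a 1.
    (Every transition writes a 1, so within the region the symbol stays 1.)
    Returns (steps_before_cycle, cycle_period, net_move_per_cycle)."""
    seen = {}  # state -> (step index, head position) when first seen
    state, step, pos = start_state, 0, 0
    while state not in seen:
        seen[state] = (step, pos)
        state, move = delta[state]
        step += 1
        pos += move
    first_step, first_pos = seen[state]
    return first_step, step - first_step, pos - first_pos

# a 2-state machine that marches right two cells per cycle
print(loop_behavior({0: (1, +1), 1: (0, +1)}, 0))   # -> (0, 2, 2)
# a machine that steps right once, then drifts left one cell per cycle
print(loop_behavior({0: (1, +1), 1: (1, -1)}, 0))   # -> (1, 1, -1)
```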

5. Lopped polygons. Lop off the corners of a regular polygon, forming a regular polygon with twice as many edges. Repeat this process until, in the limit, you obtain a circle. As you might expect, that circle turns out to be the incircle of the polygon (i.e. the largest circle contained in it).

Interestingly, the cutting process multiplies the original perimeter of the object by $$(\pi/n) \cot(\pi/n)$$, where $$n$$ is the original number of sides. It multiplies the area by the same amount (!).
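A numeric check. Each lopping step preserves the incircle (alternate edges of the new polygon lie on the original edge lines), so the limiting circle has the incircle's radius $$r$$, and the standard closed forms for a regular $$m$$-gon with incircle radius $$r$$ give the shrink factor directly:

```python
import math

def perimeter(m, r=1.0):
    # perimeter of a regular m-gon with incircle radius r
    return 2 * m * r * math.tan(math.pi / m)

def area(m, r=1.0):
    # area of a regular m-gon with incircle radius r
    return m * r * r * math.tan(math.pi / m)

n, r = 5, 1.0
factor = (math.pi / n) / math.tan(math.pi / n)      # (pi/n) cot(pi/n)

perim_ratio = (2 * math.pi * r) / perimeter(n, r)   # circle / original polygon
area_ratio = (math.pi * r ** 2) / area(n, r)
# both ratios equal the factor above
```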

6. NP-complete morphemes. If you have a language (set) of even-length strings, you can form a language comprising just the first half of each string, or just the second half.

1. Is there a language which is in P, but where each half-language is NP-complete?
2. Is there a language which is NP-complete, but where each half-language is in P?

(What are P and NP?[^1])

The answer to both questions is yes. For the first question, make a language of strings where each half contains a hard problem and a solution to the problem in the other half. Then each half-language is hard because it's a hard problem paired with a solution to an unrelated problem. But the combined language is easy because then both problems come with solutions.

For the second question, make a language by pairing up easy problems according to a complicated criterion. So the half-languages are easy, because they're just sets of easy problems. But the overall language is hard because deciding whether the pairing is valid is hard.

Concrete examples: In the first question, take strings of the form GxHy, where G and H are isomorphic Hamiltonian graphs; x is a Hamiltonian path through H; and y is an isomorphism from G to H. (Pad as necessary so Gx and Hy have the same length.) When G and H are separated, x and y provide no useful information: any vertex sequence is a Hamiltonian path through some graph isomorphic to G, and any permutation is an isomorphism from some graph to H. So each half-language is as hard as HAMPATH. When united, GxHy is easy to check, because x and y together confirm that H, and hence G, is Hamiltonian.

In the second question, take strings of the form GH where G and H are graphs and G is isomorphic to a subgraph of H. This is an NP-complete problem. But each half-language is just the set of well-formed graphs, which is easy to check.

7. Juking jukebox. I couldn't tell whether my music player was shuffling songs or playing them in a fixed sequential order starting from the same first song each session. Part of the problem was that I didn't know how many songs were in the playlist, and all I could retain about songs was whether I had heard them before (not, e.g., which songs usually follow which).

You can model this situation as a model-selection problem over bitstrings representing whether played songs have been heard (0) or not-heard (1). Every bitstring is consistent with random play order, but only a handful are consistent with sequential play order. Therefore, the longer you listen and remain unsure, the vastly more probable the sequential model is. (Specifically, if you play $$n$$ songs, there are $$n+1$$ outcomes consistent with sequential play out of $$2^n$$ total outcomes.) (And yes: it turns out my music player was sequential, which was strongly indicated by the very fact that I remained unsure whether it was random.)
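The model comparison can be made concrete. A hedged sketch: treat the random model as i.i.d. fair heard/not-heard bits and put a uniform prior over the $$n+1$$ sequential patterns (both simplifying assumptions):

```python
def likelihood_ratio(observed):
    """P(observation | sequential) / P(observation | random), where
    observed is a string of '0' (heard before) / '1' (new) in play order."""
    n = len(observed)
    # sequential play from a fixed first song looks like 0^k 1^(n-k):
    # the songs already heard in past sessions come first, then only new ones
    consistent = all(a <= b for a, b in zip(observed, observed[1:]))
    p_sequential = (1 / (n + 1)) if consistent else 0.0
    p_random = 0.5 ** n
    return p_sequential / p_random

print(likelihood_ratio("0011"))   # (1/5)/(1/16) = 3.2 in favor of sequential
print(likelihood_ratio("0101"))   # 0.0: impossible under sequential play
```

The ratio $$2^n/(n+1)$$ grows quickly, which is why remaining unsure for a long time is itself strong evidence for the sequential model.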

8. Loop counting. If there's a unique two-step path between every pair of nodes in a directed graph, then every node has $$k$$ neighbors and the graph has $$k$$ loops, where $$k^2$$ is the number of nodes in the graph.

Astonishingly, you can prove this result using the machinery of linear algebra. Represent the graph as a matrix $$M$$ whose $$(i,j)$$ entry is 1 if there is an edge from node $$i$$ to node $$j$$, and 0 otherwise. The sum of the diagonal entries (the trace of the matrix) tells you the number of loops. The trace also equals the sum of the eigenvalues, counted with multiplicity, so you can count loops in a graph by finding all the eigenvalues of the corresponding matrix.

The property about paths translates into the matrix equation $$M^2 = J$$, where $$J$$ is the matrix of all ones. (The $$r$$-th power of a matrix counts $$r$$-step paths between nodes.) This matrix $$J$$ has several special properties: multiplying a matrix by $$J$$ computes its row/column totals (i.e. the number of neighbors, for a graph!); multiplying $$J$$ by $$J$$ produces a scalar multiple of $$J$$; and $$J$$ zeroes out any vector whose entries sum to zero, and these vectors form an $$(n-1)$$-dimensional subspace.

The equation $$M^2=J$$, together with these special properties, shows that all higher powers of $$M$$ are just multiples of $$J$$. In particular, examining $$M^3=MJ=JM$$ reveals that every node in the graph has the same number of neighbors $$k$$, so $$M^3 = kJ$$. (And $$k^2 = n$$, because $$k^2$$ counts the two-step paths out of a node, and these lead uniquely to each node in the graph.)

Because of this regularity, $$M$$ sends the column vector of all ones to the column vector of all $$k$$'s, so $$M$$ has an eigenvalue $$k$$. From the powers of $$M$$, $$M$$ furthermore has $$n-1$$ independent generalized eigenvectors with eigenvalue zero. A matrix always has exactly $$n$$ independent generalized eigenvectors, and we've just found all of them. The sum of the corresponding eigenvalues is $$0+0+\ldots+0+k = k$$, which establishes the result.

The same procedure establishes a more general result: If there's a unique r-step path between every pair of nodes in a graph, then every node has $$k$$ neighbors and the graph has $$k$$ loops, where $$k^r$$ is the number of nodes in the graph.
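A concrete instance for $$r=2$$: the de Bruijn graph on 2-letter binary words (with an edge $$ab \to bc$$ for every $$c$$) has a unique two-step path between every pair of nodes, and the claims can be verified directly:

```python
# nodes are 2-letter binary words; edge ab -> bc whenever the labels overlap
nodes = [(a, b) for a in (0, 1) for b in (0, 1)]
n = len(nodes)
M = [[1 if u[1] == v[0] else 0 for v in nodes] for u in nodes]

# M^2 should be J, the all-ones matrix: unique 2-step paths between all pairs
M2 = [[sum(M[i][t] * M[t][j] for t in range(n)) for j in range(n)]
      for i in range(n)]

loops = sum(M[i][i] for i in range(n))  # the trace counts self-loops
k = sum(M[0])                           # number of neighbors of each node
# expect: M2 == J, loops == k == 2, and k*k == n == 4
```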

9. Cheap numbers. The buttons on your postfix-notation calculator each come with a cost. You can push any operator (plus, minus, times, floored division, modulo, and copy) onto the stack for a cost of one. You can also push any integer (say, in a fixed range 1…9) onto the stack; the cost is the value of the integer itself. Find the smallest-cost program for producing any integer.

For example, optimal programs for the first few integers are 1, 2, 3, 2@+, 5, 3@+, 13@++, 2@@**, 3@*. (Here @ denotes the operator which puts a copy of the top element of the stack onto the stack.)

You can find short programs using branch-and-bound search over the space of stack states. There are some optimizations: Using dynamic programming (Dijkstra's algorithm), you can cache the lowest cost to reach each state, so that you never have to expand a state more than once. Second, you can use the size of the stack as an admissible heuristic: if a stack has $$n$$ items in it, it will take at least $$n-1$$ binary operators to collapse it into a single number.
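A minimal sketch of that search (Dijkstra over stack states; the value cap and stack-depth cap are pragmatic assumptions of mine to keep the state space finite):

```python
import heapq

def cheapest(target, digits=range(1, 10), cap=1000, max_depth=4):
    """Minimum total button cost to leave exactly `target` on the stack."""
    dist = {(): 0}
    heap = [(0, ())]
    while heap:
        cost, stack = heapq.heappop(heap)
        if stack == (target,):
            return cost
        if cost > dist.get(stack, cost):
            continue  # stale heap entry
        moves = []
        if len(stack) < max_depth:
            for d in digits:                       # push a digit: costs d
                moves.append((cost + d, stack + (d,)))
            if stack:                              # '@' copies the top: costs 1
                moves.append((cost + 1, stack + (stack[-1],)))
        if len(stack) >= 2:                        # binary operators: cost 1
            a, b = stack[-2], stack[-1]
            outs = [a + b, a - b, a * b]
            if b != 0:
                outs += [a // b, a % b]            # floored division, modulo
            for r in outs:
                if abs(r) <= cap:
                    moves.append((cost + 1, stack[:-2] + (r,)))
        for c, s in moves:
            if c < dist.get(s, c + 1):
                dist[s] = c
                heapq.heappush(heap, (c, s))
    return None

print([cheapest(i) for i in range(1, 10)])  # -> [1, 2, 3, 4, 5, 5, 7, 6, 5]
```

The costs agree with the example programs above: 6 costs 5 (3@+), 8 costs 6 (2@@**), and 9 costs 5 (3@*).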

There's a nice theorem, too: For any integer, the lowest-cost program only ever needs the constants 1 through 5 (additional constants never offer additional savings). The program never costs more than the integer itself, and in fact for sufficiently large integers (over 34), it costs less than half of the integer.

To prove the theorem, first show that every integer $$n$$ has a (possibly nonoptimal) program which costs at most $$n$$ and which uses only the constants 1…5.[^2] Next, suppose you have an optimal-cost program which uses any set of constants. By the previous result, you can replace each constant with a short script that uses only 1…5, and this substitution won't increase the program's cost. Hence the program-with-substitutions is still optimal but only uses the digits 1…5, QED.

10. Rolls with Advantage. In D&D 5e, certain die rolls have advantage, meaning that instead of rolling the die once, you roll it twice and use the larger result. Similarly, a modifier is a constant you add to the result of a die roll. There's a simple probability nomogram for deciding whether you should prefer to roll (d20 + x) with advantage, or (d20 + y) without.

To use: draw a line from the DC on the first axis to the modifier on the second axis and continue until you hit the third axis. Read the right-hand value for the probability of success when rolling d20, or the left hand value when rolling d20 with advantage. For example, the orange line shows a situation where DC=16, modifier=+1. With advantage, success rate is around 50%. Without, it's around 30%.

The equation governing the probability $$P$$ of beating a DC of $$D$$ with a modifier of $$X$$ when rolling a $$g$$-sided die is: $$g(1-P) = D - X - 1$$

With advantage, the equation is surprisingly similar: $$g\sqrt{1-P} = D - X - 1$$

And so both can be plotted on the same sum nomogram.
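The probabilities are easy to check numerically (assuming the usual convention that a roll succeeds when die + modifier ≥ DC):

```python
def success(dc, mod, g=20, advantage=False):
    # a g-sided die shows each of 1..g with equal probability;
    # the roll succeeds when face + mod >= dc
    faces = max(0, min(g, g + 1 - dc + mod))   # number of winning faces
    p = faces / g
    # with advantage you fail only if both rolls fail
    return 1 - (1 - p) ** 2 if advantage else p

# the worked example from the text: DC 16, modifier +1
print(success(16, 1))                  # -> 0.3
print(success(16, 1, advantage=True))  # -> approximately 0.51
```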

11. Pyramid puzzles. The objective of the number-tower puzzle is to fill in numbers in a triangular structure so that every number is equal to the sum of the two numbers below it. Some numbers are specified in advance, providing constraints. Example:

The question is, when designing such a puzzle, how many and which spaces should you specify in advance to guarantee that there is a single unique solution? For example, specifying all values at the base nodes is sufficient. How do you characterize all such arrangements?

Solution: Let $$h$$ be the height of the tower. If you place $$h$$ values in the tower such that you can't solve for any one value in terms of the others, then those $$h$$ values uniquely determine a solution (and conversely). You never require more or fewer than $$h$$ values. And the values may be chosen freely; only their positions matter.

You can prove this using methods of linear algebra. The addition constraints for the tower define a system of linear equations ($$x_1 + x_2 = x_3$$, and so on), which you can represent as a matrix. Because filling out the $$h$$ base nodes of the tower uniquely determines the solution, and because the problem is linear, it always takes $$h$$ independent values to determine a solution.

You can formalize the condition "can't solve for any one value in terms of the others" via linear independence (computing a determinant). There is an ancestral-path matrix $$N$$ with one column for every node in the tower and one row for every node in the base of the tower; each entry $$N_{i,j}$$ counts the number of distinct upward paths from base node $$i$$ to node $$j$$.

Suppose you place $$h$$ values at certain places in the tower. Pick out the corresponding columns of $$N$$ to form an $$h\times h$$ submatrix $$D$$. The chosen placement uniquely determines a solution if and only if $$D$$ is invertible (has nonzero determinant). This is because the rows of $$N$$ form a basis for the solution space (each row is a primitive solution), so $$D$$ (which is the composition of $$N$$ with a matrix that quotients out everything but your $$h$$ filled-in places) measures whether you can uniquely recover the $$h$$ parameters of the solution space from the $$h$$ values you've filled in. You can fill in the whole tower uniquely if and only if those parameters, e.g. the values at the base of the tower, are recoverable.
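A sketch of the invertibility test. By Pascal's rule, the number of upward paths from base node $$i$$ to the node $$c$$ positions from the left in row $$r$$ (row 0 being the base) is $$\binom{r}{i-c}$$, so the submatrix $$D$$ can be built and tested directly (the coordinate convention is mine):

```python
from fractions import Fraction
from math import comb

def determines_tower(h, placement):
    """placement: h nodes (row, col), row 0 = base, col counted from the left.
    True iff filling exactly those nodes forces a unique solution."""
    # D[i][j] = number of upward paths from base node i to the j-th placed node
    D = [[Fraction(comb(r, i - c)) if 0 <= i - c <= r else Fraction(0)
          for (r, c) in placement] for i in range(h)]
    # invertible iff Gaussian elimination finds h nonzero pivots
    for col in range(h):
        pivot = next((r for r in range(col, h) if D[r][col]), None)
        if pivot is None:
            return False
        D[col], D[pivot] = D[pivot], D[col]
        for r in range(col + 1, h):
            f = D[r][col] / D[col][col]
            for c in range(col, h):
                D[r][c] -= f * D[col][c]
    return True

print(determines_tower(3, [(0, 0), (0, 1), (0, 2)]))  # base row: True
print(determines_tower(3, [(0, 0), (0, 1), (1, 0)]))  # (1,0)=(0,0)+(0,1): False
```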

## Footnotes:

[^1]: By the way, P problems are ones that are easy to solve. For example: is this string a well-formed graph without any syntax errors? In contrast, NP-complete problems are the hardest problems that still have easy-to-verify solutions. For example: is there a path through this graph that visits every node exactly once? It's a hard problem, but verifying a candidate solution is easy. Here's another NP-complete problem: is this graph isomorphic to a subgraph of that one? Hard in general, but verifying a proposed correspondence is easy.

[^2]: Proof: Check, by hand, that you can make each of the first 34 numbers with a program that costs no more than the number and uses only the digits 1…5. (For any smaller set of digits, this part fails, because 5 costs more than 5 to make.) For integers larger than 34, a stronger property emerges: the optimal 1…5 program for each integer costs less than half the integer itself, minus one (!). You can prove this by induction. Check cases 34 through 256 by hand. Then, if the property holds for every integer between 34 and some power of two, it holds for every integer between that power of two and the next one. Indeed, any integer in that range can be written as the sum of two numbers larger than 34, and by the induction hypothesis, the program which generates those two numbers and then adds them costs less than half the integer.