
Wednesday, March 23, 2022

Partition Game

Alice and Bob are playing a game as follows. Initially there are $n > 1$ piles containing $s$ stones each. Alice takes the first turn, and at each step the player to move chooses one pile and discards the other piles. Then that player divides the stones from the chosen pile into $n$ new piles, such that each pile contains at least 1 stone. The player who cannot move loses.

Determine all $s$ such that Alice has a winning strategy.

Solution

We classify every natural number as a winning number or a losing number with the following recursion: the numbers $1, \dots, n-1$ are losing numbers. If a number can be partitioned into $n$ losing numbers, then it's a winning number; otherwise it's a losing number. In other words, any given partition of a losing number into $n$ summands must contain a winning number.

Now the game is a standard Sprague-Grundy game. If a player receives a state where some pile contains a winning number, then that state is a winning state. For example, if some pile contains exactly $n$ stones, the player can split that pile into all ones, regardless of the other piles. If a player receives a state where all piles are losing numbers, then it's a losing state.

Let $m=n-1$. Now we establish the base cases. $1, \dots, m$ are losing numbers. Then $n$ can be partitioned into all ones, and $mn$ can be partitioned into all $m$'s. It's easy to show that every number in between can also be partitioned into $n$ summands between $1$ and $m$ inclusive, so $n,\dots,mn$ are winning numbers.

Now we claim: let $x \equiv p \mod mn$ with $0 \leq p < mn$; then $x$ is a winning number if and only if $p = 0$ or $p \geq n$, and it is a losing number if and only if $p \in [1,m]$. We prove it by induction on $k = \lceil x / mn \rceil$. The case $k = 1$ is the base case above.

Now suppose we have a number $x$ with residue in $[1,m] \mod mn$, and suppose there were a way to partition it into $n$ summands whose residues all stay within $[1,m] \mod mn$. The smallest sum of residues that can be attained is $n$ and the largest is $mn$, all of which lie outside $[1,m] \mod mn$, a contradiction. So in any partition, some summand must have residue $0$ or $\geq n \mod mn$; let $y$ be such a summand. Note that $\lceil y / mn \rceil < \lceil x / mn \rceil$ in each of these cases, so $y$ is a winning number by the induction hypothesis, and therefore $x$ is a losing number.

Now suppose we have a number $y = kmn$ divisible by $mn$; then we can partition it into $n$ numbers all having residue $m$. The easiest is to take the first $m$ terms equal to $m$ and the last term $y - m^2 = kmn - m^2 = (k-1)mn + m$. Each of these numbers has residue $m$, so each is a losing number: the terms equal to $m$ by the base case, and the last term because we have just shown that numbers with residue in $[1,m]$ and quotient up to $k$ are losing. So $y$ is a winning number.

Next, suppose $y$ has residue in $[n, mn-1]$. Using the same argument as in the base case, $y$ can always be decomposed into a sum of $n$ numbers with residues in $[1,m]$. The quotients of these summands are not necessarily smaller than that of $y$, but we have already proved that the numbers with residue in $[1,m]$ and quotient $k$ are losing as well, so $y$ is a winning number. This completes the induction: Alice has a winning strategy exactly when $s$ is a winning number, i.e. when $s \mod mn$ is $0$ or at least $n$.
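As a sanity check, the winning/losing classification can be brute-forced for small $n$ (a quick Python sketch; the function names are ours). For $n = 3$ we have $mn = 6$, so the losing numbers should be exactly those $\equiv 1, 2 \mod 6$:

```python
def classify(n, limit):
    # losing[s] is True iff s is a losing number, for s = 1..limit;
    # s is winning iff it splits into n parts (each >= 1) that are all losing
    losing = {}

    def splittable(s, parts):
        # can s be written as a sum of `parts` losing numbers?
        if parts == 1:
            return losing.get(s, False)
        return any(splittable(s - a, parts - 1)
                   for a in range(1, s - parts + 2)
                   if losing.get(a, False))

    for s in range(1, limit + 1):
        losing[s] = s < n or not splittable(s, n)
    return losing
```

Running `classify(3, 30)` confirms the $n=3$ pattern, and `classify(2, 20)` confirms that for $n=2$ the losing numbers are the odd numbers.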

Friday, March 11, 2022

Toggling lights 3 at a time

Alice and Bob are playing a game. $n$ lights are put on a line with individual switches, initially all off. Then the two phases happen as follows:

1. Alice turns on some of these lights

2. Bob may do the following action: choose a contiguous block of 3 lights and toggle these 3 lights. He may repeat this action as many times as he wants for different blocks.

And the scoring of the game is as follows:

In phase 1, Alice loses 1 point for each light that she turns on. In phase 2, Alice gains 1 point every time Bob performs an operation. Then at the end of the game, Alice gains $i$ points for each light $i$ that is still on. Obviously Alice wants her score to be as high as possible, and Bob wants her score to be as low as possible. Determine the maximum score that Alice can get if both players play optimally.

Solution

Let $T_k$ denote the action by Bob of toggling the contiguous block of lights $k, k+1, k+2$, where $k \in [1, n-2]$. If light $k+3$ is on and light $k$ is off, then by performing $T_k + T_{k+1}$ he can turn off $k+3$ and turn on $k$. The net effect on the score is $-1$, so this move is beneficial to Bob; therefore, he will keep doing it as long as there is a light on at some position $k > 3$. Furthermore, if two lights $k > l$ are on with $k \equiv l \mod 3$, then by repeating the operation above Bob can turn off both lights. The net effect on the score is $-k-l+2\frac{k-l}{3} < 0$. This means that it is not optimal for Alice to turn on two lights that have the same residue mod 3, so Alice should turn on at most 3 lights. But even then, if we ever arrive at the condition where lights $(1,2,3)$ are on, then Bob can perform $T_1$ to turn off all the lights. So Alice should turn on at most 2 lights.

The final condition most beneficial to Alice is either $(3)$ or $(1,2)$; both have the same value, and each is accessible from the other in phase 2. But obviously it's more beneficial for Alice to turn on only 1 light, and it should be the largest light divisible by 3, so that Bob has to spend many moves shifting it down. So Alice should turn on $3\lfloor n/3 \rfloor$, and Bob will move it all the way down to 3.

The total score is then $-1 + 2(\lfloor n/3 \rfloor - 1) + 3 = 2\lfloor n/3 \rfloor$: Alice pays 1 point to turn the light on, Bob performs $2(\lfloor n/3 \rfloor - 1)$ operations shifting it from $3\lfloor n/3 \rfloor$ down to $3$, and the final light is worth 3 points.
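Both phases can be brute-forced for small $n$ to sanity-check this kind of endgame reasoning (a Python sketch; the function names are ours, and we use the fact that an optimal Bob never performs the same $T_k$ twice, since the two copies would cancel):

```python
from itertools import combinations

def bob_best(n, on):
    # Bob minimizes (#operations) + (sum of positions left on); since a
    # repeated T_k cancels itself, brute-force over subsets of T_1..T_{n-2}
    ks = range(1, n - 1)
    best = None
    for r in range(n - 1):
        for subset in combinations(ks, r):
            state = set(on)
            for k in subset:
                state ^= {k, k + 1, k + 2}  # toggle lights k, k+1, k+2
            cost = r + sum(state)
            if best is None or cost < best:
                best = cost
    return best

def alice_best(n):
    # Alice's score = -(#lights turned on) + (#Bob ops) + (final sum);
    # she maximizes over all 2^n choices of initial lights
    best = 0  # turning on nothing scores 0
    for r in range(1, n + 1):
        for on in combinations(range(1, n + 1), r):
            best = max(best, -r + bob_best(n, on))
    return best
```

This is exponential in $n$, so it is only a check for very small cases, not a solution method.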

Note: the number 3 can be replaced by a larger number $p$, but the principle is the same. Any light on above $p$ can be shifted down for a net gain for Bob. However, the final condition is tricky. Take $p=5$ for example. We don't want $(1,2,3,4,5)$ because then Bob can simply turn them all off in one fell swoop. We also don't want $(3,4,5)$ because Bob can still replace it by $(1,2)$ for a net gain. In fact, the optimal condition for Alice is $(3,5)$: while Bob can replace it by $(1,2,4)$, it costs him 1 point to do so, so it's not a beneficial move for Bob. Likewise, for $p=6$, the best condition for Alice is $(5,6)$ because it's "just under" $(1+2+3+4+5+6)/2$. The number of lights that would be left on so that the sum is just under $p(p+1)/2$ is $\lfloor (p-1)/2 \rfloor$, which can be proved by induction. However, the exact configuration is harder to determine in a closed form expression. This matters because, in the course of shifting the lights down during phase 2, Bob may inadvertently have to perform the same $T_k$ twice, in which case he would simply skip that action, so the calculation of the final point count becomes more complicated.

Wednesday, March 2, 2022

Hyper-triangular numbers

The triangular number $T_n$ is defined as $T_n = 1 + 2 + \dots + n$.

The tetrahedral number $R_n$ is defined as $R_n = T_1 + T_2 + \dots + T_n$.

Let $T^k_n$ denote the $k$-th dimensional triangular numbers, with $T^1_n = 1+1+\dots+1 = n$, $T^2_n = T_n$, and $T^3_n = R_n$. Specifically, the higher dimensional triangular numbers are defined as: $$ T^{k+1}_n = T^k_1 + T^k_2 + \dots + T^k_n$$ Prove that: $$T^k_n.T^m_1 + T^k_{n-1}.T^m_2 + \dots + T^k_1.T^m_n = T^{k+m+1}_{n}$$
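For small $k, m, n$ the identity can be checked numerically (a Python sketch; with the indices as written, the convolution comes out to $T^{k+m+1}_{n}$):

```python
def T(k, n):
    # k-th dimensional triangular numbers: T(1, n) = n and
    # T(k+1, n) = T(k, 1) + T(k, 2) + ... + T(k, n)
    if k == 1:
        return n
    return sum(T(k - 1, i) for i in range(1, n + 1))

def convolution(k, m, n):
    # T^k_n . T^m_1 + T^k_{n-1} . T^m_2 + ... + T^k_1 . T^m_n
    return sum(T(k, n + 1 - i) * T(m, i) for i in range(1, n + 1))

# the convolution matches the (k+m+1)-dimensional triangular number
assert all(convolution(k, m, n) == T(k + m + 1, n)
           for k in range(1, 4) for m in range(1, 4) for n in range(1, 7))
```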

Tuesday, March 1, 2022

Number of contiguous blocks containing black balls over all permutations

$k$ black balls and $n-k$ white balls are arranged on a straight line, where $n \geq 1$, $0 \leq k \leq n$. The score of the said arrangement is the number of contiguous subsequences that contain at least one black ball. For example, the arrangement $W W B W B$ has two subsequences of length 1, and three subsequences of length 2 that contain at least one black ball. Of course there are subsequences of length 3 or higher as well, so the total score for this arrangement is 11.

What is the sum of the score of all $\binom{n}{k}$ arrangements? You may express this quantity in a recursive formula computable in $O(n^2)$ time, $O(n \log n)$ time, or $O(n)$ time.

Solution 1

We give a recursion formula computable in $O(n^2)$ time with DP.

More to come...

Solution 2A

We prove that the number is $\binom{n+2}{k+2} . \binom{k+1}{2}$, which is computable in $O(n)$ time simply by precomputing a factorial table.

First let $T_n$ be the $n$-th triangular number, $T_n = n(n+1)/2 = \binom{n+1}{2}$. We will also use the following well-known identity a lot: $$\sum_{i=1}^{n} \binom{i+p-1}{p} = \binom{n+p}{p+1}, p \geq 0$$ It can be proven in many ways: induction on $n$ keeping $p$ fixed, induction on $p$ keeping $n$ fixed, a telescoping sum, or a combinatorial argument (choosing $p+1$ items and conditioning on the first item chosen). For $p = 0$ it is a trivial sum, and for $p = 1$ it is the definition of triangular numbers.
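The identity is also easy to confirm numerically for small $n$ and $p$:

```python
from math import comb

# sum_{i=1}^{n} C(i+p-1, p) == C(n+p, p+1)
for p in range(0, 7):
    for n in range(1, 15):
        assert sum(comb(i + p - 1, p) for i in range(1, n + 1)) == comb(n + p, p + 1)
```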

Note that if all balls were black, then every single contiguous block would contribute to the score. There are $1 + 2 + \dots + n = T_n$ such blocks, and that is the score. But if there are white balls, then we want to calculate how much score is lost due to these pockets of white balls, over the course of all different arrangements.

Now, given a particular arrangement of $k$ black balls and $n-k$ white balls, we associate to it a sequence of $k+1$ integers denoting the numbers of white balls in between black balls, including before the first black ball and after the last black ball. For example, the arrangement WWBWWWBBBW corresponds to the sequence $(2, 3, 0, 0, 1)$. Each element in this sequence must be non-negative, there must be exactly $k+1$ integers, and they must add up to $n-k$.

Lemma

Suppose $A$ is the set of all sequences of non-negative integers with length $m \geq 2$ that sum up to $M$, that is $$ A = \{ (a_1, \dots, a_m) | a_i \geq 0, a_1 + \dots + a_m = M \}$$ then:

1. There are $|A| = \binom{M+m-1}{m-1}$ possible sequences.

2. If $m \geq 2$, the number $t \in [0,M]$ occurs $m \binom{M-t+m-2}{m-2}$ times in $A$, counted over all positions.

We prove both assertions by induction on $m$. For $m=2$, the possible sequences are $A = \{(M,0), (M-1,1), \dots, (0,M)\}$, so $|A| = M+1$ and each number occurs exactly twice, matching both assertions.

Now suppose both assertions hold for $m-1$; we prove them for $m$. Consider the first element $a_1 \in [0,M]$. Once its value is chosen, the remaining $m-1$ numbers must sum to $M - a_1$, so there are $\binom{M-a_1+m-2}{m-2}$ ways to complete the sequence, by the induction hypothesis. The total number of ways is the sum of that expression as $a_1$ ranges from $0$ to $M$. $$\sum_{a_1 = 0}^M \binom{M-a_1+m-2}{m-2} = \sum_{i=m-2}^{M+m-2} \binom{i}{m-2} = \sum_{j=1}^{M+1} \binom{j+(m-2)-1}{m-2} = \binom{M+m-1}{m-1}$$ The first two equalities are changes of variables, but the last one utilizes the identity above. We have proved the first assertion. For the second assertion, consider the case where the first element is $a_1 = t$. The rest of the $m-1$ elements must sum to $M-t$, so there are $\binom{M-t+m-2}{m-2}$ ways to complete the sequence. However, the number $t$ may appear in any of the other positions as well, so in total there are $m$ times that amount.

One might ask: sometimes the number $t$ itself may appear in multiple elements of the same sequence. For example, for $m=4$ and $M=11$ we have the sequence $(2,2,5,2)$. Indeed, when we consider $a_1=2$, we contribute one count towards the total number of "2"s appearing; likewise, when we consider $a_2=2$ and $a_4=2$ we also add one count each time, and the sequence itself contributes three counts towards the appearances of "2". So the total count is exact and correct, and there is no need to compensate for double counting. Thus the lemma is proven.

Alternatively, one could also prove the first assertion by using a combinatorial argument. Consider $M+m-1$ spaces and we are to choose $m-1$ spaces to serve as separators. The spaces between the separators will be the numbers in the sequence.
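Both assertions of the lemma can also be verified by direct enumeration for small $m$ and $M$ (a Python sketch):

```python
from itertools import product
from math import comb

def sequences(m, M):
    # all length-m tuples of non-negative integers summing to M
    return [s for s in product(range(M + 1), repeat=m) if sum(s) == M]

# |A| = C(M+m-1, m-1), and t occurs m*C(M-t+m-2, m-2) times over all positions
for m in range(2, 5):
    for M in range(0, 7):
        A = sequences(m, M)
        assert len(A) == comb(M + m - 1, m - 1)
        for t in range(M + 1):
            occurrences = sum(s.count(t) for s in A)
            assert occurrences == m * comb(M - t + m - 2, m - 2)
```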

Now observe that each arrangement corresponds one-to-one with a sequence like we described above. The process to convert from an arrangement to a sequence is invertible. Therefore, the total number of contiguous blocks in all arrangements is $$ T_n \binom{(n-k)+(k+1)-1}{(k+1)-1} = T_n \binom{n}{k} $$ But a lot of these blocks would contribute zero towards the total score. That is, for each sequence $(a_1, \dots, a_{k+1})$, the total loss of scores is the number of contiguous blocks that are contained fully within those pockets of white balls, that is: $$T_{a_1} + \dots + T_{a_{k+1}}$$ So we have to add these over all the possible sequences whose sum is $n-k$. This is where we utilize the lemma. Each time the number $t$ appears, it contributes $T_t$ towards the total loss of score. But thanks to the lemma, we know that the number $t$ appears exactly $$(k+1) \binom{(n-k)-t+k-1}{k-1} = (k+1) \binom{n-t-1}{k-1}$$ times. So the total loss of scores is the sum of this quantity as $t$ ranges from one to $n-k$. (Technically $t=0$ is a valid scenario but it incurs no loss of score because $T_0 = 0$ so we ignore it). $$\sum_{t=1}^{n-k} (k+1) \binom{n-t-1}{k-1} T_t $$ At this point, this sum can be computed in $O(n \log n)$ time using FFT techniques, as it's a sum of terms expressed in $t$ and $n-t$, but we can simplify the terms further to arrive at a closed form solution, by repeated use of the identity above. Let $B_i = \binom{i+k-2}{k-1}$ (where $k > 0$ is fixed), and for any $m > 0$: $$ B_1 + \dots + B_m = \sum_{i=1}^{m} \binom{i+k-2}{k-1} = \binom{m+k-1}{k}$$ by direct application. Then: $$ \sum_{i=1}^{m} (m+1-i). \binom{i+k-2}{k-1} = m.B_1 + (m-1).B_2 + \dots + 1.B_{m} = B_1 + (B_1+B_2) + \dots + (B_1 + \dots + B_m) = \sum_{i=1}^{m} \binom{i+k-1}{k} = \binom{m+k}{k+1}$$ One more time: $$ \sum_{i=1}^{m} T_{m+1-i}. 
\binom{i+k-2}{k-1} = T_m.B_1 + T_{m-1}.B_2 + \dots + 3.B_{m-1} + 1.B_{m} $$ $$ = [1+2+ \dots + m]B_1 + [1+2+ \dots + (m-1)]B_2 + \dots + [1+2]B_{m-1} + 1.B_m$$ $$ = 1.B_1 + (2.B_1 + 1.B_2) + (3B_1 + 2B_2 + B_3) + \dots + (m.B_1 + \dots + 1.B_m)$$ $$ =\sum_{i=1}^m \binom{i+k}{k+1} = \binom{m+k+1}{k+2}$$ Now, the total loss of scores becomes: $$ \sum_{t=1}^{n-k} (k+1) \binom{n-t-1}{k-1} T_t = (k+1) \sum_{i=1}^{n-k} T_{n-k+1-i} \binom{i+k-2}{k-1}$$ due to change of variable. Now we apply the latest identity above, with $m=n-k$: $$ = (k+1) \sum_{i=1}^{n-k} T_{n-k+1-i} B_i = (k+1) \binom{n+1}{k+2}$$

So the total score is then: $$ T_n \binom{n}{k} - (k+1) \binom{n+1}{k+2} = \frac{n(n+1) n!}{2. k!(n-k)!} - \frac{(k+1)(n+1)!}{(k+2)!(n-k-1)!} = \frac{(n+1)!}{k!(n-k-1)!} \left(\frac{n}{2(n-k)} - \frac{(k+1)}{(k+2)(k+1)} \frac{n-k}{n-k} \right)$$ $$ = \frac{(n+1)!}{k!(n-k)!} \left(\frac{n}{2} - \frac{n-k}{k+2} \right) = \frac{(n+1)!}{k!(n-k)!} \frac{kn+2n-2n+2k}{2(k+2)} = \frac{(n+1)!}{k!(n-k)!} \frac{k(n+2)}{2(k+2)}.\frac{k+1}{k+1} = \frac{(n+2)!}{(k+2)!(n-k)!} \frac{k(k+1)}{2} = \binom{n+2}{k+2} . \binom{k+1}{2}$$
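The closed form can be cross-checked against a brute-force count over all arrangements for small $n$ (a Python sketch):

```python
from itertools import combinations
from math import comb

def total_score(n, k):
    # sum, over all C(n,k) arrangements of black-ball positions,
    # of the number of contiguous blocks [i, j] containing a black ball
    total = 0
    for blacks in combinations(range(n), k):
        for i in range(n):
            for j in range(i, n):
                if any(i <= b <= j for b in blacks):
                    total += 1
    return total

assert all(total_score(n, k) == comb(n + 2, k + 2) * comb(k + 1, 2)
           for n in range(1, 8) for k in range(0, n + 1))
```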

Solution 2B

We also prove that the number is $\binom{n+2}{k+2} . \binom{k+1}{2}$ but using a different (simpler) combinatorial argument.

Given $n+2$ tokens in a row, mark $k+2$ of them as blue and $n-k$ as red. There are $\binom{n+2}{k+2}$ ways to choose. Now, among the blue tokens, further mark 2 tokens as "start" and "end", with the conditions that "start" must come before "end" and that there is at least one blue token between them. There are $\binom{k+2}{2} - (k+1) = \binom{k+1}{2}$ ways to choose. So the end result is an arrangement of red and blue tokens, with two tokens marked start and end containing at least one blue token in between. We will show that this arrangement corresponds to exactly one score-producing contiguous block in a particular arrangement of black and white balls, and that the mapping is invertible, so we obtain a bijection.

Arrange the balls according to the $n$ unmarked tokens, where all $n-k$ red tokens correspond to white balls, and $k$ unmarked blue tokens correspond to black balls. The contiguous block is represented by the tokens between start and end, including all red and blue tokens. Because the start and end must flank at least one blue token, then this contiguous block contributes one point towards the score.

In reverse, given an arrangement of balls and a score-producing contiguous block, create the corresponding arrangement of red and blue tokens, then insert the start token right before the first ball of the contiguous block and the end token right after its last ball. Thus we have a bijection, and the total score is $\binom{n+2}{k+2} . \binom{k+1}{2}$.

Solution 3

We present a different combinatorial argument, this time in the permutation space, not a combination space.

First we label all the balls $1,\dots,n$ such that the black balls are labeled $1,\dots,k$ and the white balls are labeled $k+1, \dots, n$. Consider any given permutation of the balls, and define the score of this permutation the same way as in the problem statement: the number of contiguous blocks that contain at least one black ball. Then we also consider the total score of all $n!$ permutations.

Note that each arrangement of unlabeled balls corresponds to $k! (n-k)!$ permutations of labeled balls, all having the same score. So the total score of all $n!$ permutations is $k! (n-k)!$ times the total score of all arrangements. We will show that the total score of all permutations is $\frac{(n+2)!}{2} \frac{k}{k+2}$.

First we arrange the items $1,\dots,n, X, Y$ in a row, with the condition that $X$ comes before $Y$. There are $\frac{(n+2)!}{2}$ ways to do so. Such a permutation corresponds to a permutation of balls, with the contiguous block given by the items between $X$ and $Y$. There may be some permutations where $X$ and $Y$ are adjacent, which means the contiguous block is empty, but that's okay for now. Because we have required $X$ to come before $Y$, the mapping is injective: no two item permutations map to the same (ball permutation, contiguous block) pair.

However, not all of those permutations will produce a score. In order to produce a score, there must be at least one of $1,\dots,k$ between $X$ and $Y$. So now we consider the induced permutations of the $k+2$ items $1, \dots, k, X, Y$. Each $(n+2)$-item permutation restricts to a unique $(k+2)$-item permutation, and by symmetry each small permutation arises from the same number of large permutations (namely $\frac{(n+2)!}{(k+2)!}$ of them).

There are $\frac{(k+2)!}{2}$ smaller permutations, because they still all have $X$ before $Y$. We want to exclude the ones where $X$ and $Y$ are together in this smaller permutation (even if in the original permutation there may be numbers larger than $k$ between them). There are $(k+1)!$ such permutations. So the remaining permutations are: $$\frac{(k+2)!}{2} - (k+1)! = \frac{(k+2) - 2}{2} (k+1)! = \frac{k}{2} (k+1)! = \frac{k}{k+2} \frac{(k+2)!}{2} $$ In other words, the good permutations are exactly $k/(k+2)$ of all small permutations. But since the large permutations map to the small permutations evenly, the number of good large permutations is $\frac{(n+2)!}{2} \frac{k}{k+2}$. Finally, we map back to the space of ball arrangements, where every $k!(n-k)!$ permutations correspond to one arrangement, so the total score of all arrangements is: $$\frac{(n+2)!}{2} \frac{k}{k+2} . \frac{1}{k!(n-k)!} .\frac{k+1}{k+1} = \frac{(n+2)!}{(n-k)!(k+2)!} \frac{(k+1)k}{2} = \binom{n+2}{k+2} \binom{k+1}{2}$$

Thursday, February 24, 2022

Neighboring non coprime

Let $n > 1$. A sequence $A_n$ of integers greater than 1 having length $n$ is called nice if any two neighboring elements are not coprime. That is, $$\gcd(A_i, A_{i+1}) > 1 \quad \forall i, 1 \leq i \leq n-1$$

Find the smallest real number $t$ such that, given any sequence $A_n$ of integers greater than 1 having length $n$, there exists a nice sequence of positive integers $B_n$ of length $n$ such that:

1. $\max{B_i} \leq n \max {A_i}$

2. $B_n$ differs from $A_n$ in at most $\lceil tn \rceil$ elements.

Solution

Note that for $t=1$ we can just change every single element to a small number like 2, so we can try to do better. Satisfying condition 2 is easy: for $t = 1/2$ we could change every other element to the LCM of its neighboring elements, but this can make it fail condition 1. For example, if the sequence consists of very large primes, then this strategy would cause every other element to be the product of two large primes, failing condition 1.

We propose that $t = 2/3$ satisfies the conditions. That is, given $$A_n = a_1, a_2, a_3,a_4,a_5,a_6,\dots,a_n$$ we take $$B_n = 2a_2, a_2, 2a_2, 2a_5, a_5, 2a_5, \dots$$

We can see that any two neighboring elements are not coprime, and that the maximum of $B_i$ is at most twice that of $A_i$. We show that no smaller $t$ works, using a series of counterexamples.
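The construction can be written out and checked mechanically (a Python sketch; `make_nice` is our name, and the handling of a trailing partial block is our choice):

```python
from math import gcd, ceil

def make_nice(A):
    # change two of every three elements: block (a, b, c) -> (2b, b, 2b);
    # a trailing partial block is filled from its middle (or only) element
    n = len(A)
    B = list(A)
    for i in range(0, n, 3):
        mid = A[i + 1] if i + 1 < n else A[i]
        for j in range(i, min(i + 3, n)):
            B[j] = mid if j == i + 1 else 2 * mid
    return B

A = [101, 103, 107, 109, 113]      # pairwise coprime: a worst case
B = make_nice(A)
assert all(gcd(B[i], B[i + 1]) > 1 for i in range(len(B) - 1))  # nice
assert sum(a != b for a, b in zip(A, B)) <= ceil(2 * len(A) / 3)  # condition 2
assert max(B) <= len(A) * max(A)                                  # condition 1
```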

For $n=3$, consider a sequence $p_1,p_2,p_3$ of very large distinct primes. The only way to make it nice by changing one element is to change $p_2 \to p_1p_3$, which would fail condition 1. So we must change at least 2 elements, which means $\lceil 3t \rceil \geq 2$, so $t > \frac{1}{3}$.

For $n=5$, consider the sequence $p_1,p_2,p_3,p_4,p_5$. If we were to change only 2 elements, either we change $p_2,p_4$, or we leave two neighboring primes intact. But changing $p_2,p_4$ forces each of them to share a factor with its untouched prime neighbors, e.g. $p_2$ must be a multiple of $p_1 p_3$, violating condition 1. So we must change at least 3 elements, which means $\lceil 5t \rceil \geq 3$, so $t > \frac{2}{5}$.

Thursday, February 10, 2022

Expected number of throws

$n$ fair coins are thrown all at once. Those that show Head are removed from the game, and the rest are thrown again. This process is repeated until there are no more coins left. What is the expected number of throws? Find a closed form formula, meaning that it may still be expressed as a sum but may not be expressed as a recurrence relation.

Solution

Let $f(n)$ be the desired result. There is a $\binom{n}{k} / 2^n$ chance that $k$ of them will show heads, and the process is repeated with $n-k$ coins. The rest of the game is expected to take $f(n-k)$ throws. If $k=0$ then we still have the same situation as before, and if $k=n$ we have 0 throws left. In all of these situations, we have to spend 1 turn doing the throw. So the recurrence relation is:

$$f(n) = 1 + \sum_{k=1}^{n-1} \frac{\binom{n}{k}}{2^n} f(k) + \frac{1}{2^n}f(n)$$ $$f(n) = \frac{2^n + \sum_{k=1}^{n-1} \binom{n}{k} f(k)}{2^n-1} $$

Given a large $n$, computing this would take $O(n^2)$ time, as we have to calculate and store the previous $f(k)$ values, and each takes $O(k)$ time. We can manipulate this expression further to come up with a summation formula for $f(n)$ that allows it to be computed in linear time.

First we claim and prove the following lemma. For $1 \leq m < n$

$$ \sum_{k=m}^{n} \binom{n}{k}\binom{k}{m} = \binom{n}{m}2^{n-m}$$

Proof: given $n$ white balls, we wish to pick $m$ of them and paint them black; the rest may stay white or be painted gray. The RHS is clear: there are $\binom{n}{m}$ ways to choose the black balls, and each remaining ball may stay white or be painted gray.

To show the LHS, first we pick a number $k \in [m,n]$. We choose $k$ balls and paint them gray. Out of these $k$, we further choose $m$ to be painted black.

Alternatively we can prove the lemma by dividing both sides by $\binom{n}{m}$: $$ \sum_{k=m}^{n} \frac{n!}{k!(n-k)!} \frac{k!}{m!(k-m)!} \frac{m!(n-m)!}{n!} = 2^{n-m}$$ $$ \sum_{k=m}^{n} \frac{(n-m)!}{(n-k)!(k-m)!} = 2^{n-m}$$ $$ \sum_{i=0}^{n-m} \frac{(n-m)!}{(n-m-i)!i!} = 2^{n-m}$$

which is a well-known identity.
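A quick numerical confirmation of the lemma for small ranges:

```python
from math import comb

# verify: sum_{k=m}^{n} C(n,k) C(k,m) = C(n,m) 2^(n-m) for 1 <= m < n
for n in range(2, 13):
    for m in range(1, n):
        lhs = sum(comb(n, k) * comb(k, m) for k in range(m, n + 1))
        assert lhs == comb(n, m) * 2 ** (n - m)
```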

Given the lemma, now we claim the following statement: let $a_n = \frac{2^n}{2^n-1}$, then: $$f(n) = \binom{n}{1}a_1 - \binom{n}{2}a_2 + \binom{n}{3}a_3 + \dots + (-1)^{n+1} \binom{n}{n}a_n$$ We will show that this formula matches our recurrence relation above by strong induction. Suppose it holds for $1,2,\dots,n-1$; then we first evaluate the following expression: $$\sum_{k=1}^{n-1} \binom{n}{k}f(k) = \binom{n}{1}f(1) + \binom{n}{2}f(2) + \dots + \binom{n}{n-1}f(n-1) $$ $$= \sum_{k=1}^{n-1} \binom{n}{k} \sum_{m=1}^{k} (-1)^{m+1} \binom{k}{m}a_m $$ We rearrange and collect each $a_m$ term: $$= \sum_{m=1}^{n-1} (-1)^{m+1}a_m \sum_{k=m}^{n-1} \binom{n}{k} \binom{k}{m} $$ Apply the lemma (subtracting the $k=n$ term): $$= \sum_{m=1}^{n-1} (-1)^{m+1}a_m \left(\binom{n}{m}2^{n-m} - \binom{n}{m} \right) = \sum_{m=1}^{n-1} (-1)^{m+1} \binom{n}{m} a_m \left(2^{n-m} - 1 \right) $$ We can plug this expression into our recurrence formula and compare against our claim (induction hypothesis). After multiplying both sides by $2^n-1$, we are left to show that: $$(2^n-1) a_n + \sum_{m=1}^{n-1} (-1)^{m+1} \binom{n}{m} a_m \left(2^{n-m} - 1 \right) = (2^n-1) \left( \sum_{m=1}^{n-1} (-1)^{m+1} \binom{n}{m}a_m + (-1)^{n+1}a_n \right)$$ Collect the $a_n$ terms on the LHS, and the rest of the terms on the RHS: $$(2^n-1)(1 + (-1)^n)a_n = \sum_{m=1}^{n-1}a_m(-1)^{m+1} c_m $$ where $$c_m = (2^n-1)\binom{n}{m} - \binom{n}{m}(2^{n-m}-1) = \binom{n}{m}(2^n - 2^{n-m}) = \binom{n}{m}2^{n-m}(2^m-1) = \binom{n}{m}2^{n-m} \frac{2^m}{a_m}$$ so what we need to show becomes: $$(2^n-1)(1 + (-1)^n)a_n = \sum_{m=1}^{n-1}a_m(-1)^{m+1} \binom{n}{m}2^{n-m} \frac{2^m}{a_m} $$ $$2^n(1 + (-1)^n) = \sum_{m=1}^{n-1}(-1)^{m+1} \binom{n}{m}2^{n} $$ $$1 + (-1)^n = \sum_{m=1}^{n-1}(-1)^{m+1} \binom{n}{m} $$ $$1 + \sum_{m=1}^{n-1}(-1)^{m} \binom{n}{m} + (-1)^n = 0$$ $$(1 -1)^n = 0$$
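Both the recurrence and the claimed closed form can be compared with exact rational arithmetic (a Python sketch):

```python
from fractions import Fraction
from math import comb

def f_recurrence(n, memo={}):
    # f(n) = (2^n + sum_{k=1}^{n-1} C(n,k) f(k)) / (2^n - 1)
    if n not in memo:
        memo[n] = Fraction(2 ** n + sum(comb(n, k) * f_recurrence(k)
                                        for k in range(1, n)), 2 ** n - 1)
    return memo[n]

def f_closed(n):
    # f(n) = sum_{m=1}^{n} (-1)^(m+1) C(n,m) a_m, with a_m = 2^m/(2^m - 1)
    return sum((-1) ** (m + 1) * comb(n, m) * Fraction(2 ** m, 2 ** m - 1)
               for m in range(1, n + 1))
```

For instance, one coin takes 2 expected throws, and two coins take $8/3$.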

Friday, August 13, 2021

Evolving Polynomial

On the Cartesian field, each lattice point $(x,y)$ is initially assigned the number $x^2+y^2-1$.

At each step, if $A(x,y)$ represents the number on $(x,y)$, that number is increased by: $$\frac{(x-y) \left( A(x+1,y) + A(x,y-1) - A(x-1,y) - A(x,y+1) \right)}{4}$$

All the additions are done simultaneously.

1. Prove that the numbers at each point are always integers

2. Prove that, aside from the initial configuration, there is never a number that is divisible by 3

Solution

The recursion formula guarantees that $A(x,y)$ is always a polynomial in $x,y$ with real coefficients. In the initial configuration, $B(x,y) = A(x,y) + 1 = x^2+y^2$ is homogeneous in $x,y$ with degree 2. That is, $B(kx,ky) = k^2 B(x,y)$. We will show that this form is maintained by the recursion formula, and therefore holds at all times.

Suppose $A(x,y) = P.x^2 + Q.xy + R.y^2 - 1$ for some real numbers $P,Q,R$; then the recursion formula maintains this form. This is because $A(x+1,y) - A(x-1,y)$ cancels out the $x^2,xy,y^2$ and constant terms, so only $x,y$ terms are left. Similarly, $A(x,y+1) - A(x,y-1)$ also preserves only $x,y$ terms. So the second factor is homogeneous of degree 1, and the factor $(x-y)$ is also homogeneous of degree 1, so the added number is homogeneous of degree 2. Therefore, the final number is still of the form $A(x,y) = P'.x^2 + Q'.xy + R'.y^2 - 1$ (for some constants $P',Q',R'$ that may differ from the previous iteration).

Now we also notice that $A(x,y) = A(y,x)$ for all time. This property holds at the beginning and is preserved by the recursion, so the numbers are symmetric with respect to $x,y$. Therefore $P = R$.

Next, we notice that for $x=y$ the added term is always zero, so $A(x,x) = 2x^2-1$ for all time. Therefore $A(x,x) = (2P+Q)x^2-1 = 2x^2-1$, so $2P+Q = 2$.

This means that, because $Q = 2 - 2P$, $A(x,y) = Px^2 + (2-2P)xy + Py^2 - 1 = (x^2+y^2 - 1) + (P-1)(x-y)^2$ for some real $P$ (not necessarily an integer) that depends on $t$. So we've shown that $A(x,y)$ has the form $(x^2+y^2-1) + K_t(x-y)^2$ for some $K_t$, where $K_0 = 0$.

Then, plugging in to the original recursion formula, we can see: $$(K_{t+1} - K_t)(x-y)^2 = \frac{x-y}{4} ( 4x - 4y +K_t(2(x-y+1)^2 -2(x-y-1)^2))$$ $$(K_{t+1} - K_t)(x-y)^2 = \frac{x-y}{4} ( 4x - 4y +8K_t(x-y))$$ $$K_{t+1} = 3K_t + 1$$ From there, it's a simple matter to see that $K_t = \frac{3^t-1}{2}$ because $K_0 = 0$, but we didn't need to solve the recursion. The fact that $K_0 = 0$ is enough to prove that $K_t$ is always an integer, and that, except for $t=0$, it's always $\equiv 1 \mod 3$.

Because $K_t$ is always an integer, $A(x,y)$ is an integer everywhere. And because $K_t \equiv 1 \mod 3$ for $t \geq 1$, we can check that for every combination of $x,y \mod 3$, $A(x,y)$ is never $\equiv 0 \mod 3$.
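Everything above can be sanity-checked by simulating the dynamics directly with exact rationals on a finite grid (a Python sketch; the usable grid shrinks by one boundary layer per step, so we start with a margin):

```python
from fractions import Fraction

R, STEPS = 8, 4
A = {(x, y): Fraction(x * x + y * y - 1)
     for x in range(-R, R + 1) for y in range(-R, R + 1)}

for t in range(1, STEPS + 1):
    r = R - t  # one boundary layer is consumed per step
    A = {(x, y): A[(x, y)] + Fraction(x - y, 4) *
                 (A[(x + 1, y)] + A[(x, y - 1)] - A[(x - 1, y)] - A[(x, y + 1)])
         for x in range(-r, r + 1) for y in range(-r, r + 1)}
    K = (3 ** t - 1) // 2
    for (x, y), v in A.items():
        assert v.denominator == 1                        # always an integer
        assert v == x * x + y * y - 1 + K * (x - y) ** 2  # matches the closed form
        assert int(v) % 3 != 0                           # never divisible by 3
```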