Tuesday, February 11, 2014

Call applicants to front

Let $n$ be a natural number, and suppose we are given a sequence $a_1, a_2, \dots, a_{2n}$ whose elements are integers from 1 to $n$ inclusive, such that each integer in $\{1, \dots, n\}$ appears at least once.

Now, suppose there are $n$ job applicants $J_1, \dots, J_n$ waiting in a line, in any order. We call them in for interviews using the sequence as our "call list." More formally, for $i = 1, \dots, 2n$, we call $J_{a_i}$ one by one.

If a particular job applicant $J_x$ is called for an interview, he has to pay a fee of $y-1$ where $y$ is his position in the line. For example, if he was the second person when called, he pays a fee of 1. If he was the front person, he doesn't pay. After the interview, he returns to the front of the line, and the interviewer proceeds with the next one on the call list.

For a given call list and initial ordering of applicants, we define the income as the total fees paid by the applicants. For a given call list, its potential income is the sum of incomes over all $n!$ possible initial orderings of the applicants. Now, to make things more interesting, before the interview day the call list itself is shuffled, so that each of the $(2n)!$ permutations is equally likely to appear.

If you are tasked with putting together a call list, knowing that it will eventually be shuffled, how should you structure it so that the expected potential income is maximized?
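No solution is recorded for this problem, but the definitions are easy to experiment with for small $n$. Here is a brute-force sketch (function names are my own, not from the text) that computes the income of one initial ordering and the potential income of a call list:

```python
from itertools import permutations

def income(call_list, order):
    """Total fees for one initial ordering.
    `order` lists applicant labels, front of the line first."""
    line = list(order)
    total = 0
    for a in call_list:
        pos = line.index(a)   # 0-based position = fee of (position - 1) in 1-based terms
        total += pos
        line.pop(pos)
        line.insert(0, a)     # the applicant returns to the front of the line
    return total

def potential_income(call_list, n):
    """Sum of income over all n! initial orderings."""
    return sum(income(call_list, p) for p in permutations(range(1, n + 1)))
```

Averaging `potential_income` over random shuffles of a candidate call list then estimates the expected potential income to be maximized.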

Thursday, February 6, 2014

Searching on a chess board

Alice fills each cell of an $n \times n$ chess board with a real number, such that there is exactly one zero somewhere on that board. The rest of the cells can be filled with positive or negative numbers, so that all numbers are distinct. Furthermore, she fills it such that each row is sorted in increasing order from left to right and each column is sorted in increasing order from top to bottom.

Now she asks Bob to find the location of the zero via a guessing game. Bob is told about the sortedness of the board and that all numbers are distinct, but he doesn't know the numbers themselves. He is allowed to make a series of guesses (of locations), and after each guess, he would be told the number on that cell. The game ends when Bob discovers the zero.

Show that Bob can devise a strategy that guarantees discovery after $2n-1$ guesses.

Show that there is no strategy that guarantees discovery in fewer than $2n-1$ guesses.

Solution to First Problem

Let cell $(i,j)$ refer to the cell in the $i$-th row and $j$-th column, with $i,j \geq 1$ (the usual matrix notation, NOT the usual computer science notation). Bob can use the following algorithm:

Start by guessing $(n,1)$. If that number is positive, move up one cell and guess again; if it's negative, move right one cell and guess again.

In order to show that this strategy works, we have to prove three things: termination, correctness, and running time.

Proof of termination and running time

If $(i,j)$ is the cell of our last guess, let $S = i-j$. In the beginning, $S = n-1$, and each move, whether one up or one right, decreases $S$ by one. The minimum value of $S$ is attained when $i$ is minimal and $j$ is maximal, namely at cell $(1,n)$, where $S = 1-n$. Therefore after at most $(n-1)-(1-n)+1 = 2n-1$ guesses we are guaranteed to terminate.

Proof of correctness

Let $A$ be the event that we have guessed the column (but not necessarily the row) of the zero correctly, and $B$ the event that we have guessed its row correctly. Once $A$ or $B$ occurs, we are guaranteed to reach the zero. For example, once $A$ happens, every number we see below the zero in that column is positive, so our guess keeps moving up until it reaches the zero; the argument for $B$ is similar.

Now, to show that $A$ or $B$ eventually occurs, suppose the zero is at location $(i_0, j_0)$. Then $A$ happens after $(j_0-1)$ right moves, and $B$ happens after $(n-i_0)$ up moves. Every move is either a right move or an up move, so after enough moves, either $A$ or $B$ must happen.
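This strategy is the classic "saddleback search." A minimal Python sketch (the function name and 0-based indexing are my own):

```python
def find_zero(board):
    """Bob's strategy: start at the bottom-left corner, move up on a
    positive number and right on a negative one, guessing as we go.
    `board` rows and columns are sorted ascending; returns (cell, #guesses)."""
    n = len(board)
    i, j = n - 1, 0          # 0-based version of cell (n, 1)
    guesses = 0
    while True:
        guesses += 1
        v = board[i][j]
        if v == 0:
            return (i, j), guesses
        if v > 0:
            i -= 1           # the zero cannot be in this row
        else:
            j += 1           # the zero cannot be in this column
```

On any valid board this returns the zero's location in at most $2n-1$ guesses.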

Solution to Second Problem

Define the "major" diagonal to be the cells $(i,j)$ with $i+j = n+1$, i.e. the anti-diagonal running from bottom left to top right. Define the "minor" diagonal to be the cells $(i,j)$ with $i+j = n$, i.e. the cells just above (or to the left of) the major diagonal. Now suppose Alice populates the board in the following way: everything above the minor diagonal gets numbers less than $-1$, and everything below the major diagonal gets numbers greater than $1$, all while still satisfying the sortedness criteria.

As for the diagonals themselves, note that if the major diagonal is populated with numbers between 0 and 1, the minor diagonal with numbers between $-1$ and 0, and the zero sits anywhere on the two diagonals, the sortedness criteria are still satisfied. Between cells of the major diagonal themselves (say $(n,1)$ versus $(n-1,2)$) there is no ordering constraint, and likewise there is no constraint between cells of the minor diagonal.

Now suppose Alice never decides on the content of the major and minor diagonals until Bob asks for the content of those cells. In other words, even Alice herself doesn't know where the zero will be. As Bob executes his strategy, if he asks for a cell on the major diagonal, Alice just generates a random number between zero and one. If he asks for a cell on the minor diagonal, she generates a random number between -1 and zero. Note that at any given point, even if Bob knows the content of all the other non-diagonal cells, these numbers are still consistent with the sortedness of the board.

Alice reveals the zero only after all the other diagonal cells are exhausted. In other words, she reveals the zero only when it is the last diagonal cell (major or minor) that Bob hasn't asked about. Because there are $2n-1$ diagonal cells, no strategy of Bob's can guarantee discovery in fewer than $2n-1$ guesses.
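The adversary can be sketched in code. Everything here (the class name, the particular filler values) is illustrative rather than from the text; the point is only that diagonal values are generated lazily and the zero lands on whichever diagonal cell is probed last:

```python
import random

class LazyAlice:
    """Adversary sketch: off-diagonal cells are fixed up front, but the
    2n-1 diagonal cells are decided only when Bob probes them, and the
    zero goes on whichever diagonal cell is probed last.
    0-based indices: major diagonal is i+j = n-1, minor is i+j = n-2."""

    def __init__(self, n, rng=random):
        self.n, self.rng = n, rng
        self.pending = {(i, j) for i in range(n) for j in range(n)
                        if i + j in (n - 2, n - 1)}
        self.decided = {}

    def probe(self, i, j):
        n = self.n
        if i + j < n - 2:                 # above the minor diagonal
            return i * n + j - n * n - 1  # < -1, increasing in i and j
        if i + j > n - 1:                 # below the major diagonal
            return i * n + j + 2          # > 1, increasing in i and j
        if (i, j) not in self.decided:
            self.pending.discard((i, j))
            if not self.pending:          # last unprobed diagonal cell
                self.decided[(i, j)] = 0.0
            elif i + j == n - 1:          # major diagonal: value in (0, 1)
                self.decided[(i, j)] = self.rng.uniform(0.01, 0.99)
            else:                         # minor diagonal: value in (-1, 0)
                self.decided[(i, j)] = self.rng.uniform(-0.99, -0.01)
        return self.decided[(i, j)]
```

However Bob orders his probes, the zero is only ever returned on his $(2n-1)$-th distinct diagonal probe.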

Wednesday, February 5, 2014

Betting on cards

A deck consisting of $n$ red cards and $n$ black cards is shuffled. The cards are then flipped one at a time. Before each flip, you are allowed to bet on the color of the next card. If you guess right, you win an amount equal to your bet; if you guess wrong, you lose your bet. You don't have to bet on every flip; in fact, you don't have to bet at all if you so wish. A bet can be any nonnegative real amount, and you start with one dollar.

You decide to employ the following strategy. Suppose that, by counting already-flipped cards, you deduce that there are $R$ red cards and $B$ black cards left in the deck. If $R > B$, you guess red while betting $\frac{R-B}{R+B}$ of your total money. Similarly, if $R < B$, you guess black while betting $\frac{B-R}{B+R}$ of your total money. If $R=B$, you don't bet that turn. Show that this strategy guarantees a final amount of: $$\frac{2^{2n}n!n!}{(2n)!}$$ Note: this final amount is much bigger than people tend to think. It equals: $$\left(1 + \frac{1}{2n-1} \right) \left(1 + \frac{1}{2n-3} \right) \dots \left(1 + \frac{1}{3} \right) \left(1 + \frac{1}{1} \right)$$ which is already greater than 2 and only increases as $n$ increases. This improves on the obvious strategy of waiting until the last card (whose color is by then known) and betting our entire starting dollar on it, which guarantees a final amount of exactly two.

First Solution

First we claim the following: if at any point there are $R$ red cards and $B$ black cards left in the deck and your current total is $x$, then employing the above strategy gives a final amount of: $$\frac{2^{R+B}R!B!}{(R+B)!} x$$ The claim is easily verified when $R+B=1$, because then we know exactly what's left in the deck and the strategy bets everything on it. Now we prove the claim by induction on $R+B$. If $R=B$, we don't bet that flip, and we have reduced to the case $R+B-1$. If $R \neq B$, we may assume WLOG that $R > B$. We bet $\frac{R-B}{R+B}$ of our total that the next card will be red.

Case 1: the next card is red. That means our current total becomes $(1 + \frac{R-B}{R+B})x = \frac{2R}{R+B}x$. But the deck now contains $R-1$ red and $B$ black, so by our induction hypothesis, our final amount will be: $$\frac{2^{R+B-1}(R-1)!B!}{(R+B-1)!} \frac{2R}{R+B}x =\frac{2^{R+B}R!B!}{(R+B)!} x $$

Case 2: the next card is black and we lose our bet. Our current total becomes $(1 - \frac{R-B}{R+B})x = \frac{2B}{R+B}x$ and the deck contains $R$ red and $B-1$ black, so our final amount is: $$\frac{2^{R+B-1}R!(B-1)!}{(R+B-1)!} \frac{2B}{R+B}x =\frac{2^{R+B}R!B!}{(R+B)!} x $$ And the claim is proven. Now apply the claim with $R=B=n$ and $x=1$: $$\frac{2^{2n}n!n!}{(2n)!} = \frac{(2n)(2n-2)\dots (4)(2)}{(2n-1)(2n-3) \dots (3) (1)}$$ $$= \left(1 + \frac{1}{2n-1} \right) \left(1 + \frac{1}{2n-3} \right) \dots \left(1 + \frac{1}{3} \right) \left(1 + \frac{1}{1} \right)$$
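The claim can also be checked exhaustively for small $n$ with exact rational arithmetic. This sketch (my own code) plays the strategy on every arrangement of a deck and confirms the final amount is always the same:

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

def final_amount(deck):
    """Play the card-counting strategy on one arrangement of the deck.
    `deck` is a sequence of 'R'/'B' characters; start with one dollar."""
    money = Fraction(1)
    r, b = deck.count('R'), deck.count('B')
    for card in deck:
        if r != b:
            guess = 'R' if r > b else 'B'
            bet = Fraction(abs(r - b), r + b) * money
            money += bet if card == guess else -bet
        if card == 'R':
            r -= 1
        else:
            b -= 1
    return money
```

For $n=3$ every one of the $\binom{6}{3}$ distinct arrangements yields exactly $\frac{2^6 \cdot 3! \cdot 3!}{6!} = \frac{16}{5}$.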

Tuesday, February 4, 2014

Expected number of draws

An urn contains $n$ balls, where ball $i$ has weight $w_i$; balls are drawn one at a time without replacement. At each draw, the probability that a given remaining ball is drawn is proportional to its weight. What is the expected number of draws before we draw ball 1?
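No solution is recorded here, but the quantity is easy to estimate by simulation. A sketch (function name and parameters are my own), using `random.choices` for weighted drawing:

```python
import random

def draws_before_target(weights, target=0, rng=random):
    """One trial: draw balls without replacement, with probability
    proportional to weight; return how many draws happen before
    ball `target` comes out."""
    balls = list(range(len(weights)))
    w = list(weights)
    count = 0
    while True:
        i = rng.choices(range(len(balls)), weights=w)[0]
        if balls[i] == target:
            return count
        count += 1
        del balls[i], w[i]
```

As a sanity check: with equal weights, ball 1's position is uniform over the $n$ positions, so the expected number of prior draws is $(n-1)/2$.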

Sunday, February 2, 2014

Random walk sequences

Let $n$ be a natural number. Suppose $a_1, a_2, \dots$ is an infinite sequence of +1s and -1s. Define $s_k = a_1 + a_2 + \dots + a_k$, with $s_0 = 0$. Such a sequence is called "complete" if for each $i \in \{0, \dots, n-1\}$ there exists $k$ such that $s_k \equiv i \mod n$.
What is the largest number $S$ such that the following statement is true: for every complete sequence, there exist $j,k \geq 0$ such that $|s_j - s_k| = S$?



We can construct $a_i$ as follows: $n-1$ +1s, followed by $n-1$ -1s, followed by $n-1$ +1s again, and so on, alternating. It's easy to see that this sequence is complete: already the first block makes $s_k$ hit every residue $0, \dots, n-1$. Moreover, every partial sum $s_k$ lies between $0$ and $n-1$, so $|s_j - s_k| \leq n-1$ for all $j,k$, while every value up to $n-1$ is attained. So for this particular sequence the statement holds for $S \leq n-1$ and fails for $S > n-1$.
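A quick sanity check of this construction for a small $n$ (helper names are my own):

```python
def construction(n, blocks=4):
    """The alternating construction: n-1 steps of +1,
    then n-1 steps of -1, repeated for `blocks` blocks."""
    seq, step = [], 1
    for _ in range(blocks):
        seq += [step] * (n - 1)
        step = -step
    return seq

def prefix_sums(seq):
    """s_0 = 0 and s_k = a_1 + ... + a_k, as in the problem."""
    out, s = [0], 0
    for a in seq:
        s += a
        out.append(s)
    return out
```

For $n=7$ the prefix sums cover all residues mod $7$ (completeness) while spanning a range of exactly $n-1 = 6$.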

Now we have to prove that for $S=n-1$ and any given complete sequence, the problem statement is fulfilled.

First Solution

The sequence is infinite, but truncate it to the shortest prefix that is still complete. In other words, suppose $a_1, \dots, a_N$ is complete but $a_1, \dots, a_{N-1}$ is not. Let $s_N = x$. Because $N$ is the smallest index at which the prefix is complete, $s_i \equiv x \mod n$ does not happen for any $i < N$. Consider $a_N$: it is either +1 or -1. Suppose it's +1 (the case of -1 is similar), so $s_{N-1} = x-1$.

Let $p$ be an index such that $s_p \equiv x+1 \mod n$. We know $p$ exists because the sequence is complete, and $p < N-1$. That means $s_{N-1} - s_p \equiv (x-1) - (x+1) \equiv -2 \mod n$. But for $p \leq i \leq N-1$ we cannot have $s_i \equiv x \mod n$, so $s_{N-1} - s_p = tn - 2$ for some $t > 0$: if $t \leq 0$, the walk from $s_p$ to $s_{N-1}$, which changes by one at each step, would have to pass through $s_p - 1 \equiv x \mod n$. This means $s_N - s_p = tn-1$.

But if $t > 1$, there must be some $r$ with $p < r < N$ such that $s_N - s_r = n$ and $s_r - s_p = (t-1)n-1$. This yields a contradiction, because then $s_r \equiv s_N \equiv x \mod n$ and thus $a_1, \dots, a_r$ is complete, violating the minimality of $N$. Thus we must have $t=1$ and consequently $s_N - s_p = n-1$.

Second Solution

Suppose, for contradiction, that there is no contiguous block whose sum is $\pm(n-1)$. Then every contiguous block has sum at most $n-2$ and at least $-(n-2)$: if some block had sum of absolute value at least $n-1$, a prefix of that block would have sum exactly $\pm(n-1)$, since partial sums change by one at each step. Now, if there is $k$ such that $s_k = n-1$ then we are done, because the block from $1$ to $k$ has sum $n-1$. So let $x$ be the maximum value of $s_i$, with $x \leq n-2$; likewise, let $y$ be the minimum value of $s_i$, with $y \geq -(n-2)$.

Because the sequence is complete, we must have $x-y \geq n-1$: the partial sums take every integer value between $y$ and $x$ (again changing by one at each step), so if $x-y < n-1$ they would cover fewer than $n$ residues mod $n$. But then the contiguous block running from where the minimum is attained to where the maximum is attained (or vice versa) has sum $\pm(x-y)$ with $x-y \geq n-1$, contradicting the bound above.
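Both proofs can be cross-checked by brute force for a small $n$: every $\pm 1$ sequence (up to a small length) whose prefix sums are complete should contain a contiguous block summing to exactly $\pm(n-1)$. A sketch with my own helper names:

```python
from itertools import product

def is_complete(seq, n):
    """Do the prefix sums hit every residue mod n?"""
    s, seen = 0, {0}
    for a in seq:
        s += a
        seen.add(s % n)
    return len(seen) == n

def has_block(seq, n):
    """Is there a contiguous block with sum exactly +-(n-1)?"""
    s = [0]
    for a in seq:
        s.append(s[-1] + a)
    return any(abs(s[k] - s[j]) == n - 1
               for j in range(len(s)) for k in range(j + 1, len(s)))
```

Enumerating all sequences of length up to 9 for $n=4$ finds no counterexample.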

Flea on a regular polygon

We are given a regular $N$-gon. A flea initially sits on one vertex. At each turn, the flea jumps to the adjacent vertex on its left or on its right. The flea continues this movement until it has visited all vertices. Let $a_i$ be the sequence that records the flea's movement: if at the $i$-th turn $(i \geq 1)$ the flea moves left then $a_i = -1$, and if it moves right then $a_i = 1$. We are told that this sequence is finite (i.e., the flea does visit all vertices and then stops). Show that there exist $j,k$ such that: $$\left| \sum_{i=j}^k a_i \right| = N-1$$