## In Search of a 4-by-11 Matrix

October 1st, 2013

IMPORTANT UPDATE [January 30, 2014]: I have managed to solve the 4-by-11 case: there is no such matrix! Details of the computation that led to this result, as well as several other related results, are given in [4]. See Table 3 in that paper for an updated list of which cases still remain open (the smallest open cases are now 5-by-11 and 6-by-10).

After spinning my wheels on a problem for far too long, I’ve decided that it’s time to enlist the help of the mathematical and programming geniuses of the world wide web. The problem I’m interested in asks for a 4-by-11 matrix whose columns satisfy certain relationships. While the conditions are relatively easy to state, the problem size seems to be just slightly too large for me to solve myself.

### The Problem

The question I’m interested in (for reasons that are explained later in this blog post) is, given positive integers p and s, whether or not there exists a p-by-s matrix M with the following three properties:

1. Every entry of M is a nonzero integer;
2. The sum of any two columns of M contains a 0 entry; and
3. There is no way to append an (s+1)st column to M so that M still has property 2.

In particular, I’m interested in whether or not such a matrix M exists when p = 4 and s = 11. But to help illustrate the above three properties, let’s consider the p = 3, s = 4 case first, where one such matrix M is:

$M = \begin{bmatrix}1 & -1 & 2 & -2 \\ 1 & -2 & -1 & 2 \\ 1 & 2 & -2 & -1\end{bmatrix}.$

The fact that M satisfies condition 2 can be checked by hand easily enough. For example, the sum of the first two columns of M is $[0, -1, 3]^T$, which contains a 0 entry, and it is similarly straightforward to check that the other 5 sums of two columns of M each contain a 0 entry as well.

Checking property 3 is slightly more technical (NP-hard, even), but is still doable in small cases such as this one. For the above example, suppose that we could add a 5th column (which we will call $z = [z_1, z_2, z_3]^T$) to M such that its sum with any of the first 4 columns has a 0 entry. By looking at M’s first column, we see that one of z’s entries must be -1 (and by the cyclic symmetry of the entries of the last 3 columns of M, we can assume without loss of generality that $z_1 = -1$). By looking at the last 3 columns of M, we then see that either $z_2 = 2$ or $z_3 = -2$, either $z_2 = 1$ or $z_3 = 2$, and either $z_2 = -2$ or $z_3 = 1$. Since there is no way to simultaneously satisfy all 3 of these requirements, no such column z exists.
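For small cases like this one, both properties can also be verified by brute force. Below is a minimal sketch in Python (variable names mine); for property 3 it suffices to search candidate entries of the form $-m_{ij}$, since a value of $z_i$ that does not negate any entry of row i can never produce a 0:

```python
from itertools import product

M = [[1, -1, 2, -2],
     [1, -2, -1, 2],
     [1, 2, -2, -1]]
p, s = len(M), len(M[0])
cols = list(zip(*M))

# Property 2: the sum of any two columns contains a 0 entry.
prop2 = all(any(a + b == 0 for a, b in zip(cols[j], cols[k]))
            for j in range(s) for k in range(j + 1, s))

# Property 3: search for an extending column z; each useful entry z_i
# must equal -M[i][j] for some column j, so this finite search is exhaustive.
choices = [sorted({-M[i][j] for j in range(s)}) for i in range(p)]
extendible = any(all(any(z[i] + c[i] == 0 for i in range(p)) for c in cols)
                 for z in product(*choices))

print(prop2, not extendible)  # prints: True True
```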

### What’s Known (and What Isn’t)

As I mentioned earlier, the instance of this problem that I’m really interested in is when p = 4 and s = 11. Let’s first back up and briefly discuss what is known for different values of p and s:

• If s ≤ p then M does not exist. To see this, simply note that property 3 can never be satisfied, since you can always append one more column. If we denote the (i,j)-entry of M by $m_{ij}$ and the i-th entry of the new column z by $z_i$, then you can choose $z_i = -m_{ii}$ for i = 1, 2, …, s (and the remaining entries of z arbitrarily).
• Given p, the smallest value of s for which M exists is: (a) s = p+1 if p is odd, (b) s = p+2 if p = 4 or p ≡ 2 (mod 4), (c) s = p+3 if p = 8, and (d) s = p+4 otherwise. This result was proved in [1] (the connection between that paper and this blog post will be explained in the “Motivation” section below).
• If $s > 2^p$ then M does not exist. In this case, there is no way to satisfy property 2. This fact is trivial when p = 1 and can be proved for all p by induction (an exercise left to the reader?).
• If $s = 2^p$ then M exists. To see this claim, let the columns of M be the $2^p$ different columns consisting only of the entries 1 and -1. To see that property 2 is satisfied, simply notice that each column is different, so for any pair of columns, there is a row in which one column is 1 and the other column is -1. To see that property 3 is satisfied, observe that any new column must also consist entirely of 1’s and -1’s. However, every such column is already a column of M itself, and the sum of a column with itself will not have any 0 entries.
• If $s = 2^p - 4$ (and p ≥ 3) then M exists. There is an inductive construction (with the p = 3, s = 4 example from the previous section as the base case) that works here. More specifically, if we let $M_p$ denote a matrix M that works for a given value of p and $s = 2^p - 4$, we let $B_p$ be the matrix from the $s = 2^p$ case above, and we let $1_k$ denote the row vector with k ones, then
$M_{p+1} = \begin{bmatrix}M_p & B_p \\ 1_{2^p-4} & -1_{2^p}\end{bmatrix}$

is a solution to the problem for $p' = p+1$ and $s' = 2^{p+1} - 4$.
• If $2^p - 3 \leq s \leq 2^p - 1$ then M does not exist. This is a non-trivial result that follows from [2].
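The inductive step above is easy to implement. The following sketch (helper names mine) builds the 4-by-12 matrix from the 3-by-4 base case and checks property 2; the construction is also unextendible, as claimed above:

```python
from itertools import product

def sign_matrix(p):
    """B_p: the p-by-2^p matrix whose columns are all +/-1 vectors."""
    return [list(row) for row in zip(*product([1, -1], repeat=p))]

def step(Mp, p):
    """Build M_{p+1} (of size (p+1)-by-(2^(p+1) - 4)) from a solution Mp."""
    Bp = sign_matrix(p)
    return [Mp[i] + Bp[i] for i in range(p)] + [[1] * (2**p - 4) + [-1] * 2**p]

def prop2(M):
    """Check that the sum of any two columns of M contains a 0 entry."""
    cols = list(zip(*M))
    return all(any(a + b == 0 for a, b in zip(c1, c2))
               for i, c1 in enumerate(cols) for c2 in cols[i + 1:])

M3 = [[1, -1, 2, -2], [1, -2, -1, 2], [1, 2, -2, -1]]
M4 = step(M3, 3)  # a 4-by-12 solution
```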

Given p, the above results essentially tell us the largest and smallest values of s for which a solution M to the problem exists. However, we still don’t really know much about when solutions exist for intermediate values of s – we just have scattered results that say a solution does or does not exist in certain specific cases, without really illuminating what is going on. The following table summarizes what we know about when solutions do and do not exist for small values of p and s (a check mark ✓ means that a solution exists, a dash – means no solution exists, and ? means we don’t know).

| s \ p | 1 | 2 | 3 | 4 | 5 |
|-------|---|---|---|---|---|
| 1 | – | – | – | – | – |
| 2 | ✓ | – | – | – | – |
| 3 | – | – | – | – | – |
| 4 | – | ✓ | ✓ | – | – |
| 5 | – | – | – | – | – |
| 6 | – | – | – | ✓ | ✓ |
| 7 | – | – | – | ✓ | ✓ |
| 8 | – | – | ✓ | ✓ | ✓ |
| 9 | – | – | – | ✓ | ? |
| 10 | – | – | – | ✓ | ? |
| 11 | – | – | – | ? | ? |
| 12 | – | – | – | ✓ | ✓ |
| 13 | – | – | – | – | ✓ |
| 14 | – | – | – | – | ✓ |
| 15 | – | – | – | – | ✓ |
| 16 | – | – | – | ✓ | ✓ |
| 17–26 | – | – | – | – | ✓ |
| 27 | – | – | – | – | ? |
| 28 | – | – | – | – | ✓ |
| 29 | – | – | – | – | – |
| 30 | – | – | – | – | – |
| 31 | – | – | – | – | – |
| 32 | – | – | – | – | ✓ |

The table above shows why I am interested in the p = 4, s = 11 case: it is the only case with p ≤ 4 whose solution is still not known. The other unknown cases (i.e., p = 5 and s ∈ {9,10,11,27}, and far too many to list when p ≥ 6) would be interesting to solve as well, but are a bit lower-priority.

### Some Simplifications

Some assumptions about the matrix M can be made without loss of generality, in order to reduce the search space a little bit. For example, since the values of the entries of M don’t really matter (other than the fact that they come in positive/negative pairs), the first column of M can always be chosen to consist entirely of ones (or any other value). Similarly, permuting the rows or columns of M does not affect whether or not it satisfies the three desired properties, so you can assume (for example) that the first row is in non-decreasing order.

Finally, since there is no advantage to having the integer k present in M unless -k is also present somewhere in M (i.e., if M does not contain any -k entries, you could always just replace every instance of k by 1 without affecting any of the three properties we want), we can assume that the entries of M are between -floor(s/2) and floor(s/2), inclusive.

### Motivation

The given problem arises from unextendible product bases (UPBs) in quantum information theory. A set of pure quantum states $|v_1\rangle, \ldots, |v_s\rangle \in \mathbb{C}^{d_1} \otimes \cdots \otimes \mathbb{C}^{d_p}$ forms a UPB if and only if the following three properties hold:

1. (product) Each state $|v_j\rangle$ is a product state (i.e., can be written in the form $|v_j\rangle = |v_j^{(1)}\rangle \otimes \cdots \otimes |v_j^{(p)}\rangle$, where $|v_j^{(i)}\rangle \in \mathbb{C}^{d_i}$ for all i);
2. (basis) The states are mutually orthogonal (i.e., $\langle v_i | v_j \rangle = 0$ for all i ≠ j); and
3. (unextendible) There does not exist a product state $|z\rangle$ with the property that $\langle z | v_j \rangle = 0$ for all j.

UPBs are useful because they can be used to construct quantum states with very strange entanglement properties [3], but their mathematical structure still isn’t very well-understood. While we can’t really expect an answer to the question of what sizes of UPBs are possible when the local dimensions $d_1, \ldots, d_p$ are arbitrary (even just the minimum size of a UPB is still not known in full generality!), we might be able to hope for an answer if we focus on multi-qubit systems (i.e., the case when $d_1 = \cdots = d_p = 2$).

In this case, the 3 properties above are isomorphic in a sense to the 3 properties listed at the start of this post. We associate each state $|v_j\rangle$ with the j-th column of the matrix M. To each state in the product state decomposition of $|v_j\rangle$, we associate a unique integer in such a way that orthogonal states are associated with negatives of each other. The fact that $\langle v_i | v_j \rangle = 0$ for all i ≠ j is then equivalent to the requirement that the sum of any two columns of M has a 0 entry, and unextendibility of the product basis corresponds to not being able to add a new column to M without destroying property 2.

Thus this blog post is really asking whether or not there exists an 11-state UPB on 4 qubits. In order to illustrate this connection more explicitly, we return to the p = 3, s = 4 example from earlier. If we associate the matrix entries 1 and -1 with the orthogonal standard basis states $|0\rangle, |1\rangle \in \mathbb{C}^2$ and the entries 2 and -2 with the orthogonal states $|\pm\rangle := (|0\rangle \pm |1\rangle)/\sqrt{2}$, then the matrix M corresponds to the following set of s = 4 product states in $\mathbb{C}^2 \otimes \mathbb{C}^2 \otimes \mathbb{C}^2$:

$|0\rangle|0\rangle|0\rangle, \quad |1\rangle|-\rangle|+\rangle, \quad |+\rangle|1\rangle|-\rangle, \quad|-\rangle|+\rangle|1\rangle.$

The fact that these states form a UPB is well-known – this is the “Shifts” UPB from [3], and was one of the first UPBs found.

References

1. N. Johnston. The minimum size of qubit unextendible product bases. In Proceedings of the 8th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC), 2013. E-print: arXiv:1302.1604 [quant-ph], 2013.
2. L. Chen and D. Ž. Ðjoković. Separability problem for multipartite states of rank at most four. J. Phys. A: Math. Theor., 46:275304, 2013. E-print: arXiv:1301.2372 [quant-ph]
3. C. H. Bennett, D. P. DiVincenzo, T. Mor, P. W. Shor, J. A. Smolin, and B. M. Terhal. Unextendible product bases and bound entanglement. Phys. Rev. Lett., 82:5385–5388, 1999. E-print: arXiv:quant-ph/9808030
4. N. Johnston. The structure of qubit unextendible product bases. Journal of Physics A: Mathematical and Theoretical, 47:424034, 2014. E-print: arXiv:1401.7920 [quant-ph], 2014.

## The Spectrum of the Partial Transpose of a Density Matrix

July 3rd, 2013

It is a simple fact that, given any density matrix (i.e., quantum state) $\rho\in M_n$, the eigenvalues of $\rho$ are the same as the eigenvalues of $\rho^T$ (the transpose of $\rho$). However, strange things can happen if we instead only apply the transpose to one half of a quantum state. That is, if $\rho\in M_m \otimes M_n$ then its eigenvalues in general will be very different from the eigenvalues of $(id_m\otimes T)(\rho)$, where $id_m$ is the identity map on $M_m$ and $T$ is the transpose map on $M_n$ (the map $id_m\otimes T$ is called the partial transpose).

In fact, even though $\rho$ is positive semidefinite (since it is a density matrix), the matrix $(id_m\otimes T)(\rho)$ in general can have negative eigenvalues. To see this, define $p:={\rm min}\{m,n\}$ and let $\rho=|\psi\rangle\langle\psi|$, where

$|\psi\rangle=\displaystyle\frac{1}{\sqrt{p}}\sum_{j=1}^{p}|j\rangle\otimes|j\rangle$

is the standard maximally-entangled pure state. It then follows that

$(id_m\otimes T)(\rho)=\displaystyle\frac{1}{p}\sum_{i,j=1}^{p}|i\rangle\langle j|\otimes|j\rangle\langle i|$,

which has $p(p+1)/2$ eigenvalues equal to $1/p$, $p(p-1)/2$ eigenvalues equal to $-1/p$, and $p|m-n|$ eigenvalues equal to $0$.
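This example is easy to reproduce numerically. Here is a quick sketch (with m and n chosen arbitrarily) that computes the partial transpose by reshuffling the indices of $\rho$ viewed as a 4-index tensor:

```python
import numpy as np

m, n = 2, 3
p = min(m, n)

# |psi> = (1/sqrt(p)) * sum_j |j>|j>, as a vector in C^m (x) C^n
psi = np.zeros(m * n)
for j in range(p):
    psi[j * n + j] = 1 / np.sqrt(p)
rho = np.outer(psi, psi)

# Partial transpose: transpose only the indices of the second tensor factor.
pt = rho.reshape(m, n, m, n).transpose(0, 3, 2, 1).reshape(m * n, m * n)

vals = np.linalg.eigvalsh(pt)
# expect p(p+1)/2 copies of 1/p, p(p-1)/2 copies of -1/p, and p|m-n| zeros
```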

The fact that $(id_m\otimes T)(\rho)$ can have negative eigenvalues is another way of saying that the transpose map is positive but not completely positive, and thus plays a big role in entanglement theory. In this post we consider the question of how exactly the partial transpose map can transform the eigenvalues of $\rho$:

Question. For which ordered lists $\lambda_1\geq\lambda_2\geq\cdots\geq\lambda_{mn}\in\mathbb{R}$ does there exist a density matrix $\rho$ such that $(id_m\otimes T)(\rho)$ has eigenvalues $\lambda_1,\lambda_2,\ldots,\lambda_{mn}$?

### The Answer for Pure States

In the case when $\rho$ is a pure state (i.e., has rank 1), we can completely characterize the eigenvalues of $(id_m\otimes T)(\rho)$ by making use of the Schmidt decomposition. In particular, we have the following:

Theorem 1. Let $|\phi\rangle$ have Schmidt rank $r$ and Schmidt coefficients $\alpha_1\geq\alpha_2\geq\cdots\geq\alpha_r>0$. Then the spectrum of $(id_m\otimes T)(|\phi\rangle\langle\phi|)$ is

$\{\alpha_i^2 : 1\leq i\leq r\}\cup\{\pm\alpha_i\alpha_j:1\leq i<j\leq r\},$

together with the eigenvalue $0$ with multiplicity $p|n-m|+p^2-r^2$.

Proof. If $|\phi\rangle$ has Schmidt decomposition

$\displaystyle|\phi\rangle=\sum_{i=1}^r\alpha_i|a_i\rangle\otimes|b_i\rangle$

then

$\displaystyle(id_m\otimes T)(|\phi\rangle\langle\phi|)=\sum_{i,j=1}^r\alpha_i\alpha_j|a_i\rangle\langle a_j|\otimes|b_j\rangle\langle b_i|.$

It is then straightforward to verify, for all $1\leq i<j\leq r$, that:

• $|a_i\rangle\otimes|b_i\rangle$ is an eigenvector with eigenvalue $\alpha_i^2$;
• $|a_i\rangle\otimes|b_j\rangle\pm|a_j\rangle\otimes|b_i\rangle$ is an eigenvector with eigenvalue $\pm\alpha_i\alpha_j$; and
• ${\rm rank}\big((id_m\otimes T)(|\phi\rangle\langle\phi|)\big)= r^2$, from which it follows that the remaining $p|n-m|+p^2-r^2$ eigenvalues are $0$.
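Theorem 1 is also easy to check numerically. The sketch below (dimensions and Schmidt rank chosen arbitrarily) builds a random pure state with prescribed Schmidt coefficients and compares the predicted and actual spectra:

```python
import numpy as np

rng = np.random.default_rng(7)
m, n, r = 3, 4, 3
p = min(m, n)

# Random normalized Schmidt coefficients and orthonormal Schmidt vectors.
alpha = -np.sort(-rng.random(r))
alpha /= np.linalg.norm(alpha)
A = np.linalg.qr(rng.normal(size=(m, r)) + 1j * rng.normal(size=(m, r)))[0]
B = np.linalg.qr(rng.normal(size=(n, r)) + 1j * rng.normal(size=(n, r)))[0]
phi = sum(alpha[i] * np.kron(A[:, i], B[:, i]) for i in range(r))

rho = np.outer(phi, phi.conj())
pt = rho.reshape(m, n, m, n).transpose(0, 3, 2, 1).reshape(m * n, m * n)

# Spectrum predicted by Theorem 1, versus the computed spectrum.
predicted = sorted([a ** 2 for a in alpha]
                   + [sgn * alpha[i] * alpha[j] for i in range(r)
                      for j in range(i + 1, r) for sgn in (1, -1)]
                   + [0.0] * (p * abs(n - m) + p ** 2 - r ** 2))
actual = np.sort(np.linalg.eigvalsh(pt))
```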

Despite such a simple characterization in the case of rank-1 density matrices, there is no known characterization for general density matrices, since eigenvalues aren’t well-behaved under convex combinations.

### The Number of Negative Eigenvalues

Instead of asking for a complete characterization of the possible spectra of $(id_m\otimes T)(\rho)$, for now we focus on the simpler question that asks how many of the eigenvalues of $(id_m\otimes T)(\rho)$ can be negative. Theorem 1 answers this question when $\rho=|\phi\rangle\langle\phi|$ is a pure state: the number of negative eigenvalues is $r(r-1)/2$, where r is the Schmidt rank of $|\phi\rangle$. Since $r\leq p$, it follows that $(id_m\otimes T)(\rho)$ has at most $p(p-1)/2$ negative eigenvalues when $\rho$ is a pure state.

It was conjectured in [1] that a similar fact holds for general (not necessarily pure) density matrices $\rho$ as well. In particular, they conjectured that if $\rho\in M_n\otimes M_n$ then $(id_n\otimes T)(\rho)$ has at most $n(n-1)/2$ negative eigenvalues. However, this conjecture is easily shown to be false just by randomly-generating many density matrices $\rho$ and then counting the number of negative eigenvalues of $(id_n\otimes T)(\rho)$; density matrices whose partial transposes have more than $n(n-1)/2$ negative eigenvalues are very common.

In [2,3], it was shown that if $\rho\in M_m\otimes M_n$ then $(id_m\otimes T)(\rho)$ can not have more than $(m-1)(n-1)$ negative eigenvalues. In [4], this bound was shown to be tight when ${\rm min}\{m,n\}=2$ by explicitly constructing density matrices $\rho\in M_2\otimes M_n$ such that $(id_2\otimes T)(\rho)$ has $n-1$ negative eigenvalues. Similarly, this bound was shown to be tight via explicit construction when $m=n=3$ in [3]. Finally, it was shown in [5] that this bound is tight in general. That is, we have the following result:

Theorem 2. The maximum number of negative eigenvalues that $(id_m\otimes T)(\rho)$ can have when $\rho\in M_m\otimes M_n$ is $(m-1)(n-1)$.

It is worth pointing out that the method used in [5] to prove that this bound is tight is not completely analytic. Instead, a numerical method was presented that is proved to always generate a density matrix $\rho\in M_m\otimes M_n$ such that $(id_m\otimes T)(\rho)$ has $(m-1)(n-1)$ negative eigenvalues. Code that implements this numerical procedure in MATLAB is available here, but no general analytic form for such density matrices is known.

### Other Bounds on the Spectrum

Unfortunately, not a whole lot more is known about the spectrum of $(id_m\otimes T)(\rho)$. Here are some miscellaneous other results that impose certain restrictions on its maximal and minimal eigenvalues (which we denote by $\lambda_\textup{max}$ and $\lambda_\textup{min}$, respectively):

Theorem 3 [3]. $1\geq\lambda_\textup{max}\geq\lambda_\textup{min}\geq -1/2$.

Theorem 4 [2]. $\lambda_\textup{min}\geq\lambda_\textup{max}(1-{\rm min}\{m,n\})$.

Theorem 5 [6]. If $(id_m\otimes T)(\rho)$ has $q$ negative eigenvalues then

$\displaystyle\lambda_\textup{min}\geq\lambda_\textup{max}\Big(1-\big\lceil\tfrac{1}{2}\big(m+n-\sqrt{(m-n)^2+4q-4}\big)\big\rceil\Big)$ and

$\displaystyle\lambda_\textup{min}\geq\lambda_\textup{max}\Big(1-\frac{mn\sqrt{mn-1}}{q\sqrt{mn-1}+\sqrt{mnq-q^2}}\Big)$.

However, these bounds in general are fairly weak and the question of what the possible spectra of $(id_m\otimes T)(\rho)$ are is still far beyond our grasp.

Update [August 21, 2017]: Everett Patterson and I have now written a paper about this topic.

References

1. R. Xi-Jun, H. Yong-Jian, W. Yu-Chun, and G. Guang-Can. Partial transposition on bipartite system. Chinese Phys. Lett., 25:35, 2008.
2. N. Johnston and D. W. Kribs. A family of norms with applications in quantum information theory. J. Math. Phys., 51:082202, 2010. E-print: arXiv:0909.3907 [quant-ph]
3. S. Rana. Negative eigenvalues of partial transposition of arbitrary bipartite states. Phys. Rev. A, 87:054301, 2013. E-print: arXiv:1304.6775 [quant-ph]
4. L. Chen and D. Ž. Ðjoković. Qubit-qudit states with positive partial transpose. Phys. Rev. A, 86:062332, 2012. E-print: arXiv:1210.0111 [quant-ph]
5. N. Johnston. Non-positive-partial-transpose subspaces can be as large as any entangled subspace. Phys. Rev. A, 87:064302, 2013. E-print: arXiv:1305.0257 [quant-ph]
6. N. Johnston. Norms and Cones in the Theory of Quantum Entanglement. PhD thesis, University of Guelph, 2012.

## The Minimal Superpermutation Problem

April 10th, 2013

Imagine that there is a TV series that you want to watch. The series consists of n episodes, with each episode on a single DVD. Unfortunately, however, the DVDs have become mixed up and the order of the episodes is in no way marked (and furthermore, the episodes of the TV show are not connected by any continuous storyline – there is no way to determine the order of the episodes just from watching them).

Suppose that you want to watch the episodes of the TV series, consecutively, in the correct order. The question is: how many episodes must you watch in order to do this?

To illustrate what we mean by this question, suppose for now that n = 2 (i.e., the show was so terrible that it was cancelled after only 2 episodes). If we arbitrarily label one of the episodes “1” and the other episode “2”, then we could watch the episodes in the order “1”, “2”, and then “1” again. Then, regardless of which episode is really the first episode, we’ve seen the two episodes consecutively in the correct order. Furthermore, this is clearly minimal – there is no way to watch fewer than 3 episodes while ensuring that you see both episodes in the correct order.

So what is the minimal number of episodes we must watch for a TV show consisting of n episodes? Somewhat surprisingly, no one knows. So let’s discuss what is known.

### Minimal Superpermutations

Rephrased a bit more mathematically, we are interested in finding a shortest possible string on the symbols “1”, “2”, …, “n” that contains every permutation of those symbols as a contiguous substring. We call a string that contains every permutation in this way a superpermutation, and one of minimal length is called a minimal superpermutation. Minimal superpermutations when n = 1, 2, 3, 4 are easily found via brute-force computer search, and are presented here:

| n | Minimal superpermutation | Length |
|---|--------------------------|--------|
| 1 | 1 | 1 |
| 2 | 121 | 3 |
| 3 | 123121321 | 9 |
| 4 | 123412314231243121342132413214321 | 33 |

By the time n = 5, the strings we are looking for are much too long to find via brute-force. However, the strings in the n ≤ 4 cases provide some insight that we can hope might generalize to larger n. For example, there is a natural construction that allows us to construct a short superpermutation on n+1 symbols from a short superpermutation on n symbols (which we will describe in the next section), and this construction gives the minimal superpermutations presented in the above table when n ≤ 4.

Similarly, the minimal superpermutations in the above table can be shown via brute-force to be unique (up to relabeling of the characters – for example, we don’t count the string “213212312” as distinct from “123121321”, since they are related to each other simply by interchanging the roles of “1” and “2”). Are minimal superpermutations unique for all n?

### Minimal Length

A trivial lower bound on the length of a superpermutation on n symbols is n! + n – 1, since it must contain each of the n! permutations as a substring – the first permutation contributes a length of n characters to the string, and each of the remaining n! – 1 permutations contributes a length of at least 1 character more.

It is not difficult to improve this lower bound to n! + (n-1)! + n – 2 (I won’t provide a proof here, but the idea is to note that when building the superpermutation, you can not add more than n-1 permutations by appending just 1 character each to the string – you eventually have to add 2 or more characters to add a permutation that is not already present). In fact, this argument can be stretched further to show that n! + (n-1)! + (n-2)! + n – 3 is a lower bound as well (a rough proof is provided here). However, the same arguments do not seem to extend to lower bounds like n! + (n-1)! + (n-2)! + (n-3)! + n – 4 and so on.

There is also a trivial upper bound on the length of a minimal superpermutation: n×n!, since this is the length of the string obtained by writing out the n! permutations in order without overlapping. However, there is a well-known construction of small superpermutations that provides a much better upper bound, which we now describe.

Suppose we know a small superpermutation on n symbols (such as one of the superpermutations provided in the table in the previous section) and we want to construct a small superpermutation on n+1 symbols. To do so, simply replace each permutation in the n-symbol superpermutation by: (1) that permutation, (2) the symbol n+1, and (3) that permutation again. For example, if we start with the 2-symbol superpermutation “121”, we replace the permutation “12” by “12312” and we replace the permutation “21” by “21321”, which results in the 3-symbol superpermutation “123121321”. The procedure for constructing a 4-symbol superpermutation from this 3-symbol superpermutation is illustrated in the following diagram:

A diagram that demonstrates how to construct a small superpermutation on 4 symbols from a small superpermutation on 3 symbols.

It is a straightforward inductive argument to show that the above method produces n-symbol superpermutations of length $\sum_{k=1}^nk!$ for all n. Although it has been conjectured that this superpermutation is minimal [1], this is only known to be true when n ≤ 4.
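The construction is easy to code up. The sketch below (helper names mine; it assumes n ≤ 8 so that each symbol is a single character) scans the n-symbol superpermutation for permutations in order of appearance, builds the blocks described above, and joins consecutive blocks with maximal overlap:

```python
from itertools import permutations

def extend(s, n):
    """Superpermutation on n+1 symbols from a superpermutation s on 1..n."""
    perms = {"".join(q) for q in permutations("12345678"[:n])}
    seen, blocks = set(), []
    for i in range(len(s) - n + 1):
        w = s[i:i + n]
        if w in perms and w not in seen:  # each permutation, in order
            seen.add(w)
            blocks.append(w + str(n + 1) + w)
    out = blocks[0]
    for b in blocks[1:]:
        k = n                             # overlap is at most n characters
        while not out.endswith(b[:k]):
            k -= 1
        out += b[k:]
    return out

s = "1"
for n in range(1, 4):
    s = extend(s, n)
print(s)  # prints: 123412314231243121342132413214321
```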

### Uniqueness

As a result of minimal superpermutations being unique when n ≤ 4, it has been conjectured that they are unique for all n [1]. However, it turns out that there are in fact many superpermutations of the conjectured minimal length – the main result of [2] shows that there are at least

$\displaystyle\prod_{k=1}^{n-4}(n-k-2)^{(k+1)!-1}$

distinct n-symbol superpermutations of the conjectured minimal length. For n ≤ 4, this formula gives the empty product (and thus a value of 1), which agrees with the fact that minimal superpermutations are unique in these cases. However, the number of distinct superpermutations then grows extremely quickly with n: for n = 5, 6, 7, 8, there are at least 2, 96, 8153726976, and approximately $3\times 10^{50}$ superpermutations of the conjectured minimal length. The 2 such superpermutations in the n = 5 case are as follows (each superpermutation has length 153 and is written on two lines):
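As a quick sanity check, the counts quoted above can be reproduced by evaluating the product directly (a sketch; the exponent $(k+1)!-1$ matches all four quoted values):

```python
from math import factorial, prod

def count_lower_bound(n):
    # product over k = 1..n-4 of (n-k-2)^((k+1)! - 1); empty product is 1
    return prod((n - k - 2) ** (factorial(k + 1) - 1) for k in range(1, n - 3))

print([count_lower_bound(n) for n in (4, 5, 6, 7)])  # prints: [1, 2, 96, 8153726976]
```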

12345123415234125341235412314523142531423514231542312453124351243152431254312
1345213425134215342135421324513241532413524132541321453214352143251432154321

and

12345123415234125341235412314523142531423514231542312453124351243152431254312
1354213524135214352134521325413251432513425132451321543215342153241532145321

Similarly, a text file containing all 96 known superpermutations of the expected minimal length 873 in the n = 6 case can be viewed here. It is unknown, however, whether or not these superpermutations are indeed minimal or if there are even more superpermutations of the conjectured minimal length.

Update [Aug. 13, 2014]: Ben Chaffin has shown that minimal superpermutations in the n = 5 case have length 153, and he has also shown that there are exactly 8 (not just 2) distinct minimal superpermutations in this case. See the write up here.

IMPORTANT UPDATE [August 22, 2014]: Robin Houston has disproved the minimal superpermutation conjecture for all n ≥ 6. See here.

References

1. D. Ashlock and J. Tillotson. Construction of small superpermutations and minimal injective superstrings. Congressus Numerantium, 93:91–98, 1993.
2. N. Johnston. Non-uniqueness of minimal superpermutations. Discrete Mathematics, 313:1553–1557, 2013.

Other Random Links Related to This Problem

1. A180632 – the main OEIS entry for this problem
2. Permutation Strings – a short note written by Jeffrey A. Barnett about this problem
3. Generate sequence with all permutations – a stackoverflow post about this problem
4. What is the shortest string that contains all permutations of an alphabet? – a math.stackexchange post about this problem
5. The shortest string containing all permutations of n symbols – an XKCD forums post that I made about this problem a couple years ago

## How to Construct Minimal Unextendible Product Bases

March 14th, 2013

In quantum information theory, a product state $|v\rangle \in \mathbb{C}^{d_1}\otimes\mathbb{C}^{d_2}$ is a quantum state that can be written as an elementary tensor:

$|v\rangle=|v_1\rangle\otimes|v_2\rangle\text{ with }|v_i\rangle\in\mathbb{C}^{d_i}\ \text{ for } i=1,2,$

while states that can not be written in this form are called entangled. In this post, we will be investigating unextendible product bases (UPBs), which are sets $S\subset\mathbb{C}^{d_1}\otimes\mathbb{C}^{d_2}$ of mutually orthogonal product states with the property that no other product state is orthogonal to every member of $S$.

In this post, we will be looking at how to construct small UPBs. Note that UPBs can more generally be defined on multipartite spaces (i.e., $\mathbb{C}^{d_1}\otimes\mathbb{C}^{d_2}\otimes\cdots\otimes\mathbb{C}^{d_p}$ for arbitrary $p\geq 2$), but for simplicity we stick with the bipartite (i.e., $p= 2$) case in this blog post.

### Simple Examples

The most trivial unextendible product basis is simply the computational basis:

$S:=\big\{|0\rangle\otimes|0\rangle,\ldots,|0\rangle\otimes|d_2-1\rangle,\ldots,|d_1-1\rangle\otimes|0\rangle,\ldots,|d_1-1\rangle\otimes|d_2-1\rangle\big\}.$

However, the above UPB is rather trivial – the unextendibility condition holds vacuously because $S$ spans the entire Hilbert space, so of course there is no product state (or any state) orthogonal to every member of $S$.

It is known that when $\min\{d_1,d_2\}\leq 2$, the only UPBs that exist are trivial in this sense – they consist of a full set of $d_1d_2$ states. We are more interested in UPBs that contain fewer vectors than the dimension of the Hilbert space (since, for example, these UPBs can be used to construct bound entangled states [1]). One of the first such UPBs to be constructed was called “Pyramid” [1]. To construct this UPB, define $h:=\tfrac{1}{2}\sqrt{1+\sqrt{5}}$ and $N:=\tfrac{1}{2}\sqrt{5+\sqrt{5}}$, and let

$|\phi_j\rangle:=\tfrac{1}{N}[\cos(2\pi j/5),\sin(2\pi j/5),h]\text{ for }0\leq j\leq 4.$

Then the following set of 5 states in $\mathbb{C}^3\otimes\mathbb{C}^3$ is a UPB:

$S_{\textup{pyr}}:=\big\{|v_0\rangle,|v_1\rangle,|v_2\rangle,|v_3\rangle,|v_4\rangle\big\},$

where $|v_i\rangle:=|\phi_i\rangle\otimes|\phi_{2i(\text{mod }5)}\rangle$.

It is a straightforward calculation to verify that the members of $S_{\textup{pyr}}$ are mutually orthogonal (and thus form a product basis). To verify that there is no product state orthogonal to every member of $S_{\textup{pyr}}$, we first observe that any 3 of the $|\phi_j\rangle$’s form a linearly independent set (verification of this claim is slightly tedious, but nonetheless straightforward). Thus there is no state $|w\rangle\in\mathbb{C}^3$ that is orthogonal to more than 2 of the $|\phi_j\rangle$’s, so no product state $|w_1\rangle\otimes|w_2\rangle\in\mathbb{C}^3\otimes\mathbb{C}^3$ is orthogonal to more than 2 + 2 = 4 members of $S_{\textup{pyr}}$, which verifies unextendibility.
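Both of these claims are easy to confirm numerically; here is a minimal sketch:

```python
import numpy as np
from itertools import combinations

h = 0.5 * np.sqrt(1 + np.sqrt(5))
N = 0.5 * np.sqrt(5 + np.sqrt(5))
phi = [np.array([np.cos(2 * np.pi * j / 5), np.sin(2 * np.pi * j / 5), h]) / N
       for j in range(5)]
v = [np.kron(phi[i], phi[(2 * i) % 5]) for i in range(5)]

# mutual orthogonality: the Gram matrix of the |v_i> is the identity
gram = np.array([[np.dot(a, b) for b in v] for a in v])

# any 3 of the |phi_j> are linearly independent
ranks = [np.linalg.matrix_rank(np.column_stack(t)) for t in combinations(phi, 3)]
```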

### Minimum Size

One interesting question concerning unextendible product bases asks for their minimum cardinality. It was immediately noted that any UPB in $\mathbb{C}^{d_1}\otimes\mathbb{C}^{d_2}$ must have cardinality at least $d_1+d_2-1$. To see this, suppose for a contradiction that there existed a UPB $S$ containing $(d_1-1)+(d_2-1)$ or fewer product states, and split $S$ into two sets containing at most $d_1-1$ and at most $d_2-1$ states, respectively. Since fewer than $d_1$ vectors cannot span $\mathbb{C}^{d_1}$, there is a state $|w_1\rangle\in\mathbb{C}^{d_1}$ orthogonal to the first factor of every state in the first set, and similarly a state $|w_2\rangle\in\mathbb{C}^{d_2}$ orthogonal to the second factor of every state in the second set. Then $|w_1\rangle\otimes|w_2\rangle$ is orthogonal to every member of $S$, which shows that $S$ is extendible.
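The extending state in this argument can be constructed explicitly with a null-space computation. A sketch (dimensions arbitrary, helper names mine; the states here are random product states, since only the counting matters):

```python
import numpy as np

def orthogonal_to(vectors, d):
    """A vector in C^d orthogonal to each given vector (needs len(vectors) < d)."""
    A = np.conj(np.array(vectors))  # rows are <v_k|
    return np.linalg.svd(A, full_matrices=True)[2][-1].conj()

rng = np.random.default_rng(0)
d1, d2 = 3, 4
k = (d1 - 1) + (d2 - 1)  # too few product states to be unextendible
a = [rng.normal(size=d1) + 1j * rng.normal(size=d1) for _ in range(k)]
b = [rng.normal(size=d2) + 1j * rng.normal(size=d2) for _ in range(k)]
S = [np.kron(a[i], b[i]) for i in range(k)]

w1 = orthogonal_to(a[:d1 - 1], d1)  # kills the first d1-1 states
w2 = orthogonal_to(b[d1 - 1:], d2)  # kills the remaining d2-1 states
z = np.kron(w1, w2)                 # orthogonal to every member of S
```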

Despite being such a simple lower bound, it is also attainable for many values of $d_1,d_2$ [2] (and very close to attainable in the other cases [3,4]). The goal of this post is to focus on the case when there exists a UPB of cardinality $d_1+d_2-1$, which is characterized by the following result of Alon and Lovász:

Theorem [2]. There exists a UPB in $\mathbb{C}^{d_1}\otimes\mathbb{C}^{d_2}$ of (necessarily minimal) size $d_1+d_2-1$ if and only if $d_1,d_2\geq 3$ and at least one of $d_1$ or $d_2$ is odd.

In spite of the above result that demonstrates the existence of a UPB of the given minimal size in many cases, how to actually construct such a UPB in these cases is not immediately obvious, and is buried throughout the proofs of [2] and its references. The goal of the rest of this post is to make the construction of a minimal UPB in these cases explicit.

### Orthogonality Graphs

The orthogonality graph of a set of $s$ product states in $\mathbb{C}^{d_1}\otimes\mathbb{C}^{d_2}$ is a graph with coloured edges (there are 2 colours) on $s$ vertices (one for each product state), such that there is an edge of the $i$th colour connecting two vertices if and only if the two corresponding product states are orthogonal on the $i$th party.

For example, the orthogonality graph of the Pyramid UPB introduced earlier is illustrated below. Black edges represent states that are orthogonal on the first party, and red dotted edges represent states that are orthogonal on the second party.

If the product states under consideration are mutually orthogonal, then their orthogonality graph is the complete graph $K_s$. Unextendibility is a bit more difficult to determine, but nonetheless a useful technique for constructing UPBs is to first choose a colouring of the edges of $K_s$, and then try to construct product states that lead to that colouring.

### A Minimal Construction

In the orthogonality graph of the Pyramid UPB, all of the edges that connect a vertex to a neighbouring vertex are coloured black, and all other edges are coloured red. We can construct minimal UPBs by generalizing this graph in a natural way. Suppose without loss of generality that $d_1$ is odd, and we wish to construct a UPB of size $s := d_1 + d_2 - 1$. We construct the orthogonality graph by arranging $s$ vertices in a circle and connecting any vertices that are a distance of $(d_1-1)/2$ or less from each other via a black edge. All other edges are coloured red. For example, in the $d_1 = d_2 = 3$ case, this gives the orthogonality graph above. In the $d_1 = 5, d_2 = 4$ case, this gives the orthogonality graph below.

Our goal now is to construct product states that have the given orthogonality graph. This is straightforward to do, since every state must be orthogonal to $d_1-1$ of the other states on $\mathbb{C}^{d_1}$ and orthogonal to the $d_2-1$ other states on $\mathbb{C}^{d_2}$. Thus, we can just pick $|v_0\rangle$ arbitrarily, then pick $|v_1\rangle$ randomly subject to the constraint that it is orthogonal to $|v_0\rangle$ on the first subsystem, and so on, working our way clockwise around the orthogonality graph, generating each product state randomly subject to the orthogonality conditions.
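The sequential procedure just described can be sketched in a few lines of Python/NumPy (my own illustration; the random seed and helper names are not from the post). Here is the $d_1 = d_2 = 3$, $s = 5$ case, where black edges join vertices at circular distance 1 and red edges join vertices at distance 2:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unit_orthogonal_to(vectors, d):
    """Random unit vector in C^d orthogonal to every vector in `vectors`."""
    if vectors:
        _, _, Vh = np.linalg.svd(np.conj(np.array(vectors)))
        basis = Vh[len(vectors):].conj().T   # orthonormal basis of the orthogonal complement
    else:
        basis = np.eye(d)
    x = basis @ (rng.normal(size=basis.shape[1]) + 1j * rng.normal(size=basis.shape[1]))
    return x / np.linalg.norm(x)

d1, d2 = 3, 3
s, r = d1 + d2 - 1, (d1 - 1) // 2
dist = lambda i, j: min((i - j) % s, (j - i) % s)   # circular distance

# work clockwise around the circle, generating each product state |a_i>(x)|b_i>
# randomly subject to the orthogonality conditions of the coloured graph
a, b = [], []
for i in range(s):
    a.append(random_unit_orthogonal_to([a[j] for j in range(i) if dist(i, j) <= r], d1))
    b.append(random_unit_orthogonal_to([b[j] for j in range(i) if dist(i, j) > r], d2))

# check: every pair is orthogonal on the party its edge colour dictates
for i in range(s):
    for j in range(i):
        pair = (a[i], a[j]) if dist(i, j) <= r else (b[i], b[j])
        assert abs(np.vdot(*pair)) < 1e-10
print("all orthogonality constraints satisfied")
```

Each new vector faces at most $d_1 - 1$ (respectively $d_2 - 1$) earlier constraints, so the orthogonal complement it is drawn from is always nonempty.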

Furthermore, it can be shown (but will not be shown here – the techniques are similar to those of [4] and are a bit technical) that this procedure leads to a product basis that is in fact unextendible with probability 1. In order to verify unextendibility explicitly, one approach is to check that any subset of $d_1$ of the product states are linearly independent on $\mathbb{C}^{d_1}$ and any subset of $d_2$ of the product states are linearly independent on $\mathbb{C}^{d_2}$.
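This linear-independence check is easy to carry out numerically. Here is a Python/NumPy sketch (my own, using the standard Pyramid UPB vectors from [1] as input) that verifies the criterion in the $d_1 = d_2 = 3$ case, where every 3-element subset must be linearly independent on each side:

```python
import numpy as np
from itertools import combinations

# Pyramid UPB vectors: |psi_j> = |v_j> (x) |v_{2j mod 5}>
h = np.sqrt(1 + np.sqrt(5)) / 2
v = np.array([[np.cos(2*np.pi*j/5), np.sin(2*np.pi*j/5), h] for j in range(5)])
A = v                                  # vectors on the first party
B = v[[2 * j % 5 for j in range(5)]]   # vectors on the second party

def every_subset_independent(vectors, d):
    """True if every d-element subset of `vectors` is linearly independent in C^d."""
    return all(np.linalg.matrix_rank(np.array(sub)) == d
               for sub in combinations(vectors, d))

# a product state |a>(x)|b> orthogonal to all 5 states would need |a> orthogonal
# to some of them on party 1 and |b> to the rest on party 2; independence of
# every 3-subset caps these counts at 2 + 2 < 5, so no such state exists
print(every_subset_independent(A, 3) and every_subset_independent(B, 3))   # True
```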

References

1. C. H. Bennett, D. P. DiVincenzo, T. Mor, P. W. Shor, J. A. Smolin, and B. M. Terhal. Unextendible product bases and bound entanglement. Phys. Rev. Lett., 82:5385–5388, 1999. E-print: arXiv:quant-ph/9808030
2. N. Alon and L. Lovász. Unextendible product bases. J. Combinatorial Theory, Ser. A, 95:169–179, 2001.
3. K. Feng. Unextendible product bases and 1-factorization of complete graphs. Discrete Appl. Math., 154:942–949, 2006.
4. J. Chen and N. Johnston. The minimum size of unextendible product bases in the bipartite case (and some multipartite cases). Comm. Math. Phys., 333(1):351–365, 2015. E-print: arXiv:1301.1406 [quant-ph]

## Norms and Dual Norms as Supremums and Infimums

May 26th, 2012

Let $\mathcal{H}$ be a finite-dimensional Hilbert space over $\mathbb{R}$ or $\mathbb{C}$ (the fields of real and complex numbers, respectively). If we let $\|\cdot\|$ be a norm on $\mathcal{H}$ (not necessarily the norm induced by the inner product), then the dual norm of $\|\cdot\|$ is defined by

$\displaystyle\|\mathbf{v}\|^\circ := \sup_{\mathbf{w} \in \mathcal{H}}\Big\{ \big| \langle \mathbf{v}, \mathbf{w} \rangle \big| : \|\mathbf{w}\| \leq 1 \Big\}.$

The double-dual of a norm is equal to itself (i.e., $\|\cdot\|^{\circ\circ} = \|\cdot\|$) and the norm induced by the inner product is the unique norm that is its own dual. More generally, if $\|\cdot\|_p$ is the vector p-norm, then $\|\cdot\|_p^\circ = \|\cdot\|_q$, where $q$ satisfies $1/p + 1/q = 1$.
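As a quick numerical sanity check of the p-norm duality (a Python/NumPy sketch of my own, not part of the original post), random vectors in the unit ball of $\|\cdot\|_p$ never beat $\|\mathbf{v}\|_q$, while the Hölder-equality vector attains it exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 3.0
q = p / (p - 1)                     # conjugate exponent: 1/p + 1/q = 1
v = rng.normal(size=6)
q_norm = np.sum(np.abs(v) ** q) ** (1 / q)

# random unit vectors in the p-norm ball never exceed the q-norm...
best = max(abs(v @ (w / np.sum(np.abs(w) ** p) ** (1 / p)))
           for w in rng.normal(size=(10000, 6)))
assert best <= q_norm + 1e-12

# ...and the Holder-equality vector attains it exactly
w_star = np.sign(v) * np.abs(v) ** (q - 1)
w_star /= np.sum(np.abs(w_star) ** p) ** (1 / p)
print(abs(v @ w_star), q_norm)      # these two numbers agree
```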

In this post, we will demonstrate that $\|\cdot\|^\circ$ has an equivalent characterization as an infimum, and then use this characterization to provide a simple derivation of several known (but perhaps not well-known) formulas for norms such as the operator norm of matrices.

For certain norms (such as the “separability norms” presented at the end of this post), this ability to write a norm as both an infimum and a supremum is useful because computation of the norm may be difficult. However, having these two different characterizations of a norm allows us to bound it both from above and from below.

### The Dual Norm as an Infimum

Theorem 1. Let $S \subseteq \mathcal{H}$ be a bounded set satisfying ${\rm span}(S) = \mathcal{H}$ and define a norm $\|\cdot\|$ by

$\displaystyle\|\mathbf{v}\| := \sup_{\mathbf{w} \in S}\Big\{ \big| \langle \mathbf{v}, \mathbf{w} \rangle \big| \Big\}.$

Then $\|\cdot\|^\circ$ is given by

$\displaystyle\|\mathbf{v}\|^\circ = \inf\Big\{ \sum_i |c_i| : \mathbf{v} = \sum_i c_i \mathbf{v}_i, \mathbf{v}_i \in S \ \forall \, i \Big\},$

where the infimum is taken over all such decompositions of $\mathbf{v}$.

Before proving the result, we make two observations. Firstly, the quantity $\|\cdot\|$ described by Theorem 1 really is a norm: boundedness of $S$ ensures that the supremum is finite, and ${\rm span}(S) = \mathcal{H}$ ensures that $\|\mathbf{v}\| = 0 \implies \mathbf{v} = 0$. Secondly, every norm on $\mathcal{H}$ can be written in this way: we can always choose $S$ to be the unit ball of the dual norm $\|\cdot\|^\circ$. However, there are times when other choices of $S$ are more useful or enlightening (as we will see in the examples).

Proof of Theorem 1. Begin by noting that if $\mathbf{w} \in S$ and $\|\mathbf{v}\| \leq 1$ then $\big| \langle \mathbf{v}, \mathbf{w} \rangle \big| \leq 1$. It follows that $\|\mathbf{w}\|^{\circ} \leq 1$ whenever $\mathbf{w} \in S$. In fact, we now show that $\|\cdot\|^\circ$ is the largest norm on $\mathcal{H}$ with this property. To this end, let $\|\cdot\|_\prime$ be another norm satisfying $\|\mathbf{w}\|_{\prime}^{\circ} \leq 1$ whenever $\mathbf{w} \in S$. Then

$\displaystyle \| \mathbf{v} \| = \sup_{\mathbf{w} \in S} \Big\{ \big| \langle \mathbf{w}, \mathbf{v} \rangle \big| \Big\} \leq \sup_{\mathbf{w}} \Big\{ \big| \langle \mathbf{w}, \mathbf{v} \rangle \big| : \|\mathbf{w}\|_{\prime}^{\circ} \leq 1 \Big\} = \|\mathbf{v}\|_\prime.$

Thus  $\| \cdot \| \leq \| \cdot \|_\prime$, so by taking duals we see that $\| \cdot \|^\circ \geq \| \cdot \|_\prime^\circ$, as desired.

For the remainder of the proof, we denote the infimum in the statement of the theorem by $\|\cdot\|_{{\rm inf}}$. Our goal now is to show that: (1) $\|\cdot\|_{{\rm inf}}$ is a norm, (2) $\|\cdot\|_{{\rm inf}}$ satisfies $\|\mathbf{w}\|_{{\rm inf}} \leq 1$ whenever $\mathbf{w} \in S$, and (3) $\|\cdot\|_{{\rm inf}}$ is the largest norm satisfying property (2). The fact that $\|\cdot\|_{{\rm inf}} = \|\cdot\|^\circ$ will then follow from the first paragraph of this proof.

To see (1) (i.e., to prove that $\|\cdot\|_{{\rm inf}}$ is a norm), we only prove the triangle inequality, since positive homogeneity and the fact that $\|\mathbf{v}\|_{{\rm inf}} = 0$ if and only if $\mathbf{v} = 0$ are both straightforward (try them yourself!). Fix $\varepsilon > 0$ and let $\mathbf{v} = \sum_i c_i \mathbf{v}_i$, $\mathbf{w} = \sum_i d_i \mathbf{w}_i$ be decompositions of $\mathbf{v}, \mathbf{w}$ with $\mathbf{v}_i, \mathbf{w}_i \in S$ for all i, satisfying $\sum_i |c_i| \leq \|\mathbf{v}\|_{{\rm inf}} + \varepsilon$ and $\sum_i |d_i| \leq \|\mathbf{w}\|_{{\rm inf}} + \varepsilon$. Then

$\displaystyle \|\mathbf{v} + \mathbf{w}\|_{{\rm inf}} \leq \sum_i |c_i| + \sum_i |d_i| \leq \|\mathbf{v}\|_{{\rm inf}} + \|\mathbf{w}\|_{{\rm inf}} + 2\varepsilon.$

Since $\varepsilon > 0$ was arbitrary, the triangle inequality follows, so $\|\cdot\|_{{\rm inf}}$ is a norm.

To see (2) (i.e., to prove that $\|\mathbf{v}\|_{{\rm inf}} \leq 1$ whenever $\mathbf{v} \in S$), we simply write $\mathbf{v}$ in its trivial decomposition $\mathbf{v} = \mathbf{v}$, which gives the single coefficient $c_1 = 1$, so $\|\mathbf{v}\|_{{\rm inf}} \leq \sum_i |c_i| = |c_1| = 1$.

To see (3) (i.e., to prove that $\|\cdot\|_{{\rm inf}}$ is the largest norm on $\mathcal{H}$ satisfying condition (2)), begin by letting $\|\cdot\|_\prime$ be any norm on $\mathcal{H}$ with the property that $\|\mathbf{v}\|_{\prime} \leq 1$ for all $\mathbf{v} \in S$. Then using the triangle inequality for $\|\cdot\|_\prime$ shows that if $\mathbf{v} = \sum_i c_i \mathbf{v}_i$ is any decomposition of $\mathbf{v}$ with $\mathbf{v}_i \in S$ for all i, then

$\displaystyle\|\mathbf{v}\|_\prime = \Big\|\sum_i c_i \mathbf{v}_i\Big\|_\prime \leq \sum_i |c_i| \|\mathbf{v}_i\|_\prime = \sum_i |c_i|.$

Taking the infimum over all such decompositions of $\mathbf{v}$ shows that $\|\mathbf{v}\|_\prime \leq \|\mathbf{v}\|_{{\rm inf}}$, which completes the proof.

The remainder of this post is devoted to investigating what Theorem 1 says about certain specific norms.

### Injective and Projective Cross Norms

If we let $\mathcal{H} = \mathcal{H}_1 \otimes \mathcal{H}_2$, where $\mathcal{H}_1$ and $\mathcal{H}_2$ are themselves finite-dimensional Hilbert spaces, then one often considers the injective and projective cross norms on $\mathcal{H}$, defined respectively as follows:

$\displaystyle \|\mathbf{v}\|_{I} := \sup\Big\{ \big| \langle \mathbf{v}, \mathbf{a} \otimes \mathbf{b} \rangle \big| : \|\mathbf{a}\| = \|\mathbf{b}\| = 1 \Big\} \text{ and}$

$\displaystyle \|\mathbf{v}\|_{P} := \inf\Big\{ \sum_i \| \mathbf{a}_i \| \| \mathbf{b}_i \| : \mathbf{v} = \sum_i \mathbf{a}_i \otimes \mathbf{b}_i \Big\},$

where $\|\cdot\|$ here refers to the norm induced by the inner product on $\mathcal{H}_1$ or $\mathcal{H}_2$. The fact that $\|\cdot\|_{I}$ and $\|\cdot\|_{P}$ are duals of each other is simply Theorem 1 in the case when S is the set of product vectors:

$\displaystyle S = \big\{ \mathbf{a} \otimes \mathbf{b} : \|\mathbf{a}\| = \|\mathbf{b}\| = 1 \big\}.$

In fact, the typical proof that the injective and projective cross norms are duals of each other is very similar to the proof of Theorem 1 provided above (see [1, Chapter 1]).

### Maximum and Taxicab Norms

Use $n$ to denote the dimension of $\mathcal{H}$ and let $\{\mathbf{e}_i\}_{i=1}^n$ be an orthonormal basis of $\mathcal{H}$. If we let $S = \{\mathbf{e}_i\}_{i=1}^n$ then the norm $\|\cdot\|$ in the statement of Theorem 1 is the maximum norm (i.e., the p = ∞ norm):

$\displaystyle\|\mathbf{v}\|_\infty = \sup_i\Big\{\big|\langle \mathbf{v}, \mathbf{e}_i \rangle \big| \Big\} = \max \big\{ |v_1|,\ldots,|v_n|\big\},$

where $v_i = \langle \mathbf{v}, \mathbf{e}_i \rangle$ is the i-th coordinate of $\mathbf{v}$ in the basis $\{\mathbf{e}_i\}_{i=1}^n$. The theorem then says that the dual of the maximum norm is

$\displaystyle \|\mathbf{v}\|_\infty^\circ = \inf \Big\{ \sum_i |c_i| : \mathbf{v} = \sum_i c_i \mathbf{e}_i \Big\} = \sum_{i=1}^n |v_i|,$

which is the taxicab norm (i.e., the p = 1 norm), as we expect.

### Operator and Trace Norm of Matrices

If we let $\mathcal{H} = M_n$, the space of $n \times n$ complex matrices with the Hilbert–Schmidt inner product

$\displaystyle \big\langle A, B \big\rangle := {\rm Tr}(AB^*),$

then it is well-known that the operator norm and the trace norm are dual to each other:

$\displaystyle \big\| A \big\|_{op} := \sup_{\mathbf{v}}\Big\{ \big\|A\mathbf{v}\big\| : \|\mathbf{v}\| = 1 \Big\} \text{ and}$

$\displaystyle \big\| A \big\|_{op}^\circ = \big\|A\big\|_{tr} := \sup_{U}\Big\{ \big| {\rm Tr}(AU) \big| : U \in M_n \text{ is unitary} \Big\},$

where $\|\cdot\|$ is the Euclidean norm on $\mathbb{C}^n$. If we let $S$ be the set of unitary matrices in $M_n$, then Theorem 1 provides the following alternate characterization of the operator norm:

Corollary 1. Let $A \in M_n$. Then

$\displaystyle \big\|A\big\|_{op} = \inf\Big\{ \sum_i |c_i| : A = \sum_i c_i U_i \text{ and each } U_i \text{ is unitary} \Big\}.$
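The infimum in Corollary 1 is in fact attained, by a decomposition into just two unitaries that can be read off from the singular value decomposition: writing $A = U\Sigma V^*$ with $\sigma_1 = \|A\|_{op}$, each scaled singular value $\sigma_i/\sigma_1 \in [0,1]$ is the average of two complex numbers of modulus 1. The following NumPy sketch (my own illustration, not from the original post) verifies this on a random matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

U, s, Vh = np.linalg.svd(A)
op_norm = s[0]
d = s / op_norm                                # scaled singular values, in [0, 1]
E = np.diag(d + 1j * np.sqrt(1 - d ** 2))      # diagonal unitary with (E + E*)/2 = diag(d)
W1, W2 = U @ E @ Vh, U @ E.conj() @ Vh         # two unitaries
c = op_norm / 2

assert np.allclose(c * (W1 + W2), A)           # A = c W1 + c W2
assert np.allclose(W1.conj().T @ W1, np.eye(n))
assert np.allclose(W2.conj().T @ W2, np.eye(n))
print(2 * abs(c), op_norm)                     # sum of |coefficients| equals ||A||_op
```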

As an application of Corollary 1, we are able to provide the following characterization of unitarily-invariant norms (i.e., norms $\|\cdot\|_{\prime}$ with the property that $\big\|UAV\big\|_{\prime} = \big\|A\big\|_{\prime}$ for all unitary matrices $U, V \in M_n$):

Corollary 2. Let $\|\cdot\|_\prime$ be a norm on $M_n$. Then $\|\cdot\|_\prime$ is unitarily-invariant if and only if

$\displaystyle \big\|ABC\big\|_\prime \leq \big\|A\big\|_{op}\big\|B\big\|_{\prime}\big\|C\big\|_{op}$

for all $A, B, C \in M_n$.

Proof of Corollary 2. The “if” direction is straightforward: if we let $A$ and $C$ be unitary, then

$\displaystyle \big\|B\big\|_\prime = \big\|A^*ABCC^*\big\|_\prime \leq \big\|ABC\big\|_\prime \leq \big\|B\big\|_{\prime},$

where we used the fact that $\big\|A\big\|_{op} = \big\|C\big\|_{op} = 1$. It follows that $\big\|ABC\big\|_\prime = \big\|B\big\|_\prime$, so $\|\cdot\|_\prime$ is unitarily-invariant.

To see the “only if” direction, write $A = \sum_i c_i U_i$ and $C = \sum_j d_j V_j$ with each $U_i$ and $V_j$ unitary. Then

$\displaystyle \big\|ABC\big\|_\prime = \Big\|\sum_{i,j}c_i d_j U_i B V_j\Big\|_\prime \leq \sum_{i,j} |c_i| |d_j| \big\|U_i B V_j\big\|_\prime = \sum_{i,j} |c_i| |d_j| \big\|B\big\|_\prime.$

By taking the infimum over all decompositions of $A$ and $C$ of the given form and using Corollary 1, the result follows.

An alternate proof of Corollary 2, making use of some results on singular values, can be found in [2, Proposition IV.2.4].
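For a concrete instance of Corollary 2, the trace norm is unitarily-invariant, so the stated inequality must hold for it; a quick NumPy check on random matrices (my own illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
A, B, C = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)) for _ in range(3))

op_norm = lambda M: np.linalg.norm(M, 2)       # largest singular value
tr_norm = lambda M: np.linalg.norm(M, 'nuc')   # trace norm (sum of singular values)

# the trace norm is unitarily-invariant, so Corollary 2 gives:
assert tr_norm(A @ B @ C) <= op_norm(A) * tr_norm(B) * op_norm(C) + 1e-9
print("inequality holds")
```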

### Separability Norms

As our final (and least well-known) example, let $\mathcal{H} = M_m \otimes M_n$, again with the usual Hilbert–Schmidt inner product. If we let

$\displaystyle S = \{ \mathbf{a}\mathbf{b}^* \otimes \mathbf{c}\mathbf{d}^* : \|\mathbf{a}\| = \|\mathbf{b}\| = \|\mathbf{c}\| = \|\mathbf{d}\| = 1 \},$

where $\|\cdot\|$ is the Euclidean norm on $\mathbb{C}^m$ or $\mathbb{C}^n$, then Theorem 1 tells us that the following two norms are dual to each other:

$\displaystyle \big\|A\big\|_s := \sup\Big\{ \big| (\mathbf{a}^* \otimes \mathbf{c}^*)A(\mathbf{b} \otimes \mathbf{d}) \big| : \|\mathbf{a}\| = \|\mathbf{b}\| = \|\mathbf{c}\| = \|\mathbf{d}\| = 1 \Big\} \text{ and}$

$\displaystyle \big\|A\big\|_s^\circ = \inf\Big\{ \sum_i \big\|A_i\big\|_{tr}\big\|B_i\big\|_{tr} : A = \sum_i A_i \otimes B_i \Big\}.$

There’s actually a little bit of work to be done to show that $\|\cdot\|_s^\circ$ has the given form, but it’s only a couple lines – consider it an exercise for the interested reader.

Both of these norms come up frequently when dealing with quantum entanglement. The norm $\|\cdot\|_s^\circ$ was the subject of [3], where it was shown that a quantum state $\rho$ is entangled if and only if $\|\rho\|_s^\circ > 1$ (I use the above duality relationship to provide an alternate proof of this fact in [4, Theorem 6.1.5]). On the other hand, the norm $\|\cdot\|_s$ characterizes positive linear maps of matrices and was the subject of [5, 6].
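Although computing $\|\cdot\|_s$ exactly is hard in general, the supremum form suggests a simple alternating heuristic: with $\mathbf{b}, \mathbf{d}$ fixed, the optimal $\mathbf{a}, \mathbf{c}$ come from the leading singular vectors of $A(\mathbf{b}\otimes\mathbf{d})$ reshaped into an $m \times n$ matrix, and vice versa. Here is a rough Python/NumPy see-saw sketch of this idea (entirely my own – it produces lower bounds on $\|A\|_s$, with no guarantee of reaching the global supremum):

```python
import numpy as np

def seesaw_s_norm(A, m, n, iters=100, seed=0):
    """Heuristic lower bound on ||A||_s via alternating SVDs (a see-saw)."""
    rng = np.random.default_rng(seed)
    b = rng.normal(size=m) + 1j * rng.normal(size=m); b /= np.linalg.norm(b)
    d = rng.normal(size=n) + 1j * rng.normal(size=n); d /= np.linalg.norm(d)
    val = 0.0
    for _ in range(iters):
        # with b, d fixed, sup over unit a, c of |(a (x) c)* A (b (x) d)| is the
        # largest singular value of A(b (x) d) reshaped into an m-by-n matrix
        X = (A @ np.kron(b, d)).reshape(m, n)
        U, s, Vh = np.linalg.svd(X)
        a, c = U[:, 0], Vh[0]
        # with a, c fixed, update b, d the same way using A*
        Z = np.conj((A.conj().T @ np.kron(a, c)).reshape(m, n))
        U, s, Vh = np.linalg.svd(Z)
        b, d = U[:, 0].conj(), Vh[0].conj()
        val = s[0]          # objective value; non-decreasing over iterations
    return val

# sanity check on the Bell state |phi> = (|00> + |11>)/sqrt(2),
# for which ||phi><phi||_s = 1/2 is known
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)
A = np.outer(phi, phi.conj())
print(seesaw_s_norm(A, 2, 2))   # 0.5
```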

References

1. J. Diestel, J. H. Fourie, and J. Swart. The Metric Theory of Tensor Products: Grothendieck’s Résumé Revisited. American Mathematical Society, 2008. Chapter 1: pdf
2. R. Bhatia. Matrix Analysis. Springer, 1997.
3. O. Rudolph. A separability criterion for density operators. J. Phys. A: Math. Gen., 33:3951–3955, 2000. E-print: arXiv:quant-ph/0002026
4. N. Johnston. Norms and Cones in the Theory of Quantum Entanglement. PhD thesis, University of Guelph, 2012.
5. N. Johnston and D. W. Kribs. A Family of Norms With Applications in Quantum Information Theory. Journal of Mathematical Physics, 51:082202, 2010.
6. N. Johnston and D. W. Kribs. A Family of Norms With Applications in Quantum Information Theory II. Quantum Information & Computation, 11(1 & 2):104–123, 2011.

## Counting and Solving Final Fantasy XIII-2’s Clock Puzzles

February 6th, 2012

Final Fantasy XIII-2 is a role-playing game, released last week in North America, that contains an abundance of mini-games. One of the more interesting mini-games is the “clock puzzle”, which presents the user with N integers arranged in a circle, with each integer being from 1 to $\lfloor N/2 \rfloor$.

A challenging late-game clock puzzle with N = 12

The way the game works is as follows:

1. You may start by picking any of the N positions on the circle. Call the number in this position M.
2. You now have the option of picking either the number M positions clockwise from your last choice, or M positions counter-clockwise from your last choice. Update the value of M to be the number in the new position that you chose.
3. Repeat step 2 until you have performed it N-1 times.

You win the game if you choose each of the N positions exactly once, and you lose the game otherwise (if you are forced to choose the same position twice, or equivalently if there is a position that you have not chosen after performing step 2 a total of N-1 times). During the game, N ranges from 5 to 13, though N could theoretically be as large as we like.

### Example

To demonstrate the rules in action, consider the following simple example with N = 6 (I have labelled the six positions 0–5 in blue for easy reference):

If we start by choosing the 1 in position 1, then we have the option of choosing the 3 in either position 0 or 2. Let’s choose the 3 in position 0. Three moves either clockwise or counter-clockwise from here both give the 1 in position 3, so that is our only possible next choice. We continue on in this way, going through the N = 6 positions in the order 1 → 0 → 3 → 4 → 2 → 5, as in the following image:

We have now selected each position exactly once, so we are done – we solved the puzzle! In fact, this is the unique solution for the given puzzle.

### Counting Clock Puzzles

Let’s work on determining how many different clock puzzles there are of a given size. As mentioned earlier, a clock puzzle with N positions has an integer in the interval $[1, \lfloor N/2 \rfloor]$ in each of the N positions. There are thus $\lfloor N/2 \rfloor^N$ distinct clock puzzles with N positions, which grows very quickly with N – its values for N = 1, 2, 3, … are given by the sequence 0, 1, 1, 16, 32, 729, 2187, 65536, 262144, … (A206344 in the OEIS).

However, this rather crude count of the number of clock puzzles ignores the fact that some clock puzzles have no solution. To illustrate this fact, we present the following simple proposition:

Proposition. There are unsolvable clock puzzles with N positions if and only if N = 4 or N ≥ 6.

To prove this proposition, first note that the clock puzzles for N = 2 or N = 3 are trivially solvable, since each number in the puzzle is forced to be $\lfloor N/2 \rfloor = 1$. The 32 clock puzzles in the N = 5 case can all easily be shown to be solvable via computer brute force (does anyone have a simple or elegant argument for this case?).

In the N = 4 case, exactly 3 of the 16 clock puzzles are unsolvable:

To complete the proof, it suffices to demonstrate an unsolvable clock puzzle for each N ≥ 6. To this end, we begin by considering the following clock puzzle in the N = 6 case:

The above puzzle is unsolvable because the only way to reach position 0 is to select it first, but from there only one of positions 2 or 4 can be reached – not both. This example generalizes in a straightforward manner to any N ≥ 6 simply by adding more 1’s to the bottom: it will still be necessary to choose position 0 first, and then it is impossible to reach both position 2 and position N-2 from there.

There doesn’t seem to be an elegant way to count the number of solvable clock puzzles with N positions (a difficulty that is most likely related to the apparent computational hardness of solving these puzzles, discussed in the next section), so let’s count the number of solvable clock puzzles via brute force. Simply constructing each of the $\lfloor N/2 \rfloor^N$ clock puzzles and determining which of them are solvable (via the MATLAB script linked at the end of this post) shows that the number of solvable clock puzzles for N = 1, 2, 3, … is given by the sequence 0, 1, 1, 13, 32, 507, 1998, 33136, 193995, … (A206345 in the OEIS).

This count of puzzles is perhaps still unsatisfying, though, since it counts puzzles that are simply mirror images or rotations of each other multiple times. Again, there doesn’t seem to be an elegant counting argument for enumerating the solvable clock puzzles up to rotation and reflection, so we compute this sequence by brute force: 0, 1, 1, 4, 8, 72, 236, 3665, 19037, … (A206346 in the OEIS).

### Solving Clock Puzzles

Clock puzzles are one of the most challenging parts of Final Fantasy XIII-2, and with good reason: they are a well-studied graph theory problem in disguise. We can consider each clock puzzle with N positions as a directed graph on N vertices (one for each position). If position i contains the number M, then there are directed edges going from vertex i to the two vertices M positions clockwise and counter-clockwise from it. In other words, we consider a clock puzzle as a directed graph on N vertices whose directed edges describe the valid moves around the circle.

The directed graph corresponding to the earlier (solvable) N = 6 example
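This graph is immediate to write down in code; a small Python sketch (mine, for illustration), applied to the solvable N = 6 example from earlier:

```python
def clock_graph(puzzle):
    """Directed out-neighbours: from position i you may move puzzle[i]
    steps clockwise or counter-clockwise around the circle."""
    n = len(puzzle)
    return {i: sorted({(i + puzzle[i]) % n, (i - puzzle[i]) % n})
            for i in range(n)}

g = clock_graph([3, 1, 3, 1, 2, 3])
print(g)   # {0: [3], 1: [0, 2], 2: [5], 3: [2, 4], 4: [0, 2], 5: [2]}
```

Note that every vertex has outdegree at most 2 (exactly 1 when the two moves coincide, as for position 0 here, where moving 3 steps in either direction lands on position 3).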

The problem of solving a clock puzzle is then exactly the problem of finding a directed Hamiltonian path in the associated graph. Because finding a directed Hamiltonian path is NP-hard in general, this seems to suggest that solving clock puzzles might be as well. Of course, the directed graphs that arise from clock puzzles have very special structure – in particular, every vertex has outdegree ≤ 2, and the graph has a symmetry property that results from the clockwise/counter-clockwise movement allowed in the puzzles.

The main result of [1] shows that bounding the outdegree of each vertex by 2 is no real help: finding directed Hamiltonian paths remains NP-hard under that promise. However, the symmetry condition seems more difficult to characterize in graph-theoretic terms, and could potentially be exploited to produce a fast algorithm for solving these puzzles.

Regardless of the problem’s computational complexity, the puzzles found in the game are quite small (N ≤ 13), so they can be easily solved by brute force. Attached is a MATLAB script (solve_clock.m) that can be used to solve clock puzzles. The first input argument is a vector containing the numeric values in each of the positions, starting from the top and reading clockwise. By default, only one solution is computed. To compute all solutions, set the second (optional) input argument to 1.

The output of the script is either a vector of positions (labelled 0 through N-1, with 0 referring to the top position, 1 referring to one position clockwise from there, and so on) describing an order in which you can visit the positions to solve the puzzle, or 0 if there is no solution.

For example, the script can be used to find our solution to the N = 6 example provided earlier:

>> solve_clock([3,1,3,1,2,3])

ans =
1 0 3 4 2 5

Similarly, the script can be used to find all four solutions [Update, October 1, 2013: Whoops, there are six solutions! See the comments.] to the puzzle in the screenshot at the very top of this post:

>> solve_clock([6,5,1,4,2,1,6,4,2,1,5,2], 1)

ans =
3 7 11 9 10 5 4 2 1 8 6 0
7 3 11 9 10 5 4 2 1 8 6 0
9 10 5 4 2 3 7 11 1 8 6 0
9 8 10 5 4 2 3 7 11 1 6 0
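For readers without MATLAB, the same brute-force search is easy to reproduce; here is a Python equivalent (my own re-implementation, not the attached script) that returns one solution as a list of positions, or None:

```python
def solve_clock(puzzle):
    """Depth-first search for an order visiting every position exactly once;
    positions are labelled 0 (top) through N-1, increasing clockwise."""
    n = len(puzzle)

    def extend(path, visited):
        if len(path) == n:
            return path
        i, m = path[-1], puzzle[path[-1]]
        for j in ((i + m) % n, (i - m) % n):
            if j not in visited:
                found = extend(path + [j], visited | {j})
                if found:
                    return found
        return None

    for start in range(n):
        found = extend([start], {start})
        if found:
            return found
    return None

print(solve_clock([3, 1, 3, 1, 2, 3]))   # [1, 0, 3, 4, 2, 5]
```

Applied to the N = 6 example from earlier, it finds the unique solution 1 → 0 → 3 → 4 → 2 → 5.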

Download

References

1. J. Plesnik. The NP-completeness of the Hamiltonian cycle problem in planar digraphs with degree bound two. Inform. Process. Lett., 8:199–201, 1979.