Counting the Possible Orderings of Pairwise Multiplication

February 12th, 2014

Suppose we are given n distinct positive real numbers a_1 > a_2 > \cdots > a_n > 0. The question we are going to consider in this post is as follows:

Question. How many different possible orderings are there of the n(n+1)/2 numbers \{a_ia_j\}_{1\leq i\leq j\leq n}?

To help illustrate what we mean by this question, consider the n = 2 case, where a_1 > a_2 > 0. Then the 3 possible products of a_1 and a_2 are a_1^2, a_2^2, a_1a_2, and it is straightforward to see that we must have a_1^2 > a_1a_2> a_2^2, so there is only one possible ordering in the n = 2 case.

In the n = 3 case, we have a_1 > a_2 > a_3 > 0 and 6 possible products: a_1^2, a_2^2, a_3^2, a_1a_2, a_1a_3, a_2a_3. Some relationships between these 6 numbers are immediate, such as a_1^2 > a_1a_2 > a_1a_3 > a_2a_3 > a_3^2. However, it could be the case that either a_2^2 > a_1a_3 or a_1a_3 > a_2^2 (we ignore the degenerate cases where two products are equal to each other), so there are two different possible orderings in this case:

a_1^2 > a_1a_2 > a_2^2 > a_1a_3 > a_2a_3 > a_3^2\quad\text{ or }\\ a_1^2 > a_1a_2 > a_1a_3 > a_2^2 > a_2a_3 > a_3^2.

In this post, we will consider the problem of how many such orderings exist for larger values of n. This problem arises naturally from a problem in quantum entanglement: the number of such orderings is exactly the minimum number of linear matrix inequalities needed to characterize the eigenvalues of quantum states that are “PPT from spectrum” [1].

A Rough Upper Bound

We now begin constructing upper bounds on the number of possible orderings of \{a_ia_j\}_{1\leq i\leq j\leq n}. Since we are counting orderings between n(n+1)/2 numbers, a trivial upper bound is given by (n(n+1)/2)!, since that is the number of possible orderings of n(n+1)/2 arbitrary numbers. However, this quantity is a gross overestimate.

We can improve this upper bound by creating an n \times n matrix whose (i,j)-entry is a_ia_j (note that this matrix is symmetric, positive semidefinite, and has rank 1, which is roughly how the connection to quantum entanglement arises). For example, in the n = 4 case, this matrix is as follows:

\begin{bmatrix}a_1^2 & a_1a_2 & a_1a_3 & a_1a_4 \\ * & a_2^2 & a_2a_3 & a_2a_4 \\ * & * & a_3^2 & a_3a_4 \\ * & * & * & a_4^2\end{bmatrix},

where we have used asterisks (*) to indicate entries that are determined by symmetry. The fact that a_1 > a_2 > \cdots > a_n > 0 implies that the rows and columns of the upper-triangular part of this matrix are decreasing. Thus we can get an upper bound to the solution to our problem by counting the number of ways that we can place the numbers 1, 2, \ldots, n(n+1)/2 (exactly once each) in the upper-triangular part of a matrix in such a way that the rows and columns of that upper-triangular part are decreasing. For example, this can be done in 2 different ways in the n = 3 case:

\begin{bmatrix}6 & 5 & 4 \\ * & 3 & 2 \\ * & * & 1\end{bmatrix} \quad \text{and} \quad \begin{bmatrix}6 & 5 & 3\\ * & 4 & 2\\ * & * & 1\end{bmatrix}.

The matrix above on the left corresponds to the case a_1a_3 > a_2^2 discussed earlier, while the matrix above on the right corresponds to the case a_2^2 > a_1a_3.

A formula for the number of such ways to place the integers 1, 2, \ldots, n(n+1)/2 in a matrix was derived in [2] (see also A003121 in the OEIS), which immediately gives us the following upper bound on the number of orderings of the products \{a_ia_j\}_{1\leq i\leq j\leq n}:

\displaystyle(n(n+1)/2)! \frac{1! 2! \cdots (n-1)!}{1! 3! \cdots (2n-1)!}.

For n = 1, 2, 3, …, this formula gives the values 1, 1, 2, 12, 286, 33592, 23178480, …
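As a sanity check, the placements counted by this formula can also be enumerated directly by brute force. The following Python sketch (my own quick check, not the code used for the computations in this post) places the numbers largest-first:

```python
from functools import lru_cache

def count_decreasing_placements(n):
    """Count ways to place 1, ..., n(n+1)/2 in the upper-triangular part
    of an n x n matrix so that its rows and columns are decreasing."""
    cells = tuple((i, j) for i in range(n) for j in range(i, n))

    @lru_cache(maxsize=None)
    def count(filled):
        if len(filled) == len(cells):
            return 1
        fset = frozenset(filled)
        total = 0
        # The largest remaining number may go in any empty cell whose
        # neighbours above and to the left are already filled.
        for (i, j) in cells:
            if (i, j) in fset:
                continue
            if j > i and (i, j - 1) not in fset:
                continue
            if i > 0 and (i - 1, j) not in fset:
                continue
            total += count(tuple(sorted(filled + ((i, j),))))
        return total

    return count(())

print([count_decreasing_placements(n) for n in range(1, 6)])
# [1, 1, 2, 12, 286], matching the values given by the formula above
```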

A Better Upper Bound

Before improving the upper bound that we just presented, let’s first discuss why it is not actually a solution to the original question. In the n = 4 case, our best upper bound so far is 12, since there are 12 different ways to place the integers 1,2,\ldots,10 in the upper-triangular part of a 4 \times 4 matrix such that the rows and columns of that upper-triangular part are decreasing. However, one such placement is as follows:

\begin{bmatrix}10 & 9 & 7 & 6 \\ * & 8 & 5 & 3 \\ * & * & 4 & 2 \\ * & * & * & 1\end{bmatrix}.

The above matrix corresponds to the following inequalities in terms of \{a_ia_j\}_{1\leq i\leq j\leq n}:

a_1^2 > a_1a_2 > a_2^2 > a_1a_3 > a_1a_4 > a_2a_3 > a_3^2 > a_2a_4 > a_3a_4 > a_4^2.

The problem here is that there actually do not exist real numbers a_1 > a_2 > a_3 > a_4 > 0 that satisfy the above string of inequalities. To see this, notice in particular that we have the following three inequalities: a_2^2 > a_1a_3, a_1a_4 > a_2a_3, and a_3^2 > a_2a_4. However, multiplying the first two inequalities together gives a_1a_2^2a_4 > a_1a_2a_3^2, so a_2a_4 > a_3^2, which contradicts the third inequality.

More generally, there cannot be indices i,j,k,\ell,m,p such that we simultaneously have the following three inequalities:

a_ia_j > a_ka_\ell, \quad a_\ell a_m > a_j a_p, \quad\text{and}\quad a_i a_m < a_k a_p,

since multiplying the first two inequalities together and cancelling a_ja_\ell shows that a_ia_m > a_ka_p, contradicting the third.

I am not aware of a general formula for the number of integer matrices that do not lead to these types of “bad” inequalities, but I have computed this quantity for n ≤ 7 (C code is available here), which gives the following better upper bound on the number of possible orderings of the products \{a_ia_j\}_{1\leq i\leq j\leq n} for n = 1, 2, 3, …: 1, 1, 2, 10, 114, 2612, 108664, …. This is significantly smaller than the upper bound found in the previous section once n ≥ 5.
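The check for this forbidden pattern can be sketched as follows (a naive Python version of the idea, not the C code linked above; it simply tries all index 6-tuples against a rank matrix like the ones shown earlier):

```python
import itertools

def has_bad_triple(R):
    """R[i][j] = rank of the product a_i * a_j (bigger rank = bigger
    product). Return True if the ordering encoded by R contains the
    impossible pattern of three inequalities described in the text."""
    n = len(R)
    for i, j, k, l, m, p in itertools.product(range(n), repeat=6):
        # a_i a_j > a_k a_l,  a_l a_m > a_j a_p,  but  a_i a_m < a_k a_p
        if R[i][j] > R[k][l] and R[l][m] > R[j][p] and R[i][m] < R[k][p]:
            return True
    return False

# The invalid 4 x 4 placement shown above (filled out symmetrically):
bad = [[10, 9, 7, 6],
       [ 9, 8, 5, 3],
       [ 7, 5, 4, 2],
       [ 6, 3, 2, 1]]
print(has_bad_triple(bad))   # True

# Either valid 3 x 3 placement passes the test:
good = [[6, 5, 4],
        [5, 3, 2],
        [4, 2, 1]]
print(has_bad_triple(good))  # False
```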

This Bound is Not Tight

It is straightforward to write a script that generates random numbers a_1 > a_2 > \cdots > a_n > 0 and determines the resulting ordering of the pairwise products \{a_ia_j\}_{1\leq i\leq j\leq n}. By doing this, we can verify that the upper bounds from the previous section are in fact tight when n ≤ 5. However, when n = 6, we find that 4 of the 2612 potential orderings do not seem to actually be attained by any choice of a_1 > a_2 > \cdots > a_6 > 0. One of these “problematic” orderings is the one that arises from the following matrix:

\begin{bmatrix}21 & 20 & 19 & 18 & 17 & 11\\ * & 16 & 15 & 14 & 10 & 6\\ * & * & 13 & 12 & 8 & 5\\ * & * & * & 9 & 7 & 3\\ * & * & * & * & 4 & 2\\ * & * & * & * & * & 1\end{bmatrix}

The problem here is that the above matrix implies the following 5 inequalities:

a_1a_5 > a_2^2, \quad \ \ a_2a_4 > a_3^2, \quad \ \ a_2a_5 > a_4^2, \quad \ \ a_3a_4 > a_1a_6, \quad \text{and }\ \ \ a_3a_6 > a_5^2.

However, multiplying the first four inequalities gives a_1a_2^2a_3a_4^2a_5^2 > a_1a_2^2a_3^2a_4^2a_6, so a_5^2 > a_3a_6, which contradicts the fifth inequality above. We can similarly prove that the other 3 seemingly problematic orderings are in fact not attainable, so there are exactly 2608 possible orderings in the n = 6 case.
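The random-sampling lower bound described at the start of this section can be sketched as follows (a minimal Python version with hypothetical helper names; it only demonstrates the idea on small n):

```python
import random

def product_ordering(a):
    """The ordering of the pairwise products of a decreasing tuple a,
    recorded as the index pairs (i, j) sorted by product, largest first."""
    n = len(a)
    pairs = [(i, j) for i in range(n) for j in range(i, n)]
    return tuple(sorted(pairs, key=lambda p: -a[p[0]] * a[p[1]]))

def attained_orderings(n, trials=50000, seed=1):
    """Randomly generate a_1 > ... > a_n > 0 many times and collect the
    distinct orderings of the pairwise products that actually occur."""
    rng = random.Random(seed)
    seen = set()
    for _ in range(trials):
        a = sorted((rng.random() for _ in range(n)), reverse=True)
        seen.add(product_ordering(a))
    return seen

print(len(attained_orderings(3)))  # 2: both n = 3 orderings are attained
```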

I haven’t been able to compute the number of orderings when n ≥ 7, as my methods for obtaining upper and lower bounds are both much too slow in these cases. The best bounds that I have in the n = 7 case say that the number of orderings is between 50900 and 108664, inclusive.

Update [Feb. 13, 2014]: Giovanni Resta has improved the lower bound in the n = 7 case to 107498, which narrows the n = 7 region down considerably. I’ve also improved the upper bound to 108146 (see this improved version of the C script). In all likelihood, 107498 is the correct number of orderings in this case, and it’s the upper bound 108146 that needs to be further improved.

Update [Feb. 14, 2014]: This sequence is now in the OEIS. See A237749.

Update [Feb. 18, 2014]: Hans Havermann has found a couple of references that talk about this problem (in the language of Golomb rulers) and compute all values for n ≤ 7. See [3] and [4].


  1. R. Hildebrand. Positive partial transpose from spectra. Phys. Rev. A, 76:052325, 2007. E-print: arXiv:quant-ph/0502170
  2. R. M. Thrall. A combinatorial problem. Michigan Math. J., 1:81–88, 1952.
  3. M. Beck, T. Bogart, and T. Pham. Enumeration of Golomb rulers and acyclic orientations of mixed graphs. Electron. J. Combin., 19:42, 2012. E-print: arXiv:1110.6154 [math.CO]
  4. T. Pham. Enumeration of Golomb rulers. Master’s Thesis, San Francisco State University, 2011.

In Search of a 4-by-11 Matrix

October 1st, 2013

IMPORTANT UPDATE [January 30, 2014]: I have managed to solve the 4-by-11 case: there is no such matrix! Details of the computation that led to this result, as well as several other related results, are given in [4]. See Table 3 in that paper for an updated list of which cases still remain open (the smallest open cases are now 5-by-11 and 6-by-10).

After spinning my wheels on a problem for far too long, I’ve decided that it’s time to enlist the help of the mathematical and programming geniuses of the world wide web. The problem I’m interested in asks for a 4-by-11 matrix whose columns satisfy certain relationships. While the conditions are relatively easy to state, the problem size seems to be just slightly too large for me to solve myself.

The Problem

The question I’m interested in (for reasons that are explained later in this blog post) is, given positive integers p and s, whether or not there exists a p-by-s matrix M with the following three properties:

  1. Every entry of M is a nonzero integer;
  2. The sum of any two columns of M contains a 0 entry; and
  3. There is no way to append an (s+1)st column to M so that M still has property 2.

In particular, I’m interested in whether or not such a matrix M exists when p = 4 and s = 11. But to help illustrate the above three properties, let’s consider the p = 3, s = 4 case first, where one such matrix M is:

M = \begin{bmatrix}1 & -1 & 2 & -2 \\ 1 & -2 & -1 & 2 \\ 1 & 2 & -2 & -1\end{bmatrix}.

The fact that M satisfies condition 2 can be checked by hand easily enough. For example, the sum of the first two columns of M is [0, -1, 3]^T, which contains a 0 entry, and it is similarly straightforward to check that the other 5 sums of two columns of M each contain a 0 entry as well.

Checking property 3 is slightly more technical (NP-hard, even), but is still doable in small cases such as this one. For the above example, suppose that we could add a 5th column (which we will call z = [z_1, z_2, z_3]^T) to M such that its sum with any of the first 4 columns has a 0 entry. By looking at M’s first column, we see that one of z’s entries must be -1 (and by the cyclic symmetry of the entries of the last 3 columns of M, we can assume without loss of generality that z_1 = -1). By looking at the last 3 columns of M, we then see that either z_2 = 2 or z_3 = -2, either z_2 = 1 or z_3 = 2, and either z_2 = -2 or z_3 = 1. Since z_2 and z_3 can each take only one value, but satisfying all 3 of these requirements would force one of them to take two different values, no such column z exists.
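Both checks are easy to automate. Here is a Python sketch that verifies properties 2 and 3 for the example above; the brute-force range for the candidate column uses the observation (my own shortcut, in the spirit of the simplifications discussed later in this post) that a new column's entries only matter through which entries of M they cancel:

```python
import itertools

# The example matrix M from above, stored as a list of columns.
M = [(1, 1, 1), (-1, -2, 2), (2, -1, -2), (-2, 2, -1)]

def zero_in_sum(c1, c2):
    """Property 2 for one pair: the sum of the two columns has a 0 entry."""
    return any(x + y == 0 for x, y in zip(c1, c2))

# Property 2: the sum of any two columns of M contains a 0 entry.
assert all(zero_in_sum(c, d) for c, d in itertools.combinations(M, 2))

# Property 3: it suffices to try candidate entries from {-2, -1, 1, 2}
# plus one "fresh" value (here 3) that never cancels anything, since any
# other value behaves identically to the fresh one.
extendible = any(all(zero_in_sum(z, c) for c in M)
                 for z in itertools.product([-2, -1, 1, 2, 3], repeat=3))
print(extendible)  # False: no 5th column can be appended
```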

What’s Known (and What Isn’t)

As I mentioned earlier, the instance of this problem that I’m really interested in is when p = 4 and s = 11. Let’s first back up and briefly discuss what is known for different values of p and s:

  • If s ≤ p then M does not exist. To see this, simply note that property 3 can never be satisfied since you can always append one more column. If we denote the (i,j)-entry of M by m_{ij} and the i-th entry of the new column z by z_i, then you can choose z_i = -m_{ii} for i = 1, 2, …, s (and fill the remaining entries of z with any nonzero integers).
  • Given p, the smallest value of s for which M exists is: (a) s = p+1 if p is odd, (b) s = p+2 if p = 4 or p ≡ 2 (mod 4), (c) s = p+3 if p = 8, and (d) s = p+4 otherwise. This result was proved in [1] (the connection between that paper and this blog post will be explained in the “Motivation” section below).
  • If s > 2^p then M does not exist. In this case, there is no way to satisfy property 2. This fact is trivial when p = 1 and can be proved for all p by induction (an exercise left to the reader?).
  • If s = 2^p then M exists. To see this claim, let the columns of M be the 2^p different columns consisting only of the entries 1 and -1. To see that property 2 is satisfied, simply notice that each column is different, so for any pair of columns, there is a row in which one column is 1 and the other column is -1. To see that property 3 is satisfied, observe that any new column must also consist entirely of 1’s and -1’s. However, every such column is already a column of M itself, and the sum of a column with itself will not have any 0 entries.
  • If s = 2^p − 4 (and p ≥ 3) then M exists. There is an inductive construction (with the p = 3, s = 4 example from the previous section as the base case) that works here. More specifically, if we let M_p denote a matrix M that works for a given value of p and s = 2^p − 4, let B_p be the matrix from the s = 2^p case above, and let 1_k denote the row vector with k ones, then

    M_{p+1} = \begin{bmatrix}M_p & B_p \\ 1_{2^p-4} & -1_{2^p}\end{bmatrix}

    is a solution to the problem for p′ = p+1 and s′ = 2^{p+1} − 4.
  • If 2^p − 3 ≤ s ≤ 2^p − 1 then M does not exist. This is a non-trivial result that follows from [2].
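The inductive step in the third bullet above is easy to carry out and check in code. The following Python sketch builds a p = 4 matrix with s = 12 columns from the p = 3 example and verifies property 2 (property 3 also holds, but checking it takes more work):

```python
import itertools

def zero_in_sum(c1, c2):
    return any(x + y == 0 for x, y in zip(c1, c2))

# Base case: the p = 3, s = 4 example from earlier, as columns.
M = [(1, 1, 1), (-1, -2, 2), (2, -1, -2), (-2, 2, -1)]

# One inductive step: append 1 to every column of M_p and -1 to every
# column of B_p (all 2^p sign columns), giving 2^(p+1) - 4 columns total.
p = 3
B = list(itertools.product([1, -1], repeat=p))
M_next = [c + (1,) for c in M] + [c + (-1,) for c in B]

print(len(M_next))  # 12 = 2^4 - 4 columns
# Property 2 survives: pairs within M_p cancel by induction, two distinct
# sign columns differ in some row, and cross pairs cancel in the last row.
assert all(zero_in_sum(c, d) for c, d in itertools.combinations(M_next, 2))
```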

Given p, the above results essentially tell us the largest and smallest values of s for which a solution M to the problem exists. However, we still don’t really know much about when solutions exist for intermediate values of s – we just have scattered results that say a solution does or does not exist in certain specific cases, without really illuminating what is going on. The following table summarizes what we know about when solutions do and do not exist for small values of p and s (a check mark ✓ means that a solution exists, a dash - means no solution exists, and ? means we don’t know).

s \ p    1   2   3   4   5
1        -   -   -   -   -
2        ✓   -   -   -   -
3        -   -   -   -   -
4        -   ✓   ✓   -   -
5        -   -   -   -   -
6        -   -   -   ✓   ✓
7        -   -   -   ✓   -
8        -   -   ✓   ✓   ✓
9        -   -   -   ✓   ?
10       -   -   -   ✓   ?
11       -   -   -   ?   ?
12       -   -   -   ✓   ✓
13       -   -   -   -   ✓
14       -   -   -   -   ✓
15       -   -   -   -   ✓
16       -   -   -   ✓   ✓
17 – 26  -   -   -   -   ✓
27       -   -   -   -   ?
28       -   -   -   -   ✓
29       -   -   -   -   -
30       -   -   -   -   -
31       -   -   -   -   -
32       -   -   -   -   ✓

The table above shows why I am interested in the p = 4, s = 11 case: it is the only case with p ≤ 4 whose solution is still not known. The other unknown cases (i.e., p = 5 and s ∈ {9,10,11,27}, and far too many to list when p ≥ 6) would be interesting to solve as well, but are a bit lower-priority.

Some Simplifications

Some assumptions about the matrix M can be made without loss of generality, in order to reduce the search space a little bit. For example, since the values of the entries of M don’t really matter (other than the fact that they come in positive/negative pairs), the first column of M can always be chosen to consist entirely of ones (or any other value). Similarly, permuting the rows or columns of M does not affect whether or not it satisfies the three desired properties, so you can assume (for example) that the first row is in non-decreasing order.

Finally, since there is no advantage to having the integer k present in M unless -k is also present somewhere in M (i.e., if M does not contain any -k entries, you could always just replace every instance of k by 1 without affecting any of the three properties we want), we can assume that the entries of M are between -floor(s/2) and floor(s/2), inclusive.


Motivation

The given problem arises from unextendible product bases (UPBs) in quantum information theory. A set of pure quantum states |v_1\rangle, \ldots, |v_s\rangle \in \mathbb{C}^{d_1} \otimes \cdots \otimes \mathbb{C}^{d_p} forms a UPB if and only if the following three properties hold:

  1. (product) Each state |v_j\rangle is a product state (i.e., can be written in the form |v_j\rangle = |v_j^{(1)}\rangle \otimes \cdots \otimes |v_j^{(p)}\rangle, where |v_j^{(i)}\rangle \in \mathbb{C}^{d_i} for all i);
  2. (basis) The states are mutually orthogonal (i.e., \langle v_i | v_j \rangle = 0 for all i ≠ j); and
  3. (unextendible) There does not exist a product state |z\rangle with the property that \langle z | v_j \rangle = 0 for all j.

UPBs are useful because they can be used to construct quantum states with very strange entanglement properties [3], but their mathematical structure still isn’t very well-understood. While we can’t really expect an answer to the question of what sizes of UPBs are possible when the local dimensions d_1, \ldots, d_p are arbitrary (even just the minimum size of a UPB is still not known in full generality!), we might be able to hope for an answer if we focus on multi-qubit systems (i.e., the case when d_1 = \cdots = d_p = 2).

In this case, the 3 properties above are isomorphic in a sense to the 3 properties listed at the start of this post. We associate each state |v_j\rangle with the j-th column of the matrix M. To each state in the product state decomposition of |v_j\rangle, we associate a unique integer in such a way that orthogonal states are associated with negatives of each other. The fact that \langle v_i | v_j \rangle = 0 for all i ≠ j is then equivalent to the requirement that the sum of any two columns of M has a 0 entry, and unextendibility of the product basis corresponds to not being able to add a new column to M without destroying property 2.

Thus this blog post is really asking whether or not there exists an 11-state UPB on 4 qubits. In order to illustrate this connection more explicitly, we return to the p = 3, s = 4 example from earlier. If we associate the matrix entries 1 and -1 with the orthogonal standard basis states |0\rangle, |1\rangle \in \mathbb{C}^2 and the entries 2 and -2 with the orthogonal states |\pm\rangle := (|0\rangle \pm |1\rangle)/\sqrt{2}, then the matrix M corresponds to the following set of s = 4 product states in \mathbb{C}^2 \otimes \mathbb{C}^2 \otimes \mathbb{C}^2:

|0\rangle|0\rangle|0\rangle, \quad |1\rangle|-\rangle|+\rangle, \quad |+\rangle|1\rangle|-\rangle, \quad|-\rangle|+\rangle|1\rangle.

The fact that these states form a UPB is well-known – this is the “Shifts” UPB from [3], and was one of the first UPBs found.


  1. N. Johnston. The minimum size of qubit unextendible product bases. In Proceedings of the 8th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC), 2013. E-print: arXiv:1302.1604 [quant-ph], 2013.
  2. L. Chen and D. Ž. Ðjoković. Separability problem for multipartite states of rank at most four. J. Phys. A: Math. Theor., 46:275304, 2013. E-print: arXiv:1301.2372 [quant-ph]
  3. C. H. Bennett, D. P. DiVincenzo, T. Mor, P. W. Shor, J. A. Smolin, and B. M. Terhal. Unextendible product bases and bound entanglement. Phys. Rev. Lett., 82:5385–5388, 1999. E-print: arXiv:quant-ph/9808030
  4. N. Johnston. The structure of qubit unextendible product bases. E-print: arXiv:1401.7920 [quant-ph], 2014.

The Spectrum of the Partial Transpose of a Density Matrix

July 3rd, 2013

It is a simple fact that, given any density matrix (i.e., quantum state) \rho\in M_n, the eigenvalues of \rho are the same as the eigenvalues of \rho^T (the transpose of \rho). However, strange things can happen if we instead only apply the transpose to one half of a quantum state. That is, if \rho\in M_m \otimes M_n then its eigenvalues in general will be very different from the eigenvalues of (id_m\otimes T)(\rho), where id_m is the identity map on M_m and T is the transpose map on M_n (the map id_m\otimes T is called the partial transpose).

In fact, even though \rho is positive semidefinite (since it is a density matrix), the matrix (id_m\otimes T)(\rho) in general can have negative eigenvalues. To see this, define p:={\rm min}\{m,n\} and let \rho=|\psi\rangle\langle\psi|, where

\displaystyle|\psi\rangle=\frac{1}{\sqrt{p}}\sum_{i=1}^{p}|i\rangle\otimes|i\rangle

is the standard maximally-entangled pure state. It then follows that

(id_m\otimes T)(\rho)=\displaystyle\frac{1}{p}\sum_{i,j=1}^{p}|i\rangle\langle j|\otimes|j\rangle\langle i|,

which has p(p+1)/2 eigenvalues equal to 1/p, p(p-1)/2 eigenvalues equal to -1/p, and p|m-n| eigenvalues equal to 0.
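This example is easy to verify numerically. A Python sketch (assuming NumPy is available; the partial transpose is just a reshape-and-swap-axes):

```python
import numpy as np

def partial_transpose(rho, m, n):
    """(id_m ⊗ T) applied to an mn x mn matrix rho."""
    return rho.reshape(m, n, m, n).transpose(0, 3, 2, 1).reshape(m * n, m * n)

m, n = 2, 3
p = min(m, n)

# The maximally-entangled state |psi> = (1/sqrt(p)) * sum_i |i>|i>.
psi = np.zeros(m * n)
for i in range(p):
    e_m, e_n = np.zeros(m), np.zeros(n)
    e_m[i] = e_n[i] = 1
    psi += np.kron(e_m, e_n) / np.sqrt(p)

rho_pt = partial_transpose(np.outer(psi, psi), m, n)
eigs = np.linalg.eigvalsh(rho_pt)

# Claimed multiplicities: p(p+1)/2 at 1/p, p(p-1)/2 at -1/p, p|m-n| at 0.
print((np.sort(np.round(eigs, 10)) + 0.0).tolist())
# prints [-0.5, 0.0, 0.0, 0.5, 0.5, 0.5]
```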

The fact that (id_m\otimes T)(\rho) can have negative eigenvalues is another way of saying that the transpose map is positive but not completely positive, and thus plays a big role in entanglement theory. In this post we consider the question of how exactly the partial transpose map can transform the eigenvalues of \rho:

Question. For which ordered lists \lambda_1\geq\lambda_2\geq\cdots\geq\lambda_{mn}\in\mathbb{R} does there exist a density matrix \rho such that (id_m\otimes T)(\rho) has eigenvalues \lambda_1,\lambda_2,\ldots,\lambda_{mn}?

The Answer for Pure States

In the case when \rho is a pure state (i.e., has rank 1), we can completely characterize the eigenvalues of (id_m\otimes T)(\rho) by making use of the Schmidt decomposition. In particular, we have the following:

Theorem 1. Let |\phi\rangle have Schmidt rank r and Schmidt coefficients \alpha_1\geq\alpha_2\geq\cdots\geq\alpha_r>0. Then the spectrum of (id_m\otimes T)(|\phi\rangle\langle\phi|) is

\{\alpha_i^2 : 1\leq i\leq r\}\cup\{\pm\alpha_i\alpha_j:1\leq i<j\leq r\},

together with the eigenvalue 0 with multiplicity p|n-m|+p^2-r^2.

Proof. If |\phi\rangle has Schmidt decomposition

\displaystyle|\phi\rangle=\sum_{i=1}^r\alpha_i|a_i\rangle\otimes|b_i\rangle,

then

\displaystyle(id_m\otimes T)(|\phi\rangle\langle\phi|)=\sum_{i,j=1}^r\alpha_i\alpha_j|a_i\rangle\langle a_j|\otimes|b_j\rangle\langle b_i|.

It is then straightforward to verify that:

  • |a_i\rangle\otimes|b_i\rangle is an eigenvector with eigenvalue \alpha_i^2 for each 1\leq i\leq r;
  • |a_i\rangle\otimes|b_j\rangle\pm|a_j\rangle\otimes|b_i\rangle is an eigenvector with eigenvalue \pm\alpha_i\alpha_j for all 1\leq i<j\leq r; and
  • {\rm rank}\big((id_m\otimes T)(|\phi\rangle\langle\phi|)\big)=r^2, from which it follows that the remaining p|n-m|+p^2-r^2 eigenvalues are 0.

Despite such a simple characterization in the case of rank-1 density matrices, there is no known characterization for general density matrices, since eigenvalues aren’t well-behaved under convex combinations.

The Number of Negative Eigenvalues

Instead of asking for a complete characterization of the possible spectra of (id_m\otimes T)(\rho), for now we focus on the simpler question that asks how many of the eigenvalues of (id_m\otimes T)(\rho) can be negative. Theorem 1 answers this question when \rho=|\phi\rangle\langle\phi| is a pure state: the number of negative eigenvalues is r(r-1)/2, where r is the Schmidt rank of |\phi\rangle. Since r\leq p, it follows that (id_m\otimes T)(\rho) has at most p(p-1)/2 negative eigenvalues when \rho is a pure state.

It was conjectured in [1] that a similar fact holds for general (not necessarily pure) density matrices \rho as well. In particular, they conjectured that if \rho\in M_n\otimes M_n then (id_n\otimes T)(\rho) has at most n(n-1)/2 negative eigenvalues. However, this conjecture is easily shown to be false just by randomly-generating many density matrices \rho and then counting the number of negative eigenvalues of (id_n\otimes T)(\rho); density matrices whose partial transposes have more than n(n-1)/2 negative eigenvalues are very common.

In [2,3], it was shown that if \rho\in M_m\otimes M_n then (id_m\otimes T)(\rho) can not have more than (m-1)(n-1) negative eigenvalues. In [4], this bound was shown to be tight when {\rm min}\{m,n\}=2 by explicitly constructing density matrices \rho\in M_2\otimes M_n such that (id_2\otimes T)(\rho) has n-1 negative eigenvalues. Similarly, this bound was shown to be tight via explicit construction when m=n=3 in [3]. Finally, it was shown in [5] that this bound is tight in general. That is, we have the following result:

Theorem 2. The maximum number of negative eigenvalues that (id_m\otimes T)(\rho) can have when \rho\in M_m\otimes M_n is (m-1)(n-1).

It is worth pointing out that the method used in [5] to prove that this bound is tight is not completely analytic. Instead, a numerical method was presented that is proved to always generate a density matrix \rho\in M_m\otimes M_n such that (id_m\otimes T)(\rho) has (m-1)(n-1) negative eigenvalues. Code that implements this numerical procedure in MATLAB is available here, but no general analytic form for such density matrices is known.

Other Bounds on the Spectrum

Unfortunately, not a whole lot more is known about the spectrum of (id_m\otimes T)(\rho). Here are some miscellaneous other results that impose certain restrictions on its maximal and minimal eigenvalues (which we denote by \lambda_\textup{max} and \lambda_\textup{min}, respectively):

Theorem 3 [3]. 1\geq\lambda_\textup{max}\geq\lambda_\textup{min}\geq -1/2.

Theorem 4 [2]. \lambda_\textup{min}\geq\lambda_\textup{max}(1-{\rm min}\{m,n\}).

Theorem 5 [6]. If (id_m\otimes T)(\rho) has q negative eigenvalues then

\displaystyle\lambda_\textup{min}\geq\lambda_\textup{max}\Big(1-\big\lceil\tfrac{1}{2}\big(m+n-\sqrt{(m-n)^2+4q-4}\big)\big\rceil\Big).
However, these bounds in general are fairly weak and the question of what the possible spectra of (id_m\otimes T)(\rho) are is still far beyond our grasp.


  1. R. Xi-Jun, H. Yong-Jian, W. Yu-Chun, and G. Guang-Can. Partial transposition on bipartite system. Chinese Phys. Lett., 25:35, 2008.
  2. N. Johnston and D. W. Kribs. A family of norms with applications in quantum information theory. J. Math. Phys., 51:082202, 2010. E-print: arXiv:0909.3907 [quant-ph]
  3. S. Rana. Negative eigenvalues of partial transposition of arbitrary bipartite states. Phys. Rev. A, 87:054301, 2013. E-print: arXiv:1304.6775 [quant-ph]
  4. L. Chen, D. Z. Djokovic. Qubit-qudit states with positive partial transpose. Phys. Rev. A, 86:062332, 2012. E-print: arXiv:1210.0111 [quant-ph]
  5. N. Johnston. Non-positive-partial-transpose subspaces can be as large as any entangled subspace. Phys. Rev. A, 87:064302, 2013. E-print: arXiv:1305.0257 [quant-ph]
  6. N. Johnston. Norms and Cones in the Theory of Quantum Entanglement. PhD thesis, University of Guelph, 2012.

The Minimal Superpermutation Problem

April 10th, 2013

Imagine that there is a TV series that you want to watch. The series consists of n episodes, with each episode on a single DVD. Unfortunately, however, the DVDs have become mixed up and the order of the episodes is in no way marked (and furthermore, the episodes of the TV show are not connected by any continuous storyline – there is no way to determine the order of the episodes just from watching them).

Suppose that you want to watch the episodes of the TV series, consecutively, in the correct order. The question is: how many episodes must you watch in order to do this?

To illustrate what we mean by this question, suppose for now that n = 2 (i.e., the show was so terrible that it was cancelled after only 2 episodes). If we arbitrarily label one of the episodes “1” and the other episode “2”, then we could watch the episodes in the order “1”, “2”, and then “1” again. Then, regardless of which episode is really the first episode, we’ve seen the two episodes consecutively in the correct order. Furthermore, this is clearly minimal – there is no way to watch fewer than 3 episodes while ensuring that you see both episodes in the correct order.

So what is the minimal number of episodes we must watch for a TV show consisting of n episodes? Somewhat surprisingly, no one knows. So let’s discuss what is known.

Minimal Superpermutations

Rephrased a bit more mathematically, we are interested in finding a shortest possible string on the symbols “1”, “2”, …, “n” that contains every permutation of those symbols as a contiguous substring. We call a string that contains every permutation in this way a superpermutation, and one of minimal length is called a minimal superpermutation. Minimal superpermutations when n = 1, 2, 3, 4 are easily found via brute-force computer search, and are presented here:

n Minimal Superpermutation Length
1 1 1
2 121 3
3 123121321 9
4 123412314231243121342132413214321 33

By the time n = 5, the strings we are looking for are much too long to find via brute-force. However, the strings in the n ≤ 4 cases provide some insight that we can hope might generalize to larger n. For example, there is a natural construction that allows us to construct a short superpermutation on n+1 symbols from a short superpermutation on n symbols (which we will describe in the next section), and this construction gives the minimal superpermutations presented in the above table when n ≤ 4.

Similarly, the minimal superpermutations in the above table can be shown via brute-force to be unique (up to relabeling of the characters – for example, we don’t count the string “213212312” as distinct from “123121321”, since they are related to each other simply by interchanging the roles of “1” and “2”). Are minimal superpermutations unique for all n?

Minimal Length

A trivial lower bound on the length of a superpermutation on n symbols is n! + n – 1, since it must contain each of the n! permutations as a substring – the first permutation contributes a length of n characters to the string, and each of the remaining n! – 1 permutations contributes a length of at least 1 character more.

It is not difficult to improve this lower bound to n! + (n-1)! + n – 2 (I won’t provide a proof here, but the idea is to note that when building the superpermutation, you can not add more than n-1 permutations by appending just 1 character each to the string – you eventually have to add 2 or more characters to add a permutation that is not already present). In fact, this argument can be stretched further to show that n! + (n-1)! + (n-2)! + n – 3 is a lower bound as well (a rough proof is provided here). However, the same arguments do not seem to extend to lower bounds like n! + (n-1)! + (n-2)! + (n-3)! + n – 4 and so on.
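As a quick check, this best known lower bound already matches the minimal lengths from the table above when n = 3 and n = 4:

```python
import math

def lower_bound(n):
    """The lower bound discussed above: n! + (n-1)! + (n-2)! + n - 3 (n >= 3)."""
    return math.factorial(n) + math.factorial(n - 1) + math.factorial(n - 2) + n - 3

print([lower_bound(n) for n in (3, 4)])  # [9, 33]: matches the minimal lengths
```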

There is also a trivial upper bound on the length of a minimal superpermutation: n×n!, since this is the length of the string obtained by writing out the n! permutations in order without overlapping. However, there is a well-known construction of small superpermutations that provides a much better upper bound, which we now describe.

Suppose we know a small superpermutation on n symbols (such as one of the superpermutations provided in the table in the previous section) and we want to construct a small superpermutation on n+1 symbols. To do so, simply replace each permutation in the n-symbol superpermutation by: (1) that permutation, (2) the symbol n+1, and (3) that permutation again. For example, if we start with the 2-symbol superpermutation “121”, we replace the permutation “12” by “12312” and we replace the permutation “21” by “21321”, which results in the 3-symbol superpermutation “123121321”. The procedure for constructing a 4-symbol superpermutation from this 3-symbol superpermutation is illustrated in the following diagram:

A diagram that demonstrates how to construct a small superpermutation on 4 symbols from a small superpermutation on 3 symbols.

It is a straightforward inductive argument to show that the above method produces n-symbol superpermutations of length \sum_{k=1}^nk! for all n. Although it has been conjectured that this superpermutation is minimal [1], this is only known to be true when n ≤ 4.
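This construction is simple to implement. A Python sketch (my own naming; it scans for the permutations in order of appearance and merges the replacement pieces with maximal overlap):

```python
import math

def merge(a, b):
    """Concatenate b onto a with the largest possible overlap."""
    for ov in range(min(len(a), len(b)), -1, -1):
        if a.endswith(b[:ov]):
            return a + b[ov:]

def next_superperm(s, n):
    """Turn an n-symbol superpermutation into an (n+1)-symbol one by
    replacing each permutation occurring in s with: itself, the new
    symbol, itself again."""
    new = str(n + 1)
    perms = [s[i:i + n] for i in range(len(s) - n + 1)
             if len(set(s[i:i + n])) == n]
    pieces = [q + new + q for q in perms]
    out = pieces[0]
    for piece in pieces[1:]:
        out = merge(out, piece)
    return out

s = "1"
for n in range(1, 4):
    s = next_superperm(s, n)

print(s)  # 123412314231243121342132413214321, as in the table above
print(len(s) == sum(math.factorial(k) for k in range(1, 5)))  # True
```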


Uniqueness

As a result of minimal superpermutations being unique when n ≤ 4, it has been conjectured that they are unique for all n [1]. However, it turns out that there are in fact many superpermutations of the conjectured minimal length – the main result of [2] shows that there are at least

\displaystyle\prod_{k=1}^{n-4}(n-k-2)^{k\cdot k!}

distinct n-symbol superpermutations of the conjectured minimal length. For n ≤ 4, this formula gives the empty product (and thus a value of 1), which agrees with the fact that minimal superpermutations are unique in these cases. However, the number of distinct superpermutations then grows extremely quickly with n: for n = 5, 6, 7, 8, there are at least 2, 96, 8153726976, and approximately 3×10^50 superpermutations of the conjectured minimal length. The 2 such superpermutations in the n = 5 case are as follows (each superpermutation has length 153 and is written on two lines):




Similarly, a text file containing all 96 known superpermutations of the expected minimal length 873 in the n = 6 case can be viewed here. It is unknown, however, whether or not these superpermutations are indeed minimal or if there are even more superpermutations of the conjectured minimal length.


  1. D. Ashlock and J. Tillotson. Construction of small superpermutations and minimal injective superstrings. Congressus Numerantium, 93:91–98, 1993.
  2. N. Johnston. Non-uniqueness of minimal superpermutations. Discrete Mathematics, 313:1553–1557, 2013.

Other Random Links Related to This Problem

  1. A180632 – the main OEIS entry for this problem
  2. Permutation Strings – a short note written by Jeffrey A. Barnett about this problem
  3. Generate sequence with all permutations – a Stack Overflow post about this problem
  4. What is the shortest string that contains all permutations of an alphabet? – a Math Stack Exchange post about this problem
  5. The shortest string containing all permutations of n symbols – an XKCD forums post that I made about this problem a couple years ago

How to Construct Minimal Unextendible Product Bases

March 14th, 2013

In quantum information theory, a product state |v\rangle \in \mathbb{C}^{d_1}\otimes\mathbb{C}^{d_2} is a quantum state that can be written as an elementary tensor:

|v\rangle=|v_1\rangle\otimes|v_2\rangle\text{ with }|v_i\rangle\in\mathbb{C}^{d_i}\ \text{ for } i=1,2,

while states that cannot be written in this form are called entangled. In this post, we will be investigating unextendible product bases (UPBs), which are sets S\subset\mathbb{C}^{d_1}\otimes\mathbb{C}^{d_2} of mutually orthogonal product states with the property that no other product state is orthogonal to every member of S.

In this post, we will be looking at how to construct small UPBs. Note that UPBs can more generally be defined on multipartite spaces (i.e., \mathbb{C}^{d_1}\otimes\mathbb{C}^{d_2}\otimes\cdots\otimes\mathbb{C}^{d_p} for arbitrary p\geq 2), but for simplicity we stick with the bipartite (i.e., p= 2) case in this blog post.

Simple Examples

The most trivial unextendible product basis is simply the computational basis:

\displaystyle S=\big\{|i\rangle\otimes|j\rangle : 1\leq i\leq d_1,\ 1\leq j\leq d_2\big\}.

However, the above UPB is rather trivial – the unextendibility condition holds vacuously because S spans the entire Hilbert space, so of course there is no product state (or any state) orthogonal to every member of S.

It is known that when \min\{d_1,d_2\}\leq 2, the only UPBs that exist are trivial in this sense – they consist of a full set of d_1d_2 states. We are more interested in UPBs that contain fewer vectors than the dimension of the Hilbert space (since, for example, these UPBs can be used to construct bound entangled states [1]). One of the first such UPBs to be constructed was called “Pyramid” [1]. To construct this UPB, define h:=\tfrac{1}{2}\sqrt{1+\sqrt{5}} and N:=\tfrac{1}{2}\sqrt{5+\sqrt{5}}, and let

|\phi_j\rangle:=\tfrac{1}{N}[\cos(2\pi j/5),\sin(2\pi j/5),h]\text{ for }0\leq j\leq 4.

Then the following set of 5 states in \mathbb{C}^3\otimes\mathbb{C}^3 is a UPB:

\displaystyle S_{\textup{pyr}}:=\big\{|v_0\rangle,|v_1\rangle,|v_2\rangle,|v_3\rangle,|v_4\rangle\big\},

where |v_i\rangle:=|\phi_i\rangle\otimes|\phi_{2i(\text{mod }5)}\rangle.

It is a straightforward calculation to verify that the members of S_{\textup{pyr}} are mutually orthogonal (and thus form a product basis). To verify that there is no product state orthogonal to every member of S_{\textup{pyr}}, we first observe that any 3 of the |\phi_j\rangle’s form a linearly independent set (verification of this claim is slightly tedious, but nonetheless straightforward). Thus there is no state |w\rangle\in\mathbb{C}^3 that is orthogonal to more than 2 of the |\phi_j\rangle’s, so no product state |w_1\rangle\otimes|w_2\rangle\in\mathbb{C}^3\otimes\mathbb{C}^3 is orthogonal to more than 2 + 2 = 4 members of S_{\textup{pyr}}, which verifies unextendibility.
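For the skeptical, these claims are easy to check numerically; here is a quick NumPy sketch of my own:

```python
import numpy as np
from itertools import combinations

# the Pyramid states |phi_j> and product states |v_j> = |phi_j> (x) |phi_{2j mod 5}>
h = np.sqrt(1 + np.sqrt(5)) / 2
N = np.sqrt(5 + np.sqrt(5)) / 2
phi = [np.array([np.cos(2 * np.pi * j / 5), np.sin(2 * np.pi * j / 5), h]) / N
       for j in range(5)]
v = [np.kron(phi[j], phi[(2 * j) % 5]) for j in range(5)]

# mutual orthogonality of the five product states
assert all(abs(np.dot(v[i], v[j])) < 1e-12 for i, j in combinations(range(5), 2))
# any 3 of the |phi_j>'s are linearly independent
assert all(np.linalg.matrix_rank(np.array([phi[t] for t in trio])) == 3
           for trio in combinations(range(5), 3))
```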

Minimum Size

One interesting question concerning unextendible product bases asks for their minimum cardinality. It was immediately noted that any UPB in \mathbb{C}^{d_1}\otimes\mathbb{C}^{d_2} must have cardinality at least d_1+d_2-1. To see this, suppose for a contradiction that there existed a UPB S containing (d_1-1)+(d_2-1) or fewer product states. Then we could construct another product state that is orthogonal to d_1-1 of the members of S on the first subsystem and to the remaining (at most) d_2-1 members of S on the second subsystem. This new product state is then orthogonal to every member of S, which shows that S is extendible – a contradiction.
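This argument is constructive and easy to carry out numerically. Here is a sketch of my own (the helper name perp is mine) for d_1 = d_2 = 3 with four arbitrary product states:

```python
import numpy as np

def perp(vecs, d):
    """Unit vector in C^d orthogonal to every vector in vecs (len(vecs) < d)."""
    M = np.conj(np.array(vecs).reshape(len(vecs), d))
    _, _, Vh = np.linalg.svd(M)   # full SVD: Vh is d x d
    return np.conj(Vh[-1])        # right singular vector for a zero singular value

rng = np.random.default_rng(0)
d1 = d2 = 3
# four product states a_i (x) b_i -- one fewer than d1 + d2 - 1 = 5
a = [rng.standard_normal(d1) + 1j * rng.standard_normal(d1) for _ in range(4)]
b = [rng.standard_normal(d2) + 1j * rng.standard_normal(d2) for _ in range(4)]

w1 = perp(a[:d1 - 1], d1)   # orthogonal to the first d1 - 1 states on party 1
w2 = perp(b[d1 - 1:], d2)   # orthogonal to the rest on party 2
ext = np.kron(w1, w2)

# ext is a product state orthogonal to all four given product states
assert all(abs(np.vdot(np.kron(a[i], b[i]), ext)) < 1e-10 for i in range(4))
```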

Despite being such a simple lower bound, it is also attainable for many values of d_1,d_2 [2] (and very close to attainable in the other cases [3,4]). The goal of this post is to focus on the case when there exists a UPB of cardinality d_1+d_2-1, which is characterized by the following result of Alon and Lovász:

Theorem [2]. There exists a UPB in \mathbb{C}^{d_1}\otimes\mathbb{C}^{d_2} of (necessarily minimal) size d_1+d_2-1 if and only if d_1,d_2\geq 3 and at least one of d_1 or d_2 is odd.

In spite of the above result that demonstrates the existence of a UPB of the given minimal size in many cases, how to actually construct such a UPB in these cases is not immediately obvious, and is buried throughout the proofs of [2] and its references. The goal of the rest of this post is to make the construction of a minimal UPB in these cases explicit.

Orthogonality Graphs

The orthogonality graph of a set of s product states in \mathbb{C}^{d_1}\otimes\mathbb{C}^{d_2} is a graph with coloured edges (there are 2 colours) on s vertices (one for each product state), such that there is an edge connecting two vertices with the ith colour if and only if the two corresponding product states are orthogonal on the ith party.

For example, the orthogonality graph of the Pyramid UPB introduced earlier is illustrated below. Black edges represent states that are orthogonal on the first party, and red dotted edges represent states that are orthogonal on the second party.

Pyramid Orthogonality Graph

If the product states under consideration are mutually orthogonal, then their orthogonality graph is the complete graph K_s. Unextendibility is a bit more difficult to determine, but nonetheless a useful technique for constructing UPBs is to first choose a colouring of the edges of K_s, and then try to construct product states that lead to that colouring.

A Minimal Construction

In the orthogonality graph of the Pyramid UPB, all of the edges that connect a vertex to a neighbouring vertex are coloured black, and all other edges are coloured red. We can construct minimal UPBs by generalizing this graph in a natural way. Suppose without loss of generality that d_1 is odd, and we wish to construct a UPB of size s := d_1 + d_2 - 1. We construct the orthogonality graph by arranging s vertices in a circle and connecting any vertices that are a distance of (d_1-1)/2 or less from each other via a black edge. All other edges are coloured red. For example, in the d_1 = d_2 = 3 case, this gives the orthogonality graph above. In the d_1 = 5, d_2 = 4 case, this gives the orthogonality graph below.

(5,4) orthogonality graph

Our goal now is to construct product states that have the given orthogonality graph. This is straightforward to do, since every state must be orthogonal to d_1-1 of the other states on \mathbb{C}^{d_1} and orthogonal to the d_2-1 other states on \mathbb{C}^{d_2}. Thus, we can just pick |v_0\rangle arbitrarily, then pick |v_1\rangle randomly subject to the constraint that it is orthogonal to |v_0\rangle on the first subsystem, and so on, working our way clockwise around the orthogonality graph, generating each product state randomly subject to the orthogonality conditions.
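Here is a rough NumPy implementation of this randomized procedure (my own sketch – the helper names are mine, not from [2]):

```python
import numpy as np

def rand_orth(rng, vecs, d):
    """Random unit vector in C^d orthogonal to all vectors in vecs (len(vecs) < d)."""
    if not vecs:
        null = np.eye(d)
    else:
        M = np.conj(np.array(vecs))
        _, _, Vh = np.linalg.svd(M)
        null = np.conj(Vh[len(vecs):]).T  # orthonormal basis of the null space
    c = rng.standard_normal(null.shape[1]) + 1j * rng.standard_normal(null.shape[1])
    w = null @ c
    return w / np.linalg.norm(w)

def minimal_upb(d1, d2, seed=0):
    """Product states a_m (x) b_m, m = 0..d1+d2-2, realizing the circulant
    orthogonality graph (d1 odd, d1, d2 >= 3): black edges join vertices at
    circular distance <= (d1-1)/2, and red edges join all other pairs."""
    rng = np.random.default_rng(seed)
    s, r = d1 + d2 - 1, (d1 - 1) // 2
    A, B = [], []
    for m in range(s):
        black = [j for j in range(m) if min(m - j, s - (m - j)) <= r]
        red = [j for j in range(m) if j not in black]
        A.append(rand_orth(rng, [A[j] for j in black], d1))  # party-1 factor
        B.append(rand_orth(rng, [B[j] for j in red], d2))    # party-2 factor
    return A, B
```

Every pair of the resulting product states is orthogonal on at least one party by construction, and (with probability 1) the linear-independence conditions mentioned below also hold, so the set is unextendible.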

Furthermore, it can be shown (but will not be shown here – the techniques are similar to those of [4] and are a bit technical) that this procedure leads to a product basis that is in fact unextendible with probability 1. In order to verify unextendibility explicitly, one approach is to check that any subset of d_1 of the product states are linearly independent on \mathbb{C}^{d_1} and any subset of d_2 of the product states are linearly independent on \mathbb{C}^{d_2}.


  1. C. H. Bennett, D. P. DiVincenzo, T. Mor, P. W. Shor, J. A. Smolin, and B. M. Terhal. Unextendible product bases and bound entanglement. Phys. Rev. Lett., 82:5385–5388, 1999. E-print: arXiv:quant-ph/9808030
  2. N. Alon and L. Lovász. Unextendible product bases. J. Combinatorial Theory, Ser. A, 95:169–179, 2001.
  3. K. Feng. Unextendible product bases and 1-factorization of complete graphs. Discrete Appl. Math., 154:942–949, 2006.
  4. J. Chen and N. Johnston. The minimum size of unextendible product bases in the bipartite case (and some multipartite cases). E-print: arXiv:1301.1406 [quant-ph], 2013.

Norms and Dual Norms as Supremums and Infimums

May 26th, 2012

Let \mathcal{H} be a finite-dimensional Hilbert space over \mathbb{R} or \mathbb{C} (the fields of real and complex numbers, respectively). If we let \|\cdot\| be a norm on \mathcal{H} (not necessarily the norm induced by the inner product), then the dual norm of \|\cdot\| is defined by

\displaystyle\|\mathbf{v}\|^\circ := \sup_{\mathbf{w} \in \mathcal{H}}\Big\{ \big| \langle \mathbf{v}, \mathbf{w} \rangle \big| : \|\mathbf{w}\| \leq 1 \Big\}.

The double-dual of a norm is equal to itself (i.e., \|\cdot\|^{\circ\circ} = \|\cdot\|) and the norm induced by the inner product is the unique norm that is its own dual. Similarly, if \|\cdot\|_p is the vector p-norm, then \|\cdot\|_p^\circ = \|\cdot\|_q, where q satisfies 1/p + 1/q = 1.
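As a quick numerical illustration (a sketch of my own), the Hölder extremal vector attains this p-norm duality: for a given v, the vector w below has unit p-norm and satisfies ⟨v, w⟩ = ‖v‖_q.

```python
import numpy as np

p, q = 3.0, 1.5  # conjugate exponents: 1/p + 1/q = 1
v = np.array([2.0, -1.0, 0.5, 3.0])

q_norm = np.sum(np.abs(v) ** q) ** (1 / q)  # ||v||_q
# Hölder extremal vector: unit p-norm, and <v, w> = ||v||_q
w = np.sign(v) * np.abs(v) ** (q - 1) / q_norm ** (q - 1)

assert abs(np.sum(np.abs(w) ** p) ** (1 / p) - 1) < 1e-12  # ||w||_p = 1
assert abs(np.dot(v, w) - q_norm) < 1e-12                  # <v, w> = ||v||_q
```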

In this post, we will demonstrate that \|\cdot\|^\circ has an equivalent characterization as an infimum, and we use this characterization to provide a simple derivation of several known (but perhaps not well-known) formulas for norms such as the operator norm of matrices.

For certain norms (such as the “separability norms” presented at the end of this post), this ability to write a norm as both an infimum and a supremum is useful because computation of the norm may be difficult. However, having these two different characterizations of a norm allows us to bound it both from above and from below.

The Dual Norm as an Infimum

Theorem 1. Let S \subseteq \mathcal{H} be a bounded set satisfying {\rm span}(S) = \mathcal{H} and define a norm \|\cdot\| by

\displaystyle\|\mathbf{v}\| := \sup_{\mathbf{w} \in S}\Big\{ \big| \langle \mathbf{v}, \mathbf{w} \rangle \big| \Big\}.

Then \|\cdot\|^\circ is given by

\displaystyle\|\mathbf{v}\|^\circ = \inf\Big\{ \sum_i |c_i| : \mathbf{v} = \sum_i c_i \mathbf{v}_i, \mathbf{v}_i \in S \ \forall \, i \Big\},

where the infimum is taken over all such decompositions of \mathbf{v}.

Before proving the result, we make two observations. Firstly, the quantity \|\cdot\| described by Theorem 1 really is a norm: boundedness of S ensures that the supremum is finite, and {\rm span}(S) = \mathcal{H} ensures that \|\mathbf{v}\| = 0 \implies \mathbf{v} = 0. Secondly, every norm on \mathcal{H} can be written in this way: we can always choose S to be the unit ball of the dual norm \|\cdot\|^\circ. However, there are times when other choices of S are more useful or enlightening (as we will see in the examples).

Proof of Theorem 1. Begin by noting that if \mathbf{w} \in S and \|\mathbf{v}\| \leq 1 then \big| \langle \mathbf{v}, \mathbf{w} \rangle \big| \leq 1. It follows that \|\mathbf{w}\|^{\circ} \leq 1 whenever \mathbf{w} \in S. In fact, we now show that \|\cdot\|^\circ is the largest norm on \mathcal{H} with this property. To this end, let \|\cdot\|_\prime be another norm satisfying \|\mathbf{w}\|_{\prime}^{\circ} \leq 1 whenever \mathbf{w} \in S. Then

\displaystyle \| \mathbf{v} \| = \sup_{\mathbf{w} \in S} \Big\{ \big| \langle \mathbf{w}, \mathbf{v} \rangle \big| \Big\} \leq \sup_{\mathbf{w}} \Big\{ \big| \langle \mathbf{w}, \mathbf{v} \rangle \big| : \|\mathbf{w}\|_{\prime}^{\circ} \leq 1 \Big\} = \|\mathbf{v}\|_\prime.

Thus  \| \cdot \| \leq \| \cdot \|_\prime, so by taking duals we see that \| \cdot \|^\circ \geq \| \cdot \|_\prime^\circ, as desired.

For the remainder of the proof, we denote the infimum in the statement of the theorem by \|\cdot\|_{{\rm inf}}. Our goal now is to show that: (1) \|\cdot\|_{{\rm inf}} is a norm, (2) \|\cdot\|_{{\rm inf}} satisfies \|\mathbf{w}\|_{{\rm inf}} \leq 1 whenever \mathbf{w} \in S, and (3) \|\cdot\|_{{\rm inf}} is the largest norm satisfying property (2). The fact that \|\cdot\|_{{\rm inf}} = \|\cdot\|^\circ will then follow from the first paragraph of this proof.

To see (1) (i.e., to prove that \|\cdot\|_{{\rm inf}} is a norm), we only prove the triangle inequality, since positive homogeneity and the fact that \|\mathbf{v}\|_{{\rm inf}} = 0 if and only if \mathbf{v} = 0 are both straightforward (try them yourself!). Fix \varepsilon > 0 and let \mathbf{v} = \sum_i c_i \mathbf{v}_i, \mathbf{w} = \sum_i d_i \mathbf{w}_i be decompositions of \mathbf{v}, \mathbf{w} with \mathbf{v}_i, \mathbf{w}_i \in S for all i, satisfying \sum_i |c_i| \leq \|\mathbf{v}\|_{{\rm inf}} + \varepsilon and \sum_i |d_i| \leq \|\mathbf{w}\|_{{\rm inf}} + \varepsilon. Then

\displaystyle \|\mathbf{v} + \mathbf{w}\|_{{\rm inf}} \leq \sum_i |c_i| + \sum_i |d_i| \leq \|\mathbf{v}\|_{{\rm inf}} + \|\mathbf{w}\|_{{\rm inf}} + 2\varepsilon.

Since \varepsilon > 0 was arbitrary, the triangle inequality follows, so \|\cdot\|_{{\rm inf}} is a norm.

To see (2) (i.e., to prove that \|\mathbf{v}\|_{{\rm inf}} \leq 1 whenever \mathbf{v} \in S), we simply write \mathbf{v} in its trivial decomposition \mathbf{v} = \mathbf{v}, which has the single coefficient c_1 = 1, so \|\mathbf{v}\|_{{\rm inf}} \leq \sum_i |c_i| = |c_1| = 1.

To see (3) (i.e., to prove that \|\cdot\|_{{\rm inf}} is the largest norm on \mathcal{H} satisfying condition (2)), begin by letting \|\cdot\|_\prime be any norm on \mathcal{H} with the property that \|\mathbf{v}\|_{\prime} \leq 1 for all \mathbf{v} \in S. Then using the triangle inequality for \|\cdot\|_\prime shows that if \mathbf{v} = \sum_i c_i \mathbf{v}_i is any decomposition of \mathbf{v} with \mathbf{v}_i \in S for all i, then

\displaystyle\|\mathbf{v}\|_\prime = \Big\|\sum_i c_i \mathbf{v}_i\Big\|_\prime \leq \sum_i |c_i| \|\mathbf{v}_i\|_\prime = \sum_i |c_i|.

Taking the infimum over all such decompositions of \mathbf{v} shows that \|\mathbf{v}\|_\prime \leq \|\mathbf{v}\|_{{\rm inf}}, which completes the proof.

The remainder of this post is devoted to investigating what Theorem 1 says about certain specific norms.

Injective and Projective Cross Norms

If we let \mathcal{H} = \mathcal{H}_1 \otimes \mathcal{H}_2, where \mathcal{H}_1 and \mathcal{H}_2 are themselves finite-dimensional Hilbert spaces, then one often considers the injective and projective cross norms on \mathcal{H}, defined respectively as follows:

\displaystyle \|\mathbf{v}\|_{I} := \sup\Big\{ \big| \langle \mathbf{v}, \mathbf{a} \otimes \mathbf{b} \rangle \big| : \|\mathbf{a}\| = \|\mathbf{b}\| = 1 \Big\} \text{ and}

\displaystyle \|\mathbf{v}\|_{P} := \inf\Big\{ \sum_i \| \mathbf{a}_i \| \| \mathbf{b}_i \| : \mathbf{v} = \sum_i \mathbf{a}_i \otimes \mathbf{b}_i \Big\},

where \|\cdot\| here refers to the norm induced by the inner product on \mathcal{H}_1 or \mathcal{H}_2. The fact that \|\cdot\|_{I} and \|\cdot\|_{P} are duals of each other is simply Theorem 1 in the case when S is the set of product vectors:

\displaystyle S = \big\{ \mathbf{a} \otimes \mathbf{b} : \|\mathbf{a}\| = \|\mathbf{b}\| = 1 \big\}.

In fact, the typical proof that the injective and projective cross norms are duals of each other is very similar to the proof of Theorem 1 provided above (see [1, Chapter 1]).

Maximum and Taxicab Norms

Use n to denote the dimension of \mathcal{H} and let \{\mathbf{e}_i\}_{i=1}^n be an orthonormal basis of \mathcal{H}. If we let S = \{\mathbf{e}_i\}_{i=1}^n then the norm \|\cdot\| in the statement of Theorem 1 is the maximum norm (i.e., the p = ∞ norm):

\displaystyle\|\mathbf{v}\|_\infty = \sup_i\Big\{\big|\langle \mathbf{v}, \mathbf{e}_i \rangle \big| \Big\} = \max \big\{ |v_1|,\ldots,|v_n|\big\},

where v_i = \langle \mathbf{v}, \mathbf{e}_i \rangle is the i-th coordinate of \mathbf{v} in the basis \{\mathbf{e}_i\}_{i=1}^n. The theorem then says that the dual of the maximum norm is

\displaystyle \|\mathbf{v}\|_\infty^\circ = \inf \Big\{ \sum_i |c_i| : \mathbf{v} = \sum_i c_i \mathbf{e}_i \Big\} = \sum_{i=1}^n |v_i|,

which is the taxicab norm (i.e., the p = 1 norm), as we expect.

Operator and Trace Norm of Matrices

If we let \mathcal{H} = M_n, the space of n \times n complex matrices with the Hilbert–Schmidt inner product

\displaystyle \big\langle A, B \big\rangle := {\rm Tr}(AB^*),

then it is well-known that the operator norm and the trace norm are dual to each other:

\displaystyle \big\| A \big\|_{op} := \sup_{\mathbf{v}}\Big\{ \big\|A\mathbf{v}\big\| : \|\mathbf{v}\| = 1 \Big\} \text{ and}

\displaystyle \big\| A \big\|_{op}^\circ = \big\|A\big\|_{tr} := \sup_{U}\Big\{ \big| {\rm Tr}(AU) \big| : U \in M_n \text{ is unitary} \Big\},

where \|\cdot\| is the Euclidean norm on \mathbb{C}^n. If we let S be the set of unitary matrices in M_n, then Theorem 1 provides the following alternate characterization of the operator norm:

Corollary 1. Let A \in M_n. Then

\displaystyle \big\|A\big\|_{op} = \inf\Big\{ \sum_i |c_i| : A = \sum_i c_i U_i \text{ and each } U_i \text{ is unitary} \Big\}.
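In fact, the infimum in Corollary 1 is attained with just two unitaries: writing the singular value decomposition A = X\Sigma Y^* and \sigma_k = \big\|A\big\|_{op}\cos\theta_k = \big\|A\big\|_{op}(e^{i\theta_k}+e^{-i\theta_k})/2 gives A = \tfrac{1}{2}\big\|A\big\|_{op}(U+V) with U, V unitary. A numerical sketch of my own:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

X, sig, Yh = np.linalg.svd(A)     # A = X diag(sig) Yh
op = sig[0]                       # operator norm ||A||_op
theta = np.arccos(sig / op)       # each sig_k = op * cos(theta_k)
U = X @ np.diag(np.exp(1j * theta)) @ Yh   # unitary
V = X @ np.diag(np.exp(-1j * theta)) @ Yh  # unitary

# A = (op/2) U + (op/2) V, so the coefficients sum to exactly ||A||_op
assert np.allclose((op / 2) * (U + V), A)
assert np.allclose(U.conj().T @ U, np.eye(4))
assert np.allclose(V.conj().T @ V, np.eye(4))
```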

As an application of Corollary 1, we are able to provide the following characterization of unitarily-invariant norms (i.e., norms \|\cdot\|_{\prime} with the property that \big\|UAV\big\|_{\prime} = \big\|A\big\|_{\prime} for all unitary matrices U, V \in M_n):

Corollary 2. Let \|\cdot\|_\prime be a norm on M_n. Then \|\cdot\|_\prime is unitarily-invariant if and only if

\displaystyle \big\|ABC\big\|_\prime \leq \big\|A\big\|_{op}\big\|B\big\|_{\prime}\big\|C\big\|_{op}

for all A, B, C \in M_n.

Proof of Corollary 2. The “if” direction is straightforward: if we let A and C be unitary, then

\displaystyle \big\|B\big\|_\prime = \big\|A^*ABCC^*\big\|_\prime \leq \big\|ABC\big\|_\prime \leq \big\|B\big\|_{\prime},

where we used the fact that \big\|A\big\|_{op} = \big\|C\big\|_{op} = 1. It follows that \big\|ABC\big\|_\prime = \big\|B\big\|_\prime, so \|\cdot\|_\prime is unitarily-invariant.

To see the “only if” direction, write A = \sum_i c_i U_i and C = \sum_j d_j V_j with each U_i and V_j unitary. Then

\displaystyle \big\|ABC\big\|_\prime = \Big\|\sum_{i,j}c_i d_j U_i B V_j\Big\|_\prime \leq \sum_{i,j} |c_i| |d_j| \big\|U_i B V_j\big\|_\prime = \sum_{i,j} |c_i| |d_j| \big\|B\big\|_\prime.

By taking the infimum over all decompositions of A and C of the given form and using Corollary 1, the result follows.

An alternate proof of Corollary 2, making use of some results on singular values, can be found in [2, Proposition IV.2.4].

Separability Norms

As our final (and least well-known) example, let \mathcal{H} = M_m \otimes M_n, again with the usual Hilbert–Schmidt inner product. If we let

\displaystyle S = \{ \mathbf{a}\mathbf{b}^* \otimes \mathbf{c}\mathbf{d}^* : \|\mathbf{a}\| = \|\mathbf{b}\| = \|\mathbf{c}\| = \|\mathbf{d}\| = 1 \},

where \|\cdot\| is the Euclidean norm on \mathbb{C}^m or \mathbb{C}^n, then Theorem 1 tells us that the following two norms are dual to each other:

\displaystyle \big\|A\big\|_s := \sup\Big\{ \big| (\mathbf{a}^* \otimes \mathbf{c}^*)A(\mathbf{b} \otimes \mathbf{d}) \big| : \|\mathbf{a}\| = \|\mathbf{b}\| = \|\mathbf{c}\| = \|\mathbf{d}\| = 1 \Big\} \text{ and}

\displaystyle \big\|A\big\|_s^\circ = \inf\Big\{ \sum_i \big\|A_i\big\|_{tr}\big\|B_i\big\|_{tr} : A = \sum_i A_i \otimes B_i \Big\}.

There’s actually a little bit of work to be done to show that \|\cdot\|_s^\circ has the given form, but it’s only a couple lines – consider it an exercise for the interested reader.

Both of these norms come up frequently when dealing with quantum entanglement. The norm \|\cdot\|_s^\circ was the subject of [3], where it was shown that a quantum state \rho is entangled if and only if \|\rho\|_s^\circ > 1 (I use the above duality relationship to provide an alternate proof of this fact in [4, Theorem 6.1.5]). On the other hand, the norm \|\cdot\|_s characterizes positive linear maps of matrices and was the subject of [5, 6].
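Since computing \|A\|_s exactly requires optimizing over product vectors, one natural heuristic is an alternating (“seesaw”) maximization: for fixed \mathbf{c},\mathbf{d}, the supremum over \mathbf{a},\mathbf{b} is the top singular value of an m \times m matrix, and vice versa. The following sketch is my own heuristic (not a method from [3–6]) and produces lower bounds on \|A\|_s:

```python
import numpy as np

def sep_norm_lb(A, m, n, iters=25, seed=0):
    """Heuristic seesaw lower bound on ||A||_s for A acting on C^m (x) C^n.
    Alternately fixes (c, d) and maximizes over (a, b) via an SVD, then
    fixes (a, b) and maximizes over (c, d); converges to a local optimum."""
    rng = np.random.default_rng(seed)
    A4 = A.reshape(m, n, m, n)  # A4[i, j, k, l] = <ij| A |kl>
    c = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    d = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    c, d = c / np.linalg.norm(c), d / np.linalg.norm(d)
    val = 0.0
    for _ in range(iters):
        # for fixed (c, d), sup over (a, b) is the top singular pair of M
        M = np.einsum("j,ijkl,l->ik", c.conj(), A4, d)
        U, s, Vh = np.linalg.svd(M)
        a, b = U[:, 0], Vh[0].conj()
        # for fixed (a, b), sup over (c, d) is the top singular pair of N
        N = np.einsum("i,ijkl,k->jl", a.conj(), A4, b)
        U, s, Vh = np.linalg.svd(N)
        c, d = U[:, 0], Vh[0].conj()
        val = s[0]  # current value of |(a* (x) c*) A (b (x) d)|
    return val
```

For example, the identity matrix on \mathbb{C}^2\otimes\mathbb{C}^3 has \|I\|_s = 1, and the seesaw recovers this value.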


  1. J. Diestel, J. H. Fourie, and J. Swart. The Metric Theory of Tensor Products: Grothendieck’s Résumé Revisited. American Mathematical Society, 2008. Chapter 1: pdf
  2. R. Bhatia. Matrix Analysis. Springer, 1997.
  3. O. Rudolph. A separability criterion for density operators. J. Phys. A: Math. Gen., 33:3951–3955, 2000. E-print: arXiv:quant-ph/0002026
  4. N. Johnston. Norms and Cones in the Theory of Quantum Entanglement. PhD thesis, University of Guelph, 2012.
  5. N. Johnston and D. W. Kribs. A Family of Norms With Applications in Quantum Information Theory. Journal of Mathematical Physics, 51:082202, 2010.
  6. N. Johnston and D. W. Kribs. A Family of Norms With Applications in Quantum Information Theory II. Quantum Information & Computation, 11(1 & 2):104–123, 2011.