The basic idea is that every norm can be written as a maximization of a convex function over a convex set (in particular, every norm can be written as a maximization over the unit ball of the dual norm). However, this maximization is often difficult to deal with or solve analytically, so instead it can help to write the norm as a maximization over two or more simpler sets, each of which *can* be solved individually. To illustrate how this works, let’s start with the induced matrix norms.

The induced p → q norm of a matrix $B$ is defined as follows:

$$\|B\|_{p\rightarrow q} := \max\big\{ \|B\mathbf{x}\|_q : \|\mathbf{x}\|_p \le 1 \big\},$$

where

$$\|\mathbf{x}\|_p := \Big(\sum_{i}|x_i|^p\Big)^{1/p}$$

is the vector p-norm. There are three special cases of these norms that are easy to compute:

- When p = q = 2, this is the usual operator norm of B (i.e., its largest singular value).
- When p = q = 1, this is the maximum absolute column sum: $\|B\|_{1\rightarrow 1} = \max_j \sum_i |b_{ij}|$.
- When p = q = ∞, this is the maximum absolute row sum: $\|B\|_{\infty\rightarrow\infty} = \max_i \sum_j |b_{ij}|$.

However, outside of these three special cases (and some other special cases, such as when B only has real entries that are non-negative [1]), this norm is much messier. In general, its computation is NP-hard [2], so how can we get a good idea of its value? Well, we rewrite the norm as the following double maximization:

$$\|B\|_{p\rightarrow q} = \max\big\{ |\mathbf{y}^*B\mathbf{x}| : \|\mathbf{x}\|_p \le 1,\ \|\mathbf{y}\|_{q'} \le 1 \big\},$$

where $q'$ is the positive real number such that $1/q + 1/q' = 1$ (and we take $q' = \infty$ if $q = 1$, and vice-versa). The idea is then to maximize over $\mathbf{x}$ and $\mathbf{y}$ one at a time, alternately.

1. Start by setting $k = 1$ and fixing a randomly-chosen vector $\mathbf{x}_1$, scaled so that $\|\mathbf{x}_1\|_p = 1$.

2. Compute
$$\max\big\{|\mathbf{y}^*B\mathbf{x}_k| : \|\mathbf{y}\|_{q'} \le 1\big\},$$
keeping $\mathbf{x}_k$ fixed, and let $\mathbf{y}_k$ be the vector attaining this maximum. By Hölder’s inequality, we know that this maximum value is exactly equal to $\|B\mathbf{x}_k\|_q$. Furthermore, the equality condition of Hölder’s inequality tells us that the vector $\mathbf{y}_k$ attaining this maximum is the one with complex phases that are the same as those of $B\mathbf{x}_k$, and whose magnitudes are such that $|\mathbf{y}_k|^{q'}$ is a multiple of $|B\mathbf{x}_k|^{q}$ (here the notation $|\cdot|^{q}$ means we take the absolute value and then the q-th power of every entry of the vector).

3. Compute
$$\max\big\{|\mathbf{y}_k^*B\mathbf{x}| : \|\mathbf{x}\|_p \le 1\big\},$$
keeping $\mathbf{y}_k$ fixed, and let $\mathbf{x}_{k+1}$ be the vector attaining this maximum. By an argument almost identical to that of step 2, this maximum is equal to $\|B^*\mathbf{y}_k\|_{p'}$, where $p'$ is the positive real number such that $1/p + 1/p' = 1$. Furthermore, the vector $\mathbf{x}_{k+1}$ attaining this maximum is the one with complex phases that are the same as those of $B^*\mathbf{y}_k$, and whose magnitudes are such that $|\mathbf{x}_{k+1}|^{p}$ is a multiple of $|B^*\mathbf{y}_k|^{p'}$.

4. Increment $k$ by 1 and return to step 2. Repeat until negligible gains are made after each iteration.

This algorithm is extremely quick to run, since Hölder’s inequality tells us exactly how to solve each of the two maximizations separately, so we’re left performing only simple vector calculations at each step. The downside of this algorithm is that, even though it will always converge to *some* local maximum, it might converge to a value that is smaller than the true induced p → q norm. However, in practice this algorithm is fast enough that it can be run several thousand times with different (randomly-chosen) starting vectors to get an extremely good idea of the value of $\|B\|_{p\rightarrow q}$.
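To make the alternating steps concrete, here is a minimal NumPy sketch of the iteration (this is only an illustration of the method, not QETLAB's InducedMatrixNorm; the function names are mine, real matrices are assumed, and only the case 1 < p, q < ∞ is handled):

```python
import numpy as np

def holder_dual(w, r):
    """Unit vector y (in the norm conjugate to r) attaining y.w = ||w||_r,
    via the equality condition in Hölder's inequality.  Assumes 1 < r < inf."""
    mags = np.abs(w)
    nrm = (mags**r).sum()**(1.0 / r)
    if nrm == 0:
        y = np.zeros_like(w)
        y[0] = 1.0
        return y
    return np.sign(w) * (mags / nrm)**(r - 1)

def induced_norm_lb(B, p, q, iters=1000, tol=1e-14, seed=0):
    """Lower bound on the induced p->q norm of a real matrix B by alternately
    maximizing |y^T B x| over x (||x||_p <= 1) and y (||y||_{q'} <= 1);
    each half-step is solved exactly by Hölder's inequality."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(B.shape[1])
    x /= (np.abs(x)**p).sum()**(1.0 / p)           # scale so that ||x||_p = 1
    p_conj = p / (p - 1.0)                          # 1/p + 1/p_conj = 1
    best = 0.0
    for _ in range(iters):
        y = holder_dual(B @ x, q)                   # optimal y for this x
        x = holder_dual(B.T @ y, p_conj)            # optimal x for this y (||x||_p = 1)
        val = (np.abs(B @ x)**q).sum()**(1.0 / q)   # current estimate of ||Bx||_q
        if val - best < tol:
            break
        best = val
    return max(best, val)
```

For p = q = 2 this is exactly the power method, so the estimate converges to the largest singular value; for other p, q it is only guaranteed to be a lower bound, which is why one restarts it from many random vectors.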

It is worth noting that this algorithm is essentially the same as the one presented in [3], and reduces to the power method for finding the largest singular value when p = q = 2. This algorithm has been implemented in the QETLAB package for MATLAB as the InducedMatrixNorm function.

There is a natural family of induced norms on superoperators (i.e., linear maps $\Phi : M_n \rightarrow M_m$) as well. First, for a matrix $X \in M_n$, we define its Schatten p-norm to be the p-norm of its vector of singular values:

$$\|X\|_p := \Big(\sum_{i=1}^{n}\sigma_i(X)^p\Big)^{1/p}.$$
Three special cases of the Schatten p-norms include:

- p = 1, which is often called the “trace norm” or “nuclear norm”,
- p = 2, which is often called the “Frobenius norm” or “Hilbert–Schmidt norm”, and
- p = ∞, which is the usual operator norm.

The Schatten norms themselves are easy to compute (since singular values are easy to compute), but their induced counterparts are not.

Given a superoperator $\Phi$, its induced Schatten p → q norm is defined as follows:

$$\|\Phi\|_{p\rightarrow q} := \max\big\{ \|\Phi(X)\|_q : \|X\|_p \le 1 \big\}.$$
These induced Schatten norms were studied in some depth in [4], and crop up fairly frequently in quantum information theory (especially when p = q = 1) and operator theory (especially when p = q = ∞). The fact that they are NP-hard to compute in general is not surprising, since they reduce to the induced matrix norms (discussed earlier) in the case when $\Phi$ only acts on the diagonal entries of its input and just zeros out the off-diagonal entries. However, it seems likely that this norm’s computation is also difficult even in the special cases p = q = 1 and p = q = ∞ (it is straightforward to compute when p = q = 2, however).

Nevertheless, we can obtain good estimates of this norm’s value numerically using essentially the same method as discussed in the previous section. We start by rewriting the norm as a double maximization, where each maximization individually is easy to deal with:

$$\|\Phi\|_{p\rightarrow q} = \max\big\{ |\mathrm{Tr}(Y^\dagger\Phi(X))| : \|X\|_p \le 1,\ \|Y\|_{q'} \le 1 \big\},$$

where $q'$ is again the positive real number (or infinity) satisfying $1/q + 1/q' = 1$. We now maximize over $X$ and $Y$, one at a time, alternately, just as before:

1. Start by setting $k = 1$ and fixing a randomly-chosen matrix $X_1$, scaled so that $\|X_1\|_p = 1$.

2. Compute
$$\max\big\{|\mathrm{Tr}(Y^\dagger\Phi(X_k))| : \|Y\|_{q'} \le 1\big\},$$
keeping $X_k$ fixed, and let $Y_k$ be the matrix attaining this maximum. By the Hölder inequality for Schatten norms, we know that this maximum value is exactly equal to $\|\Phi(X_k)\|_q$. Furthermore, the matrix $Y_k$ attaining this maximum is the one with the same left and right singular vectors as $\Phi(X_k)$, and whose singular values are such that there is a constant $c$ so that $\sigma_i(Y_k)^{q'} = c\,\sigma_i(\Phi(X_k))^{q}$ for all $i$ (i.e., the vector of singular values of $Y_k$, raised to the $q'$ power, is a multiple of the vector of singular values of $\Phi(X_k)$, raised to the $q$ power).

3. Compute
$$\max\big\{|\mathrm{Tr}(Y_k^\dagger\Phi(X))| : \|X\|_p \le 1\big\},$$
keeping $Y_k$ fixed, and let $X_{k+1}$ be the matrix attaining this maximum. By an argument almost identical to that of step 2, this maximum value is exactly equal to $\|\Phi^\dagger(Y_k)\|_{p'}$, where $\Phi^\dagger$ is the map that is dual to $\Phi$ in the Hilbert–Schmidt inner product. Furthermore, the matrix $X_{k+1}$ attaining this maximum is the one with the same left and right singular vectors as $\Phi^\dagger(Y_k)$, and whose singular values are such that there is a constant $c$ so that $\sigma_i(X_{k+1})^{p} = c\,\sigma_i(\Phi^\dagger(Y_k))^{p'}$ for all $i$.

4. Increment $k$ by 1 and return to step 2. Repeat until negligible gains are made after each iteration.

The above algorithm is almost identical to the algorithm presented for induced matrix norms, but with the absolute values and complex phases of the vectors $B\mathbf{x}_k$ and $B^*\mathbf{y}_k$ replaced by the singular values and singular vectors of the matrices $\Phi(X_k)$ and $\Phi^\dagger(Y_k)$, respectively. The entire algorithm is still extremely quick to run, since each step just involves computing one singular value decomposition.

The downside of this algorithm, as with the induced matrix norm algorithm, is that we have no guarantee that this method will actually converge to the induced Schatten p → q norm; only that it will converge to some lower bound of it. However, the algorithm works pretty well in practice, and is fast enough that we can simply run it a few thousand times to get a very good idea of what the norm actually is. If you’re interested in making use of this algorithm, it has been implemented in QETLAB as the InducedSchattenNorm function.
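The same alternation can be sketched with the entrywise operations replaced by singular value decompositions. In this NumPy sketch (an illustration with names of my choosing, not QETLAB's InducedSchattenNorm), the superoperator and its Hilbert–Schmidt adjoint are supplied as callables, real matrices are assumed, and only 1 < p, q < ∞ is handled:

```python
import numpy as np

def schatten_dual(W, r):
    """Matrix Y with the same singular vectors as W and unit Schatten norm in
    the exponent conjugate to r, attaining Tr(Y^T W) = ||W||_r.  1 < r < inf."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    nrm = (s**r).sum()**(1.0 / r)
    if nrm == 0:
        Y = np.zeros_like(W)
        Y[0, 0] = 1.0
        return Y
    return U @ np.diag((s / nrm)**(r - 1)) @ Vt

def induced_schatten_lb(phi, phi_adj, dims, p, q, iters=2000, tol=1e-14, seed=0):
    """Lower bound on the induced Schatten p->q norm of the superoperator phi
    (with Hilbert-Schmidt adjoint phi_adj), by alternating over X and Y.
    `dims` is the (rows, cols) shape of the input matrix X."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal(dims)
    X /= (np.linalg.svd(X, compute_uv=False)**p).sum()**(1.0 / p)  # ||X||_p = 1
    p_conj = p / (p - 1.0)
    best = 0.0
    for _ in range(iters):
        Y = schatten_dual(phi(X), q)           # optimal Y for this X
        X = schatten_dual(phi_adj(Y), p_conj)  # optimal X for this Y (||X||_p = 1)
        val = (np.linalg.svd(phi(X), compute_uv=False)**q).sum()**(1.0 / q)
        if val - best < tol:
            break
        best = val
    return max(best, val)
```

As a sanity check, for a map of the form $\Phi(X) = AXB$ (whose adjoint is $\Phi^\dagger(Y) = A^T Y B^T$ in the real case) the induced Schatten 2 → 2 norm equals the product of the operator norms of A and B, which the iteration recovers quickly.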

The central idea used for the previous two families of norms can also be used to get lower bounds on the following norm on $M_m \otimes M_n$ that comes up from time to time when dealing with quantum entanglement:

$$\|\rho\|_{S(1)} := \max\big\{ |\langle\mathbf{v}\otimes\mathbf{w}|\rho|\mathbf{x}\otimes\mathbf{y}\rangle| : \mathbf{v},\mathbf{x}\in\mathbb{C}^m,\ \mathbf{w},\mathbf{y}\in\mathbb{C}^n \text{ are unit vectors}\big\}.$$

(As a side note: this norm, and some other ones like it, were the central focus of my thesis.) This norm is already written for us as a double maximization, so the idea presented in the previous two sections is somewhat clearer from the start: we fix randomly-generated vectors $\mathbf{w}$ and $\mathbf{y}$ and then maximize over all vectors $\mathbf{v}$ and $\mathbf{x}$, which can be done simply by computing the left and right singular vectors associated with the maximum singular value of the operator

$$(I \otimes \langle\mathbf{w}|)\,\rho\,(I \otimes |\mathbf{y}\rangle).$$

We then fix $\mathbf{v}$ and $\mathbf{x}$ as those singular vectors and maximize over all vectors $\mathbf{w}$ and $\mathbf{y}$ (which is again a singular value problem), and we iterate back and forth until we converge to some value.

As with the previously-discussed norms, this algorithm always converges, and it converges to a lower bound of $\|\rho\|_{S(1)}$, but perhaps not its exact value. If you want to take this algorithm out for a spin, it has been implemented in QETLAB as the sk_iterate function.
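The back-and-forth singular value iteration fits in a few lines of NumPy (this is an illustrative reimplementation of the idea, not QETLAB's sk_iterate; the function name is mine and real inputs are assumed):

```python
import numpy as np

def s1_norm_lb(rho, m, n, iters=200, seed=0):
    """Lower bound on max |<v (x) w| rho |x (x) y>| over unit vectors,
    computed by alternately solving the two singular-value problems.
    rho is an (m*n) x (m*n) matrix acting on C^m (x) C^n."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(n); w /= np.linalg.norm(w)
    y = rng.standard_normal(n); y /= np.linalg.norm(y)
    Im, In = np.eye(m), np.eye(n)
    val = 0.0
    for _ in range(iters):
        # fix w, y: optimal v, x are top singular vectors of (I (x) <w|) rho (I (x) |y>)
        M = np.kron(Im, w.conj()) @ rho @ np.kron(Im, y.reshape(-1, 1))
        U, s, Vt = np.linalg.svd(M)
        v, x, val = U[:, 0], Vt[0].conj(), s[0]
        # fix v, x: optimal w, y are top singular vectors of (<v| (x) I) rho (|x> (x) I)
        N = np.kron(v.conj(), In) @ rho @ np.kron(x.reshape(-1, 1), In)
        U, s, Vt = np.linalg.svd(N)
        w, y, val = U[:, 0], Vt[0].conj(), s[0]
    return val
```

For a maximally entangled two-qubit state the maximal overlap with a product state is $1/\sqrt{2}$, so the iteration converges to $1/2$.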

It’s also worth mentioning that this algorithm generalizes straightforwardly in several different directions. For example, it can be used to find lower bounds on the norms $\|\cdot\|_{S(k)}$, where we maximize on the left and right over pure states with Schmidt rank not larger than k rather than over separable pure states, and it can be used to find lower bounds on the geometric measure of entanglement [5].

**References:**

1. D. Steinberg. *Computation of matrix norms with applications to robust optimization*. Research thesis, Technion – Israel Institute of Technology, 2005.
2. J. M. Hendrickx and A. Olshevsky. Matrix p-norms are NP-hard to approximate if p ≠ 1, 2, ∞. 2009. E-print: arXiv:0908.1397
3. D. W. Boyd. The power method for ℓ^p norms. *Linear Algebra and Its Applications*, 9:95–101, 1974.
4. J. Watrous. Notes on super-operator norms induced by Schatten norms. *Quantum Information & Computation*, 5(1):58–68, 2005. E-print: arXiv:quant-ph/0411077
5. T.-C. Wei and P. M. Goldbart. Geometric measure of entanglement and applications to bipartite and multipartite quantum states. *Physical Review A*, 68:042307, 2003. E-print: arXiv:quant-ph/0212030

First off, QETLAB has a variety of functions for dealing with “simple” things like tensor products, Schmidt decompositions, random pure and mixed states, applying superoperators to quantum states, computing Choi matrices and Kraus operators, and so on, which are fairly standard daily tasks for quantum information theorists. These sorts of functions are somewhat standard, and are also found in a few other MATLAB packages (such as Toby Cubitt’s nice Quantinf package and Géza Tóth’s QUBIT4MATLAB package), so I won’t really spend any time discussing them here.

The “motivating problem” for QETLAB is the separability problem, which asks us to (efficiently / operationally / practically) determine whether a given mixed quantum state is separable or entangled. The (by far) most well-known tool for this job is the positive partial transpose (PPT) criterion, which says that every separable state remains positive semidefinite when the partial transpose map is applied to it. However, this is just a quick-and-dirty one-way test, and going beyond it is much more difficult.

The QETLAB function that tries to solve this problem is the IsSeparable function, which goes through several separability criteria in an attempt to prove the given state separable or entangled, and provides a journal reference to the paper containing the separability criterion that worked (if one was found).

As an example, consider the “tiles” state, introduced in [1], which is an example of a quantum state that is entangled, but is not detected by the simple PPT test for entanglement. We can construct this state using QETLAB’s UPB function, which lets the user easily construct a wide variety of unextendible product bases, and then verify its entanglement as follows:

>> u = UPB('Tiles'); % generates the "Tiles" UPB
>> rho = eye(9) - u*u'; % rho is the projection onto the orthogonal complement of the UPB
>> rho = rho/trace(rho); % we are now done constructing the bound entangled state
>> IsSeparable(rho)
Determined to be entangled via the realignment criterion. Reference:
K. Chen and L.-A. Wu. A matrix realignment method for recognizing entanglement.
Quantum Inf. Comput., 3:193-202, 2003.

ans =

     0

And of course more advanced tests for entanglement, such as those based on symmetric extensions, are also checked. Generally, quick and easy tests are done first, and slow but powerful tests are only performed if the script has difficulty finding an answer.

Alternatively, if you want to check individual tests for entanglement yourself, you can do that too, as there are stand-alone functions for the partial transpose, the realignment criterion, the Choi map (a specific positive map in 3-dimensional systems), symmetric extensions, and so on.

One problem that I’ve come across repeatedly in my work is the need for robust functions relating to permuting quantum systems that have been tensored together, and dealing with the symmetric and antisymmetric subspaces (and indeed, this type of thing is quite common in quantum information theory). Some very basic functionality of this type has been provided in other MATLAB packages, but it has never been as comprehensive as I would have liked. For example, QUBIT4MATLAB has a function that is capable of computing the symmetric projection on two systems, or on an arbitrary number of 2- or 3-dimensional systems, but not on an arbitrary number of systems of any dimension. QETLAB’s SymmetricProjection function fills this gap.

Similarly, there are functions for computing the antisymmetric projection, for permuting different subsystems, and for constructing the unitary swap operator that implements this permutation.

QETLAB also has a set of functions for dealing with quantum non-locality and Bell inequalities. For example, consider the CHSH inequality, which says that if $A_1, A_2$ and $B_1, B_2$ are $\pm 1$-valued measurement settings, then the following inequality holds in classical physics (where $\langle\cdot\rangle$ denotes expectation):

$$\big|\langle A_1B_1\rangle + \langle A_1B_2\rangle + \langle A_2B_1\rangle - \langle A_2B_2\rangle\big| \le 2.$$

However, in quantum-mechanical settings, this inequality can be violated, and the quantity on the left can take on a value as large as $2\sqrt{2}$ (this is Tsirelson’s bound). Finally, in no-signalling theories, the quantity on the left can take on a value as large as $4$.

All three of these quantities can be easily computed in QETLAB via the BellInequalityMax function:

>> coeffs = [1 1;1 -1]; % coefficients of the terms <A_iB_j> in the Bell inequality
>> a_coe = [0 0]; % coefficients of <A_i> in the Bell inequality
>> b_coe = [0 0]; % coefficients of <B_i> in the Bell inequality
>> a_val = [-1 1]; % values of the A_i measurements
>> b_val = [-1 1]; % values of the B_i measurements
>> BellInequalityMax(coeffs, a_coe, b_coe, a_val, b_val, 'classical')

ans =

     2

>> BellInequalityMax(coeffs, a_coe, b_coe, a_val, b_val, 'quantum')

ans =

    2.8284

>> BellInequalityMax(coeffs, a_coe, b_coe, a_val, b_val, 'nosignal')

ans =

    4.0000

The classical value of the Bell inequality is computed simply by brute force, and the no-signalling value is computed via a linear program. However, no reasonably efficient method is known for computing the quantum value of a Bell inequality, so this quantity is estimated using the NPA hierarchy [2]. Advanced users who want more control can specify which level of the NPA hierarchy to use, or even call the NPAHierarchy function directly themselves. There is also a closely-related function for computing the classical, quantum, or no-signalling value of a nonlocal game (in case you’re a computer scientist instead of a physicist).
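The classical brute-force computation is small enough to spell out directly; here is a sketch in Python (variable names are mine), maximizing the CHSH expression over all deterministic ±1 assignments:

```python
from itertools import product

# Brute-force classical value of the CHSH inequality: maximize
# <A1 B1> + <A1 B2> + <A2 B1> - <A2 B2> over deterministic +/-1 strategies.
classical = max(a1*b1 + a1*b2 + a2*b1 - a2*b2
                for a1, a2, b1, b2 in product([-1, 1], repeat=4))
print(classical)  # 2
```

Only 16 assignments need to be checked, which is why the classical value of small Bell inequalities is easy to compute exactly.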

QETLAB v0.8 is currently available at qetlab.com (where you will also find its documentation) and also on github. If you have any suggestions/comments/requests/anything, or if you have used QETLAB in your work, please let me know!

**References:**

1. C. H. Bennett, D. P. DiVincenzo, T. Mor, P. W. Shor, J. A. Smolin, and B. M. Terhal. Unextendible product bases and bound entanglement. *Phys. Rev. Lett.*, 82:5385–5388, 1999. E-print: arXiv:quant-ph/9808030
2. M. Navascués, S. Pironio, and A. Acín. A convergent hierarchy of semidefinite programs characterizing the set of quantum correlations. *New J. Phys.*, 10:073013, 2008. E-print: arXiv:0803.4290 [quant-ph]

The reason that the above statement seems so obvious is that the similar fact *does* hold, so it’s very tempting to think “inclusion-exclusion, yadda yadda, it’s simple enough to prove that it’s not worth writing down or working through the details”. However, it’s not true: a counterexample is provided by 3 distinct lines through the origin in $\mathbb{R}^2$.

There is another problem that I’ve been thinking about for quite some time that is also “obvious”: the minimal superpermutation conjecture. This conjecture was so obvious, in fact, that it appeared as a question in a national programming contest in 1998. Well, last night Robin Houston posted a note on the arXiv showing that, despite being obvious, the conjecture is false [1].

What is the shortest string that contains each permutation of “123” as a contiguous substring? It is straightforward to check that “123121321” contains each of “123”, “132”, “213”, “231”, “312”, and “321” as substrings (i.e., it is a *superpermutation* of 3 symbols), and it’s not difficult to argue (or use a computer search to show) that it is the shortest string with this property.
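These claims are easy to check by machine. A small Python sketch (function name mine) verifies the superpermutation property of "123121321" and, by exhaustive search over all shorter strings, its minimality:

```python
from itertools import permutations, product

def is_superperm(s, n):
    """True iff s contains every permutation of the first n digits as a
    contiguous substring."""
    perms = {''.join(p) for p in permutations('123456789'[:n])}
    windows = {s[i:i + n] for i in range(len(s) - n + 1)}
    return perms <= windows

# "123121321" is a superpermutation on 3 symbols...
assert is_superperm("123121321", 3)

# ...and exhaustive search over all strings of length < 9 confirms
# that no shorter superpermutation on 3 symbols exists:
shorter_exists = any(is_superperm(''.join(t), 3)
                     for length in range(9)
                     for t in product('123', repeat=length))
print(shorter_exists)  # False
```

The exhaustive search only has to examine a few thousand strings for n = 3, though this approach blows up quickly for larger n.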

Well, we can repeat this question for any number of symbols. I won’t repeat all of the details (because I already wrote about the problem here), but there is a natural recursive construction that takes an (n-1)-symbol superpermutation of length L and spits out an n-symbol superpermutation of length L+n!. This immediately gives us an n-symbol superpermutation of length 1! + 2! + 3! + … + n! for all n. Importantly, it seemed like this construction was the best we could do: computer search verifies that these superpermutations are the smallest possible, and are even unique, for n ≤ 4.

Furthermore, it is not difficult to come up with some lower bounds on the length of superpermutations that seem to suggest that we have found the right answer. A trivial argument shows that an n-symbol superpermutation must have length at least (n-1) + n!, since we need n characters for the first permutation, and 1 additional character for each of the remaining n!-1 permutations. This argument can be refined to show that a superpermutation must actually have length at least (n-2) + (n-1)! + n!, since there is no way to pack the permutations tightly enough so that each one only uses 1 additional character (spend a few minutes trying to construct superpermutations by hand and you’ll see this for yourself). In fact, we can even refine this argument further (see a not-so-pretty proof sketch here) to show that n-symbol superpermutations must have length at least (n-3) + (n-2)! + (n-1)! + n!.

A-ha! A pattern has emerged – surely we can just keep refining this argument over and over again to eventually get a lower bound of 1! + 2! + 3! + … + n!, which shows that the superpermutations we already have are indeed minimal, right? Some variant of this line of thought seemed to be where almost everyone’s mind went when introduced to this problem, and it seemed fairly convincing: this argument is more or less contained within the answers when this question was posted on Math Stack Exchange and on StackOverflow (although the authors are usually careful to state that their method only *appears* to be optimal), and this problem was presented as a programming question in the 1998 Turkish Informatics Olympiad (see the resulting thread here). Furthermore, even on pages where this was acknowledged to be a difficult open problem, it was sometimes claimed that it had been proved for n ≤ 11 (example).

For the above reasons, it was a long time before I was even convinced that this problem was indeed unsolved – it seemed like people had solved this problem but just found it not worth the effort of writing up a full proof, or that people had found a simple way to tackle the problem for moderately large values of n like 10 or 11 that I couldn’t even dream of handling.

It turns out that the minimal superpermutation conjecture is false for all n ≥ 6. That is, there exists a superpermutation of length strictly less than 1! + 2! + 3! + … + n! in all of these cases [1]. In particular, Robin Houston found the following 6-symbol superpermutation of length 872, which is smaller than the conjectured minimum length of 1! + 2! + … + 6! = 873:

12345612345162345126345123645132645136245136425136452136451234651234156234152634 15236415234615234165234125634125364125346125341625341265341235641235461235416235 41263541236541326543126453162435162431562431652431625431624531642531462531426531 42563142536142531645231465231456231452631452361452316453216453126435126431526431 25643215642315462315426315423615423165423156421356421536241536214536215436215346 21354621345621346521346251346215364215634216534216354216345216342516342156432516 43256143256413256431265432165432615342613542613452613425613426513426153246513246 53124635124631524631254632154632514632541632546132546312456321456324156324516324 56132456312465321465324165324615326415326145326154326514362514365214356214352614 35216435214635214365124361524361254361245361243561243651423561423516423514623514 263514236514326541362541365241356241352641352461352416352413654213654123

So not only are congratulations due to Robin for settling the conjecture, but a big “thank you” is due to him as well for (hopefully) convincing everyone that this problem is not as easy as it appears at first glance.

**References**

1. R. Houston. Tackling the minimal superpermutation problem. 2014. E-print: arXiv:1408.5108 [math.CO]

Until now, the length of minimal superpermutations has only been known when n ≤ 4: they have length 1, 3, 9, and 33 in these cases, respectively. It has been conjectured that minimal superpermutations have length $1! + 2! + \cdots + n!$ for all n, and I am happy to announce that Ben Chaffin has proved this conjecture when n = 5. More specifically, he showed that minimal superpermutations in the n = 5 case have length 153, and there are exactly 8 such superpermutations (previously, it was only known that minimal superpermutations have either length 152 or 153 in this case, and there are at least 2 superpermutations of length 153).

The eight superpermutations that Ben found are available here (they’re too large to include in the body of this post). Notice that the first superpermutation is the well-known “easy-to-construct” superpermutation described here, and the second superpermutation is the one that was found in [1]. The other six superpermutations are new.

One really interesting thing about the six new superpermutations is that they are the first known minimal superpermutations to break the “gap pattern” that previously-known constructions have. To explain what I mean by this, consider the minimal superpermutation “123121321” on three symbols. We can think about generating this superpermutation greedily: we start with “123”, then we append the character “1” to add the permutation “231” to the string, and then we append the character “2” to add the permutation “312” to the string. But now we are stuck: we have “12312”, and there is no way to append just one character to this string in such a way as to add another permutation to it: we have to append the *two* characters “13” to get the new permutation “213”.

This phenomenon seemed to be fairly general: in all known small superpermutations on n symbols, there was always a point (approximately halfway through the superpermutation) where n-2 consecutive characters were “wasted”: they did not add any new permutations themselves, but only “prepared” the next symbol to add a new permutation.

However, none of the six new minimal superpermutations has this property: each of them has at most 2 consecutive “wasted” characters, whereas the two previously-known superpermutations each have a run of n-2 = 3 consecutive “wasted” characters. Thus these six new superpermutations are really quite different from any superpermutations that we previously knew and loved.

The idea of Ben’s search is to do a depth-first search on the placement of the “wasted” characters (recall that “wasted” characters were defined and discussed in the previous section). Since the shortest known superpermutation on 5 symbols has length 153, and there are 120 permutations of 5 symbols, and the first n-1 = 4 characters of the superpermutation *must* be wasted, we are left with the problem of trying to place 153 – 120 – 4 = 29 wasted characters. If we can find a superpermutation with only 28 wasted characters (other than the initial 4), then we’ve found a superpermutation of length 152; if we really need all 29 wasted characters, then minimal superpermutations have length 153.
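Counting “wasted” characters is mechanical; here is a short Python sketch (function name mine) that scans a string left to right and tallies them, reproducing the counts quoted in this post:

```python
from itertools import permutations

def wasted_chars(s, n):
    """Count the characters of s (beyond the first n-1) that do not complete
    a new permutation when s is scanned left to right, and also return the
    number of distinct permutations found."""
    perms = {''.join(p) for p in permutations('123456789'[:n])}
    seen, wasted = set(), 0
    for i in range(n - 1, len(s)):
        window = s[i - n + 1:i + 1]
        if window in perms and window not in seen:
            seen.add(window)
        else:
            wasted += 1
    return wasted, len(seen)

print(wasted_chars("123451234152341", 5))  # (1, 10)
print(wasted_chars("123121321", 3))        # (1, 6)
```

The first example is the length-15 string with 10 permutations and a single wasted character that seeds the depth-first search below; the second is the minimal superpermutation on 3 symbols.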

So now we do the depth-first search:

- Find (via brute-force) the maximum number of permutations that we can fit in a string if we are allowed only 1 wasted character: the answer is 10 permutations (for example, the string “123451234152341” does the job).
- Now find the maximum number of permutations that we can fit in a string if we are allowed 2 wasted characters. To speed up the search, once we have found a string that contains some number (call it p) of permutations, we can ignore all other strings that use a wasted character before p-10 permutations, since we know from the previous bullet point that the second wasted character can add at most 10 more permutations, for a total of (p-10)+10 = p permutations.
- We now repeat this process for higher and higher numbers of wasted characters: we find the maximum number of permutations that we can fit in a string with 3 wasted characters, using the results from the previous two bullets to speed up the search by ignoring strings that place 1 or 2 wasted characters too early.
- Etc.

The results of this computation are summarized in the following table:

Wasted characters | Maximum # of permutations
---|---
0 | 5
1 | 10
2 | 15
3 | 20
4 | 23
5 | 28
6 | 33
7 | 36
8 | 41
9 | 46
10 | 49
11 | 53
12 | 58
13 | 62
14 | 66
15 | 70
16 | 74
17 | 79
18 | 83
19 | 87
20 | 92
21 | 96
22 | 99
23 | 103
24 | 107
25 | 111
26 | 114
27 | 116
28 | 118
29 | 120

As we can see, it is not possible to place all 120 permutations in a string with 28 or fewer wasted characters, which proves that there is no superpermutation of length 152 in the n = 5 case. C code that computes the values in the above table is available here.

**Update [August 18, 2014]:** Robin Houston has found a superpermutation on 6 symbols of length 873 (i.e., the conjectured minimal length) with the interesting property that it never has more than one consecutive wasted character! The superpermutation is available here.

**IMPORTANT UPDATE [August 22, 2014]:** Robin Houston has gone one step further and disproved the minimal superpermutation conjecture for all n ≥ 6. See here.

**References**

1. N. Johnston. Non-uniqueness of minimal superpermutations. *Discrete Mathematics*, 313:1553–1557, 2013.

where each $\lambda_i$ is a non-negative real scalar and the sets $\{|a_i\rangle\}$ and $\{|b_i\rangle\}$ form orthonormal bases of $\mathbb{C}^n$.

The Schmidt decomposition theorem isn’t anything fancy: it is just the singular value decomposition in disguise (the $\lambda_i$’s are the singular values of some matrix and the sets $\{|a_i\rangle\}$ and $\{|b_i\rangle\}$ are its left and right singular vectors). However, it tells us everything we could ever want to know about the entanglement of $|\phi\rangle$: it is entangled if and only if it has more than one non-zero $\lambda_i$, and in this case the question of “how much” entanglement is contained within $|\phi\rangle$ is answered by a certain function of the $\lambda_i$’s.
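Since the Schmidt decomposition is just the singular value decomposition in disguise, the coefficients can be computed in a couple of lines; a NumPy sketch (function name mine):

```python
import numpy as np

def schmidt_coefficients(v, dA, dB):
    """Schmidt coefficients of a bipartite pure state v in C^dA (x) C^dB:
    they are the singular values of v reshaped into a dA x dB matrix."""
    return np.linalg.svd(v.reshape(dA, dB), compute_uv=False)

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # (|00> + |11>)/sqrt(2)
print(schmidt_coefficients(bell, 2, 2))               # both coefficients equal 1/sqrt(2)

product_state = np.array([1.0, 0.0, 0.0, 0.0])        # |0> (x) |0>
# only one non-zero coefficient, so the state is not entangled
print(schmidt_coefficients(product_state, 2, 2))
```

A state is entangled exactly when this function returns more than one non-zero value.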

Well, we can find a similar decomposition of mixed quantum states. If $\rho \in M_n \otimes M_n$ is a mixed quantum state then it can be written in its *operator-Schmidt decomposition*:

$$\rho = \sum_{i=1}^{n^2} \lambda_i A_i \otimes B_i,$$

where each $\lambda_i$ is a non-negative real scalar and the sets $\{A_i\}$ and $\{B_i\}$ form orthonormal bases of Hermitian matrices in $M_n$ (under the Hilbert–Schmidt inner product $\langle A, B\rangle := \mathrm{Tr}(A^\dagger B)$).

Once again, we haven’t really done anything fancy: the operator-Schmidt decomposition is also just the singular value decomposition in disguise, in almost the exact same way as the regular Schmidt decomposition. However, its relationship with the entanglement of mixed states is much weaker (as we might expect from the fact that the singular value decomposition can be computed in polynomial time, but determining whether a mixed state is entangled or separable (i.e., not entangled) is expected to be hard [1]). In this post, we’ll investigate some cases when the operator-Schmidt decomposition *does* let us conclude that $\rho$ is separable or entangled.

One reasonably well-known method for proving that a mixed state is entangled is the *realignment criterion* [2,3]. What is slightly less well-known is that the realignment criterion can be phrased in terms of the coefficients $\lambda_i$ in the operator-Schmidt decomposition of $\rho$.

**Theorem 1 (realignment criterion).** Let $\rho \in M_n \otimes M_n$ have operator-Schmidt decomposition

$$\rho = \sum_{i=1}^{n^2} \lambda_i A_i \otimes B_i.$$

If $\sum_i \lambda_i > 1$ then $\rho$ is entangled.

*Proof.* The idea is to construct a specific entanglement witness that detects the entanglement in $\rho$. In particular, the entanglement witness that we will use is $W := I \otimes I - \sum_i A_i \otimes B_i$. To see that $W$ is indeed an entanglement witness, we must show that $\langle v \otimes w|W|v \otimes w\rangle \ge 0$ for all unit vectors $|v\rangle, |w\rangle \in \mathbb{C}^n$. Well, some algebra shows that

$$\langle v \otimes w|W|v \otimes w\rangle = 1 - \sum_i \langle v|A_i|v\rangle\langle w|B_i|w\rangle,$$

so it suffices to show that $\sum_i \langle v|A_i|v\rangle\langle w|B_i|w\rangle \le 1$. To see this, notice that

$$\sum_i \langle v|A_i|v\rangle\langle w|B_i|w\rangle \le \sqrt{\sum_i \langle v|A_i|v\rangle^2}\sqrt{\sum_i \langle w|B_i|w\rangle^2} = 1,$$

where the inequality is the Cauchy–Schwarz inequality and the equality comes from the fact that the sets $\{A_i\}$ and $\{B_i\}$ are orthonormal bases, so $\sum_i \langle v|A_i|v\rangle^2 = \big\||v\rangle\langle v|\big\|_F^2 = 1$ (and similarly for the $B_i$).

Now that we know that $W$ is an entanglement witness, we must check that it detects the entanglement in $\rho$ (that is, we want to show that $\mathrm{Tr}(W\rho) < 0$). This is straightforward to show by making use of the fact that the sets $\{A_i\}$ and $\{B_i\}$ are orthonormal:

$$\mathrm{Tr}(W\rho) = 1 - \sum_i \lambda_i < 0.$$

It follows that $\rho$ is entangled, which completes the proof.

A more popular formulation of the realignment criterion says that if we define the *realignment map* $R$ by

$$R\big(|i\rangle\langle j| \otimes |k\rangle\langle l|\big) = |i\rangle\langle k| \otimes |j\rangle\langle l|,$$

extended by linearity, and let $\|\cdot\|_{\mathrm{tr}}$ denote the *trace norm* (i.e., the sum of the singular values), then $\|R(\rho)\|_{\mathrm{tr}} > 1$ implies that $\rho$ is entangled. The equivalence of these two formulations of the realignment criterion comes from the fact that the singular values of $R(\rho)$ are exactly the coefficients $\lambda_i$ in its operator-Schmidt decomposition.
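This second formulation is easy to implement; here is a NumPy sketch of the realignment map and the resulting criterion (function names mine):

```python
import numpy as np

def realign(rho, dA, dB):
    """Realignment map R: reshuffle rho so that the singular values of the
    result are the operator-Schmidt coefficients of rho."""
    t = rho.reshape(dA, dB, dA, dB)          # t[i, k, j, l] = <i, k| rho |j, l>
    return t.transpose(0, 2, 1, 3).reshape(dA * dA, dB * dB)

def realignment_sum(rho, dA, dB):
    """Sum of the operator-Schmidt coefficients (trace norm of R(rho));
    a value > 1 certifies entanglement by the realignment criterion."""
    return np.linalg.svd(realign(rho, dA, dB), compute_uv=False).sum()

bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho_bell = np.outer(bell, bell)
print(realignment_sum(rho_bell, 2, 2))       # approx. 2 > 1, so entangled

rho_prod = np.diag([1.0, 0.0, 0.0, 0.0])     # |00><00|, separable
print(realignment_sum(rho_prod, 2, 2))       # approx. 1, so no conclusion
```

Note that the criterion is one-way: a value of at most 1 tells us nothing, as the "tiles" example mentioned earlier already showed for the PPT test.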

We might naturally wonder whether we can prove that even more states are entangled based on their operator-Schmidt decomposition than those detected by the realignment criterion. The following theorem gives one sense in which the answer to this question is “no”: if we only look at “nice” functions of the coefficients $\lambda_i$ then the realignment criterion gives the best method of entanglement detection possible.

**Theorem 2.** Let $f : \mathbb{R}^{n^2} \rightarrow \mathbb{R}$ be a symmetric gauge function (i.e., a norm that is invariant under permutations and sign changes of the entries of the input vector). If we can conclude that $\rho$ is entangled based on the value of $f(\lambda_1, \lambda_2, \ldots, \lambda_{n^2})$ then it must be the case that $\sum_i \lambda_i > 1$.

*Proof.* Without loss of generality, we scale $f$ so that $f(1,0,0,\ldots,0) = 1$. We first prove two facts about $f$.

**Claim 1:** $f(\lambda_1,\ldots,\lambda_{n^2}) \ge 1/n$ for all mixed states $\rho$. This follows from the fact that $\lambda_1 \ge 1/n$ (which itself is kind of a pain to prove: it follows from the fact that the relevant induced Schatten norm of the realignment map equals $n$, but if anyone knows of a more direct and/or simpler way to prove this, I’d love to see it). If we assume without loss of generality that $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_{n^2}$ then

$$f(\lambda_1,\ldots,\lambda_{n^2}) \ge f(\lambda_1,0,\ldots,0) = \lambda_1 f(1,0,\ldots,0) = \lambda_1 \ge 1/n,$$

as desired.

**Claim 2:** There exists a separable state for which $f(\lambda_1,\ldots,\lambda_{n^2})$ equals any given value in the interval $[1/n, 1]$. To see why this is the case, first notice that there exists a separable state with $\lambda_1 = 1$ and $\lambda_i = 0$ for all $i \ge 2$: the state $\rho = |0\rangle\langle 0| \otimes |0\rangle\langle 0|$ is one such example. Similarly, there is a separable state with $\lambda_1 = 1/n$ and $\lambda_i = 0$ for all $i \ge 2$: the state $\rho = \tfrac{1}{n}I \otimes \tfrac{1}{n}I$ is one such example. Furthermore, it is straightforward to interpolate between these two extremes to find separable states (even product states) with $\lambda_i = 0$ for all $i \ge 2$ and any value of $\lambda_1 \in [1/n, 1]$. For such states we have

$$f(\lambda_1,0,\ldots,0) = \lambda_1 f(1,0,\ldots,0) = \lambda_1,$$

which can take any value in the interval $[1/n, 1]$ as claimed.

By combining claims 1 and 2, we see that we could only ever use the value of $f(\lambda_1,\ldots,\lambda_{n^2})$ to conclude that $\rho$ is entangled if $f(\lambda_1,\ldots,\lambda_{n^2}) > 1$. However, in this case we have

$$1 < f(\lambda_1,\ldots,\lambda_{n^2}) \le \sum_i \lambda_i f(1,0,\ldots,0) = \sum_i \lambda_i,$$

which completes the proof.

Theorem 2 can be phrased naturally in terms of the other formulation of the realignment criterion as well: it says that there is no unitarily-invariant matrix norm $\|\cdot\|$ with the property that we can use the value of $\|R(\rho)\|$ to conclude that $\rho$ is entangled, except in those cases where the trace norm (i.e., the realignment criterion) itself already tells us that $\rho$ is entangled.

Nonetheless, we can certainly imagine using functions of the coefficients $\lambda_i$ that are *not* symmetric gauge functions. Alternatively, we could take into account some (hopefully easily-computable) properties of the matrices $A_i$ and $B_i$. One such method for detecting entanglement that depends on the coefficients $\lambda_i$ and the trace of each $A_i$ and $B_i$ is as follows.

**Theorem 3 [4,5].** Let $\rho \in M_n \otimes M_n$ have operator-Schmidt decomposition

$$\rho = \sum_i \lambda_i A_i \otimes B_i,$$

and let $\rho_A := \mathrm{Tr}_B(\rho) = \sum_i \lambda_i \mathrm{Tr}(B_i)A_i$ and $\rho_B := \mathrm{Tr}_A(\rho) = \sum_i \lambda_i \mathrm{Tr}(A_i)B_i$ be its reduced states. If

$$\big\|R(\rho - \rho_A \otimes \rho_B)\big\|_{\mathrm{tr}} + \sqrt{\mathrm{Tr}(\rho_A^2)\,\mathrm{Tr}(\rho_B^2)} > 1$$

then $\rho$ is entangled.

I won’t prove Theorem 3 here, but I will note that it is strictly stronger than the realignment criterion, which can be seen by showing that the left-hand side of Theorem 3 is at least as large as the left-hand side of Theorem 1. To show this, observe that

$$\big\|R(\rho_A \otimes \rho_B)\big\|_{\mathrm{tr}} = \|\rho_A\|_F\|\rho_B\|_F = \sqrt{\mathrm{Tr}(\rho_A^2)\,\mathrm{Tr}(\rho_B^2)}$$

and

$$\big\|R(\rho - \rho_A \otimes \rho_B)\big\|_{\mathrm{tr}} \ge \big\|R(\rho)\big\|_{\mathrm{tr}} - \big\|R(\rho_A \otimes \rho_B)\big\|_{\mathrm{tr}} = \sum_i \lambda_i - \sqrt{\mathrm{Tr}(\rho_A^2)\,\mathrm{Tr}(\rho_B^2)},$$

so the difference between the two left-hand sides is nonnegative.

Much like we can use the operator-Schmidt decomposition to sometimes prove that a state is entangled, we can also use it to sometimes prove that a state is separable. To this end, we will use the *operator-Schmidt rank* of $\rho$, which is the number of non-zero coefficients $c_i$ in its operator-Schmidt decomposition. One trivial observation is as follows:

**Proposition 4.** If the operator-Schmidt rank of $\rho$ is $1$ then $\rho$ is separable.

*Proof.* If the operator-Schmidt rank of $\rho$ is $1$ then we can write $\rho = A \otimes B$ for some $A \in M_m$ and $B \in M_n$. Since $\rho$ is positive semidefinite, it follows that either $A$ and $B$ are both positive semidefinite or both negative semidefinite. If they are both positive semidefinite, we are done. If they are both negative semidefinite then we can write $\rho = (-A) \otimes (-B)$ and then we are done.

Somewhat surprisingly, however, we can go further than this: it turns out that all states with operator-Schmidt rank $2$ are also separable, as was shown in [6].

**Theorem 5 [6].** If the operator-Schmidt rank of $\rho$ is $2$ then $\rho$ is separable.

*Proof.* If $\rho$ has operator-Schmidt rank $2$ then it can be written in the form $\rho = A_1 \otimes B_1 + A_2 \otimes B_2$ for some $A_1, A_2 \in M_m$ and $B_1, B_2 \in M_n$. Throughout this proof, we use the notation $\mathrm{Tr}_A$ and $\mathrm{Tr}_B$ for the partial traces over the first and second subsystems, respectively.

Since $\rho$ is positive semidefinite, so are each of its partial traces. Thus $\mathrm{Tr}_B(\rho) = \mathrm{Tr}(B_1)A_1 + \mathrm{Tr}(B_2)A_2$ and $\mathrm{Tr}_A(\rho) = \mathrm{Tr}(A_1)B_1 + \mathrm{Tr}(A_2)B_2$ are both positive semidefinite operators. It is then straightforward to verify that

What is important here is that we have found a rank-$2$ tensor decomposition of $\rho$ in which one of the terms is positive semidefinite. Now we define

and notice that for some (in order to do this, we actually need the partial traces of to be nonsingular, but this is easily taken care of by standard continuity arguments, so we’ll sweep it under the rug). Furthermore, is also positive semidefinite, and it is separable if and only if is separable. Since is positive semidefinite, we know that for all eigenvalues of and of . If we absorb scalars between and so that then this implies that for all . Thus and are both positive semidefinite. Furthermore, a straightforward calculation shows that

We now play a similar game as before: we define a new matrix

and notice that for some (similar to before, we note that there is a standard continuity argument that can be used to handle the fact that and might be singular). The minimum eigenvalue of is then , which is non-negative as a result of being positive semidefinite. It then follows that

Since each term in the above decomposition is positive semidefinite, it follows that is separable, which implies that is separable, which finally implies that is separable.

In light of Theorem 5, it seems somewhat natural to ask how far we can push things: what values of the operator-Schmidt rank imply that a state is separable? Certainly we cannot expect all states with an operator-Schmidt rank of $4$ to be separable, since every state in $M_2 \otimes M_2$ has operator-Schmidt rank $4$ or less, and there are entangled states in this space (more concretely, it’s easy to check that the maximally-entangled pure state has operator-Schmidt rank $4$).
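Since the coefficients $c_i$ are the singular values of the realigned matrix, the operator-Schmidt rank of $\rho$ is just the ordinary matrix rank of $R(\rho)$, which can be computed without any singular value decomposition. Here is a self-contained sketch (my own illustration, using one common realignment convention) checking that the maximally entangled state in $M_2 \otimes M_2$ has operator-Schmidt rank 4 while a product state has rank 1:

```python
# Operator-Schmidt rank = matrix rank of the realigned density matrix,
# since the operator-Schmidt coefficients are the singular values of R(rho).
# Realignment convention: R(rho)[(i,k),(j,l)] = rho[(i,j),(k,l)].

def realign(rho, d=2):
    R = [[0.0] * (d * d) for _ in range(d * d)]
    for i in range(d):
        for j in range(d):
            for k in range(d):
                for l in range(d):
                    R[i * d + k][j * d + l] = rho[i * d + j][k * d + l]
    return R

def rank(M, tol=1e-9):
    """Matrix rank via Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        pivot = max(range(r, rows), key=lambda i: abs(M[i][c]), default=None)
        if pivot is None or abs(M[pivot][c]) < tol:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            for jj in range(c, cols):
                M[i][jj] -= f * M[r][jj]
        r += 1
    return r

phi = [1 / 2 ** 0.5, 0, 0, 1 / 2 ** 0.5]
rho_ent = [[phi[a] * phi[b] for b in range(4)] for a in range(4)]   # |phi+><phi+|
e00 = [1.0, 0, 0, 0]
rho_prod = [[e00[a] * e00[b] for b in range(4)] for a in range(4)]  # |00><00|

print(rank(realign(rho_ent)))   # 4: maximally entangled state
print(rank(realign(rho_prod)))  # 1: product state
```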

This left the case of operator-Schmidt rank $3$ open. Very recently, it was shown in [7] that a mixed state in $M_2 \otimes M_n$ with operator-Schmidt rank $3$ is indeed separable, yet there are entangled states with operator-Schmidt rank $3$ in $M_3 \otimes M_3$.

**References**

- L. Gurvits. Classical deterministic complexity of Edmonds’ problem and quantum entanglement. In *Proceedings of the Thirty-Fifth Annual ACM Symposium on Theory of Computing*, pages 10–19, 2003. E-print: arXiv:quant-ph/0303055
- K. Chen and L.-A. Wu. A matrix realignment method for recognizing entanglement. *Quantum Inf. Comput.*, 3:193–202, 2003. E-print: arXiv:quant-ph/0205017
- O. Rudolph. Some properties of the computable cross norm criterion for separability. *Phys. Rev. A*, 67:032312, 2003. E-print: arXiv:quant-ph/0212047
- C.-J. Zhang, Y.-S. Zhang, S. Zhang, and G.-C. Guo. Entanglement detection beyond the cross-norm or realignment criterion. *Phys. Rev. A*, 77:060301(R), 2008. E-print: arXiv:0709.3766 [quant-ph]
- O. Gittsovich, O. Gühne, P. Hyllus, and J. Eisert. Unifying several separability conditions using the covariance matrix criterion. *Phys. Rev. A*, 78:052319, 2008. E-print: arXiv:0803.0757 [quant-ph]
- D. Cariello. Separability for weak irreducible matrices. E-print: arXiv:1311.7275 [quant-ph]
- D. Cariello. Does symmetry imply PPT property? E-print: arXiv:1405.3634 [math-ph]

**Question.** How many different possible orderings are there of the numbers $x_ix_j$ ($1 \leq i \leq j \leq n$), where $x_1 > x_2 > \cdots > x_n > 0$?

To help illustrate what we mean by this question, consider the n = 2 case, where $x_1 > x_2 > 0$. Then the 3 possible products of $x_1$ and $x_2$ are $x_1^2, x_1x_2, x_2^2$, and it is straightforward to see that we must have $x_1^2 > x_1x_2 > x_2^2$, so there is only one possible ordering in the n = 2 case.

In the n = 3 case, we have $x_1 > x_2 > x_3 > 0$ and 6 possible products: $x_1^2, x_1x_2, x_1x_3, x_2^2, x_2x_3, x_3^2$. Some relationships between these 6 numbers are immediate, such as $x_1^2 > x_1x_2 > x_1x_3 > x_2x_3 > x_3^2$. However, it could be the case that either $x_1x_3 > x_2^2$ or $x_2^2 > x_1x_3$ (we ignore the degenerate cases where two products are equal to each other), so there are two different possible orderings in this case:

$$x_1^2 > x_1x_2 > x_1x_3 > x_2^2 > x_2x_3 > x_3^2 \quad \text{or} \quad x_1^2 > x_1x_2 > x_2^2 > x_1x_3 > x_2x_3 > x_3^2.$$
In this post, we will consider the problem of how many such orderings exist for larger values of n. This problem arises naturally from a problem in quantum entanglement: the number of such orderings is exactly the minimum number of linear matrix inequalities needed to characterize the eigenvalues of quantum states that are “PPT from spectrum” [1].
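The small cases above are easy to check numerically: sample random decreasing tuples and record which orderings of the pairwise products actually occur. Here is a quick pure-Python sketch (the sample size and sampling range are arbitrary choices of mine, not from the problem statement):

```python
# Estimate the number of distinct orderings of the pairwise products
# x_i * x_j (i <= j) by sampling random tuples x_1 > ... > x_n > 0.
import itertools
import random

def ordering(xs):
    """Return the index pairs (i, j) sorted by decreasing product x_i * x_j."""
    pairs = list(itertools.combinations_with_replacement(range(len(xs)), 2))
    return tuple(sorted(pairs, key=lambda p: -xs[p[0]] * xs[p[1]]))

def count_orderings(n, samples=20000, seed=0):
    """Count the distinct product-orderings seen over many random samples."""
    rng = random.Random(seed)
    seen = set()
    for _ in range(samples):
        xs = sorted((rng.uniform(0.01, 1) for _ in range(n)), reverse=True)
        seen.add(ordering(xs))
    return len(seen)

print(count_orderings(2))  # 1: only one ordering when n = 2
print(count_orderings(3))  # 2: the two orderings described above
```

This only gives a lower bound on the number of orderings in general (rare orderings might be missed by sampling), but for n = 2 and n = 3 it finds all of them easily.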

We now begin constructing upper bounds on the number of possible orderings of the products $x_ix_j$. Since we are counting orderings between $n(n+1)/2$ numbers, a trivial upper bound is given by $(n(n+1)/2)!$, since that is the number of possible orderings of $n(n+1)/2$ arbitrary numbers. However, this quantity is a gross overestimate.

We can improve this upper bound by creating an $n \times n$ matrix whose $(i,j)$-entry is $x_ix_j$ (note that this matrix is symmetric, positive semidefinite, and has rank 1, which is roughly how the connection to quantum entanglement arises). For example, in the n = 4 case, this matrix is as follows:

$$\begin{bmatrix} x_1^2 & x_1x_2 & x_1x_3 & x_1x_4 \\ \ast & x_2^2 & x_2x_3 & x_2x_4 \\ \ast & \ast & x_3^2 & x_3x_4 \\ \ast & \ast & \ast & x_4^2 \end{bmatrix}$$
where we have used asterisks (*) to indicate entries that are determined by symmetry. The fact that $x_1 > x_2 > \cdots > x_n > 0$ implies that the rows and columns of the upper-triangular part of this matrix are decreasing. Thus we can get an upper bound on the solution to our problem by counting the number of ways that we can place the numbers $1, 2, \ldots, n(n+1)/2$ (exactly once each) in the upper-triangular part of a matrix in such a way that the rows and columns of that upper-triangular part are decreasing. For example, this can be done in 2 different ways in the n = 3 case:

$$\begin{bmatrix} 6 & 5 & 4 \\ \ast & 3 & 2 \\ \ast & \ast & 1 \end{bmatrix} \qquad\qquad \begin{bmatrix} 6 & 5 & 3 \\ \ast & 4 & 2 \\ \ast & \ast & 1 \end{bmatrix}$$
The matrix above on the left corresponds to the case $x_1x_3 > x_2^2$ discussed earlier, while the matrix above on the right corresponds to the case $x_2^2 > x_1x_3$.
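These placements can be counted by brute force for small n: the upper-triangular cells form a poset (each cell must hold a smaller number than the cells above it and to its left), and the valid placements are exactly the linear extensions of that poset. A short sketch of that computation (my own illustration, not the post's code):

```python
# Count placements of 1..n(n+1)/2 in the upper-triangular part of an
# n x n matrix with decreasing rows and columns, by counting linear
# extensions of the corresponding cell poset with memoization.
from functools import lru_cache

def count_placements(n):
    cells = [(i, j) for i in range(n) for j in range(i, n)]
    # Each cell's "predecessors" (cells that must hold larger numbers):
    # the cell directly above and the cell directly to the left, when
    # those positions are still inside the upper triangle.
    preds = {c: [p for p in [(c[0] - 1, c[1]), (c[0], c[1] - 1)]
                 if 0 <= p[0] <= p[1]] for c in cells}

    @lru_cache(maxsize=None)
    def count(filled):
        # 'filled' = cells that already hold the largest numbers so far.
        if len(filled) == len(cells):
            return 1
        fset = set(filled)
        total = 0
        for c in cells:
            # c may receive the next-largest number once its upper/left
            # neighbours are already filled.
            if c not in fset and all(p in fset for p in preds[c]):
                total += count(tuple(sorted(fset | {c})))
        return total

    return count(())

print([count_placements(n) for n in range(1, 6)])  # [1, 1, 2, 12, 286]
```

This brute force is only feasible for small n, which is exactly why a closed-form count is useful.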

A formula for the number of such ways to place the integers $1, 2, \ldots, n(n+1)/2$ in a matrix like this was derived in [2] (see also A003121 in the OEIS), which immediately gives us the following upper bound on the number of orderings of the products $x_ix_j$:

$$\left(\frac{n(n+1)}{2}\right)! \,\prod_{k=1}^{n} \frac{(k-1)!}{(2k-1)!}.$$
For n = 1, 2, 3, …, this formula gives the values 1, 1, 2, 12, 286, 33592, 23178480, …
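Thrall's formula can be evaluated exactly with rational arithmetic; the product form used below, $\big(n(n+1)/2\big)!\prod_{k=1}^{n}(k-1)!/(2k-1)!$, is the standard one for OEIS sequence A003121 (stated here from the OEIS rather than from the post):

```python
# Evaluate the closed-form count of decreasing placements (A003121)
# exactly, using rational arithmetic to avoid floating-point error.
from fractions import Fraction
from math import factorial

def thrall(n):
    N = n * (n + 1) // 2
    val = Fraction(factorial(N))
    for k in range(1, n + 1):
        val *= Fraction(factorial(k - 1), factorial(2 * k - 1))
    assert val.denominator == 1  # the product is always an integer
    return int(val)

print([thrall(n) for n in range(1, 8)])
# [1, 1, 2, 12, 286, 33592, 23178480]
```

These values match the sequence quoted above, and the formula is fast enough to evaluate for any n.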

Before improving the upper bound that we just presented, let’s first discuss why it is not actually a solution to the original question. In the n = 4 case, our best upper bound so far is 12, since there are 12 different ways to place the integers $1, 2, \ldots, 10$ in the upper-triangular part of a $4 \times 4$ matrix such that the rows and columns of that upper-triangular part are decreasing. However, one such placement is as follows:

$$\begin{bmatrix} 10 & 9 & 8 & 5 \\ \ast & 7 & 6 & 4 \\ \ast & \ast & 3 & 2 \\ \ast & \ast & \ast & 1 \end{bmatrix}$$
The above matrix corresponds to the following inequalities in terms of $x_1, x_2, x_3, x_4$:

$$x_1^2 > x_1x_2 > x_1x_3 > x_2^2 > x_2x_3 > x_1x_4 > x_2x_4 > x_3^2 > x_3x_4 > x_4^2.$$
The problem here is that there actually do not exist real numbers $x_1 > x_2 > x_3 > x_4 > 0$ that satisfy the above string of inequalities. To see this, notice in particular that we have the following three inequalities: $x_1x_3 > x_2^2$, $x_2x_4 > x_3^2$, and $x_2x_3 > x_1x_4$. However, multiplying the first two inequalities together gives $x_1x_2x_3x_4 > x_2^2x_3^2$, so $x_1x_4 > x_2x_3$, which contradicts the third inequality.

More generally, there cannot be indices $i < j \leq k < l$ such that we simultaneously have the following three inequalities:

$$x_ix_k > x_j^2, \qquad x_jx_l > x_k^2, \qquad \text{and} \qquad x_jx_k > x_ix_l,$$

since multiplying the first two together gives $x_ix_l > x_jx_k$, which contradicts the third (and the same argument rules out the three reversed inequalities).

I am not aware of a general formula for the number of such matrices that do not lead to these types of “bad” inequalities, but I have computed this quantity for n ≤ 7 (C code is available here), which gives the following better upper bound on the number of possible orderings of the products $x_ix_j$ for n = 1, 2, 3, …: 1, 1, 2, 10, 114, 2612, 108664, …, which we see is significantly smaller than the upper bound found in the previous section for n ≥ 5.
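I don't reproduce the C code here, but the filtering computation can be sketched in Python for small n: enumerate all row/column-decreasing placements, then discard any whose induced ordering contains a "bad" triple of the form $x_ix_k > x_j^2$, $x_jx_l > x_k^2$, $x_jx_k > x_ix_l$ (or all three reversed). The triple test below is my reconstruction of that obstruction, not the original script:

```python
# Improved upper bound: count decreasing placements whose induced
# ordering is not ruled out by the multiplicative "bad triple" argument.
import itertools

def orderings(n):
    """Yield each decreasing placement as a list of index pairs (i, j),
    from the largest product down to the smallest."""
    cells = [(i, j) for i in range(n) for j in range(i, n)]
    preds = {c: [p for p in [(c[0] - 1, c[1]), (c[0], c[1] - 1)]
                 if 0 <= p[0] <= p[1]] for c in cells}

    def extend(seq, fset):
        if len(seq) == len(cells):
            yield list(seq)
            return
        for c in cells:
            if c not in fset and all(p in fset for p in preds[c]):
                seq.append(c)
                fset.add(c)
                yield from extend(seq, fset)
                seq.pop()
                fset.remove(c)

    yield from extend([], set())

def is_feasible(order, n):
    """Reject orderings containing a bad triple (in either orientation)."""
    rank = {pair: pos for pos, pair in enumerate(order)}  # smaller pos = larger product
    for i, j, k, l in itertools.combinations(range(n), 4):
        bad1 = (rank[(i, k)] < rank[(j, j)] and rank[(j, l)] < rank[(k, k)]
                and rank[(j, k)] < rank[(i, l)])
        bad2 = (rank[(j, j)] < rank[(i, k)] and rank[(k, k)] < rank[(j, l)]
                and rank[(i, l)] < rank[(j, k)])
        if bad1 or bad2:
            return False
    return True

for n in range(2, 5):
    all_ord = list(orderings(n))
    good = [o for o in all_ord if is_feasible(o, n)]
    print(n, len(all_ord), len(good))  # n = 4: 12 placements, 10 survive
```

For n = 4 this removes exactly the two infeasible placements, reproducing the improved bound of 10.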

It is straightforward to write a script that generates random numbers $x_1 > x_2 > \cdots > x_n > 0$ and determines the resulting ordering of the pairwise products $x_ix_j$. By doing this, we can verify that the upper bounds from the previous section are in fact tight when n ≤ 5. However, when n = 6, we find that 4 of the 2612 potential orderings do not seem to actually be attained by any choice of $x_1 > x_2 > \cdots > x_6 > 0$. One of these “problematic” orderings is the one that arises from the following matrix:

The problem here is that the above matrix implies the following 5 inequalities:

However, multiplying the first four inequalities gives , so , which contradicts the fifth inequality above. We can similarly prove that the other 3 seemingly problematic orderings are in fact not attainable, so there are exactly 2608 possible orderings in the n = 6 case.

I haven’t been able to compute the number of orderings when n ≥ 7, as my methods for obtaining upper and lower bounds are both much too slow in these cases. The best bounds that I have in the n = 7 case say that the number of orderings is between 50900 and 108664, inclusive.

**Update [Feb. 13, 2014]:** Giovanni Resta has improved the lower bound in the n = 7 case to 107498, which narrows the n = 7 region down considerably. I’ve also improved the upper bound to 108146 (see this improved version of the C script). In all likelihood, 107498 is the correct number of orderings in this case, and it’s the upper bound 108146 that needs to be further improved.

**Update [Feb. 14, 2014]:** This sequence is now in the OEIS. See A237749.

**Update [Feb. 18, 2014]:** Hans Havermann has found a couple of references that talk about this problem (in the language of Golomb rulers) and compute all values for n ≤ 7. See [3] and [4].

**References**

- R. Hildebrand. Positive partial transpose from spectra. *Phys. Rev. A*, 76:052325, 2007. E-print: arXiv:quant-ph/0502170
- R. M. Thrall. A combinatorial problem. *Michigan Math. J.*, 1:81–88, 1952.
- M. Beck, T. Bogart, and T. Pham. Enumeration of Golomb rulers and acyclic orientations of mixed graphs. *Electron. J. Combin.*, 19:42, 2012. E-print: arXiv:1110.6154 [math.CO]
- T. Pham. *Enumeration of Golomb rulers*. Master’s Thesis, San Francisco State University, 2011.