Statistical Analysis of Password Strength via Gawker’s Leaked Database

December 15th, 2010

This past weekend, Gawker Media was hacked and its user account database was leaked online. The database contained about 1.3 million rows of information containing usernames, e-mail addresses, and passwords (encrypted via DES). This security breach is unfortunate for people whose information is contained within that database, but the silver lining is that it provides a rare opportunity for statistics nerds like me to analyze some otherwise completely unobtainable data.

Because the passwords were encrypted using such an out-of-date scheme (tsk, tsk, Gawker), about 200,000 of the passwords contained in the database have been decrypted. Of course, the passwords that were cracked were relatively weak ones. For example, all 2641 accounts that used some trivial modification of “password” or “qwerty” as their password were decrypted. In this post I will look at trends in which users’ passwords were cracked to gain insight into which users do and do not create strong passwords.

It should of course be made clear that, because this data comes from a single database, the results that follow may not be representative of the population as a whole, but rather may be skewed by the fact that people with Gawker accounts are generally more “techy” than the average internet user.

Preliminaries: Cleaning Up the Database

The database of course had to be cleaned up significantly before it could be of much statistical use, so some of the numbers here may differ slightly from the raw numbers reported by news outlets or found in the raw database itself. The numbers here are the result of removing any incomplete rows from the database (i.e., rows missing a password, an e-mail address, or both) and removing any accounts that were clearly created by SPAMbots (I’m only interested in the password strength of real users).

Also, I will only look at accounts with an e-mail address whose domain appears in the database at least 50 times. This restriction is in place partly because it is extremely difficult to compute meaningful statistics from a sample size much smaller than 50, and partly because Gawker doesn’t require verified e-mail addresses (so 46993 of the 52593 domain names listed in the database were used by exactly one person, many of which are clearly fake and/or used for SPAM).

After making the aforementioned “fixes” to the database, there are 412670 accounts, 157794 (38.2%) of which had their password decrypted.

Password Strength by Domain Name

The following table displays the 10 most frequently-occurring domain names used for e-mail addresses in the database along with how many users of the domain had their password cracked.

Domain          Total Accounts  Decrypted Passwords  Decryption %
gmail.com               158031                50530         32.0%
yahoo.com                94147                40964         43.5%
hotmail.com              66752                27332         40.9%
aol.com                  17534                 8151         46.5%
comcast.net               7222                 2801         38.8%
msn.com                   5544                 2250         40.6%
mac.com                   4951                 1750         35.3%
sbcglobal.net             3896                 1667         42.8%
hotmail.co.uk             3204                 1476         46.1%
verizon.net               2211                  860         38.9%

The following table shows the z-values associated with the statistical test that the two given domains have the same proportion of users with strong passwords (a standard two-proportion z-test; a code sketch follows the table). Positive z-values indicate that the row’s domain had the lower proportion of cracked passwords, and differences with |z| > 2.58 are statistically significant at the α = 0.01 level. Notice in particular that gmail.com users have stronger passwords than users of any of the other top-10 domain names, while aol.com and hotmail.co.uk users have the weakest passwords.

               Yahoo   Hotmail       AOL   Comcast       MSN       Mac       SBC HotmailUK   Verizon
GMail          58.28     40.84     38.65     12.10     13.48      5.00     14.27     16.89      6.92
Yahoo                   -10.26      7.29     -7.81     -4.27    -11.31     -0.89      2.87     -4.33
Hotmail                            13.23     -3.55     -0.53     -7.74      2.27      5.75     -1.93
AOL                                         -11.09     -7.70    -13.94     -4.19     -0.44     -6.75
Comcast                                                  2.06     -3.85      4.11      6.98      0.09
MSN                                                              -5.52      2.14      5.00     -1.37
Mac                                                                          7.14      9.67      2.88
SBC                                                                                    2.77     -2.97
HotmailUK                                                                                      -5.24
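For anyone who wants to reproduce these numbers, the test being used here is the standard two-proportion z-test with a pooled standard error. The following minimal Python sketch (an illustration of the test, not necessarily how the numbers above were produced) reproduces the GMail-versus-Yahoo entry of the table:

    from math import sqrt

    def two_proportion_z(x1, n1, x2, n2):
        """z-statistic for testing equality of two proportions (pooled SE).
        Positive when the second group has the higher proportion."""
        p1, p2 = x1 / n1, x2 / n2
        p = (x1 + x2) / (n1 + n2)                   # pooled proportion
        se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # pooled standard error
        return (p2 - p1) / se

    # gmail.com vs. yahoo.com, using the counts from the first table above:
    print(two_proportion_z(50530, 158031, 40964, 94147))  # approx. 58.28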

Educational Institutions

Not surprisingly, users who entered an e-mail address from an educational institution typically had stronger passwords than the general population. Of the 2092 users who provided a college- or university-based e-mail address, only 697 (33.3%) had their password decrypted. This proportion is significantly lower than the corresponding proportion for the general population (z = 4.64, p < 0.001).

However, two universities stood out as having particularly weak passwords: of the 56 users who used a University of Texas e-mail address, 27 (48.2%) had their password decrypted, and similarly 101 (45.1%) of 224 New York University passwords were decrypted.

ISP-Provided E-Mail Users

Users who used an e-mail address provided to them by their ISP (such as something@comcast.net) typically had weaker passwords than the general population, perhaps because tech-unsavvy folks are less likely to go out and get themselves a new e-mail address at a place like GMail. Of the 31667 users who provided an ISP-based e-mail address, 13053 (41.2%) had their password decrypted. This proportion is significantly higher than the corresponding proportion for the general population (z = -11.36, p < 0.001).

E-Mail Addresses with Typos

Also unsurprisingly, users who entered an obvious typo in their e-mail address were much more likely to have a weak password than people who entered their e-mail address correctly (by “obvious typo” I basically mean an e-mail address containing a typo of a common domain name, such as “fred@yahoo,com” or “fred@hotmail”). Of the 530 users with a typo in their e-mail address, 280 (52.8%) had passwords that were decrypted. This proportion is significantly higher than the average (z = -6.87, p < 0.001).

Password Strength by Country

The following table shows the strength of user passwords based on the country associated with their e-mail address. Of course some e-mail addresses provide no information about the user’s country, so domains that serve a largely international market (such as gmail.com, mac.com and aim.com) are excluded from this analysis.

Country         Total Accounts  Decrypted Passwords  Decryption %
India                     3129                 1448         46.3%
United Kingdom            6874                 3057         44.5%
China                     1411                  600         42.5%
Canada                    2825                 1160         41.1%
United States            30891                12507         40.5%
Germany                   1378                  484         35.1%
Russia                    2223                  533         24.0%

So Russia and Germany are the big winners when it comes to password strength, while India and the United Kingdom seem to have the weakest passwords. The following table shows the z-values associated with the statistical test that the two given countries have the same proportion of users with strong passwords; as before, differences with |z| > 2.58 are statistically significant at the α = 0.01 level.

                     UK    China   Canada       US  Germany   Russia
India             -1.67    -2.32    -4.03    -6.26    -6.94   -16.62
UK                         -1.31    -3.06    -6.05    -6.37   -17.16
China                               -0.88    -1.49    -3.97   -11.72
Canada                                       -0.57    -3.67   -12.73
United States                                         -3.95   -15.37
Germany                                                        -7.18

Attached below is an Excel spreadsheet containing significantly more detailed information than the snippets contained in this post (though of course all passwords, e-mail addresses, and personally-identifiable information have been removed).

Download: Gawker Database Statistics [Excel spreadsheet]

The Binary “Look-and-Say” Sequence

November 7th, 2010

The look-and-say sequence (which I have talked about before) is the sequence that you get by starting with the number 1 and constructing the next term in the sequence by “reading” the previous term. So 1 becomes “one one”, or 11. That becomes “two ones”, or 21. That becomes “one two, one one”, or 1211, and so on.

In this post, I am going to investigate the related binary version of the sequence, which starts off 1, 11, much like the regular sequence. But when reading 11, we read it as “two ones”. Since two in binary is 10, the next term in the sequence is 101. When reading that term, we read it as “one one, one zero, one one”, so the next term is 111011. That term is read as “three ones, one zero, two ones”, and since three is 11 in binary and two is 10 in binary, the next term is 11110101, and so on. We will answer two questions in particular about this sequence:

1) On average, how much longer is the (n+1)th term in the sequence than the nth term in the sequence?

2) On average, what is the ratio of the number of ones to the number of zeroes in the sequence?
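
Before developing any theory, both questions can be explored numerically just by generating terms of the sequence directly. Here is a minimal Python sketch (any language would do just as well) that prints the length and ones-to-zeroes ratio of each term; its output agrees with the table given near the end of this post:

    from itertools import groupby

    def next_binary_term(s):
        """Read off the runs of s, writing each run length in binary."""
        return "".join(format(len(list(run)), "b") + digit
                       for digit, run in groupby(s))

    term = "1"
    for n in range(1, 26):
        ones, zeros = term.count("1"), term.count("0")
        if zeros > 0:
            print(n, len(term), round(ones / zeros, 3))
        term = next_binary_term(term)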

Non-Interacting Subsequences

Much like the regular look-and-say sequence, we are able to study this sequence by constructing a “basis” of non-interacting subsequences that every term in the binary look-and-say sequence is made up of. Fortunately, constructing such a family of subsequences for the binary version of the look-and-say sequence is much simpler than it is for the decimal version of the sequence – here we only need ten different basic subsequences (whereas we needed 92 different subsequences for the regular look-and-say sequence!). These ten subsequences, and the subsequences they evolve into, are summarized in the following table.

# Subsequence Evolves Into
1 1 (2)
2 11 (3)(1)
3 10 (5)
4 110 (3)(4)
5 1110 (6)
6 11110 (7)(4)
7 100 (9)
8 1100 (3)(8)
9 11100 (10)
10 111100 (7)(8)

So for example, the first term in the sequence, 1, evolves into the subsequence (2), which is 11. That term then evolves into subsequence (3) followed by subsequence (1), or 101. That term then evolves into the subsequence (5) followed by the subsequence (2), or 111011, and so on. The reason that this representation of the sequence is useful is that we can use it to describe the evolution of the binary look-and-say sequence entirely within a matrix T. In particular, we let T be the matrix with 1 in its (i,j) entry if the subsequence (i) appears in the evolution rule for subsequence (j), and 0 in its (i,j) entry otherwise:

\[T = \begin{bmatrix}0&1&0&0&0&0&0&0&0&0\\1&0&0&0&0&0&0&0&0&0\\0&1&0&1&0&0&0&1&0&0\\0&0&0&1&0&1&0&0&0&0\\0&0&1&0&0&0&0&0&0&0\\0&0&0&0&1&0&0&0&0&0\\0&0&0&0&0&1&0&0&0&1\\0&0&0&0&0&0&0&1&0&1\\0&0&0&0&0&0&1&0&0&0\\0&0&0&0&0&0&0&0&1&0\end{bmatrix}\]
Now if v is a 10-dimensional vector whose ith entry indicates how many times the subsequence (i) appears in a particular term of the binary look-and-say sequence, it follows that the entries of Tv tell us how many times each subsequence appears in the next term of the binary look-and-say sequence. So it follows from standard theory of linear homogeneous recurrence relations that we can now read off all of the long-term behaviour of the binary look-and-say sequence from the eigenvalues and eigenvectors of T.

Rate of Growth of the Sequence

The asymptotic rate of growth of the number of digits in the terms of the binary look-and-say sequence is simply the magnitude of the largest eigenvalue of the transition matrix T above. Using Maple it is simple to derive this value. If Ln is the number of digits in the nth term of the binary look-and-say sequence, then

\[\lim_{n\to\infty}\frac{L_{n+1}}{L_n} = \frac{1 + \alpha + \alpha^{-1}}{3}, \quad \text{where } \alpha = \sqrt[3]{\frac{29 + 3\sqrt{93}}{2}}.\]
This limit is approximately 1.465571, which means that the binary version of this sequence grows much faster than the decimal version of the sequence (recall that the growth rate of the number of digits of the regular look-and-say sequence is approximately 1.303577). This limit is also the unique real root of the cubic x³ − x² − 1, which follows from the fact that the characteristic polynomial of T is

\[\det(\lambda I - T) = \lambda^4(\lambda - 1)^2(\lambda + 1)(\lambda^3 - \lambda^2 - 1).\]
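
As a sanity check, the same value can be computed numerically. Here is a sketch using Python and numpy rather than Maple, with the evolution rules entered from the table above:

    import numpy as np

    # rules[j] lists the subsequences that subsequence (j) evolves into.
    rules = {1: [2], 2: [3, 1], 3: [5], 4: [3, 4], 5: [6],
             6: [7, 4], 7: [9], 8: [3, 8], 9: [10], 10: [7, 8]}
    T = np.zeros((10, 10))
    for j, targets in rules.items():
        for i in targets:
            T[i - 1, j - 1] = 1

    print(max(abs(np.linalg.eigvals(T))))  # 1.4655712...
    print(np.roots([1, -1, 0, -1]))        # roots of x^3 - x^2 - 1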
Ratio of Number of Ones to Zeroes

If we let Nn denote the number of ones in the nth term of the binary look-and-say sequence, and if we let Zn denote the number of zeroes in the nth term of the sequence, what is

\[\lim_{n\to\infty}\frac{N_n}{Z_n}?\]

In other words, what is the average ratio of ones to zeroes in this sequence? The following table shows the value of Nn/Zn for n = 3, 4, …, 25, which might lend some intuition to the problem:

n Nn/Zn
3 2.000
4 5.000
5 3.000
6 2.000
7 2.000
8 2.000
9 1.786
10 1.762
11 1.742
12 1.717
13 1.691
14 1.690
15 1.680
16 1.676
17 1.672
18 1.671
19 1.669
20 1.668
21 1.667
22 1.667
23 1.666
24 1.666
25 1.666

Based on numerical estimates like those given in the table above, it has been conjectured that the limiting ratio is 5/3 (or some nearby value). We will now show that the limit does indeed exist, but its value is not 5/3 — it just happens to be really close to 5/3.

Much like the maximal eigenvalue of T tells us the overall growth rate of the sequence, the corresponding eigenvector tells us the distribution of the different subsequences that are present in the limit. Once we know the distribution of the individual subsequences, it is not difficult to find the overall ratio of ones to zeroes by weighting the different subsequences appropriately. So our first step is to find the eigenvector corresponding to the maximal eigenvalue. To this end, it will be convenient to let

\[\alpha = \sqrt[3]{\frac{29 + 3\sqrt{93}}{2}} \quad \text{and} \quad \beta = \frac{1 + \alpha + \alpha^{-1}}{3}.\]

Here α is the same as in the previous section, and β is exactly the growth rate limit that we computed (the real root of x³ − x² − 1). Then the eigenvector corresponding to the maximal eigenvalue of T (normalized so that its last entry equals 1) is:

\[v = \left(0,\ 0,\ \beta^4,\ \beta^4,\ \beta^3,\ \beta^2,\ \beta^2,\ \beta^2,\ \beta,\ 1\right)^T.\]
What this means is that, in the limit, the fifth subsequence, 1110, occurs β times as frequently as the sixth subsequence, 11110 (for example). Now we just weight each subsequence according to how many ones and zeroes it contains, and we find that the limiting ratio of ones to zeroes is

\[\lim_{n\to\infty}\frac{N_n}{Z_n} = \frac{3\beta^4 + 3\beta^3 + 7\beta^2 + 3\beta + 4}{2\beta^4 + \beta^3 + 5\beta^2 + 2\beta + 2}.\]
In particular, this ratio does not equal 5/3, but rather its decimal expansion begins 1.6657272222676… (which is less than 1/1000 away from 5/3).
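
This value is also easy to verify numerically by weighting the entries of the dominant eigenvector of T by the number of ones and zeroes in the corresponding subsequences. A Python sketch (rebuilding the same matrix T as in the earlier snippet):

    import numpy as np

    subseqs = ["1", "11", "10", "110", "1110", "11110",
               "100", "1100", "11100", "111100"]
    rules = {1: [2], 2: [3, 1], 3: [5], 4: [3, 4], 5: [6],
             6: [7, 4], 7: [9], 8: [3, 8], 9: [10], 10: [7, 8]}
    T = np.zeros((10, 10))
    for j, targets in rules.items():
        for i in targets:
            T[i - 1, j - 1] = 1

    vals, vecs = np.linalg.eig(T)
    v = np.real(vecs[:, np.argmax(np.abs(vals))])  # dominant eigenvector

    ones = sum(w * s.count("1") for w, s in zip(v, subseqs))
    zeros = sum(w * s.count("0") for w, s in zip(v, subseqs))
    print(ones / zeros)  # 1.6657272222676...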

A Derivation of Conway’s Degree-71 “Look-and-Say” Polynomial

October 31st, 2010

The look-and-say sequence is the sequence of numbers 1, 11, 21, 1211, 111221, 312211, …, in which each term is constructed by “reading” the previous term in the sequence. For example, the term 1 is read as “one 1”, which becomes the next term: 11. Then 11 is read as “two ones”, which becomes the next term: 21, and so on.

The remarkable thing about this sequence is that even though it seems at first glance to be quite arbitrary and non-mathematical, it has some interesting properties that were unearthed by John Conway. Most notably, he showed that the number of digits in each term of the sequence on average grows by about 30% from one term to the next. A bit more specifically, he showed that if Ln is the number of digits in the nth term in the sequence, then

\[\lim_{n\to\infty}\frac{L_{n+1}}{L_n} = \lambda \approx 1.303577,\]
where λ is the unique positive real root of the following degree-71 polynomial:

In order to demystify this seemingly bizarre fact, in this post we will show where this polynomial comes from and prove that the above limit does indeed equal its largest root (which happens to be its one and only positive real root).
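
Before deriving anything, it is worth noting that the limit can be estimated by brute force; the following minimal Python sketch generates the sequence directly and prints the ratios of successive lengths:

    from itertools import groupby

    def next_term(s):
        """Read off the runs of s: each run length followed by its digit."""
        return "".join(str(len(list(run))) + digit for digit, run in groupby(s))

    term, prev = "1", 1
    for n in range(2, 51):
        term = next_term(term)
        if n % 10 == 0:
            print(n, len(term), round(len(term) / prev, 6))
        prev = len(term)

The printed ratios drift toward 1.303577…, but the convergence is slow, which is exactly why the exact algebraic approach below is worthwhile.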

The Cosmological Theorem

What lets us formally study the look-and-say sequence is a rather ominous-sounding result known as the cosmological theorem, which says that the eighth term and every term after it in the sequence is made up of one or more of 92 “basic” non-interacting subsequences. These 92 basic subsequences are summarized in lexicographical order in the following table. The fourth column in the table says what other subsequence(s) the given subsequence evolves into. For example, the first subsequence, 1112, evolves into the 63rd subsequence: 3112. Similarly, the second subsequence, 1112133, evolves into the 64th subsequence followed by the 62nd subsequence: 31121123.

# Subsequence Length Evolves Into
1 1112 4 (63)
2 1112133 7 (64)(62)
3 111213322112 12 (65)
4 111213322113 12 (66)
5 1113 4 (68)
6 11131 5 (69)
7 111311222112 12 (84)(55)
8 111312 6 (70)
9 11131221 8 (71)
10 1113122112 10 (76)
11 1113122113 10 (77)
12 11131221131112 14 (82)
13 111312211312 12 (78)
14 11131221131211 14 (79)
15 111312211312113211 18 (80)
16 111312211312113221133211322112211213322112 42 (81)(29)(91)
17 111312211312113221133211322112211213322113 42 (81)(29)(90)
18 11131221131211322113322112 26 (81)(30)
19 11131221133112 14 (75)(29)(92)
20 1113122113322113111221131221 28 (75)(32)
21 11131221222112 14 (72)
22 111312212221121123222112 24 (73)
23 111312212221121123222113 24 (74)
24 11132 5 (83)
25 1113222 7 (86)
26 1113222112 10 (87)
27 1113222113 10 (88)
28 11133112 8 (89)(92)
29 12 2 (1)
30 123222112 9 (3)
31 123222113 9 (4)
32 12322211331222113112211 23 (2)(61)(29)(85)
33 13 2 (5)
34 131112 6 (28)
35 13112221133211322112211213322112 32 (24)(33)(61)(29)(91)
36 13112221133211322112211213322113 32 (24)(33)(61)(29)(90)
37 13122112 8 (7)
38 132 3 (8)
39 13211 5 (9)
40 132112 6 (10)
41 1321122112 10 (21)
42 132112211213322112 18 (22)
43 132112211213322113 18 (23)
44 132113 6 (11)
45 1321131112 10 (19)
46 13211312 8 (12)
47 1321132 7 (13)
48 13211321 8 (14)
49 132113212221 12 (15)
50 13211321222113222112 20 (18)
51 1321132122211322212221121123222112 34 (16)
52 1321132122211322212221121123222113 34 (17)
53 13211322211312113211 20 (20)
54 1321133112 10 (6)(61)(29)(92)
55 1322112 7 (26)
56 1322113 7 (27)
57 13221133112 11 (25)(29)(92)
58 1322113312211 13 (25)(29)(67)
59 132211331222113112211 21 (25)(29)(85)
60 13221133122211332 17 (25)(29)(68)(61)(29)(89)
61 22 2 (61)
62 3 1 (33)
63 3112 4 (40)
64 3112112 7 (41)
65 31121123222112 14 (42)
66 31121123222113 14 (43)
67 3112221 7 (38)(39)
68 3113 4 (44)
69 311311 6 (48)
70 31131112 8 (54)
71 3113112211 10 (49)
72 3113112211322112 16 (50)
73 3113112211322112211213322112 28 (51)
74 3113112211322112211213322113 28 (52)
75 311311222 9 (47)(38)
76 311311222112 12 (47)(55)
77 311311222113 12 (47)(56)
78 3113112221131112 16 (47)(57)
79 311311222113111221 18 (47)(58)
80 311311222113111221131221 24 (47)(59)
81 31131122211311122113222 23 (47)(60)
82 3113112221133112 16 (47)(33)(61)(29)(92)
83 311312 6 (45)
84 31132 5 (46)
85 311322113212221 15 (53)
86 311332 6 (38)(29)(89)
87 3113322112 10 (38)(30)
88 3113322113 10 (38)(31)
89 312 3 (34)
90 312211322212221121123222113 27 (36)
91 312211322212221121123222112 27 (35)
92 32112 5 (37)

The important thing about this particular basis of subsequences is that the evolution of any sequence made up of these subsequences is determined entirely by the evolution rule for the subsequences given in the final column of the above table. For example, the eighth term in the look-and-say sequence is 1113213211 = (24)(39). The subsequence (24) evolves into (83) and the subsequence (39) evolves into (9), so the ninth term in the look-and-say sequence is (83)(9), which is 31131211131221.

Computing the Number of Digits in Sequences

Since the evolution of every term in the look-and-say sequence after the eighth can be computed using the table above, we can easily compute the length of every term after the eighth as well. For example, the eighth term in the sequence evolves into (83)(9), so the number of digits of the ninth term in the sequence is 6 + 8 = 14. The subsequence (83) evolves into a subsequence with 10 digits, and (9) evolves into a subsequence with 10 digits, so the tenth term in the look-and-say sequence has 10 + 10 = 20 digits.

All of the information about how the lengths of the 92 subsequences change can be represented in a 92×92 matrix T. In particular, the matrix T has its (i,j) entry equal to Cij × ℓi/ℓj, where Cij is the number of times subsequence (i) appears in the evolution rule for subsequence (j) and ℓi is the length of subsequence (i). This matrix is represented in the following image – white squares represent zero entries in the matrix, and black squares represent the number 2, which is the largest value present in the matrix. Shades of grey represent non-zero numbers, with larger numbers being darker.
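
In code, this length-weighting is a one-line operation. A small numpy sketch (assuming the counting matrix C and the vector of subsequence lengths have been entered from the table above):

    import numpy as np

    def transition_matrix(C, lengths):
        """T[i, j] = C[i, j] * l_i / l_j, where C[i, j] is the number of times
        subsequence (i) appears in the evolution rule for subsequence (j)."""
        l = np.asarray(lengths, dtype=float)
        return C * l[:, None] / l[None, :]

    # Note that T = D C D^(-1) with D = diag(l), so T is similar to the plain
    # counting matrix C and therefore has exactly the same eigenvalues.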

Then if we represent a term in the look-and-say sequence as a vector v with its ith entry being ci × ℓi, where ci is the number of times the subsequence (i) appears in that term, we find that the sum of the entries in v is the total length of that term of the look-and-say sequence. More important, however, is the fact that the sum of the entries in Tv is the length of the next term in the look-and-say sequence. The sum of the entries in T2v is the length of the next term in the look-and-say sequence, and so on. So we have found a degree-92 recurrence relation for the length of terms in the look-and-say sequence, and the corresponding transition matrix is T.

Computing the Limit

It is a basic fact of linear homogeneous recurrence relations that a closed-form solution to the recurrence relation can be written down in terms of the eigenvalues of the transition matrix (see the linked Wikipedia page for specifics). As a corollary of this, the limiting ratio of terms in the sequence is equal to the spectral radius of the transition matrix. Fortunately, the transition matrix in this case is quite sparse, so its characteristic polynomial isn’t too difficult to compute:

Indeed, the degree-71 polynomial that λ is a root of is one of the factors of the characteristic polynomial of the transition matrix T. All that remains to do is to get MATLAB to compute the largest root of that polynomial (i.e., the spectral radius of T):

>> max(abs(eig(T)))

ans =
    1.303577269034287

The matrix T is attached below for those who would like to play with it. Something fun to think about: what do the rational eigenvalues (-1, 0, and 1) of T represent?

Download: Transition matrix [plaintext file]

Update [March 17, 2013]: Entry 91 of the subsequence table has been corrected – thanks to Marcus Stuhr and liuguangxi for the correction.

Isometries of the Vector p-Norms: Signed and Complex Permutation Matrices

September 17th, 2010

Recall that in linear algebra, the vector p-norm of a vector x ∈ Cn (or x ∈ Rn) is defined to be

\[\|x\|_p = \left(\sum_{i=1}^n |x_i|^p\right)^{1/p},\]
where xi is the ith element of x and 1 ≤ p ≤ ∞ (where the p = ∞ case is understood to mean the limit as p approaches ∞, which gives the maximum norm). By far the most well-known of these norms is the Euclidean norm, which arises when p = 2. Another well-known norm arises when p = 1, which gives the “taxicab” norm.

The problem that will be investigated in this post is to characterize what operators preserve the p-norms – i.e., what their isometries are. In the p = 2 case of the Euclidean norm, the answer is well-known: the isometries of the real Euclidean norm are exactly the orthogonal matrices, and the isometries of the complex Euclidean norm are exactly the unitary matrices. It turns out that if p ≠ 2 then the isometry group looks much different. Indeed, Exercise IV.1.3 of [1] asks the reader to show that the isometries of the p = 1 and p = ∞ norms are what are known as complex permutation matrices (to be defined). We will investigate those cases as well as a situation when p ≠ 1, 2, ∞.

p = 1: The “Taxicab” Norm

Recall that a permutation matrix is a matrix with exactly one “1” in each of its rows and columns, and a “0” in every other position. A signed permutation matrix (sometimes called a generalized permutation matrix) is similar – every row and column has exactly one non-zero entry, which is either 1 or -1. Similarly, a complex permutation matrix is a matrix for which every row and column has exactly one non-zero entry, and every non-zero entry is a complex number with modulus 1.

It is not difficult to show that if x ∈ Rn then the taxicab norm of x is preserved by signed permutation matrices, and if x ∈ Cn then the taxicab norm of x is preserved by complex permutation matrices. We will now show that the converse holds:

Theorem 1. Let P ∈ Mn be an n × n matrix. Then

\[\|Px\|_1 = \|x\|_1 \quad \text{for all } x \in \mathbb{C}^n \ (\text{respectively, } x \in \mathbb{R}^n)\]

if and only if P is a complex permutation matrix (or a signed permutation matrix, respectively).

Proof. We only prove the “only if” implication, because the “if” implication is trivial (an exercise left for the reader?). So let’s suppose that P is an isometry of the p = 1 vector norm. Let ei denote the ith standard basis vector, let pi denote the ith column of P, and let pij denote the (j,i)-entry of P (i.e., the jth entry of pi). Then Pei = pi for all i, so

\[\sum_{j=1}^n |p_{ij}| = \|Pe_i\|_1 = \|e_i\|_1 = 1 \quad \text{for all } i.\]
Similarly, P(ei + ek) = pi + pk for all i,k, so

\[\sum_{j=1}^n |p_{ij} + p_{kj}| = \|e_i + e_k\|_1 = 2 = \sum_{j=1}^n \big(|p_{ij}| + |p_{kj}|\big).\]
However, by the triangle inequality for the absolute value we know that the above equality can only hold if there exist non-negative real constants cijk ≥ 0 such that pij = cijkpkj. On the other hand, it is similarly the case that P(ei – ek) = pi – pk for all i,k, so

\[\sum_{j=1}^n |p_{ij} - p_{kj}| = \|e_i - e_k\|_1 = 2 = \sum_{j=1}^n \big(|p_{ij}| + |p_{kj}|\big).\]
Using the equality condition for the complex absolute value again, we then know that there exist non-negative real constants dijk ≥ 0 such that pij = -dijkpkj. Since each cijk and each dijk is non-negative, pij and pkj cannot both be non-zero (they would have to be both parallel and anti-parallel), so each row contains at most one non-zero entry (and each row must indeed contain at least one non-zero entry, since the isometries of any norm must be nonsingular).

Thus every row has exactly one non-zero entry. By using (again) the fact that isometries must be nonsingular, it follows that each of the non-zero entries must occur in a distinct column (otherwise there would be a zero column). The fact that each non-zero entry has modulus 1 follows from simply noting that P must preserve the p = 1 norm of each ei.

p = ∞: The Maximum Norm

As with the p = 1 case, it is not difficult to show that if x ∈ Rn then the maximum norm of x is preserved by signed permutation matrices, and if x ∈ Cn then the maximum norm of x is preserved by complex permutation matrices. We will now show that the converse holds in this case as well:

Theorem 2. Let P ∈ Mn be an n × n matrix. Then

\[\|Px\|_\infty = \|x\|_\infty \quad \text{for all } x \in \mathbb{C}^n \ (\text{respectively, } x \in \mathbb{R}^n)\]

if and only if P is a complex permutation matrix (or a signed permutation matrix, respectively).

Proof. Again, we only prove the “only if” implication, since the “if” implication is trivial. So suppose that P is an isometry of the p = ∞ vector norm. As before, let ei denote the ith standard basis vector, let pi denote the ith column of P, and let pij denote the (j,i)-entry of P (i.e., the jth entry of pi). Then Pei = pi for all i, so

\[\max_j |p_{ij}| = \|Pe_i\|_\infty = \|e_i\|_\infty = 1 \quad \text{for all } i.\]
In other words, each entry of P has modulus at most 1, and each column has at least one element with modulus equal to 1. Also, P(ei ± ek) = pi ± pk for all i,k, so

\[\max_j |p_{ij} \pm p_{kj}| = \|e_i \pm e_k\|_\infty = 1 \quad \text{for all } i \neq k.\]
It follows that if |pij| = 1, then pkj = 0 for all k ≠ i (otherwise one of the two sign choices above would exceed 1). Each of the n columns contains an entry of modulus 1, and by the preceding observation those entries must lie in n distinct rows, none of which contains any other non-zero entry. Thus every row and every column of P has exactly one non-zero entry, which has modulus 1, so P is a signed or complex permutation matrix.

Any p ≠ 2

When p = 2, the isometries are orthogonal/unitary matrices. When p = 1 or p = ∞, the isometries are signed/complex permutation matrices, which are a very small subset of the orthogonal/unitary matrices. One might naively expect that the isometries for other values of p somehow interpolate between those two extremes. Alternatively, one might expect that the signed/complex permutation matrices are the only isometries for all other values of p as well. It turns out that the latter conjecture is correct [2,3].

Theorem 3. Let P ∈ Mn be an n × n matrix and let p ∈ [1,2) ∪ (2,∞]. Then

\[\|Px\|_p = \|x\|_p \quad \text{for all } x \in \mathbb{C}^n \ (\text{respectively, } x \in \mathbb{R}^n)\]

if and only if P is a complex permutation matrix (or a signed permutation matrix, respectively).
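
All three of these theorems are easy to test numerically. Here is a small Python sketch that builds a random complex permutation matrix and checks that it preserves several different vector p-norms:

    import numpy as np

    rng = np.random.default_rng(1)

    def random_complex_permutation(n):
        """A permutation matrix whose non-zero entries have modulus 1."""
        P = np.zeros((n, n), dtype=complex)
        P[rng.permutation(n), np.arange(n)] = np.exp(2j * np.pi * rng.random(n))
        return P

    P = random_complex_permutation(5)
    x = rng.normal(size=5) + 1j * rng.normal(size=5)
    for p in (1, 1.5, 2, 3, np.inf):
        print(p, np.isclose(np.linalg.norm(P @ x, p), np.linalg.norm(x, p)))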

References:

  1. R. Bhatia, Matrix analysis. Volume 169 of Graduate texts in mathematics (1997).
  2. S. Chang and C. K. Li, Certain Isometries on Rn. Linear Algebra Appl. 165, 251–265 (1992).
  3. C. K. Li, W. So, Isometries of lp norm. Amer. Math. Monthly 101, 452–453 (1994).

P-Value Calculators and Graphers in Javascript

September 5th, 2010

There are a lot of online tools out there for computing p-values and test statistics associated with common statistical distributions such as the normal or Student’s t-distributions. Unfortunately, most of them are either ad-ridden or powered by Java (and hence slow to initially load and finicky when it comes to which browsers they work with). So one of my summertime projects this year was to create a website that solves both of those problems.

The website computes p-values and test statistics in real-time via javascript (and thus does not need Java or any other plug-in). The computations themselves are fairly straightforward and are performed via the trapezoid rule. The graphic on the right is composed of a static PNG that displays the appropriate distribution. The distribution’s image is transparent under the graph and opaque above the graph, which makes it easy to display the p-value graphically – the light blue area is actually just a blue rectangle that is drawn beneath the distribution’s image.
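
For the curious, here is a rough transcription of that integration in Python (the site itself runs in Javascript; this sketch is only an illustration of the method, not the site’s actual code):

    import math

    def normal_pdf(x):
        return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

    def right_tail_p(z, upper=10.0, steps=100000):
        """P-value of a right-tailed z-test via the trapezoid rule on
        [z, upper]; the density beyond x = 10 is negligible."""
        h = (upper - z) / steps
        interior = sum(normal_pdf(z + i * h) for i in range(1, steps))
        return h * ((normal_pdf(z) + normal_pdf(upper)) / 2 + interior)

    print(right_tail_p(1.644854))  # approx. 0.05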

Additionally, through the magic of PHP the tool automatically creates a URL that links to the current computation (and thus makes it much more citable). So, for example, if you want to know the T-value that corresponds to a right-tailed test with 12 degrees of freedom and a p-value of 0.1, the tool can give you a link directly to that computation.

Anyway, if you’re a nerd like me then enjoy it and of course feel free to leave any feedback/suggestions that you might have.

Isometries of Unitarily-Invariant Complex Matrix Norms

August 15th, 2010

Recall that a unitarily-invariant matrix norm is a norm on matrices X ∈ Mn such that

\[\|UXV\| = \|X\| \quad \text{for all } X \in M_n \text{ and all unitary } U, V \in M_n.\]
One nice way to think about unitarily-invariant norms is that they are the matrix norms that depend only on the matrix’s singular values. Some unitarily-invariant norms that are particularly well-known are the operator (spectral) norm, trace norm, Frobenius (Hilbert-Schmidt) norm, Ky Fan norms, and Schatten p-norms (in fact, I would say that the induced p-norms for p ≠ 2 are the only really common matrix norms that aren’t unitarily-invariant – I will consider these norms in the future).

The core question that I am going to consider today is what linear maps preserve singular values and unitarily-invariant matrix norms. Clearly multiplication on the left and right by unitary matrices preserve such norms (by definition). However, the matrix transpose also preserves singular values and all unitarily-invariant norms – are there other linear maps on complex matrices that preserve these norms? For a more thorough treatment of this question, the interested reader is directed to [1,2].

Linear Maps That Preserve Singular Values

We first consider the simplest of the above questions: what linear maps Φ : Mn → Mn are such that the singular values of Φ(X) are the same as the singular values of X for all X ∈ Mn? In order to answer this question, recall Theorem 1 from my previous post, which states [3] that if Φ is an invertible map such that Φ(X) is nonsingular whenever X is nonsingular, then there exist M, N ∈ Mn with det(MN) ≠ 0 such that

\[\Phi(X) = MXN \quad \text{for all } X \in M_n, \quad \text{or} \quad \Phi(X) = MX^T N \quad \text{for all } X \in M_n.\]
In order to make use of this result, we will first have to show that any singular-value-preserving map is invertible and sends nonsingular matrices to nonsingular matrices. To this end, notice (recall?) that the operator norm of a matrix is equal to its largest singular value. Thus, any map that preserves singular values must be an isometry of the operator norm, and thus must be invertible (since all isometries are easily seen to be invertible).

Furthermore,  if we use the singular value decomposition to write X = USV for some unitaries U, V ∈ Mn and a diagonal matrix of singular values S ∈ Mn, then det(X) = det(USV) = det(U)det(S)det(V) = det(UV)det(S). Because UV is unitary, we know that |det(UV)| = 1, so we have |det(X)| = |det(S)| = det(S); that is, the product of the singular values of X equals the absolute value of its determinant. So any map that preserves singular values also preserves the absolute value of the matrix determinant. But any map that preserves the absolute value of determinants must preserve the set of nonsingular matrices because X is nonsingular if and only if det(X) ≠ 0. It follows from the above result about invertibility-preserving maps that if Φ preserves singular values then there exist M, N ∈ Mn with det(MN) ≠ 0 such that either Φ(X) = MXN or Φ(X) = MXTN.

We will now prove that M and N must each in fact be unitary. To this end, pick any unit vector x ∈ Cn and let c denote the Euclidean length of Mx:

\[c = \|Mx\|_2.\]
By the fact that Φ must preserve singular values (and hence the operator norm) we have that if y ∈ Cn is any other unit vector, then (using Φ(xy*) = Mxy*N = (Mx)(N*y)* and the fact that the operator norm of a rank-one matrix uv* is ‖u‖2‖v‖2)

\[1 = \|xy^*\| = \|\Phi(xy^*)\| = \|Mx\|_2 \, \|N^*y\|_2 = c\,\|N^*y\|_2, \quad \text{so} \quad \|N^*y\|_2 = \frac{1}{c}.\]
Because y was an arbitrary unit vector, we have that N* = (1/c)U, where U ∈ Mn is some unitary matrix. It can now be similarly argued that M = cV for some unitary matrix V ∈ Mn. By simply adjusting constants, we have proved the following:

Theorem 1. Let Φ : Mn → Mn be a linear map. Then the singular values of Φ(X) equal the singular values of X for all X ∈ Mn if and only if there exist unitary matrices U, V ∈ Mn such that

\[\Phi(X) = UXV \quad \text{for all } X \in M_n, \quad \text{or} \quad \Phi(X) = UX^T V \quad \text{for all } X \in M_n.\]
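
Here is a quick numerical check of the “if” direction of Theorem 1 (a Python sketch):

    import numpy as np

    rng = np.random.default_rng(0)

    def random_unitary(n):
        """QR decomposition of a complex Gaussian matrix; Q is unitary."""
        z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
        q, _ = np.linalg.qr(z)
        return q

    n = 4
    U, V = random_unitary(n), random_unitary(n)
    X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    s = np.linalg.svd(X, compute_uv=False)
    for Y in (U @ X @ V, U @ X.T @ V):
        print(np.allclose(np.linalg.svd(Y, compute_uv=False), s))  # True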
Isometries of the Frobenius Norm

We now consider the problem of characterizing isometries of the Frobenius norm, defined for X ∈ Mn by

\[\|X\|_F = \sqrt{\mathrm{Tr}(X^*X)} = \sqrt{\sum_{i,j=1}^n |x_{ij}|^2}.\]
That is, we want to describe the maps Φ that preserve the Frobenius norm. It is clear that the Frobenius norm of X is just the Euclidean norm of vec(X), the vectorization of X. Thus we know immediately from the standard isomorphism that sends operators to bipartite vectors and super operators to bipartite operators that Φ preserves the Frobenius norm if and only if there exist families of operators {Ai}, {Bi} such that Σi Ai ⊗ Bi is a unitary matrix and

\[\Phi(X) = \sum_i A_i X B_i \quad \text{for all } X \in M_n.\]
It is clear that any map of the form described by Theorem 1 above can be written in this form, but there are also many other maps of this type that are not of the form described by Theorem 1. In the next section we will see that the Frobenius norm is essentially the only unitarily-invariant complex matrix norm containing isometries that are not of the form described by Theorem 1.
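
Incidentally, the vec identification used above is easy to check numerically: the Frobenius norm of X is just the Euclidean norm of X flattened into a vector. A two-line sketch:

    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    print(np.isclose(np.linalg.norm(X, "fro"),
                     np.linalg.norm(X.reshape(-1))))  # True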

Isometries of Other Unitarily-Invariant Norms

One way of thinking about Theorem 1 is as providing a canonical form for any map Φ that preserves all unitarily-invariant norms. However, in many cases it is enough that Φ preserves a single unitarily-invariant norm for it to be of that form. For example, it was shown by Schur in 1925 [4] that if Φ preserves the operator norm then it must be of the form described by Theorem 1. The same result was proved for the trace norm by Russo in 1969 [5]. Li and Tsing extended the same result to the remaining Schatten p-norms, Ky Fan norms, and (p,k)-norms in 1988 [6].

In fact, the following result, which completely characterizes isometries of all unitarily-invariant complex matrix norms other than the Frobenius norm, was obtained in [7]:

Theorem 2. Let Φ : Mn → Mn be a linear map. Then Φ preserves a given unitarily-invariant norm that is not a multiple of the Frobenius norm if and only if there exist unitary matrices U, V ∈ Mn such that

\[\Phi(X) = UXV \quad \text{for all } X \in M_n, \quad \text{or} \quad \Phi(X) = UX^T V \quad \text{for all } X \in M_n.\]
References:

  1. C.-K. Li and S. Pierce, Linear preserver problems. The American Mathematical Monthly 108, 591–605 (2001).
  2. C.-K. Li, Some aspects of the theory of norms. Linear Algebra and its Applications 212–213, 71–100 (1994).
  3. J. Dieudonne, Sur une generalisation du groupe orthogonal a quatre variables. Arch. Math. 1, 282–287 (1949).
  4. I. Schur, Einige bemerkungen zur determinanten theorie. Sitzungsber. Preuss. Akad. Wiss. Berlin 25, 454–463 (1925).
  5. B. Russo, Trace preserving mappings of matrix algebra. Duke Math. J. 36, 297–300 (1969).
  6. C.-K. Li and N.-K. Tsing, Some isometries of rectangular complex matrices. Linear and Multilinear Algebra 23, 47–53 (1988).
  7. C.-K. Li and N.-K. Tsing, Linear operators preserving unitarily invariant norms of matrices. Linear and Multilinear Algebra 26, 119–132 (1990).