# The phase transition of matrix recovery from Gaussian measurements matches the minimax MSE of matrix denoising


Contributed by David L. Donoho, April 3, 2013 (sent for review February 5, 2013)

## Abstract

Let $X_0$ be an unknown $M$ by $N$ matrix. In matrix recovery, one takes $n < MN$ linear measurements $y_1, \dots, y_n$ of $X_0$, where $y_i = \mathrm{Tr}(A_i' X_0)$ and each $A_i$ is an $M$ by $N$ matrix. A popular approach for matrix recovery is nuclear norm minimization (NNM): solving the convex optimization problem $\min \|X\|_*$ subject to $y_i = \mathrm{Tr}(A_i' X)$ for all $1 \le i \le n$, where $\|\cdot\|_*$ denotes the nuclear norm, namely, the sum of singular values. Empirical work reveals a phase transition curve, stated in terms of the undersampling fraction $\delta = n/(MN)$, rank fraction $\rho = \mathrm{rank}(X_0)/\min\{M, N\}$, and aspect ratio $\beta = M/N$. Specifically, when the measurement matrices $A_i$ have independent standard Gaussian random entries, a curve $\delta^*(\rho) = \delta^*(\rho; \beta)$ exists such that, if $\delta > \delta^*(\rho)$, NNM typically succeeds for large $M, N$, whereas if $\delta < \delta^*(\rho)$, it typically fails. An apparently quite different problem is matrix denoising in Gaussian noise, in which an unknown $M$ by $N$ matrix $X_0$ is to be estimated based on direct noisy measurements $Y = X_0 + Z$, where the matrix $Z$ has independent and identically distributed Gaussian entries. A popular matrix denoising scheme solves the unconstrained optimization problem $\min \tfrac{1}{2}\|Y - X\|_F^2 + \lambda\|X\|_*$. When optimally tuned, this scheme achieves the asymptotic minimax mean-squared error $\mathcal{M}(\rho; \beta)$, where $M/N \to \beta$. We report extensive experiments showing that the phase transition $\delta^*(\rho)$ in the first problem, matrix recovery from Gaussian measurements, coincides with the minimax risk curve $\mathcal{M}(\rho) = \mathcal{M}(\rho; \beta)$ in the second problem, matrix denoising in Gaussian noise: $\delta^*(\rho) = \mathcal{M}(\rho)$, for any rank fraction $0 < \rho < 1$ (at each common aspect ratio $\beta$). Our experiments considered matrices belonging to two constraint classes: real $M$ by $N$ matrices, of various ranks and aspect ratios, and real symmetric positive-semidefinite $N$ by $N$ matrices, of various ranks.

Let $X_0$ be an unknown $M$ by $N$ matrix. How many measurements must we obtain to “completely know” $X_0$? Although it seems that $MN$ measurements must be necessary, in recent years intense research in applied mathematics, optimization, and information theory has shown that, when $X_0$ is of low rank, we may efficiently recover it from a relatively small number of linear measurements by convex optimization (1–3). Applications have been developed in fields ranging widely, for example from video and image processing (4) to quantum state tomography (5) to collaborative filtering (2, 6).

Specifically, let $\mathcal{A}: \mathbb{R}^{M \times N} \to \mathbb{R}^n$ be a linear operator and consider measurements $y = \mathcal{A}(X_0)$. If $n < MN$, the problem of inferring $X_0$ from $y$ may be viewed as attempting to solve an underdetermined system of equations. Under certain circumstances, it has been observed that this (seemingly hopeless) task can be accomplished by solving the so-called nuclear norm minimization problem:

$$\min \|X\|_* \quad \text{subject to} \quad \mathcal{A}(X) = y. \tag{1}$$

Here, the nuclear norm $\|X\|_*$ is the sum of singular values of $X$. For example, it was found that if $X_0$ is sufficiently low rank, with a principal subspace in a certain sense incoherent to the measurement operator $\mathcal{A}$, then the solution of this optimization problem is precisely $X_0$. Such incoherence can be obtained by letting $\mathcal{A}$ be random, for instance if $(\mathcal{A}(X))_i = \mathrm{Tr}(A_i' X)$ with the $A_i$ having independent and identically distributed (i.i.d.) Gaussian entries. In this case we speak of “matrix recovery from Gaussian measurements” (1).

A key phrase from the previous paragraph is “if $X_0$ is sufficiently low rank.” Clearly, there must be a quantitative tradeoff between the rank of $X_0$ and the number of measurements required, such that higher-rank matrices require more measurements. In the Gaussian measurements model, with $N$ sufficiently large, empirical work by Recht et al. (1, 7, 8), Tanner and Wei (9), and Oymak and Hassibi (10) documents a phase transition phenomenon. For matrices of a given rank, there is a fairly precise number of required samples, in the sense that a transition from nonrecovery to complete recovery takes place sharply as the number of samples varies across this value. For example, in Table S1 we present data from our own experiments, showing that, for reconstructing matrices of size 60 by 60 that are of rank 20, 2,600 Gaussian measurements are sufficient with very high probability, but 2,400 Gaussian measurements are insufficient with very high probability.
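As a quick arithmetic check, the undersampling and rank fractions for this 60-by-60 example can be computed directly (a small illustrative sketch; the variable names are ours, not from the paper's code):

```python
M = N = 60
r = 20

def undersampling(n, M, N):
    """Undersampling fraction delta = n / (M * N): measurements per matrix entry."""
    return n / (M * N)

rho = r / min(M, N)               # rank fraction, here 1/3
print(rho)
print(undersampling(2600, M, N))  # ~0.722: enough measurements in the experiments
print(undersampling(2400, M, N))  # ~0.667: not enough
```

So the observed transition for this $(M, N, r)$ sits between $\delta \approx 0.667$ and $\delta \approx 0.722$.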

In this paper, we present a simple and explicit formula for the phase transition curve in matrix recovery from Gaussian measurements. The formula arises in an apparently unrelated problem: matrix denoising in Gaussian noise. In this problem, we again let $X_0$ denote an $M$ by $N$ matrix, and we observe $Y = X_0 + Z$, where $Z$ has i.i.d. Gaussian entries. We consider the following nuclear norm denoising scheme:

$$\hat{X}_\lambda = \operatorname*{argmin}_X \; \frac{1}{2}\|Y - X\|_F^2 + \lambda \|X\|_* . \tag{2}$$

In this problem the measurements $Y$ are direct, so in some sense complete, but noisy. The solution $\hat{X}_\lambda$ can be viewed as a shrinkage estimator. In the basis of the singular vectors of $Y$, the solution is diagonal, and the diagonal entries are produced by soft thresholding of the singular values of $Y$.
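The shrinkage structure just described is easy to state in code. The following sketch (ours, in NumPy; not the authors' implementation) applies soft thresholding to the singular values of $Y$, which is the closed-form solution of the unconstrained nuclear norm problem at a given $\lambda$:

```python
import numpy as np

def svt_denoise(Y, lam):
    """Solve min_X 0.5*||Y - X||_F^2 + lam*||X||_* by singular value
    soft thresholding: shrink each singular value of Y toward zero by lam."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_shrunk = np.maximum(s - lam, 0.0)   # soft-threshold the singular values
    return U @ (s_shrunk[:, None] * Vt)
```

For example, `svt_denoise(np.diag([5.0, 1.0]), 2.0)` shrinks the singular values 5 and 1 to 3 and 0, so the second component is removed entirely.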

Because the measurements *y* in the matrix recovery problem are noiseless but incomplete, whereas the measurements *Y* in the matrix denoising problem are complete but noisy, the problems seem quite different. Nevertheless, we show here that there is a deep connection between the two problems.

Let us quantify performance in the denoising problem by the minimax MSE, namely

$$\mathcal{M}(\rho; \beta) = \lim_{M, N \to \infty} \; \inf_\lambda \; \sup_{\mathrm{rank}(X) \le \rho M} \mathrm{MSE}(X, \hat{X}_\lambda), \qquad M/N \to \beta,$$

where MSE refers to the dimension-normalized mean-squared error $\mathbb{E}\,\|\hat{X}_\lambda - X\|_F^2 / (MN)$ and subscript $F$ denotes the Frobenius norm. The asymptotic minimax MSE has been derived in ref. 11. Explicit formulas for the curve appear in *Appendix*. A parametric form is given for the case of asymptotically square matrices, $\beta = 1$. Fig. 1 depicts the various minimax MSE curves.

We can now state our main hypothesis for matrix recovery from Gaussian measurements.

### Main Hypothesis: Asymptotic Phase Transition Formula.

*Consider a sequence of matrix recovery problems with parameters $(r, n, M, N)$ having limiting fractional rank $\rho = \lim r/\min\{M, N\}$, limiting aspect ratio $\beta = \lim M/N$, and limiting incompleteness fraction $\delta = \lim n/(MN)$. In the limit of large problem size $N$, the solution $\hat{X}$ of the nuclear norm minimization problem is correct with probability converging to one if $\delta > \mathcal{M}(\rho; \beta)$ and incorrect with probability converging to one if $\delta < \mathcal{M}(\rho; \beta)$. In short, the asymptotic phase transition $\delta^*(\rho; \beta)$ in Gaussian matrix recovery is equal to the asymptotic minimax MSE: $\delta^*(\rho; \beta) = \mathcal{M}(\rho; \beta)$.*

In particular, for the case of small rank $r$ and large $N$, by studying the small-$\rho$ asymptotics of Eq. **14**, we obtain that reconstruction is possible from $n \approx 6Nr$ measurements, but not from substantially fewer.

This brief announcement tests this hypothesis by conducting a substantial computer experiment generating large numbers of random problem instances. We use statistical methods to check for disagreement between the measured phase transition and the predicted one. To bolster the solidity of our results, we conduct the experiment in two different settings: (*i*) the matrix $X_0$ is a general $M$ by $N$ matrix, for various rank fractions $\rho$ and aspect ratios $\beta$, and (*ii*) $X_0$ is a symmetric positive-semidefinite matrix, for various rank fractions $\rho$. In the latter case the positive-semidefinite constraint is added to the convex program. As described below, there are different asymptotic MSE curves for the two settings. We demonstrate an empirically accurate match in each of the cases, showing the depth and significance of the connection we expose here.

In the discussion and conclusion we connect our result with related work in the field of sparsity-promoting reconstructions, where the same formal identity between a minimax MSE and a phase transition boundary has been observed, and in some cases even proved. We also discuss recent rigorous evidence toward establishing the above matrix recovery phase transition formula.

## Methods

We investigated the hypothesis that the asymptotic phase transition boundary agrees with the proposed phase transition formula to within experimental error.

For notational simplicity we will focus here on the case $M = N$, and defer the case of nonsquare matrices to *Supporting Information*. Hence, we will drop throughout the main text the argument $\beta$. The asymptotic phase plane at point $(\rho, \delta)$ is associated to triples $(r, n, N)$, where $\rho = r/N$ is the rank fraction and $\delta = n/D$ is the undersampling ratio, $D$ being the dimension of the underlying collection of matrices ($D = N^2$ for general matrices and $D = N(N+1)/2$ for symmetric ones). We performed a sequence of experiments, one for each tuple $(r, n, N)$, in which we generated random rank-$r$ $N$ by $N$ matrices $X_0$ and random Gaussian measurement operators $\mathcal{A}$, and obtained random problem instances $y = \mathcal{A}(X_0)$. We then applied a convex optimization procedure, obtaining a putative reconstruction $\hat{X}$. We declared a reconstruction successful when the Frobenius norm of the error was smaller than a threshold. Our raw empirical observations consist of a count of empirical successes and sample sizes at a selected set of positions $(\rho, \delta)$ and a selected set of problem sizes $N$. From these raw counts we produce fitted success probabilities $\hat{\pi}$; the finite-$N$ phase transition is the place where the true underlying probability of successful reconstruction takes the value 50%. We tested the hypothesis that the finite-$N$ transition was consistent with the proposed asymptotic phase transition formula.

This section discusses details of data generation and data analysis.

### Generation of Problem Instances.

Each problem instance was generated by, first, generating a random rank-$r$ matrix $X_0$, then generating a random measurement operator $\mathcal{A}$, and then applying $y = \mathcal{A}(X_0)$.

We considered problem instances of two specific types, corresponding to matrices $X_0 \in \mathcal{X}$, with one of two classes of matrices:

- $\mathrm{Mat}$: all $N$ by $N$ matrices with real-valued entries; and

- $\mathrm{Sym}$: all real symmetric $N$ by $N$ matrices which are positive-semidefinite.

In the case $\mathcal{X} = \mathrm{Mat}$, we consider low-rank matrices $X_0 = UV'$, where $U$ and $V$ are each $N$ by $r$ partial orthogonal matrices uniformly distributed in the Stiefel manifold. In the case $\mathcal{X} = \mathrm{Sym}$, we consider low-rank matrices $X_0 = UU'$, where $U$ is an $N$ by $r$ partial orthogonal matrix, and again the matrix is uniformly distributed in the Stiefel manifold.
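A minimal sketch of this sampling scheme (our NumPy code with hypothetical helper names; the paper's actual scripts are MATLAB/CVX): the QR factorization of a Gaussian matrix, with a sign correction, yields a partial orthogonal factor uniform on the Stiefel manifold, and products of such factors give the rank-$r$ instances described above.

```python
import numpy as np

def stiefel_uniform(N, r, rng):
    """N-by-r partial orthogonal matrix, uniformly distributed on the
    Stiefel manifold: QR of a Gaussian matrix, with a sign correction
    that makes the distribution exactly Haar."""
    Q, R = np.linalg.qr(rng.standard_normal((N, r)))
    return Q * np.sign(np.diag(R))

def random_low_rank(N, r, rng, psd=False):
    """Rank-r test matrix: U V' in the general case (Mat),
    U U' in the positive-semidefinite case (Sym)."""
    U = stiefel_uniform(N, r, rng)
    if psd:
        return U @ U.T
    V = stiefel_uniform(N, r, rng)
    return U @ V.T
```

With `rng = np.random.default_rng(0)`, `random_low_rank(30, 5, rng)` returns a 30-by-30 matrix of rank 5 whose nonzero singular values all equal 1, since its factors have orthonormal columns.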

For measurement matrices $A_i$, we use Gaussian random matrices with i.i.d. standard normal entries, $A_i(j, k) \sim \mathcal{N}(0, 1)$.

### Convex Optimization.

For a given problem instance $(X_0, \mathcal{A}, y)$, we attempt to recover the underlying low-rank object from the measurements $y$ by convex optimization. Each of our choices of $\mathcal{X}$ gives rise to an associated optimization problem:

$$\min \|X\|_* \quad \text{subject to} \quad \mathcal{A}(X) = y, \; X \in \mathcal{X}.$$

Here, $\mathcal{X}$ is one of the two classes of matrices, $\mathrm{Mat}$ or $\mathrm{Sym}$. The two resulting optimization problems can each be reduced to a so-called semidefinite programming problem (see refs. 12, 13).

### Probability of Exact Recovery.

Because both the measurement operator $\mathcal{A}$ and the underlying low-rank object $X_0$ are random, $(X_0, \mathcal{A}, y)$ is a random problem instance. The probability of exact recovery is defined by

$$\pi = \pi(r, n \mid N, \mathcal{X}) = \mathbb{P}\{\hat{X} = X_0\}.$$

Clearly, $0 \le \pi \le 1$; for fixed $N$, $\pi$ is monotone decreasing in $r$ and monotone increasing in $n$. Also $\pi$ approaches 1 as $n$ approaches $D$.

### Estimating the Probability of Exact Recovery.

Our procedure follows refs. 14 and 15. For a given matrix type $\mathcal{X}$, problem size $N$, number of measurements $n$, and rank $r$, we conduct an experiment whose purpose is to estimate $\pi(r, n \mid N, \mathcal{X})$ using $T$ Monte Carlo trials. In each trial we generate a random instance $(X_0, \mathcal{A}, y)$, which we supply to a convex optimization solver, obtaining the result $\hat{X}$. We compare the result to $X_0$. If the relative error $\|\hat{X} - X_0\|_F / \|X_0\|_F$ is smaller than a numerical tolerance, we declare the recovery a success; if not, we declare it a failure. (In this paper, we used an error tolerance of 0.001.) We thus obtain $T$ binary measurements indicating success or failure in reconstruction. The empirical success fraction is then calculated as

$$\hat{\pi} = \frac{\#\{\text{successes}\}}{T}.$$
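The success-counting step can be sketched as follows (our code; the 0.001 relative-error tolerance matches the one stated above, while the function names are ours):

```python
import numpy as np

TOL = 1e-3  # relative-error tolerance used to declare a recovery successful

def is_success(X0, Xhat, tol=TOL):
    """Success iff the relative Frobenius error ||Xhat - X0||_F / ||X0||_F < tol."""
    return np.linalg.norm(Xhat - X0) / np.linalg.norm(X0) < tol

def empirical_success_fraction(pairs):
    """pi_hat: fraction of successes among T Monte Carlo (X0, Xhat) pairs."""
    return sum(is_success(X0, Xhat) for X0, Xhat in pairs) / len(pairs)
```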

These are the raw observations generated by our experiments.

### Asymptotic Phase Transition Hypothesis.

Consider a sequence of tuples $(r_N, n_N, N)$ with $r_N/N \to \rho$ and $n_N/D_N \to \delta$. We assume that there is an asymptotic phase transition curve $\delta^*(\rho \mid \mathcal{X})$, i.e., a curve obeying

$$\pi(r_N, n_N \mid N, \mathcal{X}) \to \begin{cases} 1, & \delta > \delta^*(\rho \mid \mathcal{X}), \\ 0, & \delta < \delta^*(\rho \mid \mathcal{X}). \end{cases}$$

For many convex optimization problems the existence of such an asymptotic phase transition is rigorously proven; see *Discussion: Existing Literature and Our Contributions*.

The hypothesis we investigate concerns the value of $\delta^*(\rho \mid \mathcal{X})$; specifically, whether

$$\delta^*(\rho \mid \mathrm{Mat}) = \mathcal{M}(\rho \mid \mathrm{Mat}) \quad \text{and} \quad \delta^*(\rho \mid \mathrm{Sym}) = \mathcal{M}(\rho \mid \mathrm{Sym}). \tag{4}$$

Here, $\mathcal{M}(\rho \mid \mathrm{Mat})$, respectively $\mathcal{M}(\rho \mid \mathrm{Sym})$, is the minimax MSE for Eq. **2** for general matrices (respectively, positive-semidefinite ones). Formulas for $\mathcal{M}$ were derived by the authors in ref. 11; computational details are provided in *Appendix*.^{†}

### Empirical Phase Transitions.

The empirical phase transition point is estimated by fitting a smooth function (in fact a logistic function) to the empirical data $\hat{\pi}$, where $r$ is held fixed, using the glm() command in the R computing environment. In fact we fit the logistic model $\mathrm{logit}(\pi) = a + b\Delta$, where $\Delta = \delta - \mathcal{M}(\rho)$ is the offset between $\delta$ and the predicted phase transition. The coefficients $a$ and $b$ are called the intercept and slope, and will be tabulated below. The intercept gives the predicted logit exactly at the predicted phase transition, i.e., at $\Delta = 0$. The empirical phase transition is located at the 50% success point. This is the value $\hat{\delta}_N$ of $\delta$ solving

$$\hat{a} + \hat{b}\,(\hat{\delta}_N - \mathcal{M}(\rho)) = 0, \qquad \text{i.e.,} \qquad \hat{\delta}_N = \mathcal{M}(\rho) - \hat{a}/\hat{b}.$$
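A self-contained sketch of this fit (ours, in NumPy via Newton/IRLS rather than R's glm(); the synthetic numbers below are illustrative, not from the paper): given success counts at several offsets from a hypothesized transition, we estimate the intercept and slope and read off the empirical 50% point.

```python
import numpy as np

def fit_logistic(offsets, successes, trials, iters=30):
    """Maximum-likelihood fit of logit(pi) = a + b*offset by Newton/IRLS.
    successes[i] out of trials[i] observed at offset offsets[i]; returns (a, b)."""
    X = np.column_stack([np.ones_like(offsets), offsets])
    beta = np.zeros(2)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ beta)))
        grad = X.T @ (successes - trials * p)      # score vector
        W = trials * p * (1.0 - p)
        H = X.T @ (W[:, None] * X)                 # Fisher information
        beta = beta + np.linalg.solve(H, grad)
    return beta[0], beta[1]

# Synthetic sanity check: counts taken exactly from a known logistic law.
M_rho = 0.40                       # hypothetical predicted transition
deltas = np.linspace(0.30, 0.50, 9)
a_true, b_true, T = 0.5, 60.0, 1000
p = 1.0 / (1.0 + np.exp(-(a_true + b_true * (deltas - M_rho))))
a_hat, b_hat = fit_logistic(deltas - M_rho, T * p, np.full(9, T))
delta_hat = M_rho - a_hat / b_hat  # estimated 50% point
```

With noiseless expected counts the fit recovers the generating parameters, and `delta_hat` equals the hypothesized transition minus the intercept-to-slope ratio.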

Under the hypothesis of Eq. **4** we have

$$\hat{\delta}_N \to \mathcal{M}(\rho), \qquad N \to \infty. \tag{5}$$

Consequently, in data analysis we will compare the fitted values $\hat{\delta}_N$ with $\mathcal{M}(\rho)$.^{‡}

### Experimental Design.

To address our main hypothesis regarding the agreement of phase transition boundaries, we measure success fractions at points $(\rho, \delta)$ in the phase plane near the hypothesized transition, which we expect to be maximally informative about the location of the phase transition. In fact, the informative locations in binomial response models correspond to points where the probability of response is in the middle range (16). As a rough approximation to such an optimal design, we sample at equispaced values of $\delta$ around the hypothesized transition.

### Computing.

We primarily used the MATLAB computing environment, and the popular CVX convex optimization package (17), a modeling system for disciplined convex programming by Boyd, Grant, and others supporting two open-source interior-point solvers: SeDuMi and SDPT3 (18, 19).

We also studied the robustness of our results across solvers. Zulfikar Ahmed translated our code into Python and used the general-purpose solver package CVXOPT by Andersen et al. (20).

## Results

Our experimental data have been deposited (21). The data are contained in a text file with more than 100,000 lines, each line reporting one batch of Monte Carlo experiments at a given matrix class $\mathcal{X}$, problem size $N$, rank $r$, and number of measurements $n$. Each line also documents the number of Monte Carlo trials $T$ and the observed success fraction $\hat{\pi}$. The file also contains metadata identifying the solver and the researcher responsible for the run.

In all cases, we observed a transition from no observed successes at $\delta$ well below the predicted transition to no observed failures at $\delta$ well above it. Fig. 1, *Right* shows results we obtained at nonsquare matrix ensembles, with varying aspect ratio $\beta$. The minimax MSE curves vary widely, but the observed phase transitions track the MSE curves closely.

Fig. 2 shows a small subset of our results in the square case $\mathcal{X} = \mathrm{Mat}$, to explain our empirical results; the full tables are given in *Supporting Information* for the square, nonsquare, and symmetric positive-semidefinite cases. In the square case, the empirical phase transition agrees in all cases with the formula to two-digit accuracy. Table S3 shows that, in the symmetric positive-semidefinite case $\mathcal{X} = \mathrm{Sym}$, the empirical phase transition falls comparably close to the predicted curve.

Previous empirical studies of phase transition behavior in sparse recovery show that, even in cases where the asymptotic phase transition curve is rigorously proven and analytically available, such large-$N$ theory cannot be expected to match empirical finite-$N$ data to within the usual naive SEs. Instead, one observes a finite transition zone of nonvanishing width and a small displacement of the empirical phase transition away from the asymptotic Gaussian phase transition, both of which shrink only gradually as $N$ grows (14, 15). Hence, the strict literal device of testing the hypothesis that $\hat{\delta}_N = \mathcal{M}(\rho)$ exactly is not appropriate in this setting.^{§}

A precise statement of our hypothesis uses the fitted logistic parameters $\hat{a}$ and $\hat{b}$, estimating the model parameters $a$ and $b$ defined above. The asymptotic relation^{¶}

$$\hat{\delta}_N = \mathcal{M}(\rho) - \hat{a}/\hat{b} =_P \mathcal{M}(\rho) - a/b + o(1)$$

implies Eq. **5** provided $a/b \to 0$. Now note that in Fig. 2 the coefficient $b$ scales directly with $N$ and takes values of order several hundred for large $N$. This means that, in these experiments, the transition from complete failure to complete success happens in a zone of width of order $1/b$, i.e., of order $1/N$. Notice also that $a$ stays bounded, for the most part between 0 and 2. This means that the response probability evaluated exactly at $\delta = \mathcal{M}(\rho)$ typically obeys

$$\frac{1}{2} \le \pi \le \frac{e^2}{1 + e^2} \approx 0.88.$$

Fig. 3, *Left* presents the fitted slopes $\hat{b}$ and problem sizes $N$, as well as an empirical fit. All of the data came from Table S2, but we omitted results whose slopes were large multiples of all other slopes. Similarly, Fig. 3, *Right* presents the slopes from Table S3. In each plot, a reference line growing linearly in $N$ is overlaid for comparison. Both plots show a clear tendency of the slopes to increase with $N$.
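The width claim can be checked symbolically: for a logistic response, the $\delta$-interval over which the success probability rises from 0.1 to 0.9 depends only on the slope, so a slope growing linearly with $N$ forces the transition zone to shrink like $1/N$. A small stdlib sketch (ours; the sample slopes are illustrative, not fitted values from the paper):

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def transition_width(b, lo=0.1, hi=0.9):
    """Width of the delta-interval where logit(pi) = a + b*(delta - delta_star)
    rises from pi = lo to pi = hi; independent of a and delta_star."""
    return (logit(hi) - logit(lo)) / b

# Doubling the slope halves the transition zone: width ~ 1/b.
print(transition_width(200.0))   # ~0.022
print(transition_width(400.0))   # ~0.011
```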

The combination of linear growth of *b* with *N* and nongrowth of *a*^{‖} implies Eq. **5**, and acceptance of our main hypothesis.

### Non-Gaussian Measurements.

This paper studies matrix recovery from Gaussian measurements of the unknown matrix $X_0$ and specifically does not study recovery from partial entry-wise measurements, known in the literature as “matrix completion.” Entry-wise measurements yield a phase transition at a different location (9). Our conclusions do extend to certain non-Gaussian measurements based on matrices $A_i$ with independent and identically distributed entries that are $\pm 1$ equiprobable (a.k.a. Rademacher matrices). See Table S6.

## Discussion: Existing Literature and Our Contributions

Phase transitions in the success of convex optimization at solving nonconvex problems have been observed previously. Donoho and Tanner considered linear programs useful in the recovery of sparse vectors, established the existence of asymptotic phase transitions, and derived exact formulas for the asymptotic phase transitions (14, 22–25), as well as finite-$N$ exponential bounds. Their work considered the recovery of $k$-sparse vectors with $N$ entries from $n$ Gaussian measurements. The phase diagram in those papers could be stated in terms of variables $(\delta, \rho)$, where, as in this paper, $\delta = n/N$ is the undersampling fraction, and $\rho = k/n$ is the sparsity fraction, which plays a role analogous to the role played here by the rank fraction $\rho$. The proofs in those papers were obtained by techniques from high-dimensional combinatorial geometry, and the formulas for the asymptotic phase transition were implicit, written in terms of special functions.

Donoho et al. (26) later developed a new so-called Approximate Message Passing (AMP) approach to the sparse recovery problem, which gave new formulas for phase boundaries, confirmed rigorously by Bayati and Montanari (27, 28). Whereas the previous formulas involved combinatorial geometry, the new (necessarily equivalent) ones involved minimax decision theory instead. An extensive literature on AMP algorithms has since been developed (see, e.g., refs. 29 and 30), most notably implying universality in the sparse recovery phase transition (31).

Donoho, Johnstone, and Montanari (DJM) (32) generalized previous AMP-based phase transition results to block-sparse, monotone, and piecewise constant signals. They provided evidence that the observed phase transition of the associated convex optimization reconstruction methods obeyed formulas of the form

$$\delta^*(\kappa) = \mathcal{M}(\kappa).$$

Here, $\kappa$ is a variable measuring generalized sparsity and $\mathcal{M}(\kappa)$ is the minimax MSE of an appropriate denoiser based on direct noisy measurements. The main result in this brief report fits in this general framework, whereby the sparsity $\kappa$ is identified with the fractional rank $\rho$, and the minimax MSE symbol $\mathcal{M}$ applies to the singular value thresholding denoiser. Our main result is therefore an extension of DJM-style formulas from the sparse recovery setting to the low-rank recovery setting.^{††}

Much mathematical study of matrix recovery (1, 2, 7, 33) has focused on providing rigorous bounds that show the existence of a region of success, without, however, establishing a phase transition phenomenon, or determining its exact boundary. A relatively accurate result by Candès and Recht (CR) for the case of Gaussian measurements (34) implies that $n \approx 6Nr$ measurements are sufficient. Our formulas for the square case $\beta = 1$ show that

$$\mathcal{M}(\rho) = 6\rho + o(\rho), \qquad \rho \to 0.$$

This agrees with the CR bound in the very low-rank case. However, our relation $\delta^* = \mathcal{M}(\rho)$ is apparently noticeably more accurate than the CR formula at finite $N$. Table S5 presents experiments in which the rank is fixed at a small value (at most 4) and $N$ varies between 40 and 90. Even though in such cases the corresponding $\rho = r/N$ is rather small, the empirical phase transition in our experiments agrees much more closely with the nonlinear formula $\delta = \mathcal{M}(\rho)$ than it does with the linear formula $n = 6Nr$. Also, in the nonsquare case $\beta < 1$, the small-$\rho$ slope of $\mathcal{M}(\rho; \beta)$ is strictly smaller than in the square case, so the CR formula is noticeably less accurate than the formula $\delta^* = \mathcal{M}$ in the nonsquare case.

Of the mathematical methods developed to identify phase transitions but that are not based on combinatorial geometry or approximate message passing, the most precise exploit the “Escape Through the Mesh” (ETM) bounds of Gordon (10, 35, 36). ETM was used to prove upper bounds on the number of Gaussian measurements needed for reconstruction of sparse signals by Stojnic (36) and of low-rank matrices by Oymak and Hassibi (10). In particular, ref. 10 studied one of the two cases studied here and observed in passing that in the square case ETM gives bounds that seemingly agree with actual phase transition measurements. Very recently, building on the same approach, a DJM-style inequality relating the phase transition to the minimax MSE has been announced by Oymak and Hassibi for a wide range of convex optimization problems, including nuclear norm minimization (37); they also presented empirical evidence for equality in the square case $\mathcal{X} = \mathrm{Mat}$.

### Our Contributions.

This paper presents explicit formulas for the minimax MSE of singular value soft thresholding in various cases and shows that these formulas match the appropriate empirical phase transitions in a formal comparison. Compared with earlier work, we make here the following contributions:

A broad range of phase transition studies, covering nonsquare, square, and symmetric positive-semidefinite matrix recovery from Gaussian measurements. We also ran certain non-Gaussian measurement ensembles and observed similar results.

A broad range of prediction formulas. We here make available explicit formulas for minimax MSE in the square asymmetric and nonsquare cases, as well as in the symmetric square positive-semidefinite case.

Careful empirical technique, including the following.

#### Reproducibility.

Publication of the code and data underlying our conclusions.

#### Validation.

Matlab/CVX results were reimplemented in Python/CVXOPT, with similar results. Code was executed on three different computing clusters, with similar results.

#### Study of finite-*N*-scaling.

We studied the tendencies of $a$ and $b$ as $N$ varies and observed behavior consistent with our asymptotic phase transition hypothesis. Studies at a single $N$ could only have shown that an empirical phase transition was near a given theoretical curve at that $N$, but could not have shown the scaling behavior with $N$ that the main hypothesis properly involves.

## Conclusions

For the problem of matrix recovery from Gaussian measurements, our experiments, as well as those of others, document the existence of a finite-*N* phase transition. We compared our measured empirical phase transition curve with a formula from the theory of matrix denoising and observed a compelling match. Although the matrix recovery and matrix denoising problems are superficially different, this match evidences a deeper connection, such that MSE properties of a denoiser in a noise removal problem give precisely the exact recovery properties of a matrix recovery rule in a noiseless, but incomplete, data problem.

This connection suggests both new limits on what is possible in the matrix recovery problem and also new ways of trying to reach those limits.

## Appendix

### Asymptotic Minimax MSE Formula.

The following provides explicit formulas for the matrix denoising minimax curves $\mathcal{M}(\rho \mid \mathrm{Mat})$ and $\mathcal{M}(\rho \mid \mathrm{Sym})$ used above. Please see ref. 11 for the derivations. A simple web-based calculator, and its underlying source code implementing an efficient computation of these quantities, are provided in ref. 38. Let $P_j(x)$, $j = 0, 1, 2$, denote the complementary incomplete moments of the Marčenko–Pastur distribution (39). The minimax curves below are defined in terms of these moments and of the optimally tuned threshold $\lambda^*$.

### Case $\mathcal{X} = \mathrm{Mat}$.

It is convenient to orient the $M$ by $N$ matrix so that $M \le N$, i.e., $\beta \le 1$. The minimax curve $\mathcal{M}(\rho; \beta \mid \mathrm{Mat})$ is given by Eq. **8** of ref. 11.

### Case $\mathcal{X} = \mathrm{Sym}$.

The minimax curve $\mathcal{M}(\rho \mid \mathrm{Sym})$ is given by a formula of the same type; see ref. 11.

### Computing $\lambda^*$.

The map from $\lambda$ to the worst-case MSE is convex. Setting its derivative to zero, we get that $\lambda^*$ is the unique root of Eq. **9**. The left-hand side of Eq. **9** is decreasing in $\lambda$, and the solution is determined numerically by binary search.
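The binary search is elementary; a generic stdlib version (ours) looks like this. For a strictly decreasing function, the root is bracketed and then bisected:

```python
def root_of_decreasing(f, lo, hi, tol=1e-12):
    """Unique root of a strictly decreasing function f on [lo, hi],
    located by bisection, as in the numerical scheme for lambda*."""
    assert f(lo) > 0.0 > f(hi), "root must be bracketed"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Demo on a decreasing function with a known root at sqrt(2).
root = root_of_decreasing(lambda x: 2.0 - x * x, 0.0, 2.0)
```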

For square matrices ($\beta = 1$) Eq. **8** can be expressed using elementary trigonometric functions. In ref. 11 it is shown that the minimax MSE can be written in terms of the quantities

$$Q_j(x) = \int_x^2 t^j \, \frac{\sqrt{4 - t^2}}{\pi} \, dt, \qquad j = 0, 1, 2,$$

which are the complementary incomplete moments of the Quarter Circle law. Moreover, the optimal threshold $\lambda^*$ is the unique solution to a transcendental equation which is a simplified version of Eq. **9**.
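These incomplete moments are straightforward to evaluate numerically. A stdlib sketch (ours; midpoint-rule integration, not the paper's closed forms), with sanity checks against the known full moments of the Quarter Circle law (total mass 1, mean $8/(3\pi)$, second moment 1):

```python
import math

def qc_moment(j, x, steps=100_000):
    """Complementary incomplete moment of the Quarter Circle law:
    integral over [x, 2] of t**j * sqrt(4 - t**2) / pi dt (midpoint rule)."""
    if x >= 2.0:
        return 0.0
    h = (2.0 - x) / steps
    total = 0.0
    for k in range(steps):
        t = x + (k + 0.5) * h
        total += t ** j * math.sqrt(max(4.0 - t * t, 0.0))
    return total * h / math.pi

print(qc_moment(0, 0.0))   # ~1.0 (total mass)
print(qc_moment(1, 0.0))   # ~0.8488 = 8 / (3 * pi)
print(qc_moment(2, 0.0))   # ~1.0 (second moment)
```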

### Parametric Representation of the Minimax Curves.

For square matrices ($\beta = 1$) the minimax curves $\mathcal{M}(\rho, 1 \mid \mathrm{Mat})$ and $\mathcal{M}(\rho \mid \mathrm{Sym})$ admit a parametric representation in the $(\rho, \mathcal{M})$ plane using elementary trigonometric functions (see ref. 11). As $\theta$ ranges over its domain, the curve $\theta \mapsto (\rho(\theta \mid \mathrm{Mat}), \mathcal{M}_1(\theta))$ is a parametric representation of $\mathcal{M}(\rho, 1 \mid \mathrm{Mat})$, and similarly $\theta \mapsto (\rho(\theta \mid \mathrm{Sym}), \mathcal{M}_{1/2}(\theta))$ is a parametric representation of $\mathcal{M}(\rho \mid \mathrm{Sym})$.

### Note Added in Proof.

After this paper was refereed, a preprint by Amelunxen and coauthors appeared (40), which claims to prove $\delta^*(\rho) = \mathcal{M}(\rho)$ in cases including the present ones.

## Acknowledgments

We thank Iain Johnstone for advice at several crucial points, Ewout van den Berg for helpful comments, and Brian Wandell for leadership in scientific computing at Stanford. This work was partially supported by National Science Foundation (NSF) Division of Mathematical Sciences Grant 0906812 (American Recovery and Reinvestment Act of 2009). M.G. was supported by a William R. and Sara Hart Kimball Stanford Graduate Fellowship and a Technion EE Sohnis Promising Scientist Award. A.M. was partially supported by NSF Faculty Early Career Development Award CCF-0743978 and Defense Advanced Research Projects Agency/Air Force Office of Scientific Research Grant FA9550-12-1-0411.

## Footnotes

^{1}To whom correspondence should be addressed. E-mail: donoho@stat.stanford.edu.

Author contributions: D.L.D., M.G., and A.M. designed research; D.L.D. and M.G. performed research; D.L.D., M.G., and A.M. contributed new reagents/analytic tools; D.L.D., M.G., and A.M. analyzed data; and D.L.D., M.G., and A.M. wrote the paper.

The authors declare no conflict of interest.


^{†}Previous empirical work (9) used the minimum degrees of freedom rather than $r$ as the $x$-coordinate for phase transition curves; however, for present purposes, the rank fraction $\rho$ seems more natural, because of the way the formulas for $\mathcal{M}$ work out.

^{‡}Tanner and Wei (9) used a “horizontal” approach to phase transition measurement in contrast to the “vertical” approach used here. They hold $\delta$ constant and vary $r$, observing very sharp transitions of a practically all-or-nothing nature.

^{§}In Tables S2–S5, our results in many cases do generate $Z$ scores for the intercept term in the logistic regression that are consistent with traditional acceptance of this hypothesis. However, traditional acceptance is not needed in order for the main hypothesis to be valid, and because of the finite-$N$ scaling effects indicated above it would not ordinarily be expected to hold.

^{¶}$=_P$ denotes convergence in probability.

^{‖}Actually, even sublinear growth of $a$ implies the result.

^{††}The same DJM heuristics based on state evolution apply in both settings.

Data deposition: Data for this article have been deposited in the Stanford Digital Repository, http://purl.stanford.edu/tz124hw0000.

This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1306110110/-/DCSupplemental.

Freely available online through the PNAS open access option.

## References

- ↵
- ↵
- Candès EJ,
- Recht B

- ↵
- ↵Candès EJ, Li X, Ma Y, Wright J (2009) Robust principal component analysis?
*JACM*58(3), 10.1145/1970392.1970395. - ↵
- ↵
- Keshavan RH,
- Montanari A,
- Oh S

- ↵
- Recht B,
- Xu W,
- Hassibi B

- ↵Recht B, Xu W, Hassibi B (2008) Necessary and sufficient conditions for success of the nuclear norm heuristic for rank minimization.
*Proceedings of the 47th IEEE Conference on Decision and Control*(IEEE, New York), pp 3065–3070. - ↵Tanner J, Wei K (2012) Normalized iterative hard thresholding for matrix completion. Available at http://people.maths.ox.ac.uk/tanner/papers/TaWei\_NIHT.pdf. Accessed April 15, 2013.
- ↵Oymak S, Hassibi B (2010) New null space results and recovery thresholds for matrix rank minimization. Available at http://arxiv.org/abs/1011.6326. Accessed April 15, 2013.
- ↵Donoho DL, Gavish M (2013)
*Minimax Risk of Matrix Denoising by Singular Value Thresholding*(Stanford Univ Dept. of Statistics, Stanford, CA). Available at http://arxiv.org/abs/1304.2085. Accessed April 15, 2013. Tech Rep 2013-03. - ↵Fazel M, Hindi H, Boyd SP (2001) A rank minimization heuristic with application to minimum order system approximation.
*Proceedings of the 2001 American Control Conference*(IEEE, New York), pp 4734–4739. - ↵
- Fazel M

- ↵
- Donoho DL,
- Tanner J

- ↵
- Monajemi H,
- Jafarpour S,
- Gavish M,
- Donoho DL,
- Stat 330/CME 362 Collaboration

- ↵
- ↵Grant M, Boyd SP (2010)
*CVX: Matlab Software for Disciplined Convex Programming* (CVX Research Inc.). Available at http://cvxr.com/cvx. Accessed April 15, 2013.
- Toh KC,
- Todd MJ,
- Tutuncu RH

*Optimization Methods Software*. - ↵Sturm JF (1999) Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones.
*Optimization Methods Software*. Available at http://www.tandfonline.com/doi/abs/10.1080/10556789908805766. Accessed April 15, 2013. - ↵Andersen D, Dahl J, Vandenberghe L (2012) CVXOPT: Python software for convex optimization. Available at http://abel.ee.ucla.edu/cvxopt. Accessed April 15, 2013.
- ↵Donoho DL, Gavish M, Montanari A (2013) Data for the article ‘The phase transition of matrix recovery from Gaussian measurements matches the minimax MSE of matrix denoising.’ Available at http://purl.stanford.edu/tz124hw0000. Accessed February 15, 2013.
- ↵
- Donoho DL,
- Tanner J

- ↵
- Donoho DL

- ↵
- Donoho DL,
- Tanner J

- ↵
- ↵
- Donoho DL,
- Maleki A,
- Montanari A

- ↵
- ↵
- ↵Vila J, Schniter P (2011) Expectation-maximization Bernoulli-Gaussian approximate message passing.
*Conference Record of the Forty-Fifth ASILOMAR Conference on Signals, Systems and Computers*(IEEE, New York), pp 799–803. - ↵Rangan S (2011) Generalized approximate message passing for estimation with random linear mixing.
*2011 IEEE International Symposium on Information Theory Proceedings*(IEEE, New York), pp 2168–2172. - ↵Bayati M, Lelarge M, Montanari A (2012) Universality in polytope phase transitions and message passing algorithms. Available at http://arxiv.org/abs/1207.7321. Accessed April 15, 2013.
- ↵
- Donoho DL,
- Johnstone IM,
- Montanari A

- ↵Oymak S, Mohan K, Fazel M, Hassibi B (2011) A simplified approach to recovery conditions for low rank matrices.
*2011 IEEE International Symposium on Information Theory Proceedings*(IEEE, New York), pp 2318–2322. - ↵
- Candès EJ,
- Recht B

- ↵Gordon Y (1988) On Milman’s inequality and random subspaces which escape through a mesh in RN.
*Lecture Notes in Mathematics*(Springer, Berlin), Vol 1317, pp 84–106. - ↵Stojnic M (2009) Various thresholds for l1-optimization in compressed sensing. Available at http://arxiv.org/abs/0907.3666. Accessed April 15, 2013.
- ↵Oymak S, Hassibi B (2012) On a relation between the minimax denoising and the phase transitions of convex functions. Available at www.its.caltech.edu/∼soymak/Relation.pdf. Accessed April 15, 2013.
- ↵Donoho DL, Gavish M (2013) Companion website for the article ‘The phase transition of matrix recovery from Gaussian measurements matches the minimax MSE of matrix denoising.’ Available at www.runmycode.org/CompanionSite/Site265. Accessed February 15, 2013.
- ↵
- ↵
- Amelunxen D,
- Lotz M,
- McCoy MB,
- Tropp JA

## Article Classifications

- Physical Sciences
- Applied Mathematics