Function list
Main TFOCS programs  
tfocs  Minimize a convex problem using a first-order algorithm. 
tfocs_SCD  Smoothed conic dual form of TFOCS, for problems with nontrivial linear operators. 
continuation  Meta-wrapper to run tfocs_SCD in continuation mode. 
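TFOCS solvers work on composite problems of the form \(\min_x f(\mathcal{A}x+b) + h(x)\) with \(f\) smooth and \(h\) prox-capable. As a rough illustration of what such a first-order method does (a NumPy sketch, not TFOCS code; all names are illustrative), here is unaccelerated proximal gradient descent on \( \min_x .5\|Ax-b\|_2^2 + \lambda \|x\|_1 \):

```python
import numpy as np

def prox_grad_l1(A, b, lam, step, n_iter=500):
    """Unaccelerated proximal gradient for 0.5*||A x - b||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                  # gradient of the smooth part
        z = x - step * grad                       # forward (gradient) step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # prox of the l1 term
    return x
```

The step size must satisfy `step <= 1/L`, where `L` is the Lipschitz constant of the smooth gradient (here `norm(A)^2`); the accelerated variants listed under "Algorithm variants" below improve on this basic scheme.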
Miscellaneous functions  
tfocs_version  Version information. 
tfocs_where  Returns the location of the TFOCS system. 
Operator calculus  
linop_adjoint  Computes the adjoint operator of a TFOCS linear operator. 
linop_compose  Composes two TFOCS linear operators. 
linop_scale  Scaling linear operator. 
prox_dualize  Define a proximity function by its dual. 
prox_scale  Scaling a proximity/projection function. 
tfunc_scale  Scaling a function. 
tfunc_sum  Sum of functions. 
tfocs_normsq  Squared norm. 
linop_normest  Estimates the operator norm. 
Linear operators  
linop_matrix  Linear operator, assembled from a matrix. 
linop_dot  Linear operator formed from a dot product. 
linop_fft  Fast Fourier transform linear operator. 
linop_TV  2D Total-Variation (TV) linear operator. 
linop_TV3D  3D Total-Variation (TV) linear operator. 
linop_handles  Linear operator from user-supplied function handles. 
linop_spot  Linear operator, assembled from a SPOT operator. 
linop_reshape  Linear operator to perform reshaping of matrices. 
linop_subsample  Subsampling linear operator. 
linop_vec  Matrix-to-vector reshape operator. 
Projection operators (proximity operators for indicator functions)  
proj_0  Projection onto the set {0} (for equality constraints). 
proj_box  Projection onto box constraints. 
proj_l1  Projection onto the scaled 1-norm ball \(\{x \mid \|x\|_1\leq q\}\). 
proj_l2  Projection onto the scaled 2-norm ball \(\{x \mid \|x\|_2\leq q\}\). 
proj_linf  Projection onto the scaled \(\infty\)-norm ball \(\{x \mid \|x\|_\infty\leq q\}\). 
proj_linfl2  Projection of each row of a matrix onto the scaled \(2\)-norm ball \(\{x \mid \|x\|_2\leq q\}\). 
proj_max  Projection onto the scaled set of vectors \(\{x \mid \max_i x_i\leq q\}\). 
proj_nuclear  Projection onto the set of matrices with bounded nuclear norm: \(\{X \mid \sum_i \sigma_i(X) \leq q\}\). 
proj_psd  Projection onto the positive semidefinite (PSD) cone: \(\{X \mid X=X^T,\ \lambda_{\text{min}}(X)\geq 0\}\). 
proj_psdUTrace  Projection onto the PSD cone with fixed trace: \(\{X \mid X=X^T,\ \lambda_{\text{min}}(X)\geq 0,\ \mathop{\textrm{Tr}}(X)=q\}\). 
proj_Rn  “Projection” onto the entire space. 
proj_Rplus  Projection onto the nonnegative orthant \(\{x \mid \min_i x_i\geq 0\}\). 
proj_simplex  Projection onto the simplex \(\{x \mid \min_i x_i\geq 0,\ \sum_i x_i=1\}\). 
proj_conic  Projection onto the second-order (aka Lorentz) cone. 
proj_singleAffine  Projection onto a single affine equality or inequality constraint. 
proj_boxAffine  Projection onto a single affine equality along with box constraints. 
proj_affine  Projection onto general affine equations, e.g., solutions of linear equations. 
proj_l2group  Projection of each group of coordinates onto 2-norm balls. 
proj_spectral  Projection onto the set of matrices with bounded spectral norm: \(\{X \mid \sigma_{\text{max}}(X) \leq q\}\). 
proj_maxEig  Projection onto the set of symmetric matrices with bounded maximum eigenvalue: \(\{X \mid X=X^T,\ \lambda_{\text{max}}(X)\leq q\}\). 
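Many of these projections have simple closed forms or sort-based algorithms. As an illustration of the math behind one of them (a NumPy sketch of the standard sort-and-threshold rule for the simplex projection; not TFOCS code, names are illustrative):

```python
import numpy as np

def project_simplex(v, q=1.0):
    """Euclidean projection of v onto the simplex {x : x >= 0, sum(x) = q}."""
    u = np.sort(v)[::-1]                         # sort entries in descending order
    css = np.cumsum(u)
    # largest index whose entry stays positive after the uniform shift
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - q))[0][-1]
    theta = (css[rho] - q) / (rho + 1.0)         # optimal shift
    return np.maximum(v - theta, 0.0)            # shift and clip at zero
```

The same shift-and-clip structure underlies the 1-norm ball projection, which reduces to a simplex projection on the absolute values.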
Proximity operators of general convex functions  
prox_0  The zero proximity function. 
prox_boxDual  Dual of the box indicator function \(\{ l \le x \le u \}\). 
prox_hinge  Hinge-loss function. 
prox_hingeDual  Dual of the hinge-loss function. 
prox_l1  \(\ell_1\) norm. 
prox_Sl1  Sorted/ordered \(\ell_1\) norm; see the ordered l1 page for info. 
prox_l1l2  \(\ell_1\)-\(\ell_2\) block norm: the sum of the \(\ell_2\) norms of rows. 
prox_l1linf  \(\ell_1\)-\(\ell_\infty\) block norm: the sum of the \(\ell_\infty\) norms of rows. 
prox_l1pos  \(\ell_1\) norm, restricted to \(x \geq 0\). 
prox_l2  \(\ell_2\) norm. 
prox_linf  \(\ell_\infty\) norm. 
prox_max  Maximum function. 
prox_nuclear  Nuclear norm. 
prox_spectral  Spectral norm, i.e. max singular value. 
prox_maxEig  Maximum eigenvalue of a symmetric matrix. 
prox_trace  Nuclear norm, for positive semidefinite matrices. Equivalent to trace. 
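Several of these proximity operators have closed forms. For example, the prox of the scaled \(\ell_2\) norm shrinks the whole vector toward the origin (and the nuclear-norm prox applies the same shrinkage to singular values). A NumPy sketch of that closed form (illustrative, not TFOCS code):

```python
import numpy as np

def prox_l2(x, t):
    """prox of t*||x||_2: shrink the whole vector toward zero ("block" soft thresholding)."""
    nrm = np.linalg.norm(x)
    if nrm <= t:
        return np.zeros_like(x)   # inside the shrinkage region: collapse to the origin
    return (1.0 - t / nrm) * x    # otherwise shorten the vector by t, keeping its direction
```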
Smooth functions  
smooth_constant  Constant function generation. 
smooth_entropy  The entropy function \(\sum_i x_i\log x_i\). 
smooth_handles  Smooth function from separate \(f\) / \(g\) handles. 
smooth_huber  Huber function generation. 
smooth_linear  Linear function generation. 
smooth_logdet  The \(\log\det(X)\) function. 
smooth_logLLogistic  Log-likelihood function of a logistic model: \(\sum_i y_i \mu_i - \log( 1+\exp(\mu_i) )\). 
smooth_logLPoisson  Log-likelihood of a Poisson model: \(\sum_i -\lambda_i + x_i \log( \lambda_i)\). 
smooth_logsumexp  The function \(\log(\sum_i \exp(x_i))\). 
smooth_quad  Quadratic function generation. 
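Functions like log-sum-exp are usually evaluated in a shifted, overflow-safe form; the naive formula overflows as soon as any \(x_i\) exceeds roughly 700 in double precision. A NumPy sketch of that standard trick (illustrative names, not TFOCS code):

```python
import numpy as np

def logsumexp(x):
    """Numerically stable log(sum_i exp(x_i)); its gradient is the softmax of x."""
    m = np.max(x)                                # shift by the max so no exp overflows
    return m + np.log(np.sum(np.exp(x - m)))
```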
Testing functions  
test_nonsmooth  Runs diagnostic tests to ensure a nonsmooth function conforms to TFOCS conventions. 
test_proxPair  Runs diagnostics on a pair of functions to check if they are Legendre conjugates. 
test_smooth  Runs diagnostic checks on a TFOCS smooth function object. 
linop_test  Performs an adjoint test on a linear operator. 
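The adjoint test checks the defining identity \(\langle Ax, y\rangle = \langle x, A^*y\rangle\) on random probe vectors; a mismatch means the forward and adjoint handles are inconsistent. A NumPy sketch of the idea (not the TFOCS implementation; names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def adjoint_mismatch(forward, adjoint, n, m, trials=10):
    """Largest relative gap between <A x, y> and <x, A* y> over random probes."""
    worst = 0.0
    for _ in range(trials):
        x, y = rng.standard_normal(n), rng.standard_normal(m)
        lhs, rhs = forward(x) @ y, x @ adjoint(y)
        worst = max(worst, abs(lhs - rhs) / max(abs(lhs), 1e-12))
    return worst
```

A correct adjoint pair should report a mismatch near machine precision; a wrong one typically reports a mismatch of order 1.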
Premade solvers for specific problems (vector variables)  
solver_L1RLS  \(\ell_1\)-regularized least-squares problem, sometimes called the LASSO. \( \min_x .5\|Ax-b\|_2^2 + \lambda \|x\|_1 \) 
solver_LASSO  Minimize residual subject to \(\ell_1\)-norm constraints. \( \min_x .5\|Ax-b\|_2^2\) subject to \( \|x\|_1 \le \tau \) 
solver_SLOPE  Sorted LOne Penalized Estimation; like LASSO but with an ordered \(\ell_1\)-norm; see documentation. 
solver_sBP  Basis pursuit (\(\ell_1\) norm with equality constraints). Uses smoothing. \( \min_x \|x\|_1 \) subject to \(Ax=b\) 
solver_sBPDN  Basis pursuit denoising. BP with relaxed constraints. Uses smoothing. \( \min_x \|x\|_1 \) subject to \(\|Ax-b\|_2 \le \epsilon\) 
solver_sBPDN_W  Weighted BPDN problem. Uses smoothing. \( \min_x \|Wx\|_1 \) subject to \(\|Ax-b\|_2 \le \epsilon\) 
solver_sBPDN_WW  BPDN with two separate (weighted) \(\ell_1\)-norm terms. Uses smoothing. \( \min_x \|W^{(1)}x\|_1+\|W^{(2)}x\|_1 \) subject to \(\|Ax-b\|_2 \le \epsilon\) 
solver_sDantzig  Dantzig selector problem. Uses smoothing. \( \min_x \|x\|_1 \) subject to \(\|A^T(Ax-b)\|_\infty \le \delta\) 
solver_sDantzig_W  Weighted Dantzig selector problem. Uses smoothing. \( \min_x \|Wx\|_1 \) subject to \(\|A^T(Ax-b)\|_\infty \le \delta\) 
solver_sLP  Generic linear programming in standard form. Uses smoothing. \( \min_x c^Tx \) subject to \(x \ge 0\) and \( Ax=b\) 
solver_sLP_box  Generic linear programming with box constraints. Uses smoothing. \( \min_x c^Tx \) subject to \(\ell \le x \le u\) and \( Ax=b\) 
Premade solvers for specific problems (matrix variables)  
Here, \(\mathcal{A}\) is a generic linear operator and \(\mathcal{A}_\Omega\) is the subsampling operator that samples entries in \(\Omega\). The nuclear/trace norm is written \( \|\cdot\|_{\text{tr}} \). 

solver_psdComp  Matrix completion for PSD matrices. \( \min_X .5\|\mathcal{A}_\Omega(X) - b\|_2^2 \) subject to \( X \succeq 0\) 
solver_psdCompConstrainedTrace  Matrix completion with constrained trace, for PSD matrices. \( \min_X .5\|\mathcal{A}_\Omega(X) - b\|_2^2 \) subject to \( X \succeq 0\) and \( \textrm{trace}(X)=\tau \) 
solver_TraceLS  Unconstrained form of the trace-regularized least-squares problem. \( \min_X .5\|\mathcal{A}(X) - b\|_2^2 + \lambda\, \textrm{trace}(X)\) subject to \( X \succeq 0\) 
solver_sNuclearBP  Nuclear-norm basis pursuit problem (i.e., matrix completion). Uses smoothing. \( \min_X \|X\|_{\text{tr}} \) subject to \( \mathcal{A}_\Omega(X) = b\) 
solver_sNuclearBPDN  Nuclear-norm basis pursuit problem with relaxed constraints. Uses smoothing. \( \min_X \|X\|_{\text{tr}} \) subject to \( \|\mathcal{A}_\Omega(X) - b\|_2 \le \epsilon\) 
solver_sSDP  Generic semidefinite programs (SDP). Uses smoothing. \( \min_X \text{tr}(C^TX) \) subject to \(X \succeq 0\) and \( \mathcal{A}(X) = b\) 
solver_sLMI  Generic linear matrix inequality problems (an LMI is the dual of an SDP). Uses smoothing. \( \min_y b^Ty \) subject to \( A_0 + \sum_i A_i y_i \succeq 0\) 
Algorithm variants  
tfocs_AT  Auslender and Teboulle’s accelerated method. Reference: Auslender, A., Teboulle, M.: Interior gradient and proximal methods for convex and conic optimization. SIAM J. Optim. 16(3), 697–725 (2006) 
tfocs_GRA  Gradient descent. 
tfocs_LLM  Lan, Lu and Monteiro’s accelerated method. Reference: Lan, G., Lu, Z., Monteiro, R.D.C.: Primal-dual first-order methods with O(1/eps) iteration-complexity for cone programming. Math. Program. Ser. A (2009) 
tfocs_N07  Nesterov’s 2007 accelerated method. References: Nesterov, Y.: Introductory Lectures on Convex Optimization: A Basic Course. Applied Optimization, vol. 87. Kluwer, Boston (2004); Nesterov, Y.: Gradient methods for minimizing composite objective function. Technical report (later published in Math. Prog. Series B), CORE 2007/76, Université Catholique de Louvain, Louvain-la-Neuve, Belgium (2007) 
tfocs_N83  Nesterov’s 1983 accelerated method; also by Beck and Teboulle 2005 (FISTA). References: Nesterov, Y.: A method for unconstrained convex minimization problem with the rate of convergence O(1/k^2). Doklady AN USSR; Nesterov, Y.: Smooth minimization of non-smooth functions. Math. Program. Ser. A 103, 127–152 (2005); Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009) 
tfocs_TS  Tseng’s modification of Nesterov’s 2007 method. Reference: Tseng, P.: On accelerated proximal gradient methods for convex-concave optimization (2008). 
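The accelerated variants above differ mainly in how they build the extrapolation (momentum) point between proximal-gradient steps. As an illustration of the FISTA-style scheme behind tfocs_N83 (a NumPy sketch assuming a known Lipschitz constant L; illustrative names, not TFOCS code):

```python
import numpy as np

def fista(grad, prox, L, x0, n_iter=200):
    """Accelerated proximal gradient: min f(x) + h(x), with grad f and prox of h given."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(n_iter):
        x_new = prox(y - grad(y) / L, 1.0 / L)             # proximal gradient step at y
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0   # momentum weight update
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)      # extrapolation point
        x, t = x_new, t_new
    return x
```

The extrapolation step is what improves the O(1/k) rate of plain gradient descent (tfocs_GRA) to O(1/k^2).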