%0 Unpublished Work %T Mean and Variance Estimation Complexity in Arbitrary Distributions via Wasserstein Minimization %A V. Iverson %A S. Vavasis %G eng %U https://arxiv.org/abs/2501.10172 %0 Journal Article %J SIAM J. Optimization %D Accepted %T MGProx: A nonsmooth multigrid proximal gradient method with adaptive restriction for strongly convex optimization %A Andersen Ang %A Hans De Sterck %A Stephen Vavasis %B SIAM J. Optimization %G eng %U https://arxiv.org/abs/2302.04077 %0 Journal Article %J Mathematical Programming - Computation %D Accepted %T Nonlinear conjugate gradient for smooth convex functions %A Sahar Karimi %A Stephen A. Vavasis %X

The method of nonlinear conjugate gradients (NCG) is widely used in practice for unconstrained optimization, but it satisfies weak complexity bounds at best when applied to smooth convex functions. In contrast, Nesterov’s accelerated gradient (AG) method is optimal up to constant factors for this class.
 

However, when specialized to quadratic functions, conjugate gradient is optimal in a strong sense among function-gradient methods. Therefore, there is seemingly a gap in the menu of available algorithms: NCG, the optimal algorithm for quadratic functions that also exhibits good practical performance for general functions, has poor complexity bounds compared to AG.
 

We propose an NCG method called C+AG (“conjugate plus accelerated gradient”) to close this gap, that is, it is optimal for quadratic functions and still satisfies the best possible complexity bound for more general smooth convex functions. It takes conjugate gradient steps until insufficient progress is made, at which time it switches to accelerated gradient steps, and later retries conjugate gradient. The proposed method has the following theoretical properties: (i) It is identical to linear conjugate gradient (and hence terminates finitely) if the objective function is quadratic; (ii) Its running-time bound is O(ε^{-1/2}) gradient evaluations for an L-smooth convex function, where ε is the desired residual reduction; (iii) Its running-time bound is O(√(L/ℓ) ln(1/ε)) if the function is both L-smooth and ℓ-strongly convex. We also conjecture and outline a proof that a variant of the method has the property: (iv) It is n-step quadratically convergent for a function whose second derivative is smooth and invertible at the optimizer. Note that the bounds in (ii) and (iii) match AG and are the best possible, i.e., they match lower bounds up to constant factors for the classes of functions under consideration. On the other hand, (i) and (iv) match NCG.
 

%B Mathematical Programming - Computation %G eng %U https://arxiv.org/pdf/2111.11613.pdf %0 Report %D 2024 %T A Primal-Dual Frank-Wolfe Algorithm for Linear Programming %A Matthew Hough %A Stephen Vavasis %X We present two first-order primal-dual algorithms for solving saddle point formulations of linear programs, namely FWLP (Frank-Wolfe Linear Programming) and FWLP-P. The former iteratively applies the Frank-Wolfe algorithm to both the primal and dual of the saddle point formulation of a standard-form LP. The latter is a modification of FWLP in which regularizing perturbations are used in computing the iterates. We show that FWLP-P converges to a primal-dual solution with error O(1/sqrt(k)) after k iterations, while no convergence guarantees are provided for FWLP. We also discuss the advantages of using FWLP and FWLP-P for solving very large LPs. In particular, we argue that only part of the matrix A is needed at each iteration, in contrast to other first-order methods. %G eng %U https://arxiv.org/abs/2402.18514 %0 Report %D 2023 %T Accelerated gradient descent: A guaranteed bound for a heuristic restart strategy %A Walaa Moursi %A Viktor Pavlovic %A Stephen Vavasis %X The O(1/k^2) convergence rate in function value of accelerated gradient descent is optimal, but there are many modifications that have been used to speed up convergence in practice. Among these modifications are restarts, that is, starting the algorithm with the current iteration being considered as the initial point. We focus on the adaptive restart techniques introduced by O'Donoghue and Candès, specifically their gradient restart strategy. While the gradient restart strategy is a heuristic in general, we prove that applying gradient restarts preserves and in fact improves the O(1/k^2) bound, hence establishing function value convergence, for one-dimensional functions. Applications of our results to separable and nearly separable functions are presented. %G eng %U https://arxiv.org/abs/2310.07674 %0 Journal Article %J Foundations of Computational Mathematics %D 2023 %T Computational complexity of decomposing a symmetric matrix as a sum of positive semidefinite and diagonal matrices %A Levent Tunçel %A Stephen A. Vavasis %A Jingye Xu %X We study several variants of decomposing a symmetric matrix into a sum of a low-rank positive semidefinite matrix and a diagonal matrix. Such decompositions have applications in factor analysis and they have been studied for many decades. On the one hand, we prove that when the rank of the positive semidefinite matrix in the decomposition is bounded above by an absolute constant, the problem can be solved in polynomial time. On the other hand, we prove that, in general, these problems as well as their certain approximation versions are all NP-hard. Finally, we prove that many of these low-rank decomposition problems are complete in the first-order theory of the reals; i.e., given any system of polynomial equations, we can write down a low-rank decomposition problem in polynomial time so that the original system has a solution iff our corresponding decomposition problem has a feasible solution of certain (lowest) rank. 
%B Foundations of Computational Mathematics %V 2023 %P 1-47 %G eng %U https://arxiv.org/abs/2209.05678 %0 Report %D 2023 %T Range of the displacement operator of PDHG with applications to quadratic and conic programming %A Tao Jiang %A Walaa Moursi %A Stephen Vavasis %X Primal-dual hybrid gradient (PDHG) is a first-order method for saddle-point problems and convex programming introduced by Chambolle and Pock. Recently, Applegate et al. analyzed the behavior of PDHG when applied to an infeasible or unbounded instance of linear programming, and in particular, showed that PDHG is able to diagnose these conditions. Their analysis hinges on the notion of the infimal displacement vector in the closure of the range of the displacement mapping of the splitting operator that encodes the PDHG algorithm. In this paper, we develop a novel formula for this range using monotone operator theory. The analysis is then specialized to conic programming and further to quadratic programming (QP) and second-order cone programming (SOCP). A consequence of our analysis is that PDHG is able to diagnose infeasible or unbounded instances of QP and of the ellipsoid-separation problem, a subclass of SOCP. %G eng %U https://arxiv.org/abs/2309.15009 %0 Report %D 2023 %T Re-embedding data to strengthen recovery guarantees of clustering %A Tao Jiang %A Samuel Tang %A Stephen Vavasis %X We propose a clustering method that involves chaining four known techniques into a pipeline yielding an algorithm with stronger recovery guarantees than any of the four components separately. Given n points in R^d, the first component of our pipeline, which we call leapfrog distances, is reminiscent of density-based clustering, yielding an n × n distance matrix. The leapfrog distances are then translated to new embeddings using multidimensional scaling and spectral methods, two other known techniques, yielding new embeddings of the n points in R^d', where d' satisfies d' << d in general. Finally, sum-of-norms (SON) clustering is applied to the re-embedded points. Although the fourth step (SON clustering) can in principle be replaced by any other clustering method, our focus is on provable guarantees of recovery of underlying structure. Therefore, we establish that the re-embedding improves recovery of SON clustering, since SON clustering is a well-studied method that already has provable guarantees. %G eng %U https://arxiv.org/abs/2301.10901 %0 Journal Article %J Journal of Global Optimization %D 2022 %T Low-rank matrix recovery with Ky Fan 2-k-norm %A Xuan Vinh Doan %A Stephen A. Vavasis %X The low-rank matrix recovery problem is difficult due to its non-convex properties and it is usually solved using convex relaxation approaches. In this paper, we formulate the non-convex low-rank matrix recovery problem exactly using novel Ky Fan 2-k-norm-based models. A general difference of convex functions algorithm (DCA) is developed to solve these models. A proximal point algorithm (PPA) framework is proposed to solve sub-problems within the DCA, which allows us to handle large instances. Numerical results show that the proposed models achieve high recoverability rates as compared to the truncated nuclear norm method and the alternating bilinear optimization approach. The results also demonstrate that the proposed DCA with the PPA framework is efficient in handling larger instances.
%B Journal of Global Optimization %V 82 %P 727-751 %G eng %U https://link.springer.com/article/10.1007/s10898-021-01031-0 %0 Journal Article %D 2021 %T Certifying clusters from sum-of-norms clustering %A Tao Jiang %A Stephen A. Vavasis %X Sum-of-norms clustering is a clustering formulation based on convex optimization that automatically induces hierarchy. Multiple algorithms have been proposed to solve the optimization problem: subgradient descent by Hocking et al., ADMM and ADA by Chi and Lange, stochastic incremental algorithm by Panahi et al. and semismooth Newton-CG augmented Lagrangian method by Sun et al. All algorithms yield approximate solutions, even though an exact solution is demanded to determine the correct cluster assignment. The purpose of this paper is to close the gap between the output from existing algorithms and the exact solution to the optimization problem. We present a clustering test that identifies and certifies the correct cluster assignment from an approximate solution yielded by any primal-dual algorithm. Our certification validates clustering for both unit and multiplicative weights. The test may not succeed if the approximation is inaccurate. However, we show the correct cluster assignment is guaranteed to be certified by a primal-dual path following algorithm after sufficiently many iterations, provided that the model parameter λ avoids a finite number of bad values. Numerical experiments are conducted on Gaussian mixture and half-moon data, which indicate that carefully chosen multiplicative weights increase the recovery power of sum-of-norms clustering. %G eng %U https://arxiv.org/abs/2006.11355 %0 Conference Paper %B Neural Information Processing Systems (NeurIPS) %D 2020 %T Provable overlapping community detection in weighted graphs %A Jimit Majmudar %A Stephen Vavasis %X Community detection is a widely-studied unsupervised learning problem in which the task is to group similar entities together based on observed pairwise entity interactions. This problem has applications in diverse domains such as social network analysis and computational biology. There is a significant amount of literature studying this problem under the assumption that the communities do not overlap. When the communities are allowed to overlap, often a pure nodes assumption is made, i.e. each community has a node that belongs exclusively to that community. This assumption, however, may not always be satisfied in practice. In this paper, we provide a provable method to detect overlapping communities in weighted graphs without explicitly making the pure nodes assumption. Moreover, contrary to most existing algorithms, our approach is based on convex optimization, for which many useful theoretical properties are already known. We demonstrate the success of our algorithm on artificial and real-world datasets. %B Neural Information Processing Systems (NeurIPS) %V 2020 %G eng %U https://proceedings.neurips.cc/paper/2020 %0 Journal Article %J Journal of Machine Learning Research %D 2020 %T Recovery of a mixture of Gaussians by sum-of-norms clustering %A Jiang, T. %A Vavasis, S. %A Zhai, C. W. %X Sum-of-norms clustering is a method for assigning n points in R^d to K clusters, 1 ≤ K ≤ n, using convex optimization. Recently, Panahi et al. (2017) proved that sum-of-norms clustering is guaranteed to recover a mixture of Gaussians under the restriction that the number of samples is not too large.
The purpose of this note is to lift this restriction, that is, show that sum-of-norms clustering can recover a mixture of Gaussians even as the number of samples tends to infinity. Our proof relies on an interesting characterization of clusters computed by sum-of-norms clustering that was developed inside a proof of the agglomeration conjecture by Chiquet et al. (2017). Because we believe this theorem has independent interest, we restate and reprove the Chiquet et al. (2017) result herein. %B Journal of Machine Learning Research %V 21 %P 1-16 %G eng %U https://jmlr.org/papers/volume21/19-218/19-218.pdf %N 225 %0 Journal Article %J Comput. Meth. Appl. Mech. Engr. %D 2020 %T Second-order cone interior-point method for quasistatic and moderate dynamic cohesive fracture %A Vavasis, Stephen %A Papoulia, Katerina %A Hirmand, M. Reza %X


Cohesive fracture is among the few techniques able to model complex fracture nucleation and propagation with a sharp (nonsmeared) representation of the crack. Implicit time-stepping schemes are often favored in mechanics due to their ability to take larger time steps in quasistatic and moderate dynamic problems. Furthermore, initially rigid cohesive models are typically preferred when the location of the crack is not known in advance, since initially elastic models artificially lower the material stiffness. It is challenging to include an initially rigid cohesive model in an implicit scheme because the initiation of fracture corresponds to a nondifferentiability of the underlying potential. In this work, an interior-point method is proposed for implicit time stepping of initially rigid cohesive fracture. It uses techniques developed for convex second-order cone programming for the nonconvex problem at hand. The underlying cohesive model is taken from Papoulia (2017) and is based on a nondifferentiable energy function. That previous work proposed an algorithm based on successive smooth approximations to the nondifferentiable objective for solving the resulting optimization problem. It is argued herein that cone programming can capture the nondifferentiability without smoothing, and the resulting cone formulation is amenable to interior-point algorithms. A further benefit of the formulation is that other conic inequality constraints are straightforward to incorporate. Computational results are provided showing that certain contact constraints can be easily handled and that the method is practical.

 

%B Comput. Meth. Appl. Mech. Engr. %V 358 %P 112633 %G eng %U https://arxiv.org/abs/1909.10641 %0 Manuscript %D 2020 %T A termination criterion for stochastic gradient descent for binary classification %A Sina Baghal %A Courtney Paquette %A Stephen Vavasis %X We propose a new, simple, and computationally inexpensive termination test for constant step-size stochastic gradient descent (SGD) applied to binary classification on the logistic and hinge loss with homogeneous linear predictors. Our theoretical results support the effectiveness of our stopping criterion when the data is Gaussian distributed. This presence of noise allows for the possibility of non-separable data. We show that our test terminates in a finite number of iterations and when the noise in the data is not too large, the expected classifier at termination nearly minimizes the probability of misclassification. Finally, numerical experiments indicate for both real and synthetic data sets that our termination test exhibits a good degree of predictability on accuracy and running time. %G eng %U https://arxiv.org/abs/2003.10312 %0 Book Section %B Optimization of Complex Systems: Theory, Models and Applications %D 2019 %T Low-rank matrix recovery with Ky Fan 2-k-norm %A Doan, X. V. %A Vavasis, S. %E Le Thi et al. %B Optimization of Complex Systems: Theory, Models and Applications %P 310-319 %G eng %0 Unpublished Work %D 2019 %T Potential-based analyses of first-order methods for constrained and composite optimization %A Paquette, C. %A Vavasis, S. %G eng %0 Journal Article %J Mathematics of Operations Research %D 2018 %T On the Complexity of Robust PCA and l1-Norm Low-Rank Matrix Approximation %A Nicolas Gillis %A Stephen A. Vavasis %B Mathematics of Operations Research %V 43 %P 1072-1084 %G eng %0 Unpublished Work %D 2018 %T Second-order cone interior-point method for quasistatic and moderate dynamic cohesive fracture %A S. Vavasis %A K. Papoulia %A M. Hirmand %G eng %0 Journal Article %J SIAM J. Optimiz. %D 2017 %T IMRO: A Proximal Quasi-Newton Method for Solving $\ell_1$-Regularized Least Squares Problems %A S. Karimi %A S. Vavasis %B SIAM J. Optimiz. %V 27 %P 583-615 %G eng %0 Unpublished Work %D 2017 %T A single potential governing convergence of conjugate gradient, accelerated gradient and geometric descent %A Sahar Karimi %A Stephen Vavasis %G eng %0 Journal Article %J SIAM J. Optim. %D 2016 %T Finding the largest low-rank clusters with Ky Fan 2-k-norm and l1-norm %A X. V. Doan %A S. Vavasis %B SIAM J. Optim. %V 26 %P 274-312 %G eng %0 Unpublished Work %D 2016 %T A unified convergence bound for conjugate gradient and accelerated gradient %A S. Karimi %A S. Vavasis %G eng %0 Journal Article %J Mathematical Programming %D 2015 %T Extreme point inequalities and geometry of the rank sparsity ball %A Drusvyatskiy, D. %A Vavasis, S. A. %A Wolkowicz, H. %X We investigate geometric features of the unit ball corresponding to the sum of the nuclear norm of a matrix and the l1 norm of its entries, a common penalty function encouraging joint low rank and high sparsity. As a byproduct of this effort, we develop a calculus (or algebra) of faces for general convex functions, yielding a simple and unified approach for deriving inequalities balancing the various features of the optimization problem at hand, at the extreme points of the solution set. %B Mathematical Programming %V 152 %P 521–544 %8 Aug %G eng %U https://doi.org/10.1007/s10107-014-0795-8 %R 10.1007/s10107-014-0795-8 %0 Journal Article %J SIAM J. Optim.
%D 2015 %T Semidefinite Programming Based Preconditioning for More Robust Near-Separable Nonnegative Matrix Factorization %A Nicolas Gillis %A Stephen A. Vavasis %B SIAM J. Optim. %V 25 %P 677-698 %G eng %0 Journal Article %J Math. Progr. %D 2014 %T Convex optimization for the planted k-disjoint-clique problem %A Brendan P.W. Ames %A Stephen A. Vavasis %B Math. Progr. %V 143 %P 299-337 %G eng %0 Unpublished Work %D 2013 %T Convex relaxation for finding planted influential nodes in a social network %A L. Elkin %A Ting Kei Pong %A S. Vavasis %G eng %0 Journal Article %J IEEE Trans. Pattern Analysis and Machine Intelligence %D 2013 %T Fast and Robust Recursive Algorithms for Separable Nonnegative Matrix Factorization %A N. Gillis %A S. A. Vavasis %B IEEE Trans. Pattern Analysis and Machine Intelligence %V 36 %P 698-714 %G eng %0 Journal Article %J SIAM J. Optimiz. %D 2013 %T Finding approximately rank-one submatrices with the nuclear norm and $\ell_1$ norm %A X. V. Doan %A S. Vavasis %B SIAM J. Optimiz. %V 23 %P 2502-2540 %G eng %0 Journal Article %J SIAM J. Sci. Comput. %D 2013 %T A proximal point algorithm for sequential feature extraction applications %A X. V. Doan %A K.-C. Toh %A S. Vavasis %B SIAM J. Sci. Comput. %P A517-A540 %G eng %0 Unpublished Work %D 2013 %T Some notes on applying computational divided differencing in optimization %A S. Vavasis %G eng %0 Unpublished Work %D 2012 %T Detecting and correcting loss of independence in nonlinear conjugate gradient %A S. Karimi %A S. Vavasis %G eng %0 Conference Paper %B Proceedings of 20th International Meshing Roundtable %D 2012 %T A log-barrier method for mesh quality improvement %A S. Sastry %A S. Shontz %A S. Vavasis %B Proceedings of 20th International Meshing Roundtable %I Springer %P 329-346 %G eng %0 Journal Article %J SIAM J. Sci. Comput. %D 2011 %T A condition number analysis of an algorithm for solving a system of polynomial equations with one degree of freedom %A Gun Srijuntongsiri %A Stephen Vavasis %B SIAM J. Sci. Comput. %V 33 %P 433-454 %G eng %0 Journal Article %J Mathematical Programming %D 2011 %T Nuclear norm minimization for the planted clique and biclique problems %A Ames, B. %A Vavasis, S. %B Mathematical Programming %V 129 %P 69-89 %G eng %0 Journal Article %J Engineering with Computers %D 2011 %T A robust solution procedure for hyperelastic solids with large boundary deformation %A S. Shontz %A S. Vavasis %B Engineering with Computers %V 28 %P 135-147 %G eng %0 Journal Article %J BIT Numerical Mathematics %D 2010 %T Analysis of and workarounds for element reversal for a finite element-based algorithm for warping triangular and tetrahedral meshes %A S. Shontz %A S. Vavasis %B BIT Numerical Mathematics %V 50 %P 863-884 %G eng %0 Journal Article %J SIAM J. Optim. %D 2009 %T On the complexity of nonnegative matrix factorization %A Stephen A. Vavasis %B SIAM J. Optim. %V 20 %P 1364-1377 %G eng %0 Journal Article %J SIAM J. Sci. Comput. %D 2008 %T A Condition Number Analysis of a Line-Surface Intersection Algorithm %A Gun Srijuntongsiri %A Stephen A. Vavasis %B SIAM J. Sci. Comput. %V 30 %P 1064-1081 %G eng %0 Unpublished Work %D 2008 %T A new secant method for unconstrained optimization %A Stephen A. Vavasis %G eng %0 Conference Paper %B Proceedings of the 2008 International Conference on Machine Learning %D 2008 %T Nonnegative matrix factorization via rank-one downdating %A Michael Biggs %A Ali Ghodsi %A Stephen A.
Vavasis %B Proceedings of the 2008 International Conference on Machine Learning %G eng %0 Journal Article %J SIAM J. Numer. Anal. %D 2008 %T Solving Elliptic Finite Element Systems in Near-Linear Time with Support Preconditioners %A E. Boman %A B. Hendrickson %A S. Vavasis %B SIAM J. Numer. Anal. %V 46 %P 3264-3284 %G eng %0 Unpublished Work %D 2007 %T A Condition Number Analysis of a Surface-Surface Intersection Algorithm %A Gun Srijuntongsiri %A Stephen A. Vavasis %G eng %0 Unpublished Work %D 2007 %T Properties of polynomial bases used in line-surface intersection algorithm %A Gun Srijuntongsiri %A Stephen A. Vavasis %G eng %0 Journal Article %J SIAM J. Scientific Computing %D 2006 %T An algorithm for two-dimensional mesh generation based on the pinwheel tiling %A P. Ganguly %A S. Vavasis %A K. Papoulia %B SIAM J. Scientific Computing %V 28 %P 1533-1562 %G eng %0 Unpublished Work %D 2006 %T A conjecture that the roots of a univariate polynomial lie in a union of annuli %A S. Vavasis %G eng %0 Journal Article %J Internat. J. Numer. Meth. Eng. %D 2006 %T Spatial convergence of crack nucleation using a cohesive finite element model on a pinwheel-based mesh %A K. Papoulia %A S. Vavasis %A P. Ganguly %X


We consider the use of initially rigid cohesive interface models in a two‐dimensional dynamic finite‐element solution of a fracture process. Our focus is on convergence of finite‐element solutions to a solution of the undiscretized medium as the mesh spacing Δx (and therefore time‐step Δt) tends to zero. We propose the use of pinwheel meshes, which possess the ‘isoperimetric property’ that for any curve C in the computational domain, there is an approximation to C using mesh edges that tends to C including a correct representation of its length, as the grid size tends to zero. We suggest that the isoperimetric property is a necessary condition for any possible spatial convergence proof in the general case that the crack path is not known in advance. Conversely, we establish that if the pinwheel mesh is used, the discrete interface first activated in the finite‐element model will converge to the initial crack in the undiscretized medium. Finally, we carry out a mesh refinement experiment to check convergence of both nucleation and propagation. Our results indicate that the crack path computed in the pinwheel mesh is more stable as the mesh is refined compared to other types of meshes. %B Internat. J. Numer. Meth. Eng. %V 67 %P 1-16 %G eng %0 Journal Article %J Mathematics of Computation %D 2005 %T Accurate solution of polynomial equations using Macaulay resultant matrices %A G. Jónsson %A S. Vavasis %B Mathematics of Computation %V 74 %P 221-262 %G eng %0 Journal Article %J SIAM J. Matrix Analysis App. %D 2005 %T An iterative method for solving complex-symmetric systems arising in electrical power modeling %A V. E. Howle %A S. A. Vavasis %B SIAM J. Matrix Analysis App. %V 26 %P 1150-1178 %G eng %0 Journal Article %J Engineering Fracture Mechanics %D 2005 %T Obtaining initially rigid cohesive finite element models that are temporally convergent %A Chin-Hang Sam %A K. Papoulia %A S. Vavasis %B Engineering Fracture Mechanics %V 72 %P 2247-2267 %G eng %0 Journal Article %J SIAM J. Matrix Analysis App. %D 2005 %T Solving polynomials with small leading coefficients %A G. Jónsson %A S. Vavasis %B SIAM J. Matrix Analysis App. %V 26 %P 400-414 %G eng %0 Unpublished Work %D 2004 %T A Fully Sparse Implementation of a Primal- Dual Interior-Point Potential Reduction Method for Semidefinite Programming %A Gun Srijuntongsiri %A S. Vavasis %G eng %0 Unpublished Work %D 2003 %T A Bernstein-Bézier Sufficient Condition for Invertibility of Polynomial Mappings %A S. Vavasis %G eng %0 Journal Article %J International J. Numer. Meth. Eng. %D 2003 %T Time continuity in cohesive finite element modeling %A K. Papoulia %A S. Vavasis %A C.-H. Sam %B International J. Numer. Meth. Eng. %V 58 %P 679-701 %G eng %0 Journal Article %J SIAM J. Matrix Anal. App. %D 2001 %T Accurate Solution of Weighted Least Squares by Iterative Methods %A E. Y. Bobrovnikova %A S. A. Vavasis %B SIAM J. Matrix Anal. App. %V 22 %P 1153-1174 %G eng %0 Journal Article %J Linear Algebra and its Applications %D 2000 %T A Norm Bound for Projections with Complex Weights %A E. Bobrovnikova %A S. Vavasis %B Linear Algebra and its Applications %V 307 %P 69-75 %G eng %0 Journal Article %J SIAM J. Computing %D 2000 %T Quality Mesh Generation in Higher Dimensions %A S. A. Mitchell %A S. A. Vavasis %B SIAM J. Computing %V 29 %P 1334-1370 %G eng %0 Book Section %B Algorithms and Theory of Computation Handbook %D 1999 %T Convex Optimization %A S. A. 
Vavasis %B Algorithms and Theory of Computation Handbook %I CRC Press %G eng %0 Unpublished Work %D 1999 %T A note on efficient computation of the gradient in semidefinite programming %A S. Vavasis %G eng %0 Journal Article %J SIAM J. Sci. Comput. %D 1998 %T Geometric Separators for Finite-Element Meshes %A G. L. Miller %A S.-H. Teng %A W. Thurston %A S. Vavasis %B SIAM J. Sci. Comput. %V 19 %P 364-386 %G eng %0 Journal Article %J SIAM J. Sci. Comput. %D 1998 %T Numerical conformal mapping using cross-ratios and Delaunay triangulation %A T. A. Driscoll %A S. A. Vavasis %B SIAM J. Sci. Comput. %V 19 %P 1783-1803 %G eng %0 Conference Paper %B Proceedings of the Copper Mountain Conference on Iterative Methods %D 1998 %T Preconditioning complex-symmetric layered systems arising in electrical power modeling %A V. E. Howle %A S. Vavasis %B Proceedings of the Copper Mountain Conference on Iterative Methods %G eng %0 Journal Article %J SIAM J. Matrix Anal. Appl. %D 1997 %T Complete orthogonal decomposition for weighted least squares %A P. Hough %A S. Vavasis %B SIAM J. Matrix Anal. Appl. %V 18 %G eng %0 Journal Article %J J. ACM %D 1997 %T Separators for sphere-packings and nearest neighbor graphs %A G. L. Miller %A S.-H. Teng %A W. Thurston %A S. A. Vavasis %B J. ACM %V 44 %P 1-29 %G eng %0 Conference Paper %B Proc. 12th ACM Symposium on Computational Geometry %D 1996 %T An aspect ratio bound for triangulating a $d$-grid cut by a hyperplane (extended abstract) %A S. A. Mitchell %A S. Vavasis %B Proc. 12th ACM Symposium on Computational Geometry %P 48-57 %G eng %0 Journal Article %J Annals of Operations Research %D 1996 %T Identifying an Optimal Basis in Linear Programming %A S. A. Vavasis %A Y. Ye %B Annals of Operations Research %V 62 %P 565–572 %G eng %0 Journal Article %J Mathematical Programming %D 1996 %T A primal-dual interior point method whose running time depends only on the constraint matrix %A S. A. Vavasis %A Y. Ye %B Mathematical Programming %V 74 %P 79–120 %G eng %0 Unpublished Work %D 1996 %T QMG: Software for finite-element mesh generation %A S. A. Vavasis %G eng %0 Book Section %B Lectures in Applied Mathematics, volume 32 %D 1996 %T On the relationship between layered least squares and affine scaling steps %A S. A. Vavasis %A Y. Ye %B Lectures in Applied Mathematics, volume 32 %I American Mathematical Society %C Providence, RI %G eng %0 Journal Article %J SIAM J. Numer. Anal. %D 1996 %T Stable finite elements for problems with wild coefficients %A S. A. Vavasis %B SIAM J. Numer. Anal. %V 33 %P 890–916 %G eng %0 Journal Article %J Operations Research Letters %D 1995 %T Condition Numbers for Polyhedra with Real Number Data %A S. A. Vavasis %A Y. Ye %B Operations Research Letters %V 17 %P 209-214 %G eng %0 Conference Paper %B Proceedings of the 26th Symposium on the Theory of Computing %D 1994 %T An accelerated interior point method whose running time depends only on $A$ (extended abstract) %A S. A. Vavasis %A Y. Ye %B Proceedings of the 26th Symposium on the Theory of Computing %I ACM Press %P 512–521 %G eng %0 Generic %D 1994 %T Fast Wavelet Transforms for Matrices Arising from Boundary Element Methods %A D. M. Bond %A S. A. Vavasis %I Cornell Theory Center, Cornell University %G eng %0 Journal Article %J SIAM J. Matrix Anal. Appl %D 1994 %T Stable numerical algorithms for equilibrium systems %A S. A. Vavasis %B SIAM J. Matrix Anal. 
Appl. %V 15 %P 1108–1131 %G eng %0 Journal Article %J Matemática Aplicada e Computacional %D 1993 %T Active set methods for problems in column block angular form %A J. M. Stern %A S. A. Vavasis %B Matemática Aplicada e Computacional %V 12 %P 199–226 %G eng %0 Book Section %B Graph Theory and Sparse Matrix Computation %D 1993 %T Automatic Mesh Partitioning %A G. Miller %A S.-H. Teng %A W. Thurston %A S. Vavasis %E Alan George %E John Gilbert %E Joseph W. H. Liu %B Graph Theory and Sparse Matrix Computation %I Springer Verlag %G eng %0 Journal Article %J SIAM Journal on Optimization %D 1993 %T Black-box complexity of local minimization %A S. A. Vavasis %B SIAM Journal on Optimization %V 3 %P 60–80 %G eng %0 Book Section %B Handbook for Global Optimization %D 1993 %T Complexity issues in global optimization: A survey %A Vavasis, S. A. %E R. Horst %E P. M. Pardalos %B Handbook for Global Optimization %I Kluwer Academic Publishers %G eng %0 Journal Article %J SIAM J. Matrix Anal. Appl. %D 1993 %T Nested dissection for sparse nullspace bases %A J. M. Stern %A S. A. Vavasis %B SIAM J. Matrix Anal. Appl. %V 14 %P 766–775 %G eng %0 Book Section %B Complexity in Numerical Optimization %D 1993 %T Polynomial time weak approximation algorithms for quadratic programming %A Vavasis, S. A. %E P. M. Pardalos %B Complexity in Numerical Optimization %I World Scientific %P 490–500 %G eng %0 Book Section %B Recent Advances in Global Optimization %D 1992 %T On approximation algorithms for concave quadratic programming %A Vavasis, S. A. %E C. A. Floudas %E P. M. Pardalos %B Recent Advances in Global Optimization %I Princeton University Press %C Princeton, N.J. %P 3–18 %G eng %0 Journal Article %D 1992 %T Approximation algorithms for indefinite quadratic programming %A Vavasis, S. A. %V 57 %P 279–311 %G eng %0 Journal Article %J Mathematical Programming %D 1992 %T Local minima for indefinite quadratic knapsack problems %A S. A. Vavasis %B Mathematical Programming %V 54 %P 127–153 %G eng %0 Conference Paper %B Proc. 1992 Copper Mountain Conference on Iterative Methods %D 1992 %T Multigrid for mixed boundary integral equations %A D. M. Bond %A S. A. Vavasis %B Proc. 1992 Copper Mountain Conference on Iterative Methods %G eng %0 Journal Article %D 1992 %T Preconditioners for boundary integral equations %A S. A. Vavasis %V 13 %P 905–925 %G eng %0 Conference Paper %B Proceedings of the ACM Computational Geometry Conference %D 1992 %T Quality mesh generation in three dimensions %A S. A. Mitchell %A S. A. Vavasis %B Proceedings of the ACM Computational Geometry Conference %P 212–221 %G eng %0 Journal Article %D 1991 %T Automatic domain partitioning in three dimensions %A S. A. Vavasis %V 12 %P 950–970 %G eng %0 Generic %D 1991 %T A boundary element method for three-dimensional free surface flow %A H.-W. Hsu %A M. H. Lean %A P. L.-F. Liu %A S. A. Vavasis %I Xerox %G eng %0 Conference Paper %B Proc. SIAM-ACM Symposium on Discrete Algorithms %D 1991 %T Density graphs and separators %A G. L. Miller %A S. A. Vavasis %B Proc. SIAM-ACM Symposium on Discrete Algorithms %G eng %0 Book %D 1991 %T Nonlinear Optimization: Complexity Issues %A Vavasis, S. A. %I Oxford University Press %C New York %G eng %0 Journal Article %J Journal of Global Optimization %D 1991 %T Quadratic programming with one negative eigenvalue is NP-hard %A P. M. Pardalos %A S. A. Vavasis %B Journal of Global Optimization %V 1 %P 15–22 %G eng %0 Journal Article %J Mathematical Programming %D 1991 %T On the solution of concave knapsack problems %A J. J. Moré %A S. A.
Vavasis %B Mathematical Programming %V 49 %P 397–411 %G eng %0 Conference Paper %B Proceedings of the Symposium on Foundations of Computer Science %D 1991 %T A unified geometric approach to graph separators %A G. L. Miller %A S.-H. Teng %A S. A. Vavasis %B Proceedings of the Symposium on Foundations of Computer Science %P 538–547 %G eng %0 Generic %D 1990 %T A note on wavelet bases for two-dimensional surfaces %A S. A. Vavasis %I Department of Computer Science, Cornell University %C Ithaca, NY %G eng %0 Generic %D 1990 %T Proving polynomial-time for sphere-constrained quadratic programming %A Vavasis, S. A. %A Zippel, R. %I Department of Computer Science, Cornell University %C Ithaca, NY %G eng %0 Journal Article %J Information Processing Letters %D 1990 %T Quadratic programming is in NP %A Vavasis, S. A. %B Information Processing Letters %V 36 %P 73–77 %G eng %0 Journal Article %J Journal of Complexity %D 1989 %T Exponential lower bounds for finding Brouwer fixed points %A M. D. Hirsch %A C. H. Papadimitriou %A S. A. Vavasis %B Journal of Complexity %V 5 %P 379–416 %G eng %0 Journal Article %J SIAM Journal on Discrete Mathematics %D 1989 %T Gaussian elimination with pivoting is P-complete %A S. A. Vavasis %B SIAM Journal on Discrete Mathematics %V 2 %P 412–423 %G eng %0 Conference Paper %B Proceedings 28th Symposium on Foundations of Computer Science %D 1987 %T Exponential lower bounds for finding Brouwer fixed points %A M. D. Hirsch %A S. A. Vavasis %B Proceedings 28th Symposium on Foundations of Computer Science %I IEEE Computer Society Press %P 401–410 %G eng %0 Book Section %B Theoretical Computer Science: 6th GI conference, Dortmund %D 1982 %T Analysis of polynomial approximation algorithms for constraint expressions %A K. J. Lieberherr %A S. A. Vavasis %E A. B. Cremers %E H. P. Kriegel %B Theoretical Computer Science: 6th GI conference, Dortmund %S Lecture Notes in Computer Science %I Springer Verlag %C Berlin %V 145 %G eng