Many problems in science require the computation of only one singular vector or, more generally, a singular subspace of a matrix. Instead of computing the complete singular-value decomposition, iterative methods can be used to reduce the computation time, in particular when a priori information about the solution is available. This paper deals with the computation of a basis of the singular subspace of a matrix associated with its smallest singular values. Three iterative methods (inverse iteration, Chebyshev iteration, and inverse Chebyshev iteration) are compared by analyzing their convergence properties. Based on the convergence rate and the operation count per iteration step, it is shown for which classes of problems each iterative algorithm is most efficient. If the gap between the singular values associated with the desired and the undesired singular subspaces is large, inverse iteration is clearly the best choice. If not, convergence can be accelerated by applying inverse Chebyshev iteration, provided sufficiently tight bounds delineating the undesired part of the singular-value spectrum are known. The smaller the gap, the larger the gain in speed. Unless the matrix is structured or sparse, this method is shown to be always more efficient than ordinary Chebyshev iteration.
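To make the first of the three methods concrete, the following is a minimal sketch of block inverse iteration for the right singular subspace associated with the smallest singular values. It is an illustration only, not the paper's algorithm: the function name, the choice of working with the normal matrix A^T A, and the fixed iteration count are assumptions introduced here for clarity.

```python
import numpy as np

def inverse_iteration_smallest(A, p=1, iters=50, seed=0):
    """Sketch (hypothetical helper): block inverse iteration converging to an
    orthonormal basis of the right singular subspace of A associated with its
    p smallest singular values."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    # Random orthonormal starting block.
    V, _ = np.linalg.qr(rng.standard_normal((n, p)))
    for _ in range(iters):
        # One inverse-iteration step: solving (A^T A) Z = V amplifies the
        # directions belonging to the small singular values.  (For a sketch we
        # form A^T A explicitly; in practice one would solve two least-squares
        # problems with A instead to avoid squaring the condition number.)
        Z = np.linalg.lstsq(A.T @ A, V, rcond=None)[0]
        # Re-orthonormalize the block to keep the iteration stable.
        V, _ = np.linalg.qr(Z)
    return V
```

The convergence rate of such an iteration is governed by the ratio between the largest "desired" and the smallest "undesired" singular value, which is why a large gap makes inverse iteration the method of choice.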