Brent's scheduling principle provides a general simulation scheme for the case in which fewer processors are available than the fastest parallel algorithm requires. Under such a slowdown, the scheme preserves the total number of executed operations, also called the work. In this paper we take the complementary viewpoint: rather than considering work-preserving slowdowns of a fast parallel algorithm, we investigate the speedups achievable while preserving the work of the best known sequential algorithm for the same problem. The proposed technique, eminently applicable to problems of a matrix-computational flavor, achieves its result through the interplay of two algorithms with significantly different features. Analogous but structurally different "interplays" have previously been used to improve the algorithmic efficiency of graph computations, selection, and list ranking. We demonstrate the efficacy of our technique on the computation of path algebras in graphs and digraphs and on various fundamental computations in linear algebra. Some of the new algorithms may have practical value; for instance, we substantially improve the performance of parallel algorithms for the solution of triangular and Toeplitz linear systems of equations and for the computation of the transitive closure of digraphs.
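
For concreteness, the classical textbook statement of Brent's principle underlying the work-preserving simulation is as follows (a standard formulation, not a quotation from this paper): if a parallel algorithm performs $x$ operations in parallel time $t$, then it can be simulated on $p$ processors in time
\[
  T_p \;\le\; \frac{x}{p} + t ,
\]
so that the total work of the simulation satisfies $p \, T_p \le x + pt = O(x)$ whenever $p = O(x/t)$; the slowed-down simulation is therefore work-preserving.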