We consider the problem of scheduling dynamically arriving jobs in a non-clairvoyant setting, that is, when the size of a job remains unknown until the job finishes execution. Our focus is on minimizing the mean slowdown, where the slowdown (also known as stretch) of a job is defined as the ratio of its flow time to its size. We use resource augmentation, granting the online algorithm a faster processor, to compensate for its lack of knowledge of job sizes. Our main result is that the Shortest Elapsed Time First (SETF) algorithm, a close variant of which is used in the Windows NT and Unix operating system scheduling policies, is a $(1+\epsilon)$-speed, $O((1/\epsilon)^5 \log^2 B)$-competitive algorithm for minimizing mean slowdown non-clairvoyantly, where $B$ is the ratio between the largest and smallest job sizes. In a sense, this provides a theoretical justification for the effectiveness of an algorithm widely used in practice. On the other hand, we also show that any $O(1)$-speed algorithm, deterministic or randomized, is $\Omega(\min(n,\log B))$-competitive. The need for resource augmentation is supported by an $\Omega(\min(n,B))$ lower bound on the competitive ratio without any speedup. For the static case, i.e., when all jobs arrive at time 0, we show that SETF is $O(\log B)$-competitive without any resource augmentation and also give a matching $\Omega(\log B)$ lower bound on the competitiveness.
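To make the objective and the SETF rule concrete, the following is a minimal simulation sketch (not part of the paper): it assumes a single processor of a given speed, a small fixed time step, and a hypothetical function name `setf_mean_slowdown`. At every instant the pending job with the least elapsed processing time is served, and the mean slowdown (flow time divided by job size) is reported at the end.

```python
def setf_mean_slowdown(jobs, speed=1.0, dt=0.001):
    """Illustrative sketch: simulate SETF on one processor of the given speed.

    jobs: list of (release_time, size) pairs; sizes are unknown to SETF,
    which only uses the elapsed processing time of each job.
    Returns the mean slowdown = average of (flow time / size).
    """
    n = len(jobs)
    elapsed = [0.0] * n       # processing received so far by each job
    finish = [None] * n       # completion times, filled in as jobs end
    t, done = 0.0, 0
    while done < n:
        # Jobs that have arrived by time t and are not yet finished.
        active = [i for i in range(n)
                  if jobs[i][0] <= t and finish[i] is None]
        if active:
            # SETF rule: serve the active job with the smallest elapsed time.
            i = min(active, key=lambda k: elapsed[k])
            elapsed[i] += speed * dt
            if elapsed[i] >= jobs[i][1]:
                finish[i] = t + dt
                done += 1
        t += dt
    return sum((finish[i] - jobs[i][0]) / jobs[i][1] for i in range(n)) / n


# Example: a small job arriving alongside a large one finishes quickly
# under SETF, keeping its slowdown near 1.
print(setf_mean_slowdown([(0.0, 1.0), (0.0, 8.0)], speed=1.0))
```

The fixed-step loop is only for clarity; an event-driven simulation would be more precise and efficient, but the scheduling decision it illustrates is the same.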