This paper continues the study of the software reliability model of Fakhre-Zakeri & Slud (1995), an "exponential order statistic model" in the sense of Miller (1986) with general mixing distribution, imperfect debugging, and large-sample asymptotics reflecting growth of the initial number of bugs with software size. The parameters of the model are theta (proportional to the initial number of bugs in the software), G(., mu) (the mixing distribution function, with finite-dimensional unknown parameter mu, for the rates lambda(i) with which the bugs in the software cause observable system failures), and p (the probability with which a detected bug is instantaneously replaced by another bug instead of being removed). Maximum likelihood estimation theory for (theta, p, mu) is applied to construct a likelihood-based score test, for large-sample data, of the hypothesis of "perfect debugging" (p = 0) versus "imperfect debugging" (p > 0) within the models studied. There are important models (including the Jelinski-Moranda) under which the score statistics with 1/sqrt(n) normalization are asymptotically degenerate. These statistics, illustrated on software reliability data of Musa (1980), can nevertheless serve as important diagnostics for the inadequacy of simple models.
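The abstract does not spell out the failure mechanism, but a minimal simulation sketch may help make the model concrete. The sketch below assumes one plausible reading of the imperfect-debugging mechanism, namely that a replacement bug draws a fresh rate from the mixing distribution G; the function name simulate_eos, the parameter values, and the gamma choice for G are all hypothetical illustrations, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_eos(theta0, p, rate_sampler, t_max):
    """Failure times under an exponential order statistic model with
    imperfect debugging: a detected bug is replaced with probability p,
    removed otherwise. rate_sampler draws per-bug rates lambda_i from G."""
    rates = rate_sampler(theta0)              # initial bug rates, lambda_i ~ G
    t, times = 0.0, []
    while rates.size:
        total = rates.sum()
        t += rng.exponential(1.0 / total)     # next failure of the superposed process
        if t > t_max:
            break
        times.append(t)
        i = rng.choice(rates.size, p=rates / total)   # which bug caused the failure
        if rng.random() < p:
            rates[i] = rate_sampler(1)[0]     # imperfect debugging: fresh bug, fresh rate
        else:
            rates = np.delete(rates, i)       # perfect debugging: bug removed
    return np.array(times)

# Jelinski-Moranda special case: degenerate mixing df (common rate 0.02)
jm_times = simulate_eos(200, p=0.0, rate_sampler=lambda k: np.full(k, 0.02), t_max=30.0)

# Non-degenerate (hypothetical) gamma mixing df with imperfect debugging
g_times = simulate_eos(200, p=0.2, rate_sampler=lambda k: rng.gamma(2.0, 0.01, k), t_max=30.0)

print(len(jm_times), "failures (p = 0);", len(g_times), "failures (p = 0.2)")
```

Under p = 0 the simulated counting process thins out as bugs are removed, whereas p > 0 keeps replenishing the bug pool; contrasting the two cases is the informal content of the "perfect" versus "imperfect" debugging hypotheses tested by the paper's score statistic.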