We continue the study of doubly-efficient proof systems for verifying agnostic PAC learning, for which we obtain the following results.

We construct an interactive protocol for learning the t largest Fourier characters of a given function f : {0,1}^n -> {0,1} up to an arbitrarily small error, in which the verifier uses poly(t) random examples. This improves upon the Interactive Goldreich-Levin protocol of Goldwasser, Rothblum, Shafer, and Yehudayoff (ITCS 2021), whose sample complexity is poly(t,n).

For agnostically learning the class AC^0[2] under the uniform distribution, we build on the work of Carmosino, Impagliazzo, Kabanets, and Kolokolova (APPROX/RANDOM 2017) and design an interactive protocol in which, given a function f : {0,1}^n -> {0,1}, the verifier learns a hypothesis whose error is within a polylog(n) multiplicative factor of the optimal, using quasi-polynomially many random examples. In contrast, this class has been notoriously resistant even to constructing realisable learners (without a prover) from random examples.

For agnostically learning k-juntas under the uniform distribution, we obtain an interactive protocol in which the verifier uses O(2^k) random examples labeled by a given function f : {0,1}^n -> {0,1}. Crucially, the sample complexity of the verifier is independent of n.

We also show that if we do not insist on doubly-efficient proof systems, then the model becomes trivial. Specifically, we exhibit a protocol for an arbitrary class C of Boolean functions in the distribution-free setting, in which the verifier uses O(1) labeled examples to learn f.
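
As a minimal illustration (not the paper's protocol) of the kind of verifier-side check available in the Fourier setting, the sketch below estimates a single Fourier coefficient hat{f}(S) = E_x[(-1)^{f(x)} chi_S(x)] from uniformly random labeled examples; by a Hoeffding bound, O(log(1/delta)/eps^2) examples suffice for +/-eps accuracy, independently of n. The function name and the +/-1 encoding are our own choices for the sketch.

    def estimate_fourier_coefficient(examples, S):
        # examples: list of (x, y) pairs with x a tuple drawn uniformly
        # from {0,1}^n and y = f(x) in {0,1}.
        # S: iterable of coordinate indices; chi_S(x) = (-1)^{sum_{i in S} x_i}.
        total = 0.0
        for x, y in examples:
            chi = (-1) ** sum(x[i] for i in S)
            total += ((-1) ** y) * chi       # (-1)^{f(x)} * chi_S(x)
        return total / len(examples)

    # Hypothetical usage: f = parity of the first two bits of a 4-bit input,
    # so the coefficient on S = {0, 1} equals 1 in the +/-1 encoding.
    import random
    f = lambda x: (x[0] + x[1]) % 2
    xs = [tuple(random.randint(0, 1) for _ in range(4)) for _ in range(2000)]
    print(estimate_fourier_coefficient([(x, f(x)) for x in xs], {0, 1}))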
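
For the junta result, one simple ingredient with sample requirements depending on 2^k rather than n is the following plug-in estimate: given a prover-supplied set J of k coordinates, bucket uniform examples by their restriction to J and count minority labels, which estimates the error of the best J-junta. This is only a sketch of such a spot-check under the assumption that J is given, with hypothetical names; it is not the protocol from the paper.

    from collections import defaultdict

    def best_junta_error_estimate(examples, J):
        # examples: (x, y) pairs with x uniform in {0,1}^n and y = f(x) in {0,1}.
        # J: a candidate set of k coordinate indices.
        # The best J-junta predicts the majority label in each bucket
        # x|_J = a, so its error is the total minority mass.
        buckets = defaultdict(lambda: [0, 0])
        for x, y in examples:
            key = tuple(x[i] for i in sorted(J))
            buckets[key][y] += 1
        minority = sum(min(c0, c1) for c0, c1 in buckets.values())
        return minority / len(examples)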
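
Finally, the triviality remark rests on the fact that checking a claimed hypothesis is far cheaper than finding one: the verifier can estimate the error of any hypothesis h sent by the prover from a number of labeled examples that depends only on the desired accuracy and confidence, independently of n and of the class C. The sketch below (names are ours) shows just this Hoeffding-based checking step, not the full protocol, which must also prevent the prover from overstating the optimal error.

    import math

    def estimate_error(examples, h):
        # Empirical disagreement between hypothesis h and the labels y = f(x).
        return sum(1 for x, y in examples if h(x) != y) / len(examples)

    def samples_for_accuracy(eps, delta):
        # Hoeffding: this many i.i.d. labeled examples estimate the true
        # error within +/- eps with probability at least 1 - delta.
        return math.ceil(math.log(2 / delta) / (2 * eps ** 2))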