k-Decision lists and decision trees play important roles in learning theory as well as in practical learning systems. k-Decision lists generalize classes such as monomials, k-DNF, and k-CNF, and like these subclasses they are polynomially PAC-learnable [R. Rivest, Mach. Learning 2 (1987), 229-246]. This leaves open the question of whether k-decision lists can be learned as efficiently as k-DNF. We answer this question negatively in a certain sense, thus disproving a claim in a popular textbook [M. Anthony and N. Biggs, "Computational Learning Theory," Cambridge Univ. Press, Cambridge, UK, 1992]. Decision trees, on the other hand, are not even known to be polynomially PAC-learnable, despite their widespread practical application. We will show that decision trees are not likely to be efficiently PAC-learnable. We summarize our specific results. The following problems cannot be approximated in polynomial time within a factor of 2^(log^delta n) for any delta < 1, unless NP ⊆ DTIME[2^(polylog n)]: a generalized set cover, k-decision lists, k-decision lists by monotone decision lists, and decision trees. Decision lists cannot be approximated in polynomial time within a factor of n^delta, for some constant delta > 0, unless NP = P. Also, k-decision lists with l 0-1 alternations cannot be approximated within a factor of log^l n unless NP ⊆ DTIME[n^(O(log log n))] (providing an interesting comparison to the upper bound obtained by A. Dhagat and L. Hellerstein [in "FOCS '94," pp. 64-74]). (C) 1996 Academic Press, Inc.
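For concreteness, the following is a minimal sketch of how a k-decision list (in the sense of Rivest's definition) is evaluated: a sequence of (term, bit) pairs, where each term is a conjunction of at most k literals, and the output is the bit attached to the first satisfied term, with an always-true empty term as the default. The particular list `dl` below is an illustrative example, not taken from the paper.

```python
# Sketch of k-decision list evaluation (Rivest, Mach. Learning 2, 1987).
# A literal is a pair (variable index, required value); a term is a list
# of at most k literals, interpreted as their conjunction.

def satisfies(term, x):
    """True iff assignment x (tuple of 0/1) satisfies every literal in term.

    The empty term is satisfied vacuously, serving as the default rule.
    """
    return all(x[i] == v for i, v in term)

def eval_decision_list(dlist, x):
    """Output the bit of the first term in dlist satisfied by x."""
    for term, bit in dlist:
        if satisfies(term, x):
            return bit
    raise ValueError("decision list must end with a default (empty) term")

# Hypothetical 2-decision list over variables x0, x1, x2:
#   if x0 and not x1 -> 1; elif x2 -> 0; else -> 1
dl = [([(0, 1), (1, 0)], 1), ([(2, 1)], 0), ([], 1)]
```

Note that monomials, k-DNF, and k-CNF all arise as special cases by restricting which output bits and which terms may appear, which is the sense in which k-decision lists generalize them.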