In this paper, we propose adaptive L-p (0 < p < 1) estimators for sparse, high-dimensional linear regression models in which the number of covariates grows with the sample size. Beyond the case in which the number of covariates is smaller than the sample size, we prove that, under appropriate conditions, these adaptive L-p estimators possess the oracle property even when the number of covariates is much larger than the sample size. We present a series of experiments demonstrating the remarkable performance of the estimator with adaptive L-p regularization in comparison with L-1 regularization, adaptive L-1 regularization, and non-adaptive L-p regularization with 0 < p < 1, as well as its broad applicability to variable selection, signal recovery, and shape reconstruction.
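For orientation, a minimal sketch of an adaptive L-p penalized least-squares objective is given below; the notation (the dimension d_n, the weights \hat{w}_j built from an initial estimate \tilde{\beta}, and the tuning parameter \lambda_n) follows the usual adaptive-weighting convention and is an assumption here, not necessarily the exact formulation used in the paper.

\begin{equation*}
  \hat{\beta}
  = \operatorname*{arg\,min}_{\beta \in \mathbb{R}^{d_n}}
    \; \lVert y - X\beta \rVert_2^2
    + \lambda_n \sum_{j=1}^{d_n} \hat{w}_j \, \lvert \beta_j \rvert^{p},
  \qquad 0 < p < 1,
\end{equation*}
where, for instance, $\hat{w}_j = \lvert \tilde{\beta}_j \rvert^{-\gamma}$ for some $\gamma > 0$ and an initial consistent estimator $\tilde{\beta}$; the data-driven weights penalize small coefficients more heavily, which is what distinguishes the adaptive from the non-adaptive L-p penalty.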