In this paper, we consider a class of possibly non-convex and non-smooth optimization problems arising in many contemporary applications such as machine learning, variable selection and image processing. To solve this class of problems, we propose a proximal gradient method with extrapolation and line search (PGels). This method is developed based on a special potential function and successfully incorporates both extrapolation and non-monotone line search, two simple and efficient acceleration techniques for the proximal gradient method. Thanks to the non-monotone line search, the method allows greater flexibility in choosing the extrapolation parameters and updates them adaptively at each iteration whenever a certain criterion is not satisfied. Moreover, with proper choices of parameters, our PGels reduces to many existing algorithms. We also show that, under some mild conditions, our line search criterion is well defined and any cluster point of the sequence generated by the PGels is a stationary point of our problem. In addition, under assumptions on the Kurdyka-Łojasiewicz exponent of the objective, we further analyze the local convergence rate of two special cases of the PGels, including the widely used non-monotone proximal gradient method. Finally, we conduct some preliminary numerical experiments on the $\ell_1$-regularized logistic regression problem and the $\ell_{1-2}$-regularized least squares problem. The numerical results demonstrate the promising performance of the PGels and validate the potential advantage of combining the two acceleration techniques.