Motivated by ever-increasing computational resources at edge devices and growing privacy concerns, a new machine learning (ML) framework called federated learning (FL) has been proposed. FL enables user devices, such as mobile and Internet of Things (IoT) devices, to collaboratively train an ML model by exchanging only model parameters instead of raw data, and it is widely considered a key enabling approach for privacy-preserving, distributed ML systems. However, FL requires frequent exchange of learned model updates between many user devices and the cloud/edge server, which introduces significant communication overhead and thus poses a major challenge over wireless networks with limited communication resources. Moreover, transmitting these model updates consumes a considerable amount of energy, a second challenge in wireless networks that typically include unplugged devices with limited battery capacity. Furthermore, practical implementations of FL over wireless networks raise additional privacy issues. In this survey, we discuss each of these challenges and its state-of-the-art solutions in depth. By examining the tradeoffs among these solutions, we analyze the underlying effect of the wireless network on FL performance. Finally, by highlighting the gaps between research and practical implementations, we identify future research directions for engineering FL over wireless networks.
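The communication pattern described above — clients uploading only locally computed model updates, which a server then aggregates — can be sketched as a minimal federated-averaging round. This is an illustrative, generic FedAvg-style sketch, not the specific scheme of any work surveyed here; all function names, the plain-list model representation, and the per-client sample counts are assumptions for illustration.

```python
# Minimal sketch of one FedAvg-style communication round, assuming each
# client sends only its locally updated parameter vector (a plain list of
# floats here) together with its local dataset size. Raw data never leaves
# the client. All names are illustrative.

def local_update(params, gradient, lr=0.1):
    """One hypothetical local SGD step on a client device."""
    return [p - lr * g for p, g in zip(params, gradient)]

def federated_average(client_params, client_sizes):
    """Server-side aggregation: average client models, weighted by dataset size."""
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [
        sum(w[i] * n for w, n in zip(client_params, client_sizes)) / total
        for i in range(dim)
    ]

# One round with two clients: each uploads parameters, not data.
global_model = [0.0, 0.0]
updates = [
    local_update(global_model, gradient=[1.0, -1.0]),  # client A's local step
    local_update(global_model, gradient=[0.5, 0.5]),   # client B's local step
]
new_global = federated_average(updates, client_sizes=[100, 300])
```

Note that every round repeats this upload/aggregate exchange, which is precisely why communication overhead and the energy cost of transmission dominate FL deployments over wireless networks.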