Recently, the concept of autonomous driving has become prevalent in the domain of intelligent transportation due to its promises of increased safety, traffic efficiency, fuel economy, and reduced travel time. Numerous studies have been conducted in this area to help newcomer vehicles plan their trajectories and velocities. However, most of these proposals either consider trajectory planning only in conjunction with a limited dataset (e.g., metropolitan areas, highways, or residential areas) or assume a fully connected and automated vehicle environment. Moreover, these approaches are not explainable and provide no means of assessing trust in the contributions of the participating vehicles. To tackle these problems, we design an Explainable Artificial Intelligence (XAI) Federated Deep Reinforcement Learning model to improve the effectiveness and trustworthiness of trajectory decisions for newcomer Autonomous Vehicles (AVs). When a newcomer AV seeks help with trajectory planning, the edge server launches a federated learning process to train the trajectory and velocity prediction model in a distributed, collaborative fashion among participating AVs. One essential challenge in this approach is AV selection, i.e., how to select the appropriate AVs to participate in the federated learning process. For this purpose, XAI is first used to compute the contribution of each feature provided by each vehicle to the overall solution, which allows us to derive a trust value for each AV in the model. Then, a trust-based deep reinforcement learning model is put forward to make the selection decisions. Experiments using a real-life dataset show that our solution outperforms two benchmark solutions, namely Deep Q-Network (DQN) and Random Selection (RS).
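
To make the selection pipeline concrete, the following is a minimal sketch of the idea described above: per-AV trust scores are derived from XAI feature attributions (assumed here to be SHAP-style magnitudes), and an epsilon-greedy rule stands in for the trust-based deep reinforcement learning selector. All names (trust_scores, select_avs, xai_contributions) and the exact aggregation rule are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: trust from XAI attributions + participant selection.
# The epsilon-greedy policy is a simplified stand-in for the trust-based
# DRL selector described in the abstract; names and logic are assumptions.
import numpy as np


def trust_scores(xai_contributions: np.ndarray) -> np.ndarray:
    """xai_contributions: (num_avs, num_features) matrix of absolute
    feature-attribution values (e.g., SHAP magnitudes) per vehicle."""
    per_av = xai_contributions.sum(axis=1)  # total contribution per AV
    return per_av / per_av.sum()            # normalize into a trust score


def select_avs(trust: np.ndarray, k: int, epsilon: float = 0.1) -> np.ndarray:
    """Select k AVs for the next federated round: mostly exploit the
    most-trusted AVs, occasionally explore a random subset."""
    rng = np.random.default_rng()
    if rng.random() < epsilon:
        return rng.choice(len(trust), size=k, replace=False)
    return np.argsort(trust)[-k:]


# Toy usage: 5 candidate AVs, 4 features each.
contribs = np.abs(np.random.default_rng(0).normal(size=(5, 4)))
trust = trust_scores(contribs)
print("trust:", np.round(trust, 3))
print("selected AVs:", select_avs(trust, k=2))
```

In the full approach, the greedy step above would be replaced by a learned policy whose reward reflects how much each selected AV improves the federated trajectory and velocity prediction model.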