Intelligent edge computing based on federated learning has broad application prospects in the Internet of Things (IoT). However, current artificial intelligence practice still faces the dilemma of lacking sufficient data sources. In this context, conventional distributed machine learning aggregates edge devices' raw data into a parameter server for model training, which easily leads to data privacy leakage and causes excessive storage overhead. Federated learning (FL), in contrast, is a distributed machine learning framework that keeps data stored locally and can thus effectively protect the data privacy of intelligent edge nodes. According to the client setting, FL can be classified into two types: cross-device FL and cross-silo FL. In cross-device FL, a central entity acts as the parameter server and is also the owner of the global model, while the participating nodes act as clients that perform local training. In cross-silo FL, all participating nodes act as clients that perform local training; in addition, they are joint owners of the global model and can make use of the trained global model. In this paper, we focus on cross-device FL, in which intelligent edge devices provide model training services by sensing raw data from IoT devices such as intelligent vehicles, smartphones, etc. Most existing cross-device FL implementations aggregate the model by uploading the intermediate parameters of local training to the parameter server. This process raises two problems. On the one hand, the intermediate parameters themselves can leak privacy. Existing privacy protection schemes usually apply differential privacy by adding noise to the intermediate parameters, but excessive noise degrades the quality of the global model. On the other hand, because training nodes are self-interested and fully autonomous, malicious nodes may upload false parameters or low-quality models, thus affecting the aggregation process and model quality.
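The differential privacy mechanism described above can be sketched as follows. This is an illustrative example only; the function name, sensitivity value, and epsilon are assumptions, not the paper's actual configuration. It shows how Laplace noise scaled by sensitivity/epsilon is added to each parameter tensor, and why a smaller epsilon (stronger privacy) means larger noise and hence lower model quality:

```python
import numpy as np

def add_laplace_noise(params, sensitivity, epsilon, rng=None):
    """Perturb model parameters with Laplace noise (epsilon-DP sketch).

    The Laplace scale is b = sensitivity / epsilon, so a smaller epsilon
    (stronger privacy) injects larger noise -- the quality/privacy
    tradeoff the abstract describes.
    """
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return [w + rng.laplace(0.0, scale, size=w.shape) for w in params]

# Toy local update: two weight tensors from one client (values are dummies).
weights = [np.zeros((2, 3)), np.zeros(3)]
noisy = add_laplace_noise(weights, sensitivity=1.0, epsilon=0.5)
```

In a real cross-device FL round, each client would apply such a perturbation to its intermediate parameters before uploading them to the parameter server.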
In this paper, the centralized parameter server of federated learning is reconstructed as a decentralized parameter aggregation chain, and the intermediate parameters of the training process are recorded on the blockchain as evidence. Moreover, cooperative nodes are incentivized to verify the model parameters, and participating nodes that upload false parameters or low-quality models are punished, thereby restraining their self-interested behavior. In view of the above challenges, we take model quality as the metric for dynamically adjusting the privacy noise added to intermediate parameters, and propose a federated adaptive (FedAdp) model aggregation algorithm. Prototype development and experimental simulations show that the proposed FedAdp aggregation algorithm achieves higher aggregated-model accuracy under poisoning attacks. By dynamically adjusting the Laplace random noise, the algorithm realizes a tradeoff between privacy protection and the accuracy loss of the aggregated model. Experiments on blockchain performance confirm that our scheme is practical. The results demonstrate that the proposed model not only enhances mutual trust among the participating nodes of federated learning but also prevents privacy disclosure of intermediate parameters, thus realizing federated learning with enhanced trust and privacy protection. © 2021, Science Press. All rights reserved.
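The quality-driven aggregation idea can be illustrated with a minimal sketch. All names, thresholds, and the weighting rule below are assumptions for illustration; they are not the paper's actual FedAdp algorithm. The sketch gates out updates whose verified quality score falls below a threshold (e.g. from a poisoning node), averages the rest with quality-proportional weights, and shrinks the Laplace noise scale when measured model quality drops:

```python
import numpy as np

def quality_gated_aggregate(client_updates, client_quality, threshold=0.5):
    """Quality-gated weighted averaging (illustrative, not the actual FedAdp).

    client_updates: list of parameter vectors (np.ndarray).
    client_quality: per-client quality scores in [0, 1], e.g. validation
        accuracy measured by cooperating verifier nodes on-chain.
    Updates below the threshold (suspected poisoners) are excluded;
    the rest are averaged with quality-proportional weights.
    """
    kept = [(u, q) for u, q in zip(client_updates, client_quality)
            if q >= threshold]
    if not kept:
        raise ValueError("no client update passed the quality gate")
    total = sum(q for _, q in kept)
    return sum((q / total) * u for u, q in kept)

def adaptive_noise_scale(base_scale, model_quality, target_quality=0.8):
    """Shrink the Laplace noise scale when aggregated-model quality falls
    below a target, trading some privacy budget for accuracy."""
    return base_scale * min(1.0, model_quality / target_quality)

updates = [np.array([1.0, 1.0]), np.array([1.2, 0.8]), np.array([9.0, -9.0])]
quality = [0.9, 0.85, 0.1]  # third client behaves like a poisoner
agg = quality_gated_aggregate(updates, quality)  # third update is excluded
```

In this sketch the poisoned update (quality 0.1) is dropped before averaging, which is one simple way a quality metric can harden aggregation against poisoning while also steering how much noise subsequent rounds inject.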