Federated Learning (FL) is a distributed machine learning framework that allows multiple clients to collaboratively train a global model while keeping their data local; however, sensitive information may still be inferred from the exchanged local models. Although homomorphic encryption and multi-party computation have been applied to FL to mitigate such privacy risks, they incur costly communication overhead and long training times. Consequently, functional encryption (FE) has been introduced into privacy-preserving FL (PPFL) to boost efficiency and enhance security. Nevertheless, existing FE-based PPFL frameworks that support dynamic participation either require a trusted third party, which introduces a single point of failure, or require multiple rounds of interaction, which inevitably incur large communication overhead. We therefore propose PrivLDFL, a lightweight and dynamic PPFL framework for resource-constrained devices. Technically, we formalize dynamic decentralized multi-client FE and give instantiations, present efficiency optimizations by designing a vector compression funnel based on the Chinese Remainder Theorem, and support client dropouts via a client partitioning strategy. Beyond a formal security analysis of PrivLDFL, we implement it alongside state-of-the-art solutions on Raspberry Pi and conduct extensive experiments, confirming its practical performance on well-known public datasets.
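
The abstract mentions a Chinese Remainder Theorem-based vector compression funnel. As a minimal sketch of the general idea (not the paper's actual construction), CRT packing encodes several small values into one integer under pairwise-coprime moduli, so that each value can be recovered by modular reduction; the moduli and function names below are illustrative assumptions:

```python
# Illustrative CRT-based vector packing; an assumed sketch of the general
# technique, not PrivLDFL's concrete compression funnel.
from math import prod

def crt_pack(values, moduli):
    """Pack values (values[i] < moduli[i]) into one integer x with
    x mod moduli[i] == values[i], via the Chinese Remainder Theorem."""
    M = prod(moduli)
    x = 0
    for v, m in zip(values, moduli):
        Mi = M // m                   # product of the other moduli
        x += v * Mi * pow(Mi, -1, m)  # CRT reconstruction term
    return x % M

def crt_unpack(x, moduli):
    """Recover each packed value by reducing modulo its modulus."""
    return [x % m for m in moduli]

# Usage: pack three small model-update entries under coprime moduli.
moduli = [257, 263, 269]   # pairwise coprime, each exceeding any entry
vals = [42, 7, 100]
packed = crt_pack(vals, moduli)
assert crt_unpack(packed, moduli) == vals
```

Note that this packing is additively homomorphic componentwise: adding packed integers adds the underlying entries modulo each modulus, which is why CRT packing pairs naturally with secure aggregation, provided the aggregated sums stay below the moduli.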