Federated learning (FL) can impose substantial communication overhead on the central server, which may cause operational challenges. Moreover, failure or compromise of the central server may bring down the entire system. To mitigate these issues, decentralized federated learning (DFL) has been proposed as a more resilient framework that does not rely on a central server. In DFL, devices exchange model parameters directly with one another over a wireless network. To improve the communication efficiency of DFL systems, various transmission schemes have been proposed and investigated; however, limited communication resources remain a significant challenge for these schemes. To explore the impact of constrained resources, such as computation and communication costs, on DFL, this study analyzes the model performance of resource-constrained DFL under different communication schemes (digital and analog) over wireless networks. Specifically, we derive convergence bounds for both digital and analog transmission, enabling analysis of the performance of models trained with DFL. For digital transmission, we analyze the resource allocation between computation and communication and the convergence rate, obtaining the communication complexity and the minimum probability of correct communication required to guarantee convergence. For analog transmission, we discuss the impact of channel fading and noise on model performance, as well as the maximum error accumulation under which convergence over fading channels is still guaranteed. Finally, we conduct numerical simulations to evaluate the performance and convergence rate of convolutional neural networks (CNNs) and a Vision Transformer (ViT) trained in the DFL framework on the Fashion-MNIST and CIFAR-10 datasets. The simulation results validate our analysis and discussion, revealing how performance can be improved by optimizing system parameters under different communication conditions.
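To make the contrast between the two communication schemes concrete, the following minimal Python sketch simulates DFL on a small ring of devices, comparing a digital link where each packet decodes correctly only with some probability against an analog over-the-air link subject to fading and additive noise. The ring topology, the quadratic local losses, and all numerical values (p_correct, noise_std, learning rate) are illustrative assumptions for this sketch, not the paper's exact algorithm or system model.

```python
# A minimal sketch (not the paper's exact algorithm) of decentralized FL over a
# ring of devices, comparing a digital link with a packet-error probability and
# an analog link with fading and additive Gaussian noise. All losses, topology,
# and parameter values below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
NUM_DEVICES, DIM, ROUNDS, LR = 8, 4, 200, 0.05

# Each device i minimizes f_i(w) = 0.5 * ||w - t_i||^2; the global optimum is
# the mean of the local targets t_i.
targets = rng.normal(size=(NUM_DEVICES, DIM))
optimum = targets.mean(axis=0)

def neighbors(i):
    """Ring topology: each device communicates with its two adjacent devices."""
    return [(i - 1) % NUM_DEVICES, (i + 1) % NUM_DEVICES]

def run(mode, p_correct=0.9, noise_std=0.1):
    w = np.zeros((NUM_DEVICES, DIM))
    for _ in range(ROUNDS):
        # Local computation: one gradient step on each device's own loss.
        w = w - LR * (w - targets)
        new_w = np.empty_like(w)
        for i in range(NUM_DEVICES):
            if mode == "digital":
                # Each neighbor's packet decodes correctly with probability
                # p_correct; a failed packet is simply dropped from the average.
                received = [w[i]]
                for j in neighbors(i):
                    if rng.random() < p_correct:
                        received.append(w[j])
                new_w[i] = np.mean(received, axis=0)
            else:  # analog over-the-air aggregation
                # Superposed reception: sum of faded neighbor signals plus
                # noise, normalized by the total fading gain.
                h = np.abs(rng.normal(size=len(neighbors(i))))  # fading gains
                superposed = sum(h[k] * w[j] for k, j in enumerate(neighbors(i)))
                superposed = superposed + rng.normal(scale=noise_std, size=DIM)
                new_w[i] = (w[i] + superposed / h.sum()) / 2
        w = new_w
    return np.linalg.norm(w - optimum, axis=1).mean()

for mode in ("digital", "analog"):
    print(f"{mode:7s} mean distance to optimum: {run(mode):.4f}")
```

Running the sketch with a lower p_correct (digital) or a larger noise_std (analog) shows the qualitative effect the abstract describes: unreliable or noisy links slow consensus and leave a residual gap to the optimum, which is what the convergence bounds quantify.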