With the rise of the Artificial Intelligence of Things (AIoT), integrating deep neural networks (DNNs) into mobile and embedded devices has become a significant trend, enhancing the data collection and analysis capabilities of IoT devices. Traditional integration paradigms rely on cloud-based training and on-device deployment, but in dynamic real-world environments they often suffer from delayed model updates, degraded accuracy, and increased communication overhead. Consequently, on-device training methods have attracted growing research attention. However, limited local perception data and constrained computational resources bottleneck their training efficiency. Federated Learning emerged to address these challenges, but it suffers from slow model convergence and reduced accuracy because data privacy concerns restrict the sharing of data and model details. In contrast, we observe that trusted clusters of devices exist in the real world (e.g., personal devices in a smart space, or trusted devices belonging to the same organization or company), within which devices prioritize computational efficiency and can share data without privacy restrictions. We propose CrowdLearning, a decentralized distributed training framework built on such trusted AIoT device collectives. The framework comprises two collaborative modules: a heterogeneous-resource-aware task offloading module that alleviates training latency bottlenecks, and a communication-efficient data reallocation module that determines when, how, and to whom data is transmitted, thereby improving DNN training efficiency and effectiveness. Experimental results demonstrate that across various scenarios, CrowdLearning outperforms existing federated learning and on-device distributed training baselines, reducing training latency by 55.8% and communication costs by 67.1%.