Resource management in wireless networks with an enormous number of Internet of Things (IoT) users is a critical problem in the development of fifth-generation (5G) networks, and the primary aim of this research is to optimize the use of IoT network resources. Deep reinforcement learning (DRL) has significantly improved on traditional resource management, which is difficult to model analytically. Resource management in the Industrial Internet of Things (IIoT) must be carried out in real time, and conventional techniques face significant challenges owing to the scale and complexity of wireless networks. DRL has already been applied in several related areas, including resource management and allocation, dynamic channel access, mobile offloading, unified edge computing, caching and communication, and fog radio access networks. However, the design and analysis of DRL-based approaches have been largely restricted to stationary base stations solving typical resource-assignment problems. In the proposed DRL-IIoT system, k-means clustering is used to construct the primary model of the system, on top of which the DRL agent operates; this approach is more successful than the single-agent Q-learning technique. Furthermore, a comprehensive resource-aware reinforcement learning scheme is developed to ensure the best use of resources. Simulation results show that the suggested method allocates the available spectrum, cache, and computing resources with 97.24% efficiency relative to deep deterministic policy gradient (DDPG) benchmarks.
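
As an illustrative sketch only: the following Python example shows one plausible way the two stages named in the abstract could be combined, using k-means to build a primary cluster model of IIoT devices and a tabular Q-learning agent to learn per-cluster spectrum grants. All names, dimensions, the state/action encoding, and the reward shaping are assumptions made for illustration; they are not taken from the paper, and the toy environment stands in for the real IIoT dynamics.

    # Hypothetical sketch: k-means clusters IIoT devices, then tabular
    # Q-learning allocates spectrum shares per cluster. Every quantity
    # below (device count, grid size, load levels) is an assumption.
    import numpy as np

    rng = np.random.default_rng(0)

    def kmeans(points, k, iters=50):
        """Plain k-means: returns (centroids, labels)."""
        centroids = points[rng.choice(len(points), k, replace=False)]
        for _ in range(iters):
            # Assign each device to its nearest centroid.
            d = np.linalg.norm(points[:, None] - centroids[None], axis=2)
            labels = d.argmin(axis=1)
            # Recompute centroids; keep the old one if a cluster empties.
            for c in range(k):
                if np.any(labels == c):
                    centroids[c] = points[labels == c].mean(axis=0)
        return centroids, labels

    # 200 IIoT devices scattered over a 1 km x 1 km field, 4 clusters.
    devices = rng.uniform(0, 1000, size=(200, 2))
    centroids, labels = kmeans(devices, k=4)

    # Toy Q-learning: state = discretized load of a cluster (0..4),
    # action = spectrum grant level (0..4). The reward penalizes any
    # mismatch between the grant and the observed load.
    n_states, n_actions = 5, 5
    Q = np.zeros((n_states, n_actions))
    alpha, gamma, eps = 0.1, 0.9, 0.1

    for episode in range(2000):
        state = rng.integers(n_states)           # observed cluster load level
        for _ in range(20):
            if rng.random() < eps:               # epsilon-greedy exploration
                action = rng.integers(n_actions)
            else:
                action = int(Q[state].argmax())
            reward = -abs(action - state)        # grant should match load
            next_state = rng.integers(n_states)  # load drifts randomly here
            Q[state, action] += alpha * (
                reward + gamma * Q[next_state].max() - Q[state, action]
            )
            state = next_state

    print("Learned grant per load level:", Q.argmax(axis=1))

In this toy setting the greedy policy converges to granting level i for load level i, i.e. the printout approaches [0 1 2 3 4]; a DDPG benchmark, as used in the paper's evaluation, would replace the Q-table with actor and critic networks to handle continuous allocation levels.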