The development of Internet of Things (IoT) devices has ushered in an era of unprecedented data production at the network edge. The challenge lies in using this data for artificial intelligence tasks while preserving user privacy. Federated Learning (FL) is one such solution. This study explores the adoption and improvement of FL, particularly in IoT settings, and addresses issues of communication efficiency, model aggregation, and interoperability between devices. The methodological basis consists of an analytical philosophy, a deductive approach, and a descriptive design. Secondary data collection is carried out using published literature and technical documents. The study's conclusions stress the importance of communication protocols such as Secure Sockets Layer (SSL), which provides strong encryption for the safe transmission of information, and Message Queuing Telemetry Transport (MQTT), which offers fast, lightweight messaging. The paper also investigates how aggregation mechanisms affect model convergence: Federated Averaging shows effective convergence, while Secure Aggregation guarantees anonymity in circumstances where privacy is a concern. The research further explores algorithm optimization methods that improve model efficiency on resource-constrained IoT devices, such as model pruning, quantization, and lightweight model architectures.
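To make the aggregation step referenced above concrete, the following is a minimal sketch of Federated Averaging in Python. The weighting of each client's parameters by its local sample count, and the names federated_averaging, client_weights, and client_sizes, are illustrative assumptions for this sketch rather than the exact formulation evaluated in the study.

```python
import numpy as np

def federated_averaging(client_weights, client_sizes):
    """Aggregate per-client model parameters into a global model.

    client_weights: list of parameter lists (one list of np.ndarray per client).
    client_sizes:   number of local training samples per client, used to
                    weight each client's contribution to the average.
    """
    total = sum(client_sizes)
    # Start from zeroed parameters shaped like the first client's model.
    global_weights = [np.zeros_like(p) for p in client_weights[0]]
    for params, n in zip(client_weights, client_sizes):
        for layer, p in enumerate(params):
            # Each client contributes in proportion to its local data size.
            global_weights[layer] += (n / total) * p
    return global_weights

# Example: three simulated IoT clients, each holding one weight matrix.
clients = [[np.random.randn(4, 2)] for _ in range(3)]
sizes = [120, 80, 200]
aggregated = federated_averaging(clients, sizes)
print(aggregated[0].shape)  # (4, 2)
```

In this scheme, the server never sees raw device data, only parameter updates, which is the property that motivates pairing the averaging step with Secure Aggregation when stronger privacy guarantees are required.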