As the population of older adults living independently grows, and given this demographic's heightened risk of falls, automatic fall detection systems are increasingly needed to ensure timely medical intervention. Computer vision (CV)-based methodologies have emerged as a preferred approach among researchers owing to their contactless and pervasive nature. However, existing CV-based solutions often suffer from either poor robustness or prohibitively high computational requirements, impeding their practical deployment in elderly living environments. To address these challenges, we introduce TCNTE, a real-time skeleton-based fall detection algorithm that combines a Temporal Convolutional Network (TCN) with a Transformer Encoder (TE). We also mitigate the severe class imbalance between fall and non-fall samples by employing a weighted focal loss. Cross-validation on multiple publicly available vision-based fall datasets demonstrates TCNTE's superiority over the individual models (TCN and TE) and existing state-of-the-art fall detection algorithms, achieving remarkable accuracies (front view of UP-Fall: 99.58 %; side view of UP-Fall: 98.75 %; Le2i: 97.01 %; GMDCSA-24: 92.99 %) alongside practical viability. Visualizations using t-distributed stochastic neighbor embedding (t-SNE) show that TCNTE achieves a wider separation margin and more cohesive clustering of the fall and non-fall classes than either TCN or TE alone. Crucially, TCNTE is designed for pervasive deployment in mobile and resource-constrained environments. Integrated with YOLOv8 pose estimation and BoT-SORT human tracking, the algorithm runs on an NVIDIA Jetson Orin NX edge device at an average frame rate of 19 fps for single-person and 17 fps for two-person scenarios. With its validated accuracy and strong real-time performance, TCNTE holds significant promise for practical fall detection applications in older adult care settings.
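
As a point of reference for the class-imbalance remedy named in the abstract, the following is a minimal PyTorch sketch of an alpha-weighted focal loss for binary fall/non-fall classification. The class name `WeightedFocalLoss` and the `alpha = 0.75`, `gamma = 2.0` defaults are illustrative assumptions, not the paper's reported configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedFocalLoss(nn.Module):
    """Sketch of an alpha-weighted focal loss for fall / non-fall classification.

    Down-weights easy, well-classified examples via gamma and up-weights the
    minority fall class via alpha to counter severe class imbalance.
    Hyperparameter defaults here are illustrative assumptions.
    """

    def __init__(self, alpha: float = 0.75, gamma: float = 2.0):
        super().__init__()
        self.alpha = alpha  # weight on the positive (fall) class
        self.gamma = gamma  # focusing parameter; gamma = 0 recovers weighted BCE

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        # logits: raw scores of shape (N,); targets: {0, 1} labels with 1 = fall
        t = targets.float()
        bce = F.binary_cross_entropy_with_logits(logits, t, reduction="none")
        p_t = torch.exp(-bce)  # model's probability for the true class
        alpha_t = self.alpha * t + (1.0 - self.alpha) * (1.0 - t)
        return (alpha_t * (1.0 - p_t) ** self.gamma * bce).mean()
```

In use, such a loss would simply replace standard binary cross-entropy during training, e.g. `loss = WeightedFocalLoss()(model(skeleton_batch).squeeze(-1), labels)`.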