In the rapidly advancing era of Artificial Intelligence (AI), Large Language Models (LLMs) have emerged as a pivotal force, revolutionizing the automated creation of diverse content tailored to user preferences and intents. At the same time, Machine Learning (ML), particularly Deep Learning (DL), has achieved state-of-the-art (SoTA) performance in optimization and inference tasks across various domains, including telecommunications. However, the fragmentation of these technological domains hinders the integration of powerful AI/ML capabilities into "AI-native" networks, such as future 6G networks, which aim to offer ubiquitous intelligence across their infrastructure and service planes, seamlessly adapting and evolving to support new application classes. This paper proposes a novel out-of-the-box framework, TimesLM, for automated time series analytics leveraging LLMs. It addresses the challenge of developing general-purpose solutions by employing lightweight LLMs as forecasters and optimizers. TimesLM offers zero-shot and few-shot inference capabilities, enabling uncertainty-aware decision-making in AI-native networks. Its architecture integrates global optimization with lightweight LLMs, followed by input enrichment and local prompt optimization. Evaluation results on diverse real-world datasets demonstrate improved accuracy and reduced latency, making TimesLM highly competitive against SoTA approaches while reducing operational costs and carbon footprint owing to its lightweight LLMs. A limitations analysis is also presented. The proposed framework stands as a promising and sustainable step towards realizing the envisioned capabilities of AI-native networks through efficient and effective time series analytics.
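To make the prompt-based forecasting idea concrete, the following is a minimal sketch of how a time series could be serialized into a zero-shot prompt for a lightweight LLM and how numeric predictions could be recovered from its textual reply. All function names are illustrative assumptions, not the paper's actual API, and the LLM call itself is left abstract.

```python
def serialize_series(values, precision=2):
    """Encode a numeric history as comma-separated text suitable for an LLM prompt.

    Hypothetical helper; precision controls how many decimals are kept.
    """
    return ", ".join(f"{v:.{precision}f}" for v in values)


def build_zero_shot_prompt(history, horizon):
    """Compose a zero-shot forecasting prompt: no in-context examples,
    only the observed history and an explicit forecast request."""
    return (
        "You are a time series forecaster.\n"
        f"History: {serialize_series(history)}\n"
        f"Continue the series with the next {horizon} values, comma-separated."
    )


def parse_forecast(reply, horizon):
    """Recover up to `horizon` numeric predictions from the model's text reply."""
    tokens = reply.replace("\n", " ").split(",")[:horizon]
    return [float(tok) for tok in tokens]


# Example: build a prompt for a short traffic-load history (illustrative data),
# then parse a hypothetical model reply back into floats.
prompt = build_zero_shot_prompt([10.0, 12.5, 11.0, 13.2], horizon=2)
predictions = parse_forecast("13.8, 14.1", horizon=2)
```

A few-shot variant would simply prepend one or more (history, continuation) pairs to the prompt before the target history, which is the main difference between the two inference modes mentioned above.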