Tensor robust principal component analysis (TRPCA) is a fundamental technique for recovering low-rank and sparse components from multidimensional data corrupted by noise or outliers. Recently, a method based on tensor-correlated total variation (t-CTV) was introduced, where t-CTV serves as a regularizer to simultaneously encode both the low-rank structure and local smoothness of the tensor, eliminating the need for parameter tuning. However, this method may introduce bias when encoding the low-rank structure of the data, which limits its recovery performance. To address this limitation, we propose a novel weighted t-CTV pseudo-norm that more accurately captures both the low-rank structure and local smoothness of a tensor. Building on this, we introduce the self-adaptive learnable weighted t-CTV (SALW-CTV) method for TRPCA. In contrast to traditional TRPCA methods that use the suboptimal $\ell_1$-norm for noise filtering, our method incorporates an improved weighted $\ell_1$-norm to further enhance recovery performance. Additionally, we design a data-driven, self-adaptive learnable weight selection scheme that dynamically determines the optimal weights for both the weighted t-CTV and the weighted $\ell_1$-norm. To solve the resulting optimization problem, we develop an efficient algorithm and analyze its computational complexity and convergence. Extensive numerical experiments on various datasets validate the superior performance of our proposed method compared to existing state-of-the-art approaches.
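In optimization schemes of this kind, the weighted $\ell_1$-norm term is typically handled through its proximal operator, elementwise weighted soft-thresholding. The sketch below is illustrative only, not the paper's algorithm; the function name and the reweighting rule (weights inversely proportional to entry magnitude, a common choice for reducing the shrinkage bias of the plain $\ell_1$-norm) are assumptions for the example.

```python
import numpy as np

def weighted_soft_threshold(X, W):
    """Proximal operator of the weighted l1-norm sum_i W_i * |X_i|:
    shrinks each entry of X toward zero by its own weight W_i."""
    return np.sign(X) * np.maximum(np.abs(X) - W, 0.0)

# Hypothetical reweighting rule: small entries get large weights and are
# suppressed strongly, while large entries get small weights and are kept
# nearly intact, mitigating the uniform shrinkage of the plain l1-norm.
X = np.array([[3.0, -0.5],
              [0.2, -2.0]])
W = 1.0 / (np.abs(X) + 0.1)   # eps = 0.1 avoids division by zero
S = weighted_soft_threshold(X, W)
```

With uniform weights this reduces to the ordinary soft-thresholding used by $\ell_1$-based TRPCA; the per-entry weights are what allow a data-driven scheme to adapt the shrinkage to each entry.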