Multi-way arrays (tensors) naturally model multi-relational data, and RESCAL is a popular tensor-based relational learning model. Despite its documented success, the original RESCAL solver is sensitive to outliers, arguably due to its L2-norm formulation. Absolute-projection RESCAL (A-RESCAL), an L1-norm reformulation of RESCAL, has been proposed as an outlier-resistant alternative. However, although efficient algorithms have been proposed for both models, they optimize the factor matrices corresponding to the first and second modes of the data tensor independently, and, to our knowledge, no formal guarantee has been presented that this treatment always yields monotonic convergence of the original, non-relaxed RESCAL formulation. In this work, we propose a novel L1-norm-based algorithm that resolves this issue while retaining robustness of the same nature as that of A-RESCAL. Additionally, we show that our proposed method is closely related to a well-studied problem in the optimization literature, which endows it with numerical stability and computational efficiency. Lastly, we present a series of numerical studies on artificial and real-world datasets that corroborate the robustness advantages of the L1-norm formulation over its L2-norm counterpart.
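To make the outlier-sensitivity claim concrete: RESCAL factorizes each frontal slice X_k of the data tensor as A R_k A^T, with the entity factor A shared across relations. The NumPy sketch below (all names, dimensions, and the synthetic data are ours, for illustration only) shows why a single gross outlier inflates an L2-norm fitting cost quadratically but an L1-norm cost only linearly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative RESCAL factors: n entities, r latent components, m relations.
n, r, m = 20, 4, 3
A = rng.standard_normal((n, r))     # shared entity factor matrix
R = rng.standard_normal((m, r, r))  # per-relation core matrices

# Each frontal slice of the data tensor follows X_k = A R_k A^T exactly.
X = np.stack([A @ R[k] @ A.T for k in range(m)])

# Corrupt one slice with a single gross outlier entry.
X_noisy = X.copy()
X_noisy[0, 0, 0] += 100.0

def l2_cost(X, A, R):
    """Sum of squared residuals over all slices (L2-norm RESCAL loss)."""
    return sum(np.sum((X[k] - A @ R[k] @ A.T) ** 2) for k in range(len(R)))

def l1_cost(X, A, R):
    """Sum of absolute residuals over all slices (L1-norm loss)."""
    return sum(np.sum(np.abs(X[k] - A @ R[k] @ A.T)) for k in range(len(R)))

# All residuals are zero except the outlier, so the L2 cost is 100^2 = 10000
# while the L1 cost is only 100 -- the outlier dominates the L2 objective.
print(l2_cost(X_noisy, A, R))  # 10000.0
print(l1_cost(X_noisy, A, R))  # 100.0
```

Because the L2 objective squares residuals, fitting tends to bend the factors toward the corrupted entry, whereas the L1 objective penalizes it only in proportion to its magnitude; this is the intuition behind the robustness advantage claimed above.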