Apart from high accuracy, what interests many researchers and practitioners in real-life tabular learning problems (e.g., fraud detection and credit scoring) is uncovering hidden patterns in the data and/or providing meaningful justifications for the decisions made by machine learning models. This raises an important question: should one use inherently interpretable models, or explain full-complexity models such as XGBoost or Random Forest with post hoc tools? Opting for the latter is typically justified by higher accuracy, but the performance gap is not always significant, especially given the current trend of accurate and inherently interpretable models, and once other practical evaluation criteria, such as the faithfulness, stability, and computational cost of explanations, are taken into account. In this work, we show through benchmarking on 45 datasets that the relative accuracy loss is less than 4% on average when using intelligible models such as the Explainable Boosting Machine. Furthermore, we propose a simple use of model ensembling to improve the expressiveness of TabSRALinear, a novel attention-based inherently interpretable solution, and we demonstrate both theoretically and empirically that it is a viable option for (1) generating stable or robust explanations and (2) incorporating human knowledge during the training phase. Source code is available at https://github.com/anselmeamekoe/TabSRA.
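To give a concrete sense of the "simple use of model ensembling" mentioned above, the following is a minimal sketch, not the paper's exact implementation: it averages the probabilistic outputs of several independently trained interpretable models. The `TabSRALinear` class and its scikit-learn-style `fit`/`predict_proba` interface are assumptions for illustration; the actual API in the repository may differ.

```python
# Minimal ensembling sketch (illustrative, not the authors' exact code).
# Assumes each base model follows a scikit-learn-style fit/predict_proba API.
import numpy as np


class SimpleAveragingEnsemble:
    def __init__(self, base_estimators):
        # base_estimators: list of already-configured interpretable models,
        # e.g. several TabSRALinear instances with different random seeds.
        self.base_estimators = base_estimators

    def fit(self, X, y):
        # Train each member on the same data (bootstrapped subsets would
        # also work and is a common variant).
        for est in self.base_estimators:
            est.fit(X, y)
        return self

    def predict_proba(self, X):
        # The ensemble prediction is the plain average of the members'
        # probabilities; per-feature contributions of additive members can
        # be averaged the same way, keeping a feature-level explanation.
        probas = [est.predict_proba(X) for est in self.base_estimators]
        return np.mean(probas, axis=0)
```

Because the combination is a simple average, any additive, feature-level attribution produced by the individual members can be aggregated in the same way, which is what makes this kind of ensembling compatible with stable explanations.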