We consider functional regression models with noisy outputs resulting from linear transformations. In the setting of regularization theory in reproducing kernel Hilbert spaces (RKHSs), much work has been devoted to building uncertainty bounds around kernel-based estimates, hence characterizing their convergence rates. Such results are typically formulated using either the average squared prediction loss or the RKHS norm. However, in signal processing and in emerging areas such as learning for control, measuring the estimation error through the L1 norm is often more advantageous. For instance, it can provide insights on the convergence rate in the Laplace/Fourier domain, which plays a crucial role in the analysis of dynamical systems. For this reason, we consider all the RKHSs H associated with Lebesgue measurable positive-definite kernels that induce subspaces of L1, also known as stable RKHSs in the literature. The inclusion H ⊂ L1 is then characterized. This permits converting all the error bounds that depend on the RKHS norm into bounds in the L1 norm. We also show that our result is optimal: no better reformulation of the bounds in terms of the L1 norm exists than the one presented here.