In the generalized linear model and its relatives, the loss depends on the parameter via a transformation (the inverse link function) of the linear predictor. In this chapter no such structure is assumed. Moreover, the chapter hints at cases where the effective parameter space is very large and the localization arguments discussed so far cannot be applied (the graphical Lasso is an example). With the help of Brouwer's fixed-point theorem it is shown that when $R(\beta)$ is $\Omega_*$-close to its linear approximation whenever $\beta$ is $\Omega_*$-close to the target $\beta^0$, then the $\Omega$-structured sparsity M-estimator $\hat\beta$ is also $\Omega_*$-close to the target. Here, the inverse $\ddot R^{-1}(\beta^0)$ of the second-derivative matrix is assumed to have $\Omega$-small enough rows, where $\Omega$ is the dual norm of $\Omega_*$. Next, weakly decomposable norms $\Omega$ are considered. A generalized irrepresentable condition on a set $S$ of indices yields that there is a solution $\tilde\beta_S$ of the KKT conditions with zeros outside the set $S$. At such a solution $\tilde\beta_S$ the problem is, under certain conditions, localized, so that one can apply the linear approximation of $R(\beta)$. This scheme is carried out for exponential families and in particular for the graphical Lasso.
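To make the final example concrete, the graphical Lasso mentioned above is the $\ell_1$-penalized maximum-likelihood estimator of a precision matrix in a Gaussian model; its standard objective (a well-known formulation, not taken verbatim from the chapter) can be sketched as:

```latex
% Graphical Lasso: given a sample covariance matrix \hat\Sigma,
% estimate the precision matrix \Theta = \Sigma^{-1} by
\[
  \hat\Theta
  = \operatorname*{arg\,min}_{\Theta \succ 0}
    \left\{
      \operatorname{trace}\bigl(\hat\Sigma \Theta\bigr)
      - \log\det\Theta
      + \lambda \,\|\Theta\|_1
    \right\},
\]
% where \|\Theta\|_1 is an entrywise \ell_1-norm (an instance of a
% weakly decomposable norm \Omega) and \lambda > 0 is a tuning parameter.
```

The off-diagonal zero pattern of $\hat\Theta$ then encodes conditional independencies, which is why sparsity-inducing penalties and irrepresentable-type conditions on an index set $S$ are natural in this setting.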