Despite the large number of methods available in the literature, the practical use of multiobjective optimization tools in industry is still an open issue. A strategy for reducing the number of objective function evaluations, at a fixed degree of Pareto optimal front (PF) approximation accuracy, is therefore essential. To this aim, an extension of single-objective Generalized Response Surface (GRS) methods to PF approximation is proposed. Such an extension is far from straightforward, owing to the usually complex shape of the Pareto optimal set (PS) as well as the non-linear relation between the PS and the PF. As a consequence of this complexity, it is extremely difficult to identify a multiobjective analogue of the single-objective current optimum region. Therefore, the search-space zooming strategy around the current optimum region in the design domain, which is the core of a GRS method, has to be carefully reconsidered when PF approximation is the goal. In this paper, a GRS strategy for multiobjective optimization is proposed, which links the optimization (based on evolutionary computation) to the interpolation (based on Neural Networks). The strategy is explained in detail and tested on several test cases. Moreover, a detailed analysis of approximation errors and computational cost is given, together with a description of real-life applications.
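To make the overall idea concrete, the following is a minimal sketch of such a surrogate-assisted multiobjective GRS loop. It assumes an illustrative bi-objective test problem, a Gaussian RBF interpolator standing in for the neural-network surrogate, and plain random sampling standing in for the evolutionary search; the problem definition, kernel width, zoom factor, and evaluation budget are hypothetical choices for illustration only, not the settings used in the paper.

```python
# Sketch of a multiobjective GRS loop: cheap search on a surrogate, zooming of
# the design domain around the current non-dominated designs, and re-training
# of the surrogate with a few new expensive evaluations per iteration.
import numpy as np

rng = np.random.default_rng(0)

def objectives(x):
    """Illustrative bi-objective problem (both objectives minimized)."""
    f1 = np.sum(x**2, axis=-1)
    f2 = np.sum((x - 2.0)**2, axis=-1)
    return np.stack([f1, f2], axis=-1)

def fit_rbf(X, Y, width=1.0):
    """Gaussian RBF interpolator, used here as a stand-in for the NN surrogate."""
    d2 = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)
    Phi = np.exp(-d2 / (2.0 * width**2))
    W = np.linalg.solve(Phi + 1e-8 * np.eye(len(X)), Y)
    return X, W, width

def predict_rbf(model, Xq):
    Xc, W, width = model
    d2 = np.sum((Xq[:, None, :] - Xc[None, :, :])**2, axis=-1)
    return np.exp(-d2 / (2.0 * width**2)) @ W

def non_dominated(F):
    """Indices of non-dominated points (minimization)."""
    keep = []
    for i, fi in enumerate(F):
        dominated = np.any(np.all(F <= fi, axis=1) & np.any(F < fi, axis=1))
        if not dominated:
            keep.append(i)
    return np.array(keep)

# Initial design domain and initial set of expensive evaluations.
lo, hi = np.array([-1.0, -1.0]), np.array([3.0, 3.0])
X = rng.uniform(lo, hi, size=(20, 2))
F = objectives(X)

for it in range(5):
    model = fit_rbf(X, F)
    # Exploration on the cheap surrogate (random sampling here stands in
    # for a full evolutionary multiobjective algorithm).
    cand = rng.uniform(lo, hi, size=(500, 2))
    elite = cand[non_dominated(predict_rbf(model, cand))]
    # Zoom the search space around the current non-dominated designs.
    span = elite.max(axis=0) - elite.min(axis=0)
    lo = np.maximum(lo, elite.min(axis=0) - 0.25 * span)
    hi = np.minimum(hi, elite.max(axis=0) + 0.25 * span)
    # Spend a few true evaluations on promising designs and enrich the surrogate.
    new = elite[rng.choice(len(elite), size=min(5, len(elite)), replace=False)]
    X = np.vstack([X, new])
    F = np.vstack([F, objectives(new)])

print("approximate Pareto set size:", len(non_dominated(F)))
```

In this sketch the expensive objective calls are confined to the small set of designs selected at each zooming step, which is the mechanism by which a GRS-type strategy keeps the evaluation budget low for a given PF approximation accuracy.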