A general graph optimization task is to compute $\omega_Q(G)$, defined as the maximum size of an induced subgraph of $G$ that has a (hereditary) property $Q$. Computing $\omega_Q(G)$ is well known to be NP-hard for every nontrivial property $Q \in P$, and it often remains hard even to approximate roughly. We show that the problem of determining $\omega_Q(G)$ becomes easier if it is relaxed to accept an answer that is "almost surely almost exact", i.e., the result has a relative error smaller than an arbitrary fixed $\varepsilon > 0$ with probability approaching 1. The probability is taken over the random input graph, which is generated by a generalized random graph model, called the Random Vertex Model, that includes many known random graph models as special cases. In particular, we show that the "almost surely almost exact" optimum value can be found in nonuniform polynomial time for every hereditary property $Q \in P$. As a consequence, we obtain that in this setting constant-factor approximation can no longer be NP-hard, unless the polynomial hierarchy collapses. Thus, if PH does not collapse, then the NP-hardness of approximating $\omega_Q(G)$ over all input graphs can only be caused by a vanishing set of "malicious" graphs. We call this phenomenon the concentration of hardness.
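For concreteness, the "almost surely almost exact" guarantee can be sketched as follows; the algorithm output $A(G)$, the distribution $\mathcal{G}_n$ of random $n$-vertex graphs produced by the Random Vertex Model, and the asymptotic parameter $n$ are illustrative notation introduced here, not taken from the original statement:
\[
  \Pr_{G \sim \mathcal{G}_n}
  \left[
    \frac{\bigl|\, A(G) - \omega_Q(G) \,\bigr|}{\omega_Q(G)} < \varepsilon
  \right]
  \;\longrightarrow\; 1
  \qquad (n \to \infty),
\]
where $A(G)$ denotes the value returned by the (nonuniform polynomial-time) algorithm on input $G$, and the fixed $\varepsilon > 0$ bounds the allowed relative error.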