This paper presents an optimal bound on the Shannon function $L(n,m,\epsilon)$, which gives the worst-case circuit-size complexity of approximating, with approximation degree at least $\epsilon$, partial boolean functions with $n$ inputs and domain of size $m$. More precisely, $L(n,m,\epsilon) = \Theta\!\left(m\epsilon^2/\log(2 + m\epsilon^2)\right) + O(n)$. Our bound applies to any partial boolean function and any approximation degree, and thus completes the study of boolean function approximation introduced by Pippenger (1977). Our results give an upper bound for the hardness function $h(f)$, introduced by Nisan and Wigderson (1994), which denotes the minimum value $l$ for which there exists a circuit of size at most $l$ that approximates a boolean function $f$ with degree at least $1/l$. Indeed, if $H(n)$ denotes the maximum hardness achieved by boolean functions with $n$ inputs, we prove that, for almost every $n$, $H(n) \le 2^{n/3} + n^2 + O(1)$. The exponent $n/3$ in this inequality implies that no family of boolean functions has 'full' hardness. This fact establishes connections with the work of Allender and Strauss (1994) on the structure of BPP. Finally, we show that, for almost every $n$ and almost every boolean function $f$ of $n$ inputs, $h(f) \ge 2^{n/3 - 2\log n}$. The main technical contribution in the proof of the upper bound for $L(n,m,\epsilon)$ is a set of results which globally show that boolean linear operators are 'well' distributed over the class of 4-regular domains. This property is then applied to approximate partial boolean functions on general domains by means of a suitable composition of boolean linear operators.
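For readability, the hardness measure described above can be restated in display form. This is only a rephrasing of the abstract's prose (the precise notion of approximation degree is the one of Nisan and Wigderson), not an additional result:
$$
h(f) \;=\; \min\bigl\{\, l \;:\; \exists\, \text{circuit } C,\ \mathrm{size}(C) \le l,\ C \text{ approximates } f \text{ with degree} \ge 1/l \,\bigr\},
\qquad
H(n) \;=\; \max_{f:\{0,1\}^n \to \{0,1\}} h(f).
$$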