Approximate computation is a central concept in algorithms and the theory of computation. Our notion of approximation is that the algorithm answers correctly on most inputs. We propose several finite-automaton models to study the question of how well a finite automaton can approximately recognize a non-regular language. On the one hand, we show that there are natural problems for which a DFA can correctly solve almost all instances, but not all of them; an example of such a problem is a decision question about the number of digits in the square of a given integer. On the other hand, we show that some languages, such as $L_{\mathrm{majority}} = \{\, x \in (0+1)^* \mid x \text{ has more 1's than 0's} \,\}$, cannot be approximated by any regular language in a strong sense. We also show that there are problems intermediate between these two extremes in terms of how well a regular language can approximate them; an example of such a problem is a decision question about the number of digits in the product of two integers. Finally, we present results comparing the different models of approximation.
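To make the first example concrete, here is a small Python sketch (not taken from the paper; the constant K, the stored digits, and the function name are illustrative assumptions) of one plausible construction. Whether x^2 has 2d or 2d-1 digits, where d is the number of decimal digits of x, depends only on whether x >= sqrt(10) * 10^(d-1), so a left-to-right scanner can compare x's digits against the digits of sqrt(10). A finite automaton can hard-code only finitely many of those digits, so it errs exactly on the vanishing fraction of inputs that share that prefix, and since sqrt(10) is irrational no finite automaton can be correct on all inputs.

```python
# Sketch of a DFA-style scanner for: "does x**2 have 2*len(x) digits?"
# It compares x's decimal digits against a hard-coded prefix of sqrt(10);
# K is the (hypothetical) number of digits the automaton remembers.

SQRT10_DIGITS = "31622776601683793319"  # leading digits of sqrt(10)
K = 8  # finite memory: digits of sqrt(10) the automaton stores (assumption)

def square_has_even_digit_count(x: str) -> bool:
    """Approximate answer: True iff x**2 is claimed to have 2*len(x) digits.

    Correct unless x's first K digits all coincide with sqrt(10)'s,
    i.e., on all but a 10**(-K) fraction of the d-digit inputs.
    """
    for i, c in enumerate(x):
        if i >= K:
            return False          # beyond finite memory: fixed guess
        if c > SQRT10_DIGITS[i]:
            return True           # x > sqrt(10)*10**(d-1): 2d digits
        if c < SQRT10_DIGITS[i]:
            return False          # x < sqrt(10)*10**(d-1): 2d-1 digits
    # x is a proper prefix of sqrt(10)'s digits, hence strictly smaller
    return False

if __name__ == "__main__":
    for x in ("4", "31", "317", "3162", "316227767"):
        claim = square_has_even_digit_count(x)
        truth = len(str(int(x) ** 2)) == 2 * len(x)
        print(f"x={x}: automaton says {claim}, truth is {truth}")
```

On the test inputs above, the scanner is correct except on 316227767, whose first K = 8 digits match the stored prefix of sqrt(10); increasing K shrinks the error fraction but, because the digit sequence of sqrt(10) is not eventually periodic, no finite K eliminates it.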