Based on item response theory, Bock and Aitkin introduced a method of item factor analysis, termed full-information item factor (FIIF) analysis by Bartholomew because it uses all distinct item response vectors as data. A limitation of their fitting algorithm, however, is its reliance on fixed-point Gauss-Hermite quadrature, which can produce appreciable numerical errors, especially in high-dimensional problems. The first purpose of this article is to offer more reliable methods by using recent advances in statistical computation. Specifically, we illustrate two ways of implementing the Monte Carlo Expectation Maximization (EM) algorithm to fit a FIIF model, using the Gibbs sampler to carry out the computation for the E steps. We also show how to use bridge sampling to simulate the likelihood ratios for monitoring the convergence of a Monte Carlo EM algorithm, a strategy that is useful in general. Simulations demonstrate substantial improvement over Bock and Aitkin's algorithm in recovering known factor loadings in high dimensions. To test our methods, we also apply them to data from the LSAT and from a survey on the quality of American life, and compare the results to those from the fixed-point approach. Using the FIIF model as a working example, the second purpose of this article is to provide an empirical investigation of the theoretical development of Meng and Wong on bridge sampling, an efficient method for computing normalizing constants. In contrast to importance sampling, which uses draws from one density, bridge sampling uses draws from two (or more) densities and introduces intermediate densities to "bridge" them. Our empirical investigation confirms the results of Meng and Wong and echoes the empirical evidence documented in computational physics; that is, bridge sampling can reduce simulation errors by orders of magnitude compared to importance sampling with the same simulation sizes.
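
To make the Monte Carlo EM idea concrete, the sketch below fits a one-factor, two-parameter normal-ogive item response model by Monte Carlo EM, with a data-augmentation Gibbs sampler carrying out the E-step computation. It is a simplified illustration under our own assumptions (a single factor, simulated data, small simulation sizes, and made-up variable names), not the article's FIIF implementation.

```python
# A minimal sketch, assuming a one-factor simplification of the FIIF setting:
# Monte Carlo EM for a two-parameter normal-ogive item response model,
#   P(y_ij = 1 | theta_i) = Phi(a_j * theta_i - b_j),   theta_i ~ N(0, 1),
# with an Albert-style data-augmentation Gibbs sampler performing the E step.
# Sizes, starting values, and names are illustrative, not the authors' code.
import numpy as np
from scipy.stats import norm, truncnorm

rng = np.random.default_rng(1)

# Simulate a small data set with known loadings so recovery can be checked.
n, p = 500, 5                          # persons, items
a_true = rng.uniform(0.8, 1.5, p)      # slopes (factor loadings)
b_true = rng.uniform(-1.0, 1.0, p)     # intercepts
theta_true = rng.normal(size=n)
y = (rng.random((n, p)) < norm.cdf(np.outer(theta_true, a_true) - b_true)).astype(int)

a, b = np.ones(p), np.zeros(p)         # starting values (sign indeterminacy ignored)
theta = np.zeros(n)                    # latent abilities, carried across iterations
M = 50                                 # Gibbs draws per E step (no burn-in, for brevity)

for it in range(30):                   # Monte Carlo EM iterations
    sxx = np.zeros((2, 2))             # accumulated X'X, with X = [theta, -1]
    sxz = np.zeros((2, p))             # accumulated X'Z

    for m in range(M):
        # Gibbs step 1: Z_ij | theta, y ~ N(a_j*theta_i - b_j, 1), truncated to
        # (0, inf) when y_ij = 1 and to (-inf, 0) when y_ij = 0.
        mean = np.outer(theta, a) - b
        lo = np.where(y == 1, -mean, -np.inf)
        hi = np.where(y == 1, np.inf, -mean)
        z = truncnorm.rvs(lo, hi, loc=mean, scale=1.0, random_state=rng)

        # Gibbs step 2: theta_i | Z ~ N(m_i, v), with v = 1/(1 + sum_j a_j^2)
        # and m_i = v * sum_j a_j * (Z_ij + b_j).
        v = 1.0 / (1.0 + a @ a)
        theta = rng.normal(v * ((z + b) @ a), np.sqrt(v))

        # Accumulate sufficient statistics for the M step.
        X = np.column_stack([theta, -np.ones(n)])
        sxx += X.T @ X
        sxz += X.T @ z

    # M step: least squares of Z_ij on (theta_i, -1), pooled over the M draws.
    a, b = np.linalg.solve(sxx, sxz)

print("true slopes     :", np.round(a_true, 2))
print("estimated slopes:", np.round(a, 2))
```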
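
To make the contrast between importance sampling and bridge sampling concrete, the following sketch estimates the ratio of normalizing constants of two unnormalized univariate Gaussian densities with both estimators, using the geometric bridge function; the densities, sample sizes, and names are illustrative assumptions, not the article's FIIF application.

```python
# A minimal sketch (not the authors' code) contrasting importance sampling with
# a geometric-bridge estimator of a ratio of normalizing constants r = c1/c2,
# using two unnormalized Gaussian densities whose true constants are known.
import numpy as np

rng = np.random.default_rng(0)

# Unnormalized densities q_i(x) = exp(-(x - mu_i)^2 / (2 s_i^2)),
# so c_i = sqrt(2*pi) * s_i and the true ratio is s1 / s2.
mu1, s1 = 0.0, 1.0
mu2, s2 = 2.5, 1.5
q1 = lambda x: np.exp(-0.5 * ((x - mu1) / s1) ** 2)
q2 = lambda x: np.exp(-0.5 * ((x - mu2) / s2) ** 2)
true_ratio = s1 / s2

n = 5000
x1 = rng.normal(mu1, s1, n)          # draws from p1 = q1/c1
x2 = rng.normal(mu2, s2, n)          # draws from p2 = q2/c2

# Importance sampling uses only the draws from p2:
#   r_hat_IS = mean( q1(x) / q2(x) ),  x ~ p2.
r_is = np.mean(q1(x2) / q2(x2))

# Bridge sampling uses both sets of draws, here with the geometric bridge
# alpha = 1/sqrt(q1*q2):
#   r_hat_BS = mean( sqrt(q1/q2)(x), x ~ p2 ) / mean( sqrt(q2/q1)(x), x ~ p1 ).
r_bs = np.mean(np.sqrt(q1(x2) / q2(x2))) / np.mean(np.sqrt(q2(x1) / q1(x1)))

print(f"true ratio      : {true_ratio:.4f}")
print(f"importance samp.: {r_is:.4f}")
print(f"bridge sampling : {r_bs:.4f}")
```

Replicating the two estimators over repeated simulations and comparing their Monte Carlo errors gives a small-scale, toy version of the kind of comparison the article reports for the FIIF model.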