Let $\{X_n, n \ge 0\}$ be a Markov chain on a general state space $\mathcal{X}$ with transition probability $P$ and stationary probability $\pi$. Suppose an additive component $S_n$, taking values in the real line $\mathbb{R}$, is adjoined to the chain such that $\{(X_n, S_n), n \ge 0\}$ is a Markov random walk. In this paper, we prove a uniform Markov renewal theorem with an estimate on the rate of convergence. This result is applied to boundary crossing problems for $\{(X_n, S_n), n \ge 0\}$. To be more precise, for given $b \ge 0$, define the stopping time $\tau = \tau(b) = \inf\{n : S_n > b\}$. When the drift $\mu$ of the random walk $S_n$ is 0, we derive a one-term Edgeworth-type asymptotic expansion for the first passage probabilities $P_\pi\{\tau < m\}$ and $P_\pi\{\tau < m, S_m < c\}$, where $m \le \infty$, $c \le b$, and $P_\pi$ denotes the probability under the initial distribution $\pi$. When $\mu \ne 0$, Brownian approximations for the first passage probabilities with correction terms are derived. Applications to sequential estimation and truncated tests in random coefficient models, and to first passage times in products of random matrices, are also given.
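The objects above can be made concrete with a small Monte Carlo sketch. The toy model below is an illustration only, not taken from the paper: the driving chain $X_n$ is an AR(1) process, the increments of $S_n$ are the chain values themselves (so the drift $\mu$ is 0), and we estimate the first passage probability $P\{\tau \le m\}$ for $\tau = \inf\{n : S_n > b\}$ by simulation. All parameter choices (the autoregression coefficient, Gaussian noise, sample sizes) are assumptions made for the example.

```python
import random


def simulate_tau(b, m, n_paths=20000, seed=0):
    """Monte Carlo estimate of P{tau <= m}, tau = inf{n : S_n > b}.

    Toy Markov random walk (illustrative assumption, not the paper's model):
    the driving chain is X_{n+1} = 0.5 * X_n + eps_{n+1} with standard
    Gaussian noise, and the additive component has increments
    S_{n+1} - S_n = X_{n+1}, so the drift mu is 0.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_paths):
        x, s = 0.0, 0.0  # start the chain and the walk at 0
        for _ in range(m):
            x = 0.5 * x + rng.gauss(0.0, 1.0)  # Markov chain step
            s += x                             # additive component
            if s > b:                          # boundary crossed: tau <= m
                hits += 1
                break
    return hits / n_paths


if __name__ == "__main__":
    # The estimate decreases in b, as the pathwise definition of tau(b) suggests.
    print(simulate_tau(b=5.0, m=50))
```

Raising the boundary $b$ can only delay the crossing, so the estimated probability is decreasing in $b$; this is the kind of first passage quantity whose refined (Edgeworth-type or corrected Brownian) approximations the paper develops.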