We propose a communication- and computation-efficient second-order method for distributed optimization. At each iteration, our method requires only $O(d)$ communication, where $d$ is the problem dimension. We also provide a theoretical analysis showing that the proposed method achieves a convergence rate comparable to that of classical second-order optimization algorithms. Concretely, for nonconvex problems our method finds an $(\epsilon, \sqrt{dL\epsilon})$-second-order stationary point within $O(\sqrt{dL}\,\epsilon^{-3/2})$ iterations, where $L$ is the Lipschitz constant of the Hessian. Moreover, it enjoys local superlinear convergence under the strong convexity assumption. Experiments on both convex and nonconvex problems show that the proposed method performs significantly better than the baselines.
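Here an $(\epsilon, \sqrt{dL\epsilon})$-second-order stationary point is understood in the usual sense (stated for reference under the common convention):
\[
\|\nabla f(x)\| \le \epsilon
\quad\text{and}\quad
\lambda_{\min}\!\bigl(\nabla^2 f(x)\bigr) \ge -\sqrt{dL\epsilon}.
\]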