Neural networks (NNs), especially deep neural networks (DNNs), have achieved great success in many fields. The ReRAM crossbar is a promising candidate for accelerating neural networks because it performs matrix-vector multiplication (MVM) directly in the analog domain. However, the ReRAM crossbar suffers from large conductance variation caused by many non-ideal effects, which severely degrades inference accuracy. Recent works use uniform quantization to improve tolerance to conductance variation, but these methods still incur high accuracy loss under large variation. In this paper, we first analyze the impact of quantization and conductance variation on accuracy. Then, based on two observations, we propose a quantized training framework that enhances the robustness and accuracy of neural networks running on the accelerator by introducing a non-uniform quantizer. The framework consists of a robust trainable quantizer and a corresponding training method; it requires no extra hardware overhead and is compatible with a standard neural network training procedure. Experimental results show that, compared with the uniform quantization method, our approach improves inference accuracy by 10% to 30% under large variation.
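
The core mechanism described above, a trainable non-uniform quantizer combined with variation-aware training, can be sketched in a few lines. The sketch below is a minimal illustration under stated assumptions, not the paper's actual implementation: it uses a straight-through estimator (STE) so that learnable quantization levels receive gradients, and a log-normal multiplicative noise model for ReRAM conductance variation. The names `TrainableNonUniformQuantizer`, `num_levels`, `inject_variation`, and `sigma` are hypothetical.

```python
import torch
import torch.nn as nn

class TrainableNonUniformQuantizer(nn.Module):
    """Illustrative trainable quantizer with learnable, non-uniformly
    spaced levels (a hypothetical sketch, not the paper's exact design)."""

    def __init__(self, num_levels: int = 16):
        super().__init__()
        # Levels start uniformly spaced in [-1, 1] and are free to move
        # apart during training, making the quantizer non-uniform.
        self.levels = nn.Parameter(torch.linspace(-1.0, 1.0, num_levels))

    def forward(self, w: torch.Tensor) -> torch.Tensor:
        # Hard-assign each weight to its nearest quantization level.
        dist = (w.unsqueeze(-1) - self.levels).abs()
        w_q = self.levels[dist.argmin(dim=-1)]
        # Straight-through estimator: the forward pass outputs w_q, while
        # the gradient w.r.t. w passes through as identity. The levels
        # still receive gradients through the indexing above.
        return w_q + (w - w.detach())

def inject_variation(w_q: torch.Tensor, sigma: float = 0.1) -> torch.Tensor:
    """Simulate conductance variation with log-normal multiplicative
    noise, a common assumption for ReRAM devices (not the paper's
    confirmed noise model)."""
    return w_q * torch.exp(torch.randn_like(w_q) * sigma)

# Usage: quantize, perturb, and backpropagate through both paths.
quantizer = TrainableNonUniformQuantizer(num_levels=16)
w = torch.randn(4, 4, requires_grad=True)
loss = inject_variation(quantizer(w), sigma=0.2).pow(2).sum()
loss.backward()  # gradients reach both w and quantizer.levels
```

In a training loop of this kind, weights are quantized and perturbed in the forward pass so the learned levels adapt to the injected variation; at inference time only the quantized weights are programmed onto the crossbar, which is consistent with the abstract's claim of no extra hardware overhead.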