Many application tools now provide video matting functionality, and the accuracy of the matting results is of great importance in practice. Existing video matting methods typically treat a video as a sequence of independent frames and perform matting frame by frame, so the composited video suffers from obvious flickering. We introduce a human video matting method that addresses this problem by exploiting the temporal information present in the video: a recurrent structure propagates information across frames, improving both temporal coherence and matting quality. We train the network on segmentation and matting jointly, and feed the semantic segmentation results into the matting stage as input. The method requires no auxiliary inputs such as a trimap or a pre-captured background image, so it can be widely applied in existing human matting applications. Extensive experiments show that our model outperforms MODNet on all evaluation metrics, with improvements of 2.73 in MAD (Mean Absolute Difference), 1.83 in MSE (Mean Squared Error), 0.46 in Grad (Spatial Gradient), 0.3 in Conn (Connectivity), and 0.49 in dtSSD. We also design a simple, real-time, and user-friendly video matting system that makes it convenient for users to perform video matting.
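
To illustrate the idea of recurrent temporal propagation described above, the following is a minimal sketch, not the authors' implementation: a convolutional GRU carries a hidden state between frames, the per-frame input concatenates the RGB frame with a coarse segmentation mask, and an alpha matte is predicted for each frame. All class and parameter names (ConvGRUCell, MattingRNN, feat) are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """Convolutional GRU cell that carries a hidden state between frames."""
    def __init__(self, channels):
        super().__init__()
        self.gates = nn.Conv2d(2 * channels, 2 * channels, 3, padding=1)
        self.cand = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, x, h):
        if h is None:
            h = torch.zeros_like(x)
        # update gate z and reset gate r
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        n = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * n

class MattingRNN(nn.Module):
    """Per-frame encoder + recurrent refinement + alpha prediction head (sketch)."""
    def __init__(self, feat=32):
        super().__init__()
        # 3 RGB channels + 1 coarse segmentation channel as input
        self.encoder = nn.Sequential(nn.Conv2d(4, feat, 3, padding=1), nn.ReLU())
        self.gru = ConvGRUCell(feat)
        self.alpha_head = nn.Conv2d(feat, 1, 3, padding=1)

    def forward(self, frames, seg_masks):
        # frames: (T, 3, H, W), seg_masks: (T, 1, H, W)
        h, alphas = None, []
        for x, s in zip(frames, seg_masks):
            f = self.encoder(torch.cat([x, s], dim=0).unsqueeze(0))
            h = self.gru(f, h)          # hidden state links consecutive frames
            alphas.append(torch.sigmoid(self.alpha_head(h)))
        return torch.cat(alphas, dim=0)  # (T, 1, H, W) alpha mattes
```

Because the hidden state is shared across time steps, the predicted alpha mattes vary smoothly between consecutive frames, which is the mechanism that reduces the flickering produced by per-frame matting.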