In the free viewpoint video (FVV) framework, a large number of viewpoints is generated from a limited number of captured views to significantly reduce the amount of video data that must be transmitted, received, and processed. To generate a virtual view, the disparity between adjacent views or the temporal correlation between frames of the intermediate views is normally exploited. These techniques may suffer from poor rendering quality caused by missing pixel values (i.e., holes) arising from occluded regions, rounding errors, and disparity discontinuities. Recent techniques address these problems with inpainting; however, they still suffer quality degradation due to the lack of spatial correlation at foreground-background boundary areas. Background-updating techniques based on Gaussian mixture modelling (GMM) can improve quality in some occluded areas; however, because they depend on warping a background image and on spatial correlation, they also suffer quality degradation. In this paper, we propose a view synthesized prediction using Gaussian model (VSPGM) technique that uses the number of GMM models, rather than the background image, to identify background/foreground pixels. Missing background pixels are recovered from the background model pixel, while missing foreground pixels are recovered from a weighted average of the warped pixel and the foreground model pixel. Experimental results show that the proposed approach improves the PSNR of the synthesized view by 0.50 to 2.14 dB compared with state-of-the-art methods. To verify the effectiveness of the proposed synthesized view, we use it as a reference frame, together with the immediately preceding frame of the current view, in motion estimation for multi-view video coding (MVC). The experimental results confirm that the proposed technique improves PSNR by 0.17 to 1.00 dB compared to the conventional three reference frames.
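To illustrate the kind of per-pixel GMM bookkeeping described above, the following Python sketch classifies each pixel by the number of Gaussian modes accumulated at its location and fills dis-occlusion holes accordingly: pixels with a single stable mode are treated as background and recovered from that mode, while pixels with several modes are treated as foreground and recovered from a weighted average of a nearby warped pixel and a foreground-model mean. All constants (K, ALPHA, MATCH_SIGMA, W_FG), the single-channel intensity model, and the left-neighbour choice for the warped pixel are illustrative assumptions, not the paper's actual VSPGM parameters.

```python
import numpy as np

K = 3             # assumed maximum number of Gaussian modes per pixel
ALPHA = 0.05      # assumed online learning rate
MATCH_SIGMA = 2.5 # assumed match threshold (in standard deviations)
W_FG = 0.6        # assumed weight given to the foreground-model pixel

class PixelGMM:
    """Online Gaussian mixture model for a single pixel intensity."""
    def __init__(self):
        self.means, self.vars, self.weights = [], [], []

    def update(self, x):
        """Update the mixture with observation x; add a new mode if none match."""
        for i, (m, v) in enumerate(zip(self.means, self.vars)):
            if abs(x - m) <= MATCH_SIGMA * np.sqrt(v):   # matched an existing mode
                self.weights[i] += ALPHA * (1.0 - self.weights[i])
                self.means[i] += ALPHA * (x - m)
                self.vars[i] += ALPHA * ((x - m) ** 2 - v)
                return
        if len(self.means) < K:                          # otherwise start a new mode
            self.means.append(float(x))
            self.vars.append(15.0 ** 2)
            self.weights.append(ALPHA)

    def num_modes(self):
        return len(self.means)

    def background_mean(self):
        """Mean of the highest-weight mode, used as the background estimate."""
        return self.means[int(np.argmax(self.weights))]

    def foreground_mean(self):
        """Mean of the lowest-weight mode, used as a foreground estimate."""
        return self.means[int(np.argmin(self.weights))]


def fill_holes(warped, hole_mask, models):
    """Fill holes in a warped view using per-pixel GMM statistics."""
    out = warped.astype(np.float64).copy()
    for y, x in zip(*np.nonzero(hole_mask)):
        gmm = models[y][x]
        if gmm.num_modes() == 0:
            continue                                     # no model yet; leave pixel as-is
        if gmm.num_modes() == 1:                         # few modes -> static background
            out[y, x] = gmm.background_mean()
        else:                                            # many modes -> moving foreground
            neighbor = out[y, max(x - 1, 0)]             # left neighbour in the warped view (simplistic choice)
            out[y, x] = W_FG * gmm.foreground_mean() + (1 - W_FG) * neighbor
    return out.astype(np.uint8)


# Toy usage: train per-pixel models on a few grayscale frames, then fill one hole.
rng = np.random.default_rng(0)
frames = [rng.integers(0, 256, (4, 4), dtype=np.uint8) for _ in range(10)]
models = [[PixelGMM() for _ in range(4)] for _ in range(4)]
for f in frames:
    for y in range(4):
        for x in range(4):
            models[y][x].update(float(f[y, x]))

warped = frames[-1].copy()
hole_mask = np.zeros((4, 4), dtype=bool)
hole_mask[1, 2] = True                                   # a synthetic dis-occlusion hole
print(fill_holes(warped, hole_mask, models))
```

In this sketch the mode count serves only as a simple background/foreground indicator; the paper's actual decision rule, update schedule, and weighting between the warped and model pixels may differ.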