Despite the significant advances achieved by recent 3D-Gaussian-based approaches to dynamic scene reconstruction, their efficacy is markedly diminished in monocular settings, particularly under rapid object motion. This issue arises from the inherent one-to-many mapping between a monocular video and the dynamic scene: precise object motion states are difficult to discern from a monocular video, yet different motion states may correspond to distinct scenes. To alleviate this issue, we first explicitly extract object motion-state information from the monocular video with a pretrained video tracking model, TAM, and then separate the 3D Gaussians into static and dynamic subsets based on this information. Second, we present a three-stage training strategy to optimize the 3D Gaussians across distinct motion states. Moreover, we introduce a novel augmentation technique that provides augmented views for supervising the 3D Gaussians, enriching the model with multi-view information that is pivotal for accurately interpreting motion states. Our empirical evaluations on the Nvidia and iPhone datasets, two of the most challenging monocular benchmarks, demonstrate our method's superior reconstruction capabilities over other dynamic Gaussian models.
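To make the static/dynamic separation concrete, the following is a minimal sketch, not the paper's implementation: it assumes per-frame binary motion masks from TAM and precomputed pixel projections of the Gaussian centers, and labels a Gaussian dynamic if it lands on the moving region in enough frames. The function name `split_gaussians`, the `vote_thresh` parameter, and the per-frame voting rule are all hypothetical.

```python
import torch

def split_gaussians(px_coords, motion_masks, vote_thresh=0.5):
    """Label each Gaussian static or dynamic from per-frame motion masks.

    px_coords:    (F, N, 2) long tensor of pixel coordinates of the N
                  Gaussian centers projected into each of F training frames
                  (assumed precomputed from the camera poses).
    motion_masks: (F, H, W) bool tensor from the video tracker,
                  True where a pixel belongs to a moving object.
    vote_thresh:  hypothetical hyperparameter; fraction of frames in which
                  a Gaussian must hit the moving region to count as dynamic.
    """
    F, N, _ = px_coords.shape
    votes = torch.zeros(N)
    for f in range(F):
        x, y = px_coords[f, :, 0], px_coords[f, :, 1]
        # One vote per frame in which the projected center falls on a moving pixel.
        votes += motion_masks[f, y, x].float()
    dynamic = votes / F >= vote_thresh
    return dynamic  # (N,) bool: True = dynamic subset, False = static subset
```

Under this sketch, the resulting boolean mask would index the Gaussian parameters into the two subsets that the subsequent training stages optimize separately.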