With the explosive growth of multi-view data from diverse sources, multi-view clustering (MVC) has drawn widespread attention. However, existing MVC methods still have notable limitations. First, they rarely account sufficiently for the local invariance within individual data views. Second, view fusion typically relies on weighted averaging, so how to fuse views effectively warrants further exploration. To address these two issues, this paper proposes a multi-channel augmented graph embedding convolutional network (MAGEC-Net) for multi-view clustering, together with an extended end-to-end variant (EMAGEC-Net). Both frameworks are designed to exploit the consistency and complementarity of multi-view data. Specifically, on the one hand, augmented graphs are generated by generative adversarial networks, allowing the information and features of each single view to be explored more comprehensively. On the other hand, each augmented view is treated as a channel and fused by a deep fusion network, which effectively strengthens the complementary information across views. Finally, feature extraction is performed on the fused consistent graph to enable better clustering. Extensive experiments on six challenging real-world datasets demonstrate the effectiveness of the proposed method and its superiority over eight state-of-the-art baselines.
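As a rough illustration of the channel-wise view fusion idea described above, the following is a minimal sketch assuming PyTorch; the module name, tensor shapes, and the learnable 1x1-convolution fusion are illustrative assumptions, not the authors' exact MAGEC-Net design.

```python
# Minimal sketch of channel-wise fusion of view graphs (assumed design,
# not the paper's exact architecture).
import torch
import torch.nn as nn

class ChannelGraphFusion(nn.Module):
    """Fuse V augmented view graphs, stacked as channels, into a single
    consistent graph via a learnable 1x1 convolution over the view axis."""
    def __init__(self, num_views: int):
        super().__init__()
        # A 1x1 conv learns a data-driven combination of views instead of
        # a fixed weighted average.
        self.fuse = nn.Conv2d(num_views, 1, kernel_size=1, bias=False)

    def forward(self, graphs: torch.Tensor) -> torch.Tensor:
        # graphs: (V, N, N) -- one augmented adjacency matrix per view.
        fused = self.fuse(graphs.unsqueeze(0)).squeeze(0).squeeze(0)
        return fused  # (N, N) fused consistent graph

# Usage: fuse three augmented graphs over 100 nodes, then the fused graph
# would be passed to a graph convolutional network for clustering features.
views = torch.rand(3, 100, 100)
fusion = ChannelGraphFusion(num_views=3)
consistent_graph = fusion(views)
```

The design choice sketched here, treating each augmented view as an input channel, replaces a hand-set weighted average with weights learned jointly with the rest of the network, which is the motivation the abstract gives for the deep fusion network.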