Deep learning-based medical image segmentation has attracted great attention from both academic researchers and physicians and has made significant progress in recent years. However, there is still room to further improve segmentation accuracy. In this manuscript, a network based on Transformers and graph convolution, TransGraphNet (TGNet), is proposed. The network exploits the advantages of both Transformers and graph convolutional networks (GCNs). The proposed network has an encoder-decoder structure and effectively learns global and local features simultaneously. The Transformer-based encoder extracts global features, and the GCN-based decoder restores the spatial structure of the image, so the proposed network can understand medical images more comprehensively and improve segmentation accuracy. A spatial and channel parallel module (SCPM) is proposed, which is more flexible and can adjust attention to image features at multiple levels. A fully convolutional Transformer attention module (FTAM) is developed to address local details. Together with the SCPM, the FTAM can capture fine structures and features in the image and further improve segmentation performance. Building on GCNs, a graph convolutional hybrid attention module (GCHA) is proposed to handle irregular structures and global relationships in medical image segmentation. Extensive comparison experiments are conducted on the ACDC, Synapse, and Coronacases datasets, showing that the proposed network achieves better accuracy than most existing models. In particular, TransGraphNet achieves performance improvements of 0.55% on the ACDC dataset, 1.6% on the Synapse dataset, and 2.34% on the Coronacases dataset. Ablation studies show that the proposed SCPM, FTAM, and GCHA modules improve segmentation performance significantly.