Considering the rapid deployment of embedded surveillance video systems for fire monitoring, there is a need for systems that combine high accuracy with fast detection. Recent vision-based fire detection techniques have achieved remarkable success owing to the powerful representational ability of deep convolutional neural networks, and CNNs have long been the architecture of choice for computer vision tasks. However, current CNN-based methods treat all image pixels as equally important for fire classification and ignore contextual information, which can lower accuracy and delay detection. To increase detection speed and achieve high accuracy, we propose a fire detection approach based on the Vision Transformer as a viable alternative to CNNs. Unlike convolutional networks, transformers process an image as a sequence of patches and selectively attend to different image regions based on context. In addition, the attention mechanism helps the model localize small flames, thereby enabling fire detection at an early stage. Because standard transformers rely on global self-attention, which is computationally expensive, we adopt a fine-tuned Swin Transformer as our backbone architecture, which computes self-attention within local windows and thus remains tractable for high-resolution images. Experimental results on an image fire dataset demonstrate the promising capability of the model compared with state-of-the-art methods. Specifically, the Vision Transformer obtains a classification accuracy of 98.54% on the publicly available dataset.
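As a rough illustration of the approach described above, the sketch below fine-tunes a window-attention Swin backbone for binary fire / non-fire classification with PyTorch and timm. The specific model variant, optimizer, and hyperparameters are assumptions for illustration only, not the settings reported in this work.

```python
import torch
import torch.nn as nn
import timm

# Hypothetical sketch: fine-tune a Swin Transformer (self-attention computed
# within local windows) for binary fire vs. non-fire classification.
# The backbone variant and hyperparameters are illustrative assumptions.
model = timm.create_model(
    "swin_base_patch4_window7_224",  # windowed self-attention, 224x224 input
    pretrained=True,
    num_classes=2,                   # fire / non-fire
)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One fine-tuning step on a batch of 224x224 RGB frames."""
    model.train()
    optimizer.zero_grad()
    logits = model(images)           # (B, 2) class scores
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice, the pretrained backbone is kept and only the classification head is replaced, so the local-window attention learned on large-scale data transfers to the fire classification task with modest fine-tuning.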