In oncology research, accurate 3D segmentation of lesions from CT scans is essential for extracting 3D radiomics features and modeling lesion growth kinetics. However, following the RECIST criteria, radiologists routinely delineate each lesion only on the axial slice showing the largest transverse area, and only occasionally delineate a small number of lesions in 3D for research purposes. As a result, to train models to segment lesions automatically, we typically have plenty of unlabeled 3D volumes, an adequate number of labeled 2D images, and scarce labeled 3D volumes, which makes training a 3D segmentation model challenging. In this work, we propose a novel U-shaped deep learning model, denoted the multi-dimension unified Swin transformer (MDU-ST), to perform 3D lesion segmentation automatically. The MDU-ST consists of a shifted-window transformer (Swin-transformer) encoder and a convolutional neural network (CNN) decoder, allowing it to adapt to both 2D and 3D inputs and to learn the corresponding semantic information from either with the same encoder. Based on this model, we introduce a three-stage framework to train the model effectively: 1) leveraging a large number of unlabeled 3D lesion volumes through multiple self-supervised pretext tasks to learn the underlying pattern of lesion anatomy in the Swin-transformer encoder; 2) fine-tuning the encoder to perform 2D lesion segmentation on 2D RECIST slices to learn slice-level segmentation information; 3) further fine-tuning the encoder to perform 3D lesion segmentation on labeled 3D volumes to learn volume-level segmentation information. We compare the proposed MDU-ST with state-of-the-art CNN-based and transformer-based segmentation models on an internal dataset of 593 lesions extracted from multiple anatomical locations and delineated in 3D.
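The key architectural idea, a single encoder that consumes both 2D RECIST slices and 3D volumes, can be illustrated with a minimal sketch. This is not the paper's implementation: the patch size, embedding width, and the slice-wise treatment of volumes are illustrative assumptions, standing in for the Swin-transformer's actual windowed patch embedding.

```python
import numpy as np

rng = np.random.default_rng(0)

PATCH = 4        # patch size along each in-plane axis (hypothetical)
EMBED_DIM = 48   # embedding width (hypothetical)

# One shared linear projection for flattened patches; a 2D slice and
# each slice of a 3D volume pass through the same weights, so both
# input dimensionalities train the same encoder parameters.
W = rng.standard_normal((PATCH * PATCH, EMBED_DIM))

def embed_2d(img):
    """Split an HxW slice into PATCHxPATCH patches and project each."""
    h, w = img.shape
    patches = img.reshape(h // PATCH, PATCH, w // PATCH, PATCH)
    patches = patches.transpose(0, 2, 1, 3).reshape(-1, PATCH * PATCH)
    return patches @ W  # (num_patches, EMBED_DIM)

def embed_3d(vol):
    """Embed a DxHxW volume slice by slice with the same projection W."""
    return np.stack([embed_2d(s) for s in vol])  # (D, num_patches, EMBED_DIM)
```

Because the projection `W` is shared, gradients from the 2D fine-tuning stage and the 3D fine-tuning stage update the same parameters, which is what lets the scarce 3D labels build on the more plentiful 2D annotations.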
The network's performance is evaluated with the Dice similarity coefficient (DSC) for volume-based accuracy and the Hausdorff distance (HD) for surface-based accuracy. The MDU-ST trained with the proposed pipeline achieves an average DSC of 0.78 and an average HD of 5.55 mm, a significant improvement over the competing models. The proposed method can be used for automated 3D lesion segmentation to support large-scale radiomics and tumor growth modeling studies.
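The two evaluation metrics are standard and can be sketched directly for binary masks. The function names and the voxel-spacing handling below are illustrative; the HD here is computed over all foreground voxels via SciPy's `directed_hausdorff`, whereas a production evaluation would typically restrict it to surface voxels.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(pred, gt):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def hausdorff_mm(pred, gt, spacing=(1.0, 1.0, 1.0)):
    """Symmetric Hausdorff distance in mm between the foreground
    voxels of two binary masks, scaled by voxel spacing."""
    p = np.argwhere(pred) * np.asarray(spacing)
    g = np.argwhere(gt) * np.asarray(spacing)
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])
```

DSC rewards volumetric overlap and is insensitive to isolated boundary errors, while HD reports the worst-case surface deviation, which is why the two are reported together.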