A capsule network (CapsNet) is a deep learning model for image classification that is robust to changes in the poses of objects in images. A capsule is a vector whose direction represents the presence, position, size, and pose of an object. However, in a CapsNet, the distribution of capsule directions is concentrated within each class, and the number of capsules increases with the number of classes. In addition, training a CapsNet is computationally expensive. We propose a method that increases the diversity of capsule directions and reduces the computational cost of CapsNet training by allowing a single capsule to represent multiple object classes. To enforce angular separation between classes, we use an additive angular margin loss called ArcFace. To validate the proposed method, we examined the distribution of the capsules using principal component analysis. In addition, using the MNIST, Fashion-MNIST, EMNIST, SVHN, and CIFAR-10 datasets, as well as the corresponding affine-transformed datasets, we measured the accuracy and training time of the proposed method and the original CapsNet. Compared with the original CapsNet, the accuracy of the proposed method improved by 8.91% on the CIFAR-10 dataset, and the training time was reduced by more than 19% on every dataset.
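For readers unfamiliar with the additive angular margin loss mentioned above, the following NumPy sketch illustrates the standard ArcFace formulation (penalizing the angle between an embedding and its target class weight by a fixed margin before softmax cross-entropy). It is a generic illustration of ArcFace, not the paper's CapsNet-specific implementation; the scale `s` and margin `m` values are the commonly used defaults, and all function names are chosen here for illustration.

```python
import numpy as np

def arcface_logits(features, weights, labels, s=30.0, m=0.5):
    """Additive angular margin (ArcFace) logits.

    features: (N, D) embeddings; weights: (C, D) class weight vectors;
    labels: (N,) integer class ids; s is the feature scale, m the
    angular margin in radians (defaults follow the ArcFace paper).
    """
    # L2-normalize embeddings and class weights so logits are cosines.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = np.clip(f @ w.T, -1.0, 1.0)   # (N, C) cosine similarities
    theta = np.arccos(cos)
    # Add the margin m only to the angle of each sample's target class.
    one_hot = np.zeros_like(cos)
    one_hot[np.arange(len(labels)), labels] = 1.0
    return s * np.cos(theta + m * one_hot)

def arcface_loss(features, weights, labels, s=30.0, m=0.5):
    """Softmax cross-entropy over the margin-adjusted logits."""
    logits = arcface_logits(features, weights, labels, s, m)
    z = logits - logits.max(axis=1, keepdims=True)  # for stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()
```

Because the margin shrinks the target-class logit, the loss with `m > 0` is at least as large as the plain softmax loss on the same data, which is what forces embeddings of different classes apart by at least the margin angle.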