Face swapping aims to synthesize a face image in which the facial identity is faithfully transplanted from the source image while the context (e.g., hairstyle, head posture, facial expression, lighting, and background) remains consistent with the reference image. Prior work mainly accomplishes the task in two stages: generating the inner face with the source identity, and then stitching the generated face to the complementary part of the reference image with image blending techniques. The blending mask, usually obtained from an additional face segmentation model, is a common ingredient of photo-realistic face swapping. However, artifacts often appear at the blending boundary, especially in areas occluded by hair, eyeglasses, accessories, etc. To address this problem, rather than struggling with the blending mask in the two-stage routine, we develop a novel one-stage context and identity hallucination network, which learns a series of hallucination maps to softly divide the image into context areas and identity areas. For context areas, features are fully exploited by a multi-level context encoder. For identity areas, we design a novel two-cascading AdaIN to transfer the identity while retaining the context. Moreover, with the help of the hallucination maps, we introduce an improved reconstruction loss that effectively utilizes unlimited unpaired face images for training. Our network performs well on both context areas and identity areas without any dependency on post-processing. Extensive qualitative and quantitative experiments demonstrate the superiority of our network.
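To make the two core mechanisms concrete, the sketch below illustrates (a) AdaIN-style identity injection, where per-channel statistics predicted from an identity embedding restyle normalized features, and (b) soft blending with a learned hallucination map instead of a hard segmentation mask. This is a minimal PyTorch sketch under illustrative assumptions: the class names, dimensions, and single-map predictor are hypothetical simplifications, not the paper's exact architecture (which uses a series of hallucination maps, a multi-level context encoder, and a two-cascading AdaIN).

```python
import torch
import torch.nn as nn

class AdaIN(nn.Module):
    """Adaptive Instance Normalization: restyles content features with
    per-channel scale/bias predicted from an identity embedding."""
    def __init__(self, id_dim, channels):
        super().__init__()
        self.to_scale = nn.Linear(id_dim, channels)
        self.to_bias = nn.Linear(id_dim, channels)
        self.norm = nn.InstanceNorm2d(channels, affine=False)

    def forward(self, x, id_emb):
        gamma = self.to_scale(id_emb).unsqueeze(-1).unsqueeze(-1)
        beta = self.to_bias(id_emb).unsqueeze(-1).unsqueeze(-1)
        return gamma * self.norm(x) + beta

class HallucinationBlend(nn.Module):
    """Predicts a soft map M in [0, 1] and mixes identity-transferred
    features with context features; no hard segmentation mask needed."""
    def __init__(self, channels):
        super().__init__()
        self.to_map = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )

    def forward(self, id_feat, ctx_feat):
        m = self.to_map(ctx_feat)  # soft division of identity/context areas
        return m * id_feat + (1.0 - m) * ctx_feat, m

# Usage sketch (shapes are illustrative):
ctx = torch.randn(2, 64, 32, 32)   # context features from the reference image
id_emb = torch.randn(2, 512)       # identity embedding from the source image
adain = AdaIN(512, 64)
blend = HallucinationBlend(64)
fused, hmap = blend(adain(ctx, id_emb), ctx)
```

Because the map is continuous rather than binary, the transition between identity and context regions is learned end-to-end, which is what lets a one-stage network avoid the boundary artifacts of mask-based blending.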