Fraud detection is usually regarded as finding a needle in a haystack, which is a challenging task because fraudulent activities are buried in massive volumes of normal behavior. Indeed, a fraudulent incident usually unfolds over consecutive time steps to gain illegal benefits, which provides unique clues for probing fraud by considering a complete behavioral sequence rather than detecting fraud from a snapshot of behaviors. Also, fraudulent behaviors may involve different parties, such that the interaction pattern between sources and targets can help distinguish fraud from normal behavior. Therefore, in this paper, we model the attributed behavioral sequences generated from consecutive behaviors in order to capture their sequential patterns, while sequences that deviate from these patterns can be regarded as fraudulent. Considering the characteristics of behavioral sequences, we propose a novel model, HAInt-LSTM, which augments the traditional LSTM with a modified forget gate that takes into account the time interval between consecutive time steps. Meanwhile, we employ a self-historical attention mechanism to capture long-term dependencies, which can help identify repeated or cyclical appearances. In addition, we encode the source information as an interaction module to enhance the learning of behavioral sequences. To validate the effectiveness of the learned sequential behavior representations, we experiment on a real-world telecommunication dataset under both supervised and unsupervised scenarios. Experimental results show that the learned representations better identify fraudulent behaviors and exhibit a clear separation from normal sequences in the lower-dimensional embedding space through visualization. Last but not least, we visualize the attention weights to provide a rational interpretation of the periodicity of human behavior.
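The abstract does not spell out the gating equations, so the sketch below illustrates only one plausible reading of an interval-aware forget gate: the gate receives the elapsed time dt between consecutive behaviors as an extra input. All names and shapes (the weight matrices `W`, `U`, the interval weight `v_dt`, and the dimensions) are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class IntervalAwareLSTMCell:
    """Illustrative LSTM cell whose forget gate also sees the elapsed
    interval dt between consecutive behaviors (a sketch under assumed
    notation, not the paper's exact HAInt-LSTM formulation)."""

    def __init__(self, input_dim, hidden_dim, rng=np.random.default_rng(0)):
        d, h = input_dim, hidden_dim
        # Standard LSTM parameters for input (i), forget (f),
        # output (o), and candidate (c) gates.
        self.W = {g: rng.normal(scale=0.1, size=(h, d)) for g in "ifoc"}
        self.U = {g: rng.normal(scale=0.1, size=(h, h)) for g in "ifoc"}
        self.b = {g: np.zeros(h) for g in "ifoc"}
        # Assumed extra weight mapping the scalar interval dt into the forget gate.
        self.v_dt = rng.normal(scale=0.1, size=(h, 1))

    def step(self, x, dt, h_prev, c_prev):
        """One step: x is the behavior's attribute vector, dt the time
        interval since the previous behavior (e.g., in hours)."""
        pre = lambda g: self.W[g] @ x + self.U[g] @ h_prev + self.b[g]
        i = sigmoid(pre("i"))
        # Forget gate additionally conditioned on the elapsed interval,
        # so memory decay can depend on how much time has passed.
        f = sigmoid(pre("f") + (self.v_dt @ np.array([dt])))
        o = sigmoid(pre("o"))
        c_tilde = np.tanh(pre("c"))
        c = f * c_prev + i * c_tilde
        h = o * np.tanh(c)
        return h, c
```

Under this reading, long gaps between behaviors can push the forget gate toward discarding stale memory, while the self-historical attention described above would separately re-weight earlier hidden states to surface repeated or cyclical patterns.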