This paper presents a novel membership inference framework built on an essential property of machine learning models: a model behaves differently on its training samples than on unseen data, and this difference extends to augmented versions of those samples. We leverage this property by exploiting the discrepancies between data samples and their augmented versions, i.e., self-comparison, to enable effective membership inference. In essence, we propose a solution based on a self-comparison classifier, which relies on a shadow model to mimic the output patterns of the target model. Three comparative metrics are then designed to measure the difference between the output vector of each sample and those of its augmented versions, as queried from the shadow model. These self-comparison differences are used to train a binary classifier that determines whether a probing sample belongs to the training dataset. Extensive experiments show that our proposed self-comparison classifier: 1) consistently outperforms its counterparts, 2) can be applied to different learning paradigms (i.e., supervised, semi-supervised, and unsupervised learning) under different types of background knowledge, and 3) achieves superior performance even in the full black-box scenario.
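To make the pipeline concrete, the sketch below illustrates the self-comparison feature extraction step. The abstract does not specify the three comparative metrics, the shadow model, or the augmentation; here we assume, purely for illustration, an L1 distance, a cosine distance, and a KL divergence between softmax output vectors, a toy random linear layer as a stand-in shadow model, and small additive noise as a stand-in augmentation. The resulting three-dimensional feature vector is what would be fed to the binary membership classifier.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_comparison_features(p, p_aug):
    """Three illustrative comparative metrics between the shadow model's
    output vector p for a sample and p_aug for its augmented version.
    The paper's actual metrics are not given in the abstract; these are
    placeholder choices (L1 distance, cosine distance, KL divergence)."""
    l1 = np.abs(p - p_aug).sum()
    cos = 1.0 - (p @ p_aug) / (np.linalg.norm(p) * np.linalg.norm(p_aug))
    kl = np.sum(p * np.log((p + 1e-12) / (p_aug + 1e-12)))
    return np.array([l1, cos, kl])

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 5))            # toy shadow model: linear layer + softmax
shadow = lambda x: softmax(x @ W)

x = rng.normal(size=8)                 # a probing sample
x_aug = x + 0.05 * rng.normal(size=8)  # stand-in augmentation (small noise)
feats = self_comparison_features(shadow(x), shadow(x_aug))
print(feats.shape)                     # -> (3,): input to the binary classifier
```

In practice, features from multiple augmented versions of each sample would be aggregated, computed on data with known membership status with respect to the shadow model's training set, and used to fit the binary classifier.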