Differential privacy is a widely accepted notion of privacy preservation, and the Laplace mechanism is a well-known instance of differentially private mechanisms for numerical data. In this paper, we find that differential privacy does not take the linear property of queries into account, resulting in unexpected information leakage. Specifically, the linear property makes it possible to divide one query into two queries: $q(D) = q(D_1) + q(D_2)$ if $D = D_1 \cup D_2$ and $D_1 \cap D_2 = \emptyset$. If attackers want an answer to $q(D)$, they can not only issue the query $q(D)$ directly but also issue $q(D_1)$ and compute $q(D_2)$ themselves, as long as they know $D_2$. Through different divisions of one query, attackers can thus obtain multiple different answers to the same query from differentially private mechanisms. However, if the divisions are delicately designed, the total privacy budget consumed from the attackers' perspective differs from that consumed from the mechanisms' perspective. This difference leads to unexpected information leakage because the privacy budget is the key parameter controlling how much information a differentially private mechanism may legally release. To demonstrate this leakage, we present a membership inference attack against the Laplace mechanism. Specifically, under the constraints of differential privacy, we propose a method for obtaining multiple independent and identically distributed (i.i.d.) samples of answers to queries that satisfy the linear property. The method relies on the linear property and some background knowledge of the attackers. When the background knowledge is sufficient, the method can obtain enough samples from differentially private mechanisms that the total consumed privacy budget becomes unreasonably large. Based on the obtained samples, a hypothesis testing method is used to determine whether a target record is in the target dataset.
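To make the division-based sampling and the final hypothesis test concrete, the following is a minimal Python sketch, not the paper's implementation: the counting query, the dataset, the per-query budget $\epsilon$, the subset size, and the 0.5 decision threshold are all illustrative assumptions, and the mechanism is assumed to add fresh Laplace noise to every incoming query.

```python
import numpy as np

rng = np.random.default_rng(0)

eps = 0.5          # per-query privacy budget the mechanism accounts for (assumed)
sensitivity = 1.0  # counting query: adding/removing one record changes q by at most 1

def laplace_mechanism(true_answer):
    """Answer one query with Laplace noise of scale Δq/ε (fresh noise per query)."""
    return true_answer + rng.laplace(scale=sensitivity / eps)

# Illustrative setup: dataset D, a target record t, and background knowledge
# consisting of records the attacker already knows to be in D.
D = set(range(100))
t = 42
known = sorted(D - {t})  # assumed background knowledge: every record except t

def q(records):
    """Counting query satisfying the linear property: how many records equal t?"""
    return sum(1 for r in records if r == t)

# Each division D = (D \ S_i) ∪ S_i with a known subset S_i yields one i.i.d.
# sample of q(D) + Lap(Δq/ε): the mechanism answers q(D \ S_i) with fresh
# noise, and the attacker adds q(S_i), which they can compute exactly.
n = 1000
samples = []
for _ in range(n):
    S = set(rng.choice(known, size=10, replace=False))  # one delicately chosen division
    samples.append(laplace_mechanism(q(D - S)) + q(S))

# Hypothesis test: H0: t ∉ D (so q(D) = 0) vs. H1: t ∈ D (so q(D) = 1).
# The sample mean concentrates around the true q(D); threshold it at 0.5.
estimate = np.mean(samples)
print(f"mean of {n} samples: {estimate:.3f}")
print("decision:", "t ∈ D" if estimate > 0.5 else "t ∉ D")
print(f"budget consumed from the attackers' viewpoint: {n} queries × ε = {n * eps}")
```

Under these assumptions the mechanism sees $n$ distinct queries and accounts for each separately, yet every answer recombines into a fresh i.i.d. sample of the same value $q(D)$; averaging shrinks the Laplace noise by a factor of $\sqrt{n}$, so the budget effectively concentrated on the target's membership grows to $n\epsilon$, the "unreasonably large" total described above.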