Recurrent neural networks (RNNs) have been widely used in prior work on extractive text summarization. However, the information an RNN stores is typically limited and not compartmentalized enough to accurately record facts from earlier in the input. This makes it difficult to capture the relationships among sentences in a document and to select salient, information-rich sentences for a summary. To address this problem, we propose a memory-based extractive summarization (MES) model built around two components: a memory generalization module and a sentence extractor. Our model stores richer information about sentence-level features, inter-sentence relationships, and document-level implications, thus providing better representations for selecting summary sentences. Our experimental results show that the MES model outperforms the baselines, with improvements of up to 3.8%, 8.1%, 8.5%, 3.6%, and 5.6% in terms of R-1, R-2, R-3, R-4, and R-L, respectively, over a relevant state-of-the-art baseline.
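The abstract does not specify the internals of the memory generalization module or the sentence extractor, but the general idea of a memory-augmented extractor can be illustrated with a toy sketch. Everything below is an assumption for illustration only: the embedding dimension, the number of memory slots, the attention-based slot update, and the dot-product salience scoring are all hypothetical stand-ins, not the actual MES architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Toy document: 5 sentence embeddings of dimension 8
# (stand-ins for the output of some sentence encoder).
sent_emb = rng.normal(size=(5, 8))

# Hypothetical "memory generalization": K memory slots, each refined
# by attending over the sentence embeddings for a few hops.
K = 3
memory = rng.normal(size=(K, 8))
for _ in range(2):                                  # two generalization hops
    attn = softmax(memory @ sent_emb.T, axis=1)     # (K, 5) attention over sentences
    memory = 0.5 * memory + 0.5 * (attn @ sent_emb) # blend attended content into slots

# Hypothetical "sentence extractor": score each sentence by similarity
# to the pooled memory, then pick the top-scoring sentences.
doc_repr = memory.mean(axis=0)           # pooled memory as a document representation
scores = softmax(sent_emb @ doc_repr)    # salience distribution over the 5 sentences
selected = np.argsort(scores)[::-1][:2]  # extract the top-2 sentences

print(scores.round(3), sorted(selected.tolist()))
```

The sketch shows only the high-level data flow (encode sentences, generalize into memory, score against memory); the paper's actual modules would replace each of these toy operations with learned components.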