Deep learning has driven remarkable progress on a variety of tasks, and the effort required to hand-craft neural networks has motivated neural architecture search (NAS), which discovers them automatically. The recent aging evolution (AE) search algorithm discards the oldest model in the population and finds image classifiers that surpass manually designed ones; however, it converges slowly. The non-aging evolution (NAE) algorithm instead discards the worst architecture in the population to accelerate the search, but it yields lower performance than AE. To address this issue, in this letter we propose an optimized evolutionary algorithm for recurrent NAS (EvoRNAS), which removes either the oldest or the worst model in the population according to a probability epsilon, balancing performance against search time. In addition, because evaluating candidate models is costly in both AE and NAE, we introduce a parameter-sharing mechanism into our approach. Furthermore, we train the shared parameters only once rather than over many epochs as in ENAS, which makes the evaluation of candidate models faster. On Penn Treebank, we first explore different values of epsilon in EvoRNAS and identify the value best suited to the learning task, which also outperforms AE and NAE. Second, the optimal cell found by EvoRNAS achieves state-of-the-art performance within only 0.6 GPU hours, 20x and 40x faster than ENAS and DARTS, respectively. Moreover, the learned architecture transfers well to WikiText-2, showing strong performance compared with ENAS and DARTS.
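The epsilon-controlled removal rule at the core of EvoRNAS can be illustrated with a short sketch. The Python snippet below is a minimal, hypothetical illustration rather than the authors' implementation: the integer architecture encoding, the mutate and evaluate placeholders, the tournament size, and the assumption that epsilon governs the aging-style (oldest-first) removal are all choices made here for clarity; in the letter, candidates would be recurrent cells scored with shared parameters.

```python
import random
from collections import deque

def evolve(population_size=20, cycles=100, epsilon=0.5, seed=0):
    """Toy evolutionary loop: remove the oldest model with probability
    epsilon (AE-style), otherwise remove the worst model (NAE-style)."""
    rng = random.Random(seed)

    # Placeholders for a real search space: architectures are random
    # integers, fitness is a deterministic toy score, and mutation flips
    # one bit. In EvoRNAS these would be RNN cells evaluated with shared
    # parameters.
    def random_arch():
        return rng.randint(0, 1_000_000)

    def evaluate(arch):
        return (arch * 2654435761) % 997 / 997.0

    def mutate(arch):
        return arch ^ (1 << rng.randint(0, 19))

    # Oldest member on the left, newest on the right.
    population = deque()
    for _ in range(population_size):
        arch = random_arch()
        population.append((arch, evaluate(arch)))

    best = max(population, key=lambda m: m[1])
    for _ in range(cycles):
        # Tournament selection of a parent, then mutate and evaluate a child.
        parent = max(rng.sample(list(population), k=3), key=lambda m: m[1])
        child_arch = mutate(parent[0])
        child = (child_arch, evaluate(child_arch))
        population.append(child)
        best = max(best, child, key=lambda m: m[1])

        # EvoRNAS-style removal step.
        if rng.random() < epsilon:
            population.popleft()                                        # oldest (AE)
            continue
        population.remove(min(population, key=lambda m: m[1]))          # worst (NAE)
    return best

if __name__ == "__main__":
    print(evolve(epsilon=0.5))
```

Under this reading, epsilon = 1 reduces to pure aging evolution and epsilon = 0 to non-aging evolution, so sweeping epsilon interpolates between the two regimes described in the abstract.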