This study presents a new metaheuristic derived from gradient-based search. In exact optimization methods, the gradient is used to locate extreme points, including the optimum. This study modifies a gradient method to create a metaheuristic, named gradient evolution, that uses a gradient-based rule as its basic update. Gradient evolution explores the search space using a set of vectors and includes three major operators: vector updating, vector jumping, and vector refreshing. Vector updating is the main updating rule, and the search direction is determined using the Newton-Raphson method. Vector jumping and vector refreshing enable the method to escape local optima. To evaluate the performance of gradient evolution, three experiments are conducted using fifteen test functions. The first experiment examines the influence of parameter settings on the results and identifies the best setting. The remaining experiments compare gradient evolution with basic and with improved metaheuristic methods. The experimental results show that gradient evolution performs better than, or as well as, other methods, such as particle swarm optimization, differential evolution, artificial bee colony, and the continuous genetic algorithm, on most of the benchmark problems tested.
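The abstract does not give the exact update formula, so the following is only a minimal illustrative sketch of the idea it describes: a population of vectors updated by a Newton-Raphson-style step, x_new = x - f'(x)/f''(x), in which differences between population members (best, current, worst) stand in for the first and second derivatives. The objective function, parameter values, and the simplified jumping and refreshing operators below are all assumptions for illustration, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(42)

def sphere(x):
    """Benchmark objective f(x) = sum(x_i^2); global minimum at the origin."""
    return float(np.sum(x ** 2))

def gradient_evolution_sketch(f, dim=5, pop_size=20, iters=300,
                              bounds=(-5.0, 5.0), jump_rate=0.1):
    """Hypothetical gradient-evolution-style search (not the paper's exact rule)."""
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])

    for _ in range(iters):
        order = np.argsort(fit)
        best, worst = pop[order[0]].copy(), pop[order[-1]].copy()
        for i in range(pop_size):
            x = pop[i]
            # Newton-Raphson-style step built from population members:
            # (worst - best) stands in for the first difference and
            # (worst - 2x + best) for the second difference in
            # x_new = x - dx * f'(x) / (2 f''(x)).
            num = worst - best
            den = worst - 2.0 * x + best
            den[np.abs(den) < 1e-12] = 1e-12      # guard against division by zero
            step = np.abs(num) * num / (2.0 * den)
            candidate = x - rng.random(dim) * step
            # "Vector jumping" stand-in: occasionally leap near the best
            # vector to escape a local optimum.
            if rng.random() < jump_rate:
                candidate = best + rng.normal(0.0, 0.05 * (hi - lo), dim)
            candidate = np.clip(candidate, lo, hi)
            c_fit = f(candidate)
            if c_fit < fit[i]:                    # greedy replacement
                pop[i], fit[i] = candidate, c_fit
        # "Vector refreshing" stand-in: re-seed the worst vector when the
        # population has nearly collapsed onto one point.
        if np.ptp(fit) < 1e-12:
            j = int(np.argmax(fit))
            pop[j] = rng.uniform(lo, hi, dim)
            fit[j] = f(pop[j])

    j = int(np.argmin(fit))
    return pop[j], fit[j]

x_best, f_best = gradient_evolution_sketch(sphere)
print(f"best f = {f_best:.3e} at x = {np.round(x_best, 3)}")
```

The sketch keeps the three-operator structure named in the abstract (updating, jumping, refreshing) but fills in each operator with a plausible stand-in; readers should consult the full paper for the actual update rule and parameter definitions.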