Keyword search systems offer users a friendly alternative for accessing Resource Description Framework (RDF) datasets. Evaluating such systems requires adequate benchmarks, consisting of RDF datasets and keyword queries, together with their correct answers. However, the sets of correct answers that such benchmarks provide for each query are often incomplete, mostly because they are built manually with the help of experts. The central contribution of this paper is an offline method that helps build RDF keyword search benchmarks automatically, leading to more complete sets of correct answers, called solution generators. The paper focuses on the problem of computing sets of generators and describes heuristics that circumvent its combinatorial nature. The paper then describes five benchmarks constructed with the proposed method, based on three real datasets (DBpedia, IMDb, and Mondial) and two synthetic datasets (LUBM and BSBM). Finally, the paper compares the constructed benchmarks with keyword search benchmarks published in the literature.