The performance of state-of-the-art convolutional networks for detecting lung cancer nodules depends on their feature extraction model. Various feature extraction models based on convolutional networks have been proposed, such as VGG-Net and ResNet. It has been demonstrated that such models effectively extract features from objects in an image. However, their efficacy is limited when the objects of interest are very small, such as lung nodules. One of the widely used feature extraction models for detecting small objects is the VGG16 network. With its small 3 × 3 convolution kernels and a moderate number of layers, the model can extract the features of small objects with reasonable accuracy. In this article, a feature map is created by combining the last three layers of the VGG16 network to extract features of nodules of various sizes. This study utilizes a Region Proposal Network (RPN) to compare the accuracy of the feature map created by the proposed method with that of the original VGG16. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. RPNs are trained end-to-end to generate high-quality region proposals, which Faster R-CNN uses for detection. In this article, we select the top 300, 1,000, and 2,000 regions chosen by the RPN for each method; then, we calculate the recall at different Intersection over Union (IoU) thresholds with the ground-truth boxes. The results show that the feature map of the proposed method outperforms the feature maps of the individual VGG16 layers in extracting nodules of various sizes. Moreover, as the number of selected region proposals is reduced, the recall of the proposed method changes less than that of the other methods.
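To make the evaluation protocol concrete, the following is a minimal sketch (not the authors' code) of computing recall at several IoU thresholds for the top-N proposals returned by an RPN. The box format [x1, y1, x2, y2], the helper names, and the example values are assumptions for illustration only.

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, both in the assumed [x1, y1, x2, y2] format."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_box = (box[2] - box[0]) * (box[3] - box[1])
    area_boxes = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_box + area_boxes - inter)

def recall_at_iou(gt_boxes, proposals, top_n, iou_thresholds=(0.5, 0.6, 0.7)):
    """Fraction of ground-truth nodules matched by at least one of the top-N proposals
    at each IoU threshold. Proposals are assumed to be sorted by objectness score."""
    top = proposals[:top_n]
    results = {}
    for t in iou_thresholds:
        hits = sum(1 for gt in gt_boxes if iou(gt, top).max() >= t)
        results[t] = hits / max(len(gt_boxes), 1)
    return results

# Illustrative usage with dummy boxes (not real data)
gt = np.array([[30, 30, 60, 60], [100, 120, 130, 150]], dtype=float)
props = np.array([[28, 29, 61, 62], [5, 5, 20, 20], [98, 118, 132, 152]], dtype=float)
print(recall_at_iou(gt, props, top_n=300))
```

Under this protocol, varying `top_n` over 300, 1,000, and 2,000 shows how strongly each feature map's recall depends on the number of proposals kept.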