Monitoring plant growth is crucial for effective crop management, and using color and depth (RGBD) cameras to model lettuce has emerged as one of the most convenient and non-invasive approaches. In recent years, deep learning techniques, particularly neural networks, have become popular for estimating lettuce fresh weight. However, these models are typically tied to specific datasets, lack domain adaptation, and are often limited by the scarcity of open-access data. In this study, we propose a method based on plant geometric features for estimating the rosette structure and volume of lettuce, and compare it with existing point-cloud surface-reconstruction methods such as Ball Pivoting and Alpha Shapes. The proposed method wraps a tight hull around the plant's point cloud, preserving fine detail of the rosette structure while filling surface holes in areas not visible to the 3D cameras. To evaluate the approach, we compiled a dataset of 402 point clouds of lettuce plants, captured before harvest with one top-down and three side-view 3D cameras. Using a linear regression model on this dataset, we estimated fresh weight with a root mean square error (RMSE) of 18.2 g when only the estimated plant volume was used, and 17.3 g when geometric features were included alongside volume. In addition, we introduce new geometric features that characterize leaf density, which could be useful for breeding applications.
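
As a rough illustration of the baseline pipeline referenced above, the sketch below shows how the comparison reconstructions (Ball Pivoting and Alpha Shapes) and a volume-based linear regression for fresh weight could be set up with the Open3D and scikit-learn libraries. The file name, the alpha value, the ball radii, and the small volume/weight arrays are placeholders for illustration only, not values or code from the study.

```python
import numpy as np
import open3d as o3d
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

# Hypothetical input file; in practice this would be one plant point cloud from the dataset.
pcd = o3d.io.read_point_cloud("lettuce_plant.ply")

# --- Baseline surface reconstructions used for comparison ---

# Alpha Shapes: alpha controls how tightly the surface follows the points (value is illustrative).
alpha = 0.02
mesh_alpha = o3d.geometry.TriangleMesh.create_from_point_cloud_alpha_shape(pcd, alpha)

# Ball Pivoting: radii chosen relative to the average point spacing (multipliers are illustrative).
pcd.estimate_normals()
avg_dist = np.mean(pcd.compute_nearest_neighbor_distance())
radii = o3d.utility.DoubleVector([2 * avg_dist, 4 * avg_dist])
mesh_bpa = o3d.geometry.TriangleMesh.create_from_point_cloud_ball_pivoting(pcd, radii)

# Mesh volume is only defined for a watertight mesh; Open3D raises an error otherwise.
volume_alpha = mesh_alpha.get_volume() if mesh_alpha.is_watertight() else None

# --- Fresh-weight regression from volume (extra geometric features would add columns to X) ---

# Placeholder arrays: one row per plant; volumes in m^3, measured fresh weights in grams.
X = np.array([[0.8e-3], [1.1e-3], [1.5e-3], [2.0e-3]])
y = np.array([55.0, 78.0, 110.0, 150.0])

model = LinearRegression().fit(X, y)
rmse = float(np.sqrt(mean_squared_error(y, model.predict(X))))
print(f"RMSE: {rmse:.1f} g")
```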