Compared to fully supervised large-scale 3-D point cloud segmentation methods, which require extensive manual point-wise annotation, weakly supervised segmentation has emerged as a popular way to significantly reduce labeling cost while maintaining effectiveness. However, existing weakly supervised methods exhibit inferior segmentation performance and poor generalization in scenarios with distinctive structures (e.g., building facades). In this article, we propose an effective and generalizable weakly supervised semantic segmentation framework, called multistage scene-level constraint (MSC), to address this problem. To cope with the scarcity of labeled data, we generate pseudo-labels for unlabeled points and propose an uncertainty-guided adaptive reweighting strategy that reduces the negative impact of erroneous pseudo-labels on model training. To cope with class imbalance, we impose scene-level constraints at multiple stages (i.e., the encoder, decoder, and classifier) so that each class is treated equally, improving the model's perception of every class. Evaluations on multiple large-scale point cloud datasets collected in different scenarios, including building facades, indoor scenes, outdoor scenes, and UAV scenes, show that MSC achieves large gains over existing weakly supervised methods and even surpasses some fully supervised methods.
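To make the uncertainty-guided reweighting idea concrete, the following PyTorch sketch shows one common way such a strategy can be realized: per-point uncertainty is estimated from the prediction entropy, and uncertain pseudo-labels receive smaller weights in the unlabeled-data loss. This is an illustrative assumption, not the paper's exact formulation; the function name, the entropy-based weighting, and the tensor shapes are all hypothetical.

```python
# Illustrative sketch (assumed, not the authors' exact method):
# uncertainty-guided reweighting of pseudo-labeled points.
import torch
import torch.nn.functional as F

def uncertainty_weighted_pseudo_loss(logits_unlabeled: torch.Tensor) -> torch.Tensor:
    """logits_unlabeled: (N, C) classifier logits for N unlabeled points."""
    probs = F.softmax(logits_unlabeled, dim=-1)            # (N, C) class probabilities
    pseudo_labels = probs.argmax(dim=-1)                   # hard pseudo-labels per point
    # Normalized prediction entropy in [0, 1] as an uncertainty estimate.
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1)
    uncertainty = entropy / torch.log(torch.tensor(float(probs.shape[-1])))
    # Down-weight uncertain points; weights are detached so they act as constants.
    weights = (1.0 - uncertainty).detach()
    per_point_ce = F.cross_entropy(logits_unlabeled, pseudo_labels, reduction="none")
    return (weights * per_point_ce).mean()
```

In practice, a loss of this kind would be added to the supervised loss on the sparsely labeled points, letting confident pseudo-labels contribute while noisy ones are suppressed.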