A salient object is a conspicuous object or region within an image that stands out prominently from its surroundings. Depth maps are commonly utilized as supplementary inputs for salient object detection, a task referred to as RGB-D SOD. Because depth maps are acquired by diverse sensors, such as infrared detectors and stereo cameras, their quality varies considerably. Low-quality depth maps introduce noise that severely reduces detection accuracy. To tackle this problem, this paper proposes a triple attention architecture based on a 3D convolutional neural network tailored for quality-aware salient object detection, which exploits complementary strengths across the modality, channel, and spatial dimensions. The modality attention learns quality factors from the overall modal features, the channel attention highlights informative features along the channel dimension, and the patch-level spatial attention establishes long-range dependencies. Quality factors, channel differences, and spatial contrast are thereby combined to achieve both global and local fusion. To enable evaluation on low-quality depth maps, an assessment criterion is further introduced to categorize RGB-D datasets by depth quality. Experimental comparisons with state-of-the-art methods at different quality levels demonstrate the effectiveness of the proposed method, especially for low-quality depth maps.
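To make the interplay of the three attention mechanisms concrete, the following is a minimal PyTorch sketch of a quality-aware fusion step combining modality, channel, and patch-level spatial attention on 2D feature maps. The module name `TripleAttentionFusion` and the parameters `channels` and `patch_size` are hypothetical, and the sketch deliberately omits the paper's 3D convolutional backbone; it only illustrates the general attention pattern described above, not the authors' actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TripleAttentionFusion(nn.Module):
    """Hypothetical sketch: fuse RGB and depth features with modality,
    channel, and patch-level spatial attention (not the paper's exact design)."""

    def __init__(self, channels: int, patch_size: int = 4):
        super().__init__()
        self.patch_size = patch_size
        # Modality attention: predicts per-modality quality factors
        # from globally pooled RGB and depth features.
        self.modality_fc = nn.Sequential(
            nn.Linear(2 * channels, channels), nn.ReLU(inplace=True),
            nn.Linear(channels, 2)
        )
        # Channel attention (SE-style) on the fused features.
        self.channel_fc = nn.Sequential(
            nn.Linear(channels, channels // 4), nn.ReLU(inplace=True),
            nn.Linear(channels // 4, channels), nn.Sigmoid()
        )
        # Patch-level spatial attention: self-attention over patch tokens.
        self.qkv = nn.Linear(channels, 3 * channels)

    def forward(self, rgb_feat, depth_feat):
        b, c, h, w = rgb_feat.shape
        # Modality attention: weight RGB vs. depth by learned quality factors.
        pooled = torch.cat([rgb_feat.mean(dim=(2, 3)),
                            depth_feat.mean(dim=(2, 3))], dim=1)
        weights = torch.softmax(self.modality_fc(pooled), dim=1)        # (B, 2)
        fused = (weights[:, 0, None, None, None] * rgb_feat
                 + weights[:, 1, None, None, None] * depth_feat)        # (B, C, H, W)
        # Channel attention: emphasize informative channels.
        ca = self.channel_fc(fused.mean(dim=(2, 3)))                    # (B, C)
        fused = fused * ca[:, :, None, None]
        # Patch-level spatial attention: long-range dependencies
        # among coarse patch tokens, then broadcast back to full resolution.
        p = self.patch_size
        tokens = F.adaptive_avg_pool2d(fused, (h // p, w // p))         # (B, C, H/p, W/p)
        tokens = tokens.flatten(2).transpose(1, 2)                      # (B, N, C)
        q, k, v = self.qkv(tokens).chunk(3, dim=-1)
        attn = torch.softmax(q @ k.transpose(1, 2) / c ** 0.5, dim=-1)  # (B, N, N)
        ctx = (attn @ v).transpose(1, 2).reshape(b, c, h // p, w // p)
        ctx = F.interpolate(ctx, size=(h, w), mode="bilinear",
                            align_corners=False)
        return fused + ctx  # combine local (channel-weighted) and global context


# Example usage with random features (assumed 64 channels, 32x32 maps):
if __name__ == "__main__":
    m = TripleAttentionFusion(channels=64, patch_size=4)
    out = m(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
    print(out.shape)  # torch.Size([2, 64, 32, 32])
```

The intent of such a design is that a noisy depth map receives a small modality weight, so its noise contributes little to the fused features, while channel and patch-level spatial attention refine what remains locally and globally.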