The Synthetic Transaural Audio Rendering (STAR) method was recently published in the Journal of the Audio Engineering Society. That article proposed a perceptual approach to sound spatialization, reproducing acoustic cues derived from models, together with tests for its validation. It focused on azimuth and gave only hints at extensions to distance and elevation. Since then, these extensions have been implemented and tested, and the present article completes the STAR method. For distance, the authors simulate physical phenomena, whereas for elevation they propose to reproduce monaural cues by shaping the Head-Related Transfer Functions with peaks and notches controlled by models, in order to give the listener a sensation of elevation. The extensions to distance and elevation have been validated by subjective listening tests, and the independence of these two parameters is demonstrated. For azimuth, a robust localization method yields objective results consistent with human hearing; with this method, the independence of azimuth from distance and from elevation is also demonstrated. The result is a full 3D sound spatialization system that manages each parameter of a sound source position (azimuth, elevation, and distance) independently.
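To illustrate the kind of spectral shaping mentioned for elevation, the sketch below builds standard second-order peaking filters (from the well-known RBJ audio-EQ cookbook) and evaluates their magnitude response. This is only a generic illustration, not the authors' actual model: the centre frequencies, gains, and Q values are hypothetical placeholders, and the STAR method's own peak/notch parameters come from its models.

```python
import math
import cmath

def peaking_eq(f0, gain_db, q, fs):
    """RBJ cookbook peaking-EQ coefficients (b, a), normalized so a[0] == 1.
    Positive gain_db gives a spectral peak, negative gain_db a notch."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    a0 = 1.0 + alpha / A
    b = [(1.0 + alpha * A) / a0, (-2.0 * math.cos(w0)) / a0, (1.0 - alpha * A) / a0]
    a = [1.0, (-2.0 * math.cos(w0)) / a0, (1.0 - alpha / A) / a0]
    return b, a

def magnitude_db(b, a, f, fs):
    """Magnitude response of the biquad (b, a) in dB at frequency f."""
    z = cmath.exp(-1j * 2.0 * math.pi * f / fs)
    num = b[0] + b[1] * z + b[2] * z ** 2
    den = a[0] + a[1] * z + a[2] * z ** 2
    return 20.0 * math.log10(abs(num / den))

# Hypothetical example: a +6 dB peak at 8 kHz and a -12 dB notch at 6 kHz,
# the kind of features an elevation-dependent HRTF shaping could impose.
fs = 44100.0
peak = peaking_eq(8000.0, 6.0, 2.0, fs)
notch = peaking_eq(6000.0, -12.0, 4.0, fs)
print(round(magnitude_db(*peak, 8000.0, fs), 2))   # gain at the peak centre (+6 dB)
print(round(magnitude_db(*notch, 6000.0, fs), 2))  # gain at the notch centre (-12 dB)
```

At the centre frequency the RBJ peaking filter's gain is exactly `gain_db`, so cascading several such sections with model-driven parameters is one simple way to impose elevation-style peaks and notches on a spectrum.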