This letter presents a novel hybrid method that leverages deep learning to exploit the multiresolution analysis capability of wavelets in order to denoise a photoplethysmography (PPG) signal. Under the proposed method, a noisy PPG sequence of length N is first decomposed into L detail coefficients using the fast wavelet transform (FWT). The clean PPG sequence is then reconstructed with the help of a custom feedforward neural network (FFNN) that provides a binary weight for each of the wavelet subsignals output by the inverse-FWT block, so that the subsignals corresponding to noise or artefacts are discarded during reconstruction. The FFNN is trained on the Beth Israel Deaconess Medical Center dataset and a custom video-PPG dataset by computing the mean squared error (MSE) between the denoised sequence and the reference clean PPG signal and back-propagating its gradient. Simulation results reveal that, with the "db10" mother wavelet, the proposed method significantly reduces the MSE of the PPG signal relative to that of the original noisy PPG signal: by 56.40% for Gaussian noise, 64.01% for Poisson noise, 46.02% for uniform noise, and 72.36% for salt-and-pepper noise.
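To make the decompose-weight-reconstruct pipeline concrete, the following is a minimal sketch using the PyWavelets library. The function name `denoise_ppg`, the level count, and the hand-set `keep_mask` are illustrative assumptions, not the letter's implementation: in the proposed method the binary weights come from the trained FFNN, and the mask here simply stands in for its output. Zeroing a level's detail coefficients before the inverse FWT is, by linearity, equivalent to discarding the corresponding reconstructed subsignal.

```python
import numpy as np
import pywt

def denoise_ppg(noisy_ppg, wavelet="db10", levels=4, keep_mask=None):
    """Decompose a noisy PPG sequence with the FWT, zero out the detail
    subsignals flagged as noise by a binary mask, and reconstruct."""
    # Multilevel decomposition: coeffs = [cA_L, cD_L, ..., cD_1]
    coeffs = pywt.wavedec(noisy_ppg, wavelet, level=levels)
    if keep_mask is None:
        # Placeholder: in the letter, these binary weights are produced
        # by the trained FFNN rather than fixed by hand.
        keep_mask = np.ones(levels, dtype=int)
    # Apply one binary weight per detail level (coeffs[1:] are cD_L ... cD_1)
    weighted = [coeffs[0]] + [c * w for c, w in zip(coeffs[1:], keep_mask)]
    # Inverse FWT reconstructs the denoised PPG sequence
    return pywt.waverec(weighted, wavelet)[: len(noisy_ppg)]

# Toy usage with a synthetic PPG-like signal and additive Gaussian noise
t = np.linspace(0, 10, 1024)
clean = np.sin(2 * np.pi * 1.2 * t)             # ~72 bpm pulse component
noisy = clean + 0.3 * np.random.randn(t.size)
denoised = denoise_ppg(noisy, keep_mask=np.array([1, 1, 0, 0]))
print("MSE before:", np.mean((noisy - clean) ** 2),
      "after:", np.mean((denoised - clean) ** 2))
```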