Images captured by acquisition systems in foggy or hazy scenes suffer from loss of detail, dull color, and reduced brightness. To address this problem, this paper proposes a dual multiscale neural network model based on the AOD (All-in-One Dehazing) formulation. First, the two parameters of the atmospheric scattering model, namely the transmittance and the atmospheric light, are combined into a single unified parameter. The neural network model proposed in this paper is then trained to estimate this parameter. The proposed network consists of two multiscale modules and a mapping module, where the two multiscale modules are designed to extract richer image features. The convolution parameters of Multiscale Module 1 are chosen so that the feature maps retain the size of the original image throughout feature extraction, with pooling and sampling operations added as needed. Multiscale Module 2 applies multiple small convolution kernels after each convolution operation and uses concat operations to better fuse the outputs of the individual kernels. The mapping module maps the hazy image onto the extracted feature maps, allowing more detail to be recovered from the original image and thus yielding better defogging results after processing. Training produces a unified parameter estimation model for image defogging, and the defogged image is finally obtained from this model. The experimental results show that the model proposed in this paper not only outperforms the AOD network in terms of peak signal-to-noise ratio, structural similarity, and subjective visual quality but also outperforms mainstream deep learning and traditional defogging methods; moreover, the defogged images are improved in detail, color, and brightness. In addition, ablation experiments demonstrate that every component of the proposed structure is necessary.
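For reference, the parameter unification described above follows the standard atmospheric scattering model. A minimal derivation sketch, assuming the usual AOD-Net notation (the symbol K(x) and the constant bias b are not fixed by the abstract and are used here as assumptions):

$$I(x) = J(x)\,t(x) + A\bigl(1 - t(x)\bigr),$$

where I(x) is the hazy image, J(x) the clean scene radiance, t(x) the transmittance, and A the atmospheric light. Folding t(x) and A into the single parameter K(x) gives

$$J(x) = K(x)\,I(x) - K(x) + b, \qquad K(x) = \frac{\frac{1}{t(x)}\bigl(I(x) - A\bigr) + (A - b)}{I(x) - 1},$$

so that once the network estimates K(x), the defogged image follows from a single elementwise expression. Below is a minimal sketch of that recovery step, assuming a PyTorch implementation with image values in [0, 1]; the function name and tensor layout are illustrative, not the authors' code.

import torch

def dehaze_from_k(hazy: torch.Tensor, k: torch.Tensor, b: float = 1.0) -> torch.Tensor:
    # hazy: hazy input I(x), shape (N, 3, H, W), values in [0, 1]
    # k:    unified parameter K(x) predicted by the network, same shape as hazy
    # b:    constant bias (AOD-Net fixes b = 1)
    # J(x) = K(x) * I(x) - K(x) + b, clamped to the valid image range
    return torch.clamp(k * hazy - k + b, 0.0, 1.0)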