Low-light image enhancement has long been an important research direction in image processing. Recently, U-Net networks have shown promise for low-light image enhancement. However, the semantic gap between encoder and decoder and the weak modeling of global contextual information in U-shaped networks lead to problems such as inaccurate color in the enhanced images. To address these problems, this paper proposes a Dual U-Net low-light image enhancement network (DUAMNet) based on an attention mechanism. First, the local texture features of the original image are extracted with the Local Binary Pattern (LBP) operator, whose illumination invariance helps preserve the texture information of the original image. Next, a Brightness Enhancement Module (BEM) is applied. In the BEM, an outer U-Net captures feature information at different levels and luminance information of different regions, while an inner densely connected U-Net++ strengthens the correlation of feature information across levels, mines more of the hidden feature information extracted by the encoder, and reduces the feature semantic gap between the encoder and decoder. A Convolutional Block Attention Module (CBAM) is introduced in the decoder of the U-Net++; it further strengthens the modeling of global contextual information and effectively directs the network's attention to weakly lit regions. The network adopts a progressive recursive structure with four recursive units, where the output of each recursive unit serves as the input of the next. Comparative experiments are conducted on seven public datasets, and the results are analyzed quantitatively and qualitatively. They show that, despite its simple structure, the proposed network outperforms other methods in image quality.
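To illustrate the illumination invariance that the abstract attributes to the LBP operator, here is a minimal NumPy sketch of the basic 3×3 LBP. The function name and implementation details are my own; the paper may use a different LBP variant (e.g., a circular or rotation-invariant one).

```python
import numpy as np

def lbp(image):
    """Basic 3x3 Local Binary Pattern of a grayscale image.

    Each interior pixel is replaced by an 8-bit code whose bits record
    whether each of its 8 neighbours is >= the centre value. Because the
    comparison is relative to the local centre, the code is unchanged by
    any monotonically increasing change of illumination.
    """
    img = np.asarray(image, dtype=np.float64)
    h, w = img.shape
    # Offsets of the 8 neighbours, enumerated clockwise from the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[1:h - 1, 1:w - 1]
    code = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neighbour >= center).astype(np.uint8) << bit
    return code
```

For example, `lbp(img)` and `lbp(0.5 * img + 10)` produce identical codes, since an increasing affine transform preserves every neighbour-vs-centre comparison; this is the property that lets the texture branch survive the brightness changes introduced by enhancement.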