The conditional generation of high-quality floorplan images with deep-learning methods is challenging because the generated floorplans must match specific conditions, such as floorplan silhouettes and spatial layouts. Recently, diffusion models have emerged as alternatives to conditional generative adversarial networks for image generation, offering higher image quality, training without paired datasets, and adaptability to various image domains via parameter fine-tuning of pretrained diffusion models. However, diffusion models are rarely used for floorplan generation because fine-tuning them on image domains not covered in pretraining, such as floorplans, yields poor image quality and requires long tuning times. This phenomenon arises from the so-called catastrophic forgetting problem: traditional fine-tuning methods that update all parameters readily destroy the knowledge of the pretrained diffusion model. To address this problem, we propose FloorDiffusion, a diffusion model-based conditional floorplan generation method. In this method, only a few key parameters of the pretrained diffusion model are fine-tuned, which allows adaptation to the floorplan domain while retaining the model's useful knowledge. The fine-tuned diffusion model then performs conditional floorplan generation by inpainting the unfinished regions of the input conditional image. Comparative experiments with existing methods demonstrate that our method produces more architecturally realistic floorplan images, with up to a 72% improvement in image quality. It can also generate diverse floorplan images from a single input condition image. Finally, ablation studies show that all components of the proposed method are essential for optimal performance.
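The two ideas named above, selective fine-tuning of a pretrained diffusion model and conditional generation by inpainting, can be illustrated with a minimal sketch. This is not the authors' code: the choice of cross-attention ("attn2") weights as the "key parameters", the base checkpoint, the prompt, and the file names are illustrative assumptions.

```python
# Minimal sketch of (1) fine-tuning only a small subset of a pretrained
# diffusion model's parameters and (2) completing a floorplan by inpainting
# the unfinished regions of a conditional input image.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Assumed pretrained inpainting-capable diffusion model (not the paper's choice).
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    torch_dtype=torch.float16,
).to("cuda")

# (1) Selective fine-tuning: freeze everything, then re-enable gradients only
# for a few key parameters (here, hypothetically, the cross-attention weights).
pipe.unet.requires_grad_(False)
trainable_params = []
for name, param in pipe.unet.named_parameters():
    if "attn2" in name:  # cross-attention blocks -- an assumption, not the paper's selection
        param.requires_grad_(True)
        trainable_params.append(param)
optimizer = torch.optim.AdamW(trainable_params, lr=1e-5)
# ... standard denoising-loss training loop over floorplan images goes here ...

# (2) Conditional generation by inpainting: the mask marks the unfinished
# regions of the conditional image that the model should fill in.
condition = Image.open("condition.png").convert("RGB")  # e.g., silhouette / partial layout
mask = Image.open("mask.png").convert("L")              # white = regions to complete
result = pipe(
    prompt="a residential floorplan",  # illustrative prompt
    image=condition,
    mask_image=mask,
).images[0]
result.save("generated_floorplan.png")
```

Because only the small set of unfrozen parameters receives gradient updates, the bulk of the pretrained model's knowledge is left untouched, which is the mechanism the abstract credits for avoiding catastrophic forgetting.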