Generative models, especially diffusion models, have gained traction in image generation for their high-quality synthesis, surpassing generative adversarial networks (GANs). They have also been shown to excel at anomaly detection by modeling healthy reference data and scoring deviations from it. However, one major disadvantage of these models is their sampling speed, which has so far made them unsuitable for time-sensitive scenarios: the time needed to generate a single image with the iterative sampling procedure introduced in the denoising diffusion probabilistic model (DDPM) is substantial. To address this, we propose a novel single-step sampling procedure that greatly improves sampling speed while generating images of comparable quality. Whereas DDPMs usually denoise pure noise to generate an image, we use a partial diffusion approach that preserves image structure. In anomaly detection, the reconstructed image should retain a structure similar to the original anomalous image, so that the pixel-level difference between the two can be used to segment the anomaly. The original DDPM algorithm uses an iterative sampling procedure in which the model gradually removes noise until a noise-free image remains; our single-step sampling approach instead removes all the noise in a single step, while still repairing the anomaly and achieving comparable results. The output is a binary image showing the predicted anomalous regions, which is compared to the ground truth to evaluate segmentation performance. We find that, while our approach does produce slightly better anomaly masks, its main advantage is sampling speed, where it runs significantly faster than the iterative procedure.
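The pipeline described above can be sketched with the standard DDPM closed-form equations. This is a minimal illustration, not the paper's implementation: the noise-prediction network is stubbed out, and the function names, the noise level `alpha_bar_t`, and the residual `threshold` are illustrative assumptions.

```python
import numpy as np

def partial_diffuse(x0, alpha_bar_t, eps):
    # Forward-diffuse the input only up to an intermediate timestep t
    # (DDPM closed form), so the image structure is partially preserved.
    return np.sqrt(alpha_bar_t) * x0 + np.sqrt(1.0 - alpha_bar_t) * eps

def single_step_reconstruct(x_t, alpha_bar_t, eps_pred):
    # Invert the closed-form forward process in one step using the model's
    # noise prediction, instead of iterating t -> t-1 -> ... -> 0.
    return (x_t - np.sqrt(1.0 - alpha_bar_t) * eps_pred) / np.sqrt(alpha_bar_t)

def anomaly_mask(x, x_rec, threshold=0.1):
    # Pixel-wise residual between input and reconstruction, thresholded
    # into a binary mask of predicted anomalous regions.
    return (np.abs(x - x_rec) > threshold).astype(np.uint8)

# Illustrative usage with a stand-in for the trained noise predictor:
rng = np.random.default_rng(0)
x0 = rng.random((64, 64))            # toy "MR slice"
eps = rng.standard_normal((64, 64))  # forward-process noise
alpha_bar_t = 0.5                    # intermediate noise level

x_t = partial_diffuse(x0, alpha_bar_t, eps)
# A trained network would supply eps_pred = eps_theta(x_t, t); here we
# pass the true noise, so the reconstruction is exact.
x_rec = single_step_reconstruct(x_t, alpha_bar_t, eps_pred=eps)
mask = anomaly_mask(x0, x_rec)
```

With a perfect noise estimate the reconstruction recovers the input exactly; in practice the network's estimate differs most inside anomalous regions it never saw during (healthy-only) training, which is what makes the residual a usable anomaly score.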
Our work focuses on anomaly detection in brain MR volumes; this approach could therefore be used by radiologists in a clinical setting to find anomalies in large quantities of brain MRI scans.