Deepfake technology leverages deep learning to create realistic fake videos and images, originally relying on Generative Adversarial Networks (GANs). More recently, Diffusion Models (DMs) have achieved unparalleled visual realism in synthetic images, and deepfake generation is evolving from GANs to DMs, further raising the quality of fake content. Attackers exploit these technologies to produce pornographic videos, forge political statements, commit financial fraud via face swapping, and erode trust on social media, so detecting synthetic images has become crucial. While GAN-generated images have been studied extensively from a forensic perspective, the detection of DM-generated images remains under-researched, and existing GAN detectors struggle to reliably identify DM-generated images, creating a need for a unified fake-image detector. In this work, we develop a detector that effectively differentiates real from synthetic images within a single, unified framework. Our analysis reveals that GAN-generated images typically exhibit grid-like frequency patterns, whereas DM-generated images do not always show clear frequency artifacts; to address this, we employ an improved frequency-domain extraction block to capture such artifacts. Because the gradients of convolutional neural networks (CNNs) trained on generated images highlight the pixels most relevant to the target task, we train a ResNet50 classifier on synthetic images and use it as a transformation model that converts input images into gradient maps for gradient-feature extraction. Moreover, we incorporate color-space analysis to detect subtle manipulation traces that are invisible in the visual domain. Extensive experiments on multiple public datasets demonstrate that our method outperforms state-of-the-art competitors and generalizes well to previously unseen synthesis methods.
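The grid-like frequency artifacts mentioned above can be made visible with a simple Fourier analysis. The following is a minimal numpy sketch, not the paper's extraction block: it fabricates a checkerboard upsampling artifact (a common GAN fingerprint) on top of noise and shows that it produces a strong off-center peak in the log-magnitude spectrum.

```python
import numpy as np

def log_spectrum(image: np.ndarray) -> np.ndarray:
    """Log-magnitude 2-D Fourier spectrum of a grayscale image.

    Periodic upsampling artifacts left by GAN generators appear as
    isolated peaks away from the spectrum center; DM-generated images
    often lack such clear peaks, which is why frequency cues alone do
    not suffice for a unified detector.
    """
    f = np.fft.fftshift(np.fft.fft2(image))
    return np.log1p(np.abs(f))

# Simulate a period-2 "checkerboard" artifact on top of Gaussian noise.
rng = np.random.default_rng(0)
real = rng.normal(size=(64, 64))
yy, xx = np.mgrid[0:64, 0:64]
fake = real + 0.5 * np.cos(np.pi * (xx + yy))  # synthetic grid pattern

spec_real = log_spectrum(real)
spec_fake = log_spectrum(fake)
# The grid pattern concentrates energy in a single Nyquist-frequency
# bin, so the fake spectrum's peak clearly exceeds the real one's.
```

This is only a toy illustration of the artifact family the frequency block targets; real generator fingerprints are weaker and spread over several bins.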
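The gradient-map idea can be illustrated without a full ResNet50. In the sketch below, a hypothetical linear "fake logit" stands in for the trained classifier (an assumption for brevity); the finite-difference gradient of that logit with respect to each input pixel recovers exactly the pixels the model relies on, which is the signal the gradient-feature extraction exploits.

```python
import numpy as np

# Toy stand-in (assumption): a linear scorer replaces the trained
# ResNet50 to show that the input gradient of a classifier highlights
# the pixels most relevant to the real/fake decision.
H = W = 8
w = np.zeros((H, W))
w[2:4, 2:4] = 1.0  # pretend training made these pixels decisive

def fake_logit(img: np.ndarray) -> float:
    """Score of the 'fake' class for this toy model."""
    return float((w * img).sum())

def gradient_map(img: np.ndarray, eps: float = 1e-4) -> np.ndarray:
    """Finite-difference gradient of the fake logit w.r.t. each pixel."""
    g = np.zeros_like(img)
    base = fake_logit(img)
    for i in range(H):
        for j in range(W):
            pert = img.copy()
            pert[i, j] += eps
            g[i, j] = (fake_logit(pert) - base) / eps
    return g

rng = np.random.default_rng(1)
gmap = gradient_map(rng.normal(size=(H, W)))
# gmap is ~1 on the decisive 2x2 patch and ~0 elsewhere.
```

In practice one would backpropagate through the trained CNN (e.g. via autograd) rather than use finite differences; the point here is only that the gradient map localizes task-relevant pixels.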
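Color-space analysis typically starts by moving out of RGB so that luminance and chroma can be inspected separately. As a minimal sketch (the standard ITU-R BT.601 conversion, chosen here for illustration; the paper's actual color transform is not specified in this summary):

```python
import numpy as np

def rgb_to_ycbcr(rgb: np.ndarray) -> np.ndarray:
    """ITU-R BT.601 RGB -> YCbCr conversion for inputs in [0, 1].

    Separating luminance (Y) from chroma (Cb, Cr) exposes subtle
    color-statistics inconsistencies in synthetic images that are
    hard to see in the raw RGB channels.
    """
    m = np.array([[ 0.299,     0.587,     0.114   ],
                  [-0.168736, -0.331264,  0.5     ],
                  [ 0.5,      -0.418688, -0.081312]])
    ycc = rgb @ m.T
    ycc[..., 1:] += 0.5  # center the chroma channels at 0.5
    return ycc

# A neutral gray pixel maps to Y = 0.5 with centered (0.5) chroma.
gray = rgb_to_ycbcr(np.array([0.5, 0.5, 0.5]))
```

A detector would then compute statistics (e.g. channel histograms or co-occurrences) over the chroma planes; the conversion above is just the first step of any such pipeline.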