Distinguishing AI-Generated Images from Real Images: A Multi-Criteria Analysis Using SF-AHP

Authors

  • Ahmet Suha Hancioglu, Department of Industrial Engineering, Faculty of Engineering and Natural Sciences, Ankara Yıldırım Beyazıt University, Ankara, Turkey. https://orcid.org/0000-0001-6583-5488
  • Ibrahim Yilmaz, Department of Industrial Engineering, Faculty of Engineering and Natural Sciences, Ankara Yıldırım Beyazıt University, Ankara, Turkey. https://orcid.org/0000-0002-5959-7353

DOI:

https://doi.org/10.31181/sa32202550

Keywords:

AI-generated images, Image authentication, Spherical fuzzy AHP (SF-AHP), Multi-criteria decision making

Abstract

Recent advances in the widespread use of the internet and computers have led to the storage of vast amounts of data, growing computational power, and enhanced computer capabilities, fundamentally transforming human life. The large datasets generated through the internet provide a nearly unlimited source of data for Artificial Intelligence (AI). These developments, coupled with current AI techniques, challenge traditional concepts of photography and imagery, particularly because of the ability to produce realistic visuals. As synthetic images produced by AI become increasingly indistinguishable from real, human-made photographs, separating the two grows more difficult over time. Given that Large Language Models (LLMs) began producing fluent, coherent communication almost indistinguishable from human-generated text within a few years of their introduction, synthetic images are likewise expected to increase rapidly in realism. This situation has become a critical issue for preventing disinformation and maintaining visual credibility. This study addresses this problem and aims to present a holistic approach for distinguishing between real and synthetic images. In this context, a comprehensive evaluation framework consisting of six main criteria and eighteen sub-criteria is presented, and the relative importance of these criteria is analyzed using the Spherical Fuzzy AHP (SF-AHP) method. These importance levels should be continuously revisited and discussed as the technology evolves. The study employs an innovative approach that incorporates expert opinions, including those elicited from advanced language models such as ChatGPT o3, and the SF-AHP analysis identifies physical consistency and sensor trace analysis as the most critical determinants for distinguishing synthetic from real images. The findings emphasize the necessity of a multi-criteria approach to AI image detection and provide insights for future validation methods.
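The core of the SF-AHP method referenced above is the aggregation of expert judgments expressed as spherical fuzzy numbers (μ, ν, π) with μ² + ν² + π² ≤ 1, followed by defuzzification into crisp criterion weights. The sketch below illustrates this mechanic using the Spherical Weighted Arithmetic Mean (SWAM) operator and a score function from the spherical fuzzy literature (Kutlu Gündoğdu & Kahraman, 2019). It is a minimal illustration with made-up judgment values, not the authors' implementation or their survey data.

```python
import math

def swam(judgments, weights):
    """Spherical Weighted Arithmetic Mean of spherical fuzzy numbers.

    judgments: list of (mu, nu, pi) tuples, one per expert.
    weights:   expert weights summing to 1.
    """
    mu = math.sqrt(1 - math.prod((1 - m**2) ** w
                                 for (m, _, _), w in zip(judgments, weights)))
    nu = math.prod(n ** w for (_, n, _), w in zip(judgments, weights))
    a = math.prod((1 - m**2) ** w
                  for (m, _, _), w in zip(judgments, weights))
    b = math.prod((1 - m**2 - p**2) ** w
                  for (m, _, p), w in zip(judgments, weights))
    pi = math.sqrt(max(a - b, 0.0))  # guard against rounding below zero
    return (mu, nu, pi)

def score(sfn):
    """Defuzzification score used to rank SF-AHP criterion weights."""
    mu, nu, pi = sfn
    return math.sqrt(abs(100 * ((3 * mu - pi / 2) ** 2 - (nu / 2 - pi) ** 2)))

# Toy example: two equally weighted experts judge one criterion.
experts = [(0.7, 0.3, 0.2), (0.5, 0.4, 0.4)]
agg = swam(experts, [0.5, 0.5])
crisp = score(agg)
```

In a full SF-AHP study, these aggregated scores would be computed for every criterion in each pairwise comparison matrix, then normalized into the final weight vector used to rank the six main criteria and eighteen sub-criteria.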

References

Statista. (2024). Amount of data created, consumed, and stored 2010-2023, with forecasts to 2028. https://www.statista.com/statistics/871513/worldwide-data-created/#statisticContainer

Cao, H., Tan, C., Gao, Z., Xu, Y., Chen, G., Heng, P. A., & Li, S. Z. (2024). A survey on generative diffusion models. IEEE transactions on knowledge and data engineering, 36(7), 2814–2830. https://doi.org/10.1109/TKDE.2024.3361474

Borji, A. (2022). Generated faces in the wild: Quantitative comparison of Stable Diffusion, Midjourney and DALL-E 2. ArXiv Preprint ArXiv:2210.00586. https://doi.org/10.48550/arXiv.2210.00586

Perrigo, B., & Johnson, V. (2023). How to spot an AI-generated image like the ‘Balenciaga Pope’. Time. https://time.com/6266606/how-to-spot-deepfake-pope/

Hausken, L. (2024). Photorealism versus photography. AI-generated depiction in the age of visual disinformation. Journal of aesthetics & culture, 16(1), 2340787. https://doi.org/10.1080/20004214.2024.2340787

Bontcheva, K., Papadopoulos, S., Tsalakanidou, F., Gallotti, R., Dutkiewicz, L., Krack, N., … & Srba, I. (2024). Generative AI and disinformation: Recent advances, challenges, and opportunities. https://lirias.kuleuven.be/retrieve/758830

Yoo, Y., Na, D., Nathanson, S., Cao, Y., & Watkins, L. (2024). Disinformation at scale: Detecting AI-human composite images via convolution ensembles. MILCOM 2024-2024 IEEE military communications conference (MILCOM) (pp. 621–626). IEEE. https://doi.org/10.1109/MILCOM61039.2024.10773642

Jagadish, T., & Jasmine, S. G. (2024). Detection of AI-generated image content in news and journalism. 2024 15th international conference on computing communication and networking technologies (ICCCNT) (pp. 1–6). IEEE. https://doi.org/10.1109/ICCCNT61001.2024.10724589

Moeßner, P., & Adel, H. (2024). Human vs. AI: A novel benchmark and a comparative study on the detection of generated images and the impact of prompts. ArXiv Preprint ArXiv:2412.09715. https://doi.org/10.48550/arXiv.2412.09715

Cetinic, E., & She, J. (2022). Understanding and creating art with AI: Review and outlook. ACM transactions on multimedia computing, communications, and applications, 18(2), 1–22. https://doi.org/10.1145/3475799

Bellaiche, L., Shahi, R., Turpin, M. H., Ragnhildstveit, A., Sprockett, S., Barr, N., … & Seli, P. (2023). Humans versus AI: Whether and why we prefer human-created compared to AI-created artwork. Cognitive research: principles and implications, 8(1), 1-22. https://doi.org/10.1186/s41235-023-00499-6

Guo, H., Hu, S., Wang, X., Chang, M. C., & Lyu, S. (2022). Eyes tell all: Irregular pupil shapes reveal GAN-generated faces. ICASSP 2022-2022 IEEE international conference on acoustics, speech and signal processing (ICASSP) (pp. 2904–2908). IEEE. https://doi.org/10.1109/ICASSP43922.2022.9746597

Hua, M., Li, S., & Wang, J. (2025). A dual-stream model based on PRNU and quaternion RGB for detecting fake faces. PloS one, 20(1), e0314041. https://doi.org/10.1371/journal.pone.0314041

Xiao, S., Guo, Y., Peng, H., Liu, Z., Yang, L., & Wang, Y. (2025). Generalizable AI-generated image detection based on fractal self-similarity in the spectrum. ArXiv Preprint ArXiv:2503.08484. https://doi.org/10.48550/arXiv.2503.08484

Kusuma, S. W., Natalia, F., Ko, C. S., & Sudirman, S. (2024). Detection of AI-generated anime images using deep learning. ICIC express letters, part B: Applications, 15(3), 295–301. https://doi.org/10.24507/icicelb.15.03.295

Tolosana, R., Vera-Rodriguez, R., Fierrez, J., Morales, A., & Ortega-Garcia, J. (2020). Deepfakes and beyond: A survey of face manipulation and fake detection. Information fusion, 64, 131–148. https://doi.org/10.1016/j.inffus.2020.06.014

Ghiurău, D., & Popescu, D. E. (2025). Distinguishing reality from AI: Approaches for detecting synthetic content. Computers, 14(1), 1–33. https://doi.org/10.3390/computers14010001

Afchar, D., Nozick, V., Yamagishi, J., & Echizen, I. (2018). Mesonet: A compact facial video forgery detection network. 2018 IEEE international workshop on information forensics and security (WIFS) (pp. 1–7). IEEE. https://doi.org/10.1109/WIFS.2018.8630761

Frank, J., Eisenhofer, T., Schönherr, L., Fischer, A., Kolossa, D., & Holz, T. (2020). Leveraging frequency analysis for deep fake image recognition. International conference on machine learning (pp. 3247–3258). PMLR. http://proceedings.mlr.press/v119/frank20a

Chen, H., Gu, J., Chen, A., Tian, W., Tu, Z., Liu, L., & Su, H. (2023). Single-stage diffusion NeRF: A unified approach to 3D generation and reconstruction. ArXiv Preprint ArXiv:2304.06714. https://doi.org/10.48550/arXiv.2304.06714

Zadeh, L. A. (1965). Fuzzy sets. Information and control, 8(3), 338–353. https://doi.org/10.1016/S0019-9958(65)90241-X

Kutlu Gündoğdu, F., & Kahraman, C. (2019). Spherical fuzzy sets and spherical fuzzy TOPSIS method. Journal of intelligent & fuzzy systems, 36(1), 337–352. https://doi.org/10.3233/JIFS-181401

Kieu, P. T., Nguyen, V. T., Nguyen, V. T., & Ho, T. P. (2021). A spherical fuzzy analytic hierarchy process (SF-AHP) and combined compromise solution (CoCoSo) algorithm in distribution center location selection: A case study in agricultural supply chain. Axioms, 10(2), 1–13. https://doi.org/10.3390/axioms10020053

Published

2025-06-24

How to Cite

Suha Hancioglu, A., & Yilmaz, I. (2025). Distinguishing AI-Generated Images from Real Images: A Multi-Criteria Analysis Using SF-AHP. Systemic Analytics, 3(2), 152-163. https://doi.org/10.31181/sa32202550