AI-Powered CNN Model for Automated Lung Cancer Diagnosis in Medical Imaging
DOI:
https://doi.org/10.6000/1929-6029.2025.14.58
Keywords:
Pulmonary Cancer, Convolutional Neural Networks, IQ-OTHNCCD Dataset, Diagnostic Imaging, AI Healthcare, Image Recognition
Abstract
Lung cancer remains one of the leading causes of cancer-related mortality worldwide, making early and accurate diagnosis a critical health concern. This paper develops a convolutional neural network (CNN) for automated lung cancer diagnosis using the IQ-OTHNCCD dataset, a publicly available collection of CT images annotated by medical experts. Preprocessing converts the images to grayscale, normalizes pixel values to [0, 1], and resizes them to a standard 128x128 pixels before they are fed to the CNN. The proposed model integrates multi-scale convolutional layers with adaptive dropout (rate = 0.5) and ReLU activations; the architecture comprises three convolutional blocks (32/64/128 filters, 3x3 kernels), 2x2 max-pooling, flattening, a 512-unit dense layer, and a 3-unit softmax output. On a 70/15/15 train/validation/test split, the model achieves 95% accuracy and a 0.95 F1-score (95% CI: 93.8–96.2%), a 4% improvement in F1-score. These findings underscore the potential of deep learning methods to assist radiologists in diagnosing lung abnormalities and to improve AI-based healthcare products. Future enhancements will focus on hyperparameter tuning, 3D CNN architectures, and the fusion of patient clinical data, with the aim of exceeding 97% diagnostic accuracy and further improving system performance.
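As a rough illustration of the preprocessing described above (grayscale conversion, normalization of pixel values to [0, 1], and resizing to 128x128 pixels), the following Python sketch uses OpenCV and NumPy. The helper name preprocess_ct_slice and the file-based loading are illustrative assumptions, not details taken from the paper.

```python
# Minimal preprocessing sketch for one CT slice, following the steps in the
# abstract: grayscale conversion, resizing to 128x128, and scaling to [0, 1].
# The helper name and file-path input are illustrative assumptions.
import cv2
import numpy as np

IMG_SIZE = 128  # target width/height stated in the abstract

def preprocess_ct_slice(path: str) -> np.ndarray:
    """Return a (128, 128, 1) float32 array ready to feed the CNN."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)   # load and convert to grayscale
    if img is None:
        raise FileNotFoundError(f"Could not read image: {path}")
    img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))    # enforce a consistent 128x128 size
    img = img.astype(np.float32) / 255.0           # normalize pixel values to [0, 1]
    return img[..., np.newaxis]                    # add a channel axis for the CNN
```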
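The network layout stated in the abstract (three 3x3 convolutional blocks with 32, 64, and 128 filters, ReLU activations, 2x2 max-pooling, flattening, a 512-unit dense layer with dropout of 0.5, and a 3-unit softmax output) can be sketched in Keras as follows. This is a minimal sketch: the optimizer, loss function, and padding are assumptions, and the multi-scale and adaptive-dropout refinements mentioned above are not reproduced here.

```python
# Illustrative Keras sketch of the CNN described in the abstract:
# three 3x3 conv blocks (32/64/128 filters, ReLU) with 2x2 max-pooling,
# flattening, a 512-unit dense layer, dropout of 0.5, and a 3-way softmax.
# Optimizer, loss, and padding are assumptions, not taken from the paper.
from tensorflow import keras
from tensorflow.keras import layers

def build_model(input_shape=(128, 128, 1), num_classes=3) -> keras.Model:
    model = keras.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(2),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(2),
        layers.Conv2D(128, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(2),
        layers.Flatten(),
        layers.Dense(512, activation="relu"),
        layers.Dropout(0.5),                              # dropout rate stated in the abstract
        layers.Dense(num_classes, activation="softmax"),  # 3-unit softmax output
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Training and evaluation on the 70/15/15 split would then use standard fit and evaluate calls on the preprocessed arrays.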
References
Zhou J, Xu Y, Liu J, Feng L, Yu J, Chen D. Global burden of lung cancer in 2022 and projections to 2050: Incidence and mortality estimates from GLOBOCAN. Cancer Epidemiology, 2024; 93: 102693. DOI: https://doi.org/10.1016/j.canep.2024.102693
Maleki Varnosfaderani S, Forouzanfar M. The role of AI in hospitals and clinics: transforming healthcare in the 21st century. Bioengineering 2024; 11(4): 337. DOI: https://doi.org/10.3390/bioengineering11040337
Bouamrane A, Derdour M, Bennour A, Elfadil Eisa TA, Emara M, Al-Sarem AH, Kurdi MNA. Toward Robust Lung Cancer Diagnosis: Integrating Multiple CT Datasets, Curriculum Learning, and Explainable AI. Diagnostics 2024; 15(1): 1. DOI: https://doi.org/10.3390/diagnostics15010001
Thanoon MA, Zulkifley MA, Mohd Zainuri MAA, Abdani SR. A review of deep learning techniques for lung cancer screening and diagnosis based on CT images. Diagnostics 2023; 13(16): 2617. DOI: https://doi.org/10.3390/diagnostics13162617
Krishnan B. A Hybrid CNN-GLCM Classifier for Detection and Grade Classification of Brain Tumor 2021.
Thandra KC, Barsouk A, Saginala K, Aluru JS, Barsouk A. Epidemiology of lung cancer. Contemporary Oncology/Współczesna Onkologia 2021; 25(1): 45-52. DOI: https://doi.org/10.5114/wo.2021.103829
Shatnawi MQ, Abuein Q, Al-Quraan R. Deep learning-based approach to diagnose lung cancer using CT-scan images. Intelligence-Based Medicine 2025; 11: 100188. DOI: https://doi.org/10.1016/j.ibmed.2024.100188
Song M, Tao D, Chen C, Bu J, Yang Y. Color-to-gray based on chance of happening preservation. Neurocomputing 2013; 119: 222-231. DOI: https://doi.org/10.1016/j.neucom.2013.03.037
Patel PR, De Jesus O. CT scan 2021.
Farhang E, Toosi R, Karami B, Koushki R, Kheirkhah N, Shakerian F, Dehaqani MRA. The impact of spatial frequency on hierarchical category representation in macaque temporal cortex. Communications Biology 2025; 8(1): 801. DOI: https://doi.org/10.1038/s42003-025-08230-5
Nargesian F, Samulowitz H, Khurana U, Khalil EB, Turaga DS. Learning feature engineering for classification. In IJCAI 2017; 17: 2529-2535. DOI: https://doi.org/10.24963/ijcai.2017/352
Yamashita R, Nishio M, Do RKG, Togashi K. Convolutional neural networks: an overview and application in radiology. Insights into Imaging 2018; 9(4): 611-629. DOI: https://doi.org/10.1007/s13244-018-0639-9
Kelly S, Kaye SA, Oviedo-Trespalacios O. What factors contribute to the acceptance of artificial intelligence? A systematic review. Telematics and Informatics 2023; 77: 101925. DOI: https://doi.org/10.1016/j.tele.2022.101925
Altarabichi MG, Nowaczyk S, Pashami S, Mashhadi PS, Handl J. Rolling the dice for better deep learning performance: A study of randomness techniques in deep neural networks. Information Sciences 2024; 667: 120500. DOI: https://doi.org/10.1016/j.ins.2024.120500
Ciobotaru A, Bota MA, Goța DI, Miclea LC. Multi-instance classification of breast tumor ultrasound images using convolutional neural networks and transfer learning. Bioengineering 2023; 10(12): 1419. DOI: https://doi.org/10.3390/bioengineering10121419
Dar SA, et al. Improving Alzheimer’s Disease Detection with Transfer Learning. Int J Stat Med Res 2025; 14: 403-415. DOI: https://doi.org/10.6000/1929-6029.2025.14.39
Dar SA, Palanivel S, Geetha MK, Balasubramanian M. Mouth Image Based Person Authentication Using DWLSTM and GRU. Inf Sci Lett 2022; 11(3): 853-862. DOI: https://doi.org/10.18576/isl/110317
Dar SA, Palanivel S. Performance Evaluation of Convolutional Neural Networks (CNNs) and VGG on Real Time Face Recognition System. Adv Sci Technol Eng Syst J 2021; 6(2): 956-964. DOI: https://doi.org/10.25046/aj0602109
Dar SA, Palanivel S. Real Time Face Authentication System Using Stacked Deep Auto Encoder for Facial Reconstruction. Int J Thin Film Sci Technol 2022; 11(1): 73-82. DOI: https://doi.org/10.18576/ijtfst/110109
Dar SA, Palanivel S. Real-Time Face Authentication Using Denoised Autoencoder (DAE) for Mobile Devices 2022; 21(6): 163-176. DOI: https://doi.org/10.4018/978-1-7998-9795-8.ch011
Ayadi W, Saidi A, Channoufi I. Exploring Human Activity Patterns: Investigating Feature Extraction Techniques for Improved Recognition with ANN. 7th IEEE Int Conf Adv Technol Signal Image Process ATSIP 2024; 1: 188-193. DOI: https://doi.org/10.1109/ATSIP62566.2024.10639004
License

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Policy for Journals/Articles with Open Access
Authors who publish with this journal agree to the following terms:
- Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.
- Authors are permitted and encouraged to post links to their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work.
Policy for Journals/Manuscripts with Paid Access
Authors who publish with this journal agree to the following terms:
- Publisher retains copyright.
- Authors are permitted and encouraged to post links to their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work.