Cervical Cancer Pap Smear Classification through a Meta-Learning Technique using Convolutional Neural Networks
DOI: https://doi.org/10.37034/medinftech.v1i4.23

Keywords: Machine Learning, Convolutional Neural Networks, Cervical Cancer, Cancerous, Benign Tumors

Abstract
This study uses convolutional neural networks (CNNs) and meta-learning techniques to build an accurate and efficient model for classifying cervical cancer risk factors. The dataset includes four types of cervical lesions, and the main objective is to categorize these lesions as either benign or malignant. This classification is essential for early and successful treatment of cervical cancer. The challenge arises from the complexity of and variation in the images, which prevents conventional machine learning and deep learning approaches from classifying them correctly. Meta-ensemble learning approaches are therefore employed to improve the model's classification accuracy. The dataset of cervical cancer risk factors is preprocessed before being used to train and evaluate several CNNs built on pre-trained models with various architectures. A meta-learner is then employed to optimize the learning process and to aggregate the outputs of the multiple CNNs. The assessment findings show that the model achieves high accuracy and effectiveness. Finally, the proposed model's accuracy is compared against current state-of-the-art methods used by existing systems.
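To make the described pipeline concrete, below is a minimal sketch of such a meta-ensemble in Python with TensorFlow/Keras and scikit-learn. The backbone choices (VGG16, ResNet50, MobileNetV2), the input size, the logistic-regression meta-learner, and the data placeholders (x_train, y_val, etc.) are illustrative assumptions; the abstract does not specify the exact architectures or stacking strategy used.

import numpy as np
import tensorflow as tf
from sklearn.linear_model import LogisticRegression

def build_base_cnn(backbone_fn, num_classes=2, input_shape=(224, 224, 3)):
    """Wrap a pre-trained backbone with a small classification head."""
    backbone = backbone_fn(weights="imagenet", include_top=False,
                           input_shape=input_shape)
    backbone.trainable = False  # train only the new head at first
    model = tf.keras.Sequential([
        backbone,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Level-0 learners: several CNNs built on different pre-trained architectures
# (per-backbone input preprocessing is omitted here for brevity).
backbones = [tf.keras.applications.VGG16,
             tf.keras.applications.ResNet50,
             tf.keras.applications.MobileNetV2]
base_models = [build_base_cnn(b) for b in backbones]

def stack_predictions(models, x):
    """Concatenate each base CNN's class probabilities into meta-features."""
    return np.hstack([m.predict(x, verbose=0) for m in models])

# x_train/y_train, x_val/y_val, x_test: hypothetical placeholders for the
# preprocessed Pap smear images and benign/malignant labels.
# for m in base_models:
#     m.fit(x_train, y_train, epochs=5, batch_size=32)
# meta_X = stack_predictions(base_models, x_val)
# meta_learner = LogisticRegression(max_iter=1000).fit(meta_X, y_val)
# final_pred = meta_learner.predict(stack_predictions(base_models, x_test))

In this stacking arrangement the meta-learner sees only the base models' predicted probabilities, so it learns how much to trust each CNN rather than re-learning the images; its training uses held-out (validation) predictions to avoid leaking the base models' training data.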