Class-oriented Method of Fundus Images Augmentation
DOI: https://doi.org/10.31649/1997-9266-2025-182-5-140-145

Keywords: machine learning, augmentation, neural networks, medical images

Abstract
An innovative class-oriented method of fundus image augmentation is proposed, and its advantages over the standard method are described. Augmentation strategies for images showing signs of glaucoma, cataract, diabetic retinopathy, and a healthy eye are substantiated and selected according to the specifics of each class, so that augmentation approximates real clinical variation. Robustness is improved precisely to those variations that are characteristic of a specific disease, and sensitivity and specificity to pathologies in medical images are increased. A neural network model was developed using the class-oriented method. Via transfer learning, the following were added on top of an EfficientNetB3 backbone: a GlobalAveragePooling layer, Dropout with a rate of 0.5, a Dense layer with 1024 neurons and l2 regularization with a coefficient of 0.001, and a Dense classification layer with 4 neurons and softmax activation. Half of the layers of the base model were frozen. The model was compiled with the Adam optimizer with an initial learning rate of 0.0001 and the categorical crossentropy loss function. During pre-processing, images were resized to 224 by 224 pixels; normalization was performed automatically during data generation for training. The following callbacks were used to control training: ModelCheckpoint to save the best model, EarlyStopping to halt training if the val_accuracy metric did not improve for 15 epochs, and ReduceLROnPlateau to reduce the learning rate by a factor of 3 on stagnation. Training yielded high metric values. The trained model was compressed by quantization for subsequent use on mobile and embedded devices. The proposed approach makes it possible to increase the overall accuracy and robustness of the neural network and to overcome the limitations of the traditional method.
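The class-oriented idea described above can be sketched as a per-class augmentation policy: each class gets only those transforms that match its clinically plausible variation. The sketch below is a minimal NumPy illustration; the specific class-to-transform mapping (haze for cataract, brightness shifts for glaucoma, etc.) and all function names are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def adjust_brightness(img, factor):
    """Scale pixel intensities, clipping to [0, 1]."""
    return np.clip(img * factor, 0.0, 1.0)

def horizontal_flip(img):
    """Mirror the image left-right (anatomically plausible for fundus photos)."""
    return img[:, ::-1, :]

def lens_haze(img, strength):
    """Blend toward mid-grey to imitate cataract-like haze."""
    return np.clip(img * (1.0 - strength) + 0.5 * strength, 0.0, 1.0)

# Hypothetical per-class policies: each class receives only transforms
# that approximate its real clinical variation.
CLASS_POLICIES = {
    "glaucoma": [lambda im: adjust_brightness(im, rng.uniform(0.8, 1.2))],
    "cataract": [lambda im: lens_haze(im, rng.uniform(0.1, 0.3))],
    "diabetic_retinopathy": [
        horizontal_flip,
        lambda im: adjust_brightness(im, rng.uniform(0.9, 1.1)),
    ],
    "healthy": [
        horizontal_flip,
        lambda im: adjust_brightness(im, rng.uniform(0.8, 1.2)),
    ],
}

def augment(img, label, p=0.5):
    """Apply each transform of the class's policy with probability p."""
    for transform in CLASS_POLICIES[label]:
        if rng.random() < p:
            img = transform(img)
    return img
```

In contrast to the standard (class-agnostic) method, which applies one transform set to every image, this lookup keeps each class's augmentations within the variation actually observed for that pathology.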
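The transfer-learning architecture and training setup described in the abstract can be sketched in Keras roughly as follows. This is a sketch under stated assumptions: `weights=None` replaces the ImageNet transfer weights to keep it self-contained, the ReLU activation of the 1024-neuron Dense layer, the ReduceLROnPlateau patience, and the checkpoint filename are not given in the abstract and are assumed here.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# EfficientNetB3 backbone; the paper uses pretrained transfer weights,
# weights=None here only to keep the sketch self-contained.
base = keras.applications.EfficientNetB3(
    include_top=False, weights=None, input_shape=(224, 224, 3))

# Freeze the first half of the base model's layers, as described.
for layer in base.layers[: len(base.layers) // 2]:
    layer.trainable = False

model = keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.5),
    layers.Dense(1024, activation="relu",  # activation is an assumption
                 kernel_regularizer=keras.regularizers.l2(0.001)),
    layers.Dense(4, activation="softmax"),  # glaucoma/cataract/DR/healthy
])

model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=1e-4),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)

callbacks = [
    keras.callbacks.ModelCheckpoint("best_model.keras",  # assumed filename
                                    monitor="val_accuracy",
                                    save_best_only=True),
    keras.callbacks.EarlyStopping(monitor="val_accuracy", patience=15),
    keras.callbacks.ReduceLROnPlateau(monitor="val_accuracy",
                                      factor=1 / 3,
                                      patience=5),  # patience assumed
]
# model.fit(train_gen, validation_data=val_gen, callbacks=callbacks, ...)
```

Freezing half of the backbone preserves the generic low-level features learned during pretraining while letting the deeper layers adapt to fundus-specific patterns.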
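The abstract does not name the quantization tool (TensorFlow Lite post-training quantization would be typical for mobile deployment); the minimal NumPy sketch below only illustrates the underlying idea of affine weight quantization, shrinking float32 weights fourfold to uint8 at a bounded reconstruction error.

```python
import numpy as np

def quantize_uint8(w):
    """Affine per-tensor quantization of float weights to uint8."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    q = np.round((w - lo) / scale).astype(np.uint8)
    return q, scale, lo  # quantized weights, scale, zero point

def dequantize(q, scale, zero):
    """Recover an approximation of the original float weights."""
    return q.astype(np.float32) * scale + zero
```

Each weight is stored in one byte instead of four, and the round-trip error is at most half a quantization step (scale / 2), which is why accuracy typically degrades only slightly after compression.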
License

This work is licensed under a Creative Commons Attribution 4.0 International License.
Authors who publish with this journal agree to the following terms:
- Authors retain copyright and grant the journal right of first publication.
- Authors are able to enter into separate, additional contractual arrangements for the non-exclusive distribution of the journal's published version of the work (e.g., post it to an institutional repository or publish it in a book), with an acknowledgment of its initial publication in this journal.
- Authors are permitted and encouraged to post their work online (e.g., in institutional repositories or on their website) prior to and during the submission process, as it can lead to productive exchanges, as well as earlier and greater citation of published work (See The Effect of Open Access).