Target Class Transformation for Segmentation Task Using U-GAN

Authors

  • Ya. O. Isaienkov, Vinnytsia National Technical University
  • O. B. Mokin, Vinnytsia National Technical University

DOI:

https://doi.org/10.31649/1997-9266-2024-172-1-81-87

Keywords:

augmentation, data generation, generative adversarial network, segmentation, deep learning, GAN, U-GAN, U-generator

Abstract

The paper presents a review of modern generative adversarial models for data augmentation, focusing on research aimed at creating images together with their corresponding segmentation masks. This task is particularly useful when data are insufficient, hard to access, or confidential, or when labeling requires significant resources. The paper addresses the task of augmenting the minority class by transforming an image of another class and creating a segmentation mask for it. A new approach is proposed for the simultaneous generation of the image and the segmentation mask, using a generative adversarial network with a U-Net generator. The generator takes an image of one class and noise, which is fed as an additional image channel. It tries to create an image of the other class, minimizing changes to the original image while adding features of the target class along with the segmentation mask of the new class. The discriminator then determines whether an image-mask pair is real or generated. To preserve the original appearance of the input image, an algorithm is used that applies only those changes of the generated image that are indicated by the created segmentation mask. This technique makes it possible to obtain an image with features of the new class while changing the input as little as possible. The practical implementation of the proposed approach was carried out on a dataset of panoramic dental X-rays, from which a set of images of individual teeth was created, some with fillings and some without. The experimental dataset included 128 teeth without fillings and 128 with fillings. The GAN was trained to transform images without fillings into similar images with fillings using all input images. To check the effectiveness of this augmentation, two experiments of 50 simulations each with different random states were conducted for training a U-Net segmentation model with a ResNet-34 backbone. The first experiment used only real data for training, while the second included 64 additional images and masks created by the generator from existing zero-class images. The average Jaccard scores over all simulations for the first and second experiments were 94.2 and 96.1, respectively. This result indicates that data generated with the proposed augmentation helps improve the quality of segmentation models and that the approach can be combined with other augmentation techniques.
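The mask-guided blending step described in the abstract can be illustrated with a minimal sketch: the original pixels are kept everywhere except inside the generated segmentation mask, so only the new-class features (e.g., a filling) are transplanted onto the input tooth image. The sketch below assumes PyTorch tensors scaled to [0, 1]; the function name and the 0.5 binarization threshold are illustrative assumptions, not details given in the paper.

```python
import torch

def apply_masked_changes(original: torch.Tensor,
                         generated: torch.Tensor,
                         mask: torch.Tensor,
                         threshold: float = 0.5) -> torch.Tensor:
    """Blend the generated image into the original only where the
    generated segmentation mask marks the new-class region."""
    # Binarize the predicted mask (the 0.5 threshold is an assumption).
    binary_mask = (mask > threshold).float()
    # Generated pixels inside the mask, original pixels everywhere else.
    return binary_mask * generated + (1.0 - binary_mask) * original
```

Blending in this way guarantees that regions outside the predicted mask remain identical to the input, which is what keeps the transformation minimal.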
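The augmentation was evaluated by training a U-Net with a ResNet-34 backbone and comparing the Jaccard score with and without the generated image-mask pairs. One possible setup for such an experiment is sketched below, assuming the segmentation_models_pytorch library, single-channel X-ray inputs, and a binary filling mask; these implementation choices are assumptions rather than details confirmed by the paper.

```python
import segmentation_models_pytorch as smp
import torch

# U-Net with a ResNet-34 encoder, matching the segmentation model named in the abstract.
# Single-channel input and a single binary output class are assumptions.
model = smp.Unet(
    encoder_name="resnet34",
    encoder_weights="imagenet",
    in_channels=1,
    classes=1,
)

def jaccard_index(logits: torch.Tensor, target: torch.Tensor,
                  threshold: float = 0.5, eps: float = 1e-7) -> torch.Tensor:
    """Intersection over union (Jaccard index) for binary masks."""
    pred = (torch.sigmoid(logits) > threshold).float()
    intersection = (pred * target).sum()
    union = pred.sum() + target.sum() - intersection
    return (intersection + eps) / (union + eps)
```

In the reported experiments the only difference between the two training configurations is whether the 64 generator-produced image-mask pairs are added to the real training examples.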

Author Biographies

Ya. O. Isaienkov, Vinnytsia National Technical University

Post-Graduate Student of the Chair of System Analysis and Information Technologies

O. B. Mokin, Vinnytsia National Technical University

Dr. Sc. (Eng.), Professor, Professor of the Chair of System Analysis and Information Technologies

References

P. Dhariwal and A. Q. Nichol, “Diffusion Models Beat GANs on Image Synthesis,” in Advances in Neural Information Processing Systems, 2021. [Online]. Available: https://openreview.net/forum?id=AAWuCvzaVt. Accessed on: January 30, 2024.

V. Sandfort, K. Yan, P. J. Pickhardt, et al., “Data augmentation using generative adversarial networks (CycleGAN) to improve generalizability in CT segmentation tasks,” Sci Rep, vol. 9, Article no. 16884, 2019. https://doi.org/10.1038/s41598-019-52737-x .

H. Mansourifar, L. Chen and W. Shi, “Virtual Big Data for GAN Based Data Augmentation,” 2019 IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, 2019, pp. 1478-1487, https://doi.org/10.1109/BigData47090.2019.9006268 .

A. Sauer, K. Schwarz, and A. Geiger, “StyleGAN-XL: Scaling StyleGAN to Large Diverse Datasets,” in ACM SIGGRAPH 2022 Conference Proceedings (SIGGRAPH ‘22), Association for Computing Machinery, New York, NY, USA, 2022, Article 49, pp. 1–10. https://doi.org/10.1145/3528233.3530738 .

O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” Computer Science Department and BIOSS Centre for Biological Signalling Studies, University of Freiburg, Germany, 2015. [Online]. Available: https://arxiv.org/pdf/1505.04597.pdf . Accessed on: January 30, 2024.

R. Gulakala, B. Markert, and M. Stoffel, “Generative adversarial network based data augmentation for CNN based detection of Covid-19,” Sci Rep, vol. 12, Article no. 19186, 2022. https://doi.org/10.1038/s41598-022-23692-x .

X. Chen, et al., “Generative Adversarial U-Net for Domain-free Medical Image Augmentation,” in arXiv e-prints, 2021. [Online]. Available: https://arxiv.org/pdf/2101.04793.pdf . Accessed on: January 30, 2024.

E. Yıldız, et al., “Generative Adversarial Network Based Automatic Segmentation of Corneal Subbasal Nerves on In Vivo Confocal Microscopy Images,” Trans. Vis. Sci. Tech., vol. 10, no. 6, Article 33, 2021. https://doi.org/10.1167/tvst.10.6.33 .

T. Neff, C. Payer, D. Štern, and M. Urschler, “Generative Adversarial Network based Synthesis for Supervised Medical Image Segmentation,” OAGM & ARW Joint Workshop, 2017. https://doi.org/10.3217/978-3-85125-524-9-30 .

C. Bowles, et al., “GAN Augmentation: Augmenting Training Data using Generative Adversarial Networks,” in arXiv e-prints, 2018. [Online]. Available: https://arxiv.org/abs/1810.10863 . Accessed on: January 30, 2024.

V. Sushko, D. Zhang, J. Gall, and A. Khoreva, “One-Shot Synthesis of Images and Segmentation Masks,” in arXiv e-prints, 2022. [Online]. Available: https://arxiv.org/abs/2209.07547 . Accessed on: January 30, 2024.

T. Malygina, E. Ericheva, and I. Drokin, “Data Augmentation with GAN: Improving Chest X-Ray Pathologies Prediction on Class-Imbalanced Cases,” in W. van der Aalst et al. (Eds.), Analysis of Images, Social Networks and Texts, AIST 2019, Lecture Notes in Computer Science, vol. 11832, Springer, Cham, 2019. https://doi.org/10.1007/978-3-030-37334-4_29 .

H. Abdi, S. Kasaei, and M. Mehdizadeh, “Automatic segmentation of mandible in panoramic x-ray,” J. Med. Imaging (Bellingham), vol. 2, no. 4, 044003, 2015. [Online]. Available: https://www.academia.edu/36038975/PreProcessing_of_Dental_X-Ray_Images_Using_Adaptive_Histogram_Equalization_Method. Accessed on: January 30, 2024.

Ya. O. Isaienkov and O. B. Mokin, “Analysis of generative deep learning models and the peculiarities of their implementation on the example of WGAN,” Вісник Вінницького політехнічного інституту, no. 1, pp. 82-94, March 2022. https://doi.org/10.31649/1997-9266-2022-160-1-82-94 .

O. V. Komenchuk and O. B. Mokin, “Analysis of preprocessing methods for panoramic dental X-ray images for image segmentation tasks,” Вісник Вінницького політехнічного інституту, no. 5, pp. 41-49, November 2023. https://doi.org/10.31649/1997-9266-2023-170-5-41-49 .

Published

2024-02-27

How to Cite

[1] Y. O. Isaienkov and O. B. Mokin, “Target Class Transformation for Segmentation Task Using U-GAN,” Вісник ВПІ, no. 1, pp. 81–87, Feb. 2024.

Issue

No. 1 (2024)

Section

Information technologies and computer sciences
