Method of Text Generation Based on the BERT LLM

Authors

  • A. A. Yarovyi Vinnytsia National Technical University
  • D. S. Kudriavtsev Vinnytsia National Technical University

DOI:

https://doi.org/10.31649/1997-9266-2024-177-6-113-120

Keywords:

BERT, terminological knowledge bases, semantic search, language models, term generation

Abstract

The application of the BERT language model to term search and generation in terminological knowledge bases (TKB), optimized for intelligent chatbots, is proposed. The architecture of the BERT model, its bidirectional attention mechanism, text processing algorithms, and the main stages of model training are described. The use of BERT for semantic term search is considered, along with methods for adapting the model to text generation that take into account the semantic value of each term. A comparative analysis of BERT with models of the GPT series is carried out, highlighting the strengths and weaknesses of BERT in search and generative tasks. The paper also examines metrics for evaluating the quality of term search, such as Precision, Recall, F1-score, Mean Reciprocal Rank (MRR), and Normalized Discounted Cumulative Gain (nDCG), which together allow a comprehensive assessment of the effectiveness of term search and generation. Practical aspects of integrating BERT into knowledge management systems are discussed, and recommendations are given for fine-tuning the model for specialized TKBs. The ethical aspects of using language models are also emphasized, in particular the risks of bias in term search and generation and the importance of ensuring the accuracy and objectivity of generated results; the responsible use of BERT to avoid incorrect or harmful conclusions during automatic knowledge processing is discussed. Software was developed for testing the BERT language model, and its training was evaluated on several datasets; the results demonstrated the high efficiency of the BERT language model with the proposed optimizations for text generation tasks. Potential improvements of BERT for working with TKBs are considered, including fine-tuning on domain-specific data, using the multilingual version of BERT to process multilingual knowledge bases, and optimization techniques that improve performance in resource-constrained environments. Approaches to testing and evaluating search effectiveness are proposed, combining expert evaluations with automatic metrics. The final part of the article outlines directions for future research, including the integration of BERT with neural search systems, the automatic generation of new terms, and the expansion of the functionality of knowledge management systems based on deep learning.
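To make the described approach concrete, below is a minimal sketch of semantic term search with BERT: terms and queries are encoded by mean-pooling the model's final hidden layer, and candidates are ranked by cosine similarity. It assumes the HuggingFace transformers library; the checkpoint name and the sample TKB entries are illustrative assumptions, not the implementation developed in the paper.

    # A minimal sketch, assuming the HuggingFace "transformers" library;
    # the checkpoint and TKB entries below are illustrative assumptions.
    import torch
    import torch.nn.functional as F
    from transformers import AutoTokenizer, AutoModel

    MODEL_NAME = "bert-base-multilingual-cased"  # assumed checkpoint
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModel.from_pretrained(MODEL_NAME)
    model.eval()

    def embed(texts):
        # Mean-pool the final hidden layer over non-padding tokens.
        batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**batch).last_hidden_state         # (B, T, H)
        mask = batch["attention_mask"].unsqueeze(-1).float()  # (B, T, 1)
        return (hidden * mask).sum(dim=1) / mask.sum(dim=1)   # (B, H)

    # Hypothetical TKB entries.
    terms = ["neural network", "language model", "knowledge graph", "gradient descent"]
    term_vecs = embed(terms)

    def search(query, k=3):
        # Rank TKB terms by cosine similarity to the query embedding.
        scores = F.cosine_similarity(embed([query]), term_vecs)
        top = torch.topk(scores, k=min(k, len(terms)))
        return [(terms[int(i)], float(s)) for s, i in zip(top.values, top.indices)]

    print(search("transformer-based text model"))

The search-quality metrics named in the abstract can likewise be computed directly from ranked relevance judgments; the following small sketch of MRR and nDCG uses made-up relevance lists purely for illustration.

    import math

    def mrr(ranked_relevance):
        # Mean Reciprocal Rank: average of 1/rank of the first relevant hit per query.
        total = 0.0
        for rels in ranked_relevance:
            rank = next((i + 1 for i, r in enumerate(rels) if r), None)
            total += 1.0 / rank if rank else 0.0
        return total / len(ranked_relevance)

    def ndcg(gains, k=None):
        # Normalized DCG for one ranked list of graded relevance gains.
        k = k or len(gains)
        dcg = sum(g / math.log2(i + 2) for i, g in enumerate(gains[:k]))
        ideal = sum(g / math.log2(i + 2)
                    for i, g in enumerate(sorted(gains, reverse=True)[:k]))
        return dcg / ideal if ideal > 0 else 0.0

    # Two queries with first relevant hits at ranks 2 and 1: MRR = (1/2 + 1) / 2 = 0.75
    print(mrr([[0, 1, 0], [1, 0, 0]]))
    print(ndcg([3, 1, 2]))  # ranked gains compared against the ideal ordering [3, 2, 1]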

Author Biographies

A. A. Yarovyi, Vinnytsia National Technical University

Dr. Sc. (Eng.), Professor, Head of the Chair of Computer Science

D. S. Kudriavtsev, Vinnytsia National Technical University

Post-Graduate Student, Assistant of the Chair of Computer Science

References

A. Subakti, H. Murfi, and N. Hariadi, “The performance of BERT as data representation of text clustering,” J. Big Data, vol. 9, art. no. 15, 2022. https://doi.org/10.1186/s40537-022-00564-9.

W. Liu, et al., “K-BERT: Enabling Language Representation with Knowledge Graph,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 3, 2020, pp. 2901-2908. https://doi.org/10.1609/aaai.v34i03.5681.

S. Shen, et al., “Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 5, 2020, pp. 8815-8821. https://doi.org/10.1609/aaai.v34i05.6409.

M. Pankiewicz, “Large Language Models (GPT) for automating feedback on programming assignments,” in 31st International Conference on Computers in Education (ICCE), vol. 13, 2023. https://doi.org/10.48550/arXiv.2307.00150.

A. Moffat, “Computing Maximized Effectiveness Distance for Recall-Based Metrics,” IEEE Transactions on Knowledge and Data Engineering, vol. 30, no. 1, pp. 198-203, Jan. 2018. https://doi.org/10.1109/TKDE.2017.2754371.

J. Sun, et al., “Deep learning-based methods for natural hazard named entity recognition,” Sci. Rep., vol. 12, art. no. 4598, 2022. https://doi.org/10.1038/s41598-022-08667-2.

A. Bello, S.-C. Ng, and M.-F. Leung, “A BERT Framework to Sentiment Analysis of Tweets,” Sensors, vol. 23, no. 1, art. no. 506, 2023. https://doi.org/10.3390/s23010506.

Y. Chen, X. Kou, J. Bai, and Y. Tong, “Improving BERT With Self-Supervised Attention,” IEEE Access, vol. 9, pp. 144129-144139, 2021. https://doi.org/10.1109/ACCESS.2021.3122273.

M. H. Syed, and S.-T. Chung, “MenuNER: Domain-Adapted BERT Based NER Approach for a Domain with Limited Dataset and Its Application to Food Menu Domain,” Applied Sciences, vol. 11, no. 13, art. no. 6007, 2021. https://doi.org/10.3390/app11136007.

A. Wang, and K. Cho, “BERT has a Mouth, and It Must Speak: BERT as a Markov Random Field Language Model,” 2019. https://doi.org/10.48550/arXiv.1902.04094.

A. Shaji George, A. S. Hovan George, T. Baskar, and A. S. Gabrio Martin, “Revolutionizing Business Communication: Exploring the Potential of GPT-4 in Corporate Settings,” Partners Universal International Research Journal (PUIRJ), vol. 2, no. 1, pp. 149-157, Mar. 2023. https://doi.org/10.5281/zenodo.7775900.

S. Jacobs, and S. Jaschke, “Leveraging Lecture Content for Improved Feedback: Explorations with GPT-4 and Retrieval Augmented Generation,” in 36th International Conference on Software Engineering Education and Training (CSEE&T), Würzburg, Germany, 2024, pp. 1-5. https://doi.org/10.1109/CSEET62301.2024.10663001.

A. Gabriel, Kensho Derived Wikimedia Dataset, 2020. [Online]. Available: https://www.kaggle.com/datasets/kenshoresearch/kensho-derived-wikimedia-data. Accessed: Sep. 1, 2024.

B. D. Lund, and T. Wang, “Chatting about ChatGPT: how may AI and GPT impact academia and libraries?” Library Hi Tech News, vol. 40, no. 3, pp. 26-29, 2023. https://doi.org/10.1108/LHTN-01-2023-0009.


Published

2024-12-27

How to Cite

[1] A. A. Yarovyi and D. S. Kudriavtsev, “Method of Text Generation Based on the BERT LLM,” Вісник ВПІ, no. 6, pp. 113–120, Dec. 2024.

Section

Information technologies and computer sciences
