Domain-Specific Fine-Tuning of BERT and ChatGPT for Enhanced Medical Text Analysis
Abstract
This paper investigates the effect of domain-specific fine-tuning on models such as BERT and ChatGPT for medical text analysis. We evaluate both models on named entity recognition (NER), relation extraction, and medical document classification, and compare their performance to determine where each model succeeds and where it falls short. By adapting BERT and ChatGPT to medical vocabulary and terminology, we aim to characterize their respective strengths and weaknesses and to shed light on their practical application in the medical domain.
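To make the comparison concrete, NER systems such as the fine-tuned BERT and ChatGPT models described above are typically scored with entity-level precision, recall, and F1 against gold annotations. The sketch below is an illustrative example of that metric, not code from the paper; the span format `(start, end, label)` and the toy DISEASE/DRUG annotations are assumptions for demonstration.

```python
# Illustrative sketch (not from the paper): entity-level P/R/F1 for NER,
# the standard way model predictions are compared against gold annotations.
def ner_f1(gold, pred):
    """gold, pred: iterables of (start, end, label) entity spans."""
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)  # exact-match true positives
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return precision, recall, f1

# Toy example: hypothetical gold spans vs. one model's predictions
gold = {(0, 9, "DISEASE"), (15, 24, "DRUG")}
pred = {(0, 9, "DISEASE"), (30, 38, "DRUG")}  # one hit, one spurious span
p, r, f = ner_f1(gold, pred)
print(f"P={p:.2f} R={r:.2f} F1={f:.2f}")  # P=0.50 R=0.50 F1=0.50
```

Exact span matching is strict: a prediction with the right label but shifted boundaries counts as both a false positive and a false negative, which is why entity-level F1 is usually lower than token-level accuracy.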
License
Copyright (c) 2024 Journal of Computational Innovation
This work is licensed under a Creative Commons Attribution 4.0 International License.