Domain-Specific Fine-Tuning of BERT and ChatGPT for Enhanced Medical Text Analysis

Authors

  • Atika Nishat, University of Gujrat, Pakistan
  • Areej Mustafa, University of Gujrat, Pakistan

Abstract

This paper examines the effect of domain-specific fine-tuning of models such as BERT and ChatGPT on medical text analysis. The study compares how well these models perform on tasks such as named entity recognition (NER), relation extraction, and medical document classification. By adapting BERT and ChatGPT to medical-domain vocabulary and data, we aim to identify where each model performs well and where it falls short, providing a clearer picture of their respective strengths, limitations, and practical applicability in the medical field.
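As a rough illustration of the kind of domain-specific fine-tuning the abstract refers to, the sketch below adapts a BERT encoder for medical NER as a token-classification task. It assumes the Hugging Face transformers and PyTorch stack; the checkpoint name, label set, and toy sentence are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: fine-tuning a BERT encoder for medical NER (token classification).
# The checkpoint, BIO label set, and example sentence are hypothetical placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["O", "B-DISEASE", "I-DISEASE", "B-DRUG", "I-DRUG"]
label2id = {l: i for i, l in enumerate(labels)}

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(labels)
)

# One toy example: pre-tokenized words with word-level BIO tags.
words = ["Patient", "prescribed", "metformin", "for", "type", "2", "diabetes"]
word_tags = ["O", "O", "B-DRUG", "O", "B-DISEASE", "I-DISEASE", "I-DISEASE"]

enc = tokenizer(words, is_split_into_words=True, return_tensors="pt", truncation=True)

# Align word-level tags to subword tokens; special tokens get -100 so the loss ignores them.
aligned = []
for word_id in enc.word_ids(batch_index=0):
    aligned.append(-100 if word_id is None else label2id[word_tags[word_id]])
labels_tensor = torch.tensor([aligned])

# A single gradient step stands in for the full fine-tuning loop over a medical corpus.
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
model.train()
out = model(**enc, labels=labels_tensor)
out.loss.backward()
optimizer.step()
print(f"loss after one step: {out.loss.item():.4f}")
```

In practice this loop would run over an annotated medical corpus and be evaluated with entity-level precision, recall, and F1, which is the style of comparison the abstract describes for BERT versus ChatGPT.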

Published

2024-09-12

Section

Articles