Benchmarking Human-Like Quality in Neural Machine Translation Systems

Authors

  • Arooj Basharat, University of the Punjab, Lahore, Pakistan
  • Zilly Huma, University of Gujrat, Pakistan

Abstract

Assessing the human-like quality of neural machine translation (NMT) outputs is a vital yet difficult task in natural language processing (NLP). In this research, we detail the mechanisms and criteria used to evaluate human-likeness in NMT, centering on evaluation approaches that cover linguistic fluency, semantic accuracy, and cultural correctness. The work examines the use of well-established measures such as BLEU (Bilingual Evaluation Understudy) and METEOR (Metric for Evaluation of Translation with Explicit ORdering), as well as newer metrics designed to capture the subtler elements of human-like translation. Furthermore, we discuss the role of human assessment alongside automated metrics as an assistive tool that acknowledges the subjective nature of translation. Through empirical analysis across various datasets and language pairs, we expose the limitations of current evaluation frameworks in capturing the near-human qualities they purport to measure. The results consistently demonstrate the need for more nuanced evaluation strategies that account for cultural context, idiomatic expressions, and stylistic nuances, thereby supporting the development of NMT systems that effectively mimic human translation proficiency.
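As a point of reference for the automated metrics discussed above, the following minimal sketch shows how corpus-level BLEU and sentence-level METEOR scores might be computed in Python. It assumes the sacrebleu and nltk packages are installed; the example hypothesis and reference sentences are hypothetical and not drawn from the paper's datasets.

```python
# Minimal sketch of automatic MT evaluation with BLEU and METEOR.
# Assumes sacrebleu and nltk are installed; sentences below are illustrative only.
import sacrebleu
import nltk
from nltk.translate.meteor_score import meteor_score

nltk.download("wordnet", quiet=True)  # METEOR uses WordNet for synonym matching

hypotheses = ["The cat sat on the mat.", "He kicked the bucket yesterday."]
references = [["The cat is sitting on the mat.", "He passed away yesterday."]]

# Corpus-level BLEU: surface n-gram overlap, insensitive to synonyms or idioms.
bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU: {bleu.score:.2f}")

# Sentence-level METEOR: adds stemming and WordNet synonym matching,
# so it partially rewards paraphrases that BLEU would penalise.
for hyp, ref in zip(hypotheses, references[0]):
    score = meteor_score([ref.split()], hyp.split())
    print(f"METEOR: {score:.3f}  ({hyp!r} vs {ref!r})")
```

The idiomatic second pair ("kicked the bucket" vs. "passed away") illustrates the abstract's point: surface-overlap metrics score such translations poorly even when a human would judge them equivalent, which is why human assessment and culturally aware metrics remain necessary.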

Published

2024-07-16

Issue

Section

Articles