Analysis

This paper addresses the important problem of detecting AI-generated text, focusing on Bengali, a language that has received comparatively little attention in this area. The study compares zero-shot and fine-tuned transformer models, demonstrating the substantial improvement achieved through fine-tuning. The findings are valuable for developing tools to combat the misuse of AI-generated content in Bengali.
Reference

Fine-tuning significantly improves performance, with XLM-RoBERTa, mDeBERTa, and MultilingualBERT each achieving around 91% in both accuracy and F1-score.