How Good are LM and LLMs in Bangla Newspaper Article Summarization
Published in the 27th International Conference on Pattern Recognition (ICPR), 2024
ABSTRACT
This research explores the performance of various language models on Bangla text generation, with an emphasis on abstractive text summarization, specifically newspaper headline generation. Motivated by the lack of diversity in previous newspaper article datasets, we created a new evaluation dataset from Bangla online newspapers, focusing on the most recent and most varied news. The dataset covers a wider range of article types and draws on a greater number of newspapers than previous datasets. Through comprehensive experimentation and evaluation, we identify BanglaT5 and GPT-3.5 as standout performers in this domain. While GPT-3.5 falls short of the fine-tuned BanglaT5, it notably outperforms the other large language models (LLMs), with a margin exceeding 10%. Moreover, our analysis indicates that the fine-tuned BanglaT5 outperforms GPT-3.5 by 5% on both ROUGE-1 and ROUGE-L, demonstrating the effectiveness of fine-tuning for capturing the subtleties of this task. These findings underscore the pivotal role of model fine-tuning and highlight the nuanced interplay between different language models: while LLMs are making progress, they still do not match fine-tuned traditional LMs in the Bangla language processing landscape.
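For context, the comparison above rests on ROUGE-1 and ROUGE-L F-scores between generated and reference headlines. Below is a minimal sketch of such an evaluation loop, not the paper's exact pipeline: it assumes the publicly available `csebuetnlp/banglat5` checkpoint, the Hugging Face `transformers` library, and the `rouge-score` package. The whitespace tokenizer is our own assumption, needed because `rouge-score`'s default tokenizer discards non-Latin (Bangla) characters.

```python
# Minimal sketch of headline generation plus ROUGE-1 / ROUGE-L scoring.
# Assumptions (not the paper's exact setup): the public csebuetnlp/banglat5
# checkpoint, beam-search decoding, and whitespace tokenization for ROUGE.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from rouge_score import rouge_scorer

MODEL_NAME = "csebuetnlp/banglat5"  # assumed checkpoint; swap in a fine-tuned one

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, use_fast=False)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)


def generate_headline(article: str, max_new_tokens: int = 32) -> str:
    """Generate an abstractive headline for a Bangla article via beam search."""
    inputs = tokenizer(article, return_tensors="pt",
                       truncation=True, max_length=512)
    output_ids = model.generate(**inputs, num_beams=4,
                                max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)


class WhitespaceTokenizer:
    """ROUGE tokenizer that keeps Bangla tokens (the default drops non-Latin text)."""

    def tokenize(self, text: str):
        return text.split()


scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"],
                                  tokenizer=WhitespaceTokenizer())

article = "..."              # a Bangla newspaper article body
reference_headline = "..."   # its human-written headline
predicted_headline = generate_headline(article)

scores = scorer.score(reference_headline, predicted_headline)
print("ROUGE-1 F1:", scores["rouge1"].fmeasure)
print("ROUGE-L F1:", scores["rougeL"].fmeasure)
```

Note that the BanglaT5 model card additionally recommends normalizing Bangla text before tokenization, a step omitted here for brevity; corpus-level scores would average these per-example F1 values over the evaluation set.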
Recommended citation: Faria Sultana, Md Tahmid Hasan Fuad, Md Fahim, Rahat Rizvi Rahman, Meheraj Hossain, M Ashraful Amin, A K M Mahbubur Rahman, Amin Ahsan Ali. (2024). "How Good are LM and LLMs in Bangla Newspaper Article Summarization," in Proceedings of the 27th International Conference on Pattern Recognition, ICPR 2024.