WSEAS Transactions on Information Science and Applications
Print ISSN: 1790-0832, E-ISSN: 2224-3402
Volume 22, 2025
Enhancing Language Models with Retrieval-Augmented Generation: A Comparative Study on Performance
Abstract: Retrieval-Augmented Generation (RAG) is a powerful technique that enhances the capabilities of
Large Language Models (LLMs) by integrating information retrieval with text generation. By accessing and
incorporating relevant external knowledge, RAG systems address the limitations of traditional LLMs, such as
memory constraints and the inability to access up-to-date information. This research explores the implementation
and evaluation of RAG systems, focusing on their potential to improve the accuracy and relevance of LLM
responses. It investigates the impact of different LLM types (causal, question-answering, conversational)
and retrieval-augmentation strategies (sentence-level, paragraph-level) on the performance of RAG systems.
We conducted experiments using various open-source LLMs and a custom-built RAG system to assess the
effectiveness of different approaches. The findings indicate that RAG systems can significantly enhance the
performance of LLMs, especially for complex questions that require access to diverse information sources.
T5 conversational models, in particular, demonstrate strong performance in synthesis-based tasks, effectively
combining information from multiple retrieved documents. However, causal and question-answering models
may struggle with complex reasoning and synthesis, even with RAG augmentation.
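The retrieval-augmentation strategies compared in the study (sentence-level versus paragraph-level chunking) can be illustrated with a minimal sketch. This is not the paper's implementation: the bag-of-words embedding, the chunking heuristics, and all function names are illustrative assumptions, standing in for the neural sentence encoders and vector databases a real RAG system would use.

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words "embedding"; real RAG systems use neural
    # sentence encoders and store vectors in a vector database.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse bag-of-words vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def chunk(corpus, level):
    # Paragraph-level: each blank-line-separated paragraph is one
    # retrieval unit; sentence-level: each sentence is a unit.
    paragraphs = [p.strip() for p in corpus.split("\n\n") if p.strip()]
    if level == "paragraph":
        return paragraphs
    return [s.strip() for p in paragraphs
            for s in p.split(".") if s.strip()]

def retrieve(query, corpus, level="sentence", k=2):
    # Rank chunks by similarity to the query and keep the top k.
    q = embed(query)
    units = chunk(corpus, level)
    ranked = sorted(units, key=lambda u: cosine(q, embed(u)), reverse=True)
    return ranked[:k]

def build_prompt(query, corpus, level="sentence", k=2):
    # The core RAG step: prepend retrieved context to the question
    # before passing the prompt to the LLM.
    context = "\n".join(retrieve(query, corpus, level, k))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
```

Varying the `level` argument while holding the generator fixed is one way to compare chunking granularities, as the study does across causal, question-answering, and conversational models.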
Keywords: Retrieval-Augmented Generation (RAG), Large Language Models (LLMs), Knowledge Retrieval,
Contextual Embeddings, Information Retrieval, Neural Networks, LLM Transformer Models, Vector Databases
Pages: 272-297
DOI: 10.37394/23209.2025.22.23