Technical | 04/17/2024

A Deep Dive into Retrieval Augmented Generation (RAG) vs. Fine-tuning

Chris Munna
Senior ML Scientist

Transformers have revolutionized the way we approach natural language processing (NLP) tasks, offering unparalleled capabilities in generating human-like text and understanding complex language patterns. Nevertheless, baseline models still have significant limitations in their capacity to analyze data, form long-term plans, and solve general problems. Many techniques are being researched and deployed to enhance the performance and applicability of transformer models. This blog post focuses on two in particular: Retrieval Augmented Generation (RAG) and fine-tuning. Both methods offer distinct advantages and have their specific use cases. We will unpack these techniques, providing insights into how they operate and illustrating their best applications with examples.

The Art of Fine-tuning Transformer Models

Fine-tuning involves adapting a pre-trained transformer model to a specific task or dataset by continuing the training process with targeted data. This method leverages the general language understanding capabilities of the base model, directing its focus towards the particular nuances of the new task or dataset.

During fine-tuning, the model's weights are slightly adjusted based on a smaller, task-specific dataset. This process tailors the model's predictions to fit the specific needs of the task, enhancing its performance significantly on similar types of data.
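
As a concrete illustration, here is a minimal sketch of full fine-tuning using the Hugging Face transformers and datasets libraries. The model name, dataset, and hyperparameters below are illustrative assumptions, not recommendations:

```python
# A minimal full fine-tuning sketch with Hugging Face transformers.
# Model, dataset, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# A smaller, task-specific dataset (IMDB sentiment is used here as an example).
dataset = load_dataset("imdb")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)

dataset = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="finetuned-model",
    num_train_epochs=1,              # a short continuation of training
    per_device_train_batch_size=16,
    learning_rate=2e-5,              # small steps: weights are only slightly adjusted
)

# Continue training on the task-specific data; padding is handled dynamically.
Trainer(model=model, args=args, train_dataset=dataset["train"],
        tokenizer=tokenizer).train()
```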

It is also important to note that one does not have to adjust all model weights during fine-tuning, as various lightweight “parameter-efficient fine-tuning” (PEFT) methods have been developed. These methods, such as LoRA or IA3, introduce a small number of additional parameters, which are tuned to the new dataset [1, 2, 3]. See [4] for a comprehensive review of such methods.
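
For example, a LoRA adapter [1] can be attached to a pretrained model in a few lines with the Hugging Face peft library. This is a sketch rather than a recipe; the target module names depend on the architecture (the name below is correct for GPT-2):

```python
# A minimal LoRA sketch using the Hugging Face peft library.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("gpt2")

config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=16,              # scaling factor applied to the update
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
)
model = get_peft_model(base, config)

# Prints a summary; typically well under 1% of parameters are trainable.
# Only the adapter weights are updated; the base model stays frozen.
model.print_trainable_parameters()
```

The wrapped model can then be trained with the same loop or Trainer as in the full fine-tuning sketch above.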

Use cases

Fine-tuning is particularly effective for tasks where a high degree of specialization is required, such as:

  • Textual analysis: Adapting a general-purpose transformer model to understand and react to the specific language and expressions used in a particular body of documents.
  • Subject-specific jargon: Fine-tuning a model on a corpus of specialized texts (e.g., medical notes or legal textbooks) so that it better understands and generates the language particular to that field.
  • Subtasks within a more general framework: For example, DNA sequence models are often pretrained on raw sequence data and then fine-tuned separately on downstream prediction tasks. Likewise, natural language transformers are pretrained on a broad text corpus and then fine-tuned to improve question answering, coding, or general helpfulness.

Understanding Retrieval Augmented Generation (RAG)

Retrieval Augmented Generation is a cutting-edge technique, first published in 2020, designed to enrich the transformer model's input with external information [5]. This process involves locating relevant documents or data from an external corpus and then generating a response based on both the original input and the retrieved information. RAG operates on the principle of expanding the model's knowledge base beyond its initial training data, allowing it to provide more accurate, detailed, and contextual responses.

RAG models typically consist of two core components: a retriever and a generator. The retriever first selects relevant documents from an external database based on the initial input. These documents are then passed along with the original input to the generator, which synthesizes the final output by integrating information from both sources.

In the context of transformers, the external data source usually takes the form of a vector database, in which units of information (e.g., paragraphs, facts, definitions) are embedded as high-dimensional vectors. The retriever locates the information whose embedding is most similar to that of the initial input and then attaches it to the prompt in a specified manner. When the number of vectors is small, the vector similarities can be computed exactly for all embeddings in the database. However, in the more common case where the database is large, an approximate nearest-neighbor search such as Hierarchical Navigable Small World (HNSW) is typically used [6].
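
To make this concrete, here is a minimal sketch of the retrieval step: embed a toy corpus, score it against the query by cosine similarity, and prepend the best matches to the prompt. The embedding model and corpus are illustrative assumptions, and the generator call at the end is a placeholder for whatever model produces the final response:

```python
# A minimal retrieval sketch: exact cosine-similarity search over a toy corpus.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

corpus = [
    "HNSW builds a layered proximity graph for fast approximate nearest-neighbor search.",
    "RAG pairs a retriever with a generator to ground responses in external documents.",
    "LoRA injects trainable low-rank matrices into a frozen pretrained model.",
]
corpus_emb = embedder.encode(corpus, normalize_embeddings=True)

query = "How does retrieval augmented generation work?"
query_emb = embedder.encode([query], normalize_embeddings=True)[0]

# With unit-normalized embeddings, cosine similarity is just a dot product.
# Exact search is fine at this scale; a large database would use an
# approximate index instead (e.g., via the hnswlib or faiss libraries).
scores = corpus_emb @ query_emb
top_k = np.argsort(scores)[::-1][:2]

# Attach the retrieved passages to the original input for the generator.
prompt = "Context:\n" + "\n".join(corpus[i] for i in top_k) + f"\n\nQuestion: {query}"
# answer = generator(prompt)  # placeholder for the generation step
```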

Use cases

RAG shines in scenarios where the transformer model might require additional context or information outside its training data to generate accurate responses. Examples include:

  • Question answering systems: Enhancing the model's ability to provide detailed answers by retrieving relevant context from a larger database (such as Wikipedia).
  • Factual grounding: RAG systems can reduce hallucinations in text generation by supplying the model with detailed, relevant facts for a given query. One recent paper achieved 98% factual accuracy in conversations about recent topics by grounding responses in Wikipedia, a major advance over GPT-4 [7].
  • Content recommendation: By retrieving and analyzing user-relevant content, RAG can generate personalized suggestions.
  • Document summarization: For large files that cannot fit into a single prompt, RAG can supply the model with important details from the entire document.

RAG vs. Fine-tuning: Choosing the Right Approach

As described above, fine-tuning tailors a pre-trained model to specific tasks or data nuances by adjusting it with more specialized, task-focused training. This approach can substantially increase a model’s effectiveness within a particular domain. RAG, meanwhile, extends a transformer model’s knowledge base by integrating an external information retrieval step. This process not only amplifies the model's ability to produce more informed and contextually rich responses but also bridges gaps in its original training data. At a high level, fine-tuning can be thought of as studying a subject and then taking a test from memory, while RAG can be thought of as taking an “open-book” test with limited additional study.

The decision between employing Retrieval Augmented Generation (RAG) and fine-tuning techniques for transformer models depends on nuanced considerations about the task's nature, resource availability, and the desired outcome's specificity.

When RAG is More Appropriate

  • Vast information usage: Applications that require a wide array of grounded factual information, beyond the model’s capacity to memorize. Examples include educational platforms, scientific inquiry, medical diagnosis, and legal precedent research.
  • Real-time feeds: When the task involves staying abreast of the latest developments (e.g., stock market analysis, summarizing recent scientific discoveries), RAG uniquely offers the ability to dynamically pull in the most current data.
  • Demand for flexibility: Pretrained models are often competent on a wide range of tasks. RAG can be used to allow such models to bridge gaps in training data and improve outcomes without risking over-specialization.

When Fine-tuning is More Appropriate

  • Highly specialized domains: For tasks where precision based on a refined understanding of domain-specific language is paramount (e.g., legal document analysis, technical support in IT), fine-tuning on a curated dataset helps ensure accurate and meaningful responses.
  • Textual analysis: For tasks like sentiment analysis, spam detection, or language translation, the focus is mainly on understanding and/or categorizing the input text based on its content and context. These tasks often benefit from the rich representations learned by transformer models through fine-tuning on labeled datasets.
  • Limited resources: RAG requires access to a relevant and comprehensive external database for retrieval, making it more resource-intensive compared to fine-tuning, which can be effectively executed with a relatively smaller specialized dataset.
  • Pure sequence modeling: Though natural language gets most of the attention, there are many classes of problems suited to transformer models that operate purely on encoded sequences, for which it is not really possible to construct a relevant database for retrieval. Examples include DNA sequence modeling and some types of time series analysis. For these, it is typically necessary to fine-tune on relevant datasets.
  • Creative generation: When generating content within a well-defined scope or style (such as stories, characters, images, or poetry), fine-tuning can capture the creativity and style of the training data, which is often more important than the retrieval of any particular factual information.

Why choose one?

This section has presented the two approaches as something of a dichotomy, but they are by no means mutually exclusive. In fact, there are many cases where it is ideal to employ both: when a model is taking an important test, we want it to have studied the material and to have an open book to draw from! Indeed, some of the examples given above appear in both the RAG and fine-tuning sections. Major use cases include specialty fields with large volumes of relevant facts (e.g., medicine, law, scientific research), where factual grounding and nuanced discussion are both critically important.

Conclusion

Both Retrieval Augmented Generation and fine-tuning offer distinct pathways toward enhancing the capabilities of transformer models, each suited to specific circumstances and demands of tasks. The choice of one or the other (or both) ultimately boils down to the specific requirements of the task and the available resources. As the field of NLP continues to evolve, understanding and selecting the appropriate technique for a given application becomes crucial in leveraging the full potential of transformer models.

For a deeper understanding and more detailed insights, readers are encouraged to refer to the listed references and explore the wealth of information available on recent advancements in the implementation of RAG and fine-tuning strategies.

References

[1] Hu, E. J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., & Chen, W. LoRA: Low-Rank Adaptation of Large Language Models. International Conference on Learning Representations (2022).

[2] Zhang, R., Han, J., Liu, C., Gao, P., Zhou, A., Hu, X., Yan, S., Lu, P., Li, H., & Qiao, Y. LLaMA-Adapter: Efficient Fine-tuning of Language Models with Zero-init Attention. arXiv:2303.16199 (2023).

[3] Liu, H., Tam, D., Muqeeth, M., Mohta, J., Huang, T., Bansal, M., & Raffel, C. Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning. arXiv:2205.05638 (2022).

[4] Xu, L., Xie, H., Qin, S.-Z. J., Tao, X., & Wang, F. L. Parameter-Efficient Fine-Tuning Methods for Pretrained Language Models: A Critical Review and Assessment. arXiv:2312.12148 (2023).

[5] Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., Küttler, H., Lewis, M., Yih, W., Rocktäschel, T., Riedel, S., & Kiela, D. Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. NeurIPS (2020).

[6] Malkov, Y. A., & Yashunin, D. A. Efficient and Robust Approximate Nearest Neighbor Search Using Hierarchical Navigable Small World Graphs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(4), 824-836 (2020).

[7] Semnani, S. J., Yao, V. Z., Zhang, H. C., & Lam, M. S. WikiChat: Stopping the Hallucination of Large Language Model Chatbots by Few-Shot Grounding on Wikipedia. EMNLP (2023).

