RAG
----------------------------------------------------

Overview
~~~~~~~~~~~~~~~~~~~~~

Retrieval-augmented generation ("RAG") models combine the strengths of pretrained dense passage retrieval (DPR) and
sequence-to-sequence models. RAG models retrieve documents, pass them to a seq2seq model, then marginalize over the
documents to generate outputs. The retriever and seq2seq modules are initialized from pretrained models and fine-tuned
jointly, allowing both retrieval and generation to adapt to downstream tasks.
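
For a first look at how these pieces fit together, here is a minimal end-to-end sketch, assuming the pretrained
``facebook/rag-sequence-nq`` checkpoint and the small dummy retrieval index (``use_dummy_dataset=True``) rather than
the full Wikipedia index, so the generated answers are only indicative:

.. code-block:: python

    from transformers import RagRetriever, RagSequenceForGeneration, RagTokenizer

    # Load the tokenizer, a retriever backed by the small dummy index, and the model.
    tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
    retriever = RagRetriever.from_pretrained(
        "facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True
    )
    model = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever)

    # Encode the question, retrieve documents, and marginalize over them while generating.
    input_dict = tokenizer.prepare_seq2seq_batch("who holds the record in 100m freestyle", return_tensors="pt")
    generated = model.generate(input_ids=input_dict["input_ids"])
    print(tokenizer.batch_decode(generated, skip_special_tokens=True))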

It is based on the paper `Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks
<https://arxiv.org/abs/2005.11401>`__ by Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir
Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, Sebastian Riedel and Douwe Kiela.

The abstract from the paper is the following:

*Large pre-trained language models have been shown to store factual knowledge
in their parameters, and achieve state-of-the-art results when fine-tuned on
downstream NLP tasks. However, their ability to access and precisely manipulate
knowledge is still limited, and hence on knowledge-intensive tasks, their
performance lags behind task-specific architectures. Additionally, providing
provenance for their decisions and updating their world knowledge remain open
research problems. Pre-trained models with a differentiable access mechanism to
explicit nonparametric memory can overcome this issue, but have so far been only
investigated for extractive downstream tasks. We explore a general-purpose
fine-tuning recipe for retrieval-augmented generation (RAG) — models which combine
pre-trained parametric and non-parametric memory for language generation. We
introduce RAG models where the parametric memory is a pre-trained seq2seq model and
the non-parametric memory is a dense vector index of Wikipedia, accessed with
a pre-trained neural retriever. We compare two RAG formulations, one which
conditions on the same retrieved passages across the whole generated sequence, the
other can use different passages per token. We fine-tune and evaluate our models
on a wide range of knowledge-intensive NLP tasks and set the state-of-the-art
on three open domain QA tasks, outperforming parametric seq2seq models and
task-specific retrieve-and-extract architectures. For language generation tasks, we
find that RAG models generate more specific, diverse and factual language than a
state-of-the-art parametric-only seq2seq baseline.*


RagConfig
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.RagConfig
    :members:
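
The configuration is composite: it nests the configurations of the question encoder and of the generator. A minimal
sketch of building one from sub-model configurations, assuming a DPR question encoder, a BART generator and the
``from_question_encoder_generator_configs`` helper:

.. code-block:: python

    from transformers import BartConfig, DPRConfig, RagConfig

    # Compose a RAG configuration from the two sub-model configurations.
    question_encoder_config = DPRConfig()
    generator_config = BartConfig()
    rag_config = RagConfig.from_question_encoder_generator_configs(
        question_encoder_config, generator_config, n_docs=5
    )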


RagTokenizer
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.RagTokenizer
    :members: prepare_seq2seq_batch
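
The tokenizer wraps two sub-tokenizers: source texts are encoded with the question encoder tokenizer and target texts
with the generator tokenizer. A minimal sketch, assuming the pretrained ``facebook/rag-token-nq`` checkpoint:

.. code-block:: python

    from transformers import RagTokenizer

    tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")

    # src_texts go through the question encoder tokenizer,
    # tgt_texts through the generator tokenizer (returned as "labels").
    batch = tokenizer.prepare_seq2seq_batch(
        src_texts=["who holds the record in 100m freestyle"],
        tgt_texts=["michael phelps"],
        return_tensors="pt",
    )
    print(batch["input_ids"].shape, batch["labels"].shape)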


Rag specific outputs
~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.modeling_rag.RetrievAugLMMarginOutput
    :members:

.. autoclass:: transformers.modeling_rag.RetrievAugLMOutput
    :members:


RagRetriever
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.RagRetriever
    :members:
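
Instantiating a retriever loads or builds the document index, and the full Wikipedia indices are very large downloads.
A minimal sketch that instead uses the small dummy dataset shipped for testing:

.. code-block:: python

    from transformers import RagRetriever

    # index_name="exact" selects the uncompressed dense index;
    # use_dummy_dataset=True swaps in a tiny test index instead of the full one.
    retriever = RagRetriever.from_pretrained(
        "facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True
    )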


RagModel
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.RagModel
    :members: forward
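
A minimal sketch of a forward pass with the base model, assuming the pretrained ``facebook/rag-token-base`` checkpoint
and the dummy index; the returned :class:`~transformers.modeling_rag.RetrievAugLMOutput` exposes, among other things,
the retrieval scores of the selected documents:

.. code-block:: python

    from transformers import RagModel, RagRetriever, RagTokenizer

    tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-base")
    retriever = RagRetriever.from_pretrained(
        "facebook/rag-token-base", index_name="exact", use_dummy_dataset=True
    )
    model = RagModel.from_pretrained("facebook/rag-token-base", retriever=retriever)

    input_dict = tokenizer.prepare_seq2seq_batch("How many people live in Paris?", return_tensors="pt")
    outputs = model(input_ids=input_dict["input_ids"])
    print(outputs.doc_scores.shape)  # (batch_size, n_docs)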


RagSequenceForGeneration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.RagSequenceForGeneration
    :members: forward, generate
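
This is the "RAG-Sequence" formulation, which conditions on the same retrieved passages for the whole output sequence.
A minimal sketch of computing the marginalized loss from labels, assuming the pretrained ``facebook/rag-sequence-nq``
checkpoint and the dummy index:

.. code-block:: python

    from transformers import RagRetriever, RagSequenceForGeneration, RagTokenizer

    tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq")
    retriever = RagRetriever.from_pretrained(
        "facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True
    )
    model = RagSequenceForGeneration.from_pretrained("facebook/rag-sequence-nq", retriever=retriever)

    # Passing target texts produces "labels"; the forward pass then returns
    # the negative log-likelihood marginalized over the retrieved documents.
    input_dict = tokenizer.prepare_seq2seq_batch(
        "How many people live in Paris?", "In Paris, there are 10 million people.", return_tensors="pt"
    )
    outputs = model(input_ids=input_dict["input_ids"], labels=input_dict["labels"])
    print(outputs.loss)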


RagTokenForGeneration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. autoclass:: transformers.RagTokenForGeneration
    :members: forward, generate
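
This is the "RAG-Token" formulation, which can marginalize over different retrieved passages per generated token. A
minimal sketch of generation, assuming the pretrained ``facebook/rag-token-nq`` checkpoint and the dummy index:

.. code-block:: python

    from transformers import RagRetriever, RagTokenForGeneration, RagTokenizer

    tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
    retriever = RagRetriever.from_pretrained(
        "facebook/rag-token-nq", index_name="exact", use_dummy_dataset=True
    )
    model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever)

    input_dict = tokenizer.prepare_seq2seq_batch("who sings does he love me with reba", return_tensors="pt")
    generated = model.generate(input_ids=input_dict["input_ids"])
    print(tokenizer.batch_decode(generated, skip_special_tokens=True))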