About the encoder and generator used in the RAG model

Hi, I have some questions about the RAG model.
In the paper, the query encoder is DPR and the generator is BART.

My questions are:

  1. Is the generator a full BART model, or just the decoder part of BART?
  2. If I implement RAG with BART's encoder as the query encoder and BART's decoder as the generator, does that make sense w.r.t. the RAG concept? That setup seems more intuitive to me. Why did they use a 'heterogeneous' setting?



  1. The generator is the full BART encoder-decoder. If you have a RAG model, you can access it via model.generator.
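To make the nesting concrete, here is a minimal toy sketch (hypothetical class names, not the real Hugging Face API) of how the components sit inside a RAG model, with the generator holding both an encoder and a decoder:

```python
# Toy sketch of how RAG's components nest. All class names here are
# illustrative stand-ins, not the actual transformers classes.

class BartEncoder:
    pass

class BartDecoder:
    pass

class BartModel:
    """A full seq2seq BART: it owns both an encoder and a decoder."""
    def __init__(self):
        self.encoder = BartEncoder()
        self.decoder = BartDecoder()

class DprQuestionEncoder:
    """Stand-in for the DPR query encoder used for retrieval."""
    pass

class RagModel:
    def __init__(self):
        self.question_encoder = DprQuestionEncoder()  # DPR, retrieval side
        self.generator = BartModel()                  # full encoder-decoder

model = RagModel()
# The generator is a complete BART, not just a decoder:
print(type(model.generator.encoder).__name__)  # BartEncoder
print(type(model.generator.decoder).__name__)  # BartDecoder
```

So `model.generator` gives you the whole seq2seq model, and the decoder alone would be `model.generator.decoder` (or `model.generator.model.decoder` in the real library, depending on the class).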

  2. RAG’s question encoder is not the same as the encoder inside RAG’s generator … This really may be confusing, so let me try to explain :smiley:

    • The question encoder encodes the “question” so the retriever can fetch “documents” (so-called “contexts”).
    • Then the retriever concatenates the “contexts” with the “question”; this concatenated text is the new input.
    • This new input is encoded by BART’s encoder, and the answer is generated by BART’s decoder.
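The three steps above can be sketched end to end with toy stand-ins (simple word-overlap "embeddings" and a dummy generator, nothing from the real transformers API):

```python
# Toy sketch of the RAG forward path: encode question -> retrieve ->
# concatenate -> generate. Every function is an illustrative stand-in.

def question_encoder(question: str) -> set:
    # Stand-in "embedding": the set of lowercased words.
    return set(question.lower().split())

def retrieve(question_emb: set, corpus: list, k: int = 2) -> list:
    # Score each document by word overlap with the question embedding,
    # keep the top-k as the retrieved "contexts".
    scored = sorted(
        corpus,
        key=lambda doc: len(question_emb & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def generator(inputs: list) -> str:
    # Stand-in for BART: its encoder would read each "context + question"
    # string, and its decoder would generate the answer token by token.
    return f"answer conditioned on {len(inputs)} context(s)"

corpus = [
    "Paris is the capital of France.",
    "BART is a seq2seq model.",
    "The sky is blue.",
]
question = "What is the capital of France?"

q_emb = question_encoder(question)               # encode the question
contexts = retrieve(q_emb, corpus)               # retrieve documents
inputs = [c + " " + question for c in contexts]  # concatenate context + question
print(generator(inputs))                         # encode + decode the new input
```

The key point the sketch illustrates: the question encoder's output is only used for retrieval, while the generator's own encoder sees the concatenated context-plus-question text.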

Hope this helps!


Hi, thanks for the reply! I understand it better now.