Convert torch tensor back to its string representation

Hi,

I am trying to use the RAG module to build context for a query from a custom knowledge dataset. The generated answer string works fine (I only generate it as a test; generation is not the objective).

I want to find out what context has been retrieved. I have the context_input_ids from the model output, which I assume encodes the retrieved context as a torch.Tensor. How do I get the string value back from that tensor object?

from transformers import AutoTokenizer, RagRetriever, RagTokenForGeneration
import torch
tokenizer = AutoTokenizer.from_pretrained("facebook/rag-token-nq")
dataset_path = "my_knowledge_dataset"  # dataset saved via dataset.save_to_disk(…)
index_path = "my_knowledge_dataset_hnsw_index.faiss"

retriever = RagRetriever.from_pretrained(
    "facebook/rag-token-nq", index_name="custom",
    passages_path=dataset_path, index_path=index_path)

# initialize with RagRetriever to do everything in one forward call

model = RagTokenForGeneration.from_pretrained(
    "facebook/rag-token-nq", retriever=retriever, output_retrieved=True)

question = "What does Moses' rod turn into ?"
input_ids = tokenizer.question_encoder(question, return_tensors="pt")["input_ids"]
outputs = model(input_ids=input_ids)

print(outputs.context_input_ids)

generated = model.generate(input_ids)
generated_string = tokenizer.batch_decode(generated, skip_special_tokens=True)[0]
print(question)
print(generated_string)

I think decoding those IDs back through the tokenizer should work:

tokenizer.batch_decode(outputs.context_input_ids, skip_special_tokens=True)
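To illustrate what batch_decode does under the hood, here is a minimal, self-contained sketch with a toy vocabulary (the ids and vocabulary are made up for illustration; the real RAG tokenizer decodes with its generator tokenizer's full subword vocabulary and handles detokenization properly):

```python
# Toy vocabulary standing in for a real tokenizer's id -> token mapping.
toy_vocab = {0: "<s>", 1: "</s>", 2: "moses", 3: "rod", 4: "turns", 5: "snake"}
special_ids = {0, 1}  # ids to drop when skip_special_tokens=True

def batch_decode(batch_ids, skip_special_tokens=True):
    """Decode a batch of id sequences into strings, one per sequence."""
    strings = []
    for ids in batch_ids:
        tokens = [toy_vocab[i] for i in ids
                  if not (skip_special_tokens and i in special_ids)]
        strings.append(" ".join(tokens))
    return strings

# Each row of context_input_ids is one retrieved passage, so decoding a
# (batch * n_docs, seq_len) tensor yields one string per retrieved document.
print(batch_decode([[0, 2, 3, 1], [0, 4, 5, 1]]))
# ['moses rod', 'turns snake']
```

In the real code, outputs.context_input_ids has one row per retrieved document (n_docs rows per query), so tokenizer.batch_decode returns a list with one string per retrieved passage.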