Reasoning Distillation with Huggingface Trainer

I want to do reasoning distillation, i.e., distill the rationales of a teacher model into a student model. I already have rationales generated by GPT-3.5 (acting as the teacher) stored in a JSON file. However, to run the distillation with the HuggingFace Trainer, I need to convert that file into a dataset that can be loaded with `load_dataset`, in a format like the one below (my conversion sketch follows the format example):

```json
{
    "data": [
        {"text": "..."},
        {"text": "..."},
        {"text": "..."},
        ...
    ]
}
```
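
For reference, this is a minimal sketch of how I plan to do the conversion and loading. The file names (`gpt35_rationales.json`, `distill_train.json`) and the `"rationale"` key are placeholders for my actual JSON layout:

```python
import json
from datasets import load_dataset

# Read the teacher rationales produced by GPT-3.5 (placeholder file name / key).
with open("gpt35_rationales.json") as f:
    raw = json.load(f)

# Keep only the rationale string for each example, wrapped in a "text" field.
records = [{"text": item["rationale"]} for item in raw]

# Write the nested {"data": [...]} structure expected by the format above.
with open("distill_train.json", "w") as f:
    json.dump({"data": records}, f, ensure_ascii=False)

# The "json" builder can read the nested records via the `field` argument.
dataset = load_dataset("json", data_files="distill_train.json", field="data")
print(dataset["train"][0]["text"])
```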

Given this setup, and since I am doing this for a retrieval-augmented generation task, my question is: do I need to include the retrieved document in the "text" field along with the rationale, or can I put only the rationale there? (A sketch of the two options I am considering is below.)
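
For clarity, these are the two "text" layouts I am deciding between; the "Document:"/"Rationale:" tags and the example strings are only placeholders:

```python
# Option A: rationale only (placeholder content).
rationale_only = {"text": "Rationale: Paris is the capital of France because ..."}

# Option B: retrieved document prepended to the rationale (placeholder content).
with_retrieved_doc = {
    "text": "Document: <retrieved passage here>\n"
            "Rationale: Paris is the capital of France because ..."
}
```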