Transformer's output as input to another model

Hello,

I want to create a model which generates text, and the generated text is input to another model, so basically two models are trained together. How can I achieve this using Hugging Face?

Thanks

Welcome to the forum, @omerarshad!

Nice question, I had the same problem too. In my opinion this is possible only if you have ground truth for the intermediate step, and not only the final reference. What you can do is train the two models separately: the first one with the intermediate reference, and the second one with the final reference.

Schematically:

Input -> MODEL_1 -> Output_1
                        | compare (cross-entropy)
             Intermediate Reference

Intermediate Reference -> MODEL_2 -> Output_2
                                        | compare (cross-entropy)
                                  Final Reference
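
For concreteness, here is a minimal sketch of that two-stage training with the `transformers` Trainer, assuming seq2seq models like T5. The checkpoint name, hyperparameters, and the `stage1_dataset` / `stage2_dataset` variables (hypothetical pre-tokenized datasets with `input_ids`, `attention_mask`, and `labels` columns) are placeholders:

```python
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

def train_stage(model_name, train_dataset, output_dir):
    # Fine-tune one stage on its own (input, reference) pairs.
    # The Trainer computes token-level cross-entropy against the `labels`
    # column internally; that is the "compare" step in the diagram above.
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
    args = Seq2SeqTrainingArguments(
        output_dir=output_dir,
        num_train_epochs=3,              # placeholder hyperparameters
        per_device_train_batch_size=8,
    )
    trainer = Seq2SeqTrainer(
        model=model,
        args=args,
        train_dataset=train_dataset,
        data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    )
    trainer.train()
    trainer.save_model(output_dir)

# stage1_dataset: tokenized (Input, Intermediate Reference) pairs
# stage2_dataset: tokenized (Intermediate Reference, Final Reference) pairs
# train_stage("t5-small", stage1_dataset, "model_1")
# train_stage("t5-small", stage2_dataset, "model_2")
```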

What do you think?

Yes, I have ground truth for both models, and we can train them separately. The only issue I am facing is how to do this with Hugging Face. The task is that the first model writes the answer to a question, and the second model takes this answer as input and generates a question.

So you have question references and answer references (let's call them Q_REF and A_REF). First you take the model that answers, let's call it MODEL_A, and you train it with A_REF. After that, you take MODEL_Q and you train it with Q_REF. Once you have the fine-tuned models, you can take a question as input and produce an answer with MODEL_A; then you can take that answer and give it to MODEL_Q to get a new question, if I understand correctly.
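
Once both are fine-tuned, the inference chain could look roughly like this (the checkpoint paths are placeholders for wherever you saved MODEL_A and MODEL_Q):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model_a = AutoModelForSeq2SeqLM.from_pretrained("path/to/model_a")  # question -> answer
model_q = AutoModelForSeq2SeqLM.from_pretrained("path/to/model_q")  # answer -> question

question = "Who wrote Hamlet?"

# MODEL_A turns the question into an answer.
inputs = tokenizer(question, return_tensors="pt")
answer_ids = model_a.generate(**inputs, max_new_tokens=50)
answer = tokenizer.decode(answer_ids[0], skip_special_tokens=True)

# MODEL_Q takes that answer and generates a new question.
inputs = tokenizer(answer, return_tensors="pt")
question_ids = model_q.generate(**inputs, max_new_tokens=50)
new_question = tokenizer.decode(question_ids[0], skip_special_tokens=True)

print("Answer:", answer)
print("New question:", new_question)
```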

You can check out the million examples of how to train a model for Q&A with your own dataset, or you can start by reproducing results with the SQuAD one. Anyway, I have never done it myself, so I am not the best person to help you with that part.
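
If you go the SQuAD route, the `datasets` library gives you (question, answer) pairs that can supervise both directions; a quick look:

```python
from datasets import load_dataset

# Each SQuAD record pairs a question with its reference answers,
# so the same data can serve as A_REF (question -> answer)
# and Q_REF (answer -> question).
squad = load_dataset("squad")
example = squad["train"][0]
print(example["question"])
print(example["answers"]["text"][0])
```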

How can this be achieved with Hugging Face? Remember, both of these models will be trained together.