I am performing knowledge distillation on a variant of the XLM text encoder that I am developing, and I wish to use the AltCLIP text encoder as the teacher model. However, I am unsure how to extract that part from the overall model and wire it into the Transformers training arguments correctly, since I usually just pass a Hugging Face Hub identifier.
More details:
from transformers import pipeline

# My usual workflow: load a model by its Hub identifier
teacher_model = "FacebookAI/xlm-roberta-base"
pipe = pipeline("text-classification", model=teacher_model)
id2label = pipe.model.config.id2label
label2id = pipe.model.config.label2id
How can I make the "teacher_model" variable point to just the text encoder of the AltCLIP model, and not the whole model?
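For context, I suspect something like the following might work, but I am not sure it is the right approach. This sketch builds the text tower with a tiny randomly initialized config (hypothetical sizes, chosen only so it runs without downloading weights); my understanding is that for the real teacher one would instead load the pretrained checkpoint, e.g. AltCLIPTextModel.from_pretrained("BAAI/AltCLIP").

```python
import torch
from transformers import AltCLIPTextConfig, AltCLIPTextModel

# Tiny hypothetical config so this sketch runs anywhere; for actual
# distillation, replace these two lines with:
#   teacher_model = AltCLIPTextModel.from_pretrained("BAAI/AltCLIP")
config = AltCLIPTextConfig(
    vocab_size=100,
    hidden_size=32,
    num_hidden_layers=2,
    num_attention_heads=2,
    intermediate_size=64,
    project_dim=16,
)
teacher_model = AltCLIPTextModel(config)
teacher_model.eval()  # teacher stays frozen during distillation

# Dummy batch of token ids; in practice these would come from the tokenizer
input_ids = torch.randint(0, config.vocab_size, (1, 8))
with torch.no_grad():
    outputs = teacher_model(input_ids=input_ids)

# Token-level features from the text encoder only, usable as
# distillation targets: shape (batch, seq_len, feature_dim)
print(outputs.last_hidden_state.shape)
```

Is this the intended way to get just the text encoder, or should I instead load the full AltCLIPModel and take its text submodule?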
Thank you.