Confusion About T5LM Properties

Hello,

I’m working on using T5LM (small) for semantic parsing tasks. However, there seem to be discrepancies between the model properties listed on the T5 documentation page (https://huggingface.co/docs/transformers/model_doc/t5) and the error outputs I get from running its Supervised Training example. Specifically, when running the last line:

loss = model(input_ids=input_ids, labels=labels).loss

I get the error saying the model needs either decoder_inputs_embeds or decoder_input_ids in place of the labels parameter. Additionally, the tutorial text states:

The model will automatically create the decoder_input_ids based on the labels, …
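
If I’m reading that right, it means something like the shift-right below. This is just my guess at the mechanism based on how seq2seq decoders usually work, not necessarily the library’s actual code:

import torch

def shift_right(labels, decoder_start_token_id=0, pad_token_id=0):
    # T5 uses the pad token (id 0) as its decoder start token
    shifted = labels.new_zeros(labels.shape)
    shifted[:, 1:] = labels[:, :-1]
    shifted[:, 0] = decoder_start_token_id
    # positions masked with -100 in the labels become pad tokens
    shifted[shifted == -100] = pad_token_id
    return shifted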

Is there source code for this model implementation that I can reference? For a start, I don’t understand the difference between decoder_inputs_embeds and decoder_input_ids. I also can’t tell whether the model I imported using

model = transformers.AutoModel.from_pretrained("google/t5-small-lm-adapt")

is correct, because in addition to the discrepancy above, when I ran the original loss command I got the error that

Seq2SeqModelOutput does not have a .loss attribute.
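
For reference, here is roughly the full snippet I’m running; the tokenizer setup and example sentences are taken from the docs example above, with only the checkpoint name swapped:

import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained("google/t5-small-lm-adapt")
model = transformers.AutoModel.from_pretrained("google/t5-small-lm-adapt")

# input/label sentences are the ones from the docs example
input_ids = tokenizer(
    "translate English to German: The house is wonderful.", return_tensors="pt"
).input_ids
labels = tokenizer("Das Haus ist wunderbar.", return_tensors="pt").input_ids

# this is the line that fails for me
loss = model(input_ids=input_ids, labels=labels).loss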

I thought the LM adaptation of T5 would be a one-to-one match with the original T5 architecture, but that clearly doesn’t seem to be the case. Any help or direction would be greatly appreciated.

Thanks,
Selma

I now understand the difference between decoder_inputs_embeds and decoder_input_ids (sketch below, for anyone else who was stuck on this); however, I’m still confused about the difference between

the google/t5-small-lm-adapt checkpoint loaded via AutoModel

vs

the T5ForConditionalGeneration model supplied by Hugging Face
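
The way I understand it now: decoder_input_ids are integer token ids that the model embeds internally, while decoder_inputs_embeds are the embedding vectors passed in directly, skipping the lookup. A minimal sketch of that understanding:

import torch
import transformers

tokenizer = transformers.AutoTokenizer.from_pretrained("google/t5-small-lm-adapt")
model = transformers.T5ForConditionalGeneration.from_pretrained("google/t5-small-lm-adapt")

input_ids = tokenizer(
    "translate English to German: The house is wonderful.", return_tensors="pt"
).input_ids
decoder_input_ids = torch.tensor([[0]])  # 0 = pad token, T5's decoder start token

# path 1: pass token ids and let the model do the embedding lookup
out_ids = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)

# path 2: do the same lookup ourselves and pass the vectors directly
decoder_embeds = model.get_input_embeddings()(decoder_input_ids)
out_embeds = model(input_ids=input_ids, decoder_inputs_embeds=decoder_embeds)

# both paths should yield identical logits
print(torch.allclose(out_ids.logits, out_embeds.logits))  # True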