About the origin of the model category names in `AutoModelWithLMHead`


I’d like to ask about where the model category names come from.

In the AutoModelWithLMHead class, the deprecation warning says we should use AutoModelForCausalLM, AutoModelForMaskedLM, or AutoModelForSeq2SeqLM instead:

class AutoModelWithLMHead:
    This is a generic model class that will be instantiated as one of the model classes of the library---with a
    language modeling head---when created with the
    :meth:`~transformers.AutoModelWithLMHead.from_pretrained` class method or the
    :meth:`~transformers.AutoModelWithLMHead.from_config` class method.

    This class cannot be instantiated directly using ``__init__()`` (throws an error).

    .. warning::

        This class is deprecated and will be removed in a future version. Please use
        :class:`~transformers.AutoModelForCausalLM` for causal language models,
        :class:`~transformers.AutoModelForMaskedLM` for masked language models and
        :class:`~transformers.AutoModelForSeq2SeqLM` for encoder-decoder models.

I’m afraid this may not be a very essential question, but is there any origin for the classification names CausalLM, MaskedLM, and Seq2SeqLM?
Or are they original to the transformers library?
I would like to know more about where these terms come from when using the library.

Thank you in advance.

The differences between the three are explained in the model summary. Causal language modeling/masked language modeling are very often used in research papers, so those terms don’t come from the transformers library.
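To build intuition for why the terms are named this way, here is a minimal, library-free sketch (the function names and shapes are illustrative, not the transformers API): a causal LM can only attend to earlier positions when predicting the next token, while a masked LM sees the whole sequence except the positions it must fill in.

```python
def causal_attention_mask(n):
    """Causal LM (e.g. GPT-2): position i may only see positions 0..i."""
    return [[1 if j <= i else 0 for j in range(n)] for i in range(n)]

def masked_lm_visibility(n, masked_positions):
    """Masked LM (e.g. BERT): all positions are visible, but tokens at
    `masked_positions` are replaced by [MASK] and must be predicted."""
    return [0 if i in masked_positions else 1 for i in range(n)]

print(causal_attention_mask(3))      # [[1, 0, 0], [1, 1, 0], [1, 1, 1]]
print(masked_lm_visibility(4, {1}))  # [1, 0, 1, 1]
```

Seq2SeqLM (encoder-decoder) combines the two ideas: the encoder sees the full input, and the decoder generates the output causally.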


Thank you for pointing me to the document. I’ve seen the page, but it seems my understanding was not sufficient. I will take a closer look.
After your explanation, I understand that causal language modeling/masked language modeling are common terms.
Thank you again.