Hello,
I'd like to ask where the model category names come from. The warning in the AutoModelWithLMHead class says we should use AutoModelForCausalLM, AutoModelForMaskedLM, or AutoModelForSeq2SeqLM instead of it:
class AutoModelWithLMHead:
    r"""
    This is a generic model class that will be instantiated as one of the model classes of the library---with a
    language modeling head---when created with the
    :meth:`~transformers.AutoModelWithLMHead.from_pretrained` class method or the
    :meth:`~transformers.AutoModelWithLMHead.from_config` class method.

    This class cannot be instantiated directly using ``__init__()`` (throws an error).

    .. warning::

        This class is deprecated and will be removed in a future version. Please use
        :class:`~transformers.AutoModelForCausalLM` for causal language models,
        :class:`~transformers.AutoModelForMaskedLM` for masked language models and
        :class:`~transformers.AutoModelForSeq2SeqLM` for encoder-decoder models.
    """
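For context, my understanding of the split (which is what prompted the question) is roughly architectural; here is a minimal sketch of which replacement class matches which model family. The family descriptions are my own assumptions, not from the warning itself:

```python
# Mapping from LM category to the recommended AutoModel class, with an
# illustrative (assumed) description of the model family it covers.
LM_CLASSES = {
    "causal": ("AutoModelForCausalLM",
               "decoder-only, GPT-style, predicts the next token left to right"),
    "masked": ("AutoModelForMaskedLM",
               "encoder-only, BERT-style, fills in masked tokens"),
    "seq2seq": ("AutoModelForSeq2SeqLM",
                "encoder-decoder, T5/BART-style, maps an input sequence to an output sequence"),
}

for kind, (cls_name, description) in LM_CLASSES.items():
    print(f"{kind}: use {cls_name} ({description})")
```

So the three names seem to describe the *training objective / architecture* rather than any single model.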
I'm afraid this may not be a very essential question, but is there any origin for the names of the CausalLM, MaskedLM, and Seq2SeqLM categories? Or are they original to the transformers library? I would like to know more about where these terms come from when using the library.
Thank you in advance.