Hi, I’m using the transformers library, but I’m not sure how model loading actually works.
When I load the model like this:
from transformers import AutoTokenizer, AutoModel

# Tokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Model (same checkpoint as the tokenizer)
model = AutoModel.from_pretrained("bert-base-uncased")
Does the "bert-base-uncased" model get loaded into memory at this point?
How can I verify that? Is there any documentation about how models are loaded into memory?
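For what it’s worth, here is a rough check I put together myself. I’m assuming the process RSS reported by psutil is a reasonable proxy for memory use, so I’m not sure this is the right way to measure it:

import psutil
from transformers import AutoModel

proc = psutil.Process()
before = proc.memory_info().rss  # resident memory before loading

model = AutoModel.from_pretrained("bert-base-uncased")

after = proc.memory_info().rss   # resident memory after loading
print(f"RSS grew by roughly {(after - before) / 1e6:.0f} MB")

# size implied by the weights themselves
n_bytes = sum(p.numel() * p.element_size() for p in model.parameters())
print(f"parameters alone take about {n_bytes / 1e6:.0f} MB")

The RSS does grow by roughly the size of the weights, so it looks like the model ends up in memory, but I’d like confirmation.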
If so, what about when I clone the repository with git-lfs and load the model from the local directory?
from transformers import AutoTokenizer, AutoModel

# Tokenizer (from the local clone)
tokenizer = AutoTokenizer.from_pretrained("./bert-base-uncased", local_files_only=True)

# Model (from the same local clone)
model = AutoModel.from_pretrained("./bert-base-uncased", local_files_only=True)
Does this also load the model into memory?
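I also tried comparing the two load paths. Assuming the local clone is the same revision as the hub checkpoint, something like this is what I used as a sanity check:

from transformers import AutoModel

hub_model = AutoModel.from_pretrained("bert-base-uncased")
local_model = AutoModel.from_pretrained("./bert-base-uncased", local_files_only=True)

# same parameter count either way, so both loads seem to end up as full models in memory
hub_params = sum(p.numel() for p in hub_model.parameters())
local_params = sum(p.numel() for p in local_model.parameters())
print(hub_params == local_params)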
Any help would be appreciated.