Does the model load into memory?

Hi, I’m using the transformers library, but I’m not sure how model loading actually works.

When I load the model:

from transformers import AutoTokenizer, AutoModel

# Tokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

# Model
model = AutoModel.from_pretrained("bert-base-cased")

Does the “bert-base-cased” model get loaded into memory?
How can I tell? Are there any docs I can read about how models are loaded into memory?

If so, what about when I load the model from a local git-lfs clone:

from transformers import AutoTokenizer, AutoModel

# Tokenizer (loaded from a local clone, no download)
tokenizer = AutoTokenizer.from_pretrained("./bert-base-cased", local_files_only=True)

# Model
model = AutoModel.from_pretrained("./bert-base-cased", local_files_only=True)

Does this also load the model into memory?
Any help would be appreciated.

If I am not mistaken, calling .from_pretrained() downloads the model weights, caches them on disk, and then loads them into memory. On Linux the cache lives in a folder called .cache in your home directory, so the next time you want to load the same model you don’t have to wait for the download.

More information here: Models

Thanks! I found the folder ~/.cache/huggingface/hub