transformers version: 4.39.2
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
model = AutoModelForCausalLM.from_pretrained("google/gemma-7b")
On my end, I can load gemma-2b, but for 7b it returns the error below.
Even after deleting the local cache, it still does not work:
rm -r ~/.cache/huggingface/hub/models--google--gemma-7b
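For reference, the local hub cache can also be inspected programmatically to confirm the entry is really gone before retrying (a quick sketch using huggingface_hub.scan_cache_dir; the printed fields are just for illustration):

from huggingface_hub import scan_cache_dir

# List whatever is still in the local hub cache, to confirm the
# gemma-7b entry was actually removed by the `rm -r` above.
for repo in scan_cache_dir().repos:
    print(repo.repo_id, repo.repo_type, repo.size_on_disk_str)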
May I know the reason? Thanks.
Traceback (most recent call last):
  File "/home/chenyanan/trl/ttt.py", line 7, in <module>
    tokenizer = AutoTokenizer.from_pretrained("google/gemma-7b")
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/chenyanan/anaconda3/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py", line 794, in from_pretrained
    config = AutoConfig.from_pretrained(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/chenyanan/anaconda3/lib/python3.11/site-packages/transformers/models/auto/configuration_auto.py", line 1138, in from_pretrained
    config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/chenyanan/anaconda3/lib/python3.11/site-packages/transformers/configuration_utils.py", line 631, in get_config_dict
    config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/chenyanan/anaconda3/lib/python3.11/site-packages/transformers/configuration_utils.py", line 686, in _get_config_dict
    resolved_config_file = cached_file(
                           ^^^^^^^^^^^^
  File "/home/chenyanan/anaconda3/lib/python3.11/site-packages/transformers/utils/hub.py", line 369, in cached_file
    raise EnvironmentError(
OSError: google/gemma-7b does not appear to have a file named config.json. Checkout 'https://huggingface.co/google/gemma-7b/tree/None' for available files.