Not a valid JSON file - quant gemma model

Loading the Gemma model:

from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-7b-it-quant-pytorch"
)

OSError                                   Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/transformers/configuration_utils.py in _get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
    719                 config_dict["_commit_hash"] = commit_hash
    720             except (json.JSONDecodeError, UnicodeDecodeError):
--> 721                 raise EnvironmentError(
    722                     f"It looks like the config file at '{resolved_config_file}' is not a valid JSON file."
    723                 )

OSError: It looks like the config file at '/root/.cache/huggingface/hub/models--google--gemma-7b-it-quant-pytorch/snapshots/6c79d9bd68c8a9b870927ee913f428baadee8bff/config.json' is not a valid JSON file.

Hi,

As stated in the model card:

This repository corresponds to the research Gemma PyTorch repository. If you're looking for the transformers implementation, visit this page

Hence the checkpoint you're loading is not in the transformers format, which is why its config.json fails to parse. To load the model with transformers, use this repository instead: google/gemma-7b-it · Hugging Face.
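A minimal sketch of the corrected call, assuming you have access to the gated `google/gemma-7b-it` repository and are authenticated with the Hub:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Point at the transformers-format checkpoint, not the PyTorch
# research repo ("google/gemma-7b-it-quant-pytorch"), whose
# config.json is not a transformers config.
model_id = "google/gemma-7b-it"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
```

Note that this downloads the full-precision weights; the research repo's quantized checkpoint is only usable through the gemma_pytorch codebase, not through `from_pretrained`.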

This topic was automatically closed 12 hours after the last reply. New replies are no longer allowed.