Hi, I am following the CogVLM model: https://huggingface.co/THUDM/cogvlm-chat-hf
I have spun up a SageMaker instance of ml.g4dn.12xlarge with 4x16 GB GPUs.
I tried to follow the code specified in the Hugging Face link above, but hit an error at load_checkpoint_and_dispatch:
model = load_checkpoint_and_dispatch(
    model,
    "~/.cache/huggingface/hub/models--THUDM--cogvlm-chat-hf/snapshots/54b93e0af3f1d8badcdeefdb0d26b1dfbc227f7a/",  # typically '~/.cache/huggingface/hub/models--THUDM--cogvlm-chat-hf/snapshots/balabala'
    device_map=device_map,
)
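One thing I noticed: the error message echoes my path back with the leading ~ still in it, so I wonder whether the tilde is being expanded at all before the folder check. As a sanity check (this is just my guess at the cause, not something I have confirmed), I compared the raw string with its expanded form:

```python
import os

# The snapshot path exactly as I pass it to load_checkpoint_and_dispatch
raw_path = "~/.cache/huggingface/hub/models--THUDM--cogvlm-chat-hf/snapshots/54b93e0af3f1d8badcdeefdb0d26b1dfbc227f7a/"

# os.path.expanduser replaces the leading "~" with the real home directory.
# Many libraries do NOT do this themselves, so the raw string may be passed
# through unchanged and then fail the "is this a folder?" check.
expanded_path = os.path.expanduser(raw_path)

print(raw_path)       # still starts with "~"
print(expanded_path)  # starts with the absolute home directory path
```

If that is the cause, wrapping the path in os.path.expanduser(...) before handing it to load_checkpoint_and_dispatch might already fix it, but I would appreciate confirmation.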
This is the result of !ls ~/.cache/huggingface/hub/models--THUDM--cogvlm-chat-hf/snapshots/54b93e0af3f1d8badcdeefdb0d26b1dfbc227f7a/:
config.json model-00005-of-00008.safetensors
configuration_cogvlm.py model-00006-of-00008.safetensors
generation_config.json model-00007-of-00008.safetensors
model-00001-of-00008.safetensors model-00008-of-00008.safetensors
model-00002-of-00008.safetensors modeling_cogvlm.py
model-00003-of-00008.safetensors model.safetensors.index.json
model-00004-of-00008.safetensors visual.py
This is the error message:
ValueError: checkpoint should be the path to a file containing a whole state dict, or the index of a sharded checkpoint, or a folder containing a sharded checkpoint or the whole state dict, but got ~/.cache/huggingface/hub/models--THUDM--cogvlm-chat-hf/snapshots/54b93e0af3f1d8badcdeefdb0d26b1dfbc227f7a/.
Please advise how to fix this. Thanks!