I ran `python convert-hf-to-gguf.py /fengpo/github/Yi-34B-Chat-8bits` and got this error:
File "/fengpo/github/llama.cpp/convert-hf-to-gguf.py", line 1335, in main
model_instance = model_class(dir_model, ftype_map[args.outtype], fname_out, args.bigendian)
File "/fengpo/github/llama.cpp/convert-hf-to-gguf.py", line 57, in __init__
self.model_arch = self._get_model_architecture()
File "/fengpo/github/llama.cpp/convert-hf-to-gguf.py", line 254, in _get_model_architecture
raise NotImplementedError(f'Architecture "{arch}" not supported!')
NotImplementedError: Architecture "LlamaForCausalLM" not supported!.
The model's config.json does contain "LlamaForCausalLM".
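For context, the failing check in `_get_model_architecture` reads the `architectures` field from the model's config.json and compares it against the script's list of supported architectures. A minimal sketch of that logic, assuming a hypothetical `SUPPORTED` set (the real script keeps its own registry, which at this point did not include LlamaForCausalLM):

```python
import json
from pathlib import Path

# Illustrative only; the actual supported set lives in convert-hf-to-gguf.py
SUPPORTED = {"GPTNeoXForCausalLM", "FalconForCausalLM", "BaichuanForCausalLM"}

def get_model_architecture(dir_model: Path) -> str:
    # config.json lists the HF architecture, e.g. ["LlamaForCausalLM"]
    with open(dir_model / "config.json", encoding="utf-8") as f:
        config = json.load(f)
    arch = config["architectures"][0]
    if arch not in SUPPORTED:
        # This is the branch that produces the traceback above
        raise NotImplementedError(f'Architecture "{arch}" not supported!')
    return arch
```

So the script is reading the architecture correctly from config.json; the error means that architecture simply is not in the script's registry.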
