Unable to load a pretrained StarCoder2 model fine-tuned with SFT

AutoModelForCausalLM.from_pretrained(pretrained_model_name_or_path)
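
In context, the call boils down to this minimal snippet (the checkpoint path below is a placeholder for our SFT output directory):

from transformers import AutoModelForCausalLM

# Placeholder for the local StarCoder2 checkpoint produced by SFT.
pretrained_model_name_or_path = "path/to/starcoder2-sft-checkpoint"
model = AutoModelForCausalLM.from_pretrained(pretrained_model_name_or_path)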

Running this, I get the following error:
Traceback (most recent call last):
  File "/app/code/evaluation/evaluation_codesign.py", line 989, in <module>
    llm_chain = get_llm_chain(model_type,
  File "/app/code/evaluation/evaluation_utils.py", line 317, in get_llm_chain
    model = from_pretrained_wrapper(model_name_or_path,
  File "/app/code/evaluation/evaluation_utils.py", line 189, in from_pretrained_wrapper
    AutoModelForCausalLM.from_pretrained(pretrained_model_name_or_path)
  File "/usr/local/lib/python3.8/dist-packages/transformers/models/auto/auto_factory.py", line 563, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py", line 3039, in from_pretrained
    config.quantization_config = AutoHfQuantizer.merge_quantization_configs(
  File "/usr/local/lib/python3.8/dist-packages/transformers/quantizers/auto.py", line 149, in merge_quantization_configs
    quantization_config = AutoQuantizationConfig.from_dict(quantization_config)
  File "/usr/local/lib/python3.8/dist-packages/transformers/quantizers/auto.py", line 73, in from_dict
    raise ValueError(
ValueError: Unknown quantization type, got bitsandbytes - supported types are: ['awq', 'bitsandbytes_4bit', 'bitsandbytes_8bit', 'gptq', 'aqlm', 'quanto']
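
Reading quantizers/auto.py in the installed transformers, AutoQuantizationConfig.from_dict only maps a stored quant_method of "bitsandbytes" onto the supported "bitsandbytes_4bit" / "bitsandbytes_8bit" types when the saved quantization_config also carries a load_in_4bit or load_in_8bit flag. So I suspect the config.json written during SFT has "quant_method": "bitsandbytes" with neither flag set, or with the flags serialized under different names (some transformers versions write them as _load_in_4bit / _load_in_8bit, with a leading underscore). A quick way to check (the path is again a placeholder):

import json

# Inspect the quantization_config that from_pretrained trips over.
with open("path/to/starcoder2-sft-checkpoint/config.json") as f:
    config = json.load(f)

print(config.get("quantization_config"))

If that is what's happening, patching the flag back into the checkpoint's config.json should let from_dict resolve the type again. A sketch, assuming the model was fine-tuned in 4-bit (use load_in_8bit instead for an 8-bit checkpoint):

import json

config_path = "path/to/starcoder2-sft-checkpoint/config.json"  # placeholder
with open(config_path) as f:
    config = json.load(f)

# Restore the flag that lets transformers map "bitsandbytes" to
# "bitsandbytes_4bit". This assumes the checkpoint was trained in 4-bit.
config["quantization_config"]["load_in_4bit"] = True

with open(config_path, "w") as f:
    json.dump(config, f, indent=2)

Aligning the transformers version in the evaluation environment with the one used for training might also make the error go away, since both sides would then agree on how the bitsandbytes quantization config is serialized.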