This seems common with IQ3_M. For example, go here:
Click on the black gguf viewer icon for the IQ3_M. Then choose ollama. Look at the download link:
ollama run bartowski/Llama-3.3-70B-Instruct-ablated-GGUF
Note what’s not present: the tag naming the actual GGUF quantization you’re trying to use. Compare to the links for other quantizations:
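For context, Hugging Face's Ollama integration selects a specific quantization via a `:QUANT` tag appended to the repo path; a command with no tag falls back to whatever default the repo resolves to. A hedged sketch of what an explicitly tagged pull might look like (the `hf.co/` prefix is from the Hub's Ollama docs; the `:IQ3_M` tag here is my assumption of the intended fix, not something shown on the page):

```shell
# Untagged form, as shown by the viewer: no quantization specified,
# so Ollama resolves a default rather than the IQ3_M file.
ollama run hf.co/bartowski/Llama-3.3-70B-Instruct-ablated-GGUF

# Assumed tagged form: the :IQ3_M suffix names the quantization
# you actually want to download and run.
ollama run hf.co/bartowski/Llama-3.3-70B-Instruct-ablated-GGUF:IQ3_M
```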