I used the HuggingFace AutoTrain web interface (/spaces/***) to fine-tune Meta-Llama-3-8B with default settings. However, when I attempt to deploy the trained model to a dedicated Inference Endpoint, endpoint creation fails with the following error:
```
You are trying to access a gated repo.
Make sure to have access to it at https://huggingface.co/meta-llama/Meta-Llama-3-8B.
401 Client Error. (Request ID: Root=1-66320ceb-645c84aa50a9d06a4d3f73c7;920a0a14-1eee-4208-b8a6-d5fde77a10d8)
Cannot access gated repo for url https://huggingface.co/meta-llama/Meta-Llama-3-8B/resolve/main/config.json.
Access to model meta-llama/Meta-Llama-3-8B is restricted. You must be authenticated to access it.
```
Note: my account does have access to Llama-3 (the gated-repo request was approved), but somehow that access isn't being passed through when deploying.
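For reference, here is a minimal sketch I would use to confirm that a given token can actually see the gated repo, using `huggingface_hub`. The `HF_TOKEN` environment variable name is an assumption; substitute however your token is stored:

```python
import os
from huggingface_hub import HfApi

# Assumption: the token is available as HF_TOKEN in the environment.
api = HfApi(token=os.environ.get("HF_TOKEN"))

try:
    # model_info on a gated repo raises a 401/403-style error
    # if the token lacks access -- the same failure mode as
    # the endpoint-creation error above.
    info = api.model_info("meta-llama/Meta-Llama-3-8B")
    status = f"ok: token can access {info.id}"
except Exception as err:
    status = f"denied: {err}"

print(status)
```

If this prints `denied`, the token the endpoint is using is not the one that was granted gated access.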