Loading a local LLM model with mlx-lm

I have been using this setup on my MacBook Pro for a couple of months with no problems. I came back to it this week and this code fails:

from mlx_lm import load
model_filename = '/Users/myname/llm_models/gemma/gemma-7b-it'
model, tokenizer = load(model_filename)

Error message:

HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/Users/myname/llm_models/gemma/gemma-7b-it'. Use repo_type argument if needed.

I would appreciate any advice on getting this going again.