Issue Accessing "reazon-research/reazonspeech-nemo-v2" Model via Inference API

I encountered an error while trying to access the Inference API (serverless) for the “reazon-research/reazonspeech-nemo-v2” model using the curl command. The error message I received is as follows:

{"error": "Model reazon-research/reazonspeech-nemo-v2 is currently loading", "estimated_time": 20.0}
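For context, this response usually means the model is cold-starting on the serverless Inference API: the body includes an `estimated_time` you can wait before retrying (the API also documents a `wait_for_model` option to block until the model is ready). Below is a minimal sketch of how one might detect this case and pick a retry delay; the helper name `loading_wait_time` is mine, not part of any API:

```python
import json

def loading_wait_time(body: str):
    """If the API says the model is still loading, return the suggested
    wait in seconds; otherwise return None."""
    try:
        payload = json.loads(body)
    except json.JSONDecodeError:
        return None  # not a JSON error body, treat as a normal response
    if "currently loading" in payload.get("error", ""):
        return float(payload.get("estimated_time", 20.0))
    return None

# The exact body from the error above:
body = ('{"error": "Model reazon-research/reazonspeech-nemo-v2 '
        'is currently loading", "estimated_time": 20.0}')

wait = loading_wait_time(body)
if wait is not None:
    # sleep for `wait` seconds here, then re-send the same request
    print(f"model loading, retry in {wait:.0f}s")
```

In practice you would loop: send the request, check the body with a helper like this, sleep for the suggested time, and retry until you get a transcription instead of the loading error.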

Has anyone else experienced this issue? I’m curious to know if this is a temporary problem or if there are steps I should take to resolve it.

Any insights or suggestions would be greatly appreciated. Thank you in advance for your help!

Hi, I’m from the ReazonSpeech team.

We provide a Google Colab notebook for ReazonSpeech:

If you want to try the ReazonSpeech NeMo model quickly through a web interface,
we recommend using this notebook.

(It’s simply better than calling the raw NeMo model through the Hugging Face Inference API,
since we implement additional functionality in our Python package.)

Just wanted to say thanks to the ReazonSpeech team for offering the alternative solution. It works fine as a temporary solution and is helping me move forward for now. Really appreciate it!