How to use the Inference API with a TTS model?

This looks like a cache issue (there's a cache in front of the API to avoid recomputing the same request over and over).

You can try adding `{"inputs": "....", "parameters": {"use_cache": False}}` to your request body to force the output to be recomputed.
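A minimal sketch of building such a request body in Python (the model ID, input text, and token placeholder are hypothetical; substitute your own):

```python
import json

# Hypothetical endpoint; replace <model-id> with the TTS model you are calling.
API_URL = "https://api-inference.huggingface.co/models/<model-id>"

# "use_cache": False asks the API to skip its cache and recompute the output.
payload = {
    "inputs": "Hello, world!",
    "parameters": {"use_cache": False},
}

body = json.dumps(payload)
print(body)

# To actually send it (requires a valid token):
# import requests
# response = requests.post(
#     API_URL,
#     headers={"Authorization": "Bearer <your-token>"},
#     data=body,
# )
```

Note that `json.dumps` serializes Python's `False` to JSON's lowercase `false`, which is what the API expects on the wire.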

The caching mechanism should be upgraded at some point so you don’t have to do this.