All Hosted Inference API models are returning HTTP 422 errors

I’ve tried multiple text-generation models on the Inference API:

  • bigscience/bloom
  • tiiuae/falcon-7b-instruct
  • OpenAssistant/oasst-sft-4-pythia-12b-epoch-3.5

Since all the models appear to be having this issue, I think it’s something with the hosted API itself. What’s wrong, and how can we fix it?
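
For reference, here is roughly how I’m calling the API (a minimal sketch; the prompt, parameters, and token are placeholders, not my exact request):

```python
import requests

# Placeholder token; substitute a real Hugging Face access token.
API_TOKEN = "hf_xxx"
API_URL = "https://api-inference.huggingface.co/models/bigscience/bloom"

headers = {"Authorization": f"Bearer {API_TOKEN}"}
payload = {
    "inputs": "Once upon a time,",
    "parameters": {"max_new_tokens": 50},
}

response = requests.post(API_URL, headers=headers, json=payload)
print(response.status_code)  # currently 422 for every model I try
print(response.text)         # error body, in case it helps diagnose
```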