"New" Inference API for text-to-text no longer "is useful"

Summary

The inference API widget on model cards has been updated so that text2text-generation models (where I want to convert the input text into something else) now follow the logic of standard text-generation models. As a result, the widget only shows a difference if more characters are generated than were in the prompt, which is not always the case.

Am I doing something wrong, or is it a bug?

Details

So recently, there was an update to the inference API for some models on the Hub (I'm unsure of the scope). This is awesome for text-generation models like this one, because you can now use ctrl+enter, see the difference between the prompt text and what is generated, and so on.

However, I am either missing something or there might be a bug, because for text2text-generation models the API now seems to be configured with similar logic. The downside is that not all text2text-generation models append new text to the original. In the case I am writing about, I attempt some “diffusion/denoising” textual spelling & grammar correction, where (ideally) new characters should only be added if they improve grammatical correctness.
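To illustrate what I mean, here is a guess at the kind of display logic involved (an assumption on my part, not the actual widget implementation):

```python
def appended_text(prompt: str, output: str) -> str:
    """Assumed widget logic: show only the characters generated beyond the prompt."""
    return output[len(prompt):] if output.startswith(prompt) else ""

# text-generation: the output extends the prompt, so the new suffix is visible
print(appended_text(
    "Once upon a time",
    "Once upon a time, there was a fox.",
))  # -> ", there was a fox."

# text2text-generation (grammar correction): the output rewrites the prompt
# in place, so under this logic it looks like nothing was generated
print(appended_text(
    "they was going to the store",
    "They were going to the store.",
))  # -> ""
```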

Functionally, with the new API it looks like the model does nothing when you try to run inference. Is there some new way of doing things that I should follow, or is it a bug?

What currently happens

Model: pszemraj/grammar-synthesis-small

Expected behaviour

As per this demo Colab using the pipeline object:
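In rough terms, it does something like this (a minimal sketch of mine, not the exact notebook code; the input sentence is a made-up example):

```python
from transformers import pipeline

# Load the grammar-correction model as a standard text2text pipeline
corrector = pipeline(
    "text2text-generation",
    model="pszemraj/grammar-synthesis-small",
)

raw_text = "i can has cheezburger, i has eated it alredy"  # hypothetical input
results = corrector(raw_text, max_length=64)
print(results[0]["generated_text"])
# Expected: a corrected sentence, not the prompt with extra text tacked on the end
```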

FWIW, it was behaving correctly before the most recent change in the past week or so. It used to follow the same style as this summarization model, for example, which worked fine.

Any help is greatly appreciated (also, let me know if I tagged the topic incorrectly) :slight_smile:


@pszemraj thanks so much for notifying us about the issue!

The fix is being deployed here: Fix text2tex widget by mishig25 · Pull Request #247 · huggingface/hub-docs · GitHub


Thanks for handling!!