Adding Preprocessing to Hosted Inference API

My question answering model requires an extra preprocessing step on the input and a postprocessing step on the output. How can I add those scripts so they work with the Hosted Inference API (and the widget on the website, too)?

Hey @yigitbekir, as far as I know the Inference API does not support custom pre-/post-processing logic, but you could easily include these steps in a dedicated web or Streamlit application that wraps the API 🙂
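To illustrate the wrapping approach, here is a minimal sketch of an app-side helper that runs custom preprocessing before calling the Inference API and postprocessing on its response. The model id, the `HF_API_TOKEN` environment variable, and the specific normalization steps are all placeholders for your own logic:

```python
import json
import os
import urllib.request

# Hypothetical model id -- replace with your own repo on the Hub.
API_URL = "https://api-inference.huggingface.co/models/your-username/your-qa-model"


def preprocess(question: str, context: str) -> dict:
    # Example custom step: collapse whitespace and lowercase the question.
    return {
        "question": " ".join(question.split()).lower(),
        "context": " ".join(context.split()),
    }


def postprocess(result: dict) -> str:
    # Example custom step: trim the answer and attach a coarse confidence label.
    answer = result.get("answer", "").strip()
    label = "high" if result.get("score", 0.0) > 0.5 else "low"
    return f"{answer} (confidence: {label})"


def query(question: str, context: str) -> str:
    # Send the preprocessed payload to the Inference API, then postprocess.
    payload = json.dumps({"inputs": preprocess(question, context)}).encode("utf-8")
    req = urllib.request.Request(
        API_URL,
        data=payload,
        headers={"Authorization": f"Bearer {os.environ['HF_API_TOKEN']}"},
    )
    with urllib.request.urlopen(req) as resp:
        return postprocess(json.loads(resp.read()))
```

The widget on the model page would still call the model directly, so the custom steps only apply to users going through your app.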

If you need a custom widget, you can propose one in the huggingface_hub library: https://github.com/huggingface/huggingface_hub
