Curl parameters for aws-whisper-large inference endpoint?

are there any additional curl parameters for the aws-whisper-large inference endpoint?

otherwise, any uploaded WAV or FLAC file longer than a few words gets its transcript truncated:

“The Witsber model was proposed in robust speech recognition via large scale weak supervision.”

instead of:

“The Witsber model was proposed in robust speech recognition via large scale weak supervision. The first paragraphs of the abstract read as follows: We study the capabilities of speech processing systems trained simply to predict large amounts of transcripts of audio on the internet. When scaled to 680,000 hours of multilingual and multitask supervision, the resulting models generalize well to standard benchmarks and are often competitive with prior fully supervised results but in a zeroshot transfer setting without the need for any finetuning. When compared to humans, the models approach their accuracy and robustness. We are releasing models and inference code to serve as a foundation for further work on robust speech processing.”

kind of a bummer if we cannot fix this.

logs say this:

lc8gt 2022-10-16T23:43:04.560Z /opt/conda/lib/python3.9/site-packages/transformers/pipelines/base.py:1043: UserWarning: You seem to be using the pipelines sequentially on GPU. In order to maximize efficiency please use a dataset
lc8gt 2022-10-16T23:43:04.560Z warnings.warn(
lc8gt 2022-10-16T23:43:04.829Z warnings.warn(
lc8gt 2022-10-16T23:43:04.829Z /opt/conda/lib/python3.9/site-packages/transformers/generation_utils.py:1296: UserWarning: Neither max_length nor max_new_tokens has been set, max_length will default to 20 (self.config.max_length). Controlling max_length via the config is deprecated and max_length will be removed from the config in v5 of Transformers -- we recommend using max_new_tokens to control the maximum length of the generation.
lc8gt 2022-10-16T23:43:05.314Z 2022-10-16 23:43:05,314 | INFO | POST / | Duration: 757.65 ms
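
For reference, that max_length warning in the log is exactly what truncates the output: generation stops after 20 tokens. A minimal local sketch of passing max_new_tokens explicitly, assuming the transformers Whisper classes (the file name and token budget are illustrative):

import soundfile as sf
from transformers import WhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-large")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-large")

# Whisper expects 16 kHz mono audio; "sample.flac" is an illustrative file
audio, sampling_rate = sf.read("sample.flac")
inputs = processor(audio, sampling_rate=sampling_rate, return_tensors="pt")

# Without this, generation falls back to config.max_length (20 tokens) and the
# transcript is cut off after a few words; 400 stays under Whisper's 448-token
# decoder limit
generated_ids = model.generate(inputs.input_features, max_new_tokens=400)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])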

Hey @silvacarl,

are you using Amazon SageMaker or an Inference Endpoint to deploy your model?

Inference.

But I can see your point: if we just set it up with SageMaker, we can modify anything we want, correct?
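
For context, a minimal sketch of that SageMaker route, assuming the sagemaker SDK's Hugging Face support (the role lookup, container versions, and instance type are illustrative):

import sagemaker
from sagemaker.huggingface import HuggingFaceModel

# IAM role for the endpoint; get_execution_role() works inside SageMaker
# notebooks, otherwise pass a role ARN directly
role = sagemaker.get_execution_role()

huggingface_model = HuggingFaceModel(
    env={
        "HF_MODEL_ID": "openai/whisper-large",      # pull the model from the Hub
        "HF_TASK": "automatic-speech-recognition",
    },
    role=role,
    transformers_version="4.26",  # illustrative; needs a DLC with Whisper support
    pytorch_version="1.13",
    py_version="py39",
)

predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.g4dn.xlarge",  # illustrative GPU instance
)

Swapping env for model_data plus a custom inference script is what gives you the "modify anything we want" flexibility.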

we want to set it up like this and run it against the many commercial STT tools we use now:

curl https://xnmxj6m0sqyl3z55.us-east-1.aws.endpoints.huggingface.cloud \
  -X POST \
  -H "Authorization: Bearer xxxx" \
  -H "Content-Type: application/json" \
  --data '{
    "encoding": "wav",
    "language": "Japanese",
    "transcribe": true,
    "languageDetect": false,
    "model": "large",
    "audio": "signed-url-for-google-or-aws-buckets"
  }'
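
For reference, the closest equivalent on Inference Endpoints is a custom handler. A minimal sketch of a handler.py that could accept a payload shaped like the request above; the payload fields ("audio" as a signed URL, etc.) are our own convention rather than an existing API, and it assumes a transformers version whose ASR pipeline forwards generate_kwargs:

# handler.py
import io

import requests
import soundfile as sf
from transformers import pipeline

class EndpointHandler:
    def __init__(self, path: str = ""):
        # path points at the model repository bundled with the endpoint
        self.asr = pipeline("automatic-speech-recognition", model=path)

    def __call__(self, data: dict) -> dict:
        payload = data.get("inputs", data)
        # fetch the audio from the caller-supplied signed URL (our convention)
        audio_bytes = requests.get(payload["audio"]).content
        audio, sampling_rate = sf.read(io.BytesIO(audio_bytes))
        # raise the generation budget so long files are not truncated
        result = self.asr(
            {"raw": audio, "sampling_rate": sampling_rate},
            generate_kwargs={"max_new_tokens": 400},
        )
        return {"text": result["text"]}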

question: what is the difference between these two options (SageMaker vs. Inference Endpoints)?