I’ve used the API for a while, but it keeps crashing, and loading it takes too long at times. I want to create a local install, but now I don’t get the JSON-format output with the labels and everything.
I get the logits, which I can then transform into probabilities, and I have the labels, etc. But is there no way of just getting the same response a simple API call gives? How can I replicate that format?
This would be my call:
response = requests.post(API_URL2, headers=headers, json=payload)
and I’d get a JSON-format response.
Now I’m having to do
input_1 = bert_tokenizer(x, return_tensors="pt")
output_1 = bert_model(**input_1)
predictions = torch.nn.functional.softmax(output_1.logits, dim=-1)
and I get a raw tensor like tensor([[..., 0.9948]], grad_fn=&lt;SoftmaxBackward0&gt;) instead of the labeled JSON.
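In case it helps to show what I mean, here is a minimal sketch of the conversion I’m trying to replicate: turning a row of logits plus a label mapping into the `[{"label": ..., "score": ...}]` shape the API returns. It uses plain Python so the softmax step is explicit; in practice the label mapping would come from `bert_model.config.id2label`, and I’m aware `transformers.pipeline("text-classification", ...)` produces this format directly, but I’d like to understand the mapping itself. The function name and the exact sort order are my own assumptions.

```python
import math

def logits_to_api_format(logits, id2label):
    """Convert one example's logits into API-style [{"label", "score"}] dicts.

    logits: list of floats, one per class (e.g. output_1.logits[0].tolist())
    id2label: dict mapping class index -> label string
              (for a transformers model: bert_model.config.id2label)
    """
    # Softmax: exponentiate and normalize so the scores sum to 1
    exps = [math.exp(v) for v in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Pair each probability with its label, sorted by descending score
    # (the hosted API appears to return them highest-score first)
    results = [{"label": id2label[i], "score": p} for i, p in enumerate(probs)]
    results.sort(key=lambda r: r["score"], reverse=True)
    return results

# Hypothetical two-class example
print(logits_to_api_format([2.0, 0.0], {0: "POSITIVE", 1: "NEGATIVE"}))
```

With real model output you would call it as `logits_to_api_format(output_1.logits[0].tolist(), bert_model.config.id2label)` and then `json.dumps(...)` the result.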