https://api-inference.huggingface.co/models/sentence-transformers/paraphrase-MiniLM-L6-v2

Hello All,

I am using sentence-transformers/paraphrase-MiniLM-L6-v2 via the API, but I am getting a 404 error. We are on the free plan. Any help, please?

1 Like

Hello All, it seems no model is accessible via the API. Please confirm if anyone else is facing the same issue.

import requests

# ✅ API key must be in quotes (token redacted)
API_TOKEN = "hf"
API_URL = "https://api-inference.huggingface.co/models/sentence-transformers/paraphrase-MiniLM-L6-v2"

headers = {"Authorization": f"Bearer {API_TOKEN}"}

def test_huggingface_api():
    data = {
        "inputs": "This is a test sentence."
    }
    response = requests.post(API_URL, headers=headers, json=data)

    print("Status Code:", response.status_code)
    if response.ok:
        print("API is working ✅")
        print("Response:", response.json())
    else:
        print("API error ❌")
        print("Error Message:", response.text)

test_huggingface_api()

Output:

Status Code: 404
API error ❌
Error Message: Not Found

2 Likes

I have the same problem: the Inference API still returns a 404 error whether I call it with curl in bash, with the Python client, or by visiting the URL in a web browser.

1 Like

Please make sure to hide or remove your API token, it’s not secure to post it publicly.

It looks like you’re receiving a 404 error when trying to access sentence-transformers/paraphrase-MiniLM-L6-v2 via the Hugging Face API. This usually happens if the model is unavailable or there’s a problem with the API endpoint. Here’s how you can fix it:

  1. Check Model Availability: Visit the model’s page to ensure it’s still hosted. If it’s been removed, you may need to find an alternative.

  2. Verify API Endpoint: Make sure you’re using the correct API URL and request format. The right format should be:

    headers = {"Authorization": f"Bearer {your_token}"}
    API_URL = "https://api-inference.huggingface.co/models/sentence-transformers/paraphrase-MiniLM-L6-v2"
    response = requests.post(API_URL, headers=headers, json={"inputs": "Your text here"})
    print(response.json())
    

    If the model has moved, the API URL may need to be updated.

  3. Free Plan Limitations: Some models require authentication or a paid plan. If you’re using the free tier, make sure the model is accessible to free users. You may need to run it locally instead.
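The endpoint format from step 2 can be wrapped in a small helper so the URL, headers, and payload are built in one place. This is just a sketch: `build_inference_request` is a hypothetical name, and the base URL is the api-inference host used elsewhere in this thread.

```python
def build_inference_request(model_id: str, token: str, text: str):
    """Build the URL, headers, and JSON payload for a call to
    https://api-inference.huggingface.co/models/<model_id>."""
    url = f"https://api-inference.huggingface.co/models/{model_id}"
    headers = {"Authorization": f"Bearer {token}"}
    payload = {"inputs": text}
    return url, headers, payload

# Example with the model from this thread and a placeholder token:
url, headers, payload = build_inference_request(
    "sentence-transformers/paraphrase-MiniLM-L6-v2", "hf_xxx", "A test sentence."
)
print(url)
```

If the model or the endpoint moves, only the helper needs to change, not every call site.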

If these steps don’t resolve the issue, let me know what error message you’re getting, and I’ll be happy to assist further!

Best,

1 Like

Don’t forget to disable the token that has been leaked.

Currently, almost all models for users are returning 404 errors. @michellehbn

Thanks, this has been removed and deleted as well.

1 Like

I couldn’t find much on the matter.

import requests

API_URL = "https://api-inference.huggingface.co/models/sentence-transformers/paraphrase-MiniLM-L6-v2"
your_token = "your_actual_api_token"
input_text = "Your actual text here"

headers = {"Authorization": f"Bearer {your_token}"}
response = requests.post(API_URL, headers=headers, json={"inputs": input_text})

if response.status_code == 404:
    print("Model not found or API URL is incorrect")
elif response.status_code == 401:
    print("Invalid API token")
elif response.status_code == 429:
    print("Rate limit exceeded")
else:
    print("Error:", response.text)

This code snippet tests the Hugging Face Inference API using the requests library in Python. Let’s break it down:

  1. It sends a POST request to the API with a JSON payload containing the input text.
  2. It checks the response status code and prints a specific message for 404 (model not found or wrong URL), 401 (invalid token), and 429 (rate limit exceeded).
  3. Any other status code, including a successful 200, falls through to the final else branch and is printed as a generic error.

The code seems mostly correct, but here are a few potential issues:

  1. The variable your_token should be replaced with an actual API token.
  2. The placeholder input text should be replaced with the text you actually want to process.
  3. There is no success branch, so even a 200 response is reported as an error.
  4. Error handling could be improved, such as catching network exceptions.

Here’s a slightly modified version with some improvements:

import requests
import json

API_URL = "https://api-inference.huggingface.co/models/sentence-transformers/paraphrase-MiniLM-L6-v2"
your_token = "your_actual_api_token"  # Replace with your API token
input_text = "Your actual text here"  # Replace with your input text

headers = {"Authorization": f"Bearer {your_token}"}
response = requests.post(API_URL, headers=headers, json={"inputs": input_text})

if response.ok:
    print("API is working ✅")
    print("Response:", response.json())
else:
    print("API error ❌")
    print("Status Code:", response.status_code)
    print("Error Message:", response.text)

Make sure to replace your_actual_api_token and Your actual text here with your actual API token and input text, respectively.
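To also guard against network failures (timeouts, DNS errors) rather than only HTTP error codes, one option is a wrapper like the sketch below. The function name is hypothetical, and the `send` parameter is a stand-in for `requests.post`, injected so the logic can be shown and exercised without a live connection.

```python
def call_inference_api(url, headers, payload, send):
    """Call the API via `send` (e.g. requests.post) and return a
    (result, error) pair; exactly one of the two is None."""
    try:
        response = send(url, headers=headers, json=payload, timeout=30)
    except Exception as exc:  # connection errors, timeouts, etc.
        return None, f"Request failed: {exc}"
    if response.ok:
        return response.json(), None
    return None, f"HTTP {response.status_code}: {response.text}"
```

With requests installed you would pass `send=requests.post`; in a test you can pass a stub that returns a canned response or raises, and check that errors come back as messages instead of propagating.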

1 Like