Hi, I'm very new to all of this.
I have downloaded a model using the huggingface-cli.
How would I go about running the model locally?
I have read the docs and can't work out how to get it to run.
Thanks in advance,
Joe
Edit:
I have this so far:
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

def main():
    model_name = "models--Orenguteng--Llama-3-8B-Lexi-Uncensored-GGUF"
    model = AutoModelForSequenceClassification.from_pretrained(model_name)
    tokenizer = AutoTokenizer.from_pretrained(model_name)

    input_text = input("you: ")
    inputs = tokenizer(input_text, return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits

    predicted_class = torch.argmax(logits, dim=1).item()
    print(f"Input text: {input_text}")
    print(f"Predicted class: {predicted_class}")

if __name__ == "__main__":
    main()
It just says that the path is incorrect and that I need to provide the path to the model on the Hub. :(
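From what I can tell, the name I'm passing is the local cache folder name that huggingface-cli creates, not the repo ID that from_pretrained expects. The standard huggingface_hub cache layout stores repos as models--{org}--{name}, which maps back to the {org}/{name} ID. A quick sketch of that conversion (just string handling, assuming that cache layout):

```python
# The hub cache stores model repos under "models--{org}--{name}",
# but from_pretrained wants the original "{org}/{name}" repo ID.
cache_name = "models--Orenguteng--Llama-3-8B-Lexi-Uncensored-GGUF"
repo_id = cache_name.removeprefix("models--").replace("--", "/")
print(repo_id)  # Orenguteng/Llama-3-8B-Lexi-Uncensored-GGUF
```

(Separately: since this is a GGUF checkpoint, I gather transformers can only load it if you also pass a gguf_file= argument to from_pretrained, and a chat model like this would normally go through AutoModelForCausalLM rather than a sequence-classification head, but I haven't verified that part.)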