Hugging Face Forums
Problem with launching DeepSeek-R1-Distill-Qwen-32B-Uncensored-Q8_0-GGUF
Models
John6666 | March 17, 2025, 4:17pm | #29
I see, so the model doesn’t recognize its own statements as its own.
In LangChain
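A minimal sketch of the role-tagging angle, assuming the history is passed to a LangChain chat model as a message list: earlier model turns need to go back as `AIMessage` objects, otherwise the model reads its own statements as if the user had made them. The `ChatOllama` wrapper and the model name below are assumptions for illustration, not details taken from this thread.

```python
# Hypothetical setup: the GGUF model served through Ollama and called via
# langchain_ollama. The point is how the conversation history is role-tagged.
from langchain_core.messages import SystemMessage, HumanMessage, AIMessage
from langchain_ollama import ChatOllama

llm = ChatOllama(model="deepseek-r1-distill-qwen-32b")  # placeholder model name

history = [
    SystemMessage(content="You are a helpful assistant."),
    HumanMessage(content="Summarize the plan in one sentence."),
    # The model's earlier reply goes back in as an AIMessage. If it were sent
    # as a HumanMessage (or pasted into the user prompt), the model would have
    # no way to know the statement was its own.
    AIMessage(content="The plan is to quantize the model to Q8_0 and run it locally."),
    HumanMessage(content="Who said that, you or me?"),
]

print(llm.invoke(history).content)
```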
Related topics

| Topic | Category | Replies | Views | Activity |
| --- | --- | --- | --- | --- |
| DeepSeek-R1-Distill-Llama-8B - CUDA out of Memory - RTX 4090 24GB | Beginners | 2 | 219 | February 26, 2025 |
| Tokenizer.template not working with ollama | 🤗Hub | 0 | 153 | February 1, 2025 |
| Why the model provide an error response ever time | Beginners | 5 | 22 | March 4, 2025 |
| Error running Llama 3.1 Minitron 4B quantized model with Ollama | Models | 2 | 930 | August 28, 2024 |
| A Call for Expert Help: Building a Native Windows AI Wrapper to Empower My Students with Learning Disabilities | Beginners | 2 | 12 | March 24, 2025 |