Hi, I tried to load the INT4 version of meta-llama/Meta-Llama-3.1-70B, but I am getting a fatal error. My server has 32 GB of RAM. The model loads successfully, but during prediction I get the fatal error. Can somebody help me with this?
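For context, here is a rough back-of-envelope estimate (assuming ~70B parameters and ~0.5 byte per parameter for INT4 weights; actual usage varies with the quantization format and runtime) showing why 32 GB is borderline, the weights alone nearly fill the available RAM before any KV cache or activation memory is allocated during prediction:

```python
# Rough memory estimate for a ~70B-parameter model quantized to INT4.
# Assumption: ~0.5 byte per parameter (4 bits), ignoring quantization
# scales/zero-points, KV cache, and activation buffers.
params = 70e9
bytes_per_param = 0.5  # 4 bits per weight

weights_gb = params * bytes_per_param / 1024**3
print(round(weights_gb, 1))  # ≈ 32.6 GB for the weights alone
```

With ~32.6 GB needed for weights alone, a 32 GB server has no headroom left for the KV cache and activations that inference allocates, which is consistent with loading succeeding (e.g. via memory-mapped weights) but prediction failing.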