Hi, can someone please advise me on the RAM and GPU requirements for the Llama 2 model with 7 billion parameters? Assume I run the model as-is, without further fine-tuning techniques such as LoRA or quantization. Conversely, what would the requirements be if I used LoRA, quantization, or both?
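As a rough starting point, the dominant cost is the weights themselves: parameter count × bytes per parameter, plus some overhead for activations and the KV cache. The sketch below is a back-of-envelope estimate under my own assumptions (2 bytes/param for fp16, 0.5 bytes/param for 4-bit, and an assumed ~20% overhead factor), not official figures for Llama 2:

```python
# Back-of-envelope VRAM estimate for a 7B-parameter model.
# Assumed byte widths: fp16 = 2 bytes/param, int8 = 1, int4 = 0.5.
# The 1.2x overhead factor (activations, KV cache) is a rough
# rule of thumb, not a measured or official number.

def estimate_gb(n_params: float, bytes_per_param: float,
                overhead: float = 1.2) -> float:
    """Approximate memory footprint in GiB for inference."""
    return n_params * bytes_per_param * overhead / 1024**3

n = 7e9  # Llama 2 7B

fp16 = estimate_gb(n, 2.0)  # unquantized inference
int8 = estimate_gb(n, 1.0)  # 8-bit quantized
int4 = estimate_gb(n, 0.5)  # 4-bit quantized (e.g. a QLoRA base model)

print(f"fp16: ~{fp16:.1f} GiB, int8: ~{int8:.1f} GiB, int4: ~{int4:.1f} GiB")
```

By this estimate, unquantized fp16 inference lands in the mid-teens of GiB, while 4-bit quantization brings the weights down to a few GiB; full fine-tuning would need substantially more (optimizer states and gradients), which is exactly what LoRA and QLoRA are designed to avoid by training only small adapter matrices.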