Hi, can someone please advise me on the RAM and GPU requirements for the Llama 2 model with 7 billion parameters? Assume I take the model as-is, without further fine-tuning techniques such as LoRA or quantization. Conversely, what would the requirements be if I used LoRA, quantization, or both?
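For anyone wondering the same thing, here is a rough back-of-envelope sketch of how the weights-only memory scales with precision, and how little LoRA adds on top. The numbers are approximations: real usage adds activations, KV cache, and framework overhead, and the LoRA adapter size below is a hypothetical example (actual size depends on rank and target modules).

```python
PARAMS = 7e9  # Llama 2 7B parameter count

def weights_mem_gb(params: float, bytes_per_param: float) -> float:
    """Memory (GiB) needed just to hold the model weights."""
    return params * bytes_per_param / 1024**3

# Loading the base model at different precisions:
for name, b in [("fp32", 4), ("fp16/bf16", 2), ("int8", 1), ("int4", 0.5)]:
    print(f"{name:>9}: ~{weights_mem_gb(PARAMS, b):.1f} GiB")
# fp16 lands around ~13 GiB of weights; int4 around ~3.3 GiB.

def lora_extra_mem_gb(trainable_params: float) -> float:
    """Extra memory for LoRA training on top of the frozen base weights.

    Assumes fp32 adapter weights plus Adam optimizer state
    (two fp32 moments) and fp32 gradients: 16 bytes per trainable param.
    """
    return trainable_params * 16 / 1024**3

# Hypothetical adapter size of ~4M trainable parameters:
print(f"LoRA overhead: ~{lora_extra_mem_gb(4e6):.2f} GiB")
```

The key takeaway the sketch illustrates: quantization shrinks the dominant cost (the frozen base weights), while LoRA keeps the trainable-parameter count, and therefore the optimizer-state cost, tiny. Combining both (as in QLoRA) lets a 7B model fit on a single consumer GPU.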