Hello, I'm building a chatbot that works with data from IMDb.
I was thinking of using meta-llama/Llama-3.2-1B-Instruct and LoRA fine-tuning it on the BrightData/IMDb-Media dataset. I have a GPU with 6GB of VRAM and not much time.
My questions are:
- Is this the right model for the task, or should I use a smaller one?
- Is it a good match for the dataset?
- Will 6GB of VRAM be enough for LoRA fine-tuning?
- Should I also subsample the dataset and LoRA-tune on a smaller scale? The dataset has 250K rows.
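For context on why LoRA is the plan for a 6GB card: it freezes the base weights and only trains two small low-rank matrices per adapted layer, so the number of trainable parameters (and their optimizer state) is a tiny fraction of the full model. Here's a rough numpy sketch of that idea, with illustrative dimensions I picked (2048 roughly matches the hidden size of a 1B-class model; rank 8 is a common LoRA default), not values from any specific config:

```python
import numpy as np

# LoRA idea: instead of updating the full weight matrix W (d_out x d_in),
# train two small matrices A (r x d_in) and B (d_out x r) and compute
# W + (alpha / r) * B @ A at forward time. Dimensions here are illustrative.
d_in, d_out, r, alpha = 2048, 2048, 8, 16

rng = np.random.default_rng(0)
W = rng.normal(size=(d_out, d_in))     # frozen base weight (not trained)
A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))               # trainable up-projection, zero-init
                                       # so the adapter starts as a no-op

def lora_forward(x):
    """x: (batch, d_in) -> (batch, d_out), base path plus low-rank delta."""
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

full_params = W.size            # what full fine-tuning would train
lora_params = A.size + B.size   # what LoRA actually trains
print(f"full: {full_params:,}  lora: {lora_params:,}  "
      f"ratio: {lora_params / full_params:.4%}")
# → full: 4,194,304  lora: 32,768  ratio: 0.7813%
```

So per adapted matrix you train well under 1% of the weights, which is why LoRA (especially combined with a 4-bit quantized base model, i.e. QLoRA) is the usual route for small-VRAM fine-tuning.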