Hi, I'm attempting to train a CodeLlama-34b-v2 model on a custom dataset of front-end code. I've tried doing this with GCP's Vertex AI; however, the integration between Hugging Face and GCP resources isn't very intuitive. Does anyone have experience training this model on larger datasets of code?
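For context, here's roughly how I'm preparing the dataset before any training happens. This is just a sketch with stdlib Python; the file extensions, directory layout, and the plain `{"text": ...}` record format are my own assumptions, not anything Vertex AI or Hugging Face requires:

```python
import json
from pathlib import Path


def build_records(src_dir, exts=(".html", ".css", ".js")):
    """Collect front-end source files into one text record per file."""
    records = []
    for path in sorted(Path(src_dir).rglob("*")):
        if path.suffix in exts:
            records.append({"text": path.read_text(encoding="utf-8", errors="ignore")})
    return records


def write_jsonl(records, out_path):
    """Write one JSON object per line (JSONL), a format the
    Hugging Face `datasets` JSON loader can read directly."""
    with open(out_path, "w", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")
```

From there I upload the JSONL to a GCS bucket and point the training job at it, which is the part where the Vertex AI / Hugging Face hand-off gets confusing for me.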