I am following the "Fine-tuning a pretrained model" tutorial from the Hugging Face Transformers docs. I replaced the bert-base-uncased model with google/mobilebert-uncased. I can get decent results using the Hugging Face Trainer, but when I train with a native PyTorch loop, the accuracy gets stuck around 50%. Is the Trainer doing some optimization that I need to add explicitly in the native PyTorch code to make it work? Thank you!
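For reference, my native PyTorch loop follows the tutorial, roughly like the sketch below (the dataset, batch size, and learning rate here are the tutorial's example values, not necessarily my exact setup). The comments flag Trainer defaults, such as the linear LR schedule and gradient clipping, that a hand-written loop has to replicate explicitly:

```python
import torch
from torch.optim import AdamW
from torch.utils.data import DataLoader
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer, get_scheduler

# Tokenize the tutorial's dataset (yelp_review_full, 5 labels)
tokenizer = AutoTokenizer.from_pretrained("google/mobilebert-uncased")
dataset = load_dataset("yelp_review_full")

def tokenize(batch):
    return tokenizer(batch["text"], padding="max_length", truncation=True)

tokenized = dataset.map(tokenize, batched=True)
tokenized = tokenized.remove_columns(["text"]).rename_column("label", "labels")
tokenized.set_format("torch")
train_dataloader = DataLoader(
    tokenized["train"].shuffle(seed=42).select(range(1000)), batch_size=8
)

model = AutoModelForSequenceClassification.from_pretrained(
    "google/mobilebert-uncased", num_labels=5
)
optimizer = AdamW(model.parameters(), lr=5e-5)

num_epochs = 3
num_training_steps = num_epochs * len(train_dataloader)
# Trainer defaults to a linear-decay LR schedule; training at a flat LR
# is a common reason a plain loop underperforms it.
lr_scheduler = get_scheduler(
    "linear",
    optimizer=optimizer,
    num_warmup_steps=0,
    num_training_steps=num_training_steps,
)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

model.train()
for epoch in range(num_epochs):
    for batch in train_dataloader:
        batch = {k: v.to(device) for k, v in batch.items()}
        loss = model(**batch).loss
        loss.backward()
        # Trainer also clips gradients (max_grad_norm=1.0 by default)
        torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
        optimizer.step()
        lr_scheduler.step()
        optimizer.zero_grad()
```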