I get the warning below when I try to run the code from this page.
```
/usr/local/lib/python3.7/dist-packages/transformers/optimization.py:309: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
  FutureWarning,
```
I am confused because the code doesn't seem to set an optimizer at all. The most likely place where the optimizer gets set is below, but I don't know how to change the optimizer from there:
```python
# define the training arguments
training_args = TrainingArguments(
    output_dir='/media/data_files/github/website_tutorials/results',
    num_train_epochs=5,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=8,
    per_device_eval_batch_size=16,
    evaluation_strategy="epoch",
    disable_tqdm=False,
    load_best_model_at_end=True,
    warmup_steps=200,
    weight_decay=0.01,
    logging_steps=4,
    fp16=True,
    logging_dir='/media/data_files/github/website_tutorials/logs',
    dataloader_num_workers=0,
    run_name='longformer-classification-updated-rtx3090_paper_replication_2_warm'
)

# instantiate the trainer class and check for available devices
trainer = Trainer(
    model=model,
    args=training_args,
    compute_metrics=compute_metrics,
    train_dataset=train_data,
    eval_dataset=test_data
)

device = 'cuda' if torch.cuda.is_available() else 'cpu'
device
```
I tried another transformer, distilbert-base-uncased, with the identical code, and it runs without any warnings.
- Is this warning specific to longformer?
- How should I change the optimizer? I sketched my best guess below, but I'm not sure it's right.
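For reference, here is a minimal sketch of what I think the change might look like. I'm assuming the Trainer's `optimizers` argument accepts an `(optimizer, lr_scheduler)` tuple and that passing `None` for the scheduler lets the Trainer build its default one; I haven't verified that this is the recommended approach.

```python
import torch
from transformers import Trainer

# build the PyTorch AdamW that the warning recommends,
# reusing the hyperparameters from training_args above
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=training_args.learning_rate,
    weight_decay=training_args.weight_decay,
)

# pass it to the Trainer so it (hopefully) no longer creates the
# deprecated transformers AdamW internally
trainer = Trainer(
    model=model,
    args=training_args,
    compute_metrics=compute_metrics,
    train_dataset=train_data,
    eval_dataset=test_data,
    optimizers=(optimizer, None),  # None -> let the Trainer create its default scheduler
)
```

I also saw that newer versions of TrainingArguments seem to have an `optim` argument (e.g. `optim="adamw_torch"`), but I'm not sure which versions support it or whether that is the preferred fix.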