FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead

To fix this, comment out the deprecated import:

#from transformers import AdamW

and switch the optimizer to the PyTorch AdamW implementation:

optimizer = torch.optim.AdamW(optimizer_grouped_parameters, lr=1e-5)
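Putting it together, here is a minimal self-contained sketch of the switch. The model and the parameter grouping are illustrative stand-ins (the original post doesn't show them); `optimizer_grouped_parameters` follows the common transformers pattern of excluding biases from weight decay.

```python
import torch
import torch.nn as nn

# Toy model standing in for the real one (unspecified in the post).
model = nn.Linear(10, 2)

# Common grouping: apply weight decay to everything except biases.
# The group names and decay value here are illustrative assumptions.
no_decay = ["bias"]
optimizer_grouped_parameters = [
    {
        "params": [p for n, p in model.named_parameters()
                   if not any(nd in n for nd in no_decay)],
        "weight_decay": 0.01,
    },
    {
        "params": [p for n, p in model.named_parameters()
                   if any(nd in n for nd in no_decay)],
        "weight_decay": 0.0,
    },
]

# torch.optim.AdamW replaces the deprecated transformers.AdamW.
optimizer = torch.optim.AdamW(optimizer_grouped_parameters, lr=1e-5)

# One dummy step to confirm the optimizer is wired up correctly.
loss = model(torch.randn(4, 10)).sum()
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

Note that `transformers.AdamW` defaulted to `correct_bias=True` behavior differences aside, the PyTorch version is the maintained implementation, so any weight-decay value previously passed per parameter group carries over unchanged.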