Federated Learning using Trainer

Hello! When I implemented the FedAvg algorithm, I used the Trainer from the Transformers library to fine-tune the model. I found that after one aggregation round completed, the clients' Trainers kept holding GPU memory without releasing it, so an OOM error occurred during the second round.
My implementation idea is roughly this: each client holds a Trainer (15 clients in total), and 3 clients are selected to participate in each aggregation round. After the selected clients fine-tune locally, their model parameters are aggregated on the server and then sent back to the clients to update their models.
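To make the setup concrete, here is a minimal sketch of one round as I described it (`client`, `client.trainer`, `fedavg`, and `run_round` are placeholder names from my own code, not Transformers APIs):

```python
import random
import torch

def fedavg(state_dicts):
    """Plain FedAvg: element-wise average of the clients' parameters.
    (Non-float buffers are cast through float here; fine for a sketch.)"""
    return {
        key: torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
        for key in state_dicts[0]
    }

def run_round(clients, global_state, num_selected=3):
    """One aggregation round: sample clients, fine-tune locally, average, broadcast."""
    selected = random.sample(clients, num_selected)
    local_states = []
    for client in selected:
        client.trainer.model.load_state_dict(global_state)  # start from the global model
        client.trainer.train()                              # local fine-tuning via Trainer
        local_states.append(
            {k: v.detach().cpu() for k, v in client.trainer.model.state_dict().items()}
        )
    new_global = fedavg(local_states)
    for client in clients:
        client.trainer.model.load_state_dict(new_global)    # send the averaged model back
    return new_global
```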
The problem is that the clients selected in the previous round do not release their GPU memory after aggregation, so the clients selected in the new round allocate additional memory on top of it.
Is there any way to release the GPU memory occupied by the Trainer?
I tried the simple approach:

```python
del client.trainer
```

but it seems to have no effect; the GPU memory usage does not change.
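Would something along these lines be the right direction? A minimal sketch of what I have in mind (`release_client_gpu` is a hypothetical helper of mine; I'm not sure `empty_cache` is enough if the Trainer's optimizer state still holds GPU references):

```python
import gc
import torch

def release_client_gpu(client):
    """Best-effort attempt to free the GPU memory a finished client's Trainer holds.
    (My own sketch; `client` is the placeholder from my code above.)"""
    client.trainer.model.to("cpu")  # move the weights off the GPU first
    client.trainer = None           # drop the Trainer and, with it, its optimizer/scheduler state
    gc.collect()                    # force Python to actually collect the dropped objects
    torch.cuda.empty_cache()        # return PyTorch's cached, now-unreferenced blocks to the driver
```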