How to free pipeline memory for a new model on a TPU

I am trying to use a Google Cloud TPU for model inference.

On a GPU I used:

del pipeline              # drop the last Python reference to the pipeline
torch.cuda.empty_cache()  # release cached CUDA memory back to the driver

to free memory before loading a new model.

But how can I free TPU memory?
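
For context, this is the kind of thing I am guessing at on the TPU side (assuming the model runs through torch_xla; the gc.collect() and xm.mark_step() calls are my own guess, I have not confirmed they actually release the memory):

import gc
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()

# stand-in for the real pipeline: a small model moved to the TPU device
pipeline = torch.nn.Linear(1024, 1024).to(device)

# ... run inference ...

# my guess at the TPU-side cleanup:
del pipeline      # drop the last Python reference
gc.collect()      # make sure Python actually frees the XLA tensors
xm.mark_step()    # flush any pending lazy XLA work

# print whatever device memory stats torch_xla reports, to see if anything was freed
print(xm.get_memory_info(device))

Is this the right approach, or is there a proper equivalent of torch.cuda.empty_cache() for TPUs?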