Using sensitive info during model inference

Hi!

A fairly simple question:
Can I be sure that my data is not transferred anywhere during model inference?
I use a tokenizer and a model from the Python package transformers.
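
For context, here is roughly what my inference code looks like. This is just a minimal sketch: the model name is a placeholder, and I'm assuming a standard AutoTokenizer / AutoModelForSequenceClassification setup with PyTorch.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder model name, not the one I actually use
model_name = "distilbert-base-uncased-finetuned-sst-2-english"

# Load tokenizer and model (cached locally after the first download)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

# Run inference on text that may contain sensitive information
inputs = tokenizer("some sensitive text here", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.logits)
```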

Thanks!