Using PyTorch model in TensorFlow

Hi,

I would like to use a model built with PyTorch (namely this one) in a TensorFlow environment.

More specifically, I would like to start by just extracting some of the embeddings from the later layers, and then potentially run some fine-tuning.

So I have two questions:

  1. Is there a way to load and run inference from a PyTorch model in TensorFlow?
  2. Is there a way to load and fine-tune a PyTorch model in TensorFlow?

Thanks in advance!

Hi @gruffgoran, your use cases sound like a perfect match for the ONNX format :slight_smile:

Having said that, you might be able to get a quick win by trying something like the following (see docs):

tf_model = TFBertForSequenceClassification.from_pretrained("KB/bert-base-swedish-cased", from_pt=True)

From here you can run inference, fine-tune, etc. using TensorFlow.

If you want to go the ONNX route, the idea would be to convert PyTorch → ONNX and then load the ONNX model in TensorFlow. Details on doing the conversion can be found here: Exporting transformers models — transformers 4.3.0 documentation

Hello, thanks, but the link seems to be broken: https://huggingface.co/transformers/serialization.html#onnx-onnxruntime

Could you point me to the correct one?