How does the Transformers library work under the hood?

Hi everyone,

I am curious to know how the Transformers library works under the hood and how it is architected. Specifically, I'm looking for a way to run existing models from the HF Hub directly on my specialized hardware architecture (without any retraining or using runtimes like ONNX). Any documentation on this would be awesome.


To put it simply, Transformers is essentially a wrapper around PyTorch, so as long as you can find a way to run PyTorch on your target computing device, there should be no fundamental issue.
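
A minimal sketch to illustrate the point (using `bert-base-uncased` as an arbitrary example checkpoint): a Transformers model is just an ordinary `torch.nn.Module`, so it can be moved to any device string that your PyTorch backend registers. The `"cuda"` string below is only a stand-in for whatever device your hardware's PyTorch backend exposes.

```python
import torch
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# The loaded model is a plain torch.nn.Module...
print(isinstance(model, torch.nn.Module))  # True

# ...so if your hardware exposes a PyTorch backend, you can target it directly.
# Replace "cuda" with the device string your backend registers.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

inputs = tokenizer("Hello world", return_tensors="pt").to(device)
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```

So the question really reduces to: does your hardware have a PyTorch backend (an in-tree one, or an out-of-tree one via PyTorch's extension mechanisms)? If yes, Transformers models should run on it without retraining or ONNX.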
