Inference using Pipeline and TensorFlow

Is there a way to use a Dataset during inference with a standard Pipeline (zero-shot-classification task) with TensorFlow? I found a note about this in the Pipeline documentation, but it seems to apply only to PyTorch via the KeyDataset object - Pipelines
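For reference, this is roughly the pattern I mean from the docs, which as far as I can tell is PyTorch-only since KeyDataset lives under `pipelines.pt_utils` (the dataset contents and labels here are just placeholders):

```python
from datasets import Dataset
from transformers import pipeline
from transformers.pipelines.pt_utils import KeyDataset

# Placeholder data just to illustrate the shape of the call.
ds = Dataset.from_dict({"text": ["first document", "second document"]})

# Defaults to the PyTorch backend; KeyDataset streams one column of the Dataset.
pipe = pipeline("zero-shot-classification")
for out in pipe(KeyDataset(ds, "text"), candidate_labels=["sports", "politics"]):
    print(out)
```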

The reason behind my question: I have an application where I am passed Pandas DataFrames of text that I need to classify with my zero-shot-classification model. My thought was that, for the fastest inference, I should use a Dataset object rather than converting my text data to a list. However, it is unclear to me whether I can then run the Pipeline on the Dataset in TensorFlow, or whether I instead need to tokenize the Dataset separately and write my own logic to push the data through the model without the Pipeline. I'm looking for the most efficient solution.
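To make it concrete, this is roughly what I have today: convert the DataFrame column to a plain list and call the TF pipeline on it (the model name, labels, and sample data are placeholders; this assumes a checkpoint with TensorFlow weights is available):

```python
import pandas as pd
from transformers import pipeline

# Placeholder data standing in for the DataFrames my application receives.
df = pd.DataFrame({"text": ["the match went to overtime", "the bill passed the senate"]})

# framework="tf" forces the TensorFlow backend; inference runs on CPU.
classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",  # placeholder; any TF-compatible checkpoint
    framework="tf",
)

# Current approach: plain Python list in, list of result dicts out.
results = classifier(
    df["text"].tolist(),
    candidate_labels=["sports", "politics"],
)
for res in results:
    print(res["labels"][0], res["scores"][0])
```

My question is whether replacing that list with a Dataset actually buys me anything here, or whether that only helps on the PyTorch side.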

Also, just a few more details about my constraints: as mentioned, I am using TensorFlow, not PyTorch; the inference is run on CPU; and cloud/API solutions are not an option.