More Outputs from Inference API Endpoints

I’ve searched for a while on how to get the pooler_output from the feature-extraction task when using the Inference API (specifically for setu4993/LaBSE, but the same question applies to any feature-extraction inference endpoint).
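As far as I can tell, the endpoint only returns token-level embeddings. A client-side workaround sketch (mean-pooling over the returned token vectors; `mean_pool` is just a helper I'm defining here, and note this is *not* equivalent to LaBSE's pooler_output, which runs the CLS token through a dense + tanh layer):

```python
import numpy as np

def mean_pool(token_embeddings, attention_mask):
    """Average token embeddings, ignoring padded positions."""
    mask = np.asarray(attention_mask, dtype=float)[..., None]
    summed = (np.asarray(token_embeddings) * mask).sum(axis=-2)
    counts = mask.sum(axis=-2)
    return summed / counts

# Toy example: one sentence, 2 real tokens + 1 pad, hidden size 3.
tokens = [[[1.0, 2.0, 3.0], [3.0, 4.0, 5.0], [0.0, 0.0, 0.0]]]
mask = [[1, 1, 0]]
print(mean_pool(tokens, mask))  # [[2. 3. 4.]]
```

That gives usable sentence vectors, but it's still a different pooling than what the model was trained with.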

When running the model locally with the transformers Python library, I can simply read pooler_output from the model output to get the sentence embeddings.

Alternatively, are there other multilingual sentence-embedding models that I could use directly through the Inference API to precompute embeddings?

I don’t really get why such a wonderful system, where you can create endpoints in a few clicks, doesn’t provide a way to access all of a model’s outputs.
That seems like wasted potential.