Float tensors as input to transformer

Hello everyone
I am seeking advice on a transformer model that can take a float tensor as input. My input to the transformer will be a 1D tensor of size 768 containing floating-point values, and the output will also be a 1D tensor of size 768 containing floating-point values. How can I convert the input into feature embeddings (the transformer expects input of shape (S, E)), or what other approach can I use to solve this?

A Transformer typically expects two inputs, namely input_ids and attention_mask. These are obtained by passing your text through a tokenizer. Could you be more specific about your use case?

As an option, here’s some info for you. To convert your input data, a 1D tensor of size 768, into the format the transformer expects (shape (S, E), where S is the sequence length and E is the embedding dimension), you will need to perform a few steps.
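A minimal sketch of two such options, assuming PyTorch's built-in nn.TransformerEncoder (the specific split of 768 into 12 tokens of size 64 is just an illustrative choice, not something from your setup):

```python
import torch
import torch.nn as nn

x = torch.randn(768)  # your 1D float input

# Option A: treat the whole vector as a single "token" with embedding size 768
seq_a = x.unsqueeze(0)   # shape (S=1, E=768)

# Option B: split the vector into S tokens of size E, e.g. 12 tokens of size 64
S, E = 12, 64
seq_b = x.view(S, E)     # shape (S=12, E=64)

# Feed option B through a TransformerEncoder; PyTorch's default layout is
# (S, batch, E), so add a batch dimension of 1 at dim 1
encoder_layer = nn.TransformerEncoderLayer(d_model=E, nhead=4)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
out = encoder(seq_b.unsqueeze(1))    # shape (S=12, batch=1, E=64)

# Flatten back to a 1D tensor of size 768 to match your desired output
y = out.squeeze(1).reshape(768)
```

With option A the self-attention has only one position to attend over, so option B (or projecting each scalar up to an embedding with a learned nn.Linear) usually makes better use of the architecture.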