Using Tokenizer for integer data

Hey guys, I am working on a problem with tabular data that has more than 90 features and one target, and all the features are continuous integers. I want to use pre-trained BERT or GPT-2, but the tokenizer expects its input in text format. I can convert the integer data to text like this:

original_data = [1, 2, 3, 4, 5, ..., 94]
transformed_data = ["1 2 3 4 5 ... 94"]

Now if I pass transformed_data to the tokenizer it will surely work, but I want to know if someone has tried to use transformers for this purpose, and if so, what was the outcome and what did the results look like?
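For concreteness, here is a minimal sketch of the conversion I have in mind. The helper name `ints_to_text` is just illustrative, and the tokenizer call at the end is commented out since it assumes `transformers` is installed and uses `bert-base-uncased` as an example checkpoint:

```python
def ints_to_text(row):
    """Join a row of integer features into one space-separated string."""
    return " ".join(str(v) for v in row)

# 94 integer features, as in my data
original_data = list(range(1, 95))
transformed_data = ints_to_text(original_data)
print(transformed_data[:13])  # first few features as text

# With transformers installed, the string could then be tokenized, e.g.:
# from transformers import AutoTokenizer
# tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
# encoding = tokenizer(transformed_data, truncation=True, max_length=512)
#
# Caveat: sub-word tokenizers often split multi-digit numbers into
# several pieces (e.g. "1234" -> "123", "##4"), so the token count
# can be much larger than the number of features.
```

One thing I noticed while sketching this: because of sub-word splitting, the sequence length seen by the model is not fixed per row, which seems relevant when comparing against models built for tabular data.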