Currently I am tokenizing a pandas column of strings by defining a function that performs the tokenization and applying it to the column with pandas `map`.
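Here's a minimal sketch of what I'm doing (the `df`, `text` column, and `tokenize` function are illustrative stand-ins; my real tokenizer is more involved):

```python
import pandas as pd

df = pd.DataFrame({"text": ["some example sentence", "another one"]})

# Placeholder tokenizer; my real function does more work per string.
def tokenize(text):
    return text.split()

# Apply the tokenizer row by row over the whole column.
df["tokens"] = df["text"].map(tokenize)
```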
It's slow when I have millions of rows of text, and I am wondering if there's a faster way to tokenize all my training examples.