I'm loading my data into a pandas DataFrame and have loaded the BERT tokenizer for bert-base-cased.
I am trying to tokenize with the code below:
def tokenize_function(examples):
    return tokenizer(examples["text"], padding="max_length", truncation=True)

dataset_df.map(tokenize_function, batched=True)
This throws an error saying the DataFrame has no attribute 'map'.
How can I map the tokenizer over a pandas DataFrame?
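To illustrate what I'm after, here is a minimal sketch using only pandas, with a stand-in whitespace tokenizer in place of the real one (the function name and data are just placeholders): applying a function to the "text" column works, while calling .map on the DataFrame itself does not behave like a datasets-style map.

```python
import pandas as pd

# Stand-in for the real tokenizer: a plain function, purely for illustration.
def fake_tokenize(text):
    return text.split()

df = pd.DataFrame({"text": ["hello world", "pandas has no map like datasets"]})

# Apply the function row-wise over the "text" column and store the result.
df["tokens"] = df["text"].apply(fake_tokenize)
print(df["tokens"].tolist())
```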