Transformers: WordLevel tokenizer produces strange vocabulary

When I train a WordLevel tokenizer, I get a strange vocabulary. Below is my code:

data = [
    "Beautiful is better than ugly."
    "Explicit is better than implicit."
    "Simple is better than complex."
    "Complex is better than complicated."
    "Flat is better than nested."
    "Sparse is better than dense."
    "Readability counts."
]

from tokenizers import Tokenizer, models, normalizers, pre_tokenizers, decoders, trainers

tokenizer = Tokenizer(models.WordLevel())

trainer = trainers.WordLevelTrainer(
    vocab_size=100000,
)

tokenizer.train_from_iterator(data, trainer=trainer)

tokenizer.get_vocab()

The output is the following:

{'Beautiful is better than ugly.Explicit is better than implicit.Simple is better than complex.Complex is better than complicated.Flat is better than nested.Sparse is better than dense.Readability counts.': 0}

Please explain what I’m doing wrong…

Two things are going wrong here.

First, you are missing the commas between the string literals in `data`. Python implicitly concatenates adjacent string literals (`"a" "b"` is the same as `"ab"`), so your list actually contains a single long string, which is exactly the one "word" you see in the vocabulary.

Second, the tokenizer has no pre-tokenizer, so nothing ever splits the training text on whitespace; even with the commas fixed, each whole sentence would be counted as a single token. You can either pass your data pre-split, as a list of lists of words (`train_from_iterator` also accepts lists of strings), or set a pre-tokenizer such as `pre_tokenizers.Whitespace()`, which you will need anyway when encoding new text later.
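
A minimal corrected version might look like this (the `[UNK]` token and the `Whitespace` pre-tokenizer are my choices here, not something your code requires; adapt them as needed):

from tokenizers import Tokenizer, models, pre_tokenizers, trainers

data = [
    "Beautiful is better than ugly.",  # note the commas
    "Explicit is better than implicit.",
    "Simple is better than complex.",
    "Complex is better than complicated.",
    "Flat is better than nested.",
    "Sparse is better than dense.",
    "Readability counts.",
]

# WordLevel needs an unknown token so it can encode
# out-of-vocabulary words later
tokenizer = Tokenizer(models.WordLevel(unk_token="[UNK]"))

# Split the text on whitespace (punctuation becomes its own token)
# before the trainer counts words
tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()

trainer = trainers.WordLevelTrainer(
    vocab_size=100000,
    special_tokens=["[UNK]"],
)

tokenizer.train_from_iterator(data, trainer=trainer)
print(tokenizer.get_vocab())

With this, `get_vocab()` should contain one entry per distinct word, with punctuation split off (e.g. 'is', 'better', 'than', '.', and so on), rather than one giant entry for the whole text.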