Bug in offset_mapping

Hi guys, I recently noticed that the offset mapping returned by the tokenizer seems to be problematic. I am working with the llama3-8b-Instruct model with dtype=fp16.

from transformers import AutoTokenizer

# fast tokenizer for the model above (offset mapping requires a fast tokenizer)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")
text = '''Suppose A represents a certain relation. Infer the relation based on certain examples.'''
token_ranges = tokenizer(text, return_offsets_mapping=True)['offset_mapping']

and what it returns is:

[(0, 0), (0, 3), (3, 7), (7, 7), (9, 9), (20, 20), (22, 22), (30, 30), (39, 39), (40, 40), (46, 46), (50, 50), (59, 59), (65, 65), (68, 68), (76, 76), (85, 85)]

I assumed it should return the start and end positions of each token in the original text. Is this a bug, or is there something I'm misunderstanding?
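
For reference, here is the sanity check I would expect to pass, assuming each offset pair is meant to slice the matching token's text out of the original string (a minimal sketch, not a claim about the intended API behavior):

encoding = tokenizer(text, return_offsets_mapping=True)
for token_id, (start, end) in zip(encoding['input_ids'], encoding['offset_mapping']):
    # the slice of the original text should line up with the decoded token
    print(repr(text[start:end]), '<->', repr(tokenizer.decode([token_id])))

With the offsets above, most slices come out empty (start == end), which is why I think something is off.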