- Why do papers like LayoutLMv3 mention visual tokenizers that are absent from all open-source implementations?
- Why do most BEIT implementations lack the visual tokenizer?
- Why do pixel values go through PatchEmbedding straight into a decoder without any tokenization?
- What is the relationship between PatchEmbed, `self.mask_token = nn.Parameter()`, and visual tokenizers?
However, when I look at the BEiT code in HuggingFace and timm, there is no tokenization of image patches anywhere: the pixel values are turned into patch embeddings that go straight into the encoder.
In Microsoft's BEiT implementation I did manage to find an example that uses the visual tokenizer. But after successfully running it, my confusion only increased, which is what led me to question 4 above.
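To make my confusion concrete, here is a rough sketch of my current mental model of BEiT-style masked image modeling. All names here (`PatchEmbed`, `ToyMIM`, `visual_tokenizer`) are illustrative placeholders, not the actual HuggingFace/timm API; the point is that the visual tokenizer seems to be needed only to produce *training targets*, while the encoder input path is pixels → patch embeddings, with `mask_token` swapped in at masked positions:

```python
import torch
import torch.nn as nn

class PatchEmbed(nn.Module):
    """Conv-based patchifier: pixels -> patch embeddings (the encoder INPUT path)."""
    def __init__(self, patch=16, dim=768):
        super().__init__()
        self.proj = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)

    def forward(self, x):
        # (B, 3, H, W) -> (B, N, dim) where N = (H/patch) * (W/patch)
        return self.proj(x).flatten(2).transpose(1, 2)

class ToyMIM(nn.Module):
    """Toy masked-image-modeling model (illustrative, not the real BEiT code)."""
    def __init__(self, dim=768, vocab_size=8192):  # 8192 = dVAE codebook size in the paper
        super().__init__()
        self.patch_embed = PatchEmbed(dim=dim)
        # Learned [MASK] embedding, replacing the embeddings of masked patches.
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=12, batch_first=True), num_layers=2)
        self.head = nn.Linear(dim, vocab_size)  # predicts visual-token ids

    def forward(self, pixels, bool_mask):
        x = self.patch_embed(pixels)                 # pixels -> embeddings, no tokenizer
        m = bool_mask.unsqueeze(-1).type_as(x)
        x = x * (1 - m) + self.mask_token * m        # masked patches -> mask_token
        return self.head(self.encoder(x))            # (B, N, vocab_size) logits

# The visual tokenizer (a frozen dVAE in the paper) would appear ONLY when
# building targets during pre-training, never in the forward pass above:
#   target_ids = visual_tokenizer(pixels)                       # (B, N) codebook ids
#   loss = F.cross_entropy(logits[bool_mask], target_ids[bool_mask])
```

If this sketch is right, it would explain why inference-only implementations can drop the tokenizer entirely, but I would love confirmation.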
Thank you for reading; any light shed on this subject would be much appreciated.