Hey all, am I missing something here:
Do bert-base-multilingual-uncased and Sentence-BERT/LaBSE have the same layers?
When I print out both models, it seems so. I thought they were different? Is it just the data they have been trained on that differs?
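For reference, here is roughly what I'm running (a minimal sketch; I'm assuming the Hugging Face hub checkpoints bert-base-multilingual-uncased and sentence-transformers/LaBSE):

```python
from transformers import AutoModel

# Load both checkpoints (hub names assumed)
mbert = AutoModel.from_pretrained("bert-base-multilingual-uncased")
labse = AutoModel.from_pretrained("sentence-transformers/LaBSE")

# Printing a model shows its module tree; both load as a BertModel,
# so the printed layer stacks look essentially identical
print(mbert)
print(labse)

# Comparing the configs instead shows the concrete differences
# (e.g. vocab size), which the module tree alone hides
print(mbert.config)
print(labse.config)
```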
Thanks a lot