LaBSE vs multilingual BERT, same layers?

Hey all, am I missing something here:

Do bert-base-multilingual-uncased and sentence-transformers/LaBSE have the same layers?

When I print out both models, it seems so. I thought they were different? Is it just the data they were trained on that differs?

Thanks a lot

Oh, I see now: they do indeed use the same layers, as stated in the research paper (page 4).
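For anyone who wants to check this themselves rather than eyeballing two printed models, here is a minimal sketch using the transformers library. It diffs the sub-module names of the two checkpoints; the Hub model IDs are my assumption of the checkpoints being compared, and loading them requires a network connection.

```python
# Sketch: compare the layer structure of two Hugging Face checkpoints
# by diffing the names of their sub-modules.

def architecture_diff(names_a, names_b):
    """Return the module names present in one model but not the other."""
    a, b = set(names_a), set(names_b)
    return sorted(a - b), sorted(b - a)

if __name__ == "__main__":
    # Assumed Hub IDs; downloads the weights on first run.
    from transformers import AutoModel

    mbert = AutoModel.from_pretrained("bert-base-multilingual-uncased")
    labse = AutoModel.from_pretrained("sentence-transformers/LaBSE")

    only_mbert, only_labse = architecture_diff(
        (name for name, _ in mbert.named_modules()),
        (name for name, _ in labse.named_modules()),
    )
    # Empty lists on both sides mean the module graphs have identical names.
    print("only in mBERT:", only_mbert)
    print("only in LaBSE:", only_labse)
```

If both printed lists are empty, the two models share the same architecture, and the differences come down to training data and objective (plus things like pooling, which the module names alone won't show).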