Gemma 2B model loading issue

Hi there, I was looking for a way to load the Gemma TensorFlow Lite model, but I could not find any resources on this. Since the model file is in .bin format, it does not load properly. I tried `interpreter = tf.lite.Interpreter(model_path)`.
It returns:

ValueError: tensorflow/lite/core/subgraph.cc:1870 required_bytes != bytes (131137536 != 262275072) Tensor 0 is invalidly specified in the schema.
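For completeness, here is roughly the full snippet that reproduces the error. The import and the file name are filled in by me; the path is just a placeholder for wherever the downloaded .bin file lives on disk:

```python
import tensorflow as tf

# Placeholder path to the downloaded Gemma 2B .bin file
# (the actual file name/location on my machine may differ).
model_path = "gemma-2b-it-cpu-int4.bin"

# This call raises the ValueError quoted above.
interpreter = tf.lite.Interpreter(model_path=model_path)

# Never reached, since the constructor already fails.
interpreter.allocate_tensors()
```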
I understand these .bin models are specifically designed for mobile devices and that Google provides the MediaPipe API to work with them. However, I wanted to look at the performance of this model on my computer. Any help with loading the model from the .bin file into either TensorFlow or PyTorch would be much appreciated.
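In case it helps clarify what I'm after: the fallback I know of is to skip the .bin file entirely and load the standard Gemma 2B checkpoint in PyTorch through Hugging Face transformers, roughly as in the sketch below (this assumes the `google/gemma-2b` Hub id and an accepted Gemma license). That uses different weight files, though, so I'd still like to know whether the .bin itself can be loaded.

```python
# Sketch of the PyTorch fallback: loads the standard Gemma 2B checkpoint
# from the Hugging Face Hub, not the MediaPipe .bin file.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b"  # assumed Hub id; requires accepting the Gemma license
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float32)

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```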