How do I fix the "ByteBuffer is not a valid flatbuffer model" error?

I tried converting the Hugging Face GPT-2 model to TFLite format using this exact script: Reference Script.

The conversion produced the following logs:

Downloading:   0%|          | 0.00/665 [00:00<?, ?B/s]
Downloading: 100%

498M/498M [03:10<00:00, 4.62MB/s]
All model checkpoint layers were used when initializing TFGPT2LMHeadModel.

All the layers of TFGPT2LMHeadModel were initialized from the model checkpoint at gpt2.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFGPT2LMHeadModel for predictions without further training.
None
None
WARNING:absl:Found untraced functions such as wte_layer_call_fn, wte_layer_call_and_return_conditional_losses, dropout_layer_call_fn, dropout_layer_call_and_return_conditional_losses, ln_f_layer_call_fn while saving (showing 5 of 294). These functions will not be directly callable after loading.
INFO:tensorflow:Assets written to: /var/folders/x7/5xhg0cw578l_p770plz2hyk80000gn/T/tmpoi_xp1kv/assets
INFO:tensorflow:Assets written to: /var/folders/x7/5xhg0cw578l_p770plz2hyk80000gn/T/tmpoi_xp1kv/assets
2023-01-03 17:52:02.881677: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:362] Ignored output_format.
2023-01-03 17:52:02.882405: W tensorflow/compiler/mlir/lite/python/tf_tfl_flatbuffer_helpers.cc:365] Ignored drop_control_dependency.
249173188
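For reference, one quick sanity check I can run on the generated file before deploying it: a TFLite model is a FlatBuffer whose file identifier "TFL3" sits at bytes 4-8. This is only a diagnostic sketch (the path "model.tflite" is assumed to be where the converter wrote its output):

```python
import os

def looks_like_tflite(path):
    # A TFLite FlatBuffer carries the file identifier "TFL3"
    # at byte offset 4 (right after the 4-byte root offset).
    with open(path, "rb") as f:
        header = f.read(8)
    return len(header) == 8 and header[4:8] == b"TFL3"

if os.path.exists("model.tflite"):
    print(looks_like_tflite("model.tflite"))
```

If this returns False, the file was already invalid (or corrupted) before it ever reached the Android app.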

Then I replaced the model.tflite in the Android GPT-2 project with the one generated in the prior step and tried to run the app, but in the logs I see:

E/AndroidRuntime: FATAL EXCEPTION: main
    Process: co.huggingface.android_transformers.gpt2, PID: 22134
    java.lang.IllegalArgumentException: ByteBuffer is not a valid flatbuffer model
        at org.tensorflow.lite.NativeInterpreterWrapper.createModelWithBuffer(Native Method)
        at org.tensorflow.lite.NativeInterpreterWrapper.<init>(NativeInterpreterWrapper.java:60)
        at org.tensorflow.lite.Interpreter.<init>(Interpreter.java:224)
        at co.huggingface.android_transformers.gpt2.ml.GPT2Client$loadModel$2.invokeSuspend(GPT2Client.kt:138)
        at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
        at kotlinx.coroutines.DispatchedTask.run(Dispatched.kt:241)
        at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:594)
        at kotlinx.coroutines.scheduling.CoroutineScheduler.access$runSafely(CoroutineScheduler.kt:60)
        at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:740)

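In case it is relevant: I understand that this exact exception can occur when the build tooling compresses the .tflite asset, because the Interpreter then receives a memory-mapped ByteBuffer that is no longer a valid FlatBuffer. A common safeguard is to disable compression for .tflite files in the module's build.gradle, though I am not sure whether this applies to my setup:

```groovy
android {
    aaptOptions {
        // Keep .tflite assets uncompressed so they can be memory-mapped
        noCompress "tflite"
    }
}
```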
How can I fix this issue?