I don't think this would solve the missing KenLM model. What you could do is call `os.system('pip install kenlm')` at the top of your inference.py so it gets installed on startup (it needs to finish in under 2 minutes; I'm not sure what the behavior is for serverless).
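As a rough sketch of that install-on-startup idea (the `ensure_installed` helper name is mine, and `kenlm` is assumed to be pip-installable in your environment), something like this at the top of inference.py:

```python
# Sketch: install a missing dependency when inference.py is first imported.
# The pip call runs at container start-up, so it counts against the
# start-up time budget (reportedly ~2 minutes; untested on serverless).
import importlib
import subprocess
import sys

def ensure_installed(package: str) -> None:
    """Install `package` with pip if it cannot be imported yet."""
    try:
        importlib.import_module(package)
    except ImportError:
        # Unlike os.system, check_call raises if pip fails, so a broken
        # install surfaces immediately instead of at first use.
        subprocess.check_call([sys.executable, "-m", "pip", "install", package])

# At the very top of inference.py, before importing the decoder:
# ensure_installed("kenlm")
```

Using `subprocess.check_call` with `sys.executable -m pip` instead of a bare `os.system("pip install ...")` makes sure you install into the same Python environment the endpoint actually runs, and fails loudly if the install doesn't work.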