What is the proper way to do inference using a fine-tuned model?


Once the model is fine-tuned, what is the proper way to do inference?
E.g. I would send a list of words to an API (e.g. Flask) and predict labels for each word using the fine-tuned model.
Should I just run the model with do_predict = True and do_eval = do_test = False?
Or is there a better way?


The summary of tasks doc shows how you can do inference for the various tasks.
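For serving from an API, the usual pattern is to load the fine-tuned model once at startup and reuse it for every request, rather than re-running the training script with do_predict=True per call. Below is a minimal sketch of that pattern; the real prediction step would be a transformers pipeline (shown only in comments), and the `predict` stub, `LABEL_0` output, and `handle_request` helper here are hypothetical placeholders so the shape of the code is clear without downloading model weights.

```python
def load_model():
    # In a real service this would be something like:
    #   from transformers import pipeline
    #   return pipeline("token-classification", model="path/to/fine-tuned")
    # A stub stands in here so the pattern is runnable without weights.
    def predict(words):
        return [(w, "LABEL_0") for w in words]  # placeholder labels
    return predict

# Loaded ONCE at startup, not inside the request handler.
MODEL = load_model()

def handle_request(payload):
    """Flask-style handler: payload is {"words": [...]}; returns one label per word."""
    words = payload["words"]
    return {"labels": [label for _, label in MODEL(words)]}

result = handle_request({"words": ["Paris", "is", "nice"]})
```

In a Flask app, `MODEL = load_model()` would sit at module level (or in an app factory) and `handle_request` would be the body of a route, so each request only pays for a forward pass, not for model loading.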
