Load fine-tuned model from local

But the important question is: do I need this at all? Can I still download the model the normal way? Is the tokenizer affected by fine-tuning? I assume it isn't, so could I still use the tokenizer from your API?
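To make the question concrete, here is roughly what I have in mind, a minimal sketch assuming the transformers library; the local path and the base checkpoint name (`bert-base-uncased`) are just placeholders for my setup:

```python
from transformers import AutoModel, AutoTokenizer

# Load the fine-tuned weights from a local directory
# ("./my-finetuned-model" is a placeholder path)
model = AutoModel.from_pretrained("./my-finetuned-model")

# Since fine-tuning usually leaves the tokenizer unchanged, keep
# pulling the base model's tokenizer from the Hub the normal way
# ("bert-base-uncased" is a placeholder base checkpoint)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
```

Is that the right way to think about it, or does the tokenizer also need to come from the local fine-tuned directory?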