How compatible are tokenizers across models — can you mix and match them? For example, I want to implement a question-answering model for relatively simple Q&A.
However, I also want to offer summarization over larger portions of text. The underlying data is essentially the same in both cases.
Can I fine-tune the same model and tokenizer for both tasks, or should I implement a separate model for each?
If the models need to be different, what do I do with the tokenizer — can it be shared, or does each model need its own?
Put another way, if I want to use more than one model to do different things with my data, am I stuck re-fine-tuning the same model and/or tokenizer for the entire site?
Would I need a separate copy of my data for each new task I might think of?
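To make the "mix and match" concern concrete, here is a toy sketch (plain Python with invented word-level vocabularies, not any real tokenizer library) of what I understand goes wrong if a model receives token IDs from a tokenizer it wasn't trained with:

```python
# Toy illustration: two hypothetical models trained with differently
# ordered vocabularies. The same sentence produces different ID
# sequences, so a model's embedding table would be looked up at the
# wrong rows if you swapped tokenizers.

def make_tokenizer(vocab_words):
    """Build a trivial word-level encoder from an ordered word list."""
    ids = {word: i for i, word in enumerate(vocab_words)}
    def encode(text):
        return [ids[w] for w in text.split()]
    return encode

# Invented vocabularies for two hypothetical models.
encode_a = make_tokenizer(["what", "is", "summarize", "this", "text"])
encode_b = make_tokenizer(["summarize", "this", "text", "what", "is"])

sentence = "summarize this text"
print(encode_a(sentence))  # [2, 3, 4]
print(encode_b(sentence))  # [0, 1, 2]
# Same sentence, different IDs: row 2 of model A's embeddings means
# "summarize", while row 2 of model B's means "text". Feeding one
# model the other tokenizer's IDs scrambles the input.
```

So my working assumption is that a tokenizer is only meaningful together with the model it was trained with, which is what prompts the questions above.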