Model merging fails after LLaMA fine-tuning

It seems this error can be resolved, but in some cases you may need to resize the model's token embeddings first — for example, if special tokens were added during fine-tuning, the vocabulary size of the checkpoint no longer matches the base model, and the merge fails on the embedding shapes.
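As a minimal sketch of what "changing the token embedding" means here: when the vocabulary grows, the embedding matrix has to grow with it, keeping the old rows and initializing the new ones. The function below is illustrative (in practice, Hugging Face transformers does this for you via `model.resize_token_embeddings(len(tokenizer))`); new rows are drawn from the mean and standard deviation of the existing embeddings.

```python
import numpy as np

def resize_token_embeddings(weights: np.ndarray, new_vocab_size: int) -> np.ndarray:
    """Grow (or shrink) an embedding matrix to a new vocabulary size.

    Rows for existing tokens are kept as-is; rows for newly added tokens
    are initialized from the mean/std of the old embeddings.
    """
    old_vocab_size, dim = weights.shape
    rng = np.random.default_rng(0)
    # Fill the whole new matrix with mean/std-matched noise, then
    # copy the surviving rows back over the top.
    resized = rng.normal(loc=weights.mean(), scale=weights.std(),
                         size=(new_vocab_size, dim))
    n_keep = min(old_vocab_size, new_vocab_size)
    resized[:n_keep] = weights[:n_keep]
    return resized
```

For example, after adding two special tokens to a 32000-token LLaMA vocabulary, you would resize a `(32000, 4096)` matrix to `(32002, 4096)` before loading the fine-tuned weights.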

That said, the APIs of merge-related functions change from time to time, so existing merge scripts may stop working. If the script is only for your own use, it is often quicker to rewrite it yourself, since all it really needs to do is load the model and save the merged result.
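The post doesn't say which merge tool is involved, so as one common case, here is a sketch of such a minimal load-merge-save script for a LoRA adapter using PEFT. The model ID and directory paths are placeholders, and the `resize_token_embeddings` call is only needed if tokens were added during fine-tuning (the embedding-mismatch case mentioned above).

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-hf"   # base model (placeholder)
adapter_dir = "./lora-out"             # fine-tuned LoRA adapter (placeholder)
out_dir = "./merged-model"

tokenizer = AutoTokenizer.from_pretrained(adapter_dir)
model = AutoModelForCausalLM.from_pretrained(base_id)

# If tokens were added during fine-tuning, resize *before* attaching the
# adapter, or the adapter's embedding shapes will not match the base model.
model.resize_token_embeddings(len(tokenizer))

model = PeftModel.from_pretrained(model, adapter_dir)
model = model.merge_and_unload()  # bake the LoRA weights into the base model

model.save_pretrained(out_dir)
tokenizer.save_pretrained(out_dir)
```

Since a script like this only loads and saves, it is short enough to rewrite whenever the library's merge interface changes.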