I’ve been building a chatbot with Llama 3, but my machine doesn’t have enough local storage to hold the model. Everything works fine when I use the model remotely, but I want to be able to tune the outputs with model.generate. When I add that call, I get the following error:
Traceback (most recent call last):
File "/home/groups/ruthm/bcritt/foo.py", line 88, in <module>
outputs = model.generate(
^^^^^
NameError: name 'model' is not defined
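For reference, the failing call is essentially this (a minimal sketch with the rest of my script omitted; the argument is just a placeholder). The point is that model is never assigned anywhere before the call:

```python
# Minimal sketch: calling generate without ever creating a model object.
# (tokenizer/inputs omitted; this is not my full script)
try:
    outputs = model.generate(max_new_tokens=50)  # 'model' was never defined
except NameError as e:
    print(e)  # prints: name 'model' is not defined
```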
If I instead set model to the remote model ID (“meta-llama/Llama-3.3-70B-Instruct”), I get this error:
Traceback (most recent call last):
File "/home/groups/ruthm/bcritt/foo.py", line 88, in <module>
outputs = model.generate(
^^^^^^^^^^^^^^
AttributeError: 'str' object has no attribute 'generate'
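That second attempt looks roughly like this (again a simplified sketch, not my exact script). Assigning the model ID just gives me a plain string, which of course has no generate method:

```python
# Minimal sketch of the second attempt: assigning the model ID string directly.
model = "meta-llama/Llama-3.3-70B-Instruct"  # just a string, not a model object
print(hasattr(model, "generate"))  # prints: False
```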
I’ve tried various other things, but the real question is: is there a way to use a remotely hosted model while still tuning the generated outputs of my chatbot?
Thanks!