Using model.generate with a remote-only model

I’ve been building a chatbot with Llama 3 on a system that doesn’t have enough local storage for the model. It works fine using the model remotely, but I want to be able to tune the outputs with model.generate. When I add this call, I get the following error:

Traceback (most recent call last):
  File "/home/groups/ruthm/bcritt/foo.py", line 88, in <module>
    outputs = model.generate(
              ^^^^^
NameError: name 'model' is not defined

If I instead set model to the remote model ID (“meta-llama/Llama-3.3-70B-Instruct”), I get this error:

Traceback (most recent call last):
  File "/home/groups/ruthm/bcritt/foo.py", line 88, in <module>
    outputs = model.generate(
              ^^^^^^^^^^^^^^
AttributeError: 'str' object has no attribute 'generate'
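
For reference, here’s roughly what the relevant part of the script looks like (simplified; the prompt and variable names are placeholders):

from transformers import AutoTokenizer

# The tokenizer downloads fine without the full model weights
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.3-70B-Instruct")
inputs = tokenizer("Hello!", return_tensors="pt")

# Never loading the model locally gives the NameError; assigning the
# repo ID string like this gives the AttributeError instead, since a
# str has no generate method
model = "meta-llama/Llama-3.3-70B-Instruct"
outputs = model.generate(**inputs, max_new_tokens=128)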

I’ve tried various other things, but the real question is: is there a way to use a remote model while still fine-tuning the chatbot’s outputs?

Thanks!


Leaving aside the details: the code and the model weights work together as a single model, so there’s no way to directly modify the internals of a remote model from your side. You can’t change something that doesn’t exist on your machine, and changing only the local part would usually just create an inconsistency with the remote side.

However, when you call a remote model through an API, there are usually a number of generation parameters you can specify per request, such as temperature.
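
For example, with huggingface_hub’s InferenceClient, something along these lines should work (an untested sketch; it assumes you have a token with access to the gated Llama repo, e.g. in the HF_TOKEN environment variable):

from huggingface_hub import InferenceClient

# The model stays on the server; only requests and responses travel
client = InferenceClient(model="meta-llama/Llama-3.3-70B-Instruct")

# Sampling parameters comparable to model.generate kwargs are passed
# per request instead of to a local model object
response = client.chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=256,
    temperature=0.7,
    top_p=0.9,
)
print(response.choices[0].message.content)

Actual fine-tuning (updating the weights) is a different matter and would require hosting the model somewhere with enough storage, but for steering the outputs, per-request parameters like these are the usual route.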
