For example, if you want to use OpenAI's LLM, I think you need to use `OpenAIServerModel` instead of `InferenceClientModel`. Maybe something like this:
```python
import os

from smolagents import CodeAgent, OpenAIServerModel

model = OpenAIServerModel(
    model_id="gpt-4o",
    api_base="https://api.openai.com/v1",
    api_key=os.getenv("OPENAI_API_KEY"),
)
agent = CodeAgent(tools=[], model=model)
```