I have LM Studio installed locally and I'm trying to use the OpenAI Python library to connect to it, but I'm getting a connection error. Here is my code; what is the problem?
```python
import openai
import re
import getpass

openai.base_url = "http://127.0.0.1:1234"
openai.api_key = ""

def complete(prompt: str) -> str:
    try:
        response = openai.chat.completions.create(
            model="deepseek-r1-distill-qwen-7b",  # Ensure this matches your local model's name
            messages=[{"role": "user", "content": prompt}],
            temperature=0.7,  # Adjust as needed
            max_tokens=100    # Limit response length
        )
        return response.choices[0].message.content
    except Exception as e:
        print(f"Error: {e}")
        return ""

def main():
    user_input = "is it working"
    result = complete(user_input)
    print(result)

if __name__ == "__main__":
    main()
```
I don't know much about LM Studio, but I found a GitHub issue describing a similar error.
Hi, I tried to use a local LLM with LM Studio, but it returned a connection error. My sample code is modified from
https://github.com/zou-group/textgrad/blob/main/textgrad/engine/local_model_openai_api.py
First, it seems that the example code does not work, because `ChatExternalClient` takes `content`, not `prompt`, as the argument.
For the code:
```python
from openai import OpenAI
from textgrad.engine.local_model_openai_api import ChatExternalClient
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
engine = ChatExternalClient(client=client, model_string="your-model-name")
print(engine.generate(max_tokens=40, prompt="What is the meaning of life?"))
```
I met such an error:
```
st_to, options, remaining_retries, stream, stream_cls)
1011 log.debug("Raising connection error")
-> 1012 raise APIConnectionError(request=request) from err
1014 log.debug(
1015 'HTTP Response: %s %s "%i %s" %s',
1016 request.method,
(...)
1020 response.headers,
1021 )
APIConnectionError: Connection error.
```
Does this mean I cannot create a client from my local machine? Thanks.
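I don't know if this fixes your case, but comparing that issue's client setup with your snippet, two differences stand out: the issue points `base_url` at the `/v1` path, and it passes a non-empty placeholder `api_key` ("lm-studio"). Your code does neither. Here is a minimal sketch of what I'd try, keeping your model name and parameters and borrowing the client setup from the quoted issue (the port and model name are taken from your post, not verified):

```python
from openai import OpenAI

# Client setup borrowed from the textgrad issue above: note the /v1
# path on the base URL and the non-empty placeholder API key.
client = OpenAI(base_url="http://127.0.0.1:1234/v1", api_key="lm-studio")

def complete(prompt: str) -> str:
    try:
        response = client.chat.completions.create(
            model="deepseek-r1-distill-qwen-7b",  # must match the model loaded in LM Studio
            messages=[{"role": "user", "content": prompt}],
            temperature=0.7,
            max_tokens=100,
        )
        return response.choices[0].message.content
    except Exception as e:
        print(f"Error: {e}")
        return ""

if __name__ == "__main__":
    print(complete("is it working"))
```

If that still raises `APIConnectionError`, check that the local server is actually running in LM Studio (it has to be started explicitly from the local-server tab) and that it is listening on port 1234, otherwise nothing is accepting connections at that address.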