Hi.
This is a function that uses the OpenAI API to take a system message and a file prompt, and generate an output to be parsed as a knowledge graph:
import openai
from time import sleep

def process_gpt(file_prompt, system_msg):
    completion = openai.ChatCompletion.create(
        engine=openai_deployment,  # Azure OpenAI deployment name
        max_tokens=15000,
        temperature=0,
        messages=[
            {"role": "system", "content": system_msg},
            {"role": "user", "content": file_prompt},
        ],
    )
    nlp_results = completion.choices[0].message.content
    sleep(8)  # throttle between calls to stay under the rate limit
    return nlp_results
I am trying to do the same here with Bloom:
from transformers import pipeline

def process_bloom(system_msg):
    try:
        generator = pipeline('text-generation', model=model, tokenizer=tokenizer)
        nlp_results = generator(system_msg, max_length=2000)
        nlp_results = nlp_results[0]['generated_text']
        return nlp_results
    except Exception as e:
        print(f'Error processing bloom: {e}')
        return '{"error": "Failed to process bloom"}'
But I’m using the Hugging Face API, not OpenAI, so I couldn’t figure out how to map the exact logic onto Bloom.
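One way to bridge the gap, sketched below under the assumption that Bloom is used as a plain (non-chat-tuned) text-generation model: concatenate the system message and the file prompt into a single prompt string, then strip the echoed prompt from the pipeline output so the function returns only the completion, as the OpenAI version does. The `generator` parameter stands in for the Hugging Face pipeline object (e.g. `pipeline('text-generation', model=model, tokenizer=tokenizer)`) so the mapping logic is shown without loading the model; the prompt template itself is a hypothetical choice, not a Bloom requirement.

```python
def build_prompt(system_msg, file_prompt):
    # Fold the chat-style system and user messages into one prompt string,
    # since a plain text-generation model has no message roles.
    return f"{system_msg}\n\n{file_prompt}\n"

def process_bloom(generator, file_prompt, system_msg, max_new_tokens=2000):
    try:
        prompt = build_prompt(system_msg, file_prompt)
        # max_new_tokens bounds only the generated continuation, whereas
        # max_length also counts the prompt tokens toward the limit.
        results = generator(prompt, max_new_tokens=max_new_tokens)
        generated = results[0]['generated_text']
        # The pipeline echoes the prompt at the start of generated_text;
        # strip it to mimic the completion-only output of process_gpt.
        return generated[len(prompt):]
    except Exception as e:
        print(f'Error processing bloom: {e}')
        return '{"error": "Failed to process bloom"}'
```

With this shape, `process_bloom(generator, file_prompt, system_msg)` takes the same two text inputs as `process_gpt` plus the pipeline object, so the surrounding knowledge-graph parsing code would not need to change.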