I’m calling the Hugging Face Inference API from Google Apps Script to generate text with a GPT-2 model from a prompt. However, the generated text is consistently too short, even though I’m specifying a maximum number of new tokens and other parameters meant to produce longer output.
function GPT2() {
  const prompt = "Please give me a 100 words tell me about yourself";
  const API_TOKEN = "xxxxxxxxxxxx";
  const MODEL_NAME = "gpt2";
  const url = `https://api-inference.huggingface.co/models/${MODEL_NAME}`;

  const options = {
    method: "post",
    headers: {
      Authorization: `Bearer ${API_TOKEN}`,
      "Content-Type": "application/json",
    },
    payload: JSON.stringify({
      inputs: prompt,
      options: {
        contentType: "application/json",
        max_tokens: 200,
        max_length: 100,
        num_return_sequences: 3,
        use_cache: false,
        return_full_text: true,
      },
    }),
  };

  const response = UrlFetchApp.fetch(url, options);
  const res = JSON.parse(response.getContentText());
  Logger.log(response);
  return res[0].generated_text.trim();
}
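In case it is relevant, here is a minimal sketch of an alternative payload shape I have been considering, based on my (unconfirmed) reading that the Inference API takes generation settings under a top-level `parameters` key rather than inside `options`, which would only carry flags like `use_cache`. The names `max_new_tokens`, `min_length`, and `wait_for_model` are my assumptions, not something I have verified:

```javascript
// Sketch under an assumption: generation settings live under "parameters",
// and "options" only holds request flags. Parameter names below
// (max_new_tokens, min_length, wait_for_model) are guesses, not confirmed.
function buildPayload(prompt) {
  return JSON.stringify({
    inputs: prompt,
    parameters: {
      max_new_tokens: 200,     // upper bound on generated tokens (assumed name)
      min_length: 100,         // discourage very short completions (assumed name)
      num_return_sequences: 3, // ask for three candidate completions
      return_full_text: true,  // include the prompt in the returned text
    },
    options: {
      use_cache: false,        // skip cached results
      wait_for_model: true,    // wait instead of erroring while the model loads (assumed name)
    },
  });
}
```

I would then pass this string as `payload` in the `UrlFetchApp.fetch` options, setting `contentType: "application/json"` on the fetch options themselves rather than inside the request body.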
For example, when I input the prompt “Please give me a 100 words tell me about yourself.”, the generated text returned is only a few words long, such as “I’m a writer, and I”. I was expecting the generated text to be much longer.
Can anyone suggest how to generate longer text from a GPT-2 model through the Hugging Face API? Is there a problem with my code or input parameters that is causing the output to be cut short?
I have also checked the API response to ensure that it is valid and contains the expected input and output values.
Thank you for your help.