Hugging Face Inference API returning short generated text with GPT-2 model

I’m using the Hugging Face API with a GPT-2 model to generate text based on a prompt. However, I’m encountering an issue where the generated text is consistently too short, even though I’m specifying a maximum number of new tokens and using other parameters to try to generate longer text.

function GPT2() {
  const prompt = "Please give me a 100 words tell me about yourself";

  const API_TOKEN = "xxxxxxxxxxxx";
  const MODEL_NAME = "gpt2";

  const url = `https://api-inference.huggingface.co/models/${MODEL_NAME}`;
  const options = {
    method: "post",
    headers: {
      Authorization: `Bearer ${API_TOKEN}`,
      "Content-Type": "application/json",
    },
    payload: JSON.stringify({
      inputs: prompt,
      options: {
        contentType: "application/json",
        max_tokens: 200,
        max_length: 100,
        num_return_sequences: 3,
        use_cache: false,
        return_full_text: true,
      },
    }),
  };

  const response = UrlFetchApp.fetch(url, options);
  const res = JSON.parse(response.getContentText());
  Logger.log(response);

  return res[0].generated_text.trim();
}

For example, when I input the prompt “Please give me a 100 words tell me about yourself.”, the generated text returned is only a few words long, such as “I’m a writer, and I”. I was expecting the generated text to be much longer.

Can anyone offer any suggestions for how I can generate longer text using the Hugging Face API with a GPT-2 model? Is there a problem with my code or input parameters that is causing the generated text to be too short?

I have also checked the API response to ensure that it is valid and contains the expected input and output values.

Thank you for your help.

I’ve faced the same issue with this model. Even when I prompted it with an incomplete sentence, it generated only a couple of words at a time and started repeating itself.

Try using another model, though at the moment they all return error 422 for me.

I used the “bigscience/bloom” model on Hugging Face, but I encountered the same issue: the generated text was consistently too short. When I used the OpenAI GPT-3 API instead, I didn’t encounter any issues, so I assume the problem lies either with my code or with Hugging Face.
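If it is the code, one thing worth checking is where the generation settings sit in the request body. My understanding of the Hugging Face Inference API is that, for text-generation models, they belong under a top-level `parameters` key (with `max_new_tokens`, not `max_tokens`, controlling the length of the continuation), while `options` only carries flags like `use_cache`. A minimal sketch of a payload builder; the field names reflect my reading of the docs, so verify them before relying on this:

```javascript
// Builds the JSON request body for a Hugging Face text-generation call.
// Assumption: generation settings go under "parameters", not "options".
function buildGenerationPayload(prompt, maxNewTokens) {
  return JSON.stringify({
    inputs: prompt,
    parameters: {
      max_new_tokens: maxNewTokens, // length of the continuation, in tokens
      return_full_text: true,       // include the prompt in generated_text
    },
    options: {
      use_cache: false,      // force a fresh generation instead of a cached one
      wait_for_model: true,  // wait while the model loads instead of erroring
    },
  });
}

// In Apps Script, the resulting string would go into the "payload" field:
// UrlFetchApp.fetch(url, {
//   method: "post",
//   contentType: "application/json",
//   headers: { Authorization: `Bearer ${API_TOKEN}` },
//   payload: buildGenerationPayload(prompt, 200),
// });
```

Note also that in Apps Script `contentType` is a top-level field of the `UrlFetchApp.fetch` options, not something the Hugging Face API reads from inside the JSON body.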