Batch generation with GPT2

How to do batch generation with the GPT2 model?


Batch generation is now possible for GPT2 on master by leveraging the functionality shown in this PR: https://github.com/huggingface/transformers/pull/7552.

For more info on how to prepare GPT2 for batch generation, you can check out this test:


Hi I am the author of the PR.

You can now do batch generation by calling the same generate().
All you need to add is:

  1. set tokenizer.padding_side = "left" (and probably reset it back afterwards)
  2. pass attention_mask to generate()

Explanation (full example at the end):

  1. We need tokenizer.padding_side = "left" because we will use the logits of the right-most token to predict the next token, so the padding should be on the left.
  2. This is what the PR added. Here is a summary:

GPT-2 uses absolute positional embeddings (position_ids). Before this change, no position_ids were passed to the model, and the model automatically generated them from 0 to n, even when there was padding (e.g. when the input is a batch).

Example: tokens=<pad> <pad> a b c -> position_ids=0 1 2 3 4, but what we expect is x x 0 1 2 (x means don’t care).

This PR adds the position_ids in prepare_inputs_for_generation(), which is called by generate(), computing them from the attention_mask; that is why you need to pass it in.
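
As a rough sketch of the idea (not necessarily the exact code added by the PR), the position IDs can be derived from the attention mask with a cumulative sum, so that left-padded positions are skipped:

```python
import torch

# Left-padded batch: 0 marks padding, 1 marks real tokens.
attention_mask = torch.tensor([[0, 0, 1, 1, 1],
                               [1, 1, 1, 1, 1]])

# Cumulative sum counts the real tokens from 0..n; padded positions get a
# dummy value (their outputs are masked out anyway).
position_ids = attention_mask.long().cumsum(-1) - 1
position_ids.masked_fill_(attention_mask == 0, 1)

print(position_ids)
# tensor([[1, 1, 0, 1, 2],
#         [0, 1, 2, 3, 4]])
```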

You can find a full example in the PR.
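
Putting the two steps together, here is a minimal sketch of batch generation along these lines (the model name, prompts, and max_length are arbitrary placeholders, not taken from the PR):

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Step 1: pad on the left; GPT-2 has no pad token, so reuse EOS.
tokenizer.padding_side = "left"
tokenizer.pad_token = tokenizer.eos_token
model.config.pad_token_id = tokenizer.eos_token_id

prompts = ["Hello, my dog is", "The quick brown fox"]
inputs = tokenizer(prompts, return_tensors="pt", padding=True)

# Step 2: pass the attention mask so position_ids are computed correctly.
outputs = model.generate(
    input_ids=inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    max_length=30,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```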


Hi there. Thanks for your work supporting batch inference in GPT2. However, I still have one point of confusion and would appreciate your help. Thanks in advance!
If I want to pass past_key_values, how should I handle the position_ids and attention mask? Suppose the length of my past_key_values is 2 and the padded input is just like your example: <pad>, <pad>, a, b, c. Should I change the attention mask from 0, 0, 1, 1, 1 to 1, 1, 0, 0, 1, 1, 1, where the first two “1”s refer to the past_key_values?
Thanks a lot!
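
For reference, here is a sketch of the mask layout the question describes, based only on the general expectation that, when past_key_values are passed, the attention mask spans the past tokens plus the current input (illustrative only, not a confirmed answer):

```python
import torch

# Current (left-padded) input: <pad> <pad> a b c
current_mask = torch.tensor([[0, 0, 1, 1, 1]])

# past_key_values covering 2 earlier, real tokens.
past_length = 2
past_mask = torch.ones((1, past_length), dtype=torch.long)

# Mask spanning past + current tokens, as described in the question.
full_mask = torch.cat([past_mask, current_mask], dim=-1)
print(full_mask)  # tensor([[1, 1, 0, 0, 1, 1, 1]])
```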

@patrickvonplaten @ttj I think this is a good question! Could we discuss how to do batch inference with past_key_values?

Is it possible to have a variable max_gen_length, depending on the length of the input sequence, for instance (e.g. max_gen_length = len(tokenizer.tokenize(input_seq)) + 20)?

It looks like you are looking for max_new_tokens?
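
For reference, a quick sketch of max_new_tokens, which caps only the generated tokens so the total length effectively scales with the prompt (model name and values are arbitrary):

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("The quick brown fox", return_tensors="pt")

# max_new_tokens counts only the generated tokens, so the total output is
# roughly len(prompt tokens) + 20, whatever the prompt length is.
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```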


Hi, I’m using the input parameter “past_key_values” to train a GPT model, so I wonder: when doing batch generation this way, if I pass “past_key_values” to the model through the “model_kwargs” parameter, will the generation method work as expected?
Thanks!