Text Generation using GPT2

I am trying to generate text with GPT-2 using the code snippet from https://huggingface.co/transformers/quickstart.html (reproduced below). Unfortunately, it raises an error.

import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

generated = tokenizer.encode("The Manhattan bridge")
context = torch.tensor([generated])
past = None

for i in range(100):
    print(i)
    output, past = model(context, past=past)
    token = torch.argmax(output[..., -1, :])

    generated += [token.tolist()]
    context = token.unsqueeze(0)

sequence = tokenizer.decode(generated)

print(sequence)

The error occurs on the line token = torch.argmax(output[..., -1, :]) and says: TypeError: string indices must be integers. Can someone please help me out?
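Digging into the TypeError a bit: my guess (unverified) is that in newer transformers releases the model returns a dict-like ModelOutput rather than a tuple, and tuple-unpacking a dict-like object yields its string keys, so output ends up being a string instead of a tensor. Here is a minimal pure-Python sketch of that mechanism; the dict below is just a stand-in for the real model output, not the actual transformers object:

```python
# Stand-in for a dict-like model output (assumption: newer transformers
# versions return a ModelOutput, which iterates over its string keys).
fake_output = {"logits": object(), "past_key_values": object()}

# Tuple-unpacking a dict yields its KEYS, not its values.
output, past = fake_output
print(output)  # -> logits  (a string, not a tensor)

try:
    # Tensor-style indexing on a string fails the same way the post reports
    # (exact message wording varies slightly across Python versions).
    output[..., -1, :]
except TypeError as err:
    print(err)
```

If that is indeed the cause, accessing the logits by attribute (e.g. output.logits) or calling the model with return_dict=False might avoid the error, though I have not confirmed which transformers version I am running.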
