Hey @fzyzcjy, if I understand your question correctly, you want the model output at each step of generation, i.e. the logits for all batches/beams at each step. You have access to that information in outputs.scores (see its docstring, e.g. for beam search).
The function .compute_transition_scores() is only needed to get the logits of the selected tokens, the tokens present in outputs.sequences.
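For illustration, here is a minimal sketch of both objects (the checkpoint name and prompt are just placeholder examples; any causal LM works):

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Today is", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=5,
    return_dict_in_generate=True,  # structured output instead of a plain tensor
    output_scores=True,            # populate outputs.scores
)

# outputs.scores: one tensor per generation step, shape (batch_size*num_beams, vocab_size)
print(len(outputs.scores), outputs.scores[0].shape)

# compute_transition_scores extracts the score of each *selected* token in outputs.sequences
transition_scores = model.compute_transition_scores(
    outputs.sequences, outputs.scores, normalize_logits=True
)
print(transition_scores.shape)  # (batch_size, number of generated tokens)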
@joaogante Hi, thanks for your reply! However, I need only one probability for one whole sequence (instead of one probability per token). For example, suppose the input of a BART model is an article, I am doing the summarization task, and I call generate using beam search with num_beams=3. Then the model will output 3 sentences, say, "I am sentence one" / "I am sentence two" / "I am sentence three". Now I want to have 3 float numbers, representing the probability of each sentence. For example, the first float should represent P("I am sentence one" | the input article). I do not need things like P("I" | the input article) or P("I am" | the input article) or P("I am sentence" | the input article).
@fzyzcjy gotcha. In that case, yes, the script you shared would be a way to do it (and yes, with normalize_logits=True). probabilities will be a tensor with shape (batch_size*num_return_sequences,) and its contents must be <= 1.0 when length_penalty==0.0. After you apply the length penalty, you no longer have probabilities (hence the terminology score, instead of logits/probabilities).
If you are getting values > 1.0 with length_penalty==0.0, then it means we have an important bug to catch! Can I please ask you to open an issue on GitHub, share a script for reproducibility, and tag me (@gante)?
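To make the above concrete, a minimal sketch of the BART summarization scenario described earlier (the checkpoint name and input text are placeholders):

import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn")

inputs = tokenizer("(the input article goes here)", return_tensors="pt")
outputs = model.generate(
    **inputs,
    num_beams=3,
    num_return_sequences=3,
    length_penalty=0.0,  # so the summed scores remain interpretable as probabilities
    return_dict_in_generate=True,
    output_scores=True,
)

transition_scores = model.compute_transition_scores(
    outputs.sequences, outputs.scores, beam_indices=outputs.beam_indices, normalize_logits=True
)

# sum of per-token log-probabilities = log P(sequence | article); exp gives the probability
probabilities = torch.exp(transition_scores.sum(dim=1))
print(probabilities)  # shape (num_return_sequences,), each value <= 1.0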
@joaogante I got some more time to work on this issue again. If we wanted to calculate token log probs for AutoModelForSeq2SeqLM (I used flan t5), do the pairings between the probabilities and the tokens have to be shifted as well? In this case, the shifting is done internally when the labels are shifted right for the decoder input, right? This is the biggest selling point for a token log probs API, because one has to get these pairings right for all architectures. I don't get high logits for obvious words in our test sentences, so I suspect the code I provided is still incorrect. Any ideas about what I am doing wrong?
Hey @vblagoje, you also need to shift the output logits by one in seq2seq models, if you use similar code. The logic is the same: the logits are always with respect to the next token.
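As a sketch of what that shift looks like (the flan-t5 checkpoint and prompt are illustrative assumptions): a forward pass with the generated ids as decoder_input_ids produces logits where position t scores token t+1, so the logits must be dropped by one step before pairing.

import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")

enc = tokenizer("translate English to German: How old are you?", return_tensors="pt")
generated = model.generate(**enc, max_new_tokens=20)  # starts with the decoder start token

with torch.no_grad():
    logits = model(**enc, decoder_input_ids=generated).logits

# logits[:, t] predicts token t+1, so drop the last step and pair with generated[:, 1:]
log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
token_log_probs = log_probs.gather(-1, generated[:, 1:].unsqueeze(-1)).squeeze(-1)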
import numpy as np

transition_scores = model.compute_transition_scores(tokens.sequences, tokens.scores, normalize_logits=True)
for i in range(len(transition_scores)):
    # average of the per-token probabilities for beam i
    prob = np.exp(transition_scores.numpy())[i].sum(axis=0)
    print(prob / len(transition_scores[i]))
The printed probability (the one in the for loop) is supposed to be the probability of the generated output, which is a stream of tokens. Thus, if beam_size=2, I would get (after running the code) something like this:
0.917925999082368 -> Cumulative probability of the tokens generated for the first beam
0.8858097997205011 -> Cumulative probability of the tokens generated for the second beam
@mastro1996 Two important details you should fix to get a correct interpretation:
Because you are using beam search, in model.compute_transition_scores you should also pass beam_indices=tokens.beam_indices. With beam search, the contents of tokens.scores are scrambled, and beam_indices are required to de-scramble them.
Language modeling is a causal problem, so it makes more sense to evaluate the product of the probabilities (and not the sum). The product of the token probabilities corresponds to the probability of the sequence!
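Putting both fixes together, a corrected sketch of the snippet above (assuming tokens is the output of a beam search generate call with return_dict_in_generate=True and output_scores=True):

import torch

transition_scores = model.compute_transition_scores(
    tokens.sequences,
    tokens.scores,
    beam_indices=tokens.beam_indices,  # de-scrambles the beam search scores
    normalize_logits=True,
)

# multiply the token probabilities, i.e. sum the log-probabilities before exponentiating
sequence_probabilities = torch.exp(transition_scores.sum(dim=1))
for i, p in enumerate(sequence_probabilities):
    print(f"beam {i}: P(sequence | input) = {p.item():.6f}")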
@Ranjittechie probabilities = torch.exp(transition_scores.sum(axis=1)) (without the length_penalty) would be the closest you can get to the probability of the generated sequences.
When using beam search: with the length_penalty division, it is a score. Kinda like a probability, but without the guarantee that the probabilities of all possible generated sequences sum to 1. Be mindful that beam search picks the outputs with the highest score, and not with the highest probability; this allows you to control how long you want your outputs to be.
Please note that length_penalty is NOT used outside beam search.
And yes, if you are interested in probabilities, normalize_logits should be True.
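To see the difference between the two quantities, here is a sketch along the lines of the compute_transition_scores docstring, reusing transition_scores and model from the snippets above (note the docstring recovers sequences_scores exactly with normalize_logits=False; with normalize_logits=True the match is approximate):

import numpy as np
import torch

# probability of each generated sequence (no length penalty involved)
probabilities = torch.exp(transition_scores.sum(dim=1))

# the beam search *score* divides the summed log-probabilities by length**length_penalty,
# so it is no longer a probability; padded steps contribute a score of 0
output_length = np.sum(transition_scores.numpy() < 0, axis=1)
length_penalty = model.generation_config.length_penalty
reconstructed_scores = transition_scores.sum(dim=1) / (output_length ** length_penalty)
# reconstructed_scores should line up with outputs.sequences_scores from beam search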
Here I am not using beam search, right? @joaogante
It would also be helpful if you could share a piece of code here that returns the probabilities of generated sequences!
Also a piece of code that uses beam search to generate sequences!
Because I tried with custom training and it's not generating very accurate responses!
Is there any specific data format I should follow?
I am currently using a text file containing questions and answers inside!
Dropping a quick thank you note to everyone in this discussion.
I was really not sure about the difference between scores and transition_scores, but having read this thread, things became much clearer. Thank you again.