Applying finetuned model to data

I have finetuned an ELECTRA model on some data using the PyTorch framework, and now I wish to apply my model to some text data.
First I load my best model according to validation loss:

trained_model = CrowdCodedTagger.load_from_checkpoint(
  trainer.checkpoint_callback.best_model_path,
  n_classes=len(LABEL_COLUMNS)
)
trained_model.eval()
trained_model.freeze()

Then I load my data, which looks like this (head(5)):

tweet_id	        user_username	text	created_at	user_name	user_verified	sourcetweet_text
443011743288393728	jahimes	        People are now using @metronorth like a subway...	2014-03-10T13:13:25.000Z	Jim Himes	True	NaN
443011451142537216	jahimes	        Spent morning on @metronorth issues with Rep. ...	2014-03-10T13:12:15.000Z	Jim Himes	True	NaN
442389699978862592	jahimes	        Will be interesting to see how that St. Patric...	2014-03-08T20:01:38.000Z	Jim Himes	True	NaN
442387206767136768	jahimes	        Step dancing, boiled meat, and beer at the Hib...	2014-03-08T19:51:43.000Z	Jim Himes	True	NaN
442356433993363458	jahimes	       What a reception for #Team26 in Greenwich! htt...	2014-03-08T17:49:27.000Z	Jim Himes	True	NaN

I turn the text column into a list:

congress_head_list = congress_head['text'].tolist()
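
(As an aside, the whole list can also be tokenized and scored in one batched call rather than row by row; a rough sketch, assuming the same tokenizer and the frozen trained_model from above:)

import torch

# Batched alternative: encode every tweet at once and run a single forward pass
batch = tokenizer(
    congress_head_list,
    padding=True,
    truncation=True,
    max_length=512,
    return_tensors='pt',
)
with torch.no_grad():
    _, batch_predictions = trained_model(batch["input_ids"], batch["attention_mask"])
# batch_predictions holds one row of label scores per tweet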

And now the trouble starts. I want to add my model’s predictions (as probabilities) for each sentence (text) to the dataframe in new columns. So far I’ve come up with

def run_model(input_data):
    # tokenize list
    encoding = tokenizer.encode_plus(
      congress_head_list,
      add_special_tokens=True,
      max_length=512,
      return_token_type_ids=False,
      padding="max_length",
      return_attention_mask=True,
      return_tensors='pt',
    )

    # returning probability values for each label
    _, test_prediction = trained_model(encoding["input_ids"], encoding["attention_mask"])
    return test_prediction.flatten().numpy()
    

#Then, apply the function to each row:
congress_head[LABEL_COLUMNS] = congress_head[['text']].apply(run_model, axis=1, result_type='expand')

But the resulting data frame contains the same probabilities for each sentence, something like:

tweet_id	user_username	text	created_at	user_name	user_verified	sourcetweet_text	morality_binary	emotion_binary	...	negative_binary	care_binary	fairness_binary	authority_binary	sanctity_binary	harm_binary	injustice_binary	betrayal_binary	subversion_binary	degradation_binary
443011743288393728	jahimes	People are now using @metronorth like a subway...	2014-03-10T13:13:25.000Z	Jim Himes	True	NaN	0.094099	0.119907	...	0.098311	0.045509	0.045513	0.044468	0.037584	0.045439	0.051038	0.034683	0.047893	0.053268
443011451142537216	jahimes	Spent morning on @metronorth issues with Rep. ...	2014-03-10T13:12:15.000Z	Jim Himes	True	NaN	0.094099	0.119907	...	0.098311	0.045509	0.045513	0.044468	0.037584	0.045439	0.051038	0.034683	0.047893	0.053268

Can anyone see what I’m doing wrong? It doesn’t seem to return probability values either. I can share a link to the Colab where I’m doing the coding if that’s helpful.
Thanks in advance!

My first guess is that you are retrieving the logits of the model, i.e. before softmax. Try pushing test_prediction through a softmax to get probabilities.
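
Something along these lines (a sketch, assuming test_prediction is a plain tensor of logits with one row per input):

import torch

probabilities = torch.softmax(test_prediction, dim=1)  # each row now sums to 1 across the labels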

As to why it returns the same output, I think

def run_model(input_data):
    # tokenize list
    encoding = tokenizer.encode_plus(
      congress_head_list,

should be

def run_model(input_data):
    # tokenize list
    encoding = tokenizer.encode_plus(
      input_data,  # here

because you never use input_data: the function always encodes the full congress_head_list (and I’m not sure why you created that list in the first place, since you are using apply to iterate over the rows).

Ahh yes, you’re right about softmax, I missed that one. I successfully retrieved the probabilities after applying softmax.
As to the other problem, I get a ValueError when I run your suggestion:

ValueError: Input text    People are now using @metronorth like a subway...
Name: 0, dtype: object is not valid. Should be a string, a list/tuple of strings or a list/tuple of integers.

This is how I did it:

# Loading softmax for converting raw model outputs to probabilities:
from scipy.special import softmax

def run_model(input_data):
    # tokenize list
    encoding = tokenizer.encode_plus(
      input_data,
      add_special_tokens=True,
      max_length=512,
      return_token_type_ids=False,
      padding="max_length",
      return_attention_mask=True,
      return_tensors='pt',
    )

    # returning probability values for each label
    _, test_prediction = trained_model(encoding["input_ids"], encoding["attention_mask"])
    probabilities = softmax(test_prediction, axis=1)
    return probabilities.flatten().numpy()
    

#Then, apply the function to each row:
congress_head[LABEL_COLUMNS] = congress_head[['text']].apply(run_model, axis=1, result_type='expand')

I guess it wants text as a list?

I think it is the other way around: with apply you passed a pandas Series (a “row” with just one item) to the run_model function instead of a single item. Can you try with .item()?

    encoding = tokenizer.encode_plus(
      input_data.item(),
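
A quick way to see the difference (assuming the congress_head dataframe from above):

row = congress_head[['text']].iloc[0]   # what apply(..., axis=1) hands to run_model
print(type(row))                        # <class 'pandas.core.series.Series'>
print(row.item())                       # the plain tweet string, which encode_plus accepts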

Works like a charm. Thank you so much!! I really appreciate it. Maybe you can help me with one last thing :crossed_fingers:
The probability outputs puzzle me.
When I run a test on a single sentence without using softmax, such as:

test_comment = "You are such a loser! You'll regret everything you've done to me! I'm so mad at you. You are ugly and stupid"


# tokenizing comment ^
encoding = tokenizer.encode_plus(
  test_comment,
  add_special_tokens=True,
  max_length=512,
  return_token_type_ids=False,
  padding="max_length",
  return_attention_mask=True,
  return_tensors='pt',
)

# returning probability values for each label
_, test_prediction = trained_model(encoding["input_ids"], encoding["attention_mask"])
test_prediction = test_prediction.flatten().numpy()


for label, prediction in zip(LABEL_COLUMNS, test_prediction):
  print(f"{label}: {prediction}",)

I get fairly logical output (in what I assume are probabilities):

morality_binary: 0.8413364291191101
emotion_binary: 0.8254574537277222
positive_binary: 0.05144023522734642
negative_binary: 0.8077533841133118
care_binary: 0.08624635636806488
fairness_binary: 0.09649784862995148
authority_binary: 0.0513346865773201
sanctity_binary: 0.06140131130814552
harm_binary: 0.4953136444091797
injustice_binary: 0.613288164138794
betrayal_binary: 0.27055174112319946
subversion_binary: 0.1701277196407318
degradation_binary: 0.22703076899051666

But when I run the same test, using softmax and the workflow we’ve just made, the predictions are very different:
Input:

test = ["Hugh grew up to be an amoral man because his parents never told him the difference between right and wrong.",
        "You are the sweetst person I know. I just love you so much and wish you all the best",
        "You are such a loser! You'll regret everything you've done to me! I'm so mad at you. You are ugly and stupid"]
id = [1,2,3]
d = {'text':test,'ID':id}
df = pd.DataFrame(d)

# Loading softmax for converting raw model outputs to probabilities:
from scipy.special import softmax

def run_model(input_data):
    # tokenize list
    encoding = tokenizer.encode_plus(
      input_data.item(),
      add_special_tokens=True,
      max_length=512,
      return_token_type_ids=False,
      padding="max_length",
      return_attention_mask=True,
      return_tensors='pt',
    )

    # returning probability values for each label
    _, test_prediction = trained_model(encoding["input_ids"], encoding["attention_mask"])
    probabilities = softmax(test_prediction, axis=1)
    return probabilities.flatten().numpy()

#Then, apply the function to each row:
df[LABEL_COLUMNS] = df[['text']].apply(run_model, axis=1, result_type='expand')

Output:

    text	                                                ID	morality_binary	emotion_binary	positive_binary	negative_binary	care_binary	fairness_binary	authority_binary	sanctity_binary	harm_binary	injustice_binary	betrayal_binary	subversion_binary	degradation_binary
0	Hugh grew up to be an amoral man because his p...	1	0.118782	    0.118820	    0.056195	0.121279	0.057092	0.057526	0.056044	0.055932	0.076664	0.094562	0.065077	0.060283	0.061743
1	You are the sweetst person I know. I just love...	2	0.100631	    0.109011	    0.101731	0.066541	0.075631	0.080617	0.069760	0.066682	0.066603	0.066711	0.064998	0.065613	0.065473
2	You are such a loser! You'll regret everything...	3	0.119290	    0.117411	    0.054145	0.115350	0.056062	0.056640	0.054139	0.054687	0.084397	0.094965	0.067409	0.060968	0.064538

Is there some obvious explanation for this?


Okay, I tried this successfully; I hope it does what I think it does (returning probabilities):

THRESHOLD = 0.5

test = ["Hugh grew up to be an amoral man because his parents never told him the difference between right and wrong.",
        "You are the sweetst person I know. I just love you so much and wish you all the best",
        "You are such a loser! You'll regret everything you've done to me! I'm so mad at you. You are ugly and stupid"]
id = [1,2,3]
d = {'text':test,'ID':id}
df = pd.DataFrame(d)

# Loading softmax for converting raw model outputs to probabilities:
from scipy.special import softmax

def run_model(input_data):
    # tokenize list
    encoding = tokenizer.encode_plus(
      input_data.item(),
      add_special_tokens=True,
      max_length=512,
      return_token_type_ids=False,
      padding="max_length",
      return_attention_mask=True,
      return_tensors='pt',
    )

    # returning probability values for each label
    _, test_prediction = trained_model(encoding["input_ids"], encoding["attention_mask"])
    return test_prediction.flatten().numpy()
    # (note: the loop below is never reached because of the return above)
    for label, prediction in zip(LABEL_COLUMNS, test_prediction):
      if prediction < THRESHOLD:
        continue
      print(f"{label}: {prediction}")

#Then, apply the function to each row:
df[LABEL_COLUMNS] = df[['text']].apply(run_model, axis=1, result_type='expand')

OUTPUT:

    text		                                        ID  morality_binary	emotion_binary	positive_binary	negative_binary	care_binary	fairness_binary	authority_binary	sanctity_binary	harm_binary	injustice_binary	betrayal_binary	subversion_binary	degradation_binary
0	Hugh grew up to be an amoral man because his p...	1	0.789733	0.790058	0.041283	0.810540	0.057118	0.064678	0.038587	0.036581	0.351876	0.561704	0.188023	0.111505	0.135425
1	You are the sweetst person I know. I just love...	2	0.454512	0.534496	0.465380	0.040864	0.168917	0.232755	0.088105	0.042983	0.041802	0.043420	0.017404	0.026819	0.024688
2	You are such a loser! You'll regret everything...	3	0.841336	0.825457	0.051440	0.807753	0.086246	0.096498	0.051335	0.061401	0.495314	0.613288	0.270552	0.170128	0.227031

Before leaving this, I just wanna say thank you. You’ve been a great help and I can now continue with my work. All the best,
Jørgen


Ah, I didn’t realise you were working on a multi-label problem. A faster (and more torch-like) way to do what you want is something like

preds = torch.sigmoid(test_prediction)
preds = preds > 0.5  # turn the sigmoid output into True/False depending on whether the value is > 0.5
preds = preds.int()  # turn True/False into 1/0
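
Folded back into the row-wise function, it could look roughly like this (a sketch; it assumes the model's second return value is one raw logit per label, so if the model already applies a sigmoid internally, drop the torch.sigmoid call):

import torch

def run_model(input_data):
    encoding = tokenizer.encode_plus(
        input_data.item(),
        add_special_tokens=True,
        max_length=512,
        return_token_type_ids=False,
        padding="max_length",
        return_attention_mask=True,
        return_tensors='pt',
    )
    _, test_prediction = trained_model(encoding["input_ids"], encoding["attention_mask"])
    probs = torch.sigmoid(test_prediction)   # independent probability per label
    labels = (probs > 0.5).int()             # 1/0 per label
    return labels.flatten().numpy()

df[LABEL_COLUMNS] = df[['text']].apply(run_model, axis=1, result_type='expand')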