Model.generate stops code execution without any error

Hello. I am pretty new to Hugging Face; this is my first attempt to use a model. The problem is that model.generate abruptly stops the script execution without raising any error. Here's my code:

from transformers import RobertaTokenizer, T5ForConditionalGeneration

tokenizer = RobertaTokenizer.from_pretrained('Salesforce/codet5-base')
model = T5ForConditionalGeneration.from_pretrained('Salesforce/codet5-base')

text = "write for cycle"
input_ids = tokenizer(text, return_tensors="pt").input_ids

print("before")
generated_ids = model.generate(input_ids, max_length=8)
print("after")

print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))

This code gives me no output except "before". I've tried other models with the same result. It looks like the issue is on my side… I'd be very grateful for your help. Thanks!
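
To narrow things down, here is a minimal debugging sketch I can run with the same tokenizer and model (the decoder_input_ids value is my assumption, taken from the decoder_start_token_id in the config printed in the logs below). If a single forward pass also never returns, the hang is in the model forward itself rather than in the generate() loop:

import torch
from transformers import RobertaTokenizer, T5ForConditionalGeneration

tokenizer = RobertaTokenizer.from_pretrained('Salesforce/codet5-base')
model = T5ForConditionalGeneration.from_pretrained('Salesforce/codet5-base')

input_ids = tokenizer("write for cycle", return_tensors="pt").input_ids

# One manual decoding step: start the decoder with decoder_start_token_id
# (0 according to the model config). If this forward pass hangs too, the
# problem is not specific to generate().
decoder_input_ids = torch.tensor([[model.config.decoder_start_token_id]])
with torch.no_grad():
    outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
print("forward pass finished, logits shape:", outputs.logits.shape)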

Here are the logs:

loading file vocab.json from cache at /Users/zonder/.cache/huggingface/hub/models--Salesforce--codet5-base/snapshots/4078456db09ba972a3532827a0b5df4da172323c/vocab.json
loading file merges.txt from cache at /Users/zonder/.cache/huggingface/hub/models--Salesforce--codet5-base/snapshots/4078456db09ba972a3532827a0b5df4da172323c/merges.txt
loading file added_tokens.json from cache at /Users/zonder/.cache/huggingface/hub/models--Salesforce--codet5-base/snapshots/4078456db09ba972a3532827a0b5df4da172323c/added_tokens.json
loading file special_tokens_map.json from cache at /Users/zonder/.cache/huggingface/hub/models--Salesforce--codet5-base/snapshots/4078456db09ba972a3532827a0b5df4da172323c/special_tokens_map.json
loading file tokenizer_config.json from cache at /Users/zonder/.cache/huggingface/hub/models--Salesforce--codet5-base/snapshots/4078456db09ba972a3532827a0b5df4da172323c/tokenizer_config.json
loading configuration file config.json from cache at /Users/zonder/.cache/huggingface/hub/models--Salesforce--codet5-base/snapshots/4078456db09ba972a3532827a0b5df4da172323c/config.json
Model config T5Config {
  "_name_or_path": "/content/drive/MyDrive/CodeT5/pretrained_models/codet5_base",
  "architectures": [
    "T5ForConditionalGeneration"
  ],
  "bos_token_id": 1,
  "d_ff": 3072,
  "d_kv": 64,
  "d_model": 768,
  "decoder_start_token_id": 0,
  "dense_act_fn": "relu",
  "dropout_rate": 0.1,
  "eos_token_id": 2,
  "feed_forward_proj": "relu",
  "gradient_checkpointing": false,
  "id2label": {
    "0": "LABEL_0"
  },
  "initializer_factor": 1.0,
  "is_encoder_decoder": true,
  "is_gated_act": false,
  "label2id": {
    "LABEL_0": 0
  },
  "layer_norm_epsilon": 1e-06,
  "model_type": "t5",
  "n_positions": 512,
  "num_decoder_layers": 12,
  "num_heads": 12,
  "num_layers": 12,
  "output_past": true,
  "pad_token_id": 0,
  "relative_attention_max_distance": 128,
  "relative_attention_num_buckets": 32,
  "task_specific_params": {
    "summarization": {
      "early_stopping": true,
      "length_penalty": 2.0,
      "max_length": 200,
      "min_length": 30,
      "no_repeat_ngram_size": 3,
      "num_beams": 4,
      "prefix": "summarize: "
    },
    "translation_en_to_de": {
      "early_stopping": true,
      "max_length": 300,
      "num_beams": 4,
      "prefix": "translate English to German: "
    },
    "translation_en_to_fr": {
      "early_stopping": true,
      "max_length": 300,
      "num_beams": 4,
      "prefix": "translate English to French: "
    },
    "translation_en_to_ro": {
      "early_stopping": true,
      "max_length": 300,
      "num_beams": 4,
      "prefix": "translate English to Romanian: "
    }
  },
  "torch_dtype": "float32",
  "transformers_version": "4.30.0.dev0",
  "use_cache": true,
  "vocab_size": 32100
}

loading weights file pytorch_model.bin from cache at /Users/zonder/.cache/huggingface/hub/models--Salesforce--codet5-base/snapshots/4078456db09ba972a3532827a0b5df4da172323c/pytorch_model.bin
Generate config GenerationConfig {
  "_from_model_config": true,
  "bos_token_id": 1,
  "decoder_start_token_id": 0,
  "eos_token_id": 2,
  "pad_token_id": 0,
  "transformers_version": "4.30.0.dev0"
}

All model checkpoint weights were used when initializing T5ForConditionalGeneration.

All the weights of T5ForConditionalGeneration were initialized from the model checkpoint at Salesforce/codet5-base.
If your task is similar to the task the model of the checkpoint was trained on, you can already use T5ForConditionalGeneration for predictions without further training.
Generation config file not found, using a generation config created from the model config.
before
Generate config GenerationConfig {
  "_from_model_config": true,
  "bos_token_id": 1,
  "decoder_start_token_id": 0,
  "eos_token_id": 2,
  "pad_token_id": 0,
  "transformers_version": "4.30.0.dev0"
}
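
The output stops right after "before" and the second generation-config dump, so the script seems to freeze inside model.generate itself; "after" is never printed. For reference, the verbose loading messages above come from raising the transformers logging level (a minimal sketch of how I enabled it, assuming the standard logging helpers):

from transformers.utils import logging
logging.set_verbosity_info()  # prints the "loading file ..." and config messages seen above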