I am using the
`finetune.py` script from the seq2seq examples to fine-tune T5 for a QA task:
```bash
export NQOPEN_DIR=/home/danielk/nqopen_csv
export OUT=/home/danielk/fine_tune_t5_small
python3 finetune.py \
    --data_dir $NQOPEN_DIR \
    --model_name_or_path t5-small --tokenizer_name t5-small \
    --learning_rate=3e-4 --freeze_encoder --freeze_embeds \
    --do_train --train_batch_size 16 \
    --do_predict --n_train -1 \
    --eval_beams 2 --eval_max_gen_length 142 \
    --val_check_interval 0.25 --n_val 3000 \
    --output_dir $OUT --gpus 4 --logger_name wandb \
    --save_top_k 3
```
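For reference, with `--gpus 4` the effective per-step batch size is the per-GPU `--train_batch_size` times the number of GPUs. A minimal sketch of that arithmetic (assuming the default of no gradient accumulation, since `--accumulate_grad_batches` is not passed above):

```python
# Effective batch size under multi-GPU data parallelism:
# each GPU processes train_batch_size examples per optimizer step.
train_batch_size = 16        # --train_batch_size
gpus = 4                     # --gpus
accumulate_grad_batches = 1  # assumed default; not set in the command above

effective_batch_size = train_batch_size * gpus * accumulate_grad_batches
print(effective_batch_size)  # 64
```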
Here is what my inputs/outputs look like:
```
$ head ~/Desktop/nqopen_csv/train.source -l 5
==> /Users/danielk/Desktop/nqopen_csv/train.source <==
total number of death row inmates in the us?
big little lies season 2 how many episodes?
who sang waiting for a girl like you?
where do you cross the arctic circle in norway?
who is the main character in green eggs and ham?
do veins carry blood to the heart or away?
who played charlie bucket in the original charlie and the chocolate factory?
what is 1 radian in terms of pi?
when does season 5 of bates motel come out?
how many episodes are in series 7 game of thrones?
head: -l: No such file or directory
head: 5: No such file or directory
$ head ~/Desktop/nqopen_csv/train.target -l 5
==> /Users/danielk/Desktop/nqopen_csv/train.target <==
2,718
seven
Foreigner
Saltfjellet
Sam - I - am
to
Peter Gardner Ostrum
1 / 2π
February 20 , 2017
seven
```
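The `.source` and `.target` files are line-aligned: line *i* of `train.source` is the question whose answer is line *i* of `train.target`. A quick sanity-check sketch, using the first few examples shown above:

```python
# Pair questions with answers by line index (the files are line-aligned).
sources = [
    "total number of death row inmates in the us?",
    "big little lies season 2 how many episodes?",
    "who sang waiting for a girl like you?",
]
targets = [
    "2,718",
    "seven",
    "Foreigner",
]

pairs = list(zip(sources, targets))
for question, answer in pairs:
    print(f"{question} -> {answer}")
```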
After fine-tuning, I use the following script to get example generations:
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

path = "/Users/danielk/ideaProjects/fine_tune_t5_small/best_tfmr"
model = T5ForConditionalGeneration.from_pretrained(path)
tokenizer = T5Tokenizer.from_pretrained(path)
model.eval()

def run_model(input_string, **generator_args):
    # input_string += "</s>"
    input_ids = tokenizer.encode(input_string, return_tensors="pt")
    res = model.generate(input_ids, **generator_args)
    tokens = [tokenizer.decode(x) for x in res]
    print(tokens)

run_model("how many states does the US has? ")
run_model("who is the US president?")
run_model("who got the first nobel prize in physics?")
run_model("when is the next deadpool movie being released?")
run_model("which mode is used for short wave broadcast service?")
run_model("the south west wind blows across nigeria between?")
```
which gives me the following responses:
```
['44,100 state legislatures, 391,415 state states,527 states ; 521 states : 517 states']
['President Pro - lect Ulysses S. Truman and Mr. President Proseudo - Emees']
['Wilhelm Conrad Röntgen of Karl - Heinz Zurehmann - Shelgorithsg ⁇ rd']
['December 14, 2018. 05 - 02 - 03 - 08 - 13 - 2022. 2022']
['Fenway Wireless, Bluetooth, wireless channel system, WMV, FMN type 3D system.E.N']
["Nigeria's natural gas, but some other half saggbourns ; they reboss"]
```
which are quite bad.
For comparison, when I use a T5-small model fine-tuned on TPU (with TensorFlow), I get the following predictions:
```
['50']
['Donald Trump']
['Wilhelm Conrad Röntgen']
['December 18, 2018']
['TCP port 25']
['the Nigerian and Pacific Oceans']
```
Any thoughts on what is going wrong?