Hello there!
Thanks for enhancing the world of translation with the Transformers library!
CONTEXT
- Fine-tuning t5-small on the opus100 dataset, following this script, which is a slight modification of the TF example.
SUMMARY
Hitting the following error after training completes successfully.
```
Traceback (most recent call last):
  File "/../The-Lord-of-The-Words-The-two-frameworks/src/models/train_model.py", line 742, in <module>
    main()
  File "/../The-Lord-of-The-Words-The-two-frameworks/src/models/train_model.py", line 695, in main
    history = model.fit(tf_train_dataset, epochs=int(training_args.num_train_epochs), callbacks=callbacks)
  File "/../The-Lord-of-The-Words-The-two-frameworks/.venv3.9/lib/python3.9/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler
    raise e.with_traceback(filtered_tb) from None
  File "/.../The-Lord-of-The-Words-The-two-frameworks/.venv3.9/lib/python3.9/site-packages/transformers/keras_callbacks.py", line 227, in on_epoch_end
    predictions = self.model.generate(
  File "/../The-Lord-of-The-Words-The-two-frameworks/.venv3.9/lib/python3.9/site-packages/transformers/generation/tf_utils.py", line 874, in generate
    and generation_config.min_length > generation_config.max_length
TypeError: '>' not supported between instances of 'int' and 'NoneType'
```
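Just to illustrate the failure mode (this is a sketch, not the library's actual code): that comparison raises exactly this error whenever max_length is still unset, since min_length is an int.

```python
# Minimal sketch of the failing comparison: min_length is an int
# (0 in the base config, as far as I can tell) while max_length is None.
min_length = 0
max_length = None
min_length > max_length  # TypeError: '>' not supported between instances of 'int' and 'NoneType'
```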
When I look at tf_utils.py, I can see that generate() compares min_length against max_length from the generation config.
Right now I think the generation_config is not being created properly, so max_length ends up as None. Does anyone know if this is a reasonable hypothesis?
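One quick way to test that hypothesis would be to inspect the generation config right after loading the model. A minimal sketch, assuming the script loads the checkpoint with TFAutoModelForSeq2SeqLM as in the TF example:

```python
from transformers import TFAutoModelForSeq2SeqLM

# Load the same checkpoint the script uses and inspect its generation config;
# if max_length prints as None here, the min_length > max_length check
# in generate() is guaranteed to raise this TypeError.
model = TFAutoModelForSeq2SeqLM.from_pretrained("t5-small")
print(model.generation_config)
print("max_length:", model.generation_config.max_length)
```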
I thought the error was related to the configuration of the model (that it was not being created), so I tried adding the script flag --config_name t5-small, but the error persisted.
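The only workaround I can think of (untested on my side; 128 is an arbitrary value, and compute_metrics / tf_eval_dataset stand in for whatever the script actually defines) is to give generate() an explicit max_length, either on the model or through the callback's generate_kwargs:

```python
from transformers.keras_callbacks import KerasMetricCallback

# Option 1: set max_length directly on the model's generation config,
# so the length check in generate() compares two ints.
model.generation_config.max_length = 128

# Option 2: forward max_length to model.generate() via the callback.
metric_callback = KerasMetricCallback(
    metric_fn=compute_metrics,        # placeholder: the script's metric function
    eval_dataset=tf_eval_dataset,     # placeholder: the script's eval dataset
    predict_with_generate=True,
    generate_kwargs={"max_length": 128},
)
```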
Any hints on what's going on at this point, or where should I go from here?
Thanks in advance!