I tried to reproduce the fine-tuning of the T5 masked language model (MLM) in Transformers. It is based on the article here.
The fine-tuning part is done with the run_t5_mlm_flax.py script:
python run_t5_mlm_flax.py \
--output_dir="./norwegian-t5-base" \
--model_type="t5" \
--config_name="./norwegian-t5-base" \
--tokenizer_name="./norwegian-t5-base" \
--dataset_name="oscar" \
--dataset_config_name="unshuffled_deduplicated_no" \
--max_seq_length="512" \
--per_device_train_batch_size="32" \
--per_device_eval_batch_size="32" \
--adafactor \
--learning_rate="0.005" \
--weight_decay="0.001" \
--warmup_steps="2000" \
--overwrite_output_dir \
--logging_steps="500" \
--save_steps="10000" \
--eval_steps="2500" \
--push_to_hub
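In case it is relevant: as far as I know, JAX_PLATFORM_NAME is a standard JAX environment variable, so one thing I could try is forcing the platform when launching the script, which should make JAX fail loudly instead of silently falling back to the CPU if the GPU backend is not usable. Just a sketch, with the arguments unchanged from above:

JAX_PLATFORM_NAME=gpu python run_t5_mlm_flax.py [same arguments as above]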
However, when I run this command the GPU does not seem to be used. For example, the watch -n0.1 nvidia-smi command does not show any GPU utilization while the script is running.
I confirmed that the GPU is recognized by Flax/JAX:
In [1]: import jax
   ...: print("Number of available GPUs:", jax.device_count())
   ...: print("Default GPU:", jax.default_backend())
Number of available GPUs: 1
Default GPU: gpu
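To go one step further than the device listing, I can also run a small computation and watch nvidia-smi while it executes; jax.devices(), jax.default_backend(), and block_until_ready() are standard JAX APIs, so this is just a minimal sketch of that check:

import jax
import jax.numpy as jnp

# List every device JAX can see; on this machine it includes one GPU.
print("Devices:", jax.devices())
print("Backend:", jax.default_backend())

# A matrix multiplication large enough to show up briefly in nvidia-smi.
x = jnp.ones((8192, 8192))
y = jnp.dot(x, x).block_until_ready()
print("Result shape:", y.shape)

If this shows activity in nvidia-smi while the training script does not, then JAX itself can use the GPU and the problem would be specific to how the script is set up.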
How can I make sure that run_t5_mlm_flax.py actually uses the GPU?