Wav2Vec2: loss decreased, but WER remained stable

Hello everyone,

I am using the training script from transformers/examples/research_projects/robust-speech-event (huggingface/transformers on GitHub) for the robust speech event.
During training, the loss decreases below 0.3, but the WER stays roughly constant near 1.0 when it should be dropping.

The run arguments I used are the following:

python run_speech_recognition_ctc.py \
--dataset_name="mozilla-foundation/common_voice_7_0" \
--model_name_or_path="facebook/wav2vec2-xls-r-300m" \
--dataset_config_name="fr" \
--output_dir="./xls-r-300m-fr" \
--overwrite_output_dir \
--num_train_epochs="1" \
--per_device_train_batch_size="8" \
--gradient_accumulation_steps="4" \
--per_device_eval_batch_size="8" \
--learning_rate="7.5e-5" \
--warmup_steps="2000" \
--length_column_name="input_length" \
--evaluation_strategy="steps" \
--text_column_name="sentence" \
--chars_to_ignore , ? . ! \- \; \: \" “ % ‘ ” � — ’ … – \
--save_steps="500" \
--eval_steps="500" \
--logging_steps="100" \
--layerdrop="0.0" \
--activation_dropout="0.1" \
--save_total_limit="3" \
--freeze_feature_encoder \
--feat_proj_dropout="0.0" \
--mask_time_prob="0.75" \
--mask_time_length="10" \
--mask_feature_prob="0.4" \
--mask_feature_length="64" \
--gradient_checkpointing \
--report_to="wandb" \
--run_name="xls-r-300m-fr" \
--use_auth_token \
--fp16 \
--group_by_length \
--do_train --do_eval \
--push_to_hub

Any idea what could explain this?

I found that during evaluation the <s> tokens are still decoded by my tokenizer:
"<s>c<s>e<s>c<s>i<s> est <s>un<s> <s>t<s>e<s>s<s>t<s>e<s>" instead of "ceci est un teste".
I suppose these tokens are ignored when computing the CTC loss but not when computing the evaluation metrics.
To correct that, in compute_metrics(pred) I changed pred_str = tokenizer.batch_decode(pred_ids) to pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True) (a sketch of the resulting function is below).
Is there a way to set this parameter in the tokenizer config rather than passing it at every decoding call? I didn't find anything about that.
It would also make the output decode cleanly when the model is tested on the Hub.
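
For reference, here is a minimal sketch of what the changed compute_metrics could look like. It assumes the usual structure of the example script; wer_metric and tokenizer stand in for whatever objects the script defines, and group_tokens=False for the labels follows the original script's behavior.

import numpy as np

def compute_metrics(pred):
    pred_logits = pred.predictions
    pred_ids = np.argmax(pred_logits, axis=-1)

    # -100 marks padded label positions and cannot be decoded, so map it to the pad token
    pred.label_ids[pred.label_ids == -100] = tokenizer.pad_token_id

    # skip_special_tokens=True drops <s>, </s>, <pad>, ... from the decoded strings
    pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
    label_str = tokenizer.batch_decode(pred.label_ids, group_tokens=False, skip_special_tokens=True)

    return {"wer": wer_metric.compute(predictions=pred_str, references=label_str)}

With that change, the predictions decode to plain text and the WER starts reflecting the actual transcription quality instead of counting every <s>-littered word as wrong.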


Hello AlexN,

I am experiencing the same issue.

I think the solution can be found in this issue thread:

→ Add the following arguments when instantiating the tokenizer:

eos_token=None, bos_token=None
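
For example, if you build the tokenizer yourself rather than letting the script create it, the call could look roughly like this. This is only a sketch: the vocab.json path and the [UNK]/[PAD]/| tokens are assumptions based on what the robust-speech-event example normally produces.

from transformers import Wav2Vec2CTCTokenizer

# eos_token=None and bos_token=None keep <s> and </s> out of the vocabulary,
# so they can never appear in the decoded CTC output.
tokenizer = Wav2Vec2CTCTokenizer(
    "./vocab.json",  # assumed path to the vocabulary built from the training text
    unk_token="[UNK]",
    pad_token="[PAD]",
    word_delimiter_token="|",
    eos_token=None,
    bos_token=None,
)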
