AlexN
January 27, 2022, 11:04am
1
Hello everyone,
I am using the training script from `transformers/examples/research_projects/robust-speech-event` (in the huggingface/transformers repository on GitHub) for the robust speech event.
During training, the loss decreases below 0.3 but the WER stays constant near 1, when it should be dropping.
The run args I used are the following:
```shell
python run_speech_recognition_ctc.py \
--dataset_name="mozilla-foundation/common_voice_7_0" \
--model_name_or_path="facebook/wav2vec2-xls-r-300m" \
--dataset_config_name="fr" \
--output_dir="./xls-r-300m-fr" \
--overwrite_output_dir \
--num_train_epochs="1" \
--per_device_train_batch_size="8" \
--gradient_accumulation_steps="4" \
--per_device_eval_batch_size="8" \
--learning_rate="7.5e-5" \
--warmup_steps="2000" \
--length_column_name="input_length" \
--evaluation_strategy="steps" \
--text_column_name="sentence" \
--chars_to_ignore , ? . ! \- \; \: \" “ % ‘ ” � — ’ … – \
--save_steps="500" \
--eval_steps="500" \
--logging_steps="100" \
--layerdrop="0.0" \
--activation_dropout="0.1" \
--save_total_limit="3" \
--freeze_feature_encoder \
--feat_proj_dropout="0.0" \
--mask_time_prob="0.75" \
--mask_time_length="10" \
--mask_feature_prob="0.4" \
--mask_feature_length="64" \
--gradient_checkpointing \
--report_to="wandb" \
--run_name="xls-r-300m-fr" \
--use_auth_token \
--fp16 \
--group_by_length \
--do_train --do_eval \
--push_to_hub
```
Any idea why?
AlexN
January 29, 2022, 9:10pm
2
I found that during evaluation the `<s>` tokens are still decoded by my tokenizer:
`"<s>c<s>e<s>c<s>i<s> est <s>un<s> <s>t<s>e<s>s<s>t<s>e<s>"` instead of `"ceci est un teste"`.
I suppose these tokens are ignored when computing the CTC loss, but not when computing the eval metrics.
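To see why this pins the metric near 1, here is a minimal sketch (plain Python, not the metric implementation the training script actually uses): WER as a word-level edit distance, applied to the raw decoded string above.

```python
# Minimal WER sketch: word-level edit distance divided by reference length.
# Stray "<s>" tokens corrupt almost every word, so WER stays high even
# though the underlying characters are correct.

def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

reference = "ceci est un teste"
raw = "<s>c<s>e<s>c<s>i<s> est <s>un<s> <s>t<s>e<s>s<s>t<s>e<s>"
print(wer(reference, raw))  # 3 of the 4 words are corrupted -> 0.75
```

With longer sentences, where nearly every word picks up at least one `<s>`, the ratio climbs toward 1, which matches the flat WER curve observed during training.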
To correct that, I changed `pred_str = tokenizer.batch_decode(pred_ids)` to `pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)` in `compute_metrics(pred)`.
Is there a way to set this parameter in the tokenizer config rather than passing it at every decoding call? I didn’t find anything about that.
It would also produce nicely decoded output when the model is tested on the Hub.
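The effect of `skip_special_tokens` can be illustrated with a stand-in decoder (plain Python, not the real `Wav2Vec2CTCTokenizer`; the token list is made up, and `|` plays the word-delimiter role it has in wav2vec2 CTC vocabularies):

```python
# Stand-in for the tokenizer's decode step, to show what
# skip_special_tokens changes.

SPECIAL_TOKENS = {"<s>", "</s>", "<pad>", "<unk>"}

def decode(tokens, skip_special_tokens=False):
    if skip_special_tokens:
        tokens = [t for t in tokens if t not in SPECIAL_TOKENS]
    # "|" is the word delimiter in wav2vec2-style CTC vocabularies.
    return "".join(tokens).replace("|", " ").strip()

tokens = ["<s>", "c", "<s>", "e", "c", "i", "|", "e", "s", "t", "<s>"]
print(decode(tokens))                            # '<s>c<s>eci est<s>'
print(decode(tokens, skip_special_tokens=True))  # 'ceci est'
```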
1 Like
Plim
January 30, 2022, 10:29am
3
Hello AlexN,
I am experiencing the same issue.
I think the solution can be found in this issue thread:
(Issue opened 28 Jan 2022, 04:05 PM UTC; closed 3 Feb 2022, 12:53 PM UTC.)
## Environment info
- `transformers` version: 4.17.0.dev0
- Platform: Linux-5.4.0-1063-azure-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.10.2+cu102 (True)
- Tensorflow version (GPU?): 2.6.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes (device = 'cuda')
- Using distributed or parallel set-up in script?: No
## Who can help
@patrickvonplaten, @anton-l
## Dataset used
https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0
## Information
Model I am using: https://huggingface.co/Iskaj/xlsr300m_cv_7.0_nl_lm, (dataset = https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0) The model fine-tuned version of https://huggingface.co/facebook/wav2vec2-xls-r-300m
The problem arises when using:
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "Iskaj/xlsr300m_cv_7.0_nl_lm"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "nl", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
    logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
```
## To reproduce
Steps to reproduce the behavior:
1. Install packages:
```shell
pip install https://github.com/kpu/kenlm/archive/master.zip pyctcdecode
pip install git+https://github.com/huggingface/transformers.git
pip install git+https://github.com/huggingface/datasets.git
pip install torchaudio soundfile librosa Levenshtein telwoord wandb jiwer
```
2. Run the snippet above.
3. Observe the error: `ValueError: Input logits of size 48, but vocabulary is size 50`
## Expected behavior
I would expect pyctcdecode to work correctly and give me a transcription.
I suspect it has something to do with `<s>` and `</s>`. I've been struggling with the length of the logits not matching the length of the vocabulary when using pyctcdecode. For example, in this repo that uses the LM, the vocab.json includes `<s>` and `</s>`: https://huggingface.co/patrickvonplaten/wav2vec2-base-100h-with-lm/blob/main/vocab.json. But in this repo it doesn't: https://huggingface.co/hf-test/xls-r-300m-sv/blob/main/vocab.json. Maybe that helps.
→ Add the following input arguments to the tokenizer:
`eos_token=None, bos_token=None`
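A sketch of the size mismatch this fix resolves (the counts 48 and 50 come from the error message above; the token names here are illustrative, not the real vocabulary):

```python
# The CTC head emits one logit per vocabulary entry: 48 in this model.
ctc_vocab = [f"tok{i}" for i in range(48)]

# When bos_token/eos_token are set, the tokenizer reports two extra
# labels, so the decoder expects 50 while the model only produces 48.
bos_eos = ["<s>", "</s>"]
with_specials = ctc_vocab + bos_eos

print(len(ctc_vocab), len(with_specials))  # 48 50

# With eos_token=None and bos_token=None, nothing is added, the label
# set stays at 48, and it lines up with the logits again.
```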
1 Like