TypeError: Wav2Vec2FeatureExtractor.__call__() missing 1 required positional argument: 'raw_speech'

Hello,
I am following the tutorial "Fine-Tuning Week of XLSR-Wav2Vec2 on 60 languages". I am just copy-pasting the code. I am trying to run the "run_common_voice.py" script with the following parameters:
python run_common_voice.py \
    --model_name_or_path="facebook/wav2vec2-large-xlsr-53" \
    --dataset_config_name="ka" \
    --output_dir=./wav2vec2-large-xlsr-georgian-demo \
    --overwrite_output_dir \
    --num_train_epochs="5" \
    --per_device_train_batch_size="16" \
    --learning_rate="3e-4" \
    --warmup_steps="500" \
    --evaluation_strategy="steps" \
    --save_steps="400" \
    --eval_steps="400" \
    --logging_steps="400" \
    --save_total_limit="3" \
    --freeze_feature_extractor \
    --feat_proj_dropout="0.0" \
    --layerdrop="0.1" \
    --gradient_checkpointing \
    --fp16 \
    --group_by_length \
    --do_train --do_eval

but I am getting the following error:

loading weights file https://huggingface.co/facebook/wav2vec2-large-xlsr-53/resolve/main/pytorch_model.bin from cache at /home/pavle/.cache/huggingface/transformers/5d2a20b45a1689a376ec4a6282b9d9be42f931cdf8daf07c3668ba1070a059d9.622b46163a38532eae8ac5423b0481dfc0b9ea401af488b5141772bdff889079
Some weights of the model checkpoint at facebook/wav2vec2-large-xlsr-53 were not used when initializing Wav2Vec2ForCTC: ['project_hid.bias', 'quantizer.weight_proj.bias', 'project_q.weight', 'project_hid.weight', 'project_q.bias', 'quantizer.codevectors', 'quantizer.weight_proj.weight']
- This IS expected if you are initializing Wav2Vec2ForCTC from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing Wav2Vec2ForCTC from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of Wav2Vec2ForCTC were not initialized from the model checkpoint at facebook/wav2vec2-large-xlsr-53 and are newly initialized: ['lm_head.weight', 'lm_head.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
100%|████████████████████████████████████████████████████████████████████████████| 1585/1585 [00:17<00:00, 92.73ex/s]
100%|██████████████████████████████████████████████████████████████████████████████| 656/656 [00:07<00:00, 87.72ex/s]
0%| | 0/100 [00:00<?, ?ba/s]
Traceback (most recent call last):
  File "/home/pavle/Dev/ai/Georgian/transformers/examples/research_projects/wav2vec2/run_common_voice.py", line 513, in <module>
    main()
  File "/home/pavle/Dev/ai/Georgian/transformers/examples/research_projects/wav2vec2/run_common_voice.py", line 423, in main
    train_dataset = train_dataset.map(
  File "/home/pavle/.local/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2387, in map
    return self._map_single(
  File "/home/pavle/.local/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 557, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
  File "/home/pavle/.local/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 524, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
  File "/home/pavle/.local/lib/python3.10/site-packages/datasets/fingerprint.py", line 480, in wrapper
    out = func(self, *args, **kwargs)
  File "/home/pavle/.local/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2775, in _map_single
    batch = apply_function_on_filtered_inputs(
  File "/home/pavle/.local/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2655, in apply_function_on_filtered_inputs
    processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
  File "/home/pavle/.local/lib/python3.10/site-packages/datasets/arrow_dataset.py", line 2347, in decorated
    result = f(decorated_item, *args, **kwargs)
  File "/home/pavle/Dev/ai/Georgian/transformers/examples/research_projects/wav2vec2/run_common_voice.py", line 417, in prepare_dataset
    processed_batch = processor(
  File "/home/pavle/.local/lib/python3.10/site-packages/transformers/models/wav2vec2/processing_wav2vec2.py", line 73, in __call__
    return self.current_processor(*args, **kwargs)
TypeError: Wav2Vec2FeatureExtractor.__call__() missing 1 required positional argument: 'raw_speech'

I have looked everywhere but am unable to find anything about this error.