Fine-tuning for summarization: script error

Hello, I'm trying to run the fine-tuning script for summarization, but I'm getting this error:

usage: run_summarization_no_trainer.py [-h] [--dataset_name DATASET_NAME]
[--dataset_config_name DATASET_CONFIG_NAME]
[--train_file TRAIN_FILE]
[--validation_file VALIDATION_FILE]
[--ignore_pad_token_for_loss IGNORE_PAD_TOKEN_FOR_LOSS]
[--max_source_length MAX_SOURCE_LENGTH]
[--source_prefix SOURCE_PREFIX]
[--preprocessing_num_workers PREPROCESSING_NUM_WORKERS]
[--overwrite_cache OVERWRITE_CACHE]
[--max_target_length MAX_TARGET_LENGTH]
[--val_max_target_length VAL_MAX_TARGET_LENGTH]
[--max_length MAX_LENGTH]
[--num_beams NUM_BEAMS]
[--pad_to_max_length]
[--model_name_or_path MODEL_NAME_OR_PATH]
[--config_name CONFIG_NAME]
[--tokenizer_name TOKENIZER_NAME]
[--text_column TEXT_COLUMN]
[--summary_column SUMMARY_COLUMN]
[--use_slow_tokenizer]
[--per_device_train_batch_size PER_DEVICE_TRAIN_BATCH_SIZE]
[--per_device_eval_batch_size PER_DEVICE_EVAL_BATCH_SIZE]
[--learning_rate LEARNING_RATE]
[--weight_decay WEIGHT_DECAY]
[--num_train_epochs NUM_TRAIN_EPOCHS]
[--max_train_steps MAX_TRAIN_STEPS]
[--gradient_accumulation_steps GRADIENT_ACCUMULATION_STEPS]
[--lr_scheduler_type {linear,cosine,cosine_with_restarts,polynomial,constant,constant_with_warmup}]
[--num_warmup_steps NUM_WARMUP_STEPS]
[--output_dir OUTPUT_DIR] [--seed SEED]
[--model_type {albert,bart,beit,bert,bert-generation,big_bird,bigbird_pegasus,blenderbot,blenderbot-small,bloom,camembert,canine,clip,codegen,convbert,convnext,ctrl,cvt,data2vec-audio,data2vec-text,data2vec-vision,deberta,deberta-v2,decision_transformer,deit,detr,distilbert,donut-swin,dpr,dpt,electra,flaubert,flava,fnet,fsmt,funnel,glpn,gpt2,gpt_neo,gpt_neox,gptj,groupvit,hubert,ibert,imagegpt,layoutlm,layoutlmv2,layoutlmv3,led,levit,longformer,longt5,luke,lxmert,m2m_100,marian,maskformer,mbart,mctct,megatron-bert,mobilebert,mobilevit,mpnet,mt5,mvp,nezha,nystromformer,openai-gpt,opt,owlvit,pegasus,perceiver,plbart,poolformer,prophetnet,qdqbert,reformer,regnet,rembert,resnet,retribert,roberta,roformer,segformer,sew,sew-d,speech_to_text,splinter,squeezebert,swin,swinv2,t5,tapas,trajectory_transformer,transfo-xl,unispeech,unispeech-sat,van,videomae,vilt,vision-text-dual-encoder,visual_bert,vit,vit_mae,wav2vec2,wav2vec2-conformer,wavlm,xglm,xlm,xlm-prophetnet,xlm-roberta,xlm-roberta-xl,xlnet,yolos,yoso}]
[--push_to_hub]
[--hub_model_id HUB_MODEL_ID]
[--hub_token HUB_TOKEN]
[--checkpointing_steps CHECKPOINTING_STEPS]
[--resume_from_checkpoint RESUME_FROM_CHECKPOINT]
[--with_tracking]
[--report_to REPORT_TO]
run_summarization_no_trainer.py: error: unrecognized arguments:
./sumtrainer.sh: line 2: --model_name_or_path: command not found
./sumtrainer.sh: line 3: --config_name: command not found
./sumtrainer.sh: line 4: --do_train: command not found
./sumtrainer.sh: line 5: --dataset_name: command not found
./sumtrainer.sh: line 6: --dataset_config: command not found
./sumtrainer.sh: line 7: --source_prefix: command not found
./sumtrainer.sh: line 8: --output_dir: command not found
./sumtrainer.sh: line 9: --preprocessing_num_workers=: command not found
./sumtrainer.sh: line 10: --per_device_train_batch_size: command not found
./sumtrainer.sh: line 11: --per_device_eval_batch_size: command not found
./sumtrainer.sh: line 12: --overwrite_output_dir: command not found
./sumtrainer.sh: line 13: --predict_with_generate: command not found
./sumtrainer.sh: line 14: --num_beams: command not found
./sumtrainer.sh: line 15: --pad_to_max_length: command not found
./sumtrainer.sh: line 16: --tokenizer_name: command not found

Here is the script file:

python3 run_summarization_no_trainer.py
--model_name_or_path "/media/New Volume/models/t5-efficient-xl-nl28"
--config_name "/media/New Volume/models/t5-efficient-xl-nl28"
--do_train
--dataset_name "xsum"
--dataset_config "train"
--source_prefix "summarize: "
--output_dir "/media/New Volume/models/nl28"
--preprocessing_num_workers= "16"
--per_device_train_batch_size "4"
--per_device_eval_batch_size "4"
--overwrite_output_dir "media/New Volume/models/need to make/cachedir"
--predict_with_generate
--num_beams "10"
--pad_to_max_length "max_length"
--tokenizer_name "google/t5-efficient-xl-nl28"

I haven't modified run_summarization_no_trainer.py itself.
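
Edit: looking at the "command not found" lines again, I suspect bash is running each --flag line as a separate command because the lines aren't continued. Below is my untested sketch of the same invocation written as a single command, with a trailing backslash on every line. I also kept only the flags that actually appear in the usage output above: --do_train, --overwrite_output_dir, and --predict_with_generate aren't listed there (as far as I can tell they belong to the Trainer-based run_summarization.py), --dataset_config isn't listed either (the closest is --dataset_config_name, and "train" looks like a split rather than a config, so I dropped it), --preprocessing_num_workers shouldn't have a "=", and the usage shows --pad_to_max_length as a bare flag that takes no value:

#!/bin/bash
# Untested sketch: one command, every line except the last ends in a backslash.
# Paths and values are the ones from my original script.
python3 run_summarization_no_trainer.py \
  --model_name_or_path "/media/New Volume/models/t5-efficient-xl-nl28" \
  --config_name "/media/New Volume/models/t5-efficient-xl-nl28" \
  --tokenizer_name "google/t5-efficient-xl-nl28" \
  --dataset_name "xsum" \
  --source_prefix "summarize: " \
  --output_dir "/media/New Volume/models/nl28" \
  --preprocessing_num_workers 16 \
  --per_device_train_batch_size 4 \
  --per_device_eval_batch_size 4 \
  --num_beams 10 \
  --pad_to_max_length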