Problem loading local dataset using TRL

I’ve managed to run and fine-tune on an example Hugging Face dataset per the instructions here: Welcome Gemma - Google’s new open LLM

However, I can’t seem to load a local dataset of my own construction. I have a folder with the following structure:

Directory
-train.csv
-test.csv
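
For what it’s worth, I’d expect the folder to be loadable directly with the datasets library like this (a minimal sketch; “Directory” stands in for my actual path):

from datasets import load_dataset

# Load the local train/test CSVs into a DatasetDict keyed by split name.
# "Directory" is a placeholder for the real folder path.
dataset = load_dataset(
    "csv",
    data_files={"train": "Directory/train.csv", "test": "Directory/test.csv"},
)
print(dataset)  # should show "train" and "test" splits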

But the following command (with placeholders substituted for my actual paths) throws the error “sft.py: error: the following arguments are required: --output_dir”:

accelerate launch --config_file examples/accelerate_configs/multi_gpu.yaml --num_processes=1 \
    examples/scripts/sft.py \
    --model_name google/gemma-7b \
    --dataset_name path/to/mycorpus \
    --per_device_train_batch_size 2 \
    --gradient_accumulation_steps 1 \
    --learning_rate 2e-4 \
    --save_steps 20_000 \
    --use_peft \
    --lora_r 16 --lora_alpha 32 \
    --lora_target_modules q_proj k_proj v_proj o_proj \
    --load_in_4bit \
    --output_dir myOutputDir

This command is only minimally different from the working command below, which uses a dataset from the Hugging Face Hub:

accelerate launch --config_file examples/accelerate_configs/multi_gpu.yaml --num_processes=1 \
    examples/scripts/sft.py \
    --model_name google/gemma-7b \
    --dataset_name OpenAssistant/oasst_top1_2023-08-25 \
    --per_device_train_batch_size 2 \
    --gradient_accumulation_steps 1 \
    --learning_rate 2e-4 \
    --save_steps 20_000 \
    --use_peft \
    --lora_r 16 --lora_alpha 32 \
    --lora_target_modules q_proj k_proj v_proj o_proj \
    --load_in_4bit \
    --output_dir gemma-finetuned-openassistant

What am I doing wrong? What is the minimal change to the example script needed to fine-tune the model on my own local dataset?
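
If --dataset_name only accepts Hub dataset names, I assume the fix has to go inside sft.py itself. A sketch of what I have in mind, based on the datasets documentation (the variable names here are guesses, not necessarily the script’s actual ones):

from datasets import load_dataset

# Replace the Hub lookup, e.g.:
#   dataset = load_dataset(script_args.dataset_name)
# with a direct load of the local CSV files:
dataset = load_dataset(
    "csv",
    data_files={
        "train": "path/to/mycorpus/train.csv",
        "test": "path/to/mycorpus/test.csv",
    },
)
train_dataset = dataset["train"]
eval_dataset = dataset["test"]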