Builder Error after downloading all files

from datasets import load_dataset

load_dataset("Open-Orca/OpenOrca", split="train")

Downloading data files: 100%|##########| 2/2 [00:00<00:00, 2280.75it/s]
Extracting data files: 100%|##########| 2/2 [00:00<00:00, 162.74it/s]
Generating train split: 838094 examples [00:05, 191393.82 examples/s]
dataset-worker | [2023-06-30 08:15:51,788: WARNING/ForkPoolWorker-1]
dataset-worker | [2023-06-30 08:15:51,944: ERROR/ForkPoolWorker-1] Task nlp.tasks.task_retrieve_and_store_open_orca_data[681dd111-bc30-4893-894e-7ded8014aca1] raised unexpected: DatasetGenerationError('An error occurred while generating the dataset')
dataset-worker | Traceback (most recent call last):
dataset-worker | File "/usr/local/lib/python3.10/site-packages/datasets/builder.py", line 1894, in _prepare_split_single
dataset-worker | writer.write_table(table)
dataset-worker | File "/usr/local/lib/python3.10/site-packages/datasets/arrow_writer.py", line 570, in write_table
dataset-worker | pa_table = table_cast(pa_table, self._schema)
dataset-worker | File "/usr/local/lib/python3.10/site-packages/datasets/table.py", line 2324, in table_cast
dataset-worker | return cast_table_to_schema(table, schema)
dataset-worker | File "/usr/local/lib/python3.10/site-packages/datasets/table.py", line 2282, in cast_table_to_schema
dataset-worker | raise ValueError(f"Couldn't cast\n{table.schema}\nto\n{features}\nbecause column names don't match")
dataset-worker | ValueError: Couldn't cast
dataset-worker | id: string
dataset-worker | system_prompt: string
dataset-worker | question: string
dataset-worker | target: string
dataset-worker | response: string
dataset-worker | to
dataset-worker | {'id': Value(dtype='string', id=None), 'system_prompt': Value(dtype='string', id=None), 'question': Value(dtype='string', id=None), 'response': Value(dtype='string', id=None)}
dataset-worker | because column names don't match
dataset-worker |
dataset-worker | The above exception was the direct cause of the following exception:
dataset-worker |
dataset-worker | Traceback (most recent call last):
dataset-worker | File "/usr/local/lib/python3.10/site-packages/celery/app/trace.py", line 451, in trace_task
dataset-worker | R = retval = fun(*args, **kwargs)
dataset-worker | File "/usr/local/lib/python3.10/site-packages/celery/app/trace.py", line 734, in __protected_call__
dataset-worker | return self.run(*args, **kwargs)
dataset-worker | File "/usr/local/lib/python3.10/site-packages/sentry_sdk/integrations/celery.py", line 197, in _inner
dataset-worker | reraise(*exc_info)
dataset-worker | File "/usr/local/lib/python3.10/site-packages/sentry_sdk/_compat.py", line 54, in reraise
dataset-worker | raise value
dataset-worker | File "/usr/local/lib/python3.10/site-packages/sentry_sdk/integrations/celery.py", line 192, in _inner
dataset-worker | return f(*args, **kwargs)
dataset-worker | File "/usr/local/lib/python3.10/site-packages/datasets/load.py", line 1809, in load_dataset
dataset-worker | builder_instance.download_and_prepare(
dataset-worker | File "/usr/local/lib/python3.10/site-packages/datasets/builder.py", line 909, in download_and_prepare
dataset-worker | self._download_and_prepare(
dataset-worker | File "/usr/local/lib/python3.10/site-packages/datasets/builder.py", line 1004, in _download_and_prepare
dataset-worker | self._prepare_split(split_generator, **prepare_split_kwargs)
dataset-worker | File "/usr/local/lib/python3.10/site-packages/datasets/builder.py", line 1767, in _prepare_split
dataset-worker | for job_id, done, content in self._prepare_split_single(
dataset-worker | File "/usr/local/lib/python3.10/site-packages/datasets/builder.py", line 1912, in _prepare_split_single
dataset-worker | raise DatasetGenerationError("An error occurred while generating the dataset") from e
dataset-worker | datasets.builder.DatasetGenerationError: An error occurred while generating the dataset

There are two versions of the OpenOrca dataset - one with GPT-4 completions and another with GPT-3.5 completions - and their files have different columns (the traceback shows an extra `target` column), so the builder fails when it tries to cast both to a single schema. You can load the former with load_dataset("Open-Orca/OpenOrca", data_dir="001-1M-GPT4-Augmented") and the latter with load_dataset("Open-Orca/OpenOrca", data_dir="002-3_5M-GPT3_5-Augmented").