Error loading and preprocessing LibriSpeech

When I try to load LibriSpeech, I get the following error:

RuntimeError: stack expects each tensor to be equal size, but got [248560] at entry 0 and [32000] at entry 1

This happens because each audio sample has a different length, so the samples cannot be stacked into a single batch.
To make the lengths uniform, I applied padding via train_dataset.map(train_dataset, batched=True), but an unexpected error popped up instead.
The error comes from newAI2022y08y23d\lib\site-packages\datasets\arrow_dataset.py, i.e. from inside the datasets library. I would like to know what causes it.

  0%|                                                                                                                               | 0/29 [00:00<?, ?ba/s]
Traceback (most recent call last):
  File "load_test.py", line 25, in <module>
    train_dataset = train_dataset.map(train_dataset, batched=True)  # for training
  File "C:\Users\PC_User\newAI2022y08y23d\newAI2022y08y23d\lib\site-packages\datasets\arrow_dataset.py", line 2405, in map
    desc=desc,
  File "C:\Users\PC_User\newAI2022y08y23d\newAI2022y08y23d\lib\site-packages\datasets\arrow_dataset.py", line 557, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
  File "C:\Users\PC_User\newAI2022y08y23d\newAI2022y08y23d\lib\site-packages\datasets\arrow_dataset.py", line 524, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
  File "C:\Users\PC_User\newAI2022y08y23d\newAI2022y08y23d\lib\site-packages\datasets\fingerprint.py", line 480, in wrapper
    out = func(self, *args, **kwargs)
  File "C:\Users\PC_User\newAI2022y08y23d\newAI2022y08y23d\lib\site-packages\datasets\arrow_dataset.py", line 2779, in _map_single
    offset=offset,
  File "C:\Users\PC_User\newAI2022y08y23d\newAI2022y08y23d\lib\site-packages\datasets\arrow_dataset.py", line 2655, in apply_function_on_filtered_inputs
    processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
  File "C:\Users\PC_User\newAI2022y08y23d\newAI2022y08y23d\lib\site-packages\datasets\arrow_dataset.py", line 2347, in decorated
    result = f(decorated_item, *args, **kwargs)
TypeError: 'Dataset' object is not callable

Here is the full code:

from datasets import load_dataset
import torch
import numpy as np
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from transformers import Wav2Vec2Processor, Wav2Vec2ConformerForCTC

tokenizer = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-conformer-rope-large-960h-ft")



def tokenize_function(examples):
    return tokenizer(examples["audio"], padding="max_length", truncation=True)
batch_size=256

dataset_URL="librispeech_asr"#データセットの場所
train_dataset = load_dataset(dataset_URL,"clean", split="train.100")#学習用
test_dataset = load_dataset(dataset_URL,"clean", split="test")#試験用

#train_dataset =load_dataset("text", data_files="datasets/Librispeech/.txt")
#test_dataset =load_dataset("text", data_files="my_file.txt")
sampling_rate = train_dataset.features["audio"].sampling_rate  # sampling rate


train_dataset = train_dataset.map(train_dataset, batched=True)  # for training
test_dataset = test_dataset.map(test_dataset, batched=True)  # for evaluation

train_loader = DataLoader(train_dataset, shuffle=True, batch_size=batch_size)  # for training
test_loader = DataLoader(test_dataset, batch_size=batch_size)  # for evaluation
print(len(train_loader))

Hi!

In the pasted code, you are passing the dataset objects themselves as the function argument of map, but a Dataset is not callable, hence the TypeError. You presumably meant to pass tokenize_function instead.
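
Here is a minimal sketch of the corrected mapping step, assuming you want to feed the raw audio arrays to the processor and pad everything to a fixed length (the max_length value below is a hypothetical cap you would tune to your data and memory budget):

def tokenize_function(examples):
    # each entry of examples["audio"] is a dict with "array" and "sampling_rate"
    audio_arrays = [audio["array"] for audio in examples["audio"]]
    return tokenizer(
        audio_arrays,
        sampling_rate=sampling_rate,
        padding="max_length",
        max_length=160000,   # hypothetical cap (10 s at 16 kHz); adjust to your data
        truncation=True,
    )

train_dataset = train_dataset.map(tokenize_function, batched=True)  # for training
test_dataset = test_dataset.map(tokenize_function, batched=True)    # for evaluation

With all samples padded or truncated to the same length, the DataLoader's default collation should be able to stack them into batches, which was the cause of the original RuntimeError; alternatively, you could keep the samples at their native lengths and pad per batch with a custom collate_fn.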