How to load image dataset using csv to get proper dataset datatype

Hi, I’m working on image classification. I have a CSV file that maps image names to their labels.
I tried what was suggested here, but I want the images to be in PIL format, and the labels should be recognized as well.

This is what I’m getting:

>> data_files = {'train': 'train.csv', 'test': 'test.csv'}
>> ds = load_dataset('csv', data_files=data_files, data_dir='/kaggle/input/cassava-leaf-disease-classification/train_images/')
>> ds
DatasetDict({
    train: Dataset({
        features: ['Unnamed: 0', 'image_id', 'label'],
        num_rows: 17118
    })
    test: Dataset({
        features: ['Unnamed: 0', 'image_id', 'label'],
        num_rows: 4279
    })
})

>> ds['train'].features
{'Unnamed: 0': Value(dtype='int64', id=None),
 'image_id': Value(dtype='string', id=None),
 'label': Value(dtype='int64', id=None)}

This is how I want it to be:

(this image is from a colab notebook provided by huggingface)

How can I get it right?

That notebook uses the ImageFolder loading strategy. Since you’re loading from a CSV file, you can instead cast the image_id column to an Image() feature after you’ve loaded the dataset, i.e. you can just run:

from datasets import Image
ds = ds.cast_column("image_id", Image())

(From Load image data)

And when you check the features again you’ll see that image_id will be an image!

Thanks for the reply.
Following that, I get this error:

>> ds = ds.cast_column("image_id", Image())
ArrowNotImplementedError                  Traceback (most recent call last)
<ipython-input-22-8d1f43006898> in <module>
----> 1 ds = ds.cast_column("image_id", Image())

19 frames
/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status()

ArrowNotImplementedError: Unsupported cast from int64 to struct using function cast_struct