I’m trying to create a dataset for an object detection task. The training images are stored on S3, and I would like to eventually use SageMaker and an estimator to train the model.
If I understand correctly, I need to create a dataset first and then save it in the session bucket on S3, but I’m not entirely sure how to do that when the dataset is too big to pull locally first.
I have found the load_dataset function with the ‘imagefolder’ option, which seems to do what I want for local image files but doesn’t seem to support S3 filepaths. I have also found the load_from_disk function, which can load datasets from S3 but doesn’t have an imagefolder option.
What is the best way to prepare my data in this case?
You don’t need to save the dataset in the session bucket or do the pre-processing in advance. You can do everything inside SageMaker.
Meaning you can either first download your dataset from S3 to local storage and then use load_dataset, or just provide the S3 URIs of your bucket when calling HuggingFace.fit(). Those S3 URIs don’t need to be on the session bucket; they can be on any bucket.
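As a rough sketch of the second option (the entry point, instance type, bucket paths, and DLC versions below are placeholders, not a definitive setup — use whatever matches your environment):

```python
from sagemaker import get_execution_role
from sagemaker.huggingface import HuggingFace

role = get_execution_role()

# Sketch only: entry_point, instance type, and the DLC versions below are
# assumptions — pick the container versions that match your setup.
huggingface_estimator = HuggingFace(
    entry_point="train.py",
    instance_type="ml.p3.2xlarge",
    instance_count=1,
    role=role,
    transformers_version="4.26",
    pytorch_version="1.13",
    py_version="py39",
)

# The URIs can point at any bucket, not just the session bucket; SageMaker
# makes each channel available under /opt/ml/input/data/<channel>.
huggingface_estimator.fit({
    "train": "s3://my-bucket/datasets/train",  # placeholder paths
    "test": "s3://my-bucket/datasets/test",
})
```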
Basically I have the same problem setup as cotrane, but I wanted to use the FastFile input mode because of the size of my dataset. As I understand it, FastFile streams the data from S3 instead of downloading it all at once. Is there any way to make that work together with the HF estimator/dataset approach?
@johko yes you can. The Hugging Face estimator and DLC support all known SageMaker features, meaning you can use the File input mode for your training. Documentation can be found here: Access Training Data - Amazon SageMaker
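If you specifically want FastFile rather than File mode, a minimal sketch would be to wrap the S3 URI in a TrainingInput and set input_mode="FastFile" (the path below is a placeholder):

```python
from sagemaker.inputs import TrainingInput

# FastFile streams objects from S3 on demand rather than copying the
# whole dataset to the instance before training starts.
train_input = TrainingInput(
    s3_data="s3://my-bucket/datasets/train",  # placeholder path
    input_mode="FastFile",
)

# huggingface_estimator constructed as in the earlier sketch
huggingface_estimator.fit({"train": train_input})
```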
Thank you.
I thought FastFile mode was different from File mode in terms of where and how the input data is stored, but the figure on the page you linked makes that clearer for me.
Hi @philschmid. Apologies if this has already been answered and I just misread it, but is it possible to use load_dataset with imagefolder from S3, just like I would locally?
## load images from s3
import boto3
from datasets import load_dataset
from sagemaker import get_execution_role

role = get_execution_role()
data_location = "s3/path/here"
dataset = load_dataset("imagefolder", data_dir=data_location)
I get the following error in SageMaker Studio despite it working locally: FileNotFoundError: The directory at "s3/path/here" doesn't contain any data file
@philschmid the PR you linked has been merged; however, as far as I can tell it does not contain support for imagefolder. This is pretty important functionality, since as it stands I have two options:
1. Download the entire dataset to SageMaker EFS, preprocess it, and save it to S3. This takes a lot of time and is inconvenient code-wise.
2. Process the data every time in the HuggingFace Estimator train.py script. This is very costly and time-consuming, since e.g. in hyperparameter optimization I would have to do it every time, in every estimator, and on a GPU instance.
Would it make sense to open a separate GitHub issue for this?
I basically want something like this, but without downloading everything from S3 manually.
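For concreteness, roughly this kind of manual mirroring (the bucket, prefix, and local directory below are placeholders) is the step I would like to avoid:

```python
import os
import boto3
from datasets import load_dataset

s3 = boto3.client("s3")
bucket, prefix, local_dir = "my-bucket", "images/train/", "/tmp/images"

# Mirror the S3 prefix to local disk, preserving the imagefolder layout
# (one subdirectory per class label).
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        if key.endswith("/"):
            continue
        dest = os.path.join(local_dir, os.path.relpath(key, prefix))
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        s3.download_file(bucket, key, dest)

dataset = load_dataset("imagefolder", data_dir=local_dir)
```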