Hello everyone, I’m exploring Hugging Face’s Inference Endpoints for a project that works with music files. My goal is to send a music file as input and have the model return a generated music file as output. I understand that feasibility depends on the specific model being used and whether it has been trained to process and generate audio.
However, I’m aware that returning music files directly from the endpoint might not be possible at the moment. As an alternative, I’m considering having the output music file stored in an AWS S3 bucket or Google Cloud Storage after inference. Has anyone implemented something similar, or could you offer guidance on how to achieve this? Are there any specific models you would recommend as examples for this task?
I agree that uploading the output music file to a cloud storage service like Amazon S3 or Google Cloud Storage and then returning the URL seems to be a more efficient and practical solution, especially for larger files.
This approach would also provide a more seamless experience for users, as they can directly download the file from the provided URL.
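To make the idea concrete, here is a minimal sketch of what that could look like as a custom `handler.py` for an Inference Endpoint. Assumptions: `boto3` is installed and AWS credentials are configured on the endpoint, the bucket name `my-music-outputs` is a placeholder, and the model call is a stand-in for whatever music-generation model you actually deploy (this is not a tested, production handler):

```python
# handler.py -- sketch of a custom handler for a Hugging Face Inference Endpoint.
# The model loading and bucket name below are placeholders, not real values.
import io
import uuid


def make_key(prefix: str = "generated", ext: str = "wav") -> str:
    """Build a unique S3 object key for one generated audio file."""
    return f"{prefix}/{uuid.uuid4()}.{ext}"


def upload_and_link(audio_bytes: bytes, bucket: str, expires: int = 3600) -> str:
    """Upload generated audio to S3 and return a time-limited download URL."""
    import boto3  # imported lazily so the helpers above work without it

    s3 = boto3.client("s3")
    key = make_key()
    s3.upload_fileobj(
        io.BytesIO(audio_bytes), bucket, key,
        ExtraArgs={"ContentType": "audio/wav"},
    )
    # A presigned URL lets the caller download the file without AWS credentials.
    return s3.generate_presigned_url(
        "get_object", Params={"Bucket": bucket, "Key": key}, ExpiresIn=expires
    )


class EndpointHandler:
    """Inference Endpoints look for a class with this name in handler.py."""

    def __init__(self, path: str = ""):
        # Load your music-generation model from `path` here (placeholder).
        self.model = None

    def __call__(self, data: dict) -> dict:
        audio_in = data["inputs"]            # raw bytes of the input music file
        audio_out = self.run_model(audio_in)
        url = upload_and_link(audio_out, bucket="my-music-outputs")  # hypothetical bucket
        return {"url": url}                  # small, JSON-serializable response

    def run_model(self, audio_in: bytes) -> bytes:
        # Placeholder: replace with real inference (e.g. a MusicGen pipeline).
        return audio_in
```

The client then receives a small JSON payload like `{"url": "..."}` and downloads the audio directly from S3, which keeps large binary payloads out of the endpoint response entirely.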