[Request] Provide better examples for each model and task existing in the library [/Request]

I don’t know if anyone from HF is monitoring this forum, but I’m writing this in the hope that something might change in the future.

At the moment the HF library has dozens of models, each of which can be used for more than one downstream task.

But what is actually lacking are concise, simple examples of how users can achieve their goals for each specific task and model, without wasting a tremendous amount of time scavenging the internet or searching previous posts on this forum and trying to connect dots that most of the time don’t exist.

Here’s an example: suppose one wants to do text summarization and compare a bunch of models, from T5, BART, GPT-2, Pegasus, Mistral, Llama, etc.

Where the heck should one turn to find examples of how exactly the data is supposed to be preprocessed for each of these models?
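To make the point concrete, here is a minimal sketch of the kind of per-model preprocessing table such documentation could spell out. The prefixes and templates below are illustrative assumptions on my part (the "summarize:" prefix for T5 and the "TL;DR:" prompt for GPT-2 are common conventions), and confirming exactly these details per model is what the docs should do:

```python
# Sketch: per-model input formatting for summarization.
# The templates below are illustrative assumptions, not verified
# recipes -- this is the detail the docs should state per model.

def preprocess_for_t5(article: str) -> str:
    # T5 checkpoints were trained with a task prefix.
    return "summarize: " + article

def preprocess_for_bart(article: str) -> str:
    # Encoder-decoder models like BART/Pegasus typically take the raw article.
    return article

def preprocess_for_gpt2(article: str) -> str:
    # Decoder-only models need a prompt template; "TL;DR:" is a
    # common GPT-2 convention, not an official recipe.
    return article + "\nTL;DR:"

PREPROCESSORS = {
    "t5": preprocess_for_t5,
    "bart": preprocess_for_bart,
    "gpt2": preprocess_for_gpt2,
}

article = "The quick brown fox jumps over the lazy dog."
for name, fn in PREPROCESSORS.items():
    print(name, "->", repr(fn(article)))
```

One table like this per task, with the templates actually verified against each model card, would save everyone the scavenger hunt.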

Also, most examples in HF tutorials don’t make appropriate use of validation metrics. As a result, a user who copies a tutorial from HF and simply adds some validation metrics runs straight into OOM errors, and then spends valuable time working around errors that would have been avoided entirely if there were appropriate documentation saying: for model X on task Y, here is how you should preprocess your dataset, and here is how to compute validation metrics without running out of memory. Same for model Z on task B.
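As a concrete illustration of where those OOM errors come from: metrics like accuracy only need the predicted token ids, but naive setups accumulate the full logits for the whole eval set. A framework-free sketch of the usual fix, reducing each batch before accumulating (the function names here are my own, though the Trainer in transformers exposes a `preprocess_logits_for_metrics` hook for exactly this purpose):

```python
# Sketch: why naive metric computation OOMs during evaluation.
# Accumulating raw logits keeps batch * seq_len * vocab_size floats
# alive; reducing each batch to predicted ids first shrinks that
# by a factor of vocab_size.

def argmax(row):
    # Index of the largest logit in one vocabulary-sized row.
    return max(range(len(row)), key=row.__getitem__)

def reduce_batch(logits):
    # logits: one row of vocab_size floats per token position.
    # Keep only the predicted ids, dropping the vocab dimension.
    return [argmax(token_logits) for token_logits in logits]

batch_logits = [
    [0.1, 0.2, 0.9, 0.0, 0.3],  # argmax -> 2
    [0.7, 0.1, 0.0, 0.1, 0.1],  # argmax -> 0
]

preds = reduce_batch(batch_logits)
print(preds)  # [2, 0]
# Accumulate `preds` (2 ints) across batches instead of
# `batch_logits` (2 * vocab_size floats).
```

A one-line note like this in each tutorial would eliminate the OOM trap before users hit it.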

tagging for visibility @sgugger