Transformers Huge Community Feedback

Last week we shared the first feedback request on :hugs:transformers. The community's involvement was amazing: close to 900 detailed feedback forms to analyze and dive into, containing more than 50 thousand words of open answers :exploding_head:

In this post, I would first like to deeply thank the community for this amazing feedback and for the carefully crafted answers people were so keen to share. It’s really humbling to read all these encouraging and more critical words, from the simple thank-yous to the detailed critiques of the library. People tried so hard to be helpful that I would have thanked you all one by one if only I had all your email addresses.

In the rest of this post, I’ll try to summarize and share the most interesting takeaways from this huge corpus.

:people_holding_hands: Let me start with our users

:hugs:transformers has three big user communities of roughly equal sizes (among the respondents):

  • Researchers (blue)
  • ML engineers (red)
  • Data scientists (green)

Comparing the answers from each community can give some hints:

Researchers:

  • The oldest and core users
  • They probably often develop or study models
  • They more often use master directly and want verbose, easy-to-customize models and examples. Some would like to be able to train models from scratch. They usually don’t want high-level training abstractions or encapsulated examples.

ML engineers:

  • They often joined a bit after the researcher community
  • They probably often push models into production applications
  • They more often use recent PyPI versions. They are interested in fast inference, fp16, ONNX, TPU support, multi-GPU, and production-ready models. Some of them like training abstractions, some don’t.

Data scientists:

  • The most recent community of users (many have been using the library for less than 3 months)
  • They probably often use models for data analytics (i.e. no strong performance requirements)
  • They are often just beginning to use transformer models. They are interested in fast prototyping tools and examples they can easily and quickly adapt to their use cases. They often don’t care much about model internals or access to training details, but they are very interested in diving into and mastering data processing.

There are also a lot of common points between these communities (they have mostly common interests), so don’t be distracted by these apparent differences: almost all of our users want recent (SOTA) models in an easy-to-use API :slight_smile:

:clock: How long have our users been using the library?

There is a significant influx of new users (green + purple are < 3 months users). One-third of the respondents have been using the library for less than 3 months!

The longest-standing users are researchers, followed by ML engineers. Data scientists are more recent users (40% of them have been using transformers for less than 3 months).

:woman_technologist: work or :man_artist: fun

Most users are using :hugs:transformers for work (80% overall).

Researchers are somewhat the most serious community (>90% use it for work) :slight_smile:

Data scientists are using it for fun more than the other communities at the moment (maybe also because they are still discovering it).

:oncoming_automobile: Which version

Many users are on one of the two latest versions (blue+red+green+purple).

Researchers use master (red) more than the other communities (maybe because they tweak the models more).

ML engineers are a bit more conservative (more of them are on 2.11.X, green).

Data scientists tend to use master (red) less than the other communities (maybe because they customize the models less).

:star: Would you recommend the library?

:female_detective: Importance of various features in the examples

User-specific interests:

Three features are rated as most essential in the examples by all user communities:

  1. Full/transparent access to data-processing
  2. Full access to training loop
  3. Simple data-downloading and processing

People care less about TPU support.

Some more community-specific interests:

  • Researchers are keener than the other communities to avoid encapsulated training
  • ML engineers are more interested in FP16, multi-GPU, and TPU support than the other communities
  • Data scientists care less about training and optimization than the other communities (they are more OK with encapsulated trainer logic) and care more about data processing

:1st_place_medal: What to prioritize

Users ask us to prioritize:

  • Adding examples for NLP tasks that are easier to adapt to real-life scenarios
  • Continuing to add SOTA models

Less interesting to most users:

  • Refactoring the code toward more modularization

:heart_eyes: What do you like the most

The most frequent reasons are:

  • Easy to use and simplicity
  • Many SOTA pretrained models
  • Community

Short summary of the top 300 strongest likes (apart from the top 3 mentioned above):

  • Pipelines <= many people like them
  • AutoModels <= many people like them as well
  • Easy to tweak: self-contained, verbose code with little modularization or code reuse
  • Good docs
  • Model hub
  • PyTorch and TF interop
  • Transparency

:thinking: What do you dislike the most

Some top dislikes are:

  • Need more examples, more real use cases, and guidance on how to load your own datasets for certain tasks; more examples of using transformers for text or token classification
  • More tutorials and more guidance for simple examples
  • Too frequent breaking changes
  • Too much modularization: model code is spread across too many files
  • Examples are too encapsulated and sometimes hard to unpack
  • Trainer
  • Weaker support for TensorFlow 2.0
  • The model hub is a bit messy; it’s hard to find the right model
  • …

:speaking_head: Open-feedback

The most noticeable one:

Thanks! <= :heart:
…

17 Likes

I am positively impressed by the number of responses you got! Looking at the “would you recommend” graph, most people seem to have taken the questionnaire seriously, too. (Not too many trolls.)

Some things that I am surprised by:

  • I had expected more data scientists and ML engineers and fewer researchers
  • I am surprised that there are so many long-time users, because it often feels like the library is continuously growing, with many new users but less interaction from older users (perhaps the forums will change that!)
  • 67, 9, 123 as a an answer to a “work or fun” question :sweat_smile:
  • Big one: it seems that there is quite a preference for DataParallel over DistributedDataParallel. I wonder why that is. What are the use cases where you can use DP but not DDP? In all other cases, DDP should perform better in terms of speed.
  • I do understand the comments about modularization, that the code base seems to be spread over too many files. On the one hand I think this can be partially solved by using a folder structure for the models, tokenizers, and utils. A directory structure already makes the code base more readable. On the other hand, though, since there is a lot of inheritance (e.g. PretrainedModel, BertModel, RobertaModel), you cannot get around separate building blocks. It is necessary to allow for custom models and rapid research where other building blocks can be reused.
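For readers wondering about the DP vs. DDP point above, here is a minimal sketch of how differently the two wrappers are applied (assuming PyTorch is installed; the toy model and tensor shapes are made up for illustration). `DataParallel` is a one-line, single-process drop-in, which may explain its popularity, while `DistributedDataParallel` needs one process per GPU and a process group, usually set up by a launcher:

```python
import torch
import torch.nn as nn

# Toy model, purely for illustration.
model = nn.Linear(8, 2)
if torch.cuda.is_available():
    model = model.cuda()

# DataParallel: single process, one-line drop-in. With no visible GPUs it
# simply falls through to the wrapped module, so this also runs on CPU.
dp_model = nn.DataParallel(model)
out = dp_model(torch.randn(4, 8))  # inputs are scattered across GPUs if any
print(out.shape)  # (4, 2)

# DistributedDataParallel: one process per GPU, needs a process group first,
# and is typically launched via torchrun / torch.distributed.launch, e.g.:
#
#   torch.distributed.init_process_group(backend="nccl")
#   ddp_model = nn.parallel.DistributedDataParallel(
#       model.cuda(local_rank), device_ids=[local_rank]
#   )
#
# DDP avoids the GIL contention and per-batch model replication that slow
# DataParallel down, which is why it is usually faster when it can be used.
```

The extra launch ceremony of DDP may be exactly why many respondents stick with DP.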

The most important thing that I see in all of this, though, is the highlighted keyword in the likes: community. I definitely agree with it. The catchphrase of HF is to democratize NLP, and I strongly believe that this is exactly what you guys are doing. Let’s have the forum be another step in that direction. A big congrats on all the work that you do!

4 Likes

One of the questions for me coming out of this is how useful our examples are to people, especially in the data scientist / student crowd. Does the Big table of tasks hit the spot for people who are looking for beginner-oriented examples of how to use transformers on various tasks with different datasets, or do we need to build some more walkthrough type examples in the docs directly, e.g. in the Keras style?

1 Like

I would love to see some notebooks for token classification. I was recently going through the Trainer run_ner.py example and was not having a great time. The preprocessing/data formatting is not very clear.
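To illustrate the preprocessing step that usually causes the confusion: token-classification examples have to align word-level labels with subword tokens. A minimal sketch of that alignment (the `word_ids` mapping and labels here are made up for illustration; a real fast tokenizer would produce `word_ids` for you), labeling only the first subword of each word and masking the rest with -100 so the loss ignores them:

```python
# Hypothetical tokenizer output for a two-word sentence: each entry of
# word_ids maps a subword token back to its source word index
# (None marks special tokens like [CLS]/[SEP]).
word_ids = [None, 0, 0, 0, 1, None]
word_labels = ["B-ORG", "O"]          # one label per *word*
label2id = {"O": 0, "B-ORG": 1}

aligned = []
previous = None
for wid in word_ids:
    if wid is None:
        aligned.append(-100)          # special token: ignored by the loss
    elif wid != previous:
        aligned.append(label2id[word_labels[wid]])  # first subword gets the label
    else:
        aligned.append(-100)          # later subwords are masked out
    previous = wid

print(aligned)  # -> [-100, 1, -100, -100, 0, -100]
```

Walking through this mapping explicitly is, I think, what the run_ner.py docs are missing.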

I had a really great time going through some of the other notebooks that use transformers on https://notebooks.quantumstat.com/, however they are often out of date. If you could keep an up-to-date version that touches each portion of what the pipeline handles for you, I would have a much easier time recommending transformers to colleagues who maybe aren’t able to find outside resources to learn from.

2 Likes

+1 I think we need to document what files/format users need to supply in the README.md for examples. I tried for seq2seq: https://github.com/huggingface/transformers/blob/353b8f1e7a7361c0afd9e391381bc226b4a5ca8f/examples/seq2seq/README.md#L9 but I’d definitely appreciate help.
I think in general we don’t experiment with or document using private data, because we full-time open-source developers/researchers mostly use benchmark datasets. We should do better.

1 Like

my friend is having some problems using the transformers library from Hugging Face, so please can you help create a better library by linking some emotional qualities which correspond with the sentences’ meaning:

(-)ve Qualities are:
Anger, Attachment, Greed, Ego, Lust.

(+)ve Qualities are:
Calm, Love, Kindness, Humble, Healthy.