First of all, thank you for participating in this event. It has been a lot of fun and quite exciting to see so much engagement and so many incredible projects. Today we’re officially wrapping up the event and would like to make some final announcements.
1. Winning Projects and Mentions
We want to congratulate the top 3 projects selected by the jury:
- First Place: DALL-E mini
- Second Place: CLIP+NeRF: Few-shot Learning, Putting NeRF on a Diet
- Third Place: Fine-tune CLIP on satellite images+captions
The jury was quite impressed with the projects, so there are a couple of additional special nominees they would like to recognize as well:
- BERTIN: PreTrain RoBERTa-large from scratch in Spanish
- CLIP like contrastive vision-language models for Italian with pre-trained text and vision models
- Generate GIF reply to English text with VQGAN + CLIP
- Sentence Embeddings
And finally, the jury gave an honorary mention to the Chef Transformer (Recipe Generation Model). You can find all comments from the jury in this document and all 15 top projects here. We’ll follow up with the teams about next steps.
2. Event Impact
This has been the largest Hugging Face event so far, and we’re extremely excited by the results. Almost 800 members joined the Slack, people were very active on Discord as well, and the event produced almost 100 projects, 170 models, and 36 Spaces! That is super impressive given the timeframe of the event!
Many projects have a practical impact and great potential, so we want to encourage you to keep contributing to the ecosystem and developing your projects if it makes sense. For example:
- BERTIN achieves downstream metrics similar to those of a model trained on a supercomputer.
- Italian CLIP shows how to leverage pre-trained models and open-source tools to create models for other languages.
- Multiple language models were trained for low-resource languages such as Swahili, Marathi, Polish, and Bangla.
- CLIP was applied in interesting ways to other domains, such as medical and satellite imagery.
- Multiple SOTA sentence-embedding models.
3. Spaces
Starting today, all flax-community Spaces are publicly viewable. You can now share the results of your projects with the community and your friends.
If people ask you how to build their own Spaces, let them know that anyone can request access through Join the beta for Spaces.
4. Closing Remarks
We want to thank the JAX/Flax and Cloud teams (Skye Wanderman-Milne, Marc van Zee, Avital Oliver, Jonathan Heek, James Bradbury, and Michael Green) for making this event possible. The resources, support, and collaboration provided by them were invaluable.
We also want to thank the members of the jury for going through the projects and contributing great feedback. Thank you, Niki Parmar, Ashish Vaswani, Ross Wightman, and Thomas Wolf.
Also, thanks to all the speakers, including Pablo Castro, Sabrina J. Mielke, Mostafa Dehghani, Rohan Anil, Ben Wang, Lucas Beyer, Iurii Kemaev, Soňa Mokrá, Junhyuk Oh, Siddhartha Kamalakara, Joanna Yoo, and João G M Araújo. You can find the videos from day 1, day 2, and day 3 on YouTube.
Thank you again for participating and making this an awesome event. All the jury members and everyone at Hugging Face are super impressed with what you built given the tight timeline and a brand-new framework. Thank you also for being patient with us during the initial issues with the TPUs and training scripts.