Hello everyone,
I recently released Gnarly, an experimental open-source project that combines cellular automata, Deep Dream effects, continuous zoom, and object morphing. The project makes extensive use of Hugging Face tools, particularly for downloading pretrained models and running a Stable Diffusion pipeline.
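For anyone curious how a continuous-zoom effect can be driven frame by frame, here is a minimal sketch of one common approach (center-crop then resize back up, applied once per frame). This is an assumption about the technique, not Gnarly's actual implementation; the `zoom_step` name and the zoom factor are illustrative.

```python
import torch
import torch.nn.functional as F

def zoom_step(frame: torch.Tensor, factor: float = 1.02) -> torch.Tensor:
    """One step of a continuous zoom: crop the center by 1/factor,
    then resize back to the original resolution.

    frame: (N, C, H, W) tensor. Applying this repeatedly produces
    a smooth zoom-in animation.
    """
    _, _, h, w = frame.shape
    ch, cw = int(h / factor), int(w / factor)
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = frame[:, :, top:top + ch, left:left + cw]
    # Bilinear upsampling keeps the motion smooth between frames
    return F.interpolate(crop, size=(h, w), mode="bilinear", align_corners=False)

frame = torch.rand(1, 3, 256, 256)
zoomed = zoom_step(frame)  # same shape, slightly magnified content
```

In a generative loop, each zoomed frame would typically be fed back into the image pipeline (e.g. as the init image of a diffusion step) to keep new detail appearing as the camera pushes in.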
Key aspects include:
• A deep-dream processing pipeline powered by an InceptionV3 model (via torchvision)
• Object detection using a blend of YOLOv5 and EfficientDet models
• Real-time interactive visualization using Pygame
I’d love to discuss best practices for integrating Hugging Face models in a real-time, creative context. Any suggestions or experiences you can share would be greatly appreciated!