Dynamic Flow Networks (DFN) — A Continuous Flow-Inspired Architecture for Next-Gen AI

Hey everyone!
I’ve been working on a new deep learning architecture called Dynamic Flow Networks (DFN) — a model that reimagines how information moves inside neural networks.

Instead of treating data as discrete tokens like Transformers do, DFN models information as a continuous field — where meaning flows dynamically, interacts locally, and evolves in a smooth, physics-inspired space.


:puzzle_piece: Core ideas

  • No tokens, no fixed attention maps — just continuous entities sampled from information fields.

  • Dynamic Flow Attention (DFA): information propagates along learned flow directions rather than through full attention-matrix multiplications (toy sketch below).

  • FlowNorm & Persistent Field Memory for stability and long-range context.

  • Sub-quadratic complexity and fully multimodal (text, image, audio…).

:light_bulb: Think of it as taking the spirit of Transformers… and letting it flow.
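
To make that a bit more concrete, here's a deliberately simplified toy sketch of the flow idea: each entity carries a continuous position plus a feature vector, a learned direction field advects the positions, and features only interact with nearby entities (a fixed window here), so there is no n×n attention map. Everything in this snippet (the `FlowStep` module, the windowed pooling, the shapes) is an illustration of the cost profile only, not the actual DFN implementation — see the repo for that.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FlowStep(nn.Module):
    """Toy flow-style update (illustration only, not the actual DFN code).

    Each entity carries a continuous 1-D position and a feature vector.
    A learned direction field advects the positions, and features are
    mixed only with nearby entities via a sliding window, so the cost
    stays linear in sequence length instead of quadratic.
    """

    def __init__(self, dim: int, window: int = 8):
        super().__init__()
        self.direction = nn.Linear(dim, 1)  # learned flow direction per entity
        self.mix = nn.Linear(dim, dim)      # local feature mixing
        self.window = window

    def forward(self, pos: torch.Tensor, feat: torch.Tensor):
        # pos: (batch, n, 1) continuous positions, feat: (batch, n, dim) features
        pos = pos + torch.tanh(self.direction(feat))  # advect positions along the field
        # local interaction: windowed average instead of full pairwise attention
        pooled = F.avg_pool1d(
            feat.transpose(1, 2),
            kernel_size=self.window, stride=1, padding=self.window // 2,
        ).transpose(1, 2)[:, : feat.size(1)]
        feat = feat + self.mix(pooled)                 # O(n) update, no n x n map
        return pos, feat

# usage: 1,024 "entities" sampled from an information field
pos = torch.rand(2, 1024, 1)
feat = torch.randn(2, 1024, 64)
pos, feat = FlowStep(dim=64)(pos, feat)
```

The point of the toy is just the shape of the computation: per-entity updates plus local mixing, which is what keeps things sub-quadratic.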


:high_voltage: Performance Insights
I ran a scalability benchmark to measure DFN’s efficiency across different sequence lengths.
At smaller sequence lengths, inference remains very fast and memory-efficient. Once the model pushes past the available VRAM, there is a clear transition as the driver starts spilling into shared system memory (host RAM), which sharply increases latency, the typical sign of GPU memory overflow.

Example (NVIDIA GPU, PyTorch):

| Sequence length | Inference time | Peak VRAM |
|----------------:|---------------:|----------:|
|              32 |        0.034 s |     59 MB |
|             512 |        0.025 s |   59.9 MB |
|         204,800 |        0.296 s |  458.9 MB |
|         512,000 |        13.66 s | 1058.9 MB |
|         768,000 |        21.35 s | 1558.9 MB |

Once VRAM usage reaches around 1.5 GB on this setup, the driver starts paging into slower shared memory. It's a good reminder that continuous-field inference scales well, but very long contexts still require careful memory management.
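
For anyone who wants to run a similar sweep, this is roughly how I'd measure it in PyTorch. It's a minimal sketch, not the exact benchmark script: the model interface (a single `(1, n, dim)` input tensor), the lengths, and the feature size are placeholders.

```python
import time
import torch

@torch.no_grad()
def latency_vram_sweep(model, lengths, dim=64, device="cuda"):
    """Measure inference time and peak allocator VRAM per sequence length."""
    model = model.to(device).eval()
    for n in lengths:
        x = torch.randn(1, n, dim, device=device)
        torch.cuda.reset_peak_memory_stats(device)
        torch.cuda.synchronize(device)
        start = time.perf_counter()
        model(x)
        torch.cuda.synchronize(device)
        elapsed = time.perf_counter() - start
        peak_mb = torch.cuda.max_memory_allocated(device) / 1e6
        print(f"Length {n:>7}: {elapsed:.3f}s, VRAM {peak_mb:.1f}MB")

# latency_vram_sweep(my_dfn_model, [32, 512, 204800, 512000, 768000])
```

One caveat: torch.cuda.max_memory_allocated only tracks PyTorch's own allocator, so it won't directly show the point where the driver spills into shared system memory; watching nvidia-smi or the OS GPU monitor alongside helps spot that transition.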


:laptop: Code + Paper: https://github.com/janxhg/Dynamic-Flow-Networks

I’d love to hear your thoughts, questions, or crazy ideas for extensions — especially around multimodal flows or efficient training!

“From discrete attention to a seamless flow of information. DFN is not just a model, but a paradigm shift for deep learning.”
“Transformers use tokens. DFN lets meaning flow — a new physics-inspired deep learning architecture.”

:red_question_mark: How would you approach training or compressing continuous-field models like this?

