Hey everyone — I just finished building Meaning Machine, a visual tool that shows what happens when you feed a sentence into a language model. It walks through:
- Tokenization with BERT (via Hugging Face)
- POS tagging and dependency parsing with spaCy
- Embedding space visualized with PCA + Plotly
- SVO extraction for structure and meaning (rough sketches of each step below)
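
For anyone curious about the plumbing, here are minimal sketches of the steps above. These are simplified and assume the `bert-base-uncased` checkpoint and a toy sentence; the app's actual code may differ. First, the tokenization step:

```python
# Minimal sketch of the tokenization step.
# Assumes bert-base-uncased; the app may use a different checkpoint.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
sentence = "The cat chased the mouse."

# WordPiece sub-word tokens and their vocabulary IDs
tokens = tokenizer.tokenize(sentence)
ids = tokenizer.convert_tokens_to_ids(tokens)
print(list(zip(tokens, ids)))
```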
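Next, POS tagging, dependency parsing, and a naive SVO pass on top of the parse. This assumes the `en_core_web_sm` model and very simple extraction rules (only `nsubj`/`dobj` children of a verb); the app's rules may be more involved:

```python
# Sketch of spaCy parsing plus naive subject-verb-object extraction.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("The cat chased the mouse.")

# POS tag, dependency label, and head for each token
for token in doc:
    print(token.text, token.pos_, token.dep_, token.head.text)

# Naive SVO triples read off the dependency tree
for token in doc:
    if token.pos_ == "VERB":
        subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
        objects = [c for c in token.children if c.dep_ in ("dobj", "obj")]
        for s in subjects:
            for o in objects:
                print((s.text, token.lemma_, o.text))
```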
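And the embedding-space view: per-token hidden states from BERT projected to 2D with PCA and plotted with Plotly. Again just a sketch under the same checkpoint assumption (in the app this would render via Streamlit rather than `fig.show()`):

```python
# Sketch of the embedding visualization: token embeddings -> PCA -> scatter plot.
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.decomposition import PCA
import plotly.express as px

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

sentence = "The cat chased the mouse."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state[0]  # (num_tokens, 768)

# Project the 768-dim token vectors down to 2D for plotting
coords = PCA(n_components=2).fit_transform(hidden.numpy())
labels = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0].tolist())

fig = px.scatter(x=coords[:, 0], y=coords[:, 1], text=labels,
                 labels={"x": "PC1", "y": "PC2"})
fig.show()
```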
It’s meant to show how models “understand” text statistically — not with real-world grounding, but through structure, frequency, and co-occurrence.
Would love feedback on the concept, UX, or how useful this might be for teaching/explainability.
Live: https://meaning-machine.streamlit.app
Context: *Parsing Perception* by Joshua Hathcock
GitHub: jdspiral/tokenizer