Living Crystal Lattice: Can Neural Networks Learn New Knowledge Without Changing Their Weights?
Emylton Leunufna
KLINEXA Research, Langgur, Maluku Tenggara, Indonesia
TL;DR
This is a proof-of-concept at small scale (540 samples, 11 crystals, 551-token vocabulary). Conclusions may not generalize to production-scale systems. This is early-stage exploratory research, not a mature result.
We built a neural architecture where model weights are trained once, then frozen permanently, while the model continues to absorb knowledge through a separate runtime state.
After 5,000+ knowledge absorptions:
- Weights remain bitwise identical (SHA-256 verified across 81 tensors)
- System continues to acquire usable knowledge
We ran 27 experiments across 34 iterative runs, including failures.
Observed strengths
- Weight immutability (cryptographically verified)
- Cross-domain transfer: 56% vs 33% baseline
- Cross-crystal reasoning: 23.3% vs 0% for a simple RAG baseline
- Hierarchical scaling: 9% vs 0% flat (55 crystals)
- Collision-aware sleep: 377 → 0 collisions
Observed weaknesses
- End-to-end QA: only +1.2%
- Dense retrieval dominates factual lookup (48% vs 10%)
- Bond contribution still small (+3.3%)
1. Problem
Modern LLMs assume:
More knowledge = more parameters
Options today:
- Fine-tuning → changes weights
- Continual learning → adds capacity
- RAG → external memory
- Model scaling → more parameters
None cleanly separate:
- how to process knowledge (weights)
- what knowledge exists (state)
Human brains suggest this separation may be possible.
Question:
Can we build a system where the weights stay fixed while knowledge grows independently?
2. Architecture: Living Crystal Lattice
Core Idea
Knowledge is stored as multi-faceted "crystals" connected in a lattice.
Each crystal:
- F facets
- A D-dimensional representation
- Learned inter-crystal bonds
Separation
- Weights (θ): fixed, define computation
- State (S): dynamic, stores knowledge
State includes:
- Knowledge buffers
- Facet rotations
- Bond strengths
- Emergent hierarchy (facts → patterns → insights)
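To make the split concrete, here is a minimal PyTorch sketch. The names (`CrystalLattice`, `LatticeState`) and shapes are hypothetical; only the frozen-weights / mutable-state separation is taken from the text above.

```python
import torch
from dataclasses import dataclass

@dataclass
class LatticeState:
    """Dynamic state S: all knowledge lives here, never in the weights."""
    buffers: list            # per-crystal knowledge buffers, appended at runtime
    rotations: torch.Tensor  # facet orientations, shape (crystals, F, D)
    bonds: torch.Tensor      # inter-crystal bond strengths, shape (crystals, crystals)
    hierarchy: dict          # emergent levels: facts -> patterns -> insights

class CrystalLattice(torch.nn.Module):
    """Fixed weights theta: trained once, then frozen; they define computation only."""
    def __init__(self, dim=256):
        super().__init__()
        self.encoder = torch.nn.Linear(dim, dim)  # stand-in for the fixed encoder
        for p in self.parameters():
            p.requires_grad_(False)  # frozen permanently after initial training
```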
One-Shot Absorption
No gradients. No backprop.
- Encode input (fixed encoder)
- Select crystal via resonance
- Append to buffer
- Adjust facet orientation
- Update bonds
- Update hierarchy
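A sketch of one absorption step, building on the hypothetical state sketch above. The resonance scoring and the update rates (0.1, 0.01) are illustrative placeholders, not the paper's actual mechanism.

```python
import torch

@torch.no_grad()  # no gradients, no backprop: absorption mutates only the state S
def absorb(model, state, x):
    z = model.encoder(x)                                    # encode with fixed weights
    scores = torch.einsum('d,cfd->cf', z, state.rotations)  # resonance per crystal/facet
    best = scores.max(dim=1).values                         # best facet score per crystal
    c = int(best.argmax())                                  # most resonant crystal
    f = int(scores[c].argmax())                             # most resonant facet
    state.buffers[c].append(z)                              # append knowledge to the buffer
    state.rotations[c, f] += 0.1 * (z - state.rotations[c, f])  # rotate facet toward input
    co_active = best > scores.mean()                        # crystals that also resonated
    state.bonds[c, co_active] += 0.01                       # strengthen bonds to them
    # hierarchy update (facts -> patterns -> insights) omitted for brevity
    return c
```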
Models
| Model | Params | Crystal lattice | Role |
|---|---|---|---|
| Daud | 21.4M | Yes | Experimental |
| Goliat | 206.1M | No | Baseline |
Dataset:
- 540 training samples
- 100 evaluation questions
- Domain: healthcare (Maluku Tenggara)
3. Experiments
A. Scaling vs Transformer
| Metric | Daud | Goliat |
|---|---|---|
| Overall | 24.2% | 25.5% |
| Recall | 32.3% | 39.0% |
| Reasoning | 16.0% | 12.0% |
Takeaway:
Daud recalls less but reasons better (16.0% vs 12.0%) with ~10× fewer parameters.
B. Zero-Shot Reasoning
| Metric | Daud | Goliat |
|---|---|---|
| Cross-crystal | 10% | 5% |
C. Cross-Domain Transfer (Strongest Result)
Train: disease only
Test: personnel + facilities
| Metric | Daud | Goliat |
|---|---|---|
| Score | 56% | 33.3% |
| Reasoning | 61.1% | 11.1% |
Interpretation:
Possible cross-domain knowledge transfer via crystal bonds.
Caution: small evaluation set; needs replication.
D. Weight Immutability
- 9 validation rounds
- 81 tensors hashed
- SHA-256 identical
Result:
Weights unchanged. Knowledge increased (0% → 8.3%).
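The immutability check itself is easy to reproduce. A sketch of hashing every weight tensor (the nine-round validation harness is omitted):

```python
import hashlib
import torch

def hash_weights(model: torch.nn.Module) -> dict:
    """Return a SHA-256 digest for every weight tensor, keyed by tensor name."""
    return {
        name: hashlib.sha256(t.detach().cpu().contiguous().numpy().tobytes()).hexdigest()
        for name, t in model.state_dict().items()
    }

# before = hash_weights(model)
# ...run thousands of absorptions...
# assert hash_weights(model) == before  # bitwise-identical weights
```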
4. Stress Testing
Works
- Immutable weights
- Distributed reasoning
- Hierarchical routing
- Collision resolution
- Stable drift (cosine ~0.88)
- No collapse up to 5K items
Note: the RAG baseline is simple (no reranker, no optimized chunking).
Fails
- QA improvement minimal
- Retrieval baseline much stronger
- Bond effect small
- Some memory decay (2% drop in R@5)
Bridge Mechanisms (Key Iterations)
- Threshold activation → 0% to 23.3%
- Hierarchical routing → scaling restored
- Sleep → collision elimination (sketched below)
- Distillation → precision increases with scale
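The sleep mechanism is not detailed here, so purely as an illustration of collision-aware consolidation: an offline pass could deduplicate near-identical buffer entries so that later absorptions stop colliding.

```python
import torch

def sleep(state, threshold=0.95):
    """Illustrative consolidation: drop buffer entries that nearly duplicate
    an already-kept entry. Assumed mechanism, not the paper's actual rule."""
    for buf in state.buffers:
        kept = []
        for z in buf:
            if all(torch.cosine_similarity(z, k, dim=0) < threshold for k in kept):
                kept.append(z)  # keep only sufficiently distinct entries
        buf[:] = kept
```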
Lesson:
Fixes happen at interaction points, not in isolated components.
5. Mathematical Checks
| Result | Status |
|---|---|
| Non-saturation | PASS |
| Identity preservation | PARTIAL |
| No collapse (≤5K) | PASS |
Overall score: 8 PASS / 2 PARTIAL (the table shows three of the ten checks)
6. Interpretation
What this IS
- A hybrid architecture with stateful knowledge inside the forward pass
- Enables simultaneous multi-domain activation
What this is NOT
- Not infinite memory
- Not better than retrieval for lookup
- Not production-ready
- Not a deeper reasoning engine
Core Contributions
- Cross-domain transfer (56%)
- Weight immutability (cryptographic proof)
7. Biological Inspiration (Loose)
Used as intuition only:
- Fixed neurons → fixed weights
- Synaptic plasticity → state updates
- Sleep → consolidation
Not claimed as biological equivalence.
8. Limitations
- Small dataset
- Custom benchmark (bias risk)
- Weak QA performance
- Weak retrieval performance
- Small bond effect
9. Future Work
- Larger-scale models
- Stronger baselines (BM25 + reranker)
- QA-aligned training
- Multimodal inputs
- Formal capacity bounds
10. Reproducibility
- PyTorch implementation
- 11 crystals × 8 facets × 256 dim
- 12 iterative mechanisms
- SHA-256 validation
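Wiring the reported configuration into the hypothetical sketches above:

```python
import torch

model = CrystalLattice(dim=256)                   # fixed weights (frozen)
state = LatticeState(
    buffers=[[] for _ in range(11)],              # 11 crystals
    rotations=torch.randn(11, 8, 256),            # 8 facets x 256 dims per crystal
    bonds=torch.zeros(11, 11),
    hierarchy={"facts": [], "patterns": [], "insights": []},
)
```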
Discussion
- Is weight-state separation meaningfully different from RAG?
- Are biological analogies useful or misleading?
- Does this scale beyond small models?
- How to make bonds actually matter?
- Should this compete with retrieval at all?
Part of KLINEXA (health AI for Indonesia).
Tags: neural-architecture, knowledge-state, weight-immutability, parameter-efficiency