# SpiralTorch — Rust-first ML stack that trains in Z-space
**TL;DR:** Pure Rust training framework with WGPU/WebGPU/MPS/CUDA/HIP backends.
No NumPy/Torch shims. Roundtable planner (A/B/C) self-rewrites SpiralK heuristics.
- Model card: https://huggingface.co/RyoSpiralArchitect/SpiralTorch
- Repo: https://github.com/RyoSpiralArchitect/SpiralTorch
## What it is
- **Rust-first training**: `st-nn` modules (Linear/Conv1d/WaveRnn/Relu/Sequential), datasets, trainer (see the sketch after this list).
- **Z-space hypergrad**: tensors can absorb text/complex waves; gradients flow on an open topos.
- **Unified planner**: SpiralK DSL (hard), SoftLogic (A), Tuner table (C); Wilson-backed consensus.
- **WGPU/WASM**: same heuristics in browser and native (zero shims, zero tracebacks).
- **Python wheel**: thin veneer over the same Rust logic.
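A minimal sketch of composing `st-nn`-style modules through the Python wheel. The binding path `spiraltorch.nn`, the constructor signatures, and the call convention below are assumptions that mirror the Rust type names listed above, not confirmed API; `Tensor` follows the quickstart in the next section.
```python
# Hedged sketch: st-nn style module composition via the Python wheel.
# `spiraltorch.nn` and the constructor signatures are assumed to mirror the
# Rust `st-nn` types (Linear/Relu/Sequential); check the repo for the real API.
from spiraltorch import Tensor, nn  # `nn` binding path is an assumption

model = nn.Sequential(
    nn.Linear(4, 8),  # 4 input features -> 8 hidden units (signature assumed)
    nn.Relu(),
    nn.Linear(8, 2),  # 8 hidden units -> 2 outputs
)

x = Tensor(1, 4, [0.1, -0.2, 0.3, -0.4])  # 1x4 input, as in the quickstart
y = model(x)  # forward pass; call convention assumed
```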
## Hello SpiralSession (quickstart)
```python
from spiraltorch import SpiralSession, Tensor
# hello session: barycenter + hypergrad alignment + one training epoch
session = SpiralSession(device="auto", curvature=-0.95)
print(session) # SpiralSession(device=wgpu, curvature=-0.95, ...)
input = Tensor(1, 4, [0.1, -0.2, 0.3, -0.4])
target = Tensor(1, 2, [0.0, 1.0])
stats = session.train_epoch(input, target)
print(f"loss={stats.average_loss:.6f}, steps={stats.steps}")
## Benchmarks (forward)
On my local M-series machine, CPU vs WGPU (WebGPU path):
| Input size | CPU (ms) | WGPU (ms) | Speedup |
|-----------:|---------:|----------:|--------:|
| 128        | 0.46     | 0.14      | ×3.3    |
| 256        | 0.73     | 0.22      | ×3.3    |
| 512        | 1.28     | 0.38      | ×3.4    |
| 1024       | 2.45     | 0.74      | ×3.3    |
(We’d love external numbers on 4090/CUDA, RDNA/ROCm, and iGPU via WebGPU.)
## Why Rust / WebGPU?
- Single source of truth: planners, ops, losses — all in Rust.
- Native WGPU/WASM means same heuristics in browser and native.
- No C++/Python/JS split, no glue layers, no tracebacks.
## WASM demo (browser)
```bash
cd crates/st-tensor
wasm-pack build --target web --release
python3 -m http.server 8080
# open http://localhost:8080/wasm_bench/index.html
```
You’ll see a live Z-space spiral evolving with WebGPU.
## Call for feedback
- API surface (Rust & Python), especially `SpiralSession` ergonomics
- Planner heuristics & SpiralK snippets you’d want to override
- Perf reports on MPS/CUDA/HIP/WGPU (browser & native)
**License:** AGPL-3.0-or-later
**Contact:** kishkavsesvit@icloud.com (research & integration)