SpiralTorch is a Rust-first ML stack that trains in Z-space (WebGPU/WASM/MPS/CUDA). It ships a tokenizer-free pre-embedding path and a Canvas Transformer projector that can feed HF Transformers via `inputs_embeds`.

Model card: RyoSpiralArchitect/SpiralTorch · Hugging Face

### Why this might be useful

- Tokenizer-free Z-space signals (text or canvas) → projected to model hidden size

- Same planner/heuristics in browser (WebGPU) and native (no NumPy/Torch shims)

- Works as a front-end to `transformers` (keep your decoder/head intact)

### Minimal bridge (Python)

```python
import spiraltorch as st
from spiraltorch import LanguageWaveEncoder, Tensor
from transformers import AutoModel
import torch
import numpy as np

# 1) Z-space encoding (tokenizer-free)
enc = LanguageWaveEncoder(-0.95, 0.6)
z = enc.encode_z_space("hello transformers")  # shape: (1, D)

# 2) Map to the model hidden size
hidden = 768
proj = st.nn.Linear(z.shape()[1], hidden)
emb_st = proj.forward(z)  # SpiralTorch Tensor, shape (1, hidden)
emb = torch.tensor(np.array(emb_st.tolist(), dtype=np.float32)).unsqueeze(1)  # (1, 1, hidden)

# 3) Feed the HF model via inputs_embeds
model = AutoModel.from_pretrained("bert-base-uncased")
out = model(inputs_embeds=emb, output_hidden_states=True)
print(out.last_hidden_state.shape)  # (1, 1, 768)
```
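
Because the front-end only produces embeddings, the decoder/LM head on the `transformers` side stays untouched; recent `transformers` versions also accept `inputs_embeds` in `generate()` for decoder-only models. A minimal sketch, reusing `emb` from the bridge above and using GPT-2 as a stand-in (the model choice and generation settings are illustrative):

```python
# Sketch: drive generation from the same pre-embeddings; gpt2 is a stand-in (hidden size 768).
from transformers import AutoModelForCausalLM, AutoTokenizer

lm = AutoModelForCausalLM.from_pretrained("gpt2")
tok = AutoTokenizer.from_pretrained("gpt2")  # only used to decode the generated ids

gen_ids = lm.generate(inputs_embeds=emb, max_new_tokens=16, do_sample=False)
print(tok.decode(gen_ids[0], skip_special_tokens=True))
```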

### What I'd love feedback on

- `inputs_embeds` ergonomics for tokenizer-free front-ends

- Perf on CUDA/MPS vs WebGPU pre-embeds

- API shape for a clean `project(text|canvas) → inputs_embeds` (rough sketch below)
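On the last point, one possible shape is a single `project()` entry point that accepts either text or a canvas and returns a tensor ready for `inputs_embeds`. Purely illustrative: `project`, the `canvas=` keyword, and `encode_canvas_z_space` below are hypothetical names, not the current SpiralTorch API.

```python
# Hypothetical API sketch -- project(), canvas=, and encode_canvas_z_space() are
# placeholder names, not the current SpiralTorch API.
import numpy as np
import torch
import spiraltorch as st
from spiraltorch import LanguageWaveEncoder

def project(text=None, canvas=None, hidden=768):
    """Map text or a canvas color field to a (1, 1, hidden) tensor for inputs_embeds."""
    enc = LanguageWaveEncoder(-0.95, 0.6)
    if text is not None:
        z = enc.encode_z_space(text)           # (1, D) Z-space row
    elif canvas is not None:
        z = enc.encode_canvas_z_space(canvas)  # placeholder for the Canvas->Z-space path
    else:
        raise ValueError("pass either text or canvas")
    lin = st.nn.Linear(z.shape()[1], hidden)
    out = lin.forward(z)
    return torch.tensor(np.array(out.tolist(), dtype=np.float32)).unsqueeze(1)
```

Usage would then collapse the whole bridge to `emb = project(text="hello transformers")` followed by `model(inputs_embeds=emb)`.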


Added a Canvas→Z-space projector: WebGPU canvas → color vector field → Z-space tensor → inputs_embeds.

This runs in browser (WASM/WebGPU) and native with the same planner table.
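
For the native side, a rough stand-in of that pipeline in plain numpy/torch: the color-field extraction, pooling, and `torch.nn.Linear` projection below are illustrative placeholders, not the actual Canvas→Z-space projector.

```python
# Illustrative native-side sketch of canvas -> color field -> projected inputs_embeds.
import numpy as np
import torch
from transformers import AutoModel

canvas = np.random.rand(64, 64, 3).astype(np.float32)  # stand-in for a WebGPU canvas readback (H, W, RGB)
field = canvas.reshape(-1, 3)                           # per-pixel color vector field
z = torch.from_numpy(field).mean(dim=0, keepdim=True)   # crude pooling to a single (1, 3) row

proj = torch.nn.Linear(z.shape[1], 768)                 # map to the model hidden size
emb = proj(z).unsqueeze(1)                              # (1, 1, 768)

model = AutoModel.from_pretrained("bert-base-uncased")
out = model(inputs_embeds=emb, output_hidden_states=True)
print(out.last_hidden_state.shape)
```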
