# RyoSpiralArchitect/SpiralTorch
### Why this might be useful
- Tokenizer-free Z-space signals (text or canvas) → projected to model hidden size
- Same planner/heuristics in browser (WebGPU) and native (no NumPy/Torch shims)
- Works as a front-end to `transformers` (keep your decoder/head intact)
### Minimal bridge (Python)
```python
import numpy as np
import torch
from transformers import AutoModel

import spiraltorch as st
from spiraltorch import LanguageWaveEncoder
# 1) Z-space encoding (tokenizer-free)
enc = LanguageWaveEncoder(-0.95, 0.6)
z = enc.encode_z_space("hello transformers")  # shape: (1, D)
# 2) Map to model hidden size
hidden = 768  # must match the HF model's hidden size (bert-base-uncased → 768)
proj = st.nn.Linear(z.shape()[1], hidden)
emb_st = proj.forward(z) # SpiralTorch Tensor (1, hidden)
emb = torch.tensor(np.array(emb_st.tolist(), dtype=np.float32)).unsqueeze(1) # (1, 1, hidden)
# 3) Feed HF model via inputs_embeds
model = AutoModel.from_pretrained("bert-base-uncased")
out = model(inputs_embeds=emb, output_hidden_states=True)
print(out.last_hidden_state.shape)  # (1, 1, 768)
```
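The snippet above feeds a single pseudo-token. A natural extension is batching several Z-space encodings into one `inputs_embeds` call with an explicit `attention_mask`. A minimal sketch, reusing `enc`, `proj`, and `model` from above and assuming `encode_z_space` returns a `(1, D)` tensor per text (as in the bridge code):

```python
texts = ["hello transformers", "tokenizer-free inputs"]

# Encode each text to a (1, D) Z-space vector, project to hidden size,
# and convert to a plain float32 row of shape (hidden,)
rows = []
for t in texts:
    z_t = enc.encode_z_space(t)   # (1, D)
    e_t = proj.forward(z_t)       # (1, hidden)
    rows.append(np.array(e_t.tolist(), dtype=np.float32)[0])

# Stack to (batch, seq_len=1, hidden); every "token" is real, so the mask is all ones
emb_batch = torch.from_numpy(np.stack(rows)).unsqueeze(1)  # (2, 1, hidden)
mask = torch.ones(emb_batch.shape[:2], dtype=torch.long)   # (2, 1)

out = model(inputs_embeds=emb_batch, attention_mask=mask, output_hidden_states=True)
print(out.last_hidden_state.shape)  # (2, 1, 768)
```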
### What I'd love feedback on
- `inputs_embeds` ergonomics for tokenizer-free front-ends
- Performance on CUDA/MPS vs. WebGPU pre-embeds
- API shape for a clean `project(text|canvas) → inputs_embeds` (see the sketch after this list)