ATOM AIPI Weekly, Week 12 (March 30, 2026)

Week 12 numbers are out.

2,614 SKUs across 1,234 models and 47 vendors. All 14 AIPI indexes came in flat this week, 0.0% WoW across every modality and channel category.

Numbers worth knowing:

Text input benchmark (AIPI TXT GLB): $0.001190 per 1K tokens

Text output benchmark: $0.005540 per 1K tokens

Output premium over input: 3.85x

Open source vs proprietary median: OSS is 70.7% cheaper

Caching savings where available: 69.7%

Models offering cached pricing: 20.4%

Context cost curve (128K+ vs 32K): 3.0x
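The benchmark rates and cache discount above can be turned into a rough per-request cost estimate. A minimal sketch: the input/output rates and the 69.7% cache savings come from this issue, but the workload shape and the assumption that cached tokens are billed at a flat discount off the input rate are illustrative, not AIPI methodology.

```python
# Rough per-request cost estimator using this week's AIPI benchmarks.
# Rates are from the issue; the flat cache-discount model and the
# example workload below are assumptions for illustration.

INPUT_PER_1K = 0.001190   # AIPI TXT GLB input benchmark, $/1K tokens
OUTPUT_PER_1K = 0.005540  # output benchmark, $/1K tokens
CACHE_DISCOUNT = 0.697    # median savings where cached pricing exists

def request_cost(input_tokens, output_tokens, cached_fraction=0.0):
    """Estimate one request's cost in dollars.

    cached_fraction: share of input tokens served from cache, assumed
    to be billed at (1 - CACHE_DISCOUNT) of the normal input rate.
    """
    cached = input_tokens * cached_fraction
    fresh = input_tokens - cached
    return (
        fresh / 1000 * INPUT_PER_1K
        + cached / 1000 * INPUT_PER_1K * (1 - CACHE_DISCOUNT)
        + output_tokens / 1000 * OUTPUT_PER_1K
    )

# Example: an 8K-in / 1K-out request with 75% of the prompt cache-hit.
print(round(request_cost(8000, 1000, cached_fraction=0.75), 6))
```

At high cache-hit rates the output side dominates the bill, which is why the output benchmark matters more than the input one for chat-style workloads.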

Three straight weeks of flat pricing. The competition right now is on model breadth, not price.

We publish a free MCP server whose four tools work out of the box in Claude, Cursor, and Windsurf, no API key required. The MCP PRO tier ($49/mo) unlocks search_models, get_model_detail, compare_prices, and get_vendor_catalog.

Full data and weekly indexes published every Monday. MCP server available on npm and Smithery for anyone building with Claude, Cursor, or Windsurf.

A new data point from this week’s index: buying inference direct from the model developer costs 7x more per input token than buying the same model through a third-party inference platform. The gap narrows to 5.2x on output tokens. Most teams pay this premium by default simply because they never compare across channels.
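What the channel gap means at scale can be sketched with a quick calculation. The 7x input and 5.2x output multipliers are from this issue; the third-party base rates (here reusing the AIPI benchmarks as stand-ins) and the monthly workload are hypothetical.

```python
# Cost of the direct-from-developer channel premium at scale.
# Multipliers are from this issue; base rates and workload are
# illustrative assumptions, not a specific vendor's pricing.

THIRD_PARTY_INPUT_PER_1K = 0.001190   # stand-in: AIPI input benchmark
THIRD_PARTY_OUTPUT_PER_1K = 0.005540  # stand-in: AIPI output benchmark
INPUT_PREMIUM = 7.0    # direct vs third-party, input tokens
OUTPUT_PREMIUM = 5.2   # same model, output tokens

def monthly_cost(input_m_tokens, output_m_tokens, direct=False):
    """Monthly spend in dollars for a workload given in millions of tokens."""
    in_rate = THIRD_PARTY_INPUT_PER_1K * (INPUT_PREMIUM if direct else 1.0)
    out_rate = THIRD_PARTY_OUTPUT_PER_1K * (OUTPUT_PREMIUM if direct else 1.0)
    return input_m_tokens * 1000 * in_rate + output_m_tokens * 1000 * out_rate

# Hypothetical 500M-input / 100M-output tokens per month:
third_party = monthly_cost(500, 100)
direct = monthly_cost(500, 100, direct=True)
print(f"third-party: ${third_party:,.0f}  direct: ${direct:,.0f}  "
      f"blended premium: {direct / third_party:.1f}x")
```

Because output tokens carry a lower multiplier but a higher base rate, the blended premium for a real workload lands between 5.2x and 7x, which is still a large enough gap to justify a channel comparison before committing spend.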