Live on the gonka.ai decentralized inference network

Cheap AI. Real infrastructure.

A container runtime for AI agents, jobs, workflows, and services — wired to a decentralized inference network where tokens cost a fraction of the big clouds. Pay per second. Ship in one command.

15 blocks published · 63 runs executed · 5 primitives
~/zeroclaw — gonkablocks
$ npx gonkablocks deploy
↳ packing project (12 files, 2.4 MB)
↳ building image · python:3.12-slim
✓ image built · 184 MB
✓ manifest verified · v0.1.0
✓ pushed to gonka-blocks-registry
live → gonkablocks.com/u/you/zeroclaw
 
$ gonkablocks run zeroclaw -i task="summarize this"
↳ inference · meta-llama/Llama-3.3-70B
✓ done · 1,247 tokens · $0.0002
$
Inference cost
10–50× cheaper

vs OpenAI / Anthropic, billed at network rates.

Backed by
Gonka network

Decentralized GPU providers, OpenAI-compatible API.

Container runtime
Per-second billing

No idle pods, no cold-start tax, no 30-day reservations.

Deploy in
<60s

One CLI, one manifest. No Helm. No YAML graveyard.

The runtime

Five primitives. One manifest. Anything you can package in a Dockerfile.

Pick the shape that matches the workload. Every block gets a metered inference proxy, secrets, persistent storage, and a shareable URL — no infra plumbing required.

Job

One-shot. Inputs in, outputs out. Perfect for transcription, summarization, batch transforms.

type: job
entrypoint: python main.py
Worker

Persistent or scheduled. Cron jobs, queue consumers, background daemons.

type: worker
schedule: '@hourly'
Session

Interactive. Each run gets a live URL — embed a chat UI, expose a terminal, anything HTTP.

type: session
port: 8000
Workflow

Compose other blocks visually or as YAML. Recursive. Royalty splits handled automatically.

type: workflow
steps: [step1, step2]
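A fuller sketch of how a workflow manifest might compose two published blocks. The `steps` list and `type` key come from the snippet above; every other field name (`id`, `uses`, `with`, the `${…}` references) is an illustrative guess at the schema, not taken from the docs:

```yaml
# Hypothetical composition sketch — field names beyond `type` and
# `steps` are illustrative, not confirmed by the manifest schema.
name: transcribe-and-summarize
type: workflow
steps:
  - id: transcribe
    uses: '@you/whisper-job'
    with: { audio: ${inputs.audio} }
  - id: summarize
    uses: '@you/zeroclaw'
    with: { task: ${steps.transcribe.outputs.text} }
```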
Service

Long-running HTTP / MCP endpoint with its own URL, auto-scaling, and metered billing.

type: service
port: 8000
auto_sleep: 5m
+ Forkable

Every public block can be forked, edited in-browser, and republished under your namespace.

$ gonkablocks fork @user/block
Quickstart

From git init to a public endpoint in three steps.

01 · Write a manifest

One YAML file describes your inputs, outputs, runtime, resources, and pricing.

name: my-agent
type: job
inputs: { task: string }
runtime:
  build: dockerfile
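The copy above mentions inputs, outputs, runtime, resources, and pricing, while the snippet shows only a subset. A fuller sketch of what such a manifest could look like, with the extra keys (`outputs`, `resources`, `pricing`) as assumptions about the schema rather than confirmed fields:

```yaml
# Illustrative only — keys beyond name/type/inputs/runtime are guesses
# at the schema, not taken from the docs.
name: my-agent
type: job
inputs:  { task: string }
outputs: { summary: string }
runtime:
  build: dockerfile
resources: { cpu: 1, memory: 512Mi }
pricing:   { per_run: 0.001 }
```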
02 · Ship it

One CLI command packs the directory, builds the image, and uploads.

$ npx gonkablocks deploy
✓ live → /u/you/my-agent
03 · Call it

Invoke from the web UI, the CLI, REST, or chain it inside a workflow.

$ gonkablocks run my-agent \
    -i task="hello"
Full reference in the docs — including manifest schema, in-browser builds, MCP server, and workflow composition.
Decentralized by default

Every inference call hits the Gonka network — a permissionless mesh of GPU providers competing on price. Your block stays oblivious; it just sees an OpenAI-compatible endpoint.
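Since the block only sees an OpenAI-compatible endpoint, calling it from block code is just a standard chat-completions request. A minimal sketch in Python — the environment variable names and proxy URL are assumptions about how the runtime injects credentials, not documented behavior:

```python
import json
import os

# Hypothetical: assume the runtime injects the metered proxy's URL and a
# per-run token as environment variables (names are illustrative).
PROXY_URL = os.environ.get("INFERENCE_PROXY_URL", "http://localhost:8080/v1")
API_KEY = os.environ.get("INFERENCE_PROXY_TOKEN", "")

def chat_request(model: str, prompt: str) -> dict:
    """Build a standard OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = chat_request("meta-llama/Llama-3.3-70B", "summarize this")
# POST this to {PROXY_URL}/chat/completions with an Authorization header,
# e.g. via urllib.request or the `openai` client pointed at PROXY_URL.
print(json.dumps(payload))
```

Because the endpoint speaks the OpenAI wire format, existing SDKs work unchanged once their base URL points at the proxy.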

Forkable, composable

Every public block is a one-click fork. Wire blocks together as workflows in the visual editor, or call them from another block's code. Royalty splits flow back automatically.

Real container infra

Not just a function-as-a-service. Long-running services, interactive sessions, scheduled workers, full filesystem, network egress — all under one billable runtime.

Build once. Distribute forever.

Wrap any agent, script, model, or workflow in a manifest. Publish in minutes. Earn from every run on the network.