Cheap AI.
Real infrastructure.
A container runtime for AI agents, jobs, workflows, and services — wired to a decentralized inference network where tokens cost a fraction of what the big clouds charge. Pay per second. Ship in one command.
$ npx gonkablocks deploy
↳ packing project (12 files, 2.4 MB)
↳ building image · python:3.12-slim
✓ image built · 184 MB
✓ manifest verified · v0.1.0
✓ pushed to gonka-blocks-registry
live → gonkablocks.com/u/you/zeroclaw

$ gonkablocks run zeroclaw -i task="summarize this"
↳ inference · meta-llama/Llama-3.3-70B
✓ done · 1,247 tokens · $0.0002
vs OpenAI / Anthropic, billed at network rates.
Decentralized GPU providers, OpenAI-compatible API.
No idle pods, no cold-start tax, no 30-day reservations.
One CLI, one manifest. No Helm. No YAML graveyard.
Five primitives. One manifest. Anything you can package in a Dockerfile.
Pick the shape that matches the workload. Every block gets a metered inference proxy, secrets, persistent storage, and a shareable URL — no infra plumbing required.
One-shot. Inputs in, outputs out. Perfect for transcription, summarization, batch transforms.
type: job
entrypoint: python main.py
Persistent or scheduled. Cron jobs, queue consumers, background daemons.
type: worker
schedule: '@hourly'
Interactive. Each run gets a live URL — embed a chat UI, expose a terminal, anything HTTP.
type: session
port: 8000
Compose other blocks visually or as YAML. Recursive. Royalty splits handled automatically.
type: workflow
steps: [step1, step2]
Long-running HTTP / MCP endpoint with its own URL, auto-scaling, and metered billing.
type: service
port: 8000
auto_sleep: 5m
Every public block can be forked, edited in-browser, and republished under your namespace.
$ gonkablocks fork @user/block
From git init to a public endpoint in three steps.
One YAML file describes your inputs, outputs, runtime, resources, and pricing.
name: my-agent
type: job
inputs: { task: string }
runtime:
  build: dockerfile
One CLI command packs the directory, builds the image, and uploads.
$ npx gonkablocks deploy
✓ live → /u/you/my-agent
Invoke from the web UI, the CLI, REST, or chain it inside a workflow.
$ gonkablocks run my-agent \
  -i task="hello"
Featured blocks
Hand-picked, ready to run. Fork into your namespace with one click.
Autonomous deep-research agent. Give it a topic and a depth; it generates sub-questions, answers them, and synthesizes a polished markdown report.
Transcribe a YouTube (or other yt-dlp-supported) video to text using a local Whisper-tiny model. Runs entirely on CPU — no platform inference credits are spent.
Every inference call hits the Gonka network — a permissionless mesh of GPU providers competing on price. Your block stays oblivious; it just sees an OpenAI-compatible endpoint.
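Because the proxy speaks the standard OpenAI chat-completions protocol, a block can call it with nothing beyond the Python standard library. A minimal sketch — the `INFERENCE_BASE_URL` and `INFERENCE_API_KEY` variable names are illustrative assumptions, not documented platform variables:

```python
import json
import os
import urllib.request

# Assumed env vars for illustration; the platform's injected names may differ.
BASE_URL = os.environ.get("INFERENCE_BASE_URL", "http://localhost:8080/v1")
API_KEY = os.environ.get("INFERENCE_API_KEY", "sk-local")


def build_chat_request(model: str, prompt: str) -> dict:
    """Standard OpenAI-compatible chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def chat(model: str, prompt: str) -> str:
    """POST to the metered proxy and return the assistant's reply."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_chat_request(model, prompt)).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Swapping in the big clouds is a one-line change of `BASE_URL`, which is the point: the block's code never knows which GPU provider served the call.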
Every public block is a one-click fork. Wire blocks together as workflows in the visual editor, or call them from another block's code. Royalty splits flow back automatically.
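In manifest form, composing forked blocks could look like the sketch below. Only `type: workflow` and `steps` appear in the snippets above; the step names and the idea of referencing another user's block by handle are assumptions for illustration:

```yaml
name: research-pipeline
type: workflow
steps: [fetch-sources, summarize-report]   # hypothetical block names, e.g. forks of @user/block
```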
Not just a function-as-a-service. Long-running services, interactive sessions, scheduled workers, full filesystem, network egress — all under one billable runtime.
Build once. Distribute forever.
Wrap any agent, script, model, or workflow in a manifest. Publish in minutes. Earn from every run on the network.