feat: Namespace.so execution driver for remote workload execution
Opened by stack72 · 4/7/2026 · GitHub #21
## Summary
Implement a Namespace.so execution driver that enables swamp to run model method workloads on ephemeral, isolated VMs provided by Namespace.so. This would be a new driver alongside the existing raw (in-process) and docker (container) drivers, giving users access to configurable remote compute with sub-second boot times.
## Background

### What is Namespace.so?

Namespace.so is a developer infrastructure platform that provides ephemeral, isolated VMs. Key capabilities relevant to swamp:
- Ephemeral compute: Fully isolated VMs that boot in sub-seconds, auto-destroy at a configurable deadline
- Configurable shapes: 2-32 vCPU, 2-512GB RAM, Linux (AMD64/ARM64), macOS (Apple Silicon), Windows
- Connect/gRPC API: Published on buf.build/namespace/cloud, supports HTTP+JSON encoding (no gRPC library needed — plain `fetch()` works)
- Per-instance command execution: `RunCommand` (streaming) and `RunCommandSync` (unary) APIs on a per-instance endpoint
- Authentication: Bearer tokens via `nsc login`, service tokens, or OIDC federation
- CLI: `nsc` tool (`brew install namespacelabs/namespace/nsc`)
- TypeScript SDK: `@namespacelabs/sdk` (but see design decision below on why we should NOT use it)
- Pricing: Unit minutes (1 vCPU + 2GB RAM = 1 unit minute), $0.0015/unit-minute overage. Free tier available.
### Swamp's Execution Driver Architecture

Drivers implement the `ExecutionDriver` interface:

```ts
interface ExecutionDriver {
  readonly type: string;
  execute(request: ExecutionRequest, callbacks?: ExecutionCallbacks): Promise<ExecutionResult>;
  initialize?(): Promise<void>;
  shutdown?(): Promise<void>;
}
```

`ExecutionRequest` contains: `protocolVersion`, `modelType`, `modelId`, `methodName`, `globalArgs`, `methodArgs`, `definitionMeta`, `bundle?` (`Uint8Array` for out-of-process execution), `traceHeaders`, `resourceSpecs`, `fileSpecs`.
`ExecutionResult` contains: `status` (`"success" | "error"`), `outputs` (`DriverOutput[]`), `logs` (`string[]`), `durationMs`.

`DriverOutput` for out-of-process drivers: `{ kind: "pending", specName, name, type, content, metadata }` — the host persists these after the driver returns.
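For orientation, the shapes above can be sketched as TypeScript types. The field types here are assumptions inferred from the prose; the authoritative definitions live in `src/domain/drivers/execution_driver.ts`.

```typescript
// Sketch of the driver I/O shapes described above. Field types are
// assumptions; see src/domain/drivers/execution_driver.ts for the real ones.
interface ExecutionRequest {
  protocolVersion: number;
  modelType: string;
  modelId: string;
  methodName: string;
  globalArgs: Record<string, unknown>;
  methodArgs: Record<string, unknown>;
  definitionMeta: Record<string, unknown>;
  bundle?: Uint8Array; // present only for out-of-process execution
  traceHeaders: Record<string, string>;
  resourceSpecs: unknown[];
  fileSpecs: unknown[];
}

interface DriverOutput {
  kind: "pending"; // the host persists these after the driver returns
  specName: string;
  name: string;
  type: string;
  content: string;
  metadata: Record<string, unknown>;
}

interface ExecutionResult {
  status: "success" | "error";
  outputs: DriverOutput[];
  logs: string[];
  durationMs: number;
}
```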
The Docker driver (`src/domain/drivers/docker_execution_driver.ts`) is the primary reference implementation for an out-of-process driver. It supports two modes:

- Command mode: When `methodArgs.run` is a string → runs `sh -c "<command>"`; stdout = resource data, stderr = logs
- Bundle mode: When `request.bundle` exists → writes `bundle.js` + `request.json` + `runner.js` to a temp dir, mounts it in the container, runs `deno run --allow-all runner.js`, and parses JSON output from stdout
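The mode-detection rule described above is simple enough to sketch. This is a hedged sketch; the real logic in `docker_execution_driver.ts` may differ in detail.

```typescript
// Mode selection as described for the Docker driver (a sketch, not the
// actual implementation).
type Mode = "command" | "bundle";

function detectMode(request: {
  bundle?: Uint8Array;
  methodArgs: Record<string, unknown>;
}): Mode {
  if (request.bundle) return "bundle"; // bundle.js is shipped to the sandbox
  if (typeof request.methodArgs.run === "string") return "command"; // sh -c "<run>"
  throw new Error("request has neither a bundle nor a methodArgs.run command");
}
```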
Extension drivers live in `extensions/drivers/` and export:

```ts
export const driver = {
  type: "@collective/name", // e.g. "@swamp/namespace"
  name: string,
  description: string,
  configSchema?: z.ZodTypeAny,
  createDriver: (config: Record<string, unknown>) => ExecutionDriver,
};
```

They are auto-discovered and loaded by `UserDriverLoader` at startup.
## Design Decisions

### 1. Extension driver, not built-in

Type: `@swamp/namespace`. Namespace is a third-party paid service — it belongs in the extension system alongside other community drivers.
### 2. Raw HTTP+JSON via Connect protocol (NOT the TypeScript SDK)

The `@namespacelabs/sdk` depends on `@connectrpc/connect-node`, which uses Node.js `http2` module APIs that are risky under Deno. It would also bloat the extension bundle with complex npm transitive dependencies.

Instead, use Deno's built-in `fetch()` against the Connect protocol directly:

```
POST https://{region}.compute.namespaceapis.com/{package}.{Service}/{Method}
Content-Type: application/json
Authorization: Bearer {token}
Connect-Protocol-Version: 1
```

This keeps the bundle small, dependency-free, and fully Deno-native.
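A minimal Connect-over-HTTP/JSON unary call along these lines needs nothing beyond `fetch()`. In this sketch, `rpcPath` is a placeholder until the discovery phase confirms the real service and method names:

```typescript
// Minimal Connect unary call with plain fetch(), following the request
// shape shown above. rpcPath ("{package}.{Service}/{Method}") is a
// placeholder: the real names must come from buf.build/namespace/cloud.
async function connectCall<Req, Res>(
  baseUrl: string, // e.g. "https://us.compute.namespaceapis.com"
  rpcPath: string,
  token: string,
  request: Req,
): Promise<Res> {
  const response = await fetch(`${baseUrl}/${rpcPath}`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${token}`,
      "Connect-Protocol-Version": "1",
    },
    body: JSON.stringify(request),
  });
  if (!response.ok) {
    // Connect unary errors arrive as JSON: { "code": "...", "message": "..." }
    const err = await response.json().catch(() => ({} as { code?: string; message?: string }));
    throw new Error(`Connect error ${err.code ?? response.status}: ${err.message ?? ""}`);
  }
  return await response.json() as Res;
}
```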
### 3. Instance-per-execution (no pooling)

Each `execute()` creates a fresh VM and destroys it afterward. Namespace boots in sub-seconds, so the overhead is minimal, and this matches Docker's `--rm` semantics. Pooling can be added later as an optimization.
### 4. Both command mode and bundle mode

Mirrors the Docker driver exactly — same mode detection logic, same runner script (`DOCKER_RUNNER_SCRIPT` from `src/domain/drivers/docker_runner.ts`), same output parsing.
## Configuration Schema

```yaml
driver: "@swamp/namespace"
driverConfig:
  token: ${{ vault.get('namespace', 'api-token') }} # Required (or NAMESPACE_TOKEN env var)
  region: "us"          # "us" | "eu", default "us"
  machineType: "2x4"    # vCPUxRAM format: "2x4", "4x8", "8x16", "16x32"
  arch: "amd64"         # "amd64" | "arm64", default "amd64"
  duration: "10m"       # Instance auto-destroy deadline (safety net)
  timeout: 60000        # Driver-level execution timeout in ms
  bare: true            # No k3s cluster (default true, saves boot time)
  denoSetup: "install"  # "install" (curl at boot) or "preinstalled"
  env: {}               # Optional env vars injected into instance
```

Auth fallback chain: `driverConfig.token` → `NAMESPACE_TOKEN` env var → error.
## Execution Flows

### Command Mode

1. Parse & validate config via Zod schema
2. Create ephemeral instance (bare, configured shape, deadline)
3. Wait for instance ready
4. Run: `sh -c "<methodArgs.run>"`
   - Stream stderr → `callbacks.onLog()`
   - Capture stdout → resource content
5. Return `kind: "pending"` DriverOutputs
6. Destroy instance (`finally` block, always runs)

### Bundle Mode
1. Parse & validate config via Zod schema
2. Create ephemeral instance (bare, configured shape, deadline)
3. Wait for instance ready
4. Install Deno if `denoSetup === "install"`:
   Run: `curl -fsSL https://deno.land/install.sh | sh`
5. Upload files to `/swamp/` on the instance:
   - `bundle.js` (from `request.bundle`)
   - `request.json` (methodName, methodArgs, globalArgs, modelType, modelId, definitionMeta)
   - `runner.js` (reuse `DOCKER_RUNNER_SCRIPT` from swamp core)
6. Run: `deno run --allow-all /swamp/runner.js`
   - Stream stderr → `callbacks.onLog()`
   - Capture stdout → parse JSON `{ resources, files }`
7. Convert to `kind: "pending"` DriverOutputs (same parsing as the Docker driver)
8. Destroy instance (`finally` block, always runs)

### File Upload Strategy
Namespace has no direct file-upload gRPC endpoint. Recommended approach:

Primary: tar + base64 pipe via command execution

```sh
echo '<base64-encoded-tar.gz-of-bundle.js+request.json+runner.js>' | base64 -d | tar xz -C /swamp
```

(The payload must be a gzipped tarball, since `tar xz` extracts gzip-compressed archives.) This works for typical bundle sizes (< 100KB). For larger bundles, fall back to an `nsc instance upload` subprocess (requires the `nsc` CLI on the host). The driver's `initialize()` should detect whether `nsc` is available.
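A sketch of how the driver might build that one-liner from an in-memory gzipped tarball. `buildUploadCommand` is a hypothetical helper, not an existing API; tarball creation itself is out of scope here.

```typescript
// Builds the upload one-liner described above from already-gzipped tar
// bytes (a sketch; the real driver would run this via the per-instance
// RunCommandSync endpoint once its exact shape is confirmed).
function buildUploadCommand(tarGzBytes: Uint8Array, destDir = "/swamp"): string {
  // btoa expects a binary string; build it byte by byte to avoid the
  // call-stack limits of String.fromCharCode(...largeArray).
  let binary = "";
  for (const byte of tarGzBytes) binary += String.fromCharCode(byte);
  const b64 = btoa(binary);
  return `mkdir -p ${destDir} && echo '${b64}' | base64 -d | tar xz -C ${destDir}`;
}
```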
## File Structure

```
extensions/drivers/namespace/
  namespace_driver.ts       # Driver class + export const driver = { ... }
  namespace_api.ts          # Thin fetch()-based API client for the Namespace Connect API
  namespace_api_test.ts     # Unit tests for API client (mocked fetch)
  namespace_driver_test.ts  # Unit tests for driver (mocked API client)
```

Plus an integration test: `integration/namespace_driver_test.ts` (gated on the `NAMESPACE_TOKEN` env var).
## Error Handling

### Instance cleanup guarantee (two layers)

- Application: `try/finally` in `execute()` always calls `destroyInstance()`
- Platform: the instance `duration` deadline auto-destroys it even if the swamp process crashes

### Timeout handling

- `AbortController` with the configured `timeout` — on signal, destroy the instance and return an error result
- The instance `duration` serves as the hard ceiling
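A sketch of the two concerns combined: timeout via `AbortController` plus guaranteed teardown in `finally`. All function names here (`createInstance`, `runWorkload`, `destroyInstance`) are placeholders for the eventual `namespace_api.ts` client.

```typescript
// Timeout + guaranteed-cleanup skeleton for execute(), per the two-layer
// design above. The callbacks are placeholders, not real API names.
async function executeWithCleanup<T>(
  timeoutMs: number,
  createInstance: () => Promise<string>,
  runWorkload: (instanceId: string, signal: AbortSignal) => Promise<T>,
  destroyInstance: (instanceId: string) => Promise<void>,
): Promise<T> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  const instanceId = await createInstance();
  try {
    return await runWorkload(instanceId, controller.signal);
  } finally {
    clearTimeout(timer);
    // Layer 1: always destroy, even on error or abort. Layer 2 (not
    // shown): the instance's own `duration` deadline destroys it even
    // if this process crashes before reaching here.
    await destroyInstance(instanceId).catch(() => {});
  }
}
```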
### API error mapping

| Connect error code | User-facing message |
|---|---|
| `unauthenticated` | "Namespace auth failed. Check your API token." |
| `resource_exhausted` | "Namespace quota exceeded. Check workspace billing." |
| `unavailable` | "Namespace API unavailable. Retry later." |
| `invalid_argument` | "Invalid Namespace config: {detail}" |
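The mapping table translates directly into a small helper. The fallback branch for codes outside the table is an assumption, not part of the plan above.

```typescript
// Maps Connect error codes to the user-facing messages in the table
// above (the default branch is an added assumption).
function mapNamespaceError(code: string, detail = ""): string {
  switch (code) {
    case "unauthenticated":
      return "Namespace auth failed. Check your API token.";
    case "resource_exhausted":
      return "Namespace quota exceeded. Check workspace billing.";
    case "unavailable":
      return "Namespace API unavailable. Retry later.";
    case "invalid_argument":
      return `Invalid Namespace config: ${detail}`;
    default:
      return `Namespace error (${code}): ${detail}`;
  }
}
```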
## Pre-Implementation Discovery (IMPORTANT)

Before writing any code, the exact protobuf service and method names must be confirmed from buf.build/namespace/cloud. The service/method names used in this issue are educated guesses based on convention. To discover the actual names:

- Browse buf.build/namespace/cloud for the proto service definitions
- Run `nsc create --debug` and observe the actual RPC endpoints being called
- Read the TypeScript SDK's generated proto stubs at `@namespacelabs/sdk/proto`

Key endpoints to confirm:

- Create instance — service name, request message shape, response fields (especially `instanceId` and `commandServiceEndpoint`)
- Wait for instance — polling vs streaming, endpoint
- Run command — sync and streaming variants; confirm it's on the per-instance endpoint (not the regional API)
- Destroy instance — request shape
## Implementation Phases

1. Discovery — Confirm exact Namespace Connect API endpoints from buf.build or `nsc --debug`
2. API client — Implement `namespace_api.ts` with raw `fetch()`, unit test against mocked responses
3. Command mode — Create/wait/execute/destroy flow, log streaming, unit tests
4. Bundle mode — File upload via tar+base64, reuse `DOCKER_RUNNER_SCRIPT`, output parsing, unit tests
5. Error handling — Timeouts, cleanup guarantees, API error mapping
6. Integration test — End-to-end test gated on the `NAMESPACE_TOKEN` env var
## Risks and Mitigations
| Risk | Impact | Mitigation |
|---|---|---|
| Exact proto service names differ from this plan's guesses | Blocks implementation | Discovery phase first (phase 1) |
| Bundle too large for base64 pipe (> ~100KB) | Bundle mode fails for large extension models | Fall back to `nsc instance upload` subprocess |
| Deno install adds 5-10s overhead per execution | Slow bundle mode | Support `denoSetup: "preinstalled"` with a custom Namespace image |
| Connect streaming response format complex to parse with raw fetch | No real-time log streaming initially | Start with sync command execution (`RunCommandSync`), add streaming later |
| Cost surprises for users | Unexpected Namespace bills | Log estimated cost per execution based on duration × machine shape |
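The last mitigation is simple arithmetic given the pricing above (1 vCPU + 2GB RAM = 1 unit minute, $0.0015/unit-minute overage). Whether Namespace computes units as the max of the two dimensions is an assumption in this sketch; the real billing formula should be confirmed before logging estimates to users.

```typescript
// Rough cost estimate per execution, per the pricing stated above.
// ASSUMPTION: units/minute = max(vCPU, RAM_GB / 2); Namespace's actual
// billing formula must be verified.
const USD_PER_UNIT_MINUTE = 0.0015;

function estimateCostUsd(machineType: string, minutes: number): number {
  const [vcpu, ramGb] = machineType.split("x").map(Number); // e.g. "4x8" → 4 vCPU, 8 GB
  if (!Number.isFinite(vcpu) || !Number.isFinite(ramGb)) {
    throw new Error(`invalid machineType: ${machineType}`);
  }
  const unitsPerMinute = Math.max(vcpu, ramGb / 2);
  return unitsPerMinute * minutes * USD_PER_UNIT_MINUTE;
}
```

For example, a `2x4` shape (2 units/minute) running for 10 minutes would cost roughly $0.03 under this assumption.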
## Key Reference Files in swamp repo

These files in systeminit/swamp are the primary references for implementing this driver:

- `src/domain/drivers/docker_execution_driver.ts` — Primary pattern to follow (mode detection, output parsing, timeout handling, cleanup)
- `src/domain/drivers/execution_driver.ts` — `ExecutionDriver` interface; `ExecutionRequest`, `ExecutionResult`, `DriverOutput`, `ExecutionCallbacks` types
- `src/domain/drivers/docker_runner.ts` — `DOCKER_RUNNER_SCRIPT` constant to reuse for bundle mode
- `src/domain/drivers/user_driver_loader.ts` — Extension loading contract and validation
- `src/domain/drivers/driver_config.ts` — Reusable Zod config schema fragments (timeout, env)
- `src/domain/drivers/docker_execution_driver_test.ts` — Test patterns to follow
## Verification Checklist

- `deno check` passes
- `deno lint` passes
- `deno fmt` passes
- Unit tests pass: `deno run test extensions/drivers/namespace/`
- Integration test passes with a real Namespace token
- Command mode: creates instance, runs command, captures output, destroys instance
- Bundle mode: creates instance, uploads bundle, runs runner, parses output, destroys instance
- Instance is always destroyed (verify via `nsc list` or the Namespace dashboard after error cases)
- Timeout triggers instance destruction
- Invalid token produces a clear error message
Automoved by swampadmin from GitHub issue #21