
feat: Namespace.so execution driver for remote workload execution

Opened by stack72 · 4/7/2026 · GitHub #21

Summary

Implement a Namespace.so execution driver that enables swamp to run model method workloads on ephemeral, isolated VMs provided by Namespace.so. This would be a new driver alongside the existing raw (in-process) and docker (container) drivers, giving users access to configurable remote compute with sub-second boot times.

Background

What is Namespace.so?

Namespace.so is a developer infrastructure platform that provides ephemeral, isolated VMs. Key capabilities relevant to swamp:

  • Ephemeral compute: Fully isolated VMs that boot in under a second and auto-destroy at a configurable deadline
  • Configurable shapes: 2-32 vCPU, 2-512GB RAM, Linux (AMD64/ARM64), macOS (Apple Silicon), Windows
  • Connect/gRPC API: Published on buf.build/namespace/cloud, supports HTTP+JSON encoding (no gRPC library needed — plain fetch() works)
  • Per-instance command execution: RunCommand (streaming) and RunCommandSync (unary) APIs on a per-instance endpoint
  • Authentication: Bearer tokens via nsc login, service tokens, or OIDC federation
  • CLI: nsc tool (brew install namespacelabs/namespace/nsc)
  • TypeScript SDK: @namespacelabs/sdk (but see design decision below on why we should NOT use it)
  • Pricing: Unit minutes (1 vCPU + 2GB RAM = 1 unit minute), $0.0015/unit-minute overage. Free tier available.

Swamp's Execution Driver Architecture

Drivers implement the ExecutionDriver interface:

interface ExecutionDriver {
  readonly type: string;
  execute(request: ExecutionRequest, callbacks?: ExecutionCallbacks): Promise<ExecutionResult>;
  initialize?(): Promise<void>;
  shutdown?(): Promise<void>;
}

ExecutionRequest contains: protocolVersion, modelType, modelId, methodName, globalArgs, methodArgs, definitionMeta, bundle? (Uint8Array for out-of-process execution), traceHeaders, resourceSpecs, fileSpecs.

ExecutionResult contains: status ("success" | "error"), outputs (DriverOutput[]), logs (string[]), durationMs.

DriverOutput for out-of-process drivers: { kind: "pending", specName, name, type, content, metadata } — the host persists these after the driver returns.
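The shapes described above can be sketched as TypeScript types. Field names come from the descriptions in this issue; exact optionality and the auxiliary spec types are assumptions:

```typescript
// Sketch of the driver I/O shapes described above. Optionality and the
// concrete types of resourceSpecs/fileSpecs are assumptions; the real
// definitions live in src/domain/drivers/execution_driver.ts.
interface ExecutionRequest {
  protocolVersion: number;
  modelType: string;
  modelId: string;
  methodName: string;
  globalArgs: Record<string, unknown>;
  methodArgs: Record<string, unknown>;
  definitionMeta: Record<string, unknown>;
  bundle?: Uint8Array; // present only for out-of-process execution
  traceHeaders: Record<string, string>;
  resourceSpecs: unknown[];
  fileSpecs: unknown[];
}

interface DriverOutput {
  kind: "pending"; // the host persists these after the driver returns
  specName: string;
  name: string;
  type: string;
  content: unknown;
  metadata: Record<string, unknown>;
}

interface ExecutionResult {
  status: "success" | "error";
  outputs: DriverOutput[];
  logs: string[];
  durationMs: number;
}
```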

The Docker driver (src/domain/drivers/docker_execution_driver.ts) is the primary reference implementation for an out-of-process driver. It supports two modes:

  1. Command mode: When methodArgs.run is a string → runs sh -c "<command>", stdout = resource data, stderr = logs
  2. Bundle mode: When request.bundle exists → writes bundle.js + request.json + runner.js to a temp dir, mounts in container, runs deno run --allow-all runner.js, parses JSON output from stdout

Extension drivers live in extensions/drivers/ and export:

export const driver = {
  type: "@collective/name",     // e.g. "@swamp/namespace"
  name: string,
  description: string,
  configSchema?: z.ZodTypeAny,
  createDriver: (config: Record<string, unknown>) => ExecutionDriver,
};

They are auto-discovered and loaded by UserDriverLoader at startup.


Design Decisions

1. Extension driver, not built-in

Type: @swamp/namespace. Namespace is a third-party paid service — it belongs in the extension system alongside other community drivers.

2. Raw HTTP+JSON via Connect protocol (NOT the TypeScript SDK)

The @namespacelabs/sdk depends on @connectrpc/connect-node which uses Node.js http2 module APIs that are risky under Deno. It would also bloat the extension bundle with complex npm transitive dependencies.

Instead, use Deno's built-in fetch() against the Connect protocol directly:

POST https://{region}.compute.namespaceapis.com/{package}.{Service}/{Method}
Content-Type: application/json
Authorization: Bearer {token}
Connect-Protocol-Version: 1

This keeps the bundle small, dependency-free, and fully Deno-native.
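A unary Connect call over HTTP+JSON needs nothing beyond fetch(). The sketch below shows the shape; the rpcPath service and method names are placeholders that the discovery phase must replace with the real ones from buf.build/namespace/cloud:

```typescript
// Minimal Connect unary call using plain fetch(). Service/method names in
// rpcPath are hypothetical until confirmed from buf.build/namespace/cloud.
async function connectCall<Req, Res>(
  baseUrl: string, // e.g. "https://us.compute.namespaceapis.com"
  rpcPath: string, // e.g. "/namespace.cloud.Service/Method" (placeholder)
  token: string,
  request: Req,
): Promise<Res> {
  const response = await fetch(`${baseUrl}${rpcPath}`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": `Bearer ${token}`,
      "Connect-Protocol-Version": "1",
    },
    body: JSON.stringify(request),
  });
  if (!response.ok) {
    // Connect unary errors are JSON bodies like { "code": "...", "message": "..." }
    const err = await response.json().catch(() => ({ code: "unknown" }));
    throw new Error(`Connect error ${err.code ?? response.status}: ${err.message ?? ""}`);
  }
  return await response.json() as Res;
}
```

namespace_api.ts would wrap this helper once per RPC, keeping the client a few dozen lines with zero dependencies.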

3. Instance-per-execution (no pooling)

Each execute() creates a fresh VM and destroys it afterward. Namespace instances boot in under a second, so the overhead is minimal, and this matches Docker's --rm semantics. Pooling can be added later as an optimization.

4. Both command mode and bundle mode

Mirrors the Docker driver exactly — same mode detection logic, same runner script (DOCKER_RUNNER_SCRIPT from src/domain/drivers/docker_runner.ts), same output parsing.
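The mode detection can be sketched as below. Checking methodArgs.run before the bundle mirrors the Docker driver's described behavior, though the exact precedence is an assumption to verify against docker_execution_driver.ts:

```typescript
// Mode detection mirroring the Docker driver: a string methodArgs.run
// selects command mode; an out-of-process bundle selects bundle mode.
// Precedence (run checked first) is an assumption to confirm in the source.
type ExecutionMode = "command" | "bundle";

function detectMode(
  methodArgs: Record<string, unknown>,
  bundle: Uint8Array | undefined,
): ExecutionMode {
  if (typeof methodArgs.run === "string") return "command";
  if (bundle !== undefined) return "bundle";
  throw new Error(
    "Namespace driver requires either methodArgs.run (command mode) or a bundle (bundle mode)",
  );
}
```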


Configuration Schema

driver: "@swamp/namespace"
driverConfig:
  token: ${{ vault.get('namespace', 'api-token') }}  # Required (or NAMESPACE_TOKEN env var)
  region: "us"          # "us" | "eu", default "us"
  machineType: "2x4"    # vCPUxRAM format: "2x4", "4x8", "8x16", "16x32"
  arch: "amd64"         # "amd64" | "arm64", default "amd64"
  duration: "10m"       # Instance auto-destroy deadline (safety net)
  timeout: 60000        # Driver-level execution timeout in ms
  bare: true            # No k3s cluster (default true, saves boot time)
  denoSetup: "install"  # "install" (curl at boot) or "preinstalled"
  env: {}               # Optional env vars injected into instance

Auth fallback chain: driverConfig.token → NAMESPACE_TOKEN env var → error.
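As a Zod sketch of the schema above (defaults mirror the YAML comments; field shapes such as the machineType regex are assumptions to refine):

```typescript
import { z } from "zod"; // zod is already part of the extension contract (configSchema)

// Sketch of the driver config schema; defaults follow the YAML above.
// The machineType pattern and env value type are assumptions.
const namespaceConfigSchema = z.object({
  token: z.string().optional(), // falls back to NAMESPACE_TOKEN env var
  region: z.enum(["us", "eu"]).default("us"),
  machineType: z.string().regex(/^\d+x\d+$/).default("2x4"), // vCPUxRAM, e.g. "4x8"
  arch: z.enum(["amd64", "arm64"]).default("amd64"),
  duration: z.string().default("10m"), // instance auto-destroy deadline
  timeout: z.number().int().positive().default(60_000), // driver timeout, ms
  bare: z.boolean().default(true), // skip k3s for faster boot
  denoSetup: z.enum(["install", "preinstalled"]).default("install"),
  env: z.record(z.string()).default({}), // injected into the instance
});
```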


Execution Flows

Command Mode

1. Parse & validate config via Zod schema
2. Create ephemeral instance (bare, configured shape, deadline)
3. Wait for instance ready
4. Run: sh -c "<methodArgs.run>"
   - Stream stderr → callbacks.onLog()
   - Capture stdout → resource content
5. Return kind:"pending" DriverOutputs
6. Destroy instance (finally block, always runs)
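The command-mode steps above reduce to a small skeleton. The api parameter stands in for the namespace_api.ts client; its method names (createInstance, waitForReady, runCommandSync, destroyInstance) are placeholders until the discovery phase confirms the real RPCs:

```typescript
// Skeleton of the command-mode flow. The api methods are hypothetical
// stand-ins for the namespace_api.ts client, pending RPC discovery.
async function executeCommandMode(
  api: {
    createInstance(): Promise<{ instanceId: string }>;
    waitForReady(id: string): Promise<void>;
    runCommandSync(id: string, argv: string[]): Promise<{ stdout: string; stderr: string; exitCode: number }>;
    destroyInstance(id: string): Promise<void>;
  },
  run: string,
  onLog?: (line: string) => void,
): Promise<{ status: "success" | "error"; content: string }> {
  const { instanceId } = await api.createInstance();
  try {
    await api.waitForReady(instanceId);
    const result = await api.runCommandSync(instanceId, ["sh", "-c", run]);
    for (const line of result.stderr.split("\n")) {
      if (line) onLog?.(line); // stderr -> logs
    }
    return {
      status: result.exitCode === 0 ? "success" : "error",
      content: result.stdout, // stdout -> resource content
    };
  } finally {
    await api.destroyInstance(instanceId); // cleanup always runs
  }
}
```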

Bundle Mode

1. Parse & validate config via Zod schema
2. Create ephemeral instance (bare, configured shape, deadline)
3. Wait for instance ready
4. Install Deno if denoSetup === "install":
   Run: curl -fsSL https://deno.land/install.sh | sh
5. Upload files to /swamp/ on the instance:
   - bundle.js (from request.bundle)
   - request.json (methodName, methodArgs, globalArgs, modelType, modelId, definitionMeta)
   - runner.js (reuse DOCKER_RUNNER_SCRIPT from swamp core)
6. Run: deno run --allow-all /swamp/runner.js
   - Stream stderr → callbacks.onLog()
   - Capture stdout → parse JSON { resources, files }
7. Convert to kind:"pending" DriverOutputs (same parsing as Docker driver)
8. Destroy instance (finally block, always runs)

File Upload Strategy

Namespace has no direct file-upload gRPC endpoint. Recommended approach:

Primary: Tar + base64 pipe via command execution

echo '<base64-encoded-gzipped-tar-of-bundle.js+request.json+runner.js>' | base64 -d | tar xzf - -C /swamp

This works for typical bundle sizes (< 100KB). For larger bundles, fall back to nsc instance upload subprocess (requires nsc CLI on host). The driver's initialize() should detect whether nsc is available.
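Building that upload command from the tarball bytes is straightforward. The sketch below assumes the caller already has a gzipped tarball (produced by a tar library or tar -cz on the host, which is a separate concern):

```typescript
// Builds the "tar + base64 pipe" upload command described above. The caller
// supplies gzipped-tarball bytes; producing them is out of scope here.
function buildUploadCommand(tarGz: Uint8Array, destDir = "/swamp"): string {
  // btoa takes a binary string; chunk the conversion to avoid
  // call-stack limits from spreading large arrays into fromCharCode.
  let binary = "";
  const CHUNK = 0x8000;
  for (let i = 0; i < tarGz.length; i += CHUNK) {
    binary += String.fromCharCode(...tarGz.subarray(i, i + CHUNK));
  }
  const b64 = btoa(binary);
  return `mkdir -p ${destDir} && echo '${b64}' | base64 -d | tar xzf - -C ${destDir}`;
}
```

Base64 is safe to embed in single quotes since its alphabet contains no shell metacharacters.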


File Structure

extensions/drivers/namespace/
  namespace_driver.ts       # Driver class + export const driver = { ... }
  namespace_api.ts          # Thin fetch()-based API client for Namespace Connect API
  namespace_api_test.ts     # Unit tests for API client (mocked fetch)
  namespace_driver_test.ts  # Unit tests for driver (mocked API client)

Plus an integration test: integration/namespace_driver_test.ts (gated on NAMESPACE_TOKEN env var).


Error Handling

Instance cleanup guarantee (two layers)

  1. Application: try/finally in execute() always calls destroyInstance()
  2. Platform: Instance duration deadline auto-destroys even if the swamp process crashes

Timeout handling

  • AbortController with configured timeout — on signal, destroy instance and return error result
  • Instance duration serves as the hard ceiling
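The timeout layer can be sketched as a small wrapper: the driver passes the signal into its fetch calls, and the finally-block cleanup destroys the instance whether the work finished or was aborted:

```typescript
// Timeout wrapper for execute(): aborts the signal after timeoutMs so
// in-flight fetch calls fail fast; instance destruction stays in the
// caller's finally block. A sketch, not the final driver code.
async function withTimeout<T>(
  timeoutMs: number,
  work: (signal: AbortSignal) => Promise<T>,
): Promise<T> {
  const controller = new AbortController();
  const timer = setTimeout(
    () => controller.abort(new Error(`execution timed out after ${timeoutMs}ms`)),
    timeoutMs,
  );
  try {
    return await work(controller.signal);
  } finally {
    clearTimeout(timer); // don't leak the timer on early completion
  }
}
```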

API error mapping

| Connect error code | User-facing message |
| --- | --- |
| unauthenticated | "Namespace auth failed. Check your API token." |
| resource_exhausted | "Namespace quota exceeded. Check workspace billing." |
| unavailable | "Namespace API unavailable. Retry later." |
| invalid_argument | "Invalid Namespace config: {detail}" |
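That mapping is a simple switch; unknown codes fall through to a generic message rather than being swallowed:

```typescript
// Maps Connect error codes to the user-facing messages above.
// Unknown codes get a generic message that preserves the code and detail.
function mapConnectError(code: string, detail = ""): string {
  switch (code) {
    case "unauthenticated": return "Namespace auth failed. Check your API token.";
    case "resource_exhausted": return "Namespace quota exceeded. Check workspace billing.";
    case "unavailable": return "Namespace API unavailable. Retry later.";
    case "invalid_argument": return `Invalid Namespace config: ${detail}`;
    default: return `Namespace API error (${code}): ${detail}`;
  }
}
```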

Pre-Implementation Discovery (IMPORTANT)

Before writing any code, the exact protobuf service and method names must be confirmed from buf.build/namespace/cloud. The service/method names used in this issue are educated guesses based on convention. To discover the actual names:

  1. Browse buf.build/namespace/cloud for the proto service definitions
  2. Run nsc create --debug and observe the actual RPC endpoints being called
  3. Read the TypeScript SDK's generated proto stubs at @namespacelabs/sdk/proto

Key endpoints to confirm:

  • Create instance — service name, request message shape, response fields (especially instanceId and commandServiceEndpoint)
  • Wait for instance — polling vs streaming, endpoint
  • Run command — sync and streaming variants, confirm it's on the per-instance endpoint (not the regional API)
  • Destroy instance — request shape

Implementation Phases

  1. Discovery — Confirm exact Namespace Connect API endpoints from buf.build or nsc --debug
  2. API client — Implement namespace_api.ts with raw fetch(), unit test against mocked responses
  3. Command mode — Create/wait/execute/destroy flow, log streaming, unit tests
  4. Bundle mode — File upload via tar+base64, reuse DOCKER_RUNNER_SCRIPT, output parsing, unit tests
  5. Error handling — Timeouts, cleanup guarantees, API error mapping
  6. Integration test — End-to-end test gated on NAMESPACE_TOKEN env var

Risks and Mitigations

| Risk | Impact | Mitigation |
| --- | --- | --- |
| Exact proto service names differ from this plan's guesses | Blocks implementation | Discovery phase first (phase 1) |
| Bundle too large for base64 pipe (> ~100KB) | Bundle mode fails for large extension models | Fall back to nsc instance upload subprocess |
| Deno install adds 5-10s overhead per execution | Slow bundle mode | Support denoSetup: "preinstalled" with custom Namespace image |
| Connect streaming response format complex to parse with raw fetch | No real-time log streaming initially | Start with sync command execution (RunCommandSync), add streaming later |
| Cost surprises for users | Unexpected Namespace bills | Log estimated cost per execution based on duration × machine shape |
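The per-execution cost estimate follows from the pricing in Background (1 vCPU + 2GB RAM = 1 unit minute at $0.0015). Deriving units as max(vCPU, RAM/2) is an assumption to verify against Namespace's billing docs:

```typescript
// Estimated cost for one execution, per the pricing in Background:
// 1 vCPU + 2GB RAM = 1 unit minute, $0.0015/unit-minute overage.
// Treating units as max(vCPU, RAM_GB / 2) is an assumption.
function estimateCostUSD(machineType: string, durationMs: number): number {
  const [vcpu, ramGb] = machineType.split("x").map(Number);
  const units = Math.max(vcpu, ramGb / 2);
  const minutes = durationMs / 60_000;
  return units * minutes * 0.0015;
}
```

For example, a "2x4" instance running for one minute is 2 unit minutes, about $0.003; the driver can log this alongside durationMs.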

Key Reference Files in swamp repo

These files in systeminit/swamp are the primary references for implementing this driver:

  • src/domain/drivers/docker_execution_driver.ts — Primary pattern to follow (mode detection, output parsing, timeout handling, cleanup)
  • src/domain/drivers/execution_driver.ts — ExecutionDriver interface, ExecutionRequest, ExecutionResult, DriverOutput, ExecutionCallbacks types
  • src/domain/drivers/docker_runner.ts — DOCKER_RUNNER_SCRIPT constant to reuse for bundle mode
  • src/domain/drivers/user_driver_loader.ts — Extension loading contract and validation
  • src/domain/drivers/driver_config.ts — Reusable Zod config schema fragments (timeout, env)
  • src/domain/drivers/docker_execution_driver_test.ts — Test patterns to follow

Verification Checklist

  • deno check passes
  • deno lint passes
  • deno fmt passes
  • Unit tests pass: deno test extensions/drivers/namespace/
  • Integration test passes with real Namespace token
  • Command mode: creates instance, runs command, captures output, destroys instance
  • Bundle mode: creates instance, uploads bundle, runs runner, parses output, destroys instance
  • Instance is always destroyed (verify via nsc list or Namespace dashboard after error cases)
  • Timeout triggers instance destruction
  • Invalid token produces clear error message

Automoved by swampadmin from GitHub issue #21
