Three NemoClaw-sandboxed OpenClaw agents (scheduler, healer, optimizer) run always-on inside OpenShell on your HPC cluster. Nemotron reasoning is routed through the OpenShell gateway. Strict network policy enforced. No cluster data leaves the premises. No rip-and-replace.
Nexus wraps your existing Slurm, InfiniBand, and parallel storage with a NemoClaw agent layer. Each agent runs sandboxed inside OpenShell. Inference is intercepted and routed transparently — the agent never calls Nemotron directly.
NemoClaw is NVIDIA's OpenClaw plugin for OpenShell. It runs OpenClaw agents inside isolated sandboxes with NVIDIA inference routing, strict network policy, and operator-controlled egress approval.
| PROFILE | STAGE | MODEL | ENDPOINT | CONTEXT | STATUS |
|---|---|---|---|---|---|
| vllm | RTX 3090 | nvidia/nemotron-3-nano-30b-a3b | host.openshell.internal:8000 | 131,072 | ● ACTIVE |
| nim-local | DGX Stage | nvidia/nemotron-3-super-120b-a12b | nim-service.local:8000 | 131,072 | ○ NEXT |
| default | Fallback | nvidia/nemotron-3-super-120b-a12b | integrate.api.nvidia.com | 131,072 | ○ CLOUD |
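The `vllm` profile above exposes an OpenAI-compatible endpoint (vLLM serves `POST /v1/chat/completions`). A minimal sketch of the request body an agent's intercepted call would carry, using the endpoint and model name from the table; the routing itself is done by the OpenShell gateway, so this only constructs the payload rather than sending it, and the function name is illustrative:

```python
import json

# Profile values from the table above; the gateway address is internal,
# so this sketch only builds the JSON body rather than sending it.
VLLM_ENDPOINT = "http://host.openshell.internal:8000/v1/chat/completions"
MODEL = "nvidia/nemotron-3-nano-30b-a3b"

def build_chat_request(system_prompt: str, user_msg: str, max_tokens: int = 512) -> str:
    """Return the JSON body for an OpenAI-compatible chat completion call."""
    body = {
        "model": MODEL,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_msg},
        ],
        "max_tokens": max_tokens,
    }
    return json.dumps(body)

payload = json.loads(build_chat_request("You are the Nexus scheduler agent.", "Summarise queue state."))
print(payload["model"])  # nvidia/nemotron-3-nano-30b-a3b
```

Because vLLM, the local NIM service, and the cloud fallback all speak the same OpenAI-compatible schema, the same payload works against all three profiles; only the endpoint and model fields change.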
| POLICY | ENDPOINTS ALLOWED | BINARIES | METHODS |
|---|---|---|---|
| vllm_inference | host.openshell.internal:8000 | openclaw | POST /v1/chat/completions |
| nexus_core | nexus-core:8000 · localhost:8000 | openclaw · python3 | GET /health · POST /events/* |
| slurm_rest | localhost:6820 · host.docker.internal:6820 | openclaw · python3 | GET · POST /slurm/* |
| prometheus | prometheus:9090 · localhost:9090 | openclaw · python3 | GET /api/v1/* |
| dcgm_exporter | localhost:9400 · host.docker.internal:9400 | openclaw · python3 | GET /metrics |
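Each policy row reads as an allowlist of endpoint and method patterns. OpenShell's actual enforcement is not shown here; the following is an illustrative sketch of the check implied by the table, mirroring the `vllm_inference` and `prometheus` rows, with `*` matching path suffixes the same way the table uses it:

```python
import fnmatch

# Illustrative allowlist mirroring two rows of the policy table above.
# "*" in a method pattern matches any path suffix (e.g. /api/v1/*).
POLICIES = {
    "vllm_inference": {
        "endpoints": {"host.openshell.internal:8000"},
        "methods": ["POST /v1/chat/completions"],
    },
    "prometheus": {
        "endpoints": {"prometheus:9090", "localhost:9090"},
        "methods": ["GET /api/v1/*"],
    },
}

def is_allowed(policy: str, endpoint: str, method: str, path: str) -> bool:
    """Check a request against a policy's endpoint and method allowlist."""
    rule = POLICIES.get(policy)
    if rule is None or endpoint not in rule["endpoints"]:
        return False
    request = f"{method} {path}"
    return any(fnmatch.fnmatch(request, pattern) for pattern in rule["methods"])

print(is_allowed("prometheus", "localhost:9090", "GET", "/api/v1/query"))                 # True
print(is_allowed("vllm_inference", "api.nvidia.com:443", "POST", "/v1/chat/completions")) # False
```

Anything not matched by a policy row is denied by default, which is why the cloud fallback endpoint only becomes reachable under operator-controlled egress approval.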
Each agent runs as a sandboxed OpenClaw process, governed by its system prompt and the OpenShell network policy. Behaviour is transparent and auditable — every decision is logged to the Nexus event bus.
Active validation on internal RTX 3090 cluster. All figures are targets. Full benchmark methodology and results will be published at eviox.tech/nexus when validation completes in Q2 2026.
Nexus is validated on RTX 3090 before any customer deployment. DGX migration is triggered by benchmark sign-off and requires only a profile switch; no code changes.
Prerequisites: Ubuntu 22.04 LTS · Docker 24+ · NVIDIA Container Toolkit · DCGM exporter on host (:9400) · Prometheus node exporter (:9100) · Slurm 23.x with slurmrestd (optional — mock data used without it).
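The DCGM exporter prerequisite serves GPU telemetry on `:9400/metrics` in the standard Prometheus text exposition format, which is what the agents consume. A small sketch of reading GPU utilisation from that output; the metric name `DCGM_FI_DEV_GPU_UTIL` is a real DCGM field, while the sample label values are made up:

```python
# Sample of the Prometheus text format served by DCGM exporter on :9400.
# The metric name is a real DCGM field; the label values are made up.
SAMPLE = """\
# HELP DCGM_FI_DEV_GPU_UTIL GPU utilization (in %).
# TYPE DCGM_FI_DEV_GPU_UTIL gauge
DCGM_FI_DEV_GPU_UTIL{gpu="0",UUID="GPU-aaaa"} 91
DCGM_FI_DEV_GPU_UTIL{gpu="1",UUID="GPU-bbbb"} 4
"""

def gpu_utilisation(metrics_text: str) -> dict:
    """Map GPU index -> utilisation % from DCGM exporter output."""
    util = {}
    for line in metrics_text.splitlines():
        if line.startswith("DCGM_FI_DEV_GPU_UTIL{"):
            labels, value = line.rsplit(" ", 1)
            gpu = labels.split('gpu="', 1)[1].split('"', 1)[0]
            util[gpu] = float(value)
    return util

print(gpu_utilisation(SAMPLE))  # {'0': 91.0, '1': 4.0}
```

The same exposition format applies to the Prometheus node exporter on `:9100`, so one parser covers both telemetry sources.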
Built on NVIDIA NemoClaw + OpenShell with vLLM inference. Runs alongside your existing Slurm scheduler, InfiniBand fabric, and parallel storage. DGX migration is one openshell inference set command.
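The migration command named above would look something like the following; the document only states the command is `openshell inference set`, so the argument syntax shown here is an assumption:

```shell
# Hypothetical invocation: switch the active inference profile from the
# RTX 3090 vLLM stage to the DGX NIM stage (profile names from the table).
# Exact argument syntax is an assumption; only the command name is documented.
openshell inference set nim-local
```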
Billed only when NemoClaw agents are active. No idle charges. No upfront commitment. INR invoicing with GST compliance for India-based customers.
RTX 3090 benchmark results publish in Q2 2026. DGX NIM GA follows benchmark sign-off. NemoClaw agent sandboxes run in under 48 hours on your existing infrastructure.