Autonomous AI agents are no longer a future consideration. They are running in production environments today, executing real tasks across files, APIs, code repositories, and messaging platforms with minimal human intervention. For IT and infrastructure leaders, the conversation has shifted. It is no longer about whether to deploy autonomous agents. It is about whether your organization has the operational infrastructure to govern them once they are deployed, and what happens to your AI program if you do not.
This is the context behind Quali’s announcement today: Torque now delivers enterprise governance and lifecycle management for NVIDIA NemoClaw, the on-premises autonomous AI agent stack that is fast becoming the enterprise standard for secure, production-grade agent deployment. Understanding why that announcement matters requires understanding what NemoClaw is, who is building on it, and why the inability to manage it at scale is one of the most consequential operational risks in enterprise AI today.
What Is NVIDIA NemoClaw and Why Is It the Foundation Enterprises Are Building On?
The scale of enterprise AI agent adoption in 2026 is not a projection; it is already underway. Gartner forecasts that 40% of enterprise applications will embed task-specific AI agents by the end of this year, up from less than 5% in 2025. IDC projects that AI investment will grow at 31.9% annually through 2029, reaching $1.3 trillion, driven in large part by the rise of agentic systems and the platforms required to manage them at scale.
93% of IT leaders plan to introduce autonomous agents within two years, with nearly half already having done so.
Source: 2025 Connectivity Benchmark Report — Deloitte Digital
For organizations serious about deploying autonomous AI agents on-premises, securely and on trusted infrastructure, NVIDIA NemoClaw represents the most significant development in enterprise agent architecture to date. It is an open-source reference stack that combines three production-grade components into a single, hardened deployment blueprint validated on NVIDIA DGX Spark hardware:
- OpenClaw. The autonomous agent framework that gave developers a self-hosted AI agent capable of executing complex, multi-step tasks across real systems. It surpassed 145,000 GitHub stars within weeks of launch, a pace of adoption that reflects how urgently engineering teams needed a capable, self-hosted agent they could trust.
- Nemotron 3 Super. NVIDIA’s high-performance model serving as the intelligence backbone of the agent stack, optimized for on-premises GPU infrastructure.
- NVIDIA OpenShell. A sandboxed execution environment that isolates agent actions, ensuring autonomous tasks run within defined boundaries and cannot affect systems or data outside their permitted scope.
Together, these components give enterprises a production-grade, on-premises autonomous AI agent stack that is secure by design, runs on trusted NVIDIA hardware, and does not require sending sensitive data to external cloud services. For industries where data sovereignty, security, and regulatory compliance are non-negotiable (financial services, healthcare, defense, and any organization operating under strict data governance requirements), NemoClaw is not just relevant; it is the architecture enterprises have been waiting for.
The teams building on it are the ones responsible for making AI real inside the enterprise: infrastructure architects and platform engineers standing up AI capabilities on-premises, data science teams automating research and model workflows, development teams deploying agents to handle code review and testing, and operations teams running autonomous workflows across internal systems.
The Operational Problem That Stops AI Programs in Their Tracks
NemoClaw solves the deployment problem. A team can stand up a secure, capable autonomous AI agent on DGX Spark hardware with a clear, validated blueprint. That is a genuine achievement, and it is why adoption is accelerating.
What it does not solve is the moment that single deployment becomes an organizational requirement. And in enterprise environments, that moment arrives fast.
When five teams each need their own NemoClaw environment, then ten, and then the organization needs agent workloads running across multiple DGX systems, on-premises clusters, and cloud GPU infrastructure simultaneously, the operational requirements multiply rapidly.
Access governance, policy enforcement across every environment, GPU cost attribution per team and project, consistent provisioning, automatic decommissioning when tasks complete, and self-service access that does not create security gaps: none of these is addressed by deployment tooling alone. They require a governance layer that sits above the stack and manages its entire lifecycle.
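Cost attribution is the most mechanical of these requirements, and the easiest to picture. The sketch below is a minimal, hypothetical illustration of the idea, not Torque's actual API: every environment carries team and project tags at provisioning time, and spend rolls up from those tags. All names (`AgentEnvironment`, `attribute_costs`, the rate figures) are assumptions made for this example.

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical model of a tagged agent environment; none of these
# names or fields come from Torque's actual API.
@dataclass
class AgentEnvironment:
    env_id: str
    team: str          # tag applied at provisioning time
    project: str       # tag applied at provisioning time
    gpu_hours: float   # metered GPU consumption for this environment
    gpu_hour_rate: float  # cost per GPU-hour for the hardware tier

def attribute_costs(envs):
    """Roll up GPU spend per (team, project) from tagged environments."""
    totals = defaultdict(float)
    for e in envs:
        totals[(e.team, e.project)] += e.gpu_hours * e.gpu_hour_rate
    return dict(totals)

# Illustrative usage with made-up consumption figures.
envs = [
    AgentEnvironment("env-1", "data-science", "forecasting", 120.0, 3.50),
    AgentEnvironment("env-2", "data-science", "forecasting", 40.0, 3.50),
    AgentEnvironment("env-3", "platform", "code-review-agents", 75.0, 3.50),
]
costs = attribute_costs(envs)
```

The point of the sketch is the invariant, not the arithmetic: attribution only works if the tags are applied at provisioning time by the governance layer, because environments created outside that layer produce untagged, unattributable spend.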
Without that layer, organizations face one of two failure modes. IT becomes the gatekeeper for every environment request, and the AI program moves at the pace of the ticket queue. Or teams start provisioning environments outside any policy boundary, GPU costs become impossible to attribute, and security guarantees quietly collapse.
- 40% of agentic AI projects will be canceled by the end of 2027 due to escalating costs, inadequate risk controls, and governance structures that were never put in place. Source: Gartner
- 1 in 5 companies currently has a mature governance model for autonomous AI agents, even as deployment accelerates sharply across the enterprise. Source: Deloitte State of AI in the Enterprise, 2026
The infrastructure and governance required to support that scale are not optional; they are the determining factor between programs that deliver and programs that are quietly shut down. This is the gap at the center of every serious NemoClaw deployment that tries to scale. It is not a flaw in NemoClaw. It is simply outside its scope. And it is one of the primary reasons enterprise AI initiatives that show genuine promise at the pilot stage never reach production at scale.
Why Support for NemoClaw and Why It Is Critical
Quali’s announcement that Torque delivers enterprise governance and lifecycle management for NVIDIA NemoClaw is a direct response to this operational reality. Torque is the control plane above NemoClaw: the layer that takes a powerful individual deployment and turns it into governed, multi-tenant enterprise infrastructure that organizations can actually operate, scale, and account for.
There is no shortage of tools that help teams deploy AI workloads. There are very few that govern them across the full operational lifecycle, at enterprise scale, across hybrid infrastructure. Torque is built to do exactly that, and NemoClaw support means that governance now extends to the autonomous agent stack enterprises are actively building on today.
What Torque Delivers
Torque sits above the NemoClaw stack. It does not replace any component of it. It governs the stack’s entire operational lifecycle.
- Provisioning from versioned blueprints. Torque deploys the full NemoClaw stack (Nemotron 3 Super model serving, OpenClaw gateway configuration, and OpenShell sandboxing) from versioned infrastructure blueprints. Every environment is reproducible, auditable, and consistent. No manual configuration. No configuration drift.
- Automatic lifecycle management. Torque enforces automatic teardown when agent tasks complete. GPU resources are not left running. Environments do not accumulate. Costs do not spiral.
- Hard multi-tenant isolation. Torque enforces genuine isolation between teams and workloads across DGX Spark, DGX Station, on-premises clusters, and cloud GPU infrastructure. No agent environment can access another team’s resources. No workload operates outside its defined policy boundary.
- GPU cost attribution. Every NemoClaw environment Torque provisions is tagged and tracked. Infrastructure spend is attributed per team, per project, per environment. Finance gets the data it needs. Teams are accountable for the GPU resources they consume.
- Self-service access without IT overhead. Data scientists, developers, and operations teams request governed NemoClaw environments on demand through Torque’s self-service portal. No tickets. No waiting. No infrastructure expertise required. Governance is enforced by Torque, not by making access difficult.
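The lifecycle-management behavior described above can be sketched as a simple reaper policy: an environment is torn down when its task completes or its time-to-live lapses, whichever comes first. This is a hypothetical illustration of the pattern, assuming made-up names (`Environment`, `reap_expired`, `ttl_seconds`) that do not come from Torque's actual API.

```python
from dataclasses import dataclass

# Hypothetical environment record; fields are illustrative assumptions.
@dataclass
class Environment:
    env_id: str
    created_at: float     # epoch seconds when the environment was provisioned
    ttl_seconds: float    # maximum lifetime permitted by policy
    active_task: bool = True  # False once the agent task has completed

def reap_expired(envs, now):
    """Split environments into (kept, torn_down).

    An environment is torn down when its agent task has completed
    or its TTL has lapsed, so GPU resources are never left running.
    """
    kept, torn_down = [], []
    for e in envs:
        expired = (now - e.created_at) >= e.ttl_seconds
        if not e.active_task or expired:
            torn_down.append(e)  # release GPUs, archive audit trail, etc.
        else:
            kept.append(e)
    return kept, torn_down
```

Running such a reaper on a schedule is what keeps the properties in the bullets above true in practice: environments do not accumulate, and costs do not spiral, because decommissioning is enforced by policy rather than left to individual teams.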
The Difference It Makes
An enterprise running NemoClaw without Torque has a powerful autonomous AI agent stack for the teams that have already deployed it. An enterprise running NemoClaw with Torque has a governed AI agent infrastructure that every team can access, that finance can account for, that security can audit, and that IT can manage without becoming a bottleneck to the business.
The gap between those two states is the gap between a pilot and a production program. Between an AI initiative that shows promise in one team and one that delivers measurable value across the organization. The Gartner and Deloitte data makes that gap quantifiable: the organizations that get governance right are the ones whose AI programs survive and scale. The ones that do not are contributing to that 40% cancellation rate.
As Lior Koriat, CEO of Quali, said in the press release: “NemoClaw gets agents running. Torque keeps them governed.”
For IT leaders building out an AI stack that is meant to last, that distinction is everything.
A Complete, Governed AI Agent Stack
Torque’s support for NemoClaw extends Quali’s existing NVIDIA ecosystem coverage across DGX Spark, DGX Station, NVIDIA GPU Operator, and NIM Operator. Together, Torque and NemoClaw provide a complete, governed autonomous AI agent stack, from the model and the agent runtime to the infrastructure control plane that makes both production-ready at scale.
For organizations at the point of deciding how to build their AI infrastructure, the decision is not just about which components to deploy. It is about whether those components will be manageable, accountable, and scalable when the program grows beyond the first team. Torque is built to ensure they are.
Read the press release: Quali’s Torque Platform Brings Enterprise Governance to NVIDIA NemoClaw
To see Torque in action, visit the Torque playground, or book a live demo to see how Torque addresses the risks of running agentic AI at scale without governance.