OpenClaw on Nutanix AHV: Architecture and Tech Stack

If you follow my ramblings on the blog, you know I love tinkering with my clusters and testing somewhat out-of-the-box stuff (cf. my Steam Deck articles, for example). Recently, I had a thought: Gemini or Claude in the public cloud is great for coding a Python script or writing emails. But the moment you ask it to interact with our local infrastructure, it hits a wall.
So I wondered how I could connect artificial intelligence closer to my VMs. With this in mind, I got my hands on OpenClaw. Honestly, it was a bit of an obstacle course at the start. No more simple conversational gadgets, here we are talking about deploying a true Private AI on a Nutanix AHV cluster capable of acting on our infrastructure. Let me present the tech stack I chose for this experiment.
What is OpenClaw?
For those who have been living in a cave these past few months, OpenClaw is a GitHub project that exceeded 300k stars in just a few months. Imagine an ultra-intelligent thought translator coupled with a butler. Instead of clicking through dozens of menus in a complex interface, you simply ask your infrastructure to work for you in natural language (via a universal web interface or even messaging apps like WhatsApp and Telegram). It is even capable of working on its own while you sleep!
But where it gets exciting for us engineers is under the hood. OpenClaw is not just another “stateless” Large Language Model (LLM) that forgets everything with each new request. It is a true Agentic Gateway. Concretely, this means it orchestrates autonomous agents equipped with tools. These agents can be configured to tap directly into our cluster’s private APIs (like the REST APIs of Prism Element or Prism Central), code, browse the web, and synthesize certain information. In short, we don’t just ask the AI questions anymore, we delegate tasks to it.
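To make the "tools" idea concrete, here is a minimal sketch of the kind of call an agent tool could make against the Prism Central v3 REST API. The hostname and credentials are placeholders, and the way OpenClaw actually wires tools to agents will differ; this only illustrates the underlying API interaction.

```javascript
// Hypothetical Prism Central address for this sketch.
const PC_HOST = "prism-central.lab.local";

// Pure helper: turn a v3 vms/list response into a short status summary.
function summarizeVms(listResponse) {
  return listResponse.entities.map((e) => ({
    name: e.spec.name,
    powerState: e.status.resources.power_state,
  }));
}

// List VMs via the v3 API: a POST with a filter body and basic auth.
// Node 22 has fetch built in. Note: a lab's self-signed certificate
// may need to be trusted before this call succeeds.
async function listVms(user, password) {
  const res = await fetch(`https://${PC_HOST}:9440/api/nexus/v3/vms/list`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization:
        "Basic " + Buffer.from(`${user}:${password}`).toString("base64"),
    },
    body: JSON.stringify({ kind: "vm", length: 50 }),
  });
  if (!res.ok) throw new Error(`Prism Central returned ${res.status}`);
  return summarizeVms(await res.json());
}
```

An agent exposing `listVms` as a tool could then answer "which of my VMs are powered off?" without you ever opening Prism.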

Why Self-hosted?
In the field, the question of data governance comes up the second the word "AI" is pronounced. Sending sensitive information to servers over which I have no control is out of the question!
Choosing the self-hosted route with OpenClaw means taking back absolute control. Data flows, execution logs, and API credentials stay safely locked down on my network, isolated from the internet if desired.
Architecture and Tech Stack
For this project, a quick "Next, Next, Finish" install thrown together in five minutes was out of the question. Here is the technical architecture I ended up validating for my deployment.

The Foundation: Nutanix AHV & Ubuntu 24.04 LTS
To run this beast, you need solid foundations. I provisioned a virtual machine running Ubuntu 24.04 LTS hosted directly on my Nutanix AHV cluster.
On the sizing side, I went with 8 vCPUs, 32 GB of RAM, and 250 GB of dedicated storage. You might tell me: “32 GB for a gateway, isn’t that a bit too much?” The gateway will have to ingest substantial data streams, maintain the cache of the various active agents, and potentially handle heavy parallel API querying. And besides, I can allocate these resources in my lab, so why deprive myself?
The Application Engine: NodeJS 22
At the heart of OpenClaw, the magic happens thanks to NodeJS 22. It is the execution engine that runs the gateway and its AI agent integrations.
Why is Node 22 an excellent architectural choice here? Because of its asynchronous, non-blocking model (the event loop). When you ask OpenClaw for a status report on 50 VMs, the gateway initiates multiple API calls to Prism Central while keeping your WebSocket stream open to reply in real time in the chat interface. NodeJS excels at this kind of non-blocking concurrency.
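The 50-VM scenario above can be illustrated with a toy example. The `fakeStatusCall` below stands in for a real Prism Central request; the point is that all the "requests" are in flight at once, and the event loop stays free to serve other work (like a WebSocket) while they complete.

```javascript
// Simulated API call: resolves after a small random delay,
// standing in for a real Prism Central status request.
function fakeStatusCall(vmName) {
  const delayMs = Math.floor(Math.random() * 50);
  return new Promise((resolve) =>
    setTimeout(() => resolve({ vmName, powerState: "ON" }), delayMs)
  );
}

async function reportOnVms(vmNames) {
  // All calls are dispatched concurrently; total wall time is roughly
  // the slowest single call, not the sum of all of them.
  const results = await Promise.all(vmNames.map(fakeStatusCall));
  return results.map((r) => `${r.vmName}: ${r.powerState}`);
}

const vms = Array.from({ length: 50 }, (_, i) => `vm-${String(i + 1).padStart(2, "0")}`);
reportOnVms(vms).then((lines) => console.log(`${lines.length} VMs reported`));
```

With a blocking, sequential approach, 50 calls at 200 ms each would take 10 seconds; fired concurrently, the whole batch finishes in about the time of the slowest call.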
Network Routing: Caddy
The usual operating mode for OpenClaw is to deploy it locally on the machine from which you will connect to it, or to set up a tunnel to reach a remote instance. Let's be honest: I wanted to type the IP into my browser and reach my instance, whether I'm on my PC or my tablet.
To make this possible, I use a Caddy Reverse Proxy. Caddy manages traffic routing and HTTPS encryption fully automatically.
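For reference, here is a sketch of what the corresponding Caddyfile could look like. The hostname and the upstream port are assumptions: substitute the address of your own VM and the local port your OpenClaw gateway actually listens on.

```caddyfile
# Hypothetical Caddyfile: adjust hostname and upstream port to your setup.
openclaw.lab.local {
    # Caddy provisions and renews certificates automatically;
    # "tls internal" issues a self-signed cert, handy on a private LAN.
    tls internal

    # Forward all traffic (including WebSocket upgrades, which Caddy
    # proxies transparently) to the local OpenClaw gateway.
    reverse_proxy 127.0.0.1:18789
}
```

That is the entire configuration: no certbot cron jobs, no handwritten TLS directives.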
I can already hear you saying: "Yes, but if someone gets onto your local network, they'll have access to your instance!" Well, no! Because OpenClaw natively integrates a device whitelisting system. If your PC has never connected to the instance before, you have to provide the "Gateway Token", and then the new connection must still be accepted on the OpenClaw instance side. In other words, only previously authorized devices can use your local instance.
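The pairing logic can be pictured like this. To be clear, this is a hypothetical illustration of the concept, not OpenClaw's actual code: an unknown device must present the gateway token, and even with the right token it stays pending until approved on the instance side.

```javascript
// Illustrative sketch only -- NOT OpenClaw's real implementation.
const GATEWAY_TOKEN = "example-token"; // assumption: issued by the gateway

const approvedDevices = new Set(); // device IDs already whitelisted
const pendingDevices = new Set();  // correct token, awaiting approval

function handleConnection(deviceId, presentedToken) {
  if (approvedDevices.has(deviceId)) return "connected";
  if (presentedToken !== GATEWAY_TOKEN) return "rejected";
  pendingDevices.add(deviceId); // right token: queue for operator approval
  return "pending-approval";
}

// Called on the instance side when the operator accepts the new device.
function approveDevice(deviceId) {
  if (!pendingDevices.delete(deviceId)) return false;
  approvedDevices.add(deviceId);
  return true;
}
```

Two gates, not one: knowing the token alone is not enough, which is what makes the "someone on my LAN" scenario a non-issue.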
The Entry Point: Discord
The choice of entry point, i.e. how you will interact with OpenClaw, is largely a matter of personal taste.
OpenClaw directly integrates a chat system so you can talk to it. It's good, it's native, but inaccessible if I'm not at home. The system also lets you configure external entry points like Telegram, WhatsApp, Discord, or even Teams and Slack, and that is clearly a big plus because it opens up almost unlimited possibilities! For my part, Discord won out.

What’s Next?
The goal of this article was to present the architecture envisioned for my OpenClaw assistant, to understand what we are deploying and why. We now have a coherent technical stack, performant thanks to Nutanix AHV, and hosted locally.
In a future article, I will explain how to install OpenClaw step by step until you have a functional instance.