
If you read my previous article detailing the architecture and the technical stack I chose to deploy OpenClaw, you already know why I decided to run this solution on my Nutanix AHV cluster. Today, we’re getting practical! I will show you, step by step, how to deploy your own instance on a freshly installed Ubuntu virtual machine.
Before diving in, here is a quick reminder of my setup. I provisioned a VM on Nutanix AHV with:
- 8 vCPUs
- 32 GB of RAM
- 250 GB of storage
- an NVIDIA Tesla P4 graphics card in PCI Passthrough
💡 Why favor full Passthrough over vGPU (virtual GPU)? Quite simply, to guarantee near bare-metal inference performance. By giving our VM direct and exclusive access to the physical hardware, we remove virtually all of the overhead (latency) introduced by the virtualization layer.
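Before installing anything, it is worth checking that the passed-through card is actually visible inside the guest. A minimal sketch (lspci ships with pciutils on Ubuntu; the Prism hint assumes you manage the cluster through Nutanix Prism):

```shell
# Inside the freshly booted VM, check that the passed-through GPU shows up
# on the PCI bus before installing any driver.
lspci -nn 2>/dev/null | grep -i nvidia \
  || echo "No NVIDIA device visible: re-check the passthrough settings in Prism"
```

If the Tesla P4 is wired up correctly, you should see a 3D controller entry mentioning NVIDIA.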
Let’s start the deployment.
Preparing the Ubuntu VM: System and NVIDIA Drivers
The very first step is to prepare the ground to deploy our AI.
Ubuntu 24.04: Operating System Update
This is a rule I apply every single time I deploy a new operating system. As soon as I connect via SSH, I make sure all packages are up to date to avoid future security flaws or dependency conflicts.
sudo apt update && sudo apt upgrade -y
GPU: Installing NVIDIA Drivers
For OpenClaw to harness the computing power of my Tesla P4, the operating system must be able to communicate with it properly. Here are the commands to run to install the drivers (you can access a more detailed guide on the blog):
sudo apt install nvidia-driver-535-server -y
sudo reboot
Once the machine has rebooted, we log back in and type the command to verify that our GPU is properly detected and ready to work:
nvidia-smi
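If you prefer a scripted check (handy for provisioning scripts), here is a small sketch using nvidia-smi's standard --query-gpu/--format flags; check_gpu is just a helper name I chose:

```shell
# Report the GPU model and driver version, or a clear error if the
# driver did not load properly.
check_gpu() {
  if ! command -v nvidia-smi >/dev/null 2>&1; then
    echo "nvidia-smi not found: driver installation incomplete"
    return 1
  fi
  nvidia-smi --query-gpu=name,driver_version --format=csv,noheader
}
check_gpu || true
```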

Node.js and OpenClaw
Installing Node.js 22
OpenClaw is built on Node.js. To ensure we have a recent and efficient runtime environment (here version 22), we add the official NodeSource repository before launching the installation:
curl -fsSL https://deb.nodesource.com/setup_22.x | sudo -E bash -
sudo apt install -y nodejs
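A quick sanity check never hurts: the sketch below extracts the major version from node -v and confirms we really got the 22.x line from NodeSource:

```shell
# Confirm the NodeSource repository gave us the expected major version (22).
node_major=$(node -v 2>/dev/null | sed 's/^v\([0-9]*\).*/\1/')
echo "Node.js major version: ${node_major:-not installed}"
```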
Basic OpenClaw Deployment
Now that Node.js is in place, we move on to installing OpenClaw. A simple curl script provided by the developers does the heavy lifting:
curl -fsSL https://openclaw.ai/install.sh | bash
Once the installation is complete, the system automatically launches the configuration wizard for your instance. I will detail this step as well as the creation of API keys (Discord, Telegram, etc.) in a future blog post.

A small adjustment is required right after the OpenClaw installation so we can run "openclaw" commands from anywhere. We need to add the local installation directory to our PATH environment variable (remember to adapt the username if you are not using administrateur):
export PATH="/home/administrateur/.npm-global/bin:$PATH"
💡 Why this step? It's an excellent security practice that I highly recommend. By putting ~/.npm-global/bin on the PATH, we avoid installing global NPM packages with root (sudo) privileges. This significantly reduces the attack surface and saves you from the eternal Linux permission conflicts!
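One caveat: the export above only lives for the current shell session. A minimal sketch to make it permanent, using $HOME instead of the hard-coded username so it works for any account, and written to be idempotent (safe to re-run):

```shell
# Persist the PATH change across sessions by appending it to ~/.bashrc,
# but only if the exact line is not already there.
NPM_PATH_LINE='export PATH="$HOME/.npm-global/bin:$PATH"'
touch ~/.bashrc
grep -qxF "$NPM_PATH_LINE" ~/.bashrc || echo "$NPM_PATH_LINE" >> ~/.bashrc
```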
Cleanly Exposing OpenClaw with Caddy
By default, the OpenClaw web interface listens on port 18789. Instead of hitting this port directly, I always prefer to place a reverse proxy in front of my applications. For this lab, my choice fell on Caddy.
sudo apt install -y caddy
💡 Why Caddy rather than Apache or Nginx? Because Caddy is remarkably efficient. Where Nginx sometimes requires long configuration blocks for simple proxying, Caddy does the same job in literally three lines of configuration, handles HTTPS automatically, and remains ultra-lightweight.
We edit its configuration file:
sudo vi /etc/caddy/Caddyfile
And we replace the entire content with the following instructions (replace the IP with the one of your VM, in my case 192.168.84.134):
192.168.84.134 {
    reverse_proxy 127.0.0.1:18789
}
Now, all that’s left is to restart the service so the proxy takes over:
sudo systemctl restart caddy
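Before (or after) restarting, you can ask Caddy to parse the Caddyfile so a typo never takes the proxy down; caddy validate is a built-in subcommand, and the guard below just keeps the check from failing on machines where Caddy isn't installed yet:

```shell
# Parse the Caddyfile without serving it; report errors instead of aborting.
if command -v caddy >/dev/null 2>&1; then
  caddy validate --config /etc/caddy/Caddyfile || echo "Caddyfile has errors"
else
  echo "caddy is not installed on this machine"
fi
```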
Network Security: Locking Down the OpenClaw Instance
Having a functional instance is good, securing it is essential. Even if you are on your local network (LAN), you should never leave open access to your control interface. We are going to apply a strict configuration via the OpenClaw CLI commands.
We start by restricting the Gateway listening to the local loopback to prevent any direct access:
openclaw config set gateway.bind loopback
We then force the operating mode to local, and activate token authentication (the bare minimum):
openclaw config set gateway.mode local
openclaw config set gateway.auth.mode token
Finally, since we are going through Caddy, we must authorize Cross-Origin requests (CORS) coming from our IP address, otherwise the browser will block the page (don’t forget to adapt the IP):
openclaw config set gateway.controlUi.allowedOrigins '["https://192.168.84.134"]'
We restart the service to apply our lockdown:
openclaw gateway restart
💡 The security pattern applied here is akin to local "Zero Trust". By forcing OpenClaw onto the loopback (127.0.0.1), we ensure that absolutely all traffic has to go through our Caddy proxy. Coupled with CORS filtering and authentication, this gives our instance baseline protection against potential scans or malicious scripts on the network.
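You can verify the lockdown from the shell. The sketch below uses ss (from iproute2, preinstalled on Ubuntu) and a small helper I named filter_exposed, which keeps only listeners on port 18789 that are NOT bound to 127.0.0.1 — an empty result is what we want:

```shell
# List any TCP listener on 18789 whose local address is not 127.0.0.1.
filter_exposed() {
  awk '$4 ~ /:18789$/ && $4 !~ /^127\.0\.0\.1:/ {print $4}'
}
exposed=$(ss -tln 2>/dev/null | filter_exposed)
if [ -n "$exposed" ]; then
  echo "WARNING: 18789 is reachable beyond loopback: $exposed"
else
  echo "OK: 18789 is loopback-only (or the gateway is not running)"
fi
```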
First Contact and Configuration Validation
Retrieving the Access Token
Now that the doors are locked, we need the key. The authentication token was automatically generated during installation. We'll fish it straight out of the JSON configuration file:
grep -i token ~/.openclaw/openclaw.json
Carefully copy this string of characters. Then open your browser and access your web interface (e.g., https://192.168.84.134). Since Caddy signs certificates for IP addresses with its own local CA, your browser may show a certificate warning the first time; this is expected in a lab setup.

Enter the token in the “Gateway Token” box.
Device Approval
Once connected, you will notice one last step is required: the system is waiting for us to approve the "device" (the PC or tablet from which we wish to use OpenClaw) before granting it the right to submit requests.
Return to your terminal to list the pending devices:
openclaw devices list
Locate your device ID in the list (a UUID-type string) and approve it:
openclaw devices approve b7beb7fa-fa4e-46e9-aec1-282bcce881f6
💡 Device approval (devices approve) is much more than a simple interface formality. It's a sort of cryptographic handshake: this mechanism guarantees that no unsolicited machine can attach itself to your OpenClaw instance without your knowledge!
Interaction Tests
The OpenClaw instance is now 100% operational! To validate our entire stack, there’s nothing like a full-scale test. You can send a first prompt on the web interface’s integrated chat, or configure a bridge to send a message on the Discord side.

Conclusion
We went from a simple Ubuntu VM to a true secured inference server, powered by Node.js and accelerated by a dedicated NVIDIA Tesla P4 GPU via Nutanix AHV. The architecture is clean, secured behind a Caddy proxy, and ready to handle our requests.
But this is only the beginning. In upcoming articles, we will go even further: I will show you how to configure OpenClaw via the startup wizard, deploy local models via Ollama, create an interactive Discord bot, and even inject Google API keys to equip our AI with search capabilities. Stay tuned!
