Ubuntu 24.04 (riscv64) on Milk-V Mars: nvm + unofficial Node.js 22 + llama.cpp build flags + run OpenClaw (with persistent env)
0) Prereqs
sudo apt update
sudo apt install -y \
git curl ca-certificates build-essential pkg-config \
cmake ninja-build python3 python3-pip \
libopenblas-dev
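Before going further, a quick sanity check helps — a small sketch that confirms the architecture and that the tools installed above are on the PATH:

```shell
# Confirm the board and toolchain this guide assumes.
arch="$(uname -m)"
echo "arch: $arch"          # expect riscv64 on the Mars
for tool in git curl gcc make cmake ninja python3; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "found: $tool"
  else
    echo "MISSING: $tool"
  fi
done
```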
1) Install nvm (official)
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
Load it in the current shell:
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && . "$NVM_DIR/nvm.sh"
nvm --version
2) Use unofficial-builds as the Node download mirror (critical: official Node.js releases do not ship riscv64 binaries) + persist it
cat >> ~/.bashrc << 'EOF'
# Use Node.js unofficial builds (riscv64 friendly)
export NVM_NODEJS_ORG_MIRROR="https://unofficial-builds.nodejs.org/download/release"
EOF
source ~/.bashrc
echo "$NVM_NODEJS_ORG_MIRROR"
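For reference, this is the tarball URL shape nvm constructs against the mirror — unofficial-builds names Linux riscv64 tarballs node-v<version>-linux-riscv64.tar.xz (the version below is just this guide's example):

```shell
# Sketch: the URL nvm will fetch for a given version on riscv64.
export NVM_NODEJS_ORG_MIRROR="https://unofficial-builds.nodejs.org/download/release"
VER=22.22.0
URL="$NVM_NODEJS_ORG_MIRROR/v$VER/node-v$VER-linux-riscv64.tar.xz"
echo "$URL"
```

If a version fails to install, checking that this URL exists on the mirror is the first thing to try.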
3) Install Node.js 22 from unofficial-builds (example version: 22.22.0 — run "nvm ls-remote 22" to see which versions the mirror actually publishes)
nvm install 22.22.0
nvm use 22.22.0
nvm alias default 22.22.0
node -v
npm -v
node -p "process.arch + ' ' + process.platform"
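If apt ever installs a system Node that shadows the nvm one, a version guard in your scripts helps; a sketch of a helper (hypothetical name node_major) that extracts the major version from "node -v" output:

```shell
# Extract the major version from a `node -v` string, e.g. to gate
# scripts on Node >= 22 before they run anything heavyweight.
node_major() {
  v="${1#v}"                 # strip the leading "v"
  printf '%s\n' "${v%%.*}"   # keep everything before the first dot
}

node_major "v22.22.0"   # prints: 22
```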
4) Add llama.cpp build flags (via node-llama-cpp) and build
Set the flags (all OFF — the JH7110's U74 cores on the Mars lack the RISC-V vector and half-precision extensions, so these code paths must be disabled):
export NODE_LLAMA_CPP_CMAKE_OPTION_GGML_RVV=OFF
export NODE_LLAMA_CPP_CMAKE_OPTION_GGML_RV_ZICBOP=OFF
export NODE_LLAMA_CPP_CMAKE_OPTION_GGML_RV_ZIHINTPAUSE=OFF
export NODE_LLAMA_CPP_CMAKE_OPTION_GGML_RV_ZVFH=OFF
export NODE_LLAMA_CPP_CMAKE_OPTION_GGML_RV_ZFH=OFF
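Equivalently, the five exports can be set in one loop so the option list lives in a single place:

```shell
# Same five llama.cpp options as above, all forced OFF.
for opt in GGML_RVV GGML_RV_ZICBOP GGML_RV_ZIHINTPAUSE GGML_RV_ZVFH GGML_RV_ZFH; do
  export "NODE_LLAMA_CPP_CMAKE_OPTION_${opt}=OFF"
done
env | grep '^NODE_LLAMA_CPP_CMAKE_OPTION_'   # list what was just set
```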
Download/build llama.cpp:
npx --yes node-llama-cpp source download
# rebuild later if needed:
# npx --yes node-llama-cpp source build
5) Install OpenClaw
cd ~
git clone https://github.com/openclaw/openclaw.git
cd openclaw
npm install
6) Run OpenClaw (the environment variables must be set for the Node process)
Run via npm script:
NODE_OPTIONS="--max-old-space-size=2048" \
OPENCLAW_PORT="3000" \
OPENCLAW_MODEL_PATH="$HOME/models/your-model.gguf" \
npm run start
Or run entry file directly:
NODE_OPTIONS="--max-old-space-size=2048" \
OPENCLAW_PORT="3000" \
OPENCLAW_MODEL_PATH="$HOME/models/your-model.gguf" \
node ./path/to/openclaw-entry.js
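A wrapper script keeps the invocation reproducible; a hypothetical sketch (the OPENCLAW_* names are the ones used above — adjust paths to your checkout):

```shell
# run-openclaw.sh — set the runtime env once, then start OpenClaw.
export NODE_OPTIONS="--max-old-space-size=2048"
export OPENCLAW_PORT="3000"
export OPENCLAW_MODEL_PATH="$HOME/models/your-model.gguf"
echo "would launch on port $OPENCLAW_PORT with model $OPENCLAW_MODEL_PATH"
# cd ~/openclaw && exec npm run start   # uncomment in a real checkout
```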
7) Persist parameters
A) Persistent for your shell (bash)
cat >> ~/.bashrc << 'EOF'
# --- OpenClaw runtime env ---
export NODE_OPTIONS="--max-old-space-size=2048"
export OPENCLAW_PORT="3000"
export OPENCLAW_MODEL_PATH="$HOME/models/your-model.gguf"
# --- node-llama-cpp / llama.cpp build options ---
export NODE_LLAMA_CPP_CMAKE_OPTION_GGML_RVV=OFF
export NODE_LLAMA_CPP_CMAKE_OPTION_GGML_RV_ZICBOP=OFF
export NODE_LLAMA_CPP_CMAKE_OPTION_GGML_RV_ZIHINTPAUSE=OFF
export NODE_LLAMA_CPP_CMAKE_OPTION_GGML_RV_ZVFH=OFF
export NODE_LLAMA_CPP_CMAKE_OPTION_GGML_RV_ZFH=OFF
EOF
source ~/.bashrc
B) Persistent for systemd (recommended for boot/startup)
Find your Node path first:
which node
# example: /home/<user>/.nvm/versions/node/v22.22.0/bin/node
Create the service (replace <user> and the entry path):
sudo tee /etc/systemd/system/openclaw.service >/dev/null << 'EOF'
[Unit]
Description=OpenClaw Service
After=network.target
[Service]
Type=simple
User=<user>
WorkingDirectory=/home/<user>/openclaw
ExecStart=/home/<user>/.nvm/versions/node/v22.22.0/bin/node /home/<user>/openclaw/path/to/openclaw-entry.js
Environment=NODE_OPTIONS=--max-old-space-size=2048
Environment=OPENCLAW_PORT=3000
Environment=OPENCLAW_MODEL_PATH=/home/<user>/models/your-model.gguf
Environment=NODE_LLAMA_CPP_CMAKE_OPTION_GGML_RVV=OFF
Environment=NODE_LLAMA_CPP_CMAKE_OPTION_GGML_RV_ZICBOP=OFF
Environment=NODE_LLAMA_CPP_CMAKE_OPTION_GGML_RV_ZIHINTPAUSE=OFF
Environment=NODE_LLAMA_CPP_CMAKE_OPTION_GGML_RV_ZVFH=OFF
Environment=NODE_LLAMA_CPP_CMAKE_OPTION_GGML_RV_ZFH=OFF
Restart=on-failure
RestartSec=3
[Install]
WantedBy=multi-user.target
EOF
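Alternatively, the Environment= lines can live in a single env file referenced by the unit (systemd's EnvironmentFile= directive); a sketch, assuming /etc/default/openclaw as an arbitrary path choice:

```
# /etc/default/openclaw — one KEY=value per line, no "export" keyword
NODE_OPTIONS=--max-old-space-size=2048
OPENCLAW_PORT=3000
OPENCLAW_MODEL_PATH=/home/<user>/models/your-model.gguf
```

Then in the [Service] section, replace the Environment= lines with:

```
EnvironmentFile=/etc/default/openclaw
```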
Enable + start + view logs:
sudo systemctl daemon-reload
sudo systemctl enable --now openclaw.service
journalctl -u openclaw.service -e