1. Why Run OpenClaw on a Mac Mini?
Cloud VPS providers charge by compute time and add GPU surcharges for AI inference. A Mac Mini M4 gives you dedicated Apple Silicon at a flat $75/month: no pay-per-call surprises, full SSH access, and the ability to run local LLMs alongside OpenClaw for near-zero inference cost.
| Feature | Cloud VPS (AI tier) | Mac Mini M4 (MyRemoteMac) |
|---|---|---|
| Monthly Cost | $150-$400/mo (GPU VPS) | $75/mo (flat rate) |
| Local LLM Support | Extra cost / limited | Native (Ollama, llama.cpp) |
| Always-On Uptime | Yes | Yes (99.9% SLA) |
| Apple Integrations | None | iMessage, Shortcuts, Automator |
| Setup Complexity | Medium | Low (SSH access included) |
| Privacy | Shared datacenter | Dedicated hardware |
Key Advantage: Unlike a cloud VPS, your Mac Mini M4 runs Ollama or llama.cpp natively on Apple Silicon. You can route OpenClaw to a local LLM for sensitive tasks, then fall back to Claude for complex queries, all without paying per token.
2. Prerequisites
Before you begin, make sure you have the following:
- A Mac Mini M4 server from MyRemoteMac (from $75/mo)
- SSH access to your Mac Mini (provided with your MyRemoteMac subscription)
- An Anthropic API key (for Claude); get one at console.anthropic.com
- A messaging account on Telegram, Discord, or WhatsApp to connect your agent
- Basic familiarity with terminal commands and JSON configuration files
3. Step 1: Connect to Your Mac Mini and Install Node.js
First, SSH into your Mac Mini M4. You will have received your credentials when you set up your MyRemoteMac server.
Connect via SSH
# Connect to your Mac Mini M4
ssh admin@your-server-ip
# Verify you're on Apple Silicon
uname -m
# Expected output: arm64
# Check macOS version
sw_vers
# ProductName: macOS
# ProductVersion: 15.4
Install nvm (Node Version Manager)
# Install nvm (Node Version Manager)
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.7/install.sh | bash
# Load nvm into the current session
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
# Verify nvm is available
nvm --version
# 0.39.7
Install Node.js LTS
We recommend nvm to manage Node.js versions because it avoids permission issues and makes upgrades trivial:
# Install the latest LTS version of Node.js
nvm install --lts
# Set it as the default for new sessions
nvm alias default node
# Verify the installation
node --version
# v22.x.x
npm --version
# 10.x.x
Verify the Installation
# Confirm Node.js is running on Apple Silicon
node -e "console.log(process.platform, process.arch)"
# darwin arm64
# Note the exact Node.js binary path (needed for launchd in Step 4)
which node
# /Users/admin/.nvm/versions/node/v22.14.0/bin/node
4. Step 2: Install OpenClaw and Configure Claude API Key
With Node.js ready, install OpenClaw in a dedicated project directory. OpenClaw is distributed as an npm package.
Install OpenClaw
# Create a directory for your OpenClaw agent
mkdir -p ~/openclaw
cd ~/openclaw
# Initialize a Node.js project
npm init -y
# Install OpenClaw
npm install openclaw
# Create the logs directory
mkdir -p logs
Create the Configuration File
OpenClaw reads its settings from a config.json file. Create one in your project directory:
# Create config.json in your project directory
cat > ~/openclaw/config.json << 'EOF'
{
"model": "claude-sonnet-4-6",
"apiKey": "sk-ant-YOUR_API_KEY_HERE",
"channels": {
"telegram": {
"enabled": true,
"token": "YOUR_TELEGRAM_BOT_TOKEN"
}
},
"maxHistory": 20,
"systemPrompt": "You are a helpful assistant. Be concise and accurate."
}
EOF
Replace sk-ant-YOUR_API_KEY_HERE with your actual key from console.anthropic.com. You can use claude-sonnet-4-6 for complex tasks or claude-haiku-4-5 for faster, cheaper responses.
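Two quick sanity checks are worth running before the first start: lock the file down so the API key is not readable by other users, and confirm the JSON actually parses, since a stray comma is a common cause of a failed start. This sketch uses python3 (available on macOS once the Command Line Tools are installed) rather than any OpenClaw-specific tooling:

```shell
# Lock the config down so only your user can read the API key
chmod 600 ~/openclaw/config.json

# Confirm the file parses as JSON before starting the agent;
# python3 -m json.tool exits non-zero and reports the bad line on invalid input
if python3 -m json.tool ~/openclaw/config.json > /dev/null; then
    echo "config.json OK"
fi
```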
Test the Agent Interactively
# Start OpenClaw in interactive mode to test it
cd ~/openclaw
npx openclaw start --config ./config.json
# Expected output:
# [OpenClaw] Agent initialized with model: claude-sonnet-4-6
# [OpenClaw] Telegram channel connected
# [OpenClaw] Listening for messages...
# Press Ctrl+C to stop (you will set up the persistent service in Step 4)
5. Step 3: Connect Messaging Channels
OpenClaw supports Telegram, Discord, WhatsApp, iMessage, Slack, and Signal. The most straightforward channel to set up is Telegram; create a bot via @BotFather and copy its token:
# Step 1: Open Telegram and search for @BotFather
# Step 2: Send /newbot and follow the prompts
# Step 3: Copy the token (format: 1234567890:AAF...)
# Test your bot token is valid:
curl -s "https://api.telegram.org/botYOUR_TOKEN/getMe"
# {"ok":true,"result":{"id":...,"first_name":"My Agent","username":"myagent_bot",...}}
Connect Discord
To connect a Discord bot, create an application in the Discord Developer Portal and grant it the necessary permissions:
# In Discord Developer Portal (https://discord.com/developers/applications):
# 1. Create a New Application and give it a name
# 2. Go to Bot > Add Bot
# 3. Copy the Bot Token
# 4. Enable "Message Content Intent" under Privileged Gateway Intents
# 5. Under OAuth2 > URL Generator, select bot + Send Messages + Read Message History
# 6. Use the generated URL to invite the bot to your server
# Update your config.json to enable Discord:
# "discord": { "enabled": true, "token": "YOUR_DISCORD_BOT_TOKEN" }
6. Step 4: Run as a Persistent launchd Service
For 24/7 operation, OpenClaw must start automatically on boot and restart on failure. macOS uses launchd for this purpose: you create a service plist in ~/Library/LaunchAgents/.
Create the launchd Plist
Save the following as ~/openclaw/com.openclaw.agent.plist (replace admin with your actual username and adjust the Node.js path from Step 1):
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
"http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>com.openclaw.agent</string>
<key>ProgramArguments</key>
<array>
<string>/Users/admin/.nvm/versions/node/v22.14.0/bin/node</string>
<string>/Users/admin/openclaw/node_modules/.bin/openclaw</string>
<string>start</string>
<string>--config</string>
<string>/Users/admin/openclaw/config.json</string>
</array>
<key>WorkingDirectory</key>
<string>/Users/admin/openclaw</string>
<key>RunAtLoad</key>
<true/>
<key>KeepAlive</key>
<true/>
<key>StandardOutPath</key>
<string>/Users/admin/openclaw/logs/openclaw.log</string>
<key>StandardErrorPath</key>
<string>/Users/admin/openclaw/logs/openclaw-error.log</string>
</dict>
</plist>
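launchd tends to reject a malformed plist without a useful error message, so lint the file first with plutil, which ships with macOS:

```shell
# Validate the plist syntax before installing it;
# plutil prints "<path>: OK" on success, or points at the offending line
plutil -lint ~/openclaw/com.openclaw.agent.plist
```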
Enable and Start the Service
# Save the plist to the LaunchAgents directory
# (Use your actual username instead of "admin")
cp ~/openclaw/com.openclaw.agent.plist ~/Library/LaunchAgents/
# Load and start the service (on current macOS, "launchctl load" still
# works but is superseded by "launchctl bootstrap gui/$UID <plist>")
launchctl load ~/Library/LaunchAgents/com.openclaw.agent.plist
launchctl start com.openclaw.agent
Verify the Service is Running
# Check that the service is running
launchctl list | grep openclaw
# 12345 0 com.openclaw.agent
# (First column is the PID; a number there means it's running)
# View live logs
tail -f ~/openclaw/logs/openclaw.log
# View error logs
tail -f ~/openclaw/logs/openclaw-error.log
Update OpenClaw
# Stop the service before updating
launchctl stop com.openclaw.agent
# Update OpenClaw to the latest version
cd ~/openclaw
npm update openclaw
# Restart the service
launchctl start com.openclaw.agent
# Confirm the new version is running
tail -5 ~/openclaw/logs/openclaw.log
7. Troubleshooting Common Issues
Service fails to start: "Cannot find module"
The absolute Node.js path in the plist is wrong, or npm packages are missing. Check the error log and reinstall:
# Check the error logs
tail -20 ~/openclaw/logs/openclaw-error.log
# Make sure to use the absolute Node.js binary path in the plist
which node
# /Users/admin/.nvm/versions/node/v22.14.0/bin/node
# Reinstall dependencies
cd ~/openclaw
npm install
Claude API returns 401 Unauthorized
Your API key is missing, incorrect, or your Anthropic account has no remaining credits. Verify the key:
# Test your API key directly
curl https://api.anthropic.com/v1/messages \
-H "x-api-key: YOUR_API_KEY" \
-H "anthropic-version: 2023-06-01" \
-H "content-type: application/json" \
-d '{"model":"claude-haiku-4-5","max_tokens":10,"messages":[{"role":"user","content":"Hi"}]}'
# Expected: {"id":"msg_...","type":"message",...}
# If you get 401: check your key at console.anthropic.com
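A frequent gotcha is testing a retyped key that differs from the one the agent actually loads. To rule that out, extract the key straight from config.json with python3 first (a generic sketch, not an OpenClaw command):

```shell
# Read the apiKey field out of config.json so the test
# uses exactly the value the agent sees
API_KEY=$(python3 -c "import json, os; print(json.load(open(os.path.expanduser('~/openclaw/config.json')))['apiKey'])")

# Repeat the test request with the extracted key
curl https://api.anthropic.com/v1/messages \
  -H "x-api-key: $API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{"model":"claude-haiku-4-5","max_tokens":10,"messages":[{"role":"user","content":"Hi"}]}'
```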
Telegram bot stops responding
The bot token may have been revoked or the service has crashed. Check the token and restart:
# Verify the bot token is still valid
curl -s "https://api.telegram.org/botYOUR_TOKEN/getMe"
# Restart the service
launchctl stop com.openclaw.agent
launchctl start com.openclaw.agent
# If the token was revoked, generate a new one via @BotFather (/mybots > API Token)
High memory usage after several days
OpenClaw accumulates conversation history in memory. Lower maxHistory in config.json and schedule a weekly restart:
# Reduce conversation history in config.json
# "maxHistory": 10 (default is 20 β lower values use less memory)
# Set up a weekly restart via cron
crontab -e
# Add this line to restart OpenClaw every Sunday at 3am:
# 0 3 * * 0 launchctl stop com.openclaw.agent; sleep 2; launchctl start com.openclaw.agent
# Monitor memory usage
ps aux | grep openclaw
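For a quick point-in-time reading, you can pull the agent's PID from launchctl and ask ps for its resident set size; the awk conversion below is plain arithmetic, nothing OpenClaw-specific:

```shell
# Print the agent's resident memory in MB, or a notice if it's down.
# launchctl list prints: PID  Status  Label (PID is "-" when stopped)
PID=$(launchctl list | awk '$3 == "com.openclaw.agent" { print $1 }')
if [ -n "$PID" ] && [ "$PID" != "-" ]; then
    ps -o rss= -p "$PID" | awk '{ printf "OpenClaw RSS: %.1f MB\n", $1 / 1024 }'
else
    echo "com.openclaw.agent is not running"
fi
```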
8. Cost Analysis vs. Cloud VPS
Here is a realistic cost comparison for running an AI agent 24/7 at different usage levels:
| Use Case | AI Calls/Month | Cloud VPS Cost | MyRemoteMac Cost | Monthly Savings |
|---|---|---|---|---|
| Personal assistant | ~500 calls | $120/mo (basic GPU VPS) | $75/mo | $45/mo |
| Small team bot | ~5,000 calls | $200/mo | $75/mo | $125/mo |
| Local LLM + agent | Unlimited (local) | $350+/mo (GPU VPS) | $75/mo | $275+/mo |
| Multi-agent system | 10,000+ calls | $600+/mo | $179/mo (M4 Pro) | $421+/mo |
Bottom Line: Running OpenClaw on a Mac Mini M4 is significantly cheaper than a comparable cloud AI VPS at every tier above. You also get native Apple Silicon for local LLMs via Ollama; route non-sensitive tasks to a free local model and save even more on API costs.
Related Guides
Run LLMs on Mac Mini M4
Run Ollama, llama.cpp, and local language models natively on Apple Silicon for zero-cost AI inference.
Mac Mini as AI & ML Server
Turn your Mac Mini into a private AI inference server for machine learning workloads.
Develop iOS Without a Physical Mac
Build, test, and sign iOS apps remotely using a cloud Mac Mini; no hardware purchase needed.