# For Node Runners

## **Introduction**

As a **Node Runner**, you contribute GPU-powered computation to the **DIVS decentralized network**, running **Vision-Language Models (VLMs)** to verify claims in images submitted by Builders.

Node Runners help build a **trustless, censorship-resistant truth layer for online images**.

This guide walks you through installing the node client, configuring your compute resources, and starting your first verification tasks.

***

## **1️⃣ Prerequisites**

✅ **Hardware Requirements:** We support a range of models — from GPU-hungry giants to ones that can chill on your laptop.

* **GPU (recommended):** NVIDIA with CUDA is ideal. More VRAM = happier models.
* **CPU (possible):** 4+ cores, but it’ll work… eventually. Great time to grab a coffee. Or two.
* **RAM:** At least 8GB. For bigger models, 16GB+ is safer.
* **Architecture**: x86\_64 and ARM supported (some models prefer x86 + CUDA).
* **Disk:** 120+ GB of free space for keeping things comfy.
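Not sure what your machine has? A quick way to check on Linux (assumes standard coreutils and procps; `nvidia-smi` will only be present if an NVIDIA driver is installed):

```shell
# CPU core count
nproc

# Total RAM
free -h | awk '/^Mem:/ {print $2}'

# Free disk space on the current filesystem
df -h . | awk 'NR==2 {print $4}'

# GPU model and VRAM (falls back to a message if no NVIDIA driver is found)
nvidia-smi --query-gpu=name,memory.total --format=csv,noheader 2>/dev/null \
  || echo "No NVIDIA GPU detected"
```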

✅ **Software Requirements**

* **Docker (v20+)** for containerized setup. That’s it. No additional installs or builds — just pull the image and run.

✅ **Network Requirements**

Our protocol uses peer-to-peer communication over UDP.

* **Ports:** Open **UDP ports 12000–12009** on your router or firewall.
* **Connectivity:** A stable public internet connection is best. NAT traversal is attempted, but port forwarding is recommended.
* **Docker note:** Make sure Docker can expose the above ports correctly.
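As an example, if your host uses `ufw`, opening the port range looks like this (a sketch for a typical Linux host; adapt the commands to whatever firewall you actually run):

```shell
# Allow inbound UDP on the DIVS peer-to-peer port range
sudo ufw allow 12000:12009/udp

# Confirm the rule is active
sudo ufw status | grep 12000
```

If your node sits behind a home router, you'll also want to forward UDP 12000–12009 to the machine running the container.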

✅ **DIVS Wallet Configuration**

* **Automatic:** Node keys are auto-generated on first run.
* **Optional override:** You can supply your own key using environment variables when starting the container.

***

## **2️⃣ Run the DIVS node**

### Create a Volume

Create a persistent Docker volume so the node doesn't re-download models every time the container restarts:

```
docker volume create wtns-vol
```
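You can confirm the volume exists before launching the node (assumes the Docker daemon is running):

```shell
docker volume inspect wtns-vol --format '{{ .Name }}: {{ .Mountpoint }}'
```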

### 🚀 Option 1: With NVIDIA GPU (Recommended) <a href="#option-1-with-nvidia-gpu-recommended" id="option-1-with-nvidia-gpu-recommended"></a>

For the best performance and support for larger models, run your Watchtower using a CUDA-enabled NVIDIA GPU:

```
docker run \
 -d \
 --gpus all \
 --network=host \
 -v wtns-vol:/root \
 -e WALLET_PUBLIC_KEY=0x_your_key_here \
 -e MODEL_NAME=MODEL_NAME \
 -e NETWORK=testnet \
 --name mywatchtower \
 witnesschain/infinity-watch-nvidia:2.0.0
```
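Once the container is up, a couple of quick checks confirm it's healthy and (for the GPU image) that CUDA is visible inside the container. These are standard Docker commands, not DIVS-specific:

```shell
# Is the container running?
docker ps --filter name=mywatchtower

# Follow the node's logs (Ctrl+C stops following; the node keeps running)
docker logs -f mywatchtower

# GPU image only: confirm the container can see the GPU
docker exec mywatchtower nvidia-smi
```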

### 🧪 Option 2: CPU-Only (Lightweight Model) <a href="#option-2-cpu-only-lightweight-model" id="option-2-cpu-only-lightweight-model"></a>

No GPU? You can still join the network by running a smaller model on your CPU:

```
docker run \
 -d \
 -v wtns-vol:/root \
 -e WALLET_PUBLIC_KEY=0x_your_key_here \
 -e MODEL_NAME=MODEL_NAME \
 -e NETWORK=testnet \
 --name mywatchtower \
 witnesschain/infinity-watch:2.0.0
```

{% hint style="info" %}
Note: if you want to use your own private key for your watchtower, add the environment variable PRIVATE\_KEY to your docker run command:

-e PRIVATE\_KEY="your\_custom\_private\_key"
{% endhint %}

### **Models Supported**

The models we currently support are listed below. Pick one and pass its full name as the MODEL\_NAME variable.

{% hint style="success" %}
We keep adding models frequently. If you want to add your model to the list, write to us at <support@witnesschain.com>
{% endhint %}

<table><thead><tr><th width="479.91796875">🤗 HuggingFace family </th><th align="center">RAM Requirement</th></tr></thead><tbody><tr><td>HuggingFaceTB/SmolVLM2-2.2B-Instruct</td><td align="center">6 GB</td></tr><tr><td>HuggingFaceTB/SmolVLM-500M-Instruct</td><td align="center">2 GB</td></tr><tr><td>HuggingFaceTB/SmolVLM-256M-Instruct</td><td align="center">1 GB</td></tr></tbody></table>

<table><thead><tr><th width="479.5625">🔮 Qwen Family</th><th align="center">RAM Requirement</th></tr></thead><tbody><tr><td>Qwen/Qwen2.5-VL-7B-Instruct</td><td align="center">16 GB</td></tr><tr><td>Qwen/Qwen2.5-VL-3B-Instruct</td><td align="center">8 GB</td></tr></tbody></table>

<table><thead><tr><th width="480.06640625">🧠 GLM Family</th><th align="center">RAM Requirement</th></tr></thead><tbody><tr><td>zai-org/GLM-4.1V-9B-Thinking</td><td align="center">22 GB</td></tr></tbody></table>
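Putting it together, choosing a model is just a matter of copying a name from the tables above into MODEL\_NAME. For example, the smallest CPU-friendly model (a sketch; substitute your own wallet key):

```shell
MODEL_NAME="HuggingFaceTB/SmolVLM-256M-Instruct"

docker run \
 -d \
 -v wtns-vol:/root \
 -e WALLET_PUBLIC_KEY=0x_your_key_here \
 -e MODEL_NAME="$MODEL_NAME" \
 -e NETWORK=testnet \
 --name mywatchtower \
 witnesschain/infinity-watch:2.0.0
```

Match the model's RAM requirement against your machine: a laptop with 8 GB of RAM can comfortably run the SmolVLM models, while the Qwen and GLM families want 8–22 GB (and ideally a GPU).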
