For Node Runners

Introduction

As a Node Runner, you contribute GPU-powered computation to the DIVS decentralized network, running Vision-Language Models (VLMs) to verify claims in images submitted by Builders.

Node Runners help build a trustless, censorship-resistant truth layer for online images.

This guide walks you through installing the node client, configuring your compute resources, and starting your first verification tasks.


1️⃣ Prerequisites

Hardware Requirements

We support a range of models — from GPU-hungry giants to ones that can chill on your laptop.

  • GPU (recommended): NVIDIA with CUDA is ideal. More VRAM = happier models.

  • CPU (possible): 4+ cores. It’ll work… eventually. Great time to grab a coffee. Or two.

  • RAM: At least 8GB. For bigger models, 16GB+ is safer.

  • Architecture: x86_64 and ARM supported (some models prefer x86 + CUDA).

  • Disk: 120+ GB of free space to keep things comfy.

Software Requirements

  • Docker (v20+) for containerized setup. That’s it. No additional installs or builds — just pull the image and run.
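You can check that your installed Docker meets the v20+ requirement before pulling the image. A minimal sketch — the `version_ok` helper is hypothetical and assumes the standard `docker --version` output format:

```shell
# Extract the major version from `docker --version` and compare against 20.
version_ok() {
  major=$(printf '%s\n' "$1" | sed -n 's/^Docker version \([0-9]*\).*/\1/p')
  [ "${major:-0}" -ge 20 ]
}

if version_ok "$(docker --version 2>/dev/null)"; then
  echo "Docker OK"
else
  echo "Install or upgrade Docker to v20+"
fi
```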

Network Requirements

Our protocol uses peer-to-peer communication over UDP.

  • Ports: Open UDP ports 12000–12009 on your router or firewall.

  • Connectivity: A stable public internet connection is best. NAT traversal is attempted, but port forwarding is recommended.

  • Docker note: Make sure Docker can expose the above ports correctly.
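On Linux hosts the UDP range can be opened with a short loop. This is a sketch assuming the ufw firewall (adapt the command for iptables, firewalld, or your router's port-forwarding UI); it is shown as a dry run — remove the `echo` to actually apply the rules:

```shell
# Dry run: print the ufw rules that would open UDP 12000-12009
# for the node's peer-to-peer traffic. Remove `echo` to apply (needs root).
for port in $(seq 12000 12009); do
  echo "ufw allow ${port}/udp"
done
```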

DIVS Wallet Configuration

  • Automatic: Node keys are auto-generated on first run.

  • Optional override: You can supply your own key using environment variables when starting the container.


2️⃣ Run the DIVS node

Create a Volume

Create a persistent volume so the node doesn’t re-download models on every restart:

docker volume create wtns-vol

🚀 Option 1: NVIDIA GPU (Recommended)

For the best performance and support for larger models, run your Watchtower on a CUDA-enabled NVIDIA GPU:

docker run \
 -d \
 --gpus all \
 --network=host \
 -v wtns-vol:/root \
 -e WALLET_PUBLIC_KEY=0x_your_key_here \
 -e MODEL_NAME=MODEL_NAME \
 -e NETWORK=testnet \
 --name mywatchtower \
 witnesschain/infinity-watch-nvidia:2.0.0

🧪 Option 2: CPU-Only (Lightweight Model)

No GPU? You can still join the network by running a smaller model on your CPU:

docker run \
 -d \
 -v wtns-vol:/root \
 -e WALLET_PUBLIC_KEY=0x_your_key_here \
 -e MODEL_NAME=MODEL_NAME \
 -e NETWORK=testnet \
 --name mywatchtower \
 witnesschain/infinity-watch:2.0.0

Note: If you want to use your own private key for your watchtower, add the PRIVATE_KEY environment variable to your docker run command:

-e PRIVATE_KEY="your_custom_private_key"
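Unsure which of the two images to pull? A small sketch that selects the CUDA image only when an NVIDIA driver is visible — it assumes `nvidia-smi` is on PATH whenever a usable GPU is present:

```shell
# Pick the CUDA image if nvidia-smi is available, otherwise the CPU-only image.
if command -v nvidia-smi >/dev/null 2>&1; then
  IMAGE=witnesschain/infinity-watch-nvidia:2.0.0
else
  IMAGE=witnesschain/infinity-watch:2.0.0
fi
echo "Using image: $IMAGE"
```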

Models Supported

The following models are currently supported. Pick one of them as the value of the MODEL_NAME variable.

🤗 HuggingFace family

| Model | RAM Requirement |
| --- | --- |
| HuggingFaceTB/SmolVLM2-2.2B-Instruct | 6 GB |
| HuggingFaceTB/SmolVLM-500M-Instruct | 2 GB |
| HuggingFaceTB/SmolVLM-256M-Instruct | 1 GB |

🔮 Qwen Family

| Model | RAM Requirement |
| --- | --- |
| Qwen/Qwen2.5-VL-7B-Instruct | 16 GB |
| Qwen/Qwen2.5-VL-3B-Instruct | 8 GB |

🧠 GLM Family

| Model | RAM Requirement |
| --- | --- |
| zai-org/GLM-4.1V-9B-Thinking | 22 GB |
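The RAM figures above can also drive an automated choice. A minimal sketch — the `pick_model` helper is hypothetical, with thresholds taken from the HuggingFace family listed above:

```shell
# Hypothetical helper: pick the largest SmolVLM model that fits in the
# given amount of free RAM (thresholds in GB, from the list above).
pick_model() {
  if   [ "$1" -ge 6 ]; then echo "HuggingFaceTB/SmolVLM2-2.2B-Instruct"
  elif [ "$1" -ge 2 ]; then echo "HuggingFaceTB/SmolVLM-500M-Instruct"
  else                      echo "HuggingFaceTB/SmolVLM-256M-Instruct"
  fi
}

# Example: a machine with 8 GB of RAM to spare
MODEL_NAME=$(pick_model 8)
echo "$MODEL_NAME"
```

The result can be passed straight to the `-e MODEL_NAME=...` flag in the docker run commands above.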
