Run a watchtower!
You can deploy one easily using Docker!
🚀 Option 1: With NVIDIA GPU (Recommended)
For the best performance and support for larger models, run your Watchtower using a CUDA-enabled NVIDIA GPU:
```bash
docker run \
  -d \
  --gpus all \
  --network=host \
  -e WALLET_PUBLIC_KEY=0x_your_key_here \
  -e MODEL_NAME=MODEL_NAME \
  -e NETWORK=testnet \
  --name wtns-test \
  wtns-ai.tail1d7c2.ts.net:5000/infinitywatch-nvidia:dev
```
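Once the container is up, it is worth confirming that it started cleanly and can actually see the GPU. The checks below are a quick sketch using standard Docker commands; the `nvidia-smi` call assumes the NVIDIA Container Toolkit is installed on the host and exposes the utility inside the container.

```bash
# Confirm the watchtower container is running
docker ps --filter name=wtns-test

# Follow the watchtower logs (Ctrl+C to stop following)
docker logs -f wtns-test

# Verify the GPU is visible from inside the container
# (assumes the NVIDIA Container Toolkit makes nvidia-smi available)
docker exec wtns-test nvidia-smi
```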
🧪 Option 2: CPU-Only (Lightweight Model)
No GPU? You can still join the network by running a smaller model on your CPU:
```bash
docker run \
  -d \
  -e WALLET_PUBLIC_KEY=0x_your_key_here \
  -e MODEL_NAME=MODEL_NAME \
  -e NETWORK=testnet \
  --name wtns-test \
  wtns-ai.tail1d7c2.ts.net:5000/infinitywatch:dev
```
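As with the GPU option, `docker logs wtns-test` shows whether the watchtower started cleanly. Docker does not let you change environment variables on a running container, so if you need to switch the model or wallet key, stop and remove the container and re-run the command above, roughly like this:

```bash
# Check the startup logs
docker logs wtns-test

# Remove the container before re-running `docker run` with new values
docker stop wtns-test
docker rm wtns-test
```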
Note:
The following are the models we support as of now. Pass one of them as MODEL_NAME (a worked example follows the list).
🤗 HuggingFace Family
HuggingFaceTB/SmolVLM2-2.2B-Instruct
HuggingFaceTB/SmolVLM-500M-Instruct
HuggingFaceTB/SmolVLM-256M-Instruct
🎮 Qwen Family
Qwen/Qwen2.5-VL-3B-Instruct
Qwen/Qwen2.5-VL-7B-Instruct
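As an illustration, here is the CPU-only command from Option 2 with the MODEL_NAME placeholder replaced by the smallest model in the list above (whether a larger model is practical on your CPU depends on your hardware):

```bash
# Example: CPU-only watchtower running SmolVLM-256M-Instruct
docker run \
  -d \
  -e WALLET_PUBLIC_KEY=0x_your_key_here \
  -e MODEL_NAME=HuggingFaceTB/SmolVLM-256M-Instruct \
  -e NETWORK=testnet \
  --name wtns-test \
  wtns-ai.tail1d7c2.ts.net:5000/infinitywatch:dev
```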