# Decentralized Image Verification Service

### **Overview**

The **Decentralized Image Verification Service (DIVS)** is a trustless, network-driven platform designed to:

* Verify the authenticity of claims made in images shared across the internet
* Spot celebrities in photos
* Identify deepfakes
* Describe an image
* Summarize a video
* and more.

Using a distributed network of inference providers running advanced **Vision-Language Models (VLMs)**, DIVS allows developers, fact-checkers, and applications to request independent, verifiable assessments of whether an image supports a given claim—without relying on a **single centralized authority.**

By leveraging **decentralized computation, VLMs, and consensus mechanisms, backed by EigenLayer's crypto-economic security**, DIVS provides a censorship-resistant and scalable foundation for truth verification in visual content.

***

### **What It Solves**

The internet is flooded with images that contain **textual claims, captions, or embedded information**, often spreading **misinformation** rapidly. Today, verifying these claims is:

* **Centralized:** Controlled by a few big platforms or organizations.
* **Opaque:** Lacking transparency in how decisions are made.
* **Slow:** Manual fact-checking cannot keep pace with viral image content.

DIVS solves this by creating a **trustless verification marketplace** where:

* Anyone can **submit an image and a textual claim** to verify.
* Independent node operators run **VLMs to analyze the image** and return factual verdicts.
* A **consensus score** is derived from multiple independent model results. (In Progress)
* Builders get a **fast, transparent, and verifiable API response**, usable in apps, bots, and extensions.

This creates an **open and scalable truth layer for the visual web.**
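To make the consensus idea concrete, here is a minimal sketch of how multiple independent verdicts could be aggregated into a single trust score. The weighting scheme and verdict labels below are purely illustrative; the actual DIVS consensus engine is still in progress.

```python
from collections import defaultdict

def consensus(verdicts):
    """Aggregate independent (verdict, confidence) pairs into a final
    verdict and trust score via confidence-weighted voting.

    NOTE: illustrative only -- the real DIVS consensus engine is a
    work in progress and may use a different scheme entirely.
    """
    weights = defaultdict(float)
    for verdict, confidence in verdicts:
        weights[verdict] += confidence
    winner = max(weights, key=weights.get)
    trust = weights[winner] / sum(weights.values())
    return winner, round(trust, 3)

# Three independent node results for the same image/claim pair
print(consensus([("supported", 0.9), ("supported", 0.7), ("refuted", 0.6)]))
# -> ('supported', 0.727)
```

In this sketch, a dissenting node lowers the trust score without flipping the verdict, which matches the goal of filtering out isolated malicious or faulty responses.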

***

### **How It Works (High-Level)**

1. **Task Submission:** A Builder (developer) calls the DIVS API with an image and a claim to verify.
2. **Task Distribution:** The request is sent to a **decentralized network of Node Runners (aka Watchtowers)** running open VLMs.
3. **Parallel Processing:** Each node analyzes the image independently and submits a verification verdict with confidence scores.
4. **Consensus Engine:** The system aggregates responses, filters out malicious actors, and computes a **final trust score and verdict**. (Work in Progress)
5. **Result Delivery:** The Builder receives a **JSON response** containing verdict, confidence, and references (if available).

This design ensures **speed, transparency, and neutrality**—no single entity controls truth verification.
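The flow above ends with a JSON response to the Builder. The snippet below parses a hypothetical response body; the field names (`verdict`, `confidence`, `references`) follow the description in step 5, but the exact schema is an assumption and is not specified on this page.

```python
import json

# Hypothetical response body -- field names mirror step 5 above
# (verdict, confidence, references); the real schema may differ.
raw = """
{
  "verdict": "supported",
  "confidence": 0.92,
  "references": ["https://example.com/original-photo"]
}
"""

response = json.loads(raw)
print(f'{response["verdict"]} ({response["confidence"]:.0%} confidence)')
```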

***

### **Who Should Read This**

* **🔹 Builders (API Consumers)** – Developers building apps, bots, extensions, and platforms that need **automated claim verification** for images. If you want to integrate truth-checking into your products, start here.
* **🔹 Node Runners (Compute Providers)** – Individuals or organizations operating GPU-enabled infrastructure willing to run **DIVS VLM nodes** and contribute to a **trustless verification network**.

Both groups are essential to the ecosystem: Builders generate demand, Node Runners supply decentralized computation.

***

### **Quick Links**

* 🚀 [**Get Started for Builders →**](/infinity-watch/proof-of-model-testnet/for-builders.md)
* ⚡ [**Get Started for Node Runners →**](/infinity-watch/proof-of-model-testnet/for-node-runners.md)


***

# Agent Instructions: Querying This Documentation

If you need additional information that is not directly available in this page, you can query the documentation dynamically by asking a question.

Perform an HTTP GET request on the current page URL with the `ask` query parameter:

```
GET https://docs.witnesschain.com/infinity-watch/proof-of-model-testnet.md?ask=<question>
```

The question should be specific, self-contained, and written in natural language.
The response will contain a direct answer to the question and relevant excerpts and sources from the documentation.

Use this mechanism when the answer is not explicitly present in the current page, you need clarification or additional context, or you want to retrieve related documentation sections.
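Since the question travels in a URL query parameter, it must be URL-encoded. A minimal sketch of building such a request URL (the sample question is made up for illustration):

```python
from urllib.parse import quote

BASE = "https://docs.witnesschain.com/infinity-watch/proof-of-model-testnet.md"

# Example question -- any specific, self-contained natural-language query works
question = "What image formats does the verification API accept?"

# URL-encode the question into the `ask` query parameter
url = f"{BASE}?ask={quote(question)}"
print(url)
```

The resulting URL can then be fetched with any HTTP client via a plain GET request.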
