
AI Watermarking: Proving What's Real

Jan 1, 2026 • 18 min read

The internet is flooded with AI-generated images, videos, and text. Distinguishing synthetic from human-created content is now a critical infrastructure problem — for electoral integrity, creative copyright, journalism, and trust in digital media. Two parallel approaches have emerged: invisible watermarking (embedding imperceptible signals directly into content) and cryptographic provenance (signing file metadata like a digital passport). Neither is a complete solution, and this guide explains exactly why.

1. Google SynthID: Statistical Invisible Watermarking

Developed by Google DeepMind, SynthID operates by embedding a statistically detectable pattern at the generation layer — invisible to humans, but mathematically verifiable by a detector.

How SynthID Text Watermarking Works

# Conceptual explanation of SynthID text watermarking
# (Based on published DeepMind research)

# Standard LLM text generation (no watermark):
# At each token position, the model outputs logits (raw scores for each vocabulary token)
# Standard temperature sampling: sample proportionally from softmax(logits)

import torch
import numpy as np

def standard_sample(logits, temperature=1.0):
    probs = torch.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1)

# SynthID watermarking approach (Pseudorandom Sampling):
# A secret PRNG (seeded with a hash of recent context tokens + secret key)
# Produces random scores for each vocabulary token
# Tokens with higher random scores are slightly "preferred" during sampling
# The bias is invisible (doesn't change semantic meaning or quality)
# But is detectable statistically across 100+ tokens

def synthid_sample(logits, secret_key, context_hash, temperature=1.0):
    # Generate a random "tournament" using secret key + context.
    # NOTE: Python's built-in hash() of a str is salted per process; a real
    # implementation would derive the seed with a keyed hash (e.g. HMAC-SHA256)
    # so generation and detection agree across runs.
    prng = np.random.RandomState(seed=hash(secret_key + str(context_hash)) % (2**32))
    random_scores = prng.random(size=logits.shape[-1])  # One score per vocabulary token
    
    # Add a small bias toward "green" tokens (those with high random_scores)
    watermark_bias = 2.0  # Tunable — higher = more detectable but slightly lower quality
    watermarked_logits = logits + watermark_bias * torch.tensor(random_scores, dtype=logits.dtype)
    
    probs = torch.softmax(watermarked_logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1)

# DETECTION:
# Given a sequence of N tokens, check for each token:
# "Was this token in the 'green' set (high random score)?"
# Unwatermarked text: ~50% green tokens (random)
# Watermarked text: ~65-75% green tokens (biased sampling)
# Statistical test (z-score) determines if correlation is significant

def detect_watermark(token_ids, secret_key, threshold=4.0):
    green_count = 0
    for i, token_id in enumerate(token_ids):
        # Re-derive the per-position PRNG seed exactly as at generation time
        context_hash = hash(tuple(token_ids[max(0, i-4):i]))  # Same 4-token context window
        prng = np.random.RandomState(seed=hash(secret_key + str(context_hash)) % (2**32))
        random_scores = prng.random(size=50257)  # vocab size (GPT-2-style BPE)
        if random_scores[token_id] > 0.5:  # Is this token in the "green" set?
            green_count += 1
    
    # Z-score: how many standard deviations above chance (50%)?
    # Under the null, green_count ~ Binomial(N, 0.5): mean N/2, std sqrt(N/4)
    n = len(token_ids)
    z_score = (green_count - n * 0.5) / (n * 0.25) ** 0.5
    return z_score > threshold, z_score  # Returns (is_watermarked, confidence)
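The z-score statistic can be sanity-checked on its own with simulated green-token counts; the 70% green rate below is illustrative of a watermarked sample, not a measurement:

```python
import math

def green_z_score(green_count: int, n_tokens: int) -> float:
    # Under the no-watermark null hypothesis, each token is "green" with
    # probability 0.5, so the count is Binomial(n, 0.5):
    # mean = n / 2, standard deviation = sqrt(n * 0.25)
    return (green_count - n_tokens * 0.5) / math.sqrt(n_tokens * 0.25)

# 200 unwatermarked tokens at the chance rate of 50% green: z = 0
print(round(green_z_score(100, 200), 2))  # 0.0
# 200 watermarked tokens at 70% green: well past the z > 4 threshold
print(round(green_z_score(140, 200), 2))  # 5.66
```

At 100 tokens the same 70% green rate gives z = 4.0, right at the threshold, which is why statistical detection needs sequences of roughly a hundred tokens or more.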

How SynthID Image Watermarking Works

For images, SynthID operates in the frequency domain rather than pixel space, making it robust to common image transformations:

# Image watermarking operates in the DCT/frequency domain
# Similar to JPEG compression artifacts — imperceptible but detectable

from PIL import Image
import numpy as np
from scipy.fft import dct, idct

def embed_image_watermark(image_array: np.ndarray, watermark_key: bytes) -> np.ndarray:
    """
    Embed an invisible watermark by modifying frequency-domain coefficients
    of a grayscale image (for color, apply to the luma channel).
    Robust to: JPEG compression, rescaling, moderate cropping, color adjustments.
    Not robust to: screenshot re-capture (captures pixel values, discards
                   frequency patterns), heavy editing (repainting large
                   regions), adversarial attacks.
    """
    # Convert to the frequency domain with a 2D DCT (same transform family
    # as JPEG); norm='ortho' makes idct the exact inverse of dct
    dct_image = dct(dct(image_array.astype(float), axis=0, norm='ortho'),
                    axis=1, norm='ortho')
    
    # Generate a pseudo-random ±1 pattern from the secret key
    rng = np.random.RandomState(seed=int.from_bytes(watermark_key, 'big') % (2**32))
    pattern = rng.choice([-1.0, 1.0], size=dct_image.shape)
    
    # Embed only in the mid-frequency band:
    # High frequencies: visible as noise, removed by compression
    # Low frequencies: affect overall image appearance (obvious)
    # Mid frequencies: invisible AND survive most transformations
    yy, xx = np.mgrid[0:dct_image.shape[0], 0:dct_image.shape[1]]
    band = (yy + xx > min(dct_image.shape) // 8) & (yy + xx < min(dct_image.shape) // 2)
    strength = 0.5  # Tuned for imperceptibility vs. detection robustness
    watermarked_dct = dct_image + strength * pattern * band
    
    # Convert back to the pixel domain
    return idct(idct(watermarked_dct, axis=1, norm='ortho'),
                axis=0, norm='ortho').clip(0, 255).astype(np.uint8)

# DETECTION recovers the pattern correlation even after JPEG at quality=70,
# 20% resize, or mild color grading — but NOT after screenshot re-capture
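Detection can be sketched as a correlation test: regenerate the ±1 pattern from the key, transform the suspect image, and check whether the mid-band DCT coefficients correlate with the pattern. This is a simplified stand-in for SynthID's actual (learned) detector, and `demo-key` is a made-up key:

```python
import numpy as np
from scipy.fft import dct, idct

def watermark_correlation(image_array: np.ndarray, watermark_key: bytes) -> float:
    """Mean product of the image's mid-band DCT coefficients with the
    key-derived ±1 pattern: near 0 for clean images, near +strength
    for watermarked ones."""
    coeffs = dct(dct(image_array.astype(float), axis=0, norm='ortho'),
                 axis=1, norm='ortho')
    rng = np.random.RandomState(seed=int.from_bytes(watermark_key, 'big') % (2**32))
    pattern = rng.choice([-1.0, 1.0], size=coeffs.shape)
    yy, xx = np.mgrid[0:coeffs.shape[0], 0:coeffs.shape[1]]
    band = (yy + xx > min(coeffs.shape) // 8) & (yy + xx < min(coeffs.shape) // 2)
    return float(np.mean(coeffs[band] * pattern[band]))

# Demo on a synthetic 64x64 gradient (no real photo needed)
key = b"demo-key"
clean = (np.tile(np.arange(64), (64, 1)) + 50).astype(np.uint8)

# Embed: add 0.5 * pattern to the mid-band DCT coefficients
coeffs = dct(dct(clean.astype(float), axis=0, norm='ortho'), axis=1, norm='ortho')
rng = np.random.RandomState(seed=int.from_bytes(key, 'big') % (2**32))
pattern = rng.choice([-1.0, 1.0], size=coeffs.shape)
yy, xx = np.mgrid[0:64, 0:64]
band = (yy + xx > 8) & (yy + xx < 32)
marked = idct(idct(coeffs + 0.5 * pattern * band, axis=1, norm='ortho'),
              axis=0, norm='ortho').clip(0, 255).astype(np.uint8)

print(watermark_correlation(clean, key))   # near 0
print(watermark_correlation(marked, key))  # near +0.5
```

A real detector would also search over shifts and scales to survive cropping and resizing; the plain correlation above assumes the image is unmodified apart from quantization.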

2. C2PA Content Credentials: Cryptographic Provenance

# C2PA (Coalition for Content Provenance and Authenticity)
# Supported by: Adobe, Microsoft, OpenAI (DALL-E 3), Leica, Nikon, Canon, BBC

# How it works: A "manifest" is attached to the file (like a chain of custody document)
# Each step in the image's life (capture, edit, AI generation) is recorded
# The manifest is cryptographically signed by the software/device that made the change

# C2PA Manifest JSON structure (simplified):
manifest = {
    "claim_generator": "Adobe Photoshop/25.0",
    "claim_generator_info": [{"name": "Adobe Photoshop", "version": "25.0"}],
    
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        "softwareAgent": "DALL-E 3",
                        "when": "2024-11-15T14:30:00Z",
                        # IPTC term identifying AI-generated ("trained algorithmic") media
                        "digitalSourceType": "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
                    },
                    {
                        "action": "c2pa.edited",      # User opened in Photoshop
                        "softwareAgent": "Adobe Photoshop 25.0",
                        "when": "2024-11-15T15:00:00Z",
                    }
                ]
            }
        },
        {
            "label": "c2pa.hash.data",  # Hash of the file content at signing time
            "data": {"alg": "sha256", "hash": "a3b4c5..."},
        }
    ],
    
    # Ed25519 signature by Adobe's authorized certificate
    "signature": "3045022100d4e5f6...",
    "certificate_chain": ["-----BEGIN CERTIFICATE-----\nMIIB...", "..."]
}

# Verification process:
# 1. Read manifest from file metadata (JFIF, XMP, or BMFF container)
# 2. Look up signing certificate in Certificate Transparency log
# 3. Verify certificate chains to Adobe/Microsoft/Leica root CA
# 4. Verify file hash matches — any pixel change invalidates the manifest
# 5. Report: "Created by DALL-E 3, edited in Photoshop, signed by Adobe"
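Step 4 is ordinary cryptographic hashing and can be illustrated with the standard library; the helper below is a sketch, not the SDK's API, and real C2PA hash assertions use exclusion ranges so the embedded manifest bytes themselves are not part of the hashed content:

```python
import hashlib

def content_hash_matches(image_bytes: bytes, manifest_hash_hex: str) -> bool:
    """Recompute the asset hash and compare it to the manifest's
    c2pa.hash.data value. (This sketch hashes the whole buffer;
    real C2PA excludes the manifest's own byte range.)"""
    return hashlib.sha256(image_bytes).hexdigest() == manifest_hash_hex

original = b"...image bytes..."
recorded = hashlib.sha256(original).hexdigest()

print(content_hash_matches(original, recorded))            # True
print(content_hash_matches(original + b"\x00", recorded))  # False: any byte change invalidates
```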

# Try the C2PA Python SDK:
# pip install c2pa-python
import c2pa

# Read C2PA manifest from an image
with open("image.jpg", "rb") as f:
    image_bytes = f.read()

# Note: the exact Reader API differs across c2pa-python releases;
# check the SDK docs for your installed version
reader = c2pa.Reader("image/jpeg", image_bytes)
manifest_store = reader.get_manifest_store_json()
# Returns JSON with the full provenance chain

3. Comparison: Which Approach Wins?

| Property | SynthID (invisible) | C2PA (metadata) |
| --- | --- | --- |
| Screenshot survival | ⚠️ Partial (image: yes, text: no) | ❌ Screenshot strips metadata |
| JPEG compression | ✅ Survives low-quality JPEG | ✅ Survives (hash updated) |
| Privacy | ✅ No edit history revealed | ⚠️ Full edit history visible |
| Trust model | ⚠️ Requires Google as gatekeeper | ✅ Decentralized PKI |
| Paraphrase attack | ❌ Destroyed by rewriting with an LLM | ✅ New manifest created (shows AI edit) |
| Setup requirement | ✅ Automatic (built into the model) | ⚠️ Requires software/device support |
| Open-source tools | ⚠️ Detection model proprietary | ✅ Open standard, open SDK |

Frequently Asked Questions

Can users remove or forge C2PA metadata?

Stripping it is trivial: almost any screen-capture tool, image resizer, or format converter removes EXIF/XMP metadata, eliminating the C2PA manifest. Forging it is computationally infeasible: creating a valid manifest requires a certificate signed by a recognized C2PA member authority (Adobe, Microsoft, etc.). An attacker cannot create a certificate claiming to be a legitimate device/software without access to a compromised private key. So: absence of C2PA metadata doesn't prove AI generation (it might have been stripped), but presence of valid metadata is a strong signal of authentic provenance.
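That asymmetry, trivial to strip but infeasible to forge, can be illustrated with a toy signature scheme. Real C2PA uses certificate-based public-key signatures (e.g. ECDSA or Ed25519), not the symmetric HMAC stand-in below, but the key property is the same: without the signing key, no valid signature can be produced.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"held-only-by-the-certified-vendor"  # stand-in for a private key

def sign_manifest(manifest: dict, key: bytes) -> str:
    payload = json.dumps(manifest, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_manifest(manifest: dict, signature: str, key: bytes) -> bool:
    return hmac.compare_digest(sign_manifest(manifest, key), signature)

manifest = {"claim_generator": "DALL-E 3", "action": "c2pa.created"}
sig = sign_manifest(manifest, SIGNING_KEY)

# Stripping: just discard manifest + signature; nothing breaks, provenance is gone.
# Forging: without the key, an attacker cannot produce a valid signature.
forged = sign_manifest(manifest, b"attacker-guess")
print(verify_manifest(manifest, sig, SIGNING_KEY))     # True
print(verify_manifest(manifest, forged, SIGNING_KEY))  # False
```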

What about AI text detectors like GPTZero? Are they reliable?

AI text detectors that rely on perplexity-based heuristics (measuring how "surprising" each word is to a reference model) are not reliable enough for consequential decisions. They have false positive rates of 5-20% on human text, meaning as many as one in five human essays can be flagged as AI-generated. They fail on paraphrased AI text, on non-native English speakers (whose simpler phrasing resembles AI output), and on scientific and technical writing (which has low perplexity by nature). SynthID-based detection (using the statistical token watermark) is far more reliable when the watermark is present, but it requires that the content was generated with watermarking enabled; content from open-weight models such as Llama carries no such watermark.
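For intuition, perplexity is just the exponentiated average negative log-probability a reference model assigns to the text. The per-token probabilities below are made up for illustration:

```python
import math

def perplexity(token_probs):
    """exp(mean negative log-probability). Lower = more 'predictable' text."""
    return math.exp(-sum(math.log(p) for p in token_probs) / len(token_probs))

# Hypothetical per-token probabilities under a reference model
ai_like    = [0.4, 0.5, 0.35, 0.45, 0.5]   # predictable phrasing
human_like = [0.1, 0.3, 0.05, 0.2, 0.15]   # more surprising word choices

print(round(perplexity(ai_like), 2))     # ≈ 2.29 (lower)
print(round(perplexity(human_like), 2))  # ≈ 7.4 (higher)
```

Technical writing and non-native prose also score low on this statistic, which is exactly why low perplexity alone cannot prove AI authorship.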

Conclusion

AI watermarking is a necessary but incomplete solution to the synthetic content problem. SynthID provides robust invisible watermarking that survives most image transformations, but it can be defeated by screenshot re-capture, paraphrase attacks on text, and adversarial pixel perturbations. C2PA provides a cryptographically verifiable provenance chain that is computationally infeasible to forge, but trivially stripped by metadata-removing tools. The most realistic near-term approach combines all three layers: C2PA for authenticated devices and software (cameras, Adobe apps), SynthID or equivalent watermarks for AI generation platforms, and platform-level transparency labels in social media feeds that don't rely on the content itself.

Written by Vivek, AI Engineer

Full-stack AI engineer with 4+ years building LLM-powered products, autonomous agents, and RAG pipelines. I've shipped AI features to production for startups and worked hands-on with GPT-4o, LangChain, LlamaIndex, and the Vercel AI SDK. I started OpnCrafter to share everything I wish I had when learning — no fluff, just working code and real-world context.

GPT-4o · LangChain · Next.js · Vector DBs · RAG · Vercel AI SDK