The fraud you catch
isn't the problem.
Sophisticated attackers operate below your detection floor. VerifyStack probes hardware physics — oscillator drift, shader timing, silicon signatures — signals that user-agent strings and canvas hashes can't reach. Zero ML. Every decision is deterministic and reproducible.
10,000 free requests · No credit card · Full detection stack
Don't trust us.
Verify it yourself.
The analysis below is the same production Titan engine that powers every API call. Your device is being scored right now — hardware timing, behavioral signals, evasion checks — all server-side. Open DevTools → Network and inspect the raw POST /api/v1/analyze payload yourself.
- Real hardware probes: GPU shader timing, audio DAC fingerprint, crystal oscillator drift
- Server-side scoring only — zero analysis logic exposed to the client
- Full signal-level transparency — every score component is explainable
- Identical engine, identical code path — no demo mode, no mock data
The signals your stack
doesn't collect.
Most fraud vendors cite vague “signal counts” with no breakdown. Below is the actual coverage report from your device — hardware timing, surface fingerprints, privacy indicators, media stack, and network metadata — with per-category verification status.
Signals your current stack cannot access
- Crystal oscillator drift — Picosecond-level CPU clock jitter from the physical quartz crystal
- GPU shader execution timing — Transcendental math throughput unique to each GPU die via WGSL
- Audio DAC chip fingerprint — 256-point spectral hash from OfflineAudioContext + DynamicsCompressor
- Accelerometer calibration bias — Zero-G offset physically unique per MEMS sensor — unclonable
- WASM micro-architecture ratio — Integer vs float ALU/FPU throughput identifies the CPU family
Signal Coverage Matrix (Live)
Real-time measurement of signal depth for this browser session.
We see your silicon.
Nine independent hardware probes — from GPU shader execution curves to accelerometer zero-G bias — produce a device identity rooted in physics, not in cookies or JavaScript properties. VMs, emulators, and anti-detect browsers leave measurable analog artifacts.
Six physical primitives.
One unforgeable signal.
Each layer probes a different physical property of the device. Spoofing one channel is tractable. Maintaining consistency across all six simultaneously — under cross-modality correlation — is a computationally infeasible adversarial problem.
Hardware Fingerprinting
Physical identity that persists across sessions
GPU shader timing via WGSL compute kernels measures transcendental math throughput unique to each GPU die. WebGL renderer hash across 12 parameters, audio DAC oscillator fingerprint (triangle wave → DynamicsCompressor → 256-point FFT hash), and canvas pixel-level spectral decomposition produce a device signature that survives session clearing, browser switching, and incognito mode.
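As a sketch of how several surface signals can be folded into one stable identifier, here is a minimal FNV-1a 64-bit fold. The field names, canonical ordering, and hash choice are illustrative assumptions, not the production probe code (which runs server-side):

```typescript
// Minimal sketch: fold several fingerprint surfaces into one stable
// 64-bit FNV-1a hash. Field names and ordering are hypothetical.
const FNV_OFFSET = 0xcbf29ce484222325n;
const FNV_PRIME = 0x100000001b3n;
const MASK64 = 0xffffffffffffffffn;

function fnv1a64(input: string): bigint {
  let hash = FNV_OFFSET;
  for (const byte of Buffer.from(input, "utf8")) {
    hash ^= BigInt(byte);
    hash = (hash * FNV_PRIME) & MASK64; // keep the product in 64 bits
  }
  return hash;
}

interface SurfaceSignals {
  webglRenderer: string; // reported renderer string
  audioHash: string;     // 256-point spectral hash, hex
  canvasHash: string;    // canvas spectral hash, hex
}

function deviceId(s: SurfaceSignals): string {
  // A fixed canonical order keeps the hash deterministic across sessions.
  const canonical = [s.webglRenderer, s.audioHash, s.canvasHash].join("|");
  return fnv1a64(canonical).toString(16).padStart(16, "0");
}
```

Because the fold is one-way, the identifier is stable across sessions without storing any raw surface values.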
Behavioral Physics
12-technique motor-control analysis no script can replicate
Detrended Fluctuation Analysis (DFA, Peng et al. 1994) extracts the scaling exponent α of mouse trajectory residuals — humans exhibit α ≈ 0.6–0.8 (persistent fBm), bots produce α ≈ 0.5 (random walk). Recurrence Quantification Analysis (RQA) builds binary recurrence matrices and computes determinism, laminarity, and trapping time. FFT micro-tremor detection isolates the 8–12 Hz physiological band. Approximate Entropy (ApEn) and spectral entropy quantify regularity vs. complexity. Bézier curvature deviation, jerk profiles (Flash & Hogan 1985), and digraph latency models (Monrose & Rubin 2000) form a multi-modal behavioral identity unforgeable in real time.
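A minimal DFA sketch shows how the scaling exponent α is extracted. Non-overlapping windows, linear detrending, and the fixed window sizes below are simplifying assumptions; input would be a 1-D series such as trajectory residuals:

```typescript
// DFA sketch: integrate the series, detrend per window, and regress
// log fluctuation against log window size to get α.
function dfaAlpha(series: number[], windows: number[] = [4, 8, 16, 32, 64]): number {
  const mean = series.reduce((a, b) => a + b, 0) / series.length;
  // Integrated profile of the mean-removed series.
  const profile: number[] = [];
  let acc = 0;
  for (const x of series) { acc += x - mean; profile.push(acc); }

  const logN: number[] = [], logF: number[] = [];
  for (const n of windows) {
    const segments = Math.floor(profile.length / n);
    if (segments < 2) continue;
    let sumSq = 0;
    for (let s = 0; s < segments; s++) {
      const seg = profile.slice(s * n, (s + 1) * n);
      // Least-squares linear detrend inside the window.
      const tMean = (n - 1) / 2;
      const yMean = seg.reduce((a, b) => a + b, 0) / n;
      let cov = 0, varT = 0;
      for (let i = 0; i < n; i++) {
        cov += (i - tMean) * (seg[i] - yMean);
        varT += (i - tMean) ** 2;
      }
      const slope = cov / varT, intercept = yMean - slope * tMean;
      for (let i = 0; i < n; i++) sumSq += (seg[i] - (slope * i + intercept)) ** 2;
    }
    logN.push(Math.log(n));
    logF.push(Math.log(Math.sqrt(sumSq / (segments * n))));
  }
  // α is the slope of log F(n) against log n.
  const m = logN.length;
  const xm = logN.reduce((a, b) => a + b, 0) / m;
  const ym = logF.reduce((a, b) => a + b, 0) / m;
  let num = 0, den = 0;
  for (let i = 0; i < m; i++) {
    num += (logN[i] - xm) * (logF[i] - ym);
    den += (logN[i] - xm) ** 2;
  }
  return num / den;
}
```

White noise yields α near 0.5 (the "random walk" signature of scripted motion), while persistent human-like series push α toward 0.6–0.8.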
Bayesian Beta Fusion
Evidence accumulation with mathematical guarantees
151 detection techniques across 12 independent analyzers are scored independently, then fused via Beta-distribution conjugate priors with calibrated weights. The Positive Trust Model enforces asymmetry: only 8 specific proof layers (Proof of Work, WebAuthn, CAPTCHA, device binding, hardware attestation, among others) can lower fraud scores. Bounded weight updates (max 5% shift per feedback event, 2–40% clipping) prevent adversarial manipulation of the scoring surface.
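A minimal sketch of conjugate Beta fusion plus the bounded feedback rule. The prior, the pseudo-count encoding of analyzer evidence, and the weight values are assumptions for illustration:

```typescript
// Evidence from each analyzer as Beta pseudo-counts.
interface Evidence { fraud: number; benign: number }

function fuse(evidence: Evidence[], weights: number[], prior = { a: 1, b: 1 }): number {
  // Conjugate update: weighted pseudo-counts are added to the Beta prior.
  let a = prior.a, b = prior.b;
  evidence.forEach((e, i) => { a += weights[i] * e.fraud; b += weights[i] * e.benign; });
  return (100 * a) / (a + b); // posterior mean, scaled to a 0–100 score
}

function updateWeight(w: number, delta: number): number {
  // Bounded feedback: at most a 5-point shift, then clip to [2%, 40%].
  const step = Math.max(-0.05, Math.min(0.05, delta));
  return Math.max(0.02, Math.min(0.4, w + step));
}
```

Because the update is a closed-form conjugate step, the fused score is deterministic and re-derivable from the logged evidence.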
FFT Spectral Analysis
Frequency-domain signatures from analog hardware
Cooley-Tukey Radix-2 FFT extracts spectral entropy and centroid from canvas luminance distributions — encoding GPU rasterization characteristics invisible to JavaScript property spoofing. Audio phase coherence via dual-oscillator harmonic ratio fingerprints the DAC chip. Font sub-pixel rendering spectral analysis captures text rasterizer behavior unique to each OS/GPU combination.
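The two frequency-domain features can be sketched as follows. A naive O(n²) DFT stands in for the Radix-2 FFT to keep the example short; the input frame and feature definitions are illustrative:

```typescript
// Magnitude spectrum via a naive DFT (FFT omitted for brevity).
function magnitudeSpectrum(frame: number[]): number[] {
  const n = frame.length, half = n >> 1, mags: number[] = [];
  for (let k = 0; k < half; k++) {
    let re = 0, im = 0;
    for (let t = 0; t < n; t++) {
      const phi = (-2 * Math.PI * k * t) / n;
      re += frame[t] * Math.cos(phi);
      im += frame[t] * Math.sin(phi);
    }
    mags.push(Math.hypot(re, im));
  }
  return mags;
}

function spectralFeatures(frame: number[]) {
  const mags = magnitudeSpectrum(frame);
  const total = mags.reduce((a, b) => a + b, 0) || 1;
  const p = mags.map((m) => m / total);
  // Shannon entropy of the normalized spectrum, in bits.
  const entropy = -p.reduce((a, pi) => a + (pi > 0 ? pi * Math.log2(pi) : 0), 0);
  // Centroid: amplitude-weighted mean frequency bin.
  const centroid = p.reduce((a, pi, k) => a + k * pi, 0);
  return { entropy, centroid };
}
```

A tonal, concentrated spectrum gives low entropy; noisy rasterization artifacts spread energy and raise it, which is what makes the pair discriminative.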
Micro-Architecture Profiling
CPU-level identification below the OS layer
Crystal oscillator drift measures picosecond-level CPU clock jitter from the physical quartz crystal — a manufacturing variance unique to each chip. WASM instruction stream profiling computes integer vs float ALU/FPU throughput ratios that distinguish Apple Firestorm from Intel Skylake from AMD Zen 4. Accelerometer zero-G calibration bias provides a physically unclonable sensor fingerprint.
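The ALU/FPU ratio idea can be sketched in plain TypeScript as a stand-in for the WASM instruction stream. The loop bodies, iteration counts, and timing source are illustrative, and absolute timings vary per machine; only the ratio carries signal:

```typescript
import { performance } from "node:perf_hooks";

// Take the best of several runs to reduce scheduler noise.
function timeLoop(body: () => void, iters = 5): number {
  let best = Infinity;
  for (let i = 0; i < iters; i++) {
    const t0 = performance.now();
    body();
    best = Math.min(best, performance.now() - t0);
  }
  return best;
}

function aluFpuRatio(n = 2_000_000): number {
  let xi = 1 | 0;
  const intMs = timeLoop(() => {
    for (let i = 0; i < n; i++) xi = (xi * 48271 + i) | 0; // integer ALU work
  });
  let xf = 1.0;
  const floatMs = timeLoop(() => {
    for (let i = 0; i < n; i++) xf = xf * 1.0000001 + 0.5; // FPU work
  });
  // Keep the accumulators observable so the JIT cannot elide the loops.
  if (!Number.isFinite(xf) || !Number.isFinite(xi)) throw new Error("unreachable");
  return intMs / floatMs;
}
```

Different micro-architectures land in different ratio bands, which is what lets the measurement separate CPU families without reading any identifying property.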
Cross-Modality Correlation
12-strategy distributed attack detection engine
Louvain community detection (Blondel et al. 2008) builds IP–device–session graphs and extracts modularity-maximized clusters to expose coordinated botnets. MinHash LSH (Broder 1997) and SimHash (Charikar 2002) perform near-duplicate fingerprint detection. HyperLogLog (Flajolet et al. 2007) provides cardinality estimation of distinct devices per entity. Temporal clustering via Silverman's rule-of-thumb KDE identifies burst patterns. Autocorrelation periodicity detection, Haversine impossible-travel, and ASN concentration entropy round out the correlation stack.
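Of these, SimHash is the most compact to sketch. The 32-bit width, unit feature weights, and FNV-style feature hash below are simplifying assumptions; near-duplicate fingerprints land within a small Hamming distance of each other:

```typescript
// 32-bit FNV-1a over a feature string.
function hash32(s: string): number {
  let h = 0x811c9dc5;
  for (let i = 0; i < s.length; i++) {
    h ^= s.charCodeAt(i);
    h = Math.imul(h, 0x01000193);
  }
  return h >>> 0;
}

// SimHash (Charikar 2002): per-bit majority vote over feature hashes.
function simhash(features: string[]): number {
  const v = new Array(32).fill(0);
  for (const f of features) {
    const h = hash32(f);
    for (let b = 0; b < 32; b++) v[b] += (h >>> b) & 1 ? 1 : -1;
  }
  let out = 0;
  for (let b = 0; b < 32; b++) if (v[b] > 0) out |= 1 << b;
  return out >>> 0;
}

function hamming(a: number, b: number): number {
  let x = (a ^ b) >>> 0, c = 0;
  while (x) { c += x & 1; x >>>= 1; }
  return c;
}
```

Unlike a cryptographic hash, SimHash degrades gracefully: two fingerprints differing in one feature differ in only a few bits, which is what makes bucketing by Hamming distance possible at scale.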
12 analyzers. 151 techniques.
Zero ML models.
Fingerprinting is layer one. Beneath it: velocity analysis, graph-based fraud ring detection (PageRank, k-core decomposition), causal inference (propensity score matching, regression discontinuity), temporal Markov modeling, information-theoretic entropy measurement, TLS fingerprinting (JA3/JA4), WebAuthn device binding, and game-theoretic threshold optimization via Nash equilibrium computation.
Every layer is deterministic. Same inputs → same output. No training data, no model drift, no black-box scoring. The dual-path engine handles 85% of requests with 47ms p95 server-side via the fast path. Only the ~5% grey-zone enters deep analysis (under 200ms). End-to-end: 145–265ms.
Device Intelligence
Canvas noise, GPU timing, AAGUID, VM/emulator detection
Behavioral Biometrics
FFT timing analysis, jerk derivatives, micro-tremor band
Evasion Detection
Residential proxy rotation, anti-detect browser signatures
TLS Fingerprinting
JA3/JA4/JA4+ hashes (via hosting platform), cipher analysis
Causal Inference
Propensity score matching, diff-in-diff, regression discontinuity
Information Theory
Shannon, Rényi, transfer entropy, Lempel-Ziv complexity
Device Binding
WebAuthn/FIDO2, TPM attestation, sign count forensics
Hardware Similarity
Cross-signal hardware validation, timing consistency checks
Steganographic Honeypot
Hidden form fields, invisible links, CSS traps, decoy endpoints
Hardware Timing
Crystal oscillator drift, GPU contention, memory latency validation
Time Series Anomaly
16-technique engine: Z-score, MAD, IQR, Grubbs, Holt-Winters, CUSUM, EWMA, ARIMA(1,1,1), Isolation Forest, LOF, Bayesian change-point, PELT, Spectral Residual (FFT), Matrix Profile (STOMP), Mahalanobis, adaptive ensemble
Behavioral Physics
12-technique engine: DFA (Peng 1994), RQA (Marwan 2007), Approximate Entropy, FFT micro-tremor 8–12Hz, spectral entropy, Bézier curvature, jerk profile, digraph latency, keystroke rhythm/burstiness, form-fill timing, honeypot, Bayesian fusion
Proof of Work
SHA-256 hashcash, memory-hard puzzles, time-lock VDFs, solve-time forensics
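The hashcash primitive from the Proof of Work card can be sketched in a few lines. The challenge format, difficulty, and nonce encoding here are illustrative assumptions:

```typescript
import { createHash } from "node:crypto";

// Hashcash-style proof of work: find a nonce whose SHA-256 digest over
// (challenge + nonce) starts with at least `bits` zero bits.
function solve(challenge: string, bits: number): number {
  for (let nonce = 0; ; nonce++) {
    if (leadingZeroBits(digest(challenge, nonce)) >= bits) return nonce;
  }
}

function verify(challenge: string, nonce: number, bits: number): boolean {
  return leadingZeroBits(digest(challenge, nonce)) >= bits;
}

function digest(challenge: string, nonce: number): Buffer {
  return createHash("sha256").update(`${challenge}:${nonce}`).digest();
}

function leadingZeroBits(buf: Buffer): number {
  let bits = 0;
  for (const byte of buf) {
    if (byte === 0) { bits += 8; continue; }
    return bits + Math.clz32(byte) - 24; // clz32 counts from bit 31
  }
  return bits;
}
```

Verification is one hash; solving costs 2^bits hashes on average, and the observed solve time itself becomes a forensic signal.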
Three steps. Ten minutes.
Production-grade fraud detection.
From npm install to blocking your first fraudulent session.
Add the SDK
One script tag — no build step required. Initialize with your API key. The SDK silently collects 83+ hardware and behavioral signals — all collection logic runs client-side, all analysis runs server-side. Under 17 KB gzipped.
<script src="https://verifystack.io/sdk/browser.js"></script>
Score Every Visitor
One API call returns a confidence-weighted risk score (0–100), a deterministic decision (ALLOW / CHALLENGE / DENY), and the full signal chain with per-layer breakdown. Fast path: 47ms p95. Deep analysis: under 200ms.
await vs.decide({ userId, action })
Enforce Your Policy
Set your own thresholds. Wire up webhooks for async decisions. Submit ground-truth feedback to tune scoring weights (bounded: max 5% shift, 2–40% clipping). Every decision is auditable and reproducible.
if (result.decision === "deny") block()
The entire integration
fits in one file.
One API call. One response with a deterministic risk score, decision, and full signal chain. TypeScript-first with complete type safety. No SDK lock-in. No client-side scoring logic exposed. Works with Next.js, Express, Django, Rails — any backend.
- TypeScript SDK with full type inference and JSDoc documentation
- Under 15KB gzipped — no client-side performance impact
- Webhooks for async risk decisions with retry and HMAC signing
- REST API with OpenAPI 3.0 spec — use any HTTP client
// Add via <script src="https://verifystack.io/sdk/browser.js"></script>
// Or import as ES module:
import { VerifyStack } from 'https://verifystack.io/sdk/browser.mjs';
const vs = new VerifyStack({
  apiKey: 'pk_live_xxxxxxxxx',
  endpoint: 'https://verifystack.io'
});
// Collect 83+ hardware signals and score the visitor
const result = await vs.decide({
  userId: session.id,
  action: 'login',
});
// result.decision: 'allow' | 'challenge' | 'deny'
// result.score: 0–100 (deterministic)
// result.reasons: string[] — risk factors detected
if (result.decision === 'deny') {
  redirect('/blocked');
} else if (result.decision === 'challenge') {
  await presentChallenge(result.evidenceId);
}
Every decision feeds the network.
Consortium intelligence across all VerifyStack customers. When one customer blocks a fraudulent device, the signal propagates network-wide. No PII shared — only anonymized device hashes and risk indicators.
Global Threat Intelligence
Real-time threat detection across edge nodes
The Uncomfortable Details
Most vendors hide these numbers. We publish them because our architecture survives scrutiny.
Device Signals
Hardware timing, surface fingerprints, behavioral biometrics, privacy indicators, network metadata
Detection Techniques
12 analyzers: device (26), botd (33), behavior (12), physics (8), evasion (11), hardware (7), hw-timing (6), tls (4), entropy (9), correlation (12), anomaly (16), session (7)
Fast Path p95
85% of requests scored with p95 latency of 47ms server-side via edge-optimized fast path. End-to-end latency including signal collection: 145–265ms.
ML Models
Bayesian Beta fusion with calibrated priors. Deterministic. Auditable. Reproducible.
FFT Depth
Cooley-Tukey Radix-2 spectral decomposition on canvas, audio, and timing data
SDK Size
Gzipped browser SDK — zero performance impact on your application
Uptime SLA
Growth tier. Enterprise: 99.99% with dedicated infrastructure and failover
Trust Proofs
PoW, WebAuthn, CAPTCHA, device binding, hardware attestation — only these lower scores
Built for security teams
that audit everything.
GDPR-compliant by architecture, not by checkbox. No PII collection. One-way hashes only. Configurable retention with one-click erasure. Every decision is reproducible and exportable.
Verify everything yourself — the production engine runs on every page.
Hardware-Based Fingerprinting
Extract hardware signatures from GPU shaders, audio DAC, and timing signals that are designed to resist spoofing.
Bayesian Risk Scoring
Probabilistic inference framework that quantifies uncertainty in risk assessments.
Global Edge Network
Deployed globally to minimize latency and provide regional redundancy.
Why we don't use machine learning.
✕ ML models drift silently
→ Our scoring weights are explicit, bounded (2–40%), and shift by max 5% per feedback event. You see every change.
✕ ML decisions are unexplainable
→ Every VerifyStack decision includes the full signal chain, per-layer scores, and the mathematical reasoning that produced the verdict.
✕ ML needs training data you don't have
→ Bayesian Beta fusion works with calibrated priors from day one. No cold start. No minimum data requirement.
✕ ML is vulnerable to adversarial inputs
→ Deterministic scoring with hardware-physics signals. The adversary must spoof analog physical properties — not fool a gradient.
Questions Security Engineers Ask
If your question isn't here, check the technical documentation or ask our engineering team directly.
How do you handle false positives?
Every decision returns a confidence score (0–100) and full signal-level breakdown — you see exactly which layers contributed and by how much. You set your own DENY (default ≥85) and CHALLENGE (default ≥65) thresholds. The feedback API accepts ground-truth labels and recalibrates scoring weights via bounded updates: maximum 5% shift per feedback event, 2–40% weight clipping. You control the precision/recall tradeoff explicitly, per use case.
Is this actually deterministic? No ML at all?
Zero ML models. The scoring engine uses weighted evidence accumulation with Beta distribution conjugate priors — a mathematically proven approach to Bayesian inference. The Positive Trust Model enforces asymmetry: only 8 specific proof layers (Proof of Work, WebAuthn/FIDO2, CAPTCHA, behavioral verification, device binding, hardware attestation, temporal session, hardware timing) can lower fraud scores. Same inputs produce the same output, every time. Fully auditable, fully reproducible, no training data drift.
What signals do you actually collect?
83+ signals across 6 categories: Hardware Timing (crystal oscillator drift, GPU shader timing, WASM ALU/FPU ratio), Surface Fingerprints (canvas FFT spectral entropy, WebGL renderer hash, audio DAC 256-point hash), Privacy & Anti-Abuse (incognito detection via storage timing attack, ad-blocker bait probes, WebDriver flags), Behavioral Physics (DFA scaling exponent, RQA determinism, approximate entropy, FFT micro-tremor 8–12Hz, spectral entropy, Bézier curvature), Media & GPU (WebGPU compute shader fingerprint, codec support matrix), and Network & Geo (TLS JA3/JA4 fingerprint, geo-velocity impossible travel). All documented in our OpenAPI 3.0 spec.
Can I verify this before integrating?
Yes — the live analysis on this page is the production engine running on your browser right now. Open DevTools → Network, inspect the POST to /api/v1/analyze, and read the raw JSON response. The /tech page provides deeper visualizations with spectral analysis charts, hardware probe data, and behavioral signal waveforms. No signup required to see the engine work.
What about privacy and GDPR?
No PII is collected — ever. Device fingerprints are irreversible one-way hashes (FNV-1a 64-bit + SimHash). Data auto-deletes after configurable retention periods (default 90 days). One-click GDPR Article 17 erasure endpoints are exposed via the API. All processing is documented under GDPR Art. 6(1)(f) legitimate interest for fraud prevention. Full subprocessor list and Data Processing Agreement available at /gdpr.
What's the latency overhead?
Dual-path architecture: the fast path scores 85% of requests with a p95 of 47ms server-side — clear-bot or clear-human decisions via velocity, device binding, and hardware consistency checks. A further ~5% of ambiguous, grey-zone requests enter the slow path for deep analysis (causal inference, graph analysis, temporal Markov modeling) and complete in under 200ms server-side. End-to-end latency observed by your application includes client-side signal collection and the network round trip to the nearest edge node, typically 145–265ms depending on browser and geographic distance. Signal collection is non-blocking and overlaps with page load.
How do you detect anti-detect browsers?
Anti-detect browsers (Multilogin, GoLogin, Dolphin Anty, etc.) spoof JavaScript API responses but cannot replicate analog hardware behavior. Our hardware timing probes — crystal oscillator drift, GPU shader execution curves, audio DAC spectral signatures — measure physical properties of the device that no software layer controls. Cross-modality correlation then flags inconsistencies: if the GPU renderer claims "NVIDIA RTX 4090" but shader timing matches integrated Intel UHD, the evasion attempt is exposed.
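A toy version of that cross-check, with invented renderer tiers and an invented timing cutoff, looks like this:

```typescript
// Compare the renderer a browser *claims* against the performance
// class implied by measured shader timing. Tiers, patterns, and the
// 4ms cutoff are hypothetical values for illustration only.
type Tier = "integrated" | "discrete";

const CLAIMED_TIER: Array<[RegExp, Tier]> = [
  [/RTX|Radeon RX/i, "discrete"],
  [/Intel.*UHD|Iris/i, "integrated"],
];

function tierFromTiming(shaderMs: number): Tier {
  // Hypothetical cutoff: a discrete GPU finishes the probe kernel fast.
  return shaderMs < 4 ? "discrete" : "integrated";
}

function evasionFlag(claimedRenderer: string, shaderMs: number): boolean {
  const claim = CLAIMED_TIER.find(([re]) => re.test(claimedRenderer));
  if (!claim) return false; // unknown renderer: no opinion
  return claim[1] !== tierFromTiming(shaderMs);
}
```

The spoofed string is cheap to fake; the shader timing is not, so the mismatch between the two is the detection.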
What does the free tier include?
10,000 lifetime API requests with the full detection stack — all 151 detection techniques, all 83+ signals, device fingerprinting, behavioral biometrics, hardware detection, and email support. No feature gating. No credit card. The same engine that powers Enterprise customers runs on your free-tier requests. Upgrade to Growth ($49/mo, 50,000 requests/month) when you need advanced analytics, custom fraud policies, webhooks, and the dashboard.
You've already been analyzed.
Your device data is in the live proof above — hardware signals, behavioral indicators, risk score, everything. The free tier gives you 10,000 requests to see this work on your own users. No credit card. No sales call. Full detection stack from day one.
Free tier: 10,000 lifetime requests · Full detection stack · No feature gating