About AuthentiScan

We build practical, transparent tools that help people reason about the authenticity of online media, combining an ensemble of in-house ML models with classical media forensics.

Our image ML model is live • ML for video, text & mixed content is on our roadmap

What AuthentiScan is

AuthentiScan estimates how likely it is that a piece of media is AI-generated or manipulated. We combine our in-house ML image model with classical forensics (ELA/FFT), JPEG and EXIF cues, C2PA provenance checks, external providers where available, and text analysis. The app presents an interpretable score together with the top contributing signals.

Our approach

  • In-house ML for images: trained on curated real vs. AI datasets and calibrated to produce probabilistic output.
  • Media forensics: error-level analysis (ELA), frequency-domain texture analysis (FFT), JPEG grid cues, and EXIF/C2PA provenance checks (a minimal ELA sketch follows this list).
  • Ensemble fusion: each detector contributes a score and a confidence; we blend these into a single probability and show the breakdown (see the fusion sketch below).
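
For readers who want a concrete picture, here is a minimal sketch of the error-level-analysis step using Pillow. It is illustrative only, not our production code; the quality and amplification values are placeholders.

    from PIL import Image, ImageChops, ImageEnhance
    import io

    def error_level_analysis(path, quality=90, scale=15):
        # Re-save the image as JPEG at a fixed quality and diff against the
        # original; regions edited after the last save tend to show a
        # different error level than their surroundings.
        original = Image.open(path).convert("RGB")
        buffer = io.BytesIO()
        original.save(buffer, format="JPEG", quality=quality)
        buffer.seek(0)
        resaved = Image.open(buffer)
        diff = ImageChops.difference(original, resaved)
        # Amplify the residual so it is visible when rendered.
        return ImageEnhance.Brightness(diff).enhance(scale)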

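The fusion itself is a straightforward confidence-weighted blend. The sketch below is a toy illustration with hypothetical detector names and numbers, not the exact weighting we use in production.

    from dataclasses import dataclass

    @dataclass
    class Signal:
        name: str           # detector name, e.g. "ml_image", "ela", "c2pa"
        probability: float  # detector's estimate that the media is AI-generated or manipulated
        confidence: float   # weight the detector gets in the blend (0..1)

    def fuse(signals):
        # Confidence-weighted average of per-detector probabilities,
        # plus a per-signal breakdown for display.
        total = sum(s.confidence for s in signals)
        if total == 0:
            return 0.5, []  # no usable signal: stay neutral
        blended = sum(s.probability * s.confidence for s in signals) / total
        breakdown = sorted(
            ((s.name, s.probability, s.confidence) for s in signals),
            key=lambda t: t[2], reverse=True,
        )
        return blended, breakdown

    # Example with made-up numbers:
    score, top_signals = fuse([
        Signal("ml_image", 0.82, 0.9),
        Signal("ela", 0.60, 0.4),
        Signal("c2pa", 0.10, 0.7),
    ])
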
Roadmap

  • Video: extend ML to temporal models alongside frame-level analysis.
  • Text: add ML classifiers for long-form and short-form writing to complement heuristics.
  • Mixed content: specialized models for web pages, documents, and embedded media.
  • Broader training data: expand genres, devices, codecs, and post-processing coverage.

Principles

  • Signal, not verdict. Outputs are probabilistic and should be combined with context.
  • Transparency. We surface rationale, per-signal contributions, and provenance indicators.
  • Privacy. Uploads are processed transiently for analysis.

Limitations

Recompression, resizing, re-uploads, and camera processing pipelines can all shift the forensic signals we rely on. Content Credentials (C2PA), when present, may provide strong provenance information. No detector is perfect; use results alongside human review.

Contact

Email: authentiscanai@gmail.com