As AI systems become foundational to digital services, machine learning models themselves have become critical digital identities—yet unlike users, devices, or certificates, models today lack a reliable way to prove who they are, where they came from, or whether they have been altered.
In practice, model files can be renamed, rebranded, fine-tuned, merged, or subtly tampered with while preserving their external appearance. File hashes break under legitimate updates, metadata is easily forged, and behavioral testing is expensive and unreliable. This creates a growing risk of model impersonation, provenance fraud, and silent tampering across the AI supply chain.
This talk introduces a two-layer identity and integrity framework for AI models, designed explicitly through a digital-identity lens.
First, we present Model Family Identification (MFI)—a method for assigning models a size-invariant architectural identity by extracting stable “architectural DNA” from configuration artifacts alone. Instead of hashing weights, MFI fingerprints immutable design choices—such as tokenizer structure, rotary embedding parameters, attention variants, and micro-architectural ratios—to reliably identify a model’s true lineage across parameter sizes, fine-tunes, quantization, and re-packaging.
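To make the idea concrete, here is a minimal sketch of configuration-based fingerprinting in the spirit of MFI. The field names mirror common Hugging Face `config.json` keys, and the particular features and hash scheme are illustrative assumptions, not MFI's actual feature set:

```python
import hashlib
import json

# Size-invariant design choices: structural constants and ratios rather
# than raw parameter counts (hypothetical feature list for illustration).
INVARIANT_KEYS = [
    "model_type",
    "vocab_size",            # tokenizer structure proxy
    "rope_theta",            # rotary embedding base
    "hidden_act",
    "tie_word_embeddings",
]

def architectural_fingerprint(config: dict) -> str:
    """Derive a size-invariant identity from configuration artifacts alone."""
    features = {k: config.get(k) for k in INVARIANT_KEYS}
    # Micro-architectural ratios stay constant across parameter scales
    # within a family, unlike absolute layer or width counts.
    if config.get("hidden_size") and config.get("num_attention_heads"):
        features["head_dim"] = config["hidden_size"] // config["num_attention_heads"]
    if config.get("intermediate_size") and config.get("hidden_size"):
        features["ffn_ratio"] = round(config["intermediate_size"] / config["hidden_size"], 3)
    canonical = json.dumps(features, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Renaming or re-packaging a checkpoint leaves its configuration, and
# therefore its architectural fingerprint, unchanged.
small = {"model_type": "llama", "vocab_size": 32000, "rope_theta": 10000.0,
         "hidden_act": "silu", "tie_word_embeddings": False,
         "hidden_size": 2048, "num_attention_heads": 16,
         "intermediate_size": 5632}
renamed = dict(small)
assert architectural_fingerprint(small) == architectural_fingerprint(renamed)
```

Because the fingerprint never touches the weights, it survives fine-tuning and quantization by construction; only a genuine architectural change alters it.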
Second, we introduce the Provenance Intelligence Engine (PIE)—an execution-free integrity system that detects statistical drift and malicious tampering inside model weights, even when file formats and architectures remain unchanged. PIE uses hierarchical tensor fingerprints and compact lineage sketches to distinguish legitimate fine-tuning from suspicious or adversarial modification at scale.
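The intuition behind weight-level drift detection can be sketched as follows. The summary statistics, the drift measure, and the synthetic weights are illustrative assumptions, not PIE's actual design:

```python
import hashlib
import random
import statistics

def tensor_sketch(values):
    """Compact per-tensor fingerprint: summary stats plus a content digest."""
    return {
        "mean": statistics.fmean(values),
        "stdev": statistics.stdev(values),
        "max_abs": max(abs(v) for v in values),
        "digest": hashlib.sha256(
            str([round(v, 6) for v in values]).encode()).hexdigest()[:12],
    }

def drift_score(baseline, current):
    """Mean shift normalized by the baseline spread; larger = more suspicious."""
    scale = baseline["stdev"] or 1.0
    return abs(current["mean"] - baseline["mean"]) / scale

rng = random.Random(0)
base = [rng.gauss(0.0, 0.02) for _ in range(4096)]

# Legitimate fine-tuning: small, diffuse perturbation of every weight.
finetuned = [w + rng.gauss(0.0, 0.001) for w in base]

# Tampering: a large, targeted edit to a handful of weights.
tampered = list(base)
for i in range(8):
    tampered[i] += 0.5

b = tensor_sketch(base)
assert drift_score(b, tensor_sketch(finetuned)) < drift_score(b, tensor_sketch(tampered))
```

Note that neither modified file changes format or architecture; only the statistical profile of the weights moves, which is exactly the signal an execution-free integrity check can exploit.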
Together, MFI and PIE form a verifiable identity stack for AI models, enabling:
a. Model identity verification without execution
b. Detection of impersonation and misattribution
c. Continuous integrity monitoring across model lifecycles
d. Trust enforcement in AI registries, marketplaces, and deployment pipelines
This session reframes AI security as a digital-identity problem and shows how identity-grade primitives—deterministic identifiers, lineage verification, and tamper evidence—can be applied to secure the rapidly growing AI ecosystem.