As AI-driven video avatars and interactive personas begin to reshape digital interaction, a new challenge has emerged at the intersection of ethics, law, and public perception: how do we represent human-like personas responsibly without misleading users or infringing on personal identity rights?
This question led us to introduce a simple, powerful solution: the term “AI Portrait.”
Advances in AI generation technology now allow us to create avatars that speak fluidly, exhibit emotional nuance, and even resemble real people. But this new realism raises old concerns about consent, misrepresentation, and the unauthorized use of a person's likeness and identity.
These concerns are not just theoretical. They affect trust in AI systems and open the door to reputational and legal risk. Without a clear framework, the industry could fall into the same traps seen with deepfakes and unauthorized likeness usage.
Such cases underscore how easily synthetic media can blur lines, and how crucial it is to proactively communicate intention and identity.
In art and journalism, it is generally lawful to create portraits of real people, even without their permission, as long as the work does not defame or impersonate them. This principle stems from long-standing protections around free speech and creative expression.
We believe AI personas should follow similar logic. An AI-generated persona—whether inspired by a real CFO, a fictional scientist, or a composite of many professionals—should be framed as a creative interpretation, not an impersonation.
Hence, the term “AI Portrait.”
An AI Portrait is a clearly labeled, fictional digital representation generated by AI. It may be inspired by a real person, a professional archetype, or entirely synthetic. What defines it is not the level of realism but the clarity of intent.
In short, we define AI Portraits as labeled, fictional representations: transparent about being synthetic, clear about who or what inspired them, and explicit in their intent to communicate rather than impersonate.
By labeling our video avatars as “AI Portrait of [Name],” we make it immediately clear to viewers: this is not the actual person, but a digital character designed to communicate.
We now include the “AI Portrait” label on our video avatars and the other AI-driven persona experiences we offer.
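For teams adopting this convention in software, here is a minimal sketch of how an “AI Portrait of [Name]” disclosure might be attached to avatar metadata. The interface and function names below are hypothetical and purely illustrative; they are not part of any existing API.

```typescript
// Hypothetical sketch of attaching an "AI Portrait" disclosure to avatar metadata.
// All type and function names are illustrative, not part of any published API.

interface AIPortraitDisclosure {
  label: string;        // human-readable label displayed alongside the avatar
  isSynthetic: true;    // always true: an AI Portrait is never the actual person
  inspiredBy?: string;  // optional: the real person or archetype that inspired it
}

function makeAIPortraitDisclosure(name: string, inspiredBy?: string): AIPortraitDisclosure {
  return {
    label: `AI Portrait of ${name}`, // the labeling convention described above
    isSynthetic: true,
    inspiredBy,
  };
}

// Example: a video avatar inspired by a real CFO carries an explicit disclosure.
const disclosure = makeAIPortraitDisclosure("Jane Doe", "a real CFO");
console.log(disclosure.label); // "AI Portrait of Jane Doe"
```

One possible advantage of keeping the disclosure as structured metadata, rather than only as text rendered into the video, is that downstream players and integrations can surface the label consistently wherever the avatar appears.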
We are in the process of trademarking “AI Portrait”, not to gatekeep expression, but to protect the clarity of its use. Like “Fair Trade” or “Creative Commons”, it signals trust, transparency, and intent.
We didn’t create AI Portraits to limit creativity—we created them to protect it.
In an era where synthetic voices, faces, and personas are becoming indistinguishable from real people, clear labeling is no longer optional—it’s essential.
We invite others building AI-driven experiences to adopt the AI Portrait standard, helping ensure that as machines learn to speak, they also learn to respect.
To learn more or to license the use of the “AI Portrait” framework for your own platform, contact us at info@mench.ai.