Hi, I'm Eric.

I’m an avid world traveler, photographer, software developer, and digital storyteller.

I help implement the Content Authenticity Initiative at Adobe.


Internet Identity Workshop 41

21 October 2025

    This week I’m attending the 41st biannual Internet Identity Workshop, which is one of the most valuable conferences I’ve encountered in any professional space. As the name might suggest, the topics are largely around how to express human and organizational identity in digital terms that respect privacy and security.

    Why Am I Here?

    I’m part of a team at Adobe that is dedicated to helping content creators and content consumers establish genuine connections with each other. We do this through three organizations that we’ve helped to create:

    • Content Authenticity Initiative: CAI is a community of media and tech companies, NGOs, academics, and others working to promote adoption of an open industry standard for content authenticity and provenance. The CAI does outreach, advocacy, and education around these open standards. Content Authenticity Initiative is also the name of the Adobe business unit I’m part of, through which we participate in all three of these organizations, develop open source and open standards, and guide implementation within Adobe’s product and service suite.

    • Coalition for Content Provenance and Authenticity: C2PA is a standards organization that addresses the prevalence of misleading information online by developing technical standards for certifying the source and history (or provenance) of media content.

    • Creator Assertions Working Group: CAWG builds upon the work of the C2PA by defining additional assertions that allow content creators to express individual and organizational identity and intent about their content. CAWG is a working group within the Decentralized Identity Foundation (DIF).

    Last year, I published an article titled Content Authenticity 101, which explains these organizations and our motivations in more detail.
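
    To give a rough sense of what the C2PA provenance described above looks like to a developer, here’s a minimal sketch that reads the manifest store embedded in an image using the open-source c2pa Rust crate. The crate’s API has shifted between releases, so treat the exact names as an approximation and check the current crate documentation; the file name is just a placeholder.

```rust
// Assumes the open-source `c2pa` crate (https://github.com/contentauth/c2pa-rs)
// is listed in Cargo.toml. API names are approximate; check the crate docs.
use c2pa::Reader;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // "signed-example.jpg" is a placeholder for any asset that carries
    // C2PA Content Credentials.
    let reader = Reader::from_file("signed-example.jpg")?;

    // Print the manifest store as JSON: the active manifest, its assertions,
    // signature info, and the chain of ingredient manifests that make up
    // the asset's provenance.
    println!("{}", reader.json());

    Ok(())
}
```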

    The Venue

    IIW is held at the lovely Computer History Museum, which recounts the formative years of our tech industry. CHM is located in Mountain View, California, right in the heart of Silicon Valley.

    I’ll share a few photos of the venue and the conference. My non-technical friends might want to bow out after this section as it will rapidly descend into lots of deep geek speak.

    The Format

    IIW is conducted as an “unconference,” which means that there is a predefined structure to the conference, but not a predefined agenda.

    There are several variations on how the agenda gets built at an unconference. In IIW’s case, there is an opening meeting on each morning in which people stand up and describe sessions they’d like to lead that day. Then there’s a mad rush to schedule these sessions (see photo below) and we all choose, in the moment, which sessions to attend.

    Computer History Museum, Mountain View, California
    An as-yet empty schedule board.

    You might think that not having a predefined agenda would mean the topics end up flimsy or low in value. In practice, the opposite is true. Both times I’ve attended this conference so far, I’ve had to make very difficult choices about which sessions not to attend so that I could be at something else that was equally compelling.

    With that, here is my description of the sessions I’m attending this time around:

    Tuesday Sessions

    Session 1M: First Person Project

    Drummond Reed, First Person Project

    Reviewing version 1.1 of the First Person Project White Paper, just published yesterday. It’s 86 pages long. This talk is an overview of the major sections. Since it’s all in the document, I won’t transcribe everything; instead I’ll focus on some highlights.

    There’s also a slide deck if you prefer that format.

    Fundamental goal: Establish a person-to-person trust layer for the internet.

    The ability to impersonate real people and real situations with current generative AI tools has led to an accelerating degeneration of trust.

    An essential part of this is to establish and facilitate person-to-person secure channels.

    Reference out to personhood credentials paper.

    The talk walks through how people have multiple personas that they might want to disclose selectively, for example:

    • political
    • religious
    • neighborhood
    • work
    • medical
    • online dating
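
    To make that idea concrete, here’s a toy sketch (my own illustration, not the First Person Project’s actual data model) of a wallet that holds several personas and discloses only the ones the holder chooses for a given relationship. A real system would back each persona with verifiable credentials and use selective-disclosure mechanisms such as SD-JWT or BBS+ signatures rather than plain structs.

```rust
use std::collections::HashMap;

// Toy model only: each persona is a separately held bundle of claims
// that can be disclosed (or withheld) independently per relationship.
#[derive(Debug, Clone)]
struct Persona {
    label: String,
    claims: Vec<(String, String)>,
}

struct Wallet {
    personas: HashMap<String, Persona>,
}

impl Wallet {
    fn add(&mut self, label: &str, claims: &[(&str, &str)]) {
        self.personas.insert(
            label.to_string(),
            Persona {
                label: label.to_string(),
                claims: claims
                    .iter()
                    .map(|(k, v)| (k.to_string(), v.to_string()))
                    .collect(),
            },
        );
    }

    // Disclose only the personas the holder chooses for this peer;
    // everything else stays private to the wallet.
    fn disclose(&self, chosen: &[&str]) -> Vec<&Persona> {
        chosen
            .iter()
            .filter_map(|label| self.personas.get(*label))
            .collect()
    }
}

fn main() {
    let mut wallet = Wallet { personas: HashMap::new() };
    wallet.add("work", &[("employer", "Example Corp"), ("role", "engineer")]);
    wallet.add("neighborhood", &[("city", "Mountain View")]);
    wallet.add("medical", &[("blood_type", "O+")]);

    // For a professional contact, share only the "work" persona;
    // the medical and neighborhood personas stay private.
    for persona in wallet.disclose(&["work"]) {
        println!("{}: {:?}", persona.label, persona.claims);
    }
}
```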

    Session 2M: The Swiss e-ID / The Failure of Decentralized Identity

    Christopher Allen

    This is a recap of the blog post and slide deck posted by Christopher at Musings of a Trust Architect: Five Anchors to Preserve Autonomy & Sovereignty.

    A few years ago, the Swiss Post Office proposed a digital ID. A referendum voted it down.

    The government went back and reworked the system to address the feedback. This version passed by a very narrow margin in a recent referendum. The result is based on SSI technology, but is completely government-implemented (i.e. centralized). The system is open in the sense that other information can be attached to government-issued digital identifiers; closed in the sense that only the government can issue these credentials.

    The law says that digital identifiers are not required, but Chris is skeptical: he expects the implementation will turn physical identifiers into second-class citizens.

    Swiss cultural concerns are largely about how the platform vendors (i.e. Android and iOS) will have an outsized ability to use data obtained through those credentials.

    The TLS warning: Once you ship something, “good enough” becomes “stuck with it.” TLS 1.0 was ratified in 1999 with some known problems. Those problems weren’t fixed until TLS 1.3 in 2018. (Gulp.)

    Love this quote:

    If a system cannot hear you say no, it was never built for us. It was built for them.

    Chris describes this as “the least worst implementation” of a government-backed digital ID system.

    The Swiss ID system doesn’t have a well-established right to refuse participation.

    The Sad State of Decentralized ID

    • eIDAS captured by Apple & Google (mDL)
      • Very difficult for smaller orgs to participate in standards organizations. W3C makes it hard; ISO is worse because power is in the governments. ISO pricing is difficult.
      • ISO response to “no phone home” was weak. “Phone home” is optional. But eIDAS requires vendors to test phone-home capability. Ugh.
      • Platform vendors have no fiduciary responsibility for the data contained in wallets.
    • US states following suit (it’s cheap)
    • DHS funding for DID/VC has collapsed
    • KYC everywhere but insecure
    • Web3/Nostr: progress but no key rotation
    • Corporate capture of “decentralized”
      • Example: E-mail, which was originally decentralized – and still is in theory. In practice, it’s impossible for anyone but the Major Players to run an e-mail server.
    • Builder’s dilemma: pure but irrelevant vs adopted but compromised

    So what can we do about it?

    • Persuade Switzerland to do the right things during this window after the narrow referendum win.
    • Persuade more states to follow Utah’s lead.

    Session 3C: Content Authenticity 101

    Eric Scouten, Adobe

    Discussion followed my slide deck (PDF).

    Session 4I: Scaling the Agentic Web

    Andor Kesselman

    Discussion followed Andor’s slide deck.

    Started with a history of AI evolution.

    Andor expects that thinking of agents as singular agents isn’t likely to remain common. Pressures are likely to lead to orchestration of agents working with each other, but that comes with an increased attack surface and a greater risk of error propagation.

    Identity for AI agents is far more complex than human identity. For example: Where is the agent running? What version? What host OS? What compute center? What context did it have? What goals was it given?

    Some people are now working on Know Your Agent (KYA).
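
    As a purely hypothetical sketch (the field names are mine, not drawn from any published KYA or agent-identity standard), the metadata a relying party might want to see about an agent could look something like this:

```rust
// Hypothetical sketch only: field names are illustrative, not standardized.
#[derive(Debug)]
struct AgentIdentity {
    agent_id: String,       // stable identifier (e.g. a DID), if the agent has one
    model_version: String,  // which model/build is actually running
    host_os: String,        // host operating system
    compute_region: String, // data center / region where it executes
    acting_for: String,     // the human or org that delegated authority to it
    goals: Vec<String>,     // the task(s) it was given
    context_digest: String, // hash of the context/instructions it started with
}

fn main() {
    let agent = AgentIdentity {
        agent_id: "did:example:agent-1234".into(),
        model_version: "example-model-2025-10".into(),
        host_os: "linux".into(),
        compute_region: "us-west".into(),
        acting_for: "did:example:alice".into(),
        goals: vec!["book travel within the approved budget".into()],
        context_digest: "sha256:<digest of starting context>".into(),
    };

    // A relying party would want signatures over (a subset of) this metadata,
    // not just the agent's self-report.
    println!("{agent:#?}");
}
```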

    Interesting question: Does DNS scale up sufficiently for agents, especially given their potentially short lifetimes?

    As of yet, MCP servers aren’t really talking to each other. That will likely change soon and may substantially increase the attack surface.

    If you’ve enjoyed this …

    Subscribe to my free and occasional (never more than weekly) e-mail newsletter with my latest travel and other stories:

    Or follow me on one or more of the socials: