Authentic Comments
February 22nd, 2011

I was inspired to think about the problem of impersonation on the web when I read Caterina Fake’s comment on Twitter, bemoaning the fact that somebody was able to impersonate her in a comment on GigaOm. Because the impersonator used an email address Gravatar associates with her, the comment gained an element of authenticity: her avatar picture appeared next to it.
My initial reaction, like Caterina’s, was to assume there is something wrong in the Gravatar model. Why should somebody be able to masquerade as me simply by guessing the email address I associated with Gravatar? But Matt Mullenweg of Automattic, which owns Gravatar, explained concisely that the fundamental problem of impersonation cannot be prevented by their service. An impersonator could just as easily have associated a new email, “[email protected]”, with Gravatar, and uploaded a copy of her avatar.
A Hopeless Situation?
I am convinced by Matt’s claim that Gravatar is not in a position to prevent impersonation. However, it’s possible to imagine ways in which Gravatar could promote authenticity. Gravatar already allows me to create an account through which I claim email addresses and can control which avatars should appear for these addresses. In addition, it allows me to confirm that account’s association with certain services such as Blogger.com, Facebook, Twitter, etc. This, combined with the fact that use of Gravatar is already widespread, makes it a great candidate for serving as an arbiter of trust in arbitrary contexts on the web.
Web sites that make use of Gravatar’s services are currently able to fetch the image associated with a particular email address by hashing the address (Gravatar uses an MD5 digest of the lowercased, trimmed email), so that the email address itself is no longer discernible, while Gravatar can still easily look up the associated avatar image.
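To make the hashing step concrete, here is a small sketch of how a site builds a Gravatar image URL. The normalization and MD5 hashing match Gravatar’s documented scheme; the example address is of course just a placeholder.

```python
# Build a Gravatar image URL from an email address.
# The address is normalized (trimmed, lowercased) and MD5-hashed,
# so the URL never exposes the email itself in readable form.
import hashlib

def gravatar_url(email: str, size: int = 80) -> str:
    """Return the Gravatar avatar URL for an email address."""
    normalized = email.strip().lower()
    digest = hashlib.md5(normalized.encode("utf-8")).hexdigest()
    return f"https://www.gravatar.com/avatar/{digest}?s={size}"

print(gravatar_url("jane@example.com"))
```

Because the normalization is deterministic, any site hashing the same address arrives at the same URL, which is exactly why guessing someone’s email is enough to borrow their avatar.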
There are steps that Gravatar could take to make possible the “authentication” of specific Gravatar appearances on the web. It would be exhausting to elaborate on the variety of ways this might be done, and many of the options that spring to mind carry their own pitfalls and annoyances, not to mention significant service demands on Gravatar. Maybe the authentication would require hosting sites to present authentication keys, or maybe users would just whitelist particular comment URLs. Let’s not get bogged down in details: the details are for companies like Gravatar to take on if they choose to meet the challenge.
In a world where Gravatar offered some form of per-use authentication, a site like GigaOm could show a trust icon next to commenters’ avatars, or maybe it would be integrated into the avatar itself as a check-mark badge or something. Click on the trust icon and it might take you to a Gravatar page where a curious reader could gauge authenticity with Gravatar’s help:
The Gravatar being shown at <link to e.g. a comment url> was verified by Daniel Jalkut, a registered Gravatar user. Daniel is known to be associated with Twitter ID “danielpunkass”, and controls the web site domain http://www.red-sweater.com. For more information, view his profile here.
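One way to picture the whitelist variant of this idea is a lookup keyed on the comment URL. Everything in this sketch is invented for illustration: Gravatar offers no such endpoint, and the record fields, names, and URLs are all hypothetical.

```python
# Hypothetical sketch: a commenting site asks a verification
# service whether a specific appearance of an avatar was approved
# by its owner. The data model and flow are invented, not a real
# Gravatar API.
from dataclasses import dataclass

@dataclass
class VerificationResult:
    verified: bool
    display_name: str = ""
    linked_accounts: tuple = ()

# Toy in-memory registry standing in for the service's records:
# maps an approved comment URL to the identity that claimed it.
APPROVED_APPEARANCES = {
    "https://gigaom.com/post/123#comment-456": VerificationResult(
        verified=True,
        display_name="Daniel Jalkut",
        linked_accounts=("twitter:danielpunkass", "web:red-sweater.com"),
    ),
}

def check_appearance(comment_url: str) -> VerificationResult:
    """Return the verification record for a comment URL, if any."""
    return APPROVED_APPEARANCES.get(comment_url, VerificationResult(verified=False))

result = check_appearance("https://gigaom.com/post/123#comment-456")
print(result.verified, result.display_name)
```

An unapproved comment URL simply comes back unverified, which matches the proposal’s goal: the service vouches only for appearances the account owner explicitly claimed.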
The current Gravatar user profiles already lean strongly towards identity confirmation. Some clever techniques for authenticating comments would not eliminate impersonation, but they would give identity-concerned users such as Caterina a means of participating in web conversations while proactively confirming their own identities.
February 22nd, 2011 at 7:02 pm
I agree. :)
February 22nd, 2011 at 8:13 pm
The idea is not bad. But identity is a hard problem. What prevents someone from creating a Gravatar profile identical to yours with a different email address? The email is kept private, so visitors won’t see the difference.
I think you’d have better luck using an OpenID login as your identity system. The OpenID login would become the unique identifier which can be shown to everyone.
February 23rd, 2011 at 2:31 am
Michel – the Gravatar service already supports “validating” connections to other services such as Twitter, Blogger, etc. So a visitor could say “Hmm, this Gravatar is verified to be connected ‘danielpunkass’ on Twitter, and I trust that that ID is legitimately Daniel Jalkut.”
Like all trust networks, this one traces back to a trusted source. Gravatar is already doing work in this area. See the blockquote in my post for an example of how it could theoretically list credentials for the Gravatar account in question.
February 23rd, 2011 at 8:35 am
I did not know Gravatar was validating those. That’s a good thing.
Nevertheless, I’d be more comfortable with a decentralized system that does not depend on a unique third party everyone must trust.
February 23rd, 2011 at 12:53 pm
I’d guess privacy folks would freak out if I could click one link and find out who you were on Twitter and whatever other sites you attached to your Gravatar account. Trust doesn’t mix well with privacy since it is hard to verify your identity at the same time it is being kept secret!
February 24th, 2011 at 7:20 pm
And what prevents a person from forging the “trust icon” as well? The web is built upon openness, which is a huge advantage. But the same choices make trusted content a tough nut to crack.
February 24th, 2011 at 9:54 pm
Darren – the point of the trust icon is not to certify that the person is who they say they are, but to indicate that a trust chain is present if you choose to follow it. You follow it, and decide: yes, I believe the person who owns red-sweater.com is in fact Daniel Jalkut.
The proposal, as I said, is not to end impersonation, but to make it possible for people who want to assert their authenticity to have a means to do so. A person like Caterina Fake, who is famous enough for it to be an issue, might make public statements along the lines of “I never post unauthenticated comments,” and that would be a veil of protection for her, as well as a point of plausible deniability if anybody said something idiotic in her name.
February 25th, 2011 at 12:41 pm
Daniel – thanks for clarifying that, but there are still a couple of problems. We can learn from the problems of trust chains and webs of trust with SSL and PGP, respectively. In practical use, people who see the icon — whether real or forged — will assume the trust chain is valid.
As the example scenario you start with demonstrates, once people *assume* that a trusted person has spoken, it’s very difficult to convince them otherwise no matter how much evidence you have. At best, a trust chain-and-icon would improve the speed at which people so-inclined could verify a source. Even so, for real use cases, it’s still shutting the barn door after the horse has left.
To really solve the trust problem, you’d need independent verifiers of trust who adequately check an identity; and then you’d need ubiquitous tools (likely built into all common browsers) that easily allow authors to “sign” their messages (cryptographic signatures are the obvious solution, but there may be more elegant ones), and allow browsers to automatically flag unsigned messages as “identity not certain”.
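The sign-and-verify flow this comment describes can be sketched in a few lines. Real deployments would use asymmetric signatures (e.g. Ed25519 or RSA) so anyone can verify without holding the signing key; Python’s standard library has no asymmetric crypto, so this toy uses HMAC as a symmetric stand-in purely to show the flow, and the key material is a placeholder.

```python
# Toy sketch of "signed messages" with unsigned ones flagged.
# HMAC stands in for a real asymmetric signature scheme here:
# it shows the sign/verify flow, not a deployable design.
import hashlib
import hmac

def sign(message: bytes, key: bytes) -> str:
    """Produce a tag a verifier can later check against the message."""
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str, key: bytes) -> bool:
    """Accept a message as authentic only if its tag checks out."""
    return hmac.compare_digest(sign(message, key), tag)

key = b"author-signing-key"  # placeholder; a real scheme uses a private key
comment = b"I never post unauthenticated comments."
tag = sign(comment, key)

print(verify(comment, tag, key))           # the genuine comment passes
print(verify(b"tampered text", tag, key))  # a forgery fails verification
```

A browser applying the commenter’s proposal would run the verify step automatically and flag anything that fails (or carries no tag at all) as “identity not certain”.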
That’s what people have tried to do with SSL for sites, and we’ve learned that it’s very hard to do well unless you have very strong control over who can authenticate identity, and prevent end users from accepting identities from untrusted providers. And again, that requires a lot of people to cooperate. Establishing identity trust is a *really hard* problem, and whoever cracks it in a way that actually works for the majority will be filthy rich.