As artificial intelligence becomes increasingly sophisticated, researchers from MIT and other institutions have proposed a new method to distinguish between AI and human users online: personhood credentials.

In a recent white paper, MIT graduate students Nouran Soliman and Tobin South, along with colleagues from OpenAI and Microsoft, outlined the concept of personhood credentials as a way to verify human identity online while preserving privacy.

"AI could have the ability to create accounts, post content, generate fake content, pretend to be human online, or algorithmically amplify content at a massive scale," South explained. "This unlocks a lot of risks."

Personhood credentials would allow users to prove their humanity without revealing personal information. The system would require an offline component, such as visiting a government office, to obtain the credential.
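As a loose illustration only (not the scheme from the white paper), the privacy property can be sketched as a holder deriving a distinct pseudonym for each service from a single secret credential: each service can recognize a returning human, but no service learns the holder's identity and no two services can correlate their users. All names here are hypothetical.

```python
import hashlib
import hmac
import secrets

def issue_credential() -> bytes:
    """Hypothetical issuance step: after an offline identity check
    (e.g., at a government office), the issuer hands the person a
    random secret credential. Illustrative only."""
    return secrets.token_bytes(32)

def pseudonym(credential: bytes, service_id: str) -> str:
    """Derive a service-specific pseudonym from the credential.
    The same service always sees the same value (one person, one
    account), while different services see unrelated values."""
    return hmac.new(credential, service_id.encode(), hashlib.sha256).hexdigest()

cred = issue_credential()

forum_1 = pseudonym(cred, "forum.example")
forum_2 = pseudonym(cred, "forum.example")
shop_1 = pseudonym(cred, "shop.example")

assert forum_1 == forum_2  # stable within one service
assert forum_1 != shop_1   # unlinkable across services
```

Real proposals go further, typically using zero-knowledge proofs so that even the credential issuer cannot link a pseudonym back to the person; this sketch deliberately omits that machinery.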

Soliman emphasized that these credentials could be optional: "Service providers can let people choose whether they want to use one or not."

The researchers acknowledged potential risks, including privacy concerns and the concentration of power in credential-issuing entities. They stressed the importance of implementing the system with multiple issuers and an open protocol to maintain freedom of expression.

"Our paper is trying to encourage governments, policymakers, leaders, and researchers to invest more resources in personhood credentials," Soliman said. The team calls for further research into implementation strategies and broader community impacts.

The researchers urge governments and companies to prepare for a future where verifying human identity online becomes crucial. The proposed personhood credentials aim to address this challenge while safeguarding user privacy and security.
