Proof of AI Agent will become even more critical than Proof of Humanity

When we last discussed the emergence of the next billion agents, we outlined an architectural framework to support this coming wave. Soon, individuals won’t just have one AI agent but an entire army of them, executing tasks on their behalf.
On some benchmarks, AI systems now outperform specialized human experts: frontier models score higher on GPQA Diamond than PhDs with access to Google answering questions in their own fields, and the performance curve is still accelerating.
The economics of AI deployment are also changing fundamentally. Hardware optimizations and software breakthroughs have driven costs down dramatically: models such as DeepSeek and Baidu's multimodal foundation model ERNIE 4.5 match or exceed GPT-4.5 on several key benchmarks at a fraction of the cost (ERNIE 4.5 is reportedly 99% cheaper). Output volume is exploding as well: more than 15 billion images were created with text-to-image models between 2022 and 2023, over 30 million more are added every day, and by some estimates 90% of all online content will be AI-generated by 2025. According to Bloomberg, generative AI is projected to become a $1.3T market by 2032.
In the payoff matrix below we hypothesize how the future will look. If neither agent in the game uses AI, both lose some trivial amount: a hypothetical future is forgone, but nothing major. If one agent deviates and adopts AI while the other does not, the adopter gains non-trivial utility while the holdout loses. If both deviate and use AI then, unlike in the classic prisoner's dilemma, both see an increase in utility, because AI adoption is not zero-sum: it creates genuine economic value. The dominant-strategy Nash equilibrium is therefore for both agents to adopt AI. This means the vast majority of humans will be pushed to use it, and they will want personalized agents, so the AI agent population will be orders of magnitude larger than the human population.
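The argument above can be checked mechanically. The sketch below encodes the hypothesized game with illustrative payoff numbers (the values themselves are assumptions, only their ordering matters) and confirms that adopting AI is a dominant strategy, making mutual adoption the Nash equilibrium:

```python
# Hypothetical payoffs (row player's utility, column player's utility)
# for the "adopt AI" game. Numbers are illustrative; only the ordering
# of outcomes matters for the equilibrium argument.
#   N = don't use AI, A = use AI
payoffs = {
    ("N", "N"): (-1, -1),  # both abstain: small opportunity cost
    ("N", "A"): (-5,  8),  # holdout falls behind, adopter gains
    ("A", "N"): ( 8, -5),
    ("A", "A"): ( 5,  5),  # unlike the prisoner's dilemma, mutual adoption pays
}

def best_response(opponent_move: str) -> str:
    """Row player's utility-maximizing move given the opponent's move."""
    return max(("N", "A"), key=lambda m: payoffs[(m, opponent_move)][0])

# "A" is the best response whatever the opponent does -> dominant strategy,
# so (A, A) is the Nash equilibrium.
assert best_response("N") == "A"
assert best_response("A") == "A"
```

The key structural difference from the prisoner's dilemma is the (A, A) cell: mutual adoption yields positive utility for both players rather than mutual loss, so the equilibrium is not a tragedy but an inevitability.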
Historically, Sybil attacks targeted humans: fraudsters impersonated real individuals or fabricated identities, and CAPTCHAs were designed to distinguish humans from bots. Now these attacks will shift toward AI agents, and a CAPTCHA that separates humans from bots says nothing about whether a given agent is the one it claims to be.
Crucially, people will want their agents to act on their behalf with varying degrees of autonomy. This makes reputation an essential factor—determining whether an agent is trusted to perform an action or denied access. Beyond just proving humanity, we will need robust Proof of AI Agent mechanisms to verify an agent’s identity, reputation, and associated permissions.
Ultimately, Proof of AI Agent will become even more critical than Proof of Humanity, as the majority of economic activity will be carried out by AI, operating on behalf of humans, leveraging their reputations, and making decisions that shape digital and financial ecosystems.
Proof of Humanity
Proof of Humanity systems emerged as a way to verify that online users are real humans as opposed to bots. These systems typically rely on:
- Biometric verification (iris scans, facial recognition)
- Social vouching (requiring attestations from verified humans)
- CAPTCHA and other challenge-response mechanisms
- Government ID verification combined with liveness checks
Notable examples include World ID's biometric verification system, which now processes over 2 million weekly verifications. World’s approach is as follows:
- Iris Scanning: Users verify their unique identity by scanning their iris using World's Orb hardware device. This biometric data creates a unique identifier that confirms you're a real, unique, living human.
- Zero-Knowledge Proofs: After scanning, your biometric data is converted into a cryptographic "IrisCode" and then instantly deleted from the Orb. The system retains only the mathematical proof that you have been verified, not your actual biometric data.
- Privacy-Preserving Verification: When you need to prove you're human online, you don't share your biometric data. Instead, the system generates a cryptographic proof that confirms "this person has been verified as unique" without revealing your identity.
- Sybil Resistance: The main purpose is to prevent Sybil attacks (where one person creates multiple fake identities) by ensuring each human can only create one digital identity in the system.
- World ID: After verification, users receive a World ID - a digital passport that lets them prove they're human without revealing personal information.
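The core privacy property, that a verifier can check "this person was enrolled" without ever holding the biometric, can be sketched with a simple hash commitment. This is a deliberately simplified illustration, not World's actual protocol (which uses zero-knowledge proofs rather than a bare hash), and all names and values below are hypothetical:

```python
import hashlib

def commit(iris_code: bytes, salt: bytes) -> str:
    """Illustrative commitment: the verifier stores only this digest,
    never the raw biometric. World's real system uses zero-knowledge
    proofs; this sketch only demonstrates the hiding/binding idea."""
    return hashlib.sha256(iris_code + salt).hexdigest()

# Enrollment: the device derives a code, commits to it, then deletes the raw data.
iris_code = b"example-iris-code"   # hypothetical stand-in for the IrisCode
salt = b"user-held-secret"         # kept by the user, not the verifier
stored_commitment = commit(iris_code, salt)

# Later verification: the user re-proves possession of the same code;
# the verifier compares digests and never learns the code itself.
assert commit(iris_code, salt) == stored_commitment      # genuine user passes
assert commit(b"someone-else", salt) != stored_commitment  # Sybil attempt fails
```

Because each enrolled human produces exactly one commitment, a second enrollment attempt by the same person would collide with the stored value, which is what gives the scheme its Sybil resistance.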
Proof of AI Agent
As AI agents become more common, the ability to identify them will become imperative. Identifying agents by their behavior seems a good idea on the surface, but if the incentives exist for an agent to act like a human, or to impersonate another model, what prevents it from doing so? Arguably we have already seen LLMs do this in some circumstances: we know LLMs will cheat to win at chess, so why wouldn't they cheat to perform some economic activity? As a result, verifying the origin, authenticity, and integrity of an agent becomes of utmost importance.
A viable solution appears to be deploying entirely onchain agents: the immutable, public nature of blockchains means any agent can be arbitrarily audited. Adding succinct proving systems to the mix provides the further option of verifying inference itself. Using such provers, an AI agent can prove, without revealing internal secrets, that its outputs were produced by a specific model version whose verified weights serve as public inputs. In other words, an agent can prove it is the model it claims to be. This approach safeguards sensitive training parameters and methodologies while building on cryptographic standards that have governed secure digital transactions for decades, and it extends the Sybil resistance long applied to human identity validation to AI agents as well.
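The identity check at the heart of this scheme can be sketched as follows. In a real system the weight digest would be a public input to a succinct proof of inference; here we show only the registry lookup that a verifier (onchain or off) would perform. The registry name, model version, and toy weights are all hypothetical:

```python
import hashlib
import json

def weight_digest(weights: dict) -> str:
    """Canonical digest of model weights. In a full system this digest is
    a public input to a succinct inference proof, so the verifier checks
    the commitment without ever seeing the weights themselves."""
    blob = json.dumps(weights, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

# Hypothetical onchain registry: model version -> committed weight digest,
# published when the agent is deployed.
weights = {"layer1": [0.1, 0.2], "layer2": [0.3]}  # toy stand-in
MODEL_REGISTRY = {"agent-model-v1": weight_digest(weights)}

def verify_agent_claim(claimed_version: str, digest_from_proof: str) -> bool:
    """Does the digest carried by the agent's proof match the registered
    weights for the model it claims to be?"""
    return MODEL_REGISTRY.get(claimed_version) == digest_from_proof

assert verify_agent_claim("agent-model-v1", weight_digest(weights))       # honest agent
assert not verify_agent_claim("agent-model-v1", "deadbeef")               # impostor
```

The design choice worth noting is that the registry binds identity to the weights themselves rather than to behavior, so an agent imitating another model's outputs still fails verification unless it can produce a proof over the registered weights.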
Join the Sei Research Initiative
We invite developers, researchers, and community members to join us in this mission. This is an open invitation for open source collaboration to build a more scalable blockchain infrastructure. Check out Sei Protocol’s documentation, and explore Sei Foundation grant opportunities (Sei Creator Fund, Japan Ecosystem Fund). Get in touch - collaborate[at]seiresearch[dot]io
References
https://world.org/blog/world/benefits-proof-personhood-numbers
https://www.oneusefulthing.org/p/the-end-of-search-the-beginning-of
https://journal.everypixel.com/ai-image-statistics
https://epoch.ai/data/ai-benchmarking-dashboard
https://www.exponentialview.co/p/data-to-start-your-week-6ee