Proof of Humanity in the Age of Machines 🤖🧠

Identifying human authenticity in the Web3 era

Table of Contents 🕹️

  1. Introduction 🐙

  2. The Ghost in the Machine 🤖

  3. Proof of Personhood 🧍

  4. Use Cases ⚗️

  5. Rise of the Replicants 🧠

  6. Further Down the Rabbit Hole 🕳️

“We could try the Turin test,” said Lobsang.
“Oh, machines have been able to pass the Turing test for years.”
“No, the Turin test. We both pray for an hour, and see if God can tell the difference.”

― Stephen Baxter, The Long War

I. Introduction 👾

In Descartes’ Discourse on the Method, the 17th-century philosopher states:

If there were machines which bore a resemblance to our bodies and imitated our actions as closely as possible for all practical purposes, we should still have two very certain means of recognizing that they were not real men. The first is that they could never use words, or put together signs, as we do in order to declare our thoughts to others […] Secondly, even though some machines might do some things as well as we do them, or perhaps even better, they would inevitably fail in others, which would reveal that they are acting not from understanding, but only from the disposition of their organs.

From this passage it’s clear that Descartes didn’t believe machines were capable of thought, and that he believed this could be verified by administering language and behavioral tests. This testing schema would later be realized by Alan Turing, the English mathematician and computer scientist renowned for cracking German ciphers during the Second World War, among a myriad of other achievements in theoretical computer science, artificial intelligence, and cryptography.

To Turing, it didn’t make sense to question whether a machine could think. Instead, he believed it was better to address the more precise question of whether a machine could fool a human into believing it was also human. The Turing Test, as it is known today, seeks to identify whether a being is human or not solely from its performance in an Imitation Game.

The Imitation Game is described as follows:

Suppose that we have a person, a machine, and an interrogator. The interrogator is in a room separated from the other person and the machine. The object of the game is for the interrogator to determine which of the other two is the person, and which is the machine. The interrogator knows the other person and the machine by the labels ‘X’ and ‘Y’—but, at least at the beginning of the game, does not know which of the other person and the machine is ‘X’—and at the end of the game says either ‘X is the person and Y is the machine’ or ‘X is the machine and Y is the person’. The interrogator is allowed to put questions to the person and the machine of the following kind: “Will X please tell me whether X plays chess?” Whichever of the machine and the other person is X must answer questions that are addressed to X. The object of the machine is to try to cause the interrogator to mistakenly conclude that the machine is the other person; the object of the other person is to try to help the interrogator to correctly identify the machine.

Turing predicted in 1950 that within 50 years it would be possible to program computers well enough that an average interrogator would have no more than a 70% chance of identifying the machine after five minutes of questioning. While it’s possible today to fool a human with sufficiently advanced language generators such as OpenAI’s GPT-3, these generators have yet to be linked to perceptual inputs and behavioral outputs in a way capable of passing the Turing Test as originally designed. That’s not to say we never will: it isn’t difficult to imagine that within the next couple of model generations (GPT-4 or GPT-5) we could embed this ability.

We are already seeing the initial impact of machines immersed in our society at massive scale. Deepfakes and bots are making it difficult to know what’s real, or to compete at a human level in arenas like online auctions and capital-market trading. Add to this the difficulty of defending governance models against Sybil attacks, along with the rise of anonymity, pseudonymity, and alt accounts, and it becomes clear that we need a way to establish authenticity online.

There needs to be some kind of proof of personhood.

II. The Ghost in the Machine 👾

Let’s further define the problem set. Below are some interesting examples of what we need to address.

Synthetic Media

Synthetic media refers to media that has been artificially produced, manipulated, or modified by artificial intelligence algorithms, typically for the purpose of misleading people or changing an original meaning.

Synthetic media rose to public awareness in 2017 following a Vice report on pornographic deepfakes. Deepfakes take an existing image or video and use generative adversarial networks (GANs) to replace the subject with the likeness of someone else. However, deepfakes are just one subset of synthetic media as the class also includes voice cloning, natural language generation, image/audio synthesis, and more.

As of 2020, the number of deepfakes online has grown from 14,678 in 2019 to 145,227, an increase of 900% year over year. Deepfakes on social media platforms have generated billions of views.

Advancements in AI, lower computing costs, and data accessibility online have all contributed to the staggering growth in synthetic media.

Since synthetic media can be malevolently used to spread misinformation and distrust of reality, we must consider the ramifications of this growth. In 2019, voice cloning technology was used to impersonate a chief executive in order to fraudulently transfer €220,000. Cyberattacks focused on phishing, catfishing, and social engineering could be automated with these new technologies—it is estimated that deepfake fraud reached over $250M in 2020.

In addition to fraud, there are real-world geopolitical consequences of deepfake technology. A military coup d’état in Gabon, for example, was initiated as a result of an alleged deepfake. It isn’t difficult to imagine a totalitarian or absolutist regime rewriting history using synthetic media.

Sybil Attacks

A Sybil attack occurs when an attacker subverts a reputation system by creating multiple pseudonymous identities to gain disproportionate influence. To an outside observer each identity appears legitimate; in reality, all are controlled by a single entity. Sybil attacks are a major challenge for distributed identity systems, or for any protocol that relies on uniqueness in its mechanism design.

To understand this attack vector, consider Gitcoin’s quadratic funding model. In quadratic funding, the share of a matching pool that a grant receives is proportional to the square of the sum of the square roots of individual contributions, so many small contributors attract more matching than a single large one.

An attacking actor could decide to create a fake grant, donate to himself, and collect matching funds as “interest”. Since an increased number of contributions results in more matching funds, the simplest form of attack will be splitting the contribution into multiple accounts, and donating to themselves.

― Gitcoin, How to Attack and Defend Quadratic Funding
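The Sybil advantage can be made concrete with a toy model. The sketch below assumes the standard quadratic funding formula (matching proportional to the square of the sum of the square roots of contributions); the function name and dollar amounts are hypothetical:

```python
import math

def qf_match(contributions):
    """Toy quadratic funding: the matching subsidy is proportional to the
    square of the sum of the square roots of individual contributions,
    minus the direct contributions themselves."""
    direct = sum(contributions)
    return sum(math.sqrt(c) for c in contributions) ** 2 - direct

# One honest donor contributing $100 attracts no matching on their own:
honest = qf_match([100])       # 0
# The same $100 split across 100 Sybil accounts of $1 each:
sybil = qf_match([1] * 100)    # 9,900 in matching funds
```

Splitting a single contribution across many accounts inflates the match dramatically, which is exactly why quadratic funding needs a Sybil-resistant notion of a unique contributor.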


Bots

Bots are software applications that run automated tasks on the Internet. While there are a myriad of bot types, running the gamut from helpful to malicious, social bots are particularly frightening.

Social bots are agents that communicate more or less autonomously on social media platforms. It is believed that 9-15% of active Twitter accounts are social bots. In the run-up to the 2016 U.S. election, bots were believed to account for 3.8 million tweets or roughly 19% of the total volume.

In addition to social bots, malicious bots can include:

  • Spambots that harvest emails or redirect users to malicious websites

  • DDoS Attacks and botnets

  • Viewbots that create fake views to trick video service algorithms

  • Bots that buy up high-demand goods (e.g. concert tickets or NFTs) for the purpose of immediately reselling them at a profit

  • MMORPG bots used to farm resources

III. Proof of Personhood 👾

With these issues in mind, let’s consider existing approaches to identity verification. These include:

  • Centralized identity verification - This method relies on passports, licenses, and national ID cards issued by nation-states, or on authentication-as-a-service models from entities such as Facebook, Twitter, and Google (e.g. the blue checkmark). Its risks include privacy concerns, data misuse, and the exclusion of those who lack access. Verification also becomes subject to adverse social and political forces and can expose society to surveillance, manipulation, and data theft. Finally, it should be noted that acquiring officially recognized forms of ID is challenging for an estimated 1.1 billion people around the globe.

  • Proof of Work (PoW) - Blockchain technology, starting with Bitcoin in 2009, has created an alternative method of verification through a “one-CPU-one-vote” system formulated by Satoshi Nakamoto in the original Bitcoin whitepaper. This method is great for enabling financial networks, but an identity in this system is limited to an entity that can offer computing capacity.

  • Proof of Stake (PoS) - PoS is a one-token-one-vote system in which nodes stake funds with the risk of loss of capital on veracity. PoS is subject to the influence of whales, meaning powerful and wealthy entities can potentially manipulate outcomes in these systems. PoS can work well when the community is engaged, but can flounder from voter apathy and “shark” investors (those that take positions in competitors to manipulate governance).

Two Web3 protocols leading the charge beyond one-CPU-one-vote and one-token-one-vote mechanisms focus on the concept of Proof of Personhood.

Proof of Humanity (Kleros)

Proof of Humanity is a social identity verification system on Ethereum that combines web of trust, reverse Turing tests, and dispute resolution to create a Sybil-proof list of humans.

In the Proof of Humanity model, users connect their Web3 wallet to the app, fill out basic personal information, provide a short video that includes some speech and a visible identifier (such as a sign with the user’s name and ETH address), and submit a deposit fee. Once complete, the user must find a registered user who can vouch for their existence, forming a social chain of trust between existing users.

During the vouching phase, profile submissions can be challenged if it is believed the registrant is a duplicate, a bot, deceased, or otherwise non-existent. If disputed, the decision goes to an ERC-792 compliant dispute resolution system through Kleros, a decentralized court system (decentralized justice will likely be the topic of a future issue). A successful refutation results in loss of deposit for the user as the funds are rewarded as bounty to jurors. Any users that vouched for a fake profile are subsequently removed from the registry, penalizing false affirmations.

If, however, the application is successful, the user receives back their initial deposit and is onboarded onto Proof of Humanity’s Sybil-resistant list of humans.


BrightID

BrightID is a global open-source social identity network that attempts to solve the uniqueness problem through the creation and analysis of a social graph. The social graph is formed by weaving together cryptographically signed connections between people. Since each user manages their own private keys and no service or application controls the connections, users are able to create self-sovereign identities out of their own digital identifiers.

BrightID operates under the assumption that traditional forms of ID will not suffice in the Internet era. Using personal data and biometrics results in surveillance economies, and it is difficult to control who receives your information in traditional systems. Instead, verification of identity should come down to who knows you best.

The system relies on a trust score that is generated by connections between members of groups, and the connections between those groups. When creating a new connection, a user generates a QR code that is scanned. The user then selects their connection level to the new person.

The more connections you form, the more trusted your identity and the identities of those you verified become, resulting in a quantitative metric: a trust score.
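As a loose illustration only (this is not BrightID’s actual algorithm, which performs far more sophisticated analysis of the full social graph; all names and weights below are hypothetical), a trust score could be derived from a user’s connections and from how well-connected those vouchers are themselves:

```python
def toy_trust_score(graph, user):
    """Hypothetical scoring sketch: each direct connection adds 1 point,
    plus a small bonus for every further connection that voucher has.
    BrightID's real graph analysis works very differently."""
    score = 0.0
    for neighbor in graph.get(user, set()):
        others = graph.get(neighbor, set()) - {user}
        score += 1 + 0.1 * len(others)
    return score

# A tiny social graph of mutual connections:
graph = {
    "alice": {"bob", "carol"},
    "bob": {"alice", "carol", "dave"},
    "carol": {"alice", "bob"},
    "dave": {"bob"},
}
```

Under this toy metric, "alice" scores higher than the weakly connected "dave", capturing the intuition that vouches from well-connected users count for more.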

The BrightID network consists of nodes run by applications that have a need for unique users (e.g. Gitcoin and RabbitHole). Each node stores the entirety of the social graph, and the new connections formed by users are verified by the nodes. When an ID is queried, several nodes provide verification of the trust score to ensure consensus on values across nodes.

Gitcoin uses their own in-house Sybil detection algorithm, but also incentivizes user verification using tools such as BrightID. Users verified with BrightID can receive a trust bonus percentage increase to the matching funds they receive. As more applications like Gitcoin and RabbitHole integrate BrightID, the network grows stronger overall.

IV. Use Cases 👾

Proof of Personhood can enable a multitude of use cases that would ordinarily be prone to abuse.

Universal Basic Income

Universal Basic Income (UBI) is a program where payments are made periodically to all members of a given population without any pre-existing conditions. Since the goal of UBI is to ensure all humans receive equal funding, it makes sense to have a Proof of Personhood qualifier to avoid Sybil-attacks.

Proof of Humanity has already begun experimenting with the UBI model, launching a UBI token built on top of their registry. The UBI token is automatically streamed every hour to all Ethereum wallet addresses verified by the protocol. Currently, this equates to 720 UBI or $90 per month at current prices.
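These figures imply a drip rate of roughly 1 UBI per hour. A quick sanity check (the $0.125 price per UBI below is a hypothetical figure chosen to be consistent with the $90-per-month amount above):

```python
# Implied drip rate: 1 UBI streamed per hour to each verified address.
hours_per_month = 24 * 30                      # ~720 hours in a month
ubi_per_month = 1 * hours_per_month            # 720 UBI
price_per_ubi = 0.125                          # hypothetical USD price
usd_per_month = ubi_per_month * price_per_ubi  # $90
```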

Peer-to-Peer Governance and Public Goods Funding

Proof of Personhood registries can be integrated into DAO governance models. This would allow for experimentation with one-person-one-vote democratic systems or alternative voting systems like preferential or quadratic voting.

In addition to governance frameworks, Proof of Personhood can enable new funding mechanisms that would typically be prone to Sybil attacks, such as quadratic funding. You can read more about quadratic funding and the public goods domain here.

Reputation Systems

In traditional institutions, individuals receive certifications (e.g. degrees) or reputation points (e.g. credit scores) that are linked to their public identity. Proof of Personhood protocols can help bring these reputation systems on-chain in a verifiable manner.

One interesting protocol working in this space is RabbitHole, an on-chain discovery, onboarding, and training platform dedicated to taking people down the crypto rabbit hole. Users perform tasks that earn them both experience and rewards while simultaneously building an on-chain reputation. Combining this with Proof of Personhood protocols (RabbitHole currently integrates BrightID) enables Sybil-resistant reward distributions while also solidifying the protocol as a unique-human reputation registry.

Sybil-resistant Distributions

Distribution methods such as token airdrops or NFT raffles are currently prone to Sybil attacks. One individual can use multiple wallet addresses or pseudonyms in order to qualify for these distributions unfairly.

Historically, airdrop amounts have been distributed either equally across wallet addresses or proportionally to qualifying holdings. The first method invites gaming the system by creating multiple qualifying wallet addresses, while the latter is skewed toward wealthier individuals.
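The two historical methods can be sketched side by side; the function names, addresses, and amounts are illustrative only:

```python
def equal_airdrop(addresses, total):
    """Every qualifying address receives the same share."""
    share = total / len(addresses)
    return {a: share for a in addresses}

def proportional_airdrop(holdings, total):
    """Each address receives a share proportional to its qualifying holdings."""
    pool = sum(holdings.values())
    return {a: total * h / pool for a, h in holdings.items()}

# One person controlling three of five addresses triples their cut
# of an equal airdrop:
drop = equal_airdrop(["sybil1", "sybil2", "sybil3", "honest1", "honest2"], 1000)
sybil_cut = drop["sybil1"] + drop["sybil2"] + drop["sybil3"]  # 600.0
```

A per-human distribution built on a Proof of Personhood registry would collapse the three Sybil addresses into a single claim.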

If these distributions required Proof of Personhood, protocols and creators would be able to ensure that each human being gets a proportionate amount.

Spam and Social media

As mentioned earlier, bots and alt accounts are clouding our social media interactions with falsities and spam. Existing systems predominantly rely on CAPTCHAs (small exercises testing a user’s ability to analyze an image or a sound), which are a form of reverse Turing test. However, these waste the end user’s time and can be easily circumvented.

Users could be required to verify via a Proof of Personhood registry in order to access social media sites, ensuring every interaction is uniquely human.

V. Rise of the Replicants 👾

"On the Internet, nobody knows you're a dog"

In the web3 metaverse, we will be interacting with AIs, bots, anons, and DAOs daily. Synthetic minds and media will proliferate. It’s already becoming difficult to distinguish authentic humanity on the Internet. I’m pseudonymous—how do you know this entire text wasn’t written using GPT-3?

At a certain point, I imagine we stop worrying much about whether the being behind the avatar is human or not. Like the quotation to open this issue, machines could someday pass the Turing Test easily enough that your interaction with an NPC could be as real and visceral as any other human interaction. Proof of Personhood doesn’t seek to exclude—it only provides a method for us to identify unique souls. It’s possible in the future that our conception of humanity will extend to include synthetic minds or collective organizations as unique persons too.

Either way, with the ultimate goal of subverting Sybil attacks, authenticating uniqueness, and scaling digital identity, Proof of Personhood offers a slightly better alternative than just praying.

VI. Further Down the Rabbit Hole 🕳️

Accelerated Capital is a weekly publication exploring how cryptoassets, DeFi, virtual reality, and other exponential technologies are transforming our economy, society, and culture.

Be sure to subscribe to this newsletter below and follow us on Twitter.