Vitalik Buterin: The good and bad sides of collaboration

Collaboration, the ability of a large group of actors to work together for their common good, is one of the most powerful forces in the universe. It is the difference between a king comfortably ruling a country as an oppressive dictator and the people rising up and overthrowing him. It is the difference between global temperatures climbing 3-5°C and temperatures climbing only slightly because we work together to stop any further rise. Collaboration is the key to how companies, countries, and social organizations of any size function.

There are many ways to improve collaboration: faster information spreading, better norms for identifying which behaviors count as cheating together with more effective penalties, stronger and more powerful organizations, tools like smart contracts that enable interaction in low-trust settings, governance technologies (voting, shares, decision markets, etc.), and more. And indeed, we make some progress on each of these with every passing decade.

But collaboration also has a philosophically counterintuitive dark side: while “everyone collaborates with everyone” is much better than “everyone for themselves,” it doesn’t mean that everyone taking a step toward more collaboration is necessarily beneficial. If collaboration is increased in an unbalanced way, the results can easily be harmful.

We can present this problem on a map, but in fact this map has many, many "dimensions" instead of just the two drawn ones.

The bottom left, "every man for himself," is where we don't want to be. The top right, "full collaboration," is ideal but probably unattainable. But the vast expanse in the middle is far from a gentle uphill climb: it contains many reasonably safe and productive places where we might best settle down, and many deep, dark pits to avoid.

Note: Hobbesianism holds that human behavior is fundamentally selfish and that society in its natural state is a condition of unrestrained, selfish, and brutal competition. The term comes from the book "Leviathan" by Thomas Hobbes, a 17th-century English political philosopher.

So what are the dangerous forms of "partial collaboration," where some actors collaborate with each other but not with others, pushing us into one of those pits? This is best illustrated with examples:

Citizens of a country fight heroically in a war for the good of their country... and the country is World War II-era Germany or Japan.

Lobbyists bribe politicians in exchange for the politicians adopting the lobbyist's preferred policies.

Someone sells their vote in an election.

All sellers of a product in a market collude to raise their prices at the same time.

Large miners of a blockchain collude to launch a 51% attack.

In all of the above cases, we see a group of people coming together and cooperating with each other, but doing serious harm to those outside the circle of collaboration, and thus to the world as a whole. In the first case, the victims are the people of the countries attacked by the aggressor states, who sit outside the circle of collaboration and suffer greatly as a result; in the second and third cases, they are the people affected by the decisions that the corrupted voters and politicians make; in the fourth case, they are the customers; and in the fifth case, they are the non-participating miners and users of the blockchain. This is not an individual defecting against the group; it is one group defecting against a broader group, often the world as a whole.

This kind of local coordination is often called "collusion," but it's worth noting that the range of behavior we're talking about is quite broad. In everyday usage, "collusion" tends to describe relatively symmetrical relationships, but many of the cases above have strongly asymmetric features. Even extortionate relationships ("vote for the policies I like, or I'll publicly expose your affair") are collusion in this sense. For the rest of this post, we'll use "collusion" as a general term for this kind of "undesirable coordination."

Evaluate intentions, not actions

An important feature of the more insidious cases of collusion is that one cannot determine whether an action is part of an undesired collusion simply by looking at the action itself. This is because the actions a person takes are a combination of that person's internal knowledge, goals, and preferences with the incentives imposed on them from outside, so the actions people take when colluding often overlap with the actions people take of their own accord (or for benign reasons).

For example, consider the case of collusion among sellers (an antitrust violation). Operating independently, three sellers might each set the price of a product between $5 and $10; the spread in the range reflects the sellers' internal costs, different wage preferences, supply chain issues, and so on. But if the sellers are colluding, they might set the price between $8 and $13. Again, the price range reflects different possibilities regarding internal costs and other hard-to-see factors. If you see someone selling the product for $8.75, are they doing something wrong? Without knowing whether they are colluding with the other sellers, you can't tell! Passing a law that forbids selling the product for more than $8 would be a bad idea; perhaps there are legitimate reasons why prices must be high at the moment. But passing a law against collusion, and successfully enforcing it, gets the ideal outcome: you get the $8.75 price if the price has to be that high to cover the sellers' costs, but you don't get it if the factors naturally driving the price up are low.
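This observational equivalence can be illustrated with a toy simulation. The price ranges below are the ones from the example; the cost and margin figures are made-up assumptions for illustration only:

```python
import random

random.seed(0)

def honest_price(cost):
    # Independent sellers price near cost plus a small competitive margin.
    return cost + random.uniform(0.5, 2.0)

def cartel_price(cost):
    # Colluding sellers add a coordinated markup instead.
    return cost + random.uniform(3.5, 5.0)

# Costs vary with supply chains, wages, etc. (assumed range: $4.50-$8.00).
honest = [honest_price(random.uniform(4.5, 8.0)) for _ in range(10_000)]
cartel = [cartel_price(random.uniform(4.5, 8.0)) for _ in range(10_000)]

def near_875(prices):
    # Count prices close to the $8.75 observation from the text.
    return sum(1 for p in prices if abs(p - 8.75) < 0.25)

# A price of ~$8.75 occurs in BOTH regimes: a high-cost honest seller and
# a low-cost cartel member are indistinguishable from the price alone.
print(near_875(honest) > 0 and near_875(cartel) > 0)
```

The honest regime produces prices in roughly the $5-$10 band and the cartel regime in roughly the $8-$13 band, so any price in the overlap carries no information about which regime generated it.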

This also applies to bribery and vote selling: likely some people voted for the Orange Party out of honest conviction, while others voted for it because they were bribed. From the perspective of those designing the rules of the voting mechanism, they don't know in advance whether the Orange Party is good or bad. What they do know is that a vote where people vote based on their honest feelings works reasonably well, while a vote where voters can freely buy and sell their votes works very badly. This is because vote selling is a tragedy of the commons: each voter gains only a small share of the benefit from voting correctly, but would gain the entire bribe if they vote the way the briber wants. Hence, the bribe needed to win over each voter is far smaller than the bribe that would actually compensate the population for the cost of whatever policy the briber wants. Voting systems that permit vote selling therefore quickly collapse into plutocracy.
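The arithmetic behind this tragedy of the commons can be made concrete with a toy calculation (all numbers below are illustrative assumptions, not from the original argument):

```python
# A policy that destroys $100 of value per voter can be bought for far
# less than it destroys, because no single vote is likely to be decisive.

N = 1_000_000           # number of voters (assumed)
harm_per_voter = 100.0  # each voter loses $100 if the bad policy passes

# A voter's expected personal loss from casting one "wrong" vote is the
# harm times the chance that this single vote flips the outcome,
# crudely approximated here as 1/N.
pivotal_prob = 1.0 / N
expected_cost_of_selling = harm_per_voter * pivotal_prob  # $0.0001

bribe = 0.01  # one cent comfortably exceeds that expected cost
total_bribe_bill = bribe * (N // 2 + 1)   # buy a bare majority
total_social_harm = harm_per_voter * N    # what the policy destroys

print(bribe > expected_cost_of_selling)   # selling is individually "rational"
print(total_bribe_bill < total_social_harm)  # the attack is absurdly cheap
```

Under these assumptions, a majority can be bought for about $5,000 while the policy destroys $100 million of value, which is why vote markets collapse into plutocracy.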

Understanding Game Theory

We can go a step further and look at this through the lens of game theory. In the version of game theory that focuses on individual choice, that is, the version that assumes each player decides independently and does not allow groups of agents to work as one for their mutual benefit, there are mathematical proofs that at least one stable Nash equilibrium must exist in any game (any finite game, allowing mixed strategies), and mechanism designers have very wide latitude to design games to achieve specific outcomes. But in the version of game theory that allows coalitions to cooperate (that is, to "collude"), called cooperative game theory, one can prove that there are large classes of games with no stable outcome (no "core," in game-theoretic terms). In such games, whatever the current state of affairs, there is always some coalition that can profitably deviate from it.

Note: This conclusion follows from the Bondareva–Shapley theorem, which characterizes exactly when a cooperative game has a nonempty core.

An important subset of these inherently unstable games is majority games. A majority game is formally described as a game of agents in which any subset containing more than half of the agents can capture a fixed reward and split it among themselves, a setup eerily similar to many situations in corporate governance, politics, and other spheres of human life. That is, if there is a fixed pool of resources and some currently established mechanism for distributing them, it is unavoidable that 51% of the participants can conspire to seize control of those resources; no matter what the current configuration is, some such conspiracy is profitable for its participants. But that conspiracy is in turn vulnerable to new conspiracies, possibly combining previous conspirators and previous victims... and so on.
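The emptiness of the core can be checked by brute force in a tiny example. In a three-player majority game, whatever the current split of the prize, some coalition of two can deviate and leave both of its members strictly better off. A minimal Python sketch:

```python
from itertools import combinations

# Three-player majority game: any coalition of 2 or more players can
# capture a prize of 1 and split it among themselves.

def blocking_coalition(alloc):
    """Return a coalition that can profitably deviate from `alloc`,
    together with its members' new payoffs, or None if `alloc` is stable."""
    players = range(len(alloc))
    for size in range(2, len(alloc) + 1):
        for coalition in combinations(players, size):
            share = sum(alloc[i] for i in coalition)
            if share < 1:  # the coalition currently gets less than the prize
                surplus = (1 - share) / len(coalition)
                # Split the leftover so every member is strictly better off.
                return coalition, {i: alloc[i] + surplus for i in coalition}
    return None

# No matter how the prize is divided, some majority can do better,
# so no allocation is in the core:
for alloc in [(1/3, 1/3, 1/3), (0.5, 0.5, 0.0), (1.0, 0.0, 0.0)]:
    print(blocking_coalition(alloc) is not None)  # True for every allocation
```

The equal split is deviated on by any pair (they jointly hold 2/3 but can grab the whole prize), and any unequal split is deviated on by a coalition excluding the largest winner, which is exactly the cycling instability described above.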

This fact, the instability of majority games under cooperative game theory, is arguably seriously underrated as a simplified general mathematical model of why there may well be no "end of history" in politics and no system that ever proves fully satisfactory; I personally believe it is far more useful than the better-known Arrow's theorem, for example.

Note: Arrow's theorem, also known as Arrow's impossibility theorem, states that no ranked-choice election mechanism with three or more options can simultaneously satisfy unrestricted domain, Pareto efficiency, non-dictatorship, and independence of irrelevant alternatives.

Note again that the core dichotomy here is not "individual vs. group"; that is surprisingly easy for a mechanism designer to handle. It is "group vs. broader group" that is the challenge.

Decentralization as anti-collusion

But there is another brighter and more actionable conclusion from this line of thought: if we want to create stable mechanisms, then we know that an important factor is to find ways to make collusion, especially large-scale collusion, more difficult to occur or maintain. In the voting scenario, we have "secret ballots" - which ensure that voters have no way to prove their votes to third parties, even if they want to (MACI is a project that attempts to use cryptography to extend the principle of secret ballots to online environments [1]). This undermines trust between voters and bribers, severely limiting the unwelcome collusion that can occur. In the case of antitrust and other corporate malfeasance, we often rely on whistleblowers and even give them rewards, explicitly incentivizing participants in harmful collusion to defect. And in terms of broader public infrastructure, we have that very important concept: decentralization.

A naive view of why decentralization is valuable is that it reduces the risk of single points of technical failure. In traditional "enterprise-grade" distributed systems, this is often actually true, but in many other cases, we know that this is not an adequate explanation for what is happening. Looking at blockchain is instructive. A large mining pool publicly showing how they distribute their nodes and network dependencies internally did little to calm community members' fears about mining centralization. And pictures like the one below, showing that 90% of Bitcoin hash power was on the same conference discussion panel at the time, are indeed terrifying:

But why is this picture scary? From a "decentralization as fault tolerance" perspective, large miners being able to talk to each other causes no harm. But if we view "decentralization" as a set of barriers to harmful collusion, then the picture becomes quite scary, because it shows that those barriers are weaker than we thought. In reality, the barriers are far from zero: those miners can easily coordinate technically and are probably all in the same WeChat group. But this does not mean that Bitcoin is "in practice little better than a centralized company."

So what are the remaining obstacles to collusion? Some of the main obstacles include:

Moral barriers: In Liars and Outliers, Bruce Schneier reminds us that many "security systems" (door locks, warning signs reminding people that violators will be prosecuted...) also serve a moral function: they remind potential wrongdoers that they are about to commit a serious transgression, and that a good person would not. Decentralization arguably serves this role as well.

Internal bargaining failure: Individual firms may start demanding concessions in exchange for joining the conspiracy, which can deadlock the negotiation outright (see the "holdup problem" in economics).

Anti-collaboration: If a system is decentralized, participants not involved in a conspiracy can easily fork the system, strip out the conspiring attackers, and continue running it from there. The barrier for users to join the fork is low, and the intent behind decentralization creates moral pressure in favor of joining the fork.

Defection risk: It is much harder for five companies to join together to do something bad than to join together for an uncontroversial or benign purpose. The five companies do not know each other well, so there is a risk that one of them refuses to participate and quickly blows the whistle, and the participants find this risk hard to judge. Individual employees within the companies may also blow the whistle.

Taken together, these barriers are indeed substantial, often substantial enough to stop a potential attack, even when the same five companies are perfectly capable of quickly coordinating to do something legitimate. Ethereum miners, for example, are perfectly capable of coordinating to raise the gas limit, but that does not mean they could just as easily collude to attack the chain.

The blockchain experience shows that it is often valuable to design protocols to be institutionally decentralized, even when it is known in advance that most activity will be dominated by a small number of companies. This idea is not limited to blockchains and can be applied in other contexts (see, for example, antitrust applications [2]).

Forking as Anti-Collaboration

But we can't always prevent harmful collusions from occurring in the first place. To handle the cases where they do occur, it helps to make the system more robust against them: more expensive for the colluders, and easier for the system to recover from.

We can achieve this through two core operating principles: (1) supporting anti-collaboration, and (2) skin in the game. The idea behind anti-collaboration is this: we know we cannot design systems to be passively robust against collusion, largely because there is an enormous number of ways to organize a collusion and no passive mechanism can detect them all; but what we can do is respond to collusions actively and fight back.

Note: The phrase "skin in the game" is often traced to horse racing, where the owner of a horse has a personal stake (their "skin") in the outcome of the race.

In digital systems such as blockchains (this also applies to more mainstream systems such as DNS), a major and crucial form of anti-collaboration is forking.

If a system is taken over by a harmful coalition, dissenters can come together and create an alternative version of the system with (mostly) the same rules, except that the attacking coalition's power over the system is removed. In the context of open source software, forking is very easy; the main challenge in creating a successful fork is usually gathering the "legitimacy" (a form of game-theoretic "common knowledge") needed to get everyone who disagrees with the main coalition's direction to follow you.

This isn’t just theoretical; it’s been successfully accomplished, most notably with the Steem community’s rebellion against a hostile takeover attempt, which resulted in a new blockchain called Hive, in which the original hostile parties had no power.

Markets and skin in the game

Another class of strategies to resist collusion is the concept of "Skin in the game". In this context, "Skin in the game" basically refers to any mechanism that holds individual contributors to a decision individually accountable for their contributions. If a group makes a bad decision, then the people who approved it must suffer more than those who tried to dissent. This avoids the "tragedy of the commons" inherent in voting systems.

Forking is a powerful form of anti-collaboration precisely because it introduces skin in the game. In Hive, the community fork of Steem that rejected a hostile takeover attempt, the coins used to vote for the takeover were largely deleted in the new fork, and the key individuals who took part in the attack were personally affected as a result.
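What such a fork does to the ledger state can be sketched in a few lines. The account names and dictionary-based ledger below are hypothetical illustrations, not the actual Steem or Hive data structures:

```python
# A minimal sketch of "forking as anti-collaboration": copy the ledger
# state, but zero out the balances that powered the attack, so the
# attacking coalition carries no voting power into the new chain.

def fork_without_attackers(balances, attacker_accounts):
    """Return a new ledger identical to `balances` except that the
    attacker accounts lose their coins."""
    return {acct: (0 if acct in attacker_accounts else bal)
            for acct, bal in balances.items()}

old_chain = {"alice": 50, "bob": 30, "attacker_fund": 900}
new_chain = fork_without_attackers(old_chain, {"attacker_fund"})

print(new_chain["attacker_fund"])  # 0: the coalition's stake is gone
print(new_chain["alice"])          # 50: everyone else carries over unchanged
```

The original chain is untouched; the "skin in the game" comes from the fact that if the community's legitimacy migrates to the fork, the attackers' stake on the old chain loses its value.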

Markets are very powerful tools in general precisely because they maximize skin in the game. Decision markets (prediction markets used to guide decisions; also called futarchy[3]) are an attempt to extend this benefit of markets to organizational decision making. However, decision markets can only solve some problems; in particular, they cannot tell us which variables to optimize in the first place.

Note: Futarchy is a form of governance proposed by economist Robin Hanson in which elected officials define measures of welfare, and speculative markets determine which policies will best achieve them ("vote on values, bet on beliefs"). See also V. Buterin's article "On Collusion" [4].

Structured collaboration

This all gives us an interesting perspective on what people who build social systems do. One of the goals of building an effective social system is, in large part, to determine the structure of collaboration: which groups, in what configurations, can come together to advance their group goals, and which groups cannot?

Different collaboration structures lead to different results

Sometimes, more collaboration is beneficial: when people can work together to collectively solve their problems, things are better. At other times, more collaboration is dangerous: a small group of actors may collaborate to disenfranchise others. And at other times, more collaboration is necessary for another reason: to enable the wider society to “fight back” against collusion that attacks the system.

In all three cases, these ends can be achieved through different mechanisms. Of course, it is very difficult to block communication outright, and it is also difficult to make collaboration work perfectly. However, there are many options in between that can produce powerful effects.

Below are several possible structuring techniques for collaboration.

Technologies and regulations to protect privacy

Technical means to make it difficult to prove how you acted (secret ballot, MACI and similar techniques).

Deliberate decentralization distributes control of a mechanism to a large group of people who are known not to work well together.

Decentralization of physical space, separating different functions (or different shares of the same function) to different locations (see, for example, Samo Burja on the link between urban decentralization and political decentralization).

Decentralization between role-based constituencies, separating different functions (or different shares of the same function) to different types of participants (e.g., in a blockchain, "core developers", "miners", "coin holders", "application developers", "users").

Schelling points, which allow large groups of people to quickly collaborate around a path forward. Complex Schelling Points may even be implemented in code (e.g., how to recover from a 51% attack).

Use a common language (or, split control among multiple constituencies who use different languages).

Use per-person voting instead of per-(coin/share) voting to greatly increase the number of people needed to collude to influence a decision.

Defectors are encouraged and relied upon to alert the public to impending acts of collusion.

Note: The concept of Schelling points was introduced by American economist Thomas Schelling in his book "The Strategy of Conflict". When people cannot communicate but know that others are trying to solve the same problem, their actions tend to converge on a conspicuous focal point. For example, two people who must meet in New York without prior communication are very likely to choose Grand Central Station, which acts as a natural Schelling point.

None of these strategies are perfect, but they can be used in a variety of situations with varying degrees of success. Furthermore, these techniques can and should be combined with mechanism designs that attempt to make harmful collusion as unprofitable and risky as possible; in this regard, "skin in the game" is a very powerful tool. Which combination works best ultimately depends on your specific case.

[1] https://github.com/appliedzkp/maci

[2] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3597399

[3] https://blog.ethereum.org/2014/08/21/introduction-futarchy

[4] https://vitalik.ca/general/2019/04/03/collusion.html
