Prisoner’s Dilemma is one of the classical games in game theory. It’s an interesting abstraction of a whole class of situations where two parties can choose between cooperative and non-cooperative behavior. It’s a kind of game where both win-win and lose-lose results are possible. It doesn’t apply to zero-sum situations like negotiating a price, where one party’s loss is the other’s gain; the prisoner’s dilemma applies to situations where two parties form a team and work together.

The key point of the prisoner’s dilemma is that there are 4 levels of payoff, set up in such a way that the biggest gain is achieved by tricking the other party into cooperating and then betraying it.

In the prisoner’s dilemma game, there are 4 possible game results: both players cooperate, both defect, or one defects while the other cooperates (in two mirror-image variants).

In the generalized prisoner’s dilemma, the two strategies are called *cooperate* and *defect*.
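The payoff structure can be sketched as a small lookup table in Python. The specific numbers (3, 0, 5, 1) are the conventional textbook values, not anything from this post; all that matters is their ordering: temptation > reward > punishment > sucker’s payoff.

```python
# Payoffs indexed by (player A's move, player B's move).
# Conventional values: temptation (5) > reward (3) > punishment (1) > sucker (0).
PAYOFF = {
    ("cooperate", "cooperate"): (3, 3),  # reward for mutual cooperation
    ("cooperate", "defect"):    (0, 5),  # sucker's payoff vs. temptation
    ("defect",    "cooperate"): (5, 0),  # temptation vs. sucker's payoff
    ("defect",    "defect"):    (1, 1),  # punishment for mutual defection
}

def play(move_a, move_b):
    """Return the (player A, player B) payoffs for one round."""
    return PAYOFF[(move_a, move_b)]
```

The ordering is what makes the game a dilemma: whatever the opponent does, defecting pays more for you individually, yet mutual defection is worse for both than mutual cooperation.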

I had an idea about extending the prisoner’s dilemma to model bullying, and then trying to find the best strategies for such a situation. The main differences would include:

- There are many (not just two) players.
- Each player plays many games against many different opponents, but each game involves only 2 players.
- Players can observe games of other players, without taking part.
- The observation may be wrong, so it’s possible that a defection is observed as cooperation.
- If a player defects and the other party cooperates, this information may leak into the environment. In such a case the defecting player can expect to be punished by all other players.

Bullying would happen when one player defects (attacks?), gains a lot, and prevents this information from leaking into the environment. In the next game, the betrayed party, according to the “tit for tat” strategy, should retaliate. Unfortunately, if it does so, the environment sees that as the *first* defection and will punish the previously betrayed player for retaliating, considering it an “attack”. This effectively prevents the betrayed player from retaliating and makes it vulnerable to the next defection. If the truth (i.e. which party originally defected) never leaks to the environment, the bully can keep succeeding forever.
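As a rough illustration of this dynamic, here is a toy simulation. All of its rules are my own assumptions, not a worked-out model: the bully always defects and always hides it, the victim plays tit for tat and cannot hide anything, and the environment punishes whichever defection it actually observes.

```python
def simulate_bullying(rounds=5):
    """Toy model: a bully defects but hides it; the victim's tit-for-tat
    retaliation is the first defection the environment ever observes."""
    history = []
    victim_last_seen = "cooperate"      # what the victim saw the bully do last
    for _ in range(rounds):
        bully_move = "defect"           # the bully always defects...
        bully_hides = True              # ...and always suppresses the leak
        victim_move = victim_last_seen  # tit for tat: copy last observed move
        # The environment only reacts to defections it can see.
        env_sees_bully = bully_move == "defect" and not bully_hides
        env_sees_victim = victim_move == "defect"  # the victim cannot hide
        history.append({
            "bully": bully_move,
            "victim": victim_move,
            "punished": "bully" if env_sees_bully
                        else ("victim" if env_sees_victim else "nobody"),
        })
        victim_last_seen = bully_move
    return history
```

Under these assumptions the environment punishes the victim in every round after the first, which is exactly the trap described above.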

I’m not sure how to model the details, for instance, how does misinformation occur? In a deterministic or probabilistic way? Do players fully control the information that goes to the environment, or only partially?

A robust model should be as simple as possible while still reflecting the key components of bullying, such as the environment and misinformation.
