Robert Axelrod, in his book "The Evolution of Cooperation", found that the strongest, most stable strategy in an iterated Prisoner's Dilemma environment is "tit for tat" - essentially making there be a consequence for bad behavior, but otherwise behaving well. (Always defecting is also a *stable* strategy, but a world of defectors is much poorer.)
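The tournament dynamic is easy to see in miniature. Below is a toy sketch of a pairwise iterated game using the standard payoff values (temptation 5, reward 3, punishment 1, sucker 0); the strategy functions and setup are illustrative, not Axelrod's actual tournament code.

```python
# Standard Prisoner's Dilemma payoffs: (my payoff, their payoff)
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(my_hist, their_hist):
    # Cooperate first, then copy the opponent's last move.
    return 'C' if not their_hist else their_hist[-1]

def always_defect(my_hist, their_hist):
    return 'D'

def play(strat_a, strat_b, rounds=200):
    """Play an iterated match and return the two total scores."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b
```

Two tit-for-tat players earn the full mutual-cooperation payoff (3 per round each), while a defector beats tit-for-tat head-to-head only by a sliver and then locks both into the poor mutual-defection payoff for the rest of the match.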
From what you're saying, and from what I've observed, in most social interactions, most people are following "nicer" strategies than tit-for-tat, which allows people with "mean" strategies to enrich themselves at the expense of others.
As far as I know, nobody has modeled the IPD in a social environment where players have knowledge of other players' interactions with other players. I suspect that the results would be similar, though.
The question is: in a universe populated mostly by "nice" players (with "nicer" strategies than tit-for-tat), but with knowledge of outside interactions and the ability to refuse interactions, how much information is required to effectively isolate "mean" players? (Model some inaccuracy in the information about other players' transaction history to make it even more realistic.)
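That proposed extension can be sketched as a toy simulation: players carry a public defection record, each party may refuse a pairing based on a noisy reading of the other's record, and we compare average payoffs for "nice" versus "mean" players. Every parameter here (population mix, noise rate, refusal threshold) is an assumption for illustration, not from any published model.

```python
import random

# Standard Prisoner's Dilemma payoffs: (my payoff, their payoff)
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def run_reputation_model(n_nice=8, n_mean=2, rounds=500,
                         noise=0.1, threshold=0.5, seed=0):
    """Random pairings; either side may refuse based on a noisy view of
    the other's observed defection rate. 'Mean' players always defect."""
    rng = random.Random(seed)
    n = n_nice + n_mean
    is_mean = [False] * n_nice + [True] * n_mean
    defections = [0] * n     # publicly observed defections
    meetings = [1] * n       # observed interactions (1 avoids div by zero)
    scores = [0] * n
    for _ in range(rounds):
        i, j = rng.sample(range(n), 2)
        # Each player inspects the other's record before agreeing to play.
        for a, b in ((i, j), (j, i)):
            observed = defections[b] / meetings[b]
            if rng.random() < noise:        # information is sometimes wrong
                observed = 1 - observed
            if observed > threshold:
                break                       # refuse the interaction
        else:
            # Both accepted: play one round.
            mi = 'D' if is_mean[i] else 'C'
            mj = 'D' if is_mean[j] else 'C'
            pi, pj = PAYOFF[(mi, mj)]
            scores[i] += pi
            scores[j] += pj
            meetings[i] += 1
            meetings[j] += 1
            defections[i] += (mi == 'D')
            defections[j] += (mj == 'D')
    nice_avg = sum(scores[:n_nice]) / n_nice
    mean_avg = sum(scores[n_nice:]) / n_mean
    return nice_avg, mean_avg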