Benji’s Reviews > Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations > Status Update
Benji
is on page 116 of 504
For every perfect information game there exists a corresponding normal-form game. Note, however, that the temporal structure of the extensive-form representation can result in a certain redundancy within the normal form. One general lesson is that while this transformation can always be performed, it can result in an exponential blowup of the game representation.
— Oct 29, 2022 01:32AM
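A quick sketch (my own, not from the book) of where the exponential blowup in the quote comes from: a pure strategy in the induced normal form must pick an action at every decision node, so the number of pure strategies is the product of the per-node action counts.

```python
# Why the induced normal form blows up: a pure strategy commits to an
# action at EVERY decision node (information set) a player owns, so
# the strategy count is the product of the per-node action counts --
# exponential in the number of nodes.
from math import prod

def num_pure_strategies(actions_per_node):
    """Number of pure strategies for a player, given the number of
    available actions at each of their decision nodes."""
    return prod(actions_per_node)

# A player with just 10 binary decision nodes in the game tree
# already contributes 1024 rows to the normal-form matrix:
print(num_pure_strategies([2] * 10))  # 1024
```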
Benji’s Previous Updates
Benji
is on page 124 of 504
Despite the fact that a strong argument can be made in its favor, the concept of backward induction is not without controversy. To see why this is, consider the well-known Centipede game. There exist different accounts of this situation, and they depend on the probabilistic assumptions made, on what is common knowledge, and on exactly how one revises one's belief in the face of measure-zero events.
— Nov 02, 2022 03:16AM
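The controversy is easiest to see by actually running backward induction on a Centipede game. A minimal sketch, with my own growing-pot payoffs rather than the book's exact numbers: at each stage the mover can "take" (ending the game) or "pass" (letting the pot grow for the other player), and induction from the last stage predicts "take" everywhere, even though both players would do better by passing for a while.

```python
# Backward induction on a linear Centipede game (a sketch with
# assumed payoffs). payoffs[t] = (mover's payoff, other's payoff) if
# the mover takes at stage t; the final entry is the payoff pair,
# from the would-be next mover's perspective, if everyone passes.
def solve_centipede(payoffs):
    n = len(payoffs) - 1       # number of decision stages
    value = payoffs[-1]        # continuation value at the end
    plan = []
    for t in range(n - 1, -1, -1):
        take = payoffs[t]
        cont = (value[1], value[0])   # roles alternate each stage
        if take[0] >= cont[0]:        # mover compares take vs. pass
            value, choice = take, "take"
        else:
            value, choice = cont, "pass"
        plan.append(choice)
    plan.reverse()
    return plan, value

# Four-stage centipede with a slowly growing pot:
plan, v = solve_centipede([(1, 0), (2, 0), (3, 1), (4, 2), (4, 3)])
print(plan, v)  # 'take' at every stage; first mover ends it at (1, 0)
```

The paradoxical flavor in the quote: the induction is airtight stage by stage, yet it commits each player to beliefs about play at nodes that, under the predicted strategies, are reached with probability zero.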
Benji
is on page 116 of 504
One general lesson is that while this transformation can always be performed, it can result in an exponential blowup of the game representation. This is an important lesson, since the didactic examples of normal-form games are very small, wrongly suggesting that this form is more compact.
— Oct 29, 2022 01:33AM
Benji
is on page 81 of 504
The correlated equilibrium is a solution concept that generalizes the Nash equilibrium. Some people feel that this is the most fundamental solution concept of all. The Nobel Prize-winning game theorist R. Myerson has gone so far as to say that 'if there is intelligent life on other planets, in a majority of them they would have discovered correlated equilibrium before Nash equilibrium.'
— Oct 11, 2022 04:58AM
Benji
is on page 79 of 504
Since iterated removal of strictly dominated strategies preserves Nash equilibria, we can use this technique to computational advantage.
— Oct 11, 2022 04:56AM
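The computational advantage is that removal can shrink the game before any equilibrium search begins. A hedged sketch of the procedure for pure-strategy domination only (the book's version also allows domination by mixed strategies, which needs a linear program):

```python
# Iterated removal of strictly dominated pure strategies for a
# two-player game. u1[i][j], u2[i][j] are the row and column
# players' payoffs for row strategy i against column strategy j.
def iterated_removal(u1, u2):
    rows = list(range(len(u1)))
    cols = list(range(len(u1[0])))
    changed = True
    while changed:
        changed = False
        for i in rows[:]:        # rows strictly dominated by some k
            if any(all(u1[k][j] > u1[i][j] for j in cols)
                   for k in rows if k != i):
                rows.remove(i)
                changed = True
        for j in cols[:]:        # columns strictly dominated by some k
            if any(all(u2[i][k] > u2[i][j] for i in rows)
                   for k in cols if k != j):
                cols.remove(j)
                changed = True
    return rows, cols

# Prisoner's-Dilemma-style payoffs: cooperation (index 0) is
# strictly dominated for both players, so only defect survives.
u1 = [[-1, -4], [0, -3]]
u2 = [[-1, 0], [-4, -3]]
print(iterated_removal(u1, u2))  # ([1], [1])
```

Because the removal preserves Nash equilibria, any equilibrium of the original game lives in the (possibly much smaller) reduced game.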
Benji
is on page 76 of 504
Agents might play maxmin strategies in order to achieve good payoffs in the worst case, even in a game that is not zero sum. However, consider a setting in which the other agent is not believed to be malicious, but is instead entirely unpredictable. In such a setting, it can make sense for agents to care about minimizing their worst-case losses, rather than maximizing their worst-case pay-offs.
— Oct 11, 2022 01:47AM
Benji
is on page 76 of 504
(Mini(Max(Regret))) An agent's minimax regret action is an action that yields the smallest maximum regret.
— Oct 11, 2022 01:46AM
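The two quotes above contrast nicely in code. A small sketch with my own payoff numbers: regret measures the shortfall versus the best response to whatever the unpredictable other agent does, and the minimax-regret action can differ from the maxmin (worst-case payoff) action.

```python
# Minimax regret for one agent. u[i][j] is the agent's payoff for
# its own action i against the other agent's action j.
def minimax_regret_action(u):
    """Return (action index, its maximum regret)."""
    n_other = len(u[0])
    # Best achievable payoff against each action of the other agent:
    best = [max(row[j] for row in u) for j in range(n_other)]
    # Each action's worst-case shortfall from that best:
    regrets = [max(best[j] - row[j] for j in range(n_other))
               for row in u]
    worst = min(regrets)
    return regrets.index(worst), worst

u = [[100, 1],   # risky: huge upside, small downside
     [2, 2]]     # safe: guarantees 2, so maxmin would pick it
print(minimax_regret_action(u))  # (0, 1): regret 1 beats regret 98
```

Here maxmin picks the safe action (guaranteed 2 versus 1), but minimax regret picks the risky one: its worst regret is only 1, while playing safe risks kicking oneself for 98.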

