diff --git a/books/economics/game-theory-critical-introduction.md b/books/economics/game-theory-critical-introduction.md
index 012c9eaf60aff42d6f708677942b2bcf0bbbbe81..6b34244337cd6d36fed1b036ecb5f668f25e4278 100644
--- a/books/economics/game-theory-critical-introduction.md
+++ b/books/economics/game-theory-critical-introduction.md
@@ -448,3 +448,92 @@ resolution would require a higher State in the next upper level of recursion:
     should agree to submit to the authority of a higher State which will enforce an
     agreement to disarm (an argument for a strong, independent, United
     Nations?).
+
+Nash equilibrium: self-confirming strategies:
+
+    A set of rationalisable strategies (one for each player) are in a Nash
+    equilibrium if their implementation confirms the expectations of each player
+    about the other’s choice.  Put differently, Nash strategies are the only
+    rationalisable ones which, if implemented, confirm the expectations on which
+    they were based. This is why they are often referred to as self-confirming
+    strategies or why it can be said that this equilibrium concept requires that
+    players’ beliefs are consistently aligned (CAB).
+
+    -- 53
+
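+In symbols (a standard textbook statement, not a quotation from the book): a
+strategy profile $(s_1^*, \dots, s_n^*)$ is a Nash equilibrium if, for every
+player $i$ with payoff function $u_i$ and for every alternative strategy
+$s_i \in S_i$,
+
+$$
+u_i(s_i^*, s_{-i}^*) \ge u_i(s_i, s_{-i}^*).
+$$
+
+If each player expects the others to play their part of $s^*$ and best-responds
+to that expectation, the resulting play reproduces exactly those expectations,
+which is the self-confirming property described above.
+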
+Arguments against CAB:
+
+    In the same spirit, it is sometimes argued (borrowing a line from John von
+    Neumann and Oskar Morgenstern) that the objective of any analysis of games is
+    the equivalent of writing a book on how to play games; and the minimum
+    condition which any piece of advice on how to play a game must satisfy is
+    simple: the advice must remain good advice once the book has been published.
+    In other words, it could not really be good advice if people would not want to
+    follow it once the advice was widely known. On this test, only (R2, C2) pass,
+    since when the R player follows the book’s advice, the C player would want to
+    follow it as well, and vice versa. The same cannot be said of the other
+    rationalisable strategies. For instance, suppose (R1, C1) was recommended: then
+    R would not want to follow the advice when C is expected to follow it by
+    selecting C1 and likewise, if R was expected to follow the advice, C would not
+    want to.
+    
+    Both versions of the argument with respect to what mutual rationality entails
+    seem plausible. Yet, there is something odd here. Does respect for each other’s
+    rationality lead each person to believe that neither will make a mistake in a
+    game? Anyone who has talked to good chess players (perhaps the masters of
+    strategic thinking) will testify that rational persons pitted against equally
+    rational opponents (whose rationality they respect) do not immediately assume
+    that their opposition will never make errors. On the contrary, the point in
+    chess is to engender such errors! Are chess players irrational then?  One is
+    inclined to answer no, but why? And what is the difference as
+
+    -- 57
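+
+The "published advice" test can be made concrete. The sketch below runs it on a
+hypothetical 3x3 payoff table (the excerpt does not reproduce the book's actual
+game): a recommendation survives publication only if each player's recommended
+strategy is still a best reply when the other player is expected to follow the
+advice, i.e. only if the profile is a pure-strategy Nash equilibrium.
+
+```python
+# A minimal sketch of the "published advice" test. The payoffs below are
+# illustrative assumptions, not the game from the book.
+
+ROWS = ["R1", "R2", "R3"]
+COLS = ["C1", "C2", "C3"]
+
+# payoffs[row][col] = (row player's payoff, column player's payoff)
+payoffs = {
+    "R1": {"C1": (5, 5), "C2": (0, 0), "C3": (5, 6)},
+    "R2": {"C1": (0, 0), "C2": (8, 8), "C3": (6, 0)},
+    "R3": {"C1": (6, 5), "C2": (0, 6), "C3": (2, 2)},
+}
+
+def advice_survives(row, col):
+    """True if (row, col) remains good advice once everyone has read the book,
+    i.e. each recommended strategy is a best reply to the other recommendation."""
+    best_row_payoff = max(payoffs[r][col][0] for r in ROWS)
+    best_col_payoff = max(payoffs[row][c][1] for c in COLS)
+    return (payoffs[row][col][0] == best_row_payoff
+            and payoffs[row][col][1] == best_col_payoff)
+
+for r in ROWS:
+    for c in COLS:
+        print(f"({r}, {c}):", "survives" if advice_survives(r, c) else "unravels")
+
+# With these payoffs only (R2, C2) survives; (R1, C1) unravels because the row
+# player prefers R3 against C1 and the column player prefers C3 against R1.
+```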