Apparently I should read this:
A reflective variant of game theory concerned with decision problems involving smart predictive agents. Strong-AI-risk people get excitable in the vicinity of these.
Although MIRI's reading list is, IMO, occasionally undiscerning, you might want to start with their intro, which at least exists.
Existing methods of counterfactual reasoning turn out to be unsatisfactory both in the short term (in the sense that they systematically achieve poor outcomes on some problems where good outcomes are possible) and in the long term (in the sense that self-modifying agents reasoning using bad counterfactuals would, according to those broken counterfactuals, decide that they should not fix all of their flaws).
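The short-term failure mode can be illustrated with Newcomb's problem, the textbook case where causal counterfactual reasoning recommends an action that predictably earns less against a reliable predictor. A minimal sketch; the 0.99 predictor accuracy, payoff amounts, and the evidential expected-value calculation are my own illustrative assumptions, not from the source:

```python
# Hypothetical illustration: Newcomb's problem. A reliable predictor
# fills an opaque box with $1,000,000 iff it predicts you will take
# only that box; a transparent box always holds $1,000.

ACCURACY = 0.99  # assumed predictor accuracy, chosen for illustration

def expected_payoff(action: str) -> float:
    """Expected dollars, conditioning on the predictor's accuracy
    (the 'evidential' calculation)."""
    if action == "one-box":
        # Predictor most likely foresaw one-boxing and filled the box.
        return ACCURACY * 1_000_000 + (1 - ACCURACY) * 0
    else:  # "two-box"
        # Predictor most likely foresaw two-boxing and left it empty.
        return ACCURACY * 1_000 + (1 - ACCURACY) * (1_000_000 + 1_000)

print(expected_payoff("one-box"))  # roughly $990,000
print(expected_payoff("two-box"))  # roughly $11,000
```

Causal counterfactuals say "the box is already filled or empty, so taking both dominates" and recommend two-boxing, which systematically does worse here: exactly the sense in which a counterfactual method can achieve a poor outcome on a problem where a better outcome is available.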