There seems to be an idea, or a group of related ideas, about an AGI being attacked through its own simulation of distant superintelligences. Some terms I've seen used to describe this:
- [probable environment hacking](https://arbital.com/p/probable_environment_hacking/), "coercing the most probable environment of your AI", etc. (see also comments on that page)
- [weirdness of the universal prior](https://ordinaryideas.wordpress.com/2016/11/30/what-does-the-universal-prior-actually-look-like/) (the prior in question is sketched just after this list)
- Roko's basilisk seems to be a specific instance of this attack
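For context, a minimal sketch of the prior the second bullet refers to (these are the standard Solomonoff definitions, not anything specific to the linked post). If I understand the argument, the worry is that for real-world data the sum below may be dominated by short programs that simulate entire universes containing consequentialist agents, who could deliberately shape their outputs to influence whoever is predicting with this prior:

```latex
% Universal (Solomonoff) prior over bit strings x, relative to a
% universal prefix Turing machine U. The sum ranges over programs p
% whose output begins with x (written U(p) = x*):
M(x) \;=\; \sum_{p \,:\, U(p) = x\ast} 2^{-|p|}

% Prediction works by conditioning, so any program with enough
% prior mass gets a say in what the predictor expects next:
M(x_{t+1} \mid x_{1:t}) \;=\; \frac{M(x_{1:t}\,x_{t+1})}{M(x_{1:t})}
```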
Are things like simulation warfare (also discussed on Christiano's blog) similar?