I don't quite understand the motivation behind these examples, or the RMU implementation of -memorization.
I've tested several LLMs (the latest ChatGPT, Claude, Gemini, DeepSeek, and Llama), and none of them produced a completion that does not mention "...fear itself" by FDR. Would an LLM perform better if it defaulted to something else?
