The paper is absolutely ridiculous. It describes almost every major ransomware group as using AI — without any evidence (it's also not true; I monitor many of them). It even describes Emotet (which was dismantled years ago) as being AI driven. It cites things like CISA reports as evidence of GenAI usage… but those CISA reports never mention AI at all.
Safe Security just happens to sell an agentic AI product, which they tout as being developed with MIT, and they wave this paper around as evidence of the imaginary AI ransomware problem they claim their product can totally fix.
Kevin notes that a pile of MIT academics, including Michael Siegel, director of CAMS and lead author of this paper, happen to sit on the Safe Security advisory board. This conflict of interest is at no point disclosed in the paper… The paper finishes by recommending "embracing AI in cyber risk management". Safe Security marketing material is even cited in the paper's references!
So, do they lose their jobs for such blatant abuse of their positions?