You know that method of stopping LLM scrapers where you send them through a maze of false content until they figure out what's up? (There appears to be a
better way, BTW, though it uses up visitors' CPU cycles.)
This AI security review is sort of the reverse of that. In Soviet Russia, LLM wastes the time of
you! Oh, there's an input that isn't sanitized and could lead to a path traversal attack, you say? [Ten minutes later] Actually, yes, it is sanitized, and it's never used for file access anyway.
This is not actually a new insight, it occurs to me: LLM-generated content has been wasting my time in search results for months.