The way “AI” is going to compromise your cybersecurity is not through some magical autonomous exploitation by a singularity from the outside, but by being the poorly engineered, shoddily integrated, exploitable weak point you would not have otherwise had on the inside.
LLM-based systems are insanely complex. And complexity has real cost and introduces very real risk.


To me it’s a form of automation. I will not use it as I see no use of it in my workflows. I am also keenly aware of the trap of thinking it improves one’s effectiveness when it often very much does the opposite. Not to mention environmental costs etc.
If somebody wants to use these tools for that, whatever, have at it. But it’s pretty difficult to claim one’s an ethical hacker if one uses tools that have serious ethical issues. And genAI has plenty of those. So that’s what I’d bear in mind.