“AI” is not going to compromise your cybersecurity through some magical autonomous exploitation by a singularity on the outside; it will do so by being the poorly engineered, shoddily integrated, exploitable weak point you would not otherwise have had on the inside.
LLM-based systems are insanely complex. And complexity has real cost and introduces very real risk.
I am not opposed to machine learning as a technology. I use Firefox’s built-in translation to access information online that would otherwise be out of reach for me, and I think it’s great that small, local models can provide this kind of functionality.
I am opposed to marketing terms like “AI” – “AI” is a marketing term, there are now toothbrushes with “AI” – and I am opposed to religious pseudo-sciencey bullshit like AGI (here’s Marc Andreessen talking about how “AGI is a search for God”).
I also see very little use for LLMs. This has been pointed out before, by researchers whom Google fired for doing so: smaller, more tailored models are going to be better suited to specific tasks than ever-bigger humongous behemoths. The only reason Big Tech is desperately pushing for huge models is that these cannot be run locally, which means Big Tech can monopolize them. Firefox’s translation models show what we could have if we went in a different direction.
I cannot wait for the bubble to burst so that we can start getting the actually interesting tools out of ML-adjacent technologies, instead of the insufferable investor-targeted hype we are getting today. Just as the genuinely useful Internet stuff arrived once the dot-com bubble popped.