The way “AI” is going to compromise your cybersecurity is not through some magical autonomous exploitation by a singularity from the outside, but by being the poorly engineered, shoddily integrated, exploitable weak point you would not have otherwise had on the inside.

LLM-based systems are insanely complex. And complexity has real cost and introduces very real risk.

    • atrielienz@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      ·
      edit-2
      21 hours ago

      Pre-Generative AI, lots of companies had AI/Algorithmic tools that posed a risk to personal cyber security (Google’s Assistant and Apple’s Siri, MS’s Cortana etc).

      Is the stance here that AI is more dangerous than those because of its black box nature, it’s poor guardrails, the fact that it’s a developing technology, or it’s unfettered access?

      Also, do you think that the “popularity” of Google Gemini is because people were already indoctrinated into the Assistant ecosystem before it became Gemini, and Google already had a stranglehold on the search market so the integration of Gemini into those services isn’t seen as dangerous because people are already reliant and Google is a known brand rather than a new “startup”.

      • rysiek@szmer.infoOP
        link
        fedilink
        English
        arrow-up
        2
        ·
        22 hours ago

        Is the stance here that AI is more dangerous than those because of its black box nature, it’s poor guardrails, the fact that it’s a developing technology, or it’s unfettered access?

        All of the above I guess. Although I am not keen on making a comparison to these previous things. I have previously written about how IoT/“Smart” devices are a massive security issue, for example. This is not a competition, the point is not whether or not these tools are worse by some degree from some other problematic technologies, the point is that the AI hype would have you believe they are some end-all demiurgs when the real threat is coming from inside the house.

        Also, do you think that the “popularity” of Google Gemini is because people were already indoctrinated into the Assistant ecosystem before it became Gemini, and Google already had a stranglehold on the search market so the integration of Gemini into those services isn’t seen as dangerous because people are already reliant and Google is a known brand rather than a new “startup”.

        I don’t know about Gemini’s actual popularity. What I do know is that it is being shoved down people’s throats in every possible way.

        My feeling is that a lot of people would prefer to use their tools and devices the way they had before this crap came down the pipeline but they simply don’t know how to turn it off reliably (partially because Google makes it really hard to do so), and so Google gets to make bullish claims on line-going-up as far as “people using Gemini” are concerned.

        • atrielienz@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          21 hours ago

          My main concerns are mostly to do with the fact that Google in my experience has always had the benefit of enticing software and services that are extremely invasive but also very convenient (even if we remove IoT from the table for a moment). This is mostly due to how invasive Google Play Services is, and how invasive the Google app has been since the first iterations of Google Assistant (Google Now). I’m concerned that even those of use who have done what we can to turn off Gemini and not use Generative AI are still compromised regardless because big tech has a choke hold on the services we use.

          So I suppose I’m trying to understand what the differences are in how these two types of technology compromise cyber security.

          • rysiek@szmer.infoOP
            link
            fedilink
            English
            arrow-up
            1
            ·
            20 hours ago

            So I suppose I’m trying to understand what the differences are in how these two types of technology compromise cyber security.

            Again, it does not make sense to me to make that kind of comparison.

    • nforminvasion@lemmy.world
      link
      fedilink
      English
      arrow-up
      2
      ·
      1 day ago

      What are your thoughts on using AI prompt “engineering” to hack corporations and other organizations? Specifically if you already had a grasp on traditional pen-testing but were trying to utilize a new tool.

      • rysiek@szmer.infoOP
        link
        fedilink
        English
        arrow-up
        6
        ·
        1 day ago

        To me it’s a form of automation. I will not use it as I see no use of it in my workflows. I am also keenly aware of the trap of thinking it improves one’s effectiveness when it often very much does the opposite. Not to mention environmental costs etc.

        If somebody wants to use these tools for that, whatever, have at it. But it’s pretty difficult to claim one’s an ethical hacker if one uses tools that have serious ethical issues. And genAI has plenty of those. So that’s what I’d bear in mind.

      • rysiek@szmer.infoOP
        link
        fedilink
        English
        arrow-up
        11
        ·
        edit-2
        1 day ago

        I am not opposed to machine learning as a technology. I use Firefox’s built-in translation as a way to access information online I otherwise would not be able to access, and I think it’s great that small, local model can provide this kind of functionality.

        I am opposed to marketing terms like “AI” – “AI” is a marketing term, there are now toothbrushes with “AI” – and I am opposed to religious pseudo-sciencey bullshit like AGI (here’s Marc Andreessen talking about how “AGI is a search for God”).

        I also see very little use for LLMs. This has been pointed out before, by researchers who got fired for doing so from Google: smaller, more tailored models are going to be better suited for specific tasks than ever-bigger humongous behemoths. The only reason Big Tech is desperately pushing for huge models is because these cannot be run locally, which means they can monopolize them. Firefox’s translation models show what we could have if we went in a different direction.

        I cannot wait for the bubble to burst so that we can start getting the actually interesting tools out of ML-adjacent technologies, instead of the insufferable investor-targeted hype we are getting today. Just as we started getting actually interesting Internet stuff once the Internet bubble popped.