• bryndos@fedia.io · ↑5 · 8 hours ago

    Law enforcement will seize and use computers and the data they hold as evidence to convict criminals, just like any other tool that they might be warranted to seize.

    Courts will examine the evidence of what it did to determine what role it played in the offence and whether it supports the allegation.

    Likewise, police complaints authorities could in principle do the same against the police, if someone were to give them a warrant and the power to execute it.

    If a thing happens in public that was unwarranted and can be traced back to a police force or how they deployed any equipment, they can be judicially reviewed* for any decision to deploy that bit of kit. It’s more a matter of whether they will actually be JR’d and whether that review will be just and timely. * - in my country.

    I don’t think it’s much different from how they deploy other tech like clubs and pepper spray, tear gas, Tasers or firearms. If they have no fear of acting outwith their authority, that’s a problem.

    In some ways it might be easier to have an ‘our word’ vs ‘their word’ defense when they shoot someone, compared to a computer program that might literally document the abuse of power in its code or log files.

    “Oops, I dropped my notebook” is maybe easier than “oops, I accidentally deleted my local file and then sent a request to IT - that was approved by my manager - asking them to delete instead of restore any onsite or offsite backups”.

  • over_clox@lemmy.world · ↑28 ↓3 · 22 hours ago

    Now where does this thought come from?

    Do you not know what a computer is? It’s literally a digital logical accountant! Yeah yeah, we should probably blame the programmers and engineers instead when shit goes sideways, but now I think we need to also hold CEOs accountable when they decide to inject faulty AI into mission critical systems…

    https://lemmy.dbzer0.com/post/55990956

    • henfredemars@infosec.pub · ↑15 · 22 hours ago

      There’s a reason why license agreements often state that there are no warranties express or implied, no guarantees, and no fitness for any particular purpose.

    • mogranja@lemmy.world · ↑7 · 20 hours ago

      If a building collapses, do you blame the people who built the walls and poured the concrete, or the ones who chose the materials and approved the project?

      In any case, often programmers and engineers retain no rights to the software they worked on. So whoever profits from the software should also shoulder the blame.

      • over_clox@lemmy.world · ↑1 · 19 hours ago

        Those in charge, who approve the continued use of proven faulty software, should take all the blame, at least once significant faults have been proven anyway.

        I mean you have a point, but still, 1+2+3≠15, and a bag of Doritos is not a gun. When AI fucks up this badly, the real guilty parties

        (my AI keyboard wanted to replace guilty with gullible BTW, and I’m using the FUTO keyboard no less),

        the real guilty parties are the ones in charge who allow such proven faulty systems to continue running in mission-critical systems.

        Like fuck, a bag of Doritos is not a fucking gun!

  • affenlehrer@feddit.org · ↑10 · 19 hours ago

    You could hold developers of algorithms, logic and even symbolic AI accountable.

    However, it’s a completely different story for AI based on deep neural networks. After training they’re just a bunch of weights and parameters without individual meaning, and not a few of them but billions or trillions. Almost none of them were individually set; they’re often randomly initialized and then automatically tuned by deep learning algorithms during training until the behavior / predictions of the neural net are “good enough”.

    It’s practically impossible to review the network, and when you test it you only get results for the concrete test cases; you can’t interpolate or assume that even slightly different cases will behave similarly. You also can’t fix an individual bug. You can only train again or train more, and that might fix the problem, but it could also destroy something that worked before (catastrophic forgetting).
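
    To make that concrete, here’s a minimal sketch (plain NumPy, a toy illustration only, not anyone’s real system) of what a trained network actually is: just arrays of tuned floating-point numbers.

        import numpy as np

        # Tiny 2-layer network learning XOR. The task doesn't matter; the point is
        # that everything the network "knows" lives in these weight arrays.
        rng = np.random.default_rng(0)
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        y = np.array([[0], [1], [1], [0]], dtype=float)

        # Weights start out as random noise...
        W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
        W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        # ...and are nudged by gradient descent until the outputs are "good enough".
        for step in range(10000):
            h = sigmoid(X @ W1 + b1)               # forward pass
            out = sigmoid(h @ W2 + b2)
            d_out = (out - y) * out * (1 - out)    # backward pass (squared error)
            d_h = (d_out @ W2.T) * h * (1 - h)
            W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
            W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

        print(np.round(out, 2))  # predictions should now be close to [0, 1, 1, 0]...
        print(W1)                # ...but the weights themselves are just opaque numbers:
                                 # nothing to review, no individual line you could "fix"

    Scale that from a few dozen weights to billions and there is nothing left that a reviewer, an auditor, or a court could meaningfully inspect.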

  • ChonkyOwlbear@lemmy.world · ↑14 · 21 hours ago

    That’s why cops love using dogs too. Courts have ruled that dogs can’t lie. That means if a dog indicates you have contraband, then a search is warranted, even if nothing is found. This of course ignores that it is entirely possible the dog indicated contraband because the cop trained it to do so on command.

  • individual@toast.ooo · ↑8 · 19 hours ago (edited)

    this is straight out of the book ‘Do Androids Dream of Electric Sheep?’, later turned into the movie ‘I, Robot’

    Spoiler:

    only humans can be convicted of murder, therefore if a robot kills someone, it’s nothing more than a common mechanical hazard.

    • PeriodicallyPedantic@lemmy.ca · ↑7 ↓2 · 20 hours ago

      I feel like you’re missing the point.
      They’re not saying to jail computers, they’re saying to beware of political leaders using computers to abdicate responsibility.

    • kibiz0r@midwest.social · ↑3 · 18 hours ago

      We shut down companies for it though, and what AI vendors are doing is basically selling the ability to turn job roles into “accountability sinks”, where your true value is in taking the fall for AI when it gets it wrong (…enough that someone successfully sues).

      If you want to put it in gun terms: The AI vendors are selling a gun that automatically shoots at some targets but not others. The targets it recommends are almost always profitable in the short term, but not always legal. You must hire a person to sit next to the gun and stop it from shooting illegal targets. It can shoot 1000 targets per minute.

    • snooggums@piefed.world · ↑4 ↓3 · 20 hours ago (edited)

      We also don’t give the murderer a free pass because they used a gun.

      A tool is a tool, and whether the designer or the user is responsible depends on why it caused a negative outcome. I know you clarified it later, but it’s so stupidly obvious I wanted to add to it.

    • over_clox@lemmy.world · ↑3 ↓4 · 22 hours ago

      The gun isn’t running software in the background when humans are away either. See my other comment: when shit goes sideways, blame the programmers, engineers, and now the CEOs who decided to jam screwy AI up our collective asses…

      • QuantumTickle@lemmy.zip · ↑12 ↓1 · 22 hours ago

        We don’t jail gun manufacturers either.

        When a tool is used to kill a human, the user of the tool is guilty.

        • atrielienz@lemmy.world · ↑5 ↓3 · 21 hours ago (edited)

          So what about when a kid commits suicide because the generative AI LLM agreed with him in a harmful way?

          Edit: In before someone says something about how the gun manufacturers still shouldn’t be held accountable.

          Gen AI LLMs in this instance are products working as intended/designed, and are being used in a way that the manufacturer knows is harmful and admits is damaging. They also admit that there are no laws to safeguard persons against how the AI is designed, implemented, etc., and these things don’t even have warning labels.

          Guns by contrast have lots of laws involving how and where they can be sold and accessed, as well as by whom, and with respect to informing the user of the dangers. You don’t sign a EULA or a TOS when you buy a gun, waiving your rights to sue. You don’t agree to only arbitration.

            • over_clox@lemmy.world · ↑3 · 20 hours ago (edited)

              I’ll agree with you there, I shot myself in the arm when I was only 3 with a pellet gun. My dad realized his mistake and kept all guns away from me, until age 10, when he took me out to shoot some bottles and cans, and teach me proper gun safety.

              Yes, that might have been an earlier childhood lesson than many parents would agree to, but he was proper about what and when he taught me. Like, aside from the obvious stuff of keeping the gun on safety and never pointing it at anyone or anything unless you intend to use it, who thinks of things like: don’t lean on a rifle with the barrel in the dirt? The dirt can and will clog the barrel and cause the gun to explode!

              Anyways, back on point of AI…

              Most parents aren’t just up and giving their kids guns, but major corporations are shoving this AI shit up everyone’s asses, as much as they can anyway, knowing good and well that one AI model says 1+2+3=15 and another AI model is suggesting that people suffering from pain use heroin…

              So what’s the answer, avoid AI? Well fuck Google then…

        • PeriodicallyPedantic@lemmy.ca · ↑2 ↓2 · 20 hours ago

          When a human dies because a tool was designed with needless danger, the manufacturer is often prosecuted.

          But again, I think you’re missing the point.

          • QuantumTickle@lemmy.zip · ↑6 ↓1 · 21 hours ago (edited)

            When guns have no legal uses, that’s a direction we could go. Until then, this just holds people accountable for other people misusing their product.

            In a chain of responsibility, the last person who’s accountable should be punished, not the first.

        • over_clox@lemmy.world · ↑1 · 19 hours ago

          • Google is built on Android
          • Android is built on Linux
          • Linux is open source

          So, I think that the open source developers should file a class action lawsuit for stealing their code.

          Go ahead, ask Linus Torvalds, I bet he’s not exactly happy with the current trajectory…

  • Kyrgizion@lemmy.world · ↑2 · 21 hours ago

    This is unironically one of the main drivers of AI. As soon as all crucial social systems are inundated with AI, the built-in bias will be excused as “minor glitches” of the system, but the real reason for adopting it was always a total lack of accountability.

  • kingthrillgore@lemmy.ml · ↑1 · 21 hours ago

    “While computers are mechanical, the processes must be dictated and implemented by a human. Therefore, the only person culpable when a computer does something, is the human who wrote the instruction set, code, or algorithm.”

    If we interpreted the problem in terms of human culpability and consequences, a whole lot of bugs would disappear from software right quick.

    • shalafi@lemmy.world · ↑2 · 21 hours ago

      If a mistake in my code exposes me to criminal investigation, I’m getting a new job.