affenlehrer@feddit.org

You could hold the developers of classical algorithms, rule-based logic, and even symbolic AI accountable.

However, it’s a completely different story for AI based on deep neural networks. After training, they’re just a bunch of weights and parameters without individual meaning, and not a few of them but billions or trillions. Almost none of them are set individually: they’re typically randomly initialized and then automatically tuned by the training algorithm until the network’s behavior / predictions are “good enough”.
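To make that concrete, here’s a minimal toy sketch in pure NumPy (the XOR task, layer sizes, seed, and learning rate are all arbitrary choices for illustration): after training, the entire “program” is nothing but these arrays of floats.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two tiny weight matrices: after training, this IS the whole "program".
# (Sizes, seed, and learning rate are arbitrary illustration choices.)
W1 = rng.normal(scale=0.5, size=(2, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))

X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])  # toy XOR target

lr = 0.5
for _ in range(5000):
    h = np.tanh(X @ W1)           # hidden activations
    out = h @ W2                  # predictions
    err = (out - y) / len(X)      # mean-squared-error gradient
    # Backprop nudges *every* weight a little; none is set by hand.
    W2 -= lr * (h.T @ err)
    W1 -= lr * (X.T @ ((err @ W2.T) * (1 - h ** 2)))

print(W1, W2, sep="\n")  # just floats; no single number "is" the XOR rule
```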

It’s practically impossible to review such a network, and testing only gives you results for the concrete test cases; you can’t interpolate, or assume that even slightly different inputs will behave similarly. You also can’t fix an individual bug. All you can do is train again or train more, and that might fix the problem, but it could also break something that worked before (catastrophic forgetting).
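Continuing the toy sketch above (same assumed W1, W2, X, y), here’s the “you can only retrain, not patch” problem: training on the one “buggy” input moves every parameter again, so behavior on all the other inputs can silently regress.

```python
# Suppose the net is wrong on one input. There is no line of code
# to patch; the only lever is more training on that case.
bug_x = np.array([[1., 1.]])
bug_y = np.array([[0.]])

for _ in range(200):
    h = np.tanh(bug_x @ W1)
    err = h @ W2 - bug_y
    # The same global weights move again...
    W2 -= 0.5 * (h.T @ err)
    W1 -= 0.5 * (bug_x.T @ ((err @ W2.T) * (1 - h ** 2)))

# ...so you must re-test *everything*: old cases may now fail
# (catastrophic forgetting), and passing tests say nothing about
# inputs you never tested.
print(np.tanh(X @ W1) @ W2)
```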