The Fedora Council has finally come to a decision on allowing AI-assisted contributions to the project. The agreed-upon guidelines are fairly straightforward and will permit AI-assisted contributions as long as they are properly disclosed and transparent.

The AI-assisted contributions policy outlined in this Fedora Council ticket is now approved for the Fedora Project going forward. AI-assisted code contributions are permitted, but the contributor must take responsibility for the contribution; the use of AI must be transparently disclosed, such as with the “Assisted-by” tag; and while AI can help assist human reviewers in evaluation, it must not be the sole or final arbiter. This AI policy also doesn’t cover large-scale initiatives, which will need to be handled individually with the Fedora Council.
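Disclosure of this kind is typically done with a commit-message trailer. A hypothetical sketch of what such a commit might look like (the exact tag value and formatting here are assumptions for illustration, not quoted from the policy):

```
Fix off-by-one error in config parser

The loop bound skipped the final entry when the file ended
without a trailing newline.

Assisted-by: GitHub Copilot (suggestion reviewed and tested by author)
Signed-off-by: Jane Contributor <jane@example.org>
```

Trailers like this are machine-readable, so a project can later audit or filter AI-assisted commits with standard git tooling.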

  • Durandal@lemmy.today · 1 day ago

Unfortunate. Announcing this when everyone is migrating from Windows… a lot of which is because of the AI bullshit in it… seems like a stick-in-the-spokes move.

    I guess it’s time to go shopping for a new distro :(

    • skilltheamps@feddit.org · 1 day ago

      This is about contributing code that was co-created with an LLM like Copilot, not about adding “AI” features to Fedora.

        • Vik@lemmy.world · 1 day ago

          That’s a fair point, though at least with projects like Fedora, commits are public and can be broadly scrutinised, and since this stipulates that the use of LLMs must be disclosed, I’d hope that’d keep its use firmly under the spotlight.

          • Durandal@lemmy.today · 22 hours ago

            Well, sure… the point of a warning label on the side of a product is to let you make an informed choice about whether to use that product. A warning label saying “we condone the use of unethical practices” lets me decide I would rather seek a different product.

            • Vik@lemmy.world · edited · 21 hours ago

              Another fair point: by allowing it at all, they’re condoning its use. I personally see it less as condoning and more as acknowledging its wide use in the field, and that they probably cannot prevent their contributors from using it entirely.

              I’d be interested in how many commits come in from now on disclosing the use of LLMs.

              • Durandal@lemmy.today · 20 hours ago

                Well that’s what “condone” means fwiw.

                I see it the same as how Steam now requires disclosure of AI use on store pages. And I treat it the same way: if I see it, I make a consumer choice not to support the game.

                What disturbed me was reading the minutes of the meeting: people seemed genuinely excited to include gen-AI code. To me that speaks to an ethos that is highly divergent from what I would like to see and what should be happening. It doesn’t feel like “welp, I guess we gotta let people do it” and more like “oh boy, we can finally use it”. And with all the companies that make LLMs, how long before some backdoor evil nonsense sneaks in? To say I’m dubious would be an understatement. 🤷

                • Vik@lemmy.world · 20 hours ago

                  That’s understandable; I’m inclined to react the same way to games on Steam.

                  I hadn’t checked in on how that discussion went down; that’s disappointing. I just settled into Fedora after fully moving on from Windows earlier this year (I had been multi-booting for several years prior). Time will tell if this all goes to shit as well.

    • woelkchen@lemmy.world · 1 day ago

      I guess it’s time to go shopping for a new distro :(

      If you think that undisclosed AI contributions aren’t happening everywhere, you’re delusional.

      • Durandal@lemmy.today · 22 hours ago

        Doesn’t mean I have to support the ones that actively encourage it.

        I guess it’s time for some of the projects to start explicitly putting little “hand-crafted code” stickers on them.

        • woelkchen@lemmy.world · 11 hours ago

          Properly attributed generated lines are easier to remove should courts declare them illegal.

          What would projects with undeclared AI code do? Shut everything down? Revert everything until the commit before ChatGPT launched? Just say yolo and go on?

          • Durandal@lemmy.today · 3 hours ago

            Counterpoint… there is no real enforcement beyond the honor system, so it changes very little other than expressly condoning the activity.