The Fedora Council has finally come to a decision on allowing AI-assisted contributions to the project. The agreed-upon guidelines are fairly straightforward and will permit AI-assisted contributions as long as they are properly disclosed and transparent.

The AI-assisted contributions policy outlined in this Fedora Council ticket is now approved for the Fedora project going forward. AI-assisted code contributions are permitted, but the contributor must take responsibility for that contribution, the use of AI must be transparently disclosed (such as with the “Assisted-by” tag), and AI can assist human reviewers and evaluation but must not be the sole or final arbiter. This AI policy also doesn’t cover large-scale initiatives, which will need to be handled individually with the Fedora Council.
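As a rough illustration of the disclosure requirement, an “Assisted-by” tag would sit alongside the usual trailers at the end of a commit message. The commit text, tool placeholder, and contributor below are made up for the example; only the tag itself comes from the policy:

    Fix off-by-one error in cache eviction loop

    The previous loop skipped the final entry whenever the cache was full.

    Assisted-by: <name of the AI tool used>
    Signed-off-by: Jane Contributor <jane@example.org>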

  • thingsiplay@beehaw.org · 24 hours ago

    I think not allowing it at all would be worse, because then people start claiming not to use AI while they secretly do. Allowing it with a disclosure at least makes this process a bit more transparent. You can think about AI what you want; at least handling it this way is better than not allowing it at all.

  • passepartout@feddit.org · 1 day ago (edited)

    Don’t know how bad a take this is, but refusing to use LLMs for coding assistance to any degree, just for the sake of not using LLMs, might not be the best option right now.

    There has to be a middle ground between letting the thing spit out whole kernel modules and refusing to use it at all.

    Also, having it declared as AI-assisted code might be better than people doing it anyway without disclosing it.

    • curbstickle@anarchist.nexus · 24 hours ago

      The middle ground, IMO, is not letting it spit out code.

      It’s almost certainly terrible, every time. Sometimes though… it’s just mostly bad.

      I’ve found it useful for finding errors and potential optimizations, though. Just not, you know, letting it actually write anything.

      But letting it review and seeing:

      “This library is currently being considered for deprecation on this mailing list, where another library is being suggested instead.”

      That’s useful! Helpful, even.

      Just not the nonsense it makes on its own.

      • woelkchen@lemmy.world · 7 hours ago

        “The middle ground, IMO, is not letting it spit out code.”

        Are SPEC files for RPM creation code? How much actual code is even written under the Fedora umbrella, apart from maintenance scripts and such? Adjacent projects such as Anaconda are in the rhinstaller organization on GitHub: https://github.com/rhinstaller/anaconda

        Either I overlooked the details or they aren’t spelled out. From my experience of packaging software for myself as RPMs (for openSUSE), the amount of actual code is a few lines of bash scripting to invoke sed and such, along the lines of the sketch below.
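        A minimal sketch of what that typically amounts to in a spec file’s %prep section (the sed call and the path it patches are hypothetical, just to show the scale):

            %prep
            %autosetup
            # The "actual code": one sed call to fix a hardcoded install prefix.
            sed -i 's|/usr/local|%{_prefix}|g' Makefile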

  • woelkchen@lemmy.world · 21 hours ago

    Much of distribution development is writing menial scripts and SPEC files. It’s tedious work with little creativity. The last SPEC file for an RPM package I wrote from scratch was years ago, but it was such tedious work. The Arch maintainers even argue that their PKGBUILD files are so simple that they don’t pass the so-called threshold of originality and are therefore public domain anyway.

    Much can be (and probably already is) automated. Compilation directives like CMake files already contain all the info needed to generate a workable, if a bit bare-bones, SPEC file. I’d say an LLM might even be overkill for what a script could also achieve, something like the sketch below. The result is public domain anyway.
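    A rough sketch of the kind of script meant here, assuming the name and version can be pulled from a project() line in CMakeLists.txt; the script and the skeleton it prints are illustrative, not an existing Fedora tool:

        #!/usr/bin/env python3
        # Sketch: read project(<name> VERSION <x.y.z>) from CMakeLists.txt
        # and print a bare-bones SPEC skeleton to stdout.
        import re
        import sys

        TEMPLATE = """\
        Name:           {name}
        Version:        {version}
        Release:        1%{{?dist}}
        Summary:        TODO
        License:        TODO
        URL:            TODO
        Source0:        %{{name}}-%{{version}}.tar.gz
        BuildRequires:  cmake gcc-c++

        %description
        TODO

        %prep
        %autosetup

        %build
        %cmake
        %cmake_build

        %install
        %cmake_install

        %files
        TODO

        %changelog
        * TODO
        """

        def main(path="CMakeLists.txt"):
            text = open(path, encoding="utf-8").read()
            # Match e.g.  project(foo VERSION 1.2.3 LANGUAGES CXX)
            match = re.search(r"project\s*\(\s*(\w+)[^)]*?VERSION\s+([0-9.]+)",
                              text, re.IGNORECASE)
            if not match:
                sys.exit("no project(<name> VERSION <x.y.z>) found in " + path)
            print(TEMPLATE.format(name=match.group(1), version=match.group(2)))

        if __name__ == "__main__":
            main(*sys.argv[1:])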

  • Durandal@lemmy.today · 1 day ago

    Unfortunate. Announcing this when everyone is migrating from Windows… a lot of which is because of the AI bullshit in it… seems like a stick-in-the-spokes move.

    I guess it’s time to go shopping for a new distro :(

    • skilltheamps@feddit.org · 1 day ago

      This is about contributing code that was co-created with an LLM like Copilot. Not about adding “AI” features to Fedora.

        • Vik@lemmy.world · 22 hours ago

          That’s a fair point, though at least with projects like Fedora, commits are public and can be broadly scrutinised, and since this stipulates that the use of LLMs must be disclosed, I’d hope that’d keep its use firmly under the spotlight.

          • Durandal@lemmy.today · 18 hours ago

            Well sure… the point of a warning label on the side of a product is to let you make an informed choice about whether you want to use that product. A warning label saying “we’re condoning the use of unethical practices” allows me to decide I would rather seek a different product.

            • Vik@lemmy.world · 17 hours ago (edited)

              Another fair point: by allowing it at all, they’re condoning its use. I personally see it less as condoning and more as acknowledging its wide use in the field, and that they probably cannot prevent their contributors from using it entirely.

              I’d be interested in how many commits come in from now on disclosing the use of LLMs.

              • Durandal@lemmy.today · 16 hours ago

                Well, that’s what “condone” means, FWIW.

                I see it the same as Steam now requiring disclosure of AI use on store pages. And I treat it the same way: if I see it, I make a consumer choice not to support the game.

                What disturbed me was reading in the minutes of the meeting that people seemed genuinely excited to include gen-AI code. To me that speaks to an ethos that is highly divergent from what I would like to see and what should be happening. It doesn’t feel like “welp, I guess we gotta let people do it” and more like “oh boy, we can finally use it”. And with all the companies that make LLMs, how long before some backdoor evil nonsense sneaks in? To say I’m dubious would be an understatement. 🤷

                • Vik@lemmy.world · 16 hours ago

                  That’s understandable; I’m inclined to react the same way to games on Steam.

                  I hadn’t checked in on how that discussion went down; that’s disappointing. I just settled into Fedora after fully moving on from Windows earlier this year (having multi-booted for several years prior). Time will tell if this all goes to shit as well.

    • woelkchen@lemmy.world · 22 hours ago

      “I guess it’s time to go shopping for a new distro :(”

      If you think that undisclosed AI contributions aren’t happening everywhere, you’re delusional.

      • Durandal@lemmy.today · 18 hours ago

        Doesn’t mean I have to support the ones that actively encourage it.

        I guess it’s time for some projects to start putting little stickers on them that explicitly say “hand-crafted code”.

        • woelkchen@lemmy.world · 7 hours ago

          Properly attributed generated lines are easier to remove should courts declare them illegal.

          What would projects with undeclared AI code do? Shut everything down? Revert everything back to the commit before ChatGPT launched? Just say YOLO and carry on?