Everyone disliked that.

  • DirtyPair [they/them]@hexbear.net · +45 · 3 months ago

    very silly to be upset about this policy

    Accountability: You MUST take the responsibility for your contribution: Contributing to Fedora means vouching for the quality, license compliance, and utility of your submission. All contributions, whether from a human author or assisted by large language models (LLMs) or other generative AI tools, must meet the project’s standards for inclusion. The contributor is always the author and is fully accountable for the entirety of these contributions.

    Contribution & Community Evaluation: AI tools may be used to assist human reviewers by providing analysis and suggestions. You MUST NOT use AI as the sole or final arbiter in making a substantive or subjective judgment on a contribution, nor may it be used to evaluate a person’s standing within the community (e.g., for funding, leadership roles, or Code of Conduct matters). This does not prohibit the use of automated tooling for objective technical validation, such as CI/CD pipelines, automated testing, or spam filtering. The final accountability for accepting a contribution, even if implemented by an automated system, always rests with the human contributor who authorizes the action.

    the code is going to be held to the same standards as always, so it’s not like they’re going to be blindly adding slop i-cant

    you can’t stop people from using LLMs (how would you know the difference?), so formalizing the process allows for better accountability
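
    in practice, disclosure can be as lightweight as a trailer in the commit message. a rough sketch (the Assisted-by trailer and the commit itself are made up for illustration, not necessarily the wording Fedora lands on):

        netcfg: fix race in interface teardown

        Serialize teardown against the watcher thread so the
        interface list is never walked while it is being freed.

        Assisted-by: <tool name and version>
        Signed-off-by: <contributor name and email>

    reviewers then know exactly which submissions to read with extra care, which is the whole accountability point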

    • invalidusernamelol [he/him]@hexbear.net · +21 · 3 months ago

      I think having a policy that forces disclosure of LLM code is important. It’s also important to establish that AI code should only ever be allowed to exist in userland/ring 3. If you can’t hold the author accountable, the code should not have any permissions or be packaged with the OS.

      I can maybe see using an LLM for basic triaging of issues, but I also fear that adding that system will lead to people placing more trust in it than they should.

        • invalidusernamelol [he/him]@hexbear.net · +3 · 3 months ago

          I know, that was me just directly voicing that opinion. I do still think that AI code should not be allowed in anything that even remotely needs to be secure.

          Even if they can still be held accountable, I don’t think it’s a good idea to allow something that is known to hallucinate believable code to write important code. Just makes everything a nightmare to debug.
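
          To make it concrete, here’s a contrived C sketch (names invented) of the kind of plausible-looking code a model will happily emit: it compiles cleanly, reads as routine, and corrupts the heap by one byte:

              #include <stdlib.h>
              #include <string.h>

              /* Looks like an ordinary string duplication, but the buffer has
               * no room for the terminator: strcpy() writes strlen(src) + 1
               * bytes into a strlen(src)-byte allocation, a one-byte heap
               * overflow that may only surface much later, far from here. */
              char *dup_name(const char *src) {
                  char *dst = malloc(strlen(src)); /* should be strlen(src) + 1 */
                  if (!dst)
                      return NULL;
                  strcpy(dst, src);
                  return dst;
              }

          Nothing about it looks wrong at a glance, which is exactly why it’s a nightmare to debug.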

    • kristina [she/her]@hexbear.net · +19 / −2 · 3 months ago

      The AI hate crowd is getting increasingly nonsensical. I see it as any other software: the more open it is, the better.

      • hello_hello [comrade/them]@hexbear.net · +28 · 3 months ago

        Me getting links to AI-generated wikis where nearly all the information is wrong, but of course I’m overreacting, because AI is only used by real professionals who are already experts in their domain. I just need to wait 5 years and it’ll probably be only half wrong.

      • BodyBySisyphus [he/him]@hexbear.net · +23 · 3 months ago

        I’m still working on a colleague’s AI-generated module that used 2,000 lines of code to do something that could’ve been done in 500. Much productivity is being had by all.

      • Kefla [she/her, they/them]@hexbear.net · +16 · 3 months ago

        Sure, it’s just software. It’s useless software which makes everything it is involved in worse and it’s being shoved into everything to prop up the massive bubble that all the tech companies have shoveled all their money into, desperate for any actual use case to justify all their terrible ‘investments.’

    • kungen@feddit.nu · +11 / −1 · 3 months ago

      the code is going to be held to the same standards as always, so it’s not like they’re going to be blindly adding slop

      But you think it’s okay for reviewers to waste their time reviewing slop?

      I’ve had to spend so much time the last few months reviewing and refactoring garbage code, all because corporate whitelisted some LLMs. This group I’ve worked with for many years used to be really competent developers, but they’ve all become blind. It’s a tragedy.

      how would you know the difference?

      Maybe you can’t, but it’s very obvious in many cases.

      • DirtyPair [they/them]@hexbear.net · +15 / −1 · 3 months ago

        But you think it’s okay for reviewers to waste their time reviewing slop?

        fantasy world where, if they simply make a rule that you’re Not Allowed to submit LLM code, nobody will

        Maybe you can’t, but it’s very obvious in many cases.

        so not all cases? don’t waste my time

    • The thing is, it’ll probably be fine for the end product, beyond the wave of codeslop that will be brought to the project once the shitty vibe coders hear the news. That’s just more work for the volunteers, but you’re right that it isn’t really that different a policy in practice.

  • hello_hello [comrade/them]@hexbear.net · +29 · 3 months ago

    Any use of a popular large language model likely violates the GPL and can be counted as plagiarism. Vibe coders and LLM pushers have no good answer to being accused of plagiarism.

    This rule change doesn’t really change anything other than making it easier for maintainers to filter out all the slop submissions and being able to punish users who use LLMs but don’t disclose it.

    • chgxvjh [he/him, comrade/them]@hexbear.net · +3 · 3 months ago

      Who even knows at this point. Many of the highest paid consultants, lawyers and lobbyists argue about whether creating models that regurgitate copyrighted material is covered by the datamining exception in EU copyright law. They don’t benefit from a quick solution. I’m sure the situation in the US is just as dumb.

      Licensing can only give you additional rights to use a copyrighted work, it can’t really take away rights that you would have even without a license.

      Imho, generative AI should probably be treated like (lossy) compression, especially when models are created from works with the intention of producing similar works.

  • alexei_1917 [mirror/your pronouns]@hexbear.net · +13 · 3 months ago

    I’d love to see the chaos on the LKML if some idiot tried to submit “AI assisted” code there.

    TBH, if any distro was going to do this… well, I’m not surprised it’s Fedora. I don’t want to get into a sectarian fight about distro preferences here (this is a leftist website, way more crap to be sectarian about instead, lol), but… yeah.

    • PorkrollPosadist [he/him, they/them]@hexbear.net · +12 · 3 months ago

      I am rather fond of Fedora. I still use it on my laptop. At the end of the day it is a glorified testbed for Red Hat (read: IBM) though, so yea.

      A bit surprised Ubuntu didn’t do this first, but then again Ubuntu is actually widely deployed as a server OS for some reason, while Fedora is primarily used by end-users and GTK developers. If Ubuntu made the first move, the Internet might actually stop working and we’d all have to touch grass.

      • Keld [he/him, any]@hexbear.net · +12 · 3 months ago

        As pointed out elsewhere, and as I think makes some intuitive sense, they are not just letting ChatGPT write updates to the OS directly; they are making rules for how contributors who use AI code are to be treated (the same as any other coder, with the same requirements).
        Now if they were also using AI to vet the code, that would end with computers exploding.

        I think the most likely bad result from this will be a lot of people without the necessary skill tying up other people’s time looking through their vibe coded nonsense to shoot it down. But that was going to happen anyway.

        • PorkrollPosadist [he/him, they/them]@hexbear.net · +8 · 3 months ago (edited)

          From my (admittedly, limited) experience, sign-offs are often relatively shallow sanity checks. Nothing about this patch looks egregious? It solves a known problem? It makes it through the CI pipeline? Approved. When dealing with languages like C, where very subtle mistakes can introduce defects and vulnerabilities, I would not trust an LLM to do the brunt of the due diligence which would ordinarily come from the contributor (who typically spends a lot more time thinking about the problem than the person signing off on the patch). I’ll admit this isn’t a novel problem, but the amount of scrutiny applied to submissions will definitely need to increase if this becomes a standard process.
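
          For illustration, here’s a made-up patch of the sort that sails through a shallow sanity check: there’s a NULL check, the arithmetic “obviously” matches the copy, and CI stays green.

              #include <stdlib.h>
              #include <string.h>

              struct entry { int id; char data[60]; };

              /* The multiplication can wrap on 32-bit targets: a huge count
               * buys a tiny buffer, and the memcpy() that follows overruns
               * it. calloc(count, sizeof(struct entry)) would have rejected
               * the overflowing size instead. */
              struct entry *clone_entries(const struct entry *src, size_t count) {
                  struct entry *buf = malloc(count * sizeof(struct entry));
                  if (!buf)
                      return NULL;
                  memcpy(buf, src, count * sizeof(struct entry));
                  return buf;
              }

          A five-minute sign-off will not catch that, and neither will a test suite that only exercises sane sizes.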

      • Palacegalleryratio [he/him]@hexbear.net · +6 · 3 months ago

        I use Fedora on a couple of machines - it’s purely pragmatic. It’s a great distro - always been solid for me, worked great on my hardware (some common and some exotic), and I’ve never had an update break my system (cough arch cough), but it stays pretty cutting edge. I also like GNOME - so disregard my opinion lol

    • AI-assisted code has probably already been submitted to the kernel and nothing has happened. As long as you have a proper review process, AI-submitted code is no more dangerous than code written by a human who doesn’t fully know what they’re doing.

    • bobs_guns@lemmygrad.ml · +4 · 3 months ago

      All Fedora systems are well-administered in the same way, but every broken Gentoo installation is broken differently.

      • PorkrollPosadist [he/him, they/them]@hexbear.net · +4 · 3 months ago (edited)

        I’m just circlejerking really. Both have their niches. Fedora is solid. I’ve never had a botched update (meanwhile, my Windows machine at work gets stuck in a boot loop every other update). When I’m doing tech support for people, it is the first OS I reach for. It’s also great for machines that you don’t use regularly (low maintenance). I personally have a lot of tech related hobbies from software development to astronomy to ham radio. I originally decided to switch from Fedora to Gentoo to run a kernel with the older iptables firewall instead of nftables (because Docker took a while to transition and Fedora likes to introduce the newest shit immediately, which is usually a good thing).

        My shit is definitely broken in unique ways, but it’s functional in unique ways as well. Someday I will switch to Guix and have a machine which is broken deterministically.

        • bobs_guns@lemmygrad.ml · +2 · 3 months ago

          I’ve been wanting to try Guix/Nix but it just seems like a lot of commitment. They aren’t supported in distrobox either, which makes it harder to get a feel for the distro. And due to SELinux differences I can’t use Nix on its own on my Fedora-derived OS anyway. Very annoying.

  • lambalicious@lemmy.sdf.org · +3 · 3 months ago

    Didn’t really expect any more from a distro I abandoned because of their racist and imperialistic policies.

  • I saw this in the middle of installing Tumbleweed.

    Fedora Atomic served me well for the past year, but my current system uses a MediaTek wifi chip which needs a kernel patch to work properly with power state changes.

    I only had it break once in that span of time due to a tweak that was 500% my own fault (typo in x11.conf when switching to sway).