Recently, [Faith Ekstrand] announced on Mastodon that Mesa was updating its contributor guide. The update follows an AI slop incident in which someone submitted a massive patch to the Mesa project, claiming it would improve performance ‘by a few percent’. The catch? The entire patch was generated by ChatGPT, and the submitter became somewhat irate when the very patient Mesa developers explained that they’d happily look at the issue once the submitter had condensed the purported ‘improvement’ into a bite-sized patch.

The entire saga is summarized in a recent video by [Brodie Robertson], which highlights both how incredibly friendly the Mesa developers are, and how the use of ChatGPT and kin has led some people with zero programming skills to apparently believe that they can now contribute code to OSS projects. Unsurprisingly, the Mesa developers were unable to disabuse this particular individual of that notion, but the diff to the Mesa contributor guide by [Timur Kristóf] should make abundantly clear that someone playing Telephone between a chatbot and OSS project developers is neither desirable nor helpful.

That said, [Brodie] also highlights a recent post by [Daniel Stenberg] of Curl fame, who thanked [Joshua Rogers] for contributing a massive list of potential issues found using ‘AI-assisted tools’, as detailed in this blog post by [Joshua]. An important point here is that these ‘AI tools’ are not LLM-based chatbots, but rather existing tools like static code analyzers, tweaked with more smarts bolted on. They’re purpose-made tools that still require you to know what you’re doing, but they can be a real asset to a developer, and a heck of a lot more useful to a project like Curl than being sent fake bug reports by a confabulating chatbot, as has happened previously.


From Blog – Hackaday via this RSS feed

  • Rentlar@lemmy.ca

    Sorry if I am hand-waving away too many details on the implementation, but I’m glad you get my point.

    • PhilipTheBucket@piefed.social

      Yeah, it is a really good idea. The sticking point is that the modern structure of LLMs really doesn’t allow it; it’s not that they haven’t tried. But at least so far, whatever secondary structure you apply to check the first answer suffers from the exact same issue: it doesn’t really “understand”, and so it’s subject to spouting totally wrong stuff sometimes, no matter how carefully you try to set it up with awareness and fact checking.