• Sailor Sega Saturn@awful.systems · 4 days ago

    Ah yes, the typical workflow for LLM-generated changes:

    1. LLM produces nonsense at the behest of Employee A.
    2. Employee B leaves a bunch of edits and suggestions to hammer it into something that almost kind of makes sense, in a soul-sucking, error-prone process that takes twice as long as just writing the dang code.
    3. Code submitted!
    4. Employee A gets promoted.
    • wjs018@piefed.social · 4 days ago

      I just looked at the first PR out of curiosity, and wow…

      this isn’t integrated with tests

      That’s the part that surprised me the most. It failed the existing automation. Even after being prompted to fix the failing tests, it proudly added a commit “fixing” them (they still didn’t pass). Then the dev had to step in and explain why the test was failing and how to fix the code to make it pass… something that Copilot should really be able to check. With this much handholding, all of this could have been done much faster and cleaner without any AI involvement at all.

      • zbyte64@awful.systems · 4 days ago

        The point is to get open-source maintainers to further train their model, since they already scraped all our code. I wonder if this will become a larger trend among corporate-owned open source projects.