Matt Garman sees a shift in software development as AI automates coding, telling staff to enhance product-management skills to stay competitive.

  • mashbooq@infosec.pub · 3 months ago

    The only person in my company using AI to code writes stuff with tons of memory leaks that require two experienced programmers to fix. (To be fair, I don’t think he included “don’t have memory leaks” in the prompt.)
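    For illustration, the classic failure mode here is a raw `new[]` with no matching `delete[]` on every exit path; an RAII container sidesteps the whole class of bug. A minimal sketch (the `simulate` function is invented for the example):

    ```cpp
    #include <cstddef>
    #include <vector>

    // Leak-prone pattern that generated code often produces:
    //   double* buf = new double[n];   // never freed on the early-return path
    //
    // RAII version: ownership is explicit, and the buffer is released
    // automatically when it goes out of scope or is moved out.
    std::vector<double> simulate(std::size_t n) {
        std::vector<double> buf(n, 0.0);  // freed/moved automatically
        for (std::size_t i = 0; i < n; ++i) {
            buf[i] = static_cast<double>(i);  // stand-in for real work
        }
        return buf;
    }
    ```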

    • okamiueru@lemmy.world · 3 months ago

      The quality of code available for LLMs to learn from is roughly normally distributed, with the peak around “shouldn’t pass code review”.

      The code experienced developers write sits in the top 5th percentile, and they’re used to their colleagues producing the same. The effort put into reviewing code assumes that baseline too.

      If a team member starts using LLMs to write chunks of code, the quality will at best peak where the training data does. That’s an incredible waste of resources: you now have to spend 10x more time reviewing the code, regardless of how often it turns out to be OK.

    • Mirror Giraffe@piefed.social · 3 months ago

      I find that my programming speed is up 15-20 percent since I started using the Supermaven copilot. I’ve also become better at naming functions, since clear names increase the odds of the copilot understanding what I’m trying to do.

      Writing tests also goes way faster.

      • mashbooq@infosec.pub · 3 months ago

        Are you able to share what kinds of applications and what languages you write in? I’m still trying to grasp why LLM programming assistants are so popular despite the flaws I see in them, so I’d like to understand the cases where they do work.

        For example, my colleague was writing CUDA code to simulate optical physics, so it’s possible that the LLM’s failure was due in part to the niche application and a language that is unforgiving of deviations from the one correct way of writing things.

    • Repple (she/her)@lemmy.world · edited · 3 months ago

      I’m amazed at how overstated LLMs’ ability to program is. I keep trying, and I’ve yet to have any model output so much as a single function that ran correctly without modification. Beyond that, models have made up APIs when I’ve asked about approaches to problems, and when I’ve given them code to find bugs and memory issues I think are fairly obvious, they fail every time.

      • FizzyOrange@programming.dev · 3 months ago

        It really depends on the domain. E.g. I wrote a parser and copilot was tremendously useful, presumably because there are a gazillion examples on the internet.

        Another case where it saved me literally hours was spawning a subprocess in C++ and capturing stdin/out. It didn’t get it 100% right but it saved me so much time looking up how to do it and the names of functions etc.
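        The kind of answer being looked up there can be sketched in a few lines. A minimal POSIX version using `popen`, which only captures one direction — feeding stdin *and* capturing stdout needs the full `pipe()`/`fork()`/`exec()` dance (the command and function name here are just for the example):

        ```cpp
        #include <cstdio>
        #include <stdexcept>
        #include <string>

        // Run a shell command and capture its stdout as a string.
        // popen gives a one-way pipe; for bidirectional stdin/stdout
        // capture you need pipe()/fork()/exec() instead.
        std::string capture_stdout(const std::string& cmd) {
            FILE* pipe = popen(cmd.c_str(), "r");
            if (!pipe) throw std::runtime_error("popen failed");
            std::string out;
            char buf[256];
            while (fgets(buf, sizeof buf, pipe)) out += buf;
            pclose(pipe);
            return out;
        }
        ```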

        Today I’m trying to write a custom image format, and it is pretty useless for that task, presumably because nobody else has done it before.

        • Repple (she/her)@lemmy.world · 3 months ago

          This makes sense. I’ve largely been trying to use it for things I do regularly, and I’m pretty senior, having been in the industry for some time, so I tend not to be asking the questions that will have a million examples out there. But then again, these are the sorts of things it will need to be able to do to replace people in industry.

          • FizzyOrange@programming.dev · 3 months ago

            I’m pretty senior, having been in the industry for some time, so I tend not to be asking the questions that will have a million examples out there

            Me too, but this was C++, where there isn’t a strong culture of making high-quality libraries available for everything (because it lacked a proper package manager until very recently), so you end up reinventing the wheel a fair bit.

            And sometimes you just need things a bit different from what other people have done. So even though there are a gazillion expression parsers out there (meaning the LLM understood the problem pretty well), hardly any of them support 64-bit integers. But that’s a small enough difference that it can deal with it.
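            The “small difference” really is small: in a hand-rolled recursive-descent parser, 64-bit support mostly means using `int64_t` and `strtoll` instead of `int` and `strtol`. A toy sketch (this parser handles only `+` and `*` with no whitespace, and is invented for illustration):

            ```cpp
            #include <cstdint>
            #include <cstdlib>
            #include <string>

            // Minimal recursive-descent expression evaluator over int64_t.
            // Grammar: expr := term ('+' term)* ; term := number ('*' number)*
            struct Parser {
                const char* p;
                int64_t expr() {
                    int64_t v = term();
                    while (*p == '+') { ++p; v += term(); }
                    return v;
                }
                int64_t term() {
                    int64_t v = number();
                    while (*p == '*') { ++p; v *= number(); }
                    return v;
                }
                int64_t number() {
                    char* end;
                    int64_t v = std::strtoll(p, &end, 10);  // 64-bit literal parse
                    p = end;
                    return v;
                }
            };

            int64_t eval(const std::string& s) {
                Parser ps{s.c_str()};
                return ps.expr();
            }
            ```

            With 32-bit ints, `3*4000000000` would overflow; here it evaluates fine.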

      • kippinitreal@lemmy.world · edited · 3 months ago

        Could it be that devs’ prompts are different from lay folks’? For example, “write a website for selling shoes” would give a more complete result than “write a single-page app with a Postgres back end and TLS encryption” (or whatever), which adds more constraints and shrinks the pool of code the AI steals from.

      • Angry_Autist (he/him)@lemmy.world · 3 months ago

        Frankly, AI is FAR more suited to replace the C-Suites than it is competent devs at this point. Can’t wait till the first company nominates an expert system as the CEO.

    • n3m37h@sh.itjust.works · 3 months ago

      Oh most definitely GROK™ as our Lord and saviour the Humble master of the greatest vehicle on the planet and in space Eron Musky will have it done by the end of the year!!!

    • Ephera@lemmy.ml · 3 months ago

      Why maintain code? Just describe the program you want in a query and then every time it might need maintenance, you just tell it to generate anew.

      • floofloof@lemmy.ca · 3 months ago

        You can add “Don’t have bugs” to the prompt to ensure the application continues to run smoothly.

  • Ephera@lemmy.ml · 3 months ago

    Fucking hell, 80% of my job is finding out what the requirements are. 20% is then using a specialized language to write down all these requirements and the thousands of decisions I make on how to meet them.

    If AI somehow replaces that 20% of coding,

    • I still have to do 80% of my job, and
    • I still have to tell the AI all this shit, just this time in natural language, which is awful for codifying requirements.

    How is this guy a manager, but has no idea what a software engineer does all day?

    • astrsk@fedia.io · 3 months ago

      This is why engineering managers need to come from engineering. If they couldn’t help out in the codebase in an emergency, they shouldn’t be making decisions like this. It’s not unreasonable for the ELT to ask questions about this, but if their reports aren’t telling them the truth, the whole structure is broken.

      • Angry_Autist (he/him)@lemmy.world · 3 months ago

        But without ignorant, self-congratulating MBAs to artificially reduce productivity to maintain the illusion of constant growth, companies would make safer products with greater care and understanding of the entire process holistically, and you can’t pay your shareholders in understanding.

  • captainastronaut@seattlelunarsociety.org · 3 months ago

    In a leaked recording, Amazon cloud chief admits he doesn’t understand software development, also doesn’t understand the current AI offerings’ capabilities, and just wants the Amazon stock price to rise so he can buy a bigger boat.