OC below by @[email protected]
What caught my attention is that assessments of AI are becoming polarized and somewhat a matter of belief.
Some people firmly believe LLMs are helpful. But programming is a logical task, and LLMs can't think; they only generate statistically plausible patterns.
The author of the article explains that this creates the same psychological hazards as astrology or tarot cards: traps that psychics have exploited for centuries, and that even very intelligent people can fall prey to.
Finally, what should cause alarm is that, on top of LLMs not being able to think while people behave as if they do, there is no objective, scientifically sound examination of whether AI models let anyone create working software faster. Given the multi-billion dollar investments, and that there has been more than enough time to carry out controlled experiments, this should raise loud alarm bells.
What’s the difference between copying a function from Stack Overflow and copying a function from an LLM that has copied it from SO?
LLMs are sort of a search engine with advanced language substitution features, nothing more, nothing less.
But people just love their drama, and others feed on prophecies of doom.
As for the lack of “scientific proof of faster software using LLMs”… What a statement! Give me the scientific proof that using Neovim is faster, or that using an LSP is faster, or that anything a developer uses while building software is “scientifically faster”.
Because it’s not a plain copy but an interpretation of SO.
With an LLM you just have one more layer between you and the information, a layer that can distort it.
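To make that distortion point concrete, here is a hypothetical sketch (the function names and the bug are invented for illustration): one helper mirrors a well-known Stack Overflow chunking recipe, the other is the kind of statistically plausible near-copy a model could emit, differing by a single character.

```python
# Hypothetical illustration of the "one more layer can distort" point.
# chunks_from_so mirrors a common Stack Overflow recipe;
# chunks_plausible is an invented, plausible-looking variant whose
# step of n - 1 silently produces overlapping chunks.

def chunks_from_so(lst, n):
    """Split lst into consecutive chunks of size n (last may be shorter)."""
    return [lst[i:i + n] for i in range(0, len(lst), n)]

def chunks_plausible(lst, n):
    """Looks nearly identical, but the step is off by one."""
    return [lst[i:i + n] for i in range(0, len(lst), n - 1)]

print(chunks_from_so([1, 2, 3, 4, 5], 2))    # [[1, 2], [3, 4], [5]]
print(chunks_plausible([1, 2, 3, 4, 5], 2))  # [[1, 2], [2, 3], [3, 4], [4, 5], [5]]
```

Both versions pass a quick skim; only running them (or a test) exposes the difference.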
And?
The issue is that you should not blindly trust code. Being originally written by a human being is not, by any means, a quality certification.
You asked what’s the difference and I just told you.
Are you stupid or something?
Blocked and reported.
You should not insult people.