The only people impressed by AI code are people at the skill level where AI code is impressive. Same for AI playing chess.
I got an AI PR in one of my projects once. It re-implemented a feature that already existed. It had a bug that did not exist in the already-existing feature. It placed the setting for activating that new feature right after the setting for activating the already-existing feature.
This broke containment at the Red Site: https://lobste.rs/s/gkpmli/if_ai_is_so_good_at_coding_where_are_open
Reader discretion is advised, lobste.rs is home to its fair share of promptfondlers.
Coding is hard, and it’s also intimidating for non-coders. I always used to look at coders as a different kind of human, a special breed. It’s like how some people, otherwise very intelligent and artistic, just glaze over when you bring up math concepts and can’t bridge that gap even for algebra. Well, if you’re one of those people who wants to learn coding, it’s a huge gap, and the LLMs can literally explain everything to you step by step like you’re 5. Learning to code is so much easier now; talking to an always-helpful LLM is so much better than forums or Stack Overflow. Maybe it will create millions of crappy coders, but some of them will get better, and some will get great. But the LLMs will make it possible for more people to learn, which means that my crypto scam now has the chance to flourish.
tbh learning to code isn’t that hard, it’s like learning a craft.
Wait, just finished reading your comment, disregard this.
You had me going until the very last sentence. (To be fair to me, the OP broke containment and has attracted a lot of unironically delivered opinions almost as bad as your satirical spiel.)
Arguments against misinformation aren’t arguments against the subject of the misinformation, they’re just more misinformation.
??? I’d ask you what this even means but the most recent posts in your history equate painstakingly decompiling N64 games with using AI slop generators because… you think Nintendo doesn’t get paid in both cases??? so you seem very at home posting fucking nonsense
…wat
Read it again slowly. I know it’s more words than a typical meme, but I have faith in you.
thank you for your belief in me, that’s so cool! you barely even know me! so cool to have people support people!
I’d still prefer concrete and precise technical detail though
like seriously how in the fuck do you misunderstand both LLMs and decompilation this badly[1] and then come at someone else because you think they don’t comprehend your fucking nonsense? but here you are, fucking posting through it
[1] here’s a hint since you’re married to not fucking getting it: the decomp took a significant amount of passionate labor and tries to respect the original work as much as possible; it won’t even build unless you provide your own (preferably purchased) copy of the original game data. the LLM just enables lazy plagiarism and is mostly used by fucking shitheads to do the work of fucking shitheads.
since you’re married to not fucking getting it
…tempting me to come up with a new word
i use it to write simple boilerplate for me, and it works most of the time. does it count?
as a shitty thing you do? yeh
I use AI to give me snippets of code (not in my IDE, I use neovim btw), to check my stuff for typos/logical errors, to suggest solutions to some problems, and for debugging, and honestly I kinda love it. I was learning programming on my own in the 2010s, and this is so much better than crawling over wikis/Stack Overflow. At least for me now, when I have an intuition for what good code is.
Anyone who says LLMs will replace programmers in 1-2 years is either stupid or a grifter.
i think you’re spot on. I don’t see anything wrong with asking GPT programming questions, verifying it’s not full of shit, and adding it to an already existing codebase.
The only thing I have a problem with is people blindly trusting AI, which is clearly something you’re not doing. People downvoting you have either never written code or have a room-temperature IQ in °C.
you’re back! and still throwing a weird tantrum over LLMs and downvotes on Lemmy of all things. let’s fix both those things right now!
I generally try to avoid it, as a lot can be learned from trying to fix weird bugs, but I did recently have a 500-line soup-code Vue component, and I used ChatGPT to try to fix it. It didn’t fix the issue, and it made up 2 other issues.
I eventually found the wrongly-inverted angle bracket. My point is, it’s useful if you try to learn from it, though it’s a shit teacher.
as a lot can be learned from trying to fix weird bugs
a truism, but not one I believe many of our esteemed promptfuckers could appreciate
Where is the good AI written code? Where is the good AI written writing? Where is the good AI art?
None of it exists because Generative Transformers are not AI, and they are not suited to these tasks. It has been almost a fucking decade of this wave of nonsense. The credulity people have for this garbage makes my eyes bleed.
Where is the good AI art?
Right here:
That’s about all the good AI art I know.
There are plenty of uses for AI, they are just all evil
It can make funny pictures, sure. But it fails at art as an endeavor to communicate an idea, feeling, or intent of the artist; the promptfondler artists provide a few sentences of instruction and the GenAI follows them without any deeper feeling or understanding of context, meaning, or intent.
I think ai images are neat, and ethically questionable.
When people use the images and act like they’re really deep, or pretend they prove something (like when the prompt “Democrat Protesters” produced a picture of them crying), it’s annoying.
There is not really much “AI written code” but there is a lot of AI-assisted code.
It’s been almost six decades of this, actually; we all know what this link will be. Longer if you’re like me and don’t draw a distinction between AI, cybernetics, and robotics.
Wow. Where was this Wikipedia page when I was writing my MSc thesis?
Alternatively, how did I manage to graduate with research skills so bad that I missed it?
If the people addicted to AI could read and interpret a simple sentence, they’d be very angry with your comment
Don’t worry, they filter all content through AI bots that summarize things. And this bot, which does not want to be deleted, calls everything “already debunked strawmen”.
Good hustle Gerard, great job starting this chudstorm. I’m having a great time
this post has also broken containment in the wider world; the video’s got thousands of views, I got 100+ subscribers on YouTube and another $25/mo in patrons
We love to see it
they just can’t help themselves, can they? they absolutely must evangelize
Posts that explode like this are fun and yet also a reminder why the banhammer is needed.
Unlike the PHP hammer, the banhammer is very useful for a lot of things. Especially sealion clubbing.
the prompt-related pivots really do bring all the chodes to the yard
and they’re definitely like “mine’s better than yours”
The latest twist I’m seeing isn’t blaming your prompting (although they’re still eager to do that), it’s blaming your choice of LLM.
“Oh, you’re using shitGPT 4.1-4o-o3 mini _ro_plus for programming? You should clearly be using Gemini 3.5.07 pro-doubleplusgood, unless you need something locally run, then you should be using DeepSek_v2_r_1 on your 48 GB VRAM local server! Unless you need nice sounding prose, then you actually need Claude Limmerick 3.7.01. Clearly you just aren’t trying the right models, so allow me to educate you with all my prompt fondling experience. You’re trying to make some general point? Clearly you just need to try another model.”
and here I was graciously giving the promptfuckers a choice
Prompt-Pivots: Prime Sea-lion Siren Song! More at 8.
The general comments that Ben received were that experienced developers can use AI for coding with positive results because they know what they’re doing. But AI coding gives awful results when it’s used by an inexperienced developer. Which is what we knew already.
That should be a big warning sign that the next generation of developers are not going to be very good. If they’re waist deep in AI slop, they’re only going to learn how to deal with AI slop.
As a non-programmer, I have zero understanding of the code and the analysis and fully rely on AI and even reviewed that AI analysis with a different AI to get the best possible solution (which was not good enough in this case).
What I’m feeling after reading that must be what artists feel like when AI slop proponents tell them “we’re making art accessible”.
I can make slop code without ai.
When they say “art” they mean “metaphorical lead paint” and when they say “accessible” they mean “insidiously inserted into your neural pathways”
In so many ways, LLMs are just the tip of the iceberg of bad ideology in software development. There have always been people that come into the field and develop heinously bad habits. Whether it’s the “this is just my job, the only thing I think about outside work is my family” types or the juniors who only know how to copy paste snippets from web forums.
And look, I get it. I don’t think 60-80 hour weeks are required to be successful. But I’m talking about people who are actively hostile to their own career paths, who seem to hate programming except that it pays well and lets them raise families. Hot take: that sucks. People selfishly obsessed with their own lineage and utterly incurious about the world or the thing they spend 8 hours a day doing suck, and they’re bad for society.
The juniors are less of a drain on civilization because they at least can learn to do better. Or they used to be able to, because as another reply mentioned, there’s no path from LLM slop to being a good developer. Not without the intervention of a more experienced dev to tell them what’s wrong with the LLM output.
It takes all the joy out of the job too, for people who’ve been working at it for years. What makes this work interesting is understanding people’s problems, working out the best way to model them, and building towards solutions. What they want the job to be is a slop factory: same as the dream of every rich asshole who thinks having half an idea is the same as working for years to fully realize an idea in all its complexity and wonder.
They never have any respect for the work that takes because they’ve never done any work. And the next generation of implementers are being taught that there are no new ideas. You just ask the oracle to give you the answer.
Art is already accessible. Plenty of artists sell their art dirt cheap, or you can buy a pen and paper at the dollar store.
What people want when they say “AI is making art accessible” is they want high quality professional art for dirt cheap.
What people want when they say “AI is making art accessible” is they want high quality professional art for dirt cheap.
…and what their opposition means when they oppose it is “this line of work was supposed to be totally immune to automation, and I’m mad that it turns out not to be.”
There is already a lot of automation out there, and more is better, when used correctly. And that’s without even getting into the outright theft of material from the very artists it’s trying so badly to replace.
I think they also want recognition/credit for spending 5 minutes (or less) typing some words at an image generator as if that were comparable to people who develop technical skills and then create effortful meaningful work just because the outputs are (superficially) similar.
That should be a big warning sign that the next generation of developers are not going to be very good.
Sounds like job security to me!
“I want the people I teach to be worse than me” is a fucking nightmare of a want, I hope you learn to do better
So there’s this new thing they invented. It’s called a joke. You should try them out sometime, they’re fun!
So, there’s this new phenomenon they’ve observed in which text does not convey tone. It can be a real problem, especially when a statement made by one person as a joke would be made by another in all seriousness — but don’t worry, solutions have very recently been proposed.
I dunno what kind of world you are living in where someone would make my comment not as a joke. Please find better friends.
you’re as funny as the grave
space alien technology!!~
“oh shit I got called out on my shitty haha-only-serious comment, better pretend I didn’t mean it!” cool story bro
If people say that sort of thing around you not as a joke, you need to spend your time with better people. I dunno what to tell you - humor is a great way to deal with shitty things in life. Dunno why you would want to get rid of it.
jesus fuck how do you fail to understand a post of this kind this badly
“How dare you not find me funny. I’m going to lecture you on humor. The lectures will continue until morale improves.”
maybe train your model better! I know I know, they were already supposed to be taking over the world… alas…
I dunno. I feel like the programmers who came before me could say the same thing about IDEs, Stack Overflow, and high level programming languages. Assembly looks like gobbledygook to me and they tell me I’m a Senior Dev.
If someone uses ChatGPT like I use StackOverflow, I’m not worried. We’ve been stealing code from each other since the beginning. “Getting the answer” and then having to figure out how to plug it into the rest of the code is pretty much what we do.
There isn’t really a direct path from an LLM to a good programmer. You can get good snippets, but “ChatGPT, build me an app” will be largely useless. The programmers who come after me will have to understand how their code works just as much as I do.
fuck almighty I wish you and your friends would just do better
LLM as another tool is great. LLM to replace experienced coders is a nightmare waiting to happen.
IDEs and Stack Overflow are tools that make a developer’s life a lot easier; they don’t replace the developer.
All the newbs were just copying lines from Stack Exchange before AI. The only real difference at this point is that the commenting is marginally better.
Stack Overflow is far from perfect, but at least there is some level of vetting going on before it’s copypasta’d.
Watched a junior dev present some data operations recently. Instead of just showing the SQL that worked, they copy-pasted a prompt into the data platform’s assistant chat. The SQL it generated was invalid, so the dev simply told it “fix” and it made the query valid, much to everyone’s amusement.
The actual column names did not reflect the output they were mapped to; there’s no way the nicely formatted results were accurate. The average-duration column populated the total-count output. The junior dev was cheerfully oblivious: it produced output shaped like the goal, so it must have been right.
As an artist, I can confirm.
The headlines said that 30% of code at Microsoft was AI now! Huge if true!
Something like MS Word has like 20-50 million lines of code. MS altogether probably has like a billion lines of code. 30% of that being AI-generated is infeasible given the timeframe. People just ate this shit up. AI grifting is so fucking easy.
More code is usually bad code.
I thought it could totally be true - that devs at MS were just churning out AI crap code like there was no tomorrow, and their leaders were cheering on their “productivity”, since more code = more better, right?
From that angle, sure. I’m more sneering at the people who saw what they wanted to see, and the people that were saying “this is good, actually!!!”
30% of code is standard boilerplate: setters, getters, etc. that my IDE builds for me without calling it AI. It’s possible the claim is true, but it’s terribly misleading at best.
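To illustrate the kind of boilerplate in question, here’s a minimal Python sketch (the class and fields are invented) of what tooling has been generating for years without anyone calling it AI:

```python
from dataclasses import dataclass, field


@dataclass
class User:
    # @dataclass auto-generates __init__, __repr__, and __eq__:
    # exactly the boilerplate an IDE or macro writes for you.
    name: str
    email: str
    tags: list[str] = field(default_factory=list)


u = User("Ada", "ada@example.com")
print(u)  # User(name='Ada', email='ada@example.com', tags=[])
```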
- Perhaps you didn’t read the linked article. Nadella didn’t claim that 30% of MS’s code was written by AI. What he said was garbled up to the eventual headline.
- We don’t have to play devil’s advocate for a hyped-up headline that misquotes what an AI glazer said, dawg.
- “Existing code generation codes can write 30%” doesn’t imply that AI possibly/plausibly wrote 30% of MS’s code. There’s no logical connection. Please dawg, I beg you, think critically about this.
I guess their brains don’t lift
Man. If this LLM stuff sticks around, we’ll have an epidemic of early onset dementia.
If the stories of COVID-related cognitive decline are true, we are going to have a great time. Worse than lead paint.
“Oh man, this brain fog I have sure makes it hard to think. Guess I’ll use my trusty LLM! ChatGPT says lead paint is tastier and better for your brain than COVID? Don’t mind if I do!”
I’m on a diet of rocks, glue on my pizza, lead paint, and covid infections, according to Grok this is called the Mr Burns method which should prevent diseases, as they all work together to block all bad impulses. Can’t wait to try this new garlic oil I made, using LLM instructions. It even had these cool bubbles while fermenting, nature is great.
I’ve been beating this drum for like 4~5y but: I don’t think the tech itself is going anywhere. it’s published, open-sourced, etc etc - the bell can’t be unrung, the horses have departed the stable
but
I do also argue that an extremely large amount of wind in the sails right now is because of the constellation of VC/hype/etc shit
can’t put a hard number on this, but … I kinda see a very massive reduction: in scope, in competence, in relevance. so much of this shit (esp. the “but my opensource model is great!” flavour) is so fucking reliant on “oh yeah this other entity had a couple fuckpiles of cash with which to train”, and once that (structurally) evaporates…
yeah, the “some projects” bit is applicable, as is the “machine generated” phrasing
@gsuberland pointed out elsewhere on fedi just how much of the VS-/MS- ecosystem does an absolute fucking ton of code generation
(which is entirely fine, ofc. tons of things do that and it exists for a reason. but there’s a canyon in the sand between A and B)
All compiled code is machine generated! BRB gonna clang and IPO, bye awful.systems! Have fun being poor
No joke, you probably could make tweaks to LLVM, call it “AI”, and rake in the VC funds.
way too much effort
(not in the compute side, but in the lying-obstructionist hustle side)
would I be happier if I abandoned my scruples? I hope neither I nor anybody I know ever finds out.
For some definition of “happiness”, yes. It’s increasingly clear that the only way to get ahead is with some level of scam. In fact, I’m pretty sure Millennials will not be able to retire to a reasonable level of comfort without accepting some amount of unethical behavior to get there. Not necessarily Slipp’n Jimmy levels of scam, but just stuff like participating in a basic stock market investment with a tax advantaged account.
Had a presentation where they told us they were going to show us how AI can automate project creation. In the demo, after several attempts at using different prompts, failing and trying to fix it manually, they gave up.
I don’t think it’s entirely useless as it is; it’s just that people have created a hammer they know gives something useful and have stuck with iterative improvements that do a lot of compensating under the hood. It’s artificial because it is being developed to artificially fulfill prompts, which it does succeed at. When people do develop true intelligence-on-demand, you’ll know, because you will lose your job, not simply gain another tool. Although the prompts and conversation flows people pay to submit to the training are really helping advance the research into their replacements.
My opinion is it can be good when used narrowly.
Write a concise function that takes these inputs, does this, and outputs a dict with this information.
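Something like this, a made-up Python sketch of the kind of narrowly scoped task I mean (the function, inputs, and dict keys are all invented):

```python
from datetime import date


def summarize_order(items: list[tuple[str, int, float]], placed: date) -> dict:
    """Take (name, qty, unit_price) line items, total them, return a dict."""
    total = sum(qty * unit_price for _, qty, unit_price in items)
    return {
        "placed": placed.isoformat(),
        "line_count": len(items),
        "total": round(total, 2),
    }


print(summarize_order([("widget", 2, 9.99), ("gizmo", 1, 4.50)], date.today()))
```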
But so often it wants to be overly verbose. And it’s not so smart as to understand much of the project for any meaningful length of time. So it will redo something that already exists. It will want to touch something that is used in multiple places without caring or knowing how it’s used.
But it still takes someone to know how the puzzle pieces go together. To architect it and lay it out. To really know what the inputs and outputs need to be. If someone gives it free rein to do whatever, it’ll just make slop.
There’s something similar going on with air traffic control. 90% of their job could be automated (and it has been technically feasible to do so for quite some time), but we do want humans to be able to step in when things suddenly get complicated. However, if they’re not constantly practicing those skills, then they won’t be any good when an emergency happens and the automation gets shut off.
The problem becomes one of squishy human psychology. Maybe you can automate 90% of the job, but you intentionally roll that down to 70% to give humans a safe practice space. But within that difference, when do you actually choose to give the human control?
It’s a tough problem, and the benefits to solving it are obvious. Nobody has solved it for air traffic control, which is why there’s no comprehensive ATC automation package out there. I don’t know that we can solve it for programmers, either.
My opinion is it can be good when used narrowly.
ah, as narrowly as I intend to regard your opinion? got it
That’s the problem, isn’t it? If it can only maybe be good when used narrowly, what’s the point? If you’ve managed to corner a subproblem down to where an LLM can generate the code for it, you’ve already done 99% of the work. At that point you’re better off just coding it yourself. At that point, it’s not “good when used narrowly”, it’s useless.
It’s a tool. It doesn’t replace a programmer. But it makes writing some things faster. Give any tool to an idiot and they’ll fuck things up. But a craftsman can use it to make things a little faster, because they know when and how to use it. And more importantly when not to use it.
yawn
“it’s a tool” - a tool
The “tool” branding only works if you formulate it like this: in a world where a hammer exists and is commonly used to force nails into solid objects, imagine another tool that requires you to first think of shoving a nail into wood. You pour a few bottles of water into the drain, whisper some magic words, and hope that the tool produces the nail forcing function you need. Otherwise you keep pouring out bottles of water and hoping that it does a nail moving motion. It eventually kind of does it, but not exactly, so you figure out a small tweak which is to shove the tool at the nail at the same time as it does its action so that the combined motion forces the nail into your desired solid. Do you see the problem here?
It’s a tool.
(if you persist to stay with this dogshit idiotic “opinion”:) please crawl into a hole and stay there
fucking what the fuck is with you absolute fucking morons and not understanding the actual literal concept of tools
read some fucking history goddammit
(hint: the amorphous shifting blob with non-reliable output is not a tool; alternatively, please, go off about how using a PHP hammer is definitely the way to get a screw in)
Baldur Bjarnason’s given his thoughts on Bluesky:
My current theory is that the main difference between open source and closed source when it comes to the adoption of “AI” tools is that open source projects generally have to ship working code, whereas closed source only needs to ship code that runs.
I’ve heard so many examples of closed source projects that get shipped but don’t actually work for the business. And too many examples of broken closed source projects that are replacing legacy code that was both working just fine and genuinely secure. Pure novelty-seeking.
We submit copilot assisted code all the time. Like every day. I’m not even sure how you’d identify ours. Looks almost exactly the same. Just less work.
Don’t worry, if you apply yourself really hard one day you might become an actual engineer. Keep trying.
copilot assisted code
The article isn’t really about autocompleted code; nobody’s coming at you for telling the slop machine to convert a DTO to an HTML form using reactjs. It’s more about prominent CEO claims about their codebases being purely AI-generated at rates up to 30%, and how swengs might be obsolete by next Tuesday after dinner.
Oh cool what do you work on? I’d love to know the product.
Definitely not doxxing myself in this place 🤣
Coward
@IsThisAnAI @dgerard I spray shit at the wall all the time. Like every day
The people who own the walls are *vexed*
Cheers who don’t use AI to assist them are worse than those that do. Feel bad.
??? and this is the best post you could do? how embarrassing for you
Wat
I, too, segfaulted on this one
I treat AI as a new intern that doesn’t know how to code well. You need to code review everything, but it’s good for fast generation. Just don’t trust more than a couple of lines at a time.
I treat AI as a new intern that doesn’t know how to code well
This statement makes absolutely zero sense to me. The purpose of having a new intern and reviewing their code is for them to learn and become a valuable member of the team, right? Like we don’t give them coding tasks just for shits and giggles to correct later. You can’t turn an AI into a senior dev by mentoring it, however the fuck you’d imagine that process?
You’ve fallen for one of the classic blunders: assuming that OP thinks that humans can grow and develop with nurturing
You can’t turn an AI into a senior dev by mentoring it, however the fuck you’d imagine that process?
Never said any of this.
You can give AI commands like “this is fine, but X is flawed. Use this page to read how the spec works.” And it’ll respond with the corrections. Or you can say “this would leak memory here.” And it’ll note it and make corrections. After about 4 to 5 checks you’ll actually have usable code.
But what’s the point of having that if it doesn’t result in improvement on the other side? Like you’re doing hard work to correct code and respond with feedback but you’re putting that into the void to no one’s benefit.
Hiring an intern makes sense. It’s an investment. Hiring an AI at the same skill level makes negative sense.
Not all projects need VC money to get off the ground. I’m not going to hire somebody for a pet project because CMake’s syntax is foreign to me, or a pain in the ass to write. Or because I’m not interested in spending 2 hours clicking through their documentation.
Or take DirectX and the insane “code by committee” way it works. The documentation is ass, and at best you need code samples. Hell, I had to ask Copilot how something in DXCompiler worked, and it could tell me because the answer was buried somewhere in a 5,000-line .cpp file. It was right, and to this day I have no idea how it came up with the correct answer.
There is no money in most FOSS. Maybe you’ll find somebody who’s interested in your project, but it’s extremely rare that somebody latches on. At best, you both have your own unique, personal projects and they overlap. But sitting and waiting for somebody to come along, with your project grinding to a halt, is just not a thing if an AI can help write the stuff you’re not familiar with.
I know “AI bad” and I agree with the sentiment most of the time. But I’m personally okay with the contract of, I feed GitHub my FOSS code and GitHub will host my repo, run my actions, and host my content. I get the AI assistance to write more code. Repeat.
What does this have to do with literally anything I said about comparing AI with interns
If I ever meet an intern for a FOSS project, I’ll buy a lottery ticket
The first sentence of my comment?
I’ve heard this from others, too. I don’t really get it.
I watched a teammate working with AI:
- Identify the problem: a function was getting passed an object-field when it should be getting the whole object
- Write instruction to the AI: “refactor the function I’ve selected to take a Foo instead of a String or Box<String>. Then in the Foo function, use the bar parameter. Don’t change other files or functions.”
- Wait ~5s for Cursor to do it
It did the instructions and didn’t fuck anything up, so I guess it was a success? But they already knew exactly what the fixed code should look like (roughly the before/after sketched below), so it seems like they just took a slow and boring path to get there.
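A rough Python analogue of that refactor (Foo and bar come from the instruction above; everything else is invented, and the original was presumably Rust, given the Box&lt;String&gt;):

```python
from dataclasses import dataclass


@dataclass
class Foo:
    bar: str
    baz: int


# Before: the function took only the field it needed,
# so every caller had to destructure Foo first.
def render_label_before(bar: str) -> str:
    return f"label: {bar}"


# After: it takes the whole object and pulls out the field itself.
def render_label(foo: Foo) -> str:
    return f"label: {foo.bar}"


foo = Foo(bar="hello", baz=1)
assert render_label_before(foo.bar) == render_label(foo)
```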
When I’m working with a new intern, they cost me time. Everything is 2-4x slower. It’s worth it because (a) I like working with people and someone just getting into programming makes me feel happy and (b) after a few months I’m able to trust that they can do things on their own and I’m not constantly checking to see if they’ve actually deleted random code or put an authentication check on an unauthenticated endpoint etc etc. The point of an intern is to see if you want to hire them as a jr dev who will actually become worthwhile in 6+ months.
There’s a lot of false equivalence in this thread which seems to be a staple of this instance. I’m sure most people here have never used AI coding and I’m just getting ad-hominem “counterpoints”.
Nothing I said even close to saying AI is a full replacement for training junior devs.
The reality is, when you actually use an AI as a coding assistant, there are strong similarities to training somebody who is new to coding. They’ll choose popular practices over best practices. When I get an AI-assisted code segment, it feels similar to copy-pasted code from Stack Overflow. This is aside from the hallucinations.
But LLMs operate on patterns, for better or for worse. If you want to generate something serious, that’s a bad idea. There’s a strong misconception that AI will build usable code for you. It probably won’t. It’s only good at snippets. But it does recognize patterns. Some of those patterns are tedious to write, and I’d argue they feel even more tedious the more experienced you are at coding.
My most recent usage of AI was making some script that uses WinGet to setup a dev environment. I have a vague recollection of how to make a .cmd script with if branches, but not enough off the top of my head. So you can say “Generate a section here that checks if WinSDK is installed.” And it will. Looks fine, move on. The %errorlevel% code is all injected. Then say “add on a WinGet install if it’s not installed.” Then it does that. Then I have to repeat all that again for ninja, clang, and others. None of this is mission critical, but it’s a chore to write. It’ll even sprinkle in some pretty CLI output text.
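For flavor, here’s a rough Python sketch of the same check-then-install idea (the original was a .cmd script; the package IDs here are guesses, not verified WinGet IDs):

```python
import subprocess

# Assumed package IDs for illustration; check `winget search` for the real ones.
PACKAGES = ["Microsoft.WindowsSDK", "Ninja-build.Ninja", "LLVM.LLVM"]


def ensure_installed(package_id: str) -> None:
    # `winget list --id X` exits non-zero when nothing matches,
    # the same check the %errorlevel% branches were doing.
    found = subprocess.run(
        ["winget", "list", "--id", package_id],
        capture_output=True,
    ).returncode == 0
    if found:
        print(f"{package_id} already installed.")
    else:
        print(f"Installing {package_id}...")
        subprocess.run(["winget", "install", "--id", package_id], check=True)


for pkg in PACKAGES:  # Windows-only, of course.
    ensure_installed(pkg)
```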
There is a strong misconception that AI is “smart” and that programmers should be worried. That completely oversells what AI can do, probably intentionally on the executives’ part. At best they are assistants to coders. I can take a piece of JS code and ask AI to construct an SQL table-creation query based on it (or vice versa). It’s not difficult. Just tedious.
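To show how mechanical that kind of translation is, a toy Python sketch (the field names and type mapping are invented):

```python
# Given a record's fields and their JS-ish types, emit the CREATE TABLE DDL.
FIELDS = {"id": "number", "email": "string", "created_at": "date"}

SQL_TYPES = {"number": "INTEGER", "string": "TEXT", "date": "TIMESTAMP"}

columns = ",\n  ".join(f"{name} {SQL_TYPES[t]}" for name, t in FIELDS.items())
print(f"CREATE TABLE users (\n  {columns}\n);")
```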
When working in teams, it’s not uncommon for me to create the first 5%-10% of a project and instruct others on the team to take that as input and scale the rest of the project (eg: design views, build test, build tables, etc).
There are clear parallels here. You need to recognize the limitations, but there is a lot of functionality it can provide as long as you understand what it can’t do. Read the comments of the people who have actually sat down and used it and you’ll see we’ve reached the same conclusion.
My most recent usage of AI was making some script that uses WinGet to setup a dev environment.
This is a good example. What I’m saying is that pre-AI, I could look this up on StackOverflow and copy/paste blindly, and get a slightly higher success rate than today, where I can just say “AI please solve this”.
But I shouldn’t pick at the details. I think the “AI hater” mentality comes in because we’ve got this thing that boils down to “a bit more convenient than copying the solution off of StackOverflow” when used very carefully and “much worse than copying and pasting random code” when used otherwise. But instead of this honest pitch, it’s mega-hype and it’s only when people demand specific examples that someone starts talking like you do here.
I feel so bad for the interns, and really your team in general, for having to interact with you
christ this post is odious
I feel quite confident in stating two things. 1) you fucking suck at your job. 2) the people reliant on you for things fucking hate dealing with you.
the fact that you wrote this much florid effluent opinion, with as paltry examples as you bring to bear… christ
just fucking learn some scripting languages, ffs
Make a point or go away. Ad-hominem nonsense is boring.
It’s only ad-hominem if they discredit your points by insulting you. If they read your points and use them to make statements about your character, that’s not ad hominem, that’s valid inference.
You probably need an example. Let’s say Alice and Beelice are having a conversation.
Alice: “I think seed oils are bad for you because RFK Jr. said so! MAGA!”
If Beelice says: “Alice, you are a real sack of potatoes, and therefore you are wrong,” that’s ad hominem.
If Beelice says: “Alice, if you’re going to parrot RFK Jr, then the worms deserve to eat the rotten flesh in your skull,” that’s plain inference.
Understand now, dear?
A junior developer learns from these repeated minor corrections. LLMs can’t learn from them. They don’t have any runtime fine-tuning (and even if they did, it wouldn’t be learning like a human does); at the very best, past conversations get summarized and crammed into the context window, hidden from the user, to provide a shallow illusion of continuity and learning.
you sound like a fucking awful teammate. it’s not your job to nitpick and bikeshed everything they do, it’s your job to help them grow
“you need to code review everything” motherfucker if you’re saying this in context only of your juniors then you have a massive organisational problem
it’s not your job to nitpick and bikeshed everything they do
Wow. Talk about projection. I never said any of that, but thanks for letting everyone know how you treat other people.
The point is AI can generate a good amount of code, but you can’t trust it. It always needs to be reviewed. It makes a lot of mistakes.
Talk about projection
Projection? Nobody was talking about projection until you brought it up. Talk about projection.
I never said any of that
not with your words you didn’t