So what do we train GPT on when Stack Overflow degrades?
Will library docs be enough? Maybe.
This has been a concern of mine for a long time. People act like docs and codebases are enough, but it’s obvious when you look up something niche that they aren’t. These models need a lot of input data, and we’re effectively killing the source(s) of new data.
It feels like having less Stack Overflow is a narrowing, and that’s kind of where my question comes from. The remaining training content is the actual authoritative library documentation source material. I’m not sure that’s necessarily bad; it’s certainly less volume, but it’s probably also higher quality.
I don’t know the answer here, but I think the situation is a lot more nuanced than all of the black and white hot takes.
Probably public GitHub projects, which may or may not be written using GPT.
Absolutely terrifies me.
I asked AI to create an encryption method and it pulled code from 2015.
Something smelled funny, so I asked some experts. They told me that the AI’s solution had been vulnerable since 2020 and recommended another method.
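For anyone wondering what “another method” tends to look like in practice: the usual expert advice is to lean on a maintained, high-level library rather than anything hand-rolled or AI-assembled. Here’s a minimal sketch assuming Python’s cryptography package; the library choice is my illustration, not what the commenter’s experts actually recommended.

```python
# Minimal sketch, assuming Python's "cryptography" package is installed
# (pip install cryptography). Fernet is a vetted, high-level recipe for
# authenticated symmetric encryption, so you never hand-roll modes,
# padding, or MACs yourself.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # keep this secret, e.g. in a key manager
f = Fernet(key)

token = f.encrypt(b"my secret message")  # authenticated ciphertext
plaintext = f.decrypt(token)             # raises InvalidToken if tampered with

assert plaintext == b"my secret message"
```

The point isn’t this particular library; it’s that a maintained recipe gets patched when something becomes “vulnerable since 2020,” while a snippet an AI lifted from 2015 does not.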
I feel like the thing that terrifies you is really just idiots with powerful tools, which have always been around; this is just a new, albeit scarier than normal, tool. The idiot implementing an encryption method wholesale, directly from an AI, was always going to break shit. They can just do it faster, more easily, and with more devastation. But the idiots were always going to idiot regardless. So it’s up to the non-idiots to figure out how to use the same powerful tools to protect everyone (including the idiots themselves) from breaking absolutely everything.
In the weeds here, but I’m just trying to say AI doesn’t kill people, people kill people. But AI is gonna make it a fuck load easier, so we should absolutely put regulation and safeguards in place.
What happened in 2020 that suddenly made that solution vulnerable?
Yeah that makes sense. I know people are concerned about recycling AI output into training inputs, but I don’t know that I’m entirely convinced that’s damning.
No matter how good your photocopier is, a copy of a copy is worse, and it gets worse every time you do it.
I think the biggest issue is that most new creations and new ideas come from a place of necessity. Maybe someone doesn’t quite know how to do something, so they develop a new take on it. AI removes such instances from the equation and gives you a cookie-cutter solution based on code it’s seen before, stifling creativity.
The other issue is garbage in, garbage out. If people just assume that AI code works flawlessly and don’t review it, AI will be reinforced on bad habits.
If AI could actually produce significantly novel code and actually “know” what its code is doing, it would be a different story, but it mostly just rehashes things with maybe some small variations, not all of which work out of the box.
GIGO.
Yeah, I agree garbage in, garbage out, but I don’t know that that’s what will happen. If I create a library and then use GPT to generate documentation for it, I’m going to review, edit, and enrich that as the owner of that library. I think a great many people are painting this cycle in black and white, implying that any involvement from AI is automatically garbage, and that’s fallacious and inaccurate.
Yes, but for every one like you, there’s at least one who doesn’t and just trusts it to be accurate, or doesn’t proofread it well enough and misses errors. It may not be immediate, but that will drag quality down over time, which then likely becomes a feedback loop.
The theory behind this is that no ML model is perfect; they will always make some errors. So if the errors they make are included in the training data, then future models will learn to repeat the old models’ errors plus add new ones of their own.
Over time, ML models will get worse and worse because the quality of the training data will get worse. It’s like a game of Chinese whispers.
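To make that compounding concrete, here’s a toy simulation, a minimal sketch in plain Python. The Gaussian setup and sample counts are invented for illustration, nothing from a real training pipeline: each “model” is just a fitted mean and standard deviation, and each generation trains only on the previous generation’s output.

```python
import random
import statistics

random.seed(0)

# Generation 0 "model": the original, human-sourced distribution.
mu, sigma = 0.0, 1.0

for gen in range(1, 11):
    # Each generation's "training data" is sampled from the previous model,
    # not from fresh ground truth.
    data = [random.gauss(mu, sigma) for _ in range(200)]
    # Refit on the model's own output: finite-sample estimation error creeps
    # in every round and is never corrected by new human data.
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    print(f"generation {gen:2d}: mu = {mu:+.3f}, sigma = {sigma:.3f}")

# In this toy setup sigma drifts and tends to shrink across generations,
# so later models underrepresent the tails of the original distribution:
# the photocopy-of-a-photocopy losing detail.
```

No single generation looks catastrophic, which is the photocopier point: each copy is only slightly worse, but the drift is one-directional once the loop closes.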
SO is already degraded because they didn’t allow new answers, even though the old answers are based on old, deprecated versions and are no longer relevant.
There’s a serious argument that StackOverflow was, itself, a patch job in a technical environment that lacked good documentation and debug support.
I’d argue the mistake was training on StackExchange to begin with and not using an actual stack of manuals on proper coding written by professionals.
The problem was never having the correct answer somewhere; it was sifting it out of the overall pool of information. When ChatGPT isn’t hallucinating, it does that much better than Stack Exchange.