Between this high-profile disaster and character.ai’s suicide lawsuit (which I’ve talked about here), it feels more and more like the current system’s gonna end up getting torn to shreds once this bubble bursts.
I wish I could agree, but we’re all AI fodder. AI companies will spend us and anyone who disagrees can get fucked because money. The ownership class is going to milk this for every damn cent until they get their returns, and if that means more murder-suicides in that pursuit, well then buckle up.
The “money” the AI companies have is basically just promises from backers. If they can’t deliver on those promises (which boil down to knowledge industries replacing around 20% of their workforce with LLMs), then that imaginary money dries up. Remember, there are real bills in the form of power, cooling, and hardware that have to be paid all the time just to keep running in place.
A lawsuit that convinces the public and investors that LLMs are a dead end will kill most LLM companies.
To engage in some shameless self-promo, it’ll probably destroy the concept of AI - the bubble’s made “AI” synonymous with “LLMs and slop generators” in the public eye, so if LLMs get declared a dead end, AI as a whole will probably be written off alongside it.
Yeah, this is the “emperor has no clothes” reality that I keep bringing up with my friends who are still invested in the bubble (emotionally if not financially). The genAI/LLM tech stack defies the entire decades-long cost curve and investment thesis for computer technology. Up through the smartphone era, you bought in because you could get more utility for lower cost. What’s being pushed now is higher-cost for dubious utility gains; it’s just that some vendors are eating losses to hide the costs. (And of course the externalities get swept under the rug.)
I’m relatively confident that AI represents the formalization of the perspective held by those who run the economy. So, yes, once AI finally fails spectacularly, that will serve as the death knell for their entire system. Many probably already know it, which is why things are falling apart left and right, but that bubble bursting will be the end of their last-ditch effort.
It does provide context for why so many are throwing so much money at it, when experts know they’re not going to see a monetary return.
It could be that they’re just genuinely huge suckers. But I’m inclined to wonder if there’s more sinister motives in play.
Honestly, I think it’s less sinister, and more that they legitimately believed that was how the human mind actually worked, so “copying” that blueprint with machines was supposed to result in a reasonable facsimile of a human. Because of the failure, they have to rethink their entire strategy right from the start, which means our entire economic and political system needs to be reimagined.
I thought “character.ai’s suicide lawsuit” was your way of describing a stupid lawsuit that is suicidal to the company, but this is so much fucking darker, god.
Yeah.
Looking back at my quick-and-dirty thoughts about the suit, I feel like I handled it in a pretty detached way, focusing very little on the severe human cost that kicked off the suit and more on what it could entail for AI at large.