An excerpt has surfaced from the AI2027 podcast with siskind and the ex-AI researcher, where the dear doctor makes the case for how an AGI could build an army of terminators in a year if it wanted to.
It goes something like: OpenAI is worth as much as all US car companies (except Tesla) combined, so it could buy up every car factory and convert it to a murderbot factory, because that’s kind of like what the US gov did in WW2 to build bombers, reaching peak capacity in three years, and an AGI would obviously be more efficient than a US wartime gov, so let’s say one year. Generally a completely unassailable syllogism from very serious people.
Even /r/ssc commenters are calling him out about the whole AI doomer thing getting more noticeably culty than usual.

edit: The thread even features a rare heavily downvoted siskind post, at -10 at the time of this edit.
The latter part of the clip is the interviewer pointing out that there might be technological bottlenecks that could require upending our entire economic model before stuff like curing cancer could be achieved, positing that if we somehow had AGI-like tech in the 1960s, it would probably have had to use its limited means to invent the entire tech tree that leads to late-2020s GPUs out of thin air, international supply chains and all, before starting on the road to becoming really useful.
Siskind then goes “nuh-uh!” and ultimately proceeds to give Elon’s metaphorical asshole a tongue bath of unprecedented depth and rigor, all but claiming that what’s keeping modern technology down is the inability to extract more man hours from Grimes’ ex, and that’s how we should view the eventual AGI-LLMs, like wittle Elons that don’t need sleep. And didn’t you know, having non-experts micromanage everything in a project is cool and awesome actually.
Lmao, the AGI tries to reason out how to achieve world domination, but it’s just trained on the open internet, and accidentally starts taking a sex fantasy blog about world domination as a reference for reality.
The AGI decides it needs to buy all the car factories and make murder bots, it gets stuck in an error loop because it can’t interact correctly with the web portal that allows it to contact the first owner of the first factory. It runs in this loop forever, the CO2 emissions from its datacenter eventually choke out all large animal life on earth.
Then the jellyfish develop sentience next, and being responsible, they realize AGI and AI were just a marketing gimmick.
It is as if there were people fantasizing about automaton mouths and lips and tongues and vocal cords for some reason, coming up with all these fantasies of how it’ll be when automatons can talk.
And then Edison invents the phonograph.
And then they stick their you know what in the gearing between the cylinder and the screw.
Except somehow more stupid, because these guys are worried about AI apocalypse while boosting AI hype that pays for this supposed apocalypse.
edit: If someone said in the 1850s “automatons won’t be able to talk for another 150 years or longer because the vocal tract is too intricate”, and some automaton fetishist said they will be able to talk in 20 years, the phonograph shouldn’t lend any credence whatsoever to the latter. What is different this time is that the phonograph was genuinely extremely useful for what it is, while generative AI is not quite as useful, and they’re going for the automaton fetishist money.
“This thing we don’t understand yet is probably very simple and easy to replicate and I say this as someone who does not understand the thing yet because once again, nobody does!” - All “futurist” “genius” “thought leaders”
An AGI could microwave a burrito so hot that not even the AGI, in its omnipotence, could eat it
A thing that doesn’t exist and that we don’t even have a concept of a plan of how to make, could easily do something extremely unlikely
Oh man this is peak venture capitalism crossed with Factorio - valuations are actually cash, and a factory is a black box where you just upload new software and other stuff comes out.
Let’s take your average holder of car manufacturer stock. You’re holding the stock because you believe the car manufacturer will continue making competitive products, and you’ll get either dividends or higher valuations. Then OpenAI pitches up and offers you - what? They don’t even have stock! Even if they did, you’re exchanging a stake in something known for a stake in an enterprise that has never made any cars, and when asked what kind of business plan they have, they look shifty. No fucking way anyone will sell their stake for less than double what they have, especially if they find out the factory they’re selling is gonna produce machines that will kill us all.
Yeah, the financial illiteracy is quite high, on top of the rest. But don’t worry, AI Nobel prize winners say it is possible!
(Are there multiple AI Nobel prize winners who are AI doomers?)
Stephen Hawking was starting to promote AI doomerism in 2014, but he’s not a Nobel prize winner. Yoshua Bengio is a doomer, but no Nobel prize either, although he is pretty decorated in awards. So yeah, it looks like one winner and a few other notable doomers who aren’t actually Nobel Prize winners somehow became winners plural in Scott’s argument from authority. Also, considering the long list of examples of Nobel Disease, I really don’t think Nobel Prize winner endorsement is a good way to gauge experts’ attitudes or sentiment.
I was very tempted to go ‘don’t think it is more than one Nobel guy, which is not great because of Nobel disease anyway. I could link to RationalWiki here, but that has come under threat because Scott, the person whose content you enjoy, started a lawsuit against them’, but I think that might be a bit culture-warry, and I also try not to react at the places we point towards, as that just leads to harassment-like behaviour. Also, Penrose is a Nobel prize winner who is against AGI stuff.
Yeah it’s really not productive to engage directly.
I’d almost categorize Penrose as a borderline case of Nobel disease himself for the stuff he’s said about quantum consciousness and, relatedly, the halting problem and Gödel’s incompleteness theorem. But he actually has a proposed mechanism (involving microtubules) that is testable and falsifiable, and the physics half of what he is talking about is within his domain of expertise.
(Are there multiple AI Nobel prize winners who are AI doomers?)
There’s Geoffrey Hinton, I guess, even if his 2024 Nobel in (somehow) Physics seemed like a transparent attempt at trend chasing on the part of the Nobel committee.
That is the one I was thinking of, the way the comments are phrased makes it seem like there are a lot of winners who are doomers. Guess Hinton is a one man brigade.
I think Demis Hassabis (chemistry for alpha fold) has said the chance of AI killing all of humanity is somewhere between 0 and 100%.
is somewhere between 0 and 100%.
That really pins it down, doesn’t it?
and that’s how we should view the eventual AGI-LLMs, like wittle Elons that don’t need sleep.
Wonder how many people stopped being AI doomers after this. I use the same argument against AI doom.
He claims he was explaining what others believe, not what he believes.
Others as in specifically his co-writer for AI2027, Daniel Kokotajlo, the actual ex-OpenAI researcher.
I’m pretty annoyed at having this clip spammed to several different subreddits, with the most inflammatory possible title, out of context, where the context is me saying “I disagree that this is a likely timescale but I’m going to try to explain Daniel’s position” immediately before. The reason I feel able to explain Daniel’s position is that I argued with him about it for ~2 hours until I finally had to admit it wasn’t completely insane and I couldn’t find further holes in it.
Pay no attention to this thing we just spent two hours exhaustively discussing, it’s not really relevant context.
Also, the title is inflammatory only in the context of already knowing him for a ridiculous AI doomer; otherwise it’s fine. Inflammatory would be calling the video “economically illiterate bald person thinks valuations force-buy car factories, China having biomedicine research is like Elon running SpaceX”.
I couldn’t find further holes in it
Here’s a couple:
- iirc it claims we’ll have reliable “agents” in mid-2025. Fellas, it’s almost June in the year of the “agents” and frankly I don’t see shit. We are not starting strong here.
- they predict a 10k-person anti-AI protest in DC. For context, the recent “Hands Off” protest in DC saw a 100k-person turnout, and the Israel / Palestine protest saw 300k in DC in 2023. A ten-thousand-person protest isn’t really anything out of the ordinary? It’s almost like the authors have never been to a protest, or don’t understand collective action because they live in a bubble or something? But they assure us this document is thoroughly researched. Maybe their point was self-deprecating: “woe is us, only 10K people show up :(”
- When they get into their super AGI fanfic, they describe Agent-n as “never stops training”, continuously learning from the environment. The only way I can read this is that somehow we discover paradigm-shifting algorithmic breakthroughs by coincidence in the next couple of years that make DL obsolete, so we can abandon train-then-infer approaches and instead have this embodied entity that is constantly taking feedback from the environment to “train”, yet the system itself is still described under the massive, datacenter-heavy DL framework. It’s like they know that bio intelligence has this continuous feedback mechanism, so obviously AI researchers will just patch that in, how hard can it be?
- Ong, I swear they just put in there at some point “hallucinations are solved”, the thing they have been claiming will be solved in the next month since 2023.
- You get better at being smart by INT-grinding. A machine could be INT-grinding the whole time. It’s like in Oblivion if you wanted to grind Speed you could go into a city, stand in a doorway and place something heavy on the jump key on the keyboard. Then while you take care of the dishes or something, your character grinds. But for INT!
If it gets smart enough it will start finding hacks, like those INT-increasing potions in Morrowind that increased your Alchemy so you could make even better INT-potions.
It might even get smart enough to escape the Elder Scrolls and start playing another game!
Grinding in Oblivion you say?
Daniel Kokotajlo, the actual ex-OpenAI researcher
Unclear to me what Daniel actually did as a ‘researcher’ besides draw a curve going up on a chalkboard (true story: the one interaction I had with LeCun was showing him Daniel’s LW acct, which is just singularity posting, and Yann thought it was big funny). I admit I am guilty of engineer gatekeeping posting here, but I always read Danny boy as a guy they hired to give lip service to the whole “we are taking safety very seriously, so we hired LW philosophers” thing, and then after Sam did the uno reverse coup, he dropped all pretense of giving a shit / funding their fanfic circles.
Ex-OAI “governance” researcher just means they couldn’t forecast that they were the marks all along. This is my belief, unless he reveals that he superforecasted that Altman would coup and sideline him in 1998. Someone please correct me if I’m wrong and they have evidence that Daniel actually understands how computers work.
Didn’t mean to imply otherwise, just wanted to point out that the call is coming from inside the house.
np, I’m just screaming into the void on this beautiful Monday morning
He claims he was explaining what others believe, not what he believes, but if that is so, why are you so aggressively defending the stance?
Literally the only difference between Scott’s beliefs and AI 2027 as a whole is that his ~~prophecy~~ estimate is a year or two later. (I bet he’ll be playing up that difference as AI 2027 fails to happen in 2027, then also doesn’t happen in 2028.)

Elsewhere in the thread he whines to the mods that the original poster is spamming every subreddit vaguely LessWrong- or EA-related with engagement bait. That poster is katxwoods… as in Kat Woods… as in a member of Nonlinear, the EA “organization” whose idea of philanthropic research was nonstop exotic vacations around the world. And, iirc, they are most infamous among us sneerers for “hiring” an underpaid (really underpaid, like couldn’t-afford-basic-necessities underpaid) intern they also used as a 24/7 live-in errand girl, drug runner, and sexual servant.
Deleted my earlier message, sorry, I called Scott out for not doing things he had actually done. Even if the whole mods ‘restricting her messages only now, after she went after Scott’ thing is quite iffy. (LW people write normally challenge: failed. “One upfront caveat. I am speaking about “Kat Woods” the public figure, not the person. If you read something here and think, “That’s not a true/nice statement about Kat Woods”, you should know that I would instead like you to think “That’s not a true/nice statement about the public persona Kat Woods, the real human with complex goals who I’m sure is actually really cool if I ever met her, appears to be cultivating.””) (The idea is good, this just reads like a bit of a sovcit-style text and could have been replaced with ‘I mean this not as an attack on her personally, I’m just doubting the effectiveness of her spammy posting style’.)
Also: ‘Mods mods mods, kat spill my jice help hel help help’
Wow he looks even dorkier in video than in photos.
Also, add “obvious and overdetermined” to the pile of siskindisms, next to “very non-provably not-correct”.