First, let me say that what broke me from the herd at LessWrong was specifically the calls for AI pauses. Somehow 'rationalists' are so certain advanced AI will kill everyone in the future (pDoom = 100%!) that they need to commit any violent act necessary to stop AI from being developed.
The flaw here is that there are 8 billion people alive right now, and we don't actually know what the future is. There are ways better AI could help the people living now, possibly saving their lives, and Eliezer Yudkowsky is essentially saying "fuck 'em". This could only be worth it if you somehow knew trillions of people were going to exist, had a low future discount rate, and so on. This seems deeply flawed, and seems to be one of the points here.
But I do think advanced AI is possible. And while it may not be a mainstream take yet, it seems like the problems current AI can't solve (robotics, continuous learning, module reuse: the things needed to reach a general level of capability and for AI to do many, but not all, human jobs) are near-term. I can link DeepMind papers on all of these, published in 2022 or 2023.
And if AI can be general and control robots, then, since making robots is a task human technicians and other workers can do, a form of Singularity is possible. Maybe not the breathless utopia Ray Kurzweil promised, but a fuckton of robots.
So I was wondering what the people here generally think. There are “boomer” forums I know of where they also generally deny AI is possible anytime soon, claim GPT-n is a stochastic parrot, and make fun of tech bros as being hypesters who collect 300k to edit javascript and drive Teslas*.
I also have noticed that the whole rationalist schtick of “what is your probability” seems like asking for “joint probabilities”, aka smoke a joint and give a probability.
Here are my questions:

- Before 2030, do you consider it more likely than not that current AI techniques will scale to average human level in at least 25% of the domains that humans can do?
- Do you consider it likely that, before 2040, those domains will include robotics?
- If AI systems can control robotics, do you believe a form of Singularity will happen? This means hard exponential growth in the number of robots, scaling past all industry on Earth today by at least 1 order of magnitude, with off-planet mining soon to follow. It does not necessarily mean anything else.
- Do you think a mass transition, where most human jobs we have now are replaced by AI systems, will happen before 2040?
- Is AI system design an issue? I hate to say "alignment", because I think that's hopeless wankery by non-software-engineers, but given these will be robot-controlling advanced decision-making systems, will it require lots of methodical engineering by skilled engineers, with serious negative consequences when the work is sloppy?
*“epistemic status”: I uh do work for a tech company, my job title is machine learning engineer, my girlfriend is much younger than me and sometimes fucks other dudes, and we have 2 Teslas…
it’s the S in TESCREAL, if that doesn’t answer your question you have some more deprogramming to do (and we are not your exit counselors)
Consider a flying saucer cult. Clearly a cult, great leader, mothership coming to pick everyone up, things will be great.
…What if telescopes show a large object decelerating into the solar system, the flare from its matter-annihilation engine clearly visible? You can go pay $20 a month, rent a telescope, and see the flare.
The cult uh points out their “sequences” of writings by the Great Leader and some stuff is lining up with the imminent arrival of this interstellar vehicle.
My point is that LessWrong knew about GPT-3 years before the mainstream found it; many OpenAI employees post there, etc. If the imminent arrival of AI were fake (like the hyped idea of bitcoin going to infinity or replacing real currency, or NFTs) that would be one thing. But pay $20 a month and man, this tool seems smart; what could it do if it could learn from its mistakes and had the vision module deployed…
Oh, and I guess the other plot twist in this analogy: the Great Leader is saying the incoming alien vehicle will kill everyone, tearing up his own Sequences of rants, and that's actually not a totally unreasonable outcome if you could see an alien spacecraft approaching Earth.
And he’s saying to do stupid stuff like nuke each other so the aliens will go away and other unhinged rants, and his followers are eating it up.
My point is that lesswrong knew about GPT-3 years before the mainstream found it
yud lost his shit when it turned out it wasn't his favourite flavour of AI that became widely known and successful
you've changed my mind, we should introduce Eliezer to the seminal work of J. Posadas
We just nuke the datacenters, then aliens will come down and hand us the aligned symbolic AGI, which in turn will teach us communism, water birth and communication with porpoises? WTF I love TREACLESP now!
Content Warning: Ratspeak
Let’s say that tomorrow, they build AGI on HP/Cray Frontier. It’s human equivalent. Mr Frontier is rampant or whatever and wants to improve himself. In order to improve himself he will need to create better chips. He will need approximately 73 thousand copies of himself just to match the staff of TSMC, but there’s only one Frontier. And that’s to say nothing of the specialized knowledge and equipment required to build a modern fab, or the difficulty of keeping 73 thousand copies of himself loyal to his cause. That’s just to make a marginal improvement on himself, and assuming everyone is totally ok with letting the rampant AI get whatever it wants. And that’s just the ‘make itself smarter’ part, which everything else is contingent on; it assumes that we’ve solved Moravec’s paradox and all of the attendant issues of building robots capable of operating at the extremes of human adaptability, which we have not. Oh and it’s only making itself smarter at the same pace TSMC already was.
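To put a rough number on the "one Frontier vs. 73 thousand copies" point, here's a back-of-envelope sketch. The exaFLOPS figure is Frontier's approximate peak throughput; everything else is the comment's own assumption that one human-equivalent copy saturates the whole machine:

```python
# Back-of-envelope: compute needed for 73,000 copies of a "human-equivalent"
# AGI, assuming (as above) that one copy needs all of Frontier to itself.
frontier_flops = 1.7e18      # Frontier's peak throughput, roughly, in FLOP/s
copies_needed = 73_000       # the comment's stand-in for TSMC's headcount

total_flops = frontier_flops * copies_needed
print(f"total: {total_flops:.2e} FLOP/s")  # ~1.2e23, i.e. 73,000 Frontiers' worth
```

That's tens of thousands of exascale supercomputers just to match one fab company's staff, before anyone solves the loyalty problem.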
What I'm trying to get at is that the practicalities of improving technology are generally skated over by singulatarians in favor of imagining technology as a magic number that you can just throw "intelligence" at to make it go up.
this is where the singularity always lost me. like, imagine, you build an AI and it maxes out the compute in its server farm (a known and extremely easy to calculate quantity) so it decides to spread onto the internet where it’ll have infinite compute! well congrats, now the AI is extremely slow cause the actual internet isn’t magic, it’s a network where latency and reliability are gigantic issues, and there isn’t really any way for an AI to work around that. so singulatarians just handwave it away
or like when they reach for nanomachines as a “scientific” reason why the AI would be able to exert godlike influence on the real world. but nanomachines don’t work like that at all, it’s just a lazy soft sci-fi idea that gets taken way too seriously by folks who are mediocre at best at understanding science
Indeed, if distributed computing worked as well as singulatarians fear everyone would be using Beowulf clusters for their workloads instead of AWS.
Can I live in this world? Please? Pretty please with a cherry on top?
It sounds so much less frustrating than this pile of mistakes, with Pike's shitty ideas at every fucking API and data model
Serious answer, not from Yudkowsky: the AI doesn't do any of that. It helps people cheat on their homework, write their code and form letters faster, and brings in revenue. The AI's owner uses the revenue to buy GPUs. With the GPUs they make the AI better. Now it can do a bit more than before, so they buy more GPUs, and theoretically this continues until the list of tasks the AI can do includes "most of the labor in a chip fab", GPUs become cheap, and then things start to get crazy.
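That loop is just compound growth with a reinvestment knob; a toy model (every number here is invented for illustration, not a forecast):

```python
# Toy 'revenue -> GPUs -> capability -> revenue' flywheel. The only point
# is that reinvestment compounds geometrically, until some real-world input
# (fab capacity, power, demand) stops scaling along with it.
capability = 1.0
yearly_gain = 0.30   # assume each year's GPU spend buys +30% more capability
for year in range(10):
    capability *= 1 + yearly_gain
print(f"after 10 years: {capability:.1f}x initial capability")
```

Compounding gives you a steep curve on paper; whether any real bottleneck lets the +30% knob stay fixed for a decade is exactly what's in dispute.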
Same elementary school logic but I mean this is how a nuke works.
wait, so the AI is just your fears about capitalism?
Same elementary school logic but I mean this is how a nuke works.
what. no it isn’t
(To be read in the voice of an elementary schooler who is a sore loser at make believe): Nuh-uh! My AGI has quantum computers, so it doesn’t get slow from the internet, and, and, and, it builds robots, with jetpacks, and those robots have tiny robots that can go in your brain and and and make your brain explode, and if you say anything mean about me or the AGI it’ll take your brain and clone it and put wires in it and make you think youre getting like, wedgied and stuff, but really youre not but you think you are because it’s really good at making you think it
oh god, rationalists really were those kids and they never grew out of it
I’m being explicitly NSFW in the hopes that your eyes will be opened.
The Singularity was spawned in the 1920s, with no clear initiating event. Its first two leaps forward are called “postmodernism” and “the Atomic age.” It became too much for any human to grok in the late 1940s, and by the 1960s it was in charge of terraforming and scientific progress.
I find all of your questions irrelevant, and I say this as a machine-learning practitioner. We already have exponential growth in robotics, leading to superhuman capabilities in manufacturing and logistics.
Currently, the global economy doubles every ~23 years. Robots building robots and robot-making equipment can probably double faster than that. It won't be a week or a month; energy requirements alone limit how fast it can happen.
Suppose the doubling time is 5 years, just to put a number on it. The economy would then be growing a bit over 4 times faster than it was previously (the ratio of growth rates is 23/5 ≈ 4.6). This continues until the solar system runs out of matter.
Is this a relevant event? Does it qualify as a singularity? Genuinely asking, how have you “priced in” this possibility in your world view?
You are an exponential economist, but I am a finite physicist. Do the math.
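Doing the finite-physicist math on the numbers above: the solar-system mass is the standard ~2e30 kg (dominated by the Sun), the 5-year doubling is the figure proposed above, and the starting throughput is a made-up but generous guess:

```python
import math

# With a 5-year doubling time, how long before the robot economy has
# chewed through the whole solar system? Assume it starts by processing
# 1e12 kg of material per year (an assumption, not a measurement).
solar_system_kg = 2e30
start_kg_per_year = 1e12
doubling_years = 5

doublings = math.log2(solar_system_kg / start_kg_per_year)
rate_ratio = 23 / doubling_years   # growth-rate ratio vs. a 23-year doubling
print(f"~{doublings:.0f} doublings, ~{doublings * doubling_years:.0f} years, "
      f"growing {rate_ratio:.1f}x faster than today")
```

Exponential growth exhausts the mass budget of the entire solar system within a few centuries; whether that counts as "soon" is the whole argument.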
I actually really liked this reply purely on the fact that it walked a different avenue of response
Because yeah indeed, under the lens of raw naïve implementation, the utter breadth of scope involved in basically anything is so significantly beyond useful (or even tenuous) human comprehension it’s staggering
We are, notably, remarkably competent at abstraction[0], and this goes a hell of a long way in affordance but it’s also not an answer
I’ll probably edit this later to flesh the post out a bit, because I’m feeling bad at words rn
[0] - this ties in with the “lossy at scale” post I need to get to writing (soon.gif)
Yeah, this post (edit: "comment"; the original post does not spark joy) sparked joy for me too (my personal cult lingo is from Marie Kondo books, whatcha gonna do).

One of my takes is that the "AI alignment" garbage is way less of a problem than "Human Alignment", i.e. how to get humans to work together and stop being jerks all the time. Absolutely wild that they can't see that, except perhaps when it comes to trying to get other humans to give them money for the AIpocalypse.
I will answer these sincerely in as much detail as necessary. I will only do this once, lest my status amongst the sneerclub fall.
- I don’t think this question is well-defined. It implies that we can qualify all the relevant domains and quantify average human performance in those domains.
- See above.
- I think "AI systems" already control "robotics". Technically, I would count kids writing code for a simple motorised robot as satisfying this. Everywhere up the ladder, this is already technically true. I imagine you're trying to ask about AI-controlled robotics research, development and manufacturing, something like what you'd see in the Terminator franchise: Skynet takes over, develops more advanced robotic weapons, etc. If we had Skynet? Sure, Skynet as formulated in the films would produce that future. But that would require us to be living in that movie universe.
- This is a much more well-defined question. I don’t have a belief that would point me towards a number or probability, so no answer as to “most.” There are a lot of factors at play here. Still, in general, as long as human labour can be replaced by robotics, someone will, at the very least, perform economic calculations to determine if that replacement should be done. The more significant concern here for me is that in the future, as it is today, people will still only be seen as assets at the societal level, and those without jobs will be left by the wayside and told it is their fault that they cannot fend for themselves.
- Yes, and we already see that as an issue today. Love it or hate it, the partisan news framework produces some consideration of the problems that pop up in AI development.
Time for some sincerity mixed with sneer:
I think the disconnect that I have with the AGI cult comes down to their certainty on whether or not we will get AGI and, more generally, the unearned confidence about arbitrary scientific/technological/societal progress being made in the future. Specifically with AI => AGI, there isn’t a roadmap to get there. We don’t even have a good idea of where “there” is. The only thing the AGI cult has to “convince” people that it is coming is a gish-gallop of specious arguments, or as they might put it, “Bayesian reasoning.” As we say, AGI is a boogeyman, and its primary use is bullying people into a cult for MIRI donations.
Pure sneer (to be read in a mean, high-school bully tone):
Look, buddy, just because Copilot can write spaghetti less tangled than you doesn’t mean you can extrapolate that to AGI exploring the stars. Oh, so you use ChatGPT to talk to your “boss,” who is probably also using ChatGPT to speak to you? And that convinces you that robots will replace a significant portion of jobs? Well, that at least convinces me that a robot will replace you.
1, 2: Since you claim you can't measure this even as a thought experiment, there's nothing to discuss.

3: I meant complex robotic systems able to mine minerals, truck the minerals to processing plants, maintain and operate the processing plants, and load the next set of trucks; those trucks go to part-assembly plants, where robots unload them, feed the materials into CNC machines, mill the parts, inspect the output, and pack it onto more trucks… culminating in robots assembling new robots.

It is totally fine if some human labor hours are still required; this still cheapens the cost of robots by a lot.

4: This is deeply coupled to (3). If you have cheap robots, and an AI system can control a robot well enough to do a task as well as a human, it's obviously cheaper to have robots do the task in most situations.
Regarding (3): the specific mechanism would be AI that works like this:

Millions of hours of video of human workers doing tasks in the above domains, plus all the video accessible to the AI company -> tokenized, compressed descriptions of the human actions -> an LLM-like model. The LLM-like model is thus predicting "what would a human do". You then need a model to translate that into actions for robotic hardware that is built differently than humans; this is the "foundation model". You then use reinforcement learning, where actual or simulated robots let the AI system learn from millions of hours of practice to improve on the foundation model.

The long story short of all these tech-bro terms is robotic generality: the model will be able to control a robot to do every easy- or medium-difficulty task, the same way it can solve every easy or medium homework problem. This is what lets you automate (3), because you don't need to do a lot of engineering work for a robot to do a million different jobs.

Multiple startups, and DeepMind, are working on this.
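As a sketch only: the pipeline described above, reduced to stubs. Every function here is an invented stand-in (real robotics systems are vastly more involved), but it shows the shape of imitation pretraining followed by RL fine-tuning:

```python
# Toy shape of the pipeline: demos -> tokens -> imitation policy -> RL tune.

def tokenize_demonstrations(videos):
    """Stand-in for compressing human demonstration video into action tokens."""
    return [f"action<{v}>" for v in videos]

def pretrain_policy(tokens):
    """Stand-in for next-token 'what would a human do' pretraining."""
    return lambda observation: tokens[hash(observation) % len(tokens)]

def rl_finetune(policy):
    """Stand-in for closing the embodiment gap on (simulated) robot hardware.
    In reality: reward shaping, domain randomization, sim-to-real transfer."""
    return policy  # unchanged in this toy

demos = ["pick up box", "place box on truck", "inspect part"]
policy = rl_finetune(pretrain_policy(tokenize_demonstrations(demos)))
print(policy("warehouse camera frame 1"))  # one of the demonstrated actions
```

Each stub hides an unsolved research problem; the argument is about whether they get solved this decade, not about the shape of the diagram.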
since you claim you can't measure this even as a thought experiment, there's nothing to discuss
You’re going to have to lose the LessWrongy superstition that you have to be able to assign numbers to something for it to be meaningful. Sometimes when talking about this big, messy, complicated world, your error bars are so large that assigning any number at all would be meaningless and lead to error. That doesn’t mean you can’t talk qualitatively about what you do know or believe.
- +2: You haven't made the terms clear enough for there to even be a discussion.
- See above.
- Uh, OK? Then no (pure sneer: the plot thins). Robots building robots probably already happens in some sense, and we aren't in the Singularity yet, my boy.
- Sure, why not.
(pure sneer response: imagine I’m a high school bully, and that I assault you in the manner befitting someone of my station, and then I say, “How’s that for a thought experiment?”)
Just to engage with the high-school-bully analogy: the nerd has been threatening to show up with his sexbot bodyguards, basically T-800s from Terminator, for years now, and you've been taking his lunch money and sneering. But now he's got real funding, he goes to work at a huge building, and apparently there are prototypes of the exact thing he claims to build inside.
The prototypes suck…for now…
More like you say they’re T-800 prototypes, and I go in and see TI-84s.
Sure, but they were 4 function calculators a few months ago. The rate of progress seems insane.
but crucially, this weird fucker is still trying to have sex with a calculator in front of the whole school
Enclosed please find one (1) Internet, awarded in recognition of the best/worst mental image I’ve had all week
Ok, you do see that you’ve written a self-own, right? Because if you do, bravo, you can eat with us today. But if not, you’re gonna have to do some deep learning elsewhere.
The thing about AI designing and building robots is that making physical things is vastly more expensive than pooping out six-fingered portrait JPEGs. All that trial-and-error learning would not come cheap, even if the AI were controlling CNC machining centers.
There’s no guarantee that the AI would have access to enough parts and materials to be able to be trained to a level of sufficient competence.
I may have accidentally steelmanned robots building robots (pitching RBR for short) in my head picturing those robot arms they have in car factories.
wrong place for this. joint probabilities joke was kinda fire though
1. Before 2030, do you consider it more likely than not that current AI techniques will scale to average human level in at least 25% of the domains that humans can do?
There is no set of domains over which we can quantify to make statements like this. “at least 25% of the domains that humans can do” is meaningless unless you willfully adopt a painfully modernist view that we really can talk about human ability in such stunningly universalist terms, one that inherits a lot of racist, ableist, eugenicist, white supremacist, … history. Unfortunately, understanding this does not come down to sitting down and trying to reason about intelligence from techbro first principles. Good luck escaping though.
Rest of the questions are deeply uninteresting and only become minimally interesting once you’re already lost in the AI religion.
ooooookay longpost time
first off: eh, wtf, why is this on sneerclub? kinda awks. but I'll try to give it a fair and honest answer.
First, let me say that what broke me from the herd at lesswrong was specifically the calls for AI pauses.
look, congrats on breaking out, but uh… you’re still wearing the prison jumpsuit in the grocery store and that’s why people are looking at you weirdly
“yay you got out” but you got only half the reason right
take some time and read this
This seems deeply flawed
correct
But I do think advanced AI is possible
one note here: “plausible” vs “possible” are very divergent paths and likelihoods
in the Total Possible Space Of All Things That Might Ever Happen, of course it’s possible, but so are many, many other things
it seems like the problems current AI can’t solve, like robotics, continuous learning, module reuse - the things needed to reach a general level of capabilities and for AI to do many but not all human jobs - are near future
eh. this ties back to my opener - you’re still too convinced about something on essentially no grounded basis other than industry hype-optimism
I can link deepmind papers with all of these, published in 2022 or 2023.
look I don’t want to shock you but that’s basically what they get paid to do. and (perverse) incentives apply - of course goog isn’t just going to spend a couple decabillion then go “oh shit, hmm, we’ve reached the limits of what this can do. okay everyone, pack it in, we’re done with this one!”, they’re gonna keep trying to milk it to make some of those decabillions back. and there’s plenty of useful suckers out there
And if AI can be general and control robots, and since making robots is a task human technicians and other workers can do, this does mean a form of Singularity is possible. Maybe not the breathless utopia by Ray Kurzweil but a fuckton of robots.
okay this is a weird leap and it’s borderline LW shittery so I’m not going to spend much effort on it, but I’ll give you this
it doesn’t fucking matter.
even if we do somehow crack even the smallest bit of computational sentience, the plausibility of rapid, self-reinforcing, runaway self-improvement on such a thing is basically nil. we're 3 years down the line on the Ever Given getting stuck in the Suez and fabs shutting down (with downstream orders being cancelled), and as a result a number of chips are still effectively unobtainium (even if and when you have piles and piles of money to throw at the problem). multiple industries, worldwide, are all throwing fucking tons of money at the problem to try to recover from the slightest little interruption in supply (and like, "slight", it wasn't even like fabs burned down or something, they just stopped shipping for a while)
just think of the utter scope of doing robotics. first you have to solve a whole bunch of design shit (which by itself involves a lot of from-principles directed innovation and inspiration and shit). then you have to figure out how to build the thing in a lab. then you have to scale it? which involves ordering thousands of parts and SKUs from hundreds of vendors. then find somewhere/somehow to assemble it? and firmware and iteration and all that shit?
this isn’t fucking age of ultron, and tony’s parking-space fab isn’t a real thing.
this outcome just isn’t fucking likely on any nearby horizon imo
So I was wondering what the people here generally think
we generally think the people who believe this are unintentional suckers or wilful grifters. idk what else to tell you? thought that was pretty clear
There are “boomer” forums I know of where they also generally deny AI is possible anytime soon, claim GPT-n is a stochastic parrot, and make fun of tech bros as being hypesters who collect 300k to edit javascript and drive Teslas*.
wat
I also have noticed that the whole rationalist schtick of “what is your probability” seems like asking for “joint probabilities”, aka smoke a joint and give a probability.
okay this gave me a momentary chuckle, and made me remember JRP (http://darklab.org/jrp.txt), which is a fun little shitpost to know about
from here, answering your questions as you asked them in order (and adding just my own detail in areas where others may not already have covered something)
- no, not a fuck, not even slightly. definitely not with the current set of bozos at the helm or these techniques as the foundation or path to it.
- no, see above
- who gives a shit? but seriously, no, see above. even if it did, perverse incentives and economic pressures from (sweeping hand motion) all this other shit stand a very strong chance of completely fucking it all up 60 ways to sunday
- snore
- if any of this happens at some point at all, the first few generations of it will probably look the same as all other technology ever: a force-multiplier with humans in the loop, doing things and making shit. and whatever happens in that phase will set the tone for whatever follows, so I'm not even going to try to predict that
*“epistemic status”: I uh do work for a tech company, my job title is machine learning engineer, my girlfriend is much younger than me and sometimes fucks other dudes, and we have 2 Teslas…
…okay? congrats? is that fulfilling for you? does it make you happy?
not really sure why you mentioned the gf thing at all? there’s no social points to be won here
closing thoughts: really weird post yo. like, “5 yud-steered squirrels in a trenchcoat” weird.
Before 2030, do you consider it more likely than not that current AI techniques will scale to average human level in at least 25% of the domains that humans can do?
Domains that humans can do are not quantifiable. Many fields of human endeavor (e.g. many arts and sports) are specifically only worthwhile because of the limits of human minds and bodies. Weightlifting is a thing even though we have cranes and forklifts. People enjoy paintings and drawing even though we have cameras.
I do not find it likely that 25% of currently existing occupations are going to be effectively automated this decade, and I don't think generative machine-learning models like LLMs or Stable Diffusion are going to be the sole major driver of that automation.
Do you consider it likely that, before 2040, those domains will include robotics?
Humans are capable of designing a robot, procuring the components to build the robot, assembling it and using the robot to perform a task. I don’t expect (or desire) a computer program to be able to do the same independently during any of our expected lifetime. It is entirely plausible that tools which apply ML techniques will be used more and more in robotics and other industries, but my money is on those tools being ultimately wielded by humans for the foreseeable future.
If AI systems can control robotics, do you believe a form of Singularity will happen? This means hard exponential growth in the number of robots, scaling past all industry on Earth today by at least 1 order of magnitude, with off-planet mining soon to follow. It does not necessarily mean anything else.
No. Even if Skynet had full control of a robot factory, heck, all the robot factories, and staffed them with a bunch of sleepless, foodless, always-motivated droids, it would still face many of the constraints we do. Physical constraints (a conveyor belt can only go so fast without breaking), economic constraints (where do the robot parts and the money to buy them come from? Expect robotics IC shortages when semiconductor fabs' backlogs are full of AI accelerators), even basic motivational constraints (who the hell programmed Skynet to be a ~~paperclip~~ C3PO maximizer?).

Do you think a mass transition, where most human jobs we have now are replaced by AI systems, will happen before 2040?
No. A transition like that brought by mechanization and industrialization of agriculture, or the outsourcing of manufacturing industry accompanied by the shift to a service economy, seems plausible, but not by 2040 and it won’t be driven by just machine learning alone.
Is AI system design an issue? I hate to say "alignment", because I think that's hopeless wankery by non-software-engineers, but given these will be robot-controlling advanced decision-making systems, will it require lots of methodical engineering by skilled engineers, with serious negative consequences when the work is sloppy?
Yes, system design is an important issue with all technology. We are already seeing real damage from “AI” technology getting to make important decisions: self-driving vehicle accidents, amplified marginalization of minorities due to feedback of bias into the models, unprecedented opportunities for spam and propaganda, bottlenecks of technology supply chains and much more.
Automation will absolutely continue to replace more and more different kinds of human labor. While this does and will drive unemployment to some extent, there is a more subtle issue with it as well. Productivity of human labor per capita has been soaring decade by decade, but median wages and work hours have stagnated. AI, like many other technologies before it, is probably gonna end up creating more bullshit jobs, with some people coming into them from already-bullshit jobs. If AI can replace half of human labor, that should mean the average person has to work half as hard; instead they will have to deliver double the results.
I just think the threat model of autonomous robot factories making superhuman android workers and replicas of itself at an exponential rate is pure science fiction.
These questions of singularity amuse me. People focus on stupid fantasy questions, as if it means humanity's doom. Yet actual singularities are ignored, and are so much a part of our lives that we have trouble imagining they were singularities.
The current singularity is the internet: no one could imagine it 50 years ago, no one yet grasps what we can do with it today or even understands it well, and no one can imagine what we will do with it in the future. The biggest fights around it are attempts to fight it with copyright, surveillance and corporate control of the applications.
If you take a step back, this is actually the telecommunications revolution; the internet is merely a step within it. Another big one was mechanization, whose shockwave was the industrial revolution. Agriculture may have been another.
AI should be a revolution too. But we're still in the infancy of this technology, and it's far, far from what it will deliver in the future. The beginning was with algorithms. The end won't be a machine revolution achieving sentience. It'll probably be another era of feudalism, because that's what happens each time: those who possess the technology use it to enslave the rest of the world. And humanity grows and advances anyway.