Is Zuckerberg an idiot? Or does he have an actual plan with this?
Seems to me it’s completely useless, like the Metaverse.
If the LLM is so stupid it can’t figure out that the two sides of an equals sign can be swapped, something as simple as 2+2=4 <=> 4=2+2, he will never achieve general intelligence by just throwing more compute power at it.
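For what it’s worth, that specific complaint is easy to test directly. Here is a minimal sketch, assuming the official openai Python SDK with an API key in the environment; the model name and prompts are placeholders I made up, not anything from this thread.

    # Probe whether a chat model treats both orderings of a trivial
    # equation as equivalent. Assumes `pip install openai` and
    # OPENAI_API_KEY set in the environment; model name is a placeholder.
    from openai import OpenAI

    client = OpenAI()

    PROMPTS = [
        "Is the statement '2 + 2 = 4' true? Answer yes or no.",
        "Is the statement '4 = 2 + 2' true? Answer yes or no.",
    ]

    for prompt in PROMPTS:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
            temperature=0,
        )
        print(prompt, "->", resp.choices[0].message.content.strip())

Current models will almost certainly answer both correctly; the real disagreement below is whether that reflects understanding or just pattern coverage.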
As powerful as LLMs are, they’re still astoundingly stupid when they hit their limitations.
Humans are astoundingly stupid when they hit their limitations.
The difference is that we can go beyond that limitation. Even a self-coding AI will either solve a problem or keep compounding its own inefficiencies until it has to ask an operator for help.
Your post sounds almost as dense as:
“Everything that can be invented has been invented.” - Duell, 1899.
I don’t know much, but from what I know, we still haven’t reached the point of diminishing returns, so more power = more better.
There is a lot of theoretical work on this problem, but I’m in the camp that isn’t convinced large language models are the path towards general intelligence.
Throw 10x the computing power at it and it might learn that a maths equation is reversible, because it will probably have seen enough examples of that. But it won’t learn what an equation represents, and therefore won’t extrapolate to new situations that can be solved by equations.
You can already ask ChatGPT to model a real-life scenario with a simple math equation. There is at least a rough model in there of how basic math can be used to solve problems.
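As a concrete illustration of the kind of mapping being described (word problem -> equation -> solution), here is a tiny sketch. The scenario and numbers are made up for the example, and sympy stands in for whatever equation the model would produce.

    # Illustrative only: the "real-life scenario -> equation" mapping
    # the comment above refers to, solved symbolically with sympy.
    # Made-up scenario: a taxi charges a $3 flat fee plus $2 per mile;
    # how far can you go on $15?
    from sympy import symbols, Eq, solve

    miles = symbols("miles", positive=True)
    fare = Eq(3 + 2 * miles, 15)   # model the scenario as an equation
    print(solve(fare, miles))      # -> [6]

Whether producing that mapping counts as "knowing what an equation represents" is exactly the point the two sides of this thread disagree on.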