The problem is that whether or not an AI is self-aware isn’t a technical question - it’s a philosophical one.
And our current blinkered focus on STEM and only STEM has made it so that many (most?) of those most involved in AI R&D are woefully underequipped to make a sound judgment on such a matter.
who would be equipped to make a sound judgment on such a matter?
Philosophy majors.
Don’t even have to be majors; an introductory course in epistemology does wonders in breaking one’s self-confidence, in a good way.
It’s not self-aware; it’s just okay at faking it. Just because some people might believe it doesn’t make it so. People also deny global warming and think the earth is flat.
Did you respond to the wrong post?
Of course it’s not self-aware.
That’s my exact point: people with a grounding in philosophy would’ve known better.
Ah gotcha, yeah I guess I can see what you mean.