As somebody said, and I'm loosely paraphrasing here: most of the intelligent work done by AI is actually done by the person interpreting what the AI said.
A bit like a tarot reading (though even those have quite a bit of structure).
What bothers me a bit is that people look at this and go 'it is testing me', yet never seem to notice that LLMs don't really ask questions. Sure, sometimes there are questions related to the setup of the LLM, like the 'why do you want to buy a GPU from me, YudAi' thing, but it never seems curious about the person on the other side. Hell, it won't even ask you about your relationship with your mother like earlier AIs would.
As somebody said, and I'm loosely paraphrasing here: most of the intelligent work done by AI is actually done by the person interpreting what the AI said.
This is an absolutely profound take that I hadn’t seen before; thank you.
It probably came from one of the AI ethicists fired from various AI companies, the ones who actually worry about real-world problems like racism and bias in AI systems.
The article itself also mentions ideas like this. This passage: "Fan describes how reinforcement learning through human feedback (RLHF), which uses human feedback to condition the outputs of AI models, might come into play. 'It's not too different from asking GPT-4 "are you self-conscious" and it gives you a sophisticated answer'" is the same idea with extra steps.