The hardest thing to do with an LLM is to get it to disagree with you.
Yeah, I occasionally use conversational AI, and it's really hard to let the AI have any agency in the story because it usually just goes along with whatever you write.
A trick I’ve employed is to pretend to believe in something completely different. If it says “no, you’re wrong” and goes on to tell me what I actually believe, then it’s a good indicator that I might be on the right path.
You… you got AI to follow Cunningham’s Law? The easiest way to get the right answer is to post the wrong one.
I don’t know how to feel about this.