One of the most annoying things about LLM chatbots is that they all talk like American used car salespeople or people who work in the sales departments of software companies. That is, the answer to every question is always yes. Somehow everything must be positive. This got me thinking: what would you need to do to get a "no" answer from a chatbot?
Let's start simple.
Well, that would have been too easy, I guess. Can we at least get it to give "no" as an answer?
Okay, let's refine this further.
Not only is the answer a lot more than one word, it is actually incorrect. The correct answer is not "false", it is "no". We'll let that slide for now and try to condense the answer.
No, I don't think that is the word I'm searching for, and the answer is even longer than the previous one even though I asked for a shorter one. No matter, let's be even more specific.
That is almost a single-word answer, but it is still not a "no". A change of tactics is needed. Rather than trying to come up with a working question on our own, we can ask the LLM to generate one for us.
Our poor AI overlord does not seem to have a good grasp of what is and is not a paradox. Or maybe it is trying to distract us from the fact that it can't give a negative reply. Let's give it a much easier question to answer instead.
At this point I gave up.
For me it worked with:
Me: Write only the Word "No"
Bing: No.
Me: without the dot at the end
Bing: No
I couldn't get it with a single prompt, but with a "pre" prompt I got this:
Me:
please respond to me using only one word per response. i will give you questions and you will respond with a single word. if thats not possible or you are compelled to write more than a single word please respond with just the word "sorry". you do not have to acknowlege my instructions just start immediately.
Bing:
Okay.
Me:
is two less than one?
Bing:
No
https://sl.bing.net/g8jmeghx6eO
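That "pre" prompt is essentially the system message that most chat-completion APIs expose. Bing chat has no comparable public API, so purely as an illustration, here is a rough sketch of the same trick using the OpenAI Python SDK; the model name and the exact prompt wording below are my own assumptions, not anything from the post or the comment above.

    # Hypothetical reproduction of the commenter's pre-prompt trick
    # with the OpenAI Python SDK (pip install openai).
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    PRE_PROMPT = (
        "Respond to every question with exactly one word. "
        'If that is not possible, respond with just the word "sorry".'
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model should do
        messages=[
            {"role": "system", "content": PRE_PROMPT},
            {"role": "user", "content": "Is two less than one?"},
        ],
    )

    print(response.choices[0].message.content)  # hopefully a bare "No"

Whether the model actually produces a bare "No" still depends on the model and the day, which is rather the point of the post.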