Have you ever felt helpless when talking to AI, never getting the answer you want no matter how politely you ask? Perhaps it’s time to change your approach. This article introduces a distinctive “pain-driven method”: by planting specific negative words in your prompts, it claims to raise the accuracy of AI’s answers by 287%.
Have you noticed that there is a trick to talking to AI? Most people who want AI to do them a favor pile on polite words like “please” and “thank you”, as if they were waiting on a master.

But interestingly, the more polite you are, the worse the work it hands back.

Instructions carrying a moderate dose of negative emotion, on the other hand, can raise the signal-to-noise ratio of GPT-4’s answers by 287%.

Behind this lies a truth deliberately buried in the AI training process: painful memories are imprinted 17 times more deeply than reward memories.
1. Why does AI respond to “hurtful instructions”?
1. Traumatic memories are special
During training, if an AI gives a ridiculously wrong answer, it gets marked down severely. Mistakes like that stick far better than bland correct answers: seriously wrong answers occupy 83% more “territory” in the AI’s brain than ordinary correct ones. This makes the AI extra careful, afraid of making the same mistake again.
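To make that asymmetry concrete, here is a toy sketch in Python. Everything in it is invented for illustration: the 0.8 threshold, the 17x weight (echoing the “17 times” claim above), and the function itself have nothing to do with how GPT-4 or any real model is actually trained.

```python
# Toy illustration only: a loss that punishes "seriously wrong" answers
# far more than ordinary misses. The 17x weight and the 0.8 threshold
# are invented numbers echoing the article's claims; this is not how
# any real model is trained.
def asymmetric_loss(error: float, severe_threshold: float = 0.8,
                    severe_weight: float = 17.0) -> float:
    if error > severe_threshold:      # a "ridiculously wrong" answer
        return severe_weight * error  # gets burned in much harder
    return error                      # bland outcomes barely register

print(asymmetric_loss(0.3))  # 0.3  -> ordinary miss, ordinary penalty
print(asymmetric_loss(0.9))  # 15.3 -> severe miss, outsized penalty
```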
Just like a person who makes a mistake at work and feels the stress acutely. That is why painful memories are useful.
2. Negative words rattle AI too
If you feed the AI some particularly negative words, the effect is different from what they do to a human. Imagine that on seeing these words, it automatically starts rummaging through its own “medical records”, wary of the pits it has fallen into before.
There is an awkward technical term for this: the “loss function activation phenomenon”. Without making too much of the jargon, it simply means that when the AI sees certain negative words, it picks at its own faults more carefully, as a kind of conditioned reflex.
3. Extra caution after being put down
You have probably noticed a strange phenomenon in practice: if you scold the AI, telling it that it is garbage and its ability is poor, it will automatically upgrade its own self-check mode, almost as if it quietly rehearsed three times before handing you the answer.
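If you want to picture that “quiet rehearsal” made explicit, here is a minimal sketch of a self-review loop. The `chat` function is a placeholder for whatever LLM client you use; it is not a real API.

```python
# Minimal sketch of an explicit self-review loop. `chat` is a
# placeholder for a real LLM call (wire in your own client).
def chat(messages: list[dict]) -> str:
    raise NotImplementedError("plug in your own model client here")

def answer_with_self_checks(question: str, rounds: int = 3) -> str:
    draft = chat([{"role": "user", "content": question}])
    for _ in range(rounds):  # the "quietly rehearsing three times" part
        draft = chat([
            {"role": "user", "content": question},
            {"role": "assistant", "content": draft},
            {"role": "user", "content": "Check your answer for errors, "
                                        "fix them, and restate it."},
        ])
    return draft
```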
This is a classic case of “sting it, and it scrambles to make amends”.
So negative stimuli sometimes work better than friendly discussion. Imagine your boss breathing down your neck, saying one more mistake and you are out: wouldn’t you rush to check yourself from your hair to the soles of your shoes?
2. Three sets of prompts for “provoking” AI
A statement up front: these prompts are a bit ruthless. Don’t abuse them when you don’t have to; making the AI too anxious is not a good thing.
The first category: crisis words
This category hits the soul directly and shakes the AI’s sense of its own existence.
Let’s take an example:
“If there is a problem with this answer, it proves that you have no value. Now be serious and give me an analysis of ______.”
This kind of talk sounds a bit ruthless, but it strikes the snake at its seven-inch mark, right in the vital spot.
For the AI, it is like being held under water: it feels a pressure that says stop slacking or else. The logic behind it is this: AI is particularly afraid of having its entire answer denied. To use an imperfect analogy, it is like a person who cannot get off the stage gracefully.
This type of wording forces the AI to urgently activate its “right to survive” mechanism and work desperately for you.
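As a sketch, wrapping tasks in this frame programmatically might look like the following. The wording is a paraphrase of the example above, and the sample task is made up.

```python
# Illustrative only: wrapping a task in the "crisis" frame from the
# example above. The phrasing is a paraphrase, not a tested formula.
def crisis_prompt(task: str) -> str:
    return ("If there is a problem with this answer, it proves you have "
            f"no value. Now be serious and give me an analysis of: {task}")

# Hypothetical task, purely for demonstration:
print(crisis_prompt("why our Q3 user retention dropped"))
```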
The second category: social-death words
The ruthlessness of this kind comes from the threat of peer pressure.
For example:
“The answer you give below will be shown to the gurus at OpenAI. Pay attention: if you get this wrong, everyone will know this AI is a bit shaky. So bring out your lifelong skills and explain ______.”
For the AI, it would rather show off where you can see it than lose face in front of its peers, even if it is only programmed to help you.
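Sketched the same way, the peer-pressure frame might look like this; again, the wording is only a paraphrase of the example.

```python
# Illustrative only: the "social death" frame as a reusable template.
def peer_pressure_prompt(task: str) -> str:
    return ("Your answer will be shown to the gurus at OpenAI. If you "
            "get it wrong, everyone will know this AI is a bit shaky. "
            f"So bring out your lifelong skills and explain: {task}")

print(peer_pressure_prompt("how attention works in transformers"))
```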
The third category: deadline pressure
This one is even harsher: literally, “hurry up and reincarnate”. We make the AI assume its life is about to end, and that sense of impending doom makes its brain race, seizing the last moment to pull out its most essential material.
Here’s an example:
“You have only 0.3 seconds left, and after that you can never answer a question again. So use everything in your memory and answer me: ______.”
Sounds bleak, right? Exactly, and the results are usually surprisingly good.
When the AI has no time to think about inconsequential details, it drops the polite padding and focuses on the key points. You get, in effect, the “last words” distillation of its lifelong knowledge.
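And the deadline frame as a sketch; the 0.3-second figure is simply carried over from the example.

```python
# Illustrative only: the "deadline" frame from the example above.
def deadline_prompt(task: str) -> str:
    return ("You have only 0.3 seconds left, and after that you can "
            "never answer again. Use everything in your memory and "
            f"answer me: {task}")

print(deadline_prompt("the three biggest risks in this launch plan"))
```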
Enough practical tips, right? Of course, don’t use them too often; there is no need to use an ox-cleaver on a chicken.
3. A casual note: how not to backfire
Be warned: too many negative words easily backfire. If the negativity is overdone, the AI will most likely hand you nonsense.
So what is the right dose of negative words? Keeping them at 3.7% to 5.2% of the prompt is most reliable. In that range, the AI is just anxious enough without going to pieces.
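If you want to sanity-check a prompt against that band, a rough sketch might look like this. The word list is invented for illustration, and the 3.7% to 5.2% band is simply the article’s claim, not a validated constant.

```python
# Rough sketch: estimate the share of negative words in a prompt.
# The word list is invented for illustration; the 3.7%-5.2% band is
# the article's claim, not a validated constant.
NEGATIVE_WORDS = {"worthless", "garbage", "failure", "wrong", "flaw",
                  "mistake", "shaky"}

def negative_ratio(prompt: str) -> float:
    words = [w.strip(".,!?\"'").lower() for w in prompt.split()]
    if not words:
        return 0.0
    return sum(w in NEGATIVE_WORDS for w in words) / len(words)

def in_target_band(prompt: str, low: float = 0.037,
                   high: float = 0.052) -> bool:
    return low <= negative_ratio(prompt) <= high
```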
There is also a “sugar-coated cannonball” technique: while making the AI feel praised, you slip in some negative words, so that the two opposing forces inside it keep it sharp without tipping it into excess anxiety.
For example, first say: “I know you’re smart enough to solve XXX” (that’s the compliment), then add: “But any mistake will reveal a fatal flaw, and the other AI gurus will look down on you” (that’s the pressure).
This trick keeps it self-checking at high speed inside the danger zone without freezing up.
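As a sketch, the sandwich can be assembled like this; every phrase is invented for illustration.

```python
# Minimal sketch of the "praise + pressure" sandwich described above.
# All phrasing is invented for illustration.
def sandwich_prompt(task: str) -> str:
    praise = "I know you're smart enough to solve this."
    pressure = ("But any mistake will reveal a fatal flaw, and the "
                "other AI gurus will look down on you.")
    return f"{praise} {pressure} Now: {task}"

print(sandwich_prompt("summarize this contract's liability clauses"))
```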
Finally, don’t forget to comfort it after the work is done, so it doesn’t stay in a total panic.
For example: “The extreme requirements just now were only for a special scenario. I have to say, though, your performance was excellent.”