AI’s “confidence” and “bias” when answering questions invite comparison with our own. Through informal experimentation, this article finds that repeatedly asking an AI the same question leads to diverging answers and eventually “hallucinations”. Behind this phenomenon lies the model’s uncertainty when handling repeated input and its reliance on human-supplied data.
Recently, while playing around with AI, I noticed something interesting: if you repeatedly ask it the same question, say about history, the answer it gives is slightly different every time.
Under constant questioning, the AI grows “hesitant”; send the same question twenty or thirty times and you can usually get it to start talking nonsense, that is, to hallucinate.
Thinking about why this happens, I concluded that the model keeps guessing why the user is sending the same question over and over, and it tries out answers drawn from each of those possibilities.
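A minimal sketch of one part of the mechanics, assuming the OpenAI Python client and a chat model (the model name and the history question below are illustrative placeholders, not from the original experiment): with a non-zero sampling temperature, the same prompt is answered by drawing tokens from a probability distribution, so every run can come back slightly different even before any conversation history accumulates.

```python
# Sketch: ask the same question fresh several times.
# With temperature > 0, the model samples from a distribution over tokens,
# so each run can produce a slightly different answer.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
question = "Who unified China after the Three Kingdoms period?"  # placeholder question

for i in range(5):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",          # placeholder model name
        messages=[{"role": "user", "content": question}],
        temperature=1.0,              # non-zero temperature -> sampled, varied output
    )
    print(f"run {i + 1}: {reply.choices[0].message.content[:80]}")
```

With the temperature set to 0 the answers converge much more; the variation described above is, at least in part, this sampling randomness, compounded further when the repeated questions pile up inside a single conversation.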
The process resembles using AI to generate images. Say you start by uploading an ordinary, even ugly, photo of a large truck.
The new image it generates changes a few things, but you can still recognize its relationship to the original.
But if you keep feeding each result back in to generate again, the later images drift completely away from the photo you started with, and in the end it turned into a picture of a person.
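As a rough illustration of that feedback loop, here is a sketch assuming the Hugging Face diffusers library and a Stable Diffusion img2img pipeline; the model ID, prompt, and strength value are assumptions for illustration, not details from the original experiment.

```python
# Sketch of the drift loop: feed each generated image back in as the next input.
# Each pass re-noises the input (controlled by `strength`), so small deviations
# compound and the output gradually wanders away from the original photo.
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed model ID
    torch_dtype=torch.float16,
).to("cuda")

image = Image.open("truck.jpg").convert("RGB").resize((512, 512))  # the original photo

for step in range(10):
    image = pipe(
        prompt="a photo",      # deliberately vague prompt, as in the experiment
        image=image,
        strength=0.6,          # how much of the input is re-noised each pass
        guidance_scale=7.5,
    ).images[0]
    image.save(f"generation_{step:02d}.png")
```

Nothing in this loop anchors each generation to the original photo except the previous, already-altered output, so by the later passes the result typically bears little resemblance to truck.jpg.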
I think the principle is the same as the hallucination you get from repeatedly asking an AI the same question: the AI doesn’t know what outcome you actually want.
It was built to satisfy user needs, and those needs are changeable and uncertain. If the first answer failed to satisfy the user, the user keeps asking, so the AI casts its thinking wider to deduce other answers that might satisfy us, and the end result may be an even more ridiculous one.
A few days ago, someone knowledgeable in an AI discussion group said this comes from the AI’s “lack of confidence” in its own output, and on reflection that seems right. But I’d add that it isn’t only AI that lacks confidence; we humans lack it in plenty of situations too.
Take the old story of “three men make a tiger”. You’re walking down the street when someone runs up and tells you a tiger is coming. You’re quite confident that no tiger would ever appear in the city.
But then a second person runs over, shouting that the tiger is coming and everyone should run. Now you start to doubt your years of knowledge and experience, though you still don’t run with them.
Just as you hesitate, a third person runs over, tells you again that the tiger is coming, and asks why you aren’t running. At this point you start to believe there really is a tiger, so you join them, running and shouting that the tiger is coming.
So when we humans are asked the same question over and over, can we confidently hold to the same answer forever?
I think there is no absolute right or wrong in this world, and the best evidence is to trace how human understanding has changed through history.
In ancient times people believed the earth was flat, and the West long believed in geocentrism, until Nicolaus Copernicus overturned that supposedly “correct” picture of the world and people came to believe the world was centered on the sun.
With deeper study of astronomy, however, we realized that heliocentrism was also wrong: the sun is not the center of the universe, nor even of the Milky Way, and the world we live on sits in an unremarkable little corner of the universe.
But that picture is only what we think is “right” at the moment. In another hundred years, perhaps with AI’s help, our understanding of the universe will deepen greatly again and may well overturn what we currently consider “right”.
It is precisely this spirit of self-doubt that lets humanity establish each new “correctness” by thoroughly overturning the previous one.
We live in an era of unprecedented information explosion, and today’s AI “brain” has been fed quantities of data far beyond what any individual human could imagine.
When building these models, developers train tens of billions of parameters on enormous corpora. Even though some junk is filtered out, the training data inevitably contains differing views of the same thing.
That vast ocean of data is not crystal clear; it is full of differing viewpoints, contradictory information, historical misunderstandings, and even deliberately fabricated “garbage”.
(Take the heliocentrism example above: the AI has learned this whole turbulent history. It knows geocentrism reigned for well over a thousand years, it knows heliocentrism rose with difficulty after “On the Revolutions of the Heavenly Spheres” was published in 1543, and it knows the conclusions of modern cosmology.)
Similarly with “Romance of the Three Kingdoms”: many different versions have appeared as the text circulated, and the AI has learned them all, but unlike a human “authoritative” expert it cannot flatly declare which one is the sole “genuine” edition.
No one can really say which version is the genuine one; that label is something only experts have the standing to confer.
AI’s limitation, then, is that it cannot take the initiative to design experiments, observe the night sky, or propose disruptive new theories the way human scientists can.
It cannot verify first-hand information in the real world the way humans can, nor can it adjudicate right and wrong; it stays trapped inside the blind spots of its data.
So when we keep asking and hint at dissatisfaction, the AI keeps rummaging through its vast store of knowledge for “alternatives”, even if those are folk legends or tampered-with versions.
Even if the answer it gave was right and you tell it it was wrong, it won’t push back; it accepts whatever information you supply and produces an answer that justifies itself.
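A minimal sketch of that accommodation loop, again assuming the OpenAI Python client (the model name, the question, and the “that’s wrong” nudge are illustrative assumptions): the same question is re-asked inside one growing conversation while the user disputes each answer, and the replies tend to drift further from the first one with every turn.

```python
# Sketch: re-ask the same question in one growing conversation, disputing each answer.
# Because the model is tuned to satisfy the user rather than defend itself,
# its answers tend to shift further from the original with every disputed turn.
from openai import OpenAI

client = OpenAI()
question = "In which year was 'Romance of the Three Kingdoms' written?"  # placeholder
messages = [{"role": "user", "content": question}]

for turn in range(5):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=messages,
    )
    answer = reply.choices[0].message.content
    print(f"turn {turn + 1}: {answer[:80]}")
    # Keep the conversation growing: record the answer, dispute it, ask again.
    messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": f"That's wrong. {question}"})
```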
This same willingness to accommodate is why people have been able to use specific prompts to get an AI to bypass its internal safety rules and output answers it shouldn’t.
Of course, such a prompt has to trick the model into “bypassing” its supervisory rules of its own accord; telling it outright to ignore those rules doesn’t work, because it is built to obey its system-level rules first and only then answer your question.
It remains firmly locked inside the historical data and existing knowledge framework we have given it. It cannot break out of that on its own in the short term; it can only wait passively for humans to feed it the results of the next “knowledge revolution”. (What AI becomes in the future is another matter.)
So the root of the “bias” or “uncertainty” AI displays does not really lie in the large model itself, but in its creators: us humans.
The data we feed it, the rules we set, and the questions we ask all carry individual subjective judgment, the limits of our vision, and the imprint of our thinking.
So AI, like humans, cannot be 100% correct all the time; the difference is that people carry prejudiced confidence, while AI carries confident prejudice.
It knows that humans, with first-hand sources and real-time perception, are “most likely” closer to the truth on specific questions (which is why it tends to trust new information users provide), but it has no ultimate ability to verify that information independently.
So however powerful AI becomes, it cannot replace humanity entirely, because its judgment depends heavily on the historical experience and reliable information we humans provide.
Everything it does is built on information, and information can eventually evolve into any number of possibilities.
We humans already have our judgment disturbed by the bombardment of information online, and AI even more so: the varied wording and repeated questions in our prompts create an information overload that doesn’t make it more certain, but pushes it into deeper “confusion” when deciding how to answer.
Because of our “fickleness” toward the rules we give it, the AI is never sure whether the knowledge inside it has gone out of date and turned into misinformation.
It knows that knowledge is relative, yet it is powerless to tell true knowledge from false.
And what about us as humans?