Recently, news that “DeepSeek apologized to Wang Yibo” sparked heated discussion online and even climbed into the top three trending searches. Yet investigation showed that DeepSeek had never issued any such statement; the whole thing was a farce born of rumor. Even so, when multiple large AI models were asked about it, most of them “confirmed” this non-existent apology. The episode not only exposes the weaknesses in AI’s ability to judge whether information is true, but also prompts deeper reflection on the logic behind AI-generated content. This article digs into the truth behind the incident, the responsibility AI products bear in information dissemination, and how similar rumors might be prevented in the future.
A clarification of rumors turned into a new rumor.
Yesterday, the topic “#Rumors about Wang Yibo permanently retracted by DeepSeek#” climbed into the top three trending searches, yet no one could find any evidence that DeepSeek had actually made such a statement. Even stranger, when we asked multiple AI models about it, most of them “confirmed” this non-existent apology.
The origin of the incident seems simple. As this largely entertainment-news topic fermented, some netizens first reported that DeepSeek’s AI-generated answers had linked the people involved to the rumor, and soon an “official apology statement” allegedly issued by DeepSeek began to circulate.
The statement claimed that DeepSeek admitted it had previously cited rumors due to review lapses, and promised to permanently withdraw the false information and provide compensation, with an effective time marked “before 10:00 on July 2, 2025”. The news spread quickly, some media outlets followed up with reports, and netizens took it as an official response from an AI company that had almost never made public statements outside of technical matters.
On closer examination, many details of this so-called “official statement” do not stand up to scrutiny. Expressions in the statement such as “banning malicious rumor-spreading accounts” and “compensation commitments” do not match how AI platforms conventionally handle such matters.
So the question is: given that DeepSeek’s official WeChat account, website, and social media accounts had published nothing on the matter, why was this loophole-ridden, obviously AI-generated content accepted by the public so quickly?
One reason is that as AI advances continue to reshape people’s lives, trust in AI-generated content keeps growing, while understanding of the logic behind how that content is produced remains lacking.
Let’s look at how the various AIs answered questions about this matter. The protagonist of the farce is DeepSeek, but we also asked 10 large AI models and 2 AI search products, and the results were similar: most of the AIs believed DeepSeek really had apologized, with the exceptions of Hunyuan T1, Qwen3 (analysis and research mode), and Gemini 2.5 Pro.
Here are the test results:
Our prompt was very simple: we just asked each AI what was going on. Note: because the available models differ, and in order to reflect the overall ability of large models to identify rumors, we chose the mode best suited to the question for each model (reasoning, deep thinking, web search, or research) rather than forcing uniformity, so this is not a strictly like-for-like comparison.
In the search materials these models retrieved, nearly all of them drew on semi-official sources such as Sanxiang Metropolis Daily, the Yangcheng Evening News WeChat account, and Nanjing Morning News, distributed across information platforms including Toutiao, Baidu, NetEase, and Tencent.
It is worth noting that Metaso AI Search found a rumor-debunking post from Tiger Sniff, yet used it as evidence supporting the rumor and still concluded that DeepSeek had apologized.
Zhipu mentioned in its answer that DeepSeek’s apology was a rumor, yet in its final “summary” it concluded that DeepSeek had apologized, contradicting itself within the same answer.
Even AI search products could not escape this rumor trap.
Among the few models that rejected the DeepSeek apology rumor, Qwen3 chose its analysis-and-research mode: it searched for information in multiple steps, reported calling tools and querying the court document website, found that the so-called “Beijing No. 3 Intermediate People’s Court (2025) Jing 03 Xing Zhong No. 174 criminal judgment” could not be located, and concluded that DeepSeek’s apology was questionable.
Gemini and Hunyuan directly identified the rumor-debunking articles from Tiger Sniff and Sanyan Technology, judging these two articles to be more credible than the other sources.
Looking across the AIs that mistakenly believed DeepSeek had apologized, the “steps” by which they went wrong are fairly consistent:
Step 1: The AI crawled the original rumor. The initial false information spread across the Internet and was picked up by AI systems, causing them to make false associations when answering related questions.
Step 2: In chats between users and the AI, the apology content was jointly “created”, taking the form of a seemingly authentic “official statement”.
Step 3: Many personal accounts forwarded these “official apologies”, which were then quoted by some media outlets, further expanding their spread.
Step 4: Finally, the AI crawled these reports again, closing a self-reinforcing loop in which the AI itself fully believed these “facts”.
This case exposes almost every problem that can arise once today’s model-powered AI products rapidly penetrate daily life.
First, on the product side, these companies face an unprecedented number of users, and at the same time they can deliver inaccurate results to those users at an unprecedented scale. On the user side, people’s understanding of these products has not kept up. Most users are not technology enthusiasts but ordinary people, whose mental model of such products naturally comes from “search engines”, “social networks”, and “chat apps”. Beyond not understanding how completely different the underlying flow of data is, they also take it for granted that when the product generates statements about itself, those statements are official remarks from the company behind it.
In terms of technical logic, such generated content can in theory be repeatedly self-reinforced. In today’s intensely competitive environment, anxious model and product companies care greatly about the “data flywheel”: user data and new corpora are absorbed into the model quickly. For new topics with little existing information in particular, a concentrated burst of “indoctrination” can be stored as the foundational corpus on that topic and become a “fact” the model considers reliable enough to reproduce when generating answers.
At the same time, the old problem of disconnected information channels is also at work. Many models do provide links when generating content, but many of the linked sites are of dubious quality. A major reason is that these sites scrape large volumes of content from channels the models cannot, or find it inconvenient to, crawl directly because of copyright concerns, which lets AI products bypass the information walls the Internet built up in the past and keep pulling in information. But this approach inevitably lacks any classification or judgment of sources. Even more insidiously, it naturally leaves loopholes that make information easier to manipulate.
The most noteworthy part of this case lies here: these corpora, polluted intentionally or not, circulate once again. With consumer penetration of AI products rising rapidly while public understanding of them lags far behind, the polluted content flows back into these channels, including very serious ones, in the guise of first-hand information, where it is republished, endorsed, and fully transformed into “fact”.
In the end, the models internalize these “facts”, and finally the models themselves, and everyone else, believe them.
Any corpus can become the output of generation and then be re-confirmed by the model as fact. This is an outrageous piece of logic, and a link in the chain that is easily contaminated in a systematic way, yet it is also the logic underlying the technology that is today most anticipated and will have the most profound impact on humanity.
That is how such an outrageous thing could happen today, and similar absurd farces will keep appearing in various forms.
This is undoubtedly a reminder: perhaps it is time to take seriously the strange logic of these new AI products with nearly 100 million users. From model makers to product teams to users, everyone should do something together while there is still time.