From $75 million in financing and a $500 million valuation to laying off two-thirds of its staff within four months, Manus' sharp fall is a slap in the face that wakes the industry from the dream of "universal agent omnipotence". This article reviews the whole process: traffic halved, ARR inflated, users complaining the product is "slow and expensive". It argues that until the boundaries of LLM hallucination and computing power are broken through, the grand narrative of "zero Workflow" is just a bubble; AI that truly lands must return to vertical scenarios, structured processes, and verifiable value.
I woke up to a surprising piece of news: Manus has laid off employees! After thinking about it for a while, it feels almost inevitable in hindsight. Let's start with a timeline:
- March 6: Explosive launch; invite codes were impossible to get, and the tech community stayed up all night;
- May 13: Open for registration;
- May: Received $75 million in financing, valued at $500 million;
- June: The headquarters moved to Singapore and began to “internationalize”;
- July 8: Responded to layoff rumors, citing "operational efficiency considerations".
Subsequently, specific figures appeared in investment media reports:
Manus currently has about 120 employees in China. Aside from the 40-plus core technical staff relocating to the Singapore headquarters, the rest will be laid off with N+3 or 2N compensation.
Compared with the employee celebration videos that circulated when Manus launched in early March, the contrast is embarrassing…
Another set of figures may give you a different feeling: according to industry research data, Manus' annual recurring revenue (ARR) had reached $9.36 million by May 2025, showing strong growth momentum.
Its competitor Genspark posted an even more aggressive report card. Founder Jing Kun revealed on X that ARR exceeded $22 million just one month after Genspark launched, and jumped to $36 million by day 45, showing astonishing explosiveness and growth efficiency.
But both Manus and Genspark saw user traffic decline rapidly after their short-lived explosive growth.
According to Similarweb, Manus' monthly visits peaked at 23.76 million in March 2025, then fell month by month to 16.16 million in May 2025 before rebounding slightly to 17.3 million in June. Genspark climbed quickly from 4.4 million in March to 8.88 million in April, then slid slowly to 8.43 million in May and 7.69 million in June.
Data quoted from Z Finance, "Manus lays off nearly two-thirds of its China staff; visits decline month after month; the general AI Agent may face a cooling-off moment".
As for why, the Sequoia AI Summit had already suggested an answer days earlier: the AI opportunities of these two years lie in vertical domains.
If you have to pinpoint why Manus' product metrics slipped, I think we should move from the macro to the micro. The core problems: Manus feels like a toy; it is slow, fails to solve real problems, and on top of that it is expensive!
I think one user's evaluation puts it well: the product's imagination has far exceeded what the technology can deliver. The main question now is not whether something can be conceived, but whether it can be realized, and one reason there are so few Manus-like products is that the technology cannot yet support them.
There is one error in that take, though: there are plenty of Manus-like products; they just aren't good to use.
Despite all this, Manus still leaves us with plenty worth discussing. Let's look at two angles, market education and technology forecasting, starting with the market side:
Generalized anxiety of CEOs
Those familiar with me know that I used to run a startup in the AI + enterprise management space; part of my job was to use an AI-workflow low-code platform to rapidly build SOPs for enterprises.
Because management tools ultimately point to two things:
First, oversight tools: used to monitor the team and the projects, measure per-head output and project ROI; nobody likes these.
Second, efficiency tools: used to improve productivity; but every efficiency gain implies layoffs.
For a company, unless the top leader personally wants it and the team has genuinely been short-handed for a long time, management tools have little chance of getting in. From the boss's point of view:
"You can solve the problem yourselves by working overtime, so why should I buy efficiency tools? Buy your own!"
Don't laugh; this is very common in small and medium-sized companies. If a boss must choose between making employees work overtime and helping them work more efficiently, he will most likely choose overtime. There are many considerations behind this, including culture, obedience tests, and so on…
So general digital transformation, and even AI workflows, never entered the boss's field of vision at all. But the DeepSeek + Manus combination somewhat broke this pattern, and the reasons are almost absurd:
On the one hand, small and medium-sized bosses were curious; on the other, they sensed that this thing might genuinely replace some headcount, with the possibility of permanently cutting certain costs, so they were willing to allocate budget!
On this basis, even companies that cannot use Excel properly started launching agent projects, driven by their CEOs' unrealistic expectations. This was also my original intention behind the "CEO digital clone" AI management project: to let AI genuinely assist the boss in thinking, and even help him work…
So AI workflow, reskinned, appeared before the bosses once again, and the bosses were suddenly enthusiastic. But the deeper problem remains: this story is hard to close the loop on!
Think about it: if a company can't even manage Excel, what core data does it have to support AI performing well?
So in the end, the AI adoption of many small and medium-sized companies eager for quick wins becomes a process of both sides digging pits for each other…
Vertical fields are the future
The contraction of Manus is not just one star company's strategic adjustment; it shatters the fantasy that general AI agents can "take over everything" under current technical conditions.
CEOs' anxiety-driven, short-sighted procurement (skipping digitalization and jumping straight to AI) exposes their misunderstanding of AI's capabilities; promises the technology cannot fulfill eventually collapse the business logic.
But behind this lies a more fundamental and fiercer dispute over technical routes: the general end-to-end Agent (so-called "zero Workflow") versus the vertical structured workflow (SOP). Which is the more suitable path for AI applications today?
In the face of Manus's predicament, optimistic generalists (agent advocates) may invoke "The Bitter Lesson" of Rich Sutton, the father of reinforcement learning: computing power crushes everything, and simple, general methods win in the end.
They envision that, as with AlphaGo or GPT-3, an agent relying on pure large-model capability and massive data can independently plan, call tools, and solve all problems, eventually eliminating those "clumsy" and "rigid" vertical workflow applications.
However, the real world is far more complex than a chessboard or a text sequence. Sutton's theory has a core of truth, but it rests on a key premise: computing power must act on the right architecture, one that can effectively represent real-world knowledge.
The essence of current LLMs (large language models) is a conditional probability model over word sequences, trained on massive text. What it learns is "which next word is most likely given the context": a powerful statistical fitting capability, but far from true understanding and thinking.
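The "conditional probability over word sequences" point can be made concrete with a deliberately tiny sketch. Real LLMs use deep neural networks over token sequences, not bigram counts, but the statistical-fitting nature is the same in spirit: the model estimates P(next word | context) from frequencies in its training data, with no model of the world behind the words.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "massive text" (illustrative only).
corpus = "the model predicts the next word given the context".split()

# Count bigram transitions: how often each word follows each context word.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def next_word_distribution(context_word):
    """P(next | context): pure frequency estimation, no 'understanding'."""
    counts = transitions[context_word]
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

print(next_word_distribution("the"))
# "the" is followed by "model", "next", and "context" once each,
# so each gets probability 1/3
```

Scaling this up to billions of parameters makes the fit vastly more powerful, but it does not change the category of the computation.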
When this architecture faces complex, ambiguous, tacit-knowledge-heavy verticals, it runs into a gap that cannot be bridged in the short term: its knowledge of the world is fundamentally incomplete.
1. Data incompleteness and characterization bottlenecks
Take healthcare: real-world diagnostic decisions go far beyond the textbook mapping from symptoms to drugs.
Physicians rely on vast amounts of tacit knowledge that cannot be fully coded: patient microexpressions (pain tolerance), socioeconomic factors (ability to pay, family support), ethical considerations (quality of life vs. longevity), and the delicate balance of cross-departmental collaboration.
Most of this knowledge is hard to structure or describe precisely in words, so LLMs naturally cannot fully learn or represent it.
GPT-4's high scores on medical exams do not mean it can handle complex clinical cases, just as passing the flight theory test does not mean one can handle an in-air emergency.
2. Openness vs. closed rules
AlphaGo’s success is based on the complete transparency of Go rules and limited state space.
The "general tasks" Manus tries to solve have blurred boundaries (tasks interact with one another), dynamically evolving states (user needs change rapidly), and diverse, even conflicting values (efficiency vs. security vs. employee feelings).
This requires metacognition (reflecting on the limits of one's own decisions) and dynamic value trade-offs, far beyond what the "statistical fitting" of current LLMs can offer.
3. Language limitations
The more essential problem is that text itself can describe only perhaps 30-40% of the real world.
A large amount of perceptual information, situational information, and intuitive judgment cannot be effectively encoded into text data for training LLMs.
Multimodality (images, sound, sensor data) is the direction, but its integration, understanding, and effective use still face a long technical gap before they can support a "zero Workflow" general agent.
Advocating "zero Workflow" today is tantamount to sending a child who has just learned to read into the clinic with a medical textbook.
The likely result is exactly what Manus and its peers experienced: a product that behaves like a toy, slow and expensive, unable to solve practical problems, and eventually abandoned by users.
Driving in reverse
The value of Workflow (SOP) needs to be re-examined; it has even been stigmatized as "driving in reverse" by some radical Agent advocates. This is not only wrong; it is more likely a calculated battle for the entry point.
The core value of Workflow is that it is a “buffer layer” and “stabilizer” against the limitations of current LLMs:
1. Certainty
In high-risk, high-value verticals (financial risk control, medical diagnosis, industrial control), preset workflow rules and check nodes are the last line of defense against model "hallucinations" and unpredictable behavior; they ensure reliable output and controllable business processes.
Imagine a bank credit-approval agent improvising freely, without workflow constraints built on regulatory rules and risk-control models; the consequences are unimaginable.
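A minimal sketch of what such "check nodes" look like in code. Everything here is invented for illustration (the rule names, thresholds, and field names are not from any real bank); the point is the structure: hard rules run before and after the model, and the model can only suggest, never override.

```python
# Hypothetical credit-approval workflow with check nodes around a model.
# Rules and thresholds are illustrative, not real regulatory logic.

def regulatory_checks(application):
    """Check node 1: hard pre-checks from regulation and risk control.
    The model never sees applications that fail them and cannot override them."""
    if application["age"] < 18:
        return "reject: applicant under legal age"
    if application["requested"] > 10 * application["annual_income"]:
        return "reject: exceeds debt-to-income ceiling"
    return None  # passed; the model may now make a suggestion

def model_suggestion(application):
    """Stand-in for an LLM or scoring model; it may hallucinate,
    so it only *suggests* and its output is validated downstream."""
    return "approve" if application["annual_income"] > 50_000 else "review"

def approve_credit(application):
    verdict = regulatory_checks(application)
    if verdict:
        return verdict
    suggestion = model_suggestion(application)
    if suggestion not in {"approve", "review"}:  # check node 2: bound the output
        return "review: model output out of bounds"
    return suggestion

print(approve_credit({"age": 17, "annual_income": 60_000, "requested": 5_000}))
# → "reject: applicant under legal age", regardless of what the model says
```

The workflow does not make the model smarter; it makes the model's failures non-catastrophic, which is exactly the "last line of defense" role described above.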
2. Pragmatic engineering
Instead of letting an agent burn computing power and tokens on lengthy, error-prone reasoning, it is often better to directly call pre-designed, proven workflows and complete the task efficiently.
This saves more than 90% of resources when dealing with a large number of repetitive and regular tasks.
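The engineering pattern behind this claim is a simple dispatch layer: known, repetitive task types are routed to cheap deterministic workflows, and only the unrecognized long tail falls through to the expensive agent. A hypothetical sketch (the task types and handlers are invented):

```python
# Illustrative router: repetitive, regular tasks go to fixed workflows;
# only unrecognized tasks reach the token-hungry agent.

def invoice_workflow(task):
    # Pre-designed, proven steps: fixed cost, no model inference needed.
    return f"invoice processed: {task['id']}"

WORKFLOWS = {"invoice": invoice_workflow}

def expensive_agent(task):
    # Stand-in for lengthy, error-prone agent reasoning.
    return f"agent handled: {task['id']}"

def dispatch(task):
    handler = WORKFLOWS.get(task["type"], expensive_agent)
    return handler(task)

print(dispatch({"type": "invoice", "id": "A1"}))      # deterministic path
print(dispatch({"type": "odd_request", "id": "B2"}))  # falls back to the agent
```

If most of the traffic is repetitive (as it is in back-office scenarios), most requests never touch the model at all, which is where the resource savings come from.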
3. Effective encapsulation of domain knowledge
Workflow is essentially a structured encapsulation of human expert experience and industry best practices.
Through a combination of processes, rules, and tool calls, it solidifies the tacit knowledge, industry jargon, and subtle judgments of specific scenarios that are hard to fully teach to AI.
For the small and medium-sized enterprises that "can't even use Excel properly", Workflow is the real bridge to AI capabilities, not an obstacle.
The trouble is that many companies cannot even sort out their own workflows, and that alone is real technical work…
4. Flywheel system
The execution process of Workflow produces a large amount of structured, high-quality trajectory data (user actions, decision nodes, result feedback).
This data is the golden fuel for training and optimizing foundation models, enhancing their capabilities in specific domains.
Without this workflow data from real scenarios, the "vertical evolution" of general models would be water without a source.
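What "structured trajectory data" might look like in practice can be sketched as a logging step attached to every workflow run. The record schema below is invented for illustration; the idea is that each node execution emits a replayable record that can later be curated into fine-tuning data.

```python
# Sketch: each workflow run appends a structured trajectory record to a
# JSONL file. Field names are illustrative, not a standard schema.
import json
from dataclasses import dataclass, asdict

@dataclass
class TrajectoryStep:
    node: str     # which workflow node executed
    action: str   # what was done (tool call, decision taken)
    outcome: str  # result feedback: "ok", "retry", "human_override", ...
    ts: float     # timestamp of the step

def log_trajectory(run_id, steps, path="trajectories.jsonl"):
    """Append one run as a JSONL line: structured, replayable, trainable."""
    record = {"run_id": run_id, "steps": [asdict(s) for s in steps]}
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```

Unlike raw chat logs, these records carry the decision structure (which node, which action, what feedback), which is what makes them "golden fuel" rather than noise.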
To sum up, belittling Workflow and advocating "zero Workflow" under current technical conditions is either technological romanticism divorced from reality, or a well-designed battle for the entry point.
Although products like Genspark fly the Agent banner in their marketing, their actual improvement strategies include introducing professional data sources, parallel search, multi-agent cross-verification, expert review of content, and offline agents to ensure accuracy.
In essence, they are building a powerful, implicit workflow system! They want users to ultimately rely solely on their platform, keeping the data and capability entry points firmly in their own hands.
Those who denounce vertical models as "driving in reverse" often come from giants with strong general-purpose models and platform ambitions.
To be clear: what we oppose is the radical "zero Workflow" rhetoric that ignores current technical reality and belittles necessary engineering practice, not the "smarter, more autonomous" future direction that the Agent represents.
The pragmatic path of technical evolution is: the general model as the coordination layer, vertical workflows as the execution layer, which is exactly what we implemented in our own work scenario last year…
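The coordination-layer/execution-layer split can be sketched in a few lines. All names here are invented, and the coordinator is a trivial stand-in for a general model whose only job is intent routing; the domain work itself lives in vertical workflows it cannot bypass.

```python
# Minimal sketch of "general model as coordination layer,
# vertical workflows as execution layer". All names are hypothetical.

def contract_review_workflow(request):
    return "contract reviewed per legal SOP"

def expense_audit_workflow(request):
    return "expenses audited per finance SOP"

EXECUTORS = {
    "contract_review": contract_review_workflow,
    "expense_audit": expense_audit_workflow,
}

def coordinator_pick(request):
    """Stand-in for the general model: it understands the request and
    routes it; it does not execute the domain work itself."""
    return "contract_review" if "contract" in request else "expense_audit"

def handle(request):
    intent = coordinator_pick(request)   # coordination layer
    return EXECUTORS[intent](request)    # vertical execution layer

print(handle("please check this contract"))
# → "contract reviewed per legal SOP"
```

The division of labor is the point: the model contributes flexible understanding at the top, while each executor stays auditable, deterministic, and cheap to run.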
Epilogue
Manus' layoffs are a wake-up call for the entire AI industry: the era of "attention is all you need" has passed, and attention-won resources are quickly lost when traffic cannot be retained.
So: when the technology is not yet mature, over-hyping the "general agent" while belittling the necessary engineering path (Workflow) will only inflate a bubble and ultimately damage user trust and industry development.
Secondly, CEOs' anxiety needs to be channeled, not exploited to peddle unrealistic fantasies; otherwise there will be a backlash…
In the next 5-10 years, the key question for AI adoption is not "Agent replaces Workflow" or "Workflow hinders Agent", but how to turn AI applications into breakout products.
All in all: AI application is heavy engineering. We must respect technical boundaries, invest in data engineering, and face the difficulty of that engineering head-on. Manus' story may be just the beginning; there is still a long road ahead!