Andrew Ng's YC closed-door talk: 7 insights that redraw the AI survival map

At a closed-door YC meeting, Andrew Ng redrew the AI survival map in seven sentences: speed crushes perfection; Agents rewrite cognitive processes; natural language is a programming language; technical decisions can be reversed; PMs need to become rhythm designers; the entrepreneurial barrier is finding the right direction half a year early; AGI anxiety is pure noise. In a word: don't wait for the "ultimate model", run first and iterate.

Andrew Ng gave this closed-door talk at YC last month. I studied it carefully and found it excellent, so I'd like to share it with you:

1. Execution speed is the core variable, and it beats every illusion

"A vague idea → burning money; a concrete plan → printing money." How concrete does the plan need to be? Concrete enough that an engineer can start writing code the moment they hear it. Execution speed is not about parkour; it means quickly turning ideas into prototypes, then using real feedback to polish those ideas into products. How fast you can run is not about how smart you are, but about whether you can make the idea concrete and compress the verification cycle into hours.

Among the 10 valuable lessons proposed by Douwe Kiela (one of the original pioneers of RAG), one echoes Andrew Ng's: speed matters more than perfection.

In production deployments of RAG agents, speed is critical: get your product in front of real users early and collect their feedback.

The product doesn’t have to be perfect at the beginning, as long as it’s barely functional.

By iterating quickly, you can gradually refine your product to a level of “good enough.” Waiting too long for perfection can make the leap from pilot to production more difficult. Iteration is key to many successful enterprise-level AI deployments.

The competition in the AI era is no longer a competition for product functions, but a competition for trial and error speed and resources.

This is easy to say but hard to do, because enterprises also worry about one thing: the technical edge I paid dearly for today may vanish the moment the model updates in six months, or the moment a big company ships a similar product!

And speed always has a cost: agent platforms like FastGPT and Dify iterate quite fast, yet next to Coze they are nothing.


The point here: speed is one thing, but finding the right differentiation and being vertical enough may matter most.

2. Agents rewrite cognitive processes; they are not API wrappers

Many people treat an Agent as "a prompt plus a few rounds of API calls". But Ng goes deeper: an Agent is a structural unit that lets AI simulate "nonlinear thinking", the way a person writing an article outlines, checks sources, and revises repeatedly.

The essence of the agent workflow is to move AI from one-shot output to evolutionary construction: from stateless prompts to work units with memory, reflection, and collaboration.

This means that whoever can translate a business process into an agent structure gets to define the new system boundaries.
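To make "evolutionary construction" concrete, here is a minimal sketch of a draft → critique → revise loop. `call_model` is a hypothetical stub standing in for any real LLM API call; only the loop structure is the point:

```python
# Minimal agent workflow sketch: draft -> reflect -> revise.
# `call_model` is a hypothetical stand-in for a real LLM API call.

def call_model(prompt: str) -> str:
    # Stub: a real implementation would call an LLM provider here.
    if prompt.startswith("CRITIQUE"):
        return "The draft lacks a concrete example."
    if prompt.startswith("REVISE"):
        return "Revised draft (with a concrete example added)."
    return "First draft."

def agentic_write(task: str, max_rounds: int = 2) -> dict:
    """Turn a stateless prompt into a work unit with memory and reflection."""
    memory = []                                   # the agent's working memory
    draft = call_model(f"DRAFT: {task}")
    memory.append(("draft", draft))
    for _ in range(max_rounds):
        critique = call_model(f"CRITIQUE: {draft}")
        memory.append(("critique", critique))
        draft = call_model(f"REVISE: {draft}\nFEEDBACK: {critique}")
        memory.append(("revision", draft))
    return {"result": draft, "memory": memory}

out = agentic_write("Write an intro to agent workflows")
print(out["result"])
```

The returned `memory` is what separates this from a stateless prompt: every critique and revision stays available to later rounds or to other collaborating agents.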

The description may be a bit abstract; translated, it means: an Agent can serve as the minimum unit of a task, and you prepare complete contextual data for it so that it meets the minimum bar of acting like "a person".

The core here is shaping the trinity of model, algorithm, and data in an application, and the hard part is data engineering.

If that still sounds abstract, the simple version is: talking with this AI should feel very much like talking with a person, which means it should not fall apart over multiple rounds of conversation.

Multi-turn dialogue is goal design: first there must be a goal, then a methodology; the methodology forms a process, and the process eventually becomes a set of SOPs.

For example, fan A posts in the group: "How to start an AI side hustle with a monthly income of 100,000? Come quickly if you want to hear it!!" This kind of title is very tempting.

How should the AI react? That reaction plan is the SOP. The simplest prompt here: "Your character is a seasoned companion chatter who can always keep the topic going well; next, I will talk to you as a client, and you play your role."

In practice, as the rounds of chat piled up, the AI's output became less and less substantial, and even started to irritate me.

If you want the AI to be a bit more alive, the strategy is: give the AI a goal, or give it a more specific task to complete.

In this specific scenario, you can write out a point of view for the AI and then have the AI argue for it. For example: a fan in the group posts "How to start an AI side hustle with a monthly income of 100,000? Come quickly if you want to hear it!!", a very tempting title.

In this regard, your view is that these are essentially "success traps" built on anxiety and information asymmetry, and that the vast majority of ordinary people who blindly follow them will end up the harvested, not the beneficiaries.

Your specific thoughts are:

{

1. The underlying logic of the title is "speculative temptation"

Precisely hits human weaknesses: this type of title combines three psychological triggers, "low threshold (side hustle)", "high return (monthly income of 100,000)", and "urgency (come quickly)", aiming at people's desire to get rich fast and at the information asymmetry of the AI industry.

Survivorship-bias packaging: the rare success cases usually rely on special resources (industry accumulation, traffic dividends, gray-area operations), but this background is deliberately hidden in the promotion to create the illusion that "anyone can replicate it".

2. The reality of the AI industry: technical barriers and market saturation

Zero threshold ≠ monetization: the people who actually make money with AI usually fall into two groups:

• Professionals who master core technologies (e.g. model fine-tuning, API development);

• Practitioners who use AI tools to boost an existing business (e.g., designers using MidJourney to work faster).

Most courses and projects advertising "no experience, one-click arbitrage" just teach people to use ChatGPT to generate low-quality content, which cannot compete in a highly homogeneous market.

The passive-income myth does not hold: AI tools do improve efficiency, but the so-called "earn while you sleep" cases (batch-generated short videos, thin wrapper AI apps) usually collapse quickly when platform rules change or competition intensifies.

3. Sober coping strategies

Be wary of shortcut thinking: every sustainable return corresponds to equivalent value delivered (time, skills, resources); AI is a tool, not a money-printing spell.

Work backwards from the demand side:

• If you have expertise in a field (e.g., legal, medical), AI can help you scale the service (e.g., intelligent consulting);

• If you have customer resources, AI can help improve delivery efficiency (e.g., automated report generation).

A "side-hustle template" detached from a concrete scenario is basically a castle in the air.

Validate the logic instead of paying impulsively. When you encounter this kind of promotion, ask three questions first:

• Does the promoter's profit model depend on recruiting more people?

• Has the case data been verified by a third party?

• Are the required investments (time/capital/skill) deliberately downplayed?

4. Your core position: you are not against AI side hustles, you are against anti-intellectual, speculative propaganda.

The opportunities of the AI era belong to:

• People willing to go deep in vertical fields (e.g., using AI for refined operations in cross-border e-commerce);

• People who can solve specific problems (e.g., customized AI customer-service solutions for SMEs).

"Monthly income of 100,000" is a result, not a goal: it should emerge naturally from the combination of ability, resources, and timing, not from copying a "cheat sheet".

}

Once the AI has a subjective point of view, its chat behavior becomes much more defined:

You are a seasoned companion chatter. A fan in the group posted: "How to start an AI side hustle with a monthly income of 100,000? Come quickly if you want to hear it!!"

This kind of title is very tempting.

In this regard, your view is that these are essentially "success traps" built on anxiety and information asymmetry, and your specific reasoning is the full {…} viewpoint block given above, inserted verbatim.

Your tasks are as follows:
{
1. Continue playing the companion-chat role;
2. Hold firmly to your view: every time you speak, reinforce it positively or negatively (without being too blunt), and supply supporting arguments as you do;
3. There are many other fans in the group; each time you speak, try to win the recognition of as many of them as possible;
4. If a voice in the group opposes your view, push back on it hard;
}
Now, please talk about: "How to start an AI side hustle with a monthly income of 100,000? Come quickly if you want to hear it!!" This kind of title is very tempting.

Speak first, and then I will play the fans in the group and chat with you.

Once the dialogue carries emotion, the effect is much better. Give the AI a goal, give the AI emotion, and let the AI form an SOP: that is the trick of multi-turn dialogue!
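The SOP above can be wired up as code. This is a minimal sketch, assuming a hypothetical `chat_completion` stub in place of a real chat API; the key idea is that the persona, the fixed viewpoint, and the full history are re-sent on every turn:

```python
# Multi-turn dialogue sketch: persona + viewpoint + history in every call.
# `chat_completion` is a hypothetical stand-in for a real chat API.

SYSTEM_PROMPT = (
    "You are a seasoned companion chatter in a fan group. "
    "Your fixed view: 'monthly income of 100,000' AI side-hustle pitches are "
    "success traps. Defend this view in every reply."
)

def chat_completion(messages: list) -> str:
    # Stub: a real implementation would send `messages` to an LLM API.
    last = messages[-1]["content"]
    return f"[{len(messages)} msgs in context] My view stands: {last[:20]}..."

def run_turn(history: list, user_text: str) -> str:
    """One SOP-driven turn: append user input, call the model with full context."""
    history.append({"role": "user", "content": user_text})
    reply = chat_completion([{"role": "system", "content": SYSTEM_PROMPT}] + history)
    history.append({"role": "assistant", "content": reply})
    return reply

history = []
run_turn(history, "Is the 100k-a-month AI side hustle real?")
run_turn(history, "But my friend says he made money!")
print(len(history))  # 4: two user turns, two assistant replies
```

Because the system prompt carries a goal and a stance, the model has something to defend in every round instead of drifting into filler.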

With this in mind, you can revisit the agent definition above…

3. AI programming ≠ being able to write code; it is the ability to express intent

Andrew Ng says programming today is a "new form of expression". The core skills of the future: clearly express what you want, combine different AI modules into a solution, and have enough technical judgment to know what to call and what to prompt.

This calls for cross-disciplinary talent; the more cross-disciplinary someone is, the better they can conceive and express new products.

In AI-native programming, don't chase perfect code early on; aim to build a system that can be quickly rewritten, verified, and iterated.

This sentence is dense, and it matches the industry's recent understanding: we have gradually entered the era of natural-language programming, and prompts are in effect a programming language, so they too need organizational structure.
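If prompts are a programming language, they deserve the same organization as code. A minimal sketch of that idea, with all section names being my own illustrative choices, composing a prompt from reusable, reviewable parts:

```python
# Prompts as programs: compose a prompt from named, reusable sections,
# the way code is composed from functions. Section names are illustrative.

from string import Template

ROLE = "You are a $role."
CONTEXT = "Background:\n$context"
TASK = "Task: $task"
CONSTRAINTS = "Constraints:\n- Answer in $lang\n- At most $max_words words"

def build_prompt(role: str, context: str, task: str,
                 lang: str = "English", max_words: int = 200) -> str:
    """Assemble the prompt sections in a fixed, reviewable order."""
    sections = [
        Template(ROLE).substitute(role=role),
        Template(CONTEXT).substitute(context=context),
        Template(TASK).substitute(task=task),
        Template(CONSTRAINTS).substitute(lang=lang, max_words=max_words),
    ]
    return "\n\n".join(sections)

prompt = build_prompt(
    role="senior product manager",
    context="We ship an AI customer-service bot for SMEs.",
    task="Draft three onboarding questions for a new client.",
)
print(prompt.splitlines()[0])  # "You are a senior product manager."
```

Once prompts are structured like this, they can be versioned, diffed, and reviewed the way any other source code is.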

From hands-on experience with Cursor: as an AI-assisted development tool it does improve efficiency, but if the workflow stays unchanged and AI merely adapts to the current process, the final gain probably tops out around 30%.

The real 10x efficiency increase is not brought about by AI itself, but by combining it with AI to reshape workflows.

If we can optimize requirements management, UI design standards and component-library integration, API integration, test automation, and so on, the potential of AI-assisted programming tools can truly be unleashed, achieving exponential efficiency gains.

This echoes Andrew Ng's point: efficiency requires clear expression, and behind clear expression lies systematic data organization and structural reshaping.

4. Technical architecture is shifting from "one-way doors" to "reversible decisions"

In the past, choosing the wrong tech stack wasted half your effort; choose wrong now, and you may be able to refactor it next week.

The core point is to lower development cost and raise the frequency of trial and error; organizations must learn to "judge quickly and regret quickly".

Judgment is more demanding than before, and the update cadence has gone from monthly to daily.

Technical decision-making is also being reconstructed: from "the one true direction" to "building a closed loop that can be quickly verified and rolled back".

My bold guess at what Andrew Ng means here: don't agonize over cloud platforms or code stacks, because AI makes them easy to iterate away.
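A reversible decision can also be engineered deliberately. A minimal sketch, with both vendor classes as illustrative stubs rather than real SDKs, hiding the model provider behind one interface so swapping it is a small refactor, not a rewrite:

```python
# "Reversible decision" sketch: hide the model vendor behind one interface.
# Both provider classes are illustrative stubs, not real SDK calls.

from typing import Protocol

class TextModel(Protocol):
    def generate(self, prompt: str) -> str: ...

class VendorA:
    def generate(self, prompt: str) -> str:
        return f"A:{prompt}"          # stub for vendor A's SDK call

class VendorB:
    def generate(self, prompt: str) -> str:
        return f"B:{prompt}"          # stub for vendor B's SDK call

def summarize(model: TextModel, text: str) -> str:
    # Application code depends only on the interface, never the vendor.
    return model.generate(f"Summarize: {text}")

# Swapping vendors is a one-line change at the call site:
print(summarize(VendorA(), "quarterly report"))  # A:Summarize: quarterly report
print(summarize(VendorB(), "quarterly report"))  # B:Summarize: quarterly report
```

With this seam in place, "we chose the wrong model" stops being a one-way door and becomes next week's refactor.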

5. Product feedback has become the bottleneck; the PM should shed the coordinator role and evolve into a rhythm designer

With engineering efficiency up 10x, the binding constraint becomes: which feature? Is it wanted? How do you collect feedback fast and accurately?

Ng said he has seen PM-to-engineer ratios of 2:1, not as an anomaly but as a reality.

In future organizational optimization, the demand for programmers is actually falling; fewer people are needed to write code, and raising the speed at which the organization captures user signals becomes the top priority.

Our team is practicing this, but don't conclude that programmer headcount is simply shrinking; what actually happens is that former senior engineers move directly into PM roles. In vertical applications such as AI doctors:

Most doctors (likewise lawyers, teachers, and other professionals) cannot structure their own knowledge. Only when a doctor who also has senior product and technical skills completes their daily work with AI's help can the AI application be realized.

What this reflects is not a shrinking number of technical staff, but a growing demand for interdisciplinary talent.

6. Successful entrepreneurship means finding the right direction half a year before everyone else

"Can it be done" is not the question; "is it worth doing" is the key. AI makes things faster, but it also raises the cost of heading in the wrong direction, because every wrong step amplifies the downstream waste of resources.

Hence the core mechanism he emphasized: a rapid prototyping-and-validation loop + multiple signal sources + a system for updating your intuitions.

The faster you can update, the more accurate you can make decisions.

I feel this sentence deeply: I was doing an AI + management startup myself, but decided to give it up in May this year. Two points:

First, given the nature of the Internet, the first people to absorb AI knowledge are mostly in the Internet industry, so startups founded by product-and-engineering people are the most common.

But don't let the hammer in your hand make everything look like a nail! "AI applications" is far too big a topic: AI can reconstruct 80% of past scenarios, and if you skip serious market research and take things for granted, you are very likely headed for a first failure.

Does that mean we must be extra cautious? Not quite, and here is the second point: yes, you should do your research before you move, but in the real world you cannot get genuine market feedback without making a move and paying a price. So rapid trial and error to find a niche that suits you is the right way for ordinary people to start in AI applications!

Finally, AI applications get copied very fast. For individuals, barriers come from two things: first, quickly turning business data into a flywheel; second, doing the dirty, tiring work that others are unwilling to do.

7. Finally, AGI and the “AI threat” are not what you should be worried about right now

Ng is very wary of AGI hype and of demonizing AI safety. He was explicit: the real risk is not that AI is too strong, but abuse of power plus closed ecosystems; what we should actually do is use AI responsibly and share the technology dividend openly.

Closed platforms + security rhetoric = a shield for technology monopolies; open source + diverse collaboration is the moat of AI innovation.

There is no need to read policy into this; instead, let's talk about the AGI anxiety many founders actually feel.

For example, one night last month a founder in the advertising business called at midnight, first apologizing, then pouring out his anxiety.

The reason: a senior outside investor had had a deep conversation with him. The investor liked what he was doing but not his product, the logic being that wrapper products have no moat, and what he was building would likely be completely subverted by ChatGPT within a year.

Lao Wu himself is a business founder with limited insight into AI trends, so he called me to gauge the next window of opportunity. That is Lao Wu's problem, and a worry many founders share.

We often hear that "the model will eat everything". Three things here actually deserve attention:

First, OpenAI's Sam Altman has proposed an L1-to-L5 ladder for Agent development and claims it will be reached within 10 years;

Second, Rich Sutton, the father of RL, argued in his 2019 essay "The Bitter Lesson" that general methods riding on raw compute always win by an overwhelming margin. The success of AlphaGo and GPT-3, for example, came not from intricate rules but from simple algorithms (neural networks + massive data) backed by large-scale compute.

Third is the well-known Manus, whose signature line is "less structure, more intelligence".

OpenAI, the strongest in AI, and Manus, the strongest at wrapper marketing: their behavior fits the most popular line in the AI world, "attention is all you need".

How so? They compete for attention first, but in essence they are competing for traffic entry points.

From the browser and office-software entry-point wars of the Web 1.0 era to the chat and travel-app entry-point wars of the mobile Internet era, everyone has been playing the same game: fighting for the entry point. The AI era is doing it again.

Cursor, for example, is also a wrapper; it recently raised $900 million at a $10 billion valuation, yet there are plenty of similar tools, such as Google's Gemini, ByteDance's Trae, and Alibaba's Tongyi Lingma.

The competition of the AI era looks like a battle of technical products, but it is really a battle for entry points; once the base model is good enough, the match point is the application.

Back to Rich Sutton: I can only say the old man takes things too simply…

AlphaGo's success rests on Go's fully transparent rules and finite state space. The real world is far more complex; if training data could describe 70% of the real world, that would already be astonishing!

The so-called compute-crushes-everything thesis has a major premise: compute must act on the right architecture. If the foundation model cannot represent a certain kind of knowledge, simply stacking compute may only yield errors that are "self-assuredly rigorous and precise".

To sum up: at this stage, most companies promoting "the model will eat everything" are really competing for traffic and attention, and you don't need to pay much attention to them.

Conclusion

Andrew Ng's insights read like a clear AI survival map, revealing the core survival principles of this technological revolution:

• Only speed is unbeatable: compress the "idea → prototype → validation" cycle to hours; speed is the only weapon that crushes illusions.

• The soul of the Agent: go beyond API wrappers, give AI a sense of purpose and the ability to evolve, and reconstruct nonlinear cognitive processes.

• Agility is king: technology selection has shifted to reversible decisions; low-cost trial and error beats perfect planning.

• Dig for gold in verticals: don't fear the "model eats everything" thesis; barriers grow from deep data work in vertical scenarios and from doing the dirty work others won't.

Finally: running fast, aiming well, and going deep matters more than fantasizing about AGI. Don't be intimidated by the model's iteration speed, just go for it!
