Some time ago, I compiled the prompts of more than a dozen mainstream AI products, and after reading them many product managers exclaimed: do prompts really have to be this intense?
And they really are intense. For example, the system-level prompt of Manus in the screenshot above runs to 10,208 characters across a full 250 lines; the Agent-mode prompt of the programming tool Cursor is 18,543 characters, and its Chat-mode prompt is 12,568 characters……
But a product manager who asks that question has clearly never shipped prompts in a real scenario.
Developers aren't padding these prompts for the word count: for a system prompt that has to govern the whole product, anything below this magnitude simply cannot keep the large model within a controllable range.
The less ordinary users need to write prompts themselves, the higher the bar for the prompts written by the product managers behind each product.
No matter how good a model is, it cannot guess what you want at a glance; prompts that effectively discipline the large model into completing the assigned task are an essential part of the product.
A product manager who cannot effectively discipline large models will most likely be left behind by the times.
"Prompt engineering" is a broad concept, so today we will only talk about its end product: "engineering-grade prompts".
What is an engineering-grade prompt?
This is a term I coined, but anyone working on an AI project will know what it means at a glance.
So-called "engineering-grade prompts" are the prompts deployed in real projects: they constrain the large model to complete its task and output the expected content, and they keep the whole project running stably and automatically.
Unlike the prompts we type when chatting with DeepSeek day to day, engineering-grade prompts run where no one sees them.
This is not a DeepSeek conversation, where you can revise, ask again, or simply reopen the window when the output is bad. As a link in a project pipeline, the prompt must produce stable output, often in a specified format, and a single failure crashes the entire chain.
For the product managers and engineers on an AI project, the large models the project calls are no longer your Party B (vendor) or your interns; they are partners serving "Party A" (the client) alongside you. Everything this "partner" needs to know, you must convey through the prompt.
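To make that concrete, here is a minimal sketch (my own illustration, not code from any product named here) of why a downstream step is so unforgiving about format:

```python
import json

def downstream_step(model_reply: str) -> dict:
    # The next link in the chain assumes the model honored the format contract.
    data = json.loads(model_reply)   # raises ValueError on any non-JSON reply
    return {"user": data["name"]}    # raises KeyError if a promised field is missing

# One chatty preamble like 'Sure! Here is the JSON: {...}' and the whole
# chain dies with an exception -- there is no human around to reopen the window.
```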
Next, let’s talk about a few details of writing “engineering-grade prompts”.
3 details that must be taken into account
In actual scenarios, large models love to “make trouble” in the following three situations:
1. Not outputting in the required format, so the downstream steps cannot carry on;
2. Hallucinating and fabricating answers;
3. Ignoring your requirements and task guidance, and improvising freely.

The second point is where prompts go wrong most easily.
Every week in "AI Event of the Week" I share a teardown of one engineering-grade prompt, and even these "high-end" prompts often contain hallucination-inducing instructions.
Structured output
Regarding the first point, format constraints, the method is simple:

1. Give the large model examples of the output format;
2. Enable JSON Mode in the API request (see the sketch below).

One detail: when giving the model format examples, never make the examples too specific. Overly specific examples make the model imitate them blindly and lose flexibility.
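Here is a minimal sketch of the second method, assuming an OpenAI-compatible chat API; the model name and the JSON keys are illustrative, not from this article:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # any model that supports JSON Mode
    # JSON Mode: the API guarantees the reply parses as a single JSON object
    response_format={"type": "json_object"},
    messages=[
        {"role": "system",
         "content": "Extract the user's facts and preferences. "
                    "Reply as a JSON object with keys: name, company, preferences."},
        {"role": "user",
         "content": "I'm Zhang Jia from Qidian Classroom; keep the answers plain."},
    ],
)
print(resp.choices[0].message.content)  # e.g. {"name": "Zhang Jia", ...}
```

Note that JSON Mode only guarantees syntactically valid JSON; which keys and values appear still has to be pinned down by the prompt itself.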
For example, when I was building a Dify conversation flow, I wanted the model to extract the user's preferences from the conversation and store them as memory.
I used a prompt like this:
Infer facts and preferences from the information entered below, and output structured information to be stored as memory.
Scope of memory:
– Factual information such as the user's name, company, industry, etc.
– User preferences for responses
– Problems the user is currently working through are not within the scope of memory storage
Constraints on inferred facts and preferences:
– Facts and preferences should be concise and clear
– Don't write "This person likes pizza"; write "loves pizza" instead
Example output:
Name: Zhang Jia; Company: Qidian Classroom; Preferences: prefers plain-language answers.
And the model gave me back this ……
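Presumably the screenshot shows the model parroting the sample values. A placeholder-style example, a sketch of my own rather than the article's actual fix, avoids the blind imitation:

```text
Example output (placeholders, not real values):
Name: <user's name, or "unknown">; Company: <user's company, or "unknown">; Preferences: <one short phrase, e.g. a preferred answering style>
```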
How to avoid model hallucinations
It's true that large models make up content they don't know, but 99% of the time the making-up is caused by humans.
In the earlier screenshot, many of the module requirements in the DeerFlow product's prompt will induce model hallucinations:
(The excerpt below is translated.)
For example, the section on information quantity and quality standards:
1. **Comprehensive Coverage**:
– The information must cover all aspects of the topic
– Multiple points of view must be presented
– Mainstream and non-mainstream views should be included
2. **Sufficient Depth**:
– Superficial information is not enough
– Detailed data points, facts, statistics are required
– Requires in-depth analysis from multiple sources
3. **Sufficient Quantity**:
– Collecting “just enough” information is unacceptable
– The goal is to obtain a wealth of relevant information
– More high-quality information is always better than less
Causes of hallucinations:

1. "The information must cover all aspects of the topic" gives no metric for "all aspects"; to complete the task the model will generate as many "aspects" as it can, and most of them have to be made up;
2. "Mainstream and non-mainstream views should be included" never defines what counts as mainstream or non-mainstream; given the model's inherent randomness, the hallucination rate is bound to jump;
3. Faced with "Detailed data points, facts, statistics are required", the model's most efficient strategy is to generate fake content, because that saves tokens;
4. "More high-quality information is always better than less" is pure PUA-style goading; it all but tells the model to invent "high-quality information";
5. ……

There are no shortcuts to avoiding model hallucinations:
1. Only give the model tasks that you can define clearly and for which you have enough positive and negative examples;
2. Leave the model a way out in scenarios you are not sure about (see the sketch below).

And the kind of PUA-style demands you'd aim at subordinates in a weekly meeting must never go into a prompt.
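For point 2, the "way out" can be as small as one clause. The wording below is my own sketch, not taken from any of the dissected prompts:

```text
If the input does not contain enough information, output exactly:
{"status": "unknown", "reason": "<one short sentence>"}
Never invent names, numbers, or sources to fill a gap.
```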
I recommend everyone read and study the pdf-craft project; when guiding the large model to clean up formatting after OCR, its author wrote the most patient guidance prompts I have seen to date.
I dissected this project in the 0407-0413 issue of "AI Event of the Week", and I recommend it again here; come join and study it together.
Prevent the model from forgetting the requirements
Large models have a fatal flaw: their attention drifts.
Although any large model today can offer a 128K context window, its attention cannot stay focused on all of the text in that window.
You must have run into this: send a model an English article or some code, and even if your instructions are in Chinese, it will reply in English on its own.
(Screenshot: Gemini 2.5 Pro via Raycast)
The core reason is that as the prompt (and the context) grows, the model's attention gradually goes "out of focus".
"Out of focus" here does not mean scattered; the attention concentrates on the later positions of the context.
The solution is simple: put the instructions you consider most important at the end of the prompt.
My habit is to describe the task first and finish with a list of every requirement that must be followed, as in the sketch below.
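In code, that habit looks something like this (a minimal sketch; the variable names and the requirement list are illustrative):

```python
# Assemble the prompt so the hard requirements sit at the very end,
# where the model's attention concentrates as the context grows.
REQUIREMENTS = [
    "Reply in Chinese, even if the input text is in English.",
    "Output a single JSON object and nothing else.",
    'If information is missing, output "unknown" instead of guessing.',
]

def build_prompt(task: str, context: str) -> str:
    rules = "\n".join(f"- {r}" for r in REQUIREMENTS)
    return (
        f"{task}\n\n"
        f"Input:\n{context}\n\n"
        # The must-follow list goes last, closest to where generation starts.
        f"Requirements you must follow:\n{rules}"
    )

print(build_prompt("Extract the user's facts and preferences.", "..."))
```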
How to improve your prompt "writing"
There is no clever trick: study how excellent prompts are worded, what they constrain, how they guide the model, and how they write their examples.
Channel 1: the prompts of the 15 excellent products in the screenshot at the start of this article, translated and studied one by one. I have organized them into a Tencent Docs file; join the AI learning circle and search for "engineering-grade prompt documents" to find it.
Channel 2: every week I dissect one prompt in "AI Event of the Week"; as of this writing I have dissected and shared 18 projects. Signing up for any of "AI Action Faction", "100 Questions on AI Job Interviews", or "AI Large Model Application Implementation" unlocks reading access to this document (2025).
Channel 3: regularly follow GitHub Trending; every AI project that makes the list is worth studying.
About AI Learning Action Circle
I have been running the "AI Learning Action Circle" with Qidian Classroom since 2023. So far it has accumulated 1,500+ topics, and for 500+ consecutive days I have discussed practical AI applications with nearly 4,000 AI-focused practitioners.
The study circle currently has three core learning and exchange “positions”:
1. Knowledge Planet: the core channel where knowledge, resources, and techniques accumulate, searchable at any time;
2. WeChat groups: currently 6 of them, where circle members share their AI experience every day;
3. "Water Blowing Bureau" livestream: 19:30-21:30 on weekday evenings, one AI application theme per session.
Position 1: Knowledge Planet
On the Planet I mainly maintain three tags: "Practical Sharing", "Toolbox", and "Intelligence Agency".
"Practical Sharing" collects prompts and efficiency tools that can be applied directly in daily work and life. The Step-Back prompt in the screenshot above is extremely useful, comparable to o4. The prompts for every AI application demonstrated on the official account and in livestreams also sit under this tag.
"Toolbox" shares little-known and popular AI tools and information alike. I filter out anything too technical or too over-hyped; what goes under this tag is all directly usable fun.
The Planet also has a "column" system, currently positioned much like the tags.
If you want a place where you are always first to learn the latest, most practical AI news and skills, and a circle of peers you can consult and discuss with whenever you hit an AI application problem, this is it.
Position 2: WeChat communication group
We have set up WeChat groups for circle members, and the 6 groups are now nearly full.
In the WeChat groups there is an AI morning briefing every day, plus "news-reading time" in the morning and afternoon, along with my daily hands-on impressions of all kinds of AI tools, prompt collections and reflections, and synchronized interpretations of industry news.
You can also discuss any AI-related tool or application in the groups, and you can almost always find an answer.
Scan the code now to receive a 50 yuan instant discount
Position 3: "Water Blowing Bureau" livestream
Over the past year I have done 130 livestreams for the study circle on AI applications, hands-on practice, and hot-topic interpretation, 257 hours in total!
More than a dozen of them were closed-door livestreams only study-circle members could watch, with an average viewing time of over 1 hour, sometimes approaching 2!
Without real substance, the average watch time would never reach that level.
Scan the code now to receive a 50 yuan instant discount