When the AI wave swept in, product managers found themselves at a crossroads: on one side, strategy-driven product planning; on the other, user demands where experience is king. In this game between technology and human nature, how do you find the balance, neither lost in the rational deduction of the “left brain” nor indulging the perceptual imagination of the “right brain”? This article dismantles the thinking behind AI products from the dual perspectives of product strategy and user experience, and maps out new coordinates for product people in the AI era.
That product managers must focus on user experience is old news, so today let’s talk about strategy design.
Why devote a separate article to it?
On the one hand, today’s products are seriously homogenized: products in the same field converge in core flows, design structure, and visual style. On the other hand, under the wave of hard AI technology, it is hard to truly understand a product from its appearance alone; what differentiates products is no longer flashy visuals or ingenious interactions, but the underlying strategy.
Returning to today’s topic, my conclusion is: in the era of large-model products, building product strategy has become scarcer and more central than ever. The main reasons are as follows:
First, the generalized capability of large models is not plug-and-play; it depends on how you encapsulate specific capabilities into product features. This is a constrained optimization problem: given current model capabilities, find the business scenario that can produce the most value, pick a point that is a hard need, controllable, and with measurable ROI, and break through there first. This translation relies heavily on the product manager to define applicable scenarios and constraint boundaries;
Second, cost sensitivity. Unlike traditional software services, whose marginal cost tends toward zero, every model inference carries a real cost, and product managers must make resource trade-offs across call frequency, peripheral engineering, and user paths. For example, you can use engineering methods to make the large model as stable as possible: rule engines, knowledge graphs, RAG, prompt templates, review and fallback mechanisms, and so on, keeping the model’s output cost-controllable and the experience stable, as in the sketch below.
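To make this concrete, here is a minimal Python sketch of such cost-control engineering: a rule engine answers high-frequency questions for free, a cache deduplicates repeat inferences, and only then does the model get called. `call_llm`, the rule table, and the cache policy are hypothetical placeholders, not any specific vendor’s API.

```python
import hashlib

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder for a real model API call.
    raise NotImplementedError("wire up your model provider here")

# Rule engine: high-frequency questions answered at zero inference cost.
RULE_ANSWERS = {
    "refund policy": "Refunds are accepted within 7 days of delivery.",
}

_cache: dict[str, str] = {}  # repeated questions cost one inference, not N

def answer(question: str) -> str:
    q = question.strip().lower()
    # 1) Rules first: stable output, no inference spend.
    for key, reply in RULE_ANSWERS.items():
        if key in q:
            return reply
    # 2) Cache next: skip the model for questions we've already paid for.
    h = hashlib.md5(q.encode()).hexdigest()
    if h in _cache:
        return _cache[h]
    # 3) Only now pay for an inference, behind a prompt template.
    reply = call_llm(f"Answer concisely and factually: {question}")
    _cache[h] = reply
    return reply
```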
Finally, there is the uncertainty of feedback. Even if you find the right application scenario and the engineering methods are in place, the product’s output is nonlinear; without a closed-loop feedback mechanism it remains a static test item. Real users’ usage data, problem records, and satisfaction feedback are the basis for iterative optimization, so you need to design closed-loop feedback and capability-evaluation mechanisms, and that is strategy work.
When the demand scenario is clear, user experience design and product strategy design should go hand in hand. If user experience design is like building a house (planning the room layout, functions, and decoration style), then product strategy design is like installing the water, electricity, and heating that let the whole house run properly, efficiently, and economically. Neither is dispensable.
1. Strategy products: start from the business, drive with data
Let’s start with strategy products.
The essence of a strategy product is to formulate scientific product strategies based on data and business insight, and to land those strategies as concrete, executable product features through data systems, algorithmic strategies, and capability encapsulation.
Two focal points: start from the business, and drive with data.
Figure 1: Core work content of strategy products
For example, suppose you are responsible for optimizing the “Guess What You Like” recommendation strategy on an e-commerce platform: the module’s recommendation click-through rate is low, user conversion is poor, and overall GMV suffers.
Your work will inevitably cover the following:
1) Identify strategic opportunities from a business perspective:
Low conversion rate → inaccurate recommendations → unmet user interests. Turn the business problem into a strategy question: how do we make the recommendation results more relevant to user interests?
To do this, you may need to adjust the recommendation priority logic and recall strategy. You can define target scenarios and priorities: raise the click-through rate of homepage recommendations plus the downstream conversion rate → prioritize the “highly active users + high-priced products” scenario.
2) Data-driven: design the underlying data system:
- Input data: user profile (gender, age, consumption preferences), browsing behavior (recent clicks, dwell time), product attributes (category, price, promotion strength);
- Output metrics: recommendation click-through rate (CTR), recommendation conversion rate (CVR), GMV contribution per recommendation slot.
On this basis, you can work with the data team to add user-interest tag fields, build the “recommendation behavior → conversion” mapping, and improve event tracking.
3) Link the strategy to models and algorithms:
Building on the previous step, you feed user interest tags into the algorithm to recall candidate products, which the model then scores and ranks. You also need to work with the algorithm team to prioritize recalling categories the user has interacted with frequently of late, and to apply personalized weighting, such as the user’s historical spend × click preference. A minimal sketch follows.
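As an illustration only (the field names, the 1.2 recency boost, and the weighting formula are invented for this example, not a production recipe), the re-ranking could look like:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    item_id: str
    category: str
    base_score: float  # relevance score from the ranking model

def rerank(candidates: list[Candidate],
           recent_categories: set[str],
           user_spend: float,
           click_pref: dict[str, float]) -> list[Candidate]:
    """Boost recently-interacted categories; weight by spend x click preference."""
    def final_score(c: Candidate) -> float:
        boost = user_spend * click_pref.get(c.category, 0.0)
        recency = 1.2 if c.category in recent_categories else 1.0  # assumed boost
        return c.base_score * recency + boost
    return sorted(candidates, key=final_score, reverse=True)
```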
4) Formulate the product solution, covering strategy logic, interaction design, and back-office configuration:
- Strategy logic description: for example, “Guess What You Like” prioritizes high-converting products in the categories the user viewed in their last three visits;
- Interaction design: the recommendation slot’s style stays unchanged; the strategy is adjusted behind the scenes so users perceive no change while interacting;
- Back-office configuration: metrics are computed in the data warehouse and then released through the strategy operations platform, forming a distribution strategy for selecting users.
5) Strategy synergy and conflict resolution:
With multiple strategies, conflicts are inevitable. For example, the homepage also carries a “limited-time promotion” module, and both modules want the same traffic.
This is common, and you need to define the coordination mechanism between strategies: design a priority configuration system and dynamically assign recommendation slots based on user profiles. For example, high-spend users see “Guess What You Like” first while low-spend users see promotions first (sketched below), and all recommendation strategies enter a unified strategy operations platform for continuous optimization.
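A hypothetical slot-assignment rule might look like the following; the threshold and module names are illustrative, not from a real system:

```python
def assign_home_slot(user_profile: dict) -> str:
    """Route the homepage recommendation slot by user profile."""
    if user_profile.get("annual_spend", 0) >= 5000:  # assumed high-spend cutoff
        return "guess_what_you_like"   # high-spend users: personalization first
    return "limited_time_promotion"    # low-spend users: promotions first
```

In practice the threshold would live in the strategy operations platform as runtime configuration rather than in code, so operators can tune it without a release.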
That is the closed loop of a product manager’s strategy-building work.
In the same way, in AI product strategy design, product managers need to turn business scenarios into a deployable model strategy system, covering strategy design, data support, validation feedback, and productization, completing the full link of business goals → model capabilities → product form.
2. AI product strategy: tuning model capabilities and building peripheral engineering
Beyond the commonalities, let’s talk about the core differences between AI product strategy and traditional strategy design.
Looking back, what we used to call product strategy was largely a rule- and data-driven design paradigm: people formulate the strategy rules + the system executes the logic, e.g., recommendation strategies, pricing logic, ranking rules, matching mechanisms, task compliance logic. Behind it are:
- Structured data (tags, metrics, user behavior, etc.)
- Explainable logic design (if-then rules, weight models, A/B tests, etc.)
- Controllable, adjustable system architecture (strategy platform, rule engine, scheduled jobs, etc.)
AI products have changed this paradigm.
On the one hand, large models introduce “capability as a service”. Base models (e.g., GPT, Gemini, Claude) provide general language understanding and generation, and the system no longer relies on specific rules, but on:
- Learning patterns from data (training)
- Driving behavior with natural-language prompts
- Improving contextual adaptability through plug-in knowledge or model fine-tuning
The underlying paradigm: build capabilities → apply capabilities → collect feedback → reinforce capabilities. The algorithm team leads the pre-training of the base model, while the product manager spends more energy on tuning model capabilities; fine-tuning, for example, is the action most often considered and most cautiously taken.
On the other hand, strategy design has shifted from “explicit logic” to “capability configuration”. Traditional strategy input meant writing rules and setting weights; now it leans toward analyzing layer by layer: designing the prompt structure, configuring the knowledge base, and judging whether the model needs fine-tuning. For iteration, traditional products adjust strategies mainly via A/B tests, while AI products improve through multiple rounds of data → feedback → capability iteration.
True, a trained model has general and domain knowledge plus reasoning ability, but it may not fit your specific business scenario. Strong as it is, a large model is inherently unstable, uncertain, and blind to its own boundaries, so a substantial engineering system must be built around it to constrain and monitor it.
What is peripheral engineering? Peripheral engineering means enhancing the model’s practicality and reliability through surrounding capabilities, without changing the large model’s own parameters or training corpus, so that it better serves vertical business needs.
In other words, to truly turn a large model from “capability” into “productivity”, you must build a complete engineering system around it; this is the so-called peripheral engineering.
Generally speaking, peripheral engineering mainly covers several aspects:
- Prompt engineering: designing and writing prompt text to guide the model to generate content that meets specific requirements;
- Knowledge base system (RAG): combining external knowledge bases (such as enterprise documents, FAQs, databases) for retrieval-augmented generation. Examples: enterprise knowledge Q&A, IT technical support, financial/legal vertical assistants;
- Web access: as the name suggests, letting the model retrieve content from websites or search engines, summarize and cite it, and then produce output. Examples: stock-market digests, cross-border e-commerce price lookups;
- Plugin system / tool calls: equipping the model with plugins for specific functions, such as search, calculation, or API calls, giving it the ability to “observe, decide, act”. This usually integrates multimodal input, external system interfaces, and state management, and is one of the foundational capabilities of today’s agent applications.
Beyond these, context management, multi-model routing and policy control, multimodal input and understanding, and more all strictly count as part of this engineering. A minimal tool-call loop is sketched below.
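As a sketch only: the JSON protocol, `call_llm`, and the tool registry below are hypothetical placeholders, not any framework’s real API, but they show the “observe, decide, act” shape of a tool call.

```python
import json

def call_llm(prompt: str) -> str:
    # Hypothetical placeholder for a real model API call.
    raise NotImplementedError("wire up your model provider here")

# Toy tool registry; eval() is for demo only, never for untrusted input.
TOOLS = {
    "calculator": lambda expr: str(eval(expr)),
}

def agent_step(user_query: str) -> str:
    # Decide: the model either answers directly or requests a tool,
    # in a constrained JSON format.
    prompt = (
        'Reply ONLY as JSON: {"action": "answer" or "tool", '
        '"tool": "...", "input": "...", "content": "..."}. '
        "Question: " + user_query
    )
    decision = json.loads(call_llm(prompt))
    if decision["action"] == "tool":
        # Act, then feed the observation back for the final answer.
        observation = TOOLS[decision["tool"]](decision["input"])
        return call_llm("Question: " + user_query +
                        "\nTool result: " + observation + "\nAnswer:")
    return decision["content"]
```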
This engineering has become the foundation for shipping AI products and the focus of strategy work. In today’s AI product roles, whether you serve B-end or C-end scenarios, you will touch almost all of these responsibilities.
Figure 2: Classification of the upstream and downstream of the AI industry
So, in model capability tuning and peripheral engineering, what exactly does the product manager own, and how is the work divided with the algorithm and R&D teams? The line seems blurry.
In short, in large-model product strategy design, the product manager is not responsible for “how to write the code or train the model”, but must answer “why do it, what to do, and how to judge whether it is done well”.
Let’s walk through the three core strategy areas of AI products one by one.
2.1 Prompt Engineering
Prompt engineering is low-cost strategy tuning that requires no programming ability, and product managers should dig deep into prompt optimization.
Plenty has been written about prompt engineering, so here is just a brief clarification.
Prompts are essentially an interaction design language for AI: they directly affect output quality, and the goal is to optimize the input so the large model “understands” and “answers” better.
So the product manager is responsible for the prompt’s application-scenario design and for planning the prompt system, while the R&D team writes the prompt details, so that the model’s output better matches business expectations and user needs.
It sounds convoluted, but put simply: the product manager designs “how to ask”, and the algorithm handles “how to compute”.
Compared with traditional interaction design, prompt engineering is more like UX design at the intent layer: the product manager designs the full closed loop of “how user intent is understood” + “how the model is guided to output”.
How do you improve prompt-engineering design skills? Rather than copying fancy templates, I think a more practical way for product managers to train is:
- Map the prompt design flow with Figma + flowcharts + tables, designing it like the multi-round bot dialogues of the past: each round must cover intent trigger → instruction decomposition → content generation → formatted output;
- Build a prompt component library, as you once built design systems, so prompts can be quickly reused across agents and scenarios, cutting the cost of cross-team collaboration;
- Constrain output structures with strong formats such as JSON, Markdown, or tables, just as you used to design standard return formats. Structured output makes systematic A/B experiments and the classification and attribution of failure cases far easier, forming an analyzable prompt-to-output mapping (see the sketch after this list).
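A minimal sketch of such a component library, assuming an e-commerce customer-service scenario; the component names and the JSON schema are invented for illustration:

```python
# Reusable prompt components, assembled per scenario.
ROLE = "You are a customer-service assistant for an e-commerce platform."
TONE = "Be concise and polite; never invent order details."
OUTPUT_SCHEMA = (
    'Respond ONLY in JSON: {"intent": string, "answer": string, "needs_human": bool}'
)

def build_prompt(scenario_context: str, user_question: str) -> str:
    """Assemble components: role -> tone -> context -> format -> question."""
    return "\n\n".join([
        ROLE,
        TONE,
        scenario_context,
        OUTPUT_SCHEMA,
        "User question: " + user_question,
    ])
```

Because every answer arrives in the same JSON shape, failure cases can be bucketed by `intent` and compared across prompt versions, which is exactly the analyzable prompt-to-output mapping described above.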
In essence, this elevates prompt design to the level of “product engineering”, so that prompts are no longer conversation toys or bluffing mind games, but interface design with real engineering calling power.
2.2 Plug-in knowledge base
Another core part of peripheral engineering is building the knowledge base, which in essence supplements what the model cannot remember, does not know, or cannot keep up to date. Here, the R&D and algorithm teams lead the knowledge base infrastructure, while the product manager owns the scenario definition, content strategy, structure, and recall-rule design, ensuring the system meets deployment requirements.
Specific responsibilities include:
- Clarify which questions rely on the knowledge base and which the large model answers directly; this can be done with a question-type classification table and an intent-recognition routing strategy (see the sketch after this list);
- Set knowledge-source standards (source channels; content standards such as accuracy, update timeliness, and security);
- Design the knowledge base’s organizational structure (FAQ, entity, document); the entity type, for example, organizes queryable entities via knowledge graphs or structured data, such as hotel information or product data;
- Develop a knowledge-update mechanism (static or dynamic? who maintains it, whether manual review is needed, update frequency, etc.);
- Set the knowledge recall strategy, i.e., when and with what priority the model calls the knowledge base.
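A sketch of the routing idea; the intent labels and the upstream intent classifier are assumptions for illustration:

```python
# Intents that must be grounded in the knowledge base (RAG);
# everything else goes to the model directly. Labels are illustrative.
KB_INTENTS = {"return_policy", "logistics_query", "warranty"}

def route(intent: str) -> str:
    """Decide the answer source for an already-classified user intent."""
    return "knowledge_base" if intent in KB_INTENTS else "llm_direct"
```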
For example, suppose you are building an AI customer-service assistant. The model’s answers to some standard questions are often inaccurate or hallucinated, and you judge that:
- the large model’s general knowledge is insufficient;
- prompt tweaks bring limited improvement;
- a plug-in knowledge base is needed, using the RAG (Retrieval-Augmented Generation) architecture.
Then, on the strategy side, you need to lead the knowledge base structure design and the RAG call-logic scheme.
First, the overall planning of the knowledge base structure. The core questions you need to think through are:
- What knowledge to inject?
- In what form is knowledge stored?
- How do we make the knowledge easy for the model to use?
Figure 3: Knowledge base management
1) Knowledge import:
Clarify where knowledge comes from and produce a “Knowledge Content Source List & Type Classification Table”. For this, you need to work with the business, customer service, and knowledge base teams to confirm what the knowledge structure covers.
2) Knowledge extraction:
The product manager is responsible for designing knowledge granularity and the segmentation strategy:
- Segments that are too coarse introduce noise into the recall;
- Segments that are too fine easily lose context or fail to cover a complete answer.
Common segmentation methods: by document structure, e.g., one segment per FAQ entry or per operation step; by topic, e.g., returns, lost parcels, delivery; by intent label, e.g., the standard answer set for “logistics query intent” as one segment. A chunking sketch follows.
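A minimal chunking sketch, assuming FAQ entries separated by blank lines; the field names and label values are illustrative:

```python
def split_faq(raw_text: str, topic: str, intent: str) -> list[dict]:
    """One segment per FAQ entry, labeled for later recall filtering."""
    segments = []
    for block in raw_text.split("\n\n"):  # assume blank lines separate entries
        block = block.strip()
        if block:
            segments.append({
                "text": block,      # the chunk that will be embedded later
                "topic": topic,     # e.g. "returns", "lost_parcel", "delivery"
                "intent": intent,   # e.g. "logistics_query"
            })
    return segments
```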
3) Knowledge recall:
This process has many links; the product manager should focus on defining the embedding strategy and the knowledge storage rules, then hand over to the algorithm or engineering team to integrate the vector storage platform.
In simple terms, embedding turns a sentence into a string of numbers (a vector) that a computer can work with. When you build a knowledge warehouse out of many sentences and documents, you want the AI to accurately find the relevant content when someone asks a question later. The AI does not understand human language directly, only numbers, so the text must be turned into vectors for subsequent vector recall.
Note that you don’t need to write the embedding algorithm yourself, but you do need to decide the following (a sketch follows the list):
- Which fields of the segmented content to embed (e.g., body + title)
- The vector library structure: are multimodal vectors needed? Multiple recall channels?
- Whether indexed fields are used for recall-time filtering
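For instance, assuming `embed()` stands in for any sentence-embedding model, the storage rule “embed topic + body, keep labels as indexed payload” might be expressed as:

```python
def embed(text: str) -> list[float]:
    # Placeholder for any sentence-embedding model.
    raise NotImplementedError("plug in your embedding model here")

def to_vector_record(segment: dict) -> dict:
    """Turn one knowledge segment into a vector-store record."""
    return {
        # Fields chosen for embedding: topic label + body text.
        "vector": embed(segment["topic"] + "\n" + segment["text"]),
        # Indexed payload fields used for recall-time filtering.
        "payload": {
            "topic": segment["topic"],
            "intent": segment["intent"],
            "text": segment["text"],  # original text returned on recall
        },
    }
```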
Second, the design of the RAG call logic. The core goal: when a user asks something, the application can recall the right knowledge segments and assemble them into an effective prompt, so the large model generates a reliable answer.
1) Design the retrieval logic: clarify the retrieval rules, the number of recalled segments, and the filtering mechanisms, to ensure knowledge recall.
2) Design the prompt splicing strategy. You need to define: how are the recalled knowledge segments stitched into the prompt? What gets stitched? How many segments? Where do they go?
For example, when splicing knowledge, pay attention to structural standardization; format templates can be defined. The structured knowledge can be stitched into the system prompt or appended after the user’s question as background.
In addition, the length of the spliced content is generally capped, and segments are ranked and weighted by relevance or category, to avoid exceeding the token limit. A splicing sketch follows.
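A sketch of such splicing under a token budget; the whitespace token count, the `score` field, and the budget value are simplifying assumptions:

```python
def splice_prompt(question: str, recalled: list[dict], max_tokens: int = 1500) -> str:
    """Stitch recalled segments into the prompt, highest relevance first."""
    parts, used = [], 0
    for seg in sorted(recalled, key=lambda s: s["score"], reverse=True):
        cost = len(seg["text"].split())   # crude stand-in for a real tokenizer
        if used + cost > max_tokens:      # stop before exceeding the budget
            break
        parts.append("[Reference] " + seg["text"])
        used += cost
    context = "\n".join(parts)
    return ("Answer using ONLY the references below.\n"
            + context + "\nQuestion: " + question)
```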
3) Design a credible-answer mechanism and a fallback response strategy. Large models may still hallucinate, so you need to define (a sketch follows this list):
- When recall fails: output “Sorry, I couldn’t find relevant information” or other fallback and guiding copy;
- When too much is recalled: prompt the user to narrow down or clarify the question;
- Whether the output carries a “reference information” field to build trust (e.g., “This answer comes from the XXX knowledge document”).
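The thresholds and copy below are illustrative placeholders; `gen_answer` stands in for the actual RAG generation step:

```python
def respond(recalled: list[dict], gen_answer) -> str:
    """Wrap RAG generation with fallback and citation rules."""
    if not recalled or recalled[0]["score"] < 0.35:   # recall failed (assumed cutoff)
        return "Sorry, I couldn't find relevant information. Could you rephrase?"
    if len(recalled) > 8:                             # too broad: ask user to narrow
        return "Your question matches many topics; could you be more specific?"
    answer = gen_answer(recalled)                     # normal RAG generation
    source = recalled[0].get("doc_name", "the knowledge base")
    return answer + "\n(Reference: " + source + ")"   # trust-building citation
```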
4) Close the data loop after launch. This gets emphasized at every product launch; for knowledge bases, focus on knowledge recall and accuracy, and establish version management and a dynamic-update mechanism for knowledge content, to keep the knowledge base effective in real applications and matched to the business.
This is not only a means of effect evaluation, but also the key mechanism that drives the continuous evolution of the knowledge base and the iteration of model capabilities.
2.3 Model fine-tuning
If pre-training lets the model learn general patterns from a massive corpus, like teaching itself in a huge library, then fine-tuning is the model’s teacher: defining standard answers, curating high-quality sample work, and tutoring it in a targeted way so the model learns by imitation.
In pre-training, the parameter count and the richness and diversity of the corpus almost directly determine the ceiling of the model’s “IQ”. In fine-tuning, the quality and diversity of the samples determine the model’s professionalism and controllability.
For example, you built a logistics customer-service bot, and the general-purpose model’s answers were too broad, so you propose fine-tuning a dedicated model for “express logistics questions”. The concrete fine-tuning methods and training parameters are the algorithm team’s territory, but defining the fine-tuning scenario, the data strategy, and the resource priorities is the product manager’s responsibility.
Step 1: Clarify the tuning goal and produce a capability-tuning requirements spec, including problem-type analysis, failure examples, and current capability-evaluation results.
You need to answer these questions head-on:
- Which kinds of questions perform poorly? Are the answers stable?
- Has prompt optimization already hit its limit?
- Is fine-tuning worth it?
Step 2: Define the data scope and quality standards for fine-tuning, and produce an “Annotation Task Description” covering the labeling taxonomy, data format, and examples, which can then be handed to the data team to execute. One possible data format is sketched below.
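For instance, one plausible annotation record for supervised fine-tuning, serialized as JSONL (one JSON object per line); the field names follow common instruction-tuning conventions but are an assumption, not any vendor’s spec:

```python
import json

# A single hypothetical training sample for a logistics customer-service model.
sample = {
    "instruction": "Where is my parcel? Tracking no. SF123456789.",
    "input": "carrier=SF Express; status=in transit; last_scan=Shenzhen hub",
    "output": ("Your parcel has left the Shenzhen hub and is in transit; "
               "delivery is expected within 2 days."),
    "labels": {"intent": "logistics_query", "difficulty": "easy"},
}
print(json.dumps(sample, ensure_ascii=False))  # one line of the JSONL training file
```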
Step 3: Define the fine-tuning strategy and capability boundaries, including goals, call logic, capability boundaries, and risk points.
Step 4: Design the evaluation mechanism and the launch-acceptance criteria; define “what good looks like” from a business perspective and help the algorithm team clarify the optimization direction.
Metrics may include accuracy improvement, answer consistency across similar questions, user satisfaction, and inference-cost control, depending on the application scenario.
Step 5: After launch, close the data loop and optimize the strategy, producing a “Launch Effect Evaluation Report” to check whether the goal was met and whether further iteration is needed.
As with the plug-in knowledge base above, after launch you need to monitor changes in the core metrics after fine-tuning, such as model call volume, answer accuracy, and user satisfaction, and analyze failure cases promptly for intent-recognition errors and answer drift.
The difference is that fine-tuning costs more (R&D investment + compute), so every fine-tuning review must include an ROI evaluation: does the satisfaction gain from fine-tuning justify the training + inference cost? A back-of-the-envelope check is sketched below.
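All numbers here are made up for illustration; the point is the shape of the comparison, amortized training cost plus extra serving cost versus the monthly value of the quality gain:

```python
training_cost = 8000.0           # one-off fine-tuning cost (assumed)
amortize_months = 6              # amortization window (assumed)
monthly_inference_delta = 500.0  # extra serving cost of the fine-tuned model

resolved_gain = 3000             # extra tickets auto-resolved per month (estimate)
value_per_ticket = 1.2           # support cost saved per auto-resolved ticket

monthly_cost = training_cost / amortize_months + monthly_inference_delta
monthly_value = resolved_gain * value_per_ticket
print(f"ROI: {monthly_value / monthly_cost:.2f}")  # > 1 means the fine-tune pays off
```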
3. Summary
At this stage, many of the so-called “AI product managers” on the market are really just “pseudo-AI product managers” who have called an API; very few can truly evaluate and optimize the full loop of business needs → model capabilities → scenario design → effect evaluation and optimization.
It is understandable that most traditional product managers focus on user-experience paths and product feature forms. Beyond that, however, AI product managers should pay more attention to collaborating with upstream and downstream technical teams (algorithms, data, engineering), and focus on weaving capability tuning and peripheral engineering into product design, maximizing product value through strategy.
Product strategy is the steering wheel: it determines whether we are doing the right things. User experience is the accelerator and the brakes: it determines whether we are doing things right. The two cooperate with and constrain each other: turning left, you need to understand the model’s capability boundaries, the business goals, and the feedback mechanisms; turning right, you need to dig into the motivations behind user behavior, build good interaction experiences, and earn trust.
In this back-and-forth, user experience must serve the product’s strategic goals, and product strategy must also make room for user experience.