In the era of information overload, how can a knowledge base go from gathering dust in your bookmarks to being genuinely used? This article walks through the AI Knowledge Base 3.0 upgrade and explores how intelligent retrieval, content curation, and interaction design can make knowledge truly flow, improving how efficiently users learn and apply it.
Does this sound like you? You save practical articles like crazy, only to leave them gathering dust in your favorites; you want to learn about the latest AI tools, but a screen full of jargon scares you off; you open "Teach You to Make Money with AI in One Minute" or "Use AI to Generate 10 Short Videos in 3 Minutes, Millions of Views" full of expectation, thinking you've found a secret martial-arts manual, only to discover it is a "dragon-slaying technique" worth nine yuan nine with free shipping…
To sum up, when collecting knowledge you will run into these problems to a greater or lesser extent:
- No time to sort and filter the material you have collected
- Not knowing which items are worth reading
- Some articles are too technical to understand, or simply not useful to you
- Clickbait and marketing articles are hard to screen out
- Most of what you collect ends up sitting idle
I have written two articles about the AI knowledge base before; interested readers can follow the links to dig them up:
- Version 1.0: How can product managers use AI to build your own knowledge base? (Babysitting Tutorial) – Everyone Is a Product Manager. It mainly tried to let AI help me organize my material;
- Version 2.0: Say goodbye to staying up late to tidy up! It’s time for your AI knowledge base to learn to “forage” on its own: automatic keyword search + data archiving + organization and reporting – Everyone Is a Product Manager. It mainly introduced how to let AI automatically collect information for me.
- This 3.0 version builds on that foundation and upgrades the AI knowledge base in terms of knowledge classification and organization and the efficiency of applying the data. If you are interested, read on.
1. Thinking about the plan
Overall plan mind map
The overall plan is broken down into the following mind map from three aspects: where the data comes from, how the data is organized, and how the data is applied:
1) Sources
The knowledge base has two main sources: on the one hand, I want AI to automatically crawl websites and filter and excerpt content according to rules I set; on the other hand, when I come across articles I think are good in my daily reading, I excerpt them into the knowledge base following the same rules. From this, the following two workflows can be derived:
2) Data specifications
Whether an article is added manually or by AI, articles come in many kinds, and they need to be recorded in the knowledge base according to a unified standard, so that there is a consistent basis whether I read them myself, search them, or have AI recommend them. Based on my own usage habits, I defined the following fields and created a corresponding multi-dimensional table for the knowledge base in Feishu. (See the screenshot below for the option values to select in each column.)
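For reference, here is a minimal sketch of what such a record schema could look like if written out in code. The actual column names and option values live in my Feishu table and screenshot, so treat the names below as illustrative placeholders rather than the real field names.

```python
from typing import TypedDict

class KnowledgeRecord(TypedDict):
    """Illustrative schema for one knowledge-base entry (field names are placeholders)."""
    title: str         # original article title
    summary: str       # refined abstract written by the large model
    link: str          # URL of the source article
    category: str      # single-select type field, e.g. "article" / "report"
    topic: str         # one of the 5 professional topics, or "others"
    collected_at: str  # date the entry was written into the table, e.g. "2024-05-01"
```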
3) Data application
AI can collect and organize data for us, but it cannot learn for us. Getting useful information into the knowledge base is only the first step; what matters more is how it gets used afterwards. Here I consider two major scenarios: on one hand, the user actively entering a query to start a search; on the other, the knowledge-base agent proactively pushing reminders to read. This yields the corresponding workflows:
2. Detailed solution design
Agent Building Platform: Coze
Build content: 5 workflows, 1 agent
Operation path: Workspace-Resource Library-Resources-Workflow
Platform fee: Creation is free; tokens are consumed when a workflow calls plugins and large models at runtime. Platform accounts can be registered for free and come with a daily free token quota; usage beyond that requires topping up.
Step 1: Create a manually curated article organization workflow
Objectives and scenarios: When I come across article content I find valuable, I only need to send the article link to my agent; the agent reads the link, organizes the corresponding content, and files it into the knowledge base.
Workflow nodes: The following figure shows the workflow after configuration. In addition to the required start and end nodes, it mainly includes four nodes:
1. Link reading: This is a plugin node. Select [Add node], then [Plugin], and choose the official plugin provided by Coze in the plugin library. It reads all the information behind the article link, which the subsequent large model nodes use for analysis and content organization;
2. Large model (1): Mainly used to analyze and organize the content. The figure below shows this node's input and output fields and formats. The input directly references the title and content fields output by the previous plugin node, and the output corresponds to the fields that will later be written to the knowledge base (the Feishu table). In the prompt I mainly gave the model these tasks: 1) write a refined summary and filter out advertising content; 2) classify the content into one of 5 professional topics or "others" via feature matching; 3) output structured data containing title, summary, link, type, topic, and time in a strict format, emphasizing accuracy, format consistency, and a clear logical hierarchy for topic judgment.
3. Large model (2): Mainly used to reorganize the fields output by large model (1) into the input format required by the Feishu table; a Code node could also be used instead;
4. Feishu multi-dimensional table: This is also a plugin node. Search for "Feishu" in the plugin library and find the plugin shown below. The app_token field is filled with the link of the previously created multi-dimensional table, and the records field directly references the fields output by large model (2). (A sketch of this hand-off follows this list.)
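To make the hand-off between the two large model nodes and the Feishu plugin concrete, here is a hedged Python sketch: large model (1) emits structured fields, and large model (2) (or a Code node) rewraps them into the records shape the Feishu multi-dimensional table plugin consumes. The key names below are illustrative and should be matched to your own table's columns and the plugin's actual parameters.

```python
# Output of large model (1): structured fields extracted from the article.
# (Field names are illustrative; align them with your own table columns.)
extracted = {
    "title": "How RAG pipelines work",
    "summary": "A refined abstract with advertising content removed...",
    "link": "https://example.com/article",
    "category": "article",
    "topic": "large models",        # one of the 5 topics, or "others"
    "collected_at": "2024-05-01",
}

# Output of large model (2) / a Code node: the same data rewrapped into a
# records list, where each record carries its column values under "fields",
# which is the general shape Feishu multi-dimensional tables expect.
records = [
    {
        "fields": {
            "Title": extracted["title"],
            "Summary": extracted["summary"],
            "Link": extracted["link"],
            "Category": extracted["category"],
            "Topic": extracted["topic"],
            "Collected at": extracted["collected_at"],
        }
    }
]
```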
After connecting all the workflow nodes, click Trial run and paste an article link to see whether it runs successfully.
Step 2: Create a workflow for AI to automatically collect information
Objectives and scenarios: AI collects high-quality AI-related articles from quality websites (e.g., Everyone Is a Product Manager) at a fixed frequency and organizes them into my knowledge base in the format I require (the Feishu table).
Workflow nodes: The following figure shows the workflow that has been configured, which mainly includes three nodes in addition to the necessary start and end nodes:
1. Article acquisition: This is a plugin node. Search for "Everyone Is a Product Manager" in the plugin library; this plugin fetches about 30 popular articles at a time from the Everyone Is a Product Manager website, including each article's title, popularity, time, and link.
In the same way, you can also choose plugins that fetch articles from other platforms. I chose this one because the platform has always insisted on manually reviewing article content, so the quality and learning value of its articles are higher than on many platforms; the hot list has been reviewed by the editorial team and upvoted by many readers, which guarantees the quality of the selected articles to a certain extent.
2. Large model: The large model node here is mainly used to screen for AI-related articles. The 30-odd popular articles from the previous node may not all be related to AI, so the prompt asks the model to pick out the links to AI-related articles.
3. Loop: The loop node consists of two parts: the loop body, which contains the concrete actions performed on each iteration, and the loop itself, which controls the node's input and output fields. The loop node here takes the AI-related articles filtered out by the previous large model node and processes them one by one by article URL: reading the content, extracting and classifying it, and writing it into the Feishu knowledge base (these steps are the same as the manually curated article workflow created in step 1, so that workflow is reused directly here). There is also a set-variable node inside the loop, added to make sure every article URL goes through this process. (A minimal sketch of this fetch, filter, and loop flow follows this list.)
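As a plain-code illustration of the control flow described above, here is a minimal Python sketch. The three helpers (fetch_hot_articles, is_ai_related, run_curation_workflow) are hypothetical stand-ins for the plugin node, the large model node, and the reused step-1 workflow; Coze wires these up visually, so this is a picture of the data flow, not the platform's actual API.

```python
def fetch_hot_articles() -> list[dict]:
    """Stand-in for the article-acquisition plugin: returns ~30 items,
    each with title, popularity, publish time, and URL."""
    ...

def is_ai_related(article: dict) -> bool:
    """Stand-in for the large model node that screens AI-related articles."""
    ...

def run_curation_workflow(url: str) -> None:
    """Stand-in for the reused step-1 workflow: read the link, summarize,
    classify, and write one record into the Feishu table."""
    ...

def collect_daily() -> None:
    articles = fetch_hot_articles()                           # plugin node
    ai_articles = [a for a in articles if is_ai_related(a)]   # large model node
    for article in ai_articles:                               # loop node
        # the set-variable node ensures each URL goes through the same steps
        run_curation_workflow(article["url"])
```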
After connecting all the nodes and completing a successful trial run, click Publish Workflow in the upper right corner.
Step 3: Create a user active retrieval workflow
Objectives and scenarios: When a user enters what they want to learn about, the agent retrieves articles from the knowledge base, organizes them, and recommends the ones that best match the user's needs. (The actual result is shown below.)
Workflow nodes: The following figure shows the workflow that has been configured, which mainly includes two nodes, in addition to the necessary start and end nodes:
1. Table content reading: This is a plugin node. Search for "Feishu multi-dimensional table" in the plugin library, find the plugin shown below and add it, then enter the link of the multi-dimensional table to be read in the app_token field.
2. Large model: Here the prompt gives the AI a task: analyze the user's query intent, filter matching articles from the knowledge base according to specific rules, compute a total score using the scoring mechanism and recommend accordingly, explain the scoring basis when the user asks, and present at most 5 recommendations in the required format. The figure below is the recommendation scoring system I designed.
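The actual rubric lives in my scoring figure, so the weights below are purely hypothetical. This Python sketch only illustrates the shape of the mechanism the prompt describes: score each candidate article against the user's query, drop non-matches, sort, and return at most 5 results.

```python
def score(article: dict, query_topics: set[str], query_keywords: set[str]) -> float:
    """Hypothetical scoring: weights and criteria are placeholders,
    not the actual rubric from the figure."""
    s = 0.0
    if article["topic"] in query_topics:
        s += 3.0                                    # topic matches the query intent
    text = (article["title"] + article["summary"]).lower()
    s += sum(1.0 for kw in query_keywords if kw.lower() in text)
    return s

def recommend(articles: list[dict], query_topics: set[str],
              query_keywords: set[str], top_n: int = 5) -> list[dict]:
    scored = [(score(a, query_topics, query_keywords), a) for a in articles]
    scored = [item for item in scored if item[0] > 0]     # keep matches only
    scored.sort(key=lambda item: item[0], reverse=True)   # highest score first
    return [a for _, a in scored[:top_n]]                 # at most 5 recommendations
```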
After connecting all the nodes and completing a successful trial run, click Publish Workflow in the upper right corner.
Step 4: Create an automated content recommendation workflow
Objectives and scenarios: When the user has not actively asked to retrieve content, the agent can proactively recommend the most recently collected articles, attaching a short recommendation to help the user filter, with links that jump straight to the original text.
Workflow nodes: The following figure shows the workflow that has been configured, which mainly includes two nodes, in addition to the necessary start and end nodes:
1. Table content reading: This is a plugin node. Search for "Feishu multi-dimensional table" in the plugin library, find the plugin shown below and add it, then enter the link of the multi-dimensional table to be read in the app_token field.
2. Large model: The prompt here differs from the one in step 3. It instructs the model to filter the latest articles from the specified knowledge base, ordering by collection time with the most recent first and, for the same collection time, by publication time from newest to oldest; to generate a recommendation based on the summary or topic; to display the title, link, and recommendation in a specific format; and to tell the user "there are currently no recommended articles" when there are none. (A small sketch of the ordering rule follows.)
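The ordering rule in the prompt (most recently collected first, ties broken by the more recent publication date) is easy to mis-state, so here is a short Python sketch of it, assuming illustrative field names collected_at and published_at.

```python
from datetime import date

# Illustrative records; field names are placeholders for the table's columns.
articles = [
    {"title": "A", "collected_at": date(2024, 5, 2), "published_at": date(2024, 4, 30)},
    {"title": "B", "collected_at": date(2024, 5, 2), "published_at": date(2024, 5, 1)},
    {"title": "C", "collected_at": date(2024, 5, 1), "published_at": date(2024, 5, 1)},
]

# Newest collection date first; within the same collection date, newest publication first.
latest_first = sorted(articles,
                      key=lambda a: (a["collected_at"], a["published_at"]),
                      reverse=True)
# -> B, A, C
```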
After connecting all the nodes and completing a successful trial run, click Publish Workflow in the upper right corner.
Step 5: Create an article content podcast workflow
Objectives and scenarios: Make use of fragmented time on the commute by playing article content aloud, improving the utilization of the knowledge base. (The picture below shows a trial run; the output is a voice URL.)
Workflow nodes: The following figure shows the workflow that has been configured, which mainly includes three nodes in addition to the necessary start and end nodes:
1. Link reading: This is a plugin node, added from the plugin library, used to fetch the content behind the link;
2. Large model: The plugin in the previous node outputs many fields, so this node reorganizes them into smoother, clearer text;
3. Text-to-speech: This is also a plugin node, found by searching "text-to-speech" in the plugin library; it is an official plugin released by Coze. Its text input can directly reference the text organized by the large model in the previous node. (A sketch of the whole pipeline follows this list.)
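Putting the three nodes together, the podcast workflow is a simple read, clean, and speak pipeline. The sketch below uses hypothetical helper names standing in for the two plugins and the large model node; Coze connects these visually rather than in code.

```python
def read_link(url: str) -> dict:
    """Stand-in for the link-reading plugin: returns the article's raw fields."""
    ...

def rewrite_for_listening(raw: dict) -> str:
    """Stand-in for the large model node: merges the plugin's many output
    fields into smooth, listenable narration text."""
    ...

def text_to_speech(text: str) -> str:
    """Stand-in for the official text-to-speech plugin: returns an audio URL."""
    ...

def article_to_podcast(url: str) -> str:
    raw = read_link(url)
    narration = rewrite_for_listening(raw)
    return text_to_speech(narration)   # the workflow ends by returning this audio URL
```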
Step 6: Create an agent
Objectives and scenarios: To make everything easy to use afterwards, you need to create an agent that sets the prompt, configures the workflows, and triggers tasks on a schedule, so that all of the above workflows can be chained together and invoked collaboratively.
Agent tasks: The following table summarizes the workflows the agent executes and the conditions under which each workflow is triggered.
Agent configuration
1. Add plugins: In the plugin bar, add two plugins: get current time and link reading. The first lets the agent know the time when the user talks to it; the second lets the agent read the content first when the user sends an article link, judge whether it contains marketing, sales, or traffic-driving material, and do an initial screening;
2. Add workflows: Add all the workflows configured in the scheme above, so that the agent knows exactly which workflows it can call later.
3. Add triggers: Two workflows run daily on a schedule: collecting AI-related articles and automatically recommending article content. Triggers support two types, scheduled triggers and event triggers; scheduled triggers are used here;
4. Write the persona: In the persona and reply logic panel on the left, define the agent's behavior, i.e., which workflow to call under which circumstances, using {{}} to reference specific workflows. (An illustrative snippet appears after this list.)
5. Preview and debug: Test on the right side; once everything is confirmed, click Publish in the upper right corner;
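For item 4 above, the persona and reply logic is plain text in Coze, with {{}} used to reference workflows by name. Below is an illustrative Python snippet holding such a prompt; the workflow names and wording are placeholders, not the exact text or names used in my setup.

```python
# Illustrative persona / reply-logic prompt. Workflow names inside {{ }} are
# placeholders; replace them with the names of your own published workflows.
PERSONA_PROMPT = """
You are my AI knowledge-base assistant.
- When the user sends an article link, first read it, screen out obvious
  marketing or traffic-driving content, then call {{curate_article}} to file
  it into the knowledge-base table.
- When the user asks about a topic, call {{search_knowledge_base}} and
  recommend at most 5 matching articles, explaining the scoring basis if asked.
- When the user asks to listen to an article, call {{article_to_podcast}}
  and return the audio URL.
- Scheduled tasks call {{collect_daily}} and {{recommend_latest}} automatically.
"""
```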
Experience sharing
There is an overwhelming flood of AI articles these days, and the more you learn, the more you feel that "the more you know, the less you know." While crossing the river by feeling for the stones, I have also picked up some lessons about learning AI:
Cast a wide net in AI learning, but know when to reel it in: The knowledge system in AI is vast and interwoven, from natural language processing (NLP) to computer vision (CV), from reinforcement learning to multimodal models, and the underlying logic is interconnected. Each branch is an enormous amount of material and worth studying in depth. But human energy is limited, and trying to cover everything thoroughly only leaves you spread too thin. You have to set a main line for yourself and then, along that main line, find the fragmented knowledge that fills it in. (BTW: this is why I keep thinking about how to iterate on the knowledge base.) That way you work more efficiently and do not fall into confusion.
AI is Archimedes' fulcrum, not Aladdin's magic lamp: AI is not a lamp that grants every wish. It is essentially a lever: it can amplify a person's abilities, but it cannot replace a person's thinking and judgment. AI depends on data and algorithms; without high-quality data, AI models cannot be trained effectively. At the same time, AI decisions are based on data and mathematical models, and they can be wrong or biased. Marketing accounts touting "AI generates viral articles in one click" or "master this AI technique and get rich overnight" can look very attractive, but the author may never have tried it themselves and is simply hyping it. When facing such information or output, always keep your own judgment and do not believe or follow blindly.
What you accumulate is your moat: AI does not eliminate people; it eliminates people who don't know how to use AI. Although AI technology iterates extremely quickly, the underlying methodology and hands-on experience appreciate over time. By writing technical notes, reviewing project cases, and building a personal knowledge system, you can establish your own barriers in the field of AI.