Designers often find that static semantic documents are hard to turn into convenient working tools: querying the specification, generating copy, or optimizing copy with them is inefficient. Feeding the documents to general-purpose AI tools brings its own problems, such as unstable answers and an inability to accurately understand the content. This article details how to build an AI semantic assistant with zero code using Coze: starting from requirements analysis, then optimizing the knowledge base (so machines can understand the semantic specification), building workflows (defining how the AI processes each task), designing the application interface (improving the user experience), and finally testing and optimizing. The result turns static documents into an intelligent tool that efficiently assists with copywriting, copy optimization, and specification checking, empowering designers.
Copywriting is the most direct bridge between products and users, and it is also the key to influencing user understanding and behavioral decisions.
Previously, each business line had established its own semantic specification. Turning these static semantic documents into a tool that can be used in daily work is an important step in promoting adoption of the specifications.
With the help of current AI tools, we built a Coze application, the AI Semantic Assistant. Here is the detailed process.
1. Requirements analysis
The semantic documents I had on hand include semantic specifications and cases, in the form of plain text, images, tables, and multimodal files (mixed text, images, and tables).
I want the AI to read these documents and answer my questions based on the relevant specifications and cases in them: writing copy, optimizing existing copy, checking whether copy meets the specification, and comparing several versions of copy to see which is better. Questions may include both text and images (e.g., writing copy based on a screenshot):
First I tried asking DeepSeek/Kimi directly, sending the documents as attachments along with my questions. The answers were acceptable, but there were drawbacks:
1. The results are hit-or-miss, like a gacha draw, and the documents may exceed the model's context token limit.
2. Some large models can only recognize the text in an image and cannot understand the image itself, losing part of the semantic specification.
3. Uploading the specifications to the AI every single time is too cumbersome.
What I need is a tool that already understands the established semantic specifications and can answer my questions directly with reference to them.
With this goal in mind, I started building a workflow. A workflow executes instructions step by step, which gives a relatively high degree of determinism.
Workflow tools
Commonly used workflow tools include Coze, Dify, n8n, FastGPT, etc.
- Coze: A tool developed by ByteDance, free and easy to use, with API support, for quickly building bot applications. Advantages: a friendly interface, a quick start, and a variety of knowledge base import methods. Disadvantages: customization is relatively limited, and some advanced features are still in development.
- Dify: An open-source AI application development platform with more customization options. Advantages: high flexibility and deep customization of workflows. Disadvantages: deployment and maintenance are relatively complex and demand more technical skill.
- n8n: A powerful automated workflow platform with rich integrations. Advantages: it can connect all kinds of services and APIs with a high degree of automation. Disadvantages: a steep learning curve, making it less suitable for designers.
- FastGPT: A platform focused on building GPT applications quickly. Advantages: optimized for LLM applications, with strong knowledge base management. Disadvantages: some features require payment, and stability for domestic use is sometimes limited.
Considering the ease of use, stability, and cost, I finally chose Coze as the development tool.
It’s simple enough for designers with no programming background to get started quickly, and the existing features are enough to meet my needs.
The task breaks down into three main steps: knowledge base optimization, workflow construction, and application interface construction.
First, optimize the knowledge base. The goal is to make the semantic specifications that humans understand equally understandable to machines, so the large model can answer more accurately.
Then, build the workflows. These define how the AI works so it can complete specific tasks step by step; this is the back-end work.
Finally, build the application interface. This puts a skin on the workflow to make it easier to use; this is the front-end work.
2. Knowledge base optimization
Knowledge base optimization is a deep topic. There is plenty of material online about RAG (Retrieval-Augmented Generation) for knowledge bases, but much of it requires writing code. Here are some designer-friendly, no-code methods I have explored:
- Content division: Centralize relevant content together, divide paragraphs clearly, reduce redundancy and cross-references, and improve retrieval efficiency.
- Format conversion: Prefer Markdown for text and tables; use PDF for multimodal files.
- Naming Patterns: Use clear and consistent naming patterns for documents, such as “[category]-[name]”, to help AI better understand document content relationships.
- Tag description: Add a short description to each document, such as “For XXX business”, to help the AI quickly locate the topic.
- Image Processing: For images, add detailed text descriptions to help the AI understand the image content.
- Highlighted Focus: Use formatting like bold and headings to highlight key information to help the AI identify key content.
- Rich examples: Add 2-3 positive and negative examples to each specification to help AI better understand application scenarios.
- Test verification: After uploading, conduct multiple rounds of testing to find out where the answers are inaccurate and optimize the document content in a targeted manner.
I uploaded the optimized documents to Coze as knowledge bases.
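Several of the optimization steps above (content division, naming patterns, tag descriptions) can be partially scripted before upload. Below is a minimal sketch that splits a Markdown specification into retrieval-friendly chunks with attached metadata; the function name, the sample spec, and the metadata fields are illustrative assumptions, not part of the author's actual setup.

```python
# Illustrative sketch: split a Markdown spec into chunks on level-2 headings
# and attach the "[category]-[name]" source and a tag description, so each
# retrieved chunk carries its own context.

def chunk_markdown(doc: str, doc_name: str, description: str) -> list[dict]:
    """Split a Markdown document on '## ' headings and attach metadata."""
    chunks = []
    current_title, current_lines = None, []

    def flush():
        if current_title is not None:
            chunks.append({
                "source": doc_name,          # e.g. "membership-copy-spec"
                "description": description,  # e.g. "For the membership business"
                "title": current_title,
                "text": "\n".join(current_lines).strip(),
            })

    for line in doc.splitlines():
        if line.startswith("## "):
            flush()
            current_title, current_lines = line[3:].strip(), []
        elif current_title is not None:
            current_lines.append(line)
    flush()
    return chunks

spec = """## Tone
Use plain, friendly language.

## Buttons
Start with a verb, e.g. "Join now".
"""
for c in chunk_markdown(spec, "membership-copy-spec", "For the membership business"):
    print(c["title"], "->", c["text"].splitlines()[0])
```

Keeping each chunk self-describing is the no-code-friendly equivalent of the tagging advice above: the retriever never sees a paragraph stripped of its business context.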
3. Workflow construction
Workflows are the core engine of AI assistants, determining how user inputs are processed and generating valuable outputs.
The core of the workflow is how data flows from one node to another, and the semantic assistant mainly includes: input flow, processing flow, generation flow, output flow, and feedback flow.
Input flow
The user interaction stage, which obtains the necessary information from the user, including the following nodes:
- Input: Defines the parameters for the input, including user-entered questions (text), uploaded images (optional), and business and functionality (see below).
- Service selection: The semantic specifications applicable to different services are different, and it is necessary to distinguish between services. Different knowledge bases are called according to the business selected by the user.
- Function selection: Based on our research, the assistant needs several functions, each with different requirements and output content. Depending on the function the user selects, a different workflow branch is followed.
Processing flow
Background data processing stage, turning user input into useful queries and analysis:
- Query rewriting: A query is the question or request the user enters. A plain customer-service bot can pass it straight to the knowledge base, but since we do further processing on top of the knowledge base, we cannot use the user's question as the query directly. Instead we rewrite it into a form better suited to knowledge base retrieval, improving retrieval accuracy. For example, if a user asks, “I want to write a copy on the membership card introduction page to promote conversion…”, we extract the category, business copywriting, and rewrite the query as “business copywriting design specifications and cases”.
- Knowledge base retrieval: In this step, you need to retrieve relevant specifications and cases and provide them to subsequent large model nodes for use.
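The query-rewriting step described above can be done with an LLM node, but its effect is easy to show with a simple rule-based sketch. The keyword lists and category names below are made-up examples, not the author's actual rewriting rules.

```python
# Illustrative query rewriting: map a raw user question to a retrieval query
# of the form "<category> design specifications and cases".
# The keyword-to-category mapping is a fabricated example.

CATEGORY_KEYWORDS = {
    "business copywriting": ["membership", "conversion", "introduction page"],
    "error messages": ["error", "failed", "exception"],
}

def rewrite_query(question: str) -> str:
    q = question.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(k in q for k in keywords):
            return f"{category} design specifications and cases"
    return question  # fall back to the raw question if no category matches

print(rewrite_query(
    "I want to write a copy on the membership card introduction page "
    "to promote conversion"
))
```

In practice an LLM-based rewriter generalizes far better than keyword rules; the sketch only shows the shape of the transformation.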
Coze’s knowledge base search requires the following settings:
Generation flow
The AI comprehension and generation stage, processing the information and generating answers:
Large model generation: answer the user's question based on the information retrieved from the knowledge base. The core of the large model node is the prompts, which are also the most critical part of the entire workflow. There are two kinds: the system prompt and the user prompt.
The former is more like the AI's “operating system”, setting the basic framework for handling problems. The latter is like the AI's “task list”, driving it to complete tasks based on user input.
We can set it up like this:
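As an illustration only (not the author's actual prompts), a system/user prompt pair for the generation node might look like the following. The wording, field names, and template placeholders are all assumptions.

```python
# Hypothetical prompt pair for the large-model generation node.
# The system prompt fixes the behavioral frame; the user prompt template is
# filled per request with the business, function, retrieved chunks, and question.

SYSTEM_PROMPT = """You are a UX copywriting assistant.
Answer strictly based on the retrieved specification excerpts.
If the excerpts do not cover the question, say so instead of guessing.
Always state which specification rule you applied."""

USER_PROMPT_TEMPLATE = """Business: {business}
Function: {function}
Retrieved excerpts:
{excerpts}

Question: {question}"""

prompt = USER_PROMPT_TEMPLATE.format(
    business="Membership",
    function="Write copy",
    excerpts="- Button copy: start with a verb.",
    question="Write a CTA for the membership card page.",
)
print(prompt)
```

Separating the two keeps the stable rules in the system prompt while only the user prompt varies per question, which makes the node's behavior easier to debug.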
Output flow
The final rendering stage, which delivers the generated content to the user:
Output: render the generated content. Here I had 2–3 large models answer the question in parallel, so the answers need to be spliced together in the output, using Markdown to define the format and improve readability.
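The splicing step can be sketched as a small formatting function; the model names and separator style below are placeholders, not the author's actual output template.

```python
# Sketch of the output node: splice answers from several models into one
# Markdown block, one section per model, separated by horizontal rules.

def splice_answers(answers: dict[str, str]) -> str:
    sections = [f"### {model}\n\n{answer}" for model, answer in answers.items()]
    return "\n\n---\n\n".join(sections)

md = splice_answers({
    "Model A": "Join now and save 20%.",
    "Model B": "Become a member today.",
})
print(md)
```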
Feedback flow
An optimization loop for continuous improvement. A separate workflow is set up here: once an answer is generated, users can give a like or dislike, and the backend records the user's input, the answer, and the feedback.
The background log also records key data from the entire process to provide a basis for subsequent optimization.
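A minimal sketch of that logging step, assuming an append-only JSON-lines record per interaction; the field names are assumptions, not Coze's actual log schema.

```python
# Sketch of the feedback workflow's logging step: append one JSON record
# per interaction (question, answer, like/dislike) for later analysis.

import datetime
import io
import json

def log_feedback(stream, question: str, answer: str, feedback: str) -> None:
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "question": question,
        "answer": answer,
        "feedback": feedback,  # "like" or "dislike"
    }
    stream.write(json.dumps(record, ensure_ascii=False) + "\n")

# In production the stream would be a log file or database; a StringIO
# stands in here so the sketch is self-contained.
buf = io.StringIO()
log_feedback(buf, "Write a CTA", "Join now", "like")
print(buf.getvalue().strip())
```

One line per record keeps the log trivially appendable and easy to analyze later with standard tools.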
The design of workflows requires consideration of data integrity, accuracy, and delivery efficiency. In actual implementation, it may be necessary to adjust the data transmission method and content format between nodes multiple times according to the test results to achieve the best results.
4. Application interface construction
Coze provides an interface building tool that can be dragged and dropped to complete the construction of the basic page.
In the interface design, simplicity and ease of use are prioritized, with function buttons clearly displayed on the main interface.
The following core elements are mainly designed:
- Selection area: Drop-down to select the business and function.
- Input area: Spacious text box with image upload support and clear operation prompts.
- Action Zone: Action buttons “Ask” and “Clear”.
- Answer area: Displays answers in Markdown, with copy, regenerate, and like/dislike actions at the bottom.
There are a few things to pay attention to when building:
- Loading status feedback: While the workflow is running there is no output yet, so a loading animation should be shown to ease the user's wait.
- Result Presentation Hierarchy: Use cards, dividing lines, or background colors to differentiate content from different sections, enhancing readability.
- Component Display Logic: Ensure that the display logic of each component in different states (waiting for input, loading, displaying results, errors) is clear.
For example, after the user clicks Ask, the workflow starts running and there is no answer yet, so the loading animation is shown first; once the answer is produced, the animation disappears and the toolbar appears.
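The display logic for those states can be summarized as a small state machine. The state and event names below are illustrative labels for the behavior described above, not identifiers from Coze.

```python
# Sketch of the answer area's display logic: four states (idle, loading,
# done, error) and the transitions between them. Unknown events leave the
# state unchanged.

TRANSITIONS = {
    ("idle", "ask"): "loading",
    ("loading", "answer_ready"): "done",    # hide spinner, show toolbar
    ("loading", "request_failed"): "error",
    ("done", "ask"): "loading",             # re-asking restarts the cycle
    ("error", "ask"): "loading",
}

def next_state(state: str, event: str) -> str:
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["ask", "answer_ready", "ask", "request_failed"]:
    state = next_state(state, event)
    print(event, "->", state)
```

Writing the logic down this way makes it easy to check that every component has a defined appearance in every state before wiring up the interface.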
5. Testing and optimization
After the workflow is built, multiple rounds of testing and optimization are required to ensure that the AI semantic assistant can answer user questions stably and accurately:
Testing strategy
- Scenario testing: Prepare test cases for typical scenarios for different services and functional types.
- Boundary Testing: Test questions in the knowledge base that don’t have direct answers, validating the AI’s generalization capabilities.
- Negative Testing: Deliberately asking unreasonable or wrong questions to verify the AI’s error handling capabilities.
- Performance Testing: Test response times, especially retrieval efficiency when the knowledge base is large.
Optimization directions
- Knowledge Base Optimization: Adjust the organization and granularity of knowledge base content based on test results.
- Prompt Adjustment: Optimize system prompts and user prompts to improve the accuracy and relevance of answers.
- Parameter tuning: Adjust parameters such as the maximum number of recalls and the minimum matching degree to balance the recall rate and accuracy rate.
- Model Selection: Test different large models and choose the model or combination of models that best suits the semantic assistant scenario.
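The parameter-tuning trade-off in the list above (recall count vs. minimum match degree) can be illustrated with a toy sweep. The match scores and relevance labels below are fabricated for illustration only.

```python
# Toy illustration of the precision/recall trade-off when tuning the
# minimum match score: raising the threshold increases precision but
# eventually starts dropping relevant chunks.

results = [  # (match score, is this chunk actually relevant?)
    (0.92, True), (0.85, True), (0.70, False),
    (0.61, True), (0.45, False), (0.30, False),
]
total_relevant = sum(1 for _, rel in results if rel)

for threshold in (0.3, 0.6, 0.9):
    kept = [(s, r) for s, r in results if s >= threshold]
    hits = sum(1 for _, r in kept if r)
    precision = hits / len(kept)
    recall = hits / total_relevant
    print(f"min_score={threshold}: precision={precision:.2f} recall={recall:.2f}")
```

In this fabricated data, a mid-range threshold keeps all relevant chunks while filtering noise, whereas a very strict threshold sacrifices recall; real tuning does the same comparison against the test cases from the scenario tests.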
Through multiple rounds of testing and optimization, we can continuously improve the performance of AI semantic assistants, making them truly powerful assistants in designers’ daily work.
6. Achievement display
After these steps, we successfully created an AI assistant that can understand and apply semantic specifications.
7. Summary and outlook
The process of building an AI semantic assistant from 0 to 1 is not only a technical practice, but also our in-depth thinking about the application scenarios of the design semantic specification. With this project, we achieved several key goals:
- Specification from static to dynamic: Transforming originally static semantic documents into interactive, intelligent tools greatly improves specification usability.
- Efficiency Improvement: Designers can quickly access semantic suggestions, reducing the time cost of consulting documents.
- Consistency Assurance: AI assistants based on unified specifications ensure the consistency of semantic expression in different scenarios.
- Knowledge precipitation: As usage increases, the data collected through feedback streams can continuously optimize the semantic specification itself.
In the future, we plan to continue optimizing AI semantic assistants in the following directions:
- Specification iteration: Regularly update the semantic specification library based on user feedback and actual application results.
- Function Expansion: Add advanced features such as multi-round dialogue to meet more complex work scenarios.
- Personalized customization: Provide customized semantic assistants for different business lines to match their needs more accurately.
- Self-Learning: Continuously optimize response quality and response speed by analyzing user feedback and logs.
This project not only improves the application efficiency of semantic specifications, but also provides valuable experience for us to explore AI-enabled design work. I hope this experience sharing can help more designers integrate AI tools into their workflows, unleash creativity, and improve design quality.