In the era of digital marketing, user review data carries great business value. This article walks through the entire process, from data collection and label-system construction to insight refinement, using a practical case, and shows how to use artificial intelligence (AI) to analyze this data in depth and understand users’ real needs and market trends.
Extracting market demand from massive volumes of user reviews is like panning for gold in sand.
On the one hand, user reviews are often lukewarm, middle-of-the-road comments, and it is hard to tell what customers are actually trying to say.
On the other hand, there are simply too many reviews; analyzing them with traditional tools such as Python means a huge workload.
Not to mention:
1. “Is the standard uniform?” – The trouble with subjective judgment: different people may read the same comment very differently. The label colleague A applies today may not be accepted by colleague B tomorrow. This inconsistent “ruler” significantly reduces the accuracy and credibility of the analysis results.
2. “Can we dig deeper?” – The fog of surface-level findings: traditional keyword statistics may tell you what users are discussing, but they rarely reveal the underlying reasons, emotional tendencies, and differentiated needs of different user groups behind those discussions. We often stop at “users mentioned XX” and struggle to form insights that actually drive decisions.
Fortunately, AI has arrived.
But even in the AI era, many people analyze user reviews by simply dumping the collected comments into an AI and letting it draw its own conclusions. The result is usually either riddled with data hallucinations or hit-or-miss, like buying a lottery ticket.
So, to get a genuinely useful review-insight report, you need a methodological framework and an AI workflow built on top of it. That is the practice we will walk through today.
The following picture shows the AI insight report on post-purchase user reviews produced in this exercise:
It not only gets the data statistics right, but also performs frequency and cross-analysis on the tagged comments, and finally derives key insights and action recommendations.
(This analysis is based on a real business scenario, and the report’s results were also recognized by the business team; see the end of the article.)
Before we officially start, we still need to anchor the analysis in a real business scenario.
Let’s say we sell the automatic cat grooming device below.
Product address: https://www.amazon.com/PawSwing-AutoComb-Automatic-Surround-biomimetic/dp/B0DMSVNTC1/ref=cm_cr_arp_d_product_top?ie=UTF8
As you can see, the reviews on Amazon are very rich and suitable for review insight analysis.
User reviews
Insight analysis methodology
Before doing any analysis, there must be a “rule”, that is, a methodological and theoretical framework.
The following is our methodology for doing user post-purchase review insight analysis, and the picture is a summary.
Below, you can see how this methodology gradually breaks large volumes of user reviews down into multiple tags, and then runs cross-analysis on those tags to arrive at the business recommendations we want.
Core objectives
Through in-depth analysis of real reviews left by users after purchase, we can gain insight into users’ satisfaction points, pain points, unmet needs and potential expectations in actual use, so as to guide target group portrait optimization, product iteration/innovation and precise content marketing.
Analytical framework
4 user value tiers based on the product:
1. Crowd + Scenario: Who is the user? What are their cats like? Under what circumstances do they use, or expect to use, this cat self-grooming device?
2. Functional Value: What core problems does this self-grooming device solve for users (such as reducing cat hair and entertaining the cat)? How do the claimed functions (e.g. the cat-tongue-mimicking brush, cat hair collection, smart feeder) perform?
3. Assurance Value: What is the quality of the product’s materials, durability, safety (cat-friendly), and brand services (such as manuals, accessories)?
4. Experience Value: How does the user (and cat) feel overall during installation, onboarding (guiding the cat to use it), daily use, and cleaning? (Convenience, cat acceptance, product design aesthetics, noise, etc.)
Implementation steps
Stage 1: Preparation and planning
Step 0: Clarify the objectives and scope of the analysis
Goal: This analysis aims to optimize the user experience of the AutoComb self-grooming device, find innovation points for the next-generation product, and refine more accurate marketing communication.
Product scope: Analyze reviews of the “AutoComb Automatic Cat Self Groomer, 360° Surround Self Grooming For Cats…” (white model).
Time frame: All reviews from product launch through last month.
Platform scope: Amazon USA.
Stage 2: Data collection and processing
Step 1: Post-purchase review data collection
Channel: Get data from the “Customer reviews” module of this Amazon product page.
Tools: You can use Amazon backend review export, third-party review analysis tools, or browser plugins.
Content: Collect the review text, star rating, Verified Purchase badge, review images/videos, review date, and username (a clue to multi-cat households), etc.
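To make the later steps concrete, here is a minimal loading sketch in pandas; the file name and column names are assumptions and should be adjusted to whatever your export tool actually produces.

```python
import pandas as pd

# Hypothetical file and column names; rename to match whatever your export tool produces.
df = pd.read_csv("autocomb_reviews.csv")
df = df.rename(columns={
    "content": "review_text",
    "stars": "rating",
    "verified": "verified_purchase",
    "date": "review_date",
    "author": "username",
})

# Keep verified purchases within the chosen time frame (launch through last month).
df["review_date"] = pd.to_datetime(df["review_date"])
cutoff = pd.Timestamp.now().to_period("M").to_timestamp()  # first day of the current month
df = df[df["verified_purchase"] & (df["review_date"] < cutoff)]

print(len(df), "reviews collected")
print(df["rating"].value_counts().sort_index())
```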
Step 2: Build the analytical labeling system (core principles)
Focusing on the four major user value levels, combined with the characteristics of AutoComb products, design detailed labels.
Example tags (using AutoComb as an example):
1) Crowd + Scene:
- Characteristics of cat owners: long-haired cat owners, multi-cat families, cat size (large/small), cat personality (timid/curious), cat age (kitten/adult cat/elderly cat), cat owner experience (novice/veteran), degree of trouble with cat hair (high/medium/low).
- Usage scenarios/expectations: reducing floating hair at home, cats entertaining themselves, replacing manual grooming, a hideaway/nest for the cat, cat behavior training (combined with the snack machine), and use in cat cafés/rescue shelters.
2) Functional value:
- Core grooming functions: grooming effect (amount of floating hair removed), feel of the cat-tongue-mimicking design, self-cleaning bristles, 360° surround grooming.
- Smart feeding function: snack machine lure effect, snack machine ease of use, snack machine charging/battery life, snack machine recording function (QR code).
- Cat hair collection function: convenience of cat hair collection, cat felting elements (fun).
- Adaptability: Adapt to cats of different sizes, entrance size adjustment.
- Power supply method: No electricity required (grooming part).
3) Guarantee value:
- Product material/quality: material texture (such as felt, plastic), structural stability, durability (such as snaps, bristles), quality of accessories (such as charging cable), whether the product has any peculiar smell.
- Brand service/support: Clarity of installation instructions (e.g., oversized instructions), accessory provision (e.g., additional brush heads), customer service response.
- Cat safety: The material is safe for cats, no sharp corners, and the bristles are gentle.
4) Experience value:
- Installation experience: ease of installation (such as snap design), whether tools are required, installation time.
- Cat acceptance/use experience: cat’s initial reaction, cat’s adaptation process/duration, whether the cat likes/actively uses, how long the cat stays inside/behavior (sleeping/observation), whether the cat can play/interact.
- The owner’s operating experience: the convenience of adjusting the size of the inlet, the convenience of operation of the snack machine, the convenience of cleaning (especially the bottom), and the portability of product movement (weight vs. portability).
- Sensory experience: product appearance design, product volume/space occupancy, operating noise (if any), snack machine screen readability.
- Price perception: cost performance, whether the price is too high.
- Emotional value: whether it reduces the owner’s burden, whether it enhances human-pet interaction (or the reverse, as in the comment about it “disconnecting you from bonding time”), fun factor (such as felting the collected cat hair), and willingness to recommend.
Sentiment labels: tag each comment snippet as positive, negative, or neutral/suggestion.
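The article keeps this tag system inside a Feishu table rather than in code, but as a sketch of the underlying idea, the agreed tags can be held in a nested structure and every tag the AI returns can be checked against it, which is one way to keep the “ruler” consistent. The subset below is abbreviated from the examples above.

```python
# A trimmed, illustrative subset of the three-level tag system described above.
TAG_SYSTEM = {
    "Functional value": {
        "Grooming function": ["Grooming effect", "Bristle design", "Entrance adjustment"],
        "Feeding function": ["Snack lure", "Feeder operation", "Feeder battery life"],
    },
    "Experience value": {
        "Installation experience": ["Ease of installation", "Snap design"],
        "Cat reaction": ["Cat likes", "Cat is afraid", "Adaptation process"],
    },
}

# Flatten into a set of allowed (level-1, level-2, level-3) combinations.
ALLOWED = {
    (l1, l2, l3)
    for l1, level2 in TAG_SYSTEM.items()
    for l2, level3 in level2.items()
    for l3 in level3
}

def validate_tag(l1: str, l2: str, l3: str) -> bool:
    """Reject any tag combination the AI invents outside the agreed system."""
    return (l1, l2, l3) in ALLOWED
```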
Step 3: Tag comments one by one and attribute them to the framework
This can be done with the help of the “field shortcuts” feature of Feishu multi-dimensional tables.
Sample comments:
“Great quality and works well to help groom my very long haired Siberian cat. He’s quite big and he still has no issues making himself comfortable in it. It’s also his hidy-hole… He loves getting treats in it as well… this does help with the amount of cat hair that we usually have to clean up.”
Tagging:
- Guaranteed value -> Product material/quality -> Positive
- Functional value -> Grooming effect (amount of floating hair) -> Positive
- Crowd + Scene -> Long-haired cat owner -> Neutral
- Crowd + Scene -> Cat size (large) -> Neutral
- Experience value -> Cat acceptance/experience (comfort/hiding place) -> Positive
- Functional value -> Smart feeding function (snack machine lure effect) -> Positive
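Conceptually, the tagging result for a single review is just a list of (level-1 tag, detailed tag, sentiment) records. A hypothetical representation (in the actual workflow these live as columns of a Feishu table, not as code):

```python
# One review's tags, mirroring the marking example above; the review ID is made up.
tagged_review = {
    "review_id": "R001",
    "rating": 5,
    "tags": [  # (level-1 tag, detailed tag, sentiment)
        ("Guaranteed value", "Product material/quality", "Positive"),
        ("Functional value", "Grooming effect (amount of floating hair)", "Positive"),
        ("Crowd + Scene", "Long-haired cat owner", "Neutral"),
        ("Crowd + Scene", "Cat size (large)", "Neutral"),
        ("Experience value", "Cat acceptance (hiding place)", "Positive"),
        ("Functional value", "Snack machine lure effect", "Positive"),
    ],
}
```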
Stage 3: Data analysis and insight refinement
Step 4: Data quantification and statistical analysis
Frequency statistics: “Ease of installation (positive)” was mentioned 20 times, and “ease of installation (negative, such as difficult to snap button)” was mentioned 5 times.
Cross-analysis:
- Crowd x demand point: Analyze whether the evaluation of the snack machine lure effect is mostly positive or negative in the reviews of multi-cat families.
- Scenario x demand point: Analyze how satisfied users who bought the product to reduce hair around the home are with the grooming effect. (A pandas sketch of these counts appears at the end of this step.)
- Pain points and highlights: “cat adaptation process (requires patient guidance)” may be a common neutral/suggestion point, “snack machine lure” is a highlight, and “overpriced” is a common pain point.
Trend analysis (optional): Analyze changes in feedback on “ease of installation” from early and recent reviews (which may be related to product batches or manual updates).
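Whether these numbers come from a spreadsheet or a script, the logic behind them is simple. Here is a sketch in pandas, assuming the tags have been flattened into one row per (review, tag) pair; the review IDs and tag values are illustrative only.

```python
import pandas as pd

# Assumed long format: one row per (review, tag) pair produced by the tagging step.
tags = pd.DataFrame(
    [
        ("R001", "Experience value", "Ease of installation", "Positive"),
        ("R002", "Experience value", "Ease of installation", "Negative"),
        ("R002", "Crowd + Scene", "Multi-cat family", "Neutral"),
        ("R002", "Functional value", "Snack lure", "Positive"),
        # ... in practice, loaded from the exported tagging table
    ],
    columns=["review_id", "level_1", "level_3", "sentiment"],
)

# Frequency statistics: how often each level-3 tag appears, split by sentiment.
freq = tags.groupby(["level_3", "sentiment"]).size().unstack(fill_value=0)
print(freq)

# Cross-analysis (crowd x demand point): sentiment toward "Snack lure"
# among reviews carrying the "Multi-cat family" crowd tag.
multi_cat_ids = tags.loc[tags["level_3"] == "Multi-cat family", "review_id"]
cross = tags[tags["review_id"].isin(multi_cat_ids) & (tags["level_3"] == "Snack lure")]
print(cross["sentiment"].value_counts())
```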
Step 5: Insight Refinement and Interpretation
Summary around 4 major value levels:
- Crowd + Scene insight: Post-purchase reviews show that the product appeals to long-haired-cat owners and multi-cat families, but a timid cat personality is a major barrier to acceptance. Some users treat it as a private space for the cat rather than a pure grooming tool.
- Functional value insight: The snack machine lure is generally considered an effective way to guide cats to use the device. Feedback on the grooming effect is mixed: some find it helps reduce floating hair, but it cannot fully replace manual brushing. The entrance-size adjustment is well received.
- Guarantee value insight: Users are divided on the materials (such as the stability of the felt), with some saying “well built” and others calling it “flimsy”. The clarity of the installation instructions is acknowledged, but the snap-button installation is a recurring negative.
- Experience value insight: The installation process, particularly the snaps, is a pain point for some users. The cat’s adaptation takes patience and correct guidance from the owner, and following the instructions matters. The product’s large footprint and the difficulty of cleaning the bottom are experience shortcomings. The price is generally perceived as high.
Identifying Key Issues and Opportunities:
- Core pain points: cat acceptance and training cost, difficulty installing some components, inconvenient bottom cleaning, and a weak value-for-money perception.
- Unique advantages: the innovative concept of combining snack lure and grooming, and the cat’s “private nest” attribute.
- Unmet needs: easier installation, easier to clean base material, more detailed guidance solutions for cats with different personalities, and more competitive prices.
Stage 4: Insight application and action
Step 6: Apply insights to business decisions
1) Portrait of the target group
Optimize/Build:
Input: “hidy-hole”, “treat dispenser is key”, “takes time to acclimate”, “helps with cat hair”, “hilarious cat hair felting” in Step 5.
Output:
- Optimized portrait: More emphasis is placed on “cat lovers who are willing to invest time in training cats” rather than “lazy cat owners who pursue once and for all”.
- Subdivision portrait: “technology pet early adopters”, they are interested in novel products and are willing to pay for “intelligence” and “automation”, but have high expectations for actual effects and ease of use.
- The portrait description should include: pet raising situation (cat breed, quantity, personality), pet pain points (cat hair, cat entertainment at home alone), attitude towards new products, price sensitivity, and expectations for product functions and experience (such as easy to guide cats to use, real and effective grooming effects).
2) Product Development:
Old product optimization:
Input:
In Step 5, feedback on “difficult to install snap buttons”, “the bottom felt is not easy to clean”, “the screen of the snack machine is difficult to read”, and “some cats are uncomfortable with the brush head or the grooming effect is not good”.
Output:
- Functional value: Research more effective bristle materials and structures, and optimize the entrance adjustment mechanism to accommodate more cats.
- Guaranteed value: Improve the snap design or adopt a more secure and convenient connection method, consider using more durable felt or providing replaceable felt parts.
- Experience value: The base is changed to a detachable and washable waterproof material, the snack machine screen is enlarged or the display is optimized, and more detailed cat guidance video tutorials are provided.
New product opportunity exploration:
Input: Potential user demand for “lower price but retaining core lure and light grooming features” or “modular design with separate feeding or grooming modules”.
Output:
Concept 1: “AutoComb Lite”, a simplified version, focuses on snack lure and basic grooming, and is more affordable.
Concept 2: “Smart Cat Interactive Station”, strengthen entertainment and feeding functions, groom as an auxiliary, increase App interaction and remote control.
3) Content Marketing:
Input: Insights from Step 5 on “long-haired cat owners,” “multi-cat family owners,” and “owners with patience with cat behavior training,” as well as their concerns about “reducing floating hair,” “cat entertainment and rewards,” and “product durability.”
Output:
- Selling point refinement and communication: Emphasize “snack temptation that makes cats fall in love with grooming”, “not just a grooming device but also a private space for your cat”, and “effectively reduces floating hair at home”.
- Pain point resonance and solution presentation: Make a tutorial video/graphic text on “how to quickly adapt a cat to AutoComb”, emphasizing the importance of following the instructions. In response to concerns about “high price”, its versatility (grooming + feeding + entertainment + cat nest) and long-term use value can be highlighted.
- Scenario-based content: “The savior of the long-haired cat shedding season: an AutoComb hands-on review”, “Harmonious multi-cat households start with sharing an AutoComb”.
- User testimonials collection: Filter positive reviews that successfully guide cats to use and see results, especially those that contain cute cat videos/pictures.
- FAQ and customer service script optimization: Provide standard answers and solutions to high-frequency questions such as “installation difficulties”, “what should I do if the cat doesn’t go in”, and “cleaning problems”.
Key considerations
- Sample size: Ensure that the number of reviews analyzed is representative of the overall user base to avoid bias in conclusions due to a small sample.
- Review Timeliness: Prioritize analyzing newer reviews to better reflect current user feedback.
- Remove the false, keep the true: Pay attention to identifying and filtering out invalid reviews, malicious negative reviews, and padded positive reviews.
- Dynamic iteration: User needs and market environments are changing, and post-purchase review insights should be conducted regularly, and the labeling system and analysis focus may also need to be adjusted accordingly.
- Combine with other data: Combine review insights with other data (such as sales data, customer service feedback, research data, and industry reports) for multi-dimensional validation to make conclusions more reliable.
AI Review Analysis Practice
OK, with the theory covered, we now have a mental model of user review insights. But if you jump straight in, you will hit quite a few sticking points. Don’t worry, let’s solve them one by one.
(The data collection step is skipped here; you can refer to the AI crawler I introduced in a previous post, or collect the data with RPA.)
Let’s start with the problem
The key to review analysis is labeling each comment along multiple, different dimensions, i.e., “tagging”.
But here comes the problem.
Each comment sent to the AI is handled independently, which means the standard the AI applies is not uniform from one call to the next, including the label names themselves. How do we solve this?
In theory, you design a labeling system first and then let the AI choose only from that set of labels. But every product and situation is different, so how do we use AI to quickly build a good labeling system, and one that also fits the way the business actually operates?
This is where AI capabilities are particularly needed:
First throw a sample of comments at the AI (if the volume is small, you can throw them all in; this needs a long-context model such as Gemini or Kimi), then adjust the result manually.
Generate a labeling system
Here are the prompts I used this time, with a few key points:
1. Theory first: give the AI our theoretical framework so that it is on the same page as us.
2. Label design principles: labels should not be applied arbitrarily; at minimum they should follow the MECE principle.
3. Example and output format: the resulting tag system will be fed to the AI for the next step, so it is best to get it in JSON format.
# AI prompts: Build a user comment analysis tag system
## Your Tasks:
You are an experienced product analyst and natural language processing expert. Your task is to build a structured, multi-level analytical labeling system for the product based on a batch of user review texts provided by me. This tagging system will be used to subsequently classify and mark each comment in detail to gain in-depth insights into user needs and feedback.
## Core Theoretical Knowledge (Please Learn and Understand First)
1. User Value Hierarchy Model: From the user’s perspective, we will summarize their concerns and evaluations of the product into the following four core value levels. Your label design needs to revolve around these levels:
* Crowd & Scenario: Describes “who” uses or mentions the product “under what circumstances”. This includes the user’s identity, environment, specific context for using the product, or desired goals.
* FunctionalValue: refers to the specific functions, performance and operating characteristics provided by the product to solve the user’s core problems.
* AssuranceValue: Involves the quality, durability, safety, and reliability of the product, as well as the pre-sales, in-sale, and after-sales services and support provided by the brand.
* ExperienceValue: Covers the subjective feelings of users throughout the entire life cycle of interacting with the product, including sensory experience (appearance, sound, smell, etc.), ease of operation, emotional connection, etc.
2. Label Design Principles:
* Hierarchy: The labeling system should have a clear hierarchical structure (first-level label, second-level label, third-level label).
* Coverage: Coverage of the main topics mentioned by users in the comments as comprehensively as possible.
* Mutually exclusive (ideally): Labels at the same level should be as mutually exclusive as possible to avoid excessive semantic overlap.
* Conciseness: The name of each label should be concise and clear, with no more than 5 Chinese characters as much as possible.
* Objectivity: The label itself should not contain emotional tendencies (e.g., “good performance”, “poor quality”), only objectively describe the topic discussed (e.g., “cleaning effect”, “product material”). Sentiment analysis will be performed independently in subsequent marking steps.
* Scalability: The system should be flexible enough to be supplemented and adjusted in the future based on new commentary content.
Label system hierarchy definition:
* Level1Tag: Must be one of the following four fixed dimensions:
1. ‘Crowd scene’
2. ‘Functional value’
3. ‘Guaranteed value’
4. ‘Experience value’
* Level2Tag: is a further breakdown of the Level 1 tag and represents the main area of focus under this value hierarchy.
* Example (for a “cat self-grooming device” product, for inspiration only; you need to generate based on the actual reviews provided):
* Level 1 tag: ‘Crowd scene’
* Secondary tags: ‘pet characteristics’ (e.g. cat breed, number, age, personality)
* Secondary label: ‘usage expectations’ (e.g. reduced fluff, cat entertainment)
* Level 1 label: ‘Functional value’
* Secondary label: ‘Grooming function’ (e.g. grooming effect, bristle design)
* Secondary label: ‘feeding function’ (e.g. snack lure, machine operation)
* Secondary label: ‘cat hair collection’
* Level 1 label: ‘Guaranteed value’
* Secondary label: ‘Product quality’ (e.g. material, durability)
* Secondary label: ‘Brand support’ (e.g. installation instructions, accessories)
* Secondary label: ‘Cat safe’
* Level 1 label: ‘Experience value’
* Secondary label: ‘Installation experience’
* Secondary label: ‘cat reaction’ (e.g. acceptance, use behavior)
* Secondary label: ‘Master operation’ (e.g. cleaning, conditioning)
* Secondary label: ‘sensory perception’ (e.g. appearance, volume, noise)
* Secondary label: ‘Price Perception’
* Level3Tag: This is a concretization of the second-level tag and represents the more detailed topic points actually discussed in user comments. This is the part you need to generate primarily from the provided comment text.
* Example (continued from the previous example, for the “cat self-grooming device”):
* Level 1 label: ‘Functional value’
* Secondary label: ‘Grooming function’
* Level 3 Label: ‘Grooming Effect’
* Level 3 label: ‘Bristle material’
* Three-level label: ‘Inlet adjustment’
* Secondary label: ‘Feeding function’
* Level 3 Label: ‘Snack Temptation’
* Three-level label: ‘Feeder Operation’
* Level 3 label: ‘Feeder endurance’
* Level 1 label: ‘Experience value’
* Secondary label: ‘Installation experience’
* Level 3 label: ‘Easy to install’
* Level 3 label: ‘Installation time-consuming’
* Level 3 label: ‘snap design’
* Secondary label: ‘cat reaction’
* Level 3 tag: ‘Cat likes’
* Level 3 label: ‘Cat is afraid’
* Level 3 label: ‘Adaptation process’
Your specific operating instructions:
1. Carefully read and analyze the text of a batch of user reviews that I will provide later.
2. Based on the above theoretical knowledge, hierarchical structure definition and design principles, a three-level labeling system is generated for the products discussed in this batch of reviews.
3. For the categories and names of the first-level and second-level tags, you can refer to the examples I gave and expand or adjust them to better fit the actual review content, but the first-level tags must be the four fixed dimensions.
4. Three-level tags are at the heart of your creativity, requiring you to distill specific issues that users actually discuss from comments and name them with concise words.
5. Ensure that each tertiary label belongs to a clear secondary label and a primary label.
6. Output Format Requirements: Please output your designed label system in structured JSON format. This allows me to use it directly for subsequent AI marking tasks. The format is as follows:
```json
[
  {
    "level_1_tag": "Crowd scene",
    "level_2_tags": [
      {
        "level_2_tag_name": "Pet trait", // (example secondary tag, please generate from text)
        "level_3_tags": [
          "Cat breed", // (example three-level tag, please generate from text)
          "Number of cats",
          "Cat age"
        ]
      },
      {
        "level_2_tag_name": "Usage expectation", // (example secondary tag)
        "level_3_tags": [
          "Reduce floating hair",
          "Cat entertainment",
          "Replace manual grooming"
        ]
      }
      // ... more secondary labels and their tertiary labels under this primary label
    ]
  },
  {
    "level_1_tag": "Functional value",
    "level_2_tags": [
      {
        "level_2_tag_name": "Core function A", // (example secondary label, please replace with the specific function, e.g. hair reduction function)
        "level_3_tags": [
          "Function A effect", // (example three-level label, e.g. grooming effect)
          "Function A design", // (example three-level label, e.g. bristle design)
          "Function A parameter" // (example three-level label, e.g. entrance size)
        ]
      },
      {
        "level_2_tag_name": "Auxiliary function B", // (example secondary label, e.g. feeding function)
        "level_3_tags": [
          "Function B effect", // (example three-level label, e.g. snack lure)
          "Function B operation" // (example three-level label, e.g. feeder operation)
        ]
      }
      // ... more secondary labels and their tertiary labels under this primary label
    ]
  },
  {
    "level_1_tag": "Guaranteed value",
    "level_2_tags": [
      // ... please design secondary and tertiary labels according to the text
    ]
  },
  {
    "level_1_tag": "Experience value",
    "level_2_tags": [
      // ... please design secondary and tertiary labels according to the text
    ]
  }
]
```
Please confirm that you understand all of the above requirements. After I provide the user comment text, start your analysis and build the tag system.
Tag comments one by one
Once we have the tag system, we need the AI to tag the comments one by one.
Note that tagging here does not mean slapping on a couple of labels and calling it done; the AI should tag each dimension separately, so that the tags are rich enough to be aggregated for analysis later. The effect looks like this:
Surely no one is going to feed such a complex task to the AI comment by comment by hand?
Here we can use the “field shortcut” in the Feishu multi-dimensional table to achieve this.
First, upload the data to the Feishu multi-dimensional table, then create a new field for each “secondary label”, and select “Smart Label” in the “Field Shortcut”
It can use Deepseek to mark content.
After creating the new fields, another problem appears: with so many third-level tags under each secondary label, do you really have to fill them in one by one?
And how should the reference examples and labeling requirements for each field be filled in? Leaving them blank gives poor results.
Here, again, we use AI to do the work for us.
Prompt:
Next, generate the prompt I will send to the AI for tagging each comment. I will be using a Feishu multi-dimensional table as shown in the figure, so for each secondary label in the tag system below, give me: an input reference example, a custom labeling requirement, and an option description (that is, explain what this column is, and then list all of its three-level tags).
The following is the labeling system:
```
Put in the previously generated label system
```
In this way, we generate the required options and examples, label requirements under each field.
First-level tag: crowd and scene
A. Secondary labels: Demographic characteristics
– Added option descriptions (column descriptions and optional labels):
Description: Identify the identity, characteristics, or group to which the cat owner or the cat itself is mentioned in the review.
Optional three-level tags: long-haired cat owner, multi-cat family, large cat owner, short-haired cat owner, kitten owner, senior cat owner, timid cat owner, heavy-shedding cat owner, cat doesn’t like snacks, cat café/rescue station
– Few-shot learning example:
Comment text: “Great quality and works well to help groom my very long haired Siberian cat. He’s quite big…”
Corresponding tags: long-haired cat owner, large cat owner
– Custom label requirements (constraints for the AI):
If the cat’s breed, hair length, size, age, number, or ownership context (e.g. rescue station) is clearly mentioned in the comment, select the appropriate tag. A comment may correspond to multiple demographic tags.
B. Secondary label: usage scenarios/expectations
– Added option descriptions (column descriptions and optional labels):
Description: Identifies the actual use of the product, the context described by the user in the review, or the purpose or use case they expected to achieve when they purchased the product.
Optional three-level tags: reduce floating hair, cat self-entertainment, replace manual grooming, keep clean, daily grooming, cat adaptation period, provide hiding space, collect cat hair
– Few-shot learning example:
Comment text: “I have two inside/outside cats and thought this would be great to help keep them cleaner… It’s also his hidy-hole…”
Corresponding tags: keep clean, provide hiding space
– Custom label requirements (constraints for the AI):
Pay attention to the main reason the reviewer mentions the product or the problem they want to solve. If a specific behavior of the cat inside the device (such as hiding) is mentioned, select that tag as well.
… (the rest is omitted here for length; the remaining secondary labels follow the same pattern)
The next step is to create and fill in the fields one by one, for example:
1. Create a new “Crowd Characteristics” column
2. Select “AI-generated options” at the location of the tag
3. Then paste the tag description into the panel on the left, and click “Accept” once it has been generated.
4. Fill in the reference examples and label requirements in the corresponding positions shown in the figure below.
5. For the source field, select “User Comments”, i.e., the column that holds the review text.
Once you click confirm, it starts tagging row by row.
In the end, we obtained a full set of tertiary labels across the 12 secondary labels under the four dimensions of the theoretical framework.
At this point, our most difficult labeling job has been completed.
Although it may look complicated, very little of it actually requires human judgment; most of the work is handed to the AI. The workload drops to roughly 30% of what a manual pass would take.
Next, let’s tackle the next hurdle: data analysis.
Data analysis
As I said at the beginning, throwing the table directly to AI for data analysis is unreliable in most cases, and the data will be hallucinated, that is, wrong.
(I was skeptical, so I tested Gemini 2.5 Pro and Claude 3.7 one after another, and the final results were wrong every time. Don’t bother trying.)
The current solution is to give the table to the AI, let it learn the table structure and generate Python code, and then run that code. It doesn’t matter if you can’t program; the code basically runs without errors, just follow along and execute it.
Here we can use Gemini 2.5 Pro, or Claude 3.7, to write the code; after all, it is not a very complex coding requirement.
Prompt:
I am doing user review insight analysis. I have completed the first two stages and have the table shown; I need you to help me complete ‘Phase 3: Data Analysis & Insight Extraction’.
The specific methodology is as follows (note that this is a general methodology, which allows you to analyze my table after learning, rather than directly using the content in it)
```
Throw the previous methodology to AI to learn
```
For now, only complete ‘Phase 3: Data Analysis & Insight Extraction’.
The crowd/scene labels are: crowd characteristics, usage scenarios/expectations
The functional value labels are: core function performance, auxiliary special functions, operation control
The guaranteed value labels are: product quality and durability, brand service support, safety and reliability
The experience value labels are: price perception / emotional value, sensory experience, daily-use portability, installation experience
Your analysis must be based on statistics computed from my tabular data
Ignore “No Match Label”
Please give me a python script to complete the quantitative data analysis part, note that qualitative text analysis is not required
Instead of generating images, just generate the results of the data analysis table in markdown format
Note that the key point here is producing the final result in Markdown format; it is not meant for us to read, but for the next AI step to analyze.
Run the Python script and you get the MD file shown in the figure below, full of per-dimension label statistics and multi-dimension cross-tabulations computed by Python.
If this were Excel, it would be a whole pile of sheets.
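The generated script will differ from run to run, but its core is usually a handful of pandas group-bys written out as Markdown tables. A minimal sketch, assuming the tagged table was exported as tagged_reviews.csv with one column per secondary label and comma-separated level-3 tags in each cell (the column names are placeholders; to_markdown also needs the tabulate package installed):

```python
import pandas as pd

# Assumption: the tagged table was exported as "tagged_reviews.csv", one column per
# secondary label, with comma-separated level-3 tags inside each cell.
df = pd.read_csv("tagged_reviews.csv")

SECONDARY_LABEL_COLUMNS = [
    "Crowd characteristics",
    "Usage scenarios/expectations",
    "Core function performance",
    # ... the remaining secondary-label columns
]

sections = []
for col in SECONDARY_LABEL_COLUMNS:
    counts = (
        df[col]
        .dropna()
        .str.split(",")                         # a cell may hold several tags
        .explode()
        .str.strip()
        .loc[lambda s: s != "No Match Label"]   # ignore non-matches, as the prompt requires
        .value_counts()
        .rename_axis(col)
        .to_frame("mentions")
    )
    sections.append(f"## {col}\n\n" + counts.to_markdown())

with open("analysis_results.md", "w", encoding="utf-8") as f:
    f.write("\n\n".join(sections))
```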
Here you can do a quick data check yourself; I have verified that the numbers in my run are correct.
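For example, one spot check against the same hypothetical export, comparing a single count with the figure in the generated report:

```python
import pandas as pd

df = pd.read_csv("tagged_reviews.csv")  # same hypothetical export as above

# Rows whose "Installation experience" column mentions a given level-3 tag;
# compare this number with the corresponding figure in analysis_results.md.
manual_count = (
    df["Installation experience"]
    .fillna("")
    .str.contains("Ease of installation")
    .sum()
)
print(manual_count)
```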
The rest is simple: give the data-analysis result MD file back to the AI, in the same conversation (the AI has already learned our analysis framework), and let it complete the remaining steps.
With that, the qualitative insight part of the analysis is done.
Finally, let AI make an HTML analysis report, and it’s over!
For the HTML part, you can refer to my earlier article on generating a dynamic data analysis report.
Business landing
Brother Biscuit never makes up his cases. I chose this product for the analysis because a friend of mine actually sells it. After going through the report with her, the results were also endorsed on her side: they are close to what her team produced manually and can be applied directly to the marketing strategy. In terms of business guidance, Step 5 (insight refinement and interpretation) is the most useful part.
Of course, whether it can be implemented in business depends on the situation of different companies.
In summary, follow the logic of “starting from the end”: begin with the business scenario where the results will land, derive the analysis framework and labeling system from it, and then execute the plan. That way, little can go wrong.