Google Prompt Engineering White Paper: 10 Prompt Engineering Tips for 2025

With the rapid development of AI technology, communicating with large language models (LLMs) efficiently, so as to obtain more accurate and valuable outputs, has become a topic of broad interest. Google’s Prompt Engineering white paper provides valuable guidance on this question. This article walks through the 10 prompt engineering techniques proposed in the white paper.

Google released a 65-page white paper this year that explores prompt engineering in detail, explaining how to guide large language models (LLMs) to generate accurate output by designing and optimizing prompts.

The paper proposes a variety of prompt engineering techniques for optimizing prompts for large language models. Here are the 10 specific techniques:

  1. Zero-shot prompts: Provide a description of the task and some text for the LLM to start generating. This approach suits simple tasks that do not require creativity.
  2. One-shot and few-shot prompts: Provide one or several examples to help the model understand the task, and ask it to imitate them. The quantity and quality of examples significantly affect output; more complex tasks generally need more examples.
  3. System prompts: Set the overall context and purpose, and define the model’s basic capabilities and main task. For example, require the model to return output in JSON format.
  4. Role prompts: Assign the model a specific role, such as a travel guide, to generate content that aligns with that role.
  5. Contextual prompts: Provide specific details or context related to the task at hand, helping the model understand the request and generate relevant responses.
  6. Step-back prompts: First have the LLM consider a general question related to the specific task, then feed the answer back into the subsequent task prompt to activate relevant background knowledge and reasoning.
  7. Chain of Thought (CoT) prompts: Generate intermediate reasoning steps to help LLMs produce more accurate answers. This approach suits tasks that require reasoning.
  8. Self-consistency prompts: Improve accuracy and consistency by generating diverse reasoning paths and selecting the most common answer.
  9. Tree of Thoughts (ToT) prompts: Let the LLM explore multiple different reasoning paths simultaneously, which is especially useful for complex tasks.
  10. ReAct (reason & act) prompts: Combine reasoning with external tools (e.g., search, a code interpreter) to solve problems, mimicking how humans operate in the real world.

What is prompt engineering?

Prompt engineering is the process of designing high-quality input instructions (i.e., “prompts”) to guide large language models (such as Gemini, GPT, etc.) to generate more accurate outputs. Just like communicating with people, the way you ask questions directly affects the quality of the AI’s responses. Even if you’re not a programmer, you can optimize your prompts to let the AI better understand your needs.

Techniques

The following summarizes each technique, with an example and its applicable scenarios:

1. Zero-Shot

Technique: Give the task directly, without providing examples.

Example:

Prompt: “Classify the movie review as positive, neutral, or negative.”

Input: “The movie is beautiful, but the plot drags on.”

Applicable scenarios: simple tasks (such as classification, translation).
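As a minimal sketch, a zero-shot prompt is nothing more than the task description concatenated with the input; the helper below is hypothetical and only builds the string (no model call):

```python
def build_zero_shot_prompt(task: str, text: str) -> str:
    """Assemble a zero-shot prompt: task description plus the raw input, no examples."""
    return f"{task}\n\nReview: {text}\nSentiment:"

prompt = build_zero_shot_prompt(
    "Classify the movie review as POSITIVE, NEUTRAL, or NEGATIVE.",
    "The movie is beautiful, but the plot drags on.",
)
print(prompt)
```

Ending the prompt with “Sentiment:” nudges the model to complete with just the label.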

2. One-Shot/Few-Shot

Technique: Provide one (one-shot) or several (few-shot) examples to guide the model to imitate them.

Example:

Task: Parse a pizza order into JSON.

Example input: “I want a small pizza with cheese, tomato sauce, and pepperoni.”

Example output: {"size": "small", "ingredients": ["cheese", "tomato sauce", "pepperoni"]}

Applicable scenarios: Tasks that require specific output formats or complex patterns (such as data parsing, code generation).
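A few-shot prompt can be assembled mechanically from (input, output) pairs. In this hypothetical sketch, the second example is invented to pad the demonstration set; only the prompt string is built, with the model call left out:

```python
import json

# Each example pairs a natural-language order with its target JSON.
# The second pair is made up here to show a multi-example (few-shot) prompt.
EXAMPLES = [
    ("I want a small pizza with cheese, tomato sauce, and pepperoni.",
     {"size": "small", "ingredients": ["cheese", "tomato sauce", "pepperoni"]}),
    ("Give me a large pizza with mushrooms and olives.",
     {"size": "large", "ingredients": ["mushrooms", "olives"]}),
]

def build_few_shot_prompt(examples, order: str) -> str:
    """Prepend solved examples, then leave the new order for the model to complete."""
    parts = ["Parse the pizza order into JSON."]
    for text, parsed in examples:
        parts.append(f"Order: {text}\nJSON: {json.dumps(parsed)}")
    parts.append(f"Order: {order}\nJSON:")
    return "\n\n".join(parts)

print(build_few_shot_prompt(EXAMPLES, "A medium pizza with ham, please."))
```

Keeping every example in the exact output format you want is what makes the model imitate it.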

3. System Prompting

Technique: Set the model’s overall task framework (e.g., output format, safety restrictions).

Example:

Instruction: “Return only the sentiment label, in uppercase.”

Input: “This movie is terrible.”

Output: “NEGATIVE”

Applicable scenarios: You need to control the output structure or security (such as JSON generation and toxicity filtering).
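One practical payoff of a system prompt is that the structured output it enforces can be validated in code. The sketch below stubs the model with a canned reply (a real call would pass the system and user messages to an LLM API) and checks both the JSON contract and the uppercase constraint:

```python
import json

SYSTEM = ("Return only a JSON object with a single key 'sentiment' "
          "whose value is an uppercase label.")

def fake_llm(system: str, user: str) -> str:
    # Stand-in for a real model call; returns a canned, well-formed reply.
    return '{"sentiment": "NEGATIVE"}'

def classify(user_text: str) -> str:
    raw = fake_llm(SYSTEM, f"Review: {user_text}")
    data = json.loads(raw)            # enforce the JSON-output contract
    label = data["sentiment"]
    assert label == label.upper()     # enforce the uppercase constraint
    return label

print(classify("This movie is terrible."))  # NEGATIVE
```

If the model ever violates the system prompt, `json.loads` or the assertion fails loudly instead of silently passing bad output downstream.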

4. Role Prompting

Technique: Assign the model a specific role (e.g., travel guide, teacher).

Example:

Instruction: “Act as a travel guide and recommend 3 museums in Amsterdam.”

Output: A list of attractions such as the Van Gogh Museum.

Applicable scenarios: Answers that require a specific style or expertise (e.g., customer service, education).

5. Contextual Prompting

Technique: Provide context relevant to the task.

Example:

Context: “You’re writing for a retro gaming blog.”

Request: “Suggest 3 article topics.”

Output: Topics such as the evolution of 1980s arcade game design.

Applicable scenarios: Personalized or domain-related outputs (such as content creation).

6. Step-Back Prompting

Technique: Have the model answer an abstract question first, then solve the specific task.

Example:

Abstract question: “What settings in FPS games create a challenge?”

Specific task: “Write a storyline set in an ‘abandoned military base’.”

Applicable scenarios: Complex tasks require activating background knowledge (e.g., game design, academic research).
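Step-back prompting is just two chained model calls: the answer to the abstract question becomes context for the specific task. In this sketch the model is a stub with canned replies so the two-stage flow runs offline; the function names are hypothetical:

```python
def fake_llm(prompt: str) -> str:
    # Stub: canned answers so the two-stage flow is runnable without an API.
    if prompt.startswith("What settings"):
        return "Abandoned military base; derelict space station; underwater lab."
    return "STORY: Deep inside the abandoned military base, the lights flicker on..."

def step_back(task: str, abstract_question: str) -> str:
    background = fake_llm(abstract_question)            # step 1: abstract question
    final_prompt = f"Context: {background}\n\n{task}"   # step 2: feed the answer back
    return fake_llm(final_prompt)

story = step_back(
    "Write a one-paragraph storyline for a new FPS level.",
    "What settings in FPS games create a challenge?",
)
print(story)
```

The key point is that the first answer is inserted verbatim into the second prompt, priming the model with relevant background before it attempts the specific task.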

7. Chain of Thought (CoT)

Technique: Ask the model to reason step by step.

Example:

Question: “When I was 3 years old, my partner was 3 times my age. Now I am 20. How old is my partner?”

The model works out the age difference step by step and outputs: “26 years old.”

Applicable scenarios: Mathematical problems, logical reasoning, or tasks that require explanation.
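The intermediate steps a CoT prompt is meant to elicit can be checked with plain arithmetic, as in this sketch of the age puzzle (the trailing “Let’s think step by step.” is the usual CoT trigger phrase):

```python
# The reasoning chain for the age puzzle, reproduced as explicit arithmetic.
my_age_then, multiplier, my_age_now = 3, 3, 20
partner_age_then = my_age_then * multiplier        # 3 * 3 = 9
age_gap = partner_age_then - my_age_then           # 9 - 3 = 6, constant over time
partner_age_now = my_age_now + age_gap             # 20 + 6 = 26

cot_prompt = (
    "When I was 3, my partner was 3 times my age. Now I am 20. "
    "How old is my partner? Let's think step by step."
)
print(partner_age_now)  # 26
```

Without the step-by-step instruction, models often jump straight to a wrong answer such as 60 (20 × 3); forcing the intermediate age-gap step is what makes the chain reliable.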

8. Self-Consistency

Technique: Generate the answer multiple times and vote for the most common result.

Example:

Ask the model several times whether an email is important, and choose the majority result: “IMPORTANT”.

Applicable scenarios: Reduce randomness and improve answer reliability (such as classification and evaluation).
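The voting step is a simple majority count over sampled answers. In this sketch the sampler is a stub returning a fixed mix of labels (a real implementation would re-query the model with a nonzero temperature):

```python
from collections import Counter

def fake_llm_sample(prompt: str, seed: int) -> str:
    # Stub for sampling with temperature > 0: answers vary across runs.
    canned = ["IMPORTANT", "IMPORTANT", "NOT IMPORTANT", "IMPORTANT", "NOT IMPORTANT"]
    return canned[seed % len(canned)]

def self_consistent_answer(prompt: str, n_samples: int = 5) -> str:
    """Sample several reasoning paths and return the majority-vote answer."""
    answers = [fake_llm_sample(prompt, i) for i in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

print(self_consistent_answer("Is this email important?"))  # IMPORTANT
```

Here 3 of 5 samples say IMPORTANT, so the vote discards the two outlier runs, which is exactly how self-consistency smooths out sampling randomness.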

9. Tree of Thoughts (ToT)

Technique: Explore multiple reasoning paths in parallel.

Applicable scenarios: Complex problems that require multi-perspective analysis (e.g., strategic planning, idea generation).
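One common way to realize ToT is a small beam search: expand candidate “thoughts”, score each with an evaluator, and keep only the best branches. Both the expander and the scorer below are toy stubs (a real system would use the LLM for both), so this is only a sketch of the control flow:

```python
def expand(thought: str) -> list:
    # Stub: each node branches into two candidate continuations.
    return [thought + "A", thought + "B"]

def score(thought: str) -> int:
    # Stub evaluator: pretend thoughts containing more 'A' are more promising.
    return thought.count("A")

def tree_of_thoughts(root: str, depth: int = 3, beam: int = 2) -> str:
    """Breadth-first search over thoughts, pruned to the top `beam` at each level."""
    frontier = [root]
    for _ in range(depth):
        candidates = [c for t in frontier for c in expand(t)]
        frontier = sorted(candidates, key=score, reverse=True)[:beam]
    return frontier[0]

print(tree_of_thoughts(""))  # "AAA"
```

The pruning step is what distinguishes ToT from plain CoT: weak reasoning paths are abandoned mid-search instead of being followed to the end.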

10. ReAct (Reasoning and Action)

Technique: Combine reasoning with external tools (such as search APIs).

Example:

Task: “How many children do the members of the band Metallica have in total?”

The model searches for information on each member in turn and sums the counts.

Applicable scenarios: Tasks that require real-time data or interaction (such as information retrieval, code execution).
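A ReAct loop alternates Thought → Action → Observation until it can answer. In this sketch both the model and the search tool are stubs, and the “child counts” are placeholder data invented for the demo, not real facts about anyone:

```python
# Stubbed search tool: query -> observation. The values are made-up placeholders.
FAKE_SEARCH = {
    "member A children": "2",
    "member B children": "1",
}

def react_sum(queries: list) -> int:
    """Run a minimal ReAct loop: reason, call the tool, reason over the result."""
    total = 0
    for q in queries:
        print(f"Thought: I need to look up '{q}'.")
        observation = FAKE_SEARCH[q]          # Action: call the search tool
        print(f"Observation: {observation}")
        total += int(observation)             # Reason over the observation
    print(f"Answer: {total}")
    return total

react_sum(["member A children", "member B children"])  # Answer: 3
```

A real agent would let the LLM choose the next query from the previous observation; the fixed query list here only illustrates the interleaving of reasoning and tool calls.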
