Commit 585c5c1

update prompt techniques
1 parent e4b7444 commit 585c5c1

1 file changed: +104 -11 lines changed


ai/prompt-engineering-guide/techniques.md

Lines changed: 104 additions & 11 deletions
@@ -11,12 +11,16 @@ Table of contents:
  - [2.2.2. Zero-shot CoT](#222-zero-shot-cot)
  - [2.2.3. Automatic CoT (Auto-CoT)](#223-automatic-cot-auto-cot)
  - [2.2.4. Self-Consistency](#224-self-consistency)
- - [2.3. Tree of thoughts (ToT)](#23-tree-of-thoughts-tot)
- - [2.4. Knowledge-enhanced techniques](#24-knowledge-enhanced-techniques)
- - [2.4.1. Generated knowledge prompting](#241-generated-knowledge-prompting)
- - [2.4.2. Retrieval augmented generation (RAG)](#242-retrieval-augmented-generation-rag)
+ - [2.2.5. Tree of thoughts (ToT)](#225-tree-of-thoughts-tot)
+ - [2.3. Knowledge-enhanced techniques](#23-knowledge-enhanced-techniques)
+ - [2.3.1. Generated knowledge prompting](#231-generated-knowledge-prompting)
+ - [2.3.2. Retrieval augmented generation (RAG)](#232-retrieval-augmented-generation-rag)
  - [2.4. Action-oriented techniques](#24-action-oriented-techniques)
  - [2.4.1. ReAct Prompting](#241-react-prompting)
+ - [2.4.2. Program-Aided language models (PAL)](#242-program-aided-language-models-pal)
+ - [2.4.3. Prompt Chaining](#243-prompt-chaining)
+ - [2.5. Automated techniques](#25-automated-techniques)
+ - [2.5.1. Automatic prompt engineering (APE)](#251-automatic-prompt-engineering-ape)

  ```mermaid
  graph TD
@@ -30,7 +34,8 @@ graph TD
  C --> C1[Chain-of-Thought]
  C --> C2[Zero-Shot CoT]
  C --> C3[Self-Consistency]
- C --> C4[Tree of Thoughts]
+ C --> C4[Auto-CoT]
+ C --> C5[Tree of Thoughts]

  A --> D[Knowledge-Enhanced Techniques]
  D --> D1[Generated Knowledge]
@@ -217,7 +222,7 @@ Donny
  Classify the above email as IMPORTANT or NOT IMPORTANT as it relates to a software company. Let's think step by step.
  ```

- ## 2.3. Tree of thoughts (ToT)
+ ### 2.2.5. Tree of thoughts (ToT)

  The Tree of Thoughts (ToT) is an advanced prompting framework that extends beyond the Chain-of-Thought (CoT) prompting technique. ToT enables language models to perform complex tasks that require exploration or strategic lookahead by leveraging a tree-based approach to generate and evaluate multiple reasoning paths.

@@ -318,11 +323,11 @@ Final Plan:
  - **Total:** $1,500 ✔️
  ```

- ## 2.4. Knowledge-enhanced techniques
+ ## 2.3. Knowledge-enhanced techniques

  Knowledge-enhanced techniques augment the model's reasoning with additional information, either generated by the model itself or retrieved from external sources.

- ### 2.4.1. Generated knowledge prompting
+ ### 2.3.1. Generated knowledge prompting

  Generated knowledge prompting involves having the model generate relevant knowledge or information before answering a question. This technique helps the model access its own knowledge in a structured way before attempting to solve a problem.

@@ -357,7 +362,7 @@ Generate 4 facts about the Kermode bear:
  Then, we feed that information into another prompt to write the blog post:
  ```

- ### 2.4.2. Retrieval augmented generation (RAG)
+ ### 2.3.2. Retrieval augmented generation (RAG)

  Retrieval Augmented Generation enhances language models by incorporating information from external knowledge sources. This technique retrieves relevant documents or data from a knowledge base and provides them as context for the model to generate a response.

@@ -389,5 +394,93 @@ graph LR
### 2.4.1. ReAct Prompting

- > [!WARNING]
- > WIP

ReAct (Reasoning and Acting) is a framework where language models generate both reasoning traces and task-specific actions in an interleaved manner. This allows the model to reason about its actions and use external tools to gather information.

ReAct prompting works by combining reasoning and acting into a thought-action loop. The LLM first reasons about the problem and generates a plan of action. It then performs the actions in the plan and observes the results. The LLM then uses the observations to update its reasoning and generate a new plan of action. This process continues until the LLM reaches a solution to the problem.

To see this in action, check out this example [code](https://github.com/ntk148v/testing/blob/7516cfe2265ac974b9f2bf10305411bf02a69e6c/python/react_agent/main.py).

```mermaid
flowchart TD
A[Question] --> B[Thought: Reasoning about question]
B --> C[Action: Using external tool]
C --> D[Observation: Result from tool]
D --> E[Thought: Reasoning with new info]
E --> F[Action: Using tool again]
F --> G[Observation: New result]
G --> H[Thought: Final reasoning]
H --> I[Final Answer]
```
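A minimal sketch of this loop in Python is shown below; the `call_llm` stub and the two toy tools are hypothetical placeholders, and the linked `react_agent/main.py` shows a complete, working version.

```python
import re

def call_llm(prompt: str) -> str:
    """Hypothetical stub -- replace with a call to your model provider's chat API."""
    raise NotImplementedError("plug in a real LLM call here")

# Toy tools the model may request via `Action: tool[argument]`.
TOOLS = {
    "search": lambda query: f"(top search results for {query!r} would go here)",
    "calculate": lambda expr: str(eval(expr, {"__builtins__": {}}, {})),  # demo only
}

REACT_PROMPT = """Answer the question by interleaving Thought, Action and Observation steps.
Available actions: search[query], calculate[expression].
When you are confident, finish with: Final Answer: <answer>

Question: {question}
{scratchpad}"""

def react(question: str, max_steps: int = 5) -> str:
    scratchpad = ""
    for _ in range(max_steps):
        # Thought/Action: the model reasons and proposes the next step.
        output = call_llm(REACT_PROMPT.format(question=question, scratchpad=scratchpad))
        scratchpad += output + "\n"
        if "Final Answer:" in output:
            return output.split("Final Answer:")[-1].strip()
        # Observation: run the requested tool and feed the result back into the loop.
        match = re.search(r"Action:\s*(\w+)\[(.+?)\]", output)
        if match:
            tool, argument = match.groups()
            result = TOOLS.get(tool, lambda _: "unknown tool")(argument)
            scratchpad += f"Observation: {result}\n"
    return "No final answer within the step limit."
```

The loop stops as soon as the model emits `Final Answer:`, or after `max_steps` iterations; real implementations typically also use stop sequences so the model produces one Thought/Action pair per call.
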
Below is an example from [HotpotQA](https://hotpotqa.github.io), a question-answering dataset requiring complex reasoning. ReAct allows the LLM to reason about the question (Thought 1) and take actions, such as querying Google (Act 1). It then receives an observation (Obs 1) and continues the thought-action loop until reaching a conclusion (Act 3).

![](https://www.promptingguide.ai/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Freact.8e7c93ae.png&w=828&q=75)

**Key characteristics:**

- Alternates between reasoning (Thought) and actions
- Can interact with external tools like search engines
- Combines internal knowledge with external information
- Improves performance on knowledge-intensive tasks
- Enhances interpretability through reasoning traces

### 2.4.2. Program-Aided language models (PAL)

Program-Aided Language Models use code as an intermediate step for solving complex problems. Instead of generating the answer directly, PAL generates a program (typically in Python) that computes the answer.

![](https://www.promptingguide.ai/_next/image?url=%2F_next%2Fstatic%2Fmedia%2Fpal.dfc96526.png&w=828&q=75)

**Key characteristics:**

- Uses code generation as an intermediate reasoning step
- Leverages programming language semantics for precise computation
- Offloads calculation and logical reasoning to code execution
- Particularly effective for mathematical and algorithmic tasks

Check out [a simple Python application](https://github.com/ntk148v/testing/blob/master/python/pal/main.py) that is able to interpret the question being asked and provide an answer by leveraging the Python interpreter.

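Below is a minimal sketch of the PAL pattern, assuming a hypothetical `call_llm` helper that returns Python source code: the model writes a small program and the host executes it to obtain the answer.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stub -- replace with a real model call that returns Python code."""
    raise NotImplementedError("plug in a real LLM call here")

PAL_PROMPT = """Write a Python function `solution()` that returns the answer to the question.
Respond with code only, no explanations.

Question: {question}
"""

def pal_answer(question: str):
    # The model writes the program...
    generated_code = call_llm(PAL_PROMPT.format(question=question))
    # ...and the host executes it to get the answer.
    # In practice, sandbox this step: never exec untrusted model output directly.
    namespace = {}
    exec(generated_code, namespace)
    return namespace["solution"]()

# For "Maria has 5 boxes with 12 apples each. How many apples are there in total?",
# the model is expected to return something like:
#     def solution():
#         return 5 * 12
```
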
### 2.4.3. Prompt Chaining

Prompt chaining involves breaking down complex tasks into subtasks and using the output of one prompt as input to another. This technique creates a chain of prompts, each handling a specific part of the overall task.

Prompt chaining is useful for accomplishing complex tasks that an LLM might struggle to address when given a single, very detailed prompt.

```mermaid
flowchart TD
A[Document] --> B[Prompt 1: Extract Quotes]
A --> D[Prompt 2: Answer Question]
B --> C[Relevant Quotes]
C --> D
E[User Question] --> B
E --> D
D --> F[Final Answer]
```
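As a sketch, the chain in the diagram above can be written as two prompts, with the second consuming the output of the first (the `call_llm` helper is again a hypothetical placeholder):

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stub -- replace with a real model call."""
    raise NotImplementedError("plug in a real LLM call here")

EXTRACT_PROMPT = """Extract the quotes from the document below that are relevant to the question.

Question: {question}

Document:
{document}"""

ANSWER_PROMPT = """Using only the quotes below, answer the question.

Question: {question}

Quotes:
{quotes}"""

def answer_with_chain(document: str, question: str) -> str:
    # Prompt 1: extract the relevant quotes from the document.
    quotes = call_llm(EXTRACT_PROMPT.format(question=question, document=document))
    # Prompt 2: answer the question using only those quotes.
    return call_llm(ANSWER_PROMPT.format(question=question, quotes=quotes))
```
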
**Key characteristics:**

- Decomposes complex tasks into manageable subtasks
- Uses outputs from earlier prompts as inputs to later ones
- Increases reliability and controllability
- Allows for more transparent debugging and analysis
- Especially useful for complex multi-stage tasks

## 2.5. Automated techniques

### 2.5.1. Automatic prompt engineering (APE)

At this point you might realize that writing a prompt can be complex. Wouldn't it be nice to automate this (write a prompt to write prompts)? Well, there's a method: Automatic Prompt Engineering (APE). This method not only alleviates the need for human input but also enhances the model's performance on various tasks.

![](https://www.promptingguide.ai/_next/image?url=%2F_next%2Fstatic%2Fmedia%2FAPE.3f0e01c2.png&w=828&q=75)

**How APE works:**

- Define a task to optimize
- Use an inference LLM to generate instruction candidates based on output demonstrations
- Execute these instructions using a target model
- Evaluate performance to find the best instruction

APE has discovered more effective zero-shot CoT prompts than human-engineered ones. For example, APE found that "Let's work this out in a step by step way to be sure we have the right answer" elicits better chain-of-thought reasoning than the human-designed "Let's think step by step" prompt.
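A minimal sketch of this loop, assuming hypothetical `inference_llm`, `target_llm`, and `score` helpers:

```python
def inference_llm(prompt: str, n: int = 8) -> list[str]:
    """Hypothetical stub: propose n candidate instructions from the demonstrations."""
    raise NotImplementedError("plug in a real LLM call here")

def target_llm(instruction: str, task_input: str) -> str:
    """Hypothetical stub: run the target model with a candidate instruction."""
    raise NotImplementedError("plug in a real LLM call here")

def score(prediction: str, expected: str) -> float:
    """Hypothetical scorer, e.g. exact match or log-likelihood of the gold answer."""
    return float(prediction.strip() == expected.strip())

def ape(demonstrations: list[tuple[str, str]]) -> str:
    """Return the candidate instruction that scores best on the demonstrations."""
    demos = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in demonstrations)
    candidates = inference_llm(
        "I gave a friend an instruction. Based on the input-output pairs below,\n"
        f"{demos}\nthe instruction was:"
    )

    def average_score(instruction: str) -> float:
        # Execute the candidate instruction on the target model and evaluate it.
        results = [score(target_llm(instruction, x), y) for x, y in demonstrations]
        return sum(results) / len(results)

    return max(candidates, key=average_score)
```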
