Commit 11a8782

update

1 parent c7a9145 commit 11a8782

1 file changed: +118 −6 lines changed

ai/prompt-engineering-guide/techniques.md
@@ -11,9 +11,12 @@ Table of contents:
- [2.2.2. Zero-shot CoT](#222-zero-shot-cot)
- [2.2.3. Automatic CoT (Auto-CoT)](#223-automatic-cot-auto-cot)
- [2.2.4. Self-Consistency](#224-self-consistency)
- [2.3. Tree of thoughts (ToT)](#23-tree-of-thoughts-tot)
- [2.4. Knowledge-enhanced techniques](#24-knowledge-enhanced-techniques)
- [2.4.1. Generated knowledge prompting](#241-generated-knowledge-prompting)
- [2.4.2. Retrieval augmented generation (RAG)](#242-retrieval-augmented-generation-rag)
- [2.5. Action-oriented techniques](#25-action-oriented-techniques)
- [2.5.1. ReAct Prompting](#251-react-prompting)

```mermaid
graph TD
@@ -214,11 +217,111 @@ Donny
Classify the above email as IMPORTANT or NOT IMPORTANT as it relates to a software company. Let's think step by step.
```

## 2.3. Tree of thoughts (ToT)

Tree of Thoughts (ToT) is an advanced prompting framework that extends Chain-of-Thought (CoT) prompting. It enables language models to tackle complex tasks that require exploration or strategic lookahead by using a tree-based approach to generate and evaluate multiple reasoning paths.

![](https://www.promptingguide.ai/_next/image?url=%2F_next%2Fstatic%2Fmedia%2FTOT.3b13bc5e.png&w=1200&q=75)

**Framework overview**

ToT maintains a tree of thoughts, where each thought is a coherent language sequence serving as an intermediate step toward solving the problem. This framework allows language models to:

- Generate multiple candidate thoughts at each step
- Evaluate the potential of each thought
- Explore the thought space using search algorithms
- Perform lookahead verification and backtracking

```mermaid
flowchart TD
    A[Input Problem] --> B[Generate k Thoughts]
    B --> C[Evaluate Thoughts]
    C --> D["Search Strategy<br/>(BFS/DFS/Beam)"]
    D --> E[Continue Promising Paths]
    E -->|Success| F[Solution Found]
    E -->|Failure| G[Backtrack if Necessary]
    G --> B
```
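
To make the loop concrete, here is a minimal, generic sketch of ToT as a breadth-first beam search. This is not the princeton-nlp implementation; `generate_thoughts` and `score_thought` are hypothetical callables that you would back with LLM prompts (e.g., "propose k next steps" and "rate this partial solution from 0 to 1").

```python
from typing import Callable, List, Tuple

def tot_search(
    problem: str,
    generate_thoughts: Callable[[str], List[str]],  # proposes candidate next thoughts for a state
    score_thought: Callable[[str], float],          # rates a partial solution (in practice, an LLM judge)
    beam_width: int = 2,
    max_depth: int = 3,
) -> str:
    """Breadth-first Tree of Thoughts: expand each state, score candidates, keep the best beam."""
    frontier: List[Tuple[float, str]] = [(0.0, problem)]
    for _ in range(max_depth):
        candidates: List[Tuple[float, str]] = []
        for _, state in frontier:
            for thought in generate_thoughts(state):
                path = state + "\n" + thought          # extend the reasoning path with this thought
                candidates.append((score_thought(path), path))
        if not candidates:
            break
        # Pruning: discard all but the most promising paths before the next expansion
        frontier = sorted(candidates, key=lambda c: c[0], reverse=True)[:beam_width]
    return max(frontier, key=lambda c: c[0])[1]

# Toy usage with deterministic stand-ins for the two LLM calls:
best = tot_search(
    "Plan a 7-day Southeast Asia trip under $1,500.",
    generate_thoughts=lambda state: ["-> visit Thailand", "-> visit Vietnam"],
    score_thought=lambda path: path.count("Thailand"),  # placeholder heuristic score
)
print(best)
```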
> [!IMPORTANT]
> You can also implement ToT in code; see <https://github.com/princeton-nlp/tree-of-thought-llm> or <https://github.com/jieyilong/tree-of-thought-puzzle-solver>.
> You can apply the Tree of Thought (ToT) methodology in the ChatGPT web UI as well, but it requires structured interaction and active guidance from you to simulate the branching, pruning, and aggregation process.

Below is a step-by-step guide to applying ToT effectively in ChatGPT (a scripted version follows the steps):

1. **Define the problem clearly**

   ```text
   "I need a 7-day Southeast Asia trip plan with a $1,500 budget, visiting at least two countries and ensuring good weather. Use the Tree of Thought method to solve this."
   ```
2. **Branching (Generate Ideas)**

   Ask ChatGPT to **generate multiple branches** (ideas/solutions) for each key aspect of the problem.
   Example prompts:

   - "List 3 possible country combinations in Southeast Asia for a 7-day trip."
   - "Suggest 2 optimal months for travel based on weather."
   - "Propose budget allocation strategies (flights, accommodation, etc.)."

   Response:

   ```text
   Branch 1: Countries
   - Option A: Thailand + Vietnam
   - Option B: Malaysia + Cambodia
   - Option C: Indonesia + Laos

   Branch 2: Timing
   - Option 1: February (dry season)
   - Option 2: November (risk of rain)

   Branch 3: Budget
   - Strategy 1: Flights 40%, Accommodation 30%, Rest 30%
   - Strategy 2: Flights 50%, Accommodation 25%, Rest 25%
   ```
3. **Pruning (Eliminate Unfeasible Options)**

   Ask ChatGPT to **evaluate and filter branches** based on your criteria (e.g., budget, weather, visa policies).
   Example prompts:

   - "Which country combination is cheaper between Thailand+Vietnam and Malaysia+Cambodia?"
   - "Why should we exclude November as a travel month?"
   - "Which budget strategy keeps costs under $1,500?"

   Response:

   ```text
   Pruned Results:
   - **Rejected:** Option B (Malaysia+Cambodia) due to higher visa costs.
   - **Rejected:** Option 2 (November) due to rainy season risks.
   - **Accepted:** Strategy 1 (40%/30%/30%) leaves enough for meals/transport.
   ```
4. **Evaluation & Aggregation (Synthesize Solutions)**

   Ask ChatGPT to combine the best branches into a final plan.
   Example prompt:

   - "Using the accepted branches (Thailand+Vietnam, February, Strategy 1), create a detailed itinerary and cost breakdown."

   Response:

   ```text
   Final Plan:
   - **Countries:** Bangkok (Thailand) → Hanoi (Vietnam).
   - **Timing:** February 1–7.
   - **Budget Breakdown:**
     - Flights: $600 (40% of $1,500)
     - Accommodation: $450 (30%)
     - Food/Transport: $450 (30%)
   - **Total:** $1,500 ✔️
   ```
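
If you would rather script this workflow than drive it by hand, the branch → prune → aggregate loop is just a few chained completions. A minimal sketch, assuming the `openai` Python package (v1+) with an `OPENAI_API_KEY` in the environment; the model name is illustrative:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    """One chat completion per ToT phase."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

problem = ("I need a 7-day Southeast Asia trip plan with a $1,500 budget, "
           "visiting at least two countries and ensuring good weather.")

# 1) Branching: generate candidate options for each aspect of the problem
branches = ask(f"{problem}\nList 3 country combinations, 2 travel months, "
               "and 2 budget allocation strategies, as labeled options.")

# 2) Pruning: evaluate and reject infeasible branches
pruned = ask(f"Problem: {problem}\nOptions:\n{branches}\n"
             "Reject options that break the budget or hit bad weather; "
             "explain each rejection, then list the accepted options.")

# 3) Aggregation: synthesize the surviving branches into a final plan
plan = ask(f"Problem: {problem}\nAccepted options:\n{pruned}\n"
           "Combine them into a day-by-day itinerary with a cost breakdown.")
print(plan)
```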
## 2.4. Knowledge-enhanced techniques

Knowledge-enhanced techniques augment the model's reasoning with additional information, either generated by the model itself or retrieved from external sources.

### 2.4.1. Generated knowledge prompting

Generated knowledge prompting involves having the model generate relevant knowledge or information before answering a question. This technique helps the model access its own knowledge in a structured way before attempting to solve a problem.

@@ -253,10 +356,15 @@ Generate 4 facts about the Kermode bear:
Then, we feed that information into another prompt to write the blog post:
```
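
The two-stage flow is easy to automate. A minimal sketch, reusing the hypothetical `ask()` helper from the ToT scripting example above (any single-prompt LLM call would do):

```python
# Stage 1: have the model generate knowledge first
knowledge = ask("Generate 4 facts about the Kermode bear:")

# Stage 2: condition the actual task on that generated knowledge
post = ask(
    "Using the facts below, write a short blog post about the Kermode bear.\n\n"
    f"Facts:\n{knowledge}"
)
print(post)
```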

### 2.4.2. Retrieval augmented generation (RAG)

Retrieval Augmented Generation enhances language models by incorporating information from external knowledge sources. This technique retrieves relevant documents or data from a knowledge base and provides them as context for the model to generate a response.

It is a hybrid approach that augments a large pre-trained language model by combining:

- Parametric memory: a pre-trained sequence-to-sequence (seq2seq) transformer (e.g., BART or T5) that generates responses.
- Non-parametric memory: a retriever that fetches relevant documents from an external knowledge base (e.g., Wikipedia).

```mermaid
graph LR
    A[User Query] --> B[Retriever Component]
@@ -276,5 +384,9 @@ graph LR
- Allows access to more up-to-date information
- Especially useful for knowledge-intensive tasks
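
A minimal retrieve-then-generate sketch, again assuming the hypothetical `ask()` helper from above, with a toy keyword-overlap retriever standing in for a real vector store:

```python
# Toy corpus standing in for an external knowledge base (e.g., Wikipedia)
docs = [
    "The Kermode bear, or spirit bear, is a subspecies of the American black bear.",
    "RAG combines a seq2seq generator with a document retriever.",
    "Bangkok is the capital of Thailand.",
]

def retrieve(query, k=2):
    """Rank documents by naive keyword overlap; a real system would use embeddings."""
    q = set(query.lower().split())
    return sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)[:k]

def rag_answer(query):
    context = "\n".join(retrieve(query))
    # Non-parametric memory (retrieved docs) conditions the parametric generator
    return ask(f"Answer using only this context:\n{context}\n\nQuestion: {query}")

print(rag_answer("What is a spirit bear?"))
```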

## 2.5. Action-oriented techniques

### 2.5.1. ReAct Prompting
> [!WARNING]
> WIP