Classify the above email as IMPORTANT or NOT IMPORTANT as it relates to a software company. Let's think step by step.
```

## 2.3. Tree of thoughts (ToT)

The Tree of Thoughts (ToT) is an advanced prompting framework that extends beyond the Chain-of-Thought (CoT) prompting technique. ToT enables language models to perform complex tasks that require exploration or strategic lookahead by leveraging a tree-based approach to generate and evaluate multiple reasoning paths.
ToT maintains a tree of thoughts, where each thought is represented by a coherent language sequence serving as an intermediate step toward problem-solving. This framework allows language models to:

- Generate multiple possible thoughts at each step
- Evaluate the potential of each thought
- Explore the thought space using search algorithms
- Perform lookahead verification and backtracking

```mermaid
flowchart TD
    A[Input Problem] --> B[Generate k Thoughts]
    B --> C[Evaluate Thoughts]
    C --> D["Search Strategy<br/>(BFS/DFS/Beam)"]
    D --> E[Continue Promising Paths]
    E -->|Success| F[Solution Found]
    E -->|Failure| G[Backtrack if Necessary]
    G --> B
```
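
To make the search loop concrete, here is a minimal Python sketch of ToT using breadth-first search with a fixed beam width. It illustrates the pattern rather than reproducing the reference implementations linked in the note below: `propose_thoughts`, `score_thought`, and `is_solution` are hypothetical stubs standing in for calls to a language model.

```python
# Minimal ToT loop: generate k thoughts per state, score them, keep the best
# `beam_width` candidates, and stop when a solution is found.
# The three helpers are hypothetical stubs standing in for LLM calls.
import heapq


def propose_thoughts(state: str, k: int = 3) -> list[str]:
    """Stub: ask the model for k candidate next thoughts given a partial solution."""
    return [f"{state} -> step{i}" for i in range(k)]


def score_thought(state: str) -> float:
    """Stub: ask the model to rate how promising a partial solution looks."""
    return -len(state)  # placeholder heuristic


def is_solution(state: str) -> bool:
    """Stub: check whether a thought completes the task."""
    return state.count("->") >= 3


def tree_of_thoughts(problem: str, beam_width: int = 2, max_depth: int = 5) -> str | None:
    frontier = [problem]  # the current beam of partial solutions
    for _ in range(max_depth):
        candidates = []
        for state in frontier:
            for thought in propose_thoughts(state):
                if is_solution(thought):
                    return thought
                candidates.append((score_thought(thought), thought))
        if not candidates:
            return None
        # Pruning: keep only the most promising thoughts. Backtracking is
        # implicit, since discarded branches are simply never expanded again.
        frontier = [t for _, t in heapq.nlargest(beam_width, candidates)]
    return None


print(tree_of_thoughts("Input problem"))
```

BFS with a beam is only one choice: the original ToT paper also evaluates a DFS variant, and the beam width and maximum depth control the trade-off between search cost and solution quality.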
> [!IMPORTANT]
> You can implement ToT in code (see <https://github.com/princeton-nlp/tree-of-thought-llm> or <https://github.com/jieyilong/tree-of-thought-puzzle-solver>).
> You can also apply the Tree of Thoughts (ToT) methodology in the ChatGPT web interface, but it requires structured interaction and active guidance from you to simulate the branching, pruning, and aggregation process.

Below is a step-by-step guide to applying ToT effectively in ChatGPT:

1. **Define the problem clearly**

```text
"I need a 7-day Southeast Asia trip plan with a $1,500 budget, visiting at least two countries and ensuring good weather. Use the Tree of Thought method to solve this."
```

2. **Branching (Generate Ideas)**

Ask ChatGPT to **generate multiple branches** (ideas/solutions) for each key aspect of the problem.
Example Prompts:

- "List 3 possible country combinations in Southeast Asia for a 7-day trip."
- "Suggest 2 optimal months for travel based on weather."

Ask ChatGPT to combine the best branches into a final plan.
Example Prompt:

- "Using the accepted branches (Thailand+Vietnam, February, Strategy 1), create a detailed itinerary and cost breakdown."

Response:

```text
Final Plan:
- **Countries:** Bangkok (Thailand) → Hanoi (Vietnam).
- **Timing:** February 1–7.
- **Budget Breakdown:**
  - Flights: $600 (40% of $1,500)
  - Accommodation: $450 (30%)
  - Food/Transport: $450 (30%)
- **Total:** $1,500 ✔️
```
## 2.4. Knowledge-enhanced techniques
Knowledge-enhanced techniques augment the model's reasoning with additional information, either generated by the model itself or retrieved from external sources.
### 2.4.1. Generated knowledge prompting
Generated knowledge prompting involves having the model generate relevant knowledge or information before answering a question. This technique helps the model access its own knowledge in a structured way before attempting to solve a problem.
Then, we feed that information into another prompt to write the blog post:
```
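
As a rough sketch of this two-stage flow in Python, the helper below first asks the model for background knowledge and then reuses that knowledge as context for the actual task. Here `complete` is a hypothetical placeholder for any LLM completion call, not a specific API.

```python
# Minimal sketch of generated knowledge prompting: two chained LLM calls.
# `complete` is a hypothetical placeholder for a real LLM client call.

def complete(prompt: str) -> str:
    """Stub: replace with an actual LLM API call."""
    return f"<model response to: {prompt[:50]}...>"


def generated_knowledge_answer(task: str, n_facts: int = 4) -> str:
    # Stage 1: have the model surface relevant knowledge first.
    knowledge = complete(f"Generate {n_facts} facts relevant to the task: {task}")
    # Stage 2: feed the generated knowledge back in as context for the task.
    return complete(
        f"Use the facts below to complete the task.\n\n"
        f"Facts:\n{knowledge}\n\nTask: {task}"
    )


print(generated_knowledge_answer("Write a short blog post about the Kermode bear."))
```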
### 2.4.2. Retrieval augmented generation (RAG)
Retrieval Augmented Generation enhances language models by incorporating information from external knowledge sources. This technique retrieves relevant documents or data from a knowledge base and provides them as context for the model to generate a response.
It is a hybrid approach that enhances large pre-trained language models by combining:
- Parametric memory: A pre-trained sequence-to-sequence (seq2seq) transformer (e.g., BART or T5) that generates responses.
- Non-parametric memory: A retriever that fetches relevant documents from an external knowledge base (e.g., Wikipedia).
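
To illustrate the retrieve-then-generate pattern, here is a minimal Python sketch that uses TF-IDF similarity as the non-parametric retriever and a stubbed `generate` function in place of the parametric seq2seq model. It assumes scikit-learn is installed and is only a sketch of the inference-time flow, not the original RAG implementation.

```python
# Minimal retrieve-then-generate sketch. TF-IDF stands in for the
# non-parametric retriever; `generate` is a hypothetical placeholder
# for the parametric seq2seq model (e.g., BART or T5).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "The Kermode bear is a subspecies of the American black bear.",
    "RAG combines a document retriever with a seq2seq generator.",
    "Wikipedia is a common external knowledge base for retrieval.",
]


def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (TF-IDF cosine)."""
    vectorizer = TfidfVectorizer()
    doc_vectors = vectorizer.fit_transform(documents)
    query_vector = vectorizer.transform([query])
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    ranked = scores.argsort()[::-1][:k]
    return [documents[i] for i in ranked]


def generate(prompt: str) -> str:
    """Stub: replace with a call to the generator model."""
    return f"<answer grounded in: {prompt[:60]}...>"


def rag_answer(question: str) -> str:
    # Retrieved documents are prepended as context for the generator.
    context = "\n".join(retrieve(question))
    return generate(f"Context:\n{context}\n\nQuestion: {question}")


print(rag_answer("What is the Kermode bear?"))
```

In the original RAG formulation the retriever is a dense (DPR) retriever rather than TF-IDF, and the retriever and generator are trained jointly; the sketch above only mirrors the inference-time flow.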