Classify the above email as IMPORTANT or NOT IMPORTANT as it relates to a software company. Let's think step by step.
```

### 2.2.5. Tree of thoughts (ToT)

The Tree of Thoughts (ToT) is an advanced prompting framework that extends beyond the Chain-of-Thought (CoT) prompting technique. ToT enables language models to perform complex tasks that require exploration or strategic lookahead by leveraging a tree-based approach to generate and evaluate multiple reasoning paths.
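
To make the tree search concrete, here is a minimal breadth-first sketch in Python; the `generate` callable, the beam width, and the rating prompt are illustrative assumptions rather than part of any particular ToT implementation:

```python
from typing import Callable, List


def tree_of_thoughts(question: str,
                     generate: Callable[[str], str],
                     n_candidates: int = 3,
                     beam_width: int = 2,
                     max_depth: int = 3) -> str:
    """Explore several reasoning paths breadth-first and keep the best-rated ones."""
    beam: List[str] = [""]  # each entry is a partial chain of thoughts
    for _ in range(max_depth):
        candidates = []
        for path in beam:
            for _ in range(n_candidates):
                # Ask the model to extend this reasoning path by one "thought".
                thought = generate(
                    f"Question: {question}\nReasoning so far:\n{path}\n"
                    "Write only the next reasoning step."
                )
                candidates.append(f"{path}\n{thought}".strip())
        # Ask the model to rate each candidate path, then keep the top `beam_width`.
        scored = []
        for path in candidates:
            rating = generate(
                f"Question: {question}\nReasoning:\n{path}\n"
                "Rate how promising this reasoning is from 1 (poor) to 10 (excellent). "
                "Answer with the number only."
            )
            try:
                score = float(rating.strip())
            except ValueError:
                score = 0.0
            scored.append((score, path))
        scored.sort(key=lambda item: item[0], reverse=True)
        beam = [path for _, path in scored[:beam_width]]
    # Produce a final answer from the best surviving reasoning path.
    return generate(f"Question: {question}\nReasoning:\n{beam[0]}\nGive the final answer.")
```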

## 2.3. Knowledge-enhanced techniques

Knowledge-enhanced techniques augment the model's reasoning with additional information, either generated by the model itself or retrieved from external sources.

### 2.3.1. Generated knowledge prompting

Generated knowledge prompting involves having the model generate relevant knowledge or information before answering a question. This technique helps the model access its own knowledge in a structured way before attempting to solve a problem.

For example, we can first prompt the model with "Generate 4 facts about the Kermode bear:", then feed the generated facts into another prompt to write a blog post about the bear.
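
A minimal sketch of this two-step flow in Python (the `generate` callable and the prompt wording are assumptions, not a fixed API):

```python
from typing import Callable


def generated_knowledge_answer(topic: str,
                               task: str,
                               generate: Callable[[str], str]) -> str:
    """Two-step generated knowledge prompting: generate facts, then use them."""
    # Step 1: have the model surface relevant knowledge first.
    facts = generate(f"Generate 4 facts about {topic}:")
    # Step 2: feed the generated facts back in as context for the real task.
    return generate(f"Use the facts below to {task}\n\nFacts:\n{facts}")


# Example usage (mirrors the Kermode bear blog-post example):
# post = generated_knowledge_answer(
#     "the Kermode bear",
#     "write a short blog post about the Kermode bear.",
#     generate=my_llm_call,  # any prompt -> completion function
# )
```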

### 2.3.2. Retrieval augmented generation (RAG)

Retrieval Augmented Generation enhances language models by incorporating information from external knowledge sources. This technique retrieves relevant documents or data from a knowledge base and provides them as context for the model to generate a response.
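
A minimal sketch of the retrieve-then-generate flow, assuming a hypothetical `retrieve` function over your own knowledge base and a generic `generate` callable (no particular vector store or framework is implied):

```python
from typing import Callable, List


def rag_answer(question: str,
               retrieve: Callable[[str, int], List[str]],
               generate: Callable[[str], str],
               top_k: int = 3) -> str:
    """Retrieve relevant documents and provide them as context for generation."""
    # Step 1: fetch the top-k passages most relevant to the question.
    passages = retrieve(question, top_k)
    context = "\n\n".join(passages)
    # Step 2: ground the model's answer in the retrieved context.
    return generate(
        "Answer the question using only the context below. "
        "If the context is not enough, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
```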
### 2.4.1. ReAct Prompting
ReAct (Reasoning and Acting) is a framework where language models generate both reasoning traces and task-specific actions in an interleaved manner. This allows the model to reason about its actions and use external tools to gather information.

ReAct prompting works by combining reasoning and acting into a thought-action loop. The LLM first reasons about the problem and generates a plan of action. It then performs the actions in the plan and observes the results. The LLM then uses the observations to update its reasoning and generate a new plan of action. This process continues until the LLM reaches a solution to the problem.
To see this in action, you can write some [code](https://github.com/ntk148v/testing/blob/7516cfe2265ac974b9f2bf10305411bf02a69e6c/python/react_agent/main.py).

```mermaid
flowchart TD
    A[Question] --> B[Thought: Reasoning about question]
    B --> C[Action: Using external tool]
    C --> D[Observation: Result from tool]
    D --> E[Thought: Reasoning with new info]
    E --> F[Action: Using tool again]
    F --> G[Observation: New result]
    G --> H[Thought: Final reasoning]
    H --> I[Final Answer]
```
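
For illustration, here is a bare-bones version of this thought-action loop in Python; `generate` and the single `search` tool are assumed stand-ins, not a specific agent framework:

```python
import re
from typing import Callable


def react_loop(question: str,
               generate: Callable[[str], str],
               search: Callable[[str], str],
               max_steps: int = 5) -> str:
    """Interleave Thought / Action / Observation steps until a final answer appears."""
    transcript = (
        "Answer the question by alternating lines of the form\n"
        "Thought: ..., Action: search[query], Observation: ...\n"
        "Finish with a line 'Final Answer: ...'.\n"
        f"Question: {question}\n"
    )
    for _ in range(max_steps):
        # The model reasons about the current transcript and proposes the next step.
        step = generate(transcript)
        transcript += step + "\n"
        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()
        # If the model asked to use the tool, run it and append the observation.
        match = re.search(r"Action:\s*search\[(.+?)\]", step)
        if match:
            observation = search(match.group(1))
            transcript += f"Observation: {observation}\n"
    return "No final answer within the step budget."
```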
In [HotpotQA](https://hotpotqa.github.io), a question-answering dataset requiring complex reasoning, ReAct allows the LLM to reason about the question (Thought 1) and take an action, such as querying Google (Act 1). It then receives an observation (Obs 1) and continues the thought-action loop until it reaches a conclusion (Act 3).

**Key characteristics:**

- Alternates between reasoning (Thought) and actions
- Can interact with external tools like search engines
- Combines internal knowledge with external information
- Improves performance on knowledge-intensive tasks
- Enhances interpretability through reasoning traces
### 2.4.2. Program-Aided language models (PAL)
Program-Aided Language Models use code as an intermediate step for solving complex problems. Instead of generating the answer directly, PAL generates a program (typically in Python) that computes the answer.

**Key characteristics:**

- Uses code generation as an intermediate reasoning step
- Leverages programming language semantics for precise computation
- Offloads calculation and logical reasoning to code execution
- Particularly effective for mathematical and algorithmic tasks
Check out [a simple Python application](https://github.com/ntk148v/testing/blob/master/python/pal/main.py) that is able to interpret the question being asked and provide an answer by leveraging the Python interpreter.
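
Here is a rough sketch of the same idea (the `generate` callable is an assumption, and executing model-written code with `exec` is shown only for illustration; run untrusted code in a sandbox):

```python
from typing import Callable


def pal_answer(question: str, generate: Callable[[str], str]):
    """Program-aided prompting: ask for Python code, run it, return its result."""
    # Step 1: ask the model for a program instead of a direct answer.
    code = generate(
        "Write Python code that computes the answer to the question below "
        "and stores it in a variable named `answer`. Return only the code.\n"
        f"Question: {question}"
    )
    # Step 2: offload the actual computation to the Python interpreter.
    # (Assumes the model returns raw Python, not a fenced Markdown block.)
    namespace: dict = {}
    exec(code, namespace)  # caution: sandbox untrusted model output in practice
    return namespace.get("answer")
```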
### 2.4.3. Prompt Chaining
Prompt chaining involves breaking down complex tasks into subtasks and using the output of one prompt as input to another. This technique creates a chain of prompts, each handling a specific part of the overall task.
Prompt chaining is useful for accomplishing complex tasks that an LLM might struggle to address when given a single, very detailed prompt.

```mermaid
flowchart TD
    A[Document] --> B[Prompt 1: Extract Quotes]
    A --> D[Prompt 2: Answer Question]
    B --> C[Relevant Quotes]
    C --> D
    E[User Question] --> B
    E --> D
    D --> F[Final Answer]
```

**Key characteristics:**

- Decomposes complex tasks into manageable subtasks
- Uses outputs from earlier prompts as inputs to later ones
- Increases reliability and controllability
- Allows for more transparent debugging and analysis
- Especially useful for complex multi-stage tasks
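
As a small illustrative sketch of the document question-answering chain in the diagram above (the `generate` callable and the prompt wording are assumptions):

```python
from typing import Callable


def answer_from_document(document: str,
                         question: str,
                         generate: Callable[[str], str]) -> str:
    """Two-prompt chain: extract relevant quotes, then answer using them."""
    # Prompt 1: narrow the document down to quotes relevant to the question.
    quotes = generate(
        "Extract the quotes from the document that are relevant to the question.\n"
        f"Document:\n{document}\n\nQuestion: {question}"
    )
    # Prompt 2: answer the question using the document plus the extracted quotes.
    return generate(
        "Answer the question using the document and the relevant quotes.\n"
        f"Document:\n{document}\n\nRelevant quotes:\n{quotes}\n\nQuestion: {question}"
    )
```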
## 2.5. Automated techniques
### 2.5.1. Automatic prompt engineering (APE)
At this point you might realize that writing a prompt can be complex. Wouldn't it be nice to automate this (write a prompt to write prompts)? Well, there is a method: Automatic Prompt Engineering (APE). This method not only alleviates the need for human input but also enhances the model's performance on various tasks. At a high level, APE works as follows (a code sketch follows the list):

- Use an inference LLM to generate instruction candidates based on output demonstrations
- Execute these instructions using a target model
- Evaluate performance to find the best instruction
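
A toy sketch of that loop (the `generate` callable, the exact-match scoring, and the prompt wording are all assumptions):

```python
from typing import Callable, List, Tuple


def ape_select_instruction(demos: List[Tuple[str, str]],
                           eval_set: List[Tuple[str, str]],
                           generate: Callable[[str], str],
                           n_candidates: int = 5) -> str:
    """Generate candidate instructions from demonstrations and keep the best one."""
    demo_text = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in demos)
    # Step 1: an inference LLM proposes instructions that explain the demonstrations.
    candidates = [
        generate(
            "I gave a friend an instruction. Based on these input/output pairs, "
            f"what was the instruction?\n{demo_text}\nInstruction:"
        )
        for _ in range(n_candidates)
    ]

    # Steps 2-3: execute each candidate on a small eval set and score exact matches.
    def accuracy(instruction: str) -> float:
        hits = sum(
            generate(f"{instruction}\nInput: {x}\nOutput:").strip() == y
            for x, y in eval_set
        )
        return hits / len(eval_set)

    return max(candidates, key=accuracy)
```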
APE has discovered more effective zero-shot CoT prompts than human-engineered ones. For example, APE found that "Let's work this out in a step by step way to be sure we have the right answer" elicits better chain-of-thought reasoning than the human-designed "Let's think step by step" prompt.