
Commit 53cb423

add content to CoT review
1 parent c3d29ad commit 53cb423

File tree

4 files changed: +14 -3 lines


_config.yml

Lines changed: 3 additions & 2 deletions

@@ -1,5 +1,6 @@
 collections:
   notes:
     output: true
-  articles:
-    output: true
+    sort_by: date
+    order: ascending
+

_notes/2025-01-06-nlp-lecture1.md

Lines changed: 1 addition & 0 deletions

@@ -1,6 +1,7 @@
 ---
 layout: note
 title: nlp introduction
+date: "2025-01-06"
 ---
 
 {{ page.title }}

_notes/2025-01-08-text-normalization.md

Lines changed: 1 addition & 0 deletions

@@ -1,6 +1,7 @@
 ---
 layout: note
 title: nlp text normalization
+date: "2025-01-08"
 ---
 
 {{ page.title }}

_posts/2025-01-10-paper-review-cot.md

Lines changed: 9 additions & 1 deletion

@@ -12,4 +12,12 @@ paper review Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
 This is my review of the paper published by the Brain Team at Google Research. The title is *Chain-of-Thought Prompting Elicits Reasoning in Large Language Models*. This post will go over the main task at hand, analyze the results, and discuss potential flaws and areas for improvement. A link to this paper can be found [here](https://arxiv.org/abs/2201.11903).
 
 <h2>Task</h2>
-The focus is on arithmetic, common sense, and symbolic reasoning tasks. Specifically, when a model is given an input that has multiple layers of reasoning, we find that the output can lead to inaccurate results when dealing with these sorts of challenging tasks.
+The focus is on arithmetic, commonsense, and symbolic reasoning tasks. Specifically, when a model is given an input that requires multiple steps of reasoning, its output is often inaccurate on these challenging tasks.
+
+<h2>"Solution"</h2>
+We have seen a range of benefits from increasing the size of language models, but unlocking reasoning ability is what lets them achieve high performance on tasks such as arithmetic, commonsense, and symbolic reasoning. The proposed method is motivated by two ideas: <br>
+1. Generating natural language rationales that lead to the final answer (prior work involves generating intermediate steps from scratch or finetuning a pretrained model).
+2. In-context few-shot learning via prompting, where one can "prompt" a model with a few input-output exemplars demonstrating the task (this has been successful on question-answering tasks).
+
+What is few-shot learning?
+A machine learning technique in which a model learns to make predictions from only a small amount of labeled data.
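As a side note on the few-shot chain-of-thought prompting described in the added lines above, here is a minimal sketch of how such a prompt could be assembled. The arithmetic exemplar is adapted from the paper's illustrative example; the `build_cot_prompt` helper and the surrounding code are illustrative assumptions, not part of the paper or of this commit.

```python
# Minimal sketch of few-shot chain-of-thought (CoT) prompting.
# The exemplar is adapted from the paper's illustrative example; the helper
# name and structure are assumptions made for this sketch.

EXEMPLAR = {
    "question": ("Roger has 5 tennis balls. He buys 2 more cans of tennis "
                 "balls. Each can has 3 tennis balls. How many tennis balls "
                 "does he have now?"),
    "rationale": ("Roger started with 5 balls. 2 cans of 3 tennis balls "
                  "each is 6 tennis balls. 5 + 6 = 11."),
    "answer": "11",
}


def build_cot_prompt(exemplars, new_question):
    """Concatenate worked exemplars, then append the unsolved question."""
    parts = []
    for ex in exemplars:
        # Each exemplar shows the rationale (chain of thought) before the answer.
        parts.append(
            f"Q: {ex['question']}\n"
            f"A: {ex['rationale']} The answer is {ex['answer']}.\n"
        )
    # The model is expected to continue with its own rationale and answer.
    parts.append(f"Q: {new_question}\nA:")
    return "\n".join(parts)


if __name__ == "__main__":
    prompt = build_cot_prompt(
        [EXEMPLAR],
        "The cafeteria had 23 apples. If they used 20 to make lunch and "
        "bought 6 more, how many apples do they have?",
    )
    print(prompt)  # This string would be sent to a large language model.
```

Standard few-shot prompting would use the same structure but write only "The answer is 11." after each exemplar question; chain-of-thought prompting adds the natural-language rationale before the answer.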
