
Commit 13d0297

add llms unsupervised learners review
1 parent af2085d commit 13d0297

7 files changed: +36 -6 lines changed

_config.yml

Lines changed: 2 additions & 3 deletions

@@ -1,6 +1,5 @@
 collections:
   notes:
     output: true
-    sort_by: date
-    order: ascending
-
+  reviews:
+    output: true
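
After this change the collections block reads as below. With output: true, Jekyll renders each document of a collection to its own page; by Jekyll's collection convention the review documents would live in a _reviews/ directory (directory name assumed, not shown in the diff):

collections:
  notes:
    output: true
  reviews:
    output: true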

_layouts/review.html

Lines changed: 6 additions & 0 deletions

@@ -0,0 +1,6 @@
+---
+layout: default
+---
+<div id="review">
+{{ content }}
+</div>
File renamed without changes.
File renamed without changes.
Lines changed: 4 additions & 2 deletions

@@ -1,6 +1,6 @@
 ---
-layout: post
-title: paper review Chain-of-Thought in LLMs
+layout: review
+title: Chain-of-Thought in LLMs
 ---
 
 paper review Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
@@ -16,7 +16,9 @@ The focus is on arithmetic, common sense, and symbolic reasoning tasks. Specific
 
 <h2>"Solution"</h2>
 We have seen a range of benefits when increasing the size of the Language Model, but unlocking reasoning ability can help us achieve high performance on tasks such as arithmetic, commonsense, and symbolic reasoning. The proposed method is motivated by two ideas: <br>
+
 1. Generating natural language rationales that lead to the final answer (prior work involves generating intermediate steps from scratch or finetuning a pretrained model)
+
 2. in-context few-shot learning via prompting where one can "prompt" a model with a few input-output exemplars demonstrating the task (has been successful with question-answering tasks)
 
 What is few-shot learning?
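
As a gloss on that closing question (an editorial aside, not part of the diffed file): few-shot prompting conditions the model on a handful of worked input-output exemplars before posing the new input, and chain-of-thought prompting makes each exemplar's answer spell out its intermediate reasoning. The canonical arithmetic exemplar from the Chain-of-Thought paper looks like:

Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls.
   Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is
   6 tennis balls. 5 + 6 = 11. The answer is 11.

Q: [new question goes here]
A: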
Lines changed: 16 additions & 0 deletions

@@ -0,0 +1,16 @@
+---
+layout: review
+title: Language Models are Unsupervised Multitask Learners
+---
+
+paper review Language Models are Unsupervised Multitask Learners
+================
+
+<p class="meta">13 Jan 2025</p>
+
+<h1>Overview</h1>
+This is a paper published by OpenAI in 2019 that examines the potential of large language models:
+it showcases how training large models on diverse datasets can lead to versatile systems capable of performing a
+wide range of tasks with minimal additional training. The paper can be found [here](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf).
+
+
index.html

Lines changed: 8 additions & 1 deletion

@@ -11,10 +11,17 @@ <h1>blog</h1>
 {% endfor %}
 </ul>
 
+<h1>paper reviews</h1>
+<ul class="posts">
+{% for review in site.reviews %}
+<li> &raquo; <a href="{{ review.url }}">{{ review.title }}</a></li>
+{% endfor %}
+</ul>
+
 <h1>"lecture" notes</h1>
 <ul class="posts">
 {% for note in site.notes %}
-<li><span>{{ note.date | date_to_string }}</span> &raquo; <a href="{{ note.url }}">{{ note.title }}</a></li>
+<li>&raquo; <a href="{{ note.url }}">{{ note.title }}</a></li>
 {% endfor %}
 </ul>
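
One side effect worth noting: with sort_by: date and order: ascending removed from _config.yml and the date span dropped from the notes list, both lists now render in Jekyll's default document order. If a deterministic order is wanted later, Liquid's sort filter could be applied first; a hypothetical variant of the reviews loop:

{% assign sorted_reviews = site.reviews | sort: "title" %}
{% for review in sorted_reviews %}
<li>&raquo; <a href="{{ review.url }}">{{ review.title }}</a></li>
{% endfor %}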
