Commit 90fd2d3

Update the e2e flow tutorial to fix errors of generate (#2251)

Authored-by: iseeyuan <[email protected]>
Co-authored-by: RdoubleA <[email protected]>

1 parent: b8790ce

1 file changed: docs/source/tutorials/e2e_flow.rst (+8, -6)
@@ -275,18 +275,20 @@ Let's first copy over the config to our local working directory so we can make c
 
     $ tune cp generation ./custom_generation_config.yaml
     Copied file to custom_generation_config.yaml
+    $ mkdir /tmp/torchtune/llama3_2_3B/lora_single_device/out
 
 Let's modify ``custom_generation_config.yaml`` to include the following changes. Again, you only need
 to replace two fields: ``output_dir`` and ``checkpoint_files``
 
 .. code-block:: yaml
 
-    output_dir: /tmp/torchtune/llama3_2_3B/lora_single_device/epoch_0
+    checkpoint_dir: /tmp/torchtune/llama3_2_3B/lora_single_device/epoch_0
+    output_dir: /tmp/torchtune/llama3_2_3B/lora_single_device/out
 
     # Tokenizer
     tokenizer:
       _component_: torchtune.models.llama3.llama3_tokenizer
-      path: ${output_dir}/original/tokenizer.model
+      path: ${checkpoint_dir}/original/tokenizer.model
       prompt_template: null
 
     model:
@@ -295,7 +297,7 @@ Let's modify ``custom_generation_config.yaml`` to include the following changes.
 
     checkpointer:
       _component_: torchtune.training.FullModelHFCheckpointer
-      checkpoint_dir: ${output_dir}
+      checkpoint_dir: ${checkpoint_dir}
       checkpoint_files: [
         ft-model-00001-of-00002.safetensors,
         ft-model-00002-of-00002.safetensors,
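The hunk above leans on config variable interpolation: `${checkpoint_dir}` inside the `checkpointer` and `tokenizer` sections resolves to the new top-level `checkpoint_dir` key, so the checkpoint path only needs to be edited in one place. torchtune configs get this behavior from OmegaConf; the sketch below is a deliberately naive stdlib-only illustration of the substitution, not torchtune's or OmegaConf's actual resolver.

```python
import re

def resolve(cfg: dict, value: str) -> str:
    # Replace each ${key} occurrence with the top-level value of that key.
    # Illustrative only: no nesting, escaping, or error handling.
    return re.sub(r"\$\{(\w+)\}", lambda m: str(cfg[m.group(1)]), value)

cfg = {"checkpoint_dir": "/tmp/torchtune/llama3_2_3B/lora_single_device/epoch_0"}
print(resolve(cfg, "${checkpoint_dir}/original/tokenizer.model"))
# /tmp/torchtune/llama3_2_3B/lora_single_device/epoch_0/original/tokenizer.model
```

This is why the diff renames the old `output_dir` reference: once the tokenizer path points at `${checkpoint_dir}`, `output_dir` is free to name a separate directory for generation outputs.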
@@ -312,8 +314,8 @@ Let's modify ``custom_generation_config.yaml`` to include the following changes.
 
     # Generation arguments; defaults taken from gpt-fast
     prompt:
-    system: null
-    user: "Tell me a joke. "
+      system: null
+      user: "Tell me a joke. "
     max_new_tokens: 300
     temperature: 0.6 # 0.8 and 0.6 are popular values to try
     top_k: 300
@@ -330,7 +332,7 @@ these parameters.
 
 .. code-block:: text
 
-    $ tune run generate --config ./custom_generation_config.yaml prompt="tell me a joke. "
+    $ tune run generate --config ./custom_generation_config.yaml prompt.user="Tell me a joke. "
     Tell me a joke. Here's a joke for you:
 
     What do you call a fake noodle?
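Putting the added lines together, the affected portion of the tutorial's ``custom_generation_config.yaml`` after this commit should read roughly as follows. This is reconstructed from the ``+`` lines above; surrounding fields (``model``, sampling arguments, etc.) are omitted, and exact indentation in the real file may differ.

```yaml
checkpoint_dir: /tmp/torchtune/llama3_2_3B/lora_single_device/epoch_0
output_dir: /tmp/torchtune/llama3_2_3B/lora_single_device/out

tokenizer:
  _component_: torchtune.models.llama3.llama3_tokenizer
  path: ${checkpoint_dir}/original/tokenizer.model
  prompt_template: null

checkpointer:
  _component_: torchtune.training.FullModelHFCheckpointer
  checkpoint_dir: ${checkpoint_dir}
  checkpoint_files: [
    ft-model-00001-of-00002.safetensors,
    ft-model-00002-of-00002.safetensors,
  ]

prompt:
  system: null
  user: "Tell me a joke. "
```

With ``prompt`` now a structured field, the command-line override targets the nested key (``prompt.user="..."``) rather than ``prompt="..."``, which is exactly the fix in the last hunk.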

0 commit comments

Comments
 (0)