Commit 7a85372

fixed a small mistake in the transformer translation example (#2030)
* fixed a small mistake in the transformer translation example
* Update typo in ipynb file
* Update typo in ipynb file
1 parent cab40db commit 7a85372

File tree: 3 files changed (+3, -3 lines)

- examples/nlp/ipynb/neural_machine_translation_with_transformer.ipynb
- examples/nlp/md/neural_machine_translation_with_transformer.md
- examples/nlp/neural_machine_translation_with_transformer.py

examples/nlp/ipynb/neural_machine_translation_with_transformer.ipynb

Lines changed: 1 addition & 1 deletion
@@ -274,7 +274,7 @@
 "As such, the training dataset will yield a tuple `(inputs, targets)`, where:\n",
 "\n",
 "- `inputs` is a dictionary with the keys `encoder_inputs` and `decoder_inputs`.\n",
-"`encoder_inputs` is the vectorized source sentence and `encoder_inputs` is the target sentence \"so far\",\n",
+"`encoder_inputs` is the vectorized source sentence and `decoder_inputs` is the target sentence \"so far\",\n",
 "that is to say, the words 0 to N used to predict word N+1 (and beyond) in the target sentence.\n",
 "- `target` is the target sentence offset by one step:\n",
 "it provides the next words in the target sentence -- what the model will try to predict."

examples/nlp/md/neural_machine_translation_with_transformer.md

Lines changed: 1 addition & 1 deletion
@@ -203,7 +203,7 @@ using the source sentence and the target words 0 to N.
 As such, the training dataset will yield a tuple `(inputs, targets)`, where:
 
 - `inputs` is a dictionary with the keys `encoder_inputs` and `decoder_inputs`.
-`encoder_inputs` is the vectorized source sentence and `encoder_inputs` is the target sentence "so far",
+`encoder_inputs` is the vectorized source sentence and `decoder_inputs` is the target sentence "so far",
 that is to say, the words 0 to N used to predict word N+1 (and beyond) in the target sentence.
 - `target` is the target sentence offset by one step:
 it provides the next words in the target sentence -- what the model will try to predict.

examples/nlp/neural_machine_translation_with_transformer.py

Lines changed: 1 addition & 1 deletion
@@ -172,7 +172,7 @@ def custom_standardization(input_string):
 As such, the training dataset will yield a tuple `(inputs, targets)`, where:
 
 - `inputs` is a dictionary with the keys `encoder_inputs` and `decoder_inputs`.
-`encoder_inputs` is the vectorized source sentence and `encoder_inputs` is the target sentence "so far",
+`encoder_inputs` is the vectorized source sentence and `decoder_inputs` is the target sentence "so far",
 that is to say, the words 0 to N used to predict word N+1 (and beyond) in the target sentence.
 - `target` is the target sentence offset by one step:
 it provides the next words in the target sentence -- what the model will try to predict.
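For context, the corrected text describes how the example's training pipeline packages each batch as `(inputs, targets)`, with the full vectorized source sentence as `encoder_inputs`, the target words 0 to N as `decoder_inputs`, and the target offset by one step as the label. The sketch below illustrates that formatting step; the layer names, toy corpora, and sequence length are illustrative assumptions, not the example's actual code.

```python
import tensorflow as tf
from tensorflow.keras.layers import TextVectorization

sequence_length = 8  # illustrative value, not the one used in the example

# Toy corpora standing in for the example's English/Spanish sentence pairs.
eng_samples = ["i like apples", "she reads books"]
spa_samples = ["[start] me gustan las manzanas [end]", "[start] ella lee libros [end]"]

# Hypothetical vectorization layers; the actual example builds and adapts its own.
eng_vectorization = TextVectorization(output_mode="int", output_sequence_length=sequence_length)
spa_vectorization = TextVectorization(output_mode="int", output_sequence_length=sequence_length + 1)
eng_vectorization.adapt(eng_samples)
spa_vectorization.adapt(spa_samples)


def format_dataset(eng, spa):
    eng = eng_vectorization(eng)
    spa = spa_vectorization(spa)
    return (
        {
            "encoder_inputs": eng,          # full vectorized source sentence
            "decoder_inputs": spa[:, :-1],  # target words 0..N ("so far")
        },
        spa[:, 1:],  # target offset by one step: what the model tries to predict
    )


dataset = (
    tf.data.Dataset.from_tensor_slices((eng_samples, spa_samples))
    .batch(2)
    .map(format_dataset)
)
inputs, targets = next(iter(dataset))
print(inputs["encoder_inputs"].shape, inputs["decoder_inputs"].shape, targets.shape)
```

Because the decoder inputs drop the last token and the targets drop the first, both end up with the same length, and position i of the targets is exactly the token the decoder should predict after seeing positions 0..i of the target sentence.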
