Commit 71c468f

Formatting cleanup
1 parent 8655733 commit 71c468f

1 file changed: +9 -15 lines changed

recipes_source/torch_export_challenges_solutions.rst

@@ -37,15 +37,15 @@ designed for. You can read details about the differences between the various PyT
 
 You can identify graph breaks in your program by using the following command:
 
-.. code:: console
+.. code:: sh
 
    TORCH_LOGS="graph_breaks" python <file_name>.py
 
 You will need to modify your program to get rid of graph breaks. Once resolved, you are ready to export the model.
 PyTorch runs `nightly benchmarks <https://hud.pytorch.org/benchmark/compilers>`__ for `torch.compile` on popular HuggingFace and TIMM models.
 Most of these models have no graph breaks.
 
-The models in this recipe have no graph breaks, but fail with `torch.export`
+The models in this recipe have no graph breaks, but fail with `torch.export`.
 
 Video Classification
 --------------------
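
As an illustration of the command in the hunk above (not part of this commit), here is a minimal, hypothetical script whose compiled function contains a graph break; running it under ``TORCH_LOGS="graph_breaks"`` shows where TorchDynamo splits the graph. The file name, function, and inputs are made up for the sketch.

.. code:: python

   # graph_break_demo.py (hypothetical toy script); run as:
   #   TORCH_LOGS="graph_breaks" python graph_break_demo.py
   import torch

   def fn(x):
       y = x.sin()
       print("intermediate computed")  # Python side effect: typically forces a graph break
       return y.cos()

   compiled_fn = torch.compile(fn)
   compiled_fn(torch.randn(4))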
@@ -88,7 +88,7 @@ The code below exports MViT by tracing with ``batch_size=2`` and then checks if
 Error: Static batch size
 ~~~~~~~~~~~~~~~~~~~~~~~~
 
-.. code:: console
+.. code-block:: sh
 
    raise RuntimeError(
    RuntimeError: Expected input at *args[0].shape[0] to be equal to 2, but got 4
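
The error above comes from tracing with a fixed example batch. Not part of this diff, but as a rough sketch of the kind of fix involved (a dynamic batch dimension), here is a toy module rather than the recipe's MViT code:

.. code:: python

   # Toy sketch only; the recipe's actual MViT export differs.
   import torch
   from torch.export import Dim, export

   class ToyModel(torch.nn.Module):
       def forward(self, x):
           return x.relu()

   example_input = torch.randn(2, 16)            # traced with batch_size=2
   batch = Dim("batch")                          # mark dim 0 as dynamic
   ep = export(ToyModel(), (example_input,), dynamic_shapes={"x": {0: batch}})
   print(ep.module()(torch.randn(4, 16)).shape)  # batch_size=4 now also works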
@@ -139,9 +139,6 @@ for ``torch.export`` can be found in the export tutorial. The code shown below d
       tb.print_exc()
 
 
-
-
-
 Automatic Speech Recognition
 ---------------
 
@@ -180,7 +177,7 @@ Error: strict tracing with TorchDynamo
 
 By default ``torch.export`` traces your code using `TorchDynamo <https://pytorch.org/docs/stable/torch.compiler_dynamo_overview.html>`__, a byte-code analysis engine, which symbolically analyzes your code and builds a graph.
 This analysis provides a stronger guarantee about safety but not all Python code is supported. When we export the ``whisper-tiny`` model using the
-default strict mode, it typically returns an error in Dynamo due to an unsupported feature. To understand why this errors in Dynamo, you can refer to this `GitHub issue <https://github.com/pytorch/pytorch/issues/144906>`__
+default strict mode, it typically returns an error in Dynamo due to an unsupported feature. To understand why this errors in Dynamo, you can refer to this `GitHub issue <https://github.com/pytorch/pytorch/issues/144906>`__.
 
 Solution
 ~~~~~~~~
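
The next hunk shows the recipe's actual ``strict=False`` call for ``whisper-tiny``; as a standalone, hypothetical illustration of the same flag on a toy module:

.. code:: python

   # Toy illustration only; the recipe exports whisper-tiny, not this module.
   import torch

   class ToyModel(torch.nn.Module):
       def forward(self, x):
           return torch.nn.functional.relu(x) + 1

   # strict=False skips TorchDynamo's byte-code analysis and traces the module directly
   ep = torch.export.export(ToyModel(), (torch.randn(2, 8),), strict=False)
   print(ep)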
@@ -207,14 +204,12 @@ a graph. By using ``strict=False``, we are able to export the program.
 
    exported_program: torch.export.ExportedProgram= torch.export.export(model, args=(input_features, attention_mask, decoder_input_ids,), strict=False)
 
-
-
 Image Captioning
 ----------------
 
 **Image Captioning** is the task of defining the contents of an image in words. In the context of gaming, Image Captioning can be used to enhance the
 gameplay experience by dynamically generating text description of the various game objects in the scene, thereby providing the gamer with additional
-details. `BLIP <https://arxiv.org/pdf/2201.12086>`__ is a popular model for Image Captioning `released by SalesForce Research <https://github.com/salesforce/BLIP>`__. The code below tries to export BLIP with ``batch_size=1``
+details. `BLIP <https://arxiv.org/pdf/2201.12086>`__ is a popular model for Image Captioning `released by SalesForce Research <https://github.com/salesforce/BLIP>`__. The code below tries to export BLIP with ``batch_size=1``.
 
 
 .. code:: python
@@ -263,9 +258,8 @@ Clone the `tensor <https://github.com/salesforce/BLIP/blob/main/models/blip.py#L
       text.input_ids = text.input_ids.clone() # clone the tensor
       text.input_ids[:,0] = self.tokenizer.bos_token_id
 
-Note: This constraint has been relaxed in PyTorch 2.7 nightlies. This should work out-of-the-box in PyTorch 2.7
-
-
+.. note::
+   This constraint has been relaxed in PyTorch 2.7 nightlies. This should work out-of-the-box in PyTorch 2.7
 
 Promptable Image Segmentation
 -----------------------------
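
The fix in the hunk above follows a general pattern: clone a tensor that export treats as an input before mutating it in place. A self-contained, hypothetical sketch of that pattern (not the BLIP code itself):

.. code:: python

   # Hypothetical standalone example of the clone-before-mutate pattern.
   import torch

   class Tagger(torch.nn.Module):
       def forward(self, ids):
           ids = ids.clone()   # mutate a copy, not the traced input
           ids[:, 0] = 101     # e.g. overwrite position 0 with a BOS token id
           return ids

   ep = torch.export.export(Tagger(), (torch.zeros(1, 8, dtype=torch.long),))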
@@ -333,5 +327,5 @@ Conclusion
 
 In this tutorial, we have learned how to use ``torch.export`` to export models for popular use cases by addressing challenges through correct configuration and simple code modifications.
 Once you are able to export a model, you can lower the ``ExportedProgram`` into your hardware using `AOTInductor <https://pytorch.org/docs/stable/torch.compiler_aot_inductor.html>`__ in case of servers and `ExecuTorch <https://pytorch.org/executorch/stable/index.html>`__ in case of edge device.
-To learn more about ``AOTInductor`` (AOTI), please refer to the `AOTI tutorial <https://pytorch.org/tutorials/recipes/torch_export_aoti_python.html>`__
-To learn more about ``ExecuTorch`` , please refer to the `ExecuTorch tutorial <https://pytorch.org/executorch/stable/tutorials/export-to-executorch-tutorial.html>`__
+To learn more about ``AOTInductor`` (AOTI), please refer to the `AOTI tutorial <https://pytorch.org/tutorials/recipes/torch_export_aoti_python.html>`__.
+To learn more about ``ExecuTorch`` , please refer to the `ExecuTorch tutorial <https://pytorch.org/executorch/stable/tutorials/export-to-executorch-tutorial.html>`__.
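
As a rough sketch of the server-side lowering mentioned in the conclusion (not part of this commit; assumes PyTorch 2.6 or later and uses a toy model rather than any model from the recipe):

.. code:: python

   # Toy sketch: export, AOTI-compile to a .pt2 package, then load and run it.
   import torch
   import torch._inductor

   class ToyModel(torch.nn.Module):
       def forward(self, x):
           return x.relu()

   ep = torch.export.export(ToyModel(), (torch.randn(2, 4),))
   package_path = torch._inductor.aoti_compile_and_package(ep)  # writes a .pt2 package
   compiled = torch._inductor.aoti_load_package(package_path)   # load the compiled artifact
   out = compiled(torch.randn(2, 4))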
