recipes_source/torch_export_challenges_solutions.rst
+9 −15
@@ -37,15 +37,15 @@ designed for. You can read details about the differences between the various PyT
 
 You can identify graph breaks in your program by using the following command:
 
-.. code:: console
+.. code:: sh
 
    TORCH_LOGS="graph_breaks" python <file_name>.py
 
 You will need to modify your program to get rid of graph breaks. Once resolved, you are ready to export the model.
 PyTorch runs `nightly benchmarks <https://hud.pytorch.org/benchmark/compilers>`__ for `torch.compile` on popular HuggingFace and TIMM models.
 Most of these models have no graph breaks.
 
-The models in this recipe have no graph breaks, but fail with `torch.export`
+The models in this recipe have no graph breaks, but fail with `torch.export`.
 
 Video Classification
 --------------------
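
Not part of the diff above, but a minimal, self-contained sketch of the workflow the changed lines describe: enabling graph-break logging in-process (which should be equivalent to setting ``TORCH_LOGS="graph_breaks"``) and then calling ``torch.export.export`` once the program traces without breaks. The ``Toy`` module is a placeholder, not one of the recipe's models.

.. code:: python

   import torch

   # In-process equivalent of TORCH_LOGS="graph_breaks": log each graph break
   # that torch.compile encounters while tracing.
   torch._logging.set_logs(graph_breaks=True)

   class Toy(torch.nn.Module):
       def forward(self, x):
           return torch.relu(x) * 2

   compiled = torch.compile(Toy())
   compiled(torch.randn(4, 8))  # any graph breaks would be reported here

   # Once the model traces without breaks, torch.export produces an ExportedProgram.
   ep = torch.export.export(Toy(), (torch.randn(4, 8),))
   print(ep)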
@@ -88,7 +88,7 @@ The code below exports MViT by tracing with ``batch_size=2`` and then checks if
 Error: Static batch size
 ~~~~~~~~~~~~~~~~~~~~~~~~
 
-.. code:: console
+.. code-block:: sh
 
    raise RuntimeError(
    RuntimeError: Expected input at *args[0].shape[0] to be equal to 2, but got 4
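
The usual fix for this error is to mark the batch dimension as dynamic with ``torch.export.Dim``. Below is a rough, self-contained sketch using a toy module rather than MViT; the shapes and names are illustrative only.

.. code:: python

   import torch
   from torch.export import Dim, export

   class ToyClassifier(torch.nn.Module):
       def forward(self, x):
           return x.mean(dim=(2, 3))

   example = torch.randn(2, 3, 224, 224)   # traced with batch_size=2
   batch = Dim("batch")                    # symbolic batch dimension
   ep = export(ToyClassifier(), (example,), dynamic_shapes={"x": {0: batch}})

   # A different batch size no longer triggers the RuntimeError shown above.
   print(ep.module()(torch.randn(4, 3, 224, 224)).shape)  # torch.Size([4, 3])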
@@ -139,9 +139,6 @@ for ``torch.export`` can be found in the export tutorial. The code shown below d
     tb.print_exc()
 
 
-
-
-
 Automatic Speech Recognition
 ---------------
 
@@ -180,7 +177,7 @@ Error: strict tracing with TorchDynamo
 
 By default ``torch.export`` traces your code using `TorchDynamo <https://pytorch.org/docs/stable/torch.compiler_dynamo_overview.html>`__, a byte-code analysis engine, which symbolically analyzes your code and builds a graph.
 This analysis provides a stronger guarantee about safety but not all Python code is supported. When we export the ``whisper-tiny`` model using the
-default strict mode, it typically returns an error in Dynamo due to an unsupported feature. To understand why this errors in Dynamo, you can refer to this `GitHub issue <https://github.com/pytorch/pytorch/issues/144906>`__
+default strict mode, it typically returns an error in Dynamo due to an unsupported feature. To understand why this errors in Dynamo, you can refer to this `GitHub issue <https://github.com/pytorch/pytorch/issues/144906>`__.
 
 Solution
 ~~~~~~~~
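
For reference (and not part of the diff), the non-strict escape hatch named in the next hunk's context line is just an extra keyword argument. A minimal sketch with a toy module rather than ``whisper-tiny``:

.. code:: python

   import torch

   class Toy(torch.nn.Module):
       def forward(self, x):
           return torch.nn.functional.relu(x) + 1

   # strict=False traces with the Python interpreter (non-strict mode) instead of
   # TorchDynamo, which avoids Dynamo's unsupported-feature errors.
   ep = torch.export.export(Toy(), (torch.randn(3, 4),), strict=False)
   print(ep)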
@@ -207,14 +204,12 @@ a graph. By using ``strict=False``, we are able to export the program.
 **Image Captioning** is the task of defining the contents of an image in words. In the context of gaming, Image Captioning can be used to enhance the
 gameplay experience by dynamically generating text description of the various game objects in the scene, thereby providing the gamer with additional
-details. `BLIP <https://arxiv.org/pdf/2201.12086>`__ is a popular model for Image Captioning `released by SalesForce Research <https://github.com/salesforce/BLIP>`__. The code below tries to export BLIP with ``batch_size=1``
+details. `BLIP <https://arxiv.org/pdf/2201.12086>`__ is a popular model for Image Captioning `released by SalesForce Research <https://github.com/salesforce/BLIP>`__. The code below tries to export BLIP with ``batch_size=1``.
 
 
 .. code:: python
@@ -263,9 +258,8 @@ Clone the `tensor <https://github.com/salesforce/BLIP/blob/main/models/blip.py#L
     text.input_ids = text.input_ids.clone() # clone the tensor
     text.input_ids[:,0] = self.tokenizer.bos_token_id
 
-Note: This constraint has been relaxed in PyTorch 2.7 nightlies. This should work out-of-the-box in PyTorch 2.7
-
-
+.. note::
+   This constraint has been relaxed in PyTorch 2.7 nightlies. This should work out-of-the-box in PyTorch 2.7.
 
 Promptable Image Segmentation
 -----------------------------
@@ -333,5 +327,5 @@ Conclusion
 
 In this tutorial, we have learned how to use ``torch.export`` to export models for popular use cases by addressing challenges through correct configuration and simple code modifications.
 Once you are able to export a model, you can lower the ``ExportedProgram`` into your hardware using `AOTInductor <https://pytorch.org/docs/stable/torch.compiler_aot_inductor.html>`__ in case of servers and `ExecuTorch <https://pytorch.org/executorch/stable/index.html>`__ in case of edge devices.
-To learn more about ``AOTInductor`` (AOTI), please refer to the `AOTI tutorial <https://pytorch.org/tutorials/recipes/torch_export_aoti_python.html>`__
-To learn more about ``ExecuTorch`` , please refer to the `ExecuTorch tutorial <https://pytorch.org/executorch/stable/tutorials/export-to-executorch-tutorial.html>`__
+To learn more about ``AOTInductor`` (AOTI), please refer to the `AOTI tutorial <https://pytorch.org/tutorials/recipes/torch_export_aoti_python.html>`__.
+To learn more about ``ExecuTorch``, please refer to the `ExecuTorch tutorial <https://pytorch.org/executorch/stable/tutorials/export-to-executorch-tutorial.html>`__.
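
As a hedged illustration of the server-side lowering path mentioned in the conclusion, the sketch below compiles and packages a toy ``ExportedProgram`` with AOTInductor. It assumes PyTorch 2.6+, where ``aoti_compile_and_package`` accepts the ``ExportedProgram`` directly; see the linked AOTI tutorial for the authoritative version.

.. code:: python

   import os
   import torch

   class Toy(torch.nn.Module):
       def forward(self, x):
           return torch.sin(x) + x

   ep = torch.export.export(Toy(), (torch.randn(8, 16),))

   # Compile the ExportedProgram ahead of time and package it as a .pt2 artifact.
   pt2_path = torch._inductor.aoti_compile_and_package(
       ep, package_path=os.path.join(os.getcwd(), "toy.pt2")
   )

   # The packaged model can be loaded and run without recompilation.
   runner = torch._inductor.aoti_load_package(pt2_path)
   print(runner(torch.randn(8, 16)).shape)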