diff --git a/advanced_source/cpp_export.rst b/advanced_source/cpp_export.rst
index 5dedbdaaa6..45556a5320 100644
--- a/advanced_source/cpp_export.rst
+++ b/advanced_source/cpp_export.rst
@@ -203,7 +203,7 @@ minimal ``CMakeLists.txt`` to build it could look as simple as:

   add_executable(example-app example-app.cpp)
   target_link_libraries(example-app "${TORCH_LIBRARIES}")
-  set_property(TARGET example-app PROPERTY CXX_STANDARD 14)
+  set_property(TARGET example-app PROPERTY CXX_STANDARD 17)

 The last thing we need to build the example application is the LibTorch
 distribution. You can always grab the latest stable release from the `download
diff --git a/advanced_source/super_resolution_with_onnxruntime.py b/advanced_source/super_resolution_with_onnxruntime.py
index ecb0ba4fe4..264678ee17 100644
--- a/advanced_source/super_resolution_with_onnxruntime.py
+++ b/advanced_source/super_resolution_with_onnxruntime.py
@@ -9,7 +9,7 @@
 * ``torch.onnx.export`` is based on TorchScript backend and has been available since PyTorch 1.2.0.

 In this tutorial, we describe how to convert a model defined
-in PyTorch into the ONNX format using the TorchScript ``torch.onnx.export` ONNX exporter.
+in PyTorch into the ONNX format using the TorchScript ``torch.onnx.export`` ONNX exporter.

 The exported model will be executed with ONNX Runtime.
 ONNX Runtime is a performance-focused engine for ONNX models,
diff --git a/intermediate_source/inductor_debug_cpu.py b/intermediate_source/inductor_debug_cpu.py
index 94dee3ba15..370180d968 100644
--- a/intermediate_source/inductor_debug_cpu.py
+++ b/intermediate_source/inductor_debug_cpu.py
@@ -87,9 +87,9 @@ def neg1(x):
 # +-----------------------------+----------------------------------------------------------------+
 # | ``fx_graph_transformed.py`` | Transformed FX graph, after pattern match                      |
 # +-----------------------------+----------------------------------------------------------------+
-# | ``ir_post_fusion.txt``      | Inductor IR before fusion                                      |
+# | ``ir_pre_fusion.txt``       | Inductor IR before fusion                                      |
 # +-----------------------------+----------------------------------------------------------------+
-# | ``ir_pre_fusion.txt``       | Inductor IR after fusion                                       |
+# | ``ir_post_fusion.txt``      | Inductor IR after fusion                                       |
 # +-----------------------------+----------------------------------------------------------------+
 # | ``output_code.py``          | Generated Python code for graph, with C++/Triton kernels       |
 # +-----------------------------+----------------------------------------------------------------+
diff --git a/recipes_source/distributed_device_mesh.rst b/recipes_source/distributed_device_mesh.rst
index dbc4a81043..d41d6c1df1 100644
--- a/recipes_source/distributed_device_mesh.rst
+++ b/recipes_source/distributed_device_mesh.rst
@@ -156,4 +156,4 @@ they can be used to describe the layout of devices across the cluster.
 For more information, please see the following:

 - `2D parallel combining Tensor/Sequance Parallel with FSDP `__
-- `Composable PyTorch Distributed with PT2 `__
+- `Composable PyTorch Distributed with PT2 `__
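
For context, after the first hunk is applied, the tutorial's minimal ``CMakeLists.txt`` would read roughly as follows. Only the three lines visible in the diff are taken from it verbatim; the ``cmake_minimum_required``, ``project``, and ``find_package(Torch REQUIRED)`` lines are the standard LibTorch CMake pattern and are assumed here, not quoted from the patched file:

```cmake
# Minimal sketch of a LibTorch consumer project (assumed boilerplate around
# the three lines shown in the diff above).
cmake_minimum_required(VERSION 3.18 FATAL_ERROR)
project(example-app)

# Locates the LibTorch distribution via CMAKE_PREFIX_PATH and defines
# TORCH_LIBRARIES for linking.
find_package(Torch REQUIRED)

add_executable(example-app example-app.cpp)
target_link_libraries(example-app "${TORCH_LIBRARIES}")
# The fix in the diff: recent LibTorch headers require C++17 rather than C++14.
set_property(TARGET example-app PROPERTY CXX_STANDARD 17)
```

A project configured with ``CXX_STANDARD 14`` against a C++17-requiring LibTorch fails at compile time in the LibTorch headers, which is why the diff bumps the property rather than any source file.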