<!-- This is an auto-generated comment: release notes by coderabbit.ai
-->
## Summary by CodeRabbit
- **Documentation**
  - Renamed "onediff Enterprise Edition" to "onediff Enterprise Solution" for clarity on offerings.
  - Enhanced details on performance gains and support options within the "onediff Enterprise Solution" section.
  - Elevated the "Distributed Run" section to a prominent header, emphasizing distributed inference capabilities.
  - Adjusted content in the "Distributed Run" section to highlight the use of onediff's compiler in distributed inference engines.
<!-- end of auto-generated comment: release notes by coderabbit.ai -->
---------
Co-authored-by: Shenghang Tsai <[email protected]>
@@ -256,17 +256,17 @@ onediff supports the acceleration for SOTA models.
 - Compile and save the compiled result offline, then load it online for serving
   [Save and Load the compiled graph](https://github.com/siliconflow/onediff/blob/main/onediff_diffusers_extensions/examples/text_to_image_sdxl_save_load.py)
 - Compile on one device (such as device 0), then use the compiled result on other devices (such as devices 1~7). [Change device of the compiled graph to do multi-process serving](https://github.com/siliconflow/onediff/blob/main/onediff_diffusers_extensions/examples/text_to_image_sdxl_mp_load.py)
-#### onediff Enterprise Edition
-If you need Enterprise-level Support for your system or business, you can email us at [email protected], or contact us through the website: https://siliconflow.cn/pricing
-|  | onediff Enterprise Edition | onediff Community Edition |
-| --- | --- | --- |
-| More extreme and dedicated optimization (usually another 20~100% performance gain) for the most used models | Yes |  |
-| Technical support for deployment | High-priority support | Community |
+#### Distributed Run
+If you want to do distributed inference, you can use onediff's compiler to do single-device acceleration in a distributed inference engine such as [xDiT](https://github.com/xdit-project/xDiT)
-### Distributed Run
-If you want to do distributed inference, you can use onediff's compiler to do single-device acceleration in a distributed inference engine such as:
-[xDiT](https://github.com/xdit-project/xDiT)
+### OneDiff Enterprise Solution
+If you need Enterprise-level Support for your system or business, you can email us at [email protected], or contact us through the website: https://siliconflow.cn/pricing
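The "compile offline, load online" workflow mentioned in the diff can be sketched as below. This is a hypothetical sketch only: the helper names `compile_pipe`, `save_pipe`, and `load_pipe` are assumptions drawn from the `onediffx` extension, and the linked example scripts remain the authoritative reference for the real API.

```python
# Hypothetical sketch of the "compile offline, load online" workflow the
# README describes. The onediffx helper names (compile_pipe / save_pipe /
# load_pipe) and their signatures are assumptions; see the linked example
# scripts for the actual API.
try:
    from diffusers import StableDiffusionXLPipeline
    from onediffx import compile_pipe, load_pipe, save_pipe  # assumed API
    HAVE_ONEDIFF = True
except ImportError:  # onediff/diffusers not installed: definitions still load
    HAVE_ONEDIFF = False


def compile_and_save(model_id: str, cache_dir: str) -> None:
    """Offline step: compile once, warm up, and persist the compiled graph."""
    pipe = StableDiffusionXLPipeline.from_pretrained(model_id)
    pipe = compile_pipe(pipe)
    pipe("a warm-up prompt")        # first call triggers compilation
    save_pipe(pipe, dir=cache_dir)  # persist for later serving processes


def load_for_serving(model_id: str, cache_dir: str):
    """Online step: reload the saved graph instead of recompiling."""
    pipe = StableDiffusionXLPipeline.from_pretrained(model_id)
    pipe = compile_pipe(pipe)
    load_pipe(pipe, dir=cache_dir)  # reuse the offline compilation result
    return pipe
```

In a multi-process serving setup, each worker could call `load_for_serving(...)` at startup; the multi-process example linked in the diff additionally shows moving the compiled graph to a different device per process.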