Commit 9c7c642 (parent 1d8504c)

Replace deprecated stabilityai/stable-diffusion-2-* models with ones maintained by sd2-community.
Add disclaimers about the unofficial mirror used instead.

Signed-off-by: Artur Kloniecki <arturx.kloniecki@intel.com>
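Every rename in this commit follows one pattern: the `stabilityai/` org prefix becomes `sd2-community/` for each `stable-diffusion-2-*` checkpoint. A minimal sketch of that mapping (the helper name is hypothetical, not part of the commit):

```python
# Hypothetical helper illustrating the model-ID migration performed in this
# commit: stabilityai/stable-diffusion-2-* -> sd2-community/stable-diffusion-2-*.

DEPRECATED_ORG = "stabilityai/"
MIRROR_ORG = "sd2-community/"

def migrate_model_id(model_id: str) -> str:
    """Rewrite a withdrawn Stability AI SD2 checkpoint ID to its sd2-community mirror."""
    if model_id.startswith(DEPRECATED_ORG + "stable-diffusion-2"):
        return MIRROR_ORG + model_id[len(DEPRECATED_ORG):]
    return model_id  # IDs outside the SD2 family are left untouched
```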

File tree: 6 files changed, +37 −15 lines
docs/source/tutorials/stable_diffusion.mdx (6 additions, 4 deletions)

@@ -67,13 +67,15 @@ Check out the [example](/examples/stable-diffusion) provided in the official Git
 
 ## Stable Diffusion 2
 
+DISCLAIMER: The Stable Diffusion 2 model family has been discontinued and withdrawn by Stability AI. The following instructions use mirrored models maintained by sd2-community, which is not affiliated in any way with Stability AI. Follow these instructions at your own risk.
+
 [Stable Diffusion 2](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_2) can be used with the exact same classes.
 Here is an example:
 
 ```python
 from optimum.habana.diffusers import GaudiDDIMScheduler, GaudiStableDiffusionPipeline
 
-model_name = "stabilityai/stable-diffusion-2-1"
+model_name = "sd2-community/stable-diffusion-2-1"
 
 scheduler = GaudiDDIMScheduler.from_pretrained(model_name, subfolder="scheduler")

@@ -96,10 +98,10 @@ outputs = pipeline(
 
 <Tip>
 
 There are two different checkpoints for Stable Diffusion 2:
 
-- use [stabilityai/stable-diffusion-2-1](https://huggingface.co/stabilityai/stable-diffusion-2-1) for generating 768x768 images
-- use [stabilityai/stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base) for generating 512x512 images
+- use [sd2-community/stable-diffusion-2-1](https://huggingface.co/sd2-community/stable-diffusion-2-1) for generating 768x768 images
+- use [sd2-community/stable-diffusion-2-1-base](https://huggingface.co/sd2-community/stable-diffusion-2-1-base) for generating 512x512 images
 
 </Tip>
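The tip in the diff above distinguishes the two SD2 checkpoints by output resolution. A small sketch of that selection logic (the helper name is hypothetical; the checkpoint IDs are the mirrors introduced by this commit):

```python
# Hypothetical helper encoding the documented rule: pick the sd2-community
# mirror checkpoint that matches the target image resolution.

def sd2_checkpoint_for_resolution(resolution: int) -> str:
    if resolution == 768:
        return "sd2-community/stable-diffusion-2-1"       # checkpoint for 768x768 output
    if resolution == 512:
        return "sd2-community/stable-diffusion-2-1-base"  # checkpoint for 512x512 output
    raise ValueError(f"No SD2 checkpoint documented for {resolution}x{resolution} images")
```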
examples/stable-diffusion/depth_to_image_generation.py (2 additions, 1 deletion)

@@ -52,7 +52,8 @@ def main():
     parser.add_argument(
         "--model_name_or_path",
-        default="stabilityai/stable-diffusion-2-depth",
+        # Stability AI has removed the stable-diffusion-2 models. This uses the unofficial mirror maintained by sd2-community.
+        default="sd2-community/stable-diffusion-2-depth",
         type=str,
         help="Path to pre-trained model",
     )
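The change above only swaps an argparse default. A self-contained sketch of the same pattern, runnable without the rest of the script (a standalone parser, not the example's full CLI):

```python
import argparse

# Minimal stand-in for the example's CLI: the default now points at the
# sd2-community mirror because Stability AI withdrew the original checkpoint.
parser = argparse.ArgumentParser()
parser.add_argument(
    "--model_name_or_path",
    # Stability AI has removed the stable-diffusion-2 models; this default
    # uses the unofficial mirror maintained by sd2-community.
    default="sd2-community/stable-diffusion-2-depth",
    type=str,
    help="Path to pre-trained model",
)
args = parser.parse_args([])  # no CLI args given: fall back to the default
print(args.model_name_or_path)
```

Users who still have a local copy of the original weights can override the default with `--model_name_or_path <local-path>`.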

examples/stable-diffusion/training/README.md (21 additions, 5 deletions)

@@ -91,11 +91,15 @@ To download the example conditioning images locally, run:
 python download_train_datasets.py
 ```
 
+> [!WARNING]
+> The Stable Diffusion 2 model family has been discontinued and withdrawn by Stability AI. The following instructions use mirrored models maintained by sd2-community, which is not affiliated in any way with Stability AI.
+> Follow these instructions at your own risk.
+
 Then proceed to training with command:
 
 ```bash
 PT_HPU_LAZY_MODE=1 python train_controlnet.py \
-  --pretrained_model_name_or_path=stabilityai/stable-diffusion-2-1 \
+  --pretrained_model_name_or_path=sd2-community/stable-diffusion-2-1 \
   --output_dir=/tmp/stable_diffusion2_1 \
   --dataset_name=fusing/fill50k \
   --resolution=512 \

@@ -116,11 +120,15 @@ with `python ../../gaudi_spawn.py --world_size <num-HPUs> train_controlnet.py`.
 ### Inference
 
+> [!WARNING]
+> The Stable Diffusion 2 model family has been discontinued and withdrawn by Stability AI. The following instructions use mirrored models maintained by sd2-community, which is not affiliated in any way with Stability AI.
+> Follow these instructions at your own risk.
+
 After training completes, you can use `text_to_image_generation.py` sample to run inference with the fine-tuned ControlNet model:
 
 ```bash
 PT_HPU_LAZY_MODE=1 python ../text_to_image_generation.py \
-  --model_name_or_path stabilityai/stable-diffusion-2-1 \
+  --model_name_or_path sd2-community/stable-diffusion-2-1 \
   --controlnet_model_name_or_path /tmp/stable_diffusion2_1 \
   --prompts "pale golden rod circle with old lace background" \
   --control_image "./cnet/conditioning_image_1.png" \

@@ -224,9 +232,13 @@ python download_train_datasets.py
 
 To launch the multi-card Stable Diffusion training, use:
 
+> [!WARNING]
+> The Stable Diffusion 2 model family has been discontinued and withdrawn by Stability AI. The following instructions use mirrored models maintained by sd2-community, which is not affiliated in any way with Stability AI.
+> Follow these instructions at your own risk.
+
 ```bash
 PT_HPU_LAZY_MODE=1 python ../../gaudi_spawn.py --world_size 8 --use_mpi train_dreambooth.py \
-  --pretrained_model_name_or_path="stabilityai/stable-diffusion-2-1" \
+  --pretrained_model_name_or_path="sd2-community/stable-diffusion-2-1" \
   --instance_data_dir="dog" \
   --output_dir="dog_sd" \
   --class_data_dir="path-to-class-images" \

@@ -263,9 +275,13 @@ UNet or text encoder.
 
 To run the multi-card training, use:
 
+> [!WARNING]
+> The Stable Diffusion 2 model family has been discontinued and withdrawn by Stability AI. The following instructions use mirrored models maintained by sd2-community, which is not affiliated in any way with Stability AI.
+> Follow these instructions at your own risk.
+
 ```bash
 PT_HPU_LAZY_MODE=1 python ../../gaudi_spawn.py --world_size 8 --use_mpi train_dreambooth.py \
-  --pretrained_model_name_or_path="stabilityai/stable-diffusion-2-1" \
+  --pretrained_model_name_or_path="sd2-community/stable-diffusion-2-1" \
   --instance_data_dir="dog" \
   --output_dir="dog_sd" \
   --class_data_dir="path-to-class-images" \

@@ -310,7 +326,7 @@ After training completes, you can use `text_to_image_generation.py` sample for i
 
 ```bash
 PT_HPU_LAZY_MODE=1 python ../text_to_image_generation.py \
-  --model_name_or_path stabilityai/stable-diffusion-2-1 \
+  --model_name_or_path sd2-community/stable-diffusion-2-1 \
   --unet_adapter_name_or_path dog_sd/unet \
   --prompts "a sks dog" \
   --num_images_per_prompt 5 \
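The launch commands in the README diff above share a fixed prefix (`gaudi_spawn.py` plus MPI flags) and vary only in the trainer arguments. A sketch that assembles such a command programmatically (the helper and its structure are illustrative, not part of the repo; `PT_HPU_LAZY_MODE=1` would still be set in the environment):

```python
# Illustrative assembly of the multi-card DreamBooth launch command shown in
# the README above. The helper itself is hypothetical; only the flags come
# from the documented invocation.

def build_dreambooth_cmd(world_size: int = 8,
                         model: str = "sd2-community/stable-diffusion-2-1") -> list[str]:
    return [
        "python", "../../gaudi_spawn.py",
        "--world_size", str(world_size),
        "--use_mpi",
        "train_dreambooth.py",
        f"--pretrained_model_name_or_path={model}",
        "--instance_data_dir=dog",
        "--output_dir=dog_sd",
    ]

cmd = build_dreambooth_cmd()
print(" ".join(cmd))
```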

optimum/habana/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_depth2img.py (1 addition, 1 deletion)

@@ -368,7 +368,7 @@ def __call__(
 >>> from diffusers import StableDiffusionDepth2ImgPipeline
 
 >>> pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
-...     "stabilityai/stable-diffusion-2-depth",
+...     "sd2-community/stable-diffusion-2-depth",
 ...     torch_dtype=torch.float16,
 ... )
 >>> pipe.to("cuda")
optimum/habana/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py (1 addition, 1 deletion)

@@ -339,7 +339,7 @@ def __call__(
 >>> mask_image = download_image(mask_url).resize((512, 512))
 
 >>> pipe = StableDiffusionInpaintPipeline.from_pretrained(
-...     "stabilityai/stable-diffusion-2-inpainting", torch_dtype=torch.float16
+...     "sd2-community/stable-diffusion-2-inpainting", torch_dtype=torch.float16
 ... )
 >>> pipe = pipe.to("cuda")
tests/test_diffusers.py (6 additions, 3 deletions)

@@ -793,7 +793,8 @@ def test_no_throughput_regression_autocast(self):
         ]
         num_images_per_prompt = 28
         batch_size = 7
-        model_name = "stabilityai/stable-diffusion-2-1"
+        # Stability AI has removed the stable-diffusion-2 models. This uses the unofficial mirror maintained by sd2-community.
+        model_name = "sd2-community/stable-diffusion-2-1"
         scheduler = GaudiDDIMScheduler.from_pretrained(model_name, subfolder="scheduler")
         pipeline = GaudiStableDiffusionPipeline.from_pretrained(
             model_name,

@@ -2635,7 +2636,8 @@ def test_depth2img_pipeline_hpu_graphs(self):
     @legacy
     def test_depth2img_pipeline(self):
         gaudi_config = GaudiConfig(use_torch_autocast=True)
-        model_name = "stabilityai/stable-diffusion-2-depth"
+        # Stability AI has removed the stable-diffusion-2 models. This uses the unofficial mirror maintained by sd2-community.
+        model_name = "sd2-community/stable-diffusion-2-depth"
         scheduler = GaudiDDIMScheduler.from_pretrained(model_name, subfolder="scheduler")
 
         pipe = GaudiStableDiffusionDepth2ImgPipeline.from_pretrained(

@@ -5687,7 +5689,8 @@ def test_stable_diffusion_inpaint_no_throughput_regression(self):
         prompts = [
             "concept art digital painting of an elven castle, inspired by lord of the rings, highly detailed, 8k",
         ]
-        model_name = "stabilityai/stable-diffusion-2-inpainting"
+        # Stability AI has removed the stable-diffusion-2 models. This uses the unofficial mirror maintained by sd2-community.
+        model_name = "sd2-community/stable-diffusion-2-inpainting"
         num_images_per_prompt = 12
         batch_size = 4
         pipeline = GaudiStableDiffusionInpaintPipeline.from_pretrained(
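The same comment/model-name pair recurs in each test above. One way to keep a future checkpoint migration to a single edit is a module-level mapping (a sketch; these constants are not in the repo, only the mirror IDs come from the commit):

```python
# Hypothetical module-level table so a future checkpoint migration is a
# one-line change per model instead of an edit in every test. The mirrors
# are the sd2-community checkpoints this commit switched to.

SD2_MIRRORS = {
    "text-to-image": "sd2-community/stable-diffusion-2-1",
    "depth": "sd2-community/stable-diffusion-2-depth",
    "inpainting": "sd2-community/stable-diffusion-2-inpainting",
}

def sd2_model(task: str) -> str:
    """Return the mirrored SD2 checkpoint ID for a given pipeline task."""
    return SD2_MIRRORS[task]
```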
