Commit 7f67212

Merge branch 'JoePenna-main'

2 parents: 7975216 + 4daac80

8 files changed: +814 -625 lines

README.md (+1 -1)
@@ -27,7 +27,7 @@ I can't release all the tests for the movie I'm working on, but when I test with
 
 Lots of these tests were done with a buddy of mine -- Niko from CorridorDigital. It might be how you found this repo!
 
-I'm not really a coder. I'm just stubborn, and I'm not afraid of googling. So, eventually, some really smart folks joined in and have been contributing. In this repo, specifically: @djbielejeski @gammagec @MrSaad –– but so many others in our Discord!
+I'm not really a coder. I'm just stubborn, and I'm not afraid of googling. So, eventually, some really smart folks joined in and have been contributing. In this repo, specifically: [@djbielejeski](https://github.com/djbielejeski) @gammagec @MrSaad –– but so many others in our Discord!
 
 This is no longer my repo. This is the people-who-wanna-see-Dreambooth-on-SD-working-well's repo!

dreambooth_colab_joepenna.ipynb (+251, new file)
@@ -0,0 +1,251 @@
{
  "nbformat": 4,
  "nbformat_minor": 0,
  "metadata": {
    "colab": {
      "provenance": [],
      "collapsed_sections": []
    },
    "kernelspec": {
      "name": "python3",
      "display_name": "Python 3"
    },
    "language_info": {
      "name": "python"
    }
  },
  "cells": [
    {
      "cell_type": "code",
      "execution_count": null,
      "outputs": [],
      "source": [
        "#@title Load repo (if needed)\n",
        "!git clone https://github.com/JoePenna/Dreambooth-Stable-Diffusion\n",
        "%cd Dreambooth-Stable-Diffusion"
      ],
      "metadata": {
        "collapsed": false
      }
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {
        "id": "qeTrc2vOeiNh"
      },
      "outputs": [],
      "source": [
        "#@title BUILD ENV\n",
        "!pip install omegaconf\n",
        "!pip install einops\n",
        "!pip install pytorch-lightning==1.6.5\n",
        "!pip install test-tube\n",
        "!pip install transformers\n",
        "!pip install kornia\n",
        "!pip install -e git+https://github.com/CompVis/taming-transformers.git@master#egg=taming-transformers\n",
        "!pip install -e git+https://github.com/openai/CLIP.git@main#egg=clip\n",
        "!pip install setuptools==59.5.0\n",
        "!pip install pillow==9.0.1\n",
        "!pip install torchmetrics==0.6.0\n",
        "!pip install -e .\n",
        "!pip install protobuf==3.20.1\n",
        "!pip install gdown\n",
        "!pip install pydrive\n",
        "!pip install -qq diffusers[\"training\"]==0.3.0 transformers ftfy\n",
        "!pip install -qq \"ipywidgets>=7,<8\"\n",
        "!pip install huggingface_hub\n",
        "!pip install ipywidgets==7.7.1\n",
        "\n",
        "import os\n",
        "os._exit(00)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "outputs": [],
      "source": [
        "#@title # Required - Navigate back to the directory.\n",
        "%cd Dreambooth-Stable-Diffusion"
      ],
      "metadata": {
        "collapsed": false
      }
    },
    {
      "cell_type": "code",
      "source": [
        "#@markdown Hugging Face Login\n",
        "from huggingface_hub import notebook_login\n",
        "\n",
        "notebook_login()"
      ],
      "metadata": {
        "id": "6tjx0HcjesFo"
      },
      "execution_count": 1,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "#@markdown Download the 1.4 sd model\n",
        "from IPython.display import clear_output\n",
        "\n",
        "from huggingface_hub import hf_hub_download\n",
        "downloaded_model_path = hf_hub_download(\n",
        " repo_id=\"CompVis/stable-diffusion-v-1-4-original\",\n",
        " filename=\"sd-v1-4.ckpt\",\n",
        " use_auth_token=True\n",
        ")\n",
        "\n",
        "# Move the sd-v1-4.ckpt to the root of this directory as \"model.ckpt\"\n",
        "actual_locations_of_model_blob = !readlink -f {downloaded_model_path}\n",
        "!mv {actual_locations_of_model_blob[-1]} model.ckpt\n",
        "clear_output()\n",
        "print(\"✅ model.ckpt successfully downloaded\")\n"
      ],
      "metadata": {
        "id": "O15vMMhCevib"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "#@title # Download Regularization Images\n",
        "#@markdown We’ve created the following image sets\n",
        "#@markdown - `man_euler` - provided by Niko Pueringer (Corridor Digital) - euler @ 40 steps, CFG 7.5\n",
        "#@markdown - `man_unsplash` - pictures from various photographers\n",
        "#@markdown - `person_ddim`\n",
        "#@markdown - `woman_ddim` - provided by David Bielejeski - ddim @ 50 steps, CFG 10.0 <br />\n",
        "#@markdown - `blonde_woman` - provided by David Bielejeski - ddim @ 50 steps, CFG 10.0 <br />\n",
        "\n",
        "dataset=\"person_ddim\" #@param [\"man_euler\", \"man_unsplash\", \"person_ddim\", \"woman_ddim\", \"blonde_woman\"]\n",
        "!git clone https://github.com/djbielejeski/Stable-Diffusion-Regularization-Images-{dataset}.git\n",
        "\n",
        "!mkdir -p regularization_images/{dataset}\n",
        "!mv -v Stable-Diffusion-Regularization-Images-{dataset}/{dataset}/*.* regularization_images/{dataset}"
      ],
      "metadata": {
        "id": "N96aedTtfBjO"
      },
      "execution_count": 2,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "#@title # Training Images\n",
        "#@markdown ## Upload your training images\n",
        "#@markdown WARNING: Be sure to upload an even number of images, otherwise the training inexplicably stops at 1500 steps. <br />\n",
        "#@markdown - 2-3 full body\n",
        "#@markdown - 3-5 upper body\n",
        "#@markdown - 5-12 close-up on face <br /> <br />\n",
        "#@markdown The images should be as close as possible to the kind of images you’re trying to make (most of the time, that means no selfies).\n",
        "from google.colab import files\n",
        "from IPython.display import clear_output\n",
        "\n",
        "# Create the directory\n",
        "!rm -rf training_images\n",
        "!mkdir -p training_images\n",
        "\n",
        "# Upload the files\n",
        "uploaded = files.upload()\n",
        "for filename in uploaded.keys():\n",
        " updated_file_name = filename.replace(\" \", \"_\")\n",
        " !mv \"{filename}\" \"training_images/{updated_file_name}\"\n",
        " clear_output()\n",
        "\n",
        "# Tell the user what is going on\n",
        "training_images_file_paths = !find training_images/*\n",
        "if len(training_images_file_paths) == 0:\n",
        " print(\"❌ no training images found. Please upload images to training_images\")\n",
        "else:\n",
        " print(\"\" + str(len(training_images_file_paths)) + \" training images found\")\n"
      ],
      "metadata": {
        "id": "A7hmdOdOfGzs"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "#@title # Training\n",
        "\n",
        "#@markdown This isn't used for training, just to help you remember what you trained into the model.\n",
        "project_name = \"project_name\" #@param {type:\"string\"}\n",
        "\n",
        "# MAX STEPS\n",
        "#@markdown How many steps do you want to train for?\n",
        "max_training_steps = 2000 #@param {type:\"integer\"}\n",
        "\n",
        "#@markdown Match class_word to the category of the regularization images you chose above.\n",
        "class_word = \"person\" #@param [\"man\", \"person\", \"woman\"] {allow-input: true}\n",
        "\n",
        "#@markdown This is the unique token you are incorporating into the stable diffusion model.\n",
        "token = \"firstNameLastName\" #@param {type:\"string\"}\n",
        "reg_data_root = \"/content/Dreambooth-Stable-Diffusion/regularization_images/\" + dataset\n",
        "\n",
        "!rm -rf training_images/.ipynb_checkpoints\n",
        "!python \"main.py\" \\\n",
        " --base configs/stable-diffusion/v1-finetune_unfrozen.yaml \\\n",
        " -t \\\n",
        " --actual_resume \"model.ckpt\" \\\n",
        " --reg_data_root \"{reg_data_root}\" \\\n",
        " -n \"{project_name}\" \\\n",
        " --gpus 0, \\\n",
        " --data_root \"/content/Dreambooth-Stable-Diffusion/training_images\" \\\n",
        " --max_training_steps {max_training_steps} \\\n",
        " --class_word \"{class_word}\" \\\n",
        " --token \"{token}\" \\\n",
        " --no-test"
      ],
      "metadata": {
        "id": "m2o_fFFvfxHi"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "#@title # Copy and name the checkpoint file\n",
        "\n",
        "directory_paths = !ls -d logs/*\n",
        "last_checkpoint_file = directory_paths[-1] + \"/checkpoints/last.ckpt\"\n",
        "training_images = !find training_images/*\n",
        "date_string = !date +\"%Y-%m-%dT%H-%M-%S\"\n",
        "file_name = date_string[-1] + \"_\" + project_name + \"_\" + str(len(training_images)) + \"_training_images_\" + str(max_training_steps) + \"_max_training_steps_\" + token + \"_token_\" + class_word + \"_class_word.ckpt\"\n",
        "!mkdir -p trained_models\n",
        "!mv {last_checkpoint_file} trained_models/{file_name}\n",
        "\n",
        "print(\"Download your trained model file from trained_models/\" + file_name + \" and use in your favorite Stable Diffusion repo!\")"
      ],
      "metadata": {
        "id": "Ll_ZIFNUulKJ"
      },
      "execution_count": null,
      "outputs": []
    },
    {
      "cell_type": "code",
      "source": [
        "#@title Save model in google drive\n",
        "from google.colab import drive\n",
        "drive.mount('/content/drive')\n",
        "\n",
        "!cp trained_models/{file_name} /content/drive/MyDrive/{file_name}"
      ],
      "metadata": {
        "id": "mkidEm4evn1J"
      },
      "execution_count": null,
      "outputs": []
    }
  ]
}
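
The notebook's final cells rename the checkpoint into `trained_models/` and copy it to Google Drive. As a quick follow-up check (not part of this commit; the file name below is a made-up example of the naming pattern the notebook prints, and the `state_dict` layout is the usual CompVis convention), a sketch like this can confirm the exported `.ckpt` loads before it is dropped into another Stable Diffusion repo:

```python
# Illustrative only (not part of the commit): sanity-check the exported checkpoint.
# The path is a hypothetical example of the name pattern the notebook prints.
import torch

ckpt_path = "trained_models/2022-10-01T12-00-00_project_name_10_training_images_2000_max_training_steps_firstNameLastName_token_person_class_word.ckpt"

checkpoint = torch.load(ckpt_path, map_location="cpu")

# CompVis-style checkpoints keep the weights under a "state_dict" key.
state_dict = checkpoint.get("state_dict", checkpoint)
print(f"{len(state_dict)} tensors in checkpoint")

# The fine-tuned UNet weights live under the "model.diffusion_model." prefix.
unet_keys = [k for k in state_dict if k.startswith("model.diffusion_model.")]
print(f"{len(unet_keys)} UNet parameter tensors found")
```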
