
Commit d2d3944

feat: add support for SD2.x with TINY U-Nets (#939)
1 parent 0fa3e1a commit d2d3944

File tree: 5 files changed (+68, -47 lines)


docs/distilled_sd.md

Lines changed: 50 additions & 37 deletions
@@ -1,40 +1,66 @@
-# Running distilled models: SSD1B and SD1.x with tiny U-Nets
+# Running distilled models: SSD1B and SDx.x with tiny U-Nets
 
-## Preface
+## Preface
 
-This kind of models have a reduced U-Net part.
-Unlike other SDXL models the U-Net of SSD1B has only one middle block and lesser attention layers in up and down blocks, resulting in relatively smaller files. Running these models saves more than 33% of the time. For more details, refer to Segmind's paper on https://arxiv.org/abs/2401.02677v1 .
-Unlike other SD 1.x models Tiny-UNet models consist of only 6 U-Net blocks, resulting in relatively smaller files (approximately 1 GB). Running these models saves almost 50% of the time. For more details, refer to the paper: https://arxiv.org/pdf/2305.15798.pdf .
+These models feature a reduced U-Net architecture. Unlike standard SDXL models, the SSD-1B U-Net contains only one middle block and fewer attention layers in its up- and down-blocks, resulting in significantly smaller file sizes. Using these models can reduce inference time by more than 33%. For more details, refer to Segmind's paper: https://arxiv.org/abs/2401.02677v1.
+Similarly, SD1.x- and SD2.x-style models with a tiny U-Net consist of only 6 U-Net blocks, leading to very small files and time savings of up to 50%. For more information, see the paper: https://arxiv.org/pdf/2305.15798.pdf.
 
 ## SSD1B
 
-Unfortunately not all of this models follow the standard model parameter naming mapping.
-Anyway there are some very useful SSD1B models available online, such as:
+Note that not all of these models follow the standard parameter naming conventions. However, several useful SSD-1B models are available online, such as:
 
 * https://huggingface.co/segmind/SSD-1B/resolve/main/SSD-1B-A1111.safetensors
-* https://huggingface.co/hassenhamdi/SSD-1B-fp8_e4m3fn/resolve/main/SSD-1B_fp8_e4m3fn.safetensors
+* https://huggingface.co/hassenhamdi/SSD-1B-fp8_e4m3fn/resolve/main/SSD-1B_fp8_e4m3fn.safetensors
 
-Also there are useful LORAs available:
+Useful LoRAs are also available:
 
 * https://huggingface.co/seungminh/lora-swarovski-SSD-1B/resolve/main/pytorch_lora_weights.safetensors
-* https://huggingface.co/kylielee505/mylcmlorassd/resolve/main/pytorch_lora_weights.safetensors
+* https://huggingface.co/kylielee505/mylcmlorassd/resolve/main/pytorch_lora_weights.safetensors
 
-You can use this files **out-of-the-box** - unlike models in next section.
+These files can be used out-of-the-box, unlike the models described in the next section.
 
 
-## SD1.x with tiny U-Nets
+## SD1.x, SD2.x with tiny U-Nets
 
-There are some Tiny SD 1.x models available online, such as:
+These models require conversion before use. You will need a Python script provided by the diffusers team, available on GitHub:
+
+* https://raw.githubusercontent.com/huggingface/diffusers/refs/heads/main/scripts/convert_diffusers_to_original_stable_diffusion.py
+
+### SD2.x
+
+NotaAI provides the following model online:
+
+* https://huggingface.co/nota-ai/bk-sdm-v2-tiny
+
+Creating a .safetensors file involves two steps. First, run this short Python script to download the model from Hugging Face:
+
+```python
+from diffusers import StableDiffusionPipeline
+pipe = StableDiffusionPipeline.from_pretrained("nota-ai/bk-sdm-v2-tiny",cache_dir="./")
+```
+
+Second, create the .safetensors file by running:
+
+```bash
+python convert_diffusers_to_original_stable_diffusion.py \
+    --model_path models--nota-ai--bk-sdm-v2-tiny/snapshots/68277af553777858cd47e133f92e4db47321bc74 \
+    --checkpoint_path bk-sdm-v2-tiny.safetensors --half --use_safetensors
+```
+
+This will generate the **file bk-sdm-v2-tiny.safetensors**, which is now ready for use with sd.cpp.
+
+### SD1.x
+
+Several Tiny SD 1.x models are available online, such as:
 
 * https://huggingface.co/segmind/tiny-sd
 * https://huggingface.co/segmind/portrait-finetuned
 * https://huggingface.co/nota-ai/bk-sdm-tiny
 
-These models need some conversion, for example because partially tensors are **non contiguous** stored. To create a usable checkpoint file, follow these **easy** steps:
+These models also require conversion, partly because some tensors are stored in a non-contiguous manner. To create a usable checkpoint file, follow these simple steps:
+Download and prepare the model using Python:
 
-### Download model from Hugging Face
-
-Download the model using Python on your computer, for example this way:
+##### Download the model using Python on your computer, for example this way:
 
 ```python
 import torch
@@ -46,35 +72,22 @@ for param in unet.parameters():
 pipe.save_pretrained("segmindtiny-sd", safe_serialization=True)
 ```
 
-### Convert that to a ckpt file
-
-To convert the downloaded model to a checkpoint file, you need another Python script. Download the conversion script from here:
-
-* https://raw.githubusercontent.com/huggingface/diffusers/refs/heads/main/scripts/convert_diffusers_to_original_stable_diffusion.py
-
-
-### Run convert script
-
-Now, run that conversion script:
+##### Run the conversion script:
 
 ```bash
 python convert_diffusers_to_original_stable_diffusion.py \
-    --model_path ./segmindtiny-sd \
-    --checkpoint_path ./segmind_tiny-sd.ckpt --half
+    --model_path ./segmindtiny-sd \
+    --checkpoint_path ./segmind_tiny-sd.ckpt --half
 ```
 
-The file **segmind_tiny-sd.ckpt** will be generated and is now ready to use with sd.cpp
+The file segmind_tiny-sd.ckpt will be generated and is now ready for use with sd.cpp. You can follow a similar process for the other models mentioned above.
 
-You can follow a similar process for other models mentioned above from Hugging Face.
 
-
-### Another ckpt file on the net
-
-There is another model file available online:
+### Another available .ckpt file:
 
 * https://huggingface.co/ClashSAN/small-sd/resolve/main/tinySDdistilled.ckpt
-
-If you want to use that, you have to adjust some **non-contiguous tensors** first:
+
+To use this file, you must first adjust its non-contiguous tensors:
 
 ```python
 import torch

model.cpp

Lines changed: 3 additions & 0 deletions
@@ -1788,6 +1788,9 @@ SDVersion ModelLoader::get_sd_version() {
         if (is_inpaint) {
             return VERSION_SD2_INPAINT;
         }
+        if (!has_middle_block_1) {
+            return VERSION_SD2_TINY_UNET;
+        }
         return VERSION_SD2;
     }
     return VERSION_COUNT;
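
For readers unfamiliar with the loader internals: the new branch classifies an SD2-family checkpoint that lacks the full U-Net's middle attention block as VERSION_SD2_TINY_UNET. The diff does not show how `has_middle_block_1` is computed; a plausible sketch (an assumption for illustration, not the actual ModelLoader code) is a scan of the checkpoint's tensor names:

```cpp
#include <string>
#include <vector>

// Hypothetical helper: full SD1.x/SD2.x U-Nets store tensors under
// "model.diffusion_model.middle_block.1." (the middle attention block),
// which the BK-SDM tiny variants do not have.
static bool detect_middle_block_1(const std::vector<std::string>& tensor_names) {
    for (const std::string& name : tensor_names) {
        if (name.find("model.diffusion_model.middle_block.1.") != std::string::npos) {
            return true;  // middle attention block present -> regular U-Net
        }
    }
    return false;  // absent -> likely a tiny U-Net
}
```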

model.h

Lines changed: 2 additions & 1 deletion
@@ -26,6 +26,7 @@ enum SDVersion {
     VERSION_SD1_TINY_UNET,
     VERSION_SD2,
     VERSION_SD2_INPAINT,
+    VERSION_SD2_TINY_UNET,
     VERSION_SDXL,
     VERSION_SDXL_INPAINT,
     VERSION_SDXL_PIX2PIX,
@@ -52,7 +53,7 @@ static inline bool sd_version_is_sd1(SDVersion version) {
 }
 
 static inline bool sd_version_is_sd2(SDVersion version) {
-    if (version == VERSION_SD2 || version == VERSION_SD2_INPAINT) {
+    if (version == VERSION_SD2 || version == VERSION_SD2_INPAINT || version == VERSION_SD2_TINY_UNET) {
         return true;
     }
     return false;
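
Folding VERSION_SD2_TINY_UNET into sd_version_is_sd2() means every call site that branches on the helper, rather than on individual enum values, treats the tiny variant as a regular SD2 model (for example the 1024-wide OpenCLIP context noted in the unet.hpp comment further down). A hypothetical call site, sketched here for illustration only:

```cpp
#include "model.h"  // SDVersion, sd_version_is_sd2()

// Illustrative only: the 768/1024 values come from the unet.hpp comment
// "context_dim = 768; // 1024 for VERSION_SD2, 2048 for VERSION_SDXL".
static int context_dim_for(SDVersion version) {
    if (sd_version_is_sd2(version)) {
        return 1024;  // SD2.x (inpaint and tiny U-Net included) use OpenCLIP ViT-H embeddings
    }
    return 768;  // SD1.x; SDXL handling omitted in this sketch
}
```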

stable-diffusion.cpp

Lines changed: 1 addition & 0 deletions
@@ -23,6 +23,7 @@ const char* model_version_to_str[] = {
     "SD 1.x Tiny UNet",
     "SD 2.x",
     "SD 2.x Inpaint",
+    "SD 2.x Tiny UNet",
     "SDXL",
     "SDXL Inpaint",
     "SDXL Instruct-Pix2Pix",

unet.hpp

Lines changed: 12 additions & 9 deletions
@@ -180,6 +180,7 @@ class UnetModelBlock : public GGMLBlock {
     int num_head_channels = -1; // channels // num_heads
     int context_dim = 768; // 1024 for VERSION_SD2, 2048 for VERSION_SDXL
     bool use_linear_projection = false;
+    bool tiny_unet = false;
 
 public:
     int model_channels = 320;
@@ -208,15 +209,17 @@ class UnetModelBlock : public GGMLBlock {
             num_head_channels = 64;
             num_heads = -1;
             use_linear_projection = true;
-        } else if (version == VERSION_SD1_TINY_UNET) {
-            num_res_blocks = 1;
-            channel_mult = {1, 2, 4};
         }
         if (sd_version_is_inpaint(version)) {
             in_channels = 9;
         } else if (sd_version_is_unet_edit(version)) {
             in_channels = 8;
         }
+        if (version == VERSION_SD1_TINY_UNET || version == VERSION_SD2_TINY_UNET) {
+            num_res_blocks = 1;
+            channel_mult = {1, 2, 4};
+            tiny_unet = true;
+        }
 
         // dims is always 2
         // use_temporal_attention is always True for SVD
@@ -290,7 +293,7 @@ class UnetModelBlock : public GGMLBlock {
                                                         context_dim));
             }
             input_block_chans.push_back(ch);
-            if (version == VERSION_SD1_TINY_UNET) {
+            if (tiny_unet) {
                 input_block_idx++;
             }
         }
@@ -311,7 +314,7 @@ class UnetModelBlock : public GGMLBlock {
             d_head = num_head_channels;
             n_head = ch / d_head;
         }
-        if (version != VERSION_SD1_TINY_UNET) {
+        if (!tiny_unet) {
             blocks["middle_block.0"] = std::shared_ptr<GGMLBlock>(get_resblock(ch, time_embed_dim, ch));
             if (version != VERSION_SDXL_SSD1B) {
                 blocks["middle_block.1"] = std::shared_ptr<GGMLBlock>(get_attention_layer(ch,
@@ -358,7 +361,7 @@ class UnetModelBlock : public GGMLBlock {
                 }
 
                 if (i > 0 && j == num_res_blocks) {
-                    if (version == VERSION_SD1_TINY_UNET) {
+                    if (tiny_unet) {
                         output_block_idx++;
                         if (output_block_idx == 2) {
                             up_sample_idx = 1;
@@ -495,7 +498,7 @@ class UnetModelBlock : public GGMLBlock {
                 }
                 hs.push_back(h);
             }
-            if (version == VERSION_SD1_TINY_UNET) {
+            if (tiny_unet) {
                 input_block_idx++;
             }
             if (i != len_mults - 1) {
@@ -512,7 +515,7 @@ class UnetModelBlock : public GGMLBlock {
         // [N, 4*model_channels, h/8, w/8]
 
         // middle_block
-        if (version != VERSION_SD1_TINY_UNET) {
+        if (!tiny_unet) {
             h = resblock_forward("middle_block.0", ctx, h, emb, num_video_frames); // [N, 4*model_channels, h/8, w/8]
             if (version != VERSION_SDXL_SSD1B) {
                 h = attention_layer_forward("middle_block.1", ctx, h, context, num_video_frames); // [N, 4*model_channels, h/8, w/8]
@@ -554,7 +557,7 @@ class UnetModelBlock : public GGMLBlock {
             }
 
             if (i > 0 && j == num_res_blocks) {
-                if (version == VERSION_SD1_TINY_UNET) {
+                if (tiny_unet) {
                     output_block_idx++;
                     if (output_block_idx == 2) {
                         up_sample_idx = 1;
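
Taken together, the unet.hpp changes mean both tiny variants now share one configuration: a single res-block per level, channel multipliers {1, 2, 4}, and no middle block at all. With model_channels = 320 this yields three resolution levels at 320, 640, and 1280 channels. A standalone sketch (not sd.cpp code) just to make the resulting shape explicit:

```cpp
#include <cstdio>
#include <vector>

int main() {
    // Values taken from the diff: UnetModelBlock defaults plus the tiny-U-Net branch.
    const int model_channels = 320;
    const std::vector<int> channel_mult = {1, 2, 4};
    const int num_res_blocks = 1;

    for (size_t i = 0; i < channel_mult.size(); i++) {
        std::printf("level %zu: %d channels, %d res block(s)\n",
                    i, model_channels * channel_mult[i], num_res_blocks);
    }
    // Expected output: 320, 640 and 1280 channels for levels 0..2; no middle block.
    return 0;
}
```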
