This repository contains training, generation and utility scripts for Stable Diffusion.
Stable Diffusion web UI now seems to support LoRA models trained by sd-scripts. Thank you for the great work!
Note: LoRA models for SD 2.x are not supported in the Web UI.
- 24 Jan. 2023:
  - Change the default save format to `.safetensors` for `train_network.py`.
  - Add `--save_n_epoch_ratio` option to specify how often to save. Thanks to forestsource! For example, if 5 is specified, 5 (or 6) files will be saved over the course of training (see the sketch after this list).
  - Add a feature to pre-calculate hashes to reduce loading time in the extension. Thanks to space-nuko!
  - Add bucketing metadata. Thanks to space-nuko!
  - Fix an error with bf16 models in `gen_img_diffusers.py`.
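As a minimal sketch of how the new option might be wired into a LoRA training run (the model path, dataset directory, and output directory below are placeholders, and other required arguments such as resolution and learning rate are omitted for brevity):

```powershell
# Hypothetical example: --save_n_epoch_ratio 5 saves roughly 5 (or 6) checkpoints
# spread over the whole run, instead of one every fixed number of epochs.
# Paths are placeholders, not values from this repository's docs.
accelerate launch --num_cpu_threads_per_process 1 train_network.py `
  --pretrained_model_name_or_path .\models\model.ckpt `
  --train_data_dir .\train_images `
  --output_dir .\output `
  --network_module networks.lora `
  --save_model_as safetensors `
  --save_n_epoch_ratio 5
```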
Please read Releases for recent updates.
For easier use (GUI, PowerShell scripts, etc.), please visit the repository maintained by bmaltais. Thanks to @bmaltais!
This repository contains the scripts for:
- DreamBooth training, including U-Net and Text Encoder
- fine-tuning (native training), including U-Net and Text Encoder
- LoRA training
- image generation
- model conversion (supports 1.x and 2.x, Stable Diffusion ckpt/safetensors and Diffusers)
These files do not include requirements for PyTorch, because the required version depends on your environment. Please install PyTorch first (see the installation guide below).
The scripts are tested with PyTorch 1.12.1 and 1.13.0, Diffusers 0.10.2.
All documents are currently in Japanese, and the scripts are CUI (command-line) based.
- DreamBooth training guide
- Step by Step fine-tuning guide: Including BLIP captioning and tagging by DeepDanbooru or WD14 tagger
- LoRA training guide
- Image generation guide (note.com)
- Model conversion guide (note.com)
Python 3.10.6 and Git:
- Python 3.10.6: https://www.python.org/ftp/python/3.10.6/python-3.10.6-amd64.exe
- git: https://git-scm.com/download/win
Give unrestricted script access to PowerShell so venv can work:
- Open an administrator PowerShell window
- Type `Set-ExecutionPolicy Unrestricted` and answer A
- Close the admin PowerShell window
Open a regular PowerShell terminal and type the following inside:

```powershell
git clone https://github.com/kohya-ss/sd-scripts.git
cd sd-scripts

python -m venv venv
.\venv\Scripts\activate

pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 --extra-index-url https://download.pytorch.org/whl/cu116
pip install --upgrade -r requirements.txt
pip install -U -I --no-deps https://github.com/C43H66N12O12S2/stable-diffusion-webui/releases/download/f/xformers-0.0.14.dev0-cp310-cp310-win_amd64.whl

cp .\bitsandbytes_windows\*.dll .\venv\Lib\site-packages\bitsandbytes\
cp .\bitsandbytes_windows\cextension.py .\venv\Lib\site-packages\bitsandbytes\cextension.py
cp .\bitsandbytes_windows\main.py .\venv\Lib\site-packages\bitsandbytes\cuda_setup\main.py

accelerate config
```
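As a quick sanity check (an addition here, not part of the original instructions), you can confirm that the CUDA build of PyTorch is visible from the venv:

```powershell
# Expected output: 1.12.1+cu116 True
# "False" usually means the CPU-only wheel was installed instead.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```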
Update: `python -m venv venv` seems to be safer than `python -m venv --system-site-packages venv` (some users have packages in their global Python).
Answers to accelerate config:
- This machine
- No distributed training
- NO
- NO
- NO
- all
- fp16
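If you need to review or tweak these answers later, `accelerate config` writes them to a YAML file rather than storing them in the repo. A sketch assuming the default Hugging Face cache location on Windows (the path may differ in your environment):

```powershell
# Show the saved accelerate configuration; rerun `accelerate config` to change it.
Get-Content "$env:USERPROFILE\.cache\huggingface\accelerate\default_config.yaml"
```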
Note: some users report that a `ValueError: fp16 mixed precision requires a GPU` error occurs during training. In this case, answer `0` for the 6th question:
`What GPU(s) (by id) should be used for training on this machine as a comma-separated list? [all]:`
(The single GPU with id `0` will be used.)
Other versions of PyTorch and xformers seem to have problems with training. If there is no other reason, please install the specified versions.
When a new release comes out, you can upgrade your repo with the following commands:

```powershell
cd sd-scripts
git pull
.\venv\Scripts\activate
pip install --upgrade -r requirements.txt
```

Once the commands have completed successfully, you should be ready to use the new version.
The implementation for LoRA is based on cloneofsimo's repo. Thank you for the great work!
The majority of the scripts are licensed under ASL 2.0 (including code from Diffusers and cloneofsimo's repository); however, portions of the project are available under separate license terms:
Memory Efficient Attention Pytorch: MIT
bitsandbytes: MIT
BLIP: BSD-3-Clause