Releases: Kim2091/Kim2091-Models

4x-SwinIR-S_Pretrain

15 May 11:41

Scale: 4
Architecture: SwinIR Small

Author: Kim2091
License: CC0
Purpose: Pretrained
Subject:
Input Type: Images
Date: 5-14-24
Size:
I/O Channels: 3(RGB)->3(RGB)

Dataset: ambientCG textures, modified for training + UltraSharpV2
Dataset Size: 26k
OTF (on the fly augmentations): No
Pretrained Model: None
Iterations: 50k
Batch Size: 6
GT Size: 128

Description: Simple pretrain for SwinIR Small, trained on CC0 content. "Ethical" model.

4x-SwinIR-M_Pretrain

15 May 11:39

Scale: 4
Architecture: SwinIR Medium

Author: Kim2091
License: CC0
Purpose: Pretrained
Subject:
Input Type: Images
Date: 5-14-24
Size:
I/O Channels: 3(RGB)->3(RGB)

Dataset: ambientCG textures, modified for training + UltraSharpV2
Dataset Size: 26k
OTF (on the fly augmentations): No
Pretrained Model: None
Iterations: 50k
Batch Size: 6
GT Size: 128

Description: Simple pretrain for SwinIR Medium, trained on CC0 content. "Ethical" model.

4x-SMFAN_Pretrain

09 Apr 23:05

Scale: 4
Architecture: SMFAN
Links:

Author: Kim2091
License: CC0
Purpose: Restoration
Subject: Game Textures
Input Type: Images
Date: 4-9-23
Size:
I/O Channels: 3(RGB)->3(RGB)

Dataset: ambientCG textures + private self-taken images
Dataset Size: 24k tiles
OTF (on the fly augmentations): No
Pretrained Model: None
Iterations: 40k
Batch Size: 6
GT Size: 128

Description: Basic pretrain for SMFAN.

4x-RealPLKSR_dysample_pretrain

06 Jul 22:33

Scale: 4
Architecture: RealPLKSR Dysample

Author: Kim2091
License: CC0
Purpose: Pretrained
Subject:
Input Type: Images
Date: 7-6-24
Size:
I/O Channels: 3(RGB)->3(RGB)

Dataset: UltraSharpV2 Ethical
Dataset Size: 7k
OTF (on the fly augmentations): No
Pretrained Model: None
Iterations: 120k
Batch Size: 8
GT Size: 192

Description: Simple pretrain for RealPLKSR Dysample, trained on CC0 content. "Ethical" model.

4x-RealPLKSR_Pretrain_V4

26 May 04:18

Scale: 4
Architecture: RealPLKSR

Author: Kim2091
License: CC0
Purpose: Pretrained
Subject:
Input Type: Images
Date: 5-25-24
Size:
I/O Channels: 3(RGB)->3(RGB)

Dataset: ambientCG textures, modified for training + UltraSharpV2
Dataset Size: 26k
OTF (on the fly augmentations): No
Pretrained Model: None
Iterations: 25k
Batch Size: 8
GT Size: 128

Description: Simple pretrain for RealPLKSR, trained on CC0 or self-made content. Don't use V1-V3; they're not stable for training.

The dataset's LR images were produced by simple resizing with the following kernels:

  • Lanczos
  • Linear
  • Bicubic
  • Box

4x-RealPLKSR_Pretrain_V3

25 May 19:27

Scale: 4
Architecture: RealPLKSR

Author: Kim2091
License: CC0
Purpose: Pretrained
Subject:
Input Type: Images
Date: 5-25-24
Size:
I/O Channels: 3(RGB)->3(RGB)

Dataset: ambientCG textures, modified for training + UltraSharpV2
Dataset Size: 26k
OTF (on the fly augmentations): No
Pretrained Model: None
Iterations: 25k
Batch Size: 8
GT Size: 128

Description: Simple pretrain for RealPLKSR, trained on CC0 content. V3 was released in case there were any issues with V2; it was trained with a fully functioning config.

4x-RealPLKSR_Pretrain_V2

15 May 11:42

Scale: 4
Architecture: RealPLKSR

Author: Kim2091
License: CC0
Purpose: Pretrained
Subject:
Input Type: Images
Date: 5-9-24
Size:
I/O Channels: 3(RGB)->3(RGB)

Dataset: ambientCG textures, modified for training + UltraSharpV2
Dataset Size: 26k
OTF (on the fly augmentations): No
Pretrained Model: None
Iterations: 25k
Batch Size: 8
GT Size: 128

Description: Simple pretrain for RealPLKSR, trained on CC0 content. V1 was broken; this is V2.

4x-PBRify_UpscalerSIR-M_V2

20 May 03:27

Scale: 4
Architecture: SwinIR Medium

Author: Kim2091
License: CC0
Purpose: Pretrained
Subject: Game Textures
Input Type: Images
Date: 5-19-24
Size:
I/O Channels: 3(RGB)->3(RGB)

Dataset: ambientCG textures, modified for training + UltraSharpV2
Dataset Size: 26k
OTF (on the fly augmentations): No
Pretrained Model: 4x-SwinIR-M_Pretrain
Iterations: 400k
Batch Size: 4-8
GT Size: 128-256

Description: This is part of my PBRify_Remix project. It's a much more capable model based on SwinIR Medium, intended to strike a balance between learning capacity and inference speed. It appears to have done so 🙂

4x-PBRify_UpscalerDAT2_V1

06 Jun 02:30

Scale: 4
Architecture: DAT2

Author: Kim2091
License: CC0
Purpose: Upscaling
Subject: Game Textures
Input Type: Images
Date: 6-5-24
Size: 137 MB
I/O Channels: 3(RGB)->3(RGB)

Dataset: ambientCG + PolyHaven V2 dataset
Dataset Size: 5.5k
OTF (on the fly augmentations): No
Pretrained Model: 4x-DAT2_mssim_Pretrain Latest
Iterations: 300k
Batch Size: 4-18
GT Size: 96-192

Description: Yet another model in the PBRify_Remix series. This is a new upscaler that replaces the previous 4x-PBRify_UpscalerSIR-M_V2 model.

It far exceeds the quality of its predecessor, with far more natural detail generation and better reconstruction of lines and edges.

Comparisons: https://slow.pics/c/DCjlXPGb

4x-PBRify_RPLKSRd_V3

02 Oct 17:51

Scale: 4
Architecture: RealPLKSR Dysample

Author: Kim2091
License: CC0
Purpose: Upscaling
Subject: Game Textures
Input Type: Images
Date: 9-23-24
Size:
I/O Channels: 3(RGB)->3(RGB)

Dataset: PolyHaven, FreePBR, ambientCG, UltraSharpV2
Dataset Size: 4k-26k
OTF (on the fly augmentations): No
Pretrained Model: 4x-RealPLKSR_dysample_pretrain
Iterations: 98k (~160k total)
Batch Size: 6-12
GT Size: 128-256

Description: This update brings a new upscaling model, 4x-PBRify_RPLKSRd_V3. It is roughly 8x faster than the current DAT2 model while producing higher-quality output: far more natural detail, smoother reconstruction of lines and edges, and better cleanup of compression artifacts.

As a result of those improvements, PBR generation is also much improved. Output tends to be clearer, with fewer pronounced artifacts.

However, this model is currently only compatible with ComfyUI. chaiNNer has not yet been updated to support this architecture.

Comparisons